Virtualization, Cloud, Infrastructure and all that stuff in-between

My ramblings on the stuff that holds it all together


Remote in-place upgrade from ESX3.5 to vSphere

 

I have an ESX 3.5 host running on an ML-110g4 that I want to upgrade to the beta of vSphere.

The vSphere client seems to come with a remote update utility.


It will download the upgrade packages over the network and then upgrade the OS remotely – nice, no iLO or trip to the DC required… hopefully! (Remember, this was done with beta code.)

Point it at the ESX 4 DVD .iso file


Validating the .iso file
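As an aside, if you want to sanity-check the downloaded .iso yourself before feeding it to the utility, comparing a checksum against the one published on the download page does the job. A minimal Python sketch – the file name and expected hash below are placeholders:

```python
import hashlib

# Placeholder path and checksum - substitute the real .iso file and the
# MD5 value published on the VMware download page.
ISO_PATH = "esx-DVD-4.0.0-beta.iso"
EXPECTED_MD5 = "0123456789abcdef0123456789abcdef"

md5 = hashlib.md5()
with open(ISO_PATH, "rb") as iso:
    # Read in chunks so a multi-GB DVD image doesn't end up in memory at once.
    for chunk in iter(lambda: iso.read(1024 * 1024), b""):
        md5.update(chunk)

print("OK" if md5.hexdigest() == EXPECTED_MD5 else "Checksum mismatch!")
```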

Accepting the EULA


Supply the appropriate credentials for the ESX host


The host needs to be in maintenance mode, as you might expect 🙂


With the host in maintenance mode it can continue
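Incidentally, if you'd rather put a host into maintenance mode from a script than from the client, the vSphere API exposes it directly. A rough sketch using pyVmomi – purely illustrative (the host name and credentials are made up, and it assumes any running VMs have already been dealt with):

```python
import ssl
import time
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical host and credentials - substitute your own.
HOST, USER, PWD = "esx35.lab.local", "root", "password"

# Lab hosts usually have self-signed certificates, so skip verification.
ctx = ssl._create_unverified_context()
si = SmartConnect(host=HOST, user=USER, pwd=PWD, sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = view.view[0]  # connecting straight to the ESX host, so just one
    view.DestroyView()

    # timeout=0 means wait indefinitely for the host to enter maintenance mode.
    task = host.EnterMaintenanceMode_Task(timeout=0)
    while task.info.state not in ("success", "error"):
        time.sleep(2)
    print("Maintenance mode task finished:", task.info.state)
finally:
    Disconnect(si)
```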


You need to choose a location to store the virtual disk file for the COS (service console)


Enter the failure options – what to do if it doesn’t work!


Summary screen and we’re ready to go…


On its way


Unfortunately, this eventually timed out


Upon investigation, my ESX 3.5 host had got stuck during shutdown for some reason; a manual reboot saw the upgrade process proceeding on the console. This was done with the RC beta build, so it could be a bug…

About 30 minutes later it was all complete and my ESX 3.5 host was upgraded to vSphere.


Very useful feature 🙂

I wrote this post last month, but now the covers are off vSphere I can hopefully post more of this sort of thing without falling foul of the NDA that covered the beta programme.

Today is vSphere Launch Day

 

The covers are coming off today – you can still register for the webcast here if you want to watch it live.

I have been participating in the private beta for a couple of months and have been privy to most of the announcement information via the VMware partner programme; it’s all been under NDA until today, so I have not been able to discuss it in public.

Unfortunately I have a customer engagement for most of today so won’t be following/posting live, but I’m sure there will be plenty of coverage from other v12n bloggers and via Twitter – Rich Bramley has some good suggestions here on how to keep up with things today.

I have a couple of vSphere-related posts queued up, and once the covers are off I look forward to discussing vSphere with you in more detail 🙂

Answers on the Cisco Nexus vSwitch – what is it and is vShield the same?

 

Just seen this post and was particularly interested in how the Cisco vSwitch works – it is shipped as part of ESX and enabled/unlocked by a licence key; you need to download an OVF virtual appliance to manage it.

That answers one of the big things I’ve been meaning to find out whilst I’m here. I also attended a session on vShield Zones and came away with a mixed bag of thoughts – is it a baked-in part of the next version of ESX or does it run in a virtual machine? I have resolved to head for the hands-on labs to try it out for myself; hopefully I will get time.

VMworld Europe Day 2: Keynote

 

Well, day 2 got underway with the much-anticipated keynote session from Steve Herrod, who is CTO and VP of R&D – or “technical stuff”.

He covered some of the previous announcements and did manage to clarify that vSphere is the implementation of VDC-OS (so it’s the new name for Virtual Infrastructure).

Steve Herrod let on that he was watching Twitter during the other keynotes and adjusted his presentation accordingly 🙂

vSphere

There were some examples of Oracle OLTP application scaling that has been done on vSphere:

    • <15% overhead on an 8-vCPU VM
    • 24k DB transactions/sec

Some example disk I/O stats were shown – achieving 250MB/sec of disk I/O took 510 disk spindles to saturate the I/O path… the point being that you’ll need a very large amount of hardware before you start running into disk/VM bus performance issues, and this ceiling is constantly increasing.

Virtualizing Exchange is another area where virtualization can take advantage of multi-core processors for large enterprise apps: break the workload into multiple virtualized mailbox servers to make best use of multi-core hardware. Exchange doesn’t really use the CPU horsepower of modern kit – it’s more about disk I/O (and, as they showed, that isn’t a practical blocker).

Steve ran over the components of vSphere again, adding a bit more detail – I won’t cover them all again, but in brief:

vStorage – extensible via an API; storage vendors can write their own thin provisioning or snapshot interfaces that hook into VMware.

vNetwork – the Distributed vSwitch maintains network state across vMotion.

vSphere = scale; up to 64TB of RAM in a cluster.

Power thrifty (CPU power management features)

vShield Zones follows a VM around under DRS – a DMZ for groups of VMs (demos tomorrow plus a breakout session).

vCenter HA improvements with vCenter Server Heartbeat; today 60% of people run VC on a physical box to isolate the management tools from the execution platform, and this delivers high availability for them.

vCenter Server Heartbeat provides an active/passive cluster solution (but not using MSCS) plus configuration change replication/rollback; it works over WAN or LAN – IP based, with a floating IP address and efficient WAN transfers.

It monitors and provides HA for the following components:

  • vCenter database
  • Licensing server
  • Update Manager

vCenter scalability: a 50% increase in capacity, with 3k VMs and 300 hosts per vCenter. In addition, the VI client can now aggregate up to 10 vCenter servers in a single UI, with search and reporting functionality across all of them.
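The aggregation itself lives in the VI client, but just to illustrate the idea of searching across several vCenters in one pass, here is a rough pyVmomi-style sketch – the server names and credentials are invented, and this is nothing more than an illustration of the concept:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical vCenter servers - the VI client can aggregate up to 10 of these.
VCENTERS = ["vc1.lab.local", "vc2.lab.local"]
USER, PWD = "administrator", "password"

def find_vms(name_fragment):
    """Search every listed vCenter for VMs whose name contains the fragment."""
    ctx = ssl._create_unverified_context()
    for vc in VCENTERS:
        si = SmartConnect(host=vc, user=USER, pwd=PWD, sslContext=ctx)
        try:
            content = si.RetrieveContent()
            view = content.viewManager.CreateContainerView(
                content.rootFolder, [vim.VirtualMachine], True)
            for vm in view.view:
                if name_fragment.lower() in vm.name.lower():
                    print(f"{vc}: {vm.name} ({vm.runtime.powerState})")
            view.DestroyView()
        finally:
            Disconnect(si)

find_vms("sql")
```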

vCenter host profiles can enforce and replicate configuration changes across multiple hosts and monitor for deviations (profile compliance) – the UI looks much like Update Manager.

The VI client performance looks much better in the demo 🙂 let’s hope it’s like that in real-life!

The biggest and most useful announcement for me was that vCenter on Linux is now available, shipping as a beta virtual appliance – just download and go, no more dependency on a Windows host to run VC. I will definitely be trying this out, and you can download it yourself here.

vCloud

In terms of vCloud, the federation and long-distance vMotion sound a bit like science fiction – but the same was said of vMotion when it was first announced, and look at it now; VMware know how to do this stuff 🙂

Long-distance vMotion is the eventual goal, but there are some challenges to overcome in engineering a reliable solution; in the meantime SRM can deliver a similar sort of overall service, automating DR failover with array-based replication and an electronic, scripted run-book.

Long-distance vMotion has some other interesting use cases, enabling a follow-the-sun model for support and IT services – I’ve written about this previously here. It’s a great goal, and I would expand the suggestion to include follow-the-power, where you choose to move services around globally to take advantage of the most cost-efficient power, local support etc.

VMware are building an extensible and customisable portal for cloud providers based on Lab Manager, which is likely to be bundled as a product.

The vCenter vCloud plug-in was demoed; this was more advanced than I had anticipated, with the target scenario being that you can use one VI client to manage services across multiple clouds.

It stores auth details for each cloud account, with the cloud type (e.g. vCloud) picked from a drop-down, and works over a web services API to provision, change and so on.

They showed how you can drag and drop a VM to and from the cloud.

This federation allows you to pick different types of cloud, for example providers that offer a Desktop as a Service (DaaS) type cloud, or one that runs entirely on “green” energy sources.

Virtual Desktop

This is another key initiative and focus of investment within VMware: building up the VDI offering(s) and providing centralised desktops, as well as offline/distributed scenarios in future via the Client Virtualization Platform (CVP) – some of my more off-the-wall thoughts on that are here.

Key points:

  • Central management
  • Online/offline scenarios
  • Linked clones
  • Thick client – push the full VM down to the machine
  • Patching is a challenge – master disk + linked clones
  • ThinApp – makes patching/swapping out the underlying OS easier as apps are in a “bubble”
  • Leveraging ACE server – lock USB etc.
  • CVP – the client checks back to a central policy server (polling)
  • Allows for self-destruct or leased virtual desktops, so users can’t run away with apps/data

VMware are making a heavy investment in PCoIP – providing 3D graphics, online and offline, for high-demand apps (video/graphics). Jerry Chan demoed some of the PCoIP solutions they are working on using Google Earth, which was impressive – Brian Madden has covered these in more detail here. I did notice that Steve said “vClient”, which is the first time I have heard that name.

Finally, there was some coverage of the mobile phone VM platform. Whilst I see what they are aiming for and the advantages of it to a telco (a single platform to test apps against), it’s personally of less interest to me. I do hope that VMware don’t go all Microsoft and start spreading themselves into every market just because they feel they need to have a presence (Live Search, Live everything etc.), rather than focusing on good, core products. Whilst they are the first people I’ve heard of seriously working on this, I don’t know how it will pan out – but I will keep an open mind; I suppose a sandboxed, secured corporate phone build with a VoIP app, some heavy crypto and a 3G connection controlled under a hypervisor could be appealing to certain types of govt. “organisations”.

All in all, a very good keynote session – much better focused on the main demographic of the conference (techies – well, me anyway 🙂) – and there are some good sessions scheduled for today.

More later.

DC14 – Overview of 2009 VMware Datacenter Products (VMworld Europe 2009)

 

This session discussed new features in vSphere – or is it VDC-OS? I’m a bit confused about that one. vSphere is the new name for “Virtual Infrastructure”? That would make sense to me.

As usual this session was prefixed with a slide stating that all material presented is not final and is not a commitment – things may change etc. At least VMware point this out for the less aware people who then come and complain when something has changed at GA 🙂 This is my take on what was said… don’t sue me either 🙂

vApp is an OVF-based container format that describes a virtual machine (OS + app + data = workload), what resources it needs, what SLA needs to be met, etc. I like this concept.

In later releases it will also include security requirements. They use the analogy that a vApp is like a barcode describing a workload; the back-end vCenter suite knows how to provision and manage services to meet the requirements expressed by the vApp (resource allocation, HA/FT usage, etc.) and does so when you import it.
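To make the barcode analogy a bit more concrete: the OVF descriptor is just XML you can open and read. A small Python sketch of pulling the CPU and memory requirements out of a descriptor – the file name is hypothetical, and a real vApp descriptor carries far more than this:

```python
import xml.etree.ElementTree as ET

# Hypothetical descriptor exported from a vApp.
OVF_PATH = "myapp.ovf"

# Namespaces used by OVF 1.0 descriptors.
NS = {
    "ovf": "http://schemas.dmtf.org/ovf/envelope/1",
    "rasd": ("http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/"
             "CIM_ResourceAllocationSettingData"),
}

tree = ET.parse(OVF_PATH)
for item in tree.iter("{%s}Item" % NS["ovf"]):
    rtype = item.findtext("rasd:ResourceType", namespaces=NS)
    qty = item.findtext("rasd:VirtualQuantity", namespaces=NS)
    if rtype == "3":    # CIM resource type 3 = processor
        print("vCPUs:", qty)
    elif rtype == "4":  # CIM resource type 4 = memory
        print("Memory:", qty, item.findtext("rasd:AllocationUnits", namespaces=NS))
```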

There was some coverage of VMware Fault Tolerance (FT) using the lockstep technology; this has been discussed at length by Scott here. However, if I understood correctly, it was said that at launch there would be some limitations: it’s going to be limited to 1 vCPU until a later update – or maybe they meant experimental support at GA with full support in a later update (Update 1 maybe?). Perhaps someone else at the session can clarify; otherwise there will hopefully be more details in the day 2 keynote by Steve Herrod tomorrow.

There is likely to be a c.10% performance impact for VMware FT hosts due to the lockstep overhead (this came from an answer to a delegate question, rather than from the slides).

Ability to scale up virtual machines through hot-add of vRAM and vCPU, as well as hot-extension of disks.
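As an illustration of what that looks like through the API, a hot-add is just a reconfigure task issued against a running VM. A pyVmomi-style sketch, under the assumption that the VM object comes from an existing connection and that the guest OS and VM hardware have hot-add enabled (the values are placeholders):

```python
from pyVmomi import vim

def hot_add(vm, cpus=None, memory_mb=None):
    """Hot-add vCPU and/or vRAM to a powered-on VM via a reconfigure task.

    'vm' is a vim.VirtualMachine obtained from an existing connection, and
    CPU/memory hot-add must already be enabled in the VM's settings."""
    spec = vim.vm.ConfigSpec()
    if cpus is not None:
        spec.numCPUs = cpus
    if memory_mb is not None:
        spec.memoryMB = memory_mb
    return vm.ReconfigVM_Task(spec=spec)

# e.g. hot_add(my_vm, cpus=4, memory_mb=8192)
```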

The vSphere architecture is split into several key components (named using the v-prefix that is everywhere now! 🙂)

vCompute – scaling up the capabilities and scale of individual VMs to meet high-demand workloads.

VMDirectIO – allows direct hardware access from within a VM; for example, a VM using a physical NIC to do TCP offload etc. The VM has the vendor driver installed rather than VMXNET, to increase performance (this looks to have DRS/vMotion implications).

Support for 8-way vSMP (and hot-add)

255GB RAM for a VM

Up to 40Gb/s of network throughput within a VM.

vStorage – improved storage functionality

Thin provisioning for pragmatic allocation of storage; you can use Storage vMotion to move data to larger LUNs without downtime if required – monitoring is key here, hence the vCenter integration.

Online disk grow – increase disk size without downtime.
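For what it’s worth, growing a virtual disk online is also just a reconfigure through the API. A rough pyVmomi-style sketch – again the VM object is assumed to come from an existing connection, and the guest file system still needs extending afterwards:

```python
from pyVmomi import vim

def grow_first_disk(vm, new_size_gb):
    """Extend the first virtual disk of a VM to new_size_gb without downtime."""
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualDisk):
            dev.capacityInKB = new_size_gb * 1024 * 1024
            change = vim.vm.device.VirtualDeviceSpec(
                operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
                device=dev)
            return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
    raise RuntimeError("No virtual disk found on the VM")
```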

<2ms latency for disk I/O

API for snapshot access, enabling ISV solutions and custom bolt-ons.
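The snapshot call that backup ISVs build on top of is straightforward to drive yourself. A minimal sketch of taking a quiesced snapshot via the API – the VM object and names are placeholders:

```python
def take_backup_snapshot(vm):
    """Take a quiesced, memory-less snapshot - the kind a backup tool would
    copy from and then remove afterwards. 'vm' is a vim.VirtualMachine."""
    return vm.CreateSnapshot_Task(
        name="backup-temp",                  # placeholder name
        description="Temporary snapshot for backup",
        memory=False,                        # no memory image needed
        quiesce=True)                        # ask VMware Tools to quiesce the guest
```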

Storage Virtual Appliances – this is interesting to me, but no real details yet

vNetwork

Distributed Network vSwitch – some good info here – configure once, push config out to all hosts

3rd party software switches (Cisco 1000V)

vServices

vShield – a self-learning, self-configuring firewall service with firewall/trust zones to enforce security policies.

vSafe – a framework for ISVs to plug in functionality like VM deep-inspection, essentially doing brain surgery on a running VM via an API.

The last point before I had to leave early for a vendor meeting was about power – vSphere has support for power management technology like SpeedStep and core sleeping, and DPM (Distributed Power Management) is moving from experimental to mainstream support. This is great, as long as you make sure your data centre power feed can deal with the surge capacity should you need to spin up extra hosts quickly; for example at a DR site when you invoke a recovery plan. This needs thought and sizing, rather than oversubscribing power because you think you can get away with it (or don’t realise DPM is sending your servers to sleep); otherwise you may be tripping some breakers and hunting for the torches when you have to “burst”.
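If you do want to rein DPM in while you work out the power sizing, the cluster setting is scriptable. A hedged pyVmomi-style sketch, assuming the cluster object comes from an existing vCenter connection:

```python
from pyVmomi import vim

def set_dpm(cluster, enabled, behavior="manual"):
    """Switch DPM on or off for a cluster, defaulting to manual
    (recommend-only) mode so hosts aren't powered down without an
    admin approving it. 'cluster' is a vim.ClusterComputeResource."""
    spec = vim.cluster.ConfigSpecEx(
        dpmConfig=vim.cluster.DpmConfigInfo(
            enabled=enabled,
            defaultDpmBehavior=behavior))
    # modify=True merges this change into the existing cluster configuration.
    return cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)

# e.g. set_dpm(my_cluster, enabled=True, behavior="manual")
```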