Virtualization, Cloud, Infrastructure and all that stuff in-between

My ramblings on the stuff that holds it all together

Hands-On Lab 01 – vSphere Features Overview

 

I decided to venture into some hands-on labs today; after hearing about all the new features over the last couple of days, it was nice to finally get my hands on them!

The lab was set to cover the following areas of potential new functionality* in vSphere:

vStorage plug-in – pluggable drivers from storage vendors to enable enhanced snapshot functionality or improved multi-pathing with their arrays.

Hot-cloning of a running VM – handy.
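
Under the covers this is just an API call; here’s a minimal sketch of driving it from Python with the pyvmomi bindings (my own illustration, nothing from the lab itself – the vCenter address, credentials and VM names are all placeholders):

```python
# Illustrative only: hot-clone a running VM via the vSphere API using
# pyvmomi. All hostnames, credentials and VM names are placeholders.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator',
                  pwd='secret', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Find the (running) source VM by name
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == 'web01')

# An empty RelocateSpec keeps the clone on the same host/datastore;
# the source VM stays powered on throughout.
spec = vim.vm.CloneSpec(location=vim.vm.RelocateSpec(), powerOn=False)
vm.CloneVM_Task(folder=vm.parent, name='web01-clone', spec=spec)
```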

Host profiles and compliance management – this was quite a nice feature: you define a host profile, or copy one from an existing host (it was a bit reminiscent of the Windows Group Policy Management Console in some ways), and you can link profiles to individual ESX hosts or to a cluster/DC object.
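
For the scripting-inclined, the profile workflow is exposed through the API too. The sketch below is my own speculative pyvmomi version (CreateProfile, AssociateProfile and CheckProfileCompliance_Task are the documented profile calls, but the exact spec fields here are my best guess, and every name is a placeholder):

```python
# Speculative sketch: capture a host profile from a reference host and
# attach it to a cluster via pyvmomi. Names/credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator',
                  pwd='secret', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find(vimtype, name):
    """Look up a managed object by name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    return next(o for o in view.view if o.name == name)

ref_host = find(vim.HostSystem, 'esx01.example.com')
cluster = find(vim.ClusterComputeResource, 'Cluster1')

# Copy the reference host's configuration into a new profile
spec = vim.profile.host.HostProfile.HostBasedConfigSpec(
    host=ref_host, name='gold-esx-profile', enabled=True)
profile = content.hostProfileManager.CreateProfile(createSpec=spec)

# Link the profile to the cluster and kick off a compliance check
profile.AssociateProfile(entity=[cluster])
profile.CheckProfileCompliance_Task(entity=list(cluster.host))
```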

Storage vMotion via the GUI – the functionality has been there since v3.5, but it no longer relies on a 3rd-party GUI plug-in or the command line.
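
It’s the same RelocateVM call underneath either way; a rough pyvmomi sketch of a storage-only migration (my illustration, placeholder names throughout):

```python
# Sketch: storage-only migration (Storage vMotion) via pyvmomi.
# A RelocateSpec naming only a datastore moves storage, not the host.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator',
                  pwd='secret', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find(vimtype, name):
    """Look up a managed object by name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    return next(o for o in view.view if o.name == name)

vm = find(vim.VirtualMachine, 'web01')
target_ds = find(vim.Datastore, 'datastore2')

vm.RelocateVM_Task(spec=vim.vm.RelocateSpec(datastore=target_ds))
```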

Online VMFS expansion – handy: if you can extend a LUN from your array, you can grow the VMFS into it online, without downtime. Up until now the only alternatives were downtime, a storage vMotion to a brand-new LUN, or extents, which are not as safe.
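
The API flow for this is roughly as below (a pyvmomi sketch of my own, placeholder names): ask the host’s datastore system how the datastore can be expanded, then apply one of the suggested specs.

```python
# Sketch: grow a VMFS datastore into newly added LUN space via pyvmomi.
# Host, datastore and credential values are placeholders.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator',
                  pwd='secret', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == 'esx01.example.com')
ds = next(d for d in host.datastore if d.name == 'datastore1')

dss = host.configManager.datastoreSystem
options = dss.QueryVmfsDatastoreExpandOptions(datastore=ds)
if options:
    # Apply the first suggested expansion - no downtime required
    dss.ExpandVmfsDatastore(datastore=ds, spec=options[0].spec)
```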

Creating a vApp – this feature is similar to VM teaming in VMware Workstation, but with the first of many functional additions.

  • The main target scenario for vApps is multi-tier applications, where you may have a database back-end and a front-end web server; you can define start-up and shutdown order.
  • There are vApp networking settings where you appear to be able to define IP address allocations, private DHCP pools, etc.
  • It has an interface which is the same as the normal resource pool UI, so you can define reservations for a vApp (or collection of VMs) and provide a consistent service level.
  • There wasn’t much else in there yet – but VMware have said they will be adding more features in later releases. A rough API sketch of creating a vApp follows below.
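
Here’s my rough guess at what creating a vApp looks like through the API (pyvmomi; the names and reservation figures are invented for illustration) – note the ResourceConfigSpec is the same structure a resource pool takes, which fits with the UI similarity above:

```python
# Speculative sketch: create a vApp with CPU/memory reservations via
# pyvmomi. All names and the reservation figures are placeholders.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator',
                  pwd='secret', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

dc = content.rootFolder.childEntity[0]   # assumes first child is a datacenter
cluster = dc.hostFolder.childEntity[0]   # assumes first child is a cluster

def alloc(reservation):
    # Same knobs as the resource pool UI: reservation/limit/shares
    return vim.ResourceAllocationInfo(
        reservation=reservation, limit=-1, expandableReservation=True,
        shares=vim.SharesInfo(level='normal', shares=4000))

res_spec = vim.ResourceConfigSpec(cpuAllocation=alloc(500),     # MHz
                                  memoryAllocation=alloc(1024))  # MB
cluster.resourcePool.CreateVApp(name='web-tier-app', resSpec=res_spec,
                                configSpec=vim.vApp.VAppConfigSpec(),
                                vmFolder=dc.vmFolder)
```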

Configuring the distributed virtual switch (vDS) – this was an interesting lab, based around the built-in vDS which comes free with ESX; you can define port groups and uplink groups which are automatically propagated to all members of the vDS.

You have to assign the vDS to particular hosts – I’m not sure if you can attach it at a cluster or DC level. I have a separate post on the vDS and the Cisco NX1000V in the pipeline; for now, know that you have 3 switch options:

  • vSwitch (same as previous ESX versions)
  • Virtual Distributed Switch – distributed across multiple hosts (maybe only included in higher editions of ESX?)
  • and the Cisco NX1000V – which is a separately licenced add-on.

You can migrate normal vSwitch configurations into the vDS via the UI and it’s pretty simple to use.
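
For reference, the whole thing can be scripted as well; a hedged pyvmomi sketch (placeholder names throughout, my own illustration) that creates a vDS in a datacenter’s network folder and adds a port group, which the member hosts then pick up automatically:

```python
# Sketch: create a vDS and add a port group via pyvmomi.
# Switch and port-group names are placeholders.
import ssl
from pyVim.connect import SmartConnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator',
                  pwd='secret', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
dc = content.rootFolder.childEntity[0]  # assumes first child is a datacenter

# Create the switch itself in the datacenter's network folder
create_spec = vim.DistributedVirtualSwitch.CreateSpec(
    configSpec=vim.DistributedVirtualSwitch.ConfigSpec(name='dvSwitch0'))
task = dc.networkFolder.CreateDVS_Task(spec=create_spec)
WaitForTask(task)
dvs = task.info.result

# Add a port group; member hosts pick it up automatically
pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
    name='vm-network', numPorts=128, type='earlyBinding')
dvs.AddDVPortgroup_Task([pg_spec])
```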

Configuring VMware Fault Tolerance (FT) – this was a great lab and a great new feature: you just right-click on a VM and enable FT. It then automatically hot-clones a copy of the VM and keeps it in lockstep, where all of the CPU instructions executed on one VM are shipped across the network to the secondary copy, which shows up as VM_NAME (Secondary) in the UI.

Once FT is enabled, the summary screen shows you details of any lag between the protected VM and its secondary instance.

The lab gets you to kill the primary, and the failover was instant as far as I could tell with the very simple Debian OS we were protecting; it then automatically re-clones the secondary copy to re-establish FT – very cool. I’m looking forward to getting my hands on a real copy and putting it through its paces.
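
Programmatically, enabling FT boils down to a single task that spawns the secondary copy; a minimal pyvmomi sketch (placeholder names, my own illustration):

```python
# Sketch: enable FT on a VM via pyvmomi by creating its lockstep
# secondary. VM name and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator',
                  pwd='secret', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == 'debian01')

# With no host argument, vCenter picks a suitable host for the
# secondary copy (subject to FT's CPU/hardware constraints).
vm.CreateSecondaryVM_Task()
```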

 

Overall the vSphere client (as it’s now renamed*, in this lab at least) feels much quicker and more responsive than previous versions.

Interestingly, the back-end ESX lab environment is implemented as ESXi4 instances running as virtual machines, which is a brilliant way to do test and development work with ESX (some of my previous posts on this here). It has been hinted that this will be officially supported. We had to switch to a physical ESX farm to do the FT lab, as FT has specific hardware and CPU requirements; they were using HP DL385 servers, and the back-end storage was EMC.

*There were plenty of disclaimers over any product names being placeholders, so whilst I mention ESXi4, that does not constitute any kind of legal confirmation from VMware as to what it was or will be called. It does hint that the ESXi and ESX with service-console model could continue through the next major release – I did hear one VMware chap refer to “ESX classic”, which I would assume is the version with the service console 🙂

4 responses to “Hands-On Lab 01 – vSphere Features Overview”

  1. Pingback: VMworld 2009 Europe Linkage » Yellow Bricks

  2. John March 12, 2009 at 4:03 am

    I’ve seen different reports about full support for SATA drives in vSphere, but what I’m wondering is whether there is any native support for virtuals to see SATA DVD/RW drives instead of going through an emulation layer. I’d like to have one SATA DVD/RW drive in the host that can be shared amongst all my virtuals without having to use the client to connect/disconnect it, which is buggy at best.

  3. Pingback: VMware Europe 2009 linkage « DeinosCloud’s Blog

  4. Pingback: vSphere, Come and Get it « Virtualization, Windows, Infrastructure and all that “stuff” in-between
