

VMware ESX 5

 

Ok, so vSphere (ESX 4) has only just been released, but what would you like to see in the next major version? Hyper-V R2 will be out soon, and I would expect its successor within a further 18 months. Whilst vSphere is technically the better product right now, Microsoft are going to throw a significant amount of resource at building up the Hyper-V product line, so VMware need to keep innovating to stay significantly ahead.

As the VMware vendor and partner ecosystem grows, will it stifle growth in the core product? I see this happening with Microsoft: they don’t want to produce an all-singing, all-dancing core product because there are literally thousands of ISVs that they don’t necessarily want to put out of business, so Microsoft’s core products are “good enough” and for more advanced features you turn to an ISV (think Terminal Services and Citrix).

So, an open question really; here’s my starter for 10: what would you like to see in ESX 5?

Host Based Replication

SAN storage brings a single point of failure; even with all the best HA controllers and disk arrangements, it’s still one unit. Human error or bad firmware could corrupt all your disks. You can buy a second one and do replication, but that’s expensive (twice as expensive, in fact) and failover can require downtime (automated with SRM etc.). And what if you need to physically move it to another datacentre? That’s a lot of risk.

In this previous post I proposed a slightly different architecture, leveraging the FT features for a branch office solution. That same model could support a more distributed architecture with n+1, 2 or 3 ESX nodes running FT’d VMs for high availability on cheap, commodity hardware, using DAS storage and replicating over standard IP networks.

If you look at companies like Amazon and Google, their cloud platforms leverage virtualization (Xen), but I would bet they don’t rely on enormous SANs to run them. They use DAS storage and replication; they expect individual (or even whole datacentre) failures and work around them by keeping multiple copies of everything. They don’t have an expensive storage model; they use cheap commodity kit and provide the HA in the software. With some enhancements, the FT feature could provide an equivalent.

Host-based replication also makes long-distance clustering more realistic, relying on plain old IP to do the replication rather than proprietary SAN-to-SAN replication (previous thoughts on this here).

Microsoft have already moved in this direction with core products like Exchange and SQL Server: Exchange CCR and SQL mirroring are pure IP-based replication technologies that address the issues with traditional single-copy clusters.

Now, with VMware being owned by EMC I could see this being something of a problem, but I hope they can see the opportunity here. You can achieve some of this today using storage virtual machines (like Openfiler plus replication in a VM, or DataCore).
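To make the idea concrete, here is a minimal sketch (purely illustrative, not anything VMware ships) of what host-based replication over plain IP could look like: the host ships only the changed blocks of a virtual disk to a peer node over a TCP connection. The peer hostname, port, framing and file path are all assumptions made up for the example.

```python
# Minimal sketch of host-based replication over plain IP (illustrative only).
# Assumes a hypothetical peer node listening on REPLICA_PORT that applies the
# blocks it receives to its own local copy of the virtual disk on DAS storage.
import socket
import struct

BLOCK_SIZE = 64 * 1024                     # replicate in 64 KB chunks
REPLICA_HOST = "esx-peer.example.com"      # hypothetical secondary node
REPLICA_PORT = 9001

def replicate_dirty_blocks(disk_path, dirty_offsets):
    """Ship only the blocks that changed since the last sync to the peer."""
    with socket.create_connection((REPLICA_HOST, REPLICA_PORT)) as sock, \
         open(disk_path, "rb") as disk:
        for offset in dirty_offsets:
            disk.seek(offset)
            data = disk.read(BLOCK_SIZE)
            # Frame each block as: offset (8 bytes) + length (4 bytes) + payload
            sock.sendall(struct.pack("!QI", offset, len(data)) + data)

# Example: change-block tracking reports the blocks at offsets 0 and 128 KB changed.
# replicate_dirty_blocks("/vmfs/volumes/das1/web01/web01-flat.vmdk", [0, 131072])
```

The hard part is tracking which blocks are dirty and keeping the copies consistent through failover, but nothing in the transport itself needs a SAN.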

Stateless ESX Nodes

A mode where nodes can be PXE booted (or booted from firmware like ESXi) and have their configurations assigned/downloaded: no manual installs, all DHCP (or reserved DHCP) addressing.

When combined with cheap, automatically provisioned and managed virtualization nodes with commodity DAS storage, you could envisage the following scenario:

  • Rack a new HP DL360 G7 with ESX 5i on a USB key (or PXE booted), attach power and network, and walk away
  • At boot time it registers itself with a management node (or nodes) and downloads its configuration (a rough sketch of this step follows the list)
  • Based on a dynamically assigned HA policy, it replicates copies of virtual machines from elsewhere in the ESX cloud; once up to speed it becomes a secondary or tertiary copy.
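As a rough illustration of that register-and-download step (the management endpoint and the JSON response shape are invented for the example; this is not a real vCenter API):

```python
# Hypothetical "register at boot, pull config" step for a stateless node.
# MGMT_URL and the payload/response formats are assumptions made up for this sketch.
import json
import urllib.request
import uuid

MGMT_URL = "https://vcenter.example.com/api/stateless"   # hypothetical endpoint

def register_and_fetch_config():
    node_id = format(uuid.getnode(), "x")                 # identify the host by MAC address
    payload = json.dumps({"node": node_id, "state": "booted"}).encode()
    req = urllib.request.Request(MGMT_URL + "/register", data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)           # vSwitches, VMkernel IPs, HA policy, ...

# config = register_and_fetch_config()
# apply_host_profile(config)             # hypothetical: apply the downloaded profile
```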

You can imagine a policy-driven, intelligent load and availability controller (vCenter 5) which ensures there are always copies of a VM on at least two or three physical machines in more than one location.
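A toy version of that placement rule might look something like this (the data model is invented for illustration; it is not a real vCenter 5 interface):

```python
# Toy sketch of the availability policy described above: every VM should have
# copies on at least MIN_COPIES hosts, spread across more than one location.
MIN_COPIES = 2

def placement_actions(vms, hosts):
    """vms: {vm: [hosts holding a copy]}, hosts: {host: location}."""
    actions = []
    for vm, holders in vms.items():
        locations = {hosts[h] for h in holders}
        if len(holders) < MIN_COPIES or len(locations) < 2:
            # Pick a host in a location that does not yet hold a copy.
            candidates = [h for h, loc in hosts.items()
                          if h not in holders and loc not in locations]
            if candidates:
                actions.append((vm, candidates[0]))    # replicate this VM to that host
    return actions

# hosts = {"esx01": "dc-a", "esx02": "dc-a", "esx03": "dc-b"}
# vms = {"web01": ["esx01"]}
# placement_actions(vms, hosts)   ->   [("web01", "esx03")]
```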

Distributed Processing

This is getting a bit sci-fi, but the foundations in infrastructure and technology are being laid now with high-speed interconnects like InfiniBand…

With more operating systems and applications starting to optimize for multi-core and hot-add CPU and memory, a very advanced hypervisor scheduler combined with very fast host interconnects like InfiniBand or 10GbE could see actual CPU load and memory access being distributed across multiple physical hypervisors.

For example, imagine a 24-vCPU SQL Server virtual machine with 1TB of vRAM having its code executed across ten quad-CPU physical hosts: effectively multi-core processing, but across multiple physical machines, moving what currently happens within a single physical CPU and bus across the network between disparate machines.

The advantage of this is that developers would only have to write apps that work within current SMP technology; the hypervisor masks the complexity of doing this across multiple hosts, CPUs and networks with a high degree of caching, and manages concurrency between processes.

You could combine this with support for hot-add CPU and memory features for apps that could scale massively on demand and then back down again, without having to engineer complex layer-7 type solutions.
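For example, a crude control loop might watch a VM’s utilization and grow or shrink it within SMP limits. The hypervisor-facing calls in this sketch (get_cpu_usage, hot_add_vcpu, hot_remove_vcpu) are hypothetical placeholders, not a real VMware API:

```python
# Hedged sketch of on-demand scaling via hot-add; the three callables passed in
# are placeholders for whatever the hypervisor would actually expose.
import time

HIGH_WATER, LOW_WATER = 0.85, 0.30     # utilization thresholds
MIN_VCPUS, MAX_VCPUS = 2, 24

def autoscale(vm, get_cpu_usage, hot_add_vcpu, hot_remove_vcpu, interval=60):
    vcpus = MIN_VCPUS
    while True:
        usage = get_cpu_usage(vm)                       # 0.0 - 1.0 across current vCPUs
        if usage > HIGH_WATER and vcpus < MAX_VCPUS:
            hot_add_vcpu(vm)                            # scale up under sustained load
            vcpus += 1
        elif usage < LOW_WATER and vcpus > MIN_VCPUS:
            hot_remove_vcpu(vm)                         # give resources back when idle
            vcpus -= 1
        time.sleep(interval)
```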

Anyway, please note this is pure personal conjecture rather than anything I have heard from VMware or elsewhere. Enough from me; what would YOU like to see?


9 responses to “VMware ESX 5”

  1. William August 7, 2009 at 11:02 pm

    IO DRS will definitely be in the next release (it will be previewed @ VMworld 09).

    I think with hot-add vCPU/vMemory, this can potentially be integrated with DRS plus SLA tools like AppSpeed and CapacityIQ to dynamically hot-add resources such as extra vCPUs and vMemory for a given workload and remove them when not needed. That would be a pretty neat feature but also a complicated one to solve. Would be interesting to see.

    –William

    • Kent August 10, 2009 at 6:45 pm

      I was actually thinking along the same lines as you with host replication, but I was going to go a step further. For branch office deployments, I am thinking of a “fault tolerant” host solution where all VMs and the service console mgmt are built for fault tolerance. The VMs would be kept in sync with storage replication between the two hosts. The host configuration would be kept in sync with host profiles between the two boxes, and each location would be a single point of installation and administration, i.e. one would not have to separately configure / maintain each box.

      The second idea is transparent storage sharing, similar to the transparent page sharing idea, where identical storage is deduplicated by the host on any storage. Think NetApp ASIS built into the host itself and storage-independent.

  2. DaSein August 11, 2009 at 6:21 pm

    Hardware passthrough to support hardware natively. Asterisk PBX in ESX would be nice.

  3. Pingback: It’s voting time.. « Virtualization, Windows, Infrastructure and all that “stuff” in-between

  4. gandalfk7 January 11, 2010 at 10:11 pm

    I was just thinking of stateless nodes this morning while installing a 4i; in large environments it would be very interesting.
    A “simple” drop-in-the-rack & go.

    I like the idea.

    btw, interesting blog.

    Matteo

    • vinf January 12, 2010 at 11:44 am

      Thanks – yeah, it would be very cool: PXE boot the OS into RAM on a blade, and I could envision a single central console with pools of ESXi hosts and the ability to apply configs to each.

  5. Pingback: Where Next for VMware Workstation? « Virtualization, Windows, Infrastructure and all that “stuff” in-between

  6. Raj M April 12, 2011 at 3:36 pm

    Is there any API released for ESX 5? If not, does anybody have some idea about it?

  7. Omid Boloori May 14, 2011 at 5:07 am

    I’d like to see StorageDRS as a feature in ESXi 5. We’re all now seeing storage tiering being offered by EMC and NetApp; it’d be nice to integrate StorageDRS into these pools to take advantage of faster disk (when needed).
