This session discussed new features in vSphere – or is it VDC-OS? I’m a bit confused about that one. vSphere is the new name for “Virtual Infrastructure”? That would make sense to me.
As usual, this session was prefixed with a slide stating that all material presented is not final and is not a commitment – things may change, etc. At least VMware point this out for the less aware people who then come and complain when something has changed at GA 🙂 This is my take on what was said… don’t sue me either 🙂
vApp is an OVF-based container format that describes a virtual machine (OS + app + data = workload), the resources it needs, the SLA that must be met, and so on. I like this concept.
In later releases it will also include security requirements. They use the model that a vApp is like a barcode describing a workload: the back-end vCenter suite knows how to provision and manage services to meet the requirements expressed by the vApp (resource allocation, HA/FT usage, etc.), and does so when you import the vApp.
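As a rough illustration (my own sketch, not from the session – element names loosely follow the DMTF OVF 1.0 schema, but namespaces and most required sections are omitted), an OVF descriptor for a vApp looks something like this:

```xml
<!-- Hand-simplified OVF descriptor sketch; real descriptors carry full
     namespaces, References, DiskSection, NetworkSection, etc. -->
<Envelope xmlns="http://schemas.dmtf.org/ovf/envelope/1"
          xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1">
  <VirtualSystem ovf:id="my-workload">
    <Info>A single-VM workload: OS + app + data</Info>
    <VirtualHardwareSection>
      <Info>Resource requirements the back end must satisfy</Info>
      <Item><!-- e.g. 2 vCPUs --></Item>
      <Item><!-- e.g. 4GB vRAM --></Item>
    </VirtualHardwareSection>
  </VirtualSystem>
</Envelope>
```

The “barcode” idea maps well onto this: the descriptor states *what* the workload needs, and vCenter decides *how* to satisfy it at import time.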
There was some coverage of VMware Fault Tolerance (FT) using the lockstep technology; this has been discussed at length by Scott here. However, if I understood correctly, it was said that at launch there would be some limitations: it’s going to be limited to 1 vCPU until a later update – or maybe they meant experimental support at GA, with full support in a later update (Update 1, maybe?). Perhaps someone else at the session can clarify; otherwise there will hopefully be more details in the day 2 keynote by Steven Herrod tomorrow.
There is likely to be a c.10% performance impact on VMware FT hosts due to the lockstep overhead (this came from an answer to a delegate question rather than from the slides).
Ability to scale up virtual machines through hot-add of vRAM and vCPU, as well as hot-extension of disks.
The vSphere architecture is split into several key components (named using the v-prefix that is everywhere now! 🙂)
vCompute – scaling up the capabilities and scale of individual VMs to meet high-demand workloads.
VMDirectIO – allowing direct hardware access from within a VM; for example, a VM using a physical NIC to do TCP offload, etc. The VM has the vendor driver installed rather than VMXNET to increase performance (this looks to have DRS/vMotion implications).
Support for 8-way vSMP (and hot-add)
255GB RAM for a VM
Up to 40Gb/s network throughput within a VM.
vStorage – improved storage functionality
Thin provisioning for pragmatic allocation of storage; you can use Storage vMotion to move data to larger LUNs if required, without downtime. Monitoring is key here – vCenter integration.
Online disk grow – increase disk size without downtime.
<2ms latency for disk I/O
API for snapshot access, enabling ISV solutions and custom bolt-ons
Storage Virtual Appliances – this is interesting to me, but no real details yet
Distributed Network vSwitch – some good info here: configure once, push the config out to all hosts
3rd-party software switches (Cisco Nexus 1000V)
vShield – a self-learning, self-configuring firewall service providing firewall/trust zones to enforce security policies
vSafe – a framework for ISVs to plug in functionality like VM deep inspection, essentially doing brain surgery on a running VM via an API.
The last point before I had to leave early for a vendor meeting was about power. vSphere has support for power-management technology like SpeedStep and core sleeping, and DPM (Distributed Power Management) is moving from experimental to mainstream support. This is great, as long as you make sure your data centre power feed can deal with surge capacity should you need to spin up extra hosts quickly – for example at a DR site when you invoke a recovery plan. This needs thought and sizing, rather than oversubscribing power because you think you can get away with it (or don’t realise DPM is sending your servers to sleep); otherwise you may be tripping some breakers and having to find the torches when you have to “burst”.
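To make the sizing point concrete, here’s a back-of-the-envelope sketch (my own illustration with made-up numbers, not anything from the session) that checks whether a power feed can absorb every DPM-slept host waking at once:

```python
# Back-of-the-envelope DPM surge-capacity check.
# All figures below are illustrative assumptions, not VMware guidance.

def surge_headroom_watts(feed_capacity_w, awake_hosts, slept_hosts,
                         host_peak_w):
    """Headroom left if every DPM-slept host wakes and runs at peak draw.

    A negative result means the feed is oversubscribed, and a full
    wake-up (e.g. invoking a DR recovery plan) could trip breakers.
    """
    worst_case_draw = (awake_hosts + slept_hosts) * host_peak_w
    return feed_capacity_w - worst_case_draw

# Example: a 10 kW feed, 8 hosts awake, 6 asleep under DPM,
# each host drawing up to 700 W at peak.
headroom = surge_headroom_watts(10_000, 8, 6, 700)
print(headroom)  # 200 W to spare – uncomfortably tight
```

The point is simply that the feed must be sized for the worst-case draw with *all* hosts awake, not for the steady-state draw DPM leaves you with.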