Virtualization, Cloud, Infrastructure and all that stuff in-between
My ramblings on the stuff that holds it all together
Carolina VMware User Summit in Charlotte, NC
Bit of a change of location this week as I find myself at the Carolina VMware User Summit (VMUG) meeting in Charlotte, NC (for anyone else not from here, you can familiarise yourself with NC on Wikipedia like I did :)).
First impressions of this event are that wow, it’s BIG – for any London VMUG’ers, this is at least 10 times as big, probably more. If it helps frame the scale for us Europeans: if you’ve ever been to TechEd Europe in recent years, the main hall and seating is probably the same size as the lunch halls in Barcelona, with a bunch of break-out rooms.
There was a great turn-out, with a number of vendors sponsoring and exhibiting as well as a number of well-respected industry experts; as I think Chad Sakac pointed out, this is almost a mini-VMworld!
The opening keynote by Scott Davis, CTO of Desktops at VMware, covered View and VMware’s aspirations for enabling Desktop as a Service (DaaS) with VMware View. There was nothing new here, but it was good to get it laid out.
One point that I did pick up on is that there still won’t be any offline Mac support in View 4.5. This seems like a high-demand feature to me, given that the number of Macs I see in corporate environments these days is multiplying exponentially; Fusion VMs still lack a centralised command-and-control infrastructure outside of the normal AD and Group Policy used for "corporate" Windows VMs.
Next up was "Nexus 1000v Architecture and Deployment" by Jason Nash from Varrow – his blog is here, and he has some excellent articles on implementing the Nexus 1000v (NX1k).
Varrow have done a number of deployments of the NX1k, and some interesting points and gotchas I noted from the presentation are as follows:
Why are people implementing the NX1k? The typical use-case is for your network team to be able to use familiar tools to manage, configure and maintain the environment.
However, an interesting operational point for non-NX1K environments: if you need to do a packet-level debug of a problem, or have a packet-level IPS-type device that works via a SPAN port, then on a traditional vSwitch in a DRS cluster you will lose visibility of the traffic if your VM moves to an alternative host through vMotion or HA. In the NX1K world, everything moves with the VM.
Upstream physical Cisco switches are not absolutely required for the NX1k to function, but they enable features like CDP, which are really useful where there are multiple layers of abstraction.
There are essentially 2 components to the NX1k – the VSM runs as a virtual machine, while the VEM is a software module on each host:
Virtual Supervisor Module (VSM): typically 2 for redundancy in an active/passive configuration. These control the configuration and management of the virtual Cisco switches, analogous to Cisco Supervisor line-cards in the Catalyst range of chassis switches. Most people implement these as DRS/HA-enabled virtual machines.
Virtual Ethernet Module (VEM): one instance runs on each ESX node participating in a cluster with an NX1K dvSwitch.
There is also a physical appliance for running the VSM, the Nexus 1010, which is a re-branded Cisco UCS C200 rack server that can run up to 4 instances of a VSM, and there is likely to be a future implementation that fits into a chassis-type switch as a blade. However, the majority of customer implementations have been using VSMs running on a DRS/HA-enabled vSphere cluster, as the actual resource/supportability requirements don’t typically require a dedicated appliance.
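As an aside, you can see this VSM/VEM split from the vSphere side by listing the distributed switches and the hosts attached to them – each attached host is running a VEM. Here’s a minimal sketch using the pyVmomi SDK (which post-dates this event, so treat it as an illustration); the vCenter hostname and credentials are placeholders, and on an NX1K the switch’s product vendor shows up as Cisco rather than VMware:

```python
# Minimal sketch: list distributed vSwitches and their member hosts via pyVmomi.
# Hostname/credentials are placeholders - adjust for your environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; don't skip cert checks in production
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.DistributedVirtualSwitch], True)
    for dvs in view.view:
        info = dvs.config.productInfo
        print(f"{dvs.name} (vendor: {info.vendor if info else 'unknown'})")
        for member in dvs.config.host:
            # for an NX1K switch, each host member here is running a VEM
            print(f"  host: {member.config.host.name}")
    view.Destroy()
finally:
    Disconnect(si)
```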
One of the most common problems seen "in the field" comes from a loss of control traffic between the VSM and VEM, which can result in modules going offline or "flaky" functionality.
VSM<->VEM comms use 2 x L2 VLANs to work; they can both live on the same VLAN, but this isn’t best practice:
Control = heartbeat between VSM and VEM
Packet = CDP, IGMP, SNMP, NetFlow/SPAN
Both need to be trunked across ALL switches.
In the UI and command line, "ethernet" denotes a physical network connection, while a "vethernet" surfaces in vSphere as a port-group with an associated QoS policy.
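If you want to sanity-check from the vCenter side that the control and packet VLANs (and your vethernet port-groups) look right, something like the following pyVmomi sketch would list each distributed port-group and its VLAN ID. Note the VLAN-spec types here are the VMware dvSwitch ones; a third-party switch like the NX1K may expose its port profiles differently, so treat this as an assumption-laden sketch rather than a definitive tool:

```python
# Sketch: list distributed port-groups and their VLAN IDs via pyVmomi.
# Connection details are placeholders; the VLAN parsing assumes the VMware
# dvSwitch port-config types, so adapt for third-party switches.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
    for pg in view.view:
        vlan = "n/a"
        cfg = pg.config.defaultPortConfig
        if isinstance(cfg, vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy):
            spec = cfg.vlan
            if isinstance(spec, vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec):
                vlan = spec.vlanId  # single VLAN ID (trunk/PVLAN specs differ)
        print(f"{pg.name}: VLAN {vlan}")
    view.Destroy()
finally:
    Disconnect(si)
```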
The VEM can be patched using VMware Update Manager (VUM), but NX1k releases sometimes don’t appear on the VUM list for several days after release, so be sure to check.
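After patching, it’s worth confirming each VEM is healthy. One quick-and-dirty approach – assuming SSH access to the hosts is enabled, and with the host names and credentials below entirely made up – is to run the VEM status command on each host remotely:

```python
# Sketch: run "vem status" on each ESX host over SSH to check VEM health
# after patching. Host list and credentials are placeholders.
import paramiko

HOSTS = ["esx01.example.com", "esx02.example.com"]  # hypothetical host names

for host in HOSTS:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # lab use only
    client.connect(host, username="root", password="password")    # placeholder creds
    _, stdout, _ = client.exec_command("vem status")
    print(f"--- {host} ---")
    print(stdout.read().decode())
    client.close()
```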
Many customers keep non-VM access networks, such as the COS and vMotion, on traditional non-NX1K vSwitches to remove any scope for a configuration error totally knocking out access – something I’ve written about before in this post.
Next up was Mike DiPetrillo (Global Cloud Architect with VMware; Twitter/Blog) with “All about VMware vCloud”.
Mike covered off the key concepts behind cloud and VMware’s view; I’ve written about this before so I won’t recount it here again.
Some interesting points I noted:
There is a different/hybrid skill-set for people working with cloud; it’s less about silos, and people need to evolve or be left behind:
Networking – it all needs to be plumbed together, automation is needed
Storage – to design and operate at scale in a flexible manner
Programming/Automation – to create/maintain automation at scale
Servers – manage/maintain at scale
Virtualization – to enable flexibility
People are moving to “cloud” in the same way they moved to server virtualization: test & development first, gaining comfort before moving to production. This is something I’ve definitely seen played out in my line of work.
The technical “stuff” behind cloud is pretty easy – it’s servers, storage, virtualization, networking. The hard part is gluing it all together and automating it to achieve self-service functionality (either internally or for the public); this “orchestration” is the complicated part.
There was then a vExpert panel discussion between the following luminaries:
Mike Laverick (Independent) Blog/Twitter
A lot of the chat was pretty storage-focused, between Chad [EMC] and Vaughn [NetApp], although it didn’t end up in a fight; the general consensus was that deep array integration is a good thing for making things easier to operate and manage, and EMC and NetApp are leading the way with their code and vStorage API integration.
It was interesting, although I would like to have seen a wider discussion – but those were the questions posed. I also think storage choice is not just a black-or-white decision (shirt-colour pun intended :)).
And finally, Chad Sakac did a great session titled “Infrastructure Technologies for VMware and the Private Cloud”.
Chad’s a great presenter, and this session has been covered elsewhere on the Internet by a lot of the vSpecialist team, but the key points for me were:
EMC plug-ins for EMC arrays are freely downloadable for EMC customers and partners; if you use the Celerra VSA you can play with this yourself now, on your own laptop – see Nick’s Uber VSA here. The coolest part was that using the plug-ins you can configure LUNs and storage on your array from within vCenter – handy for a lab or a smaller shop where you may not have a dedicated “storage guy”. You can see some demos and get more info on these plug-ins on Chad’s blog here.
One thing I like about Chad is that he’s a geek, so he understands that people want to see demos, not just slides, and he had a good deck of pre-recorded demos of the cooler EMC technologies like the VM teleporter and the “upcoming, soon to be released, super-secret VMware vCloud product that cannot be named, but has been” 🙂
There was also a demo of an upcoming release of the EMC Ionix product, which allows auto-discovery of vBlock infrastructure and provides “a single pane of glass” for administering all aspects of a vBlock: UCS blades (via service profiles), storage and networking.
Ionix plus the upcoming secret VMware vCloud product seem to solve some of the orchestration and provisioning difficulties that Mike DiPetrillo alluded to in his session, and from what I saw, I now get it – very clever.
In summary, it was an enjoyable day and I had some great conversations with people in the “meet the experts” room. Next up for me is BriForum, and if I get time I’m going to get those EMC plug-ins configured with a Celerra VSA to show in my BriForum session next week.
**edited to fix some embarrassingly obvious typos! – I claim jetlag :)**
Simon – thanks very much for being there…. Was indeed a great VMUG, and big by any standard (but still smaller than some in the Netherlands 🙂
In retrospect, I think Vaughn and I (mostly I – I REALLY have to work on getting things out “in a nutshell”) should/could have given more time for answers from Mike and Scott on the panel. Hope the audience found it useful, regardless.
I am indeed a geek, and while my career at EMC has me focused a lot on business, team construction, alliances and other things – my nightmare is losing touch and becoming a talking head – so, to regain a “grounded sense”, I go back and play in a lab for a few days. Helps me regain calm and perspective to work through geek problems 🙂
Thanks again!