Virtualization, Cloud, Infrastructure and all that stuff in-between
My ramblings on the stuff that holds it all together
Category Archives: Grid Storage
Using the VCE/vBlock concept to aid disaster relief in situations like the Haiti Earthquake
Seeing the tragic events of the last couple of days in Haiti played out on the news spurred me into evolving some thinking I had been working on. The sheer scale of infrastructure destruction left by the earthquake is making it hard to distribute relief by road, so airlifting and military assistance are the only realistic ways of getting help around.
Whilst providing physical, medical, food and engineering relief is of paramount importance during a crisis, communications networks are vital to co-ordinate efforts between agencies. Whatever civil communications infrastructure existed – cell towers, landlines etc. – is likely to be badly impacted by the earthquake, so aid agencies rely on radio-based systems. However, just as in the "business as usual" world, the Internet can act as a well-understood common medium for exchanging digital information and services – if you can get access.
Crisis Camp is a very interesting and noble concept that gathers technically minded volunteers around the world to collaborate on producing useful tools for relief staff on the ground: missing-people databases, geo-mapping mashups on Google Earth and so on. Using open-source tools and donated time makes this a free/low-cost software solution for relief agencies.
However, with the scale of infrastructure destruction in large disasters, getting access to shared networks, bandwidth and cellular communications on the ground is likely to be difficult. In this post I propose a vendor-neutral solution: whilst I reference the VCE/vBlock concept, which is essentially an EMC/Cisco/VMware product line, the concept of a packaged, pre-built and quick-to-deploy infrastructure solution applies equally to a single or multi-vendor "infrastructure care package". Standardisation and/or abstraction are the key to making it flexible (sound familiar to your day job?) – by using virtual machines as the building blocks of useful services, it can run on any donated/purchased/loaned hardware.
These care packages would typically be required for 2–3 months, to aid disaster relief during the worst periods and whilst civil infrastructure is re-established. None of this is free in the normal world – it's a physical product: tin, cables, margin and invoices – but it is flexible enough to be redeployed again and again as needs dictate. With my UN or DEC hat on, this is a pool of shared equipment, donated or loaned by vendors or sponsors, that can be sent around the world and deployed within 24 hours to aid on-the-ground relief efforts.
What is it?
A bunch of commodity servers, storage and communications gear with a low power footprint, packed into a single, specialised shock-mounted rack with a generator (gas/diesel/solar, as available) and battery backup.
It makes heavy use of virtualization technologies to provide high availability of data and services, working around individual equipment and/or rack failures due to damage or loss of power (generator out of fuel, localised aftershock etc.).
Because systems supporting relief operations will typically only be required for short-term use, virtual appliances are an ideal platform – for example a pre-configured database cluster or web server farm. Technologies like SpringSource can be used to deploy and bootstrap web applications into virtual appliances around the infrastructure.
Data storage and replication is achieved not with expensive hardware array-based solutions but with DAS storage within the blades (or shared disk stores), using virtual storage appliances like the HP LeftHand Networks VSA, the Celerra VSA or OpenFiler. This allows the use of cheap, commodity storage whilst achieving block-level replication between multiple storage locations in software: each blade uses storage within the same rack, and if access to that storage fails the workload can be restarted on an alternative blade or an alternative rack (like the HA feature of vSphere).
These racks are deployed across a wide geographic area, creating a meshed wireless network – something like WiMAX to handle inter-mesh and backhaul transit, with local femtocell/WiFi technology for access – providing three services:
- private communications – for inter-rack replication and data backhaul
- public data communications – wireless IP-based Internet access with a local proxy server/cache (backhaul via satellite or whatever is available, distributed across the mesh)
- local access to a public cellular network via femtocell (GSM, or whatever the local standard is)
The availability/load-balancing features of modern hypervisors, like VMware's HA/DRS and FT technology, can restart virtual machines on an alternative rack should one fail. Because the VSA technology replicates datastores between all racks at block level using a p2p-style protocol, it's always possible to restart a virtual appliance elsewhere within the infrastructure – the same idea as in the datacentre, but on a much wider scale and with real-world impact.
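To make that concrete, here's a minimal sketch of the failover logic – not VMware HA itself: each rack heartbeats its neighbours over the private mesh, and appliances placed on a rack that goes quiet are restarted on a surviving rack from the replicated datastore. The rack names, appliance names and the restart_appliance() helper are all hypothetical placeholders.

```python
import time

HEARTBEAT_TIMEOUT_S = 60   # assumed policy: a rack silent for this long is treated as down

# In-memory view of the mesh: when each rack last heartbeated, and where each
# virtual appliance currently runs. A real system would persist/replicate this.
last_seen = {"rack-a": time.time(), "rack-b": time.time(), "rack-c": time.time()}
placement = {"maps-cache": "rack-a", "missing-persons-db": "rack-b"}

def restart_appliance(appliance: str, target_rack: str) -> None:
    # Stand-in for whatever the hypervisor management call would be; the
    # block-level VSA replication is what makes a restart elsewhere possible.
    print(f"restarting {appliance} on {target_rack}")
    placement[appliance] = target_rack

def failover_check() -> None:
    now = time.time()
    dead = {r for r, seen in last_seen.items() if now - seen > HEARTBEAT_TIMEOUT_S}
    alive = [r for r in last_seen if r not in dead]
    for appliance, rack in placement.items():
        if rack in dead and alive:
            restart_appliance(appliance, alive[0])   # naive: pick the first surviving rack
```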
Ok, but what does it do?
Even if you were to establish a meshed communications network to assist with disaster-relief activities on the ground, bandwidth and backhaul to the Internet or the global public telecoms systems will be at a premium. Chances are any high-bandwidth civil infrastructure will be damaged or degraded, and satellite links are expensive, with limited bandwidth and high latency.
The mesh this solution provides could add a layer of local caching and data storage. I'm thinking particularly of the Google Maps-type mashups people at Crisis Camp are discussing to help co-ordinate relief efforts, which can involve transferring a large amount of data – if you could hold a local cache of all the mapping information within the mesh, transfer times would be drastically reduced.
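As a minimal sketch of that caching idea (not a production proxy), the snippet below serves map tiles or other static data from a mesh-local disk cache and only goes out over the scarce backhaul on a miss. The upstream URL, cache path and port are hypothetical placeholders.

```python
import hashlib
import os
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

UPSTREAM = "https://tiles.example.org"   # assumed satellite-backhauled source
CACHE_DIR = "/var/cache/mesh-tiles"      # assumed DAS-backed local cache path

class CachingTileHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        key = hashlib.sha256(self.path.encode()).hexdigest()
        cached = os.path.join(CACHE_DIR, key)
        if os.path.exists(cached):
            with open(cached, "rb") as f:       # hit: serve from inside the mesh
                body = f.read()
        else:
            with urllib.request.urlopen(UPSTREAM + self.path) as resp:
                body = resp.read()              # miss: one expensive fetch over backhaul
            os.makedirs(CACHE_DIR, exist_ok=True)
            with open(cached, "wb") as f:
                f.write(body)
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    ThreadingHTTPServer(("0.0.0.0", 8080), CachingTileHandler).serve_forever()
```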
This is really just a collection of my thoughts on how you can take current hypervisor technology and build a p2p-style private cloud infrastructure in a hurry. Virtualization brings a powerful opportunity in that it can support a large number of services in a small power footprint: the more services that can be moved off dedicated hardware and into a virtual machine (for example a VoIP call manager, video-conferencing system or GSM base-station controller), the less demand there is for scarce fuel and power on the ground. Virtualization also brings portability – less dependence on a dedicated "black box" that is hard to replace in the field. You can use commodity x86 hardware and carry enough spares to keep things working or work around failures.
The technology to build this type of emergency service is available today, with some tweaking. The key is having it in place and ready to ship on a plane to wherever it is needed in the world. Some more developed nations have this sort of capability in-country, for things like emergency cellular networks following hurricanes, but it will take a lot of international co-operation to make this a reality on a global scale.
Whilst I'm not aware of any current projects by international relief agencies to build this sort of system, I'd like to draw people's attention to the possibilities.
The DEC are accepting donations for the Haiti earthquake relief fund at the following address,
or via the International Red Cross appeal here.
VMware Client Hypervisor (CVP) – Grid Application Thoughts
Today VMware announced the client hypervisor they are producing, and a collaboration with Intel on the hardware support (VT) and management (vPro); Citrix made a similar announcement last month (some analysis from the trusty Brian Madden here).
If the client-side device is now running a hypervisor, this presumably extends the same encapsulation principles from datacentre/server virtualization to the desktop: more than one OS instance could run on a client – for example a Linux and a Windows VM side by side – sharing data or isolated for security/compliance reasons, with network traffic securely routed or encapsulated to keep it separate.
With most PC hardware, that's a lot of computing horsepower around the estate that sits underused or idle while the user goes to lunch or does lightweight tasks.
Grid based applications are much discussed in the banking/geophysical world as they need to crunch vast amounts of data and are well suited to horizontal scaling. On an Internet scale, there are distributed grids like SETI or Folding@Home – crunching towards a common goal.
What if you had a centralised server that could stream down virtual appliances running such applications, and thus distributed services – isolated from the user through the hypervisor, and resource-controlled so that they process in the background, when the CPU is idle, or under a central "resource policy"?
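For the resource-control piece, here's a minimal sketch, assuming the appliance is a Linux VM and the grid provider supplies the actual work: the worker only crunches when the reported load average is low, otherwise it backs off. fetch_work_unit() and crunch() are hypothetical stand-ins, not a real grid API.

```python
import os
import time

IDLE_LOAD_THRESHOLD = 0.5   # assumed policy: only work when the 1-minute load average is low

def fetch_work_unit():
    """Hypothetical: pull the next job from the grid provider's queue."""
    return {"job": "placeholder"}

def crunch(unit) -> None:
    """Hypothetical: process the work unit inside the isolated appliance."""
    time.sleep(1)   # stand-in for real number-crunching

while True:
    one_min_load, _, _ = os.getloadavg()   # Unix-only; fine for a Linux appliance
    if one_min_load < IDLE_LOAD_THRESHOLD:
        crunch(fetch_work_unit())
    else:
        time.sleep(30)   # back off while the user (or another VM) needs the CPU
```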
What if you could then sell this compute capacity back to a "grid" provider which federates and dispatches grid jobs?
Of course, you can technically do this now – multi-tasking has been standard on most desktop operating systems since the late 80s – but security has always been the concern: what if that "grid" application contains malicious code, or a bug which can leak data from your machine or the corporate network? This problem hasn't really been solved to date; Java and the like provide sandboxes, but they depend on a lot of components from the core OS stack and don't address network isolation.
Now you have an option to provide a high level of instance and network isolation between business systems and grid/public applications by using a client hypervisor – in much the same way that VMware ESX is the foundation for a multi-tenant cloud through vSwitches, Private VLANs etc.
Take that idea to the next level: what if you could distribute your server workload around your desktop estate rather than maintain a large central compute facility?
High availability through something like VMware FT and DRS/HA makes features of the underlying hardware such as RAID and redundant power supplies less of a focus point; arguably you are providing high availability at the hypervisor/software level rather than through big iron.
You could also do something like provide a peer-to-peer file system that leverages local storage on each device, giving local LAN access to cached files – the hypervisor isolates the virtual appliance from the end user, dividing administrative access to systems and services.
There is a lot of capacity in this "desktop cloud"… and maybe some smart ways to use it. Conventional IT thinking says this is a bit wacky, but I definitely think there is something in it… thoughts?
Solid State SAN, Storage vMotion and VMware – HSM for your VMs
You've been able to buy solid-state SAN technology like the Tera-RAMSAN from TMS for a while now; it gives you up to 1TB of storage, presented over 4Gb/s Fibre Channel or InfiniBand at 10Gb/s… and with the cost of flash storage dropping, it's soon going to fall into the realms of affordability (from memory, a year ago a 1TB SSD SAN was about £250k, so I'd assume that's maybe £150k now – I'd be happy to see current pricing if anyone has it, though).
If you were able to combine this with a set of ESX hosts dual-connected to the RAMSAN and to traditional equipment (like an HP EVA or EMC Clariion) over an FC or iSCSI fabric, then you could possibly leverage the new Storage vMotion features included in ESX 3.5 to achieve a second level of performance and load levelling for a VM farm.
It's pretty common knowledge that you can use vMotion and the DRS features to effectively load-level or average VM CPU and memory load across a number of VMware nodes within a cluster.
Using the infrastructure discussed above could add a second tier of load balancing, without downtime, to a DRS cluster. If a VM needs more disk throughput or is suffering from latency, you could move it between the expensive solid-state storage tier and FC-SCSI or even FATA disks; this ensures you are making the best use of fast, expensive storage versus cheap, slow commodity storage.
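A minimal sketch of that tiering policy is below – it's not the VMware API, just the decision loop: promote latency-bound VMs to the solid-state datastore and demote quiet ones back to the bulk tier. The thresholds, datastore names and the storage_vmotion() stand-in (which in practice would wrap the VI SDK or a script) are all assumptions.

```python
from dataclasses import dataclass

PROMOTE_MS = 20.0   # assumed threshold: sustained latency above this earns the SSD tier
DEMOTE_MS = 5.0     # assumed threshold: below this the VM no longer needs SSD

@dataclass
class VM:
    name: str
    datastore: str
    latency_ms: float   # fed from whatever monitoring source you trust

def storage_vmotion(vm: VM, target: str) -> None:
    # Stand-in for the real hot-relocate call issued via the management API.
    print(f"Storage vMotion {vm.name}: {vm.datastore} -> {target}")
    vm.datastore = target

def rebalance(vms: list[VM], ssd_ds: str, bulk_ds: str) -> None:
    for vm in vms:
        if vm.latency_ms > PROMOTE_MS and vm.datastore != ssd_ds:
            storage_vmotion(vm, ssd_ds)    # hot-migrate up to the solid-state tier
        elif vm.latency_ms < DEMOTE_MS and vm.datastore == ssd_ds:
            storage_vmotion(vm, bulk_ds)   # hand back expensive solid-state space

rebalance([VM("sql01", "EVA_FC", 35.2), VM("web02", "RAMSAN", 1.1)],
          ssd_ds="RAMSAN", bulk_ds="EVA_FC")
```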
Even if VirtualCenter doesn't have a native API exposing this type of functionality, or criteria for it in the DRS configuration, you could leverage the plug-in or scripting architecture and use a manager of managers (or here) to map this across an enterprise and across multiple hypervisors (Sun, Xen, Hyper-V).
I can also see EMC integrating flash storage into the array itself; it would be even better if you could transparently migrate LUNs between different arrays and disk tiers without having to touch ESX at all.
Note: this is just a theory – I've not actually tried it, but I'm hoping to get some eval kit and do a proof of concept…
Hot-Swap Datacentres
There's an interesting post over on the Forrester Research blog by James Staten. He's talking some more about data centres in a container – making the data centre the FRU rather than a server or server components (disk, PSU etc.).
This isn't a new idea, and I'm sure economies of scale mean it is currently only suitable for the computing superpowers (Google, Microsoft – MS are buying them now!), but variances in local power/comms costs could soon force companies to adopt this approach rather than be tied to a local/national utility company and its power/comms pricing.
But just think: if you are a large outsourcing-type company, you typically reserve, build and populate data centres based on customer load, and that load is variable; customers come and go (as much as you would like to keep them long-term, this is becoming a commodity market and customers demand that you react quickly to changes in THEIR business model – which is typically why they outsource: they make it YOUR problem to service their needs).
It would make sense if you could dynamically grow and shrink your compute/hosting facility based on customer demand in this space – that's not so easy to do with a physical location, as you are tied to it in terms of power availability/cost and lease period.
A new suite build-out at a typical co-lo company can take 1–2 months to establish networking, racks, power distribution, cabling, operational procedures etc. (and that's not including physical construction if it's a new building) – adopting the blackbox approach could significantly reduce the start-up time and increase your operational flexibility.
Rather than invest in in-suite structured cabling, racks and reusable (or dedicated) server/blade infrastructure, why not just have terminated power, comms and cooling connections and plug containers in as required within a secured, warehouse-like space?
Photos from Sun Project Blackbox
You could even lease datacentre containers from a service provider/supplier to ensure there is no cap-ex investment required to host customers.
If your shiny new data centre runs out of power you could relocate it far more easily (and cheaply), as it's already transportable rather than tied to the physical building infrastructure; you are able to follow the cheapest power and comms – nationally or even globally.
As I've said before, the more you virtualize the contents of your datacentre, the less you care about what physical kit it runs on… you essentially reserve power from a flexible compute/storage/network "grid" – and that could be anything, anywhere.
Interesting Article on how DreamWorks are Speeding up Access for Animators
I have a geeky secret: I used to be really into ray-tracing and 3D graphics – not so much from an "art" point of view, although I do have an interest in that, and computer modelling/visualisation checks a lot of boxes for me as I always wanted to be a civil engineer or architect (well, I kind of am… but with computers!).
It was one of the only applications I found in the early/mid 90s that could really tax a machine; I spent a lot of time playing with large render jobs using PovRay, progressed to 3D Studio for DOS, and then dabbled in building render farms with 3DS Max before I had to go and get a "proper" job with less spare time.
I would love the time to get back into it; with the power available today you could produce some awesome images – although maybe I am somewhat hampered by a lack of talent… maybe that's downloadable now?
…So anyway, here's an interesting article on how DreamWorks Animation have sped up access to their render farm using Ibrix parallel file-server software… they shift a lot of data!
I've worked on a project where we tried to implement a similar high-performance, grid-based storage system for large media files, but it was somewhat less successful and less developed; this one looks promising.
I wonder if these kinds of vendors will start moving into the virtualization space; it's essentially the same principle:
Deliver large flat files (.VMDK) over cheap/scalable commodity media (GigE) as quickly as possible.
This would reduce the dependency on expensive back-end Fibre Channel SANs, and you could invest more in flexible Ethernet – or maybe InfiniBand – to deliver networking and storage within a "virtual fabric".
If it's "virtual" and "grid"-based, the quality/features of the individual hardware devices (DL380, NAS device etc.) that make up the overall grid are less important, and a 100% software approach gives you the flexibility to pick and choose building blocks from the most appropriate/affordable manufacturer rather than be locked into a costly single-vendor solution (HP EVA, EMC Clariion, DMX etc.).
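To illustrate the delivery idea above, here's a minimal sketch (not how Ibrix actually works): a large flat file is striped in fixed-size chunks across a few commodity NAS nodes and read back in parallel, so each GigE link only carries its own stripes. The mount points and chunk-naming convention are hypothetical.

```python
import concurrent.futures
import os

NODES = ["/mnt/nas01", "/mnt/nas02", "/mnt/nas03"]   # assumed NFS mounts on commodity boxes

def read_chunk(index: int) -> bytes:
    # Chunk i is assumed to live on node i % len(NODES) under a simple naming scheme.
    path = os.path.join(NODES[index % len(NODES)], f"vm-flat.vmdk.{index:06d}")
    with open(path, "rb") as f:
        return f.read()

def read_striped(total_chunks: int) -> bytes:
    # Fan the reads out across nodes; map() preserves chunk order for reassembly.
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(NODES)) as pool:
        return b"".join(pool.map(read_chunk, range(total_chunks)))
```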
Thanks to Martin at Bladewatch for the link.