Virtualization, Cloud, Infrastructure and all that stuff in-between

My ramblings on the stuff that holds it all together

What is the Cloud..?

 

Following on from some discussion on Scott Lowe’s blog around the lack of a clear definition of cloud computing, I offer this as my opinion – and it’s just that; I’d welcome comments. I’m an infrastructure chap by trade, but my motto is that there are no apps without infrastructure: this cloud stuff is all about making it easier for developers to create useful things quickly and cheaply, but it all needs somewhere to run.

Sam Johnston has also done some work on formalising this stack, but I offer the following up for discussion.

A cloud is just a pool of flexible, on-demand “resource”: a way of abstracting the underlying complexities of how things are executed, stored, provisioned and so on.

This is analogous to the way the modern computer, OS and applications have developed over the last 20 years into multiple abstraction layers – meaning a developer no longer has to explicitly know how to control devices at a low level, for example moving bits in and out of registers or reading/writing sectors on disks. They make API calls down a stack to BIOS, firmware, operating systems, drivers and code libraries (.DLLs etc.), more recently moving to web services as the top-level interface.
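As a toy illustration of that layering (everything below is made up for the example – it’s not a real driver stack), the developer at the top only ever sees the high-level call, never the sectors underneath:

class BlockDevice:
    """Bottom layer: raw 512-byte sectors (simulated in memory)."""

    def __init__(self, sectors=128):
        self._sectors = {i: bytes(512) for i in range(sectors)}

    def write_sector(self, lba, data):
        self._sectors[lba] = data.ljust(512, b"\x00")

    def read_sector(self, lba):
        return self._sectors[lba]


class SimpleFS:
    """Middle layer (the 'OS/driver'): maps file names onto sectors."""

    def __init__(self, device):
        self._device, self._table, self._next_free = device, {}, 0

    def write_file(self, name, payload):
        self._table[name] = self._next_free
        self._device.write_sector(self._next_free, payload)
        self._next_free += 1

    def read_file(self, name):
        return self._device.read_sector(self._table[name]).rstrip(b"\x00")


# Top layer: the developer's code – no registers or sectors in sight.
fs = SimpleFS(BlockDevice())
fs.write_file("hello.txt", b"written via the abstraction stack")
print(fs.read_file("hello.txt"))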

Presently, solution/infrastructure architects work with individual servers, roles and services on bits of dedicated hardware, often bound to a specific location, and have to architect the solution themselves to deliver the required levels of availability, serviceability and so on.

I see “the cloud” as a way of providing the next level of abstraction – with an eventual goal of architects being able to design systems with no roles or services tied to specific servers, databases, hardware, datacentres or even continents(!), where SOA-type applications and transactions are executed across one or more cloud platforms without the need for a detailed understanding of the underlying infrastructure.

In the interim, the cloud can deliver a platform to quickly deploy and change infrastructure, right-sizing capacity based on actual usage rather than the over-sizing typically done early in an application’s development cycle.
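To make that concrete, a back-of-an-envelope sketch of right-sizing from measured load – the throughput figure, headroom factor and minimum pool size are all invented for illustration:

import math


def instances_needed(observed_rps, rps_per_instance=250.0,
                     headroom=1.3, minimum=2):
    """Size the pool from measured requests/sec plus a safety margin."""
    return max(minimum, math.ceil(observed_rps * headroom / rps_per_instance))


# Driven by actual usage over time, not a launch-day guess:
for rps in (120, 900, 2400):
    print(rps, "req/s ->", instances_needed(rps), "instances")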

Fewer hard physical ties between application components and the local infrastructure that supports them, combined with adoption of virtualization of servers, networks and so on, means that you can relocate services locally, nationally or globally through data and configuration migration rather than a traditional lift & shift of servers, switches, SANs and racks, with all the associated risk and downtime.

With standardisation, or at least adoption of a common infrastructure architecture, a real marketplace could develop, where the customer chooses the most appropriate or cost-effective region or service provider to run their application(s) – based on the cost of local power and comms, response time, legal jurisdiction or SLA – without being tied to a specific service provider or physical location.
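A hypothetical broker along those lines might do no more than this – the providers, prices and constraint fields below are all made up, purely to show the shape of the decision:

providers = [
    {"name": "CloudA-EU", "jurisdiction": "EU", "latency_ms": 40, "price_per_hr": 0.12},
    {"name": "CloudB-US", "jurisdiction": "US", "latency_ms": 110, "price_per_hr": 0.08},
    {"name": "CloudC-EU", "jurisdiction": "EU", "latency_ms": 25, "price_per_hr": 0.15},
]


def choose_provider(offers, required_jurisdiction, max_latency_ms):
    """Cheapest offer that still satisfies the jurisdiction/SLA constraints."""
    eligible = [o for o in offers
                if o["jurisdiction"] == required_jurisdiction
                and o["latency_ms"] <= max_latency_ms]
    return min(eligible, key=lambda o: o["price_per_hr"], default=None)


print(choose_provider(providers, "EU", 60))  # picks CloudA-EU on price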

For example, some of my previous posts cover this sort of physical DC portability; combine that with a high level of virtualization and the cloud reference architecture and you have a compelling solution for large infrastructures. Zimory also have an interesting proposition for brokering resources between multiple cloud providers.

There are two fundamental things required to deliver this nirvana…

1) Flexible Infrastructure Architecture (do-able now with current tech)

This is where I see my cloud reference architecture sitting. You could have multiple instances of this architecture split between on-premise, off-premise and 3rd-party providers – it provides a layer of abstraction between the physical hardware/networking/site world and an “application” or server instance (as it’s encapsulated in a VM, which is just persisted as a file on some storage).
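Because the VM is persisted as files, “moving a server” collapses into copying data and re-registering config. A rough sketch of that idea – the file names and fields are illustrative, not any particular hypervisor’s format:

import shutil
from dataclasses import dataclass
from pathlib import Path


@dataclass
class VirtualMachine:
    name: str
    disk_image: Path   # e.g. a .vmdk/.vhd disk file
    config: Path       # e.g. a .vmx/.xml descriptor

    def migrate(self, target_datastore):
        """Relocation as data migration: copy the files, re-register."""
        target_datastore.mkdir(parents=True, exist_ok=True)
        return VirtualMachine(
            self.name,
            Path(shutil.copy2(self.disk_image, target_datastore)),
            Path(shutil.copy2(self.config, target_datastore)),
        )


# Demo with dummy files standing in for a real disk image and config:
src = Path("datastore_a")
src.mkdir(exist_ok=True)
(src / "web01.vmdk").write_bytes(b"disk contents")
(src / "web01.vmx").write_text("memsize=2048")
vm = VirtualMachine("web01", src / "web01.vmdk", src / "web01.vmx")
print(vm.migrate(Path("datastore_b")))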

 

2) Distributed Runtime/Services Layer (work starting now, needs development and standardisation)

To enable the cloud to really distribute applications (and thus computing power) across the Internet, you have to build a distributed, or entirely democratic/autonomous, controller mechanism (almost an OS for the cloud) which acts as a compiler, API, interpreter, job controller etc. to execute and manage the applications (code/apps/scripts) that developers produce.
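To give a feel for what that controller buys the developer, here is a toy runtime where the “nodes” are just local threads – a real cloud runtime would serialise the job, pick a node or region, and manage state, storage and retries behind the same submit() call:

from concurrent.futures import ThreadPoolExecutor


class CloudRuntime:
    """Hypothetical 'OS for the cloud', shrunk to a thread pool for the demo."""

    def __init__(self, nodes=4):
        self._pool = ThreadPoolExecutor(max_workers=nodes)

    def submit(self, fn, *args):
        # The developer hands over work; where it executes is not their problem.
        return self._pool.submit(fn, *args)


runtime = CloudRuntime()
futures = [runtime.submit(pow, n, 2) for n in range(5)]
print([f.result() for f in futures])  # the code never named a server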

This distributed runtime/services layer runs on server instances hosted on a cloud infrastructure (see item 1) that are managed by a service provider. To my mind there is no other way to achieve this: you can’t easily write an app and just have it run across multiple locations without some kind of underlying abstraction layer taking care of the complexities of state, storage, parallelism and so on. This is where Microsoft’s Azure, Amazon AWS and Google have APIs for things like databases, payment gateways, messaging and storage across their distributed infrastructures.

However, all of them are doing it in a proprietary way – Azure and AWS each provide their own API rather than a standardised set of services that true cloud apps could be written to and then moved between cloud infrastructures or providers.
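What a standardised layer might look like, sketched as a provider-neutral interface with swappable adapters – the in-memory adapter below is a stand-in; a real one would wrap a provider’s own proprietary API, which is exactly the lock-in problem described above:

from abc import ABC, abstractmethod


class BlobStore(ABC):
    """The hypothetical standard API that true cloud apps would target."""

    @abstractmethod
    def put(self, key, data): ...

    @abstractmethod
    def get(self, key): ...


class InMemoryStore(BlobStore):
    """Stand-in adapter; an Azure or AWS adapter would slot in here."""

    def __init__(self):
        self._blobs = {}

    def put(self, key, data):
        self._blobs[key] = data

    def get(self, key):
        return self._blobs[key]


def app(store):
    # The app codes to the standard interface, so changing provider
    # means changing the adapter, not rewriting the application.
    store.put("order/123", b"{...}")
    return store.get("order/123")


print(app(InMemoryStore()))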


It’s important to note that the two are not mutually exclusive. Clouds based on this reference architecture can still run traditional static one-server/one-app services, even desktop/VDI, while also maintaining the server instances that run the actual distributed runtime/services layer for applications written to take advantage of it – there is a common underlying platform.

This model, and the standardisation behind it, helps address the concerns most businesses raise when they start to talk about clouds – data ownership and security – by giving them the opportunity to adopt cloud technologies selectively and transition workloads between on- and off-premise clouds as needs dictate.

10 responses to “What is the Cloud..?”

  1. Sam Johnston January 9, 2009 at 1:37 pm

    Hi there,

    Cloud standards will happen, but it won’t be via processes we’re used to. Amazon’s APIs, for example, are relatively clean and a good example to follow (as Eucalyptus have done), and others have (relatively) independently come up with similar results. Once we have a large enough pool of users, standards should naturally evolve, and there is little point in forcing the issue. There is, after all, good reason for saying “the devil’s in the details” – just take a look at Grid and WS-* – premature standards at this early stage could well throw a wet blanket over the rampant innovation taking place.

    Your reference architecture looks good at a glance, but again it’s too much detail for users (even if relevant for providers and enablers), and it lacks cloudy concepts like local node storage and fast, generic interconnects (e.g. [10]GigE). The blade form factor may well see a resurgence now we’ve found a good application for them, but that’s somewhere some more standardisation might be useful (e.g. an enclosure which turns a handful of horizontal rack units into a bunch of smaller vertical ones, such that I can install generic blades from various vendors). Google appear to be leading the way on next-generation hardware anyway.

    What is sure is that there are interesting times ahead!

    Sam

  2. Rodos January 9, 2009 at 2:12 pm

    Simon, I have continued the conversation in my own answer to what is the cloud?

    http://rodos.haywood.org/2009/01/what-is-cloud-conversation.html

    I think much (not all) of what you are describing in the text is utility computing, which is the basis for delivery of most cloud infrastructure, certainly the IaaS part.

    Great conversation you have kept rolling here; I look forward to some good interaction and keeping it going.

  3. Pingback: A Quick Thought Regarding Cloud Computing - blog.scottlowe.org - The weblog of an IT pro specializing in virtualization, storage, and servers

  4. Pingback: Cloud Computing - A Fluffy Lining? | TechHead.co.uk

  5. vinf January 15, 2009 at 11:09 am

    Sam/Scott

    Some good points you made there on the lack of standards. I made the point in the post that they are all being done in a proprietary way now; for the distributed runtime/services layer idea to work it needs that standardisation, otherwise we just keep building “pots” of clouds that can’t talk to each other easily.

    Blades/rack mounts/10Gb/1Gb hardware: it’s all abstracted in my reference architecture through the heavy use of a hypervisor – the underlying hardware and interconnects can be designed and upgraded/scaled back based on load without affecting service (using tech like HA and DRS).

    To me what would be ideal is if Amazon, MS-Azure and Google let you download the VMs they use to run bits of their clouds as “appliances” and run them on-premise (we know for a fact that Amazon and MS use hypervisors to run the underlying services). The fact that you can run them on a standard/reference architecture stops it becoming pots of implementation/service – you can still provide them on one infrastructure if you need to.

    This would instantly get round the whole standards bit for me (no slow and painful design by committee; the best or most commercially astute tech wins) – even if you only rent/licence the instances from Amazon/MS/Google to cover their IPR/dev cost, you don’t use *their* compute power/infrastructure to execute/store, you use your own.

    This would be a good way for them to increase adoption and break down those data ownership/risk barriers for the corporate market – let’s face it, are the banks going to put their crown jewels into Amazon’s hands? But would they leverage Amazon/MS/Google cloud API tech to build big-scale distributed applications if they could run it in their own walled garden? I would say a resounding yes!

    This is why I particularly like VMware’s vCloud concept – all you really need to standardise is the hypervisor layer (or use products like PlateSpin PowerConvert to convert VMs from one format to another); the rest is config. The “clever” bit is the tech that runs inside the runtime/services layer code {VM appliance} that you lease/buy from Amazon etc. – look how successful Google have been with their very expensive (re-badged Dell :)) search appliances that they sell for “on-premise” use – it’s the same principle for me.

    Microsoft have said they are “looking” at this type of on-premise running for exactly this reason, although they have told me that Azure would be very complicated for a customer to run on-premise – I would think this is more due to it being so new, and needing a lot more work to make it run this way – essentially packaging it and making it simple for the end IT user/implementer; this is MS’s bread & butter business (Windows Server, Live Services etc.).

    In reality this is no different from a VMTN virtual appliance that you can deploy internally and build apps to talk to – like a LAMP VM with SugarCRM, for example – but with some back-end cleverness to deal with distributing load/execution etc.

    Anyway, just my 2p 🙂

  6. Pingback: vinf.net at VMWorld Europe 2009 « Virtualization, Windows, Infrastructure and all that “stuff” in-between

  7. Pingback: Long Distance vMotion… heading to the vCloud « Virtualization, Windows, Infrastructure and all that “stuff” in-between

  8. Dave May 18, 2010 at 4:31 am

    I like how Surge defines the cloud…

    In the software industry, “The Cloud” is used as a metaphor to represent the Internet. Any computing resource, software, or service that can be shared over the Internet is considered to be in the Cloud. This sharing of resources and anytime/anywhere accessibility makes Internet-based software more efficient and cost effective than traditional on-premise software.

    http://www.surgeforward.com/InternetCloud.aspx

  9. Pingback: Park your own Azure Cloud in your Carpark with Microsoft « Virtualization, Windows, Infrastructure and all that stuff in-between

  10. Pingback: Silent Data Corruption in the Cloud and building in Data Integrity « Virtualization, Cloud, Infrastructure and all that stuff in-between
