Virtualization, Cloud, Infrastructure and all that stuff in-between
My ramblings on the stuff that holds it all together
HP ML115 G5 Cheap Lab Servers going, going, going…
Just got this from ServersPlus – if you are thinking of building your own vTARDIS I would suggest you get in quick. I assume the newer G6s are coming, but they seem to be much more expensive and less ESX-compatible, based on Techhead’s view of the ML110 G6.
Good morning,
I just wanted to let you know the latest info on the ever popular HP ML115 tower server which has landed back in stock with us today. The product has now been announced as end of life and will no longer be available after this last batch which HP have produced.
As a result, HP have also increased the cost price on the final shipments meaning that the lowest price we can offer them at is £249.95. Whilst I appreciate that this is quite a jump from the previous price, it is still by far the cheapest tower server available for its specification, and we still expect the final units to sell through quickly.
HP have announced that the ML115 will be replaced but cannot confirm any specs, prices or ETAs at this stage. Therefore, if you are interested in securing any of the last of the current model, I’d ask you to get in touch so that I can put them aside for you.

Regards
Ben Voce
Senior Account Manager
Direct: 01977 739 014
Email: ben.voce@serversplus.com
Web: www.serversplus.com
Get them whilst they are hot!
ACADIA…thoughts
Now that the VCE coalition has announced its new CEO, it has launched the acadia.com website with a blog.
The Acadia proposition is interesting as a joint approach to delivering private cloud infrastructure – this sort of pre-packaged solution offering with good vendor support is a welcome addition to the industry, but other than tighter links to the product vendors I’m not sure what more they bring to the table over a traditional VAR.
As an aside, I worked on a very similar concept in 2008 for my current employer, although on a much smaller scale – we built a repeatable private cloud stack around a set of well-understood technologies.
I have deployed it a number of times and, working in a professional services organisation, I have seen first-hand how this base-template approach has helped accelerate not only the pre-sales and design process but also the delivery of actual infrastructure to the end-customer – particularly when building infrastructure for a new solution where current metrics and sizing information just isn’t available.
You can read my original thoughts about my work on a private cloud platform here – I do however think the VCE coalition has some way to go yet around its software licensing before it’s really workable on a true ‘pay as you go’ basis, rather than bundling everything up into a traditional commercial lease-purchase type agreement for hardware and software.
I have also yet to see more innovative commercial models for the procurement of the infrastructure itself – the vBlock is designed to scale in a horizontal, modular fashion, but if you need to scale down, how do you do that? The cost is “sunk” with the vendor/reseller and I can’t see them wanting to undo that traditional commercial model.
I’ve seen IBM start to pioneer some mainframe-style pay-as-you-go commercial models down into the x86 space, where they ship you a fully loaded system and you pay for the capacity you use. This works for vendors because they don’t have to pay margin to resellers and distribution if they sell direct, and the kit comes from their own factories at “cost” prices; a traditional VAR would find that this carries a significant financial risk, so would usually seek to offset it via contracted capacity and guaranteed capacity expansion.
I wonder if this could be a key selling tool of the ACADIA proposition – at a guess I’d say EMC/VMware/Cisco still want to sell tin/software as a capital item, get it out of their warehouses and bank the outright sale, but they have a stake in the ACADIA business: they are the shareholders.
What if the ACADIA business were able to act as a financial intermediary – buying kit (hardware or software) from the VCE partners, leveraging volume and special pricing via its owners, handling logistics and leasing infrastructure out to the end-customer with professional services, rather than relying 100% on sales margin and professional/managed services revenue?
In theory ACADIA could build a diverse enough pool of customers that it could weather storms in any specific market sector (financials, telco, media etc.) and keep overall profit and market performance positive. Because the “product” is built around a standard set of components (the vBlock), managing and re-distributing inventory items between customers is more feasible, as it’s easier to keep “stock” of components or entire vBlocks – in this mode ACADIA could act almost as a super-VAR in traditional terms, but with some more creative financial models enabled by access to better “raw” pricing (raw in the sense that there are fewer middlemen and commissions to pay).
If they were able to pull this off then I can see a significant advantage over the more traditional VARs – but do VCE risk treading on the toes of their traditional partners, distribution and resellers?
Augmented Reality TFTLondon
I attended another session of The Fantastic Tavern London (#TFTLondon) this evening hosted by Matt Bagwell and Michelle Flynn – this evening’s event was centred around ‘realities’ and specifically augmented reality.
My post on the previous TFT London is available here; as you’ll see from that post, Augmented Reality was voted a hot topic, so it warranted further exploration.
If you’re not familiar with the concept of Augmented Reality (AR), watch this video – it’s essentially adding useful information to what you see and sense in the real world; to date, most implementations are geared around providing spatial information on maps or scenes, such as finding the nearest tube station or restaurant.
The evening opened with some discussion of how the current ‘thirtysomething’ generation grew up with immersion through video games; Doom was one of my favourites, and whilst I’m no longer a gamer I can see how the experience remains immersive for a lot of people to this day.
The reality of realities is that it’s not quite there yet to enrich our daily lives; the iPhone example is cool, but it still requires a device which isn’t ‘natural’ to operate – we don’t all walk around with our iPhones outstretched in front of us…
Well, maybe some fellow London commuters do – they should really watch where they are going, otherwise they could experience the reality of a totally different kind of AR (Accident and emeRgency – sorry! :))
A lot of things that were considered futuristic in the 1980s/90s still aren’t mainstream technology today – for example the Terminator-style heads-up display – but they are in some places…
Several car manufacturers offer this sort of option today (and some had it in the late 80s), and it has had military applications for a long time; these technologies will eventually become commoditized and thus cheap and accessible to all, much as the mobile phone has become almost ubiquitous.


Paul Dawson of EMC Consulting put forward the view that there is also something missing: most current AR implementations only operate in four dimensions (the three spatial dimensions plus time) but don’t really address the five senses, and considering this is how we, as humans, really experience our environments, AR isn’t yet contributing much in the way of real augmentation.
AR can point out simple, literal things like a tube station entrance or a dog wearing a wig, but it can’t contextually give you information that is relevant to you; for example – there is a Marks & Spencer branch, it’s lunchtime, you’re hungry and its queue is only 30 seconds, compared to your usual sandwich shop which has a queue of 5 minutes.
Additionally, current AR is very device-bound; it’s not really a natural way of giving you information.
Imagine an implementation in the built environment around you that listens to actual conversations and displays them as a kind of tag cloud, or embedded displays that recognise your face and some attributes about you and where you are going, offering advice on the quickest route – or maybe even the closest gym to lose some of that weight? 🙂

That’s quite an interesting proposition to me. My personal favourite example of an AR implementation is the Lego Augmented Reality Kiosk, which is available in all good Lego shops (it is said that I have an unhealthy, mildly OCD-type interest in Lego).
There is also an application from Tesco that will allow you to take a photo of a bottle of wine and have it provide further information (more info here). This sort of application has been around for a while, but this one uses visual recognition rather than traditional bar-code scanning – so imagine the wider application of the concept to the environment around you: rather than relying on traditional GPS, barcode or tagging-type technology, image recognition is used, which is potentially far more accurate as it’s based on what you actually see from your viewpoint and position.
As technology develops and is miniaturised, the hope is that AR displays will achieve an almost embedded form-factor, such as being built into a normal pair of glasses or even a contact lens!
Johannes Kebeck from the Microsoft Bing Maps team talked about how geo-spatial information and public mapping are being merged with crowd-sourced information, tagging and imagery to produce rich sources for augmented reality solutions.
He also talked about how Microsoft have a preview of a commercial data catalogue for the Windows Azure Platform codenamed Dallas – where people can find, buy and sell data sources for these sort of applications, leveraging the scale of Azure for the analysis and processing of large data sets.
Several interesting use-cases were demonstrated:
- Using Flickr integration with Bing maps to provide historical photographs of buildings, allowing a time-slider control to see what something looked like 50 years ago or at night-time.
- The large number of free online data sources means a lot of information is available; most of it is historical or static, but with crowd-sourcing, microblogging and sensor-type networks it is increasingly augmented with real-time information, for example fuel prices.
- There was an example of a crowd-sourced maps mashup built for the Haiti disaster (my thoughts on some kind of emergency infrastructure for these situations here); in a matter of days people contributed significant real-time information on local conditions, aid levels and casualties to allow better targeting of relief.
- Or on a more local area, feeding real-time crime statistics into a map to show crime hot-spots.
A lot of this seems to be discussed in this TED session; I haven’t watched it yet, but it looks very interesting.
Up until now most of these tools and technologies haven’t been easily accessible to the typical consumer and end-user. I’ve written about Microsoft Photosynth before; it’s an example of an easy-to-use end-user interface into this sort of AR technology, and a lot of work is going into this area.
Neogeography is a new word to me, but you can read about it on Wikipedia – I like the idea.
Lastly, some of the UX team from EMC Consulting’s Studio had to tiptoe around an NDA to talk at a high level about work they are doing at the moment in the Augmented Reality space for industrial customers: augmenting engineers’ view of plant with relevant information, as well as real-time two-way feeds of critical safety information.
Good-quality immersion is considered key to making end-users feel empowered by AR tools rather than merely using them as a tool; to achieve this you need a good-quality experience, and they have coined the phrase “High-Fidelity 3D” – particularly for applications like military or surgical practice, where the end-user gets the most benefit when it seems real. There is obviously a lot that has been pioneered in the video game industry in this space.
They have built some interesting PoC systems, notably around smart-metering for the home, with a flexible UI that allows the end-user to dive in and out of the represented house and appliances and customise it to represent their house.
For me, one of the most interesting parts of this session was that with immersive/AR-type applications there are a lot more end-user factors to consider, like ergonomics, RSI and complementing learnt muscle-memory-type skills.
Most AR applications will require one or more multi-touch-type input devices; the traditional keyboard and mouse are well-known quantities, but new devices are less field-proven. There are also health & safety implications – you need to ensure the solution doesn’t encourage people to take risks or put them in danger.
For many people, vehicle control skills (like a car steering wheel and pedals) are well-learnt (muscle-memory) skills, so adopting something radically different makes it hard to switch between the two and slows adoption.
Almost playing back to the behavioural architecture concepts of the previous TFT evening, the team discussed the concept of built-in rewards or status levels within industrial-type AR applications to make the experience more engaging for the end-user and encourage adoption – I can imagine how displaying a user’s skill level operating equipment (n00b, speed-demon, Fork-lift Ninja etc.) would help encourage development and keep people operating within allowable parameters (speed limits, for example).
This may sound a bit airy-fairy (for want of a better term), but consider this: 1993’s 15-year-old playing Doom is now 32, and as time passes the percentage of people brought up with this sort of gaming experience and expectation grows. Today’s upcoming generation is already fully immersed in social media – maybe it’s not such an alien concept for the professional world of the near future after all.
I’d like to thank Matt and Michelle for an interesting evening, and a bit of a break from the norm of my day-job and this blog – if you’re interested in this sort of thing – the next event is on 19th August (location TBC) and is likely to be a full day called “The Lock Inn” – keep an eye on Matt and Michelle’s blogs for more details, I also appear to have been volunteered by a colleague to speak about clouds or something at the event, so look out for that 🙂
I also had a go on an iPad (not yet released here in the UK). Thoughts: very nice screen and UI (as you’d expect), but it was a lot heavier than I was expecting – I only had it for a couple of minutes and it wasn’t very comfortable to hold.
Best quote of the evening: “Let the sausages flow…” Matt Bagwell, Creative Director, EMC Consulting 🙂
Where Next for VMware Workstation?
I love VMware Workstation; I have used it since about 1999, when I was first introduced to virtualization, and it totally revolutionised the way I did my home and work lab study and, later, production systems.
Since then it has introduced new features with every version that seem to be back-ported into the server products – as I understand it, the record and replay features underpinned the code that became VMware Fault Tolerance, and the same goes for linked clones and thin provisioning in vSphere.
There has been integration with developer environments for better debugging, and I guess a lot of Workstation has gone into the Fusion product for the Mac – but it did get me thinking: where next for Workstation, beyond the usual performance tweaks that seem to get made in every version?
What I think would be great – and it ties into my previous post on vendor hardware emulators – is a pluggable hardware abstraction layer (HAL)/driver architecture for VMware Workstation.
Workstation does a brilliant job of virtualizing x86/64 hardware, and to date that has been its primary task, but I wonder if it could be expanded into a more modular product to support wider development and use of other hardware platforms on x86/64.
There are many emulators available out there for developers – for mobile phone chipsets, custom ASICs etc. – but these are often hard for the end-user to configure and are very bespoke to the devices they were developed for.
With the amount of spare horsepower available at the low price-point of commodity x86/64 hardware, all it needs is a common virtualization/emulation product to unify it all, and you have a very powerful product with a huge market – not only for developers, but for operations people who would no longer need a huge lab of bespoke hardware, mobile phones and devices to support end-users: it’s all available in a virtual machine.
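To make the idea a little more concrete, here is a purely hypothetical sketch – VMware publish no such plugin API, and every name below is invented for illustration – of what a pluggable device/CPU emulation interface for Workstation might look like:

```c
/* Hypothetical sketch only - VMware publish no such plugin API.
 * All names are invented to illustrate a pluggable HAL/driver
 * architecture for third-party device and CPU emulation modules. */
#include <stddef.h>
#include <stdint.h>

typedef struct EmulatedDevice EmulatedDevice;  /* opaque per-instance state */

typedef struct {
    const char *name;        /* e.g. "sparc-sun4v" or "arm-cortex-a8" */
    uint32_t    abi_version; /* plugin ABI this module was built against */

    /* Lifecycle hooks called by the Workstation core */
    EmulatedDevice *(*create)(const char *config);
    void            (*destroy)(EmulatedDevice *dev);

    /* Guest-visible register/memory accesses routed to the plugin */
    uint64_t (*mmio_read)(EmulatedDevice *dev, uint64_t addr, size_t len);
    void     (*mmio_write)(EmulatedDevice *dev, uint64_t addr,
                           uint64_t value, size_t len);
} HalPluginOps;

/* Each vendor module would export one well-known entry point. */
const HalPluginOps *hal_plugin_register(void);
```

A vendor could then ship their ASIC or chipset emulation as a module that Workstation loads alongside its standard x86/64 virtual hardware.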
Thinking slightly wider in scope, if it were also back-ported and integrated into future versions of the vSphere product line you would have a very powerful back-end server product – VMware talk of the ‘software mainframe’, and this brings what some mainframes currently do for virtualizing x86 servers, but to a MUCH wider set of applications.
Whilst the initial pay-off would come from developer licenses rather than enterprise/large-scale licensing agreements, with Hyper-V and Xen rapidly catching up on the hypervisor front VMware need something cutting-edge to keep them ahead of the game – and consider the enterprise implications of this:
Lots of customers run workloads on SPARC hardware and operating systems, and porting to x86 Solaris isn’t simple – is the cost/performance benefit still there for SPARC customers in a world of cheap and fast x64 hardware? Emulating/virtualizing SPARC CPU workloads on x64 could be a big draw for Sun customers; particularly with the Oracle acquisition and VMware targeting vSphere at large-scale Oracle customers, this could prove easier than porting legacy apps from SPARC, in the same way virtualization has revolutionised the x86 server space.
Or an ESX cluster running a mix of x86/64, SPARC, ARM, iPhone, set-top box and AS/400 virtual machine workloads – either as a test and dev, support, or even production solution.
Sure, emulation has an overhead – so does virtualization – but x86/64 hardware is cheap and off-the-shelf; add in a distributed ESX processing cluster (my thoughts on that here) and you could probably build something with equivalent or even better performance for less.
Interesting concept (to me anyway)… thoughts?
Hardware Vendors… release the emulators to the masses PLEASE!!
If you are a consultant, or an end-user trying to teach or learn new technology, it’s not easy to have constant access to lab and demo kit. Most vendors can lend you evaluation hardware for your lab, but there is generally a finite supply and you don’t have it for long – if you are busy it’s hard to dedicate time to it, most people are only able to dip in and out every now and then, and physical kit has to be hosted somewhere so it requires remote access.
I’m a hands-on person and I find I learn and understand things much better if I can cement my reading by “fiddling” with the UI or mocking up configurations, rather than just reading the whitepapers.
If you are primarily a hardware vendor like EMC, HP or Cisco you should want people to play with your kit in their own time. Whilst this doesn’t always mesh with a traditional sales-force-driven model, with its structured qualification, sales and follow-up process, let’s be honest: this is 2010, I’m clever 🙂 – I don’t really need a sales drone to hold my hand in spending my own {employer’s} money or take me golfing. If your product is good and I need it, I will buy/spec it once I’ve seen what it can do in MY environment on MY terms.
This definitely isn’t to say there is no place for sales or technical pre-sales in the modern world – far from it: I will probably need someone with product-specific knowledge for technical help, or someone to help me price out a solution and options – but I don’t need a salesperson’s commission or end-of-quarter figures driving MY evaluation or purchase process. Incentives to buy in a certain timeframe are perfectly acceptable, as they help the end-user evaluator focus their priorities, but they’re not the start of the process.
As I’ve written before, simulators/emulators/virtual machines are ideal as a pre-sales tool: no complicated pre-sales process, just get your tech into the hands of people who can then internally demonstrate its capability to those that sign the cheques.
It just means a bit of registration for a download, which gets passed to your pre-sales people so they can follow up with the potential customer – but you’ve empowered the end-user to do your marketing “foot in the door” with the people that matter to you, and it’s essentially “free”.
Additionally, one of the primary concerns with infrastructure technologies is management; you often don’t really know how well you can manage or integrate with your existing management toolsets without actually trying it – VMs/emulators are an almost (financially) risk-free way to try this out.
Most enterprise hardware (blade chassis, SANs, switches) is moving to virtualization and commodity hardware under the hood; this opens up some interesting possibilities to distribute packaged-up virtual machine (.OVF) versions of your “hardware”/firmware product that people can download and run on VMware Workstation/Player etc.
Most vendors already have these internally for development teams; after all, this is cheaper and more practical than giving each developer a bit of hardware to develop against.
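As a rough sketch of how little packaging effort this needs on the vendor side, VMware’s ovftool can turn an existing VM into a distributable OVF package in one step (the file names here are illustrative):

```
# Package an existing development VM as a distributable OVF appliance
# (vsa-dev.vmx and vendor-vsa.ovf are illustrative names)
ovftool vsa-dev.vmx vendor-vsa.ovf
```

The resulting .ovf/.vmdk bundle is exactly the sort of thing that could sit behind a registration-gated download for Workstation/Player users.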
This is the list of people that “get it” IMHO …
EMC are streets ahead of the game with this, and have had the Celerra VSA available for some time, with a CLARiiON version coming soon – and with the V-MAX being based on commodity Xeon CPU hardware, it has to be on the cards too.
Zeus produce an amazing traffic manager product which has been available as a virtual machine or hardware appliance for ages – with a free/quick download VM trial.
HP have a time-limited version of the LeftHand VSA and the EVA simulator (accredited partners only) but could go significantly further – why not a VM version of the EVA controllers, accessible over iSCSI or FCoE? They may be based on custom ASIC-type hardware, but I would assume some sort of emulation on x86 is possible (see my idea for VMware Workstation later on).
I have also been reliably informed that there is a version of the HP Virtual Connect “firmware” used internally at HP that runs inside VMware Workstation – making this available to the public (even with pre-sales registration) would be a great way for people to quickly understand how VC technology works and how it could integrate into their environments.
NetApp have the ONTAP simulator (partners only); I’m not familiar with the product, but it does appear to emulate real NetApp hardware and management interfaces.
Cisco have recently released a beta of the UCS platform emulator to developers – from the screenshot Steve Chambers posted, this looks like an excellent idea with significant scope for use as a pre-sales tool as well.

With the recent announcement of the death of Dynamips, maybe Cisco should better leverage their relationship with VMware and produce a hardware emulation layer to allow IOS or CatOS to run on x86 under VMware Workstation – it’s been reported that the Nexus range runs under a native hypervisor, and the Nexus 1000V is already out there.
The 6500 ACE modules would also be a welcome offering as a VM!
Now, there needs to be some expectation-setting: I’m not asking for production-ready/supported virtual machines of your IPR and hardware margin – that won’t always make sense – but availability for learning, pre-sales, training or testing/development is an excellent use-case.
I have previously heard people voice concerns that this sort of model allows your technology and IPR to walk easily into the hands of your competitors, letting them reverse-engineer it and steal your ideas. I don’t buy this – I’m pretty sure “the competition” are first on the list to buy your hardware when it becomes generally available, either directly or via an anonymous 3rd party – in fact, HP seemed to have a pretty good stock of UCS hardware :).
London VMware User Group – 6th May
The next VMware User Group (VMUG) meeting has been announced; I’ll be presenting on virtualizing terminal server workloads, a presentation based on some recent work I have undertaken with a customer.
It’s also great to see Alan back for another PowerShell session – this is definitely worth attending – register now if you haven’t already as it was a very full house last time round.
Agenda as follows –
The London VMUG Steering Committee are pleased to announce the next UK London VMware User Group meeting, kindly sponsored by RES Software, to be held on Thursday 6th May 2010. We hope to see you at the meeting, and afterwards for a drink or two, courtesy of VMware.
To register your interest in attending, please send an email to londonvmug@yahoo.com with up to two named attendees from your organisation. If you do not receive a confirmation mail, please don’t just turn up since we will not be able to admit you to the meeting. Please separately mention if you intend attending Alan’s PowerCLI workshop at 1100. Content from the meetings will continue to be uploaded to http://www.box.net/londonug, NDA permitting.
Our meeting will be held at the Thames Suite, London Chamber of Commerce and Industry, 33 Queen Street, London EC4R 1AP, +44 (0)20 7248 4444. The nearest tube station is Mansion House; location information is available here. Reception is from 1230 for a prompt 1pm start, to finish around 5pm.

Our agenda looks something like this:
1100 – 1200 (Optional) PowerCLI / Powershell workshop – Alan Renouf. Please bring your own curly brackets
1230 – 1300 Arrive & Refreshments
1300 – 1320 Welcome & News – Alaric Davies
1320 – 1400 Sponsor Presentation – RES Software
Migrating ESX v3.5 to ESX4i on HP Blades – Colin Style, Prudential
Brunel University virtualisation experiences – Peter Polkinghorne, Brunel University
1500 – 1520 Refreshment break
Virtualising Terminal Server workloads – Simon Gallagher
Interactive (and possibly contentious!) Panel Discussion – UG Committee with contributions from the floor
1645 – 1700 Close
1700 – Pub
Discount Code for BriForum
As you may have seen before I am giving a session at BriForum in Chicago around low-cost virtual lab environments using a variant of my vT.A.R.D.I.S demo system with as many live demos as possible.
I have a $200 discount code available for my readers; if you want to take advantage of this offer, please email or twitter me (details on the about page) and I will send you one.
BriForum is still the best technical conference I have ever attended for content (and I last went in 2007!) – get yourself there!
Streaming Install of Office 2010 Beta
This is clever: Microsoft are going to offer a version of Office 2010 that you can install on demand using the App-V technology they acquired from Softricity a few years ago.
Read the Office 2010 team blog post here, and try the beta for yourself here.
I like the idea of App-V technology (and VMware’s ThinApp equivalent) a lot for the desktop space – it’s a very clever way of delivering apps.
Latest HP Firmware Maintenance CD v9
I can never seem to find these via the HP.com page, and Google eventually manages to find the best link, so I have blogged this as much for my own reference as for your use.
HP, how about making it show up properly in the site search?
Here are the details, hot off the {.ISO} press… 🙂
Description: Firmware – CD-ROM (download here)
Current version: 9.00 (12 Apr 2010)
Size (MB): 805
Estimated download time: 56K: >8h / 512K: 3h
Previous version: 8.70 (22 Jan 2010)
Installing Windows 7 from a USB Flash Drive and Multi-Boot from VHD
This is a useful tool which I hadn’t come across before – the Windows 7 USB/DVD Download Tool – it creates a bootable USB flash drive which you can use to install Windows 7.
Combine this with a boot-from-.VHD setup and you have a very flexible multi-boot solution. It also seems to work with Windows 2008 R2 if you need to install Hyper-V on your laptop; combine that with virtualized ESXi in VMware Workstation (or boot ESXi from USB) and you have an excellent hypervisor demo machine and general Windows laptop.
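For reference, a minimal sketch of the native boot-from-VHD configuration using bcdedit, assuming you have already created and applied a Windows image to a VHD at C:\VHDs\win7.vhd (the path and entry name are illustrative – substitute the GUID that the /copy step prints):

```
rem Clone the current boot entry to a new one for the VHD install
bcdedit /copy {current} /d "Windows 7 (VHD)"

rem Point the new entry (use the GUID printed by the copy above) at the VHD
bcdedit /set {guid} device vhd=[C:]\VHDs\win7.vhd
bcdedit /set {guid} osdevice vhd=[C:]\VHDs\win7.vhd

rem Let the VHD-booted OS redetect the HAL
bcdedit /set {guid} detecthal on
```

On the next reboot the boot menu offers the new entry alongside the existing install, so one laptop can carry several OS builds without repartitioning.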

