Virtualization, Cloud, Infrastructure and all that stuff in-between

My ramblings on the stuff that holds it all together

Windows 7 Remote Desktop Client – Nice Touch

 

You can download and install the new, updated RDC 7.0 client for free on any OS from Windows XP SP3 onwards (it ships with Windows 7). Known issues are listed here, a detailed per-OS feature comparison is here, and downloads for the various client OSes are below:

Update for Windows Vista, x86-based versions

Download the Update for Windows Vista for x86-based systems package now.

Update for Windows Vista, x64-based versions

Download the Update for Windows Vista for x64-based systems package now.

Update for Windows XP, x86-based versions

Download the Update for Windows XP for x86-based systems package now.

A nice touch that I discovered by accident is that you can move the title bar along the top of a full-screen RDP session by clicking and dragging it. This is really handy if you work on multiple full-screen RDP sessions inside another RDP session – for example, a jump-off box to a protected subnet – or when you have a full-screen application.


a.k.a. Terminal Services client, Remote Desktop Protocol client, RDS client, Remote Desktop client, RDC, RDP, RDS 🙂

Enjoy

The End of Spinvox

 

I have been using SpinVox’s voicemail-to-email/text service for nearly two years, and it has been really useful in my daily workflow as I can tag/flag emailed voicemails to ensure they are dealt with. Then I abruptly got a text saying the service will be finishing in 7 days:

Dear customer, we regret to inform you that SpinVox is no longer supporting free accounts and your service will expire in the next several days. For other options, please visit www.spinvox.com or call your mobile network provider should you wish to re-enable standard voicemail service. We thank you for your support and apologize for any inconvenience

The quality of translations has definitely declined over the last year; this seems to have been an unfortunate consequence of their aggressive off-shoring of translation work. Talk of “the brain”, the much-feted automated translation engine, seems to have been mostly wishful thinking.

Shame – I will have to try and find something else now… but I’m pretty sure they used to reverse-SMS bill me £5/month!

It seems Spinvox’s technology has been merged into a telco-only service from Nuance, so you can’t directly subscribe to it.

More details for existing subscribers are here – when the service finishes you will need to reset your voicemail settings yourself, as it won’t happen automatically:

When the SpinVox voicemail-to-text service expires does my phone automatically return to voicemail mode?

No. Users will need to remove their mobile phone’s divert settings to the SpinVox voicemail-to-text service or update the divert number to their new voicemail, as appropriate for the user’s voicemail plan.

In the UK, you can reactivate standard voicemail by entering the following code and pressing call:

Vodafone: 1211

O2: 1750

All others: ##004#

How to Reset ILOM password on a Sun x4200 Server

Power down and disconnect the server, open it up, and place a jumper over jumper P4 (different models, such as the M2, use different ports; on the x4200, P4 is on a block of three 2-pin jumpers on the right-hand side, near the power supplies).

Power up the server and wait for the OS to boot (important), then log in via the serial console connection; the username/password will now be root/changeme.

Reset the password via the web GUI, then power down the server.

Remove the jumper and the ILOM will use the password you set; if you forget to remove the jumper, the password will always default back to changeme.

As a bonus, you can change the IP address via the serial console whilst the password is defaulted:

login using root/changeme

cd /SYS/network

show (displays the current config)

set pendingipaddress=1.1.1.1 (or whatever you need)

set pendingipgateway=1.1.1.1 (or whatever you need)

set commitpending=true

After a minute or so the new IP address will be set and the ILOM should be accessible over the network.
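If you find yourself defaulting a few of these, the console steps above can be scripted. This is a minimal sketch, not from the original post: it builds the ILOM command sequence and pushes it over a serial connection using the third-party pyserial library. The device path, baud rate and crude sleep-based pacing are assumptions – adjust them for your setup, and log in as root/changeme first, as the sketch doesn’t handle authentication.

```python
import time


def ilom_ip_commands(ip, gateway):
    """Build the ILOM CLI command sequence described above."""
    return [
        "cd /SYS/network",
        "show",  # display the current config first
        "set pendingipaddress=%s" % ip,
        "set pendingipgateway=%s" % gateway,
        "set commitpending=true",  # apply the pending settings
    ]


def send_over_serial(port, ip, gateway):
    """Send each command over the serial console, one per second."""
    import serial  # third-party: pip install pyserial

    with serial.Serial(port, 9600, timeout=2) as console:
        for cmd in ilom_ip_commands(ip, gateway):
            console.write((cmd + "\r").encode())
            time.sleep(1)  # crude pacing; a robust script would wait for the prompt
```

Something like send_over_serial("/dev/ttyS0", "192.168.1.10", "192.168.1.1") would then stage and commit the new address, assuming a serial cable on the first port.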

The full service manual is online here

Don’t forget to remove the P4 jumper! If you can’t find a jumper – they are quite rare these days – look for an old HP or Compaq server; that’s what I did 🙂

HP ML115 G5 Cheap Lab Servers going, going, going…

 

Just got this from ServersPlus – if you are thinking of building your own vTARDIS I would suggest you get in quick. I assume the newer G6s are coming, but they seem to be much more expensive and less ESX-compatible, based on Techhead’s view of the ML110 G6.

Good morning,
I just wanted to let you know the latest info on the ever popular HP ML115 tower server which has landed back in stock with us today. The product has now been announced as end of life and will no longer be available after this last batch which HP have produced.
As a result, HP have also increased the cost price on the final shipments meaning that the lowest price we can offer them at is £249.95. Whilst I appreciate that this is quite a jump from the previous price, it is still by far the cheapest tower server available for its specification, and we still expect the final units to sell through quickly.
HP have announced that the ML115 will be replaced but cannot confirm any specs, prices or ETAs at this stage. Therefore, if you are interested in securing any of the last of the current model, I’d ask you to get in touch so that I can put them aside for you.

Regards

Ben Voce

Senior Account Manager

Direct:
01977 739 014

Email:
ben.voce@serversplus.com

Web:
www.serversplus.com

Get them whilst they are hot!

ACADIA…thoughts

 

Now that the VCE joint coalition has announced their new CEO, they have launched the acadia.com website with a blog.

The Acadia proposition is interesting as a joint approach to delivering private cloud infrastructure. This sort of pre-packaged solution offering with good vendor support is a welcome addition to the industry, but other than tighter links to the product vendors I’m not sure what more they bring to the table over a traditional VAR.

As an aside, I worked on a very similar concept in 2008 for my current employer, although on a much smaller scale – we built a repeatable private cloud stack around a set of well-understood technologies.

I have deployed it a number of times, and as part of a professional services organisation I have seen first-hand how this base-template approach accelerates not only the pre-sales and design process but also the delivery of the actual infrastructure to the end-customer – particularly when building infrastructure for a new solution where current metrics and sizing information just isn’t available.

You can read my original thoughts about my work on a private cloud platform here. I do, however, think the VCE coalition has some way to go yet around its software licensing before it’s really workable on a true ‘pay as you go’ basis, rather than bundling everything up into a traditional commercial lease-purchase type agreement for hardware and software.

I have also yet to see more innovative commercial models for the procurement of the infrastructure itself. Although the vBlock is designed to scale in a horizontal, modular fashion, if you need to scale down how do you do that? The cost is “sunk” with the vendor/reseller, and I can’t see them wanting to undo that traditional commercial model.

I’ve seen IBM start to pioneer some mainframe-style pay-as-you-go commercial models down into the x86 space, where they ship you a fully loaded system and you pay for the capacity you use. This works for vendors because they don’t have to pay margin to resellers and distribution if they sell direct, and the kit comes from their own factories at “cost” prices. A traditional VAR would find that this carries significant financial risk, so they would usually seek to offset it via contracted capacity and guaranteed capacity expansion.

I wonder if this could be a key selling tool of the ACADIA proposition. At a guess I’d say EMC/VMware/Cisco still want to sell tin/software as a capital item, get it out of their warehouses and bank the outright sale – but they have a stake in the ACADIA business; they are the shareholders.

What if the ACADIA business were able to act as a financial intermediary – buying kit (hardware or software) from the VCE partners, leveraging volume and special pricing via its owners, handling logistics, and leasing infrastructure out to the end-customer along with professional services – rather than relying 100% on sales margin and professional/managed services revenue?

In theory, ACADIA could build a diverse enough pool of customers to weather storms in any specific market sector (financials, telco, media etc.) and keep overall profit and market performance positive. Because the “product” is built around a standard set of components (the vBlock), managing and redistributing inventory between customers is more feasible, as it’s easier to keep “stock” of components or entire vBlocks. In this mode ACADIA could act almost as a super-VAR in traditional terms, but with more creative financial models enabled by access to better “raw” pricing (raw in the sense that there are fewer middle-men and commissions to pay).

If they were able to pull this off then I can see a significant advantage over the more traditional VARs – but do VCE risk treading on the toes of their traditional partners, distribution and resellers?

Augmented Reality TFTLondon

 

I attended another session of The Fantastic Tavern London (#TFTLondon) this evening hosted by Matt Bagwell and Michelle Flynn – this evening’s event was centred around ‘realities’ and specifically augmented reality.

My post on the previous TFT London is available here; as you’ll see from that post, Augmented Reality was voted a hot topic, so it warranted further exploration.

If you’re not familiar with the concept of Augmented Reality (AR), watch this video. It’s essentially about adding useful information to what you see and sense in the real world; to date, most implementations are geared around providing spatial information on maps or scenes, such as finding the nearest tube station or restaurant.

The evening opened with some discussion of how the current ‘thirtysomething’ generation grew up with immersion through video games. Doom was one of my favourites, and whilst I’m no longer a gamer I can see how the experience remains immersive for a lot of people to this day.

The reality of realities is that the technology isn’t quite there yet to enrich our daily lives. The iPhone example is cool, but it still requires a device which isn’t ‘natural’ to operate – we don’t all walk around with our iPhones outstretched in front of us…

Well, maybe some fellow London commuters do; they should really watch where they are going, otherwise they could experience the reality of a totally different kind of AR (Accident and emeRgency – sorry! 🙂)

A lot of things that were considered futuristic in the 1980s/90s still aren’t mainstream technology today – for example the Terminator-style heads-up display – but they do exist in some places…

Several car manufacturers offer this sort of option today (and some had it in the late 80s), and it has had military applications for a long time. These technologies will eventually become commoditised, and thus cheap and accessible to all, much as the mobile phone has become almost ubiquitous.

 

Paul Dawson of EMC Consulting put forward the view that there is also something missing: most current AR implementations only operate in four dimensions (the three spatial dimensions plus time) but don’t really address the five senses. Considering that this is how we, as humans, really experience our environments, AR isn’t yet contributing much in the way of real augmentation.

AR can point out simple things like a tube station entrance or a dog wearing a wig, but it can’t contextually give you information that is relevant to you – for example: there is a Marks & Spencer branch, it’s lunchtime, you’re hungry, and its queue is only 30 seconds long, compared to your usual sandwich shop which has a queue of 5 minutes.

Additionally, current AR is very device-bound; it’s not really a natural way of giving you information.

Imagine an implementation in the built environment around you that listens to actual conversations and displays them as a kind of tag cloud, or embedded displays that recognise your face, some attributes about you and where you are going, offering advice on the quickest route – or maybe even the closest gym to lose some of that weight? 🙂

That’s quite an interesting proposition to me. My personal favourite example of an AR implementation is the Lego Augmented Reality Kiosk, which is available in all good Lego shops (it is said that I have an unhealthy, mildly OCD-type interest in Lego).

 

There is also an application available at Tesco that lets you take a photo of a bottle of wine and have it provide further information (more info here). This sort of application has been around for a while, but this one uses visual recognition rather than traditional bar-code scanning. Imagine the wider application of this concept to the environment around you: rather than relying on traditional GPS, barcode or tagging technology, image recognition is used, which is potentially far more accurate as it’s based on what you actually see from your viewpoint and position.

As technology develops and is miniaturised, the hope is that AR displays will achieve an almost embedded form-factor, such as being built into a normal pair of glasses or even a contact lens!

 

Johannes Kebeck from the Microsoft Bing Maps team talked about how geo-spatial information and public mapping are being merged with crowd-sourced information, tagging and imagery to produce rich sources for augmented reality solutions.

He also talked about Microsoft’s preview of a commercial data catalogue for the Windows Azure platform, codenamed Dallas, where people can find, buy and sell data sources for these sorts of applications, leveraging the scale of Azure for the analysis and processing of large data sets.

Several interesting use-cases were demonstrated:

  • Using Flickr integration with Bing maps to provide historical photographs of buildings, allowing a time-slider control to see what something looked like 50 years ago or at night-time.
  • The large number of free online data sources means there is a large amount of information available, most of this is historical or static but increasingly with crowd-sourcing, microblogging and sensor type networks these are being augmented with real-time information, for example fuel prices.
  • There was an example of a crowd-sourced map mashup created for the Haiti disaster (my thoughts on some kind of emergency infrastructure for these situations are here); in a matter of days, people contributed significant real-time information on local conditions, aid levels and casualties, allowing better targeting of relief.
  • Or on a more local area, feeding real-time crime statistics into a map to show crime hot-spots.

A lot of this is discussed in this TED session; I haven’t watched it yet, but it looks very interesting.

Up until now, most of these tools and technologies haven’t been easily accessible to the typical consumer and end-user. I’ve written about Microsoft Photosynth before; it’s an example of an easy-to-use end-user interface onto this sort of AR technology, and a lot of work is going into this area.

Neogeography is a new word to me, but you can read about it on Wikipedia; I like the idea.

Lastly, some of the UX team from EMC Consulting’s Studio had to tiptoe around an NDA to talk at a high level about some of the work they are currently doing in the Augmented Reality space for industrial customers: augmenting engineers’ view of plant with relevant information, as well as real-time two-way feeds of critical safety information.

Good-quality immersion is considered key to making people feel empowered by AR tools rather than merely using them as a tool. To achieve this you need a good-quality experience, and they have coined the phrase ‘High-Fidelity 3D’, particularly for applications like military or surgical training where, for the end-user to get the most benefit, it has to seem real; a lot has obviously been pioneered in the video game industry in this space.

They have built some interesting PoC systems, notably around smart metering for the home, with a flexible UI that allows the end-user to dive in and out of a representation of their house and its appliances and customise it to match their own home.

 

For me, one of the most interesting parts of this session was that with immersive/AR-type applications there are many more end-user factors to consider, such as ergonomics, RSI and complementing learnt muscle-memory skills.

Most AR applications will require one or more multi-touch-type input devices. The traditional keyboard and mouse are well-known quantities, but new devices are less field-proven. There are also health & safety implications – you need to ensure the solution doesn’t encourage people to take risks or put them in danger.

For many people, vehicle control skills (like a car’s steering wheel and pedals) are well-learnt muscle-memory skills, so adopting something radically different makes it hard to switch between the two and slows adoption.

Almost playing back to the behavioural architecture concepts of the previous TFT evening, the team discussed the concept of built-in rewards or status levels within industrial AR applications to make the experience more engaging for the end-user and encourage adoption. I can imagine how displaying a user’s skill level at operating equipment (n00b, speed-demon, Fork-lift Ninja etc.) could help encourage development and keep people operating within allowable parameters (speed limits, for example).

This may sound a bit airy-fairy (for want of a better term), but consider this: 1993’s 15-year-old playing Doom is now 32, and as time passes the percentage of people brought up with this sort of gaming experience and expectation grows. Today’s upcoming generation is already fully immersed in social media – maybe it’s not such an alien concept for the professional world of the near future after all.

I’d like to thank Matt and Michelle for an interesting evening, and a bit of a break from the norm of my day-job and this blog. If you’re interested in this sort of thing, the next event is on 19th August (location TBC) and is likely to be a full day called “The Lock Inn” – keep an eye on Matt and Michelle’s blogs for more details. I also appear to have been volunteered by a colleague to speak about clouds or something at the event, so look out for that 🙂

I also had a go on an iPad (not yet released here in the UK). Thoughts: very nice screen and UI (as you’d expect), but it was a lot heavier than I was expecting; I only had it for a couple of minutes, but it wasn’t very comfortable.

Best quote of the evening: “Let the sausages flow…” Matt Bagwell, Creative Director, EMC Consulting 🙂

Where Next for VMware Workstation?

 

I love VMware Workstation. I have used it since about 1999, when I was first introduced to virtualization, and it totally revolutionised the way I ran my home and work lab study and, later, production systems.

Since then, every version has introduced new features that seemed to be back-ported into the server products; as I understand it, the record-and-replay features underpinned the code that became VMware Fault Tolerance, and the same is true of linked clones and thin provisioning in vSphere.

There has been integration with developer environments for better debugging, and I guess a lot of Workstation has gone into the Fusion product for the Mac – but it did get me thinking: where next for Workstation, beyond the usual performance tweaks that seem to get made in every version?

What I think would be great – and it ties into my previous post on vendor hardware emulators – is a pluggable hardware abstraction layer (HAL)/driver architecture for VMware Workstation.


Workstation does a brilliant job of virtualizing x86/64 hardware, and to date that has been its primary task, but I wonder if it could be expanded into a more modular architecture to support wider development and use of other hardware platforms on x86/64.

There are many emulators out there for developers – for mobile phone chipsets, custom ASICs etc. – but these are often hard for the end-user to configure and are very bespoke to the devices they were developed for.

With the spare horsepower and low price-point of commodity x86/64 hardware, all it needs is a common virtualization/emulation product to unify it all, and you have a very powerful product with a huge market – not only for developers, but for operations people who would no longer need a huge lab of bespoke hardware, mobile phones and devices to support end-users; it’s all available in a virtual machine.

Thinking slightly wider in scope, if it were also back-ported and integrated into future versions of the vSphere product line, you would have a very powerful back-end server product. VMware talk of the “software mainframe”; this would bring what some mainframes currently do for virtualizing x86 servers, but with a MUCH wider application.

Whilst the initial pay-off would be with developer licenses rather than enterprise/large-scale licensing agreements, with Hyper-V and Xen rapidly catching up on the hypervisor front VMware need something cutting-edge to keep them ahead of the game – and consider the enterprise implications of this:

Lots of customers run workloads on SPARC hardware/OS, and porting to x86 Solaris isn’t simple. Is the cost/performance benefit still there for SPARC customers in a world of cheap and fast x64 hardware? Emulating/virtualizing SPARC CPU workloads on x64 could be a big draw for Sun customers – particularly with the Oracle acquisition, and with VMware targeting vSphere at large-scale Oracle customers – and could prove easier than porting legacy apps from SPARC, in the same way virtualization has revolutionised the x86 server space.

Or imagine an ESX cluster running a mix of x86/64, SPARC, ARM, iPhone, set-top box and AS/400 virtual machine workloads – as a test and dev, support or even production solution.

Sure, emulation has an overhead – but so does virtualization, and x86/64 hardware is cheap and off-the-shelf; add in a distributed ESX processing cluster (my thoughts on that here) and you could probably build something with equivalent or even better performance for less.

Interesting concept (to me anyway)… thoughts?

London VMware User Group – 6th May

 

The next VMware User Group meeting (VMUG) has been announced. I’ll be presenting on virtualizing terminal server workloads – a presentation based on some recent work I have undertaken with a customer.

It’s also great to see Alan back for another PowerShell session – this is definitely worth attending – register now if you haven’t already as it was a very full house last time round.

Agenda as follows –

The London VMUG Steering Committee are pleased to announce the next UK London VMware User Group meeting, kindly sponsored by RES Software, to be held on Thursday 6th May 2010. We hope to see you at the meeting, and afterwards for a drink or two, courtesy of VMware.

To register your interest in attending, please send an email to londonvmug@yahoo.com with up to two named attendees from your organisation. If you do not receive a confirmation mail, please don’t just turn up since we will not be able to admit you to the meeting. Please separately mention if you intend attending Alan’s PowerCLI workshop at 1100. Content from the meetings will continue to be uploaded to http://www.box.net/londonug, NDA permitting.

Our meeting will be held at the Thames Suite, London Chamber of Commerce and Industry, 33 Queen Street, London EC4R 1AP, +44 (0)20 7248 4444. The nearest tube station is Mansion House, location information is available here. Reception is from 1230 for a prompt 1pm start, to finish around 5pm. Our agenda looks something like this:

1100 – 1200 (Optional) PowerCLI / Powershell workshop – Alan Renouf. Please bring your own curly brackets
1230 – 1300 Arrive & Refreshments
1300 – 1320 Welcome & News – Alaric Davies
1320 – 1400 Sponsor Presentation – RES Software

Migrating ESX v3.5 to ESX4i on HP Blades – Colin Style, Prudential
Brunel University virtualisation experiences – Peter Polkinghorne, Brunel University

1500 – 1520 Refreshment break

Virtualising Terminal Server workloads – Simon Gallagher
Interactive (and possibly contentious!) Panel Discussion – UG Committee with contributions from the floor

1645 – 1700 Close
1700 – Pub

Discount Code for BriForum

 

As you may have seen before I am giving a session at BriForum in Chicago around low-cost virtual lab environments using a variant of my vT.A.R.D.I.S demo system with as many live demos as possible.

I have a $200 discount code available for my readers, if you want to take advantage of this offer please email or twitter me (details on about page) and I will send you one.

BriForum is still the best technical conference I have ever attended for content (and I last went in 2007!) – get yourself there!


Streaming Install of Office 2010 Beta

 

This is clever: Microsoft are going to offer a version of Office 2010 that you can install on demand using the App-V technology they acquired from Softricity a few years ago.

Read the link to the Office 2010 team blog here and you can try the beta for yourself here


I like the idea of App-V technology (and VMware’s ThinApp equivalent) a lot for the desktop space – it’s a very clever way of delivering apps.