
Virtualization, Cloud, Infrastructure and all that stuff in-between

My ramblings on the stuff that holds it all together

Category Archives: ESX

Funky USB Device Entries After Using VM Converter..

 

I've noticed this a couple of times: if you P2V a VM from VMware Workstation to ESX using VM Converter, it brings across a virtual USB device which isn't supported by ESX.. if you look at the properties you get the following amusing entries..

[screenshot: the USB device properties showing the unsupported entries]

Now, I wish I really did have a funky USB dongle, or doohickey… sounds useful!
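If the leftover device bothers you, I'd expect you can get rid of it by removing the USB controller from the VM's settings, or by editing the .vmx directly with the VM powered off – something along these lines (a guess based on the standard .vmx setting, not verified on every build):

usb.present = "FALSE"

Any remaining usb* lines in the file should be safe to delete at the same time.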


VMWare/Cisco Switching Integration

 

As noted here, there is a doc jointly produced by VMWare and Cisco which has all the details required for integrating VI virtual switches with physical switching.

It's especially handy if you need to work with networking teams to make sure things are configured correctly to allow proper failover between redundant switches/fabrics etc. – it's not as simple as it looks, and people often forget the switch-side configuration that's required.

Doc available here (c.3Mb PDF)

Can you run ESX as a VM under ESX?

 

Crazy, yeah – but hey, you've got to try it. Prompted by a question from Prasad: can you run ESX in a VM under ESX?

In the interest of science I just tried this. I used VM Converter to convert my working ESX-under-Workstation image as-is to my ESX box (hoping it would keep the custom settings intact and save me having to rebuild it).

Good news: VM Converter did its thing and the VM does start up on the ESX box.

..bad news: it doesn't get past this screen as far as I can tell… it's sat there for a good 20 mins so I don't think it's going to get much further.

[screenshot: the ESX boot screen where it hangs]

I also tried to import my ESX 3i image to see if that would work, but VM Converter wouldn't import it for some reason, so I'll have to try a clean install for that one.

[screenshot: the VM Converter import error]

Looks like some kind of error when it’s trying to determine what version it is..

[2008-06-13 00:23:29.164 'P2V' 5748 error] [task,295] Task failed: P2VError UNKNOWN_METHOD_FAULT(sysimage.fault.OsVersionNotFound)
[2008-06-13 00:23:29.164 'P2V' 5748 verbose] [task,339] Transition from InProgress to Failure requested
[2008-06-13 00:23:29.164 'P2V' 5748 verbose] [task,388] Transition succeeded

Ah well, anyone know how to get this going/if it’s possible?
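If anyone else fancies having a go, my guess is the starting point would be the same .vmx tweaks that work for ESX under Workstation (see my post further down this page), applied to the nested VM's .vmx – completely untested under ESX, so treat it as a hedged starting point rather than a recipe:

ethernet0.virtualDev = "e1000"
monitor.virtual_exec = "hardware"
monitor_control.restrict_backdoor = "true"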

Slow vMotion..

 

Note to remember: don't forget to check the duplex settings on the NICs handling your vMotion traffic.

My updated clustered ESX test lab is progressing (more posts on that in the next week or so)… and I’m kind of limited in that I only have an old 24-port 100Mb Cisco hub for the networking at the moment.

vMotion warns about the switch speed as a possible issue.

[screenshot: the vMotion compatibility warning about switch speed]

I had my Service Console/vMotion NIC forced to 100/full, and when I first tried it vMotion took 2hrs to get to 10%. I changed the NIC to auto-negotiate whilst the task was running and it completed in a couple of seconds without breaking the vMotion task, dropping only 1 ping to the VM I moved.

Cool – it's not production or doing a lot of workload, but it's useful to know that despite the warning it will work even if you've only got an old hub for your networking, and worth remembering that duplex mis-matches can literally add hours, even days, onto network transfers.
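For reference, you can check and set the physical NIC settings from the ESX service console with esxcfg-nics – roughly as below, where vmnic0 is just an example (substitute whichever vmnic carries your vMotion traffic):

esxcfg-nics -l (list each NIC with its current speed/duplex)
esxcfg-nics -s 100 -d full vmnic0 (force 100/full)
esxcfg-nics -a vmnic0 (return it to auto-negotiate)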

Running ESX 3.5 and 3i Under VMWare Workstation 6.5 Beta Build 91182

 

Following on from my earlier post I upgraded my installation to the new build of 6.5. It un-installed the old build and re-installed the latest without a problem; the whole thing took about 30 mins and required a reboot of the host OS.

All my previously suspended XP/2003 VMs resumed OK without a restart, but they needed an upgrade to the VMTools which did require a restart of the guest OS – all completed with no problems.

Now, onto installing ESX….

I used the settings from Eric's post here to edit my .vmx file:

ethernet0.virtualDev = "e1000"

monitor.virtual_exec = "hardware"
monitor_control.restrict_backdoor = "true"

Note – you need to select an x64 Linux version from the VM type drop-down. If you go back and change anything via the GUI after you've edited the .vmx file, it overwrites the Ethernet card setting from "e1000" back to "vlance", so you'll need to edit the file again – otherwise the ESX installer won't find a compatible NIC and won't install.

It was initially very slow to boot – 5 mins on my dual-core laptop – with only one error, which was expected..

[screenshots: ESX booting in the VM, and the expected error]

To improve the performance I changed my installation to run the non-debug version of the Workstation binaries (move the original debug binary aside, then rename vmware-vmx.exe to vmware-vmx-debug.exe).

Note: this isn't recommended unless you know what you are doing – VMWare will rely on the output from the debug version of the code if you need to report any issues.
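From memory the swap is something like the following from a command prompt, with Workstation shut down first – the path may differ depending on where the beta build installed itself, so check yours:

cd "C:\Program Files\VMware\VMware Workstation"
ren vmware-vmx-debug.exe vmware-vmx-debug.exe.orig
ren vmware-vmx.exe vmware-vmx-debug.exe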

It also seems to work for the installable version of ESX 3i… (although I’ve not quite figured out the point of that version yet :)).

[screenshot: ESX 3i booting]

[screenshot: the install prompt]

It did fail with an error the first time round..

[screenshot: the install error]

This was because I had specified an IDE disk as per the ESX instructions; I changed it to a SCSI one and it worked OK.

[screenshot: the install running against a SCSI disk]
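For anyone editing by hand, the relevant .vmx entries for a SCSI disk look something like this – the .vmdk name here is just an example, yours will match whatever you called the disk:

scsi0.present = "TRUE"
scsi0.virtualDev = "lsilogic"
scsi0:0.present = "TRUE"
scsi0:0.fileName = "ESX35.vmdk"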

Finished..

[screenshots: the finished install]

The ESX 3i install has a footprint of about 200Mb on disk, and ESX 3.5 uses 1.5Gb.

I’m going to keep the 3.5 install on my laptop and will try to use linked clones to maintain a couple of different versions/configs to save disk space.. I’m sure I could knock up a quick script to change the hostname/IP of each clone – if I do I’ll post it here.
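As a rough and so-far untested sketch, something like this run in each clone's service console should do it (the hostname, IP and netmask are made-up examples):

#!/bin/bash
# update the service console hostname on a cloned ESX host
NEWNAME=esx-clone2
NEWIP=192.168.0.52
sed -i "s/^HOSTNAME=.*/HOSTNAME=$NEWNAME/" /etc/sysconfig/network
hostname $NEWNAME
# re-IP the service console interface
esxcfg-vswif -i $NEWIP -n 255.255.255.0 vswif0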

Why would you want to do this? Well, because you can, of course 🙂 and it's handy for testing patch updates and scripts for ESX management etc.

I will also try to get an ESX DRS cluster running under Workstation, with a couple of ESX hosts and shared storage over iSCSI using something like OpenFiler, as shown here. It won't exactly be production performance, but it'll be useful for testing and demo'ing.
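If memory serves, once each host has a VMkernel port for storage traffic, enabling the software iSCSI initiator from the service console goes roughly like this – the OpenFiler IP is made up, and the software iSCSI adapter name varies between builds (vmhba32 or vmhba40), so check yours first:

esxcfg-swiscsi -e (enable the software iSCSI initiator)
vmkiscsi-tool -D -a 192.168.0.60 vmhba40 (add the OpenFiler box as a discovery address)
esxcfg-swiscsi -s (rescan for new LUNs)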

Deleting a Virtual Machine from Virtual Center and Disk

 

Suppose you deploy your VMs from a master image using Virtual Center's "Deploy from template" functionality (below).

[screenshot: the Deploy from template option]

When you then try to delete one of these virtual machines from disk…

[screenshot: deleting the VM from disk]

…you get the following prompt:

Are you sure you want to delete this VM and it’s associated base disk?

Please note if other VMs are sharing this base disk, they will no longer have access to this disk.

[screenshot: the delete confirmation prompt]

This does not refer to the master VM image you deployed from; in other words, deleting the VM does not break all the other VMs deployed from the initial template.

One other point to note: when you perform the "Deploy virtual machine from template" operation, the target field (below) is actually the name of the base image you are cloning, rather than the name of the eventual VM you are creating from it – odd, but that's how it is.

[screenshot: the deploy wizard showing the target field]

Solid State SAN, Storage vMotion and VMWare – HSM for your VMs

 

You've been able to buy solid-state SAN technology like the Tera-RAMSAN from TMS, which gives you up to 1Tb of storage presented over 4Gb/s fibre channel or InfiniBand at 10Gb/s… and with the cost of flash storage dropping it's soon going to fall into the realms of affordability (from memory, a year ago a 1Tb SSD SAN was about £250k, so I'd assume that's maybe £150k now – I'd be happy to see current pricing if anyone has it though).

If you were able to combine this with a set of ESX hosts dual-connected to the RAMSAN and to traditional equipment (like an HP EVA or EMC Clariion) over an FC or iSCSI fabric, then you could possibly leverage the new Storage vMotion features included in ESX 3.5 to achieve a 2nd level of performance and load levelling for a VM farm.

[diagram: ESX hosts dual-connected to a solid-state tier and a traditional SAN tier]

It’s pretty common knowledge that you can use vMotion and the DRS features to effectively load level or average VM CPU and memory load across a number of VMWare nodes within a cluster.

Using the infrastructure discussed above could add a second tier of load balancing to a DRS cluster, without downtime. If a VM needs more disk throughput or is suffering from latency you could move it between the more expensive solid-state storage tier and FC-SCSI or even FATA disks, making the best use of fast, expensive storage vs. cheap, slow commodity storage.
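The tier-to-tier move itself could presumably be scripted with the svmotion command from the VI Remote CLI – something like the following, where the server, datacenter and datastore names are all made up for illustration:

svmotion --url=https://vc01/sdk --username=admin --password=secret --datacenter=DC1 --vm="[ramsan_tier1] webvm/webvm.vmx:eva_tier2"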

Even if Virtual Center doesn't expose a native API or DRS criteria for this type of functionality, you could leverage the plug-in or scripting architecture and a manager of managers (or here) to map this across an enterprise and across multiple hypervisors (Sun, Xen, Hyper-V).

I can also see EMC integrating flash storage into the array itself; it would be even better if you could transparently migrate LUNs to/from different arrays and disk storage without having to touch ESX at all.

Note: this is just a theory – I've not actually tried it – but I'm hoping to get some eval kit and do a proof of concept…

Misc bits of Useful, Recent VMWare News

 

I've been really busy the last couple of weeks and have had to trim down my incoming RSS feeds, as there was too much noise and I was missing important things like the following:

  • Scott Lowe's summary of sessions from VMWare's Partner Exchange, with some useful information on Site Recovery Manager
  • The new VMWare Certified Design Expert (VCDX) certification – the next step up from VCP; I'll have to look into it now I've finally managed to re-schedule my cancelled QA course – official VMWare announcement here.
  • Official Microsoft Clustering Support with ESX 3.5 Update 1 here
  • Some workarounds for deploying Windows Server 2008 with Virtual Center here – it would have been nice if support had appeared in an official update from VMWare by now; it's not like it's been in beta for a while, is it (errr!)

Lifecycle Manager, Site Recovery Manager and Stage Manager Released

 

Linkage here.

VMWare are shaping up to have a really good set of management tools – lab and site recovery manager are of particular interest to me for several projects I’m working on.

VMWare Server Performance – A Practical Example

 

The following screen dump is from an HP DL380 G5 server that runs all the core infrastructure under VMWare Server (the free one) for a friend's company, which I admin sometimes.

It is housed in some co-lo space and runs the average range of Windows servers used by a small but global business: Exchange, SQL, Windows 2003 Terminal Services.

As a result of some planned (but not very well communicated!) power maintenance, the whole building lost power earlier today; when it was restored I grabbed the following screenshot as the 15 or so virtual machines automatically booted.

Interesting to note that all the VMs had been configured to auto-start with the host OS, meaning there wasn't any manual intervention required, even though it was a totally dirty shutdown for both the host and guest OSes (no UPS, as the building and suite is supposed to have redundant power feeds to each rack – in this instance the planned maintenance was on the building wiring, so all power feeds had to be taken down for a 5-yearly inspection..)

There are no startup delay settings in the free version of VMWare Server so they all start at the same time; it's interesting to note the following points..

The blue line that makes a rapid drop is the pages/second counter, and the 2nd big drop (green) is the disk queue length. The highlighted (white) line is the overall %CPU time; note the sample frequency was 15 seconds on this perfmon.

[screenshot: perfmon during the mass VM start-up]

After it had settled down I took the following screenshot; it hardly breaks a sweat during its working day. There are usually 10-15 concurrent users on this system from around the world (access provisioned via an SSL VPN device), plus a pretty heavily used Exchange mail system.

[screenshot: perfmon during a normal working day]

The box is an HP DL380 G5 with 2 x quad-core CPUs (8 cores in total) and 16Gb of RAM; it has 8 x 146Gb 15k HDDs in a single RAID 5 set plus hot-spare. It was purchased in early 2007 and cost c.£8,000 (UK prices).

It runs Windows 2003 Enterprise Edition x64 with VMWare Server 1.0.2 (yes, it's an old build.. but if it ain't broke..) and they have purchased multiple W2K3 Enterprise Edition licences to take advantage of the virtualisation use-rights to cover the installed virtual OSes.

It's been in place for a year and hardly ever has to be touched; it's rock-solidly available, and the company have noticed several marked improvements since they P2V'd their old servers onto this platform, as follows:

  • No hardware failures – moving from lots of low-end servers (Dell) and desktops to a single box (10:1 consolidation)
  • The DL380 has good redundancy built in, but it's also backed up with a h/w maintenance contract, and they also have a spare cold-standby server to resume service from backups if data is lost.
  • Less noise – the old servers were dotted around their old offices in corners, racks etc. – this is the main thing they liked!
  • Simple access anywhere – using a Juniper SA2000 SSL VPN, it's easy to get secure access from anywhere
  • Less reliance on physical offices and cheap DSL-grade data communications; the servers are now hosted on the end of a reliable, data-centre-class network link with an SLA to back it up. If an individual office loses its ADSL connection it's no real issue – people pick up their laptop(s) and work from home/Starbucks etc.
  • Good comms are cheaper in data centres than in your branch offices (usually)

Hopefully this goes to show that the free version of VMWare's server products can work almost as well if budget is a big concern. ESX would definitely give some better features and make backup easier; they are considering upgrading, combined with something like Veeam Backup to handle failover/backup.