Virtualization, Cloud, Infrastructure and all that stuff in-between

My ramblings on the stuff that holds it all together

Category Archives: iSCSI

Nothing happens when I try to run Starwind software Manager on Windows 2008 R2

 

I am doing some work with a FusionIO solid-state flash storage card at the moment (more on this in a future post). As part of this I need a Windows-based iSCSI target for my testing, and rather handily you can download an evaluation copy of the Starwind Enterprise Edition from here.

I usually use OpenFiler for this sort of thing, but I’m not a particular Linux whizz (OK, and I’m a bit lazy and in a hurry). I wanted to try out the FusionIO Duo card I have been loaned, and its Linux drivers ship as .rpm or .deb packages, for which OpenFiler doesn’t have the required package management software – so I have installed the card in a Windows 2008 machine and will use the Starwind software as an iSCSI target.

(In terms of disclosure: whilst I was writing this post up last week, VMware vExperts were offered an NFR license for the product. This test was done with the freely downloadable eval version rather than the NFR license we have been offered – but I urge you to check it out, it’s pretty cool.)

Anyway – when you first install the software on Windows 2008, there is a Starwind icon on the desktop.

image

When you double-click on it (or any of the Start menu entries), nothing happens – you don’t get a UI or anything. This confused me for a while, until I discovered that the software places a system tray icon at boot, which you use to configure it.

By default on my Windows 2008 R2 machine this icon is hidden and set to only show notifications – of which there were none yet.

image

A quick trip to the Customize button in the Notification Area section of the taskbar properties shows the default setting that is hiding it.

 image

image

Setting this to “Show icon and notifications” made it re-appear in the taskbar/notification area.

image

image

You can now right-click the icon and launch the management console.

image

The management console

image

It’s a bit strange that the desktop or Start menu icon doesn’t launch the manager ‘out of the box’ with Windows 2008 – but this is how to resolve it. The hint eventually came from the online help, which said to go via the system tray icon; it just goes to show that maybe sometimes you should look at the help files!

image

Hopefully that will save you some time with your eval!

iSCSI LUN is very slow/no longer visible from vSphere host

 

I encountered this situation in my home lab recently. To be honest I’m not exactly sure of the cause yet, but I think it was excessive I/O from the large number of virtualized vSphere hosts and FT instances I have been running, mixed with some scheduled Storage vMotion – over the weekend all of my virtual machines seem to have crashed or become unresponsive.

Firstly, to be clear, this is a lab setup using a cheap home-PC-type SATA disk and equipment, not your typical production cluster, so it’s already working pretty hard (and doing quite well most of the time, too).

The hosts could ping the OpenFiler via the VMkernel interface using vmkping, so I knew there wasn’t an IP/VLAN problem, but access to the LUNs was very slow or intermittent – directory listings would be very slow, time out and eventually become non-responsive.
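For reference, that connectivity check is just a case of pinging the target’s address from the VMkernel stack – the IP below is only an example from my storage VLAN, substitute your own iSCSI target:

vmkping 192.168.103.10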

I couldn’t power off or restart VMs via the VI client, and starting them was very slow/unresponsive and eventually failed. I tried rebooting the vSphere 4 hosts, as well as the OpenFiler PC that runs the storage, but that didn’t resolve the problem either.

At some point during this troubleshooting the 1TB iSCSI LUN I store my VMs on disappeared totally from the vSphere hosts, and no amount of rescanning HBAs would bring it back.

The path/LUN was still visible down the iSCSI HBA, but not from the Storage tab of the VI client.

Visible down the iSCSI path..

image

But the VMFS volume it contains is missing from the list of data stores

image

This is a command line representation of the same thing from the /vmfs/devices/disks directory.

image

OpenFiler and its LVM tools didn’t seem to report any disk/iSCSI problems, so my thoughts turned to some kind of logical VMFS corruption – which reminded me of that long-standing but never-completed task to install some kind of VMFS backup utility!

At this point I powered down all of the ESX hosts except one, to eliminate any complications, and set about researching VMFS repair/recovery tools.

I checked the VMkernel log file (/var/log/vmkernel) and found the following:

[root@ml110-2 /]# tail /var/log/vmkernel

Oct 26 17:31:56 ml110-2 vmkernel: 0:00:06:48.323 cpu0:4096)VMNIX: VmkDev: 2249: Added SCSI device vml0:3:0 (t10.F405E46494C454009653D4361323D294E41744D217146765)

Oct 26 17:31:57 ml110-2 vmkernel: 0:00:06:49.244 cpu1:4097)NMP: nmp_CompleteCommandForPath: Command 0x12 (0x410004168500) to NMP device "mpx.vmhba0:C0:T0:L0" failed on physical path "vmhba0:C0:T0:L0" H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

Oct 26 17:31:57 ml110-2 vmkernel: 0:00:06:49.244 cpu1:4097)ScsiDeviceIO: 747: Command 0x12 to device "mpx.vmhba0:C0:T0:L0" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

Oct 26 17:32:00 ml110-2 vmkernel: 0:00:06:51.750 cpu0:4103)ScsiCore: 1179: Sync CR at 64

Oct 26 17:32:01 ml110-2 vmkernel: 0:00:06:52.702 cpu0:4103)ScsiCore: 1179: Sync CR at 48

Oct 26 17:32:02 ml110-2 vmkernel: 0:00:06:53.702 cpu0:4103)ScsiCore: 1179: Sync CR at 32

Oct 26 17:32:03 ml110-2 vmkernel: 0:00:06:54.690 cpu0:4103)ScsiCore: 1179: Sync CR at 16

Oct 26 17:32:04 ml110-2 vmkernel: 0:00:06:55.700 cpu0:4103)WARNING: ScsiDeviceIO: 1374: I/O failed due to too many reservation conflicts. t10.F405E46494C454009653D4361323D294E41744D217146765 (920 0 3)

Oct 26 17:32:04 ml110-2 vmkernel: 0:00:06:55.700 cpu0:4103)ScsiDeviceIO: 2348: Could not execute READ CAPACITY for Device "t10.F405E46494C454009653D4361323D294E41744D217146765" from Plugin "NMP" due to SCSI reservation. Using default values.

Oct 26 17:32:04 ml110-2 vmkernel: 0:00:06:55.881 cpu1:4103)FSS: 3647: No FS driver claimed device ‘4a531c32-1d468864-4515-0019bbcbc9ac’: Not supported

The I/O was failing due to too many SCSI reservation conflicts, so hopefully this wasn’t corruption but a locked-out disk. A quick Google turned up this KB article – which reminded me that SATA disks can only do so much 🙂

Multiple reboots of the hosts and the OpenFiler hadn’t cleared this situation, so I had to use vmkfstools to reset the locks and get my LUN back. These are the steps I took:

You need to find the disk ID to pass to the vmkfstools -L targetreset command; to do this from the command line, look under /vmfs/devices/disks (top screenshot below).

You should be able to identify which one you want by matching up the disk identifier.

image

Then pass this identifier to the vmkfstools command as follows (your own disk identifier will be different) – hint: use cut & paste or tab-completion to put the disk identifier in.

vmkfstools -L targetreset /vmfs/devices/disks/t10.F405E46494C4540096(…)

You will then need to rescan the relevant HBA using the esxcfg-rescan command (in this instance the LUN is presented down the iSCSI HBA – which is vmhba34 in vSphere)

esxcfg-rescan vmhba34

(you can also do this part via the vSphere client)

If you now look under /vmfs/volumes the VMFS volume should be back online; alternatively, do a refresh in the vSphere client storage pane.
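Putting the whole recovery together, this was roughly the sequence from the service console – treat it as a sketch: the disk identifier is truncated here and yours will be different, and your software iSCSI adapter may not be vmhba34.

ls /vmfs/devices/disks                  # find the identifier of the affected LUN
vmkfstools -L targetreset /vmfs/devices/disks/t10.F405E46494C4540096(…)
esxcfg-rescan vmhba34                   # rescan the iSCSI software HBA
ls /vmfs/volumes                        # the VMFS volume should now be listed again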

All was now resolved and virtual machines started to change from (inaccessible) in the VM inventory back to the correct VM names.

One other complication was that my DC, DNS, SQL and vCenter servers are all VMs on this platform, residing on that same LUN, so you can imagine the havoc that causes when none of them can run because the storage has disappeared. In this case it’s worth remembering that you can point the vSphere client directly at an ESX node, not just at vCenter, and start/stop VMs from there – just enter the host’s name or IP address when you log on rather than the vCenter address (and remember the root password for your boxes!). If you had DRS enabled it does mean you’ll have to go hunting for which host the VM was running on when it died.

image
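If you’d rather not use the vSphere client at all, classic ESX also lets you list and start VMs from the service console with vmware-cmd – a rough sketch; the datastore and VM paths are placeholders, substitute your own:

vmware-cmd -l                                              # lists the .vmx paths of all registered VMs
vmware-cmd /vmfs/volumes/<datastore>/<vm>/<vm>.vmx getstate
vmware-cmd /vmfs/volumes/<datastore>/<vm>/<vm>.vmx start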

In conclusion, I guess there was a gradual degradation of access as all the hosts fought over a single SATA disk and I/O traffic increased, until all my troubleshooting/restarting of VMs overwhelmed what it could do. I might need to reconsider how many VMs I run from a single SATA disk, as I’m probably pushing it too far – remember kids, this is a lab/home setup, not production, so I can get away with it 🙂

In my case it was an inconvenience that it took the volume offline and prevented further access; I can only assume this mechanism is in place to prevent disk activity being dropped/lost, which would result in corruption of the VMFS or individual VMs.

With the mention of I/O DRS in upcoming versions of vSphere, that could be an interesting way of pre-emptively avoiding this situation, if it does automated Storage vMotion to less busy LUNs rather than just vMotion between hosts on the basis of IOPS.

New Home Lab Design

 

I have had a lab/test setup at home for over 15 years now; it’s proven invaluable for keeping my skills up to date and helping me study towards the various certifications I’ve had to pass for work. Plus I’m a geek at heart and I love this stuff 🙂

Over the years it’s grown from a BNC-based 10Mbit LAN running NetWare 3/Win 3.x, through NetWare 4/NT4, Slackware Linux and all variants of Windows 200x/Red Hat.

Around 2000 I started to make heavy use of VMware Workstation to reduce the amount of hardware I had (from 8 PCs in various states of disrepair to 2 or 3 homebrew PCs). In later years there has been an array of cheap server kit on eBay, and the last time we moved house I consolidated all the ageing hardware into a bargain eBay find – a single Compaq ML570 G1 (quad CPU/12GB RAM and an external HDD array) – which served fine until I realised just how big our home electricity bills were becoming!

Yes, that's the beer fridge in front of the rack :) hot & cold aisles, mmm 

Note the best-practice location of my suburban data centre: a beer fridge providing hot-aisle heating, a pressure washer conveniently located to provide fine-mist fire suppression, and plenty of polystyrene packing to stop me accidentally nudging things with my car. 🙂

I’ve been using a pair of HP D530 SFF desktops to run ESX 3.5 for the last year and they have performed excellently (links here, here and here), but I need more power and the ability to run 64-bit VMs (D530s are 32-bit only). I also need to start work on vSphere, which unfortunately doesn’t look like it will run on a D530.

So I acquired a 2nd-hand ML110 G4 and added 8GB RAM – this has served as my vSphere test lab to date, but I now want to add a 2nd vSphere node and use DRS/HA etc. (looks like no FT for me, unfortunately). Techhead put me onto a deal that Servers Plus are currently running, so I now have 2 x ML110 servers 🙂 They are also doing quad-core AMD boxes for even less money here – see Techhead for details of how to get free delivery here.

image

In the past my labs have grown rather organically as I’ve acquired hardware or components have failed; since this time round I’ve had to spend a fair bit of my own money buying items, I thought it would be a good idea to design it properly from the outset 🙂

The design goals are:

  • ESX 3.5 cluster with DRS/HA to support VM 3.5 work
  • vSphere DRS/HA cluster to support future work and more advanced beta testing
  • Ability to run 64-bit VMs (for Exchange 2007)
  • Windows 2008 domain services
  • Use clustering to allow individual physical hosts to be rebuilt temporarily for things like Hyper-V or P2V/V2P testing
  • Support a separate WAN DMZ and my wireless network
  • Support VLAN tagging
  • Adopt best-practice for VLAN isolation for vMotion, Storage etc. as far as practical
  • VMware Update manager for testing
  • Keep ESX 3/4 clusters separate
  • Resource pool for “production” home services – MP3/photo library etc.
  • Resource pool for test/lab services (Windows/Linux VMs etc.)
  • iSCSI SAN (OpenFiler as a VM) to allow clustering, and have all VMs run over iSCSI.

The design challenges are:

  • this has to live in my garage rack
  • I need to limit the overall number of hosts to the bare minimum
  • budget is very limited
  • make heavy re-use of existing hardware
  • Cheap Netgear switch with only basic VLAN support and no budget to buy a decent Cisco.

Luckily I’m looking to start from scratch in terms of my VM estate (30+); most of them are test machines or something I want to build separately, and data has been archived off, so I can start with a clean slate.

The 1st pass at my design for the ESX 3.5 cluster looks like the following

 image

I had some problems with the iSCSI VLAN, and after several days of head-scratching I figured out why: in my network the various VLANs aren’t routable (my switch doesn’t do Layer 3 routing), and for the ESX 3.5 software iSCSI initiator to work the service console needs to be able to reach the iSCSI target as well as the VMkernel port. I resolved this by adding an extra service console interface on the iSCSI VLAN, and discovery worked fine immediately.

image image

image
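For reference, the same extra service console can be added from the console instead of the VI client – a rough sketch assuming the storage traffic hangs off vSwitch0 and sits in VLAN 103 as mine does (the port group name, vswif number, IP address and mask are examples only):

esxcfg-vswitch -A "Service Console iSCSI" vSwitch0            # add a port group for the extra service console
esxcfg-vswitch -v 103 -p "Service Console iSCSI" vSwitch0     # tag the port group with the storage VLAN
esxcfg-vswif -a vswif1 -p "Service Console iSCSI" -i 192.168.103.11 -n 255.255.255.0
esxcfg-vswitch -l                                             # check the resulting vSwitch layout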

I also needed to make sure the Netgear switch had the relevant ports set to T (tagged egress mode) for the VLAN mapping to work – there isn’t much documentation on this on the web, but this is how you get it to work.

image

The vSwitch configuration looks like the following – note these boxes only have a single GbE NIC, so all traffic passes over it; not ideal, but performance is acceptable.

image image

iSCSI SAN – OpenFiler

In this instance I have implemented 2 OpenFiler VMs, one on each D530 machine, each presenting a single 200GB LUN which is mapped to both hosts.

Techhead has a good step-by-step on how to set up an OpenFiler here, which you should check out if you want to know how to set up the volumes etc.

I made sure I set the target name in OpenFiler to match the LUN and filer name, so it’s not too confusing in the iSCSI setup – as shown below.

If it helps, my target naming convention was vm-filer-X-lun-X, which means I can have multiple filers presenting multiple targets with a sensible naming convention – the target name is only visible within iSCSI communications, but it does need to be unique if you will be integrating with real-world kit.

image
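As an illustration of that convention, the target names end up looking something like this (the iqn prefix is just an example – use whatever base name your OpenFiler install is configured with):

iqn.2006-01.com.openfiler:vm-filer-1-lun-0
iqn.2006-01.com.openfiler:vm-filer-1-lun-1
iqn.2006-01.com.openfiler:vm-filer-2-lun-0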

Storage Adapters view from an ESX host – it doesn’t know the iSCSI target is a VM that it is running 🙂

image

Because my VLANs aren’t routed, my storage is all hidden in VLAN 103; to administer my OpenFiler I have to use a browser in a VM connected to the storage VLAN. I did play around with multi-homing my OpenFilers but didn’t have much success getting iSCSI to play nicely; it’s not too much of a pain to do it this way, and I can be sure my storage is isolated to a specific VLAN.

The 3.5 cluster will run my general VMs like Windows domain controllers, file servers and my SSL VPN, and they vMotion between the nodes perfectly. HA won’t really work, as the back-end storage for the VMs lives inside an OpenFiler which is itself a VM – but it suits my needs, and Storage vMotion makes online maintenance possible with some advance planning.

Performance from the VM’d OpenFilers has been pretty good and I’m planning to run as many of my VMs as possible on iSCSI – the vSphere cluster running on the ML110s will likely use the OpenFilers as its SAN storage.

This is the CPU chart from one of the D530 nodes over the last 32hrs, whilst I’ve been doing some serious Storage vMotion between the OpenFiler VMs it hosts.

image

image

image

That’s it for now. I’m going to build out the vSphere side of the lab shortly on the ML110s and will post what I can (subject to NDA, although GA looks to be close).

Cannot Set Static IP in OpenFiler When Running as a VM

 

As a result of a power outage last week my home lab needed a reboot, as my 2 x D530 ESX boxes didn’t have the auto power-on setting enabled in the BIOS, so I dutifully braved the snow to get to the garage and power them on manually.

However, nothing came back online – ESX started but my VMs didn’t auto-restart, as it couldn’t find them.

The run-up to Christmas was a busy month, and I had vague recollections of being in the midst of using Storage vMotion to move all my VMs away from local storage to an OpenFiler VM in preparation for some testing.

However, in my rush to get things working the OpenFiler box didn’t have a static IP address set and was using DHCP (see where this is going…?)

So my domain controller/DNS/DHCP and Virtual Centre server were stored on the OpenFiler VM, which my ESX box was running and accessing over iSCSI. As such, when ESX started it couldn’t locate the iSCSI volume hosting those VMs and couldn’t start anything.

image

OpenFiler couldn’t start its web admin GUI if it couldn’t get an IP address, nor would it mount the shared volumes.

 

 

Once I’d figured out what was going on, it was simple enough to get things going again;

  • Temporary DHCP scope on my router
  • IPCONFIG /RENEW to get a temporary DHCP address on my laptop
  • VI client directly to the ESX box rather than VC, and reboot the OpenFiler VM
  • Web browser to the OpenFiler appliance on its temporary DHCP address

At this point I would have expected to be able to set a static IP address and resolve the issue for the future; however, I couldn’t see any NICs in the OpenFiler config screen (see screenshot below).

image

I thought this was a bit odd, and maybe I was looking in the wrong part of the UI, but sure enough it was the correct place.

I tried updating to the most recent software releases via the handy system update feature, which completed OK (no reboot required – beat that, Windows Storage Server! :)), but still no NICs showed up, even after a couple of reboots to be absolutely sure.

image

Then I stumbled across this thread, and it seems this may be a bug (tracker here). Following Jason’s suggestion, I used the nano text editor via the VI remote console to edit the /opt/openfiler/var/www/includes/network.inc file on the OpenFiler VM as follows:

Before:

image

After:

image

I then refreshed the System tab in my browser session and the NICs showed up.

Note: as part of my initial troubleshooting I added a 2nd virtual NIC to the VM, but the principle should apply regardless.

image 

And I can now set a static IP etc.

image image

I had to reboot my ESX host to get all my VMs back from being inaccessible. I’m sure there is a cleverer way to do that, but in my case I wanted to test that the start-up procedure worked as expected now that I’ve set a static IP and re-jigged the start-up sequence so that OpenFiler starts before any other VMs that depend on it for their storage.

Free EMC Celerra for your Home/Lab

 

Virtualgeek has an interesting post here about a freely downloadable VM version of their Celerra product, including an HA version. This is an excellent idea for testing and lab setups, and a powerful tool in your VM Lab arsenal alongside other offerings like Xtravirt Virtual SAN and OpenFiler.

I’ve been saying for a while that companies that make embedded h/w devices and appliances should try to offer versions of the software running their devices as VMs, so people can get them into lab/test environments quickly. Most tech folk would rather download and play with something now than have to book and take delivery of an eval with sales drones (apologies to any readers who work in sales), pre-sales professional services, evaluation criteria etc. If your product is good it’s going to get recommended – no smoke and mirrors required.

As such, VM appliances are an excellent pre-sales/eval tool rather than something that stops people buying products. Heck, vendors could even licence the VM versions directly for production use (as Zeus do with their ZXTM products); this is a very flexible approach and something that becomes important if you get into clouds as an internal or external service provider – the more you standardise on commodity hardware with a clever software layer, the more you can recycle, reuse and redeploy without being tied to specific vendor hardware.

Most “appliances” in use today are actually low-end PC motherboards with some clever software in a sealed box. For example, I really like the Juniper SA range of SSL VPN appliances; I recently helped out with a problem on one which was caused by a failed HDD, and if you hook up the console interface it’s a commodity PC motherboard in a sealed case running a proprietary secure OS. As it’s all Intel-based, there’s no reason it couldn’t also run as a VM (the SSL accelerator h/w can be turned off in the software, so there can’t be any hard dependency on the SSL accelerator cards inside the sealed box). Adopting VMs for these appliances provides the same (maybe even better) level of standard {virtual} hardware that appliance vendors need to make their devices reliable/serviceable.

Another example: the firmware that is embedded in the HP Virtual Connect modules I wrote about a while back runs under VMware Workstation. HP have an internal-use version for engineers to do some development and testing against; sadly they won’t redistribute it, as far as I am aware.

Virtualization – the key to delivering "cloud based architecture" NOW.

 

There is a lot of talk about delivering cloud or elastic computing platforms. A lot of CxOs are taking this all in and nodding enthusiastically; they can see the benefits… so make it happen! …yesterday.

Moving your services to the cloud isn’t always about giving your apps and data to Google, Amazon or Microsoft.

You can build your own cloud and be choosy about what you give to others. Building your own cloud makes a lot of sense; it’s not always cheap, but it’s the kind of thing you can scale up (or down) with a bit of up-front investment. In this article I’ll look at some of the practical, more infrastructure-focused ways in which you can do so.

image

Your “cloud platform” is essentially an internal shared-services system, where you can actually and practically implement a “platform” team that operates and capacity-plans the cloud platform; they manage its availability, day-to-day maintenance and expansion/contraction.

You then have a number of “service/application” teams that subscribe to services provided by your cloud platform team… they are essentially developers/support teams that manage individual applications or services (for example payroll or SAP, web sites etc.), business units and stakeholders etc.

Using the technology we discuss here you can delegate control to them over most aspects of the service they maintain – full access to app servers etc., and an interface (human or automated) to raise issues with the platform team or log change requests.

I’ve seen many attempts to implement this in the physical/old world and it just ends in tears, as it builds a high level of expectation that the server/infrastructure team must be able to respond very quickly to the end-“customer”; the customer/supplier relationship is very different, regardless of what OLA/SLA you put in place.

However, the reality of traditional infrastructure is that the platform team can’t usually react as quickly as the service/application teams need/want/expect, because they need to have an engineer on-site, wait for an order and a delivery, a network provisioning order etc. (although banks do seem to have this down quite well, it’s still a delay… and time is money).

Virtualization and some of the technology we discuss here enable the platform team to keep one step ahead of the service/application teams: it allows them to do proper capacity planning, maintain a pragmatic headroom of capacity, and make their lives easier by consolidating the physical estate they manage. This extra headroom can be quickly back-filled when it’s taken up, by adopting a modular hardware architecture that keeps ahead of the next requirement.

Traditional infrastructure = OS/App Installations

  • 1 server per ‘workload’
  • Silo’d servers for support
  • Individually underused on average = overall wastage
  • No easy way to move workload about
  • Change = slow: person in DC, unplug, uninstall, move, reinstall etc.
  • HP/Dell/Sun Rack Mount Servers
  • Cat 6 Cables, Racks and structured cabling

The ideal is to have an OS/app stack whose workloads can be moved from host A to host B. This is a nice idea, but there are a whole heap of dependencies in the typical applications of today (IIS/Apache + scripts, RoR, SQL DB, custom .NET applications), and most big/important line-of-business apps are monolithic, which makes this hard. Ever tried to move a SQL installation from OLD-SERVER-A to SHINY-NEW-SERVER-B? Exactly. *NIX is better at this, but not that much better – downtime or complicated failover is required.

This can all be done today, and virtualization is the key to doing it. It makes it easy to move a workload from A to B because we no longer care about the OS/hardware integration – we standardise/abstract/virtualize it, and that allows us to move it quickly. A VM is just a file of disk blocks plus a bunch of configuration information in a text file… no obscure array controller firmware to extract data from or outdated NIC/video drivers to worry about.
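To illustrate the “bunch of configuration information in a text file” point, this is roughly what the interesting part of a VM’s .vmx file looks like – a trimmed, hypothetical example; the names and values here are made up:

displayName = "payroll-app-01"
guestOS = "winnetenterprise"
memsize = "2048"
numvcpus = "2"
scsi0:0.fileName = "payroll-app-01.vmdk"
ethernet0.virtualDev = "vmxnet"
ethernet0.networkName = "VLAN101-Production"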

Combine this with blade server hardware, modern VLAN/L3 switches with trunked connections, and virtualised firewalls, and you have a very compelling solution that is not only quick to change but also makes more efficient use of the hardware you’ve purchased… so each kWh you consume brings more return, not less, as you expand.

Now, move this forward and change the hardware for something much more commodity/standardised

Requirement: fast, scalable shared storage; flexible allocation of disk space; the ability to de-duplicate data and reduce overhead; thin provisioning.

Solution: SAN Storage, EMC Clariion, HP-EVA, Sun StorageTek, iSCSI for lower requirements, or storage over single Ethernet fabric – NetApp/Equalogic

Requirement: common chassis and server modules for quick, easy rip-and-replace and efficient power/cooling.

Solution: HP/Sun/Dell Blades

Requirement: quick change of network configurations, cross connects, increase & decrease bandwidth

Solution: Cisco switching, trunked interconnects, 10Gb/bonded 1GbE and VLAN isolation – quick change is enabled because, beyond the initial installation, there are fewer reasons to send an engineer to plug something in or move it; Checkpoint VSX firewalls to allow delegated firewall configuration, or to allow multiple autonomous business units (or customers) to operate from a shared, high-bandwidth platform.

Requirement: Ability to load balance and consolidate individual server workloads

Solution: VMware Infrastructure 3 + management toolset (SCOM, Virtual Centre, custom organisation-specific integrations using the API/SDK etc.)

Requirement: Delegated control of systems to allow autonomy to teams, but within a controlled/auditable framework

Solution: normal OS/app security delegation (Active Directory, NIS etc.), Virtual Center, Checkpoint VSX, and custom change-request workflow and automation systems plugged into the platform APIs/SDKs etc.

The following diagram is my reference architecture for how I see these cloud platforms hanging together:

image 

As ever more services move into the “cloud” or the “mesh”, integrating them becomes simpler; you focus less on the platform that runs them and just build what you need to operate your business.

In future, maybe you’ll be able to use public cloud services like Amazon AWS to integrate with your own internal cloud, allowing you to retain the important internal company data but take advantage of external utility computing as required, on demand.

I don’t think we’ll ever get to (or want) to be 100% in a public cloud, but a private/internal cloud allows an organisation to retain its own internal agility and data ownership.

I hope this post has demonstrated that whilst, architecturally, “cloud” computing sounds a bit out-there, you can practically implement it now by adopting this approach to the underlying infrastructure for your current application landscape.

Slow vMotion..

 

Note to remember: don’t forget to check the duplex settings on the NICs handling your vMotion traffic.

My updated clustered ESX test lab is progressing (more posts on that in the next week or so)… and I’m kind of limited in that I only have an old 24-port 100Mb Cisco hub for the networking at the moment.

vMotion warns about the switch speed as a possible issue.

image

I had my Service Console/vMotion NIC forced to 100/full, and when I first tried it, vMotion took 2hrs to get to 10%. I changed the NIC to auto-negotiate whilst the task was running and it completed in a couple of seconds without breaking the vMotion task, dropping only 1 ping to the VM I moved.
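If you want to check or change this from the service console rather than the VI client, esxcfg-nics does the job – a quick sketch (vmnic0 is just an example; check which vmnic carries your vMotion traffic first):

esxcfg-nics -l                     # list NICs with their current speed/duplex settings
esxcfg-nics -a vmnic0              # set a NIC back to auto-negotiate
esxcfg-nics -s 100 -d full vmnic0  # or force a specific speed/duplex if you really need to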

Cool – it’s not production or doing a lot of workload, but it’s useful to know that despite the warning it will work even if you’ve only got an old hub for your networking, and worth remembering that duplex mismatches can literally add hours or days onto network transfers.

Free SAN for your Home/Work ESX Lab

 

VM/Etc have posted an excellent article about a free iSCSI SAN VM appliance that you can download from Xtravirt

It uses replication between 2 ESX hosts to allow you to configure DRS/HA etc.

Excellent. I’m going to procure another cheap ESX host in the next couple of weeks, so I will post back on my experiences with setting this up; my previous plan meant I’d have to get a 3rd box to run an iSCSI server like OpenFiler to enable this functionality, but I really like this approach.

Sidenote – Xtravirt also have some other useful downloads, like Visio templates and an ESX deployment appliance, available here.