My ramblings on the stuff that holds it all together
Category Archives: Geeky
Scary story here – not exactly practical, but a demonstrated attack on an implanted pacemaker/defibrillator.
If you’re not in the same country as the wordpress.com server farms, chances are your HTTP request arrived here courtesy of one of these undersea fibre-optic cables… fascinating stuff – you sometimes forget about the complicated (and expensive!) physical infrastructure that underpins your browsing on t’internet!
More info on how this works is here, courtesy of Wikipedia.
Now, I’ve been a bit skeptical about the iPhone. I’ve played with a few – nice to use, but very much a 1.0 product from a software point of view (great hardware, except for the battery). This link from Engadget gives a transcript of the SDK announcement/press conference – more here too.
Looks like there are some good apps coming, plus support for Exchange over the air via ActiveSync (EAS) – this will be a big selling point. Most current EAS-compatible devices are Windows Mobile and, IMHO, quite poor from a usability point of view; this could change all that… the touch interface opens up a lot of interesting possibilities.
Interestingly, apps will be available for the iPod touch too (at a nominal cost), making it a compelling proper PDA/media platform rather than “just a big video iPod”.
We’ll see how things go, but that’s the only announcement that has even piqued my interest in getting one at some point. iTunes is neat and easy to use (a bit slow, though) and will be the primary method for downloading apps.
**update: BBC iPlayer now available for the iPhone. Cool – shame it’s not 3G capable yet or that really would be compelling!**
As the Hoff posts here and on VMTN here, the proposed vulnerability – that you can manipulate and possibly compromise a VM during a VMotion operation – isn’t exactly major; it’s clever, but, like anything, if you don’t follow the best-practice recommendations you expose yourself to these risks. It’s the same reason they recommend you lock your server room or don’t allow blank passwords – this attack is akin to gaining physical access to the hardware or being able to sniff a physical switch port; in this instance, it’s “virtual” hardware.
VMWare have always recommended keeping the VMotion traffic on a separate VLAN or network.
The other vulnerability, where VMTools can be compromised, is different – but again preventable, and not enabled on server instances of VMWare.
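As a rough sketch of that recommendation, isolating VMotion traffic on ESX 3.x could look something like the following from the service console. The vSwitch, NIC, port group and VLAN values here are made-up examples, not recommendations:

```shell
# Sketch only: put VMotion on its own vSwitch/VLAN so migration traffic
# can't be sniffed from the general-purpose network.
# vSwitch1, vmnic1, VLAN 100 and the IP are illustrative placeholders.
esxcfg-vswitch -a vSwitch1                 # create a dedicated vSwitch
esxcfg-vswitch -L vmnic1 vSwitch1          # uplink it to its own physical NIC
esxcfg-vswitch -A VMotion vSwitch1         # add a port group for VMotion
esxcfg-vswitch -v 100 -p VMotion vSwitch1  # tag that port group onto VLAN 100
esxcfg-vmknic -a -i 10.10.10.1 -n 255.255.255.0 VMotion  # VMkernel NIC for VMotion
```

The point is simply that the VMkernel interface used for VMotion ends up on a network segment nothing else can see.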
Have been playing with a few new widgets, and I figured out how to add HTML code into the pages that wordpress.com hosts.
If you need to do it – just add a “Text” widget and then you can put any HTML code you like in that and it gets processed as part of the page load.
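For example, a Text widget body can contain plain HTML like this (the feed URL and icon path below are made-up placeholders for illustration):

```html
<!-- Example Text-widget body: any HTML here is rendered as-is on page load.
     The URLs are illustrative placeholders, not real links. -->
<a href="http://feeds.feedburner.com/example-feed">
  <img src="http://example.com/images/feed-icon16x16.png"
       alt="Subscribe via RSS" />
</a>
```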
I’ve also added Feedburner for my RSS feeds. I’ve seen a big spike in traffic to my blog over the last week and I’m trying to figure out where it is coming from… the default wordpress.com stats (where this site is hosted) don’t really go into much more detail than the number of hits, and they don’t seem to tally with the search-engine results or click referrals – so maybe Feedburner will shed some light on it.
Otherwise, pop a comment on this post and let me know what you find interesting and I’ll try to tailor some content around your needs; the “How to deploy a virtual machine from a template” post seems to be the most popular so far.
Martin’s post here prompted me to blog something I’ve been meaning to do for a while.
Virtualization projects and services are cool; we all understand the advantages in power/cooling and the flexibility it can bring to our infrastructures.
But what about support? If you are a service provider (internal or outsourcing) you normally need to be able to offer an end-to-end SLA on your services; typically this would be backed off against a vendor like Microsoft or Oracle via one of their premium support arrangements.
From what I see in the industry, with most software vendors – especially Microsoft – there is almost no way a service provider can underwrite an SLA, as application/OS vendors give themselves significant scope to say “unsupported configuration” if you are running under a hypervisor or other VM technology… Microsoft use the term “commercially reasonable” in their official policy – who decides what that is?
I would totally accept that a vendor would not guarantee performance under a hypervisor – that’s understandable, and we have tools to analyse, monitor and improve it (Virtual Centre, MOM, DRS, increasing resources, etc.). But too many vendors seem to use it as a universal “get out of jail free” card.
Issues of applications with dependencies on physical hardware aside (fax cards, real-time CPU, DSP, PCI cards, etc.), in my entire career working with VM technology I’ve only ever seen one issue that could be directly attributed to virtualization – and to be fair that was really a VMTools issue rather than VMWare itself.
Microsoft have an official list of their applications that are not supported here – why is this? Speech Server I could maybe understand, as it would probably be timer/DSP sensitive – but the rest? SharePoint? I know for a fact ISA does work under VMWare, as I use it all the time.
Microsoft Virtual Server support policy http://support.microsoft.com/kb/897613
Support policy for Microsoft software running in non-Microsoft hardware virtualization software http://support.microsoft.com/kb/897615/
Exchange is specifically excluded (depending on how you read the articles):
· The Exchange Server 2007 system requirements page only mentions Unified Messaging as being unsupported in a virtual environment http://technet.microsoft.com/en-us/library/aa996719.aspx
· Yet on TechNet it is clearly stated that “Neither Exchange 2007 nor Exchange 2007 SP1 is supported in production in a virtual environment” http://technet.microsoft.com/en-us/library/bb232170(EXCHG.80).aspx
Credit due to a colleague for pulling together the relevant Microsoft linkage
But I know it….
a) works fully – I do it all the time.
b) Lots of people are doing this in production with lots of users (many people at VMWorld US last year)
c) VMWare have a fully-supportable x64 hypervisor – it’s just MS that don’t.
Don’t tell/ask – 99% of the time a tech-support rep won’t know it’s running under VMWare/a.n.other hypervisor, so why complicate matters by telling them? Could, of course, back-fire on you!
Threaten – “If you won’t support it under VMWare, we’ll use one of your competitors’ applications”; however, this only really works if you are the US govt. or Globocorp Inc., or operate in a very niche application market.
Mitigate – reflect this uncertainty in the SLA, best endeavours, etc.; though this would kill most virtualization efforts in their tracks for an enterprise customer.
The same support issue has been around for a long time: Citrix/Terminal Services, application packaging, automated installations, etc. are all treated as “get out of jail free” cards by support organisations…
But whilst there are some technical constraints (usually only affecting badly written apps) with Terminal Services and packaging, virtualization changes the game and should make support simpler for a vendor: there is no complex runtime integration with a host OS plus bolt-ons/hacks – it’s just an emulated CPU/disk/RAM, and you can do whatever you like within it.
So – the open debate; what do you do? and how do you manage it?
There’s an interesting post over on the Forrester Research blog by James Staten. He’s talking some more about data centres in a container – making the data centre the FRU (field-replaceable unit) rather than a server or server components (disk, PSU, etc.).
This isn’t a new idea, and I’m sure economies of scale currently mean it is only suitable for the computing super-powers (Google, Microsoft – MS are buying them now!) – but variances in local power/comms costs could soon force companies to adopt this approach rather than be tied to a local/national utility company and their power/comms pricing.
But just think: if you are a large outsourcing-type company, you typically reserve, build and populate data centres based on customer load. That load can be variable; customers come and go (as much as you would like to keep them long-term, this is becoming a commodity market and customers demand that you react quickly to changes in THEIR business model – which is typically why they outsource: they make it YOUR problem to service their needs).
It would make sense if you could dynamically grow and shrink your compute/hosting facility based on customer demand in this space – that’s not so easy to do with a physical location, as you are tied to it in terms of power availability/cost and lease period.
A new suite build-out at a typical co-lo company can take 1–2 months to establish networking, racks, power distribution, cabling, operational procedures, etc. (and that’s not including physical construction if it’s a new building) – adopting the blackbox approach could significantly reduce the start-up time and increase your operational flexibility.
Rather than invest in in-suite structured cabling, racks and reusable (or dedicated) server/blade infrastructure, why not just have terminated power, comms and cooling connections and plug containers in as required within a secured, warehouse-like space?
You could even lease datacentre containers from a service provider/supplier to ensure there is no cap-ex investment required to host customers.
If your shiny new data centre runs out of power, you could relocate it a lot more easily (and cheaply), as it’s already transportable rather than tied to the physical building infrastructure; you are able to follow the cheapest power and comms – nationally or even globally.
As I’ve said before the more you virtualize the contents of your datacentre the less you care about what physical kit it runs on… you essentially reserve power from a flexible compute/storage/network “grid” – and that could be anything/anywhere.
I’ve not done anything with my home ESX server this week as I’ve been busy with work, so this will be interesting – it’s been powered up the whole time with all the VMs spinning, but not doing very much.
Whilst running this set of VMs… (the CPU stats for VMEX01 and VMEX02 are a bit skewed, as I added this bit after the original post and they are both running seti@home – hence the increased CPU).
So, nothing interesting to see here – but it might be worth bearing in mind for some kind of sizing estimate; this is a single-core CPU (HT enabled) PC with 4GB RAM and a single 500GB SATA disk.
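As a back-of-the-envelope illustration of that kind of sizing estimate, something like the following works – the overhead and per-VM numbers are made-up placeholders, not measurements from this box:

```python
# Rough sizing sketch: how many mostly-idle VMs fit on a host,
# given total RAM and a fixed per-VM allocation.
# All figures below are illustrative assumptions, not real measurements.

def max_idle_vms(host_ram_mb: int, reserved_mb: int, per_vm_mb: int) -> int:
    """Return how many VMs of per_vm_mb fit after host/hypervisor overhead."""
    return (host_ram_mb - reserved_mb) // per_vm_mb

# e.g. a 4GB host, ~800MB assumed for the service console + hypervisor,
# 256MB allocated to each idle VM
print(max_idle_vms(4096, 800, 256))  # → 12
```

Real capacity would of course be lower once the VMs actually do something, but it gives a starting point for planning.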
My home office setup has a 20″ widescreen Dell TFT which I use with my laptop in an elevated docking station – my laptop has a rather low screen resolution as it’s quite small, so this makes a great dual-monitor setup. The widescreen is handy for keeping a web browser open to refer to online documentation while working on documents or large Visio diagrams.
The only gripe is that a lot of web pages (like the BBC) waste a lot of the widescreen real-estate as they format (or don’t re-format) for different screen resolutions.
The Split Browser plug-in for Firefox (my favourite browser) allows you to essentially have multiple browser sessions and sub-tabs in one full-screen window.
It has loads of options – if the screen layout gets a bit confusing, you can bring all the split pages back to one window with multiple tabs, and vice versa.
The (also useful) IE Tab plug-in means some of those sub-pages can also be rendered using IE – but all within Firefox.
Firefox has such a good community of developers that I have always been able to find a plug-in that does exactly the odd feature I “need”.