Install Telnet client on Windows 2008 R2
Windows 2008 R2 doesn’t ship with Telnet installed by default, so if you want to check connectivity on a specific port to rule out a firewall issue you’ll need to install it manually.
You can quickly do this from the command line with the following command (you don't need access to install media etc., as it's all slipstreamed into the default install):
pkgmgr /iu:"TelnetClient"
And once you've finished with it you can remove it with the following command – it's advisable to do this for security:
pkgmgr /uu:"TelnetClient"
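As a quick worked example, here's the whole round-trip for checking whether something is listening on a given port – the hostname and port below are placeholders, substitute whatever you're troubleshooting:

REM install the client, test the port, then tidy up afterwards
pkgmgr /iu:"TelnetClient"
REM a blank screen or a banner means the port is open; a connect error suggests a firewall or dead service
telnet mail.example.com 25
pkgmgr /uu:"TelnetClient"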
Shortcut key for CTRL-ALT-DEL on a Mac for a Fusion VM
Apologies if this is old-hat or obvious, but I didn't realize you could do this until recently. If you have a Windows VM and want to press CTRL-ALT-DEL to log on or bring up the lock screen etc., there isn't an obvious shortcut key as the Mac doesn't have a physical DEL key like a normal PC keyboard. If you're a Fusion user you can hit the Fusion menu bar and choose to send CTRL-ALT-DEL to the guest OS.
However, if you hit Fn-CTRL-ALT and Backspace on the Mac keyboard it has the same effect and sends CTRL-ALT-DEL to the guest (a four-finger salute, rather than the traditional three-finger salute).
Using Outlook 2010 with More than One Exchange Account Crashes Regularly
I quite liked the idea of a new feature in Outlook 2010: the ability to use more than one Exchange account at a time. People have been asking for this for years and it looked promising; in the past there was a work-around using virtualization, but it was very resource-intensive for most people's machines. Since I ditched Office for Mac 2011 and went back to a Win7 VM on my Mac I thought I would give it a try, as I hadn't tried it since the early beta builds of Office 2010 – sadly it seems things weren't much better in the RTM build.
I have a corporate Exchange account, but my own personal email is also hosted on an Exchange server with Fasthosts – who, as an aside, I can't really recommend anymore as they are still on Exchange 2003 and don't seem to have any plans to upgrade the service to 2007, let alone Ex2010, although the service has been pretty reliable in the 4 years I've used it.
However, I've found using Outlook in this dual-mailbox mode to be incredibly unreliable. It sets up fine, but within a couple of minutes Outlook locks up and becomes unresponsive – this seems to happen mainly when switching between inboxes. I've deleted and re-created profiles, .OSTs, .PSTs – everything – but I just can't get it to work reliably.
I wonder if anyone out there has managed it. I'm using Windows 7 x64 with Office 2010 x86 (not the x64 version, as per MS recommendations). Microsoft don't seem to make much noise about this new feature – maybe this is why.
**Note: your corporate security policy may explicitly say you can't do this – which is quite reasonable IMHO. I've done a lot of Exchange work in the past, and whilst the Outlook security model is massively better these days, a MAPI-savvy bit of malware that you bring into Outlook via an external account could still potentially do bad things – remember the ILOVEYOU worm?**
If you want to try it out for yourself you need to totally quit Outlook (it won't work if you have it open), then go into Control Panel and find the "Mail" control panel applet.
Click E-Mail Accounts
Click New, and follow the setup wizard; you'll then have two Exchange accounts in your profile.
Fire up Outlook and once it’s finished “preparing your mailbox for first use” you’ll see two Exchange accounts with calendars, inbox etc. in the folder view of the UI.
However, in my experience that's as good as it gets – it locks up shortly after. A shame, as Office and in particular Outlook 2010 are pretty damned good otherwise – feel free to post your experiences.
Windows 7 Remote Desktop Client – Nice Touch
You can download and install the new, updated RDC 7.0 client for free for any OS from Windows XP SP3 onwards (it ships with Windows 7). Known issues are listed here, a detailed per-OS feature comparison is here, and downloads for the various client OSes are below:
Update for Windows Vista, x86-based versions
Download the Update for Windows Vista for x86-based systems package now.
Update for Windows Vista, x64-based versions
Download the Update for Windows Vista for x64-based systems package now.
Update for Windows XP, x86-based versions
Download the Update for Windows XP for x86-based systems package now.
A nice touch that I discovered by accident is that you can move the title bar along the top of a full-screen RDP session by clicking and dragging. This is really handy if you work on multiple full-screen RDP sessions inside another RDP session – for example a jump-off box to a protected subnet, or when you have a full-screen application.
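If you use that jump-off box pattern a lot, it's also worth knowing you can skip the connection dialog entirely and launch a full-screen session straight from a Run box or batch file – the hostname below is a placeholder for your own jump box:

REM open a full-screen session to the jump box; run the same command
REM inside that session to hop on to hosts in the protected subnet
mstsc /v:jumpbox.example.com /f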
AKA Terminal Services Client, Remote Desktop Protocol Client, RDS Client, Remote Desktop Client, RDC, RDP, RDS 🙂
Enjoy
Redesigning Active Directory for 2010 and on..
Active Directory has been part of Windows since approximately 1998, when the betas of the initial Windows 2000 version were circulating. At the time directory services were Microsoft's answer to NT4's scalability woes and to the superior management Novell offered in NetWare 4.x. That was a radically different IT world {cue Wayne's World flashback}:
- Most people worked in a set of fixed locations; mobile workers were very much the minority.
- Those fixed locations had full all-ports network access to corporate resources; internal network and/or personal firewalls were unheard of.
- People who needed remote access to the network came in via dial-up or VPN-type access, with token or user/password-type authentication.
- Starbucks was NOT your office 🙂
- Your PC/laptop was owned by the company and you had less need to keep your personal online life running during work time or using work resources (you shopped in real shops and people still used the phone to communicate).
- Viruses existed, but the most prevalent forms propagated via infected documents and emails.
- Network connectivity was slow and/or expensive from remote locations
I've worked with Active Directory in a lot of depth during this time and it's an excellent and flexible tool. However, it's now 2009 and whilst Active Directory has been enhanced over the years, it isn't radically different in terms of supporting the way we work today.
There is still a very tight integration* between a workstation (domain member) and the domain/forest – this relies on periodic machine account password changes.
- All authentication and group policy activities like interactive logon, policy downloads etc. still require a large number of open ports and RPC services to function – this makes firewalls look like Swiss cheese, and doesn't work well in locations with latent or slow network connections (although there are tweaks, most of these involve turning off GPO processing on slow links).
- To provide remote access to domain and corporate services a VPN layer is required. This is OK, but a large part of the Windows interactive logon process still requires access to a domain controller at the CTRL-ALT-DEL logon screen – support for this is hacky at best when you are not on a full, all-ports-open network connection to the corporate domain. Third parties have custom GINA code that allows you to initiate a VPN connection before the logon is processed, but it's not a one-stop shop and users still *just don't get it*.
- Disconnected machines (like roaming sales people) rely heavily on cached credentials; these credentials are only refreshed when you make an interactive logon to the corporate network – which requires VPN, a large number of port rules, machine hygiene routines etc.
- User profiles/folder redirections don’t work particularly well in long-term disconnected scenarios and it’s difficult to maintain a consistent user profile environment for these users.
If you’ve ever had to re-build a user’s machine whilst disconnected from the network this can be a real issue.
*Machines can only be part of one domain at a time, they rely heavily on it for authentication and control.
Building standalone/workgroup machines is one answer, but then you have no way of managing those machines, tracking them, distributing configurations etc. It's too all-or-nothing; there is no middle ground in Active Directory at present. This also makes multi-tier firewalled application platforms problematic – do you put in multiple domains to support tiers/DMZs, or compromise security and use a single domain with wider firewall rules? If you put in workgroup machines, managing security across all of them is problematic; some Microsoft products (Exchange, etc.) require an Active Directory domain, and change is difficult.
In addition, high-speed Internet access is now very common and the move to "the cloud" is underway, with end-user devices being little more than very clever terminals.
Microsoft have made moves to support single sign-on for web applications with Active Directory Federation Services (AD-FS) in Windows 2008, but this is still geared at web applications rather than the core authentication and application services Microsoft's desktop and server OSes rely on for normal operations.
This is a list of the things I would like to see in a future Active Directory and/or add-on endpoint security checkers, to better support the coming generations of users who won't always be on the corporate LAN, or who purchase and use their own PC/laptop, as well as the needs of virtualization and dynamically-scaling infrastructure:
- Move authentication services to HTTP/S interfaces and away from RPC and dynamic ports.
- Make the group policy services available over the same HTTP/S interfaces
- This has already been done for Outlook/Exchange via the RPC over HTTP/S interface – Active Directory could use a similar concept for allowing access from external/edge services.
- Introduce a further class of machine to complement the traditional "computer" account: an "external managed machine" (or similar), where it isn't necessarily a direct member of the domain but you allow a degree of trust – maybe leveraging AD Federation Services. No local passwords held, but hashed with the core AD service, with an intermediate service (or core-OS component) to facilitate authentication between applications and the AD, maintaining backwards compatibility for anything that runs locally and relies on traditional Windows authentication.
- Allow all communication between these external managed devices and core infrastructure over HTTP/S – so as to be tolerant of latent connections and carried over common network services.
- Allow those managed external machines to be locally administered/installed/maintained etc. (think of the Windows Mobile phone or iPhone model that is used to allow access to Exchange email), but give each one a representative object in Active Directory that can be managed through policies or even disabled – even if that object is just a certificate for the device or some other representation, it should be accessible through the AD tools and scripting interfaces.
- Add support for configuration compliance scanning for external managed devices (end-point security) and centralised reporting – some of this is in next gen ISA tools.
- Support for transient (often virtual..) machines that are dynamically added to a domain and removed – think of the VDI model, where hundreds of machines could be created and destroyed automatically, leaving hundreds of "dead" machine accounts, plus the reboots needed to support the domain join operations (a clean-up one-liner for those stale accounts is sketched after this list).
- Support and manage a corporate PC "out on the Internet" as if it were in the office (..using web services/HTTP wrappers), much like we can with Outlook 2003+ and Exchange 2003+ using RPC over HTTP/S – no complicated and difficult-to-use local VPN client.
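As an aside, until something like this exists the best you can do is sweep up those dead machine accounts after the fact. A minimal sketch using the standard dsquery/dsrm tools (review the output carefully before deleting anything):

REM list computer accounts that have been inactive for 8+ weeks
dsquery computer -inactive 8 -limit 0
REM once you're happy with the list, pipe it into dsrm to remove them:
REM dsquery computer -inactive 8 -limit 0 | dsrm -noprompt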
What would you like to see?
As an addendum: apologies for the lack of posting recently on vinf.net, which has been due to the arrival of our second child – as you might imagine this has taken up a lot of my blogging time! Hopefully I'll get a bit more time in the coming months to support my habit!
Is your MS Application Supported under VMware, Hyper-V, Xen? – the DEFINITIVE Statement from Microsoft
A colleague has just made me aware of a new tool on the Microsoft website: a wizard that can tell you if a specific Microsoft app/OS/architecture combination is supported under the SVVP (Server Virtualization Validation Programme). I previously wrote about the SVVP here – it promised to resolve many of the pains we were experiencing.
The output from the SVVP programme has been compiled into a great web-based wizard that saves all the previous leg-work of reading several (sometimes conflicting) whitepapers – here you get it straight from the horse's mouth (so to speak).
You can access the wizard via this link:
http://www.windowsservercatalog.com/svvp.aspx?svvppage=svvpwizard.htm
The wizard lists all Microsoft products
The list of hypervisor platforms supported is shown below, and you can choose the OS version (Windows 2000 and later) and the CPU architecture (x86, x64 etc.)
And finally, the most important part – a definitive statement on support for this combination.
Excellent work Microsoft – come on, other vendors (Oracle, Sun – this means you…)
ExPrep – Script to Automate Exchange 2007 Pre-Requisite Installation
If you have ever had to install Exchange 2007 on a Windows 2008 (or 2003) server you will know that there are a number of pre-requisites that need to be installed from the OS for each role; for example, the IIS web services and metabase compatibility components.
You have two choices: do this via the UI using the Add Roles/Features wizards in Server Manager, or use the ServerManagerCmd.EXE command-line utility – either way it's pretty tedious if you have several servers to install.
Based on this handy reference from Microsoft I have built a very basic batch file that automates the installation of the pre-req components for you.
It only works on Windows 2008 (sorry, no 2003 equivalent) and you use it entirely at your own risk. There are much cleverer ways of scripting this, but I'm a pretty old-skool DOS person; this works for me and is easy for me to maintain. Feel free to re-write it in something more modern and post it back here – this code is probably quite hacky.
The contents of the file are here (just cut & paste into a .bat file):
@echo off
REM ExPrep.bat by Simon Gallagher, ioko (http://vinf.net)
REM YOU USE THIS SCRIPT ENTIRELY AT YOUR OWN RISK
SET EXPREP=999
echo Preparing for base pre-req install
ServerManagerCmd -i Web-Metabase
echo Choose a role to prepare: 1=Mailbox, 2=Clustered Mailbox, 3=Client Access, 4=Hub Transport
set /p EXPREP=Enter a number and press return:
echo you chose %EXPREP%
pause
if %EXPREP%==1 goto MBX
if %EXPREP%==2 goto MBX-CLUSTER
if %EXPREP%==3 goto CAS
if %EXPREP%==4 goto HT
goto END

:MBX
REM IIS components for the Mailbox role (per the Microsoft pre-req reference)
ServerManagerCmd -i Web-Server
ServerManagerCmd -i Web-Basic-Auth
ServerManagerCmd -i Web-Windows-Auth
ServerManagerCmd -i Web-Lgcy-Mgmt-Console
goto END

:MBX-CLUSTER
ServerManagerCmd -i Failover-Clustering
goto END

:CAS
ServerManagerCmd -i RPC-over-HTTP-proxy
goto END

:HT
REM the Hub Transport role needs nothing beyond the base pre-reqs
goto END

:END
Instructions:
1) Copy the script (ExPrep.bat) to your would-be Exchange server (remember Exchange 2007 is only supported on x64 in production, and this script only works on Windows 2008).
2) Run ExPrep.bat
3) Choose the appropriate role from the menu (note: there is no clever input validation – make sure you choose the correct one; there are pause statements before it actually does anything, so you can CTRL-C to break out).
4) Sit back and wait for it to complete.
5) Then run the Exchange 2007 installer from your DVD or network share as normal.
If you need to install multiple roles on a single server you can run the script multiple times; all changes are cumulative, and if a component is already installed ServerManagerCmd.EXE (which the script calls) will just skip it.
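If you want to sanity-check what's already on the box before or after running the script, ServerManagerCmd can also dump the current state of all roles and features, with installed items flagged in the output:

REM list every role/feature and whether it is currently installed
ServerManagerCmd -query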
If you wanted to take it further there is some excellent information about the setup process, failures and doing full unattended installations of Exchange 2007 here and here
Remember: you use this entirely at your own risk, and you assume full responsibility for checking its suitability for your environment. The batch file is easy to read and customize for your own use, although I ask that if you do make changes you link back here via a comment or trackback so that other people can benefit.
Windows 7 and the Intel 855GM Video Driver "Solution"
I've been playing about with Windows 7 in a VM for a while now, but now the beta is out I wanted to install it on a physical machine. I'm not ready yet to upgrade my main laptop to Windows 7 (although I have a cunning plan to P2V my Vista install and convert it to a VHD so I can dual-boot that way, which is a neat trick).
I have a Dell Inspiron 510m laptop that I use for testing things (I used it for my Platespin series) that I wanted to install Windows 7 on; it still gives pretty good performance and has 2GB RAM. The installation itself went smoothly and quickly – less than 45 minutes from format to finished first boot – but it doesn't detect the wireless or video card.
In my experience this isn't that unusual for a Dell, although the video did surprise me as Vista had a default driver for the Intel 855GM on-board video that worked well; it would seem there is no built-in driver in Windows 7.
So, a bit of a problem – I’m stuck with 640×480 VGA mode which isn’t much use.
I tried several ways to hack the Vista version of the driver into my installation, all without success – it always defaulted back to the standard VGA driver. There's some discussion here if you are interested.
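(For anyone wanting to try the same hackery: the built-in way to stage an .inf driver package under Vista and Windows 7 is pnputil from an elevated command prompt – the path below is a placeholder for wherever you've extracted the driver. In my case Windows still fell back to the VGA driver, hence the different approach below.)

REM add the driver package to the driver store and install it on matching devices
pnputil -i -a C:\drivers\855gm\display.inf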
In the end I came across a post suggesting an application called DriverMax, which is capable of exporting and importing installed drivers. I'd not tried it before but decided to give it a go. I knew Vista had a working 855GM driver, so the plan was to export it from there and import it into a Windows 7 installation, as I was unsure how to extract it from the Vista installation media.
This necessitated a format and reinstall of the Dell 510m with Vista, which was painless enough as I had an auto-install DVD that I'd previously built.
Once Vista was installed there was a working video driver running. I used DriverMax to export the working driver from the running OS – no source or driver CD required – via a couple of clicks in the UI, to a .zip file on a USB drive.
I then formatted the laptop and reinstalled Windows 7, and installed DriverMax again.
Then I simply imported the driver from the .zip file.
Note – it knows the driver I saved was a default Windows driver
Summary screen – important to note it can install unsigned drivers if required
After a reboot the Windows 7 installation is running with a working (full-res) video driver.
I did find one slight problem with DriverMax that I had to work around: with the default VGA video driver the buttons on its dialog boxes were inaccessible, and I couldn't resize or hot-key around them to progress – so in the end I had to do the whole process via Remote Desktop to the Win7 machine from another machine on my network over a wired LAN connection!
It's not an ideal solution as you have to have a working Vista installation to extract the driver from, and it's probably totally unsupported – this is essentially Windows 7 running a Vista video driver – but it's a beta anyway; hopefully MS or Intel will ship an 855GM driver again when Windows 7 goes RTM.
My initial impressions are that Windows 7 seems a lot more responsive than Vista, although to be fair it's a vanilla installation thus far. I have high hopes for the beta; by my reckoning the change in the code-base isn't as fundamental as it was between XP and Vista, so it's more focused on incremental features and performance improvements. I ran beta copies of Vista on my main work machine from Beta 1 through to RTM without too many problems – maybe I'll be confident enough to do that again this time around. The VHD booting feature is certainly compelling for what I do.
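For the curious, that native VHD boot trick is driven with bcdedit from an elevated prompt – a minimal sketch, assuming you already have a bootable image at the (placeholder) path below. Note Microsoft only supports Windows 7/Server 2008 R2 images for native VHD boot, so older OS images may not work this way:

REM clone the current boot entry and note the {GUID} it returns
bcdedit /copy {current} /d "Boot from VHD"
REM point the new entry at the VHD (substitute the real GUID and path)
bcdedit /set {guid} device vhd=[C:]\images\win7.vhd
bcdedit /set {guid} osdevice vhd=[C:]\images\win7.vhd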
Windows OS Code Patching
Interesting article here from the ntdebug blog on how hotfixes get integrated into the Windows code-base and update mechanism.
There have been some excellent posts recently on this blog offering detailed insight into the internals of Windows, if you’re interested in this kind of thing (like me) and general innards of Microsoft I’d also recommend Raymond Chen’s blog.
Many people underestimate the complexity of getting Windows out the door and keeping it serviced. I have to wonder just how well Apple* would cope given a similar scale of operation, without the luxury of a single "blessed" hardware platform, instead having to service a near-limitless number of combinations of 3rd-party hardware/software/firmware/drivers etc.
I've seen lots of "Windows is rubbish and my Mac is ace" discussions at work and socially recently. Whilst Windows definitely has its flaws, a more detailed analysis of the person's problem usually reveals that it's a 3rd-party app/device/driver that has caused it, for example:
- An outdated DivX codec giving poor performance or crashes when browsing directories with thumbnails – fix: updated codec.
- Vendor-supplied wireless drivers/utilities causing issues with sleep or disabling the network card – the default Windows driver was just as performant and fixed all the issues.
Microsoft get a lot of bad press around this, but it's actually because they have a pretty open framework and set of ISV/IHV/partner schemes that allow 3rd parties to tightly integrate their products (and thus profit from the Windows cash-cow). They have their HCL/SCL process, but it's not an absolute requirement for being allowed to install product X from ABC Inc.
*Not wishing to start a Mac/PC war – I use and like both, before you flame me – although I have used OSX under VMWare as well as on Apple hardware #naughty!
Cloud Wars: VMWare vs Microsoft vs Google vs Amazon Clouds
A short time ago in a data centre, far far away…..
All the big players are setting out their cloud pitches: Microsoft are set to make some big announcements at their Professional Developers Conference at the end of October, VMWare made their VDC-OS announcements at VMWorld a couple of weeks ago, Google have had their App Engine in beta for a while, and Amazon AWS is pretty well established.
With this post I hope to give a quick overview of each. I'll freely admit I'm more knowledgeable on the VMWare/Microsoft offerings, and I stand to be corrected on any assumptions I've made about Google/AWS based on my web reading.
So, what's the difference between them…?
VMWare vCloud – infrastructure led play
VMWare come from the infrastructure space, to-date they have dominated the x86 virtualization market, they have some key strategic partnerships with storage and network vendors to deliver integrated solutions.
The VMWare VDC-OS pitch is about providing a flexible underlying architecture through server, network and storage virtualisation. Why? Because making everything 'virtual' makes for quick reconfiguration – reallocating resource from one service to another is a configuration/allocation change rather than requiring an engineer visit (see my other post on this for more info).
Because VMWare's pitch is infrastructure-led it has a significant practical advantage in that it's essentially technology-agnostic (as long as it's x86-based). You, or a service provider, can build and maintain an automated birth–>death bare 'virtual metal' provisioning and lifecycle system for application servers/services, as there is no longer a tight dependency on physical hardware, cabling etc.
There is no one size fits all product in this space so a bespoke solution based around a standard framework tool like Tivoli, SMS, etc. is typically required depending on organisational/service requirements.
No re-development is necessarily required to move your applications into a vCloud (hosted or internal): you just move your VMWare virtual machines to a different underlying VDC-OS infrastructure, or you use P2V/X2V tools like Platespin to migrate onto a VDC-OS infrastructure.
In terms of limitations, apps can't necessarily scale horizontally (yet) as they are constrained by their traditional server-based roots. The ability to add a 2nd node doesn't necessarily make your app scale – there are all kinds of issues around state, concurrency etc. that the application framework needs to manage.
VMWare are building frameworks to build scale-out provisioning tools – but this would only work for certain types of applications and is currently reactive unless you build some intelligence into the provisioning system.
Scott Lowe has a good round-up of VDC-OS information here & VMWare’s official page is online here
Google AppEngine – pure app framework play
An application framework for you to develop your apps within – it provides a massively parallel application and storage framework, excellent for developing large applications (i.e. Google's bread and butter).
The disadvantage is that it requires a complete redevelopment of your applications into Google-compatible code, services and frameworks. You are tied into Google services – you can't (as I understand it) take your developed applications elsewhere without significant re-development/porting.
The Google AppEngine blog is here
Microsoft Cloud Services – hosted application stack & infrastructure play
An interesting offering: they will technically have the ability to host .NET applications from a shared hosting service, as well as integrating future versions of their traditional and well-established office/productivity applications into their cloud platform – almost offering the subscription-based/Software+Services model they've been mooting for a long time.
Given Microsoft's current market dominance, they are very well positioned to make this successful, as large shops will be able to modify existing internal .NET services and applications to leverage portions of their cloud offering.
With the future developments of Hyper-V Microsoft will be well positioned to offer an infrastructure driven equivalent of VMWare’s VDC-OS proposition to service and support migration from existing dedicated Windows and Linux servers to an internal or externally hosted cloud type platform.
David Chou at Microsoft has a good post on Microsoft and clouds here
Amazon Web Services – established app framework with canned virtualization
The AWS platform provides a range of the same sort of functionality as Google AppEngine with SimpleDB, SQS and S3, but the recently announced ability to run Windows within their EC2 cloud makes for an interesting offering, alongside the existing ability to pick & choose from Linux-based virtual machine instances.
I believe EC2 makes heavy use of Xen under the hood, which I assume is how they are going to deliver the Windows-based services; EC2 also allows you to choose from a number of standard Linux virtual machine offerings (Amazon Machine Images, AMIs).
This is an interesting offering, allowing you to develop your applications into their framework and possibly port or build your Linux/Windows application services into their managed EC2 service.
The same caveat applies though: your apps and virtual machines could be tied to the AWS framework, so you lose your portability without significant re-engineering. On the flip-side, they do seem to have the best-defined commercial and support models, and have been well established for a while with the S3 service.
Amazon’s AWS blog is available here
Conclusion
Microsoft & VMWare are best positioned to pick up business from the corporates, who will likely have a large existing investment in code and infrastructure but are looking to take advantage of reduced cost and complexity by hosting portions of their app/infrastructure with a service provider.
Microsoft & VMWare offerings easily lend themselves to this internal/external cloud architecture as you can build your own internal cloud using their off-the-shelf technology, something that isn’t possible with AWS or Google. This is likely to be the preferred model for most large businesses who need to retain ownership of data and certain systems for legal/compliance reasons.
Leveraging virtualization and commercial X2V or X2X conversion tools will make transitions between internal and external clouds simple and quick, which gives organisations a lot of flexibility to operate their systems in the most cost/load-effective manner, as well as retaining detailed control of the application/server infrastructure whilst being freed from the day-to-day hardware/capacity management roles.
AWS/Google are ideal for Web 2.0 start-ups and the SME sector, where there is typically no large existing code-base investment that would need to be leveraged. For a greenfield implementation these services offer low start-up costs and simple development tools to build applications that would be complicated & expensive if you had to worry about and develop the supporting infrastructure without significant up-front capital backing.
AWS/Google are also great for people wanting to build applications that need to scale to lots of users without a deep understanding of the required underlying infrastructure; whilst this is appealing to corporates, I think the cost of porting and the data ownership/risk issues will be a blocker for a significant amount of time.
Google Apps are a good entry point for the SME/start-up sector, and could well draw people into building AppEngine services as the business grows in size and complexity, so we may see a drift towards this over time. Microsoft have a competing model and could leverage their established brand to win over customers if they can make the entry point free/cheap and cross-platform compatible – lots of those SMEs/start-ups are using Macs or netbooks, for example.