Virtualization, Cloud, Infrastructure and all that stuff in-between
My ramblings on the stuff that holds it all together
Press F8 to enter CIMC configuration does not work on a Cisco UCS C-Series rack-mount server
I encountered this today. On a system that is not managed by UCS Manager, to set up the CIMC (the HP iLO equivalent) you need to connect a screen and keyboard to the physical console to set the initial IP address.
You do this by pressing F8 at the BIOS screen; however, I couldn't get this to work.
The fix was simple, if a bit weird: I had accidentally cabled the management NIC to the serial console port on the back of the server. If you do this, the server recognises you pressing F8 but then boots to a flashing prompt – I assume this is because it switches to some sort of serial console interface instead of displaying the UI on the screen.
For reference: connect the Ethernet management cable to the correct NIC, as below.
Hands-On Lab 12: Cisco Nexus 1000v Switch
This lab was very popular, but I got there early this morning so didn't have to wait. It takes you through configuring the new Cisco Nexus virtual switch; I was keen to understand how it works and how it integrates with vSphere.
It works like this:
vSphere ESX is likely to ship with three types of virtual switch:
- vSwitch – the normal vSwitch that has always been there
- Virtual Distributed Switch – an enhanced vSwitch that can share a configuration and state across multiple ESX hosts; administered via the vSphere Client (formerly the VI Client)
- NX1000v Virtual Switch from Cisco
The NX1000v will be included in the ESX build but is a separately licensed product which you will buy from VMware (via some kind of OEM agreement); you will enable it via a licence key, and there are two components:
- VEM – Virtual Ethernet Module – runs inside the hypervisor, but you don't see it as a 'normal' VM – think of it in the same way as the service console is an 'internal' VM in ESX 3.5
- VSM – Virtual Supervisor Module – this is what you use to administer the VEM, and it presents the familiar Cisco IOS-style CLI – you can use all the normal Cisco commands. It's downloadable as an .OVF, but it has been mentioned that it will also be available as a physical device – maybe a blade in one of the bigger Nexus chassis?
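To make the VEM/VSM split concrete: the VSM treats each VEM as if it were a line card in a modular chassis, so the attached ESX hosts show up under the normal module commands. A rough, from-memory illustration of what 'show module' prints on the VSM (the exact columns and values here are illustrative, not captured from the lab):

nexus1000v# show module
Mod  Ports  Module-Type                Model        Status
---  -----  -------------------------  -----------  --------
1    0      Virtual Supervisor Module  Nexus1000V   active *
3    248    Virtual Ethernet Module    NA           ok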
You can only carry out basic configuration via the vSphere Client; most of it is done via the IOS-style CLI or your Cisco-compatible configuration management tools – it really is the same as a physical Cisco switch, just virtualized.
My lab had some problems which the Cisco hands-on lab guys tried to fix: port-group config was set on the vSwitch but wasn't propagating to the vSphere UI/ESX config. They couldn't fix it in time, and I restarted the lab on an alternative machine, which worked fine. This is still a pre-release implementation so it's not surprising, but it does suggest that there is some back-end process on the VEM/VSM that synchronises configuration with ESX.
The HoL walks through configuring a port group on the NX1000v and then applying advanced ACLs to it, for example to filter RPC traffic. The UI gives quite a lot of information about port status and traffic, but most of the interface is via the CLI.
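From memory, the configuration on the VSM looks roughly like the sketch below – a port profile that gets published to vSphere as a port group, with an ACL attached. The profile name, VLAN and the RPC endpoint-mapper port (TCP 135) are my own illustrative choices, not taken from the lab guide:

! ACL to filter RPC endpoint-mapper traffic (TCP 135 used for illustration)
ip access-list BLOCK-RPC
  deny tcp any any eq 135
  permit ip any any
!
! Port profile - published to vSphere as a port group of the same name
port-profile WebServers
  vmware port-group
  switchport mode access
  switchport access vlan 100
  ip port access-group BLOCK-RPC in
  no shutdown
  state enabled

Once the profile is enabled it appears in the vSphere Client as a port group you can attach VM NICs to, which is the part that makes it feel like one switch spanning many hosts.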
All in, an interesting lab – as good as the presentation sessions were, it makes it much easier to understand *how* these things work at a practical level when you get your hands on the UI.
The basic proposition is this: if you don't have "network people" or just need basic switch capabilities, the vSwitch and Virtual Distributed Switch suit the understanding and needs of most "server people" just fine; but if you need more advanced management and configuration tools, or need "network people" to support the ESX switching infrastructure, then this is the way to go.
Answers on the Cisco Nexus vSwitch – what is it and is vShield the same?
Just seen this post and was particularly interested in how the Cisco vSwitch works – it is shipped as part of ESX and enabled/unlocked by a licence key; you need to download an OVF virtual appliance to manage it.
That answers one of the big things I've been meaning to find out whilst I'm here. I also attended a session on vShield Zones and came away with a mixed bag of thoughts – is it a baked-in part of the next version of ESX, or does it run in a virtual machine? I have resolved to head for the hands-on labs to try it out for myself; hopefully I will get time.
Virtualization – the key to delivering "cloud-based architecture" NOW.
There is a lot of talk about delivering cloud or elastic computing platforms. A lot of CxOs are taking this all in and nodding enthusiastically; they can see the benefits… so make it happen! …yesterday.
Moving your services to the cloud isn't always about giving your apps and data to Google, Amazon or Microsoft.
You can build your own cloud and be choosy about what you give to others. Building your own cloud makes a lot of sense; it's not always cheap, but it's the kind of thing you can scale up (or down) with a bit of up-front investment. In this article I'll look at some of the practical, more infrastructure-focused ways in which you can do so.
Your "cloud platform" is essentially an internal shared-services system where you can actually and practically implement a "platform" team that operates and capacity-plans the cloud platform; they manage its availability, day-to-day maintenance, and expansion/contraction.
You then have a number of "service/application" teams that subscribe to services provided by your cloud platform team. They are essentially the developers/support teams that manage individual applications or services (for example payroll, SAP, web sites), business units, stakeholders etc.
Using the technology we discuss here you can delegate control to them over most aspects of the service they maintain – full access to app servers etc., and an interface (human or automated) to raise issues with the platform team or log change requests.
I've seen many attempts to implement this in the physical/old world and it just ends in tears, as it builds a high level of expectation that the server/infrastructure team must be able to respond very quickly to the end-"customer". The customer/supplier relationship is very different, regardless of what OLA/SLA you put in place.
However, the reality of traditional infrastructure is that the platform team usually can't react as quickly as the service/application teams need/want/expect, because they need an engineer on-site, or have to wait for an order and a delivery, a network provisioning order, etc. (although banks do seem to have this down quite well, it's still a delay… and time is money, etc.)
Virtualization and some of the technology we discuss here enable the platform team to keep one step ahead of the service/application teams: it allows them to do proper capacity planning and maintain a pragmatic headroom of capacity, and makes their lives easier by consolidating the physical estate they manage. This headroom can be quickly back-filled when it's taken up, by adopting a modular hardware architecture to keep ahead of the next requirement.
Traditional infrastructure = OS/App Installations
- 1 server per ‘workload’
- Siloed servers for support
- Individually underused on average = overall wastage
- No easy way to move workload about
- Change = slow: person in the DC, unplug, uninstall, move, reinstall, etc.
- HP/Dell/Sun Rack Mount Servers
- Cat 6 Cables, Racks and structured cabling
The ideal is to have an OS/app stack whose workloads can be moved from host A to host B. This is a nice idea, but there is a whole heap of dependencies in the typical applications of today (IIS/Apache + scripts, RoR, SQL databases, custom .NET applications). Most big/important line-of-business apps are monolithic, which today makes this hard. Ever tried to move a SQL installation from OLD-SERVER-A to SHINY-NEW-SERVER-B? Exactly. *NIX is better at this, but not that much better – downtime or complicated failover is required.
This can all be done today, and virtualization is the key to doing it. It makes it easy to move a workload from A to B because we don't care about the OS/hardware integration – we standardise/abstract/virtualize it, and that allows us to move it quickly. It's just a file and a bunch of configuration information in a text file… no obscure array controller firmware to extract data from, or outdated NIC/video drivers to worry about.
Combine this with blade server hardware, modern VLAN/L3 switches with trunked connections, and virtualised firewalls, and you have a very compelling solution that is not only quick to change but makes more efficient use of the hardware you've purchased – so each kWh you consume brings more return, not less, as you expand.
Now, move this forward and change the hardware for something much more commodity/standardised:
Requirement: fast, scalable shared storage; flexible allocation of disk space; the ability to de-duplicate data and reduce overhead; thin provisioning.
Solution: SAN storage – EMC CLARiiON, HP EVA, Sun StorageTek; iSCSI for lower requirements, or storage over a single Ethernet fabric – NetApp/EqualLogic.
Requirement: common chassis and server modules for quick, easy rip-and-replace and efficient power/cooling.
Solution: HP/Sun/Dell Blades
Requirement: quick changes to network configurations and cross-connects, and the ability to increase and decrease bandwidth.
Solution: Cisco switching with trunked interconnects, 10GbE or bonded 1GbE links, and VLAN isolation. Quick change is enabled because, beyond the initial installation, there are far fewer reasons to send an engineer to plug something in or move it; Check Point VSX firewalls allow delegated firewall configuration, or let multiple autonomous business units (or customers) operate from a shared, high-bandwidth platform. (A sketch of a typical switch-side uplink configuration follows this list.)
Requirement: the ability to load-balance and consolidate individual server workloads.
Solution: VMware Infrastructure 3 + management toolset (SCOM, VirtualCenter, custom integrations specific to your environment using the APIs/SDKs, etc.).
Requirement: delegated control of systems to give teams autonomy, but within a controlled/auditable framework.
Solution: normal OS/app security delegation (Active Directory, NIS etc.), VirtualCenter, Check Point VSX, and custom change-request workflow and automation systems plugged into the platform APIs/SDKs.
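To make the networking requirement concrete, here is a minimal sketch of the kind of switch-side uplink configuration this implies, in Cisco IOS syntax – a dot1q trunk from the access switch down to an ESX host NIC. The interface, description and VLAN numbers are illustrative assumptions, not from any particular build:

! Uplink from the access switch to an ESX host NIC (illustrative values)
interface GigabitEthernet1/0/10
 description esx01 vmnic0 uplink
 switchport trunk encapsulation dot1q
 switchport mode trunk
 ! carry only the VLANs the ESX port groups actually use
 switchport trunk allowed vlan 100,200,300
 spanning-tree portfast trunk

Adding a VLAN for a new business unit then becomes a two-line change (define the VLAN, add it to the allowed list) rather than a cabling job.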
The following diagram is my reference architecture for how I see these cloud platforms hanging together.
As ever more services move into the "cloud" or the "mesh", integrating them becomes simpler; you have less of a focus on the platform that runs it all, and just build what you need to operate your business.
In future, maybe you'll be able to use public cloud services like Amazon AWS to integrate with your own internal cloud, allowing you to retain the important internal company data but take advantage of external, utility computing as required, on demand.
I don't think we'll ever get (or want) to be 100% in a public cloud, but this private/internal cloud allows an organisation to retain its own internal agility and data ownership.
I hope this post has demonstrated that whilst "cloud" computing sounds a bit out-there architecturally, you can practically implement it now by adopting this approach for the underlying infrastructure of your current application landscape.
VMware/Cisco Switching Integration
As noted here, there is a doc jointly produced by VMware and Cisco which has all the details required for integrating VI virtual switches with physical switching.
Especially handy if you need to work with networking teams to make sure things are configured correctly to allow proper failover between redundant switches/fabrics etc. – it's not as simple as it looks, and people often forget the switch-side configurations that are required.
Doc available here (c. 3MB PDF)
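As an example of the sort of switch-side detail that gets forgotten: with the default ESX NIC teaming policy each uplink is an independent trunk, so the physical ports must not be bundled into an EtherChannel (that is only correct for the 'route based on IP hash' policy), and trunk PortFast avoids spanning-tree convergence delays when traffic fails over between NICs. A hedged IOS sketch – interfaces and VLANs are illustrative:

! Two redundant ESX uplinks - one independent trunk per physical NIC
! (do NOT channel-group these unless the vSwitch uses IP-hash teaming)
interface GigabitEthernet0/1
 description esx01 vmnic0 - vSwitch0 trunk
 switchport mode trunk
 switchport trunk allowed vlan 10,20
 switchport nonegotiate
 spanning-tree portfast trunk
!
interface GigabitEthernet0/2
 description esx01 vmnic1 - vSwitch0 trunk (redundant path)
 switchport mode trunk
 switchport trunk allowed vlan 10,20
 switchport nonegotiate
 spanning-tree portfast trunk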
Useful Document for Getting your Network Teams up to Speed with VMware and its Virtual Networking
Doc from Cisco Here.
Cisco ASR is Virtual to the Core, all 40 of them!
Interesting article here on how Cisco has made heavy use of virtualization within its new ASR series router platform – Linux underneath and 40-core CPUs!
This type of approach does make me wonder whether we will get to the stage of running traditional "network" and "storage" services as VMs under a shared hypervisor alongside traditional "servers", totally removing the dependency on dedicated or expensive single-vendor hardware.
Commodity blade platforms like the HP or Sun blade systems are so powerful these days, and with their flexible interconnect/expansion options this type of approach makes a lot of sense to me.
Maybe one day it will go the other way and all your Windows boxen will run inside a Cisco NX7000, lol!
On reflection, maybe all those companies have too much of a vested interest in vendor lock-in and hardware sales to make this a reality!