Tag Archives: vsphere5

Space: the final frontier (gotcha upgrading to vSphere5 with NFS)

———————————————–

UPDATE March 2012 – VMware have just confirmed that the fix will be released as part of vSphere5 U2. Interesting, because as of today (March 15th) update 1 hasn’t even been released – how much longer will that be, I wonder? I’m also still waiting for a KB article but it’s taking its time…

UPDATE May 2012 – VMware have just released article KB2013844 which acknowledges the problem – the fix (until update 2 arrives) is to rename your datastores. Gee, useful…  🙂

———————————————–

For the last few weeks we’ve been struggling with our vSphere5 upgrade. What I assumed would be a simple VUM-orchestrated upgrade turned into a major pain, but I guess that’s why they say ‘never assume’!

Summary: there’s a bug in the upgrade process whereby NFS mounts are lost during the upgrade from vSphere4 to vSphere5;

  • if you have NFS datastores with a space in the name
  • and you’re using ESX classic (ESXi is not affected)

Our issue was that after the upgrade completed, the host would start back up but the NFS mounts would be missing. As we use NFS almost exclusively for our storage this was a showstopper. We quickly found that we could simply remount the NFS datastores with no changes or reboots required, so there was no obvious reason why the upgrade process hadn’t remounted them. With over fifty hosts to upgrade, however, the manual intervention required meant we couldn’t automate the whole process (OK, PowerCLI would have done the trick but I didn’t feel inspired to code a full solution) and we aren’t licensed for Host Profiles, which would also have made life easier.
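
Remounting is trivial to script, mind you. Here’s a minimal PowerCLI sketch of what a fix-up pass could look like; the vCentre, filer, datastore and export names are all hypothetical, so substitute your own:

```powershell
# Re-add any NFS datastores the upgrade dropped (run against vCentre)
Connect-VIServer vcentre.example.com      # hypothetical vCentre name

$nfsServer = 'netapp01'                   # hypothetical filer
$mounts = @{                              # datastore name -> NFS export path
    'NFS Datastore 01' = '/vol/nfs_datastore_01'
    'NFS Datastore 02' = '/vol/nfs_datastore_02'
}

foreach ($esx in Get-VMHost) {
    $existing = $esx | Get-Datastore | Select-Object -ExpandProperty Name
    foreach ($name in $mounts.Keys) {
        if ($existing -notcontains $name) {
            # No reboot or rescan required - the mount comes straight back
            New-Datastore -Nfs -VMHost $esx -Name $name -NfsHost $nfsServer -Path $mounts[$name]
        }
    }
}
```

Workaround or not, we wanted to understand the failure. Thus started the process of reproducing and narrowing down the problem.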

  • We tried both G6 and G7 blades as well as G6 rack mount servers (DL380s)
  • We used interactive installs using a DVD of the VMware ESXi v5 image
  • We used VUM to upgrade hosts using both the VMware ESXi v5 image and the HP ESXi v5 image
  • We upgraded from ESX v4.0u1 to ESX v4.1 and then on to ESXi v5
  • We used storage arrays with both Netapp ONTAP v7 and ONTAP v8 (to minimise the possibility of the storage array firmware being at fault)
  • We upgraded hosts both joined to and isolated from vCentre

Every scenario we tried produced the same issue. We also logged a call with VMware (SR 11130325012) and yesterday they finally reproduced and identified the issue as a space in the datastore name. As a workaround you can simply rename your datastores to remove the spaces, perform the upgrade, and then rename them back. Not ideal for us (we have over fifty NFS datastores on each host) but better than a kick in the teeth!
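
The renaming itself is also scriptable. A minimal PowerCLI sketch, assuming spaces are the only offending characters and that the underscored names don’t collide with anything else in the inventory:

```powershell
# Before the upgrade: strip spaces from NFS datastore names
$originalNames = @{}
foreach ($ds in Get-Datastore | Where-Object { $_.Type -eq 'NFS' -and $_.Name -like '* *' }) {
    $safeName = $ds.Name -replace ' ', '_'
    $originalNames[$safeName] = $ds.Name           # remember what to restore
    Set-Datastore -Datastore $ds -Name $safeName   # the rename is live; VMs carry on regardless
}

# After the upgrade: put the original names back
foreach ($safeName in $originalNames.Keys) {
    Set-Datastore -Datastore (Get-Datastore -Name $safeName) -Name $originalNames[$safeName]
}
```

One nicety: the rename happens at the vCentre level, so it only needs doing once per datastore rather than once per host.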

There will be a KB article released shortly so until then treat the above information with caution – no doubt VMware will confirm the technical details more accurately than I have done here. I’m amazed that no-one else has run into this six months after the general availability of vSphere5 – maybe NFS isn’t taking over the world as much as I’d hoped!  I’ll update this article when the KB is posted but in the meantime NFS users beware.

Sad I know, but it’s kinda nice to have discovered my own KB article. Who’d have thought that having too much space in my datastores would ever cause a problem? 🙂

Error adding datastores to ESXi resolved using partedUtil

UPDATE Sept 2015 – there is new functionality in the vSphere Web Client (v6.0u1) that allows you to delete all partitions – good info via William Lam’s website. Similar functionality is due in the ESXi Embedded Host Client in a later update.

UPDATE March 2015 – some people are hitting a similar issue when trying to reuse disks previously used by VSAN. The process below may still work but there are a few other things to check, as detailed here by Cormac Hogan.

Over the Christmas break I finally got some time to upgrade my home lab. One of my tasks was to build a new shared storage server, and it was while installing the base ESXi (v5, build 469512) that I ran into an issue. I was unable to add any of the local disks to my ESXi host as VMFS datastores, getting the error “HostDatastoreSystem.QueryVmfsDatastoreCreateOptions” for object ‘ha-datastoresystem’ on ESXi…, as shown below;

The VI client error when adding a new datastore

I’d used this host and the same disks previously as an ESX4 host, so I knew hardware incompatibility wasn’t an issue. Just in case, I tried VMFS3 (instead of VMFS5) with the same result. I’ve run into a similar issue before with HP DL380 G5s, where the workaround is to use the VI client connected directly to the host rather than to vCentre. I connected directly to the host but got the same result. At this point I resorted to Google as I had a pretty specific error message. One of the first pages was this helpful blogpost at Eversity.nl (it’s always the Dutch, isn’t it?) which confirmed it was an issue with pre-existing or incompatible information on the hard disks. There are various situations which might lead to pre-existing info on the disk;

  • Vendor array utilities (HP, Dell etc) can create extra partitions, or leave partition creation unfinished
  • GPT partitions created by Mac OS X, ZFS, W2k8 R2 x64 etc. Microsoft have a good explanation of GPT.

This made a lot of sense, as I’d previously been trialling this host (with ZFS pools) as a NexentaStor CE storage server.
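
That’s where partedUtil earns its keep. From the ESXi shell (local console or SSH) you can inspect the offending disk’s partition table and, if you’re happy the data is expendable, write a fresh blank label over the top. A minimal sketch - the naa.* device ID below is made up, so substitute your own from /vmfs/devices/disks:

```sh
# Find the device ID of the problem disk
ls /vmfs/devices/disks/

# Show the existing partition table - in my case a stale GPT label
# with leftover ZFS partitions
partedUtil getptbl /vmfs/devices/disks/naa.60050000000000000000000000000001

# Write a fresh, empty msdos label over it (this destroys anything on the disk!)
partedUtil mklabel /vmfs/devices/disks/naa.60050000000000000000000000000001 msdos
```

With the old label gone, the Add Storage wizard behaved itself again.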

Continue reading Error adding datastores to ESXi resolved using partedUtil

Gunfight at the ‘OK’ Corral: could you change hypervisors?

In my article The Good, the Bad, and the Ugly I discussed the controversial licensing change which is coming with vSphere5. Many people are saying they’ll move to a competing hypervisor to escape these potentially higher license fees, and even though my company aren’t facing this issue (our vRAM entitlement is sufficient in the short term at least) at some point my management team are going to (or should!) ask me to justify the expense and whether there are suitable alternatives. Most people I speak to acknowledge that the competition can’t compare with vSphere for features or maturity, but they do discuss when the alternatives will be ‘good enough’ to satisfy the more basic requirements (and at a cheaper price?). So is now the time for the competition to shoot down vSphere?

Gunfight at the ‘OK’ Corral!

I needed facts, so I set out to see how feasible a change would be and whether the benefits would justify it. For the purposes of this article I’m going to concentrate on the three main virtualisation vendors recognised as leaders by Gartner – VMware (vSphere), Citrix (XenServer) and Microsoft (Hyper-V). I’m also going to focus purely on my own environment – I don’t know XenServer or Hyper-V well enough to do a general purpose comparison and there are too many factors to consider in a single blogpost.
PS. If you’re after a general comparison I’d suggest starting with Andreas Groth’s virtualisation matrix. This excellent site lets you see at a glance the feature sets of the three main hypervisors and even generate custom reports. Note that the site starts with the free versions of ESXi and XenServer selected for comparison; you can use the menus on the left to change the version for each solution as required – nice!

Before even worrying about general performance, stability, quality of support, roadmaps etc I thought I’d do a feature check specific to my environment. We’re primarily using our VMware platform for server consolidation – we’ve played the P2V game for all but a few tier 1 apps and now use it heavily for dev and test environments, which are 100% virtual. As an Enterprise (not Enterprise+) licensee we don’t have access to some of the higher end features (distributed switches, host profiles, SIOC) nor are we using the extended VMware ecosystem such as SRM, Cloud Director, Orchestrator etc. Given our relatively simple use of virtualisation I suspected we’d be a good candidate for the ‘good enough’ competitors. Comparing vSphere Enterprise vs Hyper-V Enterprise vs XenServer Enterprise Edition, I found that;

  • We use storage vMotion all the time to rearrange our underlying storage for capacity or performance reasons, or to migrate to new Netapp arrays etc. Moving to a rival hypervisor would mean losing this functionality, as neither XenServer nor Hyper-V offers a completely nondisruptive migration (see the sketch after this list). Given the downtime this would cause the business it would either result in lots of out-of-hours work (with associated overtime costs) or disruption to the business – both of which I know they’d rather pay more to avoid.
  • Alongside various flavours of Windows we run a significant number of Oracle Enterprise Linux and Red Hat Enterprise Linux servers. When I last looked, back in early 2010, Hyper-V only supported a single vCPU for Linux VMs, and while it now supports vSMP (up to 4, the same as our Enterprise licence of vSphere) only RHEL and SUSE are officially supported. A quick Google shows that OEL does work, but that’s another argument altogether. XenServer supports both RHEL and Oracle Enterprise Linux (v4 and v5, both of which we use).
  • We use plenty of VLANs on our ESX blades (HP C-Class), which Hyper-V would cope with but XenServer would not: it requires management ports to be ‘access ports’, and in blades with limited pNICs we’d have a problem. We could work around it using HP’s Virtual Connect, Xsigo etc, but that’s more cost and complexity.
  • We currently use NFS for the majority of our VMware estate, and while our underlying storage arrays offer both FC and iSCSI (and we have a SAN fabric in place) it’s not a change we’d make lightly. XenServer supports NFS but Hyper-V does not. We have in-house expertise on the other protocols, but switching means changing our processes, provisioning scripts, documentation, training etc. It’s also a significant technical change, so it would consume quite a lot of time in change requests and implementation. Management would want the time and risks involved clearly justified.
  • We currently get nearly 50% memory overcommit on our ESX hosts, a feature which saves us money on hardware purchases and isn’t available in either competing hypervisor. Hyper-V does offer Dynamic Memory, but it doesn’t work with Linux VMs, which rules it out for us. With vSphere5 and the new vRAM licensing this benefit is largely lost, however.
  • We’ve used Update Manager to a significant degree and while Hyper-V offers similar functionality via WSUS (which we already have deployed), XenServer is more limited.
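
To put the first point in context, a Storage vMotion is a one-liner in PowerCLI and the VMs stay powered on throughout - that’s the capability we’d be giving up. A minimal sketch with hypothetical datastore names:

```powershell
# Evacuate every VM from an old Netapp volume onto a new one, live
Get-VM -Datastore 'netapp_old_vol01' |
    Move-VM -Datastore (Get-Datastore 'netapp_new_vol01')
```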

Conclusions

For my specific circumstances the competition is not ‘OK’ because we’d lose functionality we rely on.

This will vary for everyone, and will be completely different if you’re just starting down the virtualisation road and don’t have a feature set to match up to (in which case this VMware vs XenServer cost calculator or VMware vs Hyper-V cost comparison might help). Could we work around all the issues above? Sure we could, but would it be cost effective? Having already paid for our VMware licensing we aren’t going to simply drop the technology; at best we’d add new capacity using an alternative hypervisor and slowly migrate all hosts to the new platform. If we did go down that road then we’d have the challenge of running a multi-hypervisor infrastructure, at least in the short term – increased training, increased complexity, limited toolsets (most support a single hypervisor only), interoperability issues etc.

The whole reason behind this research was to see if we could save money, and whether that in turn justified a switch. This is always tricky as it’s rarely an ‘apples to apples’ comparison, but my brief findings were that any cost saving would be eaten up by new toolsets, training, migration costs etc. I’d also note that as we’re entitled to vSphere5’s new features for no extra cost, the competition is going to have to improve further still to make this change feasible in the future.

If the recent licensing change means your costs will increase or you just want to reduce vendor lock in I’d recommend doing the same comparison for your infrastructure to see how feasible a change really is. I suspect VMware are able to raise prices (even if only for the alleged minority) because they know that for most people it’s not a viable or particularly attractive option.

Further reading

Is Hyper-V good enough?

This free online training from Microsoft Virtual Academy is a good place to learn more about Hyper-V.

XenServer and Hyper-V make the ‘leaders’ quadrant

Why VMware continues to dominate despite Hyper-V advances

vSphere5 licensing – the good, the bad, and the ugly

The announcement on 12th July about vSphere5 was largely overshadowed by the furore around licensing changes. My gut reaction was much like many other people’s – anger that VMware seemed to be charging more for the same functionality. If you want a feel for customer feedback, this VMware communities thread is a good place to start, or see how many posts on the ESXi v5 forums relate to licensing. I’ve now reached phase 5 of ‘the LonelySysAdmin’s 5 stages of VMware licensing grief‘ – acceptance.

The Good

  • I’ve done the maths for my environment (thanks to Hugo Peters for the PowerCLI script to check) and I’m one of the 90% that VMware claim will see no increase in costs. We’re using about 62% of our vRAM entitlement (2.1TB of the 3.4TB allowable) so have some growth factored in. So far, so good, and not a big surprise as I knew we didn’t push our current infrastructure too hard. A rough sketch of the maths appears after this list.
  • At the recent London VM user group there was a similar feeling – many people were OK with the licensing today but had concerns about the future.
  • There are no longer any restrictions on number of cores per socket. My company use Enterprise rather than Enterprise+ so without this change we’d be restricted to six cores per socket, a limit we’ve already reached.
  • Service providers aren’t affected by the recent changes. They’re already on a different licensing model which isn’t based on vRAM (the VMware Service Provider Program)
  • New VDI users can use the vSphere Desktop edition which doesn’t include the vRAM based license model. Our company haven’t gone down the VDI route yet, so we’re not impacted by the upgrade issues (see below).
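
For anyone wanting to do the same maths, it’s simple enough: consumption is the vRAM allocated to powered-on VMs, and the pool is your licensed CPU count multiplied by the per-CPU entitlement for your edition. A rough PowerCLI sketch along those lines (not Hugo’s actual script; the 32GB figure is the Enterprise entitlement as announced at launch, so adjust to suit):

```powershell
# vRAM consumed: memory allocated to powered-on VMs
$usedGB = (Get-VM | Where-Object { $_.PowerState -eq 'PoweredOn' } |
    Measure-Object -Property MemoryMB -Sum).Sum / 1024

# vRAM pool: physical CPU sockets multiplied by the per-CPU allowance
$sockets = (Get-VMHost | Get-View |
    ForEach-Object { $_.Hardware.CpuInfo.NumCpuPackages } |
    Measure-Object -Sum).Sum
$poolGB = $sockets * 32   # 32GB per Enterprise CPU licence at launch

"Using {0:N0}GB of {1:N0}GB pooled vRAM ({2:P0})" -f $usedGB, $poolGB, ($usedGB / $poolGB)
```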

Continue reading vSphere5 licensing – the good, the bad, and the ugly