Monthly Archives: January 2012

Preventing Oracle RAC node evictions during a Netapp failover

While undertaking some scheduled maintenance on our Netapp shared storage (due to an NVRAM issue) we discovered that some of our Oracle applications didn’t handle the controller outage as gracefully as we expected. In particular, several Oracle RAC nodes in our dev and test environments rebooted during the Netapp downtime. Strangely, this only affected our virtual Oracle RAC nodes, so our initial diagnosis focused on the virtual infrastructure.

Upon further investigation, however, we discovered that there are timeouts in the Oracle RAC clusterware settings which can result in node reboots (referred to as evictions) to preserve data integrity. This affects both Oracle 10g and 11g RAC database servers, although the fix for both is similar. NOTE: We’ve been running Oracle 10g for a few years but hadn’t had similar problems previously as the default timeout value of 60 seconds is higher than the 30 second default for 11g.

Both Netapp and Oracle publish guidance on this issue (see the Further Reading section below).

This guidance focuses on the DiskTimeOut parameter (known as the voting disk timeout) as this is impacted if the voting disk resides on a Netapp. What it doesn’t cover is the case where the underlying Linux OS also resides on the affected Netapp, as it can with a virtual Oracle server (assuming you want HA/DRS). In this case there is a second timeout value, misscount, which is shorter than the disk timeout (typically 30 seconds instead of 200). If a node can’t reach any of the other RAC nodes within misscount seconds it will start split-brain resolution and probably evict itself from the cluster by rebooting. When the Netapp failed over, our VMs were freezing for longer than 30 seconds, causing the reboots. Once we’d increased the network timeout we were able to fail over our Netapps with no impact on the virtual RAC servers.

NOTE: A cluster failover (CFO) is not the only event which can trigger this behaviour. Anything which impacts the availability of the filesystem, such as I/O failures (faulty cables, failed FC switches etc) or delays (multipathing changes), can have a similar impact. Bear in mind that changing the timeout parameters affects the availability of your RAC cluster: increasing the value means the other RAC nodes take longer to react to a genuine node failure.

Configuring the clusterware network timeouts

The changes need to be applied within the Oracle application stack rather than at the Netapp or VMware layer. On the RAC database server, check the cssd.log logfile to understand the cause of the node eviction. If you think it’s due to a timeout you can change it using the below command;

# $GRID_HOME/bin/crsctl set css misscount 180 

To check the new setting has been applied;

# $GRID_HOME/bin/crsctl get css misscount
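
If your voting disk also lives on the affected Netapp you can adjust the disktimeout value discussed above in the same way. As a rough example (200 seconds is the documented default rather than a tuned recommendation for your environment);

# $GRID_HOME/bin/crsctl set css disktimeout 200
# $GRID_HOME/bin/crsctl get css disktimeout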

The clusterware needs a restart for these new values to take effect, so bounce the cluster;

# $GRID_HOME/bin/crs_stop -all
# $GRID_HOME/bin/crs_start -all
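
If you’re running 11gR2, where the crs_stop/crs_start commands are deprecated, the newer crsctl syntax achieves the same bounce;

# $GRID_HOME/bin/crsctl stop cluster -all
# $GRID_HOME/bin/crsctl start cluster -all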

Further Reading

Netapp Best Practice Guidelines for Oracle Database 11g (Netapp TR3633). Section 4.7 in particular is relevant.

Netapp for Oracle database (Netapp Verified Architecture)

Oracle 10gR2 RAC: Setting up Oracle Cluster Synchronization Services with NetApp Storage for High Availability (Netapp TR3555).

How long it takes for Standard active/active cluster to failover

Node evictions in RAC environment

Troubleshooting broken clusterware

Oracle support docs (login required);

  • NOTE: 284752.1 – 10g RAC: Steps To Increase CSS Misscount, Reboottime and Disktimeout
  • NOTE: 559365.1 – Using Diagwait as a diagnostic to get more information for diagnosing Oracle Clusterware Node evictions
  • NOTE: 265769.1 – Troubleshooting 10g and 11.1 Clusterware Reboots
  • NOTE: 783456.1 – CRS Diagnostic Data Gathering: A Summary of Common tools and their Usage

VCAP5 exams – on your marks….

In last night’s VMware Community podcast John Hall, VMware’s lead technical certification developer, gave some tidbits of information about the upcoming VCAP5 exams;

  • There will be an expedited path for those with VCAP4 certifications BUT it will be similar to the VCP upgrade in that it’ll be a time-limited offer. He didn’t specify exactly what form this would take, but with the VCP upgrade you have roughly six months to take the new exam with no course prerequisites. I’m guessing you’ll have a similar period where the VCP5 prerequisite doesn’t apply.

With the upcoming Feb 29th deadline for the VCP5 exam you’d better get your study skates on. If you don’t take the VCP5 before the 29th and you’re not in a position to take the new VCAP5 exams in the ‘discount’ period (however long that turns out to be) you might find yourself needing to sit a What’s New course and pass the VCP5 exam before you’re even eligible for the VCAP5 exams. Not a pleasant thought!

PowerCLI v5 – gotcha if you use guest OS cmdlets

UPDATE FEB 2012 – After some further testing I’ve concluded that this is a bigger pain than I previously thought. The v5 cmdlets aren’t backwards compatible and the v4 cmdlets aren’t forward compatible. This means that while you’re running a mixed environment with VMs on v4/v5 VMtools, a single script can’t run against them all. Think audit scripts, AV update scripts etc. You’ll have to run the script twice, from two different workstations: one running PowerCLI v4 (against the v4 VMs) and one running PowerCLI v5 (against the v5 VMs). And I thought this was meant to be an improvement??

———- original article ————–

There are quite a few enhancements in PowerCLI v5 (there’s a good summary at Julian Wood’s site) but if you make use of the guest OS cmdlets proceed with caution!

We have an automated provisioning script which we use to build new virtual servers. This does everything from provisioning storage on our backend Netapps to creating the VM and customising configuration inside the guest OS. The guest OS configuration makes use of the ‘VMGuest’ family of cmdlets;

  • Invoke-VMScript
  • Copy-VMGuestFile
  • Get-VMGuest, Restart-VMGuest etc

Unfortunately since upgrading to vSphere 5 and PowerCLI v5 we’ve discovered that the guest OS cmdlets are NOT backwards compatible! This means if you upgrade to PowerCLI v5 but your hosts aren’t running ESXi v5 and, more importantly, the VMtools aren’t the most up to date version, any calls using the v5 cmdlets (such as Invoke-VMScript) will no longer work. Presumably this is due to the integration of the VIX API into the base vSphere API – I’m guessing the new cmdlets (via the VMtools interface) now require the built-in API as a prerequisite.

As PowerCLI is a client-side install the workaround is to have a separate install (on another PC for example) which still runs PowerCLI v4, but we have our vCenter server set up as a central scripting station (it’s simpler than every member of the team keeping up with releases, plugins etc) so this is definitely not ideal.
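
If you need to work out which VMs fall on each side of the divide you can filter on the VMtools version exposed via the vSphere API. A rough sketch – note the 8300 build cut-off is my assumption (ToolsVersion numbering isn’t well documented) so verify the values against your own environment first;

# Split VMs by VMtools version so each scripting station only
# targets the generation it can talk to
# NB the 8300 cut-off is an assumption - check ToolsVersion in your estate
$vms = Get-VM | Where-Object { $_.ExtensionData.Guest.ToolsVersion }
$v4vms = $vms | Where-Object { [int]$_.ExtensionData.Guest.ToolsVersion -lt 8300 }
$v5vms = $vms | Where-Object { [int]$_.ExtensionData.Guest.ToolsVersion -ge 8300 }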

This is covered in VMware KB2010065. The PowerCLI v5 release notes are also worth a read.

Further Reading

Will Invoke-VMGuest work? (LucD)

Is the HP power setting impacting your performance?

In a great blogpost Andre Leibovici highlighted a default HP BIOS setting which could be impacting the performance of your VMs if your environment matches the following;

  • low physical CPU utilisation
  • higher than expected CPU %Ready times

Julian Wood has also blogged about this issue (Your HP blades may be underperforming) but neither goes into much detail about the fix. Having investigated, I thought I’d record it here for others’ convenience.

To check for these symptoms you could use the VI client, ESXTOP in batch mode combined with the batch processing scripts in the vMA to capture pCPU statistics from a group of servers, or PowerCLI (see the sketch below) – whichever suits your skillset.
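
As a rough PowerCLI sketch (the host name below is a placeholder): cpu.ready.summation is reported in milliseconds per sampling interval, and realtime samples cover 20 seconds (20,000ms), so converting to a percentage looks something like this;

# Average %Ready per VM on one host over the last hour (180 x 20s samples)
# NB the aggregate instance sums across vCPUs, so multi-vCPU VMs can exceed 100%
$stats = Get-VMHost myhost | Get-VM |
  Get-Stat -Stat cpu.ready.summation -Realtime -MaxSamples 180 |
  Where-Object { $_.Instance -eq '' }
$stats | Group-Object Entity | ForEach-Object {
  $avgMs = ($_.Group | Measure-Object Value -Average).Average
  '{0} : {1:N2}% ready' -f $_.Name, ($avgMs / 20000 * 100)
}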

We run HP C-class blades and after checking the VMware knowledgebase article KB1018206 and a sample of our BIOS settings we found that it applied to us too – not surprising as we don’t modify the BIOS defaults during provisioning.

Using a mixture of ESXTOP and vCenter’s performance charts I was able to confirm that the %CPU Ready was hovering around the 4% mark even when the physical host was using less than 15% pCPU. After changing the power setting the same VMs (under a similar load) dropped to under 1% CPU Ready (the change was made at 17:00 if you look at the graph).
Not necessarily a show stopper but definitely an improvement.

For my infrastructure (with around 160 physical blades) changing them all was a time consuming process (and could potentially be disruptive depending on whether your ESX/i hosts are all clustered).

You can check the current power management setting in various ways;

  • in the BIOS settings (slow and potentially disruptive)
  • via the ILO (under Power Management, Power settings) or via the ILO CLI
  • in the VI client. If the underlying BIOS is set to Dynamic Power Savings it’ll show as ‘Not Supported’, ie the hardware is controlling power management. Where to check depends on your version of ESX (or ESXi);
    • For a 4.0 host go to Configuration -> Processors and look at the Power Management settings.
    • For a 4.1 host go to Configuration -> Power Management and look at the Active Policy. You can also configure it using the Properties button.
  • You can also use PowerCLI (ESX4 only) by querying the host’s Advanced setting ‘Power.CpuPolicy’ – see the cluster-wide sketch below;
    Get-VMHost myhost | Get-VMHostAdvancedConfiguration -Name Power.CpuPolicy
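
To check every host in a cluster rather than one at a time, something along these lines should work (the cluster name is a placeholder);

# Report the power policy for each host in a cluster
Get-Cluster 'Prod-Cluster' | Get-VMHost | ForEach-Object {
  $cfg = Get-VMHostAdvancedConfiguration -VMHost $_ -Name Power.CpuPolicy
  '{0} : {1}' -f $_.Name, $cfg['Power.CpuPolicy']
}
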
Changing power saving via the ILO


Error adding datastores to ESXi resolved using partedUtil

UPDATE Sept 2015 – there is new functionality in the vSphere Web Client (v6.0u1) that allows you to delete all partitions – good info via William Lam’s website. Similar functionality will come to the ESXi Embedded Host Client in a later update.

UPDATE March 2015 – some people are hitting a similar issue when trying to reuse disks previously used by VSAN. The process below may still work but there are a few other things to check, as detailed here by Cormac Hogan.

Over the Christmas break I finally got some time to upgrade my home lab. One of my tasks was to build a new shared storage server, and it was while installing the base ESXi (v5, build 469512) that I ran into an issue. I was unable to add any of the local disks to my ESXi host as VMFS datastores, getting the error “HostDatastoreSystem.QueryVmfsDatastoreCreateOptions” for object ‘ha-datastoresystem’ on ESXi…, as shown below;

The VI client error when adding a new datastore

I’d used this host and the same disks previously as an ESX4 host so I knew hardware incompatibility wasn’t an issue. Just in case, I tried VMFS3 (instead of VMFS5) with the same result. I’ve run into a similar issue before with HP DL380 G5s, where the workaround is to use the VI client connected directly to the host rather than to vCenter. I connected directly to the host but got the same result. At this point I resorted to Google as I had a pretty specific error message. One of the first pages was this helpful blogpost at Eversity.nl (it’s always the Dutch, isn’t it?) which confirmed it was an issue with pre-existing or incompatible information on the hard disks. There are various situations which might leave pre-existing info on the disk;

  • Vendor array utilities (HP, Dell etc) can create extra partitions or fail to finalise the partition creation
  • GPT partitions created by Mac OSX, ZFS, W2k8 r2 x64 etc. Microsoft have a good explanation of GPT.

This made a lot of sense as I’d previously been trialling this host (with ZFS pools) as a NexentaStor CE storage server.
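
As the post title suggests, the fix is to clear the leftover partition information using partedUtil from the ESXi shell. A rough sketch of the sort of commands involved – the naa device name is a placeholder for your own disk, and the delete/mklabel commands are destructive so double-check you’ve picked the right device. List the existing partition table first, delete any leftover partitions (partition 1 in this example), and if necessary stamp a fresh msdos label over a GPT remnant;

# partedUtil getptbl /vmfs/devices/disks/naa.xxxxxxxxxx
# partedUtil delete /vmfs/devices/disks/naa.xxxxxxxxxx 1
# partedUtil mklabel /vmfs/devices/disks/naa.xxxxxxxxxx msdos

With the old partition information gone the VI client should let you create the VMFS datastore.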
