Yearly Archives: 2011

Netapp daily checks – available inodes/maxfiles

Prior to buying Netapp Operations Manager we used to run lots of daily checks to ensure the uptime and health of our Netapp controllers. Many of these checks were written using the Data ONTAP Powershell Toolkit so I thought I’d post them up in case they’re of use to anyone else.

First up is a function to check the ‘maxfiles’ value (the maximum number of files, or inodes, a volume can hold). This is typically a large number (often in the millions) and is based on the volume size, but we had an Oracle process which dumped huge numbers of tiny files on a regular basis, consuming all the available inodes. This article only covers checking for these occurrences – if you need a fix I’d suggest checking out Netapp’s advice or this discussion for possible solutions.

Simply add the function (below) to your Powershell profile (or maybe build a module) and then a Powershell one-liner can be used to check;

connect-NaController yourcontroller | get-NaMaxfiles -Percent 30

This will give you output like this;

Controller : Netapp01
Name       : test_vol01
FilesUsed  : 268947
FilesTotal : 778230
%FilesUsed : 35

Controller : Netapp01
Name       : test_vol02
FilesUsed  : 678111
FilesTotal : 1369688
%FilesUsed : 50

And here’s the function;

function Get-NaMaxfiles {
<#
.SYNOPSIS
 Find volumes where the maxfiles (inode) usage is greater than a specified threshold (default 50%).
.DESCRIPTION
 Find volumes where the maxfiles (inode) usage is greater than a specified threshold (default 50%).
.PARAMETER Controller
 NetApp controller to query (defaults to the current controller if not specified).
.PARAMETER Percent
 Filters the results to volumes where the percentage of files used is greater than the number specified. Defaults to 50% if not specified.
.EXAMPLE
 connect-NaController zcgprsan1n1 | get-NaMaxfiles -Percent 30

 Get all volumes on filer zcgprsan1n1 where the number of files used is greater than 30% of the max available
#>
    [CmdletBinding()]
    Param(
        [Parameter(Mandatory=$false,ValueFromPipeline=$true)]
        [NetApp.Ontapi.Filer.NaController]
        $Controller = ($CurrentNaController),

        [Parameter(Mandatory=$false)]
        [int]
        $Percent = 50
    )
    Process {
        # check that a controller has been specified (via the pipeline or Connect-NaController)
        if ($Controller -eq $null) {
            throw "No controller specified - run Connect-NaController first or use the -Controller parameter"
        }
        $exception = $null
        try {
            # find volumes where the percentage of files (inodes) used exceeds the threshold
            $vols = $null
            $vols = Get-NaVol -Controller $Controller -ErrorAction "Stop" |
                Where-Object { $_.FilesTotal -gt 0 -and ($_.FilesUsed / $_.FilesTotal) * 100 -gt $Percent }
            # check that at least one volume matched on this controller
            if ($vols -ne $null) {
                foreach ($vol in $vols) {
                    # calculate the percentage of files used and add fields to the volume object
                    $filesPercent = [int](($vol.FilesUsed / $vol.FilesTotal) * 100)
                    Add-Member -InputObject $vol -MemberType NoteProperty -Name Controller -Value $Controller.Name
                    Add-Member -InputObject $vol -MemberType NoteProperty -Name "%FilesUsed" -Value $filesPercent
                }
            }
        }
        catch {
            $exception = $_
        }
        if ($exception -eq $null) {
            # sort by the calculated %FilesUsed so the worst offenders appear first
            $returnValue = ($vols | Sort-Object -Property "%FilesUsed" -Descending |
                Select-Object -Property "Controller","Name","FilesUsed","FilesTotal","%FilesUsed")
        }
        else {
            $returnValue = $exception
        }
        return $returnValue
    }
}
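As these were daily checks, it’s worth wrapping the function in a small script you can schedule. Here’s a minimal sketch which loops over several controllers and emails any results – the controller names, addresses and SMTP server below are placeholders, and it assumes the DataONTAP Toolkit (and the function above) are already loaded;

```powershell
# check every controller and collect volumes over 30% inode usage
# (controller names below are examples - substitute your own)
$controllers = "Netapp01","Netapp02","Netapp03"
$results = foreach ($filer in $controllers) {
    Connect-NaController $filer | Get-NaMaxfiles -Percent 30
}
# email the report if anything was found (addresses and SMTP server are placeholders)
if ($results) {
    Send-MailMessage -To "storage-team@example.com" -From "checks@example.com" `
        -Subject "Netapp maxfiles report" -SmtpServer "smtp.example.com" `
        -Body ($results | Format-List | Out-String)
}
```

Schedule it with Windows Task Scheduler (or your scheduler of choice) and you’ve got a basic daily check without Operations Manager.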

NVRAM problems on Netapp 3200 series filers

———————————————–

UPDATE FEB 2012 – Netapp have just released a firmware update for the battery and confirmed that all 32xx series controllers shipped before Feb 2012 are susceptible to this fault. You can read more (including instructions for applying the update – it’s NOT click, click, next) via the official Netapp KB article. I’ll be applying this to my production controllers soon so I’ll let you know if I encounter any problems.

———————————————–

Recently (Dec 2011) I’ve been experiencing a few issues with the newer Netapp filers at my work, specifically the 3240 controllers. There is currently a known issue with NVRAM battery charging which, if you’re not aware of it, can result in unplanned failovers of your Netapp controllers. This applies to the 3200 series (including the v3200 and SA320).

We have six of these controllers and my first warning (back at the beginning of November) was an autosupport email notification;

Symptom: BATLOW:HA Group Notification from <myfilername> (BATTERY LOW) WARNING

This message indicates that the NVRAM or NVMEM battery is below the minimum voltage required to safeguard data in the event of an unexpected disruption.

If the system has been halted and powered off for some time, this message is expected. This message repeats HOURLY as long as the NVRAM or NVMEM battery is below the minimum voltage; if you are using ONTAP version 7.5, 8.1, or greater with an appliance that uses an NVMEM battery, the error will repeat WEEKLY.

When the storage controller is up and running, the battery will be charged to its normal operating capacity and this message should stop. However, if this message persists, there may be a problem with the NVRAM or NVMEM battery.

This was unexpected, but a faulty backup battery wasn’t an immediate priority – after all, it’s only required to protect against power failures or controller crashes, which are pretty rare. A few days later it became a high priority after the controller failed over unexpectedly. This failover was actually triggered by the low battery level and is expected behaviour, as documented in Netapp KB2011413, though it’s not made overly clear that a controller shutdown is the default action if the battery issue persists for 24 hours. I logged a call with Netapp but they were unaware of any systemic issues and, despite my pointing out that this was affecting all six of our controllers, they simply sent replacement NVRAM batteries and suggested we swap them all out. I posted a question on the Netapp forums but at the time no-one else seemed to be having the same issue. The new batteries were duly fitted and the problem seemed to be resolved – I’ve since rechecked our battery charges and they’re stable at around 150 hours.

An update in an email we received from Netapp on the 22nd December now states that it’s a known firmware issue with a permanent fix currently expected in Feb 2012. Netapp advise that further downtime will be required to implement the fix when it’s made available.

Don’t ignore low battery alerts!

Continue reading NVRAM problems on Netapp 3200 series filers

The London VMware usergroup (26th Jan 2012)

It’s that lovely time of year again (and I don’t mean Xmas!) when the next London VMware usergroup is open for registration! If you’re not familiar with the LonVMUG (where have you been?) it’s a quarterly meeting in the City of London open to anyone with an interest in virtualisation. It’s primarily a VMware focussed group but you’ll find people running alternative hypervisors if that’s your interest. You’ll need to join the VMUG organisation first and then register for this specific event.

If you haven’t attended before you may be wondering “What’s in it for me?”. Off the top of my head I’d say the following;

  • Everyone at the LonVMUG has something to say and useful experiences. Find people with the same challenges as you and get talking!
  • Hear about third party products (with demos)
  • Get hands on with Labs
  • Meet the experts and ask questions. There’s a lot of collective knowledge at the average VMUG with vExperts aplenty;
    • Fancy meeting one of a rare breed, a VCDX? Chris Kranz (@ckrantz) will be in attendance on the 26th Jan, I swear he knows everything about everything!
    • Into Powershell/PowerCLI? How about Jonathan Medd (@jonathanmedd) or Al Renouf (@alanrenouf) – Powershell gurus and book authors!
    • Using EMC at work or thinking of building a home or work lab? Seek out Simon Seagrave (@kiwi_si) – EMC and home labs guru
    • Maybe you’re an ISP and you want to know more about the VMware cloud offerings? Then seek out Simon Gallagher (@vinf_net) – vCloud specialist, vTardis inventor
    • Are you an SME with a broad interest in all things virtualisation? Barry Coombs (@virtualisedreal) is often along and specialises in this market.
    • Disaster recovery your thing? Mike Laverick’s written the book on SRM (several revisions in fact) and he can often be found dispensing his wisdom on a multitude of topics, both during the day and in the pub afterwards.
    • …and too many others to mention!
  • Best of all this is all free!

The 26th Jan 2012 agenda (or download the PDF version);

  • 10:00 – 10:15 – Welcome
  • 10:15 – 11:00 – Intelligent Application Awareness in VMware Environments (Lorenzo Galelli, Symantec)
  • 11:00 – 11:45 – Would you like fries with your VM? (Chris Kranz)
  • 11:45 – 12:15 – Break in Thames Suite
  • 12:15 – 13:00 – Building 1000 hosts in 10 mins with Auto Deploy (Alan Renouf, VMware) / End User Computing: Today & Tomorrow (Simon Richardson, VMware)
  • 13:00 – 14:00 – Lunch
  • 14:00 – 14:50 – Stop the Virtualization Blame Game (Ben Vaux, Xangati) / VMware Data Protection in a Box (Suresh Vasudevan, Nimble Storage)
  • 15:00 – 15:50 – A little orchestration after lunch (Michael Poore) / Private vCloud Architecture Deep Dive (Dave Hill, VMware)
  • 16:00 – 16:50 – Virtualisation on Cisco UCS (Colin Lynch) / VCP5 Tips and Tricks (Gregg Robertson)
  • 17:00 – 17:15 – Close
  • 17:15 onwards – Drinks at Pavilion End

It’s a great agenda and I’ll be supporting a few friends who are presenting. Don’t miss out!

Where to go for the usergroup (make sure you register beforehand);

London Chamber of Commerce and Industry
33 Queen Street
London, EC4R 1AP (map)

Where to go for drinks afterwards;

The Pavilion End pub (official website)
23 Watling Street, Moorgate
London
EC4M 9BR (map)

Twitter: @lonvmug (or hashtag #lonvmug)

Why I blog (and maybe you should)

After being asked why I blog by a co-worker I’ve been thinking about what motivates me to blog. An inspirational blogpost by Mark Pollard on how to get into strategy identifies some traits which strike me as equally applicable to blogging;

  1. Curiosity. This is partly why I got into blogging as the techie in me wanted to know how it worked, which technologies were involved, what was that plugin that other bloggers were referring to? It’s the same instinct that makes good engineers – they want to know how something works so they take it apart!
  2. Action. Like most technologies the only way to really understand it is to get stuck in and do it. Until I started my blog I wasn’t sure what I’d blog about but I quickly found myself thinking ‘that might be interesting to others’ during my working day and I started turning thoughts into blogposts. I agree wholeheartedly with Seth Godin’s view that the process of distilling your thoughts into something readable for others is a very valuable process, and reason enough to blog – even if nobody reads it. Continue reading Why I blog (and maybe you should)

Container shipping and virtualisation – a potent analogy

One of the most interesting sessions I attended at VMworld in Copenhagen was entitled ‘Cloud Computing 2012 to 2014 – a two year perspective’ (session CIM4603, subscription required). The speaker was Joe Baguley, a well known cloud evangelist who recently joined VMware as Chief Cloud Technologist. I’ve seen Joe present before at the Cloud Camp events so knew what to expect (humour, lots of snappy analogies and some thought provoking concepts) and I wasn’t disappointed (note the link above is to the same session from Las Vegas, presented with his own slant by David Hunter). If you’re interested in hearing Joe’s speech in person, I recommend registering for the national VMUG taking place on 3rd November in Birmingham.

One of Joe’s analogies (well quoted in the press) was to compare VM encapsulation to a shipping container. This isn’t anything new (Chuck Hollis explains it very well in this blogpost from 2008!) but it’s an analogy I’ve been thinking about since buying the book ‘The Box’ for my wife as a Christmas present last year. As a commodity trader working with a team of shippers I thought she’d find a book about the history of the shipping container interesting (the New York Times listed it as one of the best business books ever written) but instead I found myself reading it during a weekend break. It didn’t take long to see parallels with what’s been happening over the last few years in the IT industry;

  • Standardisation and automation altered existing business models – some companies flourished and others perished
  • Whole professions changed and those who didn’t adapt found themselves out of work
  • Containerisation introduced new challenges (scale, security)
  • The container was used for many purposes beyond its original remit

In the four years since Chuck wrote his post the practice of cloud computing has advanced considerably. Whereas his focus (in that post at least) was networking, it’s now clear that most areas of IT are being impacted, from infrastructure to applications.

This isn’t a ‘technical how to’ blogpost with any conclusions but more of a ‘wandering thoughts, slow day at work’ post. I’m going to explore the analogy a bit further and include a few miscellaneous facts which were too good to ignore!

Continue reading Container shipping and virtualisation – a potent analogy