Tag Archives: storage

VCAP5-DCA study notes – section 1.1 Implement and Manage Complex Storage Solutions

As with all my VCAP5-DCA study notes, these blog posts only cover material new to vSphere5, so make sure you read the v4 study notes for section 1.1 first. When published, the VCAP5-DCA study guide PDF will be a complete standalone reference.

Knowledge

  • Identify RAID levels
  • Identify supported HBA types
  • Identify virtual disk format types

Skills and Abilities

  • Determine use cases for and configure VMware DirectPath I/O
  • Determine requirements for and configure NPIV
  • Determine appropriate RAID level for various Virtual Machine workloads
  • Apply VMware storage best practices
  • Understand use cases for Raw Device Mapping
  • Configure vCenter Server storage filters
  • Understand and apply VMFS resignaturing
  • Understand and apply LUN masking using PSA-related commands
  • Analyze I/O workloads to determine storage performance requirements
  • Identify and tag SSD devices
  • Administer hardware acceleration for VAAI
  • Configure and administer profile-based storage
  • Prepare storage for maintenance (mounting/un-mounting)
  • Upgrade VMware storage infrastructure

Tools & learning resources

With vSphere5 having been described as a ‘storage release’, there is quite a lot of new material to cover in Section 1 of the blueprint. First I’ll cover a couple of objectives which have only minor amendments from vSphere4.

Determine use cases for and configure VMware DirectPath I/O

The only real change is DirectPath vMotion, which is not as grand as it sounds. As you’ll recall from vSphere4, a VM using DirectPath can’t use vMotion or snapshots (or any feature which relies on them, such as DRS and many backup products), and the device in question isn’t available to other VMs. The only change with vSphere5 is that you can vMotion a VM provided it’s running on Cisco’s UCS and there’s a supported Cisco UCS Virtual Machine Fabric Extender (VM-FEX) distributed switch. Read all about it here – if this is in the exam we’ve got no chance!

Identify and tag SSD devices

This is a tricky objective if you don’t own an SSD drive to experiment with (although you can work around that limitation). You can identify an SSD disk in various ways;

  1. Using the vSphere client. Any view which shows the storage devices (‘Datastores and Datastore clusters view’, Host summary, Host -> Configuration -> Storage etc) includes a new column ‘Drive Type’ which lists Non-SSD or SSD (for block devices) and Unknown for NFS datastores.
  2. Using the CLI. Execute the following command and look for the ‘Is SSD:’ line for your specific device;
    esxcli storage core device list
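If you just want to check a single device rather than wading through the full list, you can narrow the output down; a rough example (the naa identifier is just a placeholder for your own device):

    esxcli storage core device list -d naa.xxxxxxxxxxxxxxxx | grep "Is SSD"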

Tagging an SSD should be automatic but there are situations where you may need to do it manually. This can only be done via the CLI and is explained in this VMware article. The steps are similar to masking a LUN or configuring a new PSP (a rough command-line sketch follows the list below);

  1. Check the existing claim rules
  2. Configure a new claim rule for your device, specifying the ‘enable_ssd’ option
  3. Enable the new claim rule and load it into memory
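As a rough sketch only (the SATP and device name below are placeholders, so check which SATP currently claims your device first), the commands look something like this;

    # check which SATP claims the device and list the existing rules
    esxcli storage nmp device list
    esxcli storage nmp satp rule list
    # add a rule tagging the device as SSD
    esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device naa.xxxxxxxxxxxxxxxx --option=enable_ssd
    # reclaim the device so the new rule takes effect
    esxcli storage core claiming reclaim -d naa.xxxxxxxxxxxxxxxx
    # verify the 'Is SSD' flag has changed
    esxcli storage core device list -d naa.xxxxxxxxxxxxxxxx | grep "Is SSD"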

So you’ve identified and tagged your SSD, but what can you do with it? SSDs can be used with the new Swap to Host cache feature, best summed up by Duncan over at Yellow Bricks;

“Using “Swap to host cache” will severely reduce the performance impact of VMkernel swapping. It is recommended to use a local SSD drive to eliminate any network latency and to optimize for performance.”

As an interesting use case here’s a post describing how to use Swap to Host cache with an SSD and laptop – could be useful for a VCAP home lab!

The above and more are covered very well in chapter 15 of the vSphere5 Storage guide.


Space: the final frontier (gotcha upgrading to vSphere5 with NFS)

———————————————–

UPDATE March 2012 – VMware have just confirmed that the fix will be released as part of vSphere5 U2. Interesting because as of today (March 15th) update 1 hasn’t even been released – how much longer will that be I wonder? I’m also still waiting for a KB article but it’s taking its time…

UPDATE May 2012 – VMware have just released article KB2013844 which acknowledges the problem – the fix (until update 2 arrives) is to rename your datastores. Gee, useful…  🙂

———————————————–

For the last few weeks we’ve been struggling with our vSphere5 upgrade. What I assumed would be a simple VUM orchestrated upgrade turned into a major pain, but I guess that’s why they say ‘never assume’!

Summary: there’s a bug in the upgrade process whereby NFS mounts are lost during the upgrade from vSphere4 to vSphere5;

  • if you have NFS datastores with a space in the name
  • and you’re using ESX classic (ESXi is not affected)

Our issue was that after the upgrade completed, the host would start back up but the NFS mounts would be missing. As we use NFS almost exclusively for our storage this was a showstopper. We quickly found that we could simply remount the NFS datastores with no changes or reboots required, so there was no obvious reason why the upgrade process didn’t remount them. With over fifty hosts to upgrade, however, the required manual intervention meant we couldn’t automate the whole process (OK, PowerCLI would have done the trick but I didn’t feel inspired to code a solution) and we aren’t licensed for Host Profiles, which would also have made life easier. Thus started the process of reproducing and narrowing down the problem.

  • We tried both G6 and G7 blades as well as G6 rack-mount servers (DL380s)
  • We performed interactive installs using a DVD of the VMware ESXi v5 image
  • We used VUM to upgrade hosts using both the VMware ESXi v5 image and the HP ESXi v5 image
  • We upgraded from ESX v4.0 U1 to ESX 4.1 and then on to ESXi v5
  • We used storage arrays with both Netapp ONTAP v7 and ONTAP v8 (to minimise the possibility of the storage array firmware being at fault)
  • We upgraded hosts both joined to and isolated from vCenter

Every scenario we tried produced the same issue. We also logged a call with VMware (SR 11130325012) and yesterday they finally reproduced and identified the issue as a space in the datastore name. As a workaround you can simply rename your datastores to remove the spaces, perform the upgrade, and then rename them back. Not ideal for us (we have over fifty NFS datastores on each host) but better than a kick in the teeth!
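For reference, remounting a missing NFS datastore by hand on an upgraded ESXi5 host is a one-liner; a rough sketch (the filer hostname, export path and datastore name are examples only):

    # ESXi5 syntax
    esxcli storage nfs add --host filer01 --share /vol/nfs_vol1 --volume-name "vm datastore 1"
    # or the older syntax, which still works on ESXi5
    esxcfg-nas -a -o filer01 -s /vol/nfs_vol1 "vm datastore 1"
    # confirm the datastore is mounted
    esxcli storage nfs list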

There will be a KB article released shortly so until then treat the above information with caution – no doubt VMware will confirm the technical details more accurately than I have done here. I’m amazed that no-one else has run into this six months after the general availability of vSphere5 – maybe NFS isn’t taking over the world as much as I’d hoped!  I’ll update this article when the KB is posted but in the meantime NFS users beware.

Sad I know, but it’s kinda nice to have discovered my own KB article. Who’d have thought that having too much space in my datastores would ever cause a problem? 🙂

VCAP-DCA Study guide – 6.4 Troubleshooting Storage Performance and Connectivity

Knowledge

  • Recall vicfg-* commands related to listing storage configuration
  • Recall vSphere 4 storage maximums
  • Identify logs used to troubleshoot storage issues
  • Describe the VMFS file system

Skills and Abilities

  • Use vicfg-* and esxcli to troubleshoot multipathing and PSA-related issues
  • Use vicfg-module to troubleshoot VMkernel storage module configurations
  • Use vicfg-* and esxcli to troubleshoot iSCSI related issues
  • Troubleshoot NFS mounting and permission issues
  • Use esxtop/resxtop and vscsiStats to identify storage performance issues
  • Configure and troubleshoot VMFS datastores using vmkfstools
  • Troubleshoot snapshot and resignaturing issues

Tools

There’s obviously a large overlap between diagnosing performance issues and tuning storage performance, so check section 3.1 in tandem with this objective.

Recall vicfg-* commands related to listing storage configuration

  • vicfg-scsidevs
  • vmkiscsi-tool
  • vicfg-mpath
  • vicfg-iscsi
  • esxcli (corestorage, nmp and swiscsi namespaces)
  • vicfg-nas
  • showmount -e
  • esxtop/resxtop
    • look for CONS/s – this indicates SCSI reservation conflicts and might indicate too many VMs in a LUN. This field isn’t displayed by default (press ‘f’ then ‘f’ again to add it)
  • vscsiStats
  • vmkfstools
  • vicfg-module
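A few of these in action, purely as a sketch (the server names are placeholders and the vicfg-* commands are run remotely via the vCLI/vMA);

    # list LUNs, paths and multipathing details
    vicfg-mpath --server esx01 -l
    # map VMFS volumes to their underlying devices and partitions
    vicfg-scsidevs --server esx01 -m
    # list the NFS mounts known to the host
    vicfg-nas --server esx01 -l
    # check what an NFS server is actually exporting
    showmount -e filer01
    # claim rules and NMP state (vSphere 4 esxcli namespaces)
    esxcli --server esx01 corestorage claimrule list
    esxcli --server esx01 nmp device list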


VCAP-DCA Study Notes – 1.3 Complex Multipathing and PSA plugins

This section overlaps with objectives 1.1 (Advanced storage management) and 1.2 (Storage capacity) but covers the multipathing functionality in more detail.

Knowledge

  • Explain the Pluggable Storage Architecture (PSA) layout

Skills and Abilities

  • Install and Configure PSA plug-ins
  • Understand different multipathing policy functionalities
  • Perform command line configuration of multipathing options
  • Change a multipath policy
  • Configure Software iSCSI port binding

Tools & learning resources

Understanding the PSA layout

The PSA layout is well documented in the official storage guides and numerous blog posts. The PSA architecture is for block-level protocols (FC and iSCSI) – it isn’t used for NFS.

[Image: PSA layout diagram]

Terminology;

  • MPP = Multipathing Plugin (made up of one or more SATPs plus one or more PSPs)
  • NMP = Native Multipathing Plugin (VMware’s default MPP)
  • SATP = Storage Array Type Plugin (the ‘traffic cop’ – handles failover for a given array type)
  • PSP = Path Selection Plugin (the ‘driver’ – chooses which path to use for each I/O)

There are four possible pathing policies;

  • MRU = Most Recently Used. Typically used with active/passive (low end) arrays.
  • Fixed = The path is fixed, with a ‘preferred path’. On failover the alternative paths are used, but when the original path is restored it again becomes the active path.
  • Fixed_AP = new to vSphere 4.1. This enhances the ‘Fixed’ pathing policy to make it applicable to active/passive arrays and ALUA capable arrays. If no user preferred path is set it will use its knowledge of optimised paths to set preferred paths.
  • RR = Round Robin. Distributes I/O across the available active paths.
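You can change the policy per LUN in the VI client (Manage Paths) or from the command line; a rough vSphere 4 sketch, with the device ID as a placeholder;

    # show the current PSP (and SATP) for each device
    esxcli nmp device list
    # switch a specific device to Round Robin
    esxcli nmp device setpolicy --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR
    # change the default PSP for a given SATP (affects newly claimed devices)
    esxcli nmp satp setdefaultpsp --satp VMW_SATP_ALUA --psp VMW_PSP_RR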

One way to think of ALUA is as a form of ‘auto negotiate’. The array communicates with the ESX host and lets it know the available paths for each LUN, and in particular which are optimised. ALUA tends to be offered on midrange arrays, which are typically asymmetric active/active rather than symmetric active/active (which tend to be even more expensive). Determining whether an array is ‘true’ active/active is not as simple as you might think! Read Frank Denneman’s excellent blogpost on the subject. Our Netapp 3000 series arrays are asymmetric active/active rather than ‘true’ active/active.


VCAP-DCA Study notes – 1.2 Manage Storage Capacity

Managing storage capacity is another potentially huge topic, even for a midsized company. The storage management functionality within vSphere is fairly comprehensive and a significant improvement over VI3.

Knowledge

  • Identify storage provisioning methods
  • Identify available storage monitoring tools, metrics and alarms

Skills and Abilities

  • Apply space utilization data to manage storage resources
  • Provision and manage storage resources according to Virtual Machine requirements
  • Understand interactions between virtual storage provisioning and physical storage provisioning
  • Apply VMware storage best practices
  • Configure datastore alarms
  • Analyze datastore alarms and errors to determine space availability

Tools & learning resources

Storage provisioning methods

There are three main protocols you can use to provision storage;

  • Fibre channel
    • Block protocol
    • Uses multipathing (PSA framework)
    • Configured via vicfg-mpath, vicfg-scsidevs
  • iSCSI
    • block protocol
    • Uses multipathing (PSA framework)
    • hardware or software (boot from SAN is h/w initiator only)
    • configured via vicfg-iscsi, esxcfg-swiscsi and esxcfg-hwiscsi, vicfg-mpath, esxcli
  • NFS
    • File level (not block)
    • No multipathing (uses underlying Ethernet network resilience)
    • Thin by default
    • No support for RDMs or MSCS clustering
    • configured via vicfg-nas

I won’t go into much detail on each; just make sure you’re happy provisioning storage for each protocol both in the VI client and the CLI.
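As a quick refresher, provisioning from the CLI looks roughly like this (hostnames, exports and labels are placeholders, not a definitive recipe);

    # NFS: mount an export as a datastore
    vicfg-nas --server esx01 -a -o filer01 -s /vol/nfs_vol1 nfs_datastore1
    # software iSCSI: enable the initiator then rescan (run on the host)
    esxcfg-swiscsi -e
    esxcfg-swiscsi -s
    # FC/iSCSI: review devices and paths before creating a VMFS datastore
    vicfg-mpath --server esx01 -l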

Know the various options for provisioning storage;

  • VI client. Can be used to create/extend/delete all types of storage. VMFS volumes created via the VI client are automatically aligned.
  • CLI – vmkfstools.
    • NOTE: When creating a VMFS datastore via the CLI you need to align it yourself. Check VMFS alignment using ‘fdisk -lu’ (see the sketch after this list). Read more in Duncan Epping’s blogpost.
  • PowerCLI. Managing storage with PowerCLI – VMware KB1028368
  • Vendor plugins (Netapp RCU for example). I’m not going to cover this here as I doubt the VCAP-DCA exam environment will include (or assume any knowledge of) these!
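For the vmkfstools route, a rough sketch of checking alignment and creating a datastore (the device ID and label are placeholders, and the partition must already exist);

    # check the partition table and alignment of the underlying device
    fdisk -lu /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx
    # create a VMFS3 datastore on (aligned) partition 1 with a 1MB block size
    vmkfstools -C vmfs3 -b 1m -S MyDatastore /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx:1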

When provisioning storage there are various considerations;

  • Thin vs thick
  • Extents vs true extension
  • Local vs FC/iSCSI vs NFS
  • VMFS vs RDM
