Archive

Posts Tagged ‘storage’

Storage Field Day #4 – who will you see?

October 8th, 2013 No comments

I'll-be-backFollowing on from last year when I attended Storage Field Day #2 I’m glad to say I’ve been invited back to San Jose this year for SFD #4, happening Nov 13th-15th. I enjoyed the experience last year and learnt a lot so while it’s another four days out of the office (and two long haul flights) I believe it’s time well spent. Of course I’m not the main attraction – below I provide a quick summary of the sponsors as it gave me a good excuse to go look up the ones I was less familiar with (the official event page also lists them). These are my thoughts based on a quick look at vendor websites so I’m happy to be corrected! Nimble Storage, Proximal Data, and Virident are attending VMworld Barcelona so I’ll have a chance to get some information in advance;

Avere Systems – a NAS hybrid storage array with a difference as it’s designed to work as accelerated networked storage in front of ‘legacy’ storage arrays and includes unusual features (for a storage array) such as WAN acceleration and storage migration functions. It also offers storage virtualisation and a global namespace. A less common use case maybe?

Cleversafe – an object storage solution designed for large scale storage requirements (first mention of Big Data during my investigations) – if you don’t have 10PB these guys aren’t what you need. You know RAID and replication techniques? That won’t help you here. Cleversafe uses ‘dispersed’ storage and erasure codes (which I need to learn more about) and sells via hardware appliances.

CloudByte – Their flagship product, ElastiStor, is a ZFS based software only product which offers storage virtualisation, linear scalability, and storage QoS. Potentially comparable to Nexenta (although they don’t offer QoS to my knowledge)? Software only, scalable, ability to use commodity hardware – what’s not to like? There’s also a free trial which works up to 4TB – lab time here we come…

CohoData (previously Convergent.io) – As a startup which has yet to launch, it's hard to know exactly what they do. The founders were behind the creation of the Xen hypervisor – these guys have serious pedigree.
From their website: "an integrated storage and networking model that abstracts configuration and functionality from the underlying hardware while making use of innovative storage networking integration and high-performance, commodity hardware to help customers realize the vision of a software-defined datacenter". Buzzword bingo? Another Nutanix or Simplivity? Time will tell.

GridStore – They’ve recently changed direction from scale out NAS to focus on performance enhancement by resolving the I/O blender problem created by virtualisation. In this respect it seems to do for Hyper-V (and Windows in general) what Virsto offers for VMware although it also offers storage QoS per VM. Like others it’s a ‘software defined storage’ product that’s bundled as an appliance (apparently for support purposes).

Nimble Storage – offer a hybrid storage array using a CASL architecture. These guys presented at SFD2 and seem to have seen good growth. It’ll be interesting to see what’s new – I’m guessing they’ll cover some of their SmartStack reference architectures with maybe a mention of InfoSight and NimbleConnect.

Overland Storage – Many of the SFD sponsors are startups, but Overland were founded in 1980 and, as a global company with a broad portfolio, it's hard to know which product they'll be covering. I'd guess at SnapScale X4, a unified (NAS & iSCSI), scalable storage cluster which was released recently.

Oxygen Cloud – Dropbox on steroids with end to end encryption, AD/LDAP authentication, and easy sharing. Rather than being just ‘cloud’ storage Oxygen Cloud offers a storage service which abstracts away the underlying storage implementation, allowing you to use multiple vendors and locations (including your own) transparently. Interesting but there’s a big challenge overcoming Dropbox’s brand awareness.

Proximal Data – Their product is Autocache which offers intelligent server side caching embedded in the (VMware only) hypervisor. There is tough competition in this area notably from VMware’s recently released vFRC and Pernix Data’s FVP product which hit the scene with a splash a few months ago (Pernix presented at SFD3). I like the fact the company name is relevant so I can easily remember what they do!

Virident (now Western Digital) – makers of PCI-E server flash storage solutions, very much in competition with the market leading Fusion-IO. Recently acquired by Western Digital (Sept 2013). Unlike Proximal the server side flash is used as an adjunct to main memory (containing the working set of the application) rather than as an I/O cache.

As always you can watch the sessions live over the Internet (via the TechFieldDay website) and interact via social media (Twitter hashtag #SFD4).

Further Reading

Luca Dell’Oca has a similar post with some extra details

Categories: Storage

Netapp ONTAP 8.2 and SnapManager compatibility

June 14th, 2013 2 comments

Summary: Running SnapDrive or SnapManager on Windows 2003? You might have some decisions to make….

Netapp recently announced that ONTAP 8.2 will bring with it a new licensing model which impacts the SnapDrive and SnapManager suites. Unfortunately this could have a significant impact on companies currently using those products, so you need to be familiar with the changes. In KB7010074 (NOW access required) it clearly states that current versions (when running on Windows) don't work with ONTAP 8.2;

Because of changes in the licensing infrastructure in Data ONTAP 8.2, the license-list-info ZAPI call used by the current versions of SnapDrive for Windows and the SnapManager products is no longer supported in Data ONTAP 8.2. As a result, the current releases of these products will not work with Data ONTAP 8.2.

 The SnapManager products mentioned below do not support ONTAP 8.2.

  • SnapDrive for Windows 6.X and below
  • SnapManager® for Exchange 6.X and below
  • Single Mailbox Recovery® 6.X and below
  • SnapManager for SQL® 6.X and below
  • SnapManager for SharePoint® 7.x and below
  • SnapManager® for Hyper-V 1.x

Unfortunately there is no workaround and we need to wait for future versions of SnapManager and SnapDrive, due sometime in 2013 (according to the KB article), before we get ONTAP 8.2 compatibility. I've no major issue with this situation as ONTAP 8.2 was only released a few days ago for Cluster-Mode and isn't even released yet for 7-Mode customers.

If you're using Windows 2003 with any of the above products, however, this could be a big deal. SnapDrive 6.5 (the latest as of June 2013) only supports Windows 2008 and newer so it's a reasonable assumption that the newer releases will have similar requirements. Until now you could still use SnapDrive 6.4 if you needed backwards compatibility with older versions of Windows – I suspect Windows 2003 is still plentiful in many enterprises (including my own). Now though you have a hard choice – either upgrade the relevant Windows 2003 servers, stop using the Snap products, or accept that you can't upgrade ONTAP to the 8.2 release.

Personally I have a bunch of physical clusters all running Windows 2003 and hosting mission critical SQL databases, and if these dependencies don't change I'll have to accelerate a project to upgrade them all in the next year or so – something that currently has no budget. Software dependencies aren't unique to Netapp, nor are Netapp really at fault – upgrading software is part of infrastructure sustainability and Windows 2003 is ten years old.

Lesson for the day: Running old software brings with it a risk.

Categories: Netapp Tags: ,

Is storage a fungible commodity?

February 4th, 2013 No comments

Fungibility – are you getting what you expect? :-)

I keep hearing that ‘IT is becoming a commodity‘, cloud computing is ‘like a utility‘, and recently I’ve heard the term ‘fungibility’ applied to computing on multiple occasions. The technologies behind cloud computing are driving these changes but what does it mean to be a commodity, what on earth is fungibility, and what’s it got to do with cloud computing? In this post I’ll explore the fungibility of storage and in a future blogpost the wider impact to cloud computing.

Let's dig into what fungibility is and why it's important. Wikipedia defines it as;

Fungibility is the property of a good or a commodity whose individual units are capable of mutual substitution, such as crude oil, shares in a company, bonds, precious metals, or currencies.

In plain English fungibility means something is interchangeable – a common example is money. If someone owes you ten dollars you don’t care if they pay you one ten dollar bill, two fives, or ten ones – you get essentially the same thing. Another example is that you’re supposed to eat five portions of fruit and veg every day but you could eat five fruits, five veg, or a mixture – they’re fungible (interchangeable).

Now we know what it is, but who cares if something is fungible?

  • for consumers fungibility is a good thing as it increases competition and flexibility – you can buy your commodity from anyone, often driving down prices
  • for providers fungibility could be good and bad. The increased competition might benefit your competitors but history has shown that once a market becomes a commodity it tends to grow, leading to more business for all involved.

Note that just because a commodity is fungible it doesn't mean there's no differentiation. Many metals are considered fungible – a tonne of molybdenum may be valued the same whether it's mined in Australia or Europe. If you need that metal in Europe, however, you'll incur shipping costs if you buy the Australian-sourced tonne, so you'll pay a premium to buy from the European supplier. It's this differentiation which enables trade – more on this in the followup post coming shortly.

Fungibility and storage

Two of the references I've heard were in regard to storage and whether it is or isn't fungible, so that's where I'll start. Virsto, during their storage hypervisor presentation at SFD2, argued that while CPU and memory are fungible (specifically in virtualized environments) storage isn't, and is therefore a pain point (which they aim to solve, obviously). In his 2013 predictions article Arthur Cole at IT BusinessEdge sees storage becoming a fungible commodity which has ramifications for how it's consumed.

Uncertainty over whether storage is fungible or not is understandable – my first reaction when I thought about it was 'no chance'! I'm a storage admin in my current job and each storage request is slightly different – there are so many variables affecting the outcome that you couldn't consider two requests interchangeable unless the solution was from the same vendor and with the same configuration. Here are just some of the factors when specifying solutions or diagnosing storage issues;

  • Capacity (typically in GB or TB)
  • Performance – throughput, latency, IOps
  • Workload – read/write ratio, block sizes, sustained or variable demand
  • Availability – HA, clustering, support, SLAs etc
  • Backups – snapshots, long term archiving, restore times
  • Security – location, governance, compliance

Crucially however, as a storage provider I have a different perspective to a consumer of my storage. For a consumer most of this complexity is invisible, hidden behind either a technical or business abstraction – hence why a storage request often only considers capacity (much to my frustration!). What I get concerns me, not how it’s implemented. If you look at storage from the customer’s perspective then it’s a simpler construct and provided it satisfies the user’s expectations it can be considered fungible. All those variables can differentiate one service from another but for many services they’re of secondary importance.

Take a simple consumer example – Dropbox. I’ve used this excellent service for quite a few years and the only thing I really care about is how much storage I can consume and that it works reliably. I assume that it’s always available, that I can get my files back when needed, and that the storage provided by Dropbox can handle what I throw at it. If I don’t like the service offering I can move to one of their competitors like Crashplan, Skydrive, or Bitcasa and while the functionality is slightly different (maybe they don’t all support Linux clients for example) I can compare prices and pick the one that best suits me.

At the enterprise level companies like Amazon, with their S3 and Glacier services, compete with other industry heavyweights like Google's Cloud Storage, Microsoft's Azure, Nirvanix etc. Take-up of these services started with the Web 2.0 generation but today they're starting to tackle the 'legacy' enterprises. This is the more complex world where the factors I mentioned above are more relevant – if someone offered me some 'cloud' storage versus some traditional onsite storage (using Netapp or EMC gear) then I'd expect them to deliver completely different experiences. Rodney Rogers, the CEO of Virtustream, has recently written an excellent piece about why Amazon may struggle when delivering to the enterprise and I'd agree completely – the demands of the average enterprise are not the same as those of the Web 2.0 companies running commodity hardware. There are plenty of successful cloud storage companies doing business in the enterprise world today but as Gartner warn you need to be on your guard as the services offered vary widely and are therefore not easily compared – they're not fungible. They also indicated that 20% of companies are already using cloud storage so one hopes it's delivering some value. Apologies for mentioning the 'G******' word so often!

The answer to ‘is storage fungible?’ is the classic ‘it depends’. For some, typically consumer, requirements I’d say it is but for the more demanding enterprise it’s not there yet.

Further Reading

Fungibility applied to IT

Your storage in the cloud (Hans De Leenheer)

Cloud storage viable option, but proceed with caution (Gartner)

Top 10 cloud storage providers (Gartner)

The longevity of IT skills

Categories: Storage

My ‘chinwag’ with Mike Laverick

January 21st, 2013 No comments

Late last week I joined an illustrious line of community bloggers, vendors, and authors by having a ‘chinwag’ with Mike Laverick. Anyone who knows Mike knows that a quick chat can easily last an hour for all the right reasons – he’s passionate about VMware and technology in general and good at presenting complex ideas in an easily understood manner. I guess that’s why he recently became a senior cloud evangelist for VMware! We discussed a few topics which are close to my heart at the moment;

  • Oracle
  • vCloud Director
  • Storage Field Day

You can listen to the audio (MP3 or the iPod/iPad friendly M4V) or watch the YouTube video. As time is limited on the actual chinwag I thought I’d offer a few additional thoughts on a couple of the topics we discussed.

Oracle and converged infrastructure

I didn't want to get embroiled in a discussion about Oracle's support stance on VMware as that's been covered many times before, but it's definitely still a barrier. Some of our Oracle team have peddled the 'it's not supported' argument to senior management and even though I've clarified the 'supported vs certified' distinction it's a difficult perception to alter. Every vendor wants to push their own solutions so you can't really blame Oracle, but it sure is frustrating!

Of more interest to me is where converged infrastructure is going. As we discussed on the chinwag, Oracle are an interesting use case for converged infrastructure (or engineered systems, pick your terminology of choice) because it includes the application tier. Most other converged offerings (VCE, FlexPod, vStart and even hyperconverged solutions like Nutanix) tend to stop at the hypervisor, thus providing an abstraction layer that you can run whatever workload you like on. Oracle (with the possible exception of IBM?) may be unique in owning the entire stack: hardware, storage, networking, and compute, through the hypervisor and up to their crown jewels, the Oracle database and applications. This gives them a position of strength to negotiate from even when certain layers are weak in comparison to 'best of breed', as is the case with OracleVM. Archie Hendryx explores this in his blogpost although I think he undersells the advantage Oracle have of owning a tier 1 application – Dell's vStart or VCE's vBlock may offer competition from an infrastructure perspective but my company don't run any Dell or VCE applications. If you're not Oracle how do you compete with this? You team up to provide a 'virtual stack' optimised for various workloads – today VDI is the most common (see reference architectures from Nexenta, Nimble Storage et al). As the market for converged infrastructure grows I think we'll see more of these 'vertical' stack style offerings.

Here are a few blogposts I found interesting related to Oracle's solutions: a look at the Exadata infrastructure, who manages the Exadata, Exalogic 2.0 Focuses on Elastic Cloud

vCloud Director

After I described my problem getting vCD tabled as a viable technology for lab management, Mike rightly pointed out that many people are using vCD in test and dev – maybe more than in production. I agree with Mike but suspect that most are using dev/test as a POC for a production private cloud, not as a purpose-built lab management environment. I didn't get time to discuss a couple of other points which both complicate the introduction of vCD even if you have an existing VMware environment;

  • Introducing vCD (or any cloud solution for that matter) is potentially a much bigger change compared to the initial introduction of server virtualisation. In the latter the changes mainly impacted the infrastructure teams, although provisioning, purchasing, networks and storage were all affected. If you're intending to deliver test/dev environments you're suddenly incorporating your applications too, potentially including the whole development/delivery lifecycle. If you go the whole hog to self-service then you potentially include an even larger part of the business, right up to the end users. That's a very disruptive change for some 'infrastructure guy' to be proposing!
  • vCD recommends Enterprise+ licensing which means I have to argue for the highest licensing level for test/dev, even if I don't have it in production

If you’re interested in vCloud Director as a lab management solution here are links to some of the companies and technologies I mentioned;  SkyTap Cloud, VMworld session OPS-CSM2150 – “Lab management with VMware vCloud Director: Software development customer panel”, Frank Brix’s network fencing blogpost, and a good generic post about using the cloud for development.

Categories: VMware

Netapp and vSphere5 storage integration

May 9th, 2012 8 comments

Let your storage array do the heavy lifting with VAAI!

I’ve seen a few blogposts recently about storage features in vSphere5 and plenty of forum discussions about the level of support from various vendors but none that specifically address the Netapp world. As some of these features require your vendor to provide plugins and integration I’m going to cover the Netapp offerings and point out what works today and what’s promised for the future.

Many of the vSphere5 storage features work regardless of your underlying storage array, including StorageDRS, storage clusters, VMFS5 enhancements (provided you have block protocols) and the VMware Storage Appliance (vSA). The following vSphere features however are dependent on array integration;

  • VAAI (the VMware Storage API for Array Integration). If you need a refresher on VAAI and what's new in vSphere v5 check out these great blogposts by Dave Henry – part one covers block protocols (FC and iSCSI), part two covers NFS. The inimitable Chad Sakac from EMC also has a great post on the new vSphere5 primitives. A quick way to check the VAAI status of your devices from the CLI is sketched after this list.
  • VASA (the VMware Storage API for Storage Awareness). Introduced in vSphere5 this allows your storage array to send underlying implementation details of the datastore back to the ESXi host such as RAID levels, replication, dedupe, compression, number of spindles etc. These details can be used by other features such as Storage Profiles and StorageDRS to make more informed decisions.
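Once VAAI-capable storage is presented, it's worth confirming what the host actually thinks it can offload. A minimal sketch from the ESXi 5 shell is below – the naa.* device ID is a placeholder, so substitute one of your own and treat the commands as a starting point rather than gospel;

    # Show the VAAI primitive status (ATS/Clone/Zero/Delete) for every device
    esxcli storage core device vaai status get

    # Or check a single device - the ID here is a placeholder
    esxcli storage core device vaai status get -d naa.xxxxxxxxxxxxxxxx

    # The primitives themselves are toggled via advanced settings (1 = enabled)
    esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove
    esxcli system settings advanced list -o /VMFS3/HardwareAcceleratedLocking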

The main point of administration (and integration) when using Netapp storage is the Virtual Storage Console (VSC), a vCenter plugin created by Netapp. If you haven’t already got this installed (the latest version is v4, released March 16th 2012) then go download it (NOW account required). As well as the vCenter plugin you must ensure your version of ONTAP also supports the vSphere functionality – as of April 19th 2012 the latest release is ONTAP 8.1. You can find out more about its featureset from Netapp’s Nick Howell. As well as the core vSphere storage features the VSC enables some extra features;

These features are all covered in Netapp’s popular TR3749 (best practices for vSphere, now updated for vSphere5) and the VSC release notes.

Poor old NFS – no VAAI for you…

It all sounds great! You've upgraded to vSphere5 (with Enterprise or Enterprise Plus licensing), installed the VSC vCenter plugin and upgraded ONTAP to the shiny new 8.1 release. Your Netapp arrays are in place and churning out 1s and 0s at a blinding rate and you're looking forward to giving vSphere some time off for good behaviour and letting your Netapp do the heavy lifting…

Read more…

VCAP5-DCA study notes – section 1.1 Implement and Manage Complex Storage Solutions

May 8th, 2012 No comments

As with all my VCAP5-DCA study notes, the blogposts only cover material new to vSphere5 so make sure you read the v4 study notes for section 1.1 first. When published the VCAP5-DCA study guide PDF will be a complete standalone reference.

Knowledge

  • Identify RAID levels
  • Identify supported HBA types
  • Identify virtual disk format types

Skills and Abilities

  • Determine use cases for and configure VMware DirectPath I/O
  • Determine requirements for and configure NPIV
  • Determine appropriate RAID level for various Virtual Machine workloads
  • Apply VMware storage best practices
  • Understand use cases for Raw Device Mapping
  • Configure vCenter Server storage filters
  • Understand and apply VMFS resignaturing
  • Understand and apply LUN masking using PSA-related commands
  • Analyze I/O workloads to determine storage performance requirements
  • Identify and tag SSD devices
  • Administer hardware acceleration for VAAI
  • Configure and administer profile-based storage
  • Prepare storage for maintenance (mounting/un-mounting)
  • Upgrade VMware storage infrastructure

Tools & learning resources

With vSphere5 having been described as a 'storage release' there is quite a lot of new material to cover in Section 1 of the blueprint. First I'll cover a couple of objectives which have only minor amendments from vSphere4.

Determine use cases for and configure VMware DirectPath I/O

The only real change is DirectPath vMotion, which is not as grand as it sounds. As you’ll recall from vSphere4 a VM using DirectPath can’t use vMotion or snapshots (or any feature which uses those such as DRS and many backup products) and the device in question isn’t available to other VMs. The only change with vSphere5 is that you can vMotion a VM provided it’s on Cisco’s UCS and there’s a supported Cisco UCS Virtual Machine Fabric Extender (VM-FEX) distributed switch. Read all about it here – if this is in the exam we’ve got no chance!

Identify and tag SSD devices

This is a tricky objective if you don’t own an SSD drive to experiment with (although you can workaround that limitation). You can identify an SSD disk in various ways;

  1. Using the vSphere client. Any view which shows the storage devices (‘Datastores and Datastore clusters view’, Host summary, Host -> Configuration -> Storage etc) includes a new column ‘Drive Type’ which lists Non-SSD or SSD (for block devices) and Unknown for NFS datastores.
  2. Using the CLI. Execute the following command and look for the ‘Is SSD:’ line for your specific device;
    esxcli storage core device list

Tagging an SSD should be automatic but there are situations where you may need to do it manually. This can only be done via the CLI and is explained in this VMware article – a rough sketch of the commands follows the steps below. The steps are similar to masking a LUN or configuring a new PSP;

  1. Check the existing claimrules
  2. Configure a new claim rule for your device, specifying the 'enable_ssd' option
  3. Enable the new claim rule and load it into memory
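As a rough example of what those steps look like (based on the VMware documentation – the device ID is a placeholder and the SATP to use depends on your device, so double-check against the article above before running anything in anger);

    # Confirm how the device is currently claimed and whether 'Is SSD' is set
    esxcli storage core device list -d naa.xxxxxxxxxxxxxxxx

    # Add a SATP claim rule tagging the device as SSD (SATP and device are placeholders)
    esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d naa.xxxxxxxxxxxxxxxx -o enable_ssd

    # Reclaim the device so the new rule takes effect, then verify 'Is SSD: true'
    esxcli storage core claiming reclaim -d naa.xxxxxxxxxxxxxxxx
    esxcli storage core device list -d naa.xxxxxxxxxxxxxxxx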

So you’ve identified and tagged your SSD, but what can you do with it? SSDs can be used with the new Swap to Host cache feature best summed up by Duncan over at Yellow Bricks;

“Using “Swap to host cache” will severely reduce the performance impact of VMkernel swapping. It is recommended to use a local SSD drive to eliminate any network latency and to optimize for performance.”

As an interesting use case here’s a post describing how to use Swap to Host cache with an SSD and laptop – could be useful for a VCAP home lab!

The above and more are covered very well in chapter 15 of the vSphere5 Storage guide.

Read more…

Space: the final frontier (gotcha upgrading to vSphere5 with NFS)

February 16th, 2012 4 comments

———————————————–

UPDATE March 2012 – VMware have just confirmed that the fix will be released as part of vSphere5 U2. Interesting because as of today (March 15th) update 1 hasn't even been released – how much longer will that be I wonder? I'm also still waiting for a KB article but it's taking its time…

UPDATE May 2012 – VMware have just released article KB2013844 which acknowledges the problem – the fix (until update 2 arrives) is to rename your datastores. Gee, useful…  :-)

———————————————–

For the last few weeks we’ve been struggling with our vSphere5 upgrade. What I assumed would be a simple VUM orchestrated upgrade turned into a major pain, but I guess that’s why they say ‘never assume’!

Summary: there’s a bug in the upgrade process whereby NFS mounts are lost during the upgrade from vSphere4 to vSphere5;

  • if you have NFS datastores with a space in the name
  • and you’re using ESX classic (ESXi is not affected)

Our issue was that after the upgrade completed, the host would start back up but the NFS mounts would be missing. As we use NFS almost exclusively for our storage this was a showstopper. We quickly found that we could simply remount the NFS with no changes or reboots required, so there was no obvious reason why the upgrade process didn't remount them. With over fifty hosts to upgrade, however, the required manual intervention meant we couldn't automate the whole process (OK, PowerCLI would have done the trick but I didn't feel inspired to code a solution) and we aren't licensed for Host Profiles which would also have made life easier. Thus started the process of reproducing and narrowing down the problem.
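For reference, the manual fix is trivial – remounting a lost NFS datastore from the ESX console is a one-liner. The filer name, export path and datastore label below are placeholders for your own values;

    # List the NFS datastores the host currently knows about
    esxcfg-nas -l

    # Re-add the missing mount (hostname, export and label are placeholders)
    esxcfg-nas -a -o netapp01 -s /vol/vol_vmware_01 "NFS Datastore 01"

Multiply that by fifty-plus datastores per host, though, and you can see why we wanted the upgrade process to handle it.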

  • We tried both G6 and G7 blades as well as G6 rack mount servers (DL380s)
  • We used interactive installs using a DVD of the VMware ESXi v5 image
  • We used VUM to upgrade hosts using both the VMware ESXi v5 image and the HP ESXi v5 image
  • We upgraded from ESXv4.0u1 to ESX 4.1 and then onto ESXiv5
  • We used storage arrays with both Netapp ONTAP v7 and ONTAP v8 (to minimise the possibility of the storage array firmware being at fault)
  • We upgraded hosts both joined to and isolated from vCenter

Every scenario we tried produced the same issue. We also logged a call with VMware (SR 11130325012) and yesterday they finally reproduced and identified the issue as a space in the datastore name. As a workaround you can simply rename your datastores to remove the spaces, perform the upgrade, and then rename them back. Not ideal for us (we have over fifty NFS datastores on each host) but better than a kick in the teeth!

There will be a KB article released shortly so until then treat the above information with caution – no doubt VMware will confirm the technical details more accurately than I have done here. I’m amazed that no-one else has run into this six months after the general availability of vSphere5 – maybe NFS isn’t taking over the world as much as I’d hoped!  I’ll update this article when the KB is posted but in the meantime NFS users beware.

Sad I know, but it’s kinda nice to have discovered my own KB article. Who’d have thought that having too much space in my datastores would ever cause a problem? :-)

VCAP-DCA Study guide – 6.4 Troubleshooting Storage Performance and Connectivity

April 21st, 2011 No comments

Knowledge

  • Recall vicfg-* commands related to listing storage configuration
  • Recall vSphere 4 storage maximums
  • Identify logs used to troubleshoot storage issues
  • Describe the VMFS file system

Skills and Abilities

  • Use vicfg-* and esxcli to troubleshoot multipathing and PSA‐related issues
  • Use vicfg-module to troubleshoot VMkernel storage module configurations
  • Use vicfg-* and esxcli to troubleshoot iSCSI related issues
  • Troubleshoot NFS mounting and permission issues
  • Use esxtop/resxtop and vscsiStats to identify storage performance issues
  • Configure and troubleshoot VMFS datastores using vmkfstools
  • Troubleshoot snapshot and resignaturing issues

Tools

There’s obviously a large overlap between diagnosing performance issues and tuning storage performance, so check section 3.1 in tandem with this objective.

Recall vicfg-* commands related to listing storage configuration

  • vicfg-scsidevs
  • vmkiscsi-tool
  • vicfg-mpath
  • vicfg-iscsi
  • esxcli corestorage | nmp | swiscsi
  • vicfg-nas
  • showmount -e
  • esxtop/resxtop
    • look for CONS/s – this indicates SCSI reservation conflicts and might indicate too many VMs in a LUN. This field isn’t displayed by default (press ‘f’ then ‘f’ again to add it)
  • vscsiStats
  • vmkfstools
  • vicfg-module
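A few of these in context, since I find the syntax easier to recall with an example – these are run from the vMA/vCLI (the esxcfg-* equivalents work at the console), the hostname is a placeholder, and it's worth verifying the options against the man pages;

    # List all SCSI devices and their VMFS/LUN mappings
    vicfg-scsidevs -l

    # List paths and multipathing state for each device
    vicfg-mpath -l

    # List NFS mounts configured on the host
    vicfg-nas -l

    # PSA claim rules and NMP device info (vSphere 4 namespaces)
    esxcli corestorage claimrule list
    esxcli nmp device list

    # Check which exports an NFS server is offering (hostname is a placeholder)
    showmount -e netapp01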

Read more…

VCAP-DCA Study Notes – 1.3 Complex Multipathing and PSA plugins

April 16th, 2011 No comments

This section overlaps with objectives 1.1 (Advanced storage management) and 1.2 (Storage capacity) but covers the multipathing functionality in more detail.

Knowledge

  • Explain the Pluggable Storage Architecture (PSA) layout

Skills and Abilities

  • Install and Configure PSA plug‐ins
  • Understand different multipathing policy functionalities
  • Perform command line configuration of multipathing options
  • Change a multipath policy
  • Configure Software iSCSI port binding

Tools & learning resources

Understanding the PSA layout

The PSA layout is well documented here and here. The PSA architecture is for block level protocols (FC and iSCSI) – it isn't used for NFS.

[Diagram: PSA architecture]

Terminology;

  • MPP = Multipathing Plugin – made up of one or more SATPs plus one or more PSPs
  • NMP = Native Multipathing Plugin – VMware's default MPP
  • SATP = Storage Array Type Plugin – handles failover for a given array type (the 'traffic cop')
  • PSP = Path Selection Plugin – chooses which path to use for I/O (the 'driver')

There are four possible pathing policies;

  • MRU = Most Recently Used. Typically used with active/passive (low end) arrays.
  • Fixed = The path is fixed, with a ‘preferred path’. On failover the alternative paths are used, but when the original path is restored it again becomes the active path.
  • Fixed_AP = new to vSphere 4.1. This enhances the ‘Fixed’ pathing policy to make it applicable to active/passive arrays and ALUA capable arrays. If no user preferred path is set it will use its knowledge of optimised paths to set preferred paths.
  • RR = Round Robin

One way to think of ALUA is as a form of ‘auto negotiate’. The array communicates with the ESX host and lets it know the available path to use for each LUN, and in particular which is optimal. ALUA tends to be offered on midrange arrays which are typically asymmetric active/active rather than symmetric active/active (which tend to be even more expensive). Determining whether an array is ‘true’ active/active is not as simple as you might think! Read Frank Denneman’s excellent blogpost on the subject. Our Netapp 3000 series arrays are asymmetric active/active rather than ‘true’ active/active.
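To make that concrete, here's roughly what checking and changing a pathing policy looks like from the vSphere 4.x CLI – the device ID is a placeholder, and check your array vendor's recommended PSP before changing anything;

    # List the loaded SATPs and their default PSPs
    esxcli nmp satp list

    # Show the current PSP and path state for a device (ID is a placeholder)
    esxcli nmp device list -d naa.xxxxxxxxxxxxxxxx

    # Switch that device to Round Robin
    esxcli nmp device setpolicy --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR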

Read more…

VCAP-DCA Study notes – 1.2 Manage Storage Capacity

March 11th, 2011 No comments

Managing storage capacity is another potentially huge topic, even for a midsized company. The storage management functionality within vSphere is fairly comprehensive and a significant improvement over VI3.

Knowledge

  • Identify storage provisioning methods
  • Identify available storage monitoring tools, metrics and alarms

Skills and Abilities

  • Apply space utilization data to manage storage resources
  • Provision and manage storage resources according to Virtual Machine requirements
  • Understand interactions between virtual storage provisioning and physical storage provisioning
  • Apply VMware storage best practices
  • Configure datastore alarms
  • Analyze datastore alarms and errors to determine space availability

Tools & learning resources

Storage provisioning methods

There are three main protocols you can use to provision storage;

  • Fibre channel
    • Block protocol
    • Uses multipathing (PSA framework)
    • Configured via vicfg-mpath, vicfg-scsidevs
  • iSCSI
    • block protocol
    • Uses multipathing (PSA framework)
    • hardware or software (boot from SAN is h/w initiator only)
    • configured via vicfg-iscsi, esxcfg-swiscsi and esxcfg-hwiscsi, vicfg-mpath, esxcli
  • NFS
    • File level (not block)
    • No multipathing (uses underlying Ethernet network resilience)
    • Thin by default
    • no RDM or MSCS support
    • configured via vicfg-nas

I won’t go into much detail on each, just make sure you’re happy provisioning storage for each protocol both in the VI client and the CLI.
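As a quick (and hedged) aide-memoire for the CLI side – the filer name, export path and adapter name below are placeholders, and the iSCSI commands are the classic ESX 4.x syntax;

    # NFS: mount an export as a datastore (filer and export path are placeholders)
    vicfg-nas -a -o netapp01 -s /vol/vol_vmware_01 nfs_datastore_01

    # iSCSI: enable the software initiator, then rescan the adapter for new LUNs
    # (vmhba33 is a placeholder for your software iSCSI adapter)
    esxcfg-swiscsi -e
    esxcfg-rescan vmhba33

    # Confirm which block devices the host can now see
    vicfg-scsidevs -l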

Know the various options for provisioning storage;

  • VI client. Can be used to create/extend/delete all types of storage. VMFS volumes created via the VI client are automatically aligned.
  • CLI – vmkfstools.
    • NOTE: When creating a VMFS datastore via the CLI you need to align it yourself. Check VMFS alignment using 'fdisk -lu' (see the sketch after this list). Read more in Duncan Epping's blogpost.
  • PowerCLI. Managing storage with PowerCLI – VMwareKB1028368
  • Vendor plugins (Netapp RCU for example). I’m not going to cover this here as I doubt the VCAP-DCA exam environment will include (or assume any knowledge of) these!
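Here's the sort of thing the vmkfstools route involves – the device ID and label are placeholders, and this assumes the partition has already been created with an aligned starting sector;

    # Check partition tables and starting sectors for alignment
    fdisk -lu

    # Create a VMFS3 filesystem with a 1MB block size on partition 1 (device is a placeholder)
    vmkfstools -C vmfs3 -b 1m -S my_new_datastore /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx:1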

When provisioning storage there are various considerations;

  • Thin vs thick
  • Extents vs true extension
  • Local vs FC/iSCSI vs NFS
  • VMFS vs RDM

Read more…