Category Archives: Storage

Netapp ONTAP 8.2 and SnapManager compatibility


Summary: Running SnapDrive or SnapManager on Windows 2003? You might have some decisions to make…

Netapp recently announced that ONTAP 8.2 will bring with it a new licensing model which impacts the SnapDrive and SnapManager suites. Unfortunately this could have a significant impact on companies currently using those products, so you need to be familiar with the changes. KB7010074 (NOW access required) states clearly that the current versions (when running on Windows) don’t work with ONTAP 8.2:

Because of changes in the licensing infrastructure in Data ONTAP 8.2, the license-list-info ZAPI call used by the current versions of SnapDrive for Windows and the SnapManager products is no longer supported in Data ONTAP 8.2. As a result, the current releases of these products will not work with Data ONTAP 8.2.

The SnapDrive and SnapManager products listed below do not support ONTAP 8.2:

  • SnapDrive for Windows 6.X and below
  • SnapManager® for Exchange 6.X and below
  • Single Mailbox Recovery® 6.X and below
  • SnapManager® for SQL 6.X and below
  • SnapManager® for SharePoint 7.x and below
  • SnapManager® for Hyper-V 1.x
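
For the curious, here’s roughly what that doomed call looks like from a client’s point of view. This is a minimal sketch using the NetApp Manageability SDK’s Python bindings – the hostname and credentials are placeholders, and the response parsing assumes the 7-Mode license-list-info output format:

```python
# Sketch only: issue the ZAPI call that SnapDrive/SnapManager rely on.
# Requires the NetApp Manageability SDK (NaServer/NaElement bindings).
from NaServer import NaServer

server = NaServer("filer01.example.com", 1, 15)  # placeholder host; ZAPI v1.15
server.set_transport_type("HTTPS")
server.set_style("LOGIN")
server.set_admin_user("admin", "secret")         # placeholder credentials

# On Data ONTAP 8.1 and earlier this returns the installed licences;
# on 8.2 the API has been removed, so the call simply fails.
output = server.invoke("license-list-info")
if output.results_status() == "failed":
    print("license-list-info failed:", output.results_reason())
else:
    for lic in output.child_get("licenses").children_get():
        print(lic.child_get_string("service"))
```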

Unfortunately there is no workaround; we need to wait for future versions of SnapManager and SnapDrive, due for release sometime in 2013 (according to the KB article), before we get ONTAP 8.2 compatibility. I’ve no major issue with this situation as ONTAP 8.2 was only released a few days ago for Cluster-Mode and isn’t even released yet for 7-Mode customers.

If you’re using Windows 2003 with any of the above products, however, this could be a big deal. SnapDrive 6.5 (the latest as of June 2013) only supports Windows 2008 and newer, so it’s a reasonable assumption that the newer releases will have similar requirements. Until now you could still use SnapDrive 6.4 if you needed backwards compatibility with older versions of Windows – and I suspect Windows 2003 is still plentiful in many enterprises (including my own). Now though you have a hard choice: either upgrade the relevant Windows 2003 servers, stop using the Snap products, or accept that you can’t upgrade ONTAP to the 8.2 release.

Personally I have a bunch of physical clusters, all running Windows 2003 and hosting mission-critical SQL databases; if these dependencies don’t change I’ll have to accelerate a project to upgrade them all in the next year or so, something that currently has no budget. Software dependencies aren’t unique to Netapp, nor are Netapp really at fault – upgrading software is part of infrastructure sustainability, and Windows 2003 is ten years old.

Lesson for the day: running old software brings risk with it.

Is storage a fungible commodity?

Fungibility – are you getting what you expect? 🙂

I keep hearing that ‘IT is becoming a commodity’, that cloud computing is ‘like a utility’, and recently I’ve heard the term ‘fungibility’ applied to computing on multiple occasions. The technologies behind cloud computing are driving these changes, but what does it mean to be a commodity, what on earth is fungibility, and what’s it got to do with cloud computing? In this post I’ll explore the fungibility of storage, and in a future blogpost the wider impact on cloud computing.

Let’s dig into what fungibility is and why it’s important. Wikipedia defines it as:

Fungibility is the property of a good or a commodity whose individual units are capable of mutual substitution, such as crude oil, shares in a company, bonds, precious metals, or currencies.

In plain English fungibility means something is interchangeable – a common example is money. If someone owes you ten dollars you don’t care if they pay you one ten dollar bill, two fives, or ten ones – you get essentially the same thing. Another example is that you’re supposed to eat five portions of fruit and veg every day but you could eat five fruits, five veg, or a mixture – they’re fungible (interchangeable).

Now we know what it is, but who cares whether something is fungible?

  • for consumers fungibility is a good thing as it increases competition and flexibility – you can buy your commodity from anyone, often driving down prices
  • for providers fungibility can be both good and bad. The increased competition might benefit your competitors, but history has shown that once a market becomes a commodity it tends to grow, leading to more business for all involved.

Note that just because a commodity is fungible it doesn’t mean there’s no differentiation. Many metals are considered fungible – a tonne of molybdenum may be valued the same whether it’s mined in Australia or Europe. If you need that metal in Europe, however, you’ll incur shipping costs if you buy the Australian-sourced tonne, so you’ll pay a premium to buy from the European supplier instead. It’s this differentiation which enables trade – more on this in the followup post coming shortly.

Fungibility and storage

Two of the references I’ve heard were in regard to storage and whether or not it’s fungible, so that’s where I’ll start. During their storage hypervisor presentation at SFD2, Virsto argued that while CPU and memory are fungible (specifically in virtualized environments) storage isn’t, and is therefore a pain point (which they aim to solve, obviously). In his 2013 predictions article Arthur Cole at IT BusinessEdge sees storage becoming a fungible commodity, which has ramifications for how it’s consumed.

Uncertainty over whether storage is fungible is understandable – my first reaction when I thought about it was ‘no chance’! I’m a storage admin in my current job and each storage request is slightly different – there are so many variable factors affecting the outcome that you couldn’t consider two requests interchangeable unless the solution came from the same vendor with the same configuration. Here are just some of the factors to consider when specifying solutions or diagnosing storage issues:

  • Capacity (typically in GB or TB)
  • Performance – throughput, latency, IOps
  • Workload – read/write ratio, block sizes, sustained or variable demand
  • Availability – HA, clustering, support, SLAs etc
  • Backups – snapshots, long term archiving, restore times
  • Security – location, governance, compliance

Crucially however, as a storage provider I have a different perspective to a consumer of my storage. For a consumer most of this complexity is invisible, hidden behind either a technical or business abstraction – hence a storage request often only considers capacity (much to my frustration!). What I get concerns me, not how it’s implemented. If you look at storage from the customer’s perspective then it’s a simpler construct, and provided it satisfies the user’s expectations it can be considered fungible. All those variables can differentiate one service from another, but for many services they’re of secondary importance.
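
To make that asymmetry concrete, here’s a toy Python sketch – all the field names and default values are illustrative, not any vendor’s schema. The consumer’s request carries a single dimension while the provider’s spec carries many, and it’s the defaulting behind the abstraction that makes two requests look interchangeable:

```python
from dataclasses import dataclass

@dataclass
class ConsumerRequest:
    """What a storage request usually boils down to for the consumer."""
    capacity_gb: int

@dataclass
class ProviderSpec:
    """The dimensions the storage admin actually has to pin down."""
    capacity_gb: int
    iops: int
    latency_ms: float
    read_write_ratio: float    # e.g. 0.7 = 70% reads
    highly_available: bool
    snapshot_schedule: str
    compliance_zone: str

def fulfil(request: ConsumerRequest) -> ProviderSpec:
    # Everything beyond capacity is defaulted behind the abstraction.
    return ProviderSpec(
        capacity_gb=request.capacity_gb,
        iops=1000,
        latency_ms=10.0,
        read_write_ratio=0.7,
        highly_available=True,
        snapshot_schedule="daily",
        compliance_zone="EU",
    )

print(fulfil(ConsumerRequest(capacity_gb=500)))
```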

Take a simple consumer example – Dropbox. I’ve used this excellent service for quite a few years and the only things I really care about are how much storage I can consume and whether it works reliably. I assume that it’s always available, that I can get my files back when needed, and that the storage provided by Dropbox can handle what I throw at it. If I don’t like the service offering I can move to one of their competitors like Crashplan, Skydrive, or Bitcasa, and while the functionality is slightly different (maybe they don’t all support Linux clients, for example) I can compare prices and pick the one that best suits me.

At the enterprise level companies like Amazon, with their S3 and Glacier services, compete with other industry heavyweights like Google’s Cloud Storage, Microsoft’s Azure, Nirvanix etc. Take-up of these services started with the Web 2.0 generation but today they’re starting to tackle the ‘legacy’ enterprises. This is the more complex world where the factors I mentioned above are more relevant – if someone offered me some ‘cloud’ storage versus some traditional onsite storage (using Netapp or EMC gear) then I’d expect them to deliver completely different experiences. Rodney Rogers, the CEO of Virtustream, has recently written an excellent piece about why Amazon may struggle when delivering to the enterprise and I’d agree completely – the demands of the average enterprise are not the same as those of the Web 2.0 companies running commodity hardware. There are plenty of successful cloud storage companies doing business in the enterprise world today, but as Gartner warn you need to be on your guard as the services offered vary widely and are therefore not easily compared – they’re not fungible. They also indicate that 20% of companies are already using cloud storage, so one hopes it’s delivering some value. Apologies for mentioning the ‘G******’ word so often!

The answer to ‘is storage fungible?’ is the classic ‘it depends’. For some, typically consumer, requirements I’d say it is but for the more demanding enterprise it’s not there yet.

Further Reading

Fungibility applied to IT

Your storage in the cloud (Hans De Leenheer)

Cloud storage viable option, but proceed with caution (Gartner)

Top 10 cloud storage providers (Gartner)

The longevity of IT skills

Enterprise storage is in for an exciting few years

The next generation storage symposium

At the start of November I was lucky enough to be invited to the Next Generation Storage Symposium in San Jose, California, as well as Storage Field Day #2 on the following two days. There were sessions from upcoming storage vendors as well as a keynote from well-known analyst Robin Harris and some thought-provoking panel discussions about next generation storage technologies. Having spent some time digesting the flood of information and ideas from this trip, there are two trends which ‘emerge out of the haze’ for me:

  1. Flash storage continues to rapidly disrupt the storage marketplace
  2. The desire for more scalable systems is driving changes in the architecture of enterprise storage

To people immersed in storage these trends are well known and have been covered increasingly over the last couple of years (flash has been in mainstream arrays in some form since 2008). I’ve written this article to consolidate my own thoughts rather than as the traditional end-of-year predictions – ‘if you can’t explain it you don’t understand it’ is something I believe, and one of the principal reasons I enjoy blogging. Despite my interest in storage these events made me realise I’ve taken my eye off the ball and I’m now playing catchup!

Most of the vendor and panel sessions at the conference concentrated on the flash aspects – tiering vs caching, hybrid vs all-flash arrays (or no array at all in Nutanix’s case) – although there was also discussion of the scalability of various architectures and technologies. Flash’s key advantage is performance – compared to spinning disk it’s orders of magnitude faster, and much of the current innovation is in trying to overcome its other constraints of cost, lifecycle, form factor etc. It’s not just performance that’s driving the current industry changes however – the desire for greater scalability is pushing storage from a centralised model to a more distributed architecture (as described very eloquently by Chris Evans).

Combined, these factors imply a major shakeup in the storage industry – it’s going to be a fun few years!

Flash disrupts the marketplace

Unless you’ve been hiding under a very big rock you can’t have missed the mass market arrival of flash devices into almost every aspect of the market, both for consumers and enterprises.

Netapp OnCommand System Manager 2.1 available


A quick post to say that Netapp have released v2.1 of their Windows MMC management tool, OnCommand System Manager (the download link is at the bottom right; NOW account required). This new update brings the usual incremental fixes along with support for Flash Pools, Infinite Volumes (a feature of ONTAP 8.1.1 in Cluster-Mode), and multidisk carrier shelves. It’s also moved to a 64-bit architecture – my ‘upgrade’ simply uninstalled the 32-bit version and installed the 64-bit one.

For compatibility the release notes state:

  • Data ONTAP 7.3.x (starting from 7.3.7)
  • Data ONTAP 8.0 or later in the 8.0 release family operating in 7-Mode
  • Data ONTAP 8.1 or later in the 8.1 release family operating in 7-Mode
  • Data ONTAP 8.1 or later in the 8.1 release family operating in Cluster-Mode

However, checking the Netapp compatibility matrix shows that this release is ‘officially’ supported on a smaller number of ONTAP releases, notably ONTAP 7.3.7 or newer (excluding 7.3.4 etc) and 8.0.3 or newer (excluding 8.0.1, 8.0.2 etc). I suspected this was simply timing and that once the new release had been around for longer it would be validated against more ONTAP releases. However, I tried it against a few of my filers running a mixture of 8.0.1P2 and 8.0.2P6 and found one issue straightaway: the new network checker wouldn’t run against the 8.0.1P2 controllers as apparently they don’t support the necessary API calls.
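
If you script against a mixed estate like mine, it’s worth probing a controller’s ONTAP release before relying on newer API calls. Here’s a minimal sketch with the NMSDK Python bindings (hostname and credentials are placeholders):

```python
# Sketch: check the controller's ONTAP release before using newer APIs.
from NaServer import NaServer

server = NaServer("filer01.example.com", 1, 15)  # placeholder host
server.set_transport_type("HTTPS")
server.set_style("LOGIN")
server.set_admin_user("admin", "secret")         # placeholder credentials

output = server.invoke("system-get-version")
if output.results_status() == "failed":
    print("Couldn't query version:", output.results_reason())
else:
    # Returns a string like "NetApp Release 8.0.2P6 7-Mode: ..."
    print(output.child_get_string("version"))
```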

If you’re running some of these older ONTAP releases proceed with caution!

I’ve also noticed that there is now a System Manager build which runs natively on Mac OS X, although it’s not officially supported – how many people will use it at their own risk, I wonder?

Zerto’s Virtual Replication 2.0 – first looks


In this article I’m going to talk about Zerto, a data protection company specialising in virtualized and cloud infrastructures, which I recently saw as part of Storage Field Day #2. They’ve presented twice before at Tech Field Days (as part of their launch in June 2011 and again in Feb 2012) so I was interested to see what new developments (if any) were in store for us. In their own words:

Zerto provides large enterprises with data replication solutions designed specifically for virtualized infrastructure and the cloud. Zerto Virtual Replication is the industry’s first hypervisor-based replication solution for tier-one applications, replacing traditional array-based BC/DR solutions that were not built to deal with the virtual paradigm.

When I first heard the above description I couldn’t help but think of VMware’s SRM product, which has been available since June 2008. Zerto’s carefully worded statement is correct in that SRM relies on storage array replication for maximum functionality, but I still think it’s slightly disingenuous. To be fair VMware are equally disingenuous when they claim “the only truly hypervisor level replication engine available today” for their vSphere Replication technology – marketing will be marketing! 🙂 Later in this article I’ll clarify the differences between these products but let’s start by looking at what Zerto offers.

Zerto offer a product called Zerto Virtual Replication which integrates with vCenter to replicate your VMware VMs to one or more sites in a simple and easy-to-use manner. Since July 30th 2012, when v2.0 was released, it supports replication to various clouds along with advanced features such as multisite replication and vCloud Director compatibility. Zerto are on an aggressive release schedule given that the initial release (which won ‘Best of Show’ at VMworld 2011) was only a year earlier, but in a fast moving market that’s a good thing. For an entertaining 90 second introduction which explains what it offers better than I could, check out the video below from the company’s website:

Just as server virtualization opened up possibilities by abstracting the guest OS from the underlying hardware, so data replication can benefit from moving ‘up the stack’, away from the storage array hardware and into the hypervisor. The extra layer of abstraction lifts certain constraints related to the storage layer:

  • Array agnostic – you can replicate between dissimilar storage arrays (for example Netapp at one end and EMC at the other). For both cloud and DR scenarios this could be a ‘make or break’ distinction compared to traditional array replication which requires similar systems at both ends. In fact you can replicate to local storage if you want – if you’re one of the growing believers in the NoSAN movement that could be useful…
  • Storage layout agnostic – because you choose which VMs to replicate rather than which volume/LUN on the array you’re less constrained when designing or maintaining your storage layout. When replicating you can also change between thin and thick provisioning, or from SAN to NAS, or from one datastore layout to another. A typical use case might be to replicate from thick at the source to thin provisioning at the DR location for example. There is a definite trend towards VM-aware storage and ditching LUN constraints – you see it with VMware’s vVols, storage arrays like Tintri and storage hypervisors like Virsto so having the same liberating concept for DR makes a lot of sense.

Zerto goes further than just being ‘storage agnostic’ as it allows further flexibility;

  • Replicate VMs from vCD to vSphere (or vice versa). vCD to vCD is also supported. This is impressive stuff as it understands the Organization Networks, vApp containers etc and creates whatever’s needed to replicate the VMs.
  • vSphere version agnostic – for example use vSphere 4.1 at one end and vSphere 5.0 at the other. For large companies which can typically lag behind this could be the prime reason to adopt Zerto.

With any replication technology bandwidth and latency are concerns, as is WAN utilisation. Zerto uses virtual appliances on the source and destination hosts (combined with some VMware API calls, not a driver as this article states) and therefore isn’t dependent on changed block tracking (CBT), is storage protocol agnostic (ie you can use FC, iSCSI or NFS for your datastores), and offers compression and optimisation to boot. Zerto provide a profiling tool to ‘benchmark’ the rate of change per VM before you enable replication, thus allowing you to predict your replication bandwidth requirements. Storage I/O Control (SIOC) is not supported today, although Zerto are implementing their own functionality to allow you to limit replication bandwidth. Today it’s done on a ‘per site’ basis, although there’s no scheduling facility so you can’t set different limits during the day or at weekends.
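
Zerto’s profiling tool does that measurement for you, but the underlying arithmetic is simple enough to sketch. In the example below the 20 GB/hour change rate and the 2:1 compression ratio are my own illustrative numbers, not Zerto figures:

```python
def required_bandwidth_mbps(change_rate_gb_per_hour: float,
                            compression_ratio: float = 2.0) -> float:
    """Sustained WAN bandwidth needed to keep up with a given change rate."""
    bytes_per_second = change_rate_gb_per_hour * 1024**3 / compression_ratio / 3600
    return bytes_per_second * 8 / 1e6  # bytes/s -> megabits/s

# e.g. a VM churning 20 GB/hour, assuming 2:1 compression
print(f"{required_bandwidth_mbps(20):.1f} Mbit/s")  # roughly 24 Mbit/s
```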

VMware’s vSphere is the only hypervisor supported today, although we were told the roadmap includes others (but no date was given). With Hyper-V v3 getting a good reception I’d expect to see support for it sooner rather than later, and that could open up some interesting options.

Zerto’s Virtual Replication vs VMware’s SRM

Let’s revisit that claim that Zerto is the “industry’s first hypervisor-based replication solution for tier-one applications“. With the advent of vSphere 5.1 VMware now have two solutions which could be compared to Zerto – vSphere Replication and SRM. The former is bundled free with vSphere but is not comparable – it’s quite limited (no orchestration, testing, reporting or enterprise-class DR functions) and only really intended for data protection not full DR. SRM on the other hand is very much competition for Zerto although for comparable functionality you require array level replication.

When I mentioned SRM to the Zerto guys they were quick to say it’s an apples-to-oranges comparison, which to a point is true – with Zerto you specify individual VMs or groups of VMs to replicate, whereas with SRM you’re still stuck specifying volumes or LUNs at the array level. Both products have their respective strengths but there’s a large overlap in functionality and many people will want to compare them. SRM is very well known and has the advantage of VMware’s backing and promotion – having a single ‘throat to choke’ is an attractive proposition for many. I’m not going to list the differences because others have already done all the hard work:

Zerto compared to vSphere Replication – the official Zerto blog

Zerto compared to Site Recovery Manager – a great comparison by Marcel Van den Berg (also includes VirtualSharp’s Reliable DR)

Looking through the comparisons with SRM there are quite a few areas where Zerto has an advantage, although to put it in context check out the pricing comparison at the end of this article:
NOTE: Since the above comparison was written, SRM v5.1 has added support for vSphere Essentials Plus but everything else remains accurate

  • RTO in the low seconds rather than 15 mins
  • Compression of replication traffic
  • No resync required after host failures
  • Consistency groups
  • Cloning of the DR VMs for testing
  • Point in time recovery (up to a max of 5 days)
  • The ability to flag a VMDK as a pagefile disk. In this instance it will be replicated once (and then stopped) so that during recovery a disk is mounted but no replication bandwidth is required. SRM can’t do this and it’s very annoying!
  • vApps supported (and automatically updated when the vApp changes)
  • vCloud Director compatibility

If you already have storage array replication then you’ll probably want to evaluate Zerto and SRM.
If you don’t have (or want the cost of) array replication, or want the flexibility of specifying everything in the hypervisor, then Zerto is likely to be the best solution.

DR to the Cloud (DRaaS)

Of particular interest to some customers and a huge win for Zerto is the ability to recover to the cloud. Building on the flexibility to replicate to any storage array and to abstract the underlying storage layout allows you to replicate to any provider who’s signed up to Zerto’s solution. Multisite and multitenancy functionality was introduced in v2.0 and today there are over 30 cloud providers signed up including some of the big guys like Terremark, Colt, and Bluelock. Zerto have tackled the challenges of a single appliance (providers obviously wouldn’t want to run one per customer) providing secure multi-tenant replication with resource management included.

vCloud Director compatibility is another feather in Zerto’s cap, especially when you consider that VMware’s own ‘vCloud Suite’ lags behind (SRM only has limited support for vCD). One has to assume that this will be a short term advantage as VMware have promised tighter integration between their products.

Pricing

Often this is what it comes down to – you can have the best solution in the market but if you’re charging the most then that’s what people expect. Zerto are targeting the enterprise so maybe it shouldn’t be a surprise that they’re also priced at the top end of the market. The table below shows pricing for SRM (both Standard and Enterprise edition) and Zerto;

SRM Standard               SRM Enterprise             Zerto Virtual Replication
$195 per VM                $495 per VM                $745 per VM

As you can see Zerto comes at a significant premium over SRM. When making that comparison you may need to factor in the cost of storage array replication, as SRM using vSphere Replication is severely limited. These are all list prices so get your negotiating hat on! We were told that Zerto were seeing good adoption from customers of all sizes, from 15 VMs through to service providers.
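
As a back-of-the-envelope illustration of that point, here’s the licence arithmetic for a hundred protected VMs – the per-VM figures come from the table above, but the array replication licence cost is a made-up placeholder, so treat the output as indicative only:

```python
def licence_cost(vms: int, per_vm_usd: float,
                 array_replication_usd: float = 0.0) -> float:
    """Total list-price licence cost for a given number of protected VMs."""
    return vms * per_vm_usd + array_replication_usd

vms = 100
# SRM needs array replication for comparable functionality; $50k is a placeholder.
print("SRM Standard + array replication:", licence_cost(vms, 195.0, 50_000.0))
print("Zerto Virtual Replication:", licence_cost(vms, 745.0))
```

Depending on what your array replication licences actually cost, the gap between the two may be narrower than the headline per-VM prices suggest.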

Final thoughts

I’ve not used SRM in production since the early v1.0 days and I’ve not used Zerto in production either, so my thoughts are based purely on what I’ve read and been shown. I was very impressed with Zerto’s solution, which certainly looks very polished and obviously trumps SRM in a few areas – hence why I took the time to investigate and write up my findings in this blogpost. From a simple and quick appliance-based installation (which was shown to us in a live demo) through to the GUI and even the pricing model, Zerto’s aim is to keep things simple and it looks as if they’ve succeeded (despite quite a bit of complexity under the hood). If you’re in the market for a DR solution take time to review the comparison with SRM above and see which fits your requirements and budget. Given how comprehensive the feature set is I wouldn’t be surprised to see this come out on top over SRM for many customers, despite VMware’s backing for SRM and the cost differential.

Multi-hypervisor management could be a ‘killer feature’ for Zerto. It would distinguish the product for the foreseeable future (I’d be surprised to see this in VMware’s roadmap anytime soon despite their more hypervisor-friendly stance) and needs to happen before VMware bake comparable functionality into the SRM product. Looking at the way VMware are increasingly bundling software to leverage the base vSphere product, there’s a risk that SRM features work their way down the stack and into lower priced SKUs – good for customers but a challenge for Zerto. There are definitely intriguing possibilities though – how about replicating from VMware to Hyper-V, for example? As the use of cloud infrastructure increases, the ability to run across heterogeneous infrastructures will become key, and Zerto have a good start in this space with their DRaaS offering. If you don’t want to wait and you’re interested in multi-hypervisor management (and conversion) today, check out Hotlink (thanks to my fellow SFD#2 delegates for that tip).

I see a slight challenge in Zerto targeting the enterprise specifically. Typically these larger companies will already have storage array replication and are more likely to have a mixture of virtual and physical servers, and will therefore still need array functionality for their physical applications. This erodes the value proposition for Zerto. Furthermore, if you have separate storage and virtualisation teams then moving replication away from the storage array could break accepted processes, not to mention put noses out of joint! Replication at the storage array is a well accepted and mature technology, whereas virtualisation solutions still have to prove themselves in some quarters. In contrast VMware’s SRM may be seen to offer the best of both worlds by offering the choice of both hypervisor and/or array replication – albeit with a significantly less powerful replication engine (if using vSphere Replication) and with the aforementioned constraints around replicating LUNs rather than VMs. Zerto also have the usual challenges around convincing enterprises that, as a ‘startup’, they’re able to provide the expected level of support – for an eloquent answer to this read ‘Small is beautiful’ by Sudheesh Nair on the Nutanix blog (Nutanix face the same challenges).

Disclosure: the Storage Field Day #2 event is sponsored by the companies we visit, including flight and hotel, but we are in no way obligated to write (either positively or negatively) about the sponsors.

Further Reading

Steve Foskett and Gabrie Van Zanten discuss Zerto (from VMworld 2012)

Good introduction to Zerto v1.0 (Marcel Van Den Berg)

Zerto and vSphere Host replication – what’s the difference?

Zerto vs SRM (and VirtualSharp’s ReliableDR)

Step away from the array – fun Zerto blog in true Dr Seuss style

vBrownbag Zerto demo from VMworld Barcelona 2012

Zerto replication and disaster recovery the easy way

Take two Zerto and call me in the morning (Chris Wahl from a previous TFD)

Musings of Rodos – Zerto (Rodney Haywood from a previous TFD)

451 group’s report on Zerto (March 2012)

Storage field day #2 coverage

Twitter contacts

@zertocorp – the official Zerto twitter account

Shannon Snowdon – Senior Technical Marketing Architect

@zertojjones – another Zerto employee