Summary: In an earlier blogpost I explored the idea that storage is fungible, but I’ve also heard fungibility mentioned recently in relation to cloud computing as a whole. If cloud computing is becoming a commodity (which is a separate argument), why shouldn’t it be traded like any other commodity, with a marketplace, brokers, futures trading and so on? Are we going to see cloud compute traded much like gas or electricity?
Strategic Blue’s presentation on ‘Cloud brokers’ at CloudCamp London back in October 2012 centered around this exact idea and generated plenty of animated discussion on the night. Some felt that this was a pipe dream whereas others felt it was inevitable. My wife’s a commodity trader so after returning home I had various discussions with her trying to understand the concepts of commodity trading. It’s harder than I thought! The more technically minded of us immediately started thinking about issues like compatibility, interoperability, and service maturity but apparently (and somewhat surprisingly) these are all irrelevant when it comes to a true marketplace. It’s not the current IT providers that will define and run the cloud computing markets which is an idea that takes a bit of getting used to!
In a fascinating article, What’s required for a utility market to develop?, Jack Clark identified and scored the various criteria which need to be satisfied before cloud computing can be considered a utility. He gave it 7/10 (which is probably higher than I would have expected) but two of the requirements in particular struck a chord with me;
a transmission and distribution capability – represented in the cloud by datacentres and networks
a market mechanism – typically an exchange (like the FTSE or NASDAQ)
In this article I’m going to talk about Zerto, a data protection company specialising in virtualized and cloud infrastructures who I recently saw as part of Storage Field Day #2. They’ve presented twice before at Tech Field Days (as part of their launch in June 2011 and Feb 2012) so I was interested to see what new developments (if any) were in store for us. In their own words;
Zerto provides large enterprises with data replication solutions designed specifically for virtualized infrastructure and the cloud. Zerto Virtual Replication is the industry’s first hypervisor-based replication solution for tier-one applications, replacing traditional array-based BC/DR solutions that were not built to deal with the virtual paradigm.
When I first heard the above description I couldn’t help but think of VMware’s SRM product which has been available since June 2008. Zerto’s carefully worded statement is correct in that SRM relies on storage array replication for maximum functionality but I still think it’s slightly disingenuous. To be fair VMware are equally disingenuous when they claim “the only truly hypervisor level replication engine available today” for their vSphere Replication technology – marketing will be marketing! 🙂 Later in this article I’ll clarify the differences between these products but let’s start by looking at what Zerto offers.
Zerto offer a product called Zerto Virtual Replication which integrates with vCenter to replicate your VMware VMs to one or more sites in a simple and easy-to-use manner. Since July 30th 2012, when v2.0 was released, it supports replication to various clouds along with advanced features such as multisite replication and vCloud Director compatibility. Zerto are on an aggressive release schedule given that the initial release (which won ‘Best of Show’ at VMworld 2011) was only a year earlier, but in a fast-moving market that’s a good thing. For an entertaining 90-second introduction which explains what it offers better than I could, check out the video below from the company’s website;
Just as server virtualization opened up possibilities by abstracting the guest OS from the underlying hardware so data replication can benefit from moving ‘up the stack’ away from the storage array hardware and into the hypervisor. The extra layer of abstraction lifts certain constraints related to the storage layer;
Array agnostic – you can replicate between dissimilar storage arrays (for example Netapp at one end and EMC at the other). For both cloud and DR scenarios this could be a ‘make or break’ distinction compared to traditional array replication which requires similar systems at both ends. In fact you can replicate to local storage if you want – if you’re one of the growing believers in the NoSAN movement that could be useful…
Storage layout agnostic – because you choose which VMs to replicate rather than which volume/LUN on the array you’re less constrained when designing or maintaining your storage layout. When replicating you can also change between thin and thick provisioning, or from SAN to NAS, or from one datastore layout to another. A typical use case might be to replicate from thick at the source to thin provisioning at the DR location for example. There is a definite trend towards VM-aware storage and ditching LUN constraints – you see it with VMware’s vVols, storage arrays like Tintri and storage hypervisors like Virsto so having the same liberating concept for DR makes a lot of sense.
Zerto goes further than just being ‘storage agnostic’ as it allows further flexibility;
Replicate VMs from vCD to vSphere (or vice versa). vCD to vCD is also supported. This is impressive stuff as it understands the Organization Networks, vApp containers etc and creates whatever’s needed to replicate the VMs.
vSphere version agnostic – for example use vSphere 4.1 at one end and vSphere 5.0 at the other. For large companies which can typically lag behind this could be the prime reason to adopt Zerto.
With any replication technology bandwidth and latency are concerns, as is WAN utilisation. Zerto uses virtual appliances on the source and destination hosts (combined with some VMware API calls, not a driver as this article states) and therefore isn’t dependent on changed block tracking (CBT), is storage protocol agnostic (i.e. you can use FC, iSCSI or NFS for your datastores) and offers compression and optimisation to boot. Zerto provide a profiling tool to ‘benchmark’ the rate of change per VM before you enable replication, thus allowing you to predict your replication bandwidth requirements. Storage I/O control (SIOC) is not supported today, although Zerto are implementing their own functionality to allow you to limit replication bandwidth. Today it’s done on a ‘per site’ basis, although there’s no scheduling facility so you can’t set different limits during the day or at weekends.
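As a rough illustration of the kind of sizing maths that profiling enables, here’s a sketch (my own back-of-the-envelope model, not Zerto’s actual tool; the numbers and compression ratio are invented) that turns measured per-VM change rates into a sustained WAN bandwidth estimate:

```python
# Hypothetical sketch: estimate sustained replication bandwidth from
# per-VM change rates. All figures here are illustrative placeholders.

def required_bandwidth_mbps(change_rates_gb_per_hour, compression_ratio=0.5):
    """Estimate the WAN bandwidth (Mbit/s) needed to keep up with the
    combined change rate of a set of VMs, after compression."""
    total_gb_per_hour = sum(change_rates_gb_per_hour) * compression_ratio
    # Convert GB/hour to Mbit/s: GB * 8 * 1024 Mbit, spread over 3600 seconds
    return total_gb_per_hour * 8 * 1024 / 3600

# Measured change rate per VM in GB/hour (made-up profiling results)
vms = [2.0, 0.5, 1.5]
print(round(required_bandwidth_mbps(vms), 1))  # → 4.6 Mbit/s sustained
```

Obviously real change rates are bursty rather than flat, which is exactly why profiling before enabling replication (and ideally per-schedule bandwidth limits) matters.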
VMware’s vSphere is the only hypervisor supported today although we were told the roadmap includes others (but no date was given). With Hyper-V v3 getting a good reception I’d expect to see support for it sooner rather than later, and that could open up some interesting options.
Zerto’s Virtual Replication vs VMware’s SRM
Let’s revisit that claim that Zerto is the “industry’s first hypervisor-based replication solution for tier-one applications“. With the advent of vSphere 5.1 VMware now have two solutions which could be compared to Zerto – vSphere Replication and SRM. The former is bundled free with vSphere but is not comparable – it’s quite limited (no orchestration, testing, reporting or enterprise-class DR functions) and only really intended for data protection not full DR. SRM on the other hand is very much competition for Zerto although for comparable functionality you require array level replication.
When I mentioned SRM to the Zerto guys they were quick to say it’s an apples-to-oranges comparison which to a point is true – with Zerto you specify individual or groups of VMs to replicate whereas with SRM you’re still stuck specifying volumes or LUNs at array level. Both products have their respective strengths but there’s a large overlap in functionality and many people will want to compare them. SRM is very well known and has the advantage of VMware’s backing and promotion – having a single ‘throat to choke’ is an attractive proposition for many. I’m not going to list the differences because others have already done all the hard work;
Looking through the comparisons with SRM there are quite a few areas where Zerto has an advantage, although to put it in context check out the pricing comparison at the end of this article. NOTE: since the above comparison was written SRM v5.1 has added support for vSphere Essentials Plus, but everything else remains accurate;
RTO in the low seconds rather than 15 mins
Compression of replication traffic
No resync required after host failures
Cloning of the DR VMs for testing
Point in time recovery (up to a max of 5 days)
The ability to flag a VMDK as a pagefile disk. In this instance it will be replicated once (and then stopped) so that during recovery a disk is mounted but no replication bandwidth is required. SRM can’t do this and it’s very annoying!
vApps supported (and automatically updated when the vApp changes)
vCloud Director compatibility
If you already have storage array replication then you’ll probably want to evaluate Zerto and SRM. If you don’t have (or want the cost of) array replication, or want the flexibility of specifying everything in the hypervisor, then Zerto is likely to be the best solution.
DR to the Cloud (DRaaS)
Of particular interest to some customers and a huge win for Zerto is the ability to recover to the cloud. Building on the flexibility to replicate to any storage array and to abstract the underlying storage layout allows you to replicate to any provider who’s signed up to Zerto’s solution. Multisite and multitenancy functionality was introduced in v2.0 and today there are over 30 cloud providers signed up including some of the big guys like Terremark, Colt, and Bluelock. Zerto have tackled the challenges of a single appliance (providers obviously wouldn’t want to run one per customer) providing secure multi-tenant replication with resource management included.
Often this is what it comes down to – you can have the best solution in the market but if you’re charging the most then that’s what people expect. Zerto are targeting the enterprise so maybe it shouldn’t be a surprise that they’re also priced at the top end of the market. The table below shows pricing for SRM (both Standard and Enterprise edition) and Zerto;
SRM Standard – $195 per VM
SRM Enterprise – $495 per VM
Zerto Virtual Replication – $745 per VM
As you can see Zerto comes at a significant premium over SRM. When making that comparison you may need to factor in the cost of storage array replication, as SRM using vSphere Replication is severely limited. These are all list prices so get your negotiating hat on! We were told that Zerto were seeing good adoption from customers of all sizes, from 15 VMs through to service providers.
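To make the premium concrete, here’s a trivial sketch comparing list-price totals for a hypothetical 100-VM estate. The array replication licence figure is an invented placeholder, since that cost varies wildly by storage vendor:

```python
# Back-of-the-envelope comparison using the list prices above.
# The $50k array replication licence is a made-up placeholder.

def total_cost(num_vms, per_vm_price, fixed_costs=0):
    """List-price total: per-VM licences plus any fixed costs."""
    return num_vms * per_vm_price + fixed_costs

vms = 100
zerto = total_cost(vms, 745)
# SRM Enterprise plus a hypothetical array replication licence
srm_ent = total_cost(vms, 495, fixed_costs=50_000)
print(zerto, srm_ent)  # → 74500 99500
```

The point being that once array replication licensing enters the picture the gap can narrow or even invert, which is why the comparison has to be made per-environment rather than on per-VM price alone.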
I’ve not used SRM in production since the early v1.0 days and I’ve not used Zerto in production either, so my thoughts are based purely on what I’ve read and been shown. I was very impressed with Zerto’s solution which certainly looks very polished and obviously trumps SRM in a few areas – which is why I took the time to investigate and write up my findings in this blogpost. From a simple and quick appliance-based installation (which was shown in a live demo to us) through to the GUI and even the pricing model, Zerto’s aim is to keep things simple and it looks as if they’ve succeeded (despite quite a bit of complexity under the hood). If you’re in the market for a DR solution take time to review the comparison with SRM above and see which fits your requirements and budget. Given how comprehensive the feature set is I wouldn’t be surprised to see this come out on top over SRM for many customers, despite VMware’s backing for SRM and the cost differential.
Multi-hypervisor management could be a ‘killer feature’ for Zerto. It would distinguish the product for the foreseeable future (I’d be surprised to see this in VMware’s roadmap anytime soon despite their more hypervisor-friendly stance) and needs to happen before VMware bake comparable functionality into the SRM product. Looking at the way VMware are increasingly bundling software to leverage the base vSphere product there’s a risk that SRM features work their way down the stack and into lower-priced SKUs – good for customers but a challenge for Zerto. There are definitely intriguing possibilities though – how about replicating from VMware to Hyper-V for example? As the use of cloud infrastructure increases the ability to run across heterogeneous infrastructures will become key and Zerto have a good start in this space with their DRaaS offering. If you don’t want to wait and you’re interested in multi-hypervisor management (and conversion) today check out Hotlink (thanks to my fellow SFD#2 delegates for that tip).
I see a slight challenge in Zerto targeting the enterprise specifically. Typically these larger companies will already have storage array replication and are more likely to have a mixture of virtual and physical, and therefore will still need array functionality for physical applications. This erodes the value proposition for Zerto. Furthermore if you have separate storage and virtualisation teams then moving replication away from the storage array could break accepted processes, not to mention put noses out of joint! Replication at the storage array is a well accepted and mature technology whereas virtualisation solutions still have to prove themselves in some quarters. In contrast VMware’s SRM may be seen to offer the best of both worlds by offering the choice of both hypervisor and/or array replication – albeit with a significantly less powerful replication engine (if using vSphere Replication) and with the aforementioned constraints around replicating LUNs rather than VMs. Zerto also have the usual challenges around convincing enterprises that as a ‘startup’ they’re able to provide the expected level of support – for an eloquent answer to this read ‘Small is beautiful’ by Sudheesh Nair on the Nutanix blog (who face the same challenges).
Disclosure: the Storage Field Day #2 event is sponsored by the companies we visit, including flight and hotel, but we are in no way obligated to write (either positively or negatively) about the sponsors.
With the launch of the new vCloud Suite along with new VMware certification tracks there’s no shortage of technologies to learn so I’ve been building up my home lab in anticipation of some long hours burning the midnight oil. While doing this I’ve been mulling over a simple (I thought) question;
Why buy hardware to build home labs? Can’t we use ‘the cloud’ for our lab requirements?
I spent a while investigating the current marketplace and while some areas are well covered some are just getting started.
As an infrastructure guy I’m interested in the lower half of the IT stack, principally from the hypervisor downwards (I expect that some infrastructure professionals will need to focus on the top part of the stack in the future, but that’s a different post). There are plenty of cloud services where you can quickly spin up traditional guest OS or application instances (any IaaS/PaaS/SaaS provider; for example Turnkey Linux do some great OSS stuff) but a more limited number that let you provision the lower half of the stack in a virtual lab;
At the network layer Cisco’s learning labs offer cloud labs tailored to the Cisco exams (primarily CCNA and CCNP) and are sold as bundles of time per certification track. In October last year Juniper launched the Junosphere Labs, an online environment that you can use for testing or training.
For storage EMC provide labs and this year their internal E-Lab is going virtual and a private cloud is in the works (thanks to vSpecialist Burak Uysal for the info). Scott Drummunds has a great post illustrating what these labs offer – it’s pretty impressive (and includes some VMware functionality). These labs let partners test and learn the EMC product portfolio by setting up ‘virtual’ storage arrays, which is something you’d probably struggle to do in most labs. Other storage vendors such as Netapp offer virtual storage appliances (or simulators) but you’ll need to use a separate IaaS service to run them – there’s no public cloud offering.
According to this post on LinkedIn, HP are also looking at the option of publicly available virtual labs, although I couldn’t find any information on what they’ll include.
While not strictly cloud labs (depending on your definition of a cloud service) you could rent space and/or infrastructure in someone else’s datacenter – recently I’ve seen companies start to specialize in offering prebuilt ‘lab’ environments which you can rent for training/testing purposes;
Several bloggers and vExperts (Mike Laverick’s MiaaS, Al Renouf and Justin Paul) have offered access over the internet to labs they’ve built either at home or using company facilities. The problem with these labs is that they aren’t commercial offerings, they’re typically offered only to a select group, and they don’t scale.
Having had some time to digest the announcements from VMworld 2012 (day one, day two) I was reminded of the children’s story about the hare and the tortoise. Yes it’s another ‘analogy post’ but otherwise technology can be so bland! 🙂
The story tells of a hare that can run so fast, no-one can beat him. The tortoise, slow moving by nature, challenges the hare to a race and the hare, laughing, accepts knowing he can beat the tortoise with ease. On the day of the race they line up. Bang! The starter’s pistol goes and off goes the hare, charging into the lead. After a while he looks back to see the tortoise miles behind. Seeing how much time he has he decides to take a quick sleep under a nearby tree. When he wakes however he realises that the tortoise has passed him by and he’s unable to catch up so loses the race. While not a perfect analogy I think VMware is the hare and their customers the slow moving tortoise (no, the tortoise is not Microsoft, how unkind…). VMware are creating technologies and an ecosystem at a speed which customers are struggling to adopt, and much of this week’s developments are because of this imbalance (or ‘virtual stall’ as Andi Mann coined it). Pat Gelsinger, the incoming CEO at VMware was quoted comparing the company to ‘an adolescent who has grown too quickly’ because their operational rigour hasn’t kept pace with the company’s growth. It’s not only customers who are grappling with the pace of change.
Let’s look at pricing. VMware have binned the consumption-based vRAM licencing scheme and reverted to the per-socket model used prior to vSphere 5. This was an unpopular scheme and with Microsoft’s Hyper-V hot on VMware’s heels I think VMware realised that to stay competitive they had to react. While many applauded the u-turn it’s been pointed out that the future of cloud is all about charging for usage (here and here), so maybe VMware were just ahead of their target market? If the dynamic environments promised by IaaS were commonplace then maybe Microsoft would have been amending their licencing rather than VMware making an embarrassing (though brave) climbdown.
VMware still have the dominant virtualisation portfolio, certainly within the enterprise, but they need to leverage it to maintain their premium pricing and hence profitability. Products such as vCloud Director, vFabric, and vCOPs haven’t seen the uptake VMware were hoping for and without these ‘value add’ tiers the core virtualisation product isn’t remarkable enough to counter the threat from rivals like Microsoft and the open source community. People have been wondering when Hyper-V and Xen will be ‘good enough‘ for a couple of years and many think the time is now. VMware have the technology and the vision but many customers aren’t ready to implement it. We’re still talking about only 60% of server workloads being virtual, and getting tier 1 apps like Oracle virtual is taking a long time (due to FUD and Oracle’s desire to own the whole stack as well as technical factors). Automate my workflows? My company are still struggling to even define new manual workflows and processes given the huge changes that virtualisation brings to any large company. Move to the cloud? Half our production servers are still physical. VMware still have a strong market position but the longer customers take to move to the new technologies, the greater the opportunity for competitors. The hypervisor is already a commodity – if customers take many years to move to the next stage then the management stack that VMware are now pushing may also be a commodity.
VMware need their customers to accelerate their move to the cloud before their product line becomes a commodity. How are VMware tackling this?
They’re integrating their products to ease implementation. The newly announced vCloud Suite is just bundling and minor compatibility tweaks but over time I imagine it’ll become a much more joined up offering. Server virtualisation was an easy first step whereas ‘the cloud’ requires more than just vCloud Director and that’s not well understood.
They’re bundling vCD with Enterprise+ (free therefore to a good chunk of their customer base) and making it significantly cheaper for customers on lower licencing levels (especially now it’s per socket, not per VM).
Obviously VMware can use the above actions to spur adoption to their cloud specifically (to speed up the tortoise in my analogy!) but mainly it’s market forces which will drive the change – ‘cloud’ is one of the hottest areas and is set to grow. Speed may be the key – if the enterprise masses don’t migrate to the cloud for another 5-10 years there will be increased competition and VMware risk losing the premium value in their products and potentially their stellar profits. If we’re still talking about virtualising tier 1 apps, the year of VDI, and how to integrate a full cloud stack in another couple of years (which I suspect we will be) it’ll be interesting to see if VMware can maintain their place at the top of the podium. Despite cloud being considered mainstream I think there are many who remain tortoises, plodding along. And in the original Aesop’s fable it’s the ‘slow and steady’ which win…
Right Here, Right Now is the tagline for this year’s conference. To use another Fatboy Slim title I’d say “Halfway Between the Gutter and the Stars” is more appropriate.
A recent project at work has required me to implement Microsoft’s Active Directory Federation Services (ADFS) which has been an interesting change from my usual technologies. It’s a mature product (it was released with Windows 2003 and further refined in Windows 2008) designed to allow you to ‘federate’ your Active Directory – in other words to allow third parties to leverage your internal AD in a secure manner. At first I thought this project was a distraction from the skillset I’m working towards (IaaS infrastructure with vCloud Director, View etc) but I’ve since come to realize that federated identity is an essential ingredient in the cloud recipe and one which needs to be understood.
Let me give you an example. My company decided to upgrade an aging training application and for various reasons we outsourced the solution to a third party developer. The idea was that they’d develop (and host) all the training materials and offer it as a service over the web to our customers (SaaS) thus requiring no resource from our internal teams. The only hitch in the plan was the business requirement that the customer, who already has login details for our web portal, should use the same credentials for this remotely hosted training solution. Those credentials are held in an internal AD database so we used ADFS to ‘publish’ them to the third party. Voila! The end users can now login to their training solution unaware that the credentials they enter are authenticated against my AD in the background. The resulting (much simplified!) architecture is shown in the following diagram;
It’s important to realise that there are two distinct actions going on during a login;
Authentication – The AD acts as the identity provider (IdP), making sure the user is who they say they are.
Authorisation – Once authenticated ADFS generates a ‘claim’ which it sends to the third party and this dictates what actions the user can take in the application.
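To make that two-step split concrete, here’s a toy sketch of the pattern. This is my own illustration, not how ADFS actually works internally (ADFS issues signed XML tokens over dedicated protocols); the shared secret and claim format are invented for the example. The key idea is that the service provider never sees the credentials, only a claim it can verify came from the IdP:

```python
# Toy federation sketch: the IdP issues a signed "claim" and the SP
# trusts it only if the signature verifies. Illustrative only; real
# federation uses signed SAML/JWT tokens, not this home-grown format.
import hashlib
import hmac
import json

# Assumed to be exchanged out-of-band when the federation trust is set up
SHARED_SECRET = b"idp-and-sp-exchange-this-out-of-band"

def issue_claim(user, roles):
    """IdP side: after authenticating the user (elided), emit a signed claim."""
    payload = json.dumps({"user": user, "roles": roles}).encode()
    sig = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    return payload, sig

def verify_claim(payload, sig):
    """SP side: trust the claim only if the signature checks out."""
    expected = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        raise ValueError("claim rejected")
    return json.loads(payload)  # authorisation decisions use these roles

payload, sig = issue_claim("alice", ["training.viewer"])
print(verify_claim(payload, sig)["roles"])  # → ['training.viewer']
```

Tamper with the payload and verification fails, which is exactly the property that lets the third party rely on my AD without ever touching it directly.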
Of course my example is very simplistic but the principle of allowing identities to be shared securely across security boundaries (such as disparate networks and applications) is critical to cloud services. Security is one of the big challenges in the cloud and federation allows you to keep your crown jewels (your user details) secure while still consuming remote services. It’s also important as the number of mobile devices used to access services increases.
Consumer or provider?
The ADFS example above is just one of the many possible scenarios that federated identity must handle. In every federation scenario there is an identity provider (IdP) and a consumer or service provider (SP, sometimes referred to as a relying party). In the example above my company are the identity provider (our AD holds the identity details) and the consumer is the third party developer who provides a service.
The first choice therefore is whether you’re just going to consume other people’s federated identity services and/or act as an identity provider yourself.
You’ve probably been a consumer of federated identity for a while without even realising it. Every time you sign into a website using your Twitter or Facebook login (for example) you’re consuming the federated identity service offered by Twitter and Facebook, likewise when you post a comment on a blog which requires a WordPress login. Maybe you’ve logged into a variety of Microsoft services using your Windows Live ID? Same thing.
One of the early commercial attempts at federated identity was Microsoft’s Passport, which set out to be a universal authentication mechanism for web commerce, but security concerns limited its adoption and resulted in a proliferation of alternative services (Windows Live ID, Google ID, and Apple ID to name a few well-known ones). Here are a few of the most popular federation protocols in use today;
OpenID (which was formed by Facebook, Google, IBM, Microsoft, PayPal, VeriSign and Yahoo)
OAuth (similar to OpenID but used for API delegation, used by Twitter, Salesforce, Google, Facebook etc)
SAML (the most widely used federation protocol used by ADFS, Horizon App Mgr, Centrix Workspace and others)
SCIM (the newest and still evolving standard – v1 was ratified in Dec 2011)
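For the curious, here’s roughly what the SAML approach looks like on the wire: a hand-trimmed (and unsigned, so not schema-valid or production-usable) assertion fragment, with stdlib code to pull the identity and attributes out of it. The names and values are invented for illustration:

```python
# Rough illustration of what a SAML assertion carries. This fragment is
# hand-written and heavily trimmed; real assertions are signed and carry
# conditions, audience restrictions, timestamps etc.
import xml.etree.ElementTree as ET

SAML_NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}

assertion = """
<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
  <saml:Subject><saml:NameID>alice@example.com</saml:NameID></saml:Subject>
  <saml:AttributeStatement>
    <saml:Attribute Name="role">
      <saml:AttributeValue>training.viewer</saml:AttributeValue>
    </saml:Attribute>
  </saml:AttributeStatement>
</saml:Assertion>
"""

root = ET.fromstring(assertion)
# Who the IdP says this is (authentication)...
name_id = root.find(".//saml:NameID", SAML_NS).text
# ...and the attributes the SP uses to decide what they can do (authorisation)
roles = [v.text for v in root.findall(".//saml:AttributeValue", SAML_NS)]
print(name_id, roles)  # → alice@example.com ['training.viewer']
```

You can see both federation roles reflected in the structure: the Subject identifies the user and the AttributeStatement carries the claims the relying party acts on.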
So consuming is commonplace but why would you want to become an identity provider and federate your identity out to the world?
You might be reading this article and thinking ‘I don’t offer a service to people on the internet so I’ve no need to provide identity federation’. If all your infrastructure needs are met internally that might be true. What if you want to use public or hybrid cloud? If you want your corporate users to securely use their company login to access SaaS providers like SalesForce.com or Google Apps you’ll need to become an identity provider.
If you’re an internet giant like Google, Microsoft, or Apple you can develop your own identity framework but for everyone else there are frameworks you can quickly ‘bolt on’ to your existing infrastructure which allow you to offer federated services;
The purpose of the above applications varies even though they all provide identity federation. Most include SSO functionality but some are cloud based and others are installed locally (some are deployed via appliances). Some provide ‘application store’ or portal/workspace features which are much like the Citrix access you’re probably familiar with but for both internal and cloud applications.
I was already familiar with the Centrix solution after seeing one of the company founders, Lisa Hammond, give a very good presentation at the recent July 2012 London VMUG. The idea of a converged portal presenting SSO access to all your apps, wherever they reside, is compelling and Centrix has been doing this for quite a while prior to VMware’s entry into the market.
How is this relevant to me as a virtualisation admin?
You’ll have spotted the last entry above, VMware’s Horizon Application Manager. Horizon was released in May 2011 as the first component in the ‘Project Horizon’ vision first previewed at VMworld 2010. It was developed from VMware’s acquisition of TriCipher in August 2010, a company which previously developed a federated identity solution known as MyOneLogin. To quote VMware’s press release at the time;
VMware’s acquisition of TriCipher lets us integrate identity-based security and managed access to applications hosted in the cloud or on-premise. Convenient end-user access to applications on any device with security controls for IT lets customers extend their security and control into public cloud environments.
The principles and terminology of federation (IdP, SP, tokens, relying parties, claim rules etc) are largely the same across all the products listed above so I’m glad that by learning ADFS I’ve actually learnt quite a bit about how Horizon works under the hood.
The bottom line is that if you’re going to use cloud services and you want to avoid a security management nightmare you need to understand federated identity. If you don’t understand your options early on you may find yourself putting in a solution which solves your short-term requirements but not your long-term goals, and that could lead to implementing multiple solutions – messy!
This article barely scratches the surface of a very complex subject – ADFS is fine if you’re using Microsoft as your on-premises identity provider, but what if you use another user directory from Oracle or IBM? What about two-factor authentication? What if a third party uses open source Shibboleth and you use ADFS – do they work together? Can you chain authentication systems together and introduce conditional processing? What about multi-tenant clouds and the special challenges they present? Federated authentication is one small part of a wider subject commonly referred to as Identity Management (IDM). I did enough to get our implementation working but it was immediately obvious that it’s a specialised skillset every bit as complex as virtualisation, with multiple products, protocols, compatibilities, design choices and pitfalls. I also found it fascinating to see how the various disconnected services are beginning to be ‘hooked up’ to each other using these distributed mechanisms – there’s a long way to go but this is a growth area no doubt.