Tag Archives: cloud

VMware’s hybrid cloud – a discussion with customers

Summary: Hybrid cloud gets a lot of press and VMware are claiming a stake in a market predicted to grow massively over the next few years. I discuss with customers what vCloud Air can offer businesses and what you need to consider.

On Tuesday 25th November I took part in an online discussion about the hybrid cloud with a couple of vCloud Air customers (Schalk van der Merwe from The Hut Group and Matthew Garrett from Cloud Business) along with VMware’s Rick Munro (Chief Technologist, vCloud, EMEA), and well-known blogger Julian Wood. The discussion lasted around an hour and covered topics such as;

  • Is hybrid cloud just a stepping stone to the public cloud?
  • Is data sovereignty an issue for customers?
  • Are clouds converging to a global platform or diverging into niches (market verticals like healthcare, government etc)?
  • Is the rapid pace of change a challenge when making a business case for cloud?
  • Does adopting cloud lead to changes in established ways of doing business, from a people/process/technology perspective?

You can view an edited version of our discussion below which (luckily for you, dear reader) is only 18 minutes;

If you don’t have time to watch, some of my takeaways were;

  • agility is a key driver of cloud adoption, more so than cost savings (which in my experience aren’t always expected or delivered). Results speak louder than RFPs!
  • it’s not a case of ‘private’ OR ‘public’ OR ‘hybrid’ cloud – you can, and probably should, use all three depending on your use case
  • start small and build on success (just as the DevOps crowd advocate!).
  • hybrid cloud may be a stepping stone, but it’s essential to take a first step
  • vCloud leverages your existing knowledge and skills so is an easy first step

Personally I think hybrid cloud is here to stay. Mainframes are still with us (from the 1950s) and email has been available as a SaaS offering for at least 18 years (Hotmail started in 1996) yet today there are still thousands of companies running their own email systems. For many enterprises ‘hybrid cloud’ will actually mean a multitude of different clouds including vCloud Air, AWS etc, despite the additional management overhead that will introduce. Time will tell!

 

Evolution of the IT Pro (staying relevant in 2014 and beyond)

Summary: The IT function is becoming a broker of services but, until that happens, infrastructure engineers will likely fall into the ‘builder broker’ camp – you’ll need to be able to ‘stitch together’ different services but you’ll also need to build them and understand what’s ‘under the hood’.

For a few years now infrastructure engineers have been hearing how cloud computing is going to change their jobs, potentially putting many out of work. Plenty has been written about whether this will result in a net gain or loss of IT jobs (here, here, and here, plus in one of my first blogposts I talked about changing roles) but whatever your stance it’s undeniable that the nature of IT jobs will change – technology never stands still for long.

This isn’t theoretical or a shift that’ll start in ten years – changes are happening right now.

Gartner recently identified ‘IT as a service broker’ in their top ten technology trends for 2014 and I’d agree with those that say skills such as virtualisation are no longer enough. Here are a few things I’ve been asked for in the last few months, which is why I’m adding my voice to the ‘service broker’ trend;

  • Knowledge of alternative virtualisation/cloud platforms. “Should we be considering Hyper-V? Openstack? Oracle VM?”
  • How can we integrate Amazon’s VPC with our internal dev/test environments? (see the sketch after this list)
  • If we buy into a third party’s managed services, what’s the impact on our production platform and technology roadmap?
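The VPC question lends itself to a concrete sketch. Here’s a minimal illustration (Python with boto3; all IDs, addresses and the BGP ASN are hypothetical placeholders) of the AWS-side building blocks for a site-to-site VPN into a dev/test VPC – your on-premises firewall would need matching IPsec configuration, which isn’t shown;

```python
# Minimal sketch: AWS-side plumbing for a site-to-site VPN into a VPC.
# All IDs, IPs and the ASN are placeholders, not real values.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Represent the on-premises VPN endpoint (e.g. the office firewall).
cgw = ec2.create_customer_gateway(
    Type="ipsec.1",
    PublicIp="203.0.113.10",  # placeholder: your firewall's public IP
    BgpAsn=65000,
)["CustomerGateway"]

# Create a virtual private gateway and attach it to the dev/test VPC.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
ec2.attach_vpn_gateway(VpnGatewayId=vgw["VpnGatewayId"], VpcId="vpc-12345678")

# Tie the two together with an IPsec VPN connection and a static route
# back to the internal network.
vpn = ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGatewayId"],
    Options={"StaticRoutesOnly": True},
)
ec2.create_vpn_connection_route(
    VpnConnectionId=vpn["VpnConnection"]["VpnConnectionId"],
    DestinationCidrBlock="10.0.0.0/16",  # placeholder: on-prem network
)
```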

The news columns are filling up with articles about changing skillsets.

Still not convinced? VMware’s flagship cloud product, vCAC, exists to orchestrate resources across multiple clouds from AWS, RackSpace, Azure and others, so this talk of ‘brokering’ across heterogeneous systems is also where VMware see the future.

The requirement for inhouse engineering expertise isn’t going to disappear overnight so you’ve got time to adjust, but for many the future may be more about integrating services together than building them.

How do you stay relevant?

That’s the million dollar question isn’t it? I’ve listed my opinions below although for alternative advice Steve Beaver wrote a great article for The Virtualization Practice at the end of last year (“Get off the hypervisor and into the cloud”) which mirrors my thoughts exactly. If I’d read it before writing this I probably wouldn’t have bothered!

  1. Focus on technical expertise. As the industry coalesces towards service providers and consumers, the providers need the best people they can find as the impact (at scale) is magnified. Automation is a key trend for this role as self-service is a key tenet of cloud (see the sketch after this list). Luckily, while ‘compute’ has already been disrupted by virtualisation, both storage and network are just getting started, which will generate demand for those who keep up with technology developments.
  2. Focus on becoming an IT broker. This means getting a wide knowledge of different solutions and architectures (AWS, VMware, OpenStack, understand SOA principles, federation, integration patterns etc) and know how to implement and integrate them. You’ll also have to get closer to the business and be able to translate business requirements such that you can satisfy them via the available services. Some would argue that this is crossing over to the role of a business analyst, and they may be right.
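To illustrate the automation point in (1), here’s a minimal sketch using pyVmomi, VMware’s Python SDK for the vSphere API (the vCenter hostname and credentials are placeholders) – inventory scripts like this are the building blocks of self-service tooling;

```python
# Minimal vSphere inventory script using pyVmomi.
# Hostname and credentials are hypothetical placeholders; depending on your
# pyVmomi version you may need to pass an SSL context for self-signed certs.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret")
try:
    content = si.RetrieveContent()
    # Walk every VM in the inventory via a container view.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True
    )
    for vm in view.view:
        print(vm.name, vm.runtime.powerState)
    view.Destroy()
finally:
    Disconnect(si)
```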

If you’re going to go deep on technology, go work for a vendor, ISP, or big IT consultancy (sooner rather than later).

If you’re going for the broker/business analyst role make sure you’re building up your business knowledge, with less focus on the low level nuts and bolts.

Pick one or the other, but don’t stand still. Taking my own advice I’ve just taken a role with a service provider. Let’s see how this plays out! 🙂

VMworld 2013 – Is it just me?

Overall I guess I feel disappointed. Over the last week I’ve been trying to keep up with developments from VMworld and to be honest it’s not been as tough as I thought because most of the announcements were already known quantities and very little ‘new’ information was given. I see this as a reflection of the growth and maturity of VMware – release cycles are getting longer, innovation takes longer to gestate, and the low hanging fruit of ‘wow’ features has been exhausted (and having written that I see Chris Wolf’s article which says much the same thing. I’m in good company). Chris Wahl’s blog has full details of the new stuff.

UPDATE 4th Sept: It’s been pointed out to me that as a vExpert and blogger I do tend to have early access to both information and beta releases so what I consider new and what most attendees consider new is different. Fair comment.

Eric Siebert, a long time veteran of VMworld and the technology involved, has a great writeup of the main announcements along with his thoughts, which largely mirror my own. Maybe we’ve been spoilt over the years by the ‘cool’ factor of the vMotion and svMotion, maybe I woke up on the wrong side of bed, or maybe VMware aren’t delivering the goods as they used to.

vSphere ticks along

vSphere has been on a two year release cycle for major versions but that seems to have slipped. The next release of the core vSphere platform will be out later this year (probably at VMworld Barcelona, as with v5.1 last year) but even when it does arrive, v5.5 is not much to write home about;

  • SSO has been rewritten but it probably shouldn’t have been released as it was in the first place :oops:. OK, there are a few new features too.
  • New maximums will probably only help the minority
  • VSAN might be nice but isn’t even in beta yet and will still be an extra cost when it is released.
  • We still have two clients, both of which are required. The web client has been improved but they haven’t discontinued the GUI client as expected.
  • App HA is apparently significantly improved from previous editions but application support is still limited. Good for MS SQL maybe but there’s no Oracle, SAP etc. It’s also an Enterprise+ feature. SMP support for VMware’s FT feature (which could be great) is still just a technical preview with no release date.
  • OK – vSphere Flash Read Cache is a nice addition, as is lifting the 2TB VMDK limit and OSX support for the remote console (a personal gripe there) 🙂 Shame vFRC is also Enterprise+ only…
  • OK – the vCSA can now handle larger environments, but vCenter is still not a scalable, highly available service. Yeah, I’m grumpy.

If you look at the benefits they’re largely for the admin or behind the scenes. If I have to justify time and resource to upgrade my hosts, what benefit does the business get? I’m on Enterprise licensing, so precious little sadly. 🙁


vCloud Suite still isn’t as compelling as it should be

With public vs private vs hybrid cloud all the rage I can understand why VMware aren’t focusing on the hypervisor so I was expecting a big vCloud push. There was much fanfare about the launch of VMware’s public cloud, vCHS, but I’m still unconvinced;

  • Its launch is US-only and it’s potentially missing some key functionality (though I think some of those referenced features are less in demand for enterprise apps). I accept that the US cloud market leads the world but as a European this leaves me somewhat in limbo – I’m sure it’ll reach us eventually but Amazon and Azure (among others) are already available…
  • I’ve not seen any official statement from VMware so take it with a pinch of salt, but vCD looks like it’s on the chopping block and being replaced by vCAC (though both are still included in v5.5). This is a product that’s been at the spearhead of VMware’s push into the cloud market and it’s being ‘retired’ at only three years old? What about the vCloud Service Providers? Apparently it’ll live on for them, but for how long? The launch of vCHS probably didn’t please too many service providers and this move looks set to alienate them further, along with many customers who have invested in vCD. One of the big selling points for vCHS is the seamless experience of running VMware’s stack for both your private and public clouds, but how do I start down that road today? Should I buy into the vCloud Suite and invest in vCloud Director knowing it’s going away? By the same token I know vCAC is going to change significantly in the next year or two and today it lacks key functionality like multi-tenancy. Maybe I should wait a year or two and see how things pan out? In that case, where’s the synergy in vCHS? Unfortunately VMware don’t have a great history in providing seamless upgrade paths – need I mention Lab Manager, Stage Manager, VDP…

After VMworld last year I speculated that VMware needed to accelerate their customers’ journey to the cloud or suffer, and I don’t think this reshuffle/repositioning helps matters. For something of such strategic importance would you want to be an early adopter of the vCAC/vCD amalgamation? DynamicOps were initially a competitor to vCD, then post VMware acquisition they became mutually beneficial, and now vCAC is becoming the primary cloud solution. VMware have always excelled at promoting a vision which helped get ‘buy in’ – you knew that when you were ready for the next step it’d be waiting for you. Now I’m not so sure. On the bright side the pricing for the vCloud Suite seems better than I realised. Looking at pricing for vSphere Enterprise+ vs vCloud Standard it’s almost the same despite the fact you also get vCD, vCAC, and vCOPS with the vCloud Suite.

UPDATE 1st Sept: A Twitter conversation with Tom Fojta and Dave Hill, both of whom work for VMware (though tweets are their own), implied that vCD may not be retired but merely realigned because enterprise and service providers need different solutions. This makes more sense as it will at least minimise the disruption. Let’s hope there’s some official clarification from VMware soon as I’m not the only one with concerns.

UPDATE 4th Sept: VMware have now provided a directional statement which confirms how this will affect customers, how functionality will migrate to vSphere/vCAC, and clarifies that vCD will continue in use with service providers.

EUC moves forward

I’m not much of an end user computing guy as my company haven’t bought into it conceptually, but with the release of the Horizon suite earlier this year we finally have some of the products VMware have been talking about for the last few years. I’m excited about the possibility of desktops in the cloud but Brian Madden, a well-known VDI guru, seems to think the vision is spot on while execution and delivery are lacking.

SDDC is a grand vision but can it succeed?

I like the idea of the software defined datacentre but it’s going to be a tough sell for VMware. It disrupts two technologies, networking and storage, which are well embedded in the datacentre, and that puts VMware in competition with many of their major partners.

Storage is going through an exciting time and VMware are now beginning to promote their storage credentials. With the addition of VSAN and vFRC they’re pushing vSphere storage towards the ‘software defined’ concept they’ve coined, although I was hoping for some advance on the Virsto acquisition. The announcements and sessions around NSX, VMware’s network hypervisor, do look interesting and if they can be successful we’re in for quite a ride! Maybe this is where VMware can recapture some of that magic they had four or five years ago. Even if they succeed the SDDC will arrive slowly because of financial, technical, and social factors. Given the potential complexity and disruption introduced by SDDC we need a clear value statement, otherwise the perception may be that we’ll all be better off in a cloud where someone else manages it for us…

The process of writing and researching this article has actually made me more optimistic and I still think VMware have huge potential to innovate and disrupt (in a positive way) the datacentre of the future. I think I’m just grumpy because we still don’t have the VMTN Subscription! I’m sure I’ll soak up the boundless energy VMworld Barcelona generates and be back to my optimistic self later in the year.


Is cloud computing a fungible commodity?

Summary: In an earlier blogpost I explored the idea that storage is fungible but I’ve also heard fungibility mentioned recently in relation to cloud computing as a whole. If cloud computing is becoming a commodity (which is another argument) why shouldn’t it be traded like any other commodity, with a marketplace, brokers, futures trading etc? Are we going to see cloud compute traded much like gas or electricity?

Strategic Blue’s presentation on ‘Cloud brokers’ at CloudCamp London back in October 2012 centred on this exact idea and generated plenty of animated discussion on the night. Some felt that this was a pipe dream whereas others felt it was inevitable. My wife’s a commodity trader so after returning home I had various discussions with her trying to understand the concepts of commodity trading. It’s harder than I thought! The more technically minded of us immediately started thinking about issues like compatibility, interoperability, and service maturity but apparently (and somewhat surprisingly) these are all irrelevant when it comes to a true marketplace. It’s not the current IT providers that will define and run the cloud computing markets, which is an idea that takes a bit of getting used to!

In a fascinating article, What’s required for a utility market to develop?, Jack Clark identified and scored the various criteria which need to be satisfied before cloud computing can be considered a utility. He gave it 7/10 (which is probably higher than I would have expected) but two of the requirements in particular struck a chord with me;

  • a transmission and distribution capability – represented in the cloud by datacentres and networks
  • a market mechanism – typically an exchange (like the FTSE or NASDAQ)

Let’s investigate these two criteria.

Zerto’s Virtual Replication 2.0 – first looks

In this article I’m going to talk about Zerto, a data protection company specialising in virtualised and cloud infrastructures whom I recently saw as part of Storage Field Day #2. They’ve presented twice before at Tech Field Days (as part of their launch in June 2011 and in Feb 2012) so I was interested to see what new developments (if any) were in store for us. In their own words;

Zerto provides large enterprises with data replication solutions designed specifically for virtualized infrastructure and the cloud. Zerto Virtual Replication is the industry’s first hypervisor-based replication solution for tier-one applications, replacing traditional array-based BC/DR solutions that were not built to deal with the virtual paradigm.

When I first heard the above description I couldn’t help but think of VMware’s SRM product which has been available since June 2008. Zerto’s carefully worded statement is correct in that SRM relies on storage array replication for maximum functionality but I still think it’s slightly disingenuous. To be fair VMware are equally disingenuous when they claim “the only truly hypervisor level replication engine available today” for their vSphere Replication technology – marketing will be marketing! 🙂 Later in this article I’ll clarify the differences between these products but let’s start by looking at what Zerto offers.

Zerto offer a product called Zerto Virtual Replication which integrates with vCenter to replicate your VMware VMs to one or more sites in a simple and easy to use manner. Since v2.0 was released on July 30th 2012 it has supported replication to various clouds along with advanced features such as multisite replication and vCloud Director compatibility. Zerto are on an aggressive release schedule given that the initial release (which won ‘Best of Show’ at VMworld 2011) was only a year earlier, but in a fast moving market that’s a good thing. For an entertaining 90 second introduction which explains what it offers better than I could, check out the video below from the company’s website;

Just as server virtualisation opened up possibilities by abstracting the guest OS from the underlying hardware, so data replication can benefit from moving ‘up the stack’, away from the storage array hardware and into the hypervisor. The extra layer of abstraction lifts certain constraints related to the storage layer;

  • Array agnostic – you can replicate between dissimilar storage arrays (for example NetApp at one end and EMC at the other). For both cloud and DR scenarios this could be a ‘make or break’ distinction compared to traditional array replication which requires similar systems at both ends. In fact you can replicate to local storage if you want – if you’re one of the growing believers in the NoSAN movement that could be useful…
  • Storage layout agnostic – because you choose which VMs to replicate rather than which volume/LUN on the array you’re less constrained when designing or maintaining your storage layout. When replicating you can also change between thin and thick provisioning, or from SAN to NAS, or from one datastore layout to another. A typical use case might be to replicate from thick at the source to thin provisioning at the DR location for example. There is a definite trend towards VM-aware storage and ditching LUN constraints – you see it with VMware’s vVols, storage arrays like Tintri and storage hypervisors like Virsto so having the same liberating concept for DR makes a lot of sense.

Zerto goes further than just being ‘storage agnostic’ as it allows further flexibility;

  • Replicate VMs from vCD to vSphere (or vice versa). vCD to vCD is also supported. This is impressive stuff as it understands the Organization Networks, vApp containers etc and creates whatever’s needed to replicate the VMs.
  • vSphere version agnostic – for example use vSphere 4.1 at one end and vSphere 5.0 at the other. For large companies, which typically lag behind on upgrades, this could be the prime reason to adopt Zerto.

With any replication technology bandwidth and latency are concerns, as is WAN utilisation. Zerto uses virtual appliances on the source and destination hosts (combined with some VMware API calls, not a driver as this article states) and therefore isn’t dependent on changed block tracking (CBT), is storage protocol agnostic (ie you can use FC, iSCSI or NFS for your datastores) and offers compression and optimisation to boot. Zerto provide a profiling tool to ‘benchmark’ the rate of change per VM before you enable replication, thus allowing you to predict your replication bandwidth requirements. Storage I/O control (SIOC) is not supported today although Zerto are implementing their own functionality to allow you to limit replication bandwidth. Today it’s done on a ‘per site’ basis although there’s no scheduling facility so you can’t set different limits during the day or at weekends.
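As a back-of-the-envelope illustration of that bandwidth prediction (this isn’t Zerto’s tool – the change rate and compression ratio below are pure assumptions), the arithmetic is simple;

```python
# Rough replication bandwidth estimate from a measured change rate.
# All figures are hypothetical; a real profiling tool measures these per VM.

def required_mbps(changed_gb_per_hour, compression_ratio=0.5):
    """Average WAN bandwidth needed to keep up with a given change rate.

    compression_ratio is the fraction left after compression
    (0.5 means the replication traffic is halved).
    """
    gb_on_wire = changed_gb_per_hour * compression_ratio
    return gb_on_wire * 8 * 1024 / 3600  # GB/h -> megabits per second

# e.g. 20 VMs each averaging 2 GB of changed blocks per hour:
print(f"{required_mbps(20 * 2):.1f} Mbps sustained")  # ~45.5 Mbps
```

In practice you’d size for the peak change rate rather than the average, and add headroom for resynchronisation after an outage.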

VMware’s vSphere is the only hypervisor supported today although we were told the roadmap includes others (but no date was given). With Hyper-V v3 getting a good reception I’d expect to see support for it sooner rather than later, and that could open up some interesting options.

Zerto’s Virtual Replication vs VMware’s SRM

Let’s revisit that claim that Zerto is the “industry’s first hypervisor-based replication solution for tier-one applications“. With the advent of vSphere 5.1 VMware now have two solutions which could be compared to Zerto – vSphere Replication and SRM. The former is bundled free with vSphere but is not comparable – it’s quite limited (no orchestration, testing, reporting or enterprise-class DR functions) and only really intended for data protection not full DR. SRM on the other hand is very much competition for Zerto although for comparable functionality you require array level replication.

When I mentioned SRM to the Zerto guys they were quick to say it’s an apples-to-oranges comparison which to a point is true – with Zerto you specify individual or groups of VMs to replicate whereas with SRM you’re still stuck specifying volumes or LUNs at array level. Both products have their respective strengths but there’s a large overlap in functionality and many people will want to compare them. SRM is very well known and has the advantage of VMware’s backing and promotion – having a single ‘throat to choke’ is an attractive proposition for many. I’m not going to list the differences because others have already done all the hard work;

Zerto compared to vSphere Replication – the official Zerto blog

Zerto compared to Site Recovery Manager – a great comparison by Marcel Van den Berg (also includes VirtualSharp’s Reliable DR)

Looking through the comparisons with SRM there are quite a few areas where Zerto has an advantage although to put it in context check out the pricing comparison at the end of this article;
NOTE: Since the above comparison was written SRM v5.1 has added support for vSphere Essentials Plus but everything else remains accurate

  • RTO in the low seconds rather than 15 mins
  • Compression of replication traffic
  • No resync required after host failures
  • Consistency groups
  • Cloning of the DR VMs for testing
  • Point in time recovery (up to a max of 5 days)
  • The ability to flag a VMDK as a pagefile disk. In this instance it will be replicated once (and then stopped) so that during recovery a disk is mounted but no replication bandwidth is required. SRM can’t do this and it’s very annoying!
  • vApps supported (and automatically updated when the vApp changes)
  • vCloud Director compatibility

If you already have storage array replication then you’ll probably want to evaluate Zerto and SRM.
If you don’t have (or don’t want the cost of) array replication, or want the flexibility of specifying everything in the hypervisor, then Zerto is likely to be the best solution.

DR to the Cloud (DRaaS)

Of particular interest to some customers, and a huge win for Zerto, is the ability to recover to the cloud. The flexibility to replicate to any storage array and to abstract the underlying storage layout allows you to replicate to any provider who has signed up to Zerto’s solution. Multisite and multitenancy functionality was introduced in v2.0 and today there are over 30 cloud providers signed up including some of the big guys like Terremark, Colt, and Bluelock. Zerto have tackled the challenge of delivering secure, multi-tenant replication with resource management from a single appliance (providers obviously wouldn’t want to run one per customer).

vCloud Director compatibility is another feather in Zerto’s cap, especially when you consider that VMware’s own ‘vCloud Suite’ lags behind (SRM only has limited support for vCD). One has to assume that this will be a short term advantage as VMware have promised tighter integration between their products.

Pricing

Often this is what it comes down to – you can have the best solution in the market, but if you’re charging the most then people will expect the best. Zerto are targeting the enterprise so maybe it shouldn’t be a surprise that they’re also priced at the top end of the market. The list prices below are for SRM (both Standard and Enterprise editions) and Zerto;

  • SRM Standard – $195 per VM
  • SRM Enterprise – $495 per VM
  • Zerto Virtual Replication – $745 per VM

As you can see Zerto commands a significant premium over SRM. When making that comparison you may need to factor in the cost of storage array replication, as SRM using vSphere Replication is severely limited. These are all list prices so get your negotiating hat on! We were told that Zerto were seeing good adoption from all sizes of customer, from 15 VMs through to service providers.
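To put those list prices in context, here’s a quick worked example for a hypothetical 100-VM estate – the array replication licence figure is purely an illustrative assumption, since real quotes vary wildly;

```python
# Worked comparison at list price for a hypothetical 100-VM estate.
vms = 100
srm_enterprise = 495 * vms  # $49,500
zerto = 745 * vms           # $74,500

# SRM with comparable functionality needs array-based replication;
# this licence figure is purely an illustrative assumption.
array_replication = 30000

print(f"Zerto:          ${zerto:,}")
print(f"SRM Enterprise: ${srm_enterprise:,} + array replication "
      f"(assumed ${array_replication:,}) = ${srm_enterprise + array_replication:,}")
```

On those (made-up) array numbers the gap narrows considerably, which is exactly why the negotiating hat matters.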

Final thoughts

I’ve not used SRM in production since the early v1.0 days and I’ve not used Zerto in production either, so my thoughts are based purely on what I’ve read and been shown. I was very impressed with Zerto’s solution which certainly looks very polished and obviously trumps SRM in a few areas – hence why I took the time to investigate and write up my findings in this blogpost. From a simple and quick appliance-based installation (which was shown in a live demo to us) through to the GUI and even the pricing model, Zerto’s aim is to keep things simple and it looks as if they’ve succeeded (despite quite a bit of complexity under the hood). If you’re in the market for a DR solution take time to review the comparison with SRM above and see which fits your requirements and budget. Given how comprehensive the feature set is I wouldn’t be surprised to see Zerto come out on top over SRM for many customers despite VMware’s backing for SRM and the cost differential.

Multi-hypervisor management could be a ‘killer feature’ for Zerto. It would distinguish the product for the foreseeable future (I’d be surprised to see this in VMware’s roadmap anytime soon despite their more hypervisor-friendly stance) and needs to happen before VMware bake comparable functionality into the SRM product. Looking at the way VMware are increasingly bundling software to leverage the base vSphere product there’s a risk that SRM features work their way down the stack and into lower priced SKUs – good for customers but a challenge for Zerto. There are definitely intriguing possibilities though – how about replicating from VMware to Hyper-V for example? As the use of cloud infrastructure increases the ability to run across heterogeneous infrastructures will become key and Zerto have a good start in this space with their DRaaS offering. If you don’t want to wait and you’re interested in multi-hypervisor management (and conversion) today check out Hotlink (thanks to my fellow SFD#2 delegates for that tip).

I see a slight challenge in Zerto targeting the enterprise specifically. Typically these larger companies will already have storage array replication and are more likely to have a mixture of virtual and physical servers, and will therefore still need array functionality for physical applications. This erodes the value proposition for Zerto. Furthermore if you have separate storage and virtualisation teams then moving replication away from the storage array could break accepted processes, not to mention put noses out of joint! Replication at the storage array is a well accepted and mature technology whereas virtualisation solutions still have to prove themselves in some quarters. In contrast VMware’s SRM may be seen to offer the best of both worlds by offering the choice of both hypervisor and/or array replication – albeit with a significantly less powerful replication engine (if using vSphere Replication) and with the aforementioned constraints around replicating LUNs rather than VMs. Zerto also have the usual challenges around convincing enterprises that as a ‘startup’ they’re able to provide the expected level of support – for an eloquent answer to this read ‘Small is beautiful’ by Sudheesh Nair on the Nutanix blog (Nutanix face the same challenges).

Disclosure: the Storage Field Day #2 event is sponsored by the companies we visit, including flight and hotel, but we are in no way obligated to write (either positively or negatively) about the sponsors.

Further Reading

Steve Foskett and Gabrie Van Zanten discuss Zerto (from VMworld 2012)

Good introduction to Zerto v1.0 (Marcel Van Den Berg)

Zerto and vSphere Host replication – what’s the difference?

Zerto vs SRM (and VirtualSharp’s ReliableDR)

Step away from the array – fun Zerto blog in true Dr Seuss style

vBrownbag Zerto demo from VMworld Barcelona 2012

Zerto replication and disaster recovery the easy way

Take two Zerto and call me in the morning (Chris Wahl from a previous TFD)

Musings of Rodos – Zerto (Rodney Haywood from a previous TFD)

451 group’s report on Zerto (March 2012)

Storage field day #2 coverage

Twitter contacts

@zertocorp – the official Zerto twitter account

Shannon Snowdon – Senior Technical Marketing Architect

@zertojjones – another Zerto employee