Yearly Archives: 2014

VMware’s hybrid cloud – a discussion with customers

Summary: Hybrid cloud gets a lot of press and VMware are claiming a stake in a market predicted to grow massively over the next few years. I discuss with customers what vCloud Air can offer businesses and what you need to consider.

On Tuesday 25th November I took part in an online discussion about the hybrid cloud with a couple of vCloud Air customers (Schalk van der Merwe from The Hut Group and Matthew Garrett from Cloud Business) along with VMware’s Rick Munro (Chief Technologist, vCloud, EMEA), and well known blogger Julian Wood. The discussion lasted around an hour and we discussed topics such as;

  • Is hybrid cloud just a stepping stone to the public cloud?
  • Is data sovereignty an issue for customers?
  • Are clouds converging to a global platform or diverging into niches (market verticals like healthcare, government etc)?
  • Is the rapid pace of change a challenge when making a business case for cloud?
  • Does adopting cloud lead to changes in established ways of doing business, from a people/process/technology perspective?

You can view an edited version of our discussion below which (luckily for you dear reader) is only 18 minutes;

If you don’t have time to watch, some of my takeaways were;

  • agility is a key driver of cloud adoption, more so than cost savings (which in my experience aren’t always expected or delivered). Results speak louder than RFPs!
  • it’s not a case of ‘private’ OR ‘public’ OR ‘hybrid’ cloud – you can, and probably should, use all three depending on your use case
  • start small and build on success (just as the DevOps crowd advocate!).
  • hybrid cloud may be a stepping stone, but it’s essential to take a first step
  • vCloud leverages your existing knowledge and skills so is an easy first step

Personally I think hybrid cloud is here to stay. Mainframes are still with us (from the 1950s) and email has been available as a SaaS offering for at least 18 years (Hotmail started in 1996), yet today there are still thousands of companies running their own email systems. For many enterprises ‘hybrid cloud’ will actually mean a multitude of different clouds including vCloud Air, AWS etc, despite the additional management overheads that will introduce. Time will tell!

 

Transitioning away from vCloud Director – the unspoken plan

Summary: vCloud Director, once the flagship product spearheading VMware’s vCloud Suite, is slowly winding down for enterprise customers – potentially leaving some companies with a roadmap challenge.

Having just started work for a cloud service provider in the Channel Islands (Foreshore) my focus has shifted and vCloud Director is a product I’m working with. After VMworld last year I wrote about how badly VMware communicated their product shift away from vCloud Director (vCD) and this year I’ve not seen much sign that communication has improved. At VMworld Barcelona this year only one session out of over 400 was about vCD. Yep. One (although to be fair it was ‘vCD roadmap for service providers’ – more on that later). How the mighty have fallen.

 What do we know about the vCD roadmap?

As announced last year the vCloud Suite roadmap involves the current features moving into other products, both vCloud Automation Center (now vRealize Automation) and the core vSphere product. It’s likely that the provisioning aspects will go into vCAC (now vRealize Automation) and some of the network functionality (multi-tenancy in particular) will go into the ‘core’ vSphere product. vCloud Director will continue to exist for service providers but for enterprise customers there is a migration to be done. There was also the following statement;

Yes, VMware will offer a product migration path that enables customers and partners to move from vCD to VCAC…

So far, so good.

So what’s the problem?

The problem is it’s been a year since that announcement and there’s been near radio silence since then. If enterprise customers need to transition off vCloud Director then VMware need to provide information, preferably sooner rather than later, on how that’s likely to work.

Continue reading Transitioning away from vCloud Director – the unspoken plan

Visio diagram of an Autolab environment

A few months ago I found myself wanting to use my home lab, but the whole environment had become very out of date. Rather than build everything from scratch and by hand it was the perfect excuse to try Autolab, a project which I was aware of (I’ve met the creator Alastair Cooke a couple of times at VMworld) but had never found the time to deploy. For those not familiar with Autolab, it aims to automate the build-out of a portable lab environment consisting of virtual networking, storage, and compute using vSphere, and includes vCloud Director, View, and Veeam.

My first thought was ‘Does Autolab do what I need?’ and while the documentation was pretty good the overall environment (in particular the networking) which Autolab created wasn’t immediately clear to me. In the end I did use Autolab and while it did some of what I needed I wanted to see if I could integrate or improve the build using my existing setup (I have shared storage and multiple VLANs in my lab already). While sketching out my options I decided to create a proper Visio diagram of a completed Autolab build for future reference and thought it might be useful to others too. I’ve sent it on to Alastair so it may turn up in the next release (assuming there is one).

You can download it in Visio or .JPG format.

UPDATE 4th Jan: Autolab 2.0 has now been released but is largely unchanged. The DC and vCenter servers now support W2k12 and the storage VLANs (16 & 17 in the diagram) are no longer used – their subnets remain the same however.

Autolab v1.5

What Autolab is trying to achieve (freely distributable lab build automation) is highly commendable, but given the ease of use and free availability of VMware’s Hands On Labs, combined with the rapid pace of development for many VMware products (vCD isn’t even available anymore unless you’re a service provider), I wonder if Autolab in its current form is sustainable. To encapsulate and therefore make portable an entire working dev/test environment – the aim of the Autolab networking – is a perfect use case for NSX, although if you want that for free you’ll have to look to open-source equivalents (OpenFlow et al). Time will tell!

Further Reading

http://www.labguides.com/autolab/

Wifi problems with TP-Link’s Powerline Starter Kit (WPA4220)

Summary: Powerline adapters are better than they used to be but they aren’t without their problems.

I’ve recently moved house and didn’t want to go to the time and expense of wiring up my new house with CAT6 ethernet, so opted for some Powerline adapters instead. I’d used an early set of these (85Mbps) back in 2007 but standards have definitely advanced in this area and now we have 500Mbps adapters (well sort of) so I thought it was worth revisiting the technology.

My local computer store had a couple of TP-Link units in stock (WPA4220 Starter Kit) and I bought them on a whim. Plugging them in and getting them working took all of five minutes and voila – connectivity! The speeds weren’t great (around 80Mbps on average, so 5-6MBps on file transfers) but then my house was built in the 70s so the wiring isn’t especially modern, and that does affect speeds. The bigger variable in my case was the fact I have a three phase power supply, rather than the more usual single phase. At first I thought this would prevent or greatly hinder my use of powerline networking, but it works just fine over multiple power phases (apparently a shared consumer unit is key). Speed is affected (mine dropped to 60Mbps when crossing phases) but I’m really just using it for web browsing and streaming some video, which seems to work fine.

UPDATE APRIL 2015 – I’ve now done some testing with iPerf and my speeds are lower than those reported by the TP-Link utility – often significantly. For example TP-Link reports 75Mbps when iPerf reports 25Mbps for the same link. Even allowing for protocol overheads there’s a significant discrepancy. I think TP-Link may be reporting ‘theoretical’ speeds achievable over my powerlines (using the PHY layer) whereas ‘real world’ transmission is impacted by many other factors. Still, I can stream HD without issue most of the time.
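The gap between the reported rate and real file-transfer speed is easy to sanity-check with some quick arithmetic – a rough sketch, where the default efficiency factor is my own assumption based on the iPerf results above (roughly a third of the reported PHY rate), not a TP-Link figure:

```python
def mbps_to_mbytes_per_sec(mbps):
    """Convert a link rate in megabits/s to megabytes/s."""
    return mbps / 8

def estimated_goodput(reported_mbps, efficiency=0.35):
    """Rough real-world throughput from the utility-reported PHY rate.

    The efficiency default is an assumption drawn from my own tests
    (TP-Link reported 75Mbps where iPerf measured ~25Mbps).
    """
    return reported_mbps * efficiency

# 80Mbps reported is 10MB/s on the wire at best; file transfers saw 5-6MB/s.
print(mbps_to_mbytes_per_sec(80))
print(estimated_goodput(75))
```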

All said I was very happy with my powerline setup, until I’d been running the integrated wireless AP for a few days and started noticing connectivity problems. I’ve got a mixture of tablets (a couple of iPads, Nexus 7), smartphones, Sonos wireless speakers and the odd Google Chromecast and found that within a day or so they’d lose internet access. After further investigation and some Googling I found plenty of people in a similar scenario (here, here, here, and here) but with no acknowledgement or fix forthcoming from TP-Link. Sadly the logs for these units are hardly worth having as you can see in the screenshot below – over 20 hours after powering it on (and with Wifi failing) all that was logged was the initial startup event and even that didn’t have a timestamp;

[screenshot: TP-Link log output]

The problem seems to be that certain types of traffic don’t pass through the wireless AP, even though plugging into the wired powerline socket on the same unit works fine. I quickly identified that DHCP broadcasts weren’t being received by wireless clients, so devices were failing to renew their leases and dropping off the network. A simple reboot of the TP-Link resolves the issue for a while but it recurs within a few hours. Interestingly, setting a static IP seems to be a good partial workaround as the wireless AP is still working and sending most types of traffic, but some devices, like the Chromecast, only support DHCP. For my Chromecast I’ve therefore set my DHCP server to reserve an IP with a lease of about a year! To alleviate the issue even further I’ve now bought a mains timer switch and automatically reboot the unit twice a day – a horrible hack, but it works. When the wireless fails I can’t even ping the TP-Link’s IP address wirelessly, even though I can ping my router and other devices on my network, and I can ping the TP-Link via a wired connection. Frustrating.
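You don’t strictly need Wireshark to test the DHCP theory from a wireless client – broadcasting a DHCPDISCOVER and waiting for any offer will do. Here’s a rough sketch (it needs root/admin rights to bind port 68, and the MAC address used is a made-up locally-administered one, not a real device):

```python
import random
import socket
import struct

def build_dhcp_discover(mac_bytes):
    """Build a minimal DHCPDISCOVER payload (RFC 2131 layout)."""
    pkt = struct.pack('!BBBB', 1, 1, 6, 0)       # op=BOOTREQUEST, htype=ethernet, hlen=6, hops=0
    pkt += struct.pack('!I', random.randint(0, 0xFFFFFFFF))  # transaction id
    pkt += struct.pack('!HH', 0, 0x8000)         # secs=0, flags: broadcast bit set
    pkt += b'\x00' * 16                          # ciaddr, yiaddr, siaddr, giaddr (all zero)
    pkt += mac_bytes + b'\x00' * 10              # chaddr, padded to 16 bytes
    pkt += b'\x00' * 192                         # sname (64) + file (128), zeroed
    pkt += b'\x63\x82\x53\x63'                   # DHCP magic cookie
    pkt += b'\x35\x01\x01'                       # option 53 (message type) = DISCOVER
    pkt += b'\xff'                               # end option
    return pkt

def probe(timeout=5):
    """Broadcast a DISCOVER and wait for any DHCPOFFER; True if one arrives."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.settimeout(timeout)
    s.bind(('', 68))                             # DHCP client port (needs privileges)
    s.sendto(build_dhcp_discover(b'\x02\x00\x00\x00\x00\x01'),
             ('255.255.255.255', 67))
    try:
        s.recvfrom(1024)
        return True
    except socket.timeout:
        return False
```

Run `probe()` on a wireless client when devices start dropping: if it times out there while succeeding on a wired connection, that’s the bridging failure in miniature.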

Without much visibility under the hood (these units can’t run the highly customisable DD-WRT as it doesn’t understand powerline networking) the best I can tell is that some types of traffic are not being bridged onto the wireless AP correctly. I’m sure a few Wireshark captures would confirm this in more detail but as I’m relying on TP-Link to fix it one way or another I haven’t drilled down to that level. Unfortunately I believe a software fix (ie a firmware update) is required and so far nothing has been forthcoming from TP-Link. On one of the posts linked to above there’s a comment from ‘Vincent’, who I believe works for TP-Link, claiming that they’re trying to replicate the issue – I’m not sure why that should be difficult as I’d imagine a software issue would be pretty consistent, but I can give them the benefit of the doubt for a while longer. Judging by a blogpost from Alex Boschman it looks like the equivalent Devolo units aren’t immune to problems either, so maybe I’ll have to try D-Link or Solwise instead. Or maybe I’ll just wire up the house after all and use a standard wireless device – it might still be the quickest way to get reliable access… 🙁

UPDATE: 27th November – I’ve now also tried a TP-Link TL-WPA281, which is essentially the older variant offering only 300Mbps for the wireless AP. Sadly it behaves the same way. I’ve also experimented with replacing the wireless functionality of the TP-Link with an old Netgear unit (WGR614v9) (I plug the Netgear into the TP-Link, so I’m still using the powerline aspect) and that seems to work flawlessly, so I still think the TP-Link devices are the cause of my wifi issues.

Further Reading

Google chromecast network traffic (via Cisco)

Why multicast doesn’t always work with Wifi

Google Chromecast router compatibility list

Thoughts on VMware’s EVO:RAIL

Summary: At VMworld in August VMware announced their new hyperconverged offering, EVO:RAIL. I found myself discussing this during TechFieldDay Extra at VMworld Barcelona and this post details my thoughts having spent a bit longer investigating. I’m not the first to write about EVO:RAIL so I’ll quickly recap the basics before giving my thoughts and some things to bear in mind if you’re considering EVO:RAIL.

Briefly, what is EVO:RAIL?

There’s no point in reinventing the wheel so I’ll simply direct you to Julian Wood’s excellent series;

As of October 2014 there are now eight qualified OEM partners although beyond that list there’s very little actual information available yet. Most of the vendors have an information page but products aren’t actually shipping yet and it’s difficult to know how they’ll differentiate and compete with each other. Several partners already have their own offerings in the converged infrastructure space so it’ll be interesting to see how well EVO:RAIL fits into their overall product portfolios and how motivated they are to sell it (good thoughts on that for EMC, HP, and Dell). Unlike their own solutions, the form factor and hardware specifications are largely fixed so it’s going to be management additions (ILO cards, integration with management suites like HP OneView etc), service, and support that vary. For partners without an existing converged offering this is a great opportunity to easily and quickly compete in a growing market segment.

UPDATE 29th July 2015 – HP has now walked away from the EVO:RAIL offering. Interesting that they’ve done so very publicly rather than just letting it wilt on the vine…

Things to note about EVO:RAIL

In my ‘introduction to converged infrastructure’ post last year I listed a set of considerations – let’s run through them from an EVO:RAIL perspective;

Management. The hyperconverged nature should mean improved management as VMware (and their partners) have done the heavy lifting of integration, licencing, performance tuning etc. EVO:RAIL also offers a lightweight GUI for those that value simplicity, while also offering the usual vSphere Web Client and VMware APIs for those that want to use them. This is however a converged appliance and that comes with some limitations – you can manage it using the new HCIA interface or the Web Client, but it comes with its own vCSA instance so you can’t add it to an existing vCenter without losing support. It won’t use VUM for patching (although it does promise non-disruptive upgrades), but you can add the vCSA to an existing vCOps instance.

Simplicity. This is the strongest selling point in my opinion – EVO:RAIL is a turnkey deployment of familiar VMware technology. EVO:RAIL handles the deployment, configuration, and management and you can grow the compute and storage automatically as additional appliances are discovered and added. As the technology itself isn’t new there’s not much for support staff to learn, plus there’s ‘one throat to choke’ for both hardware and software (the OEM partner). Some people have pointed out that it doesn’t even use a distributed switch, despite being licenced with Ent+. Apparently the choice of a standard vSwitch was because of a potential performance issue with vDS and VSAN, which eventually turned out not to be an issue. Simplicity was also a key consideration and VMware felt there was no need for a vDS at this scale. I imagine we’ll see a vDS in the next iteration.

Flexibility. This is probably the biggest constraint for customers – it’s a ‘fixed’ appliance and there’s limited scope for change. The hardware and software you get with EVO:RAIL is fixed (4 nodes, 192GB RAM per node, no NSX etc) so even though you have a choice of who to buy it from, what you buy is largely the same regardless of who you choose. There is currently only one model so you have to scale linearly – you can’t buy a storage heavy node or a compute heavy node for example. EVO:RAIL is sold 4 nodes at a time and the SMB end of the market may find it hard to finance that kind of CAPEX. As mentioned earlier the partner is responsible for updates (firmware and patching) – you won’t be able to upgrade to the new version of vSphere until they’ve validated and released it for example. Likewise you can’t plug in that nice EMC VNX you have lying around to provide extra storage – you have to use the provided VSAN. Flexibility vs simplicity is always a tradeoff!

Interoperability/integration. In theory this is a big plus for EVO:RAIL as it’s the usual VMware components which have probably the best third party integration in the market (I’m assuming you can use full API access). Another couple of notable integration requirements;

  • 10GbE networking (a ToR switch) is a requirement as it’s used to connect the four servers inside the 2U form factor, given the lack of a backplane. You’ll therefore need 8 ports per appliance (two per node). I spoke to VMware engineers at VMworld on this and was told VMware looked for a 2U form factor where they could avoid this but couldn’t. Many SMBs have not adopted 10GbE yet so it’s a potential stumbling block – of course partners may use this opportunity to bundle 10GbE networking, which would be a good way to differentiate their solution.
  • IPv6 is required for the discovery feature used when more EVO:RAIL appliances are added. This discovery process is proprietary to VMware though it operates much like Apple’s Bonjour, and IPv6 is used because it guarantees a link-local address.
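That guarantee comes from the fact that every IPv6 interface self-assigns an fe80:: address, classically derived from its MAC via modified EUI-64 – here’s a sketch of that derivation (note it skips full RFC 5952 zero-compression, and many modern OSes use randomized interface identifiers instead):

```python
def mac_to_link_local(mac):
    """Derive the classic IPv6 link-local address from a MAC address
    using modified EUI-64: flip the universal/local bit of the first
    octet, insert ff:fe in the middle, and prefix with fe80::."""
    b = bytes(int(octet, 16) for octet in mac.split(':'))
    eui64 = bytes([b[0] ^ 0x02]) + b[1:3] + b'\xff\xfe' + b[3:6]
    groups = [f'{(eui64[i] << 8) | eui64[i + 1]:x}' for i in range(0, 8, 2)]
    return 'fe80::' + ':'.join(groups)

print(mac_to_link_local('00:11:22:33:44:55'))
```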

Risk. This is always a consideration when adopting new technology, but being a VMware-backed solution using familiar components will go a considerable way to reducing concern. VSAN is a v1.0 product, as is HCIA – but since HCIA is simply a thin wrapper around existing, mature, best-of-breed components, it’s probably safe to say VSAN maturity (given its initial teething issues) is the only concern for some people. Duncan Epping has a blogpost about this very subject; his summary is ‘it’s fully supported’, so make sure you know your own comfort level when adopting new technology.

Cost. A choice of partners is great as it’ll allow customers to leverage existing relationships. It’s worth pointing out that you buy from the partner, so any existing licencing agreements (site licences etc) with VMware probably won’t be applicable. At VMworld I was told VMware have had several customers enquire about large orders (in the hundreds) so it’ll be interesting to see how price affects adoption. I don’t think this is really targeted at service providers and I’ve no idea how pricing would work for them. Having spent considerable time compiling orders myself, having a single SKU for ordering is very welcome!

Pricing

Talking of pricing, let’s have a look at ballpark costs. I’ve heard, though not been officially quoted, a cost of around €150,000 per 4 node block (or £120,000 for us Brits). This might seem high but bear in mind what you need;

UPDATE: 30th Nov – I realised I’d priced in four Supermicro chassis, rather than one, so I’ve updated the pricing.

  • Hardware. Let’s say approx £11k per node, so £45k for four nodes ie. one appliance (this is approx – don’t quote!);
    • Supermicro FatTwin chassis (inc 10GB NICs) £3500 (one chassis for all four nodes)
    • 2 x E2620 CPUs £400 each
    • 12 x 16GB DIMMs (192GB RAM) = £2000
    • 400GB Enterprise SSD = £4500 (yep!)
    • Three 1.2TB 10k rpm SAS disks = £600 x 3 = £1800
    • …plus power supplies, sundries
  • Software. List pricing is approx £11k per node plus vCenter, so a shade under £50k
    • vCenter (vCSA) 5.5 = £2000
    • vSphere 5.5 = £2750 per socket = £5500 per node
    • VSAN v1 = £1500 per socket = £3000 per node
    • Log Insight = £1500 per socket = £3000 per node
  • Support and maintenance for 3 years on both hardware and software – approx £15k
  • Total cost: £110,000

Once pricing is announced by the partners we’ll see just how much of a premium is being charged for the simplicity, automation, and integration that’s baked in to the EVO:RAIL appliance. There are of course multitudes of pricing options – you could just buy four commodity servers and an entry level SAN but there’s not much value in comparing apples and oranges (and I only have so much time to spend on this blogpost).

UPDATE 1st Dec 2014 – Howard Marks has done a more detailed price breakdown where he also compares a solution using Tegile storage. Christian Mohn also poses a question and potential ‘gotcha’ about the licencing – worth a read.

UPDATE May 2015 – VMware has introduced a ‘loyalty’ program to allow use of existing licences.

Competition

VMware aren’t the first to offer a converged appliance – in fact they’re several years behind. VCE’s vBlock arrived first back in 2010 and was followed by hyperconverged vendors like Nutanix and Simplivity. As John Troyer mentioned on vSoup’s VMworld podcast, Scale Computing use KVM to offer an EVO:RAIL competitor at cheaper prices (and have done for a few years). Looking at Gartner’s magic quadrant for converged infra, it’s a pretty crowded market.

Microsoft recently announced their Cloud Platform Services (Cloud Pro thoughts on it) which was developed with Dell (who are obviously keeping their converged options wide open as they’ve also partnered with Nutanix and VMware on EVO:RAIL). While more similar to the upcoming EVO:RACK it’s another validation of the direction customers are expected to take.

Final thoughts

From a market perspective I think VMware’s entry into the hyperconverged marketplace is both a big deal and a non-event. It’s big news because it will increase adoption of hyperconverged infrastructure, particularly in the SMB space, through increased awareness and because EVO:RAIL is backed by large tier 1 vendors. It’s a non-event in that EVO:RAIL doesn’t offer anything new other than form factor – it’s standard VMware technologies and you could already get similar (some would say superior) products from the likes of Nutanix, Simplivity and others.

Personally I’m optimistic and positive about EVO:RAIL. Reading the interview with Dave Shanley it’s impressive how much was achieved in 8 months by 6 engineers (backed by a large company, but nonetheless). If VMware can address the current limitations around management, integration, and flexibility, while maintaining the simplicity, it seems likely to be a winner.

Pricing for EVO:RAIL customers will be key although not all of the chosen partners are likely to compete on price.

UPDATE: April 2015 – a recent article from Business Insider implies that pricing is proving a considerable barrier to adoption for EVO:RAIL.

UPDATE: July 2015 – VMware have now offered extra configurations to allow double the VM density.

You can see our roundtable discussion on hyperconvergence at Tech Field Day Extra (held during VMworld Barcelona) below;

Further Reading

Good post by Marcel VanDeBerg and another from Victoriouss

Mike Laverick has a lot of useful material on his site, but as the VMware Evangelist for EVO:RAIL you’d expect that, right? The guys over at the vSoup Podcast also had a chat with Mike.

A comparison of EVO:RAIL and Nutanix (from a Nutanix employee)

Good thoughts over at The Virtualization Practice

VMworld session SDDC1337 – Technical Deep Dive on EVO:RAIL (requires VMworld subscription)

Microsoft’s Cloud Platform System at Network World – good read by Brandon Butler

UPDATED 9th Dec – Some detail about HP’s offerings

UPDATED 16th Dec – EVO:RAIL differentiation between vendors

UPDATED 1st May – EVO:RAIL adoption slow for customers