Tag Archives: home lab

Physical patching – down the wormhole with a borescope!

Summary: Virtualised environments don’t need physical patching. My home, sadly, does, but a cheap USB borescope made the job simpler and quicker.

This week I’ve been busy setting up my home office ready for Monday, when I start my new job. One of the most time-consuming tasks has been running network cable (CAT6) for connectivity – my previous experience with PowerLine technology was somewhat mixed and I decided that hardwired was the way to go now that I’m working from home all day, every day. Unfortunately the layout of my house meant a couple of long runs (one 70m, one only about 20m) through multiple walls and floors/ceilings. Some parts of the cable run were relatively easy but others involved going through areas where I had limited visibility and access (i.e. behind masonry walls and through floorspaces) and where I didn’t want to make a mess (i.e. cut access panels or chase cables into plaster).

Enter the DBPOWER® USB HD Borescope (what is a borescope?), which I bought via Amazon for a mere £18. This nifty geek tool lets you put a camera inside small spaces and see the output on a computer screen in real time, and when combined with a set of electrician’s wiring rods (£8 from Amazon) it was great for threading cable. I still had to drill holes through both the floor and ceiling (about 16mm, to allow for the Ethernet connector – I was being lazy and not crimping my own) but navigating around obstructions, finding the exit hole, and pushing the cable through would not have been possible without this kit.

It wasn’t all smooth sailing. The borescope’s magnified view makes it pretty difficult to recognise what you’re looking at – the built-in lighting can help, but the focus jumps around in dark areas, making things challenging. The electrician’s rods are good for pushing in a straight line (and with some flex) but the borescope I bought didn’t have a gooseneck (some do, though costs generally go up) so directing the camera inside floorspaces was pretty difficult.

Still, for a total cost of £25 I managed to run my cables and avoided more hours spent on DIY making good, plus it appealed to my geek tendencies. A good job well done!

The VMTN subscription is back (via VMUG Advantage)

Summary: It’s not quite the full VMTN Subscription that I think some envisaged but it’s a very good start. Through the VMUG Advantage program you can now get 365-day eval licences for various VMware products. About time.

Back in the mists of time (2005 through 2007) VMware offered the VMTN Subscription, their equivalent of Microsoft’s TechNet program (itself now sadly gone to heaven/clouds too), which let subscribers licence a range of VMware software for non-production use. It was discontinued in 2007 as it was felt that the newly introduced free editions (VMware Server and VMware Player) combined with an expanded partner program provided sufficient access.

Fast forward to 2011 and the range of VMware products had multiplied. Independent consultants, bloggers, and even customers were beginning to struggle with limited trials and restrictive licensing. Cue Mike Laverick, a well-known blogger, calling for the return of the VMTN subscription in a forum post which to date consists of over 23 pages of comments, with almost universal agreement that it should be brought back. As recently as June this year there was no sign it was ever going to happen – indeed Mike (who now works for VMware) had chatted to the relevant people internally and been told it wasn’t likely. Now, though, there’s a comment (quick work, Duncan Epping) that most will welcome…

This morning it looks as if VMware finally relented. It’s not comprehensive access to every product VMware offer (consider it a v1.0 release) but it’s a good start, including the following products;

  • VMware vCenter Server™ 5 Standalone for vSphere 5
  • VMware vSphere® with Operations Management™ Enterprise Plus
  • VMware vCloud Suite® Standard
  • VMware vRealize™ Operations Insight™
  • VMware vRealize Operations™ 6 Enterprise
  • VMware vRealize Log Insight™
  • VMware vRealize Operations for Horizon®
  • VMware Horizon® Advanced Edition
  • VMware Virtual SAN™

Luckily I’m a VMUG Advantage member so have access as of this morning (you should get an email if you’re a member). I get many of these as a vExpert but it’s nice to know there’s a more inclusive way of getting this access. For the small army of home lab enthusiasts the cost of a VMUG Advantage membership is well within reach (approx £130, or US$200 p.a.) and means no more rebuilding labs every 30 days. Combined with the freely available Hands On Labs there really is a good choice now.

I think this will massively drive adoption of VMUG Advantage. Previously the benefits were of limited use – the best discount was on full training courses and the rest were largely ignored. I must admit I was likely to let mine lapse at the end of the year but I may now reconsider.

Visio diagram of an Autolab environment

A few months ago I found myself wanting to use my home lab, but the whole environment had become very out of date. Rather than build everything from scratch and by hand, it was the perfect excuse to try Autolab, a project which I was aware of (I’ve met the creator Alastair Cooke a couple of times at VMworld) but had never found the time to deploy. For those not familiar with Autolab, it aims to automate the build-out of a portable lab environment consisting of virtual networking, storage, and compute using vSphere, and includes vCloud Director, View, and Veeam.

My first thought was ‘Does Autolab do what I need?’ and while the documentation was pretty good the overall environment (in particular the networking) which Autolab created wasn’t immediately clear to me. In the end I did use Autolab and while it did some of what I needed I wanted to see if I could integrate or improve the build using my existing setup (I have shared storage and multiple VLANs in my lab already). While sketching out my options I decided to create a proper Visio diagram of a completed Autolab build for future reference and thought it might be useful to others too. I’ve sent it on to Alastair so it may turn up in the next release (assuming there is one).

You can download it in Visio or .JPG format.

UPDATE 4th Jan: Autolab 2.0 has now been released but is largely unchanged. The DC and vCenter servers now support W2k12 and the storage VLANs (16 & 17 in the diagram) are no longer used – their subnets remain the same however.

Autolab v1.5

What Autolab is trying to achieve (freely distributable lab build automation) is highly commendable, but given the ease of use and free availability of VMware’s Hands On Labs, combined with the rapid pace of development for many VMware products (vCD isn’t even available any more unless you’re a service provider), I wonder if Autolab in its current form is sustainable. Encapsulating an entire working dev/test environment to make it portable – the aim of the Autolab networking – is a perfect use case for NSX, although if you want that for free you’ll have to look to open-source equivalents (OpenFlow et al). Time will tell!


Home labs – a poor man’s Fusion-IO?

While upgrading my home lab recently I found myself reconsidering the scale-up vs scale-out argument. There are countless articles about building your own home lab and whitebox hardware, but is there a good alternative to the accepted ‘two whiteboxes and a NAS’ scenario that’s so common for entry-level labs? I’m studying for the VCAP5-DCD, so while the ‘up vs out’ discussion is a well-trodden path there’s value (for me at least) in covering it again.

There are two main issues with many lab (and production) environments, mine included;

  1. Memory is a bottleneck, and doubly so in labs using low-end hardware – the vCenter appliance defaults to 8GB, as does vShield Manager, so anyone wanting to play with vCloud (for example) needs a lot of RAM.
  2. Affordable yet performant shared storage is also a challenge – I’ve used both consumer NAS (from 2 to 5 bays) and ZFS based appliances but I’m still searching for more performance.
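To put the memory point in perspective, here’s a quick back-of-the-envelope RAM budget. The vCenter and vShield figures are the defaults mentioned above; the nested ESXi allocations are purely hypothetical examples of what a small vCloud lab might add on top:

```python
# Rough RAM budget for a small vCloud-style lab on a single host.
# vCenter appliance and vShield Manager sizes are the defaults noted
# above; the nested ESXi figures are hypothetical illustrations.
vms_gb = {
    "vCenter Server appliance": 8,
    "vShield Manager": 8,
    "nested ESXi host #1": 4,
    "nested ESXi host #2": 4,
}

host_ram_gb = 32  # the free ESXi licence cap mentioned in the tip below

total_gb = sum(vms_gb.values())
print(f"Allocated: {total_gb}GB of {host_ram_gb}GB "
      f"({host_ram_gb - total_gb}GB headroom)")
```

Two management appliances alone eat half of a 32GB host before you’ve run a single workload VM, which is exactly why low-end lab hardware hits the memory wall so quickly.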

In an enterprise environment there are a variety of solutions to these challenges – memory density is increasing (up to 512GB per blade in the latest UCS servers, for example) and on the storage front SSDs and flash memory have spurred innovation. In particular Fusion-IO have had great success with their flash memory devices, which reduce the burden on shared storage while dramatically increasing performance. I was after something similar but without the budget.

When I built my newest home lab server, the vHydra, I used a dual-socket motherboard to maximise the possible RAM (up to 256GB) and used local SSDs to supplement my shared storage. This has allowed me to solve the two issues above – I have a single server which can host a larger number of VMs with minimal reliance on my shared storage. The concept is similar to what solutions like Fusion-IO aim for in production environments, but mine isn’t particularly scalable. In fact it doesn’t really scale at all – I’ll have to revert to centralised storage if I buy more servers. Nor does it have any resilience – the ESXi server itself isn’t clustered and the storage is a single point of failure as there’s no RAID. It is cheap, however, and for lab testing I can live with those compromises. None of this is vaguely new of course – Simon Gallagher’s vTardis has been using these same concepts to provide excellent lab solutions for years. Is this really a poor man’s Fusion-IO? There’s nothing like the performance and nothing like the budget, but the objectives are the same. To be honest it’s probably a slightly trolling blog title. I won’t do it again. Promise! 🙂

If you’re thinking of building a home lab from scratch, consider buying a single large server with local SSD storage instead of multiple smaller servers with shared storage. You can always scale out later, or wait for Ceph or HDFS to eliminate the need for centralised storage at all…

Tip: It’s worth bearing in mind the 32GB limit on the free version of ESXi – unless you’re a vExpert or they reinstate the VMTN subscription, you’ll be stuck with 60-day eval editions if you go above 32GB (or with buying a licence!).

Is performant a word? 🙂

Home labs – a scalable vSphere whitebox

Having recently upgraded my home lab’s storage I decided it was also time to upgrade my aging hosts, which date back to 2007. They’ve done well to survive and still be useful(ish) five years later, but they’re maxed out at 8GB RAM and it’s becoming increasingly difficult to do anything with that. I briefly considered adding SSDs as host cache but that doesn’t address some of their other shortcomings, such as no support for Fault Tolerance, VMDirectPath, or any type of KVM functionality.

A quick look around the blogosphere revealed a few common options;

More power!

The problem for me was that these solutions all maxed out at 16 or 32GB RAM per host, a limitation of the single-socket Xeon architecture. That’s a lot of memory for a home lab server today, but to ensure that this server can last five years I really wanted more scalability. I wasn’t too fussed about noise as I use my cellar for my lab, and power consumption was a secondary concern. The server features of the Supermicro boards appeal to me (and many Supermicro motherboards are compatible with vSphere) so I browsed their range looking for the one that best met my requirements. My final parts list ended up as;

Must….have…more…POWER..the vHydra!

The total cost comes to around £1150. I’m branding mine the vHydra after the mythical multi-headed dragon!
Note: In the US this is significantly cheaper, coming in at $1450, or about £900.

For the money I get a powerful server that can replace all three of my current 8GB hosts and more than match their performance, while consuming less power and space…