Category Archives: Storage

Storage Field Day #2 – I’ll be there!


Following on from the first storage-focused Tech Field Day in April this year (known as Storage Field Day), there’s a second session running from November 8th-9th and I’m excited to say I’ve been invited and will be there. The brainchild of Gestalt IT’s Steve Foskett, Tech Field Day brings together influential individuals and innovative product vendors who assemble in Silicon Valley (San Jose) for two days of brain drain!

The day before (November 7th) is a Next Generation Storage Symposium (which I’m also attending) with the following vendors;

  • Nexsan
  • Nimbus Data
  • Permabit
  • Pure Storage
  • Scale Computing
  • SolidFire
  • Tegile

I’m familiar with many of the sponsors presenting at this event and I’ve just been looking at some of their products at the recent VMworld Barcelona conference. For those that I’m less familiar with I’m hoping to do some pre-event research, provided my son allows me the time! For a full list of sponsors check out the official webpage, which also lists the delegates. I’ve met a couple of the delegates previously but most I’m only familiar with via the twitterverse – I’m looking forward to learning from both the sponsors and the other delegates, who are a talented bunch.

As you’d expect from a leading technology event it’ll be streamed live over the Internet and via various forms of social media including Twitter (#TechFieldDay and/or follow @TechFieldDay) and inevitably some blogposts from the assorted panel. If you’ve an interest in storage and would like to use the event to question the vendors on specific subjects just let me know – I’ll happily proxy some questions on your behalf. Videos will be available after the event via the Tech Field Day website.

Home labs – the Synology 1512+


I’ve been running a home lab for a few years now and recently decided it needed a bit of an upgrade. I’ve been looking at the growing trend towards online lab environments but for the time being I’ve decided it’s still cost effective to maintain my own, partly because I need to learn the latest VMware technologies (which requires lab time) and partly because the geek in me wants some new toys. 🙂

Storage was the first thing I needed to address. While I’ve got an Iomega IX2-200 (the two disk version) it’s not really usable as shared storage for a lab due to slow performance (about 17MB/s for reads, 13MB/s for writes). If I were a patient man that would be fine for testing, but I found myself putting VMs on local disks so I could work quicker, which rather defeats the purpose of a lab for HA/DRS etc. I’ve also built a home NexentaStor CE server which is feature rich (ZFS, snapshots, dedupe, tiered SSD caching) but I’ve found the configuration and maintenance less than simple, and it’s a big, heavy old server (circa 2007) which won’t last much longer. My wishlist included the following;

  • Easy to use – I want to spend my time using it, not configuring and supporting it
  • Small form factor, minimised power consumption
  • Hypervisor friendly – I’d like to play with VMware, Citrix, and Microsoft’s Hyper-V
  • Cloud backup options. I use Dropbox, SugarSync and others and it’d be useful to have built in replication ability.
  • Hook up a USB printer
  • Flexibility to run other tasks – bit torrent, audio/movie streaming, webcams for security etc (which my Iomega also offers)
  • VLAN and aggregated NIC support (both supported by my lab switch, a Cisco SLM2008)
  • Tiered storage/caching (NOT provided by the consumer Synology devices)

My requirements are by no means unique and there were three devices on my shortlist;

I chose Synology for a couple of reasons: primarily because I’ve heard lots of good things about the company from other bloggers (Jason Nash comes to mind), and because Synology have a wide range of devices to choose from at different price/performance points. They’re not the cheapest but many people say the software is the best around, and having been bitten once with the IX2-200 I figured I’d go upmarket this time. The model I chose was the relatively new DiskStation 1512+, a five bay unit which satisfies most of my requirements with the exception of tiered storage. I was excited when I first read a while ago that some of the Synology units fully support VAAI, but not so this particular model according to Synology (the DS412+ has only limited support). I guess it’s always possible that support will find its way into lower end models such as the 1512+ (even if unsupported) at a future date – here’s hoping!

UPDATE Sept 14th 2012 – While both NFS and iSCSI work with vSphere 5.0, the 1512+ is only certified by VMware for iSCSI on vSphere 4.1 as of 14th Sept 2012. Previous devices (the 1511+ for example) are listed for both NFS and iSCSI, also with vSphere 4.1. Rather than being incompatible it’s more likely that they just haven’t been tested yet, although there are problems with both NFS and iSCSI when using vSphere 5.1 and DSM 4.1.

UPDATE Oct 3rd 2012 – Synology have released an update for their DSM software which fixes the compatibility issues with vSphere 5.1 although it’s referred to as ‘improved performance’ in the release notes. I’ve not tested this yet but hopefully it’s all systems go. Good work Synology!
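
For reference, once the Synology is exporting an NFS share, presenting it to an ESXi 5.x host is only a couple of commands (a rough sketch – the IP address, share path and datastore name below are made-up examples rather than anything from my lab);

# esxcli storage nfs add --host=192.168.1.20 --share=/volume1/vmware --volume-name=syn-nfs01
# esxcli storage nfs list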

There are some additional features I wasn’t looking for but which will come in useful for a home lab;

  • Syslog server (especially useful with ESXi nowadays)
  • DHCP server
  • CloudStation – ‘Dropbox’ style functionality

Having chosen the unit I then needed to choose the drives to populate it with, as the unit doesn’t ship with any. My lab already includes some older disks which I could have reused, plus I had two SSDs in the NexentaStor server which I considered cannibalising. After reading this excellent blogpost about choosing disks for NAS devices (and consulting the Synology compatibility list) I went with five WD Red 2TB HDDs as a compromise between space, performance, compatibility, and cost. I’d missed the introduction of the ‘Red’ range of hard disks, which is targeted at NAS devices running 24×7, but they get good reviews. This decision means I can keep all three storage devices (Iomega IX2, Nexenta and Synology) online and mess around with advanced features like StorageDRS.

UPDATE Feb 18th 2013 – Tom’s Hardware had a look at these WD Red drives and they don’t seem great at high IOps. I’ve not done much benchmarking, but it may be worth investigating other options if performance is key.

I bought my Synology from UK-based ServersPlus, who offered me a great price and free next day shipping too. I was already on their mailing list having come across them on Simon Seagrave’s Techhead.co.uk site – they offer a variety of bundles specifically aimed at VMware home labs (in particular the ML110 G7 bundles are on my wish list, and they do a cheaper HP Microserver bundle too) and are worth checking out.

Using the Synology 1512+

Following the setup guide was trivial and I had the NAS up and running on the network in under ten minutes. I formatted my disks using the default Synology Hybrid RAID, which offers more flexibility for adding disks and mixing disk types and only has a minimal performance impact. Recent DSM software (v4.0 onwards) has been improved so that the initial format is quick and the longer sector check (which takes many hours) runs in the background, allowing you to start using the unit much sooner. My first impression was of the management software, DSM, which is fantastic! I’m not going to repeat what others have already covered, so if you want to know more about the unit and how it performs here’s a great, in-depth review.

I enabled the syslog server and was quickly able to get my ESXi hosts logging to it. Time Machine for my MBP took another minute to configure and I’m looking forward to experimenting with CloudStation which offers ‘Dropbox like functionality’ on the Synology.
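
Pointing the ESXi hosts at the new syslog server only takes a few commands per host (a minimal sketch for ESXi 5.x – the IP address below is a made-up example, and the outgoing syslog firewall rule needs enabling too);

# esxcli system syslog config set --loghost='udp://192.168.1.20:514'
# esxcli system syslog reload
# esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true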

Chris Wahl’s done some investigation into iSCSI vs NFS performance (although on the Synology DS411 rather than the 1512+) and I found similar results – throughput via iSCSI was roughly half that of NFS. I wondered if I had to enable multiple iSCSI sessions as per this article but doing so didn’t make any difference. All tests were over Gigabit NICs and the Synology has both NICs bonded (2Gbps LACP);

  • Copying files from my MBP (mixed sizes, 300GB) to the Synology – 50MB/s write
  • Creating a file (using dd in a VM, CentOS 5.4) via an NFS datastore – 40MB/s write (an example dd command is sketched after this list)
  • Creating a file (using dd in a VM, CentOS 5.4) via an iSCSI datastore – 20MB/s write
  • Creating a thick eager zeroed VMDK on an iSCSI datastore – 75MB/s write
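
The dd tests above were along these lines (a sketch rather than the exact command I ran – the output path, block size and file size are illustrative; oflag=direct bypasses the guest page cache so the figure reflects the datastore rather than guest RAM);

# dd if=/dev/zero of=/root/ddtest.out bs=1M count=2048 oflag=direct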

Given Synology’s published figures, which claim a possible write speed of 194MB/s, these results were rather disappointing, but they’re initial impressions NOT scientific tests (I also tried a similar methodology to Chris using IO Analyser, which also gave me some odd results – average latency over 300ms!) so I’ll update this post once I’ve ironed out the gremlins in my lab.
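
One of the things I’ll revisit is the multiple iSCSI sessions suggestion mentioned above. For anyone wanting to try it, one common approach on ESXi 5.x is to bind additional VMkernel ports to the software iSCSI adapter (a sketch only – the vmhba and vmk names are assumptions that will vary per host, and the article linked above covers the full setup);

# esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
# esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
# esxcli iscsi networkportal list --adapter=vmhba33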

Tip: make sure you disable the default ‘HDD hibernation’ under the Power settings otherwise you’ll find your lab becoming unresponsive when left for periods of time. VMs don’t like their storage to disappear just because they haven’t used it in a while!

LAST MINUTE UPDATE! Just before I published this post the latest release of DSM, v4.1, was finally made available. DSM 4.1 brings several enhancements and having applied it I can attest that it’s an improvement over an already impressive software suite. Of particular interest to home labs will be the addition of an NTP server, a much improved Resource Monitor which includes IOPS, and an improved mail relay.

Overall I’m really impressed with the Synology unit. It’s been running smoothly for a couple of weeks and the software is definitely a strong point. It’s got a great set of features, good performance, is scalable and might even include VAAI support in the future.

Further Reading

A performance comparison of NAS devices (fantastic site)

In-depth review of the Synology 1512+ (SmallNetBuilder.com)

Netapp and vSphere5 storage integration

Let your storage array do the heavy lifting with VAAI!

I’ve seen a few blogposts recently about storage features in vSphere5 and plenty of forum discussions about the level of support from various vendors but none that specifically address the Netapp world. As some of these features require your vendor to provide plugins and integration I’m going to cover the Netapp offerings and point out what works today and what’s promised for the future.

Many of the vSphere5 storage features work regardless of your underlying storage array, including StorageDRS, storage clusters, VMFS5 enhancements (provided you have block protocols) and the VMware Storage Appliance (vSA). The following vSphere features however are dependent on array integration;

  • VAAI (the VMware Storage API for Array Integration). If you need a refresher on VAAI and what’s new in vSphere 5, check out these great blogposts by Dave Henry: part one covers block protocols (FC and iSCSI), part two covers NFS. The inimitable Chad Sakac from EMC also has a great post on the new vSphere5 primitives. A quick way of checking per-device VAAI status from an ESXi host is sketched after this list.
  • VASA (the VMware Storage API for Storage Awareness). Introduced in vSphere5 this allows your storage array to send underlying implementation details of the datastore back to the ESXi host such as RAID levels, replication, dedupe, compression, number of spindles etc. These details can be used by other features such as Storage Profiles and StorageDRS to make more informed decisions.
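
As a quick sanity check from the host side, ESXi 5.x will report whether each block device claims the VAAI primitives (a sketch – the device identifiers in your output will obviously be specific to your hosts, and NFS VAAI support additionally depends on a vendor plugin);

# esxcli storage core device vaai status get
# esxcli storage core device list | grep -i "VAAI Status"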

The main point of administration (and integration) when using Netapp storage is the Virtual Storage Console (VSC), a vCenter plugin created by Netapp. If you haven’t already got this installed (the latest version is v4, released March 16th 2012) then go download it (NOW account required). As well as the vCenter plugin you must ensure your version of ONTAP also supports the vSphere functionality – as of April 19th 2012 the latest release is ONTAP 8.1. You can find out more about its featureset from Netapp’s Nick Howell. As well as the core vSphere storage features the VSC enables some extra features;

These features are all covered in Netapp’s popular TR3749 (best practices for vSphere, now updated for vSphere5) and the VSC release notes.

Poor old NFS – no VAAI for you…

It all sounds great! You’ve upgraded to vSphere5 (with Enterprise or Enterprise Plus licensing), installed the VSC vCenter plugin and upgraded ONTAP to the shiny new 8.1 release. Your Netapp arrays are in place and churning out 1s and 0s at a blinding rate, and you’re looking forward to giving vSphere some time off for good behaviour and letting your Netapp do the heavy lifting…

Continue reading Netapp and vSphere5 storage integration

NexentaStor CE – an introduction


I spent some time at Christmas upgrading my home lab in preparation for the new VCAP exams, which are due out in the first quarter of 2012. In particular I needed to improve my shared storage and hoped that I could reuse old h/w instead of buying something new. I’ve been using an Iomega IX2-200 for the last year but its performance is pretty pitiful, so I usually reverted to local storage, which rather defeated the purpose.

I started off having a quick look around at my storage options for home labs;

Why pick Nexenta?

I’d used OpenFiler and FreeNAS before (both are very capable) but with so much choice I didn’t have time to evaluate all the other options (Greg Porter has a few comments comparing OpenFiler vs Nexenta). Datacore and Starwind’s solutions rely on Windows rather than being bare metal (which was my preference) and I’ve been hearing positive news about Nexenta more and more recently.

On the technical front the SSD caching and VAAI support make Nexenta stand out from the crowd.

Continue reading NexentaStor CE – an introduction

Preventing Oracle RAC node evictions during a Netapp failover


While undertaking some scheduled maintenance on our Netapp shared storage (due to an NVRAM issue) we discovered that some of our Oracle applications didn’t handle the controller outage as gracefully as we expected. In particular several Oracle RAC nodes in our dev and test environments rebooted during the Netapp downtime. Strangely this only affected our virtual Oracle RAC nodes so our initial diagnosis focused on the virtual infrastructure.

Upon further investigation, however, we discovered that there are timeouts in the Oracle RAC clusterware settings which can result in node reboots (referred to as evictions) to preserve data integrity. This affects both Oracle 10g and 11g RAC database servers, although the fix for both is similar. NOTE: We’ve been running Oracle 10g for a few years but hadn’t had similar problems previously as the default timeout value of 60 seconds is higher than the 30 second default for 11g.

Both Netapp and Oracle publish guidance on this issue;

The above guidance focuses on the DiskTimeOut parameter (known as the voting disk timeout) as this is impacted if the voting disk resides on a Netapp. What it doesn’t cover is when the underlying Linux OS also resides on the affected Netapp, as it can with a virtual Oracle server (assuming you want HA/DRS). In this case there is a second timeout value, misscount, which is shorter than the disk timeout (typically 30 seconds instead of 200). If a node can’t reach any of the other RAC nodes within the misscount timeframe it will start split-brain resolution and probably evict itself from the cluster by doing a reboot. When the Netapp failed over, our VMs were freezing for longer than 30 seconds, causing the reboots. After we increased the network timeout we were able to successfully fail over our Netapps with no impact on the virtual RAC servers.

NOTE: A cluster failover (CFO) is not the only event which can trigger this behaviour. Anything which impacts the availability of the filesystem such as I/O failures (faulty cables, failed FC switches etc) or delays (multipathing changes) can have a similar impact. Changing the timeout parameters can impact the availability of your RAC cluster as increasing the value results in a longer period before the other RAC cluster nodes react to a node failure.

Configuring the clusterware network timeouts

The changes need to be applied within the Oracle application stack rather than at the Netapp or VMware layer. On the RAC database server check the cssd.log logfile to understand the cause of the node eviction. If you think it’s due to a timeout you can change it using the below command;

# $GRID_HOME/bin/crsctl set css misscount 180 

To check the new setting has been applied;

# $GRID_HOME/bin/crsctl get css misscount
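
The voting disk timeout covered in the Netapp guidance can be checked and adjusted in the same way (a sketch – 200 seconds is the default mentioned above, so substitute whatever the referenced docs recommend for your environment);

# $GRID_HOME/bin/crsctl set css disktimeout 200
# $GRID_HOME/bin/crsctl get css disktimeout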

The clusterware needs a restart for these new values to take effect, so bounce the cluster;

# $GRID_HOME/bin/crs_stop -all
# $GRID_HOME/bin/crs_start -all

Further Reading

Netapp Best Practice Guidelines for Oracle Database 11g (Netapp TR3633). Section 4.7 in particular is relevant.

Netapp for Oracle database (Netapp Verified Architecture)

Oracle 10gR2 RAC: Setting up Oracle Cluster Synchronization Services with NetApp Storage for High Availability (Netapp TR3555).

How long it takes for Standard active/active cluster to failover

Node evictions in RAC environment

Troubleshooting broken clusterware

Oracle support docs (login required);

  • NOTE: 284752.1 – 10g RAC: Steps To Increase CSS Misscount, Reboottime and Disktimeout
  • NOTE: 559365.1 – Using Diagwait as a diagnostic to get more information for diagnosing Oracle Clusterware Node evictions
  • NOTE: 265769.1 – Troubleshooting 10g and 11.1 Clusterware Reboots
  • NOTE: 783456.1 – CRS Diagnostic Data Gathering: A Summary of Common tools and their Usage