VCAP-DCA Study notes – 3.3 Implement and maintain complex DRS solutions

Knowledge

  • Explain DRS affinity and anti-affinity rules
  • Identify required hardware components to support DPM
  • Identify EVC requirements, baselines and components
  • Understand the DRS slot-size algorithm and its impact on migration recommendations

Skills and Abilities

  • Properly configure BIOS and management settings to support DPM
  • Test DPM to verify proper configuration
  • Configure appropriate DPM Threshold to meet business requirements
  • Configure EVC using appropriate baseline
  • Change the EVC mode on an existing DRS cluster
  • Create DRS and DPM alarms
  • Configure applicable power management settings for ESX Hosts
  • Properly size virtual machines and clusters for optimal DRS efficiency
  • Properly apply virtual machine automation levels based upon application requirements

Tools & learning resources

Advanced DRS

  • Read the DRS deepdive at Yellow Bricks.
  • Use the (new to vSphere) DRS Faults and DRS History tabs to investigate issues with DRS
  • By default DRS recalculates every 5 minutes (including DPM recommendations), but it also runs when resource settings change (reservations, adding/removing hosts, etc.). For a full list of actions which trigger DRS calculations see Frank Denneman’s HA/DRS book.
  • It’s perfectly possible to turn on DRS even when the prerequisite functionality isn’t in place – for example, if vMotion isn’t enabled you won’t be warned (at least not until DRS tries to migrate a VM)! A scripted way to set a cluster’s DRS configuration is sketched below.
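
If you want to script a cluster’s DRS settings, the vSphere API exposes them through the cluster’s configuration spec. Here’s a minimal sketch using pyVmomi (the Python SDK for the vSphere API) – the vCenter address, credentials and cluster name (‘Prod-Cluster’) are all placeholders, so treat it as an illustration rather than a definitive recipe;

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Connect to vCenter (hostname and credentials are placeholders)
    si = SmartConnect(host='vcenter.example.com', user='administrator',
                      pwd='password', sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    # Find the cluster by name ('Prod-Cluster' is a placeholder)
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == 'Prod-Cluster')

    # Enable DRS, fully automated, with the default (middle) migration threshold
    spec = vim.cluster.ConfigSpecEx(
        drsConfig=vim.cluster.DrsConfigInfo(
            enabled=True,
            defaultVmBehavior='fullyAutomated',  # manual | partiallyAutomated | fullyAutomated
            vmotionRate=3))  # threshold runs 1-5; 3 matches the UI's middle (default) setting
    cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)

    Disconnect(si)

The later sketches in these notes assume the same connection and ‘cluster’ lookup.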

Affinity and anti-affinity rules

There are two types of affinity/anti-affinity rules;

  • VM-VM (new in vSphere 4.0)
  • VM-Host (new in vSphere 4.1)

VM-VM affinity is pretty straightforward. Simply select a group of two or more VMs and decide if they should be kept together (affinity) or apart (anti-affinity). Typical use cases (a scripted example follows the list);

  • Webservers acting in a web farm (set anti-affinity to keep them on separate hosts for redundancy)
  • A webserver and associated application server (set affinity to optimise networking by keeping them on the same host)
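
As a rough illustration, here’s how the web-farm anti-affinity rule could be created via pyVmomi. The rule name is a placeholder, and ‘cluster’, ‘web1’ and ‘web2’ are assumed to have been looked up as in the earlier sketch;

    from pyVmomi import vim

    # Keep the two web servers on separate hosts (anti-affinity)
    rule = vim.cluster.AntiAffinityRuleSpec(
        name='web-farm-keep-apart',   # placeholder name
        enabled=True,
        vm=[web1, web2])              # vim.VirtualMachine objects

    spec = vim.cluster.ConfigSpecEx(
        rulesSpec=[vim.cluster.RuleSpec(info=rule, operation='add')])
    cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)

For an affinity (keep together) rule, swap in vim.cluster.AffinityRuleSpec.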

VM-Host affinity is a new feature (with vSphere 4.1) which lets you ‘pin’ one or more VMs to a particular host or group of hosts. Use cases I can think of;

  • Pin the vCenter server to a couple of known hosts in a large cluster
  • Pin VMs for licence compliance (think Oracle, although apparently they don’t recognise this new feature as being valid – see the comments in this post)
  • Microsoft clustering (see section 4.3 for more details on how to configure this)
  • Multi-tenancy (cloud infrastructures)
  • Blade environments (ensure VMs run on different chassis in case of backplane failure)
  • Stretched clusters (spread between sites. See this NetApp post for MetroCluster details)

To implement them (a pyVmomi sketch follows the list);

  • Define ‘pools’ of hosts.
  • Define ‘pools’ of VMs.
  • Create a rule pairing one VM group with one host group.
    • Specify either affinity (keep together) or anti-affinity (keep apart).
    • Specify either ‘should’ or ‘must’ (preference or mandatory).
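
A minimal pyVmomi sketch of those three steps, assuming ‘cluster’ and the VM/host objects have already been looked up (all the group and rule names are placeholders);

    from pyVmomi import vim

    # Define a 'pool' of VMs and a 'pool' of hosts
    vm_group = vim.cluster.VmGroup(name='oracle-vms', vm=[vm1, vm2])
    host_group = vim.cluster.HostGroup(name='licensed-hosts', host=[host1, host2])

    # Pair the VM group with the host group
    rule = vim.cluster.VmHostRuleInfo(
        name='pin-oracle-to-licensed-hosts',
        enabled=True,
        mandatory=True,                        # True = 'must', False = 'should'
        vmGroupName='oracle-vms',
        affineHostGroupName='licensed-hosts')  # antiAffineHostGroupName keeps them apart

    spec = vim.cluster.ConfigSpecEx(
        groupSpec=[vim.cluster.GroupSpec(info=vm_group, operation='add'),
                   vim.cluster.GroupSpec(info=host_group, operation='add')],
        rulesSpec=[vim.cluster.RuleSpec(info=rule, operation='add')])
    cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)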

Continue reading VCAP-DCA Study notes – 3.3 Implement and maintain complex DRS solutions

VCAP-DCA Study Notes – 4.2 Deploy and test VMware FT

The main document to work through for the VCAP-DCA is the Availability Guide, but there are plenty of good white papers and blog posts which give useful background information (see the bottom of this post). If you have access to the 2010 VMworld content it’s worth watching session BC8274, which covers most of the material on the blueprint.

Knowledge

  • Identify VMware FT hardware requirements
  • Identify VMware FT compatibility requirements

Skills and Abilities

  • Modify VM and ESX/ESXi Host settings to allow for FT compatibility
  • Use VMware best practices to prepare a vSphere environment for FT
  • Configure FT logging
  • Prepare the infrastructure for FT compliance
  • Test FT failover, secondary restart and application fault tolerance in a FT Virtual Machine

FT requirements (hardware, software and feature compatibility)

Compatibility
  • Firstly you have to make sure your host hardware will support FT – it’s more demanding than many other VMware features (a quick scripted check follows this list).
    • The main requirement is CPU support for VMware’s vLockstep technology. Rather than list the processor families which support FT here, check VMware KB1008027.
    • Hardware virtualisation must also be enabled in the BIOS (not always on by default).
  • You need to ensure the guest OS and CPU combination is supported (the Availability Guide notes, for example, that Solaris on AMD is not)
  • Must have HA enabled on the cluster
  • Licensing – you need vSphere Advanced or higher to run FT
  • Host certificate checking must be enabled. If you did a clean install of vSphere 4.x this is enabled by default, but if you upgraded from VI3.x you have to enable it explicitly (vCenter Server Settings, SSL Settings)
  • Avoid mixing ESX and ESXi hosts in a cluster with FT-enabled VMs (VMware KB1013637)
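
For the scripted check mentioned above: each host’s capability object reports whether it can support FT. A one-liner sketch with pyVmomi, reusing the ‘cluster’ object from the earlier DRS examples;

    # Report per-host FT support across the cluster
    for host in cluster.host:
        print('{}: ftSupported={}'.format(host.name, host.capability.ftSupported))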

There are also VM-level requirements (a scripted pre-flight check follows the list);

  • No USB or sound devices
  • No NPIV
  • No paravirtualized guest OS
  • No physical mode RDMs
  • Hot plug (memory, CPU, hard disks etc) is automatically disabled for FT-enabled VMs
  • No serial or parallel ports
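
That checklist lends itself to a scripted pre-flight check. A rough pyVmomi sketch covering just the device-level items above (‘vm’ is assumed to be a vim.VirtualMachine already looked up);

    from pyVmomi import vim

    def ft_blockers(vm):
        """Return the FT-blocking items (from the list above) found on a VM."""
        issues = []
        for dev in vm.config.hardware.device:
            if isinstance(dev, (vim.vm.device.VirtualUSB,
                                vim.vm.device.VirtualUSBController)):
                issues.append('USB device/controller')
            elif isinstance(dev, vim.vm.device.VirtualSoundCard):
                issues.append('sound device')
            elif isinstance(dev, (vim.vm.device.VirtualSerialPort,
                                  vim.vm.device.VirtualParallelPort)):
                issues.append('serial/parallel port')
            elif isinstance(dev, vim.vm.device.VirtualDisk):
                backing = dev.backing
                if (isinstance(backing,
                               vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo)
                        and backing.compatibilityMode == 'physicalMode'):
                    issues.append('physical-mode RDM')
        if vm.config.npivNodeWorldWideName:   # NPIV WWNs assigned to the VM
            issues.append('NPIV enabled')
        return issues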

Restrictions

FT places quite a few restrictions on the features you can use;

Continue reading VCAP-DCA Study Notes – 4.2 Deploy and test VMware FT