VCAP-DCA Study notes – 3.1 Tune and Optimize vSphere Performance

It’s hard to know what to cover in this objective as performance tuning often implies troubleshooting (note the recommended reading of Performance Troubleshooting!), so there’s significant overlap with the troubleshooting section. Luckily there are plenty of excellent resources in the blogosphere and from VMware, so it’s just a case of reading and practising.

Knowledge

  • Identify appropriate BIOS and firmware setting requirements for optimal ESX/ESXi Host performance
  • Identify appropriate ESX driver revisions required for optimal ESX/ESXi Host performance
  • Recall where to locate information resources to verify compliance with VMware and third party vendor best practices

Skills and Abilities

  • Tune ESX/ESXi Host and Virtual Machine memory configurations
  • Tune ESX/ESXi Host and Virtual Machine networking configurations
  • Tune ESX/ESXi Host and Virtual Machine CPU configurations
  • Tune ESX/ESXi Host and Virtual Machine storage configurations
  • Configure and apply advanced ESX/ESXi Host attributes
  • Configure and apply advanced Virtual Machine attributes
  • Tune and optimize NUMA controls

Tools & learning resources

Identify BIOS and firmware settings for optimal performance

This will vary for each vendor but typical things to check;

  • Power saving for the CPU
  • Hyperthreading – should be enabled
  • Hardware virtualisation (Intel VT, EPT etc) – required for EVC, Fault Tolerance etc (a quick command line check follows this list)
    NOTE: You should also enable the ‘No Execute’ memory protection bit.
  • NUMA settings (node interleaving on a DL385, for instance – normally disabled; check Frank Denneman’s post)
  • WOL for NIC cards (used with DPM)
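
Some of these settings can be verified from the ESX host itself. A minimal sketch of checking hardware virtualisation support, assuming classic ESX with a Service Console (the numeric codes are covered in VMware’s KB on checking whether VT/AMD-V is enabled – roughly, 3 means enabled and lower values mean absent or disabled in the BIOS):

    # Does the VMkernel see hardware virtualisation support on this host?
    esxcfg-info | grep -i "HV Support"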

Identify appropriate ESX driver revisions required for optimal host performance

I guess they mean the HCL. Let’s hope you don’t need an encyclopaedic knowledge of driver version histories!

Tune ESX/i host and VM memory configurations

Read this great series of blog posts from Arnim Van Lieshout on memory management – part one, two and three. And, as always, there’s a relevant post from Frank Denneman.

Check your Service Console memory usage using esxtop.
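
A quick sketch of doing that interactively:

    esxtop              # run from the Service Console
    # press 'm' to switch to the memory screen;
    # the COSMEM line shows Service Console memory usage, 'q' quits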

Hardware assisted memory virtualisation

Check this is enabled (per VM). Edit Settings -> Options -> CPU/MMU Virtualisation;

[Image: Enabling h/w CPU/memory assist for a VM]
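
The equivalent can be set directly in the VM’s .vmx file; a minimal sketch, assuming the vSphere 4.x monitor options (verify the option names against your build before relying on them):

    # CPU/MMU virtualisation preference for this VM
    # "automatic" lets ESX decide; "hardware" forces hardware assist (EPT/RVI)
    monitor.virtual_exec = "hardware"
    monitor.virtual_mmu = "hardware"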

NOTE: VMware strongly recommend using large pages in conjunction with hardware assisted memory virtualisation. See section 3.2 for details on enabling large memory pages. However, enabling large pages negates the efficiency of TPS, so you gain performance at the cost of higher memory usage. Pick your poison… (and read this interesting thread on the VMware forums)
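
Large page allocation is governed by a host-wide advanced setting; a sketch using esxcfg-advcfg, assuming the vSphere 4.x option name:

    esxcfg-advcfg -g /Mem/AllocGuestLargePage    # show the current value (1 = enabled)
    esxcfg-advcfg -s 0 /Mem/AllocGuestLargePage  # disable large pages, letting TPS share pages immediately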

VCAP-DCA Study Notes – 1.3 Complex Multipathing and PSA plugins

This section overlaps with objectives 1.1 (Advanced storage management) and 1.2 (Storage capacity) but covers the multipathing functionality in more detail.

Knowledge

  • Explain the Pluggable Storage Architecture (PSA) layout

Skills and Abilities

  • Install and Configure PSA plug-ins
  • Understand different multipathing policy functionalities
  • Perform command line configuration of multipathing options
  • Change a multipath policy
  • Configure Software iSCSI port binding

Tools & learning resources

Understanding the PSA layout

The PSA layout is well documented by VMware and around the blogosphere. The PSA architecture is for block level protocols (FC and iSCSI) – it isn’t used for NFS.


Terminology;

  • MPP = Multipathing Plugin, made up of one or more SATPs plus one or more PSPs
  • NMP = Native Multipathing Plugin – VMware’s default MPP
  • SATP = Storage Array Type Plugin – the ‘traffic cop’, handling failover for a given array type
  • PSP = Path Selection Plugin – the ‘driver’, choosing which path each I/O takes
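
You can see these plugins from the command line; a sketch using the vSphere 4.x esxcli namespace:

    esxcli nmp satp list    # lists the SATPs along with their default PSPs
    esxcli nmp psp list     # lists the available PSPs
    esxcli nmp device list  # shows each device's SATP, current PSP and paths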

There are four possible pathing policies;

  • MRU = Most Recently Used. Typically used with active/passive (low end) arrays.
  • Fixed = The path is fixed, with a ‘preferred path’. On failover the alternative paths are used, but when the original path is restored it again becomes the active path.
  • Fixed_AP = new to vSphere 4.1. This enhances the ‘Fixed’ pathing policy to make it applicable to active/passive arrays and ALUA capable arrays. If no user preferred path is set it will use its knowledge of optimised paths to set preferred paths.
  • RR = Round Robin
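
Changing the policy per device is also an esxcli job; a sketch, using a placeholder device ID (substitute your own naa identifier from esxcli nmp device list):

    # switch one LUN to Round Robin
    esxcli nmp device setpolicy --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR
    # confirm the change
    esxcli nmp device list --device naa.xxxxxxxxxxxxxxxx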

One way to think of ALUA is as a form of ‘auto negotiate’. The array communicates with the ESX host and lets it know which paths are available for each LUN, and in particular which are optimal. ALUA tends to be offered on midrange arrays, which are typically asymmetric active/active rather than symmetric active/active (the latter tend to be even more expensive). Determining whether an array is ‘true’ active/active is not as simple as you might think! Read Frank Denneman’s excellent blog post on the subject. Our NetApp 3000 series arrays are asymmetric active/active rather than ‘true’ active/active.
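
Software iSCSI port binding (the last skill in the list above) is also a command line task in vSphere 4.x; a minimal sketch, with vmk1 and vmhba33 as placeholders for your VMkernel port and software iSCSI adaptor:

    esxcli swiscsi nic add -n vmk1 -d vmhba33    # bind the VMkernel NIC to the iSCSI adaptor
    esxcli swiscsi nic list -d vmhba33           # verify the binding
    esxcfg-rescan vmhba33                        # rescan to pick up paths over the new binding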

VCAP-DCA Study Notes – 2.4 Administer vNetwork Distributed Switches

Knowledge

  • Explain relationship between vDS and logical vSSes

Skills and Abilities

  • Understand the use of command line tools to configure appropriate vDS settings on an ESX/ESXi host
  • Determine use cases for and apply Port Binding settings
  • Configure Live Port Moving
  • Given a set of network requirements, identify the appropriate distributed switch technology to use
  • Use command line tools to troubleshoot and identify configuration items from an existing vDS

Tools & learning resources

Relationship between vSS and vDS

Both standard (vSS) and distributed (vDS) switches can exist at the same time – indeed there’s good reason to use this ‘hybrid’ mode.

You can view the switch configuration on a host (both vSS and vDS) using esxcfg-vswitch -l. It won’t show the ‘hidden’ switches used under the hood by the vDS, although you can read more about those in this useful article at RTFM or at Geeksilver’s blog.

Command line configuration of a vDS

The command line is pretty limited when it comes to vDS. Useful commands;

  • esxcfg-vswitch
    • esxcfg-vswitch -P vmnic0 -V 101 <dvSwitch> (link a physical NIC to a vDS)
    • esxcfg-vswitch -Q vmnic0 -V 101 <dvSwitch> (unlink a physical NIC from a vDS)
  • esxcfg-vswif -l | -d (list or delete a service console)
  • esxcfg-nics
  • net-dvs

NOTE: net-dvs can be used for diagnostics although it’s an unsupported command. It’s located in /usr/lib/vmware/bin. Use of this command is covered in section 6.4 Troubleshooting Network connectivity.
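
Being unsupported there’s no published syntax; simply running it and reading the dump is the common approach (an assumption based on general usage – check the behaviour on your own build):

    cd /usr/lib/vmware/bin
    ./net-dvs           # dumps the locally cached dvSwitch state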

NOTE: esxcfg-vswitch can ONLY be used to link and unlink physical adaptors from a vDS. Use this to fix faulty network configurations – if necessary create a vSS switch and move your physical uplinks across to get your host back on the network. See VMware KB 1008127 or this blog post for details.
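
A sketch of that recovery, with the vmnic, DVPort number, portgroup and addresses all as placeholders:

    # unlink the physical NIC from the vDS
    esxcfg-vswitch -Q vmnic0 -V 101 dvSwitch
    # create a standard switch and reattach the uplink
    esxcfg-vswitch -a vSwitch0
    esxcfg-vswitch -A "Service Console" vSwitch0
    esxcfg-vswitch -L vmnic0 vSwitch0
    # recreate the Service Console interface on the new portgroup
    esxcfg-vswif -a vswif0 -p "Service Console" -i 192.168.1.10 -n 255.255.255.0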

Identify configuration items from an existing vDS

You can use esxcfg-vswitch -l to show the dvPort assigned to a given pNIC and dvPortGroup.

See the Troubleshooting Network connectivity section for more details.

VCAP-DCA Study Notes – 2.3 Deploy and Maintain Scalable virtual networks

Knowledge

  • Identify VMware NIC Teaming policies
  • Identify common network protocols

Skills and Abilities

  • Understand the NIC Teaming failover types and related physical network settings
  • Determine and apply Failover settings
  • Configure explicit failover to conform with VMware best practices
  • Configure port groups to properly isolate network traffic

Tools & learning resources

Identify, understand, and configure NIC teaming

The five available policies are;

  • Route based on virtual port ID (default)
  • Route based on IP Hash (MUST be used with static Etherchannel – no LACP). No beacon probing.
  • Route based on source MAC address
  • Route based on physical NIC load (vSphere 4.1 only)
  • Explicit failover

NOTE: These only affect outbound traffic. Inbound load balancing is controlled by the physical switch.
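
As a rough illustration of why IP hash needs a static Etherchannel: the host chooses the uplink from a hash of the source and destination IP addresses, along the lines of (src IP XOR dst IP) mod (number of active uplinks) – a simplification of the algorithm VMware document. A worked example with made-up addresses:

    # 10.0.0.10 (0x0A00000A) talking to 10.0.0.50 (0x0A000032), two uplinks
    # 0x0A00000A XOR 0x0A000032 = 0x38 = 56
    # 56 mod 2 = 0 -> this pair always leaves via the first uplink
    # the physical switch must treat both uplinks as one channel or return traffic breaks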
