Wednesday, 5 November 2014

PowerShell script to bring up Hyper-V VMs slowly enough that they function

The PowerShell script below is something I wrote out of frustration while on an Exchange Messaging course, to try to improve my mood. The Hyper-V VMs we were starting for each lab took an absolute age to load up and become responsive after we reverted their state at the end of each lab. This script starts the machines one after another with a configurable gap in between. The idea is that you set the $vms array to the machines you want started, then wander off for a bit of a break; hopefully, by the time you return, the VMs will have loaded and become responsive.

You may ask why you wouldn't just start them all together. Well, the machines running the VMs were not up to spec, and each of the VMs seemed to be based off a single parent VM using differencing disks, which severely impacted performance. The VMs took so long to load that services on each of them often failed to start because of timeouts.

I thought I would share in the hope it may save someone else pulling their hair out.

# Machines to start, in order
$vms = "20342B-LON-DC1", "20342B-LON-CAS1", "20342B-LON-MBX1", "20342B-LON-CL1", "20342B-LON-CL2", "20342B-LON-LY1"
# Minimum uptime in seconds to check for before starting the next machine
$delaytime = 45

Function StartVM {
    param ($Name)
    $vmquery = Get-VM $Name
    if ($vmquery.State -eq "Off") {
        Write-Host " not running, starting" -NoNewline
        Start-VM $Name
        $vmquery = Get-VM $Name
    }
    elseif ($vmquery.State -eq "Running") {
        Write-Host " already running" -NoNewline
    }
    # Poll until the VM has been up for at least $delaytime seconds
    while ($vmquery.Uptime.TotalSeconds -lt $delaytime) {
        $vmquery = Get-VM $Name
        Start-Sleep 2
        Write-Host . -NoNewline
    }
    Write-Host " uptime is " -NoNewline
    Write-Host $vmquery.Uptime.TotalSeconds -NoNewline
    Write-Host " seconds, over $delaytime so assumed to be up and running."
}

foreach ($vm in $vms) {
    Write-Host "Checking $vm... " -NoNewline
    StartVM $vm
}
Write-Host "If $delaytime seconds was long enough your VMs should be functioning now."

I am still on the course, studying for 70-341: Core Solutions of Microsoft Exchange Server 2013 and 70-342: Advanced Solutions of Microsoft Exchange Server 2013, so I may well update the script as I run more labs. Ideally I would like to improve the script so it waits until the started VM is "responding" before starting the next one, but this has proved difficult; if you have any suggestions, please let me know.
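One possible approach I have not yet tested is to use the Hyper-V heartbeat integration service: once the integration components inside the guest are running, Get-VM reports a Heartbeat status beginning with "Ok". A rough sketch along those lines, assuming the Heartbeat service is enabled in each VM (the function name and timeout are my own invention):

Function WaitForHeartbeat {
    param ($Name, $TimeoutSeconds = 300)
    $elapsed = 0
    while ($elapsed -lt $TimeoutSeconds) {
        # Heartbeat reads e.g. "OkApplicationsHealthy" once the guest's
        # integration services are up and answering the host
        $hb = (Get-VM $Name).Heartbeat
        if ($hb -like "Ok*") { return $true }
        Start-Sleep 2
        $elapsed += 2
        Write-Host . -NoNewline
    }
    # Timed out without a healthy heartbeat
    return $false
}

This could replace the fixed uptime check in StartVM, falling back to the uptime delay for guests without integration services installed.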

Update 06/02/2015: I have improved the script a little. It now copes better with machines being started from other sources, or with the script being restarted, and the times are in seconds, as the latest labs I have endured ran on machines with SSDs, which significantly improves performance.

Tuesday, 9 September 2014

Squeezing more from your Linux VMs

There is a way to squeeze a little more out of Linux VMs.

The theory

Linux has a number of ways of sharing its storage IO among different processes. The norm seems to be the Completely Fair Queuing (CFQ) scheduler, which helps to prevent a single process from using more than its fair share of storage IO. This is usually helpful when the Linux box is the only OS using a storage device, but when it is running on a hypervisor, the hypervisor is also busy trying to dish out fair access to the storage, so we are effectively calculating fair access twice. There are other schedulers to choose from, and for VMs it seems to make the most sense to use either the noop or the deadline scheduler; both are simpler to compute than CFQ.

The practice

If you are using the device sda for your drive, use the following to check which scheduler you are using:
# cat /sys/block/sda/queue/scheduler
which will return something like
noop anticipatory deadline [cfq]
which indicates that the active scheduler for this device is CFQ (shown in brackets).

To change it on the fly (as root), you just:
echo noop > /sys/block/sda/queue/scheduler
If you want the change to persist across reboots, you will need to add elevator=noop to your kernel boot parameters. On a Debian system, edit /etc/default/grub, add "elevator=noop" to the GRUB_CMDLINE_LINUX line, then run:
# update-grub
to update the GRUB configuration.
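For example, on a Debian box the relevant part of /etc/default/grub would end up looking something like this (the existing options on your system may differ, so append rather than replace):

```shell
# /etc/default/grub (excerpt) - set the default I/O scheduler at boot
GRUB_CMDLINE_LINUX="elevator=noop"

# then regenerate the grub configuration and reboot:
#   update-grub
```

After the reboot, cat /sys/block/sda/queue/scheduler again to confirm noop is now the one in brackets.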