Clearing out a missing pool desktop

If a desktop goes missing after a recompose, newer versions of Horizon have a tool to help out so you don’t have to go into the ADAM database and manually edit things.

Log into the Horizon server, change to the following directory, and follow the steps:

cd c:\Program Files\VMware\VMware View\Server\tools\bin
viewdbchk.cmd --scanMachines --limit 100

You will have to enter the View user password as well as the Composer password.
It will ask to disable the pool; say yes (make sure you have a maintenance window), and then it will ask to re-enable the pool once it’s done.
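If you already know which pool is affected, viewdbchk can be pointed at a single pool instead of scanning everything. A hedged sketch (the pool and machine names below are placeholders, and it’s worth running viewdbchk.cmd --help first to confirm the exact switches on your build):

viewdbchk.cmd --scanMachines --desktopName "Win7-Pool" --limit 100
viewdbchk.cmd --removeMachine --machineName "win7-pool-042" --desktopName "Win7-Pool"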

RDM – find assigned RDMs and make sure they are set as Perennially Reserved

<#
.SYNOPSIS
Alan Harrington. Copy and execute sections one at a time to make sure each runs okay.
.DESCRIPTION
Collects all RDMs that are ASSIGNED to a VM and marks them as Perennially Reserved; if the RDM is not configured on a VM it will not be set.
#>
$cluster = "vnesxcomp099_rdms"

#Get-Cluster $cluster | Get-VM | Sort-Object Name | Get-HardDisk -DiskType "RawPhysical","RawVirtual" | Select-Object Parent,ScsiCanonicalName
$rdmsattachedinclu = Get-Cluster $cluster | Get-VM | Sort-Object Name | Get-HardDisk -DiskType "RawPhysical","RawVirtual" | Select-Object Parent,ScsiCanonicalName
$rdmsscsinaa = $rdmsattachedinclu | Select-Object ScsiCanonicalName

# Full list, sorted, unique
[array]$temp = $null
foreach ($rdmsscsi in $rdmsscsinaa) {
    $temp2 = $rdmsscsi.ScsiCanonicalName
    $temp = $temp + $temp2
}
foreach ($i in $temp) {
    [string]$i = $i
    $temp[$temp.IndexOf($i)] = $i.SubString(4)
}
$rdmlist = $temp | Sort-Object | Select-Object -Unique
$rdmlist = $rdmlist | ForEach-Object { "naa.$_" }
# Now that the list is converted back to naa., time to set it

$vmhs = Get-Cluster $cluster | Get-VMHost | Sort-Object Name
$vmhsesxcli = $vmhs | Get-EsxCli
foreach ($esxcli in $vmhsesxcli) {
    # And for each RDM disk
    foreach ($RDM in $rdmlist) {
        # Set the configuration to "PerenniallyReserved".
        # setconfig method: void setconfig(boolean detached, string device, boolean perenniallyreserved)
        $esxcli.storage.core.device.setconfig($false, $RDM, $true)
        $esxcli.storage.core.device.list($RDM)
    }
}
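On newer PowerCLI builds Get-EsxCli wants the -V2 switch, so here is a minimal sketch of the same setconfig call in the V2 argument-hashtable style. This assumes the $cluster and $rdmlist variables from above; the argument names follow the esxcli long options, so double-check them with $esxcli2.storage.core.device.setconfig.Help() before running anything for real.

foreach ($vmh in (Get-Cluster $cluster | Get-VMHost | Sort-Object Name)) {
    $esxcli2 = Get-EsxCli -VMHost $vmh -V2
    foreach ($naa in $rdmlist) {
        # Mark the device perennially reserved on this host
        $esxcli2.storage.core.device.setconfig.Invoke(@{device = $naa; perenniallyreserved = $true})
        # Verify the flag took
        $esxcli2.storage.core.device.list.Invoke(@{device = $naa}) | Select-Object Device, IsPerenniallyReserved
    }
}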

checking a cluster to see about MM (scsi bus sharing)

$clname = "HOSA-P-FARM06-ProSuites"
$clustertocheck = Get-Cluster $clname
Write-Host "Checking $clname" -ForegroundColor Cyan
$vmhs = $clustertocheck | Get-VMHost | Sort-Object Name
foreach ($vmh in $vmhs) {
    $scsishared = $null
    $vms = $vmh | Get-VM | Sort-Object Name
    #$hostver = $vmh | Get-View -Property Name,Config.Product | Format-Table Name, @{L='Version';E={$_.Config.Product.FullName}}
    $scsishared = $vms | Get-ScsiController | Where-Object { $_.BusSharingMode -eq 'Physical' -or $_.BusSharingMode -eq 'Virtual' }
    if (!$scsishared) {
        if ($vmh.ConnectionState -eq "Maintenance") { Write-Host "$vmh is in MM already, host is $($vmh.Version)" }
        if ($vmh.ConnectionState -ne "Maintenance") { Write-Host "$vmh is okay, no shared SCSI bus VMs" -ForegroundColor Yellow }
    }
    if ($scsishared) { Write-Host "$vmh is NOT okay, SCSI BUS SHARED VMs" -ForegroundColor Red }
}
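Once a host comes back clean (no shared SCSI bus VMs), flipping it into maintenance mode is a one-liner. A hedged sketch with a placeholder host name; DRS still has to be able to move everything off:

Get-VMHost "esx01.example.local" | Set-VMHost -State Maintenance -RunAsync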

Migrating VDS port group vms to vss port groups

$face = Get-VirtualPortGroup | Sort-Object Name

$face | Where-Object { $_.VLanId -eq "344" }

$vdpg = "dvsportgroup"

Get-VDPortgroup $vdpg | Get-VM

$vmnics = Get-VDPortgroup $vdpg | Get-VM | Get-NetworkAdapter | Where-Object { $_.NetworkName -eq $vdpg }

$vmnics | Set-NetworkAdapter -NetworkName "vssportgroup" -WhatIf
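When the -WhatIf output looks right, drop it and run for real, then make sure the VD port group comes back empty. A quick hedged follow-up (port group names are still the placeholders from above):

$vmnics | Set-NetworkAdapter -NetworkName "vssportgroup" -Confirm:$false
# The VDS port group should now return no VMs
Get-VDPortgroup $vdpg | Get-VM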

vRA 6.2 to vRA 6.2.1 Upgrade How-To

There isn’t any documentation that I can find yet for the 6.2.1 vRA upgrade; however, that didn’t stop me from upgrading our dev environment to check it out!

UPDATE 3/17/2015: our VMRC proxy wasn’t working; there is a tiny mention of port 8444 here. Don’t forget to add a virtual pool and members for port 8444 behind the load balancer! (We are using an F5.)

You’re supposed to follow the documentation for 6.2, but there are a few things that need to be noted.

you finally don’t have to upgrade .NET, which is amazing.

snapshot everything!!! and don’t forget to back up your database!
Get-ResourcePool vcac03* | Get-VM | New-Snapshot -Name "pre-6.2.1" -Memory:$false -Quiesce:$false

got to love me some powercli 🙂

First, make sure your DEM status is showing as executing zero workloads, otherwise database upgrade issues will happen.

If it’s not zero, wait a few minutes and check again; it will be!

After that, stop the vCAC services on the following boxes, in this order (a hedged PowerShell sketch follows the list):
Proxy
DEMs
MGR – orchestrators
MGR – service
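Something like this, run locally on each box, takes care of the service stop; the display-name wildcards are assumptions, so eyeball the Get-Service output on your IaaS servers before piping anything to Stop-Service:

# Display names vary by role and version - verify before stopping
Get-Service -DisplayName "*vCloud Automation Center*", "*DEM*" | Select-Object Status, DisplayName
Get-Service -DisplayName "*vCloud Automation Center*", "*DEM*" | Stop-Service -Verbose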
Next, attach the update repo ISO (VMware-vCAC-Appliance-6.2.1.0-2553372-updaterepo.iso) to the CD-ROM drive, log into the first app box, go under Update – Settings – choose CD-ROM, then go back to Status and check for updates.
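Attaching the ISO can also be done from PowerCLI; a hedged sketch where the appliance VM name and datastore path are placeholders for your environment:

Get-VM "vcac-va01" | Get-CDDrive |
    Set-CDDrive -IsoPath "[datastore01] ISO/VMware-vCAC-Appliance-6.2.1.0-2553372-updaterepo.iso" -Connected:$true -Confirm:$false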

After that, click install and wait 15 minutes or so; stay on that page and DON’T do anything. The status will display below.

After that, reboot, wait another 15 minutes, and you’ll have 29 services. (Note: the console proxy is not registered yet in this picture.)

Download the dbupgrade script and run it against the SQL database; it shouldn’t take long at all (a few seconds). You can see the schema has in fact been updated, so make sure this is done!

When that is done, hop onto the IaaS box and download the IaaS installer from the 6.2.1 web app box.

(note: run the installer on the iaas box…. thanks Jonathon…)

Up next, go down the line in the order below:

MGR service
MGR orchestrators
DEM workers
Proxies

after that you are done!! congratulations!

Move on to the vCO boxes and call it a day: power them off, snap them, then power them back on!

Get-ResourcePool vcac03* | Get-VM | New-Snapshot -Name "6.2.1-done" -Memory:$false -Quiesce:$false
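For the vCO boxes specifically, the power-off, snap, power-on dance in PowerCLI looks something like this; the vco* name filter is a placeholder, so adjust it to however your appliances are actually named:

$vcovms = Get-VM "vco*"
$vcovms | Shutdown-VMGuest -Confirm:$false        # graceful guest shutdown (needs VMware Tools)
while (Get-VM "vco*" | Where-Object { $_.PowerState -ne "PoweredOff" }) { Start-Sleep -Seconds 10 }
$vcovms | New-Snapshot -Name "6.2.1-done-vco" -Memory:$false -Quiesce:$false
$vcovms | Start-VM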

A curious thing I noticed: the build on the webpage is Build 6.2.1-2543390, but the appliance shows Appliance Version: 6.2.1.0 Build 2553372.

quick way to upgrade hosts

quick way to upgrade a host with no vCenter (or when it’s down, like a home lab all in one)

esxcli network firewall ruleset set -e true -r httpClient
esxcli software sources profile list -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
esxcli software profile update -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml -p ESXi-5.5.0-20141204001-standard

vcac 6.0.1.1 update changes configuration files.

So there is a little issue when upgrading to 6.0.1.1 from 6.0.1 (which is the only supported path; you have to upgrade from 6.0 to 6.0.1 first).
This has only been noticed with the update ISO attached as a CD-ROM, so I don’t know about the other methods (seeing how it’s not released on the repository yet).

The update removes A LOT of info from the server configuration, like its DB connection, its cluster settings, etc.
Here are the files I have noticed it changing (note: only the files covered in the distributed architecture documentation have been checked, so more configuration files might actually have been changed):
Located in /etc/vcac/
encryption.key
security.properties
server.xml
setenv-core
solution-users.properties
vcac.keystore
vcac.properties

Located in /etc/apache2
server.pem

Of those, the two files that are actually changed:
server.xml
setenv-core

The lines for clustering and the cafe.node are missing from setenv-core:
VCAC_OPTS="$VCAC_OPTS -Dspring.profiles.active=default,cluster"
VCAC_OPTS="$VCAC_OPTS -Dcluster.cache.invalidation.poll.enabled=true"

server.xml appears to be a different format altogether, but still contains the data. Here you can see where it overwrote the database connection in Global Naming Resources; I’m not sure what else has changed here. The localhost setting for clustering appears changed as well.

However, we are manually reconfiguring those two files and modifying the cafe.node instance id by hand.

Just throwing this out there for anyone who is about to upgrade to 6.0.1.1: you have been warned!!

vcpus, more the merrier? pt2

A while ago I published a blog post, located here, that talked about the concept that more vCPUs don’t necessarily mean more performance, due to ready time, CPU timing, and various other things. Read the blog post above if you are looking for more information!

This client went live with their system and was experiencing some database timeouts and some sluggishness with consumer reports and meter usage. Not something you want to have happen…

I was able to convince the vendor to let me actually try to REDUCE the number of CPUs from 8 to 4. If you recall, last time the server was at 16.3% RDY with barely any load, and it moved up to 28.9% during some data migration testing. So, let’s see the results with the CPUs reduced…

As you can see it’s now 2.64%! Amazing!!!

The VM was no longer having any performance issues, was generating reports over 4x faster, and had no hiccups or sluggishness about it!
Fewer CPUs = more work!

If you look at the picture you will notice there are 43 VMs on this host… with 59 vCPUs. Running the math, 59/16 = 3.6875, so the ratio is 3.6875 vCPUs to 1 pCPU. The VM runs around 13% utilization with bursts of 66%; we could drop it down to 2 or 3 CPUs for even better ratios and even more performance for all VMs. However, the vendor still won’t let me 🙂
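If you want to eyeball that vCPU:pCPU ratio across a whole cluster instead of doing the math by hand, here’s a hedged PowerCLI sketch (the cluster name is a placeholder, and it only counts powered-on VMs):

Get-Cluster "Prod-Cluster" | Get-VMHost | ForEach-Object {
    $vmh = $_
    # Sum the vCPUs of powered-on VMs on this host and compare to physical cores
    $vcpus = ($vmh | Get-VM | Where-Object { $_.PowerState -eq "PoweredOn" } |
        Measure-Object -Property NumCpu -Sum).Sum
    [pscustomobject]@{
        Host  = $vmh.Name
        pCPU  = $vmh.NumCpu
        vCPU  = $vcpus
        Ratio = if ($vcpus) { [math]::Round($vcpus / $vmh.NumCpu, 2) } else { 0 }
    }
}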

So there we have it: proof that more CPUs doesn’t always mean better performance, and in some cases it can hurt performance.

Always remember to start with a low vCPU count and move up; it’ll make the VM admins and the VMs happy!

vcpus, more the merrier?

Here we are, back to the tried and true thought that more vcpus equals better performance.

If you are running a fairly low vm:host ratio, this might not be as big of a deal because your physical cpu to virtual cpu ratio is lower. I recently ran into a vendor that “had” to have 8 cpus. Now this client has the latest and greatest most awesomeness host, some dell r620 2x 8 core (ht enabled) with 256gb of ram at 1600mhz – I mean this host is quick. The problem is there are two of them for the whole cluster. I’ve built it this way on purpose: it’s a basic smb that runs everything (and I do mean everything, virtualized, 99% of it is for dr purposes) and it’s backed by some nice eql arrays. So now that you know the background of the client, it’s time I introduce everyone to some important concepts in virtualization.

Welcome to the world of the vmk scheduler. Its job is to tell the vmk when to run vCPUs, and it likes to run symmetric multiprocessing (SMP) vCPUs at the same time. It will wait until it can; if SMP vCPUs aren’t run at the same time and the application is multithreaded, some very bad things will happen (the CPU returns instruction sets out of order, blah blah blah). So now that we have a basic understanding of that, let’s look at some metrics to see how long it’s taking the scheduler to do its thing…

First let’s discuss the metrics we will be using, taken directly from VMware:
• Run – Amount of time the virtual machine is consuming CPU resources.
• Wait – Amount of time the virtual machine is waiting for a VMkernel resource.
• Ready – Amount of time the virtual machine was ready to run, waiting in a queue to be scheduled.
• Co-Stop – Amount of time a SMP virtual machine was ready to run, but incurred delay due to co-vCPU scheduling contention.

Esxtop and advanced perf stats to the rescue!!

If you don’t know how to use esxtop, go read elsewhere.
These are the values of this server: (click the image below)

High ready time is bad!! 16% of its time spent trying to do work and it can’t…

Ouch… this server just spent 16.3% of its time waiting to be run… poor thing. The sad part is it’s using less than 600 MHz at the time.
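If you’d rather pull ready time with PowerCLI than stare at esxtop, here’s a hedged sketch using realtime stats. The VM name is a placeholder; realtime samples are 20-second intervals, and the aggregate value is summed across all vCPUs (roughly what esxtop shows at the group level), so it’s also divided by the vCPU count for a per-vCPU figure:

$vm = Get-VM "sql-app01"
Get-Stat -Entity $vm -Stat "cpu.ready.summation" -Realtime -MaxSamples 30 |
    Where-Object { $_.Instance -eq "" } |   # the blank instance is the aggregate across all vCPUs
    Select-Object Timestamp,
        @{N = "ReadyPct"; E = { [math]::Round($_.Value / (20 * 1000) * 100, 2) }},
        @{N = "ReadyPctPerVcpu"; E = { [math]::Round($_.Value / (20 * 1000) * 100 / $vm.NumCpu, 2) }}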
Let’s try some things, like running only this one VM on a host and clearing off all the other VMs. We would expect ready time to decrease, because the scheduler has nothing else to juggle with only one VM! It doesn’t even have to cross NUMA nodes!! (click image below)

As expected, it’s low… super low. That’s great – it’ll be able to rock out any time now..
Okay, so what if I change it so everything is running on one host again…
As expected, it’s back up high again (16.5%). Typically anything over 5% and you’ll notice performance issues; unless you’re reading this article or you’ve been around the VMware block before, you won’t really know how to describe it other than “sluggish.”

Stay tuned for part two, where we decrease the number of vCPUs and watch the efficiency of the VM increase, even though the total amount of work it could do is decreased.