Windows 10 Fall Creators Update – Hyper-V VM Sharing – Because Sharing is Caring

With the latest Windows 10 Insider build, 16226, Microsoft introduced a new feature in Hyper-V to allow easy sharing of VMs amongst users. To share a VM, connect to its console in Hyper-V Manager and click the Share button as seen below:

You will then be prompted to select a location to save the compressed VM export/import file with the extension vmcz (VM Compressed Zip perhaps?). Depending on the VM size, that might take a little while. If you want to check what’s in that export file, you can simply append .zip to its file name and open it either with Explorer or your favorite archive handling application. As you can see below, the structure is fairly familiar to anyone using Hyper-V:

You can find the VM’s hard disk drives (.vhd or .vhdx), its configuration file (.vmcx) and its run state file (.vmrs). So, there’s really no magic there! It creates a nice clean package of all the VM’s artifacts so you can easily send it around.
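
If you prefer to peek inside from PowerShell, something like the following works; the export path used here is a hypothetical example:

```powershell
# Copy the export with a .zip extension so archive tools recognize it
# (C:\Exports\MyVM.vmcz is a placeholder path)
Copy-Item 'C:\Exports\MyVM.vmcz' 'C:\Exports\MyVM.vmcz.zip'

# List the archive contents without extracting everything
Add-Type -AssemblyName System.IO.Compression.FileSystem
$archive = [System.IO.Compression.ZipFile]::OpenRead('C:\Exports\MyVM.vmcz.zip')
$archive.Entries | Select-Object FullName, Length
$archive.Dispose()
```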

One thing I would like to see in a future build is the ability to trigger this process in other ways in Hyper-V Manager, as it’s oddly missing from the VM’s action pane and from the VM’s right-click contextual menu. Maybe that’ll come in future builds. I also couldn’t find a way to trigger this in PowerShell yet.
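
In the meantime, the long-standing Export-VM and Import-VM cmdlets still cover the uncompressed scenario from PowerShell (the VM name and paths below are placeholders):

```powershell
# Export a VM (configuration, checkpoints and virtual disks) to a folder
Export-VM -Name 'MyVM' -Path 'D:\VMExports'

# The counterpart on the receiving side: point Import-VM at the exported .vmcx
Import-VM -Path 'D:\VMExports\MyVM\Virtual Machines\<GUID>.vmcx' -Copy -GenerateNewId
```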

Once your friend has the vmcz file in hand, they can simply double click on it to trigger the import. In the background, the utility C:\Program Files\Hyper-V\vmimport.exe is called. Unfortunately on my test laptop, the import process bombs out as seen below:

I suspect one only has to type a name for the VM that will be imported and click Import Virtual Machine. Those kinds of issues are to be expected when you’re on the Fast ring for Insider Builds! I’m sure this will turn out to be a useful feature for casual Hyper-V users.


Validate Microsoft Recommended Updates with PowerShell

Sometimes you need to validate multiple computers to ensure that a specific patch has been installed. That can happen in the course of a support case with Microsoft, where certain updates are recommended as per a specific knowledge base article. In order to do this, I’ve built a simple function in PowerShell to gather that information and output a report. You can find this function on the GEM Automation Codeplex project here:

In order to use the function, you would do something like the following:

@("HOST01","HOST02","HOST03") | Validate-RecommendedUpdateStatus 

The function will then return something like the following:

ComputerName HotfixId InstallStatus KBURL AffectedFeaturesOrRoles
------------ -------- ------------- ----- -----------------------
HOST001      2883200  Missing             Hyper-V
HOST001      2887595  Missing             Hyper-V
HOST001      2903939  Installed           Hyper-V
HOST001      2919442  Installed           Hyper-V

While running, the function does the following:

  • Gets the list of features installed on the host
  • Checks the recommended updates for the installed feature against the RecommendedUpdates.csv file
    • I try to keep this file as up to date as possible, as the Microsoft KBs keep getting updated
    • I updated the file on March 18th 2016
  • Lists whether the recommended update was installed or is missing
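
The core of the check can be sketched roughly as follows. This is a simplified illustration, not the published function; the RecommendedUpdates.csv column names are assumptions:

```powershell
function Validate-RecommendedUpdateStatus {
    param([Parameter(ValueFromPipeline)][string]$ComputerName)
    begin {
        # Assumed CSV columns: HotfixId, KBURL, AffectedFeaturesOrRoles
        $recommended = Import-Csv '.\RecommendedUpdates.csv'
    }
    process {
        # Installed roles/features and hotfixes on the target host
        $features  = Get-WindowsFeature -ComputerName $ComputerName |
            Where-Object Installed | Select-Object -ExpandProperty Name
        $installed = Get-HotFix -ComputerName $ComputerName |
            Select-Object -ExpandProperty HotFixID

        foreach ($update in ($recommended |
                 Where-Object { $features -contains $_.AffectedFeaturesOrRoles })) {
            [pscustomobject]@{
                ComputerName            = $ComputerName
                HotfixId                = $update.HotfixId
                InstallStatus           = if ($installed -contains "KB$($update.HotfixId)") { 'Installed' } else { 'Missing' }
                KBURL                   = $update.KBURL
                AffectedFeaturesOrRoles = $update.AffectedFeaturesOrRoles
            }
        }
    }
}
```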

If you have any questions or issues regarding this, let me know!

New post on Altaro’s Blog – Hyper-V Differencing Disks for SQL Server Database Copies

I’m pretty excited to share that my first post on the Altaro Hyper-V blog was published today. In the article, I’m describing how you can take advantage of Hyper-V differencing disks to accelerate the refresh of dev/QA data in SQL Server. You can find the article here:

Altaro Hyper-V Blog – Hyper-V Differencing Disks for SQL Server Database Copies

You can also check out the PowerShell code I’m using to fully automate the process of creating the parent disks and the differencing disks here on the GEM Automation Codeplex project.
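
At its simplest, the differencing-disk refresh pattern looks like this; the paths and VM name are placeholders, and the full automation lives in the project:

```powershell
# Parent disk holding the reference copy of the databases (placeholder paths)
$parent = 'D:\VHD\SQLData_Parent.vhdx'
$child  = 'D:\VHD\SQLData_Dev01.vhdx'

# Recreate the differencing disk to "refresh" the dev/QA copy of the data
if (Test-Path $child) { Remove-Item $child }
New-VHD -Path $child -ParentPath $parent -Differencing

# Attach the fresh differencing disk to the dev VM
Add-VMHardDiskDrive -VMName 'SQLDEV01' -Path $child
```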

Make sure to have a look at the other great content published on the Altaro blog, I’m sure you’ll find tons of useful information there!


GEM Automation

A new version of GEM Automation has been published to CodePlex. Here are the highlights of this new release.

– Initial release of New-NanoServerCluster.ps1
– Removed explicit type casting in Set-VMNetworkConfiguration in libVM.psm1
– Modified Microsoft supplied script new-nanoserverimage.ps1 to include the containers package

– Initial release of Test-TCPPortResponseTime.ps1

AnalyzeCrashDumps.ps1 initial release

AnalyzeCrashDumps.ps1 is simply a master script that wraps the execution of the 3 main scripts that collect, analyze and summarize crash dumps in your environment. This simplifies scheduling the process in Task Scheduler.
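
Conceptually, the master script is just the three steps run in order (script names as published in the project):

```powershell
# AnalyzeCrashDumps.ps1 - orchestrates collection, analysis and reporting
# so a single entry can be scheduled in Task Scheduler
.\Get-CrashDump.ps1                 # collect dumps to the central location
.\Get-CrashDumpAnalysis.ps1         # run cdb over the newly gathered dumps
.\Get-CrashDumpAnalysisReport.ps1   # summarize the cdb output to CSV
```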

Get-MUILanguagesReport.ps1 initial release
Get-UserPreferredLanguageReport.ps1 initial release

Those scripts are for our friends at the Office québécois de la langue française, to help us assess the deployment of language packs and users’ language preferences across all the computers in Active Directory. They achieve this by querying the MUILanguages property of the WMI class Win32_OperatingSystem and by looking at the MachinePreferredUILanguages value under the registry key HKU\<user SID>\Control Panel\Desktop\MuiCached.
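
The two lookups boil down to something like this (the remote computer name is a placeholder):

```powershell
# Installed MUI language packs, from WMI
(Get-CimInstance -ClassName Win32_OperatingSystem -ComputerName 'PC01').MUILanguages

# Per-user preferred UI languages, from each loaded user hive under HKU
Get-ChildItem 'Registry::HKEY_USERS' |
    Where-Object { $_.Name -match 'S-1-5-21' } |
    ForEach-Object {
        $key = "Registry::$($_.Name)\Control Panel\Desktop\MuiCached"
        if (Test-Path $key) {
            [pscustomobject]@{
                UserSID   = $_.PSChildName
                Preferred = (Get-ItemProperty $key).MachinePreferredUILanguages
            }
        }
    }
```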

If you encounter any issues with this release, don’t hesitate to log issues on CodePlex here.


GEM Automation

Today I took some time to check-in some of the code I’ve been working on lately on CodePlex. Since it’s been about 6 months since the last release, it’s pretty significant. Here are some of the highlights:

  • Windows crash dumps analysis and automation (Get-CrashDump.ps1,Get-CrashDumpAnalysis,Get-CrashDumpAnalysisReport)
    • Gathers crash dumps from all the computers present in Active Directory by default or from a list of computers in a text file and copies them to a central location (just noticed the path is hardcoded in the script, will fix this soon)
    • Runs cdb over the memory dumps gathered, in an incremental fashion
    • Extracts core attributes from the cdb log files (i.e. faulting module, process name, etc.)
    • Creates a summary of the collected crash dump attributes and outputs it to a CSV file (I’ll try to post the Excel workbook I use to analyze the output)
  • libWindowsPerformance.psm1
    • Get-PerformanceMonitoring : Capture perfmon counters from a list of computers and output to a file or to the PowerShell pipeline
    • Filter-CounterValues : Filter perfmon counter samples from the PowerShell pipeline. This is useful to remove the samples that have little interest to you. In one case I used this to get only samples that exceeded 50% Processor time on 275 computers
    • Convert-PerformanceCounterToHashTable: I mainly wrote this as a helper function for when I send the perfmon samples to Azure EventHub
    • Store-PerformanceCounter : A function that persists counter samples from the pipeline to a SQL Server database
    • Execute-CounterTrigger: This is a function I use to execute a particular action on a particular counter sample. For instance, when I was gathering CPU perfmon samples, I executed a trigger to get the list of active processes when the threshold was met, to get an idea of what was using CPU on the 275 computers
    • Get-CounterStatistics: On an already collected perfmon log file, query it to get generic statistics (min, max, avg, total)
    • Start-PerfmonOnComputers: A helper function to make sure the required services are running on remote computers to collect perfmon data
  • libStorageSpaces.psm1
    • Series of core helper functions that I used while developing automated tests for Storage Spaces (mainly setting up pools and virtual disks)
  • libSQLServerStatistics.psm1
    • Added new functions to gather buffer pool composition (database and object level)
    • Added functions to persist buffer pool composition over time
  • Small change in Get-VHDHierarchy and Get-VMStorageInformation to use CredSSP (required when you have remote storage on SOFS for instance)
  • libHyperVStatistics.psm1
    • Add function to workaround a bug in resource metering where the metering duration is empty while collecting samples
    • Now capturing individual VHD statistics appropriately
  • Monitor-VMUsage.ps1
    • Now capturing individual VHD statistics appropriately
  • libConfiguration.psm1
    • Added new functions to validate configuration files against the centralized configuration store
  • libIIS.psm1
    • New Get-RemoteWebSite function
    • New Get-ImportedLogFiles function
  • libUtilities
    • Improved Assert-PSSession function
    • New Test-FileLock function
  • Initial release of libNetworking.psm1
    • New Test-Port function which allows you to test TCP and UDP ports
    • New Test-ComputerConnectivity function to test whether a computer is responding through various methods
  • Initial release of libNeo4j.psm1
    • Core functions to manipulate and query data in a Neo4j graph database
    • This is used for a POC of a discovery process written in PowerShell that creates a graph in Neo4j that is used as a CMDB.
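
As an illustration, the TCP half of a Test-Port-style function can be built on System.Net.Sockets.TcpClient. This is a sketch of the technique, not the code shipped in libNetworking.psm1:

```powershell
function Test-TcpPort {
    param(
        [string]$ComputerName,
        [int]$Port,
        [int]$TimeoutMs = 2000
    )
    $client = New-Object System.Net.Sockets.TcpClient
    try {
        # BeginConnect plus a wait handle gives us a timeout, which Connect() lacks
        $async = $client.BeginConnect($ComputerName, $Port, $null, $null)
        if ($async.AsyncWaitHandle.WaitOne($TimeoutMs) -and $client.Connected) {
            return $true
        }
        return $false
    }
    finally { $client.Close() }
}
```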

You can download the latest release here: GEM Automation

Here are some of my goals for future releases:

  • Improve documentation (both in PowerShell and on CodePlex)
  • Publish CMDB discovery processes that persist data in Neo4j
  • Ensure the code is using the standard configuration store

In the meantime, try to enjoy this minimally documented release! If you have questions about the code, feel free to ask via the comments below.

Measure-VM Object reference not set to an instance of an object

If you get the following error while running Measure-VM for Hyper-V Resource Metering:

Measure-VM : Object reference not set to an instance of an object.
+ CategoryInfo : NotSpecified: (:) [Measure-VM], NullReferenceException
+ FullyQualifiedErrorId : Unspecified,Microsoft.HyperV.PowerShell.Commands.MeasureVMCommand

Check if there are hard disks for the VM that have no paths in the config. In my case there were a couple of disks like this that caused the issue.
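
You can spot the offending disks quickly:

```powershell
# List VM hard disks whose Path is empty - these can break Measure-VM
Get-VM | Get-VMHardDiskDrive |
    Where-Object { [string]::IsNullOrEmpty($_.Path) } |
    Select-Object VMName, ControllerType, ControllerNumber, ControllerLocation
```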

Hyper-V Resource Metering and Cloud Costing Benchmarking

I’ve been a bit quiet on this blog for a little while now but I figured I would take some time to share with you what I’ve been working on lately.

In the last few weeks I’ve been working on a prototype to gather VM consumption statistics through the Hyper-V metering functionality in order to get a better grasp of the VM requirements in our environment. The core of the monitoring is done with the following scripts that are now published on the GEM Automation project on CodePlex:


By tagging VMs with specific information, it allows us to generate the following kinds of dashboards in Excel:

Total VM Consumption by Systems

Total VM Consumption by Environment

One report I’ve been working on this week is the following:


In this graph, I can compare almost in real time (well, I aggregate stats on an hourly basis) the cost of running each of our Hyper-V VMs or systems against the following public cloud providers:

– Amazon
– Google
– HP
– Microsoft
– Rackspace

as well as our own internal cost. Using this, I can see whether a specific system is becoming a good candidate, cost-wise, to migrate to a public cloud. I can also look at trends within system execution to see whether I could run a system on premises part of the time and in the cloud the rest of the time. As you can see in the graph above, picking a provider is not a simple answer, but this tool definitely helps! My goal is also to start tracking public cloud prices to get a better idea of the trend, so I can better predict when it will become more cost effective for us to move some workloads to the cloud.

When we determine costs for our internal infrastructure, we look at the following costs:

– Servers
– Storage
– Networking
– Licenses
– Internet link
– Power
– Infrastructure related labor

Another use case where we found this useful was evaluating our license requirements for SQL Server. Using it, I can figure out how many cores are needed to run the entire SQL Server workload simply by aggregating the CPU stats for the SQL Server systems.
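
The raw numbers come from Hyper-V’s built-in resource metering cmdlets; a minimal sketch of the collection and the core-count estimate follows (the VM names, the 2600 MHz core speed, and treating the sum of average MHz as the load are all illustrative assumptions):

```powershell
# Turn on metering for every VM on the host (idempotent)
Get-VM | Enable-VMResourceMetering

# Pull the accumulated statistics, one row per VM
$samples = Get-VM | Measure-VM |
    Select-Object VMName, AvgCPUUsage, MeteringDuration

# Rough core-count estimate for a set of SQL Server VMs
$coreSpeedMHz = 2600   # assumed physical core frequency
$sqlMHz = ($samples | Where-Object { $_.VMName -in 'SQLVM01','SQLVM02' } |
    Measure-Object AvgCPUUsage -Sum).Sum
[math]::Ceiling($sqlMHz / $coreSpeedMHz)   # cores needed to cover the aggregate load
```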

There’s a lot more that has been built but here’s an idea of the various “reports”/PivotTables/Graphs we have setup in our Excel workbook:

– Hosts Dashboard
– VM Costing Per Hour
– Environment Costing Per Hour
– Target Infrastructure Per Hour
– Azure vs On Prem (Total)
– On Prem vs Cloud Optimized
– On Prem vs Cloud Per Hour
– Cost Per System Per Hour
– Top 10 VM – CPU
– Cloud Candidates
– On Prem vs Azure Per Hour
– Hourly VM CPU Trending
– Daily Cluster CPU Trending
– Daily VM CPU Trending
– Hourly System CPU Trending
– And a lot more!

I’d be curious to get some feedback on this!