With the latest Windows 10 Insider build, 16226, Microsoft introduced a new Hyper-V feature that allows easy sharing of VMs among users. To share a VM, connect to its console in Hyper-V Manager and click the Share button as seen below:
You will then be prompted to select a location to save the compressed VM export/import file, which has the extension .vmcz (VM Compressed Zip, perhaps?). Depending on the VM size, that might take a little while. If you want to check what’s in that export file, simply append .zip to its file name and open it with Explorer or your favorite archive handling application. As you can see below, the structure will be familiar to anyone using Hyper-V:
You can find the VM hard disk drives (.vhd or .vhdx), its configuration file (.vmcx) and its run state file (.vmrs). So, there’s really no magic there! It creates a nice, clean package of all the VM’s artifacts so it can easily be sent around.
One thing I would like to see in a future build is the ability to trigger this process in other ways in Hyper-V Manager, as it’s oddly missing from the VM action pane and from the VM’s right-click context menu. I also couldn’t find a way to trigger this in PowerShell yet.
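In the meantime, the existing Export-VM cmdlet combined with Compress-Archive gets you most of the way there, though the result is a plain .zip rather than a .vmcz. A minimal sketch (the VM name and paths are examples, not from the new feature):

```powershell
# Export the VM's configuration, checkpoints and virtual hard disks
# to a folder, then compress the whole folder for easy sharing.
# "MyVM" and the paths below are examples; adjust to your environment.
Export-VM -Name "MyVM" -Path "C:\Exports"

Compress-Archive -Path "C:\Exports\MyVM" `
                 -DestinationPath "C:\Exports\MyVM.zip"
```

On the receiving end, the archive can be extracted and brought in with the regular Import-VM cmdlet.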
Once your friend has the vmcz file in hand, they can simply double-click it to trigger the import. In the background, the utility C:\Program Files\Hyper-V\vmimport.exe is called. Unfortunately, on my test laptop, the import process bombs out as seen below:
I suspect you only need to type a name for the VM that will be imported and click Import Virtual Machine. Those kinds of issues are to be expected when you’re in the Fast ring of the Insider builds! I’m sure this will turn out to be a useful feature for casual Hyper-V users.
I recently spent some time experimenting with GPU Discrete Device Assignment in Azure using the NV* series of VMs. As we noticed that Internet Explorer was consuming quite a bit of CPU resources on our Remote Desktop Services session hosts, I wondered how much CPU load could be offloaded by accelerating graphics through the specialized hardware. We experimented with Windows Server 2012 R2 and Windows Server 2016. While Windows Server 2012 R2 does deliver some level of hardware acceleration for graphics, Windows Server 2016 provided a more complete experience through better support for GPUs in an RDP session.
In order to enable hardware acceleration for RDP, you must do the following in your Azure NV* series VM:
- Download and install the latest driver recommended by Microsoft/NVidia from here
- Enable the Group Policy Setting Administrative Templates\Windows Components\Remote Desktop Services\Remote Desktop Session Host\Remote Session Environment\Use the hardware default graphics adapter for all Remote Desktop Services sessions as shown below:
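If you prefer to script this instead of using the Group Policy editor, the policy above corresponds (to the best of my knowledge) to the bEnumerateHWBeforeSW value under the Terminal Services policy key; a sketch of setting it directly, which you should verify with gpresult before relying on:

```powershell
# Sets the policy "Use the hardware default graphics adapter for all
# Remote Desktop Services sessions" via the registry. The value name
# bEnumerateHWBeforeSW is what the GPO writes; verify in your environment.
$key = "HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services"
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name "bEnumerateHWBeforeSW" -Value 1 -Type DWord
```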
To validate the acceleration, I used a couple of tools to generate and measure the GPU load. For load generation I used the following:
- Island demo from Nvidia which is available for download here.
- This scenario worked fine in both Windows Server 2012 R2 and Windows Server 2016
- Here’s what it looks like when you run this demo (don’t mind the GPU information displayed, that was from my workstation, not from the Azure NV* VM):
- Microsoft Fish Tank page, which leverages WebGL in the browser, which is in turn accelerated by the GPU when possible
- This proved to be the scenario that differentiated Windows Server 2016 from Windows Server 2012 R2. Only under Windows Server 2016 could a high frame rate and low CPU utilization be achieved. When this demo runs using only the software renderer, I observed CPU utilization close to 100% on a fairly beefy NV6 VM that has 6 cores, and that was with just a single instance of that test.
- Here’s what FishGL looks like:
To measure the GPU utilization, I ended up using the following tools:
In order to do a capture with Windows Performance Recorder, make sure that GPU activity is selected under the profiles to be recorded:
Here’s a recorded trace of the GPU utilization from the Azure VM while running FishGL in Internet Explorer that’s being visualized in Windows Performance Analyzer:
As you can see in the WPA screenshot above, quite a few processes can take advantage of the GPU acceleration.
Here’s what it looks like in Process Explorer when you’re doing live monitoring: you can see which process is consuming GPU resources. In this particular screenshot, you can see what Internet Explorer consumes while running FishGL on my workstation.
Windows Server 2016 takes great advantage of an assigned GPU to offload compute intensive rendering tasks. Hopefully this article helped you get things started!
Sometimes you need to validate multiple computers to ensure that a specific patch has been installed. That can happen in the course of a support case with Microsoft, who may recommend that certain updates be installed as per a specific knowledge base article. To do this, I’ve built a simple PowerShell function to gather that information and output a report. You can find this function on the GEM Automation CodePlex project here:
In order to use the function, you would do something like the following:
@("HOST01","HOST02","HOST03") | Validate-RecommendedUpdateStatus
The function will then return something like the following:
ComputerName HotfixId InstallStatus KBURL AffectedFeaturesOrRoles
------------ -------- ------------- ----- -----------------------
HOST001 2883200 Missing https://support.microsoft.com/en-us/kb/2883200 Hyper-V
HOST001 2887595 Missing https://support.microsoft.com/en-us/kb/2887595 Hyper-V
HOST001 2903939 Installed https://support.microsoft.com/en-us/kb/2903939 Hyper-V
HOST001 2919442 Installed https://support.microsoft.com/en-us/kb/2919442 Hyper-V
While running, the function does the following:
- Gets the list of features installed on the host
- Checks the recommended updates for the installed feature against the RecommendedUpdates.csv file
- I try to keep this file as up to date as possible as the Microsoft KB articles get updated
- I updated the file on March 18th 2016
- Lists whether the recommended update was installed or is missing
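The steps above can be approximated with the built-in Get-HotFix cmdlet. This is only a sketch of the idea, not the actual GEM Automation code, and the CSV column names are assumptions based on the sample output:

```powershell
# Sketch: check a list of recommended updates (from a CSV with assumed
# columns HotfixId and KBURL) against a remote computer's installed hotfixes.
param([string]$ComputerName = "HOST01")

$recommended = Import-Csv ".\RecommendedUpdates.csv"
$installed   = Get-HotFix -ComputerName $ComputerName |
               Select-Object -ExpandProperty HotFixID

foreach ($update in $recommended) {
    [pscustomobject]@{
        ComputerName  = $ComputerName
        HotfixId      = $update.HotfixId
        # Get-HotFix returns IDs in the form "KB2883200"
        InstallStatus = if ("KB$($update.HotfixId)" -in $installed) { "Installed" } else { "Missing" }
        KBURL         = $update.KBURL
    }
}
```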
If you have any questions or issues regarding this, let me know!
I’m pretty excited to share that my first post on the Altaro Hyper-V blog was published today. In the article, I’m describing how you can take advantage of Hyper-V differencing disks to accelerate the refresh of dev/QA data in SQL Server. You can find the article here:
Altaro Hyper-V Blog – Hyper-V Differencing Disks for SQL Server Database Copies
You can also checkout the PowerShell code I’m using to fully automate the process of creating the parent disks and the differencing disks here on the GEM Automation Codeplex project.
Make sure to have a look at the other great content published on the Altaro blog, I’m sure you’ll find tons of useful information there!
Version 184.108.40.206 of GEM Automation has been published to CodePlex. Here are the highlights of this new release.
- Initial release of New-NanoServerCluster.ps1
- Removed explicit type casting in Set-VMNetworkConfiguration in libVM.psm1
- Modified the Microsoft-supplied script new-nanoserverimage.ps1 to include the containers package
- Initial release of Test-TCPPortResponseTime.ps1
- Initial release of AnalyzeCrashDumps.ps1
AnalyzeCrashDumps.ps1 is simply a master script that wraps the execution of the 3 main scripts that collect, analyze and summarize crash dumps in your environment. This simplifies scheduling the process in Task Scheduler.
- Initial release of Get-MUILanguagesReport.ps1
- Initial release of Get-UserPreferredLanguageReport.ps1
Those scripts are for our friends at the Office québécois de la langue française, to help us assess the deployment of language packs and users’ language preferences across all the computers in Active Directory. They achieve this by querying the WMI class Win32_OperatingSystem for the MUILanguages property and by looking at the MachinePreferredUILanguages value under the registry key HKU\<user SID>\Control Panel\Desktop\MuiCached.
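For reference, the queries at the heart of those scripts are straightforward; here’s a minimal sketch (the user SID below is a placeholder you would resolve yourself, e.g. from Active Directory):

```powershell
# Installed MUI language packs for the operating system
(Get-CimInstance -ClassName Win32_OperatingSystem).MUILanguages

# Preferred UI languages for a given user profile; $userSid is an
# example placeholder, not a real SID.
$userSid = "S-1-5-21-1234567890-1234567890-1234567890-1001"
Get-ItemProperty -Path "Registry::HKEY_USERS\$userSid\Control Panel\Desktop\MuiCached" |
    Select-Object -ExpandProperty MachinePreferredUILanguages
```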
If you encounter any issues with this release, don’t hesitate to log issues on CodePlex here.
Today I took some time to check-in some of the code I’ve been working on lately on CodePlex. Since it’s been about 6 months since the last release, it’s pretty significant. Here are some of the highlights:
- Windows crash dumps analysis and automation (Get-CrashDump.ps1, Get-CrashDumpAnalysis, Get-CrashDumpAnalysisReport)
- Gathers crash dumps from all the computers present in Active Directory by default or from a list of computers in a text file and copies them to a central location (just noticed the path is hardcoded in the script, will fix this soon)
- Runs cdb over the gathered memory dumps in an incremental fashion
- Extracts core attributes from the cdb log files (i.e. faulting module, process name, etc.)
- Creates a summary of the collected crash dump attributes and outputs it to a csv file (I’ll try to post the Excel workbook I use to analyze the output)
- Get-PerformanceMonitoring : Capture perfmon counters from a list of computers and output to a file or to the PowerShell pipeline
- Filter-CounterValues : Filter perfmon counter samples from the PowerShell pipeline. This is useful to remove samples that are of little interest to you. In one case, I used this to keep only samples that exceeded 50% Processor time across 275 computers
- Convert-PerformanceCounterToHashTable: I mainly wrote this as a helper function for when I send the perfmon samples to Azure EventHub
- Store-PerformanceCounter : A function that persists counter samples from the pipeline to a SQL Server database
- Execute-CounterTrigger: This is a function I use to execute a particular action on a particular counter sample. For instance, when I was gathering CPU perfmon samples, I executed a trigger that fetched the list of active processes whenever the threshold was met, to get an idea of what was using CPU on the 275 computers
- Get-CounterStatistics: On an already collected perfmon log file, query it to get generic statistics (min, max, avg, total)
- Start-PerfmonOnComputers: A helper function to make sure the required services are running on remote computers to collect perfmon data
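To give an idea of what the counter filtering above looks like, here’s a minimal sketch of the Filter-CounterValues concept using the built-in Get-Counter cmdlet; the pipeline shape is an approximation of the real function, with the 50% threshold from the scenario mentioned earlier:

```powershell
# Capture processor time samples every 5 seconds for a minute and
# keep only the samples above 50% utilization.
Get-Counter -Counter "\Processor(_Total)\% Processor Time" `
            -SampleInterval 5 -MaxSamples 12 |
    ForEach-Object { $_.CounterSamples } |
    Where-Object { $_.CookedValue -gt 50 } |
    Select-Object Timestamp, Path, CookedValue
```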
- Series of core helper functions that I used while developing automated tests for Storage Spaces (mainly setup of pools and virtual disks)
- Added new functions to gather buffer pool composition (database and object level)
- Added functions to persist buffer pool composition over time
- Small change in Get-VHDHierarchy and Get-VMStorageInformation to use CredSSP (required when you have remote storage on SOFS for instance)
- Added a function to work around a bug in resource metering where the metering duration is empty while collecting samples
- Now capturing individual VHD statistics appropriately
- Added new functions to validate configuration files against the centralized configuration store
- New Get-RemoteWebSite function
- New Get-ImportedLogFiles function
- Improved Assert-PSSession function
- New Test-FileLock function
- Initial release of libNetworking.psm1
- New Test-Port function which allows you to test TCP and UDP ports
- New Test-ComputerConnectivity function to test whether a computer is responding through various methods
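As a rough idea of how the TCP side of a Test-Port function can be implemented, here’s a minimal sketch; the real function in libNetworking.psm1 also covers UDP and is more robust, and the host and port below are examples:

```powershell
# Minimal TCP port test with a timeout; returns $true if the port
# accepted the connection within the timeout, $false otherwise.
function Test-TcpPort {
    param([string]$ComputerName, [int]$Port, [int]$TimeoutMs = 2000)
    $client = New-Object System.Net.Sockets.TcpClient
    try {
        $async = $client.BeginConnect($ComputerName, $Port, $null, $null)
        if (-not $async.AsyncWaitHandle.WaitOne($TimeoutMs)) { return $false }
        $client.EndConnect($async)
        return $true
    }
    catch { return $false }
    finally { $client.Close() }
}

Test-TcpPort -ComputerName "HOST01" -Port 3389
```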
- Initial release of libNeo4j.psm1
- Core functions to manipulate and query data in a Neo4j graph database
- This is used for a POC of a discovery process written in PowerShell that creates a graph in Neo4j, which is used as a CMDB.
You can download the latest release here: GEM Automation 220.127.116.11
Here are some of my goals for future releases:
- Improve documentation (both in PowerShell and on CodePlex)
- Publish CMDB discovery processes that persist data in Neo4j
- Ensure the code is using the standard configuration store
In the meantime, try to enjoy this minimally documented release! If you have questions about the code, feel free to ask via the comments below.
If you get the following error while running Measure-VM with Hyper-V Resource Metering:
Measure-VM : Object reference not set to an instance of an object.
+ CategoryInfo : NotSpecified: (:) [Measure-VM], NullReferenceException
+ FullyQualifiedErrorId : Unspecified,Microsoft.HyperV.PowerShell.Commands.MeasureVMCommand
Check whether the VM has hard disks with no path in its configuration. In my case, there were a couple of disks like this that caused the issue.
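A quick way to spot the offending disks before calling Measure-VM is to list the VHD entries whose path is empty:

```powershell
# List VM hard disk entries whose Path is empty; these are the ones
# that can trip up Measure-VM with a NullReferenceException.
Get-VM | Get-VMHardDiskDrive |
    Where-Object { [string]::IsNullOrEmpty($_.Path) } |
    Select-Object VMName, ControllerType, ControllerNumber, ControllerLocation
```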