Today I took some time to check in to CodePlex some of the code I’ve been working on lately. Since it’s been about six months since the last release, this one is pretty significant. Here are some of the highlights:
- Windows crash dump analysis and automation (Get-CrashDump.ps1, Get-CrashDumpAnalysis, Get-CrashDumpAnalysisReport)
  - Gathers crash dumps from all the computers in Active Directory by default, or from a list of computers in a text file, and copies them to a central location (I just noticed the path is hardcoded in the script; I’ll fix this soon)
  - Runs cdb over the gathered memory dumps incrementally
  - Extracts core attributes from the cdb log files (e.g. faulting module, process name)
  - Creates a summary of the collected crash dump attributes and outputs it to a CSV file (I’ll try to post the Excel workbook I use to analyze the output)
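To give an idea of the cdb step, here is a minimal sketch of running the automated analysis over each dump and pulling a couple of attributes out of the logs. The paths, attribute list, and regexes are illustrative assumptions, not the actual script:

```powershell
# Sketch only: run cdb's !analyze -v over each collected dump and
# extract a few attributes from the log. Paths are assumptions.
$cdb      = 'C:\Program Files (x86)\Windows Kits\8.1\Debuggers\x64\cdb.exe'
$dumpRoot = 'C:\CrashDumps'

Get-ChildItem -Path $dumpRoot -Filter *.dmp -Recurse | ForEach-Object {
    $log = "$($_.FullName).log"
    # Skip dumps that already have a log (the incremental part)
    if (-not (Test-Path $log)) {
        & $cdb -z $_.FullName -c '!analyze -v; q' | Out-File $log
    }
    $text = Get-Content $log -Raw
    [pscustomobject]@{
        Dump           = $_.Name
        FaultingModule = if ($text -match 'MODULE_NAME:\s*(\S+)')  { $Matches[1] }
        ProcessName    = if ($text -match 'PROCESS_NAME:\s*(\S+)') { $Matches[1] }
    }
} | Export-Csv -Path "$dumpRoot\summary.csv" -NoTypeInformation
```

The `Test-Path` check on the log file is what keeps re-runs cheap: only new dumps get analyzed.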
- libWindowsPerformance.psm1
  - Get-PerformanceMonitoring: Captures perfmon counters from a list of computers and outputs to a file or to the PowerShell pipeline
  - Filter-CounterValues: Filters perfmon counter samples from the PowerShell pipeline. This is useful to drop samples of little interest to you; in one case I used it to keep only samples that exceeded 50% Processor Time across 275 computers
  - Convert-PerformanceCounterToHashTable: I mainly wrote this as a helper function for when I send perfmon samples to Azure Event Hub
  - Store-PerformanceCounter: A function that persists counter samples from the pipeline to a SQL Server database
  - Execute-CounterTrigger: A function I use to execute a particular action on a particular counter sample. For instance, while gathering CPU perfmon samples, I executed a trigger to get the list of active processes whenever the threshold was met, to get an idea of what was using CPU on the 275 computers
  - Get-CounterStatistics: Queries an already collected perfmon log file for generic statistics (min, max, avg, total)
  - Start-PerfmonOnComputers: A helper function to make sure the required services are running on remote computers so perfmon data can be collected
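For illustration, the collect-then-filter pipeline can be approximated with the in-box Get-Counter cmdlet; the file name and 50% threshold below just mirror the scenario described above (this is not the module's implementation):

```powershell
# Sketch: sample % Processor Time from a list of computers and keep
# only the samples above a threshold, as in the 275-computer scenario.
$computers = Get-Content .\computers.txt   # assumed: one name per line

Get-Counter -ComputerName $computers `
            -Counter '\Processor(_Total)\% Processor Time' `
            -SampleInterval 5 -MaxSamples 12 |
    ForEach-Object { $_.CounterSamples } |
    Where-Object { $_.CookedValue -gt 50 } |
    Select-Object Timestamp, Path, CookedValue |
    Export-Csv .\hot-samples.csv -NoTypeInformation
```

Filtering on `CookedValue` in the pipeline is what keeps the output small when you are watching hundreds of machines.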
- libStorageSpaces.psm1
  - A series of core helper functions that I used while developing automated tests for Storage Spaces (mainly pool and virtual disk setup)
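The kind of pool and virtual disk setup those helpers automate looks roughly like this with the in-box Storage cmdlets; the pool name, disk name, and size are made up for the sketch:

```powershell
# Sketch: build a storage pool from all poolable disks, then carve
# a mirrored virtual disk out of it. Names and sizes are illustrative.
$disks  = Get-PhysicalDisk -CanPool $true
$subSys = Get-StorageSubSystem | Select-Object -First 1

New-StoragePool -FriendlyName 'TestPool' `
                -StorageSubSystemFriendlyName $subSys.FriendlyName `
                -PhysicalDisks $disks

New-VirtualDisk -StoragePoolFriendlyName 'TestPool' `
                -FriendlyName 'TestVDisk' `
                -ResiliencySettingName Mirror `
                -Size 10GB
```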
- libSQLServerStatistics.psm1
  - Added new functions to gather buffer pool composition (at the database and object level)
  - Added functions to persist buffer pool composition over time
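Database-level buffer pool composition can be read from the sys.dm_os_buffer_descriptors DMV; a minimal sketch of the idea (not the module's actual implementation), assuming Invoke-Sqlcmd is available and the server name is a placeholder:

```powershell
# Sketch: how much of the buffer pool each database occupies.
# Pages are 8 KB, hence the * 8 / 1024 to get MB.
# Requires VIEW SERVER STATE permission; server name is an assumption.
$query = @"
SELECT DB_NAME(database_id) AS DatabaseName,
       COUNT(*) * 8 / 1024  AS BufferMB
FROM   sys.dm_os_buffer_descriptors
GROUP  BY database_id
ORDER  BY BufferMB DESC
"@
Invoke-Sqlcmd -ServerInstance 'SQLSERVER01' -Query $query
```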
- Small change to Get-VHDHierarchy and Get-VMStorageInformation to use CredSSP (required when you have remote storage on a Scale-Out File Server, for instance)
- libHyperVStatistics.psm1
  - Added a function to work around a bug in resource metering where the metering duration is empty while samples are being collected
  - Now captures individual VHD statistics appropriately
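For context, resource metering data comes from Measure-VM once metering is enabled, and the empty-duration bug shows up on the report object; the guard below is a sketch of the idea, not the module's actual workaround:

```powershell
# Sketch: enable resource metering, then read reports, skipping
# any where MeteringDuration came back empty (the bug in question).
Get-VM | Enable-VMResourceMetering

Get-VM | Measure-VM |
    Where-Object { $_.MeteringDuration } |
    Select-Object VMName, AvgCPU, AvgRAM, TotalDisk
```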
- Monitor-VMUsage.ps1
  - Now captures individual VHD statistics appropriately
- libConfiguration.psm1
  - Added new functions to validate configuration files against the centralized configuration store
- libIIS.psm1
  - New Get-RemoteWebSite function
  - New Get-ImportedLogFiles function
- libUtilities
  - Improved Assert-PSSession function
  - New Test-FileLock function
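A file-lock test is typically implemented by attempting an exclusive open and catching the sharing violation; here is a minimal sketch of that idea (the function name is made up, and this is not necessarily how libUtilities does it):

```powershell
# Sketch: returns $true if another process holds the file open in a
# way that blocks an exclusive ReadWrite open, $false otherwise.
function Test-FileLockSketch {
    param([Parameter(Mandatory)][string]$Path)
    try {
        $fs = [System.IO.File]::Open($Path,
            [System.IO.FileMode]::Open,
            [System.IO.FileAccess]::ReadWrite,
            [System.IO.FileShare]::None)
        $fs.Close()
        return $false   # exclusive open succeeded: not locked
    }
    catch [System.IO.IOException] {
        return $true    # sharing violation: someone holds the file
    }
}
```

Usage: `Test-FileLockSketch -Path C:\inetpub\logs\current.log` while another process has the file open should return `$true`.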
- Initial release of libNetworking.psm1
  - New Test-Port function, which lets you test TCP and UDP ports
  - New Test-ComputerConnectivity function to test whether a computer is responding through various methods
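The TCP side of a port check can be sketched with System.Net.Sockets; the function name and timeout are illustrative, and UDP is left out because a closed UDP port usually just times out rather than failing fast:

```powershell
# Sketch: TCP connect test with a timeout, the core of a Test-Port
# style function. Function name and default timeout are assumptions.
function Test-TcpPortSketch {
    param([string]$ComputerName, [int]$Port, [int]$TimeoutMs = 2000)
    $client = New-Object System.Net.Sockets.TcpClient
    try {
        # BeginConnect lets us bail out after $TimeoutMs instead of
        # waiting for the full OS-level connect timeout.
        $async = $client.BeginConnect($ComputerName, $Port, $null, $null)
        return ($async.AsyncWaitHandle.WaitOne($TimeoutMs) -and $client.Connected)
    }
    finally { $client.Close() }
}
```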
- Initial release of libNeo4j.psm1
  - Core functions to manipulate and query data in a Neo4j graph database
  - This is used for a proof of concept of a discovery process written in PowerShell that creates a graph in Neo4j, which is then used as a CMDB.
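Against a Neo4j 2.x server, this kind of manipulation can go through the transactional Cypher HTTP endpoint; a minimal sketch (the URL, node label, and Cypher statement are illustrative, not libNeo4j's API):

```powershell
# Sketch: run a parameterized Cypher statement via Neo4j's
# transactional HTTP endpoint (Neo4j 2.x). URL/query are assumptions.
$body = @{
    statements = @(
        @{ statement  = 'MERGE (c:Computer {name: {name}}) RETURN c'
           parameters = @{ name = 'SERVER01' } }
    )
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Uri 'http://localhost:7474/db/data/transaction/commit' `
                  -Method Post -ContentType 'application/json' -Body $body
```

MERGE is what makes a discovery process idempotent: re-running it updates existing nodes instead of creating duplicates.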
You can download the latest release here: GEM Automation 3.0.0.0
Here are some of my goals for future releases:
- Improve documentation (both in PowerShell and on CodePlex)
- Publish CMDB discovery processes that persist data in Neo4j
- Ensure the code is using the standard configuration store