I’ve been a bit quiet on this blog for a little while now but I figured I would take some time to share with you what I’ve been working on lately.

In the last few weeks I’ve been working on a prototype to gather VM consumption statistics through the Hyper-V metering functionality in order to get a better grasp of the VM requirements in our environment. The core of the monitoring is done with the following scripts that are now published on the GEM Automation project on CodePlex:

- Monitor-VMUsage.ps1
- Get-VMInformation.ps1
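
These scripts build on Hyper-V’s built-in resource metering cmdlets. Here is a minimal sketch of that underlying mechanism (the CSV path is illustrative; the published scripts are the authoritative implementation):

    # Enable resource metering on every VM on the host, then snapshot usage
    Get-VM | Enable-VMResourceMetering
    Get-VM | Measure-VM |
        Select-Object VMName, AvgCPU, AvgRAM, TotalDisk |
        Export-Csv -Path C:\Stats\VMUsage.csv -NoTypeInformation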

Tagging VMs with specific information allows us to generate the following kinds of dashboards in Excel:

Total VM Consumption by Systems
[Image: SystemDashboard]

Total VM Consumption by Environment
[Image: EnvironmentDashboard]

One report I’ve been working on this week is the following:

[Image: SystemCloudCosting]

In this graph, I can compare almost in real time (well, I aggregate stats on an hourly basis) the cost of running each of our Hyper-V VMs or systems with the following public cloud providers:

- Amazon
- Google
- HP
- Microsoft
- Rackspace

as well as our own internal cost. Using this, I can see when a specific system becomes a good candidate, cost-wise, to migrate to a public cloud. I can also look at trends in system execution to see whether I could run workloads on-premises part of the time and in the cloud the rest of the time. As you can see in the graph above, picking a provider is not a simple decision, but this tool definitely helps! My goal is also to start tracking public cloud prices so I can see the trend and better predict when it will become more cost effective for us to move some workloads to the cloud.
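
The comparison itself is straightforward once you have metered consumption; here is a toy sketch of the idea (the hourly rates below are placeholders, not real provider pricing):

    # Compare one metered VM-hour against illustrative hourly rates
    $rates = @{ Amazon = 0.070; Google = 0.065; Microsoft = 0.072; Internal = 0.050 }
    $meteredHours = 1
    $rates.GetEnumerator() |
        ForEach-Object { [pscustomobject]@{ Provider = $_.Key; Cost = $_.Value * $meteredHours } } |
        Sort-Object Cost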

When we determine costs for our internal infrastructure, we include the following:

- Servers
- Storage
- Networking
- Licenses
- Internet link
- Power
- Infrastructure-related labor

Another use case where we found this useful was evaluating our license requirements for SQL Server. Using this data, I can figure out how many cores are needed to run the entire SQL Server workload simply by aggregating the CPU statistics for the SQL Server systems.
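
For example, a rough core estimate can be derived directly from the metered CPU figures. A sketch, assuming VMs are tagged with the system name in their Notes field and a nominal host core speed of 2600 MHz (both assumptions are illustrative):

    # Sum metered average CPU (MHz) across SQL Server VMs, divide by core speed
    $mhzPerCore = 2600
    $totalMHz = (Get-VM |
        Where-Object { $_.Notes -match 'SQLServer' } |
        Measure-VM |
        Measure-Object -Property AvgCPU -Sum).Sum
    [math]::Ceiling($totalMHz / $mhzPerCore)  # approximate cores required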

There’s a lot more that has been built, but here’s an idea of the various “reports”/PivotTables/graphs we have set up in our Excel workbook:

- Hosts Dashboard
- VM Costing Per Hour
- Environment Costing Per Hour
- Target Infrastructure Per Hour
- Azure vs On Prem (Total)
- On Prem vs Cloud Optimized
- On Prem vs Cloud Per Hour
- Cost Per System Per Hour
- Top 10 VM – CPU
- Cloud Candidates
- On Prem vs Azure Per Hour
- Hourly VM CPU Trending
- Daily Cluster CPU Trending
- Daily VM CPU Trending
- Hourly System CPU Trending
- And a lot more!

I’d be curious to get some feedback on this!

For those of us in Canada, it is not possible to have the Xbox One power up a cable/satellite box unless you use the following workaround:

- Under Settings, change your location to United States instead of Canada, then reboot your Xbox
- Under the OneGuide settings, under Devices, you should now see an option to configure your cable box. Follow the wizard to do so
- Switch your location back to Canada and reboot again
- Done!

You won’t have the OneGuide, but the AVR, cable box and TV will power on/off with voice commands.

Enjoy!

Here is the list of notable changes in this release:

New library to interact with Service Bus for Windows Server.

ServiceBus
libServiceBus.psm1
New functions:

Get-SBMessageFactory
Get-SBNamespaceManager
Peek-ServiceBusTopicSubscription
New-SBTopic
New-SBQueue
Send-SBMessage
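
For context, these functions wrap the Service Bus for Windows Server SDK assembly. A minimal sketch of the kind of calls involved (the DLL path, connection string, and topic name are illustrative; the library’s actual implementations may differ):

    # Load the SDK assembly, create a topic, then send a message to it
    Add-Type -Path 'C:\SDK\Microsoft.ServiceBus.dll'
    $connStr = 'Endpoint=sb://sbserver/SBNamespace;StsEndpoint=https://sbserver:9355/SBNamespace;RuntimePort=9354;ManagementPort=9355'
    $nm = [Microsoft.ServiceBus.NamespaceManager]::CreateFromConnectionString($connStr)
    if (-not $nm.TopicExists('deployments')) { [void]$nm.CreateTopic('deployments') }
    $factory = [Microsoft.ServiceBus.Messaging.MessagingFactory]::CreateFromConnectionString($connStr)
    $sender = $factory.CreateMessageSender('deployments')
    $sender.Send((New-Object Microsoft.ServiceBus.Messaging.BrokeredMessage('hello')))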

Hyper-V
libVM.psm1
Modified Set-RemoteVMNetworkAdapterVLAN to accept adapter name
Modified Set-RemoteIPAddress to accept adapter name
Added Rename-RemoteNetworkAdapter

ActiveDirectory
libActiveDirectory.psm1
Added New-RandomPassword
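
As a rough idea of what a New-RandomPassword-style helper can look like (this is a sketch, not necessarily the library’s implementation):

    # Build a password from printable ASCII characters (excluding space);
    # Get-Random is fine for illustration, though not cryptographically strong
    function New-RandomPassword {
        param([int]$Length = 16)
        $chars = [char[]](33..126)
        -join (1..$Length | ForEach-Object { $chars | Get-Random })
    }
    New-RandomPassword -Length 20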

You can download the latest release here

While discussing I/O performance challenges with my manager and a colleague, we started talking about the inefficiencies of certain data operations. What I mean by this is that there is an imbalance between the amount of data coming in and the amount of data that flows through the infrastructure on a daily basis.

If about 5,000 transactions are created on a daily basis, why do we observe millions of I/Os and records being moved throughout our various systems? We were all a bit puzzled by that question. How did we get to that point? More importantly, can we fix it?

I feel that we sometimes spend a lot of time designing the infrastructure to raise its I/O capabilities to new levels, and success is measured by how many I/Os the storage subsystem can provide, GB of storage available, Gbps of bandwidth available to hypervisor hosts, etc. It seems that we’re often in a mode where, as long as an app does the job, we don’t care much about its efficiency and underlying computing requirements until things go wrong.

In a world that promotes unlimited scalability through cloud infrastructure, that may sound a bit counterintuitive. Are we being coerced into avoiding the tough problems by agile or Scrum methodologies, where points are earned by quickly catering to user needs through the stories model and velocity is calculated by how many points of user desire a developer can meet? Are we being driven by the hardware manufacturers to be inefficient just to keep their business model alive?

What if we started by measuring our systems by their effectiveness at meeting business demand? This would be a model where efficiency is rewarded and wasted resources are made visible. Could we be evaluated as an industry by something like net profit per byte processed? Does this sound a bit like the good old chargeback model? Maybe on the surface, but what’s needed is to provide the business with a view of how data flows through the company in relation to its business processes. The goal would be to integrate data flow costing into a process-oriented cost allocation model. When piece of data A is captured, where does it go? How many times is it updated? How many times is it read or aggregated by reports? How many times is it stored across the various OLTP systems, OLAP systems, emails, spreadsheets, and backups of all of those? All of this needs to be taken into account when evaluating the efficiency of a business process.

Sometimes it feels like we’re just managing white noise when we manage IT infrastructure. It’s just a few more bytes floating around, we tell ourselves! Like everything in life, there has to be a balance. There’s no point in having an efficient system that doesn’t meet the business needs.

Just some food for thought!

A new version has been released!

Here’s what changed in this release since version 2.0.0.0:

Added LoadBalancersClusterNodes table to store NLB/ARR cluster configuration.
Added migration logic to load the existing configuration into the SQL Compact database.
Added a Type field to the ConfigurationSettings table. This is used in the Assert-EnvironmentConfiguration.ps1 script.

Modified existing scripts/libraries to use the SQL Compact database to fetch configuration data:
Assert-EnvironmentConfiguration.ps1
Backup-EnvironmentConfiguration.ps1
Create-ApplicationPackage.ps1
Deploy-Build.ps1
Get-EnvironmentPackageInventory.ps1
Get-PackageInventory.ps1
libApplicationRequestRouting.psm1
libAssertion.psm1
Set-ApplicationConfiguration.ps1

Added new functions to delete various configuration data.

Set the encryption method while creating the database.
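
For reference, here is a sketch of what setting the encryption method at database creation time can look like with SQL Server Compact (the install path, database path, and password are illustrative; in SQL Compact 4.0 the 'Engine Default' encryption mode corresponds to AES-256 with an SHA-512 password hash):

    # Create a password-protected, encrypted SQL Compact database
    Add-Type -Path "$env:ProgramFiles\Microsoft SQL Server Compact Edition\v4.0\Desktop\System.Data.SqlServerCe.dll"
    $connStr = 'Data Source=C:\GEM\Configuration.sdf;Password=S0meStr0ngPwd!;Encryption Mode=Engine Default'
    $engine = New-Object System.Data.SqlServerCe.SqlCeEngine($connStr)
    $engine.CreateDatabase()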

You can get the latest release here!

Quick post to mention that version 2.0.0.0 of the GEM Automation library has been released. Here’s a list of notable changes:

Initial release available for version 2.0.0.0.

- Configuration data migrated to SQL Compact database
- Database is password protected and encrypted using AES-256, with the password hashed using SHA-512
- Modified Deploy-Build.ps1 and Set-ApplicationConfiguration.ps1 to get the configuration data from the database
- Modified the function in libApplicationRequestRouting.psm1 that sets web front end nodes offline/online for an application in the load balancers
- Several new functions to modify and query the configuration data (a usage sketch follows the list):
Run-SQLCompactQuery
Remove-ApplicationConfiguration
New-WebSiteVirtualDirectoryConfiguration
New-WebSiteConfiguration
New-WebSiteBindingConfiguration
New-WebSiteApplicationPoolConfiguration
New-Environment
New-ConfigurationSetting
New-ConfigurationDatabase
New-ApplicationDeploymentTarget
New-ApplicationCredentialComponent
New-ApplicationCredential
New-ApplicationConfigurationFile
New-ApplicationConfiguration
Get-WebSiteVirtualDirectoryConfiguration
Get-WebSiteConfiguration
Get-WebSiteBindingConfiguration
Get-WebSiteApplicationPoolConfiguration
Get-NullIfEmpty
Get-Environment
Get-ConfigurationSetting
Get-ConfigurationDatabaseConnection
Get-ApplicationDeploymentTarget
Get-ApplicationCredentialComponents
Get-ApplicationCredential
Get-ApplicationConfigurationFile
Get-ApplicationConfiguration
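
As an illustration of the query side, here is a sketch of how a Run-SQLCompactQuery-style helper could be put together (parameter names and paths are illustrative; the library’s actual signature may differ):

    # Run a query against the SQL Compact configuration database, return a DataTable
    Add-Type -Path "$env:ProgramFiles\Microsoft SQL Server Compact Edition\v4.0\Desktop\System.Data.SqlServerCe.dll"
    function Run-SQLCompactQuery {
        param([string]$DataSource, [string]$Password, [string]$Query)
        $conn = New-Object System.Data.SqlServerCe.SqlCeConnection("Data Source=$DataSource;Password=$Password")
        try {
            $conn.Open()
            $cmd = $conn.CreateCommand()
            $cmd.CommandText = $Query
            $table = New-Object System.Data.DataTable
            [void](New-Object System.Data.SqlServerCe.SqlCeDataAdapter($cmd)).Fill($table)
            $table
        }
        finally { $conn.Dispose() }
    }
    Run-SQLCompactQuery -DataSource 'C:\GEM\Configuration.sdf' -Password 'S0meStr0ngPwd!' -Query 'SELECT * FROM ConfigurationSettings'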

You can download the latest release here

Enjoy!

A new version of the library was published!

GEM Automation 1.4.1.2

List of Changes

Enjoy!

Today I’ve committed several changes and additions to the GEMAutomation project on CodePlex. Here’s an overview of what’s new:

Added the following:

CodeUtilities

Convert-ScriptsToModules.ps1
This is a small script to rename .ps1 files that contain functions to .psm1 files (PowerShell modules).
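
The core of such a rename is small; a sketch (assuming the current directory holds the library):

    # Rename every .ps1 that defines at least one function to .psm1
    Get-ChildItem -Path . -Filter *.ps1 -Recurse |
        Where-Object { Select-String -Path $_.FullName -Pattern '^\s*function\s' -Quiet } |
        Rename-Item -NewName { [System.IO.Path]::ChangeExtension($_.Name, '.psm1') }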

Get-FunctionUsage.ps1
This script goes through PowerShell scripts and modules to gather function usage. Using this, you can build a dependency graph for your PowerShell code. I took the resulting data and used NodeXL to display the dependencies in graph format in Excel. For larger libraries, you could import the data into Neo4j to make it more queryable.
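
A stripped-down sketch of the idea (the file layout is assumed; the published script is more complete):

    # Collect function names defined in modules, then emit caller->function edges
    $functions = Get-ChildItem -Filter *.psm1 -Recurse |
        Select-String -Pattern '^\s*function\s+([\w-]+)' |
        ForEach-Object { $_.Matches[0].Groups[1].Value } |
        Sort-Object -Unique
    Get-ChildItem -Include *.ps1, *.psm1 -Recurse | ForEach-Object {
        $file = $_
        foreach ($fn in $functions) {
            if (Select-String -Path $file.FullName -Pattern "\b$fn\b" -Quiet) {
                [pscustomobject]@{ Caller = $file.Name; Function = $fn }  # one graph edge
            }
        }
    }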

DPM

libDPM.psm1
Right now, this only contains a function to restore the latest recovery point of a SQL Server database to any SQL Server instance connected to the Data Protection Manager server.

Hyper-V

Get-VHDHierarchy.ps1
This goes through all the VHDs/VHDXs in a Hyper-V cluster to discover their hierarchy when applicable. For instance, if you have VMs built from a template disk (parent) using differencing disks, this will allow you to easily identify which parent disks are in use. The resulting list is currently stored in SharePoint.
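
The hierarchy discovery boils down to walking each disk’s parent chain; a minimal sketch (the function name and parameters are illustrative):

    # Walk a differencing disk's parent chain using the Hyper-V module's Get-VHD
    function Get-VHDParentChain {
        param([string]$Path, [string]$ComputerName = $env:COMPUTERNAME)
        while ($Path) {
            $vhd = Get-VHD -Path $Path -ComputerName $ComputerName
            $vhd                      # emit this level of the hierarchy
            $Path = $vhd.ParentPath   # empty when we reach the root parent
        }
    }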

libHyperVCluster.psm1
This contains a few functions:
- Get-HyperVClusterNodes (Gets information on all the nodes that are part of a Hyper-V cluster)
- Get-RemoteVMClusterResource (Gets the information of a VM resource remotely)
- Get-ClusterSharedVolumes (Gets information on all the CSVs mounted in a Hyper-V cluster)
- Select-ClusterSharedVolume (This is not used due to an unknown issue, ask me for details!)

libVHDXTracking.psm1
This is used by the Get-VHDHierarchy.ps1 script to gather the VHDX hierarchy.

libVMHardDisk.psm1
This contains the following functions:
- Unmount-VMHardDiskDrive (Unmounts a disk that’s mounted in a path)
- Add-VMHardDiskDriveAccessPath (Adds a mount point to a VM disk)
- Mount-VMHardDiskDrive (Prepares a VM hard disk by putting it online, clearing the read-only attribute and assigning the proper volume label. If it’s a new disk, it will be initialized, partitioned and formatted. See the sketch after this list.)
- New-RemoteDifferencingVHD (Remotely creates a new differencing disk.)
- New-RemoteVHD (Remotely creates a VHD.)
- Add-RemoteVMHardDiskDrive (Remotely adds a VHD to a VM.)
- Create-RemoteVMHardDiskDrive (Remotely creates a VHD and attaches it to a VM.)
- Get-RemoteVMHardDiskDrive (Gets the VMHardDiskDrive information for a particular disk of a VM.)
- Remove-RemoteVMHardDiskDrive (Remotely removes a VM hard disk drive.)
- Create-ParentDisk (Generic function to create a new parent disk, mainly used right now to create SQL Server data parent disks via the Create-SQLDataParent.ps1 script)
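
To give a feel for the disk preparation that Mount-VMHardDiskDrive performs, here is a host-side approximation using the Windows Server 2012+ storage cmdlets (the paths and volume label are illustrative; the actual function works against disks attached to a VM):

    # Bring a VHDX online, clear read-only, and initialize/format if the disk is new
    $disk = Mount-VHD -Path 'D:\VHDs\Data.vhdx' -Passthru | Get-Disk
    Set-Disk -Number $disk.Number -IsOffline $false
    Set-Disk -Number $disk.Number -IsReadOnly $false
    if ($disk.PartitionStyle -eq 'RAW') {
        Initialize-Disk -Number $disk.Number -PartitionStyle GPT
        New-Partition -DiskNumber $disk.Number -UseMaximumSize -AssignDriveLetter |
            Format-Volume -FileSystem NTFS -NewFileSystemLabel 'SQLData' -Confirm:$false
    }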

VHDXSQLRefresh

Create-SQLDataParent.ps1
This script performs the following:
- Creates a new VHDX and mounts it in a VM
- Restores databases from Data Protection Manager based on a configurable list
- Prepares the restored databases by defragmenting the indices and dropping the users present
- Notifies people by email once the process is completed

We use this internally to create a master copy of our production systems. We then create differencing disks in Hyper-V, which are environment-specific, and attach them to the desired development or quality assurance SQL Server instance. Once the parent disk is created, we can refresh an environment holding hundreds of GB of data in a few minutes, and we reduce the storage required for all our Dev/QA environments. Each environment is then free to use the delta disk as it wishes, upgrading the database schema or adding/deleting/updating data without affecting the parent disk, as all changes are written to the differencing disk.
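
The clone step itself is essentially two cmdlet calls; a sketch with illustrative paths and VM name:

    # Create an environment-specific differencing disk on top of the SQL data parent,
    # then attach it to the target Dev/QA SQL Server VM
    New-VHD -Path 'D:\Clones\Dev1_SQLData.vhdx' -ParentPath 'D:\Parents\SQLData_v1.vhdx' -Differencing
    Add-VMHardDiskDrive -VMName 'DEV-SQL01' -Path 'D:\Clones\Dev1_SQLData.vhdx'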

libParent.psm1
- Get-SQLDataParentDatabases (Gets the list of databases that are part of a certain parent definition)
- Get-DatabaseRefreshScripts (Gets the list of scripts that need to be run after the database clone has been attached in an environment)
- Get-ParentVersions (Gets the available versions of a certain parent type)
- Get-ParentNames (Gets the list of available parent definitions)

Win2012\Add-Clone.ps1
Creates a SQL Server data differencing disk based on a specific parent disk.

Win2012\Remove-Clone.ps1
Removes a SQL Server data differencing disk from the VM by first detaching the databases contained in the disk.

Win2012\Detach-Master.ps1
Removes a parent disk from the VM. This is usually run after Create-SQLDataParent.ps1 has completed.

If you have any questions regarding those scripts, feel free to comment here or start a discussion on the CodePlex site.

You can download the new release here:
GEM Automation Latest Release

Enjoy!

This is a quick post to announce the initial release of our first PowerShell library on CodePlex!

The BuildDeployment tools are a set of scripts and functions to assist you with the following tasks:

- Centralize the configuration of the applications used in your environment
- Deploy applications in a consistent and automated fashion, on multiple servers if necessary
- Assert the environment configuration through a series of automated tests
- Configure IIS applications and Application Request Routing load-balancing farms automatically and consistently across multiple servers (see the sketch below)
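
To give a flavour of the IIS portion, here is a minimal sketch of the kind of configuration these scripts automate (the site name, port, and paths are illustrative):

    # Create an application pool and a website bound to it (WebAdministration module)
    Import-Module WebAdministration
    New-WebAppPool -Name 'MyAppPool'
    New-Website -Name 'MyApp' -Port 8080 -PhysicalPath 'C:\inetpub\MyApp' -ApplicationPool 'MyAppPool'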

Download

For more details please check:

https://gemautomation.codeplex.com/

I will soon be publishing a series of scripts I’ve been working on for a little while.

The scripts can do the following:

- Deploy application code to IIS web front ends
- Configure IIS sites
- Configure Application Request Routing farms for web apps
- Create Hyper-V parent disks from database restores from Data Protection Manager
- Create Hyper-V differencing disks from the parent disks with SQL data
- … And more!

The code will be published here:

GEMAutomation
