Measuring IT Efficiency Through Data Flow Quantification

While discussing I/O performance challenges with my manager and a colleague, we got onto the subject of inefficient data operations. What I mean by this is that there is an imbalance between the business data coming in and the amount of data that flows through the infrastructure on a daily basis.

If about 5,000 transactions were created on a daily basis, why would we observe millions of I/Os and records being moved throughout our various systems? We were all a bit puzzled by that question. How did we get to this point? More importantly, can we fix it?
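To put a rough number on that imbalance (the I/O figure below is a hypothetical stand-in, not a measurement), the ratio works out to a striking amplification factor per transaction:

```python
# Back-of-the-envelope I/O amplification (illustrative numbers only).
daily_transactions = 5_000       # business transactions created per day
daily_ios = 50_000_000           # assumed total I/Os observed across all systems

amplification = daily_ios / daily_transactions
print(f"{amplification:,.0f} I/Os per business transaction")  # -> 10,000
```

Even if the real numbers differ, any ratio in the thousands suggests the infrastructure is doing far more work per unit of business value than the business ever asked for.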

I feel that we sometimes spend a lot of time designing the infrastructure to raise its I/O capabilities to new levels, and success is measured by how many I/Os the storage subsystem can provide, the GB of storage available, the Gbps of bandwidth available to hypervisor hosts, and so on. It seems that we're often in a mode where, as long as the app does the job, we don't care much about its efficiency and underlying computing requirements until things go wrong.

In a world that promotes unlimited scalability through cloud infrastructure, that may sound a bit counterintuitive. Are we being coerced into avoiding the tough problems by agile or SCRUM methodologies, where points are earned by quickly catering to user needs through the story model, and where velocity is measured by how many points of user desire a developer can meet? Are we being driven by the hardware manufacturers to be inefficient just to keep their business model alive?

What if we started measuring our systems by their effectiveness at meeting business demand? This would be a model where efficiency is rewarded and wasted resources are made visible. Could we be evaluated as an industry by something like Net Profit / bytes processed? Does this sound a bit like the good old chargeback model? Maybe on the surface, but what's needed is to give the business a view of how data flows through the company in relation to its business processes. The goal would be to integrate data flow costing into a process-oriented cost allocation model. When a piece of data is captured, where does it go? How many times is it updated? How many times is it read or aggregated into reports? How many times is it stored across the various OLTP systems, OLAP systems, emails, spreadsheets, and the backups of all of those? All of this needs to be taken into account when evaluating a business process's efficiency.
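As a rough sketch of what that costing could look like (every name, rate, and count below is hypothetical), imagine tallying each touch a single piece of data receives across the stores it lands in, and pricing the result:

```python
from dataclasses import dataclass, field

# Hypothetical per-operation costs; real rates would come from
# infrastructure billing or internal chargeback figures.
COST_PER_OP = {"write": 0.00005, "read": 0.00001, "copy": 0.00002}
COST_PER_GB_DAY = 0.02  # assumed daily storage cost per GB, per store

@dataclass
class DataItem:
    name: str
    size_gb: float
    ops: dict = field(default_factory=lambda: {"write": 0, "read": 0, "copy": 0})
    stores: set = field(default_factory=set)  # OLTP, OLAP, email, backups, ...

    def touch(self, op: str, count: int = 1):
        self.ops[op] += count

    def daily_cost(self) -> float:
        op_cost = sum(COST_PER_OP[op] * n for op, n in self.ops.items())
        storage_cost = len(self.stores) * self.size_gb * COST_PER_GB_DAY
        return op_cost + storage_cost

# A single order record: captured once, then updated, reported on, and copied.
order = DataItem("order_1234", size_gb=0.000002)       # a ~2 KB record
order.touch("write", 5)    # initial capture plus four updates
order.touch("read", 120)   # reads by reports and aggregations
order.touch("copy", 8)     # replication to OLAP, email attachments, etc.
order.stores.update({"oltp", "olap", "email", "backup_oltp", "backup_olap"})

print(f"daily cost of one order record: ${order.daily_cost():.4f}")
```

Summed across every record a process touches, a tally like this is what would make a metric such as Net Profit / bytes processed computable per business process instead of guessed at.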

Sometimes it feels like we're just managing white noise when we manage IT infrastructure. It's just some more bytes floating around, we tell ourselves! Like everything in life, there has to be a balance. There's no point in having an efficient system that doesn't meet the business needs.

Just some food for thought!
