Day 1 from USENIX: I/O Virtualization Day and others

Today I had the opportunity to get a glimpse of what researchers in the realm of I/O virtualization are doing. One of the first presentations was about a new way for the hypervisor to handle exits triggered by guest VMs. In a nutshell, the proposal is to categorize exits into synchronous and asynchronous operations, separate hypervisor execution from VM execution, and use a notification mechanism between the VMs and the hypervisor to handle the exits. According to initial results, they obtained significant performance gains, but the research is still preliminary.
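To make the idea a bit more concrete, here is a minimal sketch of that split as I understood it: vCPU threads post exit descriptors to a queue instead of trapping, a dedicated hypervisor thread services them on the side, and an event acts as the notification back to the guest. The queue, the `Exit` class and the synchronous/asynchronous flag are my own illustrative names, not the authors' actual implementation.

```python
# Conceptual model only: not the presented system, just the shape of the idea.
import queue
import threading

exit_queue = queue.Queue()          # shared channel between vCPU and hypervisor threads

class Exit:
    def __init__(self, kind, synchronous):
        self.kind = kind                      # e.g. "pio", "hypercall"
        self.synchronous = synchronous        # does the vCPU need the result to continue?
        self.done = threading.Event()         # completion notification back to the vCPU
        self.result = None

def hypervisor_service_loop():
    """Runs on a dedicated thread/core, separate from guest execution."""
    while True:
        ex = exit_queue.get()
        if ex is None:                        # shutdown sentinel
            break
        ex.result = f"handled {ex.kind}"      # emulate the exit
        ex.done.set()                         # notify the guest-side thread

def vcpu_issue_exit(kind, synchronous):
    """Instead of trapping into the hypervisor, post the exit and maybe keep running."""
    ex = Exit(kind, synchronous)
    exit_queue.put(ex)
    if synchronous:
        ex.done.wait()                        # synchronous exits still block on the result
    return ex

service = threading.Thread(target=hypervisor_service_loop, daemon=True)
service.start()

sync_exit = vcpu_issue_exit("pio", synchronous=True)          # vCPU waits for the result
async_exit = vcpu_issue_exit("hypercall", synchronous=False)  # vCPU continues immediately
async_exit.done.wait()                                        # collect the result later
print(sync_exit.result, async_exit.result)
exit_queue.put(None)
```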

The second presentation was about moving middleware code into the hypervisor space to avoid jumping through too many layers and therefore losing performance. In the research, a MySQL storage module was used to demonstrate the concept. While the concept showed slight performance improvements, it opens up a whole new can of worms on the security side. I personally think it amounts to taking shortcuts in the name of performance; it is fairly intuitive that removing execution layers and bringing the code closer to the metal will make things run faster.

Two presentations I found particularly interesting covered how to achieve effective QoS for storage and network I/O. The storage QoS solution uses multiple, fairly static queues, each linked to a specific class of service to guarantee response time and IOPS; the classes are then nested to provide an ordering mechanism. The network QoS solution (Gatekeeper) essentially places agents on the hypervisor hosts to coordinate bandwidth allocation for each virtual machine. By choking data rates at the source, you prevent situations where you cannot control how much data is sent to you, and you can then more accurately predict demand and allocate capacity.
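To illustrate the source-side idea, here is a minimal sketch of an agent on the sending hypervisor capping each VM's data rate before traffic ever leaves the host. The token-bucket limiter, the `HostAgent` class and the rates are my own illustrative assumptions, not Gatekeeper's actual design.

```python
# Sketch of source-side rate limiting in the spirit of the Gatekeeper idea.
import time

class TokenBucket:
    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, nbytes):
        """Refill tokens based on elapsed time, then try to spend them."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False

class HostAgent:
    """Per-host agent that hands each local VM a share of the outgoing link."""
    def __init__(self, link_capacity_bytes_per_s):
        self.capacity = link_capacity_bytes_per_s
        self.buckets = {}

    def set_allocation(self, vm, fraction, burst_bytes=64 * 1024):
        self.buckets[vm] = TokenBucket(self.capacity * fraction, burst_bytes)

    def try_send(self, vm, nbytes):
        # Traffic over a VM's allocation is held back (or dropped) at the source,
        # so the receiver never has to absorb data it cannot control.
        return self.buckets[vm].allow(nbytes)

agent = HostAgent(link_capacity_bytes_per_s=1_000_000_000)   # 1 GB/s link (assumed)
agent.set_allocation("vm-a", fraction=0.6)
agent.set_allocation("vm-b", fraction=0.4)
print(agent.try_send("vm-a", 1500), agent.try_send("vm-b", 1500))
```

Because the limiting happens where the traffic originates, the allocations across hosts add up to something predictable, which is exactly why capacity planning becomes easier.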

The day ended with a panel on the challenges of I/O virtualization. An interesting debate arose from a comment I made, which will be the subject of my next post.
