What Windows 7 should be

Here’s my take on some of the features Windows 7 should have:

Distributed processing

Now that we’re getting decent bandwidth on LANs and WANs, it would be nice if Microsoft could develop a layer in the operating system that works similarly to VMware VMotion, but at the thread level. If the local computing device, whether a Windows Mobile phone, a laptop, a desktop or a server, is overloaded with thread execution, the load could be offloaded on the fly to another node with spare processing capacity. When a device connects to a network, be it a private network or a public one like a Wi-Fi access point, the user could decide whether it participates in the processing pool. Applications could be built to detect whether the computer is part of a larger computing pool and enable extra functionality for end users. The distribution of thread execution could vary based on the available bandwidth and the job at hand, and it could be controlled via policies as well; for instance, engineering workstations could have execution priority over a gaming application on a corporate network.
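To make the idea a bit more concrete, here’s a minimal Python sketch of what such an offload decision might look like. Everything in it (the node and task fields, the load threshold, the priority policy) is made up to illustrate the concept; it isn’t anything Windows or VMotion actually exposes.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    load: float            # 0.0 (idle) to 1.0 (saturated)
    bandwidth_mbps: float  # link speed to this node

@dataclass
class Task:
    name: str
    priority: int          # policy-assigned priority, higher wins
    payload_mb: float      # data that must travel if the task is offloaded

def pick_node(task, local, peers, load_threshold=0.8, min_offload_priority=3):
    """Run locally unless we're overloaded and a peer is worth the transfer."""
    if local.load < load_threshold or task.priority < min_offload_priority:
        return local
    # Only consider peers that are not busy themselves and whose link can
    # ship the payload in under a second (an arbitrary cutoff for the sketch).
    candidates = [p for p in peers
                  if p.load < load_threshold
                  and task.payload_mb * 8 / p.bandwidth_mbps < 1.0]
    if not candidates:
        return local
    return min(candidates, key=lambda p: p.load)

if __name__ == "__main__":
    local = Node("phone", load=0.95, bandwidth_mbps=0.0)
    peers = [Node("desktop", load=0.30, bandwidth_mbps=300.0),
             Node("server", load=0.10, bandwidth_mbps=1000.0)]
    chosen = pick_node(Task("photo-filter", priority=5, payload_mb=20.0),
                       local, peers)
    print("run on:", chosen.name)   # the idle, well-connected server wins
```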

It would be interesting to see libraries such as Parallel FX from Microsoft leverage distributed processing over multiple computers in a transparent fashion. They already abstract job distribution over multiple local CPUs, and I don’t see any reason why that couldn’t be extended to remote processing of tasks as well.
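Parallel FX lives in .NET land, so as a stand-in here’s a small Python sketch using concurrent.futures to show the abstraction I mean: the calling code only sees an executor and a map, and whether that executor fans work out over local cores or, hypothetically, over remote nodes is an implementation detail.

```python
from concurrent.futures import ProcessPoolExecutor

def render_tile(tile_id: int) -> str:
    # Stand-in for a CPU-heavy piece of work.
    return f"tile {tile_id} rendered"

if __name__ == "__main__":
    # The executor hides *where* the work runs. Today it spreads the calls
    # over local cores; a distributed flavour could hand the same callables
    # to remote machines without changing this calling code at all.
    with ProcessPoolExecutor(max_workers=4) as pool:
        for result in pool.map(render_tile, range(8)):
            print(result)
```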

It would also be cool if the operating system could route certain specialized tasks to specific processors such as GPUs. Multiple PCs with graphics cards could transparently serve as a rendering farm for thin clients, for instance.
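As a toy illustration of that kind of routing, here’s a hypothetical Python sketch that sorts work into a GPU queue or a CPU queue based on a task tag. The tags and queues are invented for the example; a real OS scheduler would obviously look at far more than a string.

```python
from queue import Queue

# Hypothetical task tags; a real scheduler would inspect much more than this.
GPU_KINDS = {"render", "video-encode", "shader-compile"}

cpu_queue: Queue = Queue()
gpu_queue: Queue = Queue()

def dispatch(kind: str, payload: str) -> None:
    """Route GPU-friendly work to the machines advertising a graphics card."""
    target = gpu_queue if kind in GPU_KINDS else cpu_queue
    target.put((kind, payload))

dispatch("render", "frame-042")
dispatch("spreadsheet-recalc", "sheet1")
print("gpu backlog:", gpu_queue.qsize(), "cpu backlog:", cpu_queue.qsize())
```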

I know distributed processing is not a new thing in academic and sometimes corporate settings, but imagine if a general-purpose operating system such as Windows could bring it mainstream: a pool of 100 million PCs sharing their workload to achieve what was previously unthinkable. PC virtualization is nice, but thread execution virtualization is the future!

Distributed storage

Wouldn’t it be nice to have access to a single pool of storage that is almost infinite in size and always available? In one of my previous posts, Semantic Web, you can find an idea of how storage could evolve into something more collaborative and intelligent. Think of it as a giant hard drive with built-in redundancy and load balancing functionality.
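To sketch what “built-in redundancy and load balancing” could mean in practice, here’s a toy Python block store. The class, the replica count and the node-selection rule are all assumptions made up for the illustration, not a description of any real product.

```python
import hashlib
from collections import defaultdict

class DistributedBlockStore:
    """Toy storage pool: every block is replicated to the least-full nodes."""

    def __init__(self, nodes, replicas=2):
        self.nodes = {n: {} for n in nodes}   # node name -> {block_id: data}
        self.replicas = replicas
        self.index = defaultdict(list)        # block_id -> nodes holding it

    def put(self, data: bytes) -> str:
        block_id = hashlib.sha256(data).hexdigest()
        # Load balancing: pick the nodes currently storing the least data.
        targets = sorted(self.nodes, key=lambda n: len(self.nodes[n]))[:self.replicas]
        for n in targets:
            self.nodes[n][block_id] = data
            self.index[block_id].append(n)
        return block_id

    def get(self, block_id: str) -> bytes:
        # Redundancy: any surviving replica can serve the read.
        for n in self.index[block_id]:
            if block_id in self.nodes[n]:
                return self.nodes[n][block_id]
        raise KeyError(block_id)

if __name__ == "__main__":
    pool = DistributedBlockStore(["pc1", "pc2", "pc3", "pc4"], replicas=2)
    bid = pool.put(b"my document")
    pool.nodes["pc1"].pop(bid, None)   # simulate one replica disappearing
    print(pool.get(bid))               # still readable from another node
```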

Wrap up

If you combine those two technologies, imagine how that would change the world of computing: near-unlimited storage and processing available to each of us. It would also help with green computing, since we wouldn’t have to build PCs with as much processing capability anymore; we could leverage the computing capability of our peers on a given network instead. You could plug in a PC that boots from a distributed storage pool and start up a data mining application that runs multiple threads over multiple local and remote CPUs. Neato!
