Cloud 2.0, what future for cloud computing?

Almost one year ago, I posted a personal review about Azure, Amazon, Google Engine, VMware and the others. One year later, the cloud computing market is definitely taking shape. Patterns are emerging, along with early standardization attempts (e.g. www.simplecloud.org).

My personal guess is that the cloud computing market (not the technology) will reach a v1.0 status at the very end of 2009, when the last big player - that is to say Microsoft - will have finally launched its own cloud.

My personal definition of cloud computing v1.0 is a complex technology mash-up involving a series of computing resource abstractions:

(1) Both storage and computing nodes come in two flavors, depending on whether the cloud geo-localizes its resources. In particular, read-only geo-localized scalable storage - also known as a content delivery network - provides advanced automated geo-localization, while computing nodes are still geo-localized manually.

(2) At present, virtually no major cloud provider supports a distributed cache - but considering the success of, and community interest in, Memcached, I am guessing that all major cloud providers will support this service by the end of 2010 (a minimal caching sketch follows this list).

(3) Again, virtually no major cloud provider supports sharded relational databases at the moment, but considering the importance of relational data in virtually every enterprise app, I am also guessing that most major cloud providers will offer them by the end of 2010 (see the sharding sketch right after this list).
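
To make the caching point concrete, here is a minimal sketch of the read-through pattern that Memcached popularized, written against the python-memcached client; the key naming and the 5-minute expiry are illustrative assumptions, not any provider's actual service.

```python
import memcache  # python-memcached client; assumes a memcached node on localhost

mc = memcache.Client(["127.0.0.1:11211"])

def get_user_profile(user_id, load_from_db):
    """Read-through cache: serve from memcached when possible,
    fall back to the database and repopulate the cache otherwise."""
    key = "profile:%s" % user_id
    profile = mc.get(key)
    if profile is None:
        profile = load_from_db(user_id)   # slow path: hits the relational DB
        mc.set(key, profile, time=300)    # keep it cached for 5 minutes
    return profile
```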
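As for sharded relational databases, the routing logic underneath is typically a stable hash from a partitioning key to a database instance. A minimal sketch, with hypothetical shard names standing in for real connection strings:

```python
import hashlib

# Hypothetical shard identifiers; each would map to a separate
# relational database instance in a real deployment.
SHARDS = ["db-shard-0", "db-shard-1", "db-shard-2", "db-shard-3"]

def shard_for(customer_id):
    """Map a customer to a shard with a stable hash, so that all the
    relational data of a given customer lands on a single database."""
    digest = hashlib.md5(str(customer_id).encode("utf-8")).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# Example: customer 42 is always routed to the same shard.
print(shard_for(42))
```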

With those services in place, I will consider the cloud v1.0 milestone reached.

Guessing what lies further ahead, beyond 2010, is a difficult game, as cloud computing technology is still evolving at a very fast pace.

Yet, I think (or rather I guess) that there will be two major forces driving cloud computing 2.0: productivity and latency.

Indeed, at present, cloud computing is mostly an option for projects carrying little or no legacy, as migrating toward the cloud means a complete redesign of most apps.

Furthermore, cloud computing v1.0 calls for hard-core development skills and a significant amount of knowledge about distributed computing. This is a high barrier that will slow down cloud adoption.

Thus, a key aspect of cloud computing 2.0 will be drastic productivity improvements through mature programming environments that significantly facilitate the design and testing of cloud apps. Considering the breadth of issues involved in migrating existing apps toward the cloud, I believe this task will require no less investment than the actual design of cloud v1.0.
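
As an illustration of what such an environment could offer, here is a minimal sketch of an in-memory stand-in for a cloud blob store; the put/get/delete interface is my own assumption, not any provider's actual API, but it shows how cloud app logic could be unit-tested locally, in milliseconds, without deploying anything.

```python
class InMemoryBlobStorage:
    """Hypothetical local stand-in for a cloud blob store, keyed by
    (container, blob name). Useful for fast, offline unit tests."""

    def __init__(self):
        self._blobs = {}

    def put(self, container, name, data):
        self._blobs[(container, name)] = data

    def get(self, container, name):
        return self._blobs[(container, name)]

    def delete(self, container, name):
        self._blobs.pop((container, name), None)

# A test can exercise app logic locally, without touching the cloud.
store = InMemoryBlobStorage()
store.put("invoices", "2009-06.pdf", b"%PDF-1.4 ...")
assert store.get("invoices", "2009-06.pdf").startswith(b"%PDF")
```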

Then, if cloud v1.0 is vastly scalable, it is also still far from real-time interactions (*), as latency is, at best, only marginally better than what is obtained with classical server setups. Indeed, geo-localization is available, but at a very coarse-grained level (typically continents), and in a spirit of compliance with local regulations rather than latency fine-tuning.

(*) Check out OnLive for an early attempt at a low-latency cloud infrastructure.

I feel that the potential of on-demand computing resources made available nearly locally, allowing near real-time interactions - from mobile apps to urban commodities - is huge. UI responsiveness is addictive, and competition between cloud providers will reflect that.

Yet, lowering latency will probably mean multiplying cloud data centers around the world, so that most people (who will remain as blissfully ignorant of cloud computing as they are of the water supply) can enjoy loads of services with an improved user experience.
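
A minimal sketch of what latency-driven geo-localization could look like from the client side, assuming a provider publishes a list of data center endpoints (the hostnames below are hypothetical): measure the round-trip to each, and route to the nearest.

```python
import socket
import time

# Hypothetical endpoints; a real provider would publish its own list.
DATACENTERS = {
    "eu-west": ("dc-eu-west.example.com", 443),
    "us-east": ("dc-us-east.example.com", 443),
    "ap-east": ("dc-ap-east.example.com", 443),
}

def nearest_datacenter():
    """Pick the data center with the lowest TCP connect time,
    a crude but serviceable proxy for network latency."""
    best, best_rtt = None, float("inf")
    for name, endpoint in DATACENTERS.items():
        start = time.time()
        try:
            socket.create_connection(endpoint, timeout=1.0).close()
        except socket.error:
            continue  # unreachable data center, skip it
        rtt = time.time() - start
        if rtt < best_rtt:
            best, best_rtt = name, rtt
    return best

print(nearest_datacenter())
```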

To achieve that, I suspect that major cloud providers will end up with dozens (and ultimately hundreds) of data centers, starting with the largest and wealthiest cities.

Considering that data centers typically cost hundreds of millions of dollars, cloud 2.0 will represent investments no less important than those made historically to set up the power grid.