Author

I am Joannes Vermorel, founder at Lokad. I am also an engineer from the Corps des Mines who initially graduated from the ENS.

I have been passionate about computer science, software matters and data mining for almost two decades.


Wednesday
Oct 28, 2009

No excuse for not disclosing your roadmap

Software is a fast-paced industry. New technologies soon become obsolete ones, and you need to keep your mindset in Fire and Motion mode to move forward. Yet, when something really big emerges, say cloud computing, you end up at a crossroads and you need to make a choice about the future of your business.

This future depends on the 3rd party technology you decide to rely on. This is true for software companies buying software components, but it's also true for brick-and-mortar companies moving to the next-generation ERP.

As I try to push my own little company forward, we can't afford to reinvent the wheel. Thus, we rely on loads of 3rd party tools and technologies - as does, I guess, most likely any software company, with the notable exceptions of Microsoft and Google, which are nearly self-sustained.

There is nothing wrong in itself with depending on other businesses. Specialization has been a driving force behind business growth for the last two centuries. Yet, in order to make good decisions, I need to be informed about the future plans for the key technologies that we are adopting:

• where the company is heading.
• whether it matches our upcoming requirements.

Looking around at companies who disclose their roadmaps, I realized that roadmaps are strong drivers to establish trust and commitment. A roadmap shows your customers and partners that you are committed to moving forward with them, not just to leveraging the status quo.

Yet, it's still sad to see that many companies adopt the Absolute Radio Silence strategy of Apple. It might work in the case of Apple, because they have become experts at leveraging the media buzz around their own plans; but it looks like total nonsense to me for B2B markets where relationships last for years, if not decades.

The average lifetime of an installed ERP is 8 years.

Hiding behind the "we don't want to over-promise" argument to keep your customers and partners in the dark looks like a lame excuse to me. The roadmap represents a best-effort attempt at summarizing directions, not an exact schedule. Obviously, it comes with a level of uncertainty. In B2B markets, your customers are smart enough to understand that.

Thus, I have decided to publish a public Lokad roadmap for 2010.

Obviously, one can argue that this roadmap is going to benefit our competitors. Frankly, I don't think so. If disclosing a 3-page document is sufficient to put your business in trouble, then your intellectual property is really weak in the first place.

The roadmap tells what you are going to do, not the fine-grained details of how to make it work. As usual, ideas are a dime a dozen - many investors would even offer them for free; execution is everything.

Thursday
Sep 24, 2009

Cloud 2.0, what future for cloud computing?

Almost one year ago, I posted a personal review about Azure, Amazon, Google App Engine, VMware and the others. One year later, the cloud computing market is definitely taking shape. Patterns are emerging along with early standardization attempts.

My own personal guess is that the cloud computing market (not the technology) will somehow reach v1.0 status at the very end of 2009, when the last big player - that is to say, Microsoft - will have finally launched its own cloud.

My personal definition for cloud computing v1.0 is a complex technology mash-up that involves a series of computing resource abstractions:

• Scalable key-value storage (1)

• Scalable queues

• Computing nodes on demand (1)

• Scalable functional CPU (ala MapReduce)

• Scalable cache (2)

• Sharded relational DB (3)

(1) Both storage and computing nodes come in two flavors depending on whether the cloud supports geo-localization of its resources. In particular, read-only geo-localized scalable storage, also known as content delivery networks, provides advanced automated geo-localization, while computing nodes are still manually geo-localized.

(2) At the present time, virtually no major cloud provider supports a distributed cache - but considering the success of and community interest in Memcached, I am guessing that all major cloud providers will be supporting this service by the end of 2010.

(3) Again, virtually no major cloud provider supports a sharded relational DB at the moment, but considering the importance of relational data in virtually every single enterprise app, I am also guessing that most major cloud providers will offer it by the end of 2010.

With those services in place, I will consider the cloud v1.0 milestone reached.
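To make the sharded relational DB item (3) a bit more concrete, here is a minimal sketch of the routing layer such a service implies: rows are spread over several independent database instances, and a router maps each key to its shard. The `ShardRouter` class and its hashing scheme are purely illustrative assumptions of mine, not any provider's actual API:

```python
import hashlib

class ShardRouter:
    """Toy shard router: maps a row key to one of N independent DB instances."""

    def __init__(self, shard_count):
        self.shard_count = shard_count

    def shard_for(self, key):
        # A stable hash guarantees the same key always lands on the same shard,
        # which is what lets clients find their data without a central lookup.
        digest = hashlib.md5(key.encode("utf-8")).hexdigest()
        return int(digest, 16) % self.shard_count

router = ShardRouter(shard_count=4)
print(router.shard_for("customer-42"))  # a deterministic shard index in 0..3
```

The hard parts that make this a genuine cloud service - rebalancing when shards are added, and cross-shard queries - are precisely what was missing from the 2009 offerings.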

Guessing what lies further ahead, beyond 2010, is a difficult game, as cloud computing technology is still evolving at a very fast pace.

Yet, I think (or rather I guess) that there will be two major forces for cloud computing 2.0:

• Drastic productivity improvements through mature environments.

• Fine-grained geo-localization for near real-time latencies (say 10ms).

Indeed, at the present time, cloud computing is mostly an option for projects carrying little or no legacy, as migrating toward the cloud represents a complete redesign for most apps.

Furthermore, cloud computing v1.0 involves loads of hard-core development skills and a significant amount of knowledge about distributed computing. This is a vast barrier that will slow down the adoption rate of the cloud.

Thus, a key aspect of cloud computing 2.0 will be to obtain drastic productivity improvements through mature programming environments that significantly facilitate the design and testing of cloud apps. Considering the breadth of issues in migrating existing apps toward the cloud, I believe that this task will require no less investment than the actual design of cloud v1.0.

Then, while cloud v1.0 is vastly scalable, it is also still far from real-time interactions (*), as latency is, at best, only marginally better than what is obtained with classical server setups. Indeed, geo-localization is made available, but at a very coarse-grained level (typically continents), and rather in a spirit of compliance with local regulations, as opposed to latency fine-tuning.

(*) Check OnLive for an early attempt at low-latency cloud infrastructure.
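Fine-grained geo-localization ultimately boils down to routing each user to the data center with the lowest round-trip time. A toy sketch of that selection rule, with data center names and latency figures that are entirely made up:

```python
# Hypothetical measured round-trip times from one client, in milliseconds.
measured_rtt_ms = {
    "us-east": 95.0,
    "eu-west": 12.0,
    "asia-east": 240.0,
}

def nearest_datacenter(rtt_by_dc):
    # Fine-grained geo-localization amounts to minimizing latency:
    # pick the data center with the smallest measured round-trip time.
    return min(rtt_by_dc, key=rtt_by_dc.get)

print(nearest_datacenter(measured_rtt_ms))  # -> eu-west
```

The selection rule is trivial; the multi-billion-dollar part is having enough data centers that the minimum is actually small.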

I feel that the potential of on-demand computing resources made available nearly locally, allowing near real-time interactions - from mobile apps to urban commodities - is huge. UI responsiveness is addictive, and the competition between cloud providers will reflect that.

Yet, lowering the latency will probably mean multiplying cloud data centers around the world so that most people (who will remain as blissfully ignorant about cloud computing as they are about water supply) can enjoy loads of services with an improved user experience.

To achieve that, I suspect that major cloud providers will end up with dozens (and ultimately hundreds) of data centers, starting with the largest and wealthiest cities.

Considering that data centers typically cost hundreds of millions of dollars, cloud 2.0 will represent investments no less important than those made historically to set up the power grid.

Sunday
Aug 30, 2009

I have been hearing a lot about Twitter for a long time. I am still a bit puzzled by the concept, but apparently a significant percentage of the registrants at Lokad do have a Twitter account. So now, I can start wasting time on Twitter too while pretending it's company work :-)

More seriously, it appears that a couple of competitors, prospects and customers are actually discussing sales forecasts out there, so it might be worth keeping an eye on that.

Thanks to Rinat, a Twitter account for Lokad was set up a while ago. Since this account is intended to be a company account, we need to handle several users here. Sharing passwords isn't such a great method, so I have decided to give CoTweet a try.

I have also set up a personal Twitter account, although, since I am already lagging behind with my various blogs (including the present one), it's not clear whether I will be able to keep up with the posting frequency that the Twitter community appears to expect.

Tuesday
Jul 28, 2009

Thoughts about the Windows Azure pricing

Microsoft has recently unveiled its pricing for Windows Azure. In short, Microsoft aligned exactly with the pricing offered by Amazon. CPU costs $0.12 / h, meaning that a single instance running around the clock for a month costs $86.40, which is fairly expensive compared to classical hosting providers, where you can get more for basically half the price.

But well, this situation was expected, as Microsoft probably does not want to start a price war with its business partners still selling dedicated Windows Server hosting. The current Azure pricing is sufficiently high to deter most companies except the ones who happen to have peaky needs.

To me, the Azure pricing is fine except in 3 areas:

• Each Azure WebRole costs at least $86.40 / month no matter how little web traffic you have (reminder: with Azure you need a distinct WebRole for every distinct webapp). This situation is caused by the architecture of Windows Azure, where a VM gets dedicated to every WebRole. If we compare with Google App Engine (GAE), the situation does not look too good for Azure; indeed, with GAE, hosting a low-traffic webapp is virtually free. Free vs. $1,000 / year is likely to make a difference for most small / medium businesses, especially if you end up with a dozen webapps to cover all your needs.

• Cloud Storage operations are expensive: the storage itself is rather cheap at $0.15 / GB / month, but the cost of $0.01 per 10K operations might be a killer for cloud apps relying intensively on small storage operations. Yes, one can argue that this price is no cheaper with AWS, but this is not entirely true, as AWS provides other services such as block storage that come with a 10x lower price per operation (EBS could be used to lower the pressure on blob storage whenever possible).

• Raw CPU at $0.12 / h is expensive, and Azure offers no solution to lower this price, whereas AWS offers CPU at $0.015 / h through their MapReduce service.
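The arithmetic behind these figures is straightforward; the sketch below just reproduces the numbers quoted above (2009 list prices as stated in this post, not current pricing):

```python
# Back-of-the-envelope Azure cost figures from the post (2009 pricing).
cpu_price_per_hour = 0.12          # $/h for one compute instance
hours_per_month = 24 * 30          # assuming a 30-day month

instance_month = cpu_price_per_hour * hours_per_month
print(instance_month)              # roughly 86.4 $/month for one always-on WebRole

storage_op_price = 0.01 / 10_000   # $0.01 per 10K storage operations
million_ops_cost = storage_op_price * 1_000_000
print(million_ops_cost)            # roughly 1 $ for a million small operations
```

At a dollar per million operations, a chatty app hammering blob storage with tiny reads and writes can easily see its storage bill dwarf the $0.15 / GB holding cost.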

Obviously, those pricing weaknesses closely reflect cloud technologies that are missing from Azure (at the moment). The MapReduce issue will be fixed when Microsoft ports DryadLINQ to Azure. Block storage and shared low-cost web hosting might also be on their way (although I have little info on that matter). As a side note, the Azure Cache Provider might be a killer tool to reduce the pressure on the cloud storage (but pricing is unknown yet).

As a final note, it's interesting to see that cloud computing pricing really depends on the quality of the software used to run the cloud. Better software typically leads to computing hardware being delivered at much lower costs - almost 10x lower in many situations.

Thursday
Apr 02, 2009

In praise of Voices.com

I have been a long-time consumer of freelance marketplaces. Yet, all the freelance websites that I have experienced so far have left me with a feeling of half-baked design: Guru, oDesk, eLance, RentACoder, just to name a few.

The heart of the problem lies in the doomed attempt at supporting every type of freelance job with a single web application.

In contrast, voices.com has a unique focus on voice talent. You won't find database administrators or supply chain consultants on voices.com; but when it comes to voice-over jobs, the application is just plain great.

Basically, like on any other freelance website, you post your job - including your scripts, since it's a voice-over job - and within hours you get dozens of freelance offers. So far, so good; all the other freelance websites do that.

Yet, the killer feature of voices.com is that each freelancer gives you a 30s recording of their own voice over your scripts.

And this feature is plain amazing. Instead of wasting hours in desperate attempts at sorting true talent out of a massive amount of junk proposals, you just listen to the 30s samples, which happens to be precisely the rational way to make your decision.

And the best thing is that, since voices.com puts a strong emphasis on talent through this very feature, you get virtually no junk proposals at all. Among the 30 proposals that I received yesterday in less than 6h, most were very good, and a few plain excellent.

Don't believe me? Just check the very nice job that Ray Grover did for us within a 6h timeframe, from job posting to job completion.
