Author

I am Joannes Vermorel, founder at Lokad. I am also an engineer from the Corps des Mines who initially graduated from the ENS.

I have been passionate about computer science, software matters and data mining for almost two decades.


Wednesday, Mar 04, 2015

## Buying software? You should ignore references

As a (small) software entrepreneur, I am still amazed to witness hell breaking loose when certain large software vendors start deploying their “solution”. Even more fascinating: after causing massive damage, the vendor simply signs another massive deal with another large company, and hell breaks loose again. Repeat this one hundred times, and you get a worldwide verticalized software leader crippling an entire industry with half-baked technology.

Any resemblance between the characters in this post and any real retail company is purely coincidental.

I have already pointed out that Requests For Quotes (RFQs) are a recipe for disaster, but RFQs alone do not explain the scale of the mess. As I become more and more familiar with selling to large companies, I now tend to think that one heavyweight driver behind these epic failures is a banal flaw of the human mind: we massively overvalue other people's opinions on a particular subject instead of relying on our own judgment.

In B2B software, a reference is usually a person who works in a company similar to the one you are trying to sell to, and who, when called by your prospects, conveys exceptionally positive feelings about you and extremely vague information about your solution. Having tested this approach myself, I can say that the results are highly impressive: the reference call is an incredibly efficient sales method. Thus, it is pretty safe to assume that any sufficiently large B2B software vendor is acutely aware of this pattern as well.

At this point, it becomes extremely tempting for the vendor not to merely stumble upon happy customers who happen to be willing to act as referees, but to manufacture these references directly, or even to fake them if that's what it takes. How hard could this be? It turns out: not hard at all.

As a first-hand witness, I have observed that there are two main paths to manufacturing such references, which I would refer to as the non-subtle path and the subtle path. My observations indicate that both options are routinely leveraged by most B2B software vendors once they reach a certain size.

The non-subtle path is, well, not subtle: you just pay. Don't get me wrong, there is no bribery involved or anything against the law. Your “reference” company gets paid through a massive discount on its own setup fee, under a strict agreement that it will play its part in acting as a reference later on. Naturally, it is difficult to include this in the official contract, but it turns out that you don't need to. Once a verbal agreement is reached, most business executives stick to the spirit of the agreement, even if they are not bound by a written contract to do so. Some vendors go a step further by directly offering a large referral fee to their flagship references.

The subtle path takes another angle: you overinvest in order to make your “reference” client happy. Indeed, even the worst flaws of enterprise software can usually be fixed given unreasonable efforts, that is, efforts that go well beyond the budget of your client. As a vendor, you still have the option of picking a few clients where you decide to overinvest and make sure they are genuinely happy. When the time comes for a reference to be provided, it is naturally chosen from among those “happy few” clients who benefit from an outstanding service.

While one may be tempted to argue that the subtle path is morally superior to the non-subtle path, I would argue that they are equally deceptive, because the prospect gets a highly distorted view of the service actually provided by the vendor. The subtle path has the benefit of not being a soul-crushing experience for the vendor's staff, but many people make their peace with the non-subtle path as well.

If you happen to be in the position of buying enterprise software, this means you should treat all such hand-picked references with downright mistrust. While it is counter-intuitive, the rational option is to refuse any discussion with these references, as they are likely to distort your imperfect (but so far unbiased) perception of the product to be acquired.

Refusing calls with references? Insanity, most will say. Let's step back for one second and have a look at what can be considered the “gold standard” [1] of rational assessment: the paper selection process of international scientific publications. The introduction of blind, and now double-blind, peer review was motivated precisely by the need to fight the very same kind of mundane human flaws. Nowadays, if a research team tried to get a paper published on the grounds that they have buddies who think their work is “cool”, the scientific community would laugh at them, and rightly so. Only the cold examination of the work itself by peers stands.

And that is what references are: they are buddies of the vendor.

In addition, there is another problem with references that is very specific to the software industry: time is of the essence. References are a reflection of the past, and by definition, when looking at the past, you are almost certain to miss recent innovations. However, software is an incredibly fast-paced industry. Since I first launched Lokad, the software business for commerce has been disrupted by three major tech waves: cloud computing, multichannel commerce and mobile commerce; and that is not even counting “minor” waves like Big Data. Buying software is like buying a map: you don’t want an outdated version.

Software used to run large companies is typically one to two decades behind what would be considered “state of the art”. Thus, even a vendor selling technology that is one decade behind the rest of the market can still manage to be perceived as an “upgrade” by players who were two decades behind the market. It is a fallacy to believe that because the situation improved somewhat, the decision to purchase a particular piece of software was a good one. The opportunity to get up to speed with the market has been wasted, and the company remains uncompetitive.

No matter which approach the vendor adopts to obtain its references, one thing is certain: it takes a tremendous amount of time, typically years. Thus, by the time references are obtained, chances are high that the technology assessed by the referee has become outdated. At Lokad, it happened to us twice: by the time we obtained references for our “classic” forecasting technology, we had already released our “quantile” forecasting technology, and our former “classic” forecasting software was already history. Three years later, history repeated itself as we released “quantile grids” forecasting, which is vastly superior to our former “quantiles”. If companies were buying iPhones based on customer references, they would just be starting to buy the iPhone 1 now, not trusting the iPhone 2 yet because it would still lack customer references; and it would be unimaginable to even consider all the versions from the iPhone 3 to the iPhone 6 that have not yet been time-tested.

The need for references emerges because the software buyer is vulnerable and insecure, and rightly so, as epic failures are extremely frequent when buying enterprise software. While the need for security during the buying process is real, references, as we have seen, are a recipe for major failures.

A much better approach is to carry out a thorough examination of the solution being proposed, and yes, this usually means becoming a bit of an expert in this field in order to perform an in-depth assessment of the solution being presented by the vendor. Don’t delegate your judgment to people you have no reason to trust in the first place.

[1] The scientific community is not devoid of flaws; it is still a large bunch of humans, after all. Peer review remains a work in progress: publication protocols are still being improved, always seeking to uphold higher standards of rationality.

Saturday, Jan 22, 2011

## Telling the difference between cloud and smoke

I returned a few days ago from NRF11. As expected, many companies were advertising cloud computing; and yet, how disappointing when investigating the matter a tiny bit further: it seems that less than 10% of the companies advertising themselves as cloudy are actually leveraging the cloud.

For 2011, I am predicting there will be a lot of companies disappointed by cloud computing - a term now apparently widely used as a pure marketing buzzword, without the technological substance to support the claims.

For those of you who might not be too familiar with cloud computing, here is a 3-minute sanity test to check whether an app is cloud-powered or not. Obviously, you can also go for a very rigorous in-depth audit, but with this test you should be able to uncover the vast majority of smoky apps.

### 1. Is there any specific reason why this app is in the cloud?

Bad answer: we strive to deliver next-generation outstanding software solutions, exceeding customer expectations, blah blah blah, insert more corporate talk here...

A pair of regular servers - typically a web server plus a database server - can handle thousands of concurrent users for non-intensive webapps. This is already far more users than most apps on the market will ever face (remember: with high probability, you don't need to scale). So there has to be a compelling reason that justifies the cloud, beside the very hypothetical scenario of growing faster than Facebook.

### 2. Is the underlying infrastructure larger than 100k machines?

Bad answer: well, in fact we just have our own dedicated servers at DediHost Corp Inc (insert here the name of a regular hoster).

A key aspect of cloud computing is cost reduction through massification. As of 2011, there are still only a handful of cloud providers available, namely: Amazon WS, Google App Engine, Rackspace Cloud, Salesforce and Windows Azure. Make sure to ask which cloud infrastructure is being used. Also, private clouds are no exception: it's not because it's “private” that massification is suddenly achieved with 100 servers. It takes more, a lot more, to build a cloud.

### 3. Can you open an account and get started right from the web, no setup cost?

Multitenancy is a key aspect of reducing admin costs. In particular, with any reasonable cloud-based architecture, there is no reason to have mandatory setup costs (which does not mean the company may not charge for an optional onboarding package providing training, dedicated support, etc.). Setup costs are typically the sign of non-cloud software, where each extra deployment takes some amount of gruntwork.

### 4. Is there public pricing? Typically indexed on usage or user metrics.

For cloud-based apps, there is about zero compelling reason not to have public pricing. Indeed, cloud costs are highly predictable and strictly based on usage; hence, it makes little sense from a market perspective to go for customized pricing for each client, as it increases sales friction while providing no added value for the client.

### 5. Can two machines failing bring down the app along with them?

In the cloud, the app layer should be properly decoupled from the hardware layer. In particular, hardware failures are accounted for and primarily handled by the cloud fabric, which reallocates VMs when facing hardware issues. The cloud does not offer better hardware, just a more resilient way of dealing with failures. In this respect, setting up a backup server for every single production server is a very non-cloud approach. First, it doubles the hardware cost, keeping half of the machines idle about 99% of the time; and second, it proves brittle in the face of Murphy's law, a.k.a. two machines failing at the same time.

As a final note, it's rather hard to tell the difference between a well-designed SaaS app powered by a regular hoster and the same app powered by a cloud. Then again, back to point 1: unless there is a reason to need the cloud, it won't make much difference anyway.

Tuesday, Jul 27, 2010

## Wish list for Relenta CRM

At Lokad, we have been using the Relenta CRM for nearly two years. It's an excellent lean CRM with a core focus on emails, which happen to represent about 90% of our interactions with clients and prospects. If you happen to be an ISV, Relenta is worth a closer look.

That said, I have been missing a few key features in Relenta for a long time. Hence, I am taking the time here to post my wish list for Relenta.

### 1. Accounts

Relenta only deals with Contacts; yet, when prospecting larger companies, many contacts are typically involved. It would be much nicer if it were possible to create 1-to-many relationships between Accounts and Contacts. In particular, this would let the Relenta user browse at a glance all the latest interactions related to a particular account, instead of jumping from one contact to the next.

### 2. Recent changes

One of the features I like the most in wikis is their ability to display recent changes. Through recent changes, you gain immediate insight into what other people are doing, without having to actually ask them.

Presently, there is no way to easily figure out who has been doing what in Relenta. It would be much nicer if a stream of recent updates were available for browsing. In particular, the display could be made more or less compact by aggregating updates per contact (or per account). Eventually, the stream could even be made available as RSS.

### 3. Activity capture API

The Lead Capture API of Relenta is a killer feature due to its simplicity. For an ISV, it's a super simple way to collect all the trial registrations that keep flowing through our online apps, with extremely limited integration grunt-work.

Yet, although it's very simple to automate Contact creation in Relenta, it's not possible to automate the insertion of Activities later on for the very same contact. This feature would be extremely handy for automatically reporting payments, or any kind of noticeable activity (in the case of Lokad, that would be large forecast retrievals, for example).

### 4. Refined tagging

Tagging is one of the best ideas of the Web 2.0 wave. It's a great way to organize complex yet loosely structured content.

Relenta already provides a minimal tagging system, yet there is no tag auto-completion (a killer feature), and it's not possible to search against multiple tags. Pushing a bit more work into tags would be a great step toward making the most of them.

### 5. iCalendar support

iCalendar is a very nice and popular format for sending meeting requests. Presently, Relenta does not support .ics attachments, and meeting requests appear completely garbled. It would be really nice if Relenta supported iCalendar, with the possibility of acknowledging meeting requests.

Saturday, Apr 17, 2010

## Stack Exchange 2.0: epic fail?

It's a sad thing to see a pair of brilliant entrepreneurs heading for a probable epic fail with the best of intentions in mind.

IMHO, the recently announced Stack Exchange 2.0 has a high probability of failure; and worse (as far as I am concerned), it might marginally hurt my business due to the lack of ongoing commitment to the forum I set up for my own company a few months ago.

Let's consider the situation:

• Stack Exchange is an excellent Q&A engine, something that the market had been waiting for a long time, as illustrated by the success of Stack Overflow.
• Stack Exchange 1.0 had a very reasonable business model, announced with an entry price of about $120/month. Now, Stack Exchange 2.0 comes with a business plan that urgently reminds me of the 37signals announcement of their $100 billion valuation:

When it comes to valuation, making money is a real obstacle. Our profitability has been a real drag on our valuation. We'll give away everything for free and let the market speculate about how much money we could make if we wanted to make money. That way, the sky's the limit!

So let's start by discussing the arguments proposed by the SE team for going with a free service:

1. SE did not have enough beta volunteers. They were expecting that "thousands of sites would start to sprout up on every possible topic" (sic).
2. People were prone to create "ghost-town sites that nobody visited" (sic). "Allowing anyone with a credit card to make a site" (sic) wasn't a good idea.
3. People were prone to create "multiple sites on the same topic" (sic).

### Expecting an overnight success

Well, as far as point 1 is concerned, those were just plain unrealistic expectations. Yes, I am 100% sure that SE 1.0 (Stack Exchange 1.0) could ultimately have had thousands of happily paying customers, but it turns out there is no overnight success. Stack Overflow was a quick success because it benefited from two famous bloggers with a huge, 100% focused audience.

Yet, SE 1.0 is a B2B product, and needs to be marketed and sold accordingly; and it turns out that the SO (Stack Overflow) audience is definitively not the right one to market SE to. The SO folks all happen to be software developers: little wonder that you get half a dozen SE spawns focusing on startups (but we will get back to this point).

Hence, SE 1.0 had not even been marketed yet, not to the relevant audience anyway.

Then, the beta branding alone is sufficient to scare away about 99% of all B2B prospects; software tools are sort of a one-of-a-kind exception here. Thus, to even get a chance of succeeding, SE would need to be branded as v1.0, and then start charging for it.

Companies don't trust “gratis” stuff, and rightly so. B2B is the contrary of B2C: you have to start charging to succeed (open source entered the business world the day people started to charge for it).

Building a community is a very significant investment. It takes a lot of time, typically years, and thousands of hours of work. SE 1.0 was beta. How could any reasonable organization be expected to make such a commitment to an unproven solution? For most businesses, a solution becomes proven when someone gets charged for it, and that someone claims it was money profitably spent.

Then, the SE team implicitly assumed that low traffic means implicit failure: those guys with credit cards don't know what they are doing.

The market gets it wrong, and we are going to do it better.

I am very skeptical when anyone (even bright people) pretends to know more than the market. If somebody is selling luxury yachts, then a single answered question might be worth millions of USD if it makes the difference in closing the deal.

If people (like me) are ready to pay $100/month for low-traffic websites, then there might be another explanation than those people not knowing how to spend their money.

Pushing the point further, how do ghost sites hurt the SE business in any way? Companies pay for the websites, nobody looks at those websites, nobody gets hurt by those websites. Take the money and stop whining.

### Websites on similar subjects are created

Considering that SE was primarily addressed to a near-monolithic audience (software developers), it's little wonder that many people had the same idea at the same time.

This is old behavior inherited from Economy 1.0: it's called competition.

And so what? Software editors sell their products to companies that happen to compete against each other. This is a healthy situation. At some point the market may decide to elect an “outstanding” winner, but most of the time the market does not, and competition keeps going.

Again, pretending to know more than the market is a wild assumption.

As a final point in this analysis of SE 1.0: the outlook was bright. It needed an official v1.0 release and some meaningful marketing outside the software crowd, and SE 1.0 was very likely to become a profitable business. Dropping SE 1.0 at this point almost looks like a bad case of fear of success.

### Outlook on Stack Exchange 2.0

The SE 2.0 business model claims the service will be offered for free. Yet, there is no such thing as free in business. Most likely, SE 2.0 will be paid for through the advertising tax, which happens to be extremely expensive.

In particular, this business model will be way too expensive for most organizations to commit any significant resources to.

Basically, SE 2.0 is positioning itself in the attention-sharing economy, trying to re-introduce the old byzantine Usenet rules. The intrinsic problem is that creating a community is very different from actually creating business value.

For example, Wikipedia is a worldwide community success, but its actual business value is zilch: you might donate to Wikipedia, but you can't invest in Wikipedia and expect any financial reward.

Then, SE 1.0 had low-profile, unknown competitors; SE 2.0 is heading into a direct confrontation with the big guys - Google, Facebook, LinkedIn - which will, no doubt, deliver their own variants (if SE 2.0 proves to gain some traction), marketed much more effectively through their social communities.

As a final note, I am hoping, if the SE team sticks to its plans, that the market will quickly fill the room left empty by the now-defunct SE 1.0. Contenders are already in place.

Thursday, Jan 14, 2010

## Table Storage gotcha in Azure

Table Storage is a powerful component of Windows Azure Storage. Yet, I feel that there is quite significant friction when working directly against the Table Storage, and it really calls for more high-level patterns.

Recently, I have been toying with the v1.0 of the Azure tools released in November '09, and I would like to share a couple of gotchas with the community, hoping it will save you a few hours.

### Gotcha 1: no REST-level .NET library is provided

Contrary to the other storage services, there is no .NET library provided as a wrapper around the raw REST specification of the Table Storage. Hence, you have no choice but to go with ADO.NET Data Services.

This situation is rather frustrating, because ADO.NET does not really reflect the real power of the Table Storage. Intrinsically, there is nothing fundamentally wrong with ADO.NET; it just suffers from the law of leaky abstractions, and yes, the table client is leaking.

### Gotcha 2: constraints on table names are specific

I would have expected all the storage units (that is to say, queues, containers and tables) in Windows Azure to come with similar naming constraints. Unfortunately, that's not the case; table names do not support hyphens, for example.

### Gotcha 3: table client performs no client-side validation

If your entity has properties that do not match the set of supported property types, then those properties get silently ignored. I got burned by an int[] property that I was naively expecting to be supported. Note that I am perfectly fine with the limitations of the Table Storage; yet, I would have expected the table client to throw an exception instead of silently ignoring the problem.

Similarly, since the table client performs no validation, DataServiceContext.SaveChangesWithRetries behaves very poorly with the default retry policy: a call failing because, say, an entity already exists in the storage is going to be attempted again and again, as if it were a network failure. In this sort of situation, you really want to fail fast, not spend 180s re-attempting the operation.
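For illustration only, here is a minimal sketch of the fail-fast behavior I have in mind. It assumes a TableServiceContext named `ctx` with pending changes, and relies on DataServiceRequestException exposing the per-operation HTTP status codes; adapt it to your own context.

```csharp
// Sketch: distinguish a non-transient failure (the entity already
// exists, HTTP 409 Conflict) from a genuine transient failure, and
// fail fast instead of re-attempting the call for 180s.
try
{
    ctx.SaveChanges(SaveChangesOptions.Batch);
}
catch (DataServiceRequestException ex)
{
    // DataServiceResponse enumerates one OperationResponse per operation.
    bool alreadyExists = ex.Response.Any(r => r.StatusCode == 409);
    if (!alreadyExists)
        throw; // plausibly transient: let the retry policy handle it
    // Non-transient: handle the duplicate entity right here, no retry.
}
```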

### Gotcha 4: no batching by default

By default, DataServiceContext.SaveChanges does not save entities in batches, but performs one storage call per entity. Obviously, this is a very inefficient approach if you have many entities. Hence, you should really make sure that SaveChanges is called with the option SaveChangesOptions.Batch.

### Gotcha 5: paging takes a lot of plumbing

Contrary to the Blob Storage library, which abstracts away most of the nitty-gritty details such as the management of continuation tokens, the table client does not. You are forced into a lot of plumbing to perform something as simple as paging through entities.

Then, back to the method SaveChanges: if you need to save more than 100 entities at once, you will have to deal with the capacity limitations of the Table Storage yourself. Simply put, you will have to split your calls into smaller ones; the table client doesn't do that for you.
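As a sketch of that splitting (assuming a DataServiceContext named `ctx`, a list named `entities` whose items all share the same partition key - a requirement for entity group transactions - and a hypothetical table named "MyTable"):

```csharp
// Sketch: an entity group transaction is capped at 100 entities (and
// 4MB of payload), so the entities are persisted in chunks of 100.
const int MaxBatchSize = 100;
for (int skip = 0; skip < entities.Count; skip += MaxBatchSize)
{
    foreach (var entity in entities.Skip(skip).Take(MaxBatchSize))
    {
        ctx.AddObject("MyTable", entity); // stage the insert
    }
    // one storage call per chunk, instead of one call per entity
    ctx.SaveChanges(SaveChangesOptions.Batch);
}
```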

### Gotcha 6: random access to many entities at once takes even more plumbing

As outlined before, the primary benefit of the Table Storage is to provide a cloud storage much better suited than the Blob Storage for fine-grained data access (up to 100x cheaper, actually). Hence, you really want to grab entities in batches of 100 whenever possible.

It turns out that retrieving 100 entities following a random access pattern (within the same partition, obviously) is really far from straightforward. You can check my solution posted on the Azure forums.

### Gotcha 7: table client supports limited tweaks through events

Although there is no REST-level API available in the StorageClient, the ADO.NET table client does support limited customization through events: ReadingEntity and WritingEntity.

It took me a while to realize that such customization was possible in the first place, as those events feel like outliers in the whole StorageClient design. It's about the only part where events are used, and leveraging side effects on events is usually considered really brittle .NET design.
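As an illustration, here is the rough shape of a WritingEntity hook; `MyEntity` and its `Tags` property are hypothetical, and the XML patching follows the ADO.NET Data Services Atom payload format:

```csharp
// Sketch: WritingEntity fires just before the entity is serialized;
// args.Data is the Atom XML payload about to be sent, and it can be
// patched to persist a property the table client would ignore.
XNamespace d = "http://schemas.microsoft.com/ado/2007/08/dataservices";
XNamespace m = "http://schemas.microsoft.com/ado/2007/08/dataservices/metadata";

ctx.WritingEntity += (sender, args) =>
{
    var entity = (MyEntity)args.Entity;
    // 'Tags' is an int[] silently dropped by the client (see gotcha 3);
    // store it as a supported string property instead.
    var csv = string.Join(",", entity.Tags.Select(i => i.ToString()).ToArray());
    var props = args.Data.Descendants(m + "properties").Single();
    props.Add(new XElement(d + "TagsAsCsv", csv));
};
```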

Stay tuned for an O/C mapper to be included in Lokad.Cloud for Table Storage. I am still figuring out how to deal with overflowing entities.
