Author

I am Joannes Vermorel, founder at Lokad. I am also an engineer from the Corps des Mines who initially graduated from the ENS.

I have been passionate about computer science, software matters and data mining for almost two decades.

Monday, March 19, 2012

Bizarre pricing, does it matter? (B2B)

My company has just released a quantile forecasts upgrade. It's no less than a small revolution for us; however, unless you've got some inventory to manage, it's probably not too relevant to your business.

Another salient aspect is our new pricing for quantiles (the old pricing for classic forecasts remains untouched). Lokad is selling a monthly subscription, and if $q_i$ represents one of the actual quantile values retrieved by the client during the month, then the monthly cost $C$ is given by:

$$C = \$0.15 \times \left(\sum_{i=0}^n q_i^{2/3} \right)^{2/3}$$

We hesitated to round 0.15 to $\frac{\pi}{20}$ because formulas look better with Greek letters. Obviously, it's not simple, and most people would go as far as saying it's downright obscure. But is it really a good pricing, or just plain insanity?
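For concreteness, here is a minimal Python sketch of the cost computation, assuming `quantiles` holds the quantile values $q_i$ retrieved during the month; the function name and the example figures are purely illustrative, not our actual billing code.

```python
def monthly_cost(quantiles, rate_usd=0.15):
    """Monthly subscription cost: rate * (sum of q_i^(2/3))^(2/3)."""
    total = sum(q ** (2.0 / 3.0) for q in quantiles)
    return rate_usd * total ** (2.0 / 3.0)

# Example: 1,000 quantile values of 50 units each.
print(round(monthly_cost([50.0] * 1000), 2))
```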

To understand a bit where Lokad is coming from, let's start with the fact that we are a B2B software company. About 95% of competitors don't have any kind of public pricing: you can only ask for a quote, and then a talented sales guy will contact you to figure out your maximum budget, only to get back to you with a quote at 120% of the figure you gave him.

However, I strongly favor public pricing, not because it's more transparent, honest, fair, whatever, but because it's a massive time saver. At Lokad, we don't enter into time-consuming pricing negotiations except for the largest clients, where it does make sense to spend time negotiating.

The cardinal rule of software pricing is that it should capture the willingness to pay of the client, which, in B2B, is typically related to the economic gains generated by the usage of the product. In the case of demand forecasting, benefits can be accurately computed. However, turning this forecasting-benefits formula into a pricing formula is insanely complex in the general case.

Hence, we decided to settle for heuristics that somehow mimic this theoretical willingness to pay, ran many simulations over our existing customer base, and finally figured out the formula. I do not claim that this pricing formula is optimal in any way: it is not. However, it does bring a very reasonable pricing for clients ranging from 1-man companies to companies with 100,000+ employees.

Pros:

  • (As far as we can judge) It's aligned with the value Lokad creates for clients.
  • It's still simple enough to be memorized in 20s.
  • It does not create an incentive to game the pricing by excluding slow movers (i.e. products with low sales) from the forecasting process.
  • There is no threshold effect, where the pricing jumps to a much larger number just because the company has 1 more product than what the license would support.

Cons:

  • It certainly falls into the category of bizarre pricing.
  • The only way to know for sure the real monthly cost is to give it a try (1). 
  • Some prospects try the pricing formula on their own, and get it wrong (2).

(1) This statement applies to most metered SaaS, even if the pricing is linear. For example, at Lokad we had very little clue about our exact bandwidth consumption until we migrated toward the cloud (with dedicated servers, bandwidth was part of the package).

(2) I believe this partly explains why 95% of our competitors don't put any public price on display. That, and the fact that a very expensive pricing is likely to scare away prospects before the vendor gets the chance to corner them in the sales process.

I would be interested to see if other B2B niches have designed their own bizarre pricing formulas. Don't hesitate to submit them in comments.

Wednesday, February 22, 2012

Cloud questions from Syracuse University, NY

A few days ago, I received a couple of questions from a student of Syracuse University, NY, who is writing a paper about cloud computing and virtualization. The questions are relatively broad, so I am taking the opportunity to post the answers directly here.

What was the actual technical and business impact of adopting cloud technology?

The technical impact was a complete rewrite of our codebase. It has been the largest upgrade ever undertaken by Lokad; it spanned over 18 months, more or less mobilizing the entire dev workforce during the transition.

As far as business is concerned, it meant that most of the business of Lokad during 2010 (the peak of our cloud migration) was stalled for a year or so. For a young company, 1 year of delay is a very long time. 

On the upside, before the migration to the cloud, Lokad was stuck with SMBs: serving any mid-large retail network was beyond our technical reach. With the cloud, processing super-large retail networks has become feasible. 

What, if any, negative experience did Lokad encounter in the course of migrating to the cloud?

Back in 2009, when we started to ramp up our cloud migration efforts, the primary problem was that none of us at Lokad had any in-depth experience of what the cloud implies as far as software architecture is concerned. Cloud computing is not just any kind of distributed computing; it comes with a rather specific mindset.

Hence, the first obstacle was to figure out by ourselves patterns and practices for enterprise software on the cloud. It has been a tedious journey to end up with Lokad.CQRS, which is roughly our 2nd generation of native cloud apps. We rewrote everything for the cloud once, and then we did it again to get something simpler, leaner, more maintainable, etc.

Then, at the present time, most of our recurring cloud problems come from integrations with legacy pre-Web enterprise software. For example, operating through VPNs from the cloud tends to be a huge pain. In contrast, modern apps that offer a REST API are a much more natural fit for cloud apps, but those are still rare in the enterprise.

From your current perspective, what, if anything, would you have done differently?

Tough question, especially for a data analytics company such as Lokad, where it can take 1 year to figure out the 100 magic lines of code that will let you outperform the competition. Obviously, if we had to rewrite Lokad again from scratch, it would take us much less time. However, that would be dismissing the fact that the bulk of the effort has been the R&D that made our forecasting technology cloud native.

The two technical aspects where I feel we have been hesitating for too long were SQL and SOAP.

  • It took us too long to decide to ditch SQL entirely in favor of some native cloud storage (basically the Blob Storage offered by Windows Azure).
  • SOAP was a somewhat similar case. It took us a long time to give up on SOAP in favor of REST.

In both cases, the problem was that we (or maybe it was just me) had not fully accepted the extent of the implications of a migration toward the cloud. We remained stuck for months with older paradigms that caused a lot of unneeded friction. Giving up on those from Day 1 would have saved a lot of effort.

Monday, January 23, 2012

Goodbye Subversion, you served me well

I had been a long-time Subversion user even before I started my company. Since 2006, the data analytics core of Lokad had been managed over SVN, which proved to be a very robust piece of software (combined with TortoiseSVN).

We had a few hiccups where the easiest way forward was to delete the local version and check out again, but otherwise, our SVN hoster has been operating flawlessly over 5 years, which is a long time as far as software technology goes.

After more than 13,000 commits over SVN, we have finally migrated the forecasting core, the 2nd most complex software part, right after accounting and billing :-), to Git.

Internally, after hesitating a lot between Mercurial and Git, we finally opted for Git, primarily because of GitHub, where we now host our open source projects.

There is a bit of nostalgia in watching good old tools depart. I am wondering whether Git will last for the next half-decade, or whether it will be supplanted by something that will make it look pale in comparison. 

My personal take for the next 5 years: Git will stay, but the technology battle will shift toward the collaborative tools that operate over Git (or Hg).

Wednesday, January 11, 2012

MathJax, at last a decent way to post maths on the web

For a long time, posting something as simple as a square root on the web has been a major pain. Despite MathML having been around for years, Firefox is still the only browser (that I know of) to render MathML correctly.

$$p=\Phi\left(\sqrt{2\ln\left(\frac{1}{\sqrt{2\pi}}\frac{M}{H}\right)}\right)$$

Recently, I stumbled upon MathJax, an outstanding JavaScript rendering engine for mathematics that works in all major recent browsers. The syntax is derived from LaTeX, and the output is either MathML (if you have Firefox) or plain HTML/CSS otherwise.

Thanks to MathJax, I have been able to post a long-delayed analysis about optimal service levels (hence the illustrative formula here above) and economic order quantity. Kudos to the MathJax team!

Tuesday, December 20, 2011

Instant transfer with Bitcoin but without 3rd parties

Update 2012-05-17: Double spending can be made extremely difficult through quasi-instant double spending attempt detection. See TransactionRadar.com as an illustration. I now believe that the ideas posted below are moot, because early double spending detection is just the way to go.

Bitcoin is a crypto-currency (check out my previous post for some more introductory thoughts) that provides many desirable properties: decentralization, very low transaction fees, being digital-native, ... However, enabling instant payment has not been a forte of Bitcoin so far. It's very noticeable that people even raised funds to address this problem with a trusted 3rd party setup.

In this post, I will try to describe a convention that would offer instant (1), secure (2), decentralized (3) transactions with Bitcoin (4).

Let's start by clarifying the scope of this claim:

  1. Instant. There is no such thing as real-time on the Internet, if only because of the speed of light. Here, I am considering as instant anything below 10 seconds, which would be sufficient for the vast majority of the mundane uses of a currency, such as shopping.
  2. Secure. With Bitcoin, a transaction can be propagated in the network within seconds, yet the transaction only becomes secured - aka with no further possibility of double spending - once the transaction has been included into the blockchain (a 6-block inclusion being the default of the Bitcoin client). Obviously, this requirement somewhat conflicts with the previous one, because 6 blocks represent about 1h on average (10 min per block being the target speed of Bitcoin).
  3. Decentralized. The solution to reconcile 1 and 2 should not rely on a trusted 3rd party. I hold no grudge against BitInstant, but if a solution exists to do the same thing without middlemen, then I believe it will only make Bitcoin stronger.
  4. Bitcoin. The solution should preserve the Bitcoin protocol as it exists today, requiring no upgrade of the community, except for those who would like to leverage instant payments. It's a convention in the usage of Bitcoin that I am referring to: it fits into the existing protocol spec. Those who don't want to follow this convention can safely ignore the whole thing.

Disclaimer: I am neither a cryptographer nor a security expert, merely an enthusiastic Bitcoin user.

The core idea of my proposal is to introduce a twist in the notion of security: instead of a strict prevention of double spending, let's make double-spending more expensive than the expected benefit. Indeed, if double-spending becomes possible but only at a steep cost (cost being expressed in Bitcoin too), then there is no incentive to actually make any widespread use of the double-spending trick for instant payments. With this twist, we accept the possibility of double spending, but only because it's highly inefficient for the attacker. It will not prevent a crazy attacker from doing some damage, but from a global perspective, the overall damage through this twist should stay insignificant (because there are so many better ways to wreak havoc anyway if you're willing to spend money on the case).

For the convention that reconciles 1, 2, 3 and 4, I use two ingredients:

  • A Bitcoin address that is provably expensive: the setup cost of the address is X BTC. 
  • A mechanism that guarantees that no double-spending attack took place for the address in the past (blockchain-wise).

Usual Bitcoin addresses are quasi-free (the CPU cost to generate a new address is negligible), but it's not difficult to produce a Bitcoin address that comes with a provable cost. The easiest way is to go for monetary destruction with a transaction that targets /null. Yet, destroying coins is not entirely satisfying. 

Thus, in order to prove the value of the address AX, I propose to have a transaction originating from AX only (only 1 input) that by convention redistributes its value to the coinbase addresses (*) of 10 consecutive blocks that are less than 1 month old (at the time of the proof).

(*) The coinbase address is the address of the first transaction of the block, used by the miner himself to capture his reward.

Indeed, we cannot rely on the transaction fee alone to prove the cost of an address, because a miner could decide to create a fictitious high-fee transaction in a block - fictitious in the sense that the fee would cost nothing to the miner, who would immediately recover the fee through the ownership of the block.

Yet, by targeting 10 consecutive blocks, we prevent any miner from fully self-rewarding with the transaction. Indeed, blocks are assigned based on a lottery where the odds are proportional to the processing power injected in the process. A "smart" miner would, at best, be able to target one (**) of his own blocks, lowering the cost by 10%, which does not compromise the pattern (the cost remains very real).

(**) Some super-heavy mining pool, like deepbit, could push the leverage further; but having a single mining operator representing more than 1/2 of the total hashing power of Bitcoin is a big problem for Bitcoin anyway, so I am assuming here that no operator holds more than a fraction of the total computing power available.

Then, the 1-month restriction is just there to increase the odds that the coins do not get lost. Indeed, since the owners of the targeted addresses do not expect further funds to be pushed to those addresses, they may not even monitor them once they have been emptied. Yet, with the 1-month limit, the lucky reward will not go unnoticed.

Another argument in favor of rewarding the coinbase addresses is that it increases the incentive on mining efforts, hence strengthening Bitcoin as a whole.
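To make the convention a bit more tangible, here is a hypothetical Python sketch of how a client could check that a proof transaction follows it; the dictionary fields and the 30-day cut-off are illustrative simplifications of mine, not the actual Bitcoin data model.

```python
from datetime import timedelta

def proven_cost_btc(proof_tx, blocks_by_height, now):
    """Return the provable setup cost (in BTC) behind a proof transaction,
    or 0.0 when the transaction does not follow the convention."""
    # The proof must originate from a single input address.
    if len(proof_tx["inputs"]) != 1:
        return 0.0
    heights = sorted(out["block_height"] for out in proof_tx["outputs"])
    # Outputs must target the coinbase addresses of 10 consecutive blocks.
    if len(heights) != 10 or heights != list(range(heights[0], heights[0] + 10)):
        return 0.0
    for out in proof_tx["outputs"]:
        block = blocks_by_height[out["block_height"]]
        if out["address"] != block["coinbase_address"]:
            return 0.0
        # The targeted blocks must be less than 1 month old at proof time.
        if now - block["timestamp"] > timedelta(days=30):
            return 0.0
    return sum(out["value_btc"] for out in proof_tx["outputs"])
```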

Based on the convention established here above, we now have a way to prove that a Bitcoin address did cost at least X BTC to its owner. Yet, we still need a way to be sure that no double-spending attack has already been done.

Here, the intuition is the following: you cannot prevent double-spending with instant payment (aka without block validation), but you can expose the double-spending attack afterward, which will destroy the trust invested in the provably expensive address.

Let Alice be the honest merchant who offers instant Bitcoin payment; let Bob be the bad guy who is trying a double-spending attack on Alice.

At the moment of the transaction, Bob gives to Alice the content of the transaction Tx1 that has 1B as input (the address of Bob, proven to be expensive) and 1A as output (the address of Alice). Yet, at the very same time, Bob is issuing another transaction Tx2 that empties the address 1B. As a result, after a while, Alice realizes that Tx1 has been rejected.

It's now time for Alice to retaliate by exposing Bob. In order to do that, Alice produces a small dummy transaction to herself where the transaction Tx1 is recursively embedded as data through a convention based on OP_DROP (***). Once the transaction Tx1 is exposed, the community of merchants who, like Alice, accept instant transactions can witness that 1B cannot be trusted any more, because the cumulative effect of the transaction Tx2 going out of 1B and of the exposed transaction Tx1 (which never made its way to the blockchain) leads to a negative coin amount on 1B.

(***) For the sake of concision I am leaving out the tiny specifics of how exactly this recursive transaction embedding should be implemented. Anyway, based on my understanding of Script, it's perfectly possible to recursively embed a transaction (treated as data) into another transaction.
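As an illustration of the retaliation step, here is a rough Python sketch of the check that merchants could run once Tx1 is exposed; the amounts are passed in directly, whereas a real implementation would extract them from the blockchain and from the embedded Tx1.

```python
def address_untrustworthy(exposed_tx1_outputs_btc, confirmed_outflows_btc,
                          balance_before_btc):
    """Flag the sender address once the exposed (never confirmed) Tx1, combined
    with the outflows already recorded in the blockchain, exceeds what the
    address actually held - i.e. the double-spending attempt becomes visible."""
    total_spent = sum(exposed_tx1_outputs_btc) + sum(confirmed_outflows_btc)
    return total_spent > balance_before_btc

# Example: 1B held 5 BTC, the confirmed Tx2 drained 5 BTC, the exposed Tx1 claims 3 BTC.
print(address_untrustworthy([3.0], [5.0], 5.0))  # True: 1B can no longer be trusted
```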

At this point, we have a system where Bob, the bad guy, cannot hurt Alice, the merchant (recipient), without getting some retaliation. Yet, what if Alice is a bad merchant and Bob the honest client? Could Alice hurt Bob just for the sake of breaking the community's trust in his provably expensive address 1B?

We need one final touch to the convention to protect Bob, the sender, from a false accusation by Alice. In order to achieve that, Bob should make sure each transaction Tx1 emitted from 1B, his provably expensive address, is broadcast to the network, and not just given to Alice. By doing this, Bob ensures that Tx1 will make its way to the blockchain and prevents Alice from reporting 1B as dishonest (to be safe, Bob is better off putting some transaction fee in Tx1 to guarantee a speedy chain inclusion).

Implementing the convention

As far as I can tell, the proposal does not involve any breaking change. Ideally, the convention would make its way into the Bitcoin client (or a dedicated fork) to support 3 extra features:

  • Spending BTC to increase the trust level on a particular Bitcoin address.
  • Performing instant transactions channelled through the "expensive" Bitcoin address.
  • Reporting the "cost" of the address for the incoming transactions. 

Then, there are many small details that would need to be polished, such as the delay for the community to decide whether trust is lost on an address after being reported. The convention as a whole can also probably be polished further.

Anonymous payments

This convention would be one step further in making Bitcoin less anonymous than it is today. Considering the scope of application of instant payments, it does not seem (to me) too much of a problem. If you really want to stay anonymous, then entering a retail store isn't top-notch anyway. Alternatively, for eCommerce, the 1h payment delay is mostly a non-issue (except maybe for pizza delivery).

In real life

Instant payments are needed for small purchases: you typically don't need to transfer both a big amount AND to do it instantly; it's one or the other. Whether an instant payment of X BTC made from a proven Y BTC address should go through instantly should be left to the merchant itself.

With a 10 BTC proof, it would be reasonable to accept instant payments up to 10 BTC (maybe a bit less, assuming a self-serving-miner scenario). Coordinating a triple-spending (or more) in real life seems complicated (but not impossible), and I seriously doubt people would actually bother with such a complex scheme except to demonstrate its feasibility. Indeed, the stakes would be very limited anyway, as anything large would go the usual route of non-instant payments. 

Then, looking at recurring payments from the same customer address would also be a way to gradually increase the confidence cap (from the merchant's viewpoint) for instant payments, even without asking the client to increase his proof.
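As an illustration only, a merchant-side acceptance rule could look like the sketch below; the 10% self-serving-miner discount and the per-payment confidence bonus are assumptions of mine, not part of any established convention.

```python
def accept_instant_payment(amount_btc, proven_cost_btc, past_good_payments=0,
                           miner_discount=0.10, bonus_per_payment_btc=0.5):
    """Accept an instant payment if it stays below the proven cost of the sender
    address (discounted for the self-serving-miner scenario), plus a confidence
    bonus that grows with past successful payments from the same address."""
    cap = proven_cost_btc * (1.0 - miner_discount)
    cap += bonus_per_payment_btc * past_good_payments
    return amount_btc <= cap

# Example: 10 BTC proof, first-time customer, 8 BTC purchase.
print(accept_instant_payment(8.0, 10.0))  # True
```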

Compared to a rough 2% middleman fee (based on the pricing of BitInstant), I feel that the provably expensive address would be amortized in less than 1 year considering weekly purchases. Not a deal breaker, but still an option probably worth having a look at, considering the positive side effect on the mining side.
