On choosing the right block size for Bitcoin

The Bitcoin community has been debating the right size of a block for almost its entire existence. Two years ago, these debates led to the split between Bitcoin Cash - opting for larger blocks - and Bitcoin Core - opting for 1MB-forever blocks [1]. Yet, while many members of both communities appear to hold firm opinions on choosing the adequate block size, I have casually observed a lot of misconceptions concerning this topic. Bitcoin being a complex system, it is unfortunately not possible to single out this one setting without embracing the rest of the picture as well. Having pondered the case extensively, I would like to offer some insights on this matter.

In the following, Bitcoin Cash is always referred to as Bitcoin.

Let’s start by clarifying the goals of Bitcoin as a system; otherwise we won’t even know what we are optimizing these blocks for. From the end-user perspective of Bitcoin used as cash, we want to be able to transact cheaply, quickly and reliably.

Then, as a counterpart to those goals, we want to keep Bitcoin a vibrant, open ecosystem of wallets, exchanges and apps, where the overhead associated with active participation in the network [2] is kept as low as possible - but not so low that we would lose the capacity to address the end-user requirements stated above.

From those goals, it immediately follows that the “right” block size - assuming that bigger blocks mean more overhead for operators [3] - depends profoundly on the size of the community. If Bitcoin is used on a daily basis by 1 billion users, it would be fine [4] for the cost of operating over the Bitcoin network to sit at, say, 20,000 USD per month; while clearly this would not be fine if Bitcoin is only used on a daily basis by 10,000 users.
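The point above can be made concrete with a back-of-the-envelope calculation using the figures from the text (the numbers are illustrative, not measurements):

```python
# Amortize a given operator overhead over the daily user base: the same
# monthly cost is negligible per user at scale, and prohibitive on a
# small network. Figures are the illustrative ones from the text.

def overhead_per_daily_user(monthly_cost_usd: float, daily_users: int) -> float:
    """Operator overhead amortized per daily user, in USD per month."""
    return monthly_cost_usd / daily_users

# 20,000 USD/month spread over 1 billion daily users: 0.002 cents each.
at_scale = overhead_per_daily_user(20_000, 1_000_000_000)

# The same 20,000 USD/month over 10,000 daily users: 2 USD each, per operator.
small_network = overhead_per_daily_user(20_000, 10_000)

print(f"{at_scale:.8f} USD/user/month vs {small_network:.2f} USD/user/month")
```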

Let’s review the most common misconceptions concerning the Bitcoin block size.

Blocks should never be full. No, a predictable size limit is required to preserve the integrity of the network. The cost of operating a system (software + hardware) over the Bitcoin network is largely driven by the maximal transaction throughput that this system can support. Without a throughput limit in place, an attacker can cheaply flood the network with paid-for transactions and create widespread downtimes for other operators. For example, at 0.1 cent per transaction and without any block size limit, it would only cost 10,000 USD to create a 2.5 GB block which - considering the state of the Bitcoin ecosystem as of September 2019 - would crash nearly every single app, including all the major Bitcoin full-node implementations. Thus, under certain, ideally rare, conditions, fees should be able to rise sharply, if only as a defense mechanism against denial-of-service attacks.
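The flooding figures above can be checked with a quick calculation, assuming an average transaction size of 250 bytes (an assumption for illustration, not a consensus constant):

```python
# Back-of-the-envelope cost for an attacker to fill one block of a given
# size, assuming ~250 bytes per transaction (illustrative assumption).

AVG_TX_BYTES = 250

def flood_cost_usd(block_bytes: int, fee_per_tx_usd: float) -> float:
    """Cost for an attacker to fill one block of the given size."""
    tx_count = block_bytes // AVG_TX_BYTES
    return tx_count * fee_per_tx_usd

# A 2.5 GB block at 0.1 cent (0.001 USD) per transaction:
cost = flood_cost_usd(int(2.5 * 10**9), 0.001)
print(f"{cost:,.0f} USD")  # matches the 10,000 USD figure in the text
```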

The block size is a good proxy of the long term overhead. While some may argue that raising the block size “centralizes” Bitcoin because it increases the network overhead, in reality the block size is a rather incomplete indicator as far as economic overhead goes. Cost analysis of the Bitcoin network indicates that, at scale, the size of the UTXO dataset (the unspent transaction outputs) is the dominant factor [5] contributing to the TCO (total cost of ownership) of any app that operates on-chain. This explains why the dust limit, a much less discussed configuration setting, is - arguably - even more critically important than the block size limit. While lowering the dust limit gives end-users more freedom to perform micro-transactions, it also exposes the network to rapid growth of the UTXO dataset, which - because of its permanent nature - is a lot more damaging than a transient spike of transactions. Nevertheless, as of September 2019, most Bitcoin apps index the whole blockchain, sometimes out of necessity but frequently out of convenience, as alternatives do not exist (yet).
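A toy cost model helps see why the UTXO dataset dominates: block bytes are processed once, while every unspent output must be kept in fast storage for as long as it remains unspent. The unit prices below are made up for illustration, not measured figures:

```python
# Toy monthly cost model (illustrative unit prices, not measurements):
# UTXO bytes incur a recurring hot-storage cost every month, whereas
# newly mined block bytes are a one-off processing cost.

GB = 10**9

def monthly_cost_usd(utxo_bytes: int, new_block_bytes: int,
                     hot_storage_usd_per_gb_month: float = 0.25,
                     processing_usd_per_gb: float = 0.02) -> float:
    recurring = (utxo_bytes / GB) * hot_storage_usd_per_gb_month  # paid every month
    one_off = (new_block_bytes / GB) * processing_usd_per_gb      # paid once
    return recurring + one_off

# A 50 GB UTXO dataset keeps billing every month, while the 1 GB of new
# blocks that month is paid for once - the recurring term dominates.
print(round(monthly_cost_usd(utxo_bytes=50 * GB, new_block_bytes=1 * GB), 2))
```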

Frequent full blocks are fine. No: if more than a few percent of blocks are full - i.e. reaching the block size limit as per consensus rules - over a long period, say 1000 blocks (approximately one week), then Bitcoin gets into serious problems. Transaction fees turn into an auction for an artificially rare resource - the block space. Economics 101 predicts that the costs of rationed goods facing inelastic demand will skyrocket until alternatives are found. Economics 101 also predicts that middlemen are likely to appear to take advantage of the situation, through a mechanism known as the quota rent. Economics 101 is right, and this is precisely the situation observed with Bitcoin Core: not only are fees as high as 10 USD routinely observed as a direct consequence of the 1MB block size acting as a quota, but companies [6] have also positioned themselves as middlemen who benefit from the ongoing enforcement of the quota.

Wallets should be smarter with their fees to avoid bad UX. While it is true that wallets should avoid bad UX, Bitcoin still lacks, at its core, proper means of assessing whether a transaction will be included in the next block. This is why pre-consensus is so critically important to further improve Bitcoin [7]. With pre-consensus, psychic abilities are no longer required from wallets: if the wallet gets the fee wrong, a higher fee is retried within seconds. The end-user might still suffer, once in a while, a multi-cent fee and a 10 second lag, but transaction flooding attacks become dramatically more expensive. Blaming wallets for poor fee handling is (mostly) barking up the wrong tree; this problem has to be addressed by a better consensus mechanism.
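The retry behavior described above can be sketched as follows. This is a hypothetical illustration, not a real wallet API: the fast accept/reject signal is replaced by a stand-in fee-floor check, and all names (`min_fee_satoshis`, `broadcast_with_retries`) are invented for the example:

```python
# Hypothetical sketch: with a fast accept/reject signal from pre-consensus,
# a wallet no longer needs to guess the fee upfront - it can bump the fee
# and retry within seconds until the transaction is accepted.

def broadcast_with_retries(fee: int, min_fee_satoshis: int,
                           bump_factor: float = 2.0, max_tries: int = 5):
    """Return (final_fee, attempts) once the network signals acceptance."""
    for attempt in range(1, max_tries + 1):
        accepted = fee >= min_fee_satoshis  # stand-in for pre-consensus feedback
        if accepted:
            return fee, attempt
        fee = int(fee * bump_factor)  # bump the fee and retry within seconds
    raise RuntimeError("transaction not accepted after max retries")

# Starting at 100 satoshis against a 350-satoshi floor: 100 -> 200 -> 400.
print(broadcast_with_retries(100, 350))  # (400, 3)
```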

Mining pools should produce blocks of X megabytes. (Replace X with 8, 32 or 128 depending on your affiliation.) As of September 2019, the 90-day moving average of the Bitcoin block size has not exceeded 400KB since August 2017, and the average block size has been hovering around 200KB over this 2-year period. As a rule of thumb, keeping the block size limit at 10x the average block size observed over the last year is a reasonable guesstimate to stay clear of any quota effect and its skyrocketing fees. Moreover, keeping the limit predictable months in advance is highly desirable, because businesses [8] need to plan their budgets and roll out whatever investments are needed to support their Bitcoin-related operations.
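The rule of thumb above can be sketched in a few lines; the sample block sizes below are made up for illustration:

```python
# Rule of thumb from the text: keep the soft cap at roughly 10x the
# average block size observed over the last year, to stay clear of any
# quota effect. Sample sizes are illustrative, not real chain data.

def suggested_soft_cap_bytes(observed_block_sizes: list[int],
                             headroom: int = 10) -> int:
    """10x the observed average, per the rule of thumb in the text."""
    average = sum(observed_block_sizes) // len(observed_block_sizes)
    return headroom * average

# Blocks hovering around 200 KB suggest a ~2 MB soft cap:
sizes = [180_000, 210_000, 200_000, 220_000, 190_000]
print(suggested_soft_cap_bytes(sizes))  # 2000000 bytes, i.e. 2 MB
```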

Increasing the block size is easy: change the code, recompile, done. This approach is a recipe for disaster. At the present time, most of the Bitcoin software infrastructure remains dramatically under-engineered [9] as far as on-chain scaling is concerned. Worse, the software is riddled with accidental choke points that cannot be addressed by simply throwing more hardware at the problem. These problems are fixable - no rocket science involved - but they require sizeable investments in high quality infrastructure code.

Miners compete over the mempool backlog. Yes and no, but mostly no. The mempool is a “design accident” of the original Bitcoin client written by Satoshi. From a software engineering perspective, RAM is the most expensive place to keep Bitcoin transactions (compared to alternative data storage options such as SSDs). Having transactions lingering in limbo because they don’t fit in a block is bad for everybody. For end-users, it’s downright confusing. For operators, it’s costly. The sane networking option consists of achieving low latency consensus on whether a transaction will make it into the next block, and dropping the transaction if it won’t. Again, pre-consensus is the key to a deep fix from this angle.

Mining pools compete on block sizes. As of September 2019, 32 MB is the maximum block size per consensus rule, while 2 MB is the maximum “soft” limit defined by Bitcoin ABC. Mining pools have the option to modify this 2 MB soft limit to any value, as long as it remains under 32 MB. This state of affairs does not make much sense. Once a large block is mined and successfully propagated, the whole Bitcoin network incurs the long term costs associated with this block. The blockchain, or rather the UTXO and its updates, are “commons” (as in “the tragedy of the commons”). The security model of Bitcoin relies on trusting the miners in aggregate, while not trusting any single one of them in particular. Thus, the degradation of the “commons” cannot be left as an option to any single miner. While the block size should ultimately become market-driven [10], it remains critical to achieve low latency consensus on whether a transaction will make it into the next block, irrespective of which block publisher wins the next iteration of the block propagation game.

In conclusion, as of September 2019, there is no emergency whatsoever as far as the block size of Bitcoin is concerned. The consensus cap at 32 MB vastly exceeds what the Bitcoin ecosystem requires. A soft cap at 2 MB is a reasonable guesstimate for the next quarter. On-chain scaling remains critical, but it takes a lot more than just bumping the block size setting upward.


  1. Due to the introduction of Segwit as a soft-fork, the “operating” footprint of a block on the Bitcoin Core side can exceed the 1MB limit; however, in practice, blocks remain under 2MB even when the network operates at peak throughput. ↩︎

  2. See also A taxonomy of the Bitcoin applicative landscape. ↩︎

  3. For the sake of simplicity, I am classifying participants in the Bitcoin network in two classes: end-users and operators. Operators include every participant that needs to monitor the network-wide flow of ongoing Bitcoin transactions (mining pools, wallets, exchanges, apps, etc.). End-users merely want to transact and monitor their transactions. ↩︎

  4. While it’s difficult to fully rationalize what “acceptable” entry barriers for Bitcoin operators caused by the network itself would be, it is obvious that extreme positions are nonsensical. For example, there is no point in keeping the “full node” hourly overhead below the cost of a single transaction fee - an unfortunate situation routinely happening to Bitcoin Core. Conversely, if Bitcoin, as of September 2019, were to become inaccessible to wannabe entrepreneurs with no more than 5000 USD as their yearly budget for cloud resources, it would be a huge opportunity loss. ↩︎

  5. Few apps operating on-chain actually need to preserve and index the full blockchain. At scale, it can be expected that most apps would operate leveraging only the UTXO plus a “recent” fraction of the blockchain - e.g. the last 3 months. However, as of September 2019, there are very few pieces of software readily available to support such scaling strategies. ↩︎

  6. Blockstream, as far as its Liquid product is concerned, is positioning itself as a middleman that depends on the ongoing enforcement of the quota on the block size over the Bitcoin Core network in order to remain relevant. Coincidentally, several prominent Blockstream employees became, a few years ago, increasingly vocal against any change ever being brought to the block size cap. ↩︎

  7. At the present time, in my humble opinion, the best candidate to deliver low latency consensus over Bitcoin is Snowflake to Avalanche: A Novel Metastable Consensus Protocol Family for Cryptocurrencies, which would supplement, but not replace, the existing mining process. ↩︎

  8. For example, a small exchange that has budgeted 1,000 USD of IT costs to operate Bitcoin for the next month is most likely going to drop Bitcoin altogether if the actual IT costs skyrocket to 10,000 USD over the same period without any corresponding uptick in its forex business. ↩︎

  9. Under-engineering is a polite understatement as far as on-chain scaling is concerned. For example, the codebase of Bitcoin ABC is literally littered with global locks inherited from the historical Satoshi client, which is one of the worst design antipatterns for achieving any degree of scalability. ↩︎

  10. This adaptive block size proposal illustrates how the block size cap could become market-driven while deterring adversarial behaviors and while remaining manageable for the community at large. ↩︎