
Onchain market predictability is becoming a chain-design problem, not just an app-layer optimization. In a March 25 research essay, a16z crypto argued that throughput alone will not make onchain order books, auctions, and other latency-sensitive markets competitive if chains cannot guarantee fast inclusion and protect transactions from being observed before ordering is fixed.
Why throughput is no longer enough for serious onchain markets
The starting point in the a16z piece is simple: blockchains can now plausibly claim the raw capacity needed to compete with traditional financial infrastructure, but financial applications also need predictability. The authors argue that a trade, cancel, bid, or option exercise only works well when builders can rely on the transaction landing as soon as possible, rather than merely "within the next few seconds." They use an onchain order book as the clearest example. Market makers need to constantly update quotes when the outside world changes, and if those updates land late while arbitrageurs get in first, the market maker absorbs the loss. The expected response is wider spreads, worse prices, and weaker venue quality. That is an infrastructure problem, not a user-interface problem.
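The stale-quote dynamic can be made concrete with a toy simulation (my own illustration, not from the a16z essay): a market maker earns a small spread on each interval, but if the external price jumps before its cancel lands, an arbitrageur picks off the stale quote. All parameters here are assumptions chosen only to show the shape of the tradeoff.

```python
import random

def simulate_mm_pnl(cancel_latency_ms, trials=10_000, seed=0):
    """Toy model: per trial, a price jump may occur during the window in
    which the market maker's cancel is still in flight. If it does, the
    maker trades at the stale price and absorbs the loss; otherwise it
    earns the quoted spread."""
    rng = random.Random(seed)
    jump_prob_per_ms = 0.001   # assumed chance of a price jump each millisecond
    jump_size = 1.0            # assumed loss per picked-off quote, arbitrary units
    spread_income = 0.02       # assumed income per trial from the quoted spread
    pnl = 0.0
    for _ in range(trials):
        picked_off = any(rng.random() < jump_prob_per_ms
                         for _ in range(int(cancel_latency_ms)))
        pnl += -jump_size if picked_off else spread_income
    return pnl / trials

fast = simulate_mm_pnl(5)     # e.g. a chain with next-slot inclusion guarantees
slow = simulate_mm_pnl(400)   # e.g. waiting on unpredictable block inclusion
```

Under these assumed numbers, the fast-cancel maker stays profitable while the slow one loses money on average, which is exactly the pressure that forces wider spreads.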
This matters because crypto has spent years treating performance as a headline number. Transactions per second are easy to advertise. Predictable inclusion is much harder to deliver and much easier to ignore until a real market has to react to live information. The a16z essay argues that for onchain finance to support high-value markets rather than just basic settlement, chains need short-term inclusion guarantees. Their linked research paper makes the same point in more formal terms, arguing that efficient onchain auctions require selective-censorship resistance and hiding, meaning no adversary can selectively delay a transaction or learn its contents before confirmation finalizes ordering.
Why single-proposer chains create a market structure problem
The essay's strongest contribution is that it frames block design as market structure. In a single-leader system, the current proposer has unusual power over both inclusion and visibility. A leader can delay a cancel order by tens of milliseconds, choose one trader's transaction over another, or insert its own order after seeing the incoming flow. The a16z piece argues that either of two powers can break a market: the power to censor another participant's transaction, or the power to see that transaction before choosing one's own response. The paper's abstract connects this directly to MEV, describing traditional single-proposer systems as giving validators a serial monopoly over inclusion and ordering.
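The two powers the essay identifies can be shown side by side in a small sketch (a deliberately simplified model of mine, not code from the paper): an honest ordering rule sequences transactions by arrival, while a self-interested single leader can both drop a victim's cancel and jump ahead of flow it has already read.

```python
def honest_ordering(mempool):
    """Order by arrival time: first seen, first executed."""
    return sorted(mempool, key=lambda tx: tx["arrived_at"])

def exploitative_leader(mempool, leader_tx):
    """Toy model of the serial-monopoly problem: a single proposer who can
    read pending transactions may (a) selectively censor a victim's cancel
    and (b) insert its own trade ahead of the flow it just observed."""
    kept = [tx for tx in mempool if tx.get("kind") != "cancel"]   # power 1: censor
    return [leader_tx] + honest_ordering(kept)                    # power 2: front-run

mempool = [
    {"sender": "mm",  "kind": "cancel", "arrived_at": 1},
    {"sender": "arb", "kind": "take",   "arrived_at": 2},
]
block = exploitative_leader(
    mempool, {"sender": "leader", "kind": "take", "arrived_at": 3}
)
# The market maker's cancel never lands, and the leader trades first
# despite arriving last -- either power alone is enough to break the market.
```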
That logic applies well beyond a toy auction. If a rate decision, liquidation wave, or large external price move hits, the market maker who cannot cancel stale quotes quickly becomes the subsidy for everyone else. The a16z article argues that once builders know a proposer can behave this way, markets either become thinner or move away from the chain's native execution path. Even if leaders do not fully exploit the advantage today, the authors say relying on social restraint is weak design because higher onchain financial activity raises the payoff for exploitative behavior. Their timing-games example makes the point cleanly: once one operator pushes farther for extra rewards, others have an incentive to follow.
Why so much market logic still moves offchain
One reason DeFi still functions, despite those problems, is that applications route around them. The a16z essay explicitly says that protocols needing fast auctions often run the critical mechanism offchain and only settle the result onchain, naming UniswapX and CoW Swap as examples. Uniswap's own documentation shows why. UniswapX is an auction-based protocol where swappers broadcast signed orders and fillers compete to execute them; on Ethereum mainnet, Uniswap uses a two-role system with permissioned quoters plus permissionless fillers because 12-second blocks and high gas costs make direct onchain price discovery less workable there. On faster L2s, the docs say fillers can compete directly onchain without RFQ.
CoW Protocol's documentation shows a related compromise from another angle. It uses a fair combinatorial batch auction in which solvers compute candidate solutions for a batch and the protocol selects winners according to a defined objective function. That design is powerful, but it also shows how much valuable trading logic sits in auction coordination and solver competition rather than in raw base-layer execution alone. The a16z article argues that when too much of this mechanism moves offchain, the chain risks becoming mostly a settlement rail. That weakens one of DeFi's core promises: composability between applications executing in the same shared environment.
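The solver-competition pattern reduces to a simple shape: candidate settlements come in, a scoring function ranks them, one wins. The sketch below illustrates that shape only; the scoring formula here (surplus minus weighted fees) is an assumption for illustration, not CoW Protocol's production objective.

```python
from dataclasses import dataclass

@dataclass
class Solution:
    solver: str
    surplus: float   # total trader surplus this settlement delivers
    fees: float      # fees it charges the batch

def select_winner(solutions, fee_weight=1.0):
    """Sketch of a batch-auction objective in the spirit of solver
    competition: score each candidate settlement and pick the best.
    The exact weighting is hypothetical."""
    def score(s: Solution) -> float:
        return s.surplus - fee_weight * s.fees
    return max(solutions, key=score)

candidates = [
    Solution("solver_a", surplus=10.0, fees=1.0),   # score 9.0
    Solution("solver_b", surplus=12.0, fees=4.0),   # score 8.0
    Solution("solver_c", surplus=9.0,  fees=0.5),   # score 8.5
]
winner = select_winner(candidates)
```

Note where this logic lives: in the auction coordinator and solvers, not in base-layer execution, which is exactly the migration of market logic the a16z article worries about.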
What builders actually need: inclusion guarantees and hiding
The most useful part of the article is its attempt to specify the missing properties. The authors argue that the right target is not vague "faster execution" but two concrete guarantees. First is short-term censorship resistance: a valid transaction that reaches an honest node on time should be included in the next possible block. Second is hiding: except for the node that first receives a transaction, no other party should learn anything about it before inclusion has been finalized. The essay suggests that if these properties hold, proposers cannot selectively hold back trades and cannot front-run them after inspecting the contents. The linked paper then proposes a multiple concurrent proposer protocol intended to offer exactly those properties.
This is where the builder angle becomes more interesting than the investment framing. The proposal is not merely "better MEV mitigation." It is a claim that chain architecture for markets should minimize concentrated proposer discretion. The a16z article sketches mechanisms such as timelock encryption or threshold encryption as ways to keep transaction contents hidden until consensus is already done. It also argues that having several viable entry points per slot is better than depending on a single leader. That does not mean every market needs full privacy or that every chain needs the same ordering rule. It means high-frequency onchain markets need a design in which no single actor both sees the flow and decides who moves first.
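The hiding property can be sketched in miniature. The snippet below is a toy stand-in for the encrypted-mempool idea: the transaction is encrypted with a one-time key, and the key is split across a committee so no single member can read the contents before ordering is finalized. Real designs use t-of-n threshold cryptography or timelock encryption, not this n-of-n XOR sharing; this is purely illustrative.

```python
import functools
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt_and_share(tx: bytes, n: int):
    """Encrypt a transaction with a one-time pad, then split the key into
    n shares that XOR back to the key. No single share reveals anything
    about the transaction contents."""
    key = os.urandom(len(tx))
    ciphertext = xor_bytes(tx, key)
    shares = [os.urandom(len(key)) for _ in range(n - 1)]
    shares.append(functools.reduce(xor_bytes, shares, key))  # shares XOR to key
    return ciphertext, shares

def decrypt_after_ordering(ciphertext: bytes, shares) -> bytes:
    """Only after ordering is finalized do committee members release their
    shares; combining all of them recovers the key and the contents."""
    key = functools.reduce(xor_bytes, shares)
    return xor_bytes(ciphertext, key)

ct, shares = encrypt_and_share(b"cancel order", n=3)
plaintext = decrypt_after_ordering(ct, shares)
```

The point of the sketch is the sequencing: ordering commits first, decryption happens second, so a proposer never gets to choose its own transaction after inspecting someone else's.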
What this means for chain teams and onchain market builders
The practical implication is that infrastructure teams should stop treating order books, auctions, and derivatives as applications that can simply be "deployed later" once throughput improves. The article's argument is that these products reveal whether a chain can support adversarial finance rather than just passive settlement. Builders evaluating new execution environments should ask harder questions: How many entry points can reliably land a transaction in a slot? Can a proposer peek at order flow before ordering is final? Are auction-critical flows forced offchain because the base layer cannot offer robust inclusion? Those questions map much more directly to whether a market can survive than generic performance claims.
For market builders, the near-term takeaway is not that a complete solution has arrived. It has not. The paper is a preprint, and the a16z article itself presents the right ordering rule and the robustness-performance tradeoff as active research problems. But the direction is clear. Chains that want serious capital markets on top of them will need stronger guarantees around inclusion and pre-confirmation privacy, while protocols that cannot get those guarantees will keep leaning on offchain auctions, solver networks, and permissioned coordination. That split will likely define the next phase of onchain market design more than another round of TPS marketing.
The a16z essay is useful because it names the bottleneck plainly. If onchain finance wants tighter spreads, deeper books, and less fragile auction design, the next breakthrough may not be more throughput at all. It may be a chain that can make inclusion predictable while keeping order flow unreadable until it is too late to exploit.
Berat Oshily is a Birmingham-based Web3 journalist and blockchain researcher with over six years of experience covering the decentralised technology space. Specialising in NFTs, DAOs, and smart contract infrastructure, he has built a reputation for sharp, technically grounded reporting on the Ethereum ecosystem and the UK's evolving digital asset regulatory landscape. His work has appeared in Decrypt, Wired UK, and The Defiant. Berat has received a grant from the Ethereum Foundation in recognition of his contributions to open-source DeFi education and is a regular presence at NFT.London and ETHGlobal conferences across the UK and Europe.