
On the Welfare of EIP-1559 with Patient Bidders with Moshe Babaioff | a16z crypto Research Series
AI Summary
This seminar focuses on the welfare properties of EIP-1559, the transaction fee mechanism used in Ethereum, particularly when dealing with "patient bidders." The research aims to provide a formal proof of its performance, specifically regarding welfare, without considering strategic behavior.
Ethereum, as the second-largest blockchain, processes a significant volume of transactions, each with associated fees. EIP-1559 was adopted without a formal proof of its strategic or welfare performance. While prior work has explored its strategic properties, this research delves into its algorithmic welfare properties.
The model considers users submitting transactions, each having a size (Qi) and a value (Vi) per unit size, resulting in a total value of Vi * Qi. Validators are responsible for including these transactions in blocks, which have limited space. Pending transactions reside in a mempool. The mechanism determines both inclusion and payment. While users and validators are generally strategic, this presentation primarily focuses on non-strategic welfare.
In contrast to Bitcoin's classic solution, where miners are chosen at random and bids are paid directly to them, EIP-1559 introduces several changes. It features a variable block size, allowing blocks to be at most twice the target size (B). Payments are specified per transaction, with part of each payment burned to discourage collusion between users and validators. The mechanism can be viewed as a dynamic-price mechanism for job scheduling, where the price per unit adjusts based on supply and demand.
The core of EIP-1559 is a base price (PT) at each time T. Users specify their maximum willingness to pay per unit (BI). If BI is at least PT, the transaction can be included, and the base-fee payment is burned. Crucially, the base price for the next round (PT+1) is updated via PT+1 = PT * (1 + (1/8) * ((QT - B) / B)), where QT is the total capacity used in the previous round. So if QT exceeds B the price increases, if it falls short the price decreases, and if QT equals B exactly the price stays put.
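The update rule above can be sketched in a few lines; here eta = 1/8 is the value EIP-1559 uses, and the function name and example numbers are illustrative, not taken from any client implementation.

```python
def next_base_fee(p_t: float, q_t: float, b: float, eta: float = 1 / 8) -> float:
    """Base-fee update: p_{t+1} = p_t * (1 + eta * (q_t - B) / B)."""
    return p_t * (1 + eta * (q_t - b) / b)

# A full block (q = 2B) raises the price by the maximum factor 1 + eta,
# an empty block lowers it by 1 - eta, and q = B leaves it unchanged.
up = next_base_fee(100.0, 30.0, 15.0)    # over target -> price rises
down = next_base_fee(100.0, 0.0, 15.0)   # under target -> price falls
flat = next_base_fee(100.0, 15.0, 15.0)  # at target -> price stable
```

Note how the per-round change is bounded: the price can move by at most a factor of 1 ± eta in a single block, which is what makes the price adjustment gradual.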
More detailed aspects include a maximum block size of 2B and the option for transactions to specify "priority tips" paid to validators. If too many bids exceed even the 2B capacity, validators can choose any subset of transactions, potentially maximizing priority tips, though the research does not assume specific strategic behavior in this scenario.
The research distinguishes between "impatient bidders," whose transactions are immediately dropped if not scheduled, and "patient bidders," whose transactions remain in the mempool and can be scheduled in future blocks without value loss. This study focuses on patient bidders, acknowledging this is a simplification, with later discussion of "partially patient" agents who might discount time or lose value at some point.
Regarding validators, the assumption is adversarial behavior, meaning they can schedule any subset of eligible transactions. The only constraint is that they always choose a "maximal by inclusion" set, ensuring no eligible transaction is left out if space is available. This implies that validators do not screen by value beyond what the base fee already dictates. This weak assumption makes the positive welfare results more robust.
The social welfare is defined as the sum of values (Qi * Vi) for all included transactions. The model assumes fixed, known transaction sizes.
The protocol's basic properties include a target block size B and a maximum size C * B, where C=2 for EIP-1559. The price update formula is approximated as PT+1 = PT * exp(ETA * (QT - B) / B), and prices cannot drop below a minimum (Pmin). The protocol has five parameters: block size B, multiplier C, ETA (1/8 in EIP-1559), Pmin, and the initial price P1.
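The five parameters and the exponential-form update can be collected into a minimal sketch; the parameter names mirror the talk (B, C, ETA, Pmin, P1), while the class layout and clamping at the floor are my own illustrative choices.

```python
import math
from dataclasses import dataclass

@dataclass
class FeeProtocol:
    # The protocol's five parameters.
    b: float      # target block size B
    c: float      # max-size multiplier (C = 2 in EIP-1559)
    eta: float    # price adjustment rate (1/8 in EIP-1559)
    p_min: float  # price floor Pmin
    p: float      # current price, initialized to P1

    def max_block_size(self) -> float:
        return self.c * self.b

    def step(self, q_t: float) -> float:
        """Exponential-form update, clamped at the floor:
        p_{t+1} = max(Pmin, p_t * exp(eta * (q_t - B) / B))."""
        assert 0 <= q_t <= self.max_block_size()
        self.p = max(self.p_min, self.p * math.exp(self.eta * (q_t - self.b) / self.b))
        return self.p
```

For small deviations of QT from B, exp(eta * (QT - B) / B) is close to the linear factor 1 + eta * (QT - B) / B, so this is a smooth approximation of the actual EIP-1559 rule.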
A key finding is that while the protocol can schedule up to C*B (2B) in any given round, it produces schedules with an average block size of B over time, with a "slackness" (delta) that is independent of the number of blocks (K). This slackness depends on ETA, the range of values (L to H), and C. This theorem essentially proves that the mechanism, despite allowing larger blocks, tends to maintain the target average block size over extended periods. The proof involves analyzing the ratio of prices over time using telescopic products and logarithms, showing that the total quantity scheduled remains proportional to K*B plus a bounded error term.
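The telescoping argument can be checked numerically: summing log(PT+1) - log(PT) = ETA * (QT - B) / B over K rounds gives sum(QT) = K*B + (B/ETA) * log(P_end / P_start), so as long as prices stay within [L, H] the total scheduled quantity deviates from K*B by at most (B/ETA) * log(H/L), independently of K. A small sketch with illustrative numbers (no price floor, random block sizes):

```python
import math
import random

def run(seed: int, k: int, b: float = 10.0, eta: float = 0.125):
    """Run K rounds with random block sizes in [0, 2B] and return both the
    actual total quantity and the telescoping prediction
    K*B + (B/eta) * log(p_end / p_start)."""
    rng = random.Random(seed)
    p_start = p = 100.0
    total_q = 0.0
    for _ in range(k):
        q = rng.uniform(0.0, 2.0 * b)
        total_q += q
        p *= math.exp(eta * (q - b) / b)  # exponential-form update
    predicted = k * b + (b / eta) * math.log(p / p_start)
    return total_q, predicted

total, predicted = run(0, 1000)
```

The two quantities agree exactly (up to floating-point error), which is the content of the average-block-size theorem: the slack term is controlled by the price ratio, not by K.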
The main result addresses welfare. Maximizing welfare is challenging for three reasons: each block is an NP-hard knapsack problem, transactions arrive online, and the protocol is price-based. The theorem states that for values between L and H, and an initial price within this range, the welfare of EIP-1559 up to time T + gamma is at least (1 - ETA) times the welfare, up to time T, of an optimal offline algorithm (OPT) restricted to a maximal block size B.
Here, the factor 1 - ETA captures a welfare loss of at most ETA tied to the price adjustment parameter. Gamma is a "resource augmentation" term: the additional rounds EIP-1559 needs to catch up to OPT. Gamma is approximately (1/ETA) * log(H/L) * (some factor). The result holds even if OPT is allowed an average block size of B and fractional allocations.
The trade-offs are evident: smaller ETA leads to less welfare loss but a larger gamma (more rounds for price adjustment). Gamma also depends on 1/(C-2), meaning it becomes unbounded when C=2, which is the current EIP-1559 parameter. This suggests a potential issue for the current implementation.
For C < 2, the research shows that not only price-based algorithms but also general online algorithms will incur a constant factor loss. However, for C=2, while competitive online algorithms exist (those that can sort transactions), EIP-1559 (being price-based) will still incur a constant factor loss. This constant loss for C=2 is avoided if transaction sizes (Qmax) are small relative to B, specifically if C > 1 + Qmax/B.
The full theorem's benchmark can also have average block size B with slackness delta prime, incurring an additional delta prime loss in gamma. If the initial price P1 is very high, gamma must also grow at least as large as log(P1/Pmin). The ratio depends on Vmax/Pmin. The parameter C prime = C - Qmax/B must be greater than zero for the bounds to hold, adjusting for cases with large transactions.
The proof sketch for the main result begins by considering a greedy online algorithm that sorts transactions by density and fills blocks up to at least the target size. Assuming transaction sizes are at most B (since OPT is restricted to B), this algorithm, given one extra round, can achieve welfare at least as large as OPT. This is proven by showing that for any value threshold theta, the quantity scheduled by the online algorithm with value at least theta, up to time T+1, is greater than what OPT schedules up to time T. This lemma then implies the welfare bound.
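The greedy baseline can be sketched as follows: in each round it processes pending transactions in decreasing order of per-unit value ("density") and includes each one that still fits under the target size B. This is an illustrative sketch of the benchmark algorithm, not code from the paper.

```python
def greedy_block(pending, b):
    """Build one block greedily from (value_per_unit, size) pairs:
    sort by density, include while a transaction still fits in B,
    and return (included, leftover) where leftover stays in the mempool."""
    included, leftover, used = [], [], 0.0
    for v, q in sorted(pending, key=lambda t: t[0], reverse=True):
        if used + q <= b:
            included.append((v, q))
            used += q
        else:
            leftover.append((v, q))
    return included, leftover

# Densities 5 and 3 fill the block of size 10; density 1 waits for a later block.
inc, rest = greedy_block([(1, 4), (5, 6), (3, 4)], b=10)
```

Since bidders are patient, the leftover transactions lose no value by waiting, which is what lets this greedy algorithm, with one extra round, match OPT's welfare for every value threshold theta.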
Extending this to EIP-1559, the argument is similar, but instead of one extra step, gamma extra steps are needed. This is because EIP-1559 cannot sort by density and validators choose the included set. The lemma shows that EIP-1559, with threshold theta and gamma extra steps, beats OPT up to time T, with a small adjustment due to price dynamics. The proof involves identifying time intervals where prices are below theta (meaning all eligible transactions are scheduled) and intervals where prices are above theta (meaning only high-density transactions are scheduled). The gamma term compensates for intervals where the price is rising, and potentially valuable transactions are missed.
The research also explores lower bounds. The positive results do not extend to more general models without constant loss. For instance, when there are multiple resources or when agents are "partially patient" (either discounting time or having limited patience where value drops to zero after a certain period), a constant loss is unavoidable.
For all price-based algorithms, the slackness (gamma + delta) must grow at least on the order of log(log(H/L)), in contrast to EIP-1559's log(H/L). Intuitively, EIP-1559 performs a linear search over the (exponentially spaced) price range, and other price-based mechanisms might achieve the better bound through binary-search-like strategies.
In the discounting case, where a transaction arriving at time t_a and executed at time t has its value discounted by (1 - rho)^(t - t_a), even a small positive discount factor rho, mixed in with fully patient agents, leads to an unavoidable constant factor loss (e.g., 1/20 of welfare). This holds whenever gamma is little-o of T.
An example demonstrating EIP-1559's constant factor loss for C=2 and large transactions starts with the price at Pmin=1. Two types of transactions arrive: great transactions (size B, value 10) and low-value transactions (total size (1+epsilon)*B, value 2). An adversarial validator can schedule the low-value transactions, filling the block to (1+epsilon)*B (within the 2B limit) and leaving no room for the great transaction. The price update, seeing (1+epsilon)*B capacity used, raises the price only by a factor of roughly 1 + ETA*epsilon. It therefore takes Omega(1/(ETA*epsilon)) rounds for the price to reach 2, at which point the low-value transactions finally become ineligible. This lets the adversary repeatedly force the scheduling of low-value transactions, causing a constant welfare loss (e.g., one half). The scenario critically relies on C being exactly 2, which allows epsilon to be arbitrarily small. If C > 2, epsilon would have to be at least C - 2, preventing this specific exploitation.
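The round count in that example can be estimated directly: each round multiplies the price by roughly 1 + ETA*epsilon, so climbing from Pmin = 1 to price 2 takes about log(2) / (ETA*epsilon) rounds, which blows up as epsilon shrinks. A quick check with illustrative numbers:

```python
import math

def rounds_until_price(p_start: float, p_target: float, eta: float, eps: float) -> int:
    """Rounds of blocks of size (1 + eps) * B needed for the price to climb
    from p_start to p_target under the multiplicative update p *= (1 + eta * eps)."""
    return math.ceil(math.log(p_target / p_start) / math.log(1 + eta * eps))

# With eta = 1/8 and eps = 0.01 the adversary gets hundreds of rounds of
# low-value blocks before the great transactions become the only eligible ones.
slow = rounds_until_price(1.0, 2.0, 1 / 8, 0.01)
fast = rounds_until_price(1.0, 2.0, 1 / 8, 1.0)  # eps forced large when C > 2
```

This makes the role of C concrete: bounding epsilon away from zero (as C > 2 does) collapses the adversarial window from hundreds of rounds to a handful.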
Open problems include analyzing welfare with strategic agents, investigating if mechanisms can achieve positive results when all agents have the same discount factor, exploring if other price-based mechanisms can achieve better slackness bounds (log(log(H/L))), and studying constant approximation mechanisms for multiple resources and partially patient agents (which have been shown to exist for online, non-pricing algorithms).
In summary, the research on EIP-1559 shows that with patient bidders, if the average block size is B and some resource augmentation (gamma extra steps) is allowed, the protocol can compete with an offline optimum. However, constant welfare loss is unavoidable under various extensions of the model, such as multiple resources, partially patient agents, or when the block size multiplier C is exactly 2 in specific adversarial scenarios.