Blockchain Monitoring is MEV. Always was
Table of Contents
- Introduction
- Incident Detection and Management
- Proactivity – Anomaly Detection
- Monitoring of Blockchain Applications
- Mitigation is MEV
- Formalization of Ethical Censorship, a sketch
- Rollups: A better alternative?
- Implementation Sketch
- A note on censorship
- Conclusion
- Phylax
- Acknowledgements
- Discussion
- References
Introduction
In the ever-evolving world of blockchain technology, the ability to monitor and mitigate potential issues is not just a luxury but a necessity.
It’s time we put an end to the billions of dollars that are lost to hacks every year1.
This post delves into the intersection of blockchain monitoring and Maximal Extractable Value (MEV). We’ll explore the critical phases of issue detection and mitigation, the role of observability and monitoring, and how these concepts apply to blockchain applications.
Then, we will see how blockchain incident response is coupled with MEV activities and how these can be used to mitigate hacks. Finally, we will sketch how that could look in practice, the challenges with Ethereum (or, more broadly, L1s), and why Rollups could be a more suitable environment. We will quickly define the typical Rollup deployment and sketch how we could augment it with the hack mitigations we have discussed.
Incident Detection and Management
Swiftly identifying and addressing issues is paramount. We can conceptualize this process through a three-phase timeline:
- Issue Surfacing: This initial phase marks the emergence of a potential problem. It’s the undercurrent, often unnoticed until it manifests visibly.
- Issue Detection: The problem is not just present but recognized, often triggering alarms or alerts.
- Issue Mitigation: This phase is about resolution: addressing the problem and restoring normalcy.
Figure 1: A usual Incident Response timeline2
Central to the Issue Detection phase are two pivotal concepts: Observability and Monitoring.
Observability: This is the system’s inherent capability to provide insights into its internal states based on external outputs. It delves into the “unknown unknowns”—aspects you weren’t aware you needed to know. It’s about exploring without a set map and gathering raw, unstructured data to gain insights.
Monitoring: This is the proactive approach, where you’ve set up specific checkpoints or metrics based on anticipated issues—the “known unknowns.” It’s structured surveillance, where you’ve predefined what to look for and set alerts accordingly.
Upon detecting an anomaly, the Incident Response protocol3 is activated, which is a well-defined roadmap detailing potential issues and the recommended course of action for the on-call engineer. It’s a mix of diagnostic queries, key data points, and prescriptive activities, such as rebooting a service or rerouting traffic.
For the more seasoned SREs, the above is greatly influenced by the NIST Cybersecurity Framework4. This framework is crucial as it provides a policy framework of computer security guidance for how private sector organizations can assess and improve their ability to prevent, detect, and respond to cyber attacks.
Proactivity – Anomaly Detection
A “proactive” observability system is designed to anticipate and address potential issues before they escalate. Its primary goal is to detect early signs of degradation in user experience and implement mitigation strategies, preventing more significant problems such as security breaches or outages.
Consider a straightforward example: if a server’s load is increasing linearly, a proactive system would recognize this trend and automatically scale up the number of servers before capacity becomes a problem.
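As a minimal sketch of what that proactivity could look like (the sampling window, threshold, and `should_scale_up` helper below are illustrative assumptions, not part of any specific monitoring stack):

```python
import numpy as np

def should_scale_up(load_samples: list[float], threshold: float, horizon: int = 10) -> bool:
    """Fit a linear trend to recent load samples and predict whether the
    threshold will be crossed within `horizon` future sampling intervals."""
    t = np.arange(len(load_samples))
    slope, intercept = np.polyfit(t, load_samples, 1)
    predicted = slope * (len(load_samples) + horizon) + intercept
    return predicted > threshold

# Example: CPU load (%) has been climbing steadily; act before it hits 90%.
recent_load = [40, 45, 51, 55, 62, 66, 71]
if should_scale_up(recent_load, threshold=90.0):
    print("Provision additional servers")
```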
Monitoring of Blockchain Applications
Let’s now integrate these concepts within the context of blockchain technology. In essence, blockchains provide an abstraction of the “backend” of an application to the node operators. This concept has been referred to as Hyperstructures5 by Jacob from Zora, a term that aptly captures the essence of this abstraction. The developers of blockchain applications are not required to manage any infrastructure to deploy their protocols. They are relieved of operational burdens, save for a few considerations that must be taken into account when creating the threat model of their protocol. For example, a protocol that heavily relies on time as a core component must consider that block producers have a certain degree of control over the exact timestamp
at which a block is created.
For this analysis, we will define blockchain monitoring as the monitoring of applications that run on the blockchain, rather than the underlying nodes that participate in the blockchain network.
The field of blockchain monitoring, particularly from a DevOps perspective, is still in its infancy, with only a few teams, such as Tenderly and Yearn6, having substantial experience in this area.
The existing solutions can be broadly categorized into two groups:
- Analysis of transactions in the wild in an attempt to identify patterns before a hack occurs
- Monitoring of the protocol’s contracts and triggering alerts or automatic mitigation actions once a transaction is detected on the blockchain
The first category of activities is based on the observation that most hacks follow a specific preparation pattern before they occur (e.g., funding a contract with funds from Tornado Cash). The second category of activities is valid only if the attack requires multiple transactions (that are not included in the same block) and can act as a damage control mechanism. While it is beneficial to know that an attack has occurred, the utility of this knowledge is limited if nothing can be done to mitigate it.
Unless the vulnerability is reported before the exploit, it is usually too late.
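Going back to the first category, a hedged sketch of such a preparation-pattern check could look like the following; the `MIXER_ADDRESSES` set, `get_funding_source`, and `get_transaction_count` helpers are hypothetical stand-ins for a real indexer or data pipeline:

```python
# Hypothetical heuristic: a brand-new EOA funded from a mixer that immediately
# deploys a contract matches the preparation pattern of many past hacks.
MIXER_ADDRESSES = {"0x..."}  # placeholder for known Tornado Cash router addresses

def looks_like_attack_prep(deployment_tx: dict) -> bool:
    deployer = deployment_tx["from"]
    funding_tx = get_funding_source(deployer)               # hypothetical indexer lookup
    freshly_funded = funding_tx is not None and funding_tx["from"] in MIXER_ADDRESSES
    is_new_account = get_transaction_count(deployer) <= 1   # hypothetical: barely used EOA
    return freshly_funded and is_new_account
```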
Finalized and non-finalized state
In Blockchains, there are two kinds of realities or states:
- Unfinalized state
- Finalized state
When a transaction enters the mempool, it enters at the start of the “unfinalized state,” and there is no certainty yet that it will be included in a finalized block and thus become “part” of the blockchain’s history. To be more precise, we don’t care about the transaction itself, but rather about the changes it will make to the state of the blockchain, or its intent; but for now, let’s talk about transactions.
Once a transaction is included in a block, the “finalization certainty” increases, and depending on the consensus algorithm, it can be finalized after one block (e.g., Tendermint) or after several (e.g., Ethereum7). In Ethereum, reorgs are slashed by the protocol, so even an unfinalized block is usually a “good enough” assurance.
graph TD;
A[Transaction Submission] -->|Enters Mempool| B[Unfinalized State]
B -->|Included in Block| C[Increasing Finalization Certainty]
C -->|Consensus Algorithm Processing| D[Finalized State]
D --> E[Immutable Part of Blockchain History]
Figure 2: The different states of a transaction in the path to finalization
Monitoring the state that matters
Thus, assuming that the attack will be concluded in a single block, what we ought to monitor is not the finalized state but the state that is not yet finalized, because that is the state that is still amenable to our mitigative actions.
Once the block finalizes, the attack has been concluded, so any alerting becomes much less valuable. It’s now about damage control, such as letting the users know and talking with legal. Thus, to interfere with the transaction’s effects from the moment it’s broadcasted to the moment it’s included in a finalized block, we have to perform a group of actions generally named “MEV – Maximal Extractable Value.”
To tie it back to the “regular” observability and incident response of the web2 world, we could say that if \(T_1\) is a malicious transaction, we have a time window \(t_i\) in which to act proactively. If we include a transaction \(T_2\) that “freezes” the protocol before \(T_1\), we will cause \(T_1\) to revert and effectively protect the protocol. We shall call \(T_2\) the mitigation.
sequenceDiagram
participant Attacker as Attacker
participant Guardian as Guardian
participant Protocol as Protocol
participant Validator as Validator
Attacker->>Validator: Broadcasts T1 (malicious transaction)
Note over Attacker,Validator: T1 is in the mempool
Guardian->>Validator: Broadcasts T2 with higher gas (mitigative transaction)
Note over Guardian,Validator: T2 is in the mempool
Validator->>Validator: Adds T2 and T1 to Block (in that order)
Validator->>Protocol: Executes T2
Note over Protocol: T2 freezes the protocol
Validator->>Protocol: Executes T1
Note over Protocol: T1 reverts due to the protocol being frozen
Figure 3: An example flow of causing an unfinalized transaction to revert. Note that we mitigate at the start of the finalization spectrum in this example.
A small note on multi-transaction hacks
Here, we are evaluating the worst-case scenario: a hack that is completed in a single transaction. In many cases, such as the Nomad hack8, the hack took many transactions to complete.
In that case, we can think of reaching the finalized state as a prolonged process spread over many transactions (each with its own time to finalization). Thus, the mitigation doesn’t need to happen with the first transaction; it can, for example, allow the first malicious transaction to land and then front-run the second. The more efficient the system is, the quicker it can detect the malicious transactions and execute the mitigative actions.
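A minimal sketch of this idea, assuming hypothetical `is_flagged_as_malicious()` and `frontrun()` helpers and treating the sender of the first flagged transaction as suspicious afterwards:

```python
suspicious_senders: set = set()

def handle_pending_transaction(tx: dict) -> None:
    """Multi-transaction attacks: the first malicious transaction may land,
    but every later pending transaction from the same sender gets front-run."""
    sender = tx["from"]
    if sender in suspicious_senders:
        frontrun(tx)                     # hypothetical mitigation, e.g. pause the protocol first
        return
    if is_flagged_as_malicious(tx):      # hypothetical detection (see the monitoring loop later)
        suspicious_senders.add(sender)
```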
Mitigation is MEV
For the rest of the post, we will assume some familiarity with MEV9. Moreover, we will define two actors:
- attacker: The actor that wants to put a protocol into an incorrect state by issuing one or several malicious transactions
- guardian: The actor that wants to protect a protocol from being put into an incorrect state
There are roughly three ways the guardian can interfere with the effects of malicious transactions:
- Frontrun the transaction: Bribe the validators to include a new transaction in front of the malicious transaction
- Censor the transaction: Bribe the validators not to include the transaction in a block (censorship)
- Censor the block and reorg: Bribe the validators not to vote/attest for the block that includes that transaction, which results in the block never being finalized. Eventually, the block gets reorged out of the chain.
The first mitigation can be performed by any user of the network. In contrast, the other two mitigations require agents that participate in the network’s consensus.
Frontrunning
Place a mitigative transaction before the malicious transaction. Initially, users or MEV searchers used to do that via the simple gas auction10, which resulted in many negative externalities for the network, such as network delays. Now, it has evolved into a more elegant version, where searchers can submit bundles of transactions (e.g., via Flashbots bundles11) to block builders, who, in turn, construct blocks and submit them to the block proposers via PBS12.
The nice thing about frontrunning is that we don’t need to coordinate with the block proposers on some ethical basis but purely on economics.
Figure 4: The mitigative transaction is placed before the malicious transaction; it’s executed first, and the malicious transaction is rendered useless
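As a rough sketch of what packaging such a mitigative transaction could look like (the `signed_t2` payload, target block, and `send_bundle` helper are assumptions; the exact bundle format and relay endpoints are described in the Flashbots docs11):

```python
def build_mitigation_bundle(signed_t2: str, current_block: int) -> dict:
    """Wrap the mitigative transaction T2 in a bundle targeting the next block,
    so that it executes ahead of the malicious T1."""
    return {
        "txs": [signed_t2],                     # raw signed mitigative transaction
        "blockNumber": hex(current_block + 1),  # target the very next block
    }

bundle = build_mitigation_bundle(signed_t2="0x...", current_block=18_000_000)
send_bundle(bundle)  # hypothetical helper that forwards the bundle to a relay/builder
```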
By the way, frontrunning protocol hacks is not new13. Generalized frontrunners have repeatedly performed such attacks, not on purpose, but because it was profitable for them to frontrun the attack by performing the same attack themselves. On every occasion, the bot operators would return some of the funds while keeping the rest as a bounty (usually 10%). This is a testament to the fact that network participants (agents operating on the blockchain network) are not simple economic-incentive automatons; there is a social layer that affects their behavior. The economically rational thing to do would be not to return the funds.
The generalized frontrunners work by simulating every transaction they find and checking whether it would be profitable to copy the transaction’s intent and frontrun it. For example, if the exploit calls an unprotected function called withdraw(), they can call it before the hacker does. Frontrunning hacks emerged as a byproduct of these generalized frontrunners, and people are quickly realizing that this could be a real mitigation strategy.
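A minimal sketch of that simulation loop, assuming hypothetical `replace_sender()`, `simulate()`, `estimated_inclusion_cost()`, and `frontrun()` helpers on top of a local execution environment:

```python
def maybe_copy_and_frontrun(pending_tx: dict, my_address: str) -> None:
    """Generalized frontrunner: re-simulate the pending transaction with
    ourselves as the sender and frontrun it if the copy is profitable."""
    copied_tx = replace_sender(pending_tx, my_address)  # same target and calldata, our sender
    result = simulate(copied_tx)                        # dry-run against the latest known state
    if result.profit > estimated_inclusion_cost(copied_tx):
        frontrun(copied_tx)                             # bid for priority so the copy lands first
```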
Bribe to Censor a transaction
Bribing to censor is very straightforward. The guardian bribes the block proposer not to include some transactions from the mempool. It’s an expensive and not very useful attack, as the block proposer changes rapidly in most blockchain protocols.
There are three reasons why a block proposer wouldn’t want to censor a transaction:
- Economic reasons, as the attacker is paying more for the transaction to be included than the guardian is paying for it to be censored
- Ideological reasons, as actively censoring a transaction could be socially unacceptable14. OFAC censorship was talked about extensively and is similar to what we are discussing here; the difference is in the ethics of it
- Coordination reasons, as we may not be able to communicate with all possible block proposers and “enforce” the censorship. A textbook prisoner’s dilemma15
On the other hand, it’s interesting that the censorship we are discussing here is generally acceptable. Most people in the industry believe that even if some protocol action enables a party to obtain funds that they shouldn’t, that doesn’t make it acceptable16.
Bribe to censor a published block
This is a variation of transaction censorship, but here, the attacker has already included their transactions in a published block. The guardian now enters a bribing contest against the attacker over the network validators, who get to vote on the next block. Depending on the consensus protocol, not all validators get to vote; usually, a committee is chosen. From that committee, one validator gets to be the proposer and proposes a block, while the rest vote on whether to add that block to the blockchain, following their Fork-Choice Rule (FCR).
If enough validators don’t vote for the block, then the block is not added, and a new block is proposed by another validator. Effectively, the proposed block gets reorged out of the chain.
Figure 5: A censor and reorg attack, where the proposed block gets reorged out of the chain, as no validators vote for it
The important bit is that we don’t need to sustain this attack for a long time (it would be impossible anyway), but only to produce enough lag for the mitigative transaction to land ahead of the malicious one. After that, we don’t care if the malicious transaction gets included and finalized, because the attack has effectively been nullified.
Ethereum will introduce single-slot finality17 in the future, making this kind of attack much harder, effectively only possible for blocks currently at the tip of the chain.
Formalization of Ethical Censorship, a sketch
Let’s attempt to illustrate a formalized sketch of the system. In Ethereum, the “state” refers to the collective information stored in the Ethereum blockchain at a particular time. It includes all account balances, contract code, contract storage, and nonce values.
The state in Ethereum is a mapping between addresses (160-bit identifiers) and account states. An account state is a data structure that contains four fields (nonce, balance, storageRoot, codeHash).
\[\sigma : \text{Address} \rightarrow \text{AccountState}, \quad \text{where } \text{AccountState} = (\text{nonce}, \text{balance}, \text{storageRoot}, \text{codeHash})\]
Figure 6: The mapping between addresses and account states in Ethereum
\[\begin{align*} \text{nonce} & : \mathbb{N} \quad \text{(number of transactions sent from this address)} \\ \text{balance} & : \mathbb{N}^+ \quad \text{(number of Wei owned by this address)} \\ \text{storageRoot} & : \{0,1\}^{256} \quad \text{(hash of the root node of a Merkle Patricia tree)} \\ \text{codeHash} & : \{0,1\}^{256} \quad \text{(hash of the EVM code of this account)} \end{align*}\]
Figure 7: The contents of an account’s state
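Expressed in the same pseudocode style used later in this post (the field types are simplified for illustration; this is not any client’s actual data structure):

```python
from dataclasses import dataclass

@dataclass
class AccountState:
    nonce: int           # number of transactions sent from this address
    balance: int         # Wei owned by this address
    storage_root: bytes  # hash of the root node of a Merkle Patricia tree
    code_hash: bytes     # hash of the EVM code of this account

# The world state sigma maps 160-bit addresses to account states.
WorldState = dict[str, AccountState]
```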
Let’s denote:
- \(A_m\): the subset of Ethereum accounts that the system monitors
- \(S_f\): the currently finalized state of Ethereum
- \(S_m\): the corresponding states of \(A_m\)
- \(S_i\): the set of states of \(A_m\) that are considered incorrect or invalid
- \(T_a\): the set of transactions accessible to the system before their finalization, where \(T_a \supseteq T_m\) (the transactions in the mempool)
- \(T_s\): the set of transactions that interact with any account \(a \in A_m\)
- \(t^m_n\): the nth transaction that is flagged as malicious
The system performs the following steps in a loop:
- For each transaction \(t_n\) that belongs to both \(T_a\) and \(T_s\), take that transaction.
- Apply the transaction \(t_n\), via the State Transition Function (STF), to the state \(S_f\), resulting in a new state \(S_t\): \(S_t = STF(t_n, S_f)\).
- Check if any element in \(S_t\) also belongs to \(S_i\). If true, the transaction \(t_n\) is flagged as malicious and denoted as \(t_n^m\).
- If \(t_n^m\) exists, the system attempts to mitigate it through actions, denoted as \(M(t^m_n)\). Some of these actions are described above.
Let’s call the above procedure the monitoring loop:
# Placeholder types: in practice, State would be the mapping of the monitored
# addresses A_m to their account states, and Transaction a decoded transaction.
Transaction = dict
State = dict

def apply_transaction(transaction: Transaction, state: State) -> State:
    # State Transition Function (STF): S_t = STF(t_n, S_f)
    pass

def is_invalid_state(state: State) -> bool:
    # Define your function to check if a state belongs to the invalid set S_i here
    pass

def perform_mitigation(transaction: Transaction):
    # Define your function to perform the mitigative actions M(t_n^m) here
    pass

async def get_new_transaction() -> Transaction:
    # Define your transaction source here: each new t_n in T_a ∩ T_s
    pass

async def monitoring_loop(state: State):
    while True:
        # Next transaction observed before finalization
        transaction = await get_new_transaction()
        new_state = apply_transaction(transaction, state)
        if is_invalid_state(new_state):
            perform_mitigation(transaction)
        else:
            state = new_state
Figure 8: Pseudocode for the “monitoring loop”. A very simple loop that runs on every new transaction that enters the system and before the transaction finalizes
Economics
- Let’s denote the Total Value Locked (TVL) in a protocol as \(V_p\).
- Let’s denote the TVL exposed due to the protocol being in an incorrect state \(S_i\) as \(V_e\), where \(V_e < V_p\). Note that \(V_e\) is not a constant value but depends on the vulnerability that is exposed by the attacker. Also, it’s a hidden value until the attack is executed.
- Let’s denote the value the guardian will pay as a bounty to prevent the protocol from being put into the incorrect state as \(B\), where, naturally, \(B \leq V_p\).
- As \(B\) approaches its limit and equals \(V_e\), all the value is captured by the block proposers as MEV. Since we are talking about “Ethical Censorship”, there is a non-economic quantity that we need to take into consideration. Thus, the social layer will not allow the limit to be reached; rather, it will force the block producers to settle for a fair value \(V_f\). We can illustrate this as \(B + V_f < V_p\).
\(V_f\) is a core part of the thesis: while proposers could, in theory, capture most of the value as MEV in a bribing war between the attacker and the guardian, they will settle for \(V_f\) paid by the guardian. As mentioned before, there is precedent for blockchain actors performing a financially irrational action because of external forces, be it the social layer or the threat of the judicial system.
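As a purely hypothetical numeric illustration of these quantities (the figures are invented for the example, and only the relations stated above are assumed): suppose \(V_p = \$100\text{M}\) and a vulnerability exposes \(V_e = \$20\text{M}\). In an unconstrained bribing war, \(B\) could be bid up toward \(V_e\); the claim here is that the social layer instead lets the block producers settle for a much smaller fair value, say \(V_f = \$1\text{M}\), so that \(B + V_f\) remains far below \(V_p\).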
In practice
We observe that it’s not trivial to conduct the previous activities for a few reasons:
- Incomplete mempool view: Either due to network lag or to private mempools, we can’t frontrun what we can’t see
- Coordination issues: The transaction & block censorship game is a classic example of the prisoner’s dilemma
In a world where these types of mitigations become common, we can foresee the creation of malicious pools, which are not malicious from a protocol perspective but rather from an ethical one. By following the protocol rules, they can’t be evicted from the validator set without some form of social consensus, and they offer a unique product to the market: A haven from censorship for malicious transactions. In that scenario, we must rely on the block censorship attack, which is expensive and complicated to sustain.
Blockchain monitoring seems confined to observation after the fact in this bleak future, racing to protect what’s left.
But at the same time, we are moving towards a rollup-centric future; applications are deployed on Rollups, and the Rollups settle on L1 (or other Rollups). Rollups are intrinsically more centralized, which alleviates the practical considerations I described above and makes it easier to perform the mitigative actions, as the monitor could have a trusted and privileged relationship with the Rollup providers.
Rollups are inherently a better fit for what we aim to achieve here.
Let me explain.
Rollups: A better alternative?
Rollups may present a more suitable alternative to L1s for the system we’ve discussed. As previously mentioned, in an L1, such a system faces significant challenges, including:
- Maintaining a comprehensive and current view of the mempool, which can be hindered by network lag or private pools.
- Coordinating with all block proposers (or other agents in the MEV chain) to implement mitigations.
In contrast, Rollups, even in their decentralized form, are inherently more centralized. They resemble blockchains but with a significantly smaller number of block proposers. This tighter network topology with fewer participants alleviates the concerns we’ve outlined above. Designed to rent security from the L1, Rollups can afford a more federated structure.
The system we’ve discussed can leverage this structure to enforce the mitigations we’ve explored, either as a privileged participant of the Rollup protocol or because there’s an incentive to implement these mitigations.
Definition
First, let’s agree on the basics. I like the definition provided by James Prestwich18, so let’s define the rollup as follows and roll with it:
A rollup is an opt-in subset of another consensus, keeping a superset of state via a custom state-transition function.
This definition is quite generic and applies both to rollups that settle on a stateful chain like Ethereum and to those that derive transaction ordering and DA from a chain like Celestia (so-called sovereign rollups).
The Rollup actors
Now that we have defined a Rollup let’s define the actors of one:
- Sequencer: The actor responsible for ordering transactions in a rollup.
- Executor: The actor responsible for executing transactions in a rollup.
- Proposer: The actor responsible for proposing (attesting to) the new blocks to L1 in a rollup.
- Prover (ZKRUs): The actor generating zero-knowledge proofs in a ZK rollup.
- Challenger (ORUs): The actor responsible for challenging invalid blocks in an Optimistic rollup.
Very briefly, a Sequencer is a trusted or untrusted (depending on the Rollup design) actor of the rollup protocol, responsible for packaging multiple transactions from users and storing them in the DA layer. As per the name, they have the power to “sequence” or, more formally, order transactions; hence the relevance to this discussion19.
How Rollups work
Rollups are a scalability solution for Ethereum, allowing it to process more transactions by rolling multiple transfers into one transaction. Two main types are Optimistic Rollups, which assume transactions are valid unless proven fraudulent through fraud proofs, and zk-Rollups, which use cryptographic proofs to verify transactions’ validity upfront. Both types compress transaction data and save it to the Ethereum blockchain, acting as a separate blockchain instance managed by a node system called the Sequencer.
Implementation Sketch
Now, let’s see some sketches on tackling this in the presence of centralized and decentralized sequencers.
Optimistic Rollups with a centralized Sequencer
In rollups, such a system would be placed as a sidecar to the agent that receives the user transactions, which would be the sequencer for optimistic rollups and the prover for zero-knowledge rollups. The more centralized a system is, the better guarantees such a system will have that it “sees” the whole “mempool.” With ZK rollups, that would be challenging, as anyone can prove a transaction and submit the proof to the smart contract on Ethereum.
Thus, optimistic rollups and sequencers have the most significant product-market fit with this idea.
Figure 9: An example infrastructure where the monitor module sits between the sequencer and the user. Every transaction \(t_n\) is passed to the sequencer only if it isn’t flagged as malicious (\(t_n^m\)), as described above
We place the monitor module at the “entry” of the system so that we don’t spend resources sequencing a transaction that we don’t plan to include.
Another option is to place the monitor module alongside or deeply integrated with the Executor or, more formally, the State-Transition Function. The insight here is that this part of the system already goes through the execution effort, so integrating there should have a lower overhead, as we don’t need to execute again but simply assert the resulting state.
Figure 10: The executor could output the new state to the monitor, which would output the L2 block only if it didn’t include any hacked state
The Monitor will check every new state produced by the Executor, and if the hacked state exists, it will reject that state. In essence, tying this back to our original discussion about L1s, the state produced by the Executor can be thought of as not finalized.
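A minimal sketch of that integration point, assuming hypothetical `execute_block()` and `contains_invalid_state()` helpers that wrap the rollup’s STF and the invalid-state set \(S_i\):

```python
class BlockRejected(Exception):
    """Raised by the monitor when a candidate L2 block reaches a hacked state."""

def produce_l2_block(transactions: list, state: dict) -> dict:
    # Run the rollup STF once; the monitor only asserts the resulting state,
    # so no second execution pass is needed.
    new_state = execute_block(transactions, state)   # hypothetical STF wrapper
    if contains_invalid_state(new_state):            # hypothetical check against S_i
        raise BlockRejected("candidate block would put a monitored protocol into a hacked state")
    return new_state
```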
There is a soft finalization when the Proposer posts the data on the L1, and hard finalization when the 7-day challenge window passes. You can read more about Rollup security in this tiny blog post penned by Jon Charbonneau20.
Optimistic Rollups with shared Sequencers
Applying a monitoring system to shared sequencers, where no single entity has exclusive sequencing rights, would require a decentralized approach. Each sequencer could run an instance of the monitor to filter transactions before sequencing. By slashing the sequencers in case they sequence a malicious transaction, one can depend on them to perform the mitigative actions if they detect a malicious transaction.
To be more precise, shared sequencers will probably only sequence without executing. The reason is that a shared sequencer that supports ten rollups would need to run ten different State Transition Functions (STFs), which is infeasible. They produce ordered transactions that are then executed by each rollup’s STF to create the new state root. In such a protocol, the STF executor would reject certain blocks and never post them to the rollup contract.
Since the sequencer posts information about the sequenced transactions on the L1, the user could try to enforce the execution of the transaction. In that case, a separate flow could easily prove that a transaction is malicious and reject it from being added to the state again. Since that would entail a more complex protocol, one could enforce a lockdown, giving enough time for mitigations.
Thus, the question falls again to what we discussed at the start of the blog post:
1. How we define what a malicious transaction is
2. How we detect the transaction promptly (sub-second)
The Concept of Forced Inclusion
We need to consider a particular aspect in the above scenarios, known as “Forced Inclusion”.
One of the fundamental features of rollups is the inability of sequencers to censor users. Users can directly interact with the smart contract on L1 and “force” their transaction into the rollup’s state. This feature is particularly relevant in Optimistic Rollups, where the Sequencer, Executor, and Proposer are crucial for the rollup’s economic security.
This mechanism includes a time-lock in all Optimistic Rollups (like Arbitrum21 and Optimism22), providing sufficient time for the guardian to detect any problematic transaction and pause the protocol if necessary.
In the case of Zero Knowledge Rollups, a transaction rejected by the prover can be “forced” by running the proving software independently, generating a valid proof, and submitting it to the smart contract on L1. While some ZK rollups currently have a whitelist, limiting who can submit proofs, the ultimate goal is to allow anyone to do so.
Therefore, forced inclusion could counteract this system, but only in zero-knowledge protocols. Addressing this would require changes in the design of zero-knowledge protocols, which is beyond the scope of this blog post.
A note on censorship
The concept we’re exploring here essentially involves implementing a form of censorship or, more accurately, establishing a system that incentivizes frontrunning or censorship. While this may be controversial to some, the consensus is that code is not law, and theft, even if facilitated by a code vulnerability, is unacceptable. The social layer holds more significance than the blockchain layer.
Furthermore, a new type of rollup that doesn’t settle on an L1 but uses it solely for Data Availability has demonstrated that users have sufficient trust in the social layer to utilize such rollups. These rollups operate under the assumption that if fraud occurs, the community, thanks to the Data Availability layer, will detect it and fork the fraudulent rollup into a new, honest one. Automatic settlement is not necessary.
In both scenarios, the social layer is the final safety net for the unhappy path, a last resort.
Similarly, it’s feasible to prove on-chain that a mitigative action taken by a rollup actor was unwarranted. Whether a transaction results in a certain state is provable, and thus, one can demonstrate that the mitigative action should not have occurred. This could lead to on-chain slashing (via on-chain proving) or consequences at the social layer.
What we’re investigating isn’t new but rather a realignment of incentives over what is already possible.
Conclusion
In this blog post, we explored the MEV concept and its potential role in protecting blockchain applications from malicious transactions. We have discussed the different actors involved, the attacker and the guardian, and the various strategies they can employ. We have also delved into the intricacies of rollups and their relevance to our discussion.
We have seen how monitoring can be used proactively to detect and mitigate potential issues and how this can be applied in the context of both centralized and decentralized sequencers in rollups. The key challenges lie in defining what constitutes a malicious transaction and detecting such transactions on time.
While there are still many questions to be answered and details to be worked out, it is clear that MEV and monitoring have a significant role in the future of blockchain security. As we continue to explore and develop these concepts, we move closer to a more secure and robust blockchain ecosystem.
Phylax
Phylax is currently developing an open-source tool, which will be available soon, to assist protocol developers in monitoring their applications and performing incident response. After going through the Nomad Hack 8, I knew I wanted to do something about security in our space so that we don’t lose billions of dollars yearly to hacks. While we will always offer the open-source tool for free as our gift to the world, this is a mere first step.
The vision is to build a protocol as close to the consensus layer as the one Flashbots is evolving into. It’s an audacious vision for sure, but otherwise, where’s the fun in this?
If you find this interesting and want to bounce ideas or join the team as founding engineers, drop me a line at odysseas dot phylax dot watch.
Acknowledgements
A warm thank you to Greg Markou, Mike Neuder, Swanny, Daniel Marzec, and relyt for their thoughtful comments.
Discussion
Join the discussion in the Flashbots Collective.
References
- https://static.googleusercontent.com/media/sre.google/en//static/pdf/Anatomy_Of_An_Incident.pdf
- https://sre.google/workbook/incident-response/#putting-best-practices-into-practice
- https://www.cybersaint.io/blog/nist-cybersecurity-framework-core-explained#:~:text=Here%2C%20we'll%20dive%20into,common%20acro%20critical%20infrastructure%20sectors
- https://blog.tenderly.co/case-studies/what-good-war-room-emergency-procedure-yearn-finance-case
- https://medium.com/offchainlabs/post-mortem-report-ethereum-mainnet-finality-05-11-2023-95e271dfd8b2
- https://ethresear.ch/t/mev-auction-auctioning-transaction-ordering-rights-as-a-solution-to-miner-extractable-value/6788
- https://docs.flashbots.net/flashbots-auction/advanced/understanding-bundles
- https://docs.flashbots.net/flashbots-mev-boost/introduction
- https://unchainedcrypto.com/curve-exploit-results-in-largest-mev-block-rewards-in-ethereums-history/
- https://www.investopedia.com/terms/p/prisoners-dilemma.asp
- https://ethresear.ch/t/a-simple-single-slot-finality-protocol/14920
- https://prestwich.substack.com/p/the-definitive-guide-to-sequencing
- https://docs.arbitrum.io/sequencer#unhappyuncommon-case-sequencer-isnt-doing-its-job