Analysis of the Advantages and Disadvantages of Multiple Concurrent Proposers (MCP)

Author: Maven11; Translator: Golden Finance xiaozou
Multiple Concurrent Proposers (MCP) is a mechanism that allows multiple block proposers to be active simultaneously (not to be confused with the Model Context Protocol or secure Multi-Party Computation, although there are some surface similarities). It is a novel answer to the censorship problem. This article explores why having multiple proposers, rather than a single proposer, responsible for block proposals is a key element of blockchain mechanism design, covering how it works and what implementing it entails.
Although the core concept of MCP is relatively easy to understand, there is currently almost no actual adoption of this mechanism in blockchain. However, to some extent, the operational model of Bitcoin mining pools has similarities with multiple concurrent proposers — anyone running a full Bitcoin node can have transactions included in the blockchain.
On the other hand, Solana’s multiple concurrent builder mechanism shares some traits with a full MCP implementation, at least reflecting the idea of many distinct participants taking part in block building (though not block proposal). On Ethereum, about 95% of blocks are built through MEV-Boost; while multiple builders are active at once, each auction has only one winner, so the advantage Solana gains from multiple concurrent builders does not carry over. In fact, no chain today lets multiple proposers hold block proposal rights at the same time.
The most intuitive way to understand MCP is to break it into two levels: multiple proposers producing sub-blocks simultaneously, and the eventual merging of those sub-blocks.
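To make the second level concrete, here is a minimal sketch of a sub-block merge, assuming (purely for illustration) that transactions carry a single fee used for final ordering and that duplicates across sub-blocks are simply deduplicated:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tx:
    tx_hash: str
    fee: int  # simplified: one fee that drives the final ordering

def merge_sub_blocks(sub_blocks: list[list[Tx]]) -> list[Tx]:
    """Merge sub-blocks from concurrent proposers into one block:
    deduplicate transactions that appear in several sub-blocks (the
    same tx can be picked up by more than one proposer), then apply a
    deterministic final ordering. Real designs must also re-validate
    against state; see the simulation discussion later in the text."""
    seen: set[str] = set()
    merged: list[Tx] = []
    for sub_block in sub_blocks:
        for tx in sub_block:
            if tx.tx_hash not in seen:  # drop cross-sub-block duplicates
                seen.add(tx.tx_hash)
                merged.append(tx)
    return sorted(merged, key=lambda tx: tx.fee, reverse=True)

# Two proposers happen to include the same transaction 0xbb:
a = [Tx("0xaa", 5), Tx("0xbb", 3)]
b = [Tx("0xbb", 3), Tx("0xcc", 9)]
print([t.tx_hash for t in merge_sub_blocks([a, b])])  # ['0xcc', '0xaa', '0xbb']
```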
These proposer groups would likely take a subcommittee format (similar to Ethereum’s existing mechanisms), since having every validator participate is impractical. That also means no single subcommittee may be monopolized by a particular staking pool, or censorship and collusion issues re-emerge. Note, too, that the proverbial home stakers usually have limited technical capacity – and MCP significantly increases system complexity.
The following are the core advantages that make MCP worth adopting:
Reasons for supporting multiple concurrent proposers:
– Enhanced censorship resistance (especially important given the current bottleneck)
– Scaling at the base protocol level rather than relying on external solutions
– Distributed MEV (transaction inclusion no longer decided by a single proposer or builder)
Direct issues exposed by implementing MCP:
– Intensified competition over transaction inclusion and ordering (could this revive the PGA phenomenon?)
– Simulation challenges posed by invalid transactions
– Increased hardware requirements
– Data availability issues around invalid transactions
– The need to introduce a finality gadget
Let us analyze these points one by one, starting with the advantages, and then assess whether the potential issues pose implementation barriers for resource-intensive public chains.
1. Advantages of MCP
(1) Enhanced Censorship Resistance
Most of today’s blockchains use deterministic finality, and their consensus process relies on a single leader to determine the contents of a block (with minor variations between chains). After the block is broadcast, a majority of validators attest to it and it is incorporated into the canonical chain. Ethereum speeds up block production through a subcommittee mechanism (though the full validator set takes longer to reach consensus). Under the MCP framework, multiple proposers each build their own sub-block, which are eventually merged – block entry moves from a single principal (proposer/builder/relayer, roles an ideal MCP design would do away with) to a multi-channel model. This makes censorship far more difficult: with multiple parties handling inclusion, the system’s censorship resistance is significantly strengthened.
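A back-of-the-envelope calculation shows why multiple inclusion paths help. Assuming (a simplification for illustration, not a claim from the original) that proposers censor independently and that inclusion by any one proposer suffices, a transaction is excluded only if every concurrent proposer censors it:

```python
def censorship_probability(p_censor: float, num_proposers: int) -> float:
    """Probability a tx stays out of the merged block, assuming each of
    num_proposers censors independently with probability p_censor and
    inclusion by any single proposer is enough to get the tx in."""
    return p_censor ** num_proposers

for k in (1, 2, 4, 8):
    print(k, round(censorship_probability(0.9, k), 4))
# 1 0.9     -> a single censoring leader blocks the tx outright
# 2 0.81
# 4 0.6561
# 8 0.4305  -> even a heavily censoring set lets most txs through
```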
At the heart of the current bottleneck (note that teams like Flashbots are improving the status quo) is that a single builder wins block building rights from a single proposer through an auction, with a (trusted) relayer acting as auctioneer – further exacerbating centralization. The Ethereum core protocol is decentralized; the existing transaction supply chain is not. Solana faces a similar centralization of Jito relays/builders and is trying to address it with a restaking solution (the first real-yield restaking “AVS”!). Bitcoin users can solve the problem on their own (at low cost) by running a full node, but at the expense of finality – Bitcoin relies on the longest-chain rule, offering only probabilistic finality, and lacks the finality gadget that implementing MCP requires.
(2) Scaling at the Underlying Protocol Level
Often, a great deal of development is outsourced to third-party teams to patch the inherent design flaws of an L1 (not just Ethereum) instead of solving core protocol problems directly. Implementing MCP means dealing in-protocol with issues that would otherwise be solved – or caused – by off-chain solutions. This raises hardware requirements (while raising censorship resistance), which may be a worthwhile trade-off depending on how much the protocol’s users value decentralization. Solana in particular may take this route to address Jito’s centralization. And because block building is distributed across multiple parties, overall network bandwidth demand will ultimately increase.
(3) Distributed MEV
The most distinctive effect of MCP is that it replaces the current “MEV lottery”: the MEV of a given block is shared among multiple active proposers rather than monopolized by a single proposer or builder. Validators (mostly corporate entities) prefer a stable revenue stream, and this mechanism also curbs unilateral MEV extraction through transaction reordering (the status quo). The feature is synergistic with the censorship-resistance goal.
Note: if you have read our previous articles, the CAP theorem may be familiar. It names three properties a distributed system would like to provide – of which, per the theorem, not all can be fully guaranteed at once:
C stands for consistency: every user should see the same state, and each interaction with the system should feel like talking to a single database.
A stands for availability, often referred to as liveness: every message must be processed by the system’s nodes and reflected in subsequent blocks/queries, and every instruction must be executed.
P stands for partition tolerance: the system must maintain consistency and availability even under attack or when the node network splits.
MCP is one of the better ways to strengthen the properties that matter most here (censorship resistance above all) – properties too often waved away as game-theory problems. Remember: trust the protocol itself, not the game theory.
However, advantages come at a cost. As the CAP theorem itself suggests, every gain carries a corresponding trade-off – it is all but impossible to accommodate every property fully. So let us examine the potential issues that implementing MCP may bring.
2. Problems to be solved by MCP
The main issue is that MCP introduces, to some degree, a dual competition within the block: first over the transaction inclusion fee, and second over the ordering fee. The ordering fee is particularly tricky to handle because, in the first phase, local proposers see only a partial view of the block rather than a global one. This makes accurately computing the optimal bid for a specific block position a daunting task.
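A toy illustration of the partial-view problem (the numbers and the top-of-block goal are invented for the example): a bid that looks winning against one proposer’s local sub-block can lose once all sub-blocks are merged.

```python
def required_top_bid(local_bids: list[int]) -> int:
    """The naive bid a searcher derives from the only view it has:
    a single proposer's local sub-block."""
    return max(local_bids, default=0) + 1

local_view = [40, 55]            # ordering-fee bids visible to this proposer
other_sub_blocks = [[90], [70]]  # hidden from the bidder until the merge

my_bid = required_top_bid(local_view)  # 56: looks sufficient locally
global_top = max(b for sb in other_sub_blocks for b in sb)
print(my_bid > global_top)             # False: outbid once sub-blocks merge
```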
This is not just operationally hard; more critically, under an auction mechanism it takes us back to the era of priority gas auctions (PGAs). Censorship resistance is better guaranteed, but it essentially revives the problems MEV-Boost was built to solve – high median gas fees in contested blocks and exclusionary pricing at the inclusion stage.
Beyond ordering issues, including the local-versus-global view, there are other transaction-level challenges – specifically, the invalid transactions that arise while local views propagate toward the global view of the block. Since no one can predict how state changes will affect other proposers’ transactions at the start of the phase (before the sub-blocks are merged into the jointly built block), proposers may pass invalid transactions to one another (worse still if those transactions land on chain as data availability content). A validator in the current MCP set may also breach a parameter limit (e.g., exceed the maximum gas value).

One fix is an arbiter (or built-in protocol rule) that, after DA disclosure, filters competing transactions touching the same state by fee – but that drags us back into the PGA dilemma we had just escaped. Yet dropping auctions entirely, leaving searchers/builders no control over block positions, would invite a flood of spam transactions and worsen latency gambling – all of which would undermine the possibility of preconfirmations (preconfs).

Ethereum (after the Pectra upgrade) and Solana add further wrinkles: EIP-7702 means Ethereum transactions no longer invalidate purely via nonces, and Solana has no transaction nonce (account nonces still exist). That makes judging a transaction’s validity much harder – in essence, all combinations must be simulated to determine a correct ordering, putting enormous strain on network bandwidth. Solana’s high hardware bar may make this easier to absorb, but Ethereum would undoubtedly need a hardware upgrade. One potential Ethereum answer is for the execution client (not builder + relayer) to compute the ordering during the sub-block merge phase – reaffirming the need for that hardware upgrade.
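To make the merge-time filtering concrete, here is a sketch under toy assumptions (a flat account-balance state and transfer-style transactions; real validity rules are far richer): two transactions can each be valid against the pre-state yet conflict once sub-blocks are merged.

```python
def filter_on_merge(ordered_txs, state):
    """Re-simulate the merged ordering against state and drop transactions
    that became invalid, e.g. because a conflicting tx from another
    sub-block already consumed the sender's balance."""
    balances = dict(state)  # toy state: account -> balance
    valid = []
    for sender, amount, tx_hash in ordered_txs:
        if balances.get(sender, 0) >= amount:
            balances[sender] = balances.get(sender, 0) - amount
            valid.append(tx_hash)
        # invalid txs are excluded from execution here, but their bytes
        # may already have been published as data availability content
    return valid

state = {"alice": 10}
txs = [("alice", 8, "tx1"), ("alice", 8, "tx2")]  # each valid in isolation
print(filter_on_merge(txs, state))  # ['tx1'] - tx2 was invalidated by tx1
```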
On data availability (DA), another important issue, as noted, is that these invalid transactions may leak on chain (effectively becoming free transactions). This compounds the simulation burden of the pre-consensus phase, although invalid transactions can be filtered at the merge stage. Some existing FOCIL-style implementations (sending addresses instead of complete transactions) might be reused (unless they rely entirely on simulated validation; if human discretion rather than protocol rules intervenes, it can disrupt the simulation by invalidating other transactions).
As mentioned earlier, implementing MCP will likely require a finality gadget to address synchronization issues – implicit, too, in the pre-consensus ordering simulation discussed above. This in turn triggers the timing games of delayed block proposals (already visible in MEV-Boost auctions): proposers may watch other sub-blocks before constructing their own, deliberately sending transactions that invalidate others (particularly advantageous for searchers). Overly strict anti-timing rules, meanwhile, would knock out underperforming validators (producing more missed blocks).
Possible mitigations for these timing games can borrow from chains such as Monad, which use asynchronous (deferred) execution. For example, a rule could require that the full effect of all active proposers’ transaction sets in a slot wait until every set has been built. This meaningfully limits throughput, since multiple proposers will very likely include the same transaction. Deferred execution also means that even a transaction “included” in a sub-block may not survive into the final merged block, producing “included but reverted” transactions (echoing the duplicate-inclusion problem above). Note that this likely requires a dedicated finality gadget to sequence these operations (execution, propagation, and finalization of blocks).
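A minimal sketch of such a deferred-execution rule (the slot structure and proposer set are assumptions for illustration, not Monad’s actual design):

```python
def execute_slot(sub_blocks: dict[str, list[str]], expected: set[str]):
    """Deferred execution: a slot's transactions take effect only once
    sub-blocks from *all* active proposers have arrived, so nobody
    executes against a partial view. A tx appearing in several
    sub-blocks runs once; extra copies end up 'included but reverted'
    rather than silently vanishing."""
    if set(sub_blocks) != expected:
        return None  # still waiting: execution is deferred, not skipped
    executed, reverted, seen = [], [], set()
    for proposer in sorted(sub_blocks):  # deterministic merge order
        for tx in sub_blocks[proposer]:
            (executed if tx not in seen else reverted).append((proposer, tx))
            seen.add(tx)
    return executed, reverted

print(execute_slot({"p1": ["tx_a"]}, {"p1", "p2"}))  # None: p2 still missing
print(execute_slot({"p1": ["tx_a"], "p2": ["tx_a", "tx_b"]}, {"p1", "p2"}))
# ([('p1', 'tx_a'), ('p2', 'tx_b')], [('p2', 'tx_a')])
```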
Although we focus mainly on Ethereum, it is worth noting that Solana is actively advancing MCP. With Max Resnick joining Anza and Anatoly publicly backing an implementation, the trend is increasingly clear. Anatoly’s recently published article raises the following key concerns:
– What if blocks from different validators arrive at different times (another potential timing game)?
– How should transactions be merged (as discussed above)?
– How should block capacity (the maximum gas limit) be allocated among validators to maximize bandwidth (see the sketch after this list)?
– Wasted resources (the same transaction included in multiple sub-blocks, also mentioned earlier).
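On the capacity question, one plausible rule (an assumption for illustration, not Anatoly’s proposal) is to split the block’s gas limit among the concurrent proposers pro rata to stake, so the merged block can never exceed the global limit even if every proposer fills its share:

```python
def allocate_gas(block_gas_limit: int, stakes: dict[str, int]) -> dict[str, int]:
    """Stake-weighted split of a block's gas limit across concurrent
    proposers; floor division keeps the shares' sum within the limit."""
    total = sum(stakes.values())
    return {v: block_gas_limit * s // total for v, s in stakes.items()}

print(allocate_gas(30_000_000, {"v1": 50, "v2": 30, "v3": 20}))
# {'v1': 15000000, 'v2': 9000000, 'v3': 6000000}
```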
Many of the issues in implementing MCP on Solana overlap with those on Ethereum. Solana, however, puts more weight on bandwidth and performance optimization, which means that alongside robust consensus, block resource management and block merging matter all the more.
Another key point from the start of the article: MCP not only hardens the protocol but can also extend it – even folding application-specific sequencing (ASS) into the protocol layer through the ordering mechanism. One future scenario: rather than some proposer handling the XYZ transaction, the application itself acts as proposer and orders its transaction set to its own needs (what Project Delta is working toward) – or, conversely, the application hands the proposer a pre-ordered bundle of transactions. Notably, combining MCP with MEV taxes redirected to the application (the transaction originator) is also being explored (easier to implement once no single proposer is in control).
In a recent post, Max and Anatoly argued that MCP plus application-specific sequencing could deliver tighter bid-ask spreads (a “decentralized NASDAQ” idea). Today, as noted, only a single leader can propose a block, so when prices move, market makers quoting in the order book scramble to pull stale quotes. Under Solana’s single-proposer model this can only happen through Jito’s auctions, because the proposer holds a monopoly. Ideally, as Hyperliquid shows, cancellation requests should be prioritized so market makers can keep spreads tight. The hope is to achieve this through ASS at the application level: the auction monopoly that exists under a single leader disappears under MCP. This ASS approach, though, is likely limited to state-isolated scenarios. The essence of the proposal is to let app developers define priority actions (e.g., order cancellations) for specific accounts, pushing the highest-priority transactions first (not necessarily the biggest tippers, but the lifeblood of liquidity). The core idea: set a fee threshold for ordinary trades, while designated priority actions may bypass it.
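A sketch of what such an application-defined rule might look like (an illustrative ordering policy, not the Solana or Project Delta spec): cancellations are declared priority actions and sort ahead of ordinary trades regardless of tip, so a market maker’s quote pull beats a high-tipping sniper.

```python
def app_sequence(txs: list[dict]) -> list[dict]:
    """Application-specific sequencing: app-declared priority actions
    (here, cancels) come before ordinary trades no matter the tip;
    within each class, higher tip goes first."""
    priority = {"cancel": 0, "trade": 1}  # app-defined action ranking
    return sorted(txs, key=lambda t: (priority[t["kind"]], -t["tip"]))

book = [
    {"kind": "trade",  "tip": 500, "id": "sniper"},
    {"kind": "cancel", "tip": 1,   "id": "mm_pull_quote"},
]
print([t["id"] for t in app_sequence(book)])  # ['mm_pull_quote', 'sniper']
```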
Solana appears to have an answer to the inclusion-fee and ordering-fee questions discussed earlier: the inclusion fee goes to the validator that included the transaction, while the ordering fee is paid to the protocol (burned). When merging the sub-blocks, the merged transaction set simply needs to be ordered and executed according to the declared ordering fees.
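A sketch of that fee split as described (the field names and flat transaction structure are assumptions for the example):

```python
def settle_merged_block(txs):
    """Order the merged set by ordering fee (descending), pay each tx's
    inclusion fee to the validator whose sub-block carried it, and burn
    the ordering fees."""
    ordered = sorted(txs, key=lambda t: t["ordering_fee"], reverse=True)
    validator_revenue, burned = {}, 0
    for tx in ordered:
        validator_revenue[tx["proposer"]] = (
            validator_revenue.get(tx["proposer"], 0) + tx["inclusion_fee"])
        burned += tx["ordering_fee"]  # ordering fee leaves circulation
    return [t["id"] for t in ordered], validator_revenue, burned

txs = [
    {"id": "t1", "proposer": "v1", "inclusion_fee": 2, "ordering_fee": 10},
    {"id": "t2", "proposer": "v2", "inclusion_fee": 2, "ordering_fee": 30},
]
print(settle_merged_block(txs))  # (['t2', 't1'], {'v2': 2, 'v1': 2}, 40)
```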
This mechanism dovetails with Solana’s block propagation protocol, Turbine. The (multiple, under MCP) leaders split their block data into shreds and send them to relay nodes in the Turbine tree structure – relays that should hold shreds from all leaders. The relays send shred acknowledgments to a single consensus leader, who must collect enough of them before broadcasting and driving consensus.
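The aggregation step might look roughly like this (a loose sketch: the two-thirds threshold stands in for an erasure-coding recovery bound and is an assumption, not Turbine’s exact parameterization):

```python
def ready_to_finalize(acks: dict[str, set[int]],
                      leaders: set[str],
                      shreds_per_leader: int,
                      threshold: float = 2 / 3) -> bool:
    """The consensus leader waits until relays have acknowledged enough
    shreds from *every* concurrent leader before broadcasting."""
    need = int(shreds_per_leader * threshold) + 1
    return all(len(acks.get(l, set())) >= need for l in leaders)

acks = {"leaderA": {0, 1, 2, 3}, "leaderB": {0, 1}}
print(ready_to_finalize(acks, {"leaderA", "leaderB"}, shreds_per_leader=4))
# False: leaderB's shreds are not yet sufficiently acknowledged
```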
With the release of Alpenglow – which moves to a single-layer relay architecture and drops on-chain voting (votes now travel off-chain) – the concrete implementation may change. These changes should cut validators’ operating costs, growing the validator set and attracting less technically sophisticated participants. That is good for decentralization, though potentially at some cost to the chain’s performance. How validator failures will be handled once MCP lands on Solana is worth watching.
3. Other Ecosystem MCP Practices
The Cosmos ecosystem is also pushing MCP forward: the well-known organization Informal Systems has just released a multiple-proposer specification for the BFT consensus model. It uses a secure broadcast protocol in which each validator’s sub-block is confirmed by the other validators via vote extensions; Tendermint/CometBFT block construction then reaches consensus on the collection of sub-blocks, meaning a given block can carry sub-blocks from many individual validators.
Sei is building MCP (aiming to be the first project to ship it) through its Sei Giga project, with design inspiration drawn in part from the Autobahn paper (strongly recommended reading). The core idea is to decouple data availability from ordering: data is made available over multiple parallel lanes, then ordered onto the global chain. This differs slightly from Ethereum’s MCP concept – validators do not produce blocks in synchronized, fixed intervals but produce them continuously, merging them afterward into a global view.
Patrick O’Grady from Commonware is also exploring related solutions.
Finally, the Delta project designed an underlying layer with a censorship-resistant bulletin board, with each application running its own concurrent sequencer; the blocks produced ultimately settle to a global state layer.