Today, Vitalik Buterin published a new article for the Ethereum community titled "What might an enshrined ZK-EVM look like?". The article explores how Ethereum could build its own ZK-EVM into future network upgrades.
As we all know, against the backdrop of Ethereum's own slow pace of change, almost all mainstream Layer 2s already have a ZK-EVM. But once the Ethereum mainnet enshrines its own ZK-EVM, will the roles of the mainnet and of Layer 2 come into conflict?
In the article, Vitalik Buterin emphasizes the importance of compatibility, data availability, and auditability, explores the possibility of stateful provers for greater efficiency, and discusses the roles Layer 2 projects would retain in providing fast pre-confirmations and MEV mitigation strategies. The article reflects a balance between advancing the Ethereum network through ZK-EVMs and preserving its flexibility.
Odaily Planet Daily has compiled the original article; the full text follows:
Optimistic rollups and ZK rollups, as Layer 2 EVM protocols on top of Ethereum, both rely on EVM verification. However, this requires them to trust a large codebase, and if that codebase has bugs, these VMs risk being hacked. It also means that ZK-EVMs, even those that want to be exactly equivalent to the L1 EVM, need some form of governance to copy L1 EVM changes into their own EVM implementations.
This is not an ideal situation, because these projects are duplicating functionality that already exists in the Ethereum protocol: Ethereum governance is already responsible for upgrades and bug fixes, and what a ZK-EVM fundamentally does is verify Layer 1 Ethereum blocks. Over the next few years, we expect light clients to become more and more powerful, soon able to use ZK-SNARKs to fully verify L1 EVM execution. At that point, the Ethereum network will effectively have a built-in ZK-EVM. So the question arises: why not make this ZK-EVM natively available for rollups as well?
This article describes several versions of an "enshrined ZK-EVM", analyzing the trade-offs, the design challenges, and the reasons for not going in certain directions. The benefits of enshrining protocol functionality should be weighed against the benefits of leaving things to the ecosystem and keeping the base protocol simple.
What key properties do we want from an enshrined ZK-EVM?
Basic functionality: verify Ethereum blocks. The protocol function (whether it is an opcode, a precompile, or some other mechanism is yet to be determined) should accept at least a pre-state root, a block, and a post-state root as input, and verify that the post-state root really is the result of executing that block on top of the pre-state root (a minimal interface sketch follows this list).
Compatibility with Ethereum's multi-client philosophy. This means avoiding enshrining a single proof system, and instead allowing different clients to use different proof systems.
This also means a few things:
Data availability requirement: for any EVM execution proven with the enshrined ZK-EVM, we want a guarantee that the underlying data is available, so that provers using a different proof system can re-prove the execution, and clients relying on that proof system can verify those newly generated proofs.
Proofs live outside the EVM and block data structures: the ZK-EVM functionality does not literally take SNARKs as input inside the EVM, because different clients will expect different types of SNARKs. Instead, it might work similarly to blob verification: a transaction can include the claims (pre-state, block body, post-state) that need to be proved, whose contents can be accessed by opcodes or precompiles, while client-side consensus rules separately check data availability and the proofs of the claims made in the block.
Auditability: if any execution is proven, we want the underlying data to be available so that if anything goes wrong, users and developers can inspect it. In practice, this adds one more reason why the data availability requirement is important.
Upgradeability: if we find a bug in a particular ZK-EVM scheme, we want to be able to fix it quickly, which means fixing it should not require a hard fork. This adds another reason why keeping proofs outside the EVM and block data structures is important.
Support for "almost-EVMs": one of the attractions of L2s is the ability to innovate at the execution layer and extend the EVM. If an L2's VM differs from the EVM only slightly, it would be nice if the L2 could use the native in-protocol ZK-EVM for the parts that are identical to the EVM, and rely on its own code only for the parts that differ. This could be achieved by designing the ZK-EVM function so that the caller can specify a bit field, opcode list, or address list to be handled by externally provided code rather than by the EVM itself. We could also make gas costs customizable to some extent.
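To make the basic functionality above concrete, here is a minimal Python sketch of the verification interface; the names (ZKEVMClaim, ProofVerifier, verify_zkevm_claim) are hypothetical, and whether this surfaces as an opcode, a precompile, or a consensus rule is left open in the text:

from dataclasses import dataclass

@dataclass
class ZKEVMClaim:
    pre_state_root: bytes    # state root before executing the block
    block_body: bytes        # serialized block to execute
    post_state_root: bytes   # claimed resulting state root

class ProofVerifier:
    # Stand-in for whichever proof system the local client runs;
    # in an open multi-client design, each client plugs in its own.
    def verify(self, claim: "ZKEVMClaim", proof: bytes) -> bool:
        raise NotImplementedError

def verify_zkevm_claim(claim: ZKEVMClaim, proof: bytes, verifier: ProofVerifier) -> bool:
    # True iff `proof` shows that executing block_body on top of
    # pre_state_root yields post_state_root.
    return verifier.verify(claim, proof)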
"Open" vs. "closed" multi-client systems
The "multi-client philosophy" is probably the most controversial requirement on this list. One option would be to abandon it and focus on a single ZK-SNARK scheme, which would simplify the design, but at the cost of a major "philosophical shift" for Ethereum (since it would effectively abandon Ethereum's long-standing multi-client philosophy) and of greater risk. In the long-term future, for example once formal verification techniques get better, it may be better to go this route, but for now it seems too risky.
Another option is a closed multi-client system with a fixed set of proof systems known to the protocol. For example, we might decide to use three ZK-EVMs: the PSE ZK-EVM, the Polygon ZK-EVM, and Kakarot, with a block needing proofs from at least two of the three to be valid. This is better than a single proof system, but it makes the system less adaptable: users would have to maintain verifiers for every proof system that exists, there would inevitably be a governance process for admitting new proof systems, and so on.
This pushes me to prefer an open multi-client system, where proofs are placed "outside the block" and verified separately by clients. Individual users would use whatever client they want to validate a block, and they could do so as long as at least one prover creates a proof for that client's proof system. Proof systems would gain influence by convincing users to run them, not by convincing the protocol's governance process. However, this approach does come at the cost of greater complexity, as we will see.
What key features do we want in the ZK-EVM implementation?
Beyond basic functional correctness and safety guarantees, the most important property is speed. While it is possible to design an asynchronous version of the built-in ZK-EVM functionality, where each claim only returns a result after N slots, the problem becomes much easier if we can reliably guarantee that a proof is generated within a few seconds, so that everything that happens in each block is self-contained.
While generating a proof for an Ethereum block today takes minutes or hours, we know there is no theoretical obstacle to massive parallelization: we can always assemble enough GPUs to prove different parts of a block's execution separately, then use recursive SNARKs to combine those proofs. The proving process can also be sped up further with FPGA and ASIC hardware acceleration. Actually getting there, however, is an engineering challenge that should not be underestimated.
What exactly does the ZK-EVM function look like in the protocol?
Similar to EIP-4844 blob transactions, we introduce a new transaction type that carries a ZK-EVM claim:
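The container definitions were lost in compilation; the following is a plausible reconstruction in the SSZ Container style used later in this article. The transaction_and_witness_blobs field and the pre/post state roots are referenced below; the exact layout and the commitments field name are assumptions:

class ZKEVMClaimTransaction(Container):
    pre_state_root: Root
    post_state_root: Root
    # commitments to the blobs holding the serialized (Block, Witness) pair
    transaction_and_witness_blob_commitments: List[VersionedHash]

As with EIP-4844, the object passed around in the mempool is a modified version of the transaction:

class ZKEVMClaimNetworkTransaction(Container):
    pre_state_root: Root
    post_state_root: Root
    # the full blob contents travel with the network object
    transaction_and_witness_blobs: List[Blob]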
The latter can be converted into the former, but not the other way around. We also extend the block sidecar object (introduced in EIP-4844) to include a list of proofs for the claims made in the block.
Note that in practice, we would likely want to split the sidecar into two separate sidecars, one for blobs and one for proofs, and set up a separate subnet for each type of proof (plus an additional subnet for blobs).
At the consensus layer, we add a validation rule: a client accepts a block only once it has seen a valid proof for each claim in the block. The proof must be a ZK-SNARK proving that the concatenation of transaction_and_witness_blobs is the serialization of a (Block, Witness) pair, and that executing that block on top of pre_state_root with that Witness (i) is valid and (ii) outputs the correct post_state_root. Potentially, clients could choose to wait for an M-of-N of multiple types of proofs.
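As a sketch of this rule (names such as validate_block_zkevm_claims and the sidecar layout are assumptions, not the actual spec):

def validate_block_zkevm_claims(block, proof_sidecar, verifier) -> bool:
    # A block is accepted only once a valid proof has been seen for
    # every ZK-EVM claim it contains.
    claims = block.zkevm_claims
    proofs = proof_sidecar.proofs
    if len(proofs) < len(claims):
        return False  # some claim is still unproven; keep waiting
    for claim, proof in zip(claims, proofs):
        # Each proof must show that the concatenated
        # transaction_and_witness_blobs serialize a (Block, Witness) pair
        # whose execution on pre_state_root is valid and yields post_state_root.
        if not verifier.verify(claim, proof):
            return False
    return True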
One note here: the block execution itself can simply be treated as one more (σpre, σpost, Proof) triple that needs to be checked, alongside the triples provided in ZKEVMClaimTransaction objects.
As a result, a user's ZK-EVM verifier could replace their execution client. Execution clients would still be used by (i) provers and block builders, and (ii) nodes that care about indexing and storing data for local use.
Validation and revalidation
Let's say you have two Ethereum clients, one using the PSE ZK-EVM and the other using the Polygon ZK-EVM. Suppose that by this point, both implementations have advanced to where they can prove the execution of an Ethereum block in under 5 seconds, and that for each proof system, enough independent volunteers run the hardware to generate proofs.
Unfortunately, since individual proof systems are not enshrined, they cannot be incentivized in-protocol. However, we expect the cost of running a prover to be low compared to R&D costs, so we can simply pay provers through general-purpose public-goods-funding bodies.
Suppose someone publishes a ZKEVMClaimNetworkTransaction, but only with a PSE ZK-EVM proof. A Polygon ZK-EVM prover node sees this, computes a Polygon ZK-EVM proof, and republishes the object with it.
This increases the total maximum delay between the earliest honest node accepting a block and the latest honest node accepting that same block from δ to 2δ + Tprove (assuming Tprove < 5 s).
The good news, however, is that if we adopt single-slot finality, we can almost certainly "pipeline" this extra latency into the multi-round consensus latency inherent to SSF. For example, in the 4-slot proposal, the "head vote" step might only need to check basic block validity, while the "freeze and confirm" step would require proofs to be present.
Extension: Support for "almost-EVMs"
An ideal goal for the ZK-EVM feature is to support "almost-EVMs": EVMs with some extra functionality built in. This could include new precompiles, new opcodes, the option for contracts to be written either in EVM code or in a completely different VM (as in Arbitrum Stylus, for example), or even multiple parallel EVMs with synchronous cross-communication.
Some modifications can be supported in a simple way: we could define a language that lets a ZKEVMClaimTransaction carry a full description of the modified EVM rules (see the sketch after this list). This could cover:
Custom gas tables (users cannot reduce gas costs, but can increase them)
Disabling certain opcodes
Setting the block number (implying different rules depending on the hard fork)
Setting a flag that activates a whole set of EVM changes that have been standardized for L2 use but not for L1, or other simpler changes
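A minimal sketch of such a rule description, in the Container style used elsewhere in this article; every field name here is hypothetical:

class ModifiedEVMRules(Container):
    custom_gas_table: List[GasCostOverride]  # overrides may only raise costs
    disabled_opcodes: List[uint8]            # opcodes the modified EVM rejects
    block_number: uint64                     # selects fork-dependent rules
    l2_feature_flags: uint64                 # bitmask of standardized L2 change-sets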
To allow users to add new functionality in a more open-ended way, for example by introducing a new precompile (or opcode), we could include a precompile input/output transcript in the blobs of a ZKEVMClaimNetworkTransaction:
class PrecompileInputOutputTranscript(Container):
    used_precompile_addresses: List[Address]
    inputs_commitments: List[VersionedHash]
    outputs: List[Bytes]
EVM execution would be modified as follows. Initialize an empty inputs array. Whenever an address in used_precompile_addresses is called, append an InputsRecord(callee_address, gas, input_calldata) object to inputs and set the call's RETURNDATA to outputs[i]. Finally, check that addresses in used_precompile_addresses were called a total of len(outputs) times, and that inputs_commitments matches the commitments to the SSZ serialization of inputs. The purpose of exposing inputs_commitments is to make it easy for an external SNARK to prove the relationship between the inputs and the outputs.
Note the asymmetry between inputs and outputs: inputs are stored as a commitment, while outputs are stored as bytes that must be provided in full. This is because execution needs to be doable by a client that only sees the inputs and understands the EVM. The EVM execution already generates the inputs for them, so they only need to check that the generated inputs match the claimed inputs, which requires nothing more than a hash check. The outputs, however, must be handed to them in full, and so must be data-available.
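A sketch of the final check described above; blob_commitments_of is a hypothetical stand-in for SSZ-serializing the records and committing to them:

def blob_commitments_of(records):
    # Stand-in for: SSZ-serialize `records` and produce blob commitments.
    from hashlib import sha256
    return [sha256(repr(r).encode()).digest() for r in records]

def check_precompile_transcript(inputs, transcript) -> bool:
    # `inputs` is the list of InputsRecord objects accumulated while
    # executing the block; outputs were already substituted from
    # transcript.outputs during execution.
    if len(inputs) != len(transcript.outputs):
        return False  # wrong number of calls to used_precompile_addresses
    # Inputs are checked by commitment only, since execution regenerates
    # them; outputs had to be supplied in full as data.
    return blob_commitments_of(inputs) == transcript.inputs_commitments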
Another useful feature could be "privileged transactions" that can be invoked from any sender account. These could run between two other transactions, or within another (possibly itself privileged) transaction when a precompile calls for them. This can be used to let non-EVM mechanisms call back into the EVM.
This design can be modified to support new or modified opcodes in addition to new or modified precompiles. And even with precompiles alone, the design is quite powerful. For example:
By setting used_precompile_addresses to include the list of regular account addresses that carry a certain flag in the state, and producing a SNARK proving it was constructed correctly, you can support Arbitrum Stylus-style features, where contracts can be written in EVM or in WASM (or another VM). Privileged transactions can then let WASM accounts call back into the EVM.
By adding an external check that the input/output transcripts and privileged transactions of multiple EVM executions match up in the right way, you can prove a parallel system of multiple EVMs talking to each other over synchronous channels.
A type-4 ZK-EVM could operate with multiple implementations: one that converts Solidity, or another high-level language, directly into a SNARK-friendly VM, and another that compiles it into EVM code and executes it in the enshrined ZK-EVM. The second (inevitably slower) implementation would only need to run if a fault prover sends a transaction asserting there is a bug, collecting a bounty if they can provide a transaction that the two implementations handle differently.
A purely asynchronous VM can be achieved by making all calls return zero and mapping the calls to privileged transactions added to the end of the block (a sketch of this follows).
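A sketch of that last construction, with hypothetical names (PrivilegedTransaction, handle_async_call):

from dataclasses import dataclass
from typing import List

@dataclass
class PrivilegedTransaction:
    sender: bytes   # may be any account; privileged, so no signature required
    callee: bytes
    calldata: bytes

def handle_async_call(caller: bytes, callee: bytes, calldata: bytes,
                      end_of_block_queue: List[PrivilegedTransaction]) -> int:
    # In a purely asynchronous VM, a CALL returns zero immediately; the
    # intended call is queued to run later as a privileged transaction
    # appended to the end of the block.
    end_of_block_queue.append(PrivilegedTransaction(caller, callee, calldata))
    return 0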
Extension: Support for stateful provers
One challenge with the design above is that it is completely stateless, which makes it data-inefficient. With ideal data compression, an ERC-20 send using stateful compression can be up to 3x more space-efficient than with stateless compression alone.
In addition, a stateful EVM does not need to provide witness data. In both cases, the principle is the same: requiring data to be made available is wasteful when we already know it is available, because it was input to or produced by a previous EVM execution.
If we want the ZK-EVM feature to have state, then we have two options:
Option 1: Require σpre to either be empty, be a data-available list of declared keys and values, or be the σpost of some earlier execution.
Option 2: Add to the (σpre, σpost, Proof) triple the blob commitments of the receipt R generated by the block. Any previously generated or used blob commitments, including those representing blocks, witnesses, receipts, or even plain EIP-4844 blob transactions, perhaps with some time limit, could be referenced in a ZKEVMClaimTransaction and accessed during its execution (possibly via a series of instructions of the form: "insert bytes N…N+k-1 of commitment i into position j of the block+witness data").
Option 1 essentially means that instead of enshrining stateless EVM verification, we would be enshrining an EVM subchain. Option 2 essentially creates a minimal built-in stateful compression algorithm that uses previously used or generated blobs as a dictionary. Both approaches place a storage burden on prover nodes, which alone would need to store more information; in option 2, it is easier to put a time limit on that burden than in option 1.
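As a sketch of option 2, again in the Container style with hypothetical names and layout, the claim could reference prior commitments together with insertion instructions:

class BlobInsertInstruction(Container):
    commitment_index: uint64  # which referenced commitment (i)
    byte_offset: uint64       # N: start of the byte range N…N+k-1 to copy
    length: uint64            # k
    target_position: uint64   # j: where to insert into the block+witness data

class StatefulZKEVMClaimTransaction(Container):
    pre_state_root: Root
    post_state_root: Root
    transaction_and_witness_blob_commitments: List[VersionedHash]
    # previously generated or used blob commitments (blocks, witnesses,
    # receipts, plain EIP-4844 blobs), perhaps subject to a time limit
    referenced_commitments: List[VersionedHash]
    insertions: List[BlobInsertInstruction]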
The case for closed multi-prover systems and off-chain data
A closed multi-prover system, with a fixed set of proof systems in an M-of-N structure, avoids much of the complexity above. In particular, a closed multi-prover system does not need to worry about ensuring that data is on-chain. Moreover, it would allow ZK-EVM proofs over off-chain data, making them compatible with EVM Plasma solutions.
However, a closed multi-prover system adds governance complexity and removes auditability; these are high costs that must be weighed against those benefits.
If we enshrine the ZK-EVM as a protocol feature, what remains the role of "Layer 2 projects"?
The EVM verification functionality that Layer 2 teams currently implement themselves would be handled by the protocol, but Layer 2 projects would still be responsible for a number of important functions:
Fast pre-confirmations: single-slot finality is likely to make Layer 1 slots slower, while Layer 2 projects are already giving their users "pre-confirmations" backed by the L2's own security, with latency far lower than one slot. This service would remain purely a Layer 2 responsibility.
MEV (Maximal Extractable Value) mitigation strategies: this could include encrypted mempools, reputation-based sequencer selection, and other features that Layer 1 is unwilling to implement.
Extensions to the EVM: Layer 2 projects can offer their users significant extensions to the EVM. This includes "almost-EVMs" as well as fundamentally different approaches, such as Arbitrum Stylus's WASM support and the SNARK-friendly Cairo language.
Convenience for users and developers: Layer 2 teams do a great deal of work attracting users and projects to their ecosystems and making them feel welcome; they are compensated by capturing MEV and congestion fees within their networks. This relationship is here to stay.