Hi everyone. This category is meant for discussing MIPs and broad ideas/proposals that can evolve into MIPs. I’d like to propose an idea here.
Disclaimer: While I possess a reasonable understanding of cryptography, ZK crypto is relatively new to me. If there are logical errors in my proposal, which I’ve diligently tried to avoid by researching ZK crypto, please correct me and, if possible, suggest a solution.
How MultiversX could be utilising ZK Crypto
While this may sound like a privacy feature (and can indeed serve as one), I aim to discuss its potential capabilities in transaction execution.
Currently, when I execute a smart contract, validator nodes must run the code with my specified input (provided in the transaction details). Once they’ve calculated the code’s outcome, they can approve my transaction.
Computing the outcome of smart contract code is pretty efficient with WASM. However, it still consumes some CPU time.
Rather than executing the code for every smart contract transaction to determine its correctness, ZK cryptography, such as ZK STARKs, could be employed for these verifications.
Users would calculate the outcome themselves (i.e., the wallet or the RPC node would compute it on their behalf, off-chain). Ideally, the wallet would handle this to prevent potential censorship, where the connection point to the chain might refuse to compute the transaction outcome for the user. If the wallet application itself manages this, there's no need for extra servers, no dependency on a central entity, no censorship, reduced operational costs, minimized latency, no RPC server downtime, enhanced availability, improved transaction confidentiality, increased privacy, and more. Implementing this could be challenging: while xPortal might manage it, third-party wallets could face a few problems and might need to fall back on an RPC server for the computation. Alternatively, MultiversX could raise the bar for implementing the MultiversX protocol by mandating off-chain ZK STARK computation. But… well, this might limit wallet adoption.
With this ZK STARK computation, validator nodes can verify the validity of the provided transaction outcome. Essentially, they only need to determine if the user’s ZK proof is accurate. If it is, then the user’s proposed transaction outcome is indeed correct. There’s no need to execute the entire smart contract code each time to determine if the transaction will succeed.
Validating a ZK proof should consume fewer resources than executing the smart contract code.
For “simple” smart contracts, the outcome is generally deterministic. However, with a DEX, the outcome is not clear. Swap rates might fluctuate, resulting in users receiving slightly more or fewer destination tokens than expected. This is acceptable due to slippage. But how can we ensure a transaction’s successful execution when all we possess is the user’s off-chain proof of a successful transaction? Given that transactions are randomly shuffled within a block, slippage might intervene, preventing the swap from being executed. If nodes don’t verify the slippage, users might end up receiving significantly fewer tokens than estimated.
Essentially, the ZK transaction only proves that a transaction should execute based on the user-estimated liquidity and their available funds. Since token prices and liquidity are always changing, nodes would need to re-run the smart contract to compute the actual outcome, ensuring the user’s slippage aligns or the transaction is reverted.
How can this be addressed? I propose a solution:
The user could specify both the acceptable outcomes/slippage and the liquidity pool ratios they consent to. They might, for instance, only approve a ratio between 200/202 and 300/303 for the token pair A-B. If the actual ratio falls outside their specified bounds, the slippage check would logically fail as well. The user must also indicate the pool's new ratio after their transaction has been successfully executed. All of this would be based on off-chain values obtained from an API/RPC or similar.
In essence, users would compute the necessary pool ratio to ensure slippage acceptance and transaction validation AND the new pool ratio post-swap.
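To make this concrete, here is a minimal sketch of the user-side (wallet) computation, assuming a constant-product AMM (x * y = k, fees ignored). All function and field names here are illustrative, not part of any MultiversX API:

```python
# Hypothetical wallet-side computation: derive the swap output, the pool-ratio
# window the user consents to, and the claimed post-swap pool ratio.
# Assumes a constant-product pool (x * y = k); fees are ignored for brevity.

def swap_output(reserve_in: float, reserve_out: float, amount_in: float) -> float:
    """Tokens received for amount_in under x * y = k."""
    k = reserve_in * reserve_out
    return reserve_out - k / (reserve_in + amount_in)

def build_tx_fields(reserve_a: float, reserve_b: float,
                    amount_in: float, slippage: float):
    """Extra fields the user would attach to the transaction:
    expected output, consented ratio window, and claimed post-swap ratio."""
    expected_out = swap_output(reserve_a, reserve_b, amount_in)
    ratio = reserve_a / reserve_b
    # Ratio window within which the slippage limit still holds.
    ratio_window = (ratio * (1 - slippage), ratio * (1 + slippage))
    # Pool state the user claims will hold after their swap executes.
    post_a = reserve_a + amount_in
    post_b = reserve_b - expected_out
    return expected_out, ratio_window, post_a / post_b

# Example: pool at 200/202 (the ratio from the paragraph above), swap 10 A for B.
out, window, post_ratio = build_tx_fields(200.0, 202.0, 10.0, 0.01)
```

The proof would then attest that these fields were computed correctly from the quoted reserves, without the validator re-running the contract.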
Checking the current account (DEX smart contract) balance is more efficient than executing the entire smart contract code.
The block proposer could verify a transaction’s tentative validity by comparing the user-specified pool ratio to the current pool ratio (the actual smart contract balance ratio if the transaction was the first executed in the block, or the new pool ratio as determined by the initial swap transaction).
For example:
- If the user’s transaction was the first added to the block, the pool ratio they included in their transaction and ZK proof should roughly align with the smart contract’s pool ratio. They would also need to indicate the new current pool ratio, post-swap as explained earlier.
- If subsequent transactions occur, the node would need to verify if the pool ratio (for ensuring slippage alignment) matches the NEW current pool ratio, adjusted by the first-executed transaction. Conveniently, the prior transaction would also have provided the new pool ratio post-swap.
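The two bullets above can be sketched as a single proposer-side loop. This is a hedged illustration, not a protocol spec; each transaction carries the ratio window it consents to and the post-swap ratio it claims (names are hypothetical):

```python
# Illustrative proposer-side check: walk the candidate swaps in order,
# accepting a tx only if the current pool ratio falls inside its consented
# window, then chain the tx's claimed post-swap ratio into the next check.

def order_block(pool_ratio: float, txs):
    """txs: list of ((lo, hi) ratio window, claimed_post_ratio) pairs.
    Returns the tentatively accepted txs and the resulting pool ratio."""
    accepted = []
    current = pool_ratio
    for window, post_ratio in txs:
        lo, hi = window
        if lo <= current <= hi:      # slippage bound still satisfiable
            accepted.append((window, post_ratio))
            current = post_ratio     # next tx is checked against the ratio
        # else: tx rejected/deferred #   left behind by this swap
    return accepted, current
```

Note that this comparison is just two floating-point bound checks per transaction, versus a full contract execution today.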
Yet, a challenge remains: how would a node update the state? What's the blockchain's new state? Even if the slippage bound is met, users probably received a marginally different token amount due to price variations and the random transaction shuffling. How can we determine the definitive amount without computation? We'd need to compute something again.
However, this computation can now be more efficient.
The block proposer wouldn't need to execute the contract to ascertain transaction validity; the proof and the ratio check already establish it. These transactions would thus be marked as tentatively valid. Once the block proposer stops adding new transactions, they can compute the collective outcome of all tentatively accepted transactions, processing them in one large batch.
This method is time-efficient because the swaps' validity can be proven much more quickly, and the final amounts can be batch-executed, bypassing the need to run the entire smart contract every time, thus saving overhead and CPU time.
By defining both the approved ratios and the expected post-transaction ratio, users essentially offer a “window” of valid states for their transaction. This window can be swiftly compared to the liquidity pool’s current state. After accumulating all provisionally accepted transactions, the block proposer can compute the collective outcome. This batched processing minimizes the overhead linked with individual transaction validation, accelerating overall block processing.
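The batch settlement step could look something like the following sketch. It reuses the constant-product assumption from earlier; the function name and shape are hypothetical:

```python
# Illustrative batch settlement: after tentative acceptance, the proposer runs
# every accepted swap once, in order, against the real reserves, producing the
# definitive output amounts and the final pool state in a single pass.
# Assumes a constant-product pool (x * y = k); fees ignored.

def settle_batch(reserve_a: float, reserve_b: float, amounts_in):
    """Execute all tentatively accepted A->B swaps and return the
    definitive outputs plus the final pool reserves."""
    outputs = []
    for amount in amounts_in:
        k = reserve_a * reserve_b
        reserve_a += amount
        out = reserve_b - k / reserve_a
        reserve_b -= out
        outputs.append(out)
    return outputs, (reserve_a, reserve_b)
```

The per-swap work here is a handful of arithmetic operations rather than a full WASM contract invocation, which is where the claimed savings would come from.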
But what if a user submits outdated data in the “demanded pool-ratio” fields? A good point. In most scenarios this wouldn’t be a significant issue, since price fluctuations are usually small enough to stay within the slippage limit, and I presume most transactions would use the latest available data. But I could be wrong, as I’m not an observer operator. One potential idea: transactions whose stale pool-ratio data doesn’t match the current pool ratio could be processed after all other swaps. That way, their execution bounds are checked against updated data.
If an old ratio transaction is the first executed, the “expected output pool-ratio” will differ from the real expected output-pool ratio. This means subsequent swaps might fail as their specified pool ratio might be outside the current pool ratio’s bounds, adjusted by the previously executed swap.
Not ideal.
Neither is my solution, as it somewhat disrupts the random transaction shuffling, allowing transaction placement predictions within a block based on transaction input data. If the data is outdated, it’s placed further back in the block. This might make MEV and front-running easier, especially if users have slow internet or if the API/RPC provides them with slightly outdated pool ratio values.
A small window should allow for minor differences.
In essence, you could argue that outdated pool ratio values aren’t problematic as long as they don’t deviate more than 0.05% or 0.1% from the actual pool ratio. In most scenarios, this means even transactions based on old data would be processed equally to all others. Only during intense price fluctuations would outdated data fall outside this window. But in such cases, the slippage might have caused the transaction to fail anyway.
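The tolerance test described above is a one-line check. A tiny sketch, with an assumed 0.1% window (the exact threshold would be a protocol parameter):

```python
# Hypothetical freshness check: a quoted pool ratio counts as "fresh enough"
# if it deviates from the live pool ratio by at most a small relative
# tolerance (0.1% here). Transactions failing this check would be deferred
# to the end of the block rather than rejected outright.

def within_window(quoted_ratio: float, live_ratio: float,
                  tolerance: float = 0.001) -> bool:
    return abs(quoted_ratio - live_ratio) / live_ratio <= tolerance

assert within_window(0.9905, 0.9900)        # ~0.05% off: processed normally
assert not within_window(1.0100, 0.9900)    # ~2% off: deferred
```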
Benefits of implementing ZK STARKs:
- Lower resource usage on validator nodes
- Faster transaction validation
- More gas per block can be added to maximize throughput (because more CPU time is now available to verify more transactions!)
- By promoting wallet-based calculations of the smart contract code, reliance on central entities or servers is reduced, mitigating potential censorship and centralization risks.
- ZK STARKs are, to my knowledge, post-quantum resistant and do not require a secure, trust-based setup as ZK SNARKs do.
Finally, since the wallet is already executing the smart contract code, it could also display the expected outcome to the user directly inside xPortal, as a nice UX feature.
I am eager to hear your opinion on my idea. I hope this can be transformed into a real MIP one day.