MultiversX as Settlement Layer and Data Availability Layer for Sovereign Chains - MIP-12

We can work towards modularity, with dedicated components and development, to make SovereignShards the best place for any application-specific chain to go. At the current phase, the MultiversX mainnet is well integrated into the operation of the SovereignChain nodes, but we can do more to offer more security, more interoperability, and cross-chain interactions.

Even for SovereignChain-to-MultiversX interactions, we can make updates that offer more security for all users, funds and applications. Some of the primitives to watch are the “Data Availability Layer”, the “Settlement Layer”, and “ZK/validity/optimistic proofs”.

So we need to integrate a set of modules, make the case for each of the aforementioned elements, and search for the open-source code/partner that is the easiest to integrate. When the MultiversX mainchain becomes the settlement and data availability layer (mostly for redundancy and security), SovereignChains will use eGLD as gas for these operations, as we extend how much is written/inscribed into the mainchain.

MultiversX mainchain as Settlement Layer for Sovereign Shards

In order to do this, people need an escape mechanism from the sovereign shards, even in the case of the SovereignChain not working. This means a manual way of retrieving tokens that were put into the SovereignChains when the nodes of that shard become malicious. There are different levels of safety here, and we have to decide what to implement and in what order. The end goal is ZK proofs written into the mainchain which attest to the correctness of everything executed on the sovereign shards, and which also give users an escape route if nodes stop producing blocks on the Sovereign. But until then (more research and PI^2 work is needed here), we can still create some of these properties for users.

1. First stage: RootHash and Merkle Proofs - challenge period

The SovereignChain will post the blockHeader, which contains the RootHash of the SovereignChain, to the mainchain at every block. The multiSig verifier contract will keep the last 10K roothashes (the number can change) in a circular list.
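The circular list described above can be sketched as follows. This is a minimal illustration, not the verifier contract's actual storage layout; the class name `RootHashStore` and its methods are hypothetical.

```python
from dataclasses import dataclass

MAX_ROOT_HASHES = 10_000  # "last 10K roothashes"; the number can change


@dataclass
class RootHashStore:
    """Hypothetical sketch of the multiSig verifier's circular roothash list."""
    capacity: int = MAX_ROOT_HASHES

    def __post_init__(self):
        self.hashes = [None] * self.capacity
        self.next_nonce = 0  # nonce of the next header+roothash to be posted

    def push(self, root_hash: bytes) -> None:
        # Overwrites the entry posted `capacity` blocks ago.
        self.hashes[self.next_nonce % self.capacity] = root_hash
        self.next_nonce += 1

    def get(self, nonce: int) -> bytes:
        # Only the most recent `capacity` entries are still retrievable.
        if not (self.next_nonce - self.capacity <= nonce < self.next_nonce):
            raise KeyError("root hash evicted or not yet posted")
        return self.hashes[nonce % self.capacity]
```

A challenge (see below) can then only reference a roothash that is still inside the retention window, which bounds the contract's storage cost.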

Starting from the rootHash, the user can at any time post on the mainchain a request/challenge for his funds to be transferred to the mainchain. The user's challenge will contain multiple Merkle proofs demonstrating that he has funds in his account. If there is no response to the challenge within the next 3 days, the user will be able to withdraw the selected tokens and amounts. Invalidation of the challenge can come in 2 ways:

  • the Merkle proof is no longer valid, as the user has consumed his tokens;
  • proof that the token bridge is working.

This system does not resolve the problem when >67% of the validators from the sovereign shards are malicious. To resolve that scenario we will need to introduce ZK proofs which contain proofs of signature for each user operation in the system, thus creating a complete rollup.

The leader is obliged to post the header+roothash, otherwise his rating will decrease. The SovereignChainHeaderVerifierSC will emit an event when someone posts the header+roothash on the mainchain. This event is caught by the SovereignNotifier and pushed to all the validators. Validators will include this event in one of the next blocks to increase/decrease the rating in the ValidatorStatistics.

A set of configs will be added: a timeframe in which the header+roothash has to be sent (once per X blocks) and a timeframe for how late a leader can be when sending a proof. A SovereignShard can post a header+roothash multiple times per block and up to 1 proof per 2 minutes. If a SovereignShard posts a proof every minute, it will post 1,440 proofs per day, each costing around 0.00138 eGLD - around 2 eGLD per day at the current price.
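The cost estimate above can be checked with a quick back-of-the-envelope calculation. The per-proof fee is the document's approximate figure, not a measured mainchain fee.

```python
# Settlement-layer posting cost, using the estimates stated above.
MINUTES_PER_DAY = 24 * 60           # 1440
COST_PER_PROOF_EGLD = 0.00138       # approximate fee per header+roothash post


def daily_settlement_cost(proofs_per_day: int) -> float:
    return proofs_per_day * COST_PER_PROOF_EGLD


# One proof per minute -> 1440 proofs -> roughly 2 eGLD per day.
cost = daily_settlement_cost(MINUTES_PER_DAY)
```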

2. The second/final stage: creating full rollup data and pushing it to the mainchain

The prerequisites for this are ZK-WASM / ZK-EVM (as used in some L2s), which create full rollup + validity data as the block is executed on the sovereign chains. This will be a case-by-case solution, depending on the VM and running mode each SovereignChain uses. PI^2 technology is being explored in order to create complete proofs of state changes.

MultiversX mainchain as Data Availability Layer for Sovereign Chains

Data Availability is a huge topic nowadays in the ETH L2 space: people are choosing different DA layers for cost reduction (choosing Celestia/Near over Ethereum), and some use the modular approach of having one chain for execution, one for DA, and one for settlement. The problems with modularity are manifold: latency between each of those chains, and lower security - what happens if DA, Execution or Settlement fails, or a combination of those? There is a lot of coding to be done for those cases. DA has to be a backup, not the main thing, and the same goes for the settlement layer. It is there for security reasons: restarts, forking, malicious takeover.

In order to enhance the security of SovereignChains, we choose to use the MultiversX mainchain as the DataAvailability and Settlement layer (step 1 explained above). And as DataAvailability is not at all about one global synchronous state, sharding actually helps us here: we can put the DA data for separate sovereigns on separate shards.

If the components are well built, we can reuse them to offer settlement on other L1s, like ETH/BTC. In the case of ETH L2s which connect to DataAvailability layers such as Celestia, Avail, NearDA and, in the future, ETH Danksharding, validators keep the data and create KZG commitments in order to assure the outside world that they keep the data. If we look at the current state of ETH, the Ethereum validators do not offer any mathematical/economical guarantee that the data is going to be kept; it is a social consensus that transaction storage is not deleted. L2s are writing to tx.CallData - which is saved in TX.Storage, not in the trie. With proto-danksharding (EIP-4844), the change is that you have a new TX.Blob in which L2s can post their state changes, and these state changes are programmatically kept in storage for around 18 days. Later, with the full implementation, validators on ETH will post a proof in the block that they have the data from the last 18 days. But this is not yet fully implemented.

For DataAvailability, the SovereignChain needs to post only the state changes, in a compressed format, every X blocks. Together with the roothash+header posted in the previous step (settlement layer), validation of these stateChanges can happen directly: anyone can recreate the full state from the stateChanges and post a challenge if the resulting roothash is different from the one put onchain. In the case of a successful challenge, validators would get their staked eGLD slashed (x% or X amount). The hash of all stateChanges could also be added to the SovereignShard header, which enters the signature phase of the consensus.
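The replay-and-challenge flow can be sketched as follows. The compressed format (deterministic JSON + zlib) and the flat state hash are stand-ins chosen for illustration; the real roothash comes from the Merkle-Patricia trie, and the real encoding of stateChanges is not defined here.

```python
import hashlib
import json
import zlib


def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


def compress_state_changes(changes: dict[str, int]) -> bytes:
    # Hypothetical compressed format: deterministic JSON + zlib.
    return zlib.compress(json.dumps(changes, sort_keys=True).encode())


def recompute_root(prev_state: dict[str, int], blob: bytes) -> bytes:
    """Replay posted stateChanges and hash the resulting state.

    A stand-in for recreating the trie root from the DA blob.
    """
    state = dict(prev_state)
    state.update(json.loads(zlib.decompress(blob)))
    return h(json.dumps(state, sort_keys=True).encode())


def should_challenge(prev_state: dict[str, int],
                     blob: bytes,
                     posted_root: bytes) -> bool:
    # A watcher challenges (triggering slashing) when the roots disagree.
    return recompute_root(prev_state, blob) != posted_root
```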

If a SovereignChain puts the compressed stateChanges onchain every 8 minutes, that is 180 commits per day; at, say, 10KB per commit, this means around 2 eGLD per day. These numbers are tiny compared to L2s on Ethereum: 1 day of fees for Polygon is 54K USD, and for Arbitrum/Optimism it is >500K USD. Adding the settlement step, it would cost around 4 eGLD per day per SovereignShard in one of our setups.
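Combining the two cost estimates gives the total below. All constants are the document's stated estimates, not measured fees.

```python
# Rough total daily cost per SovereignShard, from the figures above.
COMMIT_INTERVAL_MIN = 8
COMMITS_PER_DAY = 24 * 60 // COMMIT_INTERVAL_MIN  # one commit per 8 min -> 180

DA_COST_PER_DAY_EGLD = 2.0           # ~10KB commits, stated estimate
SETTLEMENT_COST_PER_DAY_EGLD = 2.0   # from the settlement-layer step


def total_daily_cost() -> float:
    # DA posting plus header+roothash settlement posting.
    return DA_COST_PER_DAY_EGLD + SETTLEMENT_COST_PER_DAY_EGLD
```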

If we do not change anything on the mainnet, we have DataAvailability the same way as Ethereum has. Researching and adding KZG commitments could make the mainnet compatible with other DAs, using an industry standard. Although, a set of historical nodes and economics design for that would be more than sufficient. If we have KZG commitments, the mainchain could become a DA layer for other ETH/BTC L2s as well.