Contents
This document explains the contents of the rc/v1.6.0 release, codenamed Sirius. It is split into three sections:
- the features list, containing detailed insights into each feature, along with its external impact and the list of relevant pull requests
- the smaller features and fixes area, containing the one-pull-request small features or fixes, along with their external impact details
- the merges area, containing the list of pull requests used to update feature branches as the work progressed on multiple areas at once; it is here for reference purposes only
Features
1. Optimise consensus signature check #4467
A new feature optimizes CPU usage by changing how block signatures are verified during consensus. Previously, each signature share for the block consensus proof was individually verified, costing about 3ms per signature. With ~400 signatures in the metachain block, this takes around 1.2s of CPU time per block, with an additional 200ms for the shards. The optimization involves assuming all signature shares are valid during aggregation and only verifying the aggregated result. This verification time was reduced to 3ms in a previous update (PR #4314). However, if an aggregated signature is invalid, it introduces an extra 3ms cost, as the leader will need to check each signature individually. To keep the optimization consistent, a penalty, termed "pseudo slashing", will be imposed for submitting invalid signatures. This penalty involves an additional consensus message that provides proof of the invalid signatures; it isn't mandatory but, if sent, will allow nodes to temporarily blacklist misbehaving nodes. Blacklisted nodes will face dropped messages, rating drops, potential jailing, loss of block rewards/fees, and a waiting period before they can join consensus again.
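A minimal sketch of the fast-path/fallback idea, using hypothetical interfaces rather than the node's actual crypto API:

```go
package consensus

import "errors"

// Verifier abstracts BLS signature verification (hypothetical interface).
type Verifier interface {
	VerifyAggregated(aggSig []byte, msg []byte, pubKeys [][]byte) error
	VerifyShare(sig []byte, msg []byte, pubKey []byte) error
}

// checkSignatures assumes all shares are valid, aggregates them and verifies
// only the aggregated result (~3ms). Only on failure does it fall back to
// per-share verification to identify the misbehaving signers.
func checkSignatures(v Verifier, aggregate func([][]byte) ([]byte, error),
	sigShares [][]byte, msg []byte, pubKeys [][]byte) ([]int, error) {

	aggSig, err := aggregate(sigShares)
	if err != nil {
		return nil, err
	}
	if v.VerifyAggregated(aggSig, msg, pubKeys) == nil {
		return nil, nil // fast path: one verification instead of ~400
	}

	// slow path: find the invalid shares so the leader can build the
	// "pseudo slashing" proof against the misbehaving nodes
	var invalid []int
	for i, sig := range sigShares {
		if v.VerifyShare(sig, msg, pubKeys[i]) != nil {
			invalid = append(invalid, i)
		}
	}
	if len(invalid) == 0 {
		return nil, errors.New("aggregated signature invalid but all shares valid")
	}
	return invalid, nil
}
```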
Full description here.
2. Refactor resolvers #4692
This feature resolves technical debt by splitting the functionality of the resolver component in two: the resolver should only respond to requests coming from the p2p network, while the requester should only request missing information (send the request packet on the p2p network).
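The split can be sketched with two narrow interfaces (illustrative, not the exact ones from the codebase):

```go
package dataretriever

// Resolver only answers requests arriving from the p2p network.
type Resolver interface {
	ProcessReceivedMessage(msg []byte, fromPeer string) error
}

// Requester only sends request packets for missing information.
type Requester interface {
	RequestDataFromHash(hash []byte, epoch uint32) error
}
```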
Full description here.
3. Multikey #4741
This feature adds multikey support, enabling the node to sign on behalf of more than one key. For it to function, a multikey node is required for each shard of the chain (including the metachain), and the exact same set of keys should be given to all those nodes. A detailed description of the feature is available on the multikey docs page.
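Conceptually, the node holds a set of managed keys and signs with whichever one the current consensus requires; a minimal sketch with hypothetical types:

```go
package multikey

import "fmt"

// Signer is a placeholder for a single-key BLS signer.
type Signer interface {
	Sign(msg []byte) ([]byte, error)
}

// KeysHandler holds all the keys managed by a multikey node.
type KeysHandler struct {
	signers map[string]Signer // hex public key -> signer
}

// SignWith signs on behalf of one of the managed keys, if present.
func (kh *KeysHandler) SignWith(pubKey string, msg []byte) ([]byte, error) {
	s, ok := kh.signers[pubKey]
	if !ok {
		return nil, fmt.Errorf("key %s is not managed by this node", pubKey)
	}
	return s.Sign(msg)
}
```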
Full description here.
4. PubKeyConverter refactor #4716
The original implementation of the PubkeyConverter.Encode method did not propagate the error to its caller. Instead, it logged the error and returned an empty string. This behavior can be problematic as it might hide potential issues and security threats. Furthermore, the logger's presence in the PubKeyConverter constructor became unnecessary, so the hrp was given as input instead, especially considering our vision for sovereign shards where each shard could have its own unique human-readable part (hrp).
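The refactor can be illustrated as follows (a sketch; the actual signatures and encoding details may differ):

```go
package pubkeyconverter

import "fmt"

// Bech32Converter sketches the refactored converter: the constructor takes
// the human-readable part (hrp) instead of a logger, and Encode propagates
// errors instead of swallowing them.
type Bech32Converter struct {
	addressLen int
	hrp        string // e.g. "erd" on mainnet; sovereign shards may use their own
}

func NewBech32Converter(addressLen int, hrp string) (*Bech32Converter, error) {
	if len(hrp) == 0 {
		return nil, fmt.Errorf("empty hrp")
	}
	return &Bech32Converter{addressLen: addressLen, hrp: hrp}, nil
}

// Encode returns the bech32 representation of pkBytes, or an error.
// Before the refactor it logged the error and returned "".
func (c *Bech32Converter) Encode(pkBytes []byte) (string, error) {
	if len(pkBytes) != c.addressLen {
		return "", fmt.Errorf("wrong public key length: %d", len(pkBytes))
	}
	return encodeBech32(c.hrp, pkBytes)
}

// encodeBech32 is a placeholder for the actual bech32 encoding; the point
// of the sketch is the (string, error) signature.
func encodeBech32(hrp string, data []byte) (string, error) {
	return fmt.Sprintf("%s1%x", hrp, data), nil
}
```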
Full description here.
5. Peers rating handler fixes #4800
This feature fixes several bugs in the p2p peers rating components, which are used whenever a node tries to determine which p2p peers have a better response history, in order to maximize the chance of receiving a response.
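The rating mechanics can be pictured roughly like this (illustrative only):

```go
package rating

import "sort"

// peersRatingHandler tracks peers' response history so that requests are
// preferentially sent to peers more likely to answer (a rough sketch).
type peersRatingHandler struct {
	ratings map[string]int // peer ID -> rating
}

// rate adjusts a peer's rating: positive on response, negative on timeout.
func (h *peersRatingHandler) rate(pid string, delta int) {
	h.ratings[pid] += delta
}

// topRatedPeers returns up to num peers with the best response history.
func (h *peersRatingHandler) topRatedPeers(num int) []string {
	peers := make([]string, 0, len(h.ratings))
	for pid := range h.ratings {
		peers = append(peers, pid)
	}
	sort.Slice(peers, func(i, j int) bool {
		return h.ratings[peers[i]] > h.ratings[peers[j]]
	})
	if len(peers) > num {
		peers = peers[:num]
	}
	return peers
}
```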
Full description here.
6. Governance v3 #4879
The voting smart contract is implemented in Go and runs on the system VM alongside the other known contracts (staking, auction, delegation). The voting contract currently does not have any enforcement; proposals are voted on only to keep track of decisions.
MinQuorum, MinPassThreshold and MinVetoThreshold can be expressed in terms of either stake or voting power; the result is the same. We also have a MaxDuration - the maximum number of epochs a proposal can run - which prevents users from locking their tokens in the Governance contract for a very long time if the proposal is made incorrectly or maliciously with a very high duration, and a ProposalFee - to prevent spam (when more community addresses are whitelisted or the governance becomes completely open).
Once a proposal gets registered and the proposal fee is paid, the voting can run from the start vote nonce until the end vote nonce. After the EndVotePeriod, any whitelisted address can send one more transaction in order to close the proposal; at that time it will be calculated whether the proposal passed or not. With that transaction, all the votes for that selected proposal will be deleted from the trie. For every valid vote there will be a new entry created in the trie with the key proposal+callerAddress. These will be deleted when a proposal is finished. The ProposalFinish call will clean all the storage and only afterwards set the proposal to the computed state - passed or not. The same proposal ("GitHub commit") cannot be set more than once.
To deploy a proposal, one might use the transaction data field formatted as:
proposal@<githubcommithash(40bytes as hex)>@startVoteEpoch@endVoteEpoch
ProposalCost: 1000eGLD
The user makes a transaction with data field vote@<proposalX>@<VoteType>: the governance contract will ask the staking, validator and delegation contracts how much staked/delegated eGLD the caller has. From the staked/delegated eGLD we compute the voting power using the quadratic formula.
The governance contract will compute the gas according to the number of storage GETs it needs to do to compute the voting power for each user. The user does not need to provide how much eGLD they staked/delegated; the governance contract will resolve this.
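Assuming the standard quadratic-voting definition, where voting power is the square root of the staked/delegated amount (an assumption - the exact formula is not spelled out here), the computation reduces to an integer square root:

```go
package governance

import "math/big"

// votingPower derives the quadratic voting power from a staked/delegated
// amount: the integer square root of the stake (assumed formula).
// E.g. a stake of 10000 units yields a voting power of 100.
func votingPower(stake *big.Int) *big.Int {
	return new(big.Int).Sqrt(stake)
}
```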
For contracts like the MultiversX Community Delegation and Liquid Staking contracts, we made a new endpoint called delegateVote@<proposalX>@<VoteType>@<delegateTo>@<balanceToVote>. First we compute the total staked/delegated for that contract, then verify that totalVoted += balanceToVote stays below totalStaked for the contract. We assign the votes to the user the contract has delegated to, recomputing the user's voting power with the quadratic formula.
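A minimal sketch of the described bookkeeping, with hypothetical names:

```go
package governance

import (
	"errors"
	"math/big"
)

// addDelegatedVote enforces that the sum of delegated vote balances for a
// contract stays below its total staked amount, as described above.
func addDelegatedVote(totalVoted, balanceToVote, totalStaked *big.Int) (*big.Int, error) {
	newTotalVoted := new(big.Int).Add(totalVoted, balanceToVote)
	if newTotalVoted.Cmp(totalStaked) >= 0 {
		return nil, errors.New("vote balance would reach or exceed the contract's total staked amount")
	}
	return newTotalVoted, nil
}
```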
Full description here.
7. DNS v2 #5045
Integrated new DNS functionalities:
- saveUserName can be called multiple times, and it will change the current username of the user
- deleteUserName will delete the username of the user
Both endpoints can be called only by special/whitelisted smart contracts.
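A sketch of the access restriction (illustrative, not the contract's actual code):

```go
package dns

import "errors"

// checkAllowedCaller rejects saveUserName/deleteUserName calls that do not
// originate from a special/whitelisted smart contract.
func checkAllowedCaller(caller string, whitelisted map[string]struct{}) error {
	if _, ok := whitelisted[caller]; !ok {
		return errors.New("username endpoints can only be called by whitelisted contracts")
	}
	return nil
}
```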
Full description here.
8. WebSocket outport driver #5142
Refactored the WebSocket outport driver. The new driver now supports sending messages marshalled with either the json marshaller or the gogo proto marshaller. The new WebSocketHost implementation can run in server or client mode. A detailed description of the feature is available on the indexer docs page.
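The new options can be pictured with a config sketch (field names are illustrative, not the exact configuration keys):

```go
package outport

// WebSocketHostConfig sketches the new driver's options: it can act as a
// WebSocket server or client, and marshal messages with JSON or gogo proto.
type WebSocketHostConfig struct {
	URL            string // address to listen on (server) or dial (client)
	Mode           string // "server" or "client"
	MarshallerType string // "json" or "gogo protobuf"
}
```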
Full description here.
9. Sync missing trie nodes #4616
If there was a bug during processing and a trie node was missing (either because it was deleted or because it was never saved in the first place), the affected validator would not have been able to move forward with block processing. Now, when a missing trie node is reached, it is synced from the network. In this way, the network nodes will be able to repair themselves.
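The repair path can be sketched as a storage lookup with a network fallback (illustrative shape, not the actual implementation):

```go
package trie

// getNode tries local storage first and falls back to requesting the node
// from the network, letting nodes repair a missing-trie-node situation.
func getNode(hash []byte, storage map[string][]byte,
	requestFromNetwork func(hash []byte) ([]byte, error)) ([]byte, error) {

	if n, ok := storage[string(hash)]; ok {
		return n, nil
	}
	n, err := requestFromNetwork(hash) // sync the missing node from peers
	if err != nil {
		return nil, err
	}
	storage[string(hash)] = n // repair local storage
	return n, nil
}
```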
Full description here.
10. Balance data tries #4636
The keys where the values are saved in a data trie are not random. Because of this, the data tries are not balanced, resulting in more intermediary nodes. When saving some data in the data trie, it is no longer saved at key but rather at hash(key). This way, the keys will be random, resulting in balanced data tries. In order for this change to be backwards compatible, a Version field was added to trie nodes. Nodes that are accessed will be automatically migrated to the new version (where the storage key is hash(key)). A builtin function was added that migrates data trie nodes when called, along with a new API endpoint that returns true if the data trie of a specified account is fully migrated.
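The change can be sketched as follows (sha256 is used for illustration; the node has its own hasher):

```go
package trie

import "crypto/sha256"

// saveToDataTrie stores the value at hash(key) instead of key, so that
// storage keys are uniformly distributed and the data trie stays balanced.
func saveToDataTrie(dataTrie map[string][]byte, key, value []byte) {
	hashedKey := sha256.Sum256(key)
	dataTrie[string(hashedKey[:])] = value
}
```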
Full description here.
11. Sharded persister #5010
Added the possibility to split a storer among more than one directory, each having its own levelDB instance. The keys are routed to their respective data shard in the same manner as wallet addresses are split among chain shards. The number of data shards is configurable, and the change is backwards compatible, as the information is saved in a file in the same directory where the data shards reside. This split will improve data access, especially when dealing with state trie nodes.
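A sketch of the key routing idea (the actual routing mirrors the address-to-shard assignment; the details here are illustrative):

```go
package storage

// shardIDForKey routes a key to one of numShards levelDB instances based on
// the key's trailing byte, mirroring how wallet addresses map to chain shards.
func shardIDForKey(key []byte, numShards uint32) uint32 {
	if len(key) == 0 || numShards == 0 {
		return 0
	}
	lastByte := uint32(key[len(key)-1])
	return lastByte % numShards
}
```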
Full description here.
12. Trie sync optimizations #5291
Improved trie sync process with parallelization. This is achieved by syncing the main trie and data tries in parallel.
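The gist of the parallelization, as a sketch (error handling elided):

```go
package trie

import "sync"

// syncAll syncs the main trie and all data tries in parallel instead of
// sequentially.
func syncAll(syncMainTrie func(), syncDataTries []func()) {
	var wg sync.WaitGroup
	wg.Add(1 + len(syncDataTries))
	go func() { defer wg.Done(); syncMainTrie() }()
	for _, syncDataTrie := range syncDataTries {
		go func(f func()) { defer wg.Done(); f() }(syncDataTrie)
	}
	wg.Wait()
}
```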
Full description here.
13. VM v1.5 #4789
This feature integrates the VM v1.5 and the smart contract processor v2. The following set of features will become available: Multi-async on a single level, ManagedBigFloats, ManagedMap, BackTransfers.
Full description here.
14. State package refactor #5334
Moved the accounts implementations into their own package. Removed duplicated code and increased code readability.
Full Description here.
15. Full archive refactor #5345
This feature is a complete refactoring of the full archive solution. The old solution was based on the idea that full archive nodes would still connect to the regular p2p network by overriding the sharding counters, but in practice it did not perform well. The new solution relies on a secondary, optional p2p network that only the full archive nodes will join. Since this network will mostly contain full archive nodes, the connections between the nodes and the request-response cycles will be optimized.
Full Description here.
16. VM-query with block coordinates #5512
Added support for VM query execution on the chain state at a provided block nonce or block hash.
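A sketch of what such a query pinned to block coordinates could carry (field names are illustrative, not the exact API):

```go
package api

// VMQueryRequest sketches a VM query executed on historical chain state
// selected by block coordinates.
type VMQueryRequest struct {
	ScAddress  string   `json:"scAddress"`
	FuncName   string   `json:"funcName"`
	Args       []string `json:"args"`
	BlockNonce uint64   `json:"blockNonce,omitempty"` // execute on state at this nonce...
	BlockHash  string   `json:"blockHash,omitempty"`  // ...or at this block hash
}
```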
Full Description here.
17. Logs & events changes #5490
Currently, we do not have information about how one contract calls another contract intra-shard. All we have are token operations. Intra-shard token operations are taken from log/events on ESDTTransfer/ESDTNFTTransfer/MultiESDTNFTTransfer and transferValueOnly log/events. So we do not know whether one contract calls another using ExecuteOnDest/ExecuteOnSame/AsyncCall, or what function is called and how (if there are logs generated by the underlying SC then we know, but that is not standard).
Full Description here.
18. Transaction execution ordering refactor #4918
The grouping of transactions in a block gives precedence to source and destination shards: transactions are grouped into miniblocks by sender and destination shard, each miniblock having a single source shard and a single destination shard for the transactions inside. This is useful for optimizing cross-shard interactions and for correct tracking by the metachain nodes, but it does not provide details about the transactions' execution order.
Currently, the transactions' execution order is re-computed after the actual execution and fed into the outport driver, which serves clients such as indexers or notifiers. In some cases, e.g. multiple smart contract results (SCRs) generated by the same transaction that need to be executed cross-shard in the same destination shard, the execution order of these individual SCRs cannot be correctly determined after execution, so all are treated as a batch and receive the same order.
In other words, the execution order of SCRs is currently estimated (post-processing) rather than taken from execution, which is less efficient and in some cases more prone to errors. With this change, the SCRs' ordering becomes available directly from the execution component, together with the execution order of the other transactions, and the ordering is integrated into the outport driver.
This feature introduces an ordered collection component that is used during transaction execution to collect the transactions and SCRs in their correct execution order, along with its integration.
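A minimal sketch of such a component (illustrative shape only):

```go
package ordering

import "sync"

// executionOrderCollector collects transactions and SCRs in the exact order
// they are executed, so the outport driver no longer has to re-compute it.
type executionOrderCollector struct {
	mut   sync.Mutex
	order []string // tx/SCR hashes in execution order
}

// Add is called at execution time, recording the item's true order.
func (c *executionOrderCollector) Add(txHash string) {
	c.mut.Lock()
	defer c.mut.Unlock()
	c.order = append(c.order, txHash)
}

// GetOrder returns a copy of the collected execution order.
func (c *executionOrderCollector) GetOrder() []string {
	c.mut.Lock()
	defer c.mut.Unlock()
	out := make([]string, len(c.order))
	copy(out, c.order)
	return out
}
```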
Full Description here.