Accelerator update
This section provides instructions and guidance for interacting with a TON node at a low level. If you are using MyTonCtrl, check the Collators and validators guide for a more user-friendly experience.
The key feature of TON Blockchain is its ability to distribute transaction processing across network nodes, shifting from "everybody checks all transactions" to "every transaction is checked by a secure subset of validators". This ability to scale throughput horizontally and without limit across shards, with a WorkChain splitting into as many ShardChains as required, distinguishes TON from other L1 networks.
However, to prevent collusion, it is necessary to regularly rotate the validator subsets that process each shard. At the same time, to process transactions, validators obviously need to know the state of the shard before the transaction. The most straightforward approach is to require all validators to know the state of all shards.
This approach works well while the number of TON users is in the range of a few million and the TPS (transactions per second) is under 100. However, in the future, when TON Blockchain processes many thousands of transactions per second for hundreds of millions or billions of people, no single server will be able to hold the current state of the entire network. Fortunately, TON was designed with such loads in mind and supports sharding of both throughput and state updates.
Accelerator
Accelerator is an updated design to improve blockchain scalability. Its main features are:
- Partial nodes: A node can monitor specific shards of the blockchain instead of the entire set of shards.
- Liteserver infrastructure: Liteserver operators can configure each LS to monitor a set of shards, and lite-clients can select a suitable LS for each request.
- Collator/validator separation: Validators can monitor only the MasterChain, significantly reducing their load. Validators can use collator nodes to collate new shard blocks.
Partial nodes
Previously, each TON node was required to download all shards of the TON blockchain, which limited scalability.
To address this issue, the main feature of the update allows nodes to monitor only a subset of shards.
A node monitors a shard by maintaining its shard state and downloading all new blocks within that shard. Notably, each node continuously monitors the MasterChain.
The BaseChain includes a parameter called `monitor_min_split` in `ConfigParam 12`, which is set to `2` in the Testnet. This parameter divides the BaseChain into `2^monitor_min_split = 4` groups of shards:
- Shards with the prefix `0:2000000000000000`
- Shards with the prefix `0:6000000000000000`
- Shards with the prefix `0:a000000000000000`
- Shards with the prefix `0:e000000000000000`
Nodes can only monitor an entire group of shards at once. For instance, a node can choose to monitor all shards with the prefix `0:2000000000000000`. However, it cannot selectively monitor just `0:1000000000000000` without also including `0:3000000000000000`.
It is guaranteed that shards from different groups will not merge. This ensures that a monitored shard will not unexpectedly merge with a non-monitored shard.
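To make the grouping concrete, here is a minimal shell sketch, assuming the Testnet value `monitor_min_split = 2`, that derives the group prefixes: each group is identified by its index in the top `monitor_min_split` bits of the shard ID, followed by a single marker bit set to 1.

# Derive the 2^monitor_min_split shard-group prefixes (sketch, split = 2 as in Testnet)
split=2
for ((i = 0; i < (1 << split); i++)); do
  # group prefix = index i in the top $split bits, then one marker bit set to 1
  printf '0:%016x\n' $(( (2 * i + 1) << (63 - split) ))
done
# Output:
# 0:2000000000000000
# 0:6000000000000000
# 0:a000000000000000
# 0:e000000000000000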
The current value of `monitor_min_split` in Mainnet is `0`, which means that all shards are in one group. The actual value can always be checked in `ConfigParam 12`.
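For example, you can query it with `lite-client` (the global config path here is a placeholder):

# Fetch ConfigParam 12, which contains the workchain configuration, including monitor_min_split
lite-client -C global.config.json -c 'getconfig 12'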
Node configuration
- By default, a node monitors all shards. You can turn off this behavior by adding the `-M` flag to the `validator-engine`.
- When you use the `-M` flag, the node will only monitor the MasterChain. If you want to monitor specific BaseChain shards, use the `--add-shard <wc:shard>` flag. For example:
validator-engine ... -M --add-shard 0:2000000000000000 --add-shard 0:e000000000000000
- These flags will configure the node to monitor all shards with the prefixes `0:2000000000000000` and `0:e000000000000000`. You can either add these flags to an existing node or launch a new node with them included.
Notes:
- DO NOT add these flags to a node that is participating in validation. Currently, validators are required to monitor all shards; this will be improved in future updates, allowing them to monitor only the MasterChain.
- If you use the `-M` flag, the node will begin downloading any missing shards, which may take some time. This is also true if you add new shards later using the `--add-shard` flag.
- The command `--add-shard 0:0800000000000000` will add the entire shard group associated with the prefix `0:2000000000000000` due to the `monitor_min_split` configuration (see the sketch after this list).
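The reverse mapping, from an arbitrary shard prefix to its group, follows the same arithmetic. A short sketch, again assuming `monitor_min_split = 2`, showing why `0:0800000000000000` falls into the `0:2000000000000000` group:

# Keep the top monitor_min_split bits of the shard ID, then set the marker bit
split=2
shard=$(( 0x0800000000000000 ))
mask=$(( ((1 << split) - 1) << (64 - split) ))
printf '0:%016x\n' $(( (shard & mask) | (1 << (63 - split)) ))
# Output: 0:2000000000000000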
Low-level configuration
The `--add-shard` flag is a shorthand for specific validator console commands.
A node stores a list of shards to monitor in the config (see the file `db/config.json`, section `shards_to_monitor`).
This list can be modified using `validator-engine-console`:
add-shard <wc>:<shard>
del-shard <wc>:<shard>
The `--add-shard X` flag is equivalent to the `add-shard X` command.
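For example, a hedged invocation, assuming the client key and server public key generated during the standard node setup (file names and address are placeholders):

# Add a shard group to the monitored set via the validator console
validator-engine-console -k client -p server.pub -a 127.0.0.1:4441 -c "add-shard 0:2000000000000000"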
Lite client configuration
If you have multiple liteservers, each configured to monitor specific shards, you can list them in the `liteservers_v2` section of the global config. See the example:
{
"liteservers_v2": [
{
"ip": 123456789, "port": 10001,
"id": { "@type": "pub.ed25519", "key": "..." },
"slices": [
{
"@type": "liteserver.descV2.sliceSimple",
"shards": [
{ "workchain": 0, "shard": 2305843009213693952 },
{ "workchain": 0, "shard": -6917529027641081856 }
]
}
]
},
{
"ip": 987654321, "port": 10002,
"id": { "@type": "pub.ed25519", "key": "..." },
"slices": [
{
"@type": "liteserver.descV2.sliceSimple",
"shards": [
{ "workchain": 0, "shard": 6917529027641081856 },
{ "workchain": 0, "shard": -2305843009213693952 }
]
}
]
}
],
"validator": "...",
"dht": "..."
}
This config includes two liteservers:
- The first one monitors shards with the prefixes `0:2000000000000000` and `0:a000000000000000`.
- The second one monitors shards with the prefixes `0:6000000000000000` and `0:e000000000000000`.
Both liteservers monitor the MasterChain, so it is not necessary to include it explicitly in the configuration.
Note:
- To obtain the value for `"shard": 6917529027641081856`, convert the shard ID in hexadecimal (`6000000000000000`) to decimal within the range `[-2^63, 2^63)`, as sketched after this list.
- Both `lite-client` and `tonlib` support this new global configuration format. Clients select the appropriate liteserver for each request based on its shard.
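A quick way to perform this conversion is 64-bit shell arithmetic, which wraps hex literals into the signed range `[-2^63, 2^63)` automatically:

# Bash integers are signed 64-bit, so values at or above 2^63 wrap to negative
echo $(( 0x6000000000000000 ))   # 6917529027641081856
echo $(( 0xa000000000000000 ))   # -6917529027641081856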
Proxy liteserver
A proxy liteserver is a server designed to accept standard liteserver queries and forward them to other liteservers.
Its primary purpose is to create a single liteserver that functions as a liteserver (LS) for all shards while distributing incoming queries to the appropriate child liteservers behind the scenes. This setup eliminates the need for clients to maintain multiple TCP connections for different shards. It enables older clients to interact with sharded liteservers through the proxy.
Usage:
proxy-liteserver -p <tcp-port> -C global-config.json --db db-dir/ --logname ls.log
List all child liteservers in the global config. These can be partial liteservers, as shown in the example above.
To use the proxy liteserver in clients, create a new global config with this proxy in the `liteservers` section. See `db-dir/config.json`:
{
"@type" : "proxyLiteserver.config",
"port" : 10005,
"id" : {
"@type" : "pub.ed25519",
"key" : "..."
}
}
This file contains the port and public key for the proxy liteserver. You can copy these details to the new global configuration.
The key is generated upon the first launch and remains unchanged after any restarts.
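For illustration, the copied details would sit in the `liteservers` section of the new global config like this (the `ip` value is a placeholder for the proxy's public address encoded as an integer):

{
  "liteservers": [
    {
      "ip": 123456789,
      "port": 10005,
      "id": { "@type": "pub.ed25519", "key": "..." }
    }
  ],
  "validator": "...",
  "dht": "..."
}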
If you need to use an existing private key, place the private key file in `db-dir/keyring/<key-hash-hex>` and launch `proxy-liteserver` with the `--adnl-id <key-hash-hex>` flag.
Collator/validator separation
Currently, Testnet and Mainnet validators function as follows:
- All validators monitor all shards.
- For each shard, a validator group is randomly selected to generate and validate new blocks.
- Within this validator group, validators collate (generate) new block candidates one by one, while other validators validate and sign them.
Changes introduced in the accelerator update are as follows:
- Validators will monitor only the MasterChain, significantly reducing their workload (this feature is not yet enabled in Testnet).
- The process for selecting validator groups and signing blocks remains unchanged.
- MasterChain validators will continue to collate and validate blocks as before.
- The collation of a shard block requires monitoring the shard. To address this, a new type of node, called a collator node, is introduced. Shard validators will send requests to collator nodes to generate block candidates.
- Validators will still validate blocks themselves. Collators will attach collated data (proof of shard state) to blocks, allowing for validation without the need to monitor the shard.
In the current `master` branch of the node, validators must still monitor all shards. However, you can launch collator nodes and configure your validators to collate through them.
Launching a collator node
To configure a collator node, use the following commands in `validator-engine-console`:
new-key
add-adnl <key-id-hex> 0
add-collator <key-id-hex> <wc>:<shard>
The `new-key` and `add-adnl` commands create a new ADNL address, while `add-collator` starts a collator node for the specified shard using this ADNL address.
A collator for shard `X` can create blocks for all shards that are either ancestors or descendants of `X`. For example, a collator for `0:2000000000000000` can also produce blocks for `0:4000000000000000` (an ancestor) or `0:1000000000000000` (a descendant). However, collator nodes cannot create blocks for the MasterChain; they are limited to the BaseChain.
In a simple scenario, you can use a node that monitors all shards and launch a collator for all of them by running `add-collator <key-id-hex> 0:8000000000000000`.
Alternatively, you can launch a partial node that monitors and collates only a subset of shards. For example, to launch a node with the flags `-M --add-shard 0:2000000000000000`, start the collator with the command `add-collator <key-id-hex> 0:2000000000000000`. This collator will generate blocks in the designated group of shards; the end-to-end sequence is sketched below.
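Putting the pieces together, a minimal end-to-end sketch for this partial-collator scenario (the `validator-engine` arguments elided with `...` stay as in your usual setup):

# 1. Launch a partial node that monitors a single shard group
validator-engine ... -M --add-shard 0:2000000000000000

# 2. In validator-engine-console: create an ADNL address and start the collator
new-key                                      # prints <key-id-hex>
add-adnl <key-id-hex> 0
add-collator <key-id-hex> 0:2000000000000000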
Notes:
- A collator node generates blocks automatically, even without requests from validators.
- A collator node configured to generate blocks for a specific shard does not need to monitor other shards. However, it does require access to outbound message queues from neighbouring shard states for collation. This is accomplished by downloading these message queues from other nodes that monitor the relevant shards.