Configuration
Directory structure
By default, the folder at ~/.kyve is where your configuration and data are saved.
. # ~/.kyve
├── data/ # Contains the application state and history.
└── config/
    ├── app.toml # Application-related configuration file.
    ├── client.toml # Client-related configuration file.
    ├── config.toml # Tendermint-related configuration file.
    ├── genesis.json # The genesis file.
    ├── node_key.json # Private key to use for node authentication in the p2p protocol.
    └── priv_validator_key.json # Private key to use as a validator in the consensus protocol.
The home directory of kyved can be specified with the global flag --home <directory>.
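For example, to keep the node's files on a separate disk you can point every command at a custom home directory; the moniker and path below are placeholders, not defaults:
./kyved init my-node --home /mnt/kyve
./kyved start --home /mnt/kyve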
Main config files
The Cosmos SDK automatically generates three configuration files inside ~/.kyve/config:
- config.toml: used to configure Tendermint; learn more in Tendermint's documentation
- app.toml: generated by the Cosmos SDK and used to configure your app, such as state pruning strategies, telemetry, gRPC and REST server configuration, state sync, JSON-RPC, etc.
- client.toml: generated by the Cosmos SDK and used to configure your client, such as the keyring backend, chain ID, or broadcast mode
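As an illustration of the kind of settings client.toml carries, a minimal sketch might look like the following; the chain ID and RPC endpoint are example values, not authoritative defaults:
chain-id = "kyve-1"               # chain to target (example value)
keyring-backend = "os"            # where keys are stored: os, file, test, ...
output = "text"                   # CLI output format: text or json
node = "tcp://localhost:26657"    # Tendermint RPC endpoint the client talks to
broadcast-mode = "sync"           # how transactions are submitted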
Peer configuration
In ~/.kyve/config/config.toml you can set your peers. We recommend setting these peers directly in the config file. Available persistent peers for Mainnet, Kaon, and Korellia are listed below.
Mainnet
We recommend adding several persistent peers. We provide initial nodes hosted in Europe and the United States.
EU:
b950b6b08f7a6d5c3e068fcd263802b336ffe047@18.198.182.214:26656
25da6253fc8740893277630461eb34c2e4daf545@3.76.244.30:26656
US:
146d27829fd240e0e4672700514e9835cb6fdd98@34.212.201.1:26656
fae8cd5f04406e64484a7a8b6719eacbb861c094@44.241.103.199:26656
Example configuration with all four peers:
persistent_peers = "b950b6b08f7a6d5c3e068fcd263802b336ffe047@18.198.182.214:26656,25da6253fc8740893277630461eb34c2e4daf545@3.76.244.30:26656,146d27829fd240e0e4672700514e9835cb6fdd98@34.212.201.1:26656,fae8cd5f04406e64484a7a8b6719eacbb861c094@44.241.103.199:26656"
Other persistent peers can be found here.
Kaon
As on Mainnet, we recommend adding several persistent peers; initial nodes are hosted in Europe and the United States.
EU:
430845649afaad0a817bdf36da63b6f93bbd8bd1@3.67.29.225:26656
b68e5131552e40b9ee70427879eb34e146ef20df@18.194.131.3:26656
US:
4f97b95345da25877da84533712795a8671b02c8@52.39.152.195:26656
164efedef3711d449604fefe88f79669ccd54447@52.10.203.131:26656
Example configuration with all four peers:
persistent_peers = "430845649afaad0a817bdf36da63b6f93bbd8bd1@3.67.29.225:26656,b68e5131552e40b9ee70427879eb34e146ef20df@18.194.131.3:26656,4f97b95345da25877da84533712795a8671b02c8@52.39.152.195:26656,164efedef3711d449604fefe88f79669ccd54447@52.10.203.131:26656"
Other peers can be found here.
Korellia
Default persistent peer:
persistent_peers = "7871d64bdc41d87582c26c86545849e6153c6676@52.58.250.62:26656"
Other persistent peers can be found here.
Note: You can look up your own node ID with the tendermint show-node-id command.
./kyved tendermint show-node-id
a23f87b6501f53f71b03099ee7f5f22a7d8c0138
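The peer address other operators add to their persistent_peers is this node ID combined with your public IP or DNS name and P2P port, i.e. <node-id>@<address>:<port>; the host below is just a placeholder:
a23f87b6501f53f71b03099ee7f5f22a7d8c0138@my-node.example.com:26656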
Disk Usage Optimization
The size of the blockchain database typically increases with time, depending on factors such as block speed and transaction volume.
Fortunately, several configuration options can significantly reduce the required disk space. However, some of these changes may only take full effect if you sync from scratch with the new settings.
State-Pruning
By default, the last 362880 states are kept and pruning runs at a 10-block interval. With "nothing", the node deletes nothing and the full history is stored; this setting is required for archival nodes. With "everything", only the two latest states are kept, again pruning at a 10-block interval.
pruning = "default | nothing | everything"
Custom pruning strategies can also be applied; more information can be found in the comments of app.toml.
If you don't plan on accessing past state or providing state sync, we recommend setting this to "everything".
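If you do need some recent history, a custom strategy lets you tune the trade-off yourself. A minimal sketch in app.toml, using the same keys as the state-sync example below; the values are illustrative, not recommendations:
pruning = "custom"
pruning-keep-recent = 100    # number of recent states to keep on disk
pruning-interval = 10        # run pruning every 10 blocks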
State-sync snapshots
State sync snapshots allow other nodes to rapidly join the network without replaying historical blocks, instead downloading and applying a snapshot of the application state at a given height. This is disabled by default and can be configured in app.toml.
If you want to provide state sync to other nodes, your pruning settings must also keep the recent states for the heights at which snapshots are taken.
Example configuration for taking a snapshot every 200 blocks and keeping roughly the past two days of snapshots:
pruning = "custom"
pruning-keep-recent = 3000
pruning-interval = 100
snapshot-interval = 200
snapshot-keep-recent = 144
snapshot-interval specifies the block interval at which local state sync snapshots are taken (0 to disable).
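As a rough sanity check on the numbers above: snapshot-keep-recent = 144 snapshots taken every snapshot-interval = 200 blocks cover 144 × 200 = 28,800 blocks, which at an assumed block time of roughly 6 seconds is about 48 hours, hence "the past two days". pruning-keep-recent = 3000 keeps enough recent states that the state at a snapshot height is still available while the snapshot is being created.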
Logging
The default logging level is "info", which generates a large number of logs. This level of logging can be helpful at the beginning to ensure that the node is syncing properly. Once you have confirmed that syncing is progressing smoothly, you can reduce the log level to "warn" or "error". In config.toml, set the following:
log_level = "warn"
Tx-Index
If you are sure you do not need to query transactions by their hash from your node, you can disable transaction indexing. In config.toml, under the [tx_index] section, set:
indexer = "null"