Environment Preparation
Initialize the Database and Local State
After configuring your database and bucket settings, and before running Stellar Core for the first time, you must initialize the database:
stellar-core new-db
This command will initialize the database, as well as the bucket directory, and then exit. You can also use this command if your database gets corrupted and you want to restart it from scratch.
Automatic Maintenance
Some tables in stellar-core are used to publish ledger data to history archives.
If not managed properly, those tables will grow without bounds. To avoid this, a built-in scheduler will delete data from old ledgers that are not used anymore by other parts of the system.
By default, stellar-core will perform this automatic maintenance. The configuration fields that control the automatic maintenance behavior are:
- AUTOMATIC_MAINTENANCE_PERIOD
- AUTOMATIC_MAINTENANCE_COUNT
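For example, a node could be configured to run a maintenance pass every few minutes and trim a few hundred ledgers each time. The values below are illustrative, not tuned recommendations:
# how often (in seconds) to run the maintenance scheduler
AUTOMATIC_MAINTENANCE_PERIOD=359
# how many old ledgers' worth of data to delete per maintenance run
AUTOMATIC_MAINTENANCE_COUNT=400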
If you need to regenerate the metadata, the simplest way is to replay ledgers for the range you're interested in after (optionally) clearing the database with the new-db command referenced earlier.
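As a sketch, assuming you want metadata for a hypothetical range of 100000 ledgers ending at ledger 2000000, the sequence might look like:
# optionally clear the database and bucket directory first
stellar-core new-db
# replay the 100000 ledgers ending at ledger 2000000 (illustrative range)
stellar-core catchup 2000000/100000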
In some cases automatic maintenance simply has too much work to do to get back to the nominal state. This can occur following large catchup operations, such as a full catchup, which may create a backlog of tens of millions of ledgers.
If this happens, database performance can be restored, but the node will require some downtime while you perform the following recovery steps:
- run the maintenance http command manually with a large number of ledgers, and
- perform a database maintenance operation such as VACUUM FULL to reclaim/rebuild the database as needed (see the sketch below).
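A minimal sketch of that recovery, assuming a PostgreSQL database named stellar (hypothetical for your deployment) and a stellar-core version whose maintenance endpoint accepts a count parameter (check the commands documentation for your build):
# ask the running node to queue maintenance over a large ledger range
stellar-core http-command 'maintenance?queue=true&count=1000000'
# then rebuild/reclaim the database; note VACUUM FULL locks tables while it runs
psql -d stellar -c 'VACUUM FULL;'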
Metadata Snapshots and Restoration
Some deployments of Stellar Core and Horizon will want to retain metadata for the entire history of the network. This metadata can be quite large and computationally expensive to regenerate anew by replaying ledgers in stellar-core from an empty initial database state, as described in the previous section.
This can be especially costly if it must be done more than once: for instance, when bringing a new node online, or even when running a single node with Horizon that has already ingested the metadata once, since a subsequent version of Horizon may introduce a schema change that requires re-ingesting it.
Due to the very large storage requirements, we recommend against retaining metadata for the whole network history.
Some operators therefore prefer to shut down their stellar-core (and/or Horizon) processes and take filesystem-level snapshots or database-level dumps of Stellar Core's database and bucket directory, and/or Horizon's database, after metadata generation has occurred the first time. Such snapshots can then be restored, putting stellar-core and/or Horizon in a state containing metadata without performing a full replay.
Any reasonably recent state will do: if a snapshot is a little old, stellar-core will simply replay ledgers from the time the snapshot was taken up to the current network state. This procedure can greatly accelerate restoring validator nodes, or cloning them to create new ones.
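A snapshot procedure might look like the following sketch, assuming a systemd-managed node, a PostgreSQL database named stellar-core, and a bucket directory at /var/lib/stellar/buckets (all names hypothetical):
# stop the node so the database and bucket directory are quiescent
systemctl stop stellar-core
# dump the database (PostgreSQL custom format)
pg_dump -Fc stellar-core > stellar-core-db.dump
# archive the bucket directory alongside it
tar -czf buckets.tar.gz -C /var/lib/stellar buckets
# restart the node
systemctl start stellar-core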
History Archives
Stellar Core normally interacts with one or more history archives, which are configurable facilities where Full Validators store flat files containing history checkpoints: bucket files and history logs. History archives are usually off-site commodity storage services such as Amazon S3, Google Cloud Storage, Azure Blob Storage, or custom SCP/SFTP/HTTP servers.
Use command templates in the config file to give the specifics of which services you will use and how to access them. The example config will demonstrate how to configure a history archive through command templates.
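For instance, a read-only archive served over HTTP could be declared with a get command template, where {0} is substituted with the remote file path and {1} with the local destination (the archive name and URL below are illustrative):
[HISTORY.example]
get="curl -sf http://history.example.org/prd/core-live/core_live_001/{0} -o {1}"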
Configuring to Get Data from an Archive
No matter what kind of node you're running, you should configure it to get history from one or more public archives. You can configure any number of archives to download from: Stellar Core will automatically round-robin between them.
When you're choosing your quorum set, you should include high-quality nodes (which, by definition, publish archives) and add the location of each node's archive in the HISTORY field in the validators array.
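A minimal sketch of such an entry, modeled on the public SDF testnet configuration (verify the key and URLs against the currently published config before use):
[[VALIDATORS]]
NAME="sdf_testnet_1"
HOME_DOMAIN="testnet.stellar.org"
PUBLIC_KEY="GDKXE2OZMJIPOSLNA6N6F2BVCI3O777I2OOC4BV7VOYUEHYX7RTRYA7Y"
ADDRESS="core-testnet1.stellar.org"
HISTORY="curl -sf http://history.stellar.org/prd/core-testnet/core_testnet_001/{0} -o {1}"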
If you notice a lot of errors related to downloading archives, you should ensure all archives in your configuration are up-to-date. You can review the example Mainnet configuration to see how you might use up-to-date Tier 1 validators for their history archives.
Configuring to Publish Data to an Archive
Archive sections can also be configured with put and mkdir commands to cause the instance to publish to that archive (for nodes configured as full validators).
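For a locally mounted archive, a publishing configuration might look like this sketch (the archive name and paths are hypothetical); in the put and mkdir templates, {0} is the local file and {1} the remote path:
[HISTORY.local]
get="cp /mnt/history-archive/{0} {1}"
put="cp {0} /mnt/history-archive/{1}"
mkdir="mkdir -p /mnt/history-archive/{0}"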
The very first time you want to use your archive, before starting your node, you need to initialize it with:
stellar-core new-hist <historyarchive>
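The argument is the name of the archive as declared in your configuration; for the hypothetical [HISTORY.local] section above, that would be:
stellar-core new-hist local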
More detailed guidance and strategies for publishing history archives can be found on the publishing history archives page.
- Make sure that you configure both put and mkdir if put doesn't automatically create sub-folders.
- Writing to the same archive from different nodes is not supported and will result in undefined behavior, potentially including data loss.
- Do not run new-hist on an existing archive unless you want to erase it.
Other Preparation
In addition, you should ensure that your operating environment is functional. This means you will have considered and prepared the following:
- Logging and log rotation
- Monitoring and alerting infrastructure