Tips & Tricks

Additional Tips

Welcome to the Additional Tips section! Whether you're a seasoned farmer or just starting out with the Autonomys Network, these tips and tricks are designed to enhance your experience and efficiency. Here, we delve into practical advice and lesser-known techniques to help you fine-tune your farming setup and navigate common challenges with ease. From configuring your environment to managing background processes, these insights are tailored to ensure your Farmer operates smoothly and effectively. Let's dive in and explore how you can get the most out of your Autonomys Network Farmer.

Telemetry & Block Explorer

Explore the Autonomys Network in depth with our variety of telemetry tools and block explorers. Whether you're monitoring network activity or exploring blockchain data, these resources provide comprehensive insights into the network's performance and transactions.

  • Telemetry Server: Get real-time insights into network activity and performance metrics. Ideal for monitoring the overall health and status of the Autonomys Network.

  • Official Block Explorer: Our primary tool for viewing blocks, transactions, and network activity on the Autonomys Network. This explorer offers an intuitive interface and detailed information.

  • Subscan Block Explorer: An alternative block explorer providing detailed views of blocks, transactions, and network events. Subscan is known for its user-friendly interface and additional data analytics features.

  • Polkadot.js Block Explorer: For users familiar with the Polkadot ecosystem, this explorer offers a seamless experience for exploring the Autonomys Network using the Polkadot.js interface.

Snap Sync

As of the June 11, 2024 release, a new option called Snap Sync is available for fast syncing. It only works for the initial sync, but it allows your node to be fully synced within hours instead of days. The parameter is --sync snap. Currently, Snap Sync only works with a farming node, not an operator node.

Snap Sync works by jumping to near the end of the chain, allowing you to sync to the current block more quickly. Once the initial jump is complete, it behaves like a full sync. In fact, if you haven't synced for more than a few days, it might be quicker to remove the node database and use Snap Sync to quickly sync to the most current block.

There are cases where you might not want to use Snap Sync. To perform a full sync you can either omit the sync parameter, or use --sync full. This is useful if you want to run an RPC node for public use or indexing. In that case, you would need to run an archival node, not just a pruned node. An archival node requires a full sync.
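
For example (a minimal sketch; NODE_FILE_NAME, the run subcommand, and any other flags follow the installation guide and your usual start command):

# Fast initial sync: jump near the tip of the chain, then continue like a full sync
./NODE_FILE_NAME run --sync snap

# Full sync, required for an archival/RPC node (equivalent to omitting --sync)
./NODE_FILE_NAME run --sync full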

Plotting concurrency

Starting with the February 19th release, plotting performance was improved by increasing internal concurrency. During plotting, table generation has both parallel and sequential parts. By generating several tables simultaneously, the sequential parts can be overlapped with parallel parts, improving CPU utilization. Generating tables for all records at once requires significant RAM, while producing tables for only a few records at a time offers an optimal balance between CPU and RAM usage.

We added the --record-encoding-concurrency option to override the default behavior, which allocates one record for every two cores but does not exceed eight in parallel. According to our internal testing with P-cores, E-cores, and combinations of P+E cores, this default appears to achieve peak performance. If you prefer the previous behavior, or if your RAM usage becomes too high, you can set --record-encoding-concurrency to 1. You may also experiment with setting it to 2, 3, etc., which may yield better results depending on your specific CPU/RAM configuration.
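
If you need to override the default, an invocation might look like this (a sketch; the reward address, path, and size are illustrative placeholders):

# Revert to the previous behavior: encode one record at a time (lowest RAM usage)
./FARMER_FILE_NAME farm --record-encoding-concurrency 1 \
--reward-address WALLET_ADDRESS path=/mnt/subspace1,size=100G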

Create Missing Farms

A farmer has the option to specify whether the system should automatically create missing farms upon startup. If you provide a path to a plot that isn't found, the system will generate a new one. However, this may not be desirable if a drive fails to mount properly. By default, this option is set to true, but you can override it by adding --create false. Setting this flag to false after establishing your plots can prevent unintentional file writes to the wrong drive.
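
For example (the path and size are illustrative):

# Refuse to start if /mnt/subspace1 is missing instead of creating a fresh farm there
./FARMER_FILE_NAME farm --create false \
--reward-address WALLET_ADDRESS path=/mnt/subspace1,size=100G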

Record Chunks Reading Mode

Upon startup, each farm will benchmark the performance of every plot to identify the most efficient proving method, yielding either ConcurrentChunks or WholeSector results. If you already know the optimal method for your setup, you can specify it as an argument for each farm to save time benchmarking. For example, you can include record-chunks-mode= after defining the path and size, assigning the desired value, e.g., path=/mnt/subspace1,size=100G,record-chunks-mode=ConcurrentChunks. If you do not provide this parameter, the system will benchmark each disk on startup to identify the most efficient method.
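
A full command might look like this (a sketch; the paths, sizes, and chosen modes are illustrative):

# Pin the proving method per farm to skip the startup benchmark
./FARMER_FILE_NAME farm --reward-address WALLET_ADDRESS \
path=/mnt/subspace1,size=100G,record-chunks-mode=ConcurrentChunks \
path=/mnt/subspace2,size=2T,record-chunks-mode=WholeSector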

Benchmarking Your Farmer

Benchmarking helps you test the auditing speed of your farmer against different versions of the Autonomys Network.

./subspace-farmer benchmark audit /path/to/your/plot

Replace /path/to/your/plot with the actual path to your plot. This will provide you with metrics and insights regarding auditing performance.

At the moment, the Autonomys farmer supports the rayon implementation mechanism, which opens each plot file as many times as there are farming threads and uses a dedicated file handle per thread.

To interpret the results: typically, you assess throughput to determine the maximum auditable size across any number of disks. Both CPU and disk performance influence this metric.

To read more about audit benchmarking, read this forum post.

There is a second command available, which helps you determine how much time your farmer has after auditing to provide proof.

./subspace-farmer benchmark prove /path/to/your/plot

Each farmer has ~4 seconds to audit and prove, so depending on how fast auditing is, the remaining time will be used for proving. Proving performance doesn’t depend on the plot size.

If an environment does not have enough time for the proving step, a message such as this is logged by the farmer:

Proving for solution skipped due to farming time limit slot=6723846 sector_index=46

To read more about prove benchmarking, read this forum post.

Scrubbing Your Farmer

In certain situations, especially when the farmer terminates unexpectedly or encounters an error, it might fail to restart correctly. The scrub command assists in resolving such issues by cleaning or resetting the specified plot.

./subspace-farmer scrub /path/to/your/plot

Be sure to replace /path/to/your/plot with your actual plot path. Use this command cautiously, as it modifies the plot state to recover from errors.

Specify Exact CPU Cores for Plotting/Replotting

This option will override any custom logic that the subspace farmer might otherwise use. You can specify the plotting CPU cores by adding --plotting-cpu-cores, followed by the desired cores parameters. Cores should be listed as comma-separated values, with whitespace used to separate different thread pools or encoding instances. For example, --plotting-cpu-cores 0,1 2,3 will result in two sectors being encoded simultaneously, each using a pair of CPU cores.

Similarly, you can customize the replotting CPU cores using --replotting-cpu-cores, followed by arguments similar to those used in the --plotting-cpu-cores example above.

Note that setting --plotting-cpu-cores requires --replotting-cpu-cores to be configured with the same number of CPU core groups. If the --replotting-cpu-cores setting is omitted, the node will default to using the same thread pool as for plotting.
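
For example (the core indices are illustrative and depend on your CPU topology):

# Two encoding instances, each pinned to a pair of cores, mirrored for replotting
./FARMER_FILE_NAME farm --plotting-cpu-cores 0,1 2,3 \
--replotting-cpu-cores 0,1 2,3 \
--reward-address WALLET_ADDRESS path=/mnt/subspace1,size=100G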

Switching to a new snapshot from older/different versions of Autonomys

Info

Unless specifically mentioned by the Development team, you should NOT have to wipe your configuration on new releases.

In general, you should be able to download the latest release and restart the Node & Farmer with the same commands you used for the prior version, with no errors.

There are some cases where version updates will cause issues with your Node & Farmer, and you may have to wipe your node, typically when errors occur. If you have any issues, you can always check our Forums or hop into our Discord Server to ask for help.

Wipe

If you were running a node previously, and want to switch to a new network, please perform these steps and then follow the guide again:

# Replace `FARMER_FILE_NAME` with the name of the farmer file you downloaded from releases
./FARMER_FILE_NAME wipe PATH_TO_FARM
# Replace `NODE_FILE_NAME` with the name of the node file you downloaded from releases
./NODE_FILE_NAME wipe NODE_DATA_PATH

Now follow the installation guide from the beginning.

Docker Wipe

If you are using a Docker setup, run docker compose down -v (and manually delete any custom directories you have specified).
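
For reference (this permanently removes node and farmer data stored in Docker volumes):

# Stop the containers and delete their named volumes
docker compose down -v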

Now follow the installation guide.

If you are having issues running a node or farmer for Autonomys, feel free to join our Discord, or better yet, take a look at our forums and review existing questions or post a new one if needed!

Help

There are extra commands and parameters you can use with the farmer or node; append --help to any command to display additional options.

# Replace `FARMER_FILE_NAME` with the name of the farmer file you downloaded from releases
./FARMER_FILE_NAME farm --help

Timekeepers

Gemini-3g introduces Proof-of-Time, and a new, optional role has been added to the node. Timekeepers help run PoT to ensure the security of the network. Timekeeping requires a CPU with high per-core performance but can be undertaken by any node runner. You can enable timekeeping with the flags below (see the sketch after the list).

  • --timekeeper: To enable timekeeping on the node
  • --timekeeper-cpu-cores: To specify which cores the Timekeeper will run on.
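
A minimal sketch (the core indices and the node invocation are illustrative; adapt them to how you normally start your node):

# Run the node as a Timekeeper, pinning PoT work to cores 0 and 1
./NODE_FILE_NAME run --timekeeper --timekeeper-cpu-cores 0,1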

Read more about Timekeepers

Node & Farmer Commands Guide

Both the node and the farmer have a variety of flags and parameters. To see a full list, append the --help flag to either the node or the farmer command.

Common Command Examples

Farming Changes

Please note that as of Gemini-3g, farming no longer occurs while plotting. This ensures the plotting process runs as efficiently as possible. You can change this by adding the --farm-during-initial-plotting flag to the farmer, as shown below.
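
A sketch (the other flags are illustrative placeholders):

# Opt back in to farming while the initial plot is still being created
./FARMER_FILE_NAME farm --farm-during-initial-plotting \
--reward-address WALLET_ADDRESS path=/mnt/subspace1,size=100G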

For both the node and the farmer, here are some frequently used commands:

  • Display farm information: ./FARMER_FILE_NAME info PATH_TO_FARM
  • Scrub the farm for errors: ./FARMER_FILE_NAME scrub PATH_TO_FARM
  • Erase all farmer-related data: ./FARMER_FILE_NAME wipe PATH_TO_FARM
  • Erase all node-related data: ./NODE_FILE_NAME wipe NODE_DATA_PATH

Utilizing Multiple Disks

To maximize storage capabilities, you can engage multiple disks directly. This is often more efficient than relying on RAID configurations:

Example:

./FARMER_FILE_NAME farm --reward-address WALLET_ADDRESS \
path=/media/ssd1,size=100GiB \
path=/media/ssd2,size=10T \
path=/media/ssd3,size=10T

Optimizing DSN Syncing

Note

The DSN can be a nuanced topic; to better understand our Decentralized Storage Network, please refer to this guide from our Academy.

Parameters to Use with Caution

While it might be tempting to adjust certain parameters that seem related to DSN Syncing, it's advised to be cautious. Some options will not enhance syncing and are best left at their default values:

--out-peers
--in-peers
--dsn-target-connections
--dsn-pending-in-connections
--dsn-in-connections

The default parameters are set with the capabilities of common consumer modems/routers in mind. Adjusting certain parameters can enhance DSN sync performance by increasing parallelism. However, if you decide to increase them significantly, ensure that your modem/router is performant enough to handle the increased traffic.

Node parameters:

--dsn-out-connections
--dsn-pending-out-connections

Farmer parameters (increasing these values could also improve plotting speed; a sketch follows the list):

--out-connections
--pending-out-connections
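
A hedged sketch (the values are arbitrary examples, not recommendations; make sure your router can handle the extra connections):

# Increase outgoing DSN connections on the farmer to speed up plotting
./FARMER_FILE_NAME farm --out-connections 100 --pending-out-connections 100 \
--reward-address WALLET_ADDRESS path=/mnt/subspace1,size=100G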

NUMA Support

примечание

NUMA support will benefit farmers with large, powerful CPUs and lots of RAM available. The required RAM amount depends on the processor and the number of threads.

Previously, plotting/replotting thread pools were created for each farm separately, even though only a configured number of them can be used at a time (by default, just one). With the introduction of NUMA support, the farmer application has a thread pool manager that creates the necessary number of thread pools, which are then allocated to the farms currently plotting/replotting. When a thread pool is created, it is assigned to a set of CPU cores and will only be able to use those cores. Pinning doesn't pin threads to cores 1:1; instead, the OS is free to move threads between cores, but only within the CPU cores allocated to the thread pool. This ensures plotting for a particular sector happens only on a particular CPU/NUMA node.

Enabling NUMA on Windows/Linux machines

On Linux and Windows, the farmer will detect NUMA systems and create a number of thread pools corresponding to the number of NUMA nodes. This means the default behavior will change for large CPUs and consume more memory as a result, but it can be restored to the previous behavior with CLI options if desired. To use NUMA, you need to enable it in your motherboard's BIOS so the farmer knows it exists. This option is present in motherboards for Threadripper/Epyc processors but might exist in others too. If you don't enable it, both the OS and the farmer will think you have a single UMA processor and will not be able to apply these optimizations.
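
To verify that the OS actually sees multiple NUMA nodes after enabling the BIOS option, standard Linux tools can help (assuming lscpu and, optionally, the numactl package are installed):

# Count the NUMA nodes visible to the OS
lscpu | grep -i numa

# Show the per-node CPU and memory layout
numactl --hardware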

To read more about NUMA support and the benefits it provides for large CPUs read this forum post.

Changing number of open files limit (Linux)

Changing the number of open files limit on Linux is sometimes necessary for both system and application performance tuning. Applications that manage file sharing or distribution, including peer-to-peer systems, may require opening numerous connections to different peers simultaneously. A higher open files limit allows these applications to maintain more connections, improving data transfer rates and system efficiency.

This can also help to mitigate the Too Many Open Files error.

You can use the ulimit command to change the limit for open files temporarily. For example, setting ulimit -n 2048 will set the limit to 2048. This change is reverted after logging out or a reboot.

For a permanent change, you have two approaches:

  • Modify the kernel parameter fs.file-max using the sysctl command: sysctl -w fs.file-max=500000.
  • Set limits for individual users by editing the /etc/security/limits.conf file. You can specify soft and hard limits for the number of open files: username soft nofile 10000 for the soft limit and username hard nofile 30000 for the hard limit. A combined sketch follows below.
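
Putting it together (the username and the numeric limits are illustrative):

# Temporary: applies to the current shell session only
ulimit -n 2048

# Permanent, system-wide kernel maximum (run as root)
sysctl -w fs.file-max=500000

# Permanent, per-user limits: add these lines to /etc/security/limits.conf
#   username soft nofile 10000
#   username hard nofile 30000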