How to Build an EOS Full Node Server

CUBE · Oct 11, 2018

How to build an EOS full node, as shown by ITAM Network

Preparation (Preliminary Knowledge)

The most crucial infrastructure of EOS is the Block Producers (BPs). They not only produce blocks and process transactions, but also provide HTTP endpoints (access points) to users. However, even with the tremendous efforts of the BPs, it is difficult to keep up with the increasing demands of EOS users quickly and safely.

DApp developers in particular can demand a great amount of resources depending on how much information they query, and as long as the BPs alone have to carry these loads, neither pressuring the BPs to expand their infrastructure nor simply waiting looks like a good solution.

Moreover, for DApps that inevitably need a large amount of transaction action data to do anything meaningful (e.g. a certain user's actions in sequential order, or a full history view), synchronizing the blockchain's block data to a local server may be an option.

However, in the case of EOS, the chain grows very quickly and the hardware needed to run a node has high specifications. The decision to build a Full Node is therefore something to think long and hard about, especially considering the money that has to be invested.

This post is meant as a reference for those who want to build such a Full Node server, showing the method and the appropriate system specifications. AWS is used as the base infrastructure for the explanation.

MongoDB Plugin

Before building the server, think about whether what you are trying to do really requires synchronizing a Full Node. Most of the time, the main reason is to query historical information from the block data.

Then how do you get at the data once the blocks are synchronized, and retrieve the information you are looking for? You could scan every block in the chain one by one, but there is a better way to retrieve data: a database. In the case of EOS, even the Dawn versions before the mainnet already shipped the mongodb plugin.

Yes, this plugin only became practical after 1.X, but what matters is that it saves transactions to the DB in an appropriate format while the blocks are being synchronized. DApp developers therefore do not need to inspect every block of the chain; they can simply retrieve the information from the DB with a query. I believe there will be few cases where the DB plugin is not used when building a Full Node server for DApps.
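For example, once the plugin has filled the database, a history lookup becomes a simple query. The sketch below assumes the database name EOS configured later in this post and uses the action_traces collection; collection and field names vary between plugin versions, so check your own deployment.

# mongo XXX.XXX.XXX.XXX:27017/EOS
> db.action_traces.find({ "act.account": "eosio.token", "act.name": "transfer" }).sort({ _id: -1 }).limit(5).pretty()
.... [Returns the five most recently stored token transfer actions, without scanning blocks one by one.]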

In this post, we will be linking the MongoDB Plugin.

Nodeos Installation

Nodeos requires a minimum of 8 GB of RAM and at least 20 GB of free disk space to install. Because the minimum specification can be problematic in operation, I chose t2.xlarge (4 vCPUs, 16 GB RAM) from the general-purpose T2 instances that AWS provides.

Once the EC2 instance is set up on AWS, proceed with the standard installation. The pre-Dawn installation method had many problems, such as having to resolve many dependencies by hand, but the current 1.3.X version is easy to install with the installation script.

# cd ~
# git clone https://github.com/EOSIO/eos --recursive
.... [The download will proceed]
# cd eos
# ./eosio_build.sh
.... [eosio and its dependency packages will be built. This takes a long time.]
# cd build
# make install
.... [The eosio executables and header files are installed on the system. By default, the executables are located in /usr/local/eosio/bin. Add this directory to PATH if necessary.]
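After the build, a quick sanity check confirms that the binaries are reachable from your shell (using the default install prefix mentioned above):

# export PATH=$PATH:/usr/local/eosio/bin
# nodeos --help
.... [If the help text is printed, the installation is ready to use.]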

Once the installation is finished, leave the node configuration for later and install mongodb first.

Caution) When you install eosio, the mongodb C++ driver and a mongodb server are installed under the ~/opt directory. By default, things will work if you run with that bundled mongodb, but in this post we will run mongodb on a separate EC2 host, because mongodb also needs a lot of CPU, RAM, and disk space during synchronization.

MongoDB Installation

Because EOS has a hefty transaction volume, synchronizing mongodb also takes a great amount of system resources. Synchronization and replay of the chain in particular generate a large number of queries, so I again chose t2.xlarge (4 vCPUs, 16 GB RAM) from the general-purpose T2 instances on AWS. In addition, for easy installation of mongodb, the OS chosen was the Amazon Linux 2 AMI provided by AWS.

Once the EC2 instance is set up on AWS, proceed with the standard installation. The detailed installation process is beyond the scope of this post, so please refer to other documents if you run into problems.

First, add the repository below, which is needed for the yum installation of mongodb.

# vim /etc/yum.repos.d/mongodb-org-3.6.repo

[mongodb-org-3.6]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.6/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.6.asc

Install mongodb with yum from the repository you just added.

yum install mongodb-org
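If the package does not start the service for you, start mongod and enable it at boot. Amazon Linux 2 uses systemd, so this is a reasonable sketch:

# systemctl start mongod
# systemctl enable mongod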

Modify the configuration file to enable external connections.

# vim /etc/mongod.conf

net:
  port: 27017
  bindIp: 0.0.0.0
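Restart mongod so the new bindIp takes effect, and run a quick connectivity check (bindIp 0.0.0.0 accepts connections from anywhere, so restrict access with your EC2 security group):

# systemctl restart mongod
# mongo --eval 'db.runCommand({ ping: 1 })'
.... ["ok" : 1 in the output means the server is up; run the same check from the nodeos host against this server's address.]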

Initial Node Server Execution

Run the node server once so that its configuration files and data storage directories are created. To keep things simple, this post uses the default file locations and configuration file, so just run nodeos once and then exit with Ctrl-C.

First, get the genesis block file, because the genesis block information is needed for the initial run. You can either paste the content below and save it as genesis.json, or just download it.

wget https://eosnodes.privex.io/static/genesis.json

{
  "initial_timestamp": "2018-06-08T08:08:08.888",
  "initial_key": "EOS7EarnUhcyYqmdnPon8rm7mBCTnBoot6o7fE2WzjvEX2TdggbL3",
  "initial_configuration": {
    "max_block_net_usage": 1048576,
    "target_block_net_usage_pct": 1000,
    "max_transaction_net_usage": 524288,
    "base_per_transaction_net_usage": 12,
    "net_usage_leeway": 500,
    "context_free_discount_net_usage_num": 20,
    "context_free_discount_net_usage_den": 100,
    "max_block_cpu_usage": 200000,
    "target_block_cpu_usage_pct": 1000,
    "max_transaction_cpu_usage": 150000,
    "min_transaction_cpu_usage": 100,
    "max_transaction_lifetime": 3600,
    "deferred_trx_expiration_window": 600,
    "max_transaction_delay": 3888000,
    "max_inline_action_size": 4096,
    "max_inline_action_depth": 4,
    "max_authority_depth": 6
  }
}

Add the below option and run nodeos.

nodeos --genesis-json=genesis.json

As soon as nodeos is running, press Ctrl-C to exit.
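That first run creates the default directories. On a typical Linux install they live under ~/.local/share/eosio/nodeos (treat the exact path as an assumption and check your own system):

# ls ~/.local/share/eosio/nodeos
.... [The config directory (containing config.ini) and the data directory (block and state data) are created here.]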

Adding Data Storage Volume

Because we are using AWS EC2, we must add volumes to store the block data created by nodeos and the database written by the mongodb plugin. As of this writing (Oct. 2018) the block data is about 56 GB and the database is about 510 GB, so we added one 200 GB volume and two 1000 GB volumes respectively, mounted at the /data and /eosdata directories. I will not cover how to add EBS volumes or mount them as directories in detail, but a minimal sketch follows.
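For reference, here is a minimal sketch of formatting and mounting one volume; the device name /dev/xvdf and the mount point are placeholders that depend on your own instance and on which volume you assign to which directory:

# mkfs -t xfs /dev/xvdf
# mkdir -p /eosdata
# mount /dev/xvdf /eosdata
# echo '/dev/xvdf /eosdata xfs defaults,nofail 0 2' >> /etc/fstab
.... [Repeat for the other volumes; the fstab entry keeps the mount across reboots.]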

Setting the Node Server

The configuration file needs to be modified before running the node server. There is not much to change from the default configuration, but it can take a while because you have to list the addresses of p2p nodes that are synchronized with the mainnet.

The overall reference can be found at https://eosnodes.privex.io/

abi-serializer-max-time-ms = 15000
chain-state-db-size-mb = 4096
https-validate-host = false
mongodb-queue-size = 4096
mongodb-uri = mongodb://XXX.XXX.XXX.XXX:27017/EOS

plugin = eosio::http_plugin
plugin = eosio::chain_plugin
plugin = eosio::mongo_db_plugin

p2p-peer-address = bp.eosbeijing.one:8080
....

The configuration above is only a starting point for building a Full Node server, so reviewing the configuration file in detail and choosing the plugins and settings that fit your purpose is highly recommended.

Running the Node Server and Conclusion

Finally, run nodeos and check that the blocks are being synchronized. The log messages are very detailed, so you can follow the progress on the console.
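A simple way to keep nodeos running in the background and follow the synchronization, assuming the default data directory is used (add --data-dir if you moved the block data to one of the mounted volumes):

# nohup nodeos >> nodeos.log 2>&1 &
# tail -f nodeos.log
.... [The log shows the block numbers being applied; leave the process running, as synchronization takes a long time.]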

There aren’t any specific technical issues in building a Full Node server, but it takes quite a while to synchronize. This term isn’t merely a few hours; to synchronize the 1.5 million blocks as of (August 2018) mentioned in this post, it took about a week and a half.

Furthermore, although the cost can be reduced by optimizing the specifications, the setup described in this post costs about $500 per month. On top of that cost, ongoing server operation and management are also required.

Therefore, think carefully before building a Full Node server about whether it is actually necessary for the development of your DApp service.

ITAM Games is a blockchain platform for a transparent gaming ecosystem

Subscribe to ITAM Games and receive the latest info.

Visit the ITAM Games Telegram to communicate regarding ITAM Games and Blockchain. Join by clicking the link below! 👫

Website: https://itam.games
Telegram: https://t.me/itamgames
