Ultima-1.3.1

Release Summary

Item                    Value
Mandatory upgrade       Yes
Version                 ultima-1.3.1
Upgrade deadline        2023.11.25 08:00 (UTC+8)
GitHub tag version      https://github.com/vision-consensus/vision-core/releases/tag/mainnet_ultima_v1.3.1
Docker image version    maintainers/vision-mainnet:ultima_v1.3.1
                        maintainers/vision-mainnet:latest

Features changed in this version

Ultima version 1.3.1 contains 1 update:

  1. Add a proposal to control the polling-algorithm optimization applied after defragmenting unfrozen resources

CORE

1. Add a proposal to control the polling-algorithm optimization applied after defragmenting unfrozen resources

Node Upgrade Procedure

Compile the source code to upgrade

1. Stop the running process

# Get the PID of the running vision-core process
ps aux | grep "java -Xmx.*g -XX:+UseConcMarkSweepGC -jar"
# Stop the process gracefully (SIGTERM), replacing PID with the value found above
kill -15 PID
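
Before moving on to the backup, it can help to confirm that the old process has actually exited. A small optional sketch, assuming your start command matches the pattern used above:

# Wait until the old vision-core process has fully exited (optional sketch)
while pgrep -f "UseConcMarkSweepGC -jar FullNode.jar" > /dev/null; do
    echo "vision-core is still shutting down..."
    sleep 5
done
echo "vision-core process has exited"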

2. Back up the node's data

# First enter the working directory of your node. Here $OLD_WORKDIR stands for that directory; replace it with the path you chose when deploying your node
cd $OLD_WORKDIR

# Back up the jar package of the currently running vision-core application
mv FullNode.jar FullNode.jar.$(date '+%FT%T').bak

# Back up the current database
tar --force-local -zcvf "output-directory-$(date '+%FT%T')-backup.tar.gz" output-directory

# Back up the current configuration file (assumed to be configs/vision-mainnet.config, as used in the start command below)
cp configs/vision-mainnet.config configs/vision-mainnet.config.bak
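
Before replacing anything, it can be worth verifying that the backups were written correctly. An optional sanity check, using the backup file names created above (note --force-local, since the archive name contains colons):

# Verify that the newest database backup archive is readable
tar --force-local -tzf "$(ls -t output-directory-*-backup.tar.gz | head -1)" | head
# Confirm that the jar and configuration backups exist
ls -lh FullNode.jar.*.bak configs/vision-mainnet.config.bak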

3. Get the new version of the jar package

# Get the latest version of java source code
git clone https://github.com/vision-consensus/vision-core.git

# Compile the source code and get the FullNode.jar package
cd vision-core
gradle build -x test

# Copy the FullNode.jar package back to your working directory; replace $OLD_WORKDIR in the following command with the directory you are actually using
cp -a build/libs/FullNode.jar $OLD_WORKDIR

# Back in the node working directory
cd $OLD_WORKDIR

# Get the latest configuration file to replace the original one. This assumes the configuration file lives in the configs subdirectory of the node's working directory, with the default name vision-mainnet.config

wget https://vision-mainnet-configs.s3.us-east-2.amazonaws.com/stage001/vision-mainnet.config -O configs/vision-mainnet.config
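
It can also be useful to compare the freshly downloaded configuration with the backup from step 2 to see what actually changed; an optional check:

# Compare the new configuration file with the backed-up one (optional)
diff configs/vision-mainnet.config.bak configs/vision-mainnet.config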

📘

Get the official jar package directly

You can also get the official jar package directly to re-run the service.
Link address: https://github.com/vision-consensus/vision-core/releases/download/mainnet_ultima_v1.3.1/FullNode.jar

Alternate link: https://vision-mainnet-latest-rocksdb-database-without-internal-tx.s3.us-east-2.amazonaws.com/vision-mainnet-fullnode-jars/1.3.1/FullNode.jar
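
If you take this route instead of compiling, the download could look like the following sketch, run from $OLD_WORKDIR and using the release link above:

# Download the official FullNode.jar into the working directory
wget https://github.com/vision-consensus/vision-core/releases/download/mainnet_ultima_v1.3.1/FullNode.jar -O FullNode.jar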

4. Start the node

# Go to your original working directory; replace $OLD_WORKDIR with your own path
cd $OLD_WORKDIR

# Start the node
# FullNode:
nohup java -Xmx12g -XX:+UseConcMarkSweepGC -jar FullNode.jar -c configs/vision-mainnet.config &

# FVGuarantee:
nohup java -Xmx12g -XX:+UseConcMarkSweepGC -jar FullNode.jar --witness -p <privateKey> -c configs/vision-mainnet.config &

# If you need to use a different witness account, replace <privateKey> with that account's private key
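
After starting, it is worth confirming that the new process came up cleanly. A minimal check, assuming the node was started with nohup from the working directory so its output goes to nohup.out:

# Confirm the new process is running
ps aux | grep "java -Xmx.*g -XX:+UseConcMarkSweepGC -jar"
# Follow the startup output (nohup writes to nohup.out by default)
tail -f nohup.out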

📘

Optimize memory allocation with Google tcmalloc library

If you are using the Google tcmalloc library, set the relevant environment variables before the node start command; for example, on Ubuntu 18.04 the start command is as follows.

export LD_PRELOAD="/usr/lib/x86_64-linux-gnu/libtcmalloc.so.4"
export TCMALLOC_RELEASE_RATE=10

nohup java -Xmx12g -XX:+UseConcMarkSweepGC -jar FullNode.jar -c configs/vision-mainnet.config &
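
To check whether the java process actually picked up tcmalloc, one option (an optional sketch, assuming a Linux host and a single running FullNode.jar process) is to look for the library in the process's memory maps:

# Verify that libtcmalloc is mapped into the running node process
grep tcmalloc /proc/$(pgrep -f "jar FullNode.jar" | head -1)/maps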

  5. The upgrade is complete. Please wait until the whole network upgrade has finished.

📘

Delete backup data to save disk space

After the network upgrade, users can delete the jar packages, configuration files and database archives backed up earlier in order to save disk space on the node machines.
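
For the source-compile path above, the cleanup could look like the following sketch; double-check the file names before deleting, since these removals are irreversible:

cd $OLD_WORKDIR
# Remove the backed-up jar package, database archive and configuration file
rm FullNode.jar.*.bak
rm output-directory-*-backup.tar.gz
rm configs/vision-mainnet.config.bak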

Docker image upgrade

1. Stop the container

# Stop and remove the docker container; replace $CONTAINER_NAME with the container name of your vision-core service

docker stop $CONTAINER_NAME

docker rm $CONTAINER_NAME
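
Before moving on, you may want to confirm that no container with the old name is left behind; a quick optional check:

# Should print nothing if the old container was removed
docker ps -a | grep $CONTAINER_NAME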

2. Back up the node data

# Enter the external volume mapped into the vision-core service container; replace $VOLUME_NAME with your own path

cd $VOLUME_NAME

# Back up the current database

tar --force-local -zcvf "output-directory-$(date '+%FT%T')-backup.tar.gz" output-directory

3. Pull the latest docker image and update the configuration file

# Pull the docker image

docker pull maintainers/vision-mainnet:latest

# Update the configuration file. Assuming the original configuration file is configs/mainnet.config inside the container's external mapping volume, go to that volume and run the following command.

wget https://vision-mainnet-configs.s3.us-east-2.amazonaws.com/stage001/vision-mainnet.config -O configs/mainnet.config

🚧

Pull the image first and then run the container

Please run "docker pull" to fetch the latest image first, then use the docker run commands below to start the container. Otherwise, a node that already has an image with the same name locally will start the container from that old image, and the service will simply be restarted without the underlying code being upgraded.
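
One way to confirm that the local image really was refreshed after pulling is to inspect its digest and creation time; an optional sketch:

# Show the locally available vision-mainnet images with their digests
docker images --digests maintainers/vision-mainnet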

4. Starting a Docker container service with a new image

# FV witness node (FVGuarantee):
docker run -itd \
    -v "/data/mainnet:/data/vision" \
    -p 7080:7080 \
    -p 7081:7081 \
    -p 7082:7082 \
    -p 16666:16666 \
    -p 60061:60061 \
    -p 60071:60071 \
    -p 60081:60081 \
    --name vision-mainnet-FVGuarantee \
    maintainers/vision-mainnet:latest --private <private-key>

# Please replace <private-key> with the private key corresponding to the witness account used by your node.



# FullNode:
docker run -itd \
    -v "/data/mainnet:/data/vision" \
    -p 7080:7080 \
    -p 7081:7081 \
    -p 7082:7082 \
    -p 16666:16666 \
    -p 60061:60061 \
    -p 60071:60071 \
    -p 60081:60081 \
    --name vision-fullnode \
    maintainers/vision-mainnet:latest
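
After the container starts, following its logs helps confirm that the node is syncing again; an optional check using the container names from the commands above:

# Confirm the container is running and follow its logs
docker ps
docker logs -f --tail 100 vision-fullnode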

❗️

Keep the original external mount volume

When rerunning the container with the new image, be sure to keep the original external volume mounted to the container. Do not change it.

5. The upgrade is complete. Please wait until the network-wide upgrade has finished.