Scylla Cluster

Create a ScyllaDB Cluster - Single Data Center (DC)

Prerequisites

  • Make sure that all the ports listed below are open (a firewalld sketch follows this list).

    Port   Description                               Protocol
    9042   CQL (native_transport_port)               TCP
    9142   SSL CQL (secure client to node)           TCP
    7000   Inter-node communication (RPC)            TCP
    7001   SSL inter-node communication (RPC)        TCP
    7199   JMX management                            TCP
    10000  Scylla REST API                           TCP
    9180   Prometheus API                            TCP
    9100   node_exporter (optional)                  TCP
    9160   Scylla client port (Thrift)               TCP
    19042  Native shard-aware transport port         TCP
    19142  Native shard-aware transport port (SSL)   TCP
  • Obtain the IP addresses of all nodes which have been created for the cluster.

  • Select a unique cluster_name for the cluster (it must be identical on all nodes in the cluster).
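
  For example, on a host protected by firewalld, the ports listed above could be opened with a loop like the one below. This is a minimal sketch that assumes firewalld and the default zone; adapt it to the firewall actually in use.

    for port in 9042 9142 7000 7001 7199 10000 9180 9100 9160 19042 19142; do
        sudo firewall-cmd --permanent --add-port=${port}/tcp
    done
    sudo firewall-cmd --reload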

Procedure

These steps need to be done for each of the nodes in the new cluster.

  1. Install Scylla on a node. Follow the Scylla install procedure up to the scylla.yaml configuration phase.

    Note: If the node starts during the process, follow these instructions:

    Procedure

    1. Stop the Scylla service
    sudo systemctl stop scylla-server
    
    2. Delete the data and commitlog folders.
    sudo rm -rf /data/ds/sdc/scylla/data
    sudo find /data/ds/sdc/scylla/commitlog -type f -delete
    sudo find /data/ds/sdc/scylla/hints -type f -delete
    sudo find /data/ds/sdc/scylla/view_hints -type f -delete
    
  2. In the scylla.yaml file, edit the parameters listed below. The file can be found under /etc/scylla/

    • cluster_name - Set the selected cluster_name
    • seeds - Specify the IP of the first node and only the first node. New nodes will use the IP of this seed node to connect to the cluster and learn the cluster topology and state.
    • listen_address - IP address that Scylla uses to connect to other Scylla nodes in the cluster
    • endpoint_snitch - Set the selected snitch
    • rpc_address - Address for client connection (Thrift, CQL)
  3. This step needs to be done only if you are using the GossipingPropertyFileSnitch. If not, skip this step. In the cassandra-rackdc.properties file, edit the parameters listed below. The file can be found under /etc/scylla/

    • dc - Set the datacenter name
    • rack - Set the rack name. For example:
    # cassandra-rackdc.properties
    #
    # The lines may include white spaces at the beginning and the end.
    # The rack and data center names may also include white spaces.
    # All trailing and leading white spaces will be trimmed.
    #
    dc=thedatacentername
    rack=therackname
    # prefer_local=<false | true>
    # dc_suffix=<Data Center name suffix, used by EC2SnitchXXX snitches>
    
  4. After Scylla has been installed and configured on all the nodes (with the first node as the seed node), start the seed node. Once it is in the UN state, start the remaining nodes one at a time, waiting for each node to reach UN before starting the next.

sudo systemctl start scylla-server
  5. Verify that the node has been added to the cluster using nodetool status.
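
    Because each node must be started only after the previous one reports Up Normal (UN), a small polling loop such as the one below can help. This is a sketch only; it uses an IP from the example that follows and matches the nodetool status output format shown in this document.

    # Wait until a given node shows as UN (Up Normal) in nodetool status.
    NODE_IP=172.21.0.62   # the node that was just started; adjust as needed
    until nodetool status | grep -q "^UN  ${NODE_IP}"; do
        echo "Waiting for ${NODE_IP} to reach UN state..."
        sleep 30
    done
    echo "${NODE_IP} is UN"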

Example

This example shows how to install and configure a five-node cluster using GossipingPropertyFileSnitch as the endpoint_snitch, with each node on a different rack.

  1. Install five Scylla nodes. The IPs are:
172.21.0.61
172.21.0.62
172.21.0.63
172.21.0.64
172.21.0.65
  2. On each Scylla node, edit the scylla.yaml file:

    172.21.0.61

    cluster_name: 'pssb-ds-sdc'
    seeds: "172.21.0.61"
    endpoint_snitch: GossipingPropertyFileSnitch
    rpc_address: "172.21.0.61"
    listen_address: "172.21.0.61"
    

    172.21.0.62

    cluster_name: 'pssb-ds-sdc'
    seeds: "172.21.0.61"
    endpoint_snitch: GossipingPropertyFileSnitch
    rpc_address: "172.21.0.62"
    listen_address: "172.21.0.62"
    

    172.21.0.63

    cluster_name: 'pssb-ds-sdc'
    seeds: "172.21.0.61"
    endpoint_snitch: GossipingPropertyFileSnitch
    rpc_address: "172.21.0.63"
    listen_address: "172.21.0.63"
    

    172.21.0.64

    cluster_name: 'pssb-ds-sdc'
    seeds: "172.21.0.61"
    endpoint_snitch: GossipingPropertyFileSnitch
    rpc_address: "172.21.0.64"
    listen_address: "172.21.0.64"
    

    172.21.0.65

    cluster_name: 'pssb-ds-sdc'
    seeds: "172.21.0.61"
    endpoint_snitch: GossipingPropertyFileSnitch
    rpc_address: "172.21.0.65"
    listen_address: "172.21.0.65"
    
  3. This step is needed only if you are using GossipingPropertyFileSnitch. On each Scylla node, edit the cassandra-rackdc.properties file:

    172.21.0.61

    # cassandra-rackdc.properties
    #
    # The lines may include white spaces at the beginning and the end.
    # The rack and data center names may also include white spaces.
    # All trailing and leading white spaces will be trimmed.
    #
    dc=pssb-ds-sdc
    rack=pssb1avm001-rack01
    # prefer_local=<false | true>
    # dc_suffix=<Data Center name suffix, used by EC2SnitchXXX snitches>
    

    172.21.0.62

    # cassandra-rackdc.properties
    #
    # The lines may include white spaces at the beginning and the end.
    # The rack and data center names may also include white spaces.
    # All trailing and leading white spaces will be trimmed.
    #
    dc=pssb-ds-sdc
    rack=pssb1avm002-rack01
    # prefer_local=<false | true>
    # dc_suffix=<Data Center name suffix, used by EC2SnitchXXX snitches>
    

    172.21.0.63

    # cassandra-rackdc.properties
    #
    # The lines may include white spaces at the beginning and the end.
    # The rack and data center names may also include white spaces.
    # All trailing and leading white spaces will be trimmed.
    #
    dc=pssb-ds-sdc
    rack=pssb1abm003-rack01
    # prefer_local=<false | true>
    # dc_suffix=<Data Center name suffix, used by EC2SnitchXXX snitches>
    

    172.21.0.64

    # cassandra-rackdc.properties
    #
    # The lines may include white spaces at the beginning and the end.
    # The rack and data center names may also include white spaces.
    # All trailing and leading white spaces will be trimmed.
    #
    dc=pssb-ds-sdc
    rack=pssb1avm004-rack01
    # prefer_local=<false | true>
    # dc_suffix=<Data Center name suffix, used by EC2SnitchXXX snitches>
    

    172.21.0.65

    # cassandra-rackdc.properties
    #
    # The lines may include white spaces at the beginning and the end.
    # The rack and data center names may also include white spaces.
    # All trailing and leading white spaces will be trimmed.
    #
    dc=pssb-ds-sdc
    rack=pssb1avm005-rack01
    # prefer_local=<false | true>
    # dc_suffix=<Data Center name suffix, used by EC2SnitchXXX snitches>
    
  4. Start the Scylla nodes. Since the seed node is 172.21.0.61, start it first, wait until it is in the UN state, and then repeat for the other nodes:

    sudo systemctl start scylla-server
    
  5. Verify that the nodes have been added to the cluster by using the nodetool status command:

    Datacenter: pssb-ds-sdc
    =======================
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    --  Address      Load       Tokens       Owns    Host ID                               Rack
    UN  172.21.0.65  1.63 GB    256          ?       2be7b1d1-6a70-4a91-86d5-a79018875dc4  pssb1avm005-rack01
    UN  172.21.0.64  1.56 GB    256          ?       ef604852-293d-4f9f-8420-d526044a7771  pssb1avm004-rack01
    UN  172.21.0.61  1.56 GB    256          ?       e912b5c2-b277-4b70-aacc-637497d4384d  pssb1avm001-rack01
    UN  172.21.0.63  1.63 GB    256          ?       cbc270bc-12f9-4d74-b3d8-022afa07feab  pssb1abm003-rack01
    UN  172.21.0.62  1.51 GB    256          ?       7a55d4d5-645d-4b83-8d32-65bdf42fbc90  pssb1avm002-rack01
    
    Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless
    

Cluster Management

Adding a New Node Into an Existing ScyllaDB Cluster

When you add a new node, other nodes in the cluster stream data to the new node. This operation is called bootstrapping and may be time-consuming, depending on the data size and network bandwidth.

Prerequisites

  1. Before adding the new node, check the status of the nodes in the cluster using the nodetool status command. You cannot add new nodes to the cluster if any node is down. If a node in the cluster is down, first remove it from the cluster and then add the new node.

For example:

Datacenter: pssb-ds-sdc
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address      Load       Tokens       Owns    Host ID                               Rack
UN  172.21.0.64  1.86 GB    256          ?       ef604852-293d-4f9f-8420-d526044a7771  pssb1avm004-rack01
UN  172.21.0.61  1.87 GB    256          ?       e912b5c2-b277-4b70-aacc-637497d4384d  pssb1avm001-rack01
UN  172.21.0.63  2.31 GB    256          ?       cbc270bc-12f9-4d74-b3d8-022afa07feab  pssb1abm003-rack01
UN  172.21.0.62  1.82 GB    256          ?       7a55d4d5-645d-4b83-8d32-65bdf42fbc90  pssb1avm002-rack01

Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless
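
A quick scripted version of this check, counting DN entries in the nodetool status output, might look like this (a minimal sketch):

if nodetool status | grep -q '^DN'; then
    echo "At least one node is down; remove or restore it before adding the new node."
else
    echo "All nodes are up; it is safe to add the new node."
fi
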
  2. Log in to one of the nodes in the cluster to collect the following information (a combined one-liner follows this list):
    • cluster_name - grep cluster_name /etc/scylla/scylla.yaml
    • seeds - grep seeds: /etc/scylla/scylla.yaml
    • endpoint_snitch - grep endpoint_snitch /etc/scylla/scylla.yaml
    • Scylla version - scylla --version
    • Authenticator - grep authenticator /etc/scylla/scylla.yaml
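
    The same values can also be collected in a single pass, for example (a sketch):

    grep -E 'cluster_name|seeds:|endpoint_snitch|authenticator' /etc/scylla/scylla.yaml
    scylla --version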

Procedure

  1. Install Scylla on a new node. Make sure that the Scylla version of the new node is identical to that of the other nodes in the cluster.

    • If the node starts during the process, follow the instructions below:

      • If, for any reason, the Scylla service started before you had a chance to update the configuration file, some of the system tables may already reflect an incorrect status, and unfortunately, a simple restart will not fix the issue. In this case, the safest way is to stop the service, clean all of the data, and start the service again.

      Procedure

      1. Stop the Scylla service.
      sudo systemctl stop scylla-server
      
      2. Delete the data and commitlog folders.
      sudo rm -rf /data/ds/sdc/scylla/data
      sudo find /data/ds/sdc/scylla/commitlog -type f -delete
      sudo find /data/ds/sdc/scylla/hints -type f -delete
      sudo find /data/ds/sdc/scylla/view_hints -type f -delete
      
      3. Start the Scylla service.
      sudo systemctl start scylla-server
      
  2. In the scylla.yaml file in /etc/scylla/, edit the following parameters (see the example after this list):

    • cluster_name - Specifies the name of the cluster.
    • listen_address - Specifies the IP address that Scylla uses to connect to the other Scylla nodes in the cluster.
    • endpoint_snitch - Specifies the selected snitch.
    • rpc_address - Specifies the address for client connections (Thrift, CQL).
    • seeds - Specifies the IP address of an existing node in the cluster. The new node will use this IP to connect to the cluster and learn the cluster topology and state.
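
    For instance, for the new node 172.21.0.65 joining the example cluster used throughout this document, the edited values might look like the snippet below (values reused from the earlier example; adjust them to your environment):

    cluster_name: 'pssb-ds-sdc'
    seeds: "172.21.0.61"
    endpoint_snitch: GossipingPropertyFileSnitch
    rpc_address: "172.21.0.65"
    listen_address: "172.21.0.65"
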
  3. This step is needed only if you are using GossipingPropertyFileSnitch. On the new node, edit the cassandra-rackdc.properties file. For example:

    Node : 172.21.0.65

    # cassandra-rackdc.properties
    #
    # The lines may include white spaces at the beginning and the end.
    # The rack and data center names may also include white spaces.
    # All trailing and leading white spaces will be trimmed.
    #
    dc=pssb-ds-sdc
    rack=pssb1avm005-rack01
    # prefer_local=<false | true>
    # dc_suffix=<Data Center name suffix, used by EC2SnitchXXX snitches>
    
  4. Start the ScyllaDB node with the following command:

    sudo systemctl start scylla-server
    
  5. Verify that the node was added to the cluster using the nodetool status command. Other nodes in the cluster will be streaming data to the new node, so the new node will be in Up Joining (UJ) status. Wait until the node’s status changes to Up Normal (UN) - the time depends on the data size and network bandwidth. For example:

    • Nodes in the cluster are streaming data to the new node:
      Datacenter: pssb-ds-sdc
      =======================
      Status=Up/Down
      |/ State=Normal/Leaving/Joining/Moving
      --  Address      Load       Tokens       Owns    Host ID                               Rack
      UN  172.21.0.64  1.86 GB    256          ?       ef604852-293d-4f9f-8420-d526044a7771  pssb1avm004-rack01
      UN  172.21.0.61  1.87 GB    256          ?       e912b5c2-b277-4b70-aacc-637497d4384d  pssb1avm001-rack01
      UN  172.21.0.63  2.31 GB    256          ?       cbc270bc-12f9-4d74-b3d8-022afa07feab  pssb1abm003-rack01
      UN  172.21.0.62  1.82 GB    256          ?       7a55d4d5-645d-4b83-8d32-65bdf42fbc90  pssb1avm002-rack01
      UJ  172.21.0.65  1124.42 KB    256          ?    2be7b1d1-6a70-4a91-86d5-a79018875dc4    pssb1avm005-rack01
      
      Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless
      
    • Nodes in the cluster finished streaming data to the new node:
      Datacenter: pssb-ds-sdc
      =======================
      Status=Up/Down
      |/ State=Normal/Leaving/Joining/Moving
      --  Address      Load       Tokens       Owns    Host ID                               Rack
      UN  172.21.0.65  1.63 GB    256          ?       2be7b1d1-6a70-4a91-86d5-a79018875dc4  pssb1avm005-rack01
      UN  172.21.0.64  1.56 GB    256          ?       ef604852-293d-4f9f-8420-d526044a7771  pssb1avm004-rack01
      UN  172.21.0.61  1.56 GB    256          ?       e912b5c2-b277-4b70-aacc-637497d4384d  pssb1avm001-rack01
      UN  172.21.0.63  1.63 GB    256          ?       cbc270bc-12f9-4d74-b3d8-022afa07feab  pssb1abm003-rack01
      UN  172.21.0.62  1.51 GB    256          ?       7a55d4d5-645d-4b83-8d32-65bdf42fbc90  pssb1avm002-rack01
      
      Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless
      
  6. When the new node status is Up Normal (UN), run the nodetool cleanup command on all nodes in the cluster except for the new node that has just been added. Cleanup removes keys that were streamed to the newly added node and are no longer owned by the old node; an example is sketched below.
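
    For example, assuming SSH access to each of the pre-existing nodes (a hypothetical convenience, not something Scylla requires), cleanup could be launched from a single shell like this:

    # Run nodetool cleanup on every node except the newly added 172.21.0.65.
    for host in 172.21.0.61 172.21.0.62 172.21.0.63 172.21.0.64; do
        ssh "$host" nodetool cleanup
    done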

  7. Wait until the new node becomes UN (Up Normal) in the output of nodetool status on one of the old nodes.

Remove a Node from a ScyllaDB Cluster

You can remove nodes from your cluster to reduce its size.

Removing a Running Node

  1. Run the nodetool status command to check the status of the nodes in your cluster.

    Datacenter: pssb-ds-sdc
    =======================
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    --  Address      Load       Tokens       Owns    Host ID                               Rack
    UN  172.21.0.65  1.63 GB    256          ?       2be7b1d1-6a70-4a91-86d5-a79018875dc4  pssb1avm005-rack01
    UN  172.21.0.64  1.56 GB    256          ?       ef604852-293d-4f9f-8420-d526044a7771  pssb1avm004-rack01
    UN  172.21.0.61  1.56 GB    256          ?       e912b5c2-b277-4b70-aacc-637497d4384d  pssb1avm001-rack01
    UN  172.21.0.63  1.63 GB    256          ?       cbc270bc-12f9-4d74-b3d8-022afa07feab  pssb1abm003-rack01
    UN  172.21.0.62  1.51 GB    256          ?       7a55d4d5-645d-4b83-8d32-65bdf42fbc90  pssb1avm002-rack01
    
    Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless
    
  2. If the node status is Up Normal (UN), run the nodetool decommission command to remove the node you are connected to. Using nodetool decommission is the recommended method for cluster scale-down operations. It prevents data loss by ensuring that the node you’re removing streams its data to the remaining nodes in the cluster.

  3. Run the nodetool netstats command to monitor the progress of the token reallocation.
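
    A typical sequence might look like the sketch below, run on the node being removed, with the streaming progress watched from another terminal (the watch interval is arbitrary):

    # On the node being decommissioned (for example 172.21.0.65):
    nodetool decommission

    # From another terminal on the same node, monitor the token reallocation:
    watch -n 30 nodetool netstats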

  4. Run the nodetool status command to verify that the node has been removed. For example, 172.21.0.65 (pssb1avm005) has been removed from the cluster:

    Datacenter: pssb-ds-sdc
    =======================
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    --  Address      Load       Tokens       Owns    Host ID                               Rack
    UN  172.21.0.64  1.56 GB    256          ?       ef604852-293d-4f9f-8420-d526044a7771  pssb1avm004-rack01
    UN  172.21.0.61  1.56 GB    256          ?       e912b5c2-b277-4b70-aacc-637497d4384d  pssb1avm001-rack01
    UN  172.21.0.63  1.63 GB    256          ?       cbc270bc-12f9-4d74-b3d8-022afa07feab  pssb1abm003-rack01
    UN  172.21.0.62  1.51 GB    256          ?       7a55d4d5-645d-4b83-8d32-65bdf42fbc90  pssb1avm002-rack01
    
    Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless
    
  5. Manually remove the data and commit log stored on that node. When a node is removed from the cluster, its data is not automatically removed. You need to manually remove the data to ensure it is no longer counted against the load on that node. Delete the data with the following commands:

    sudo rm -rf /data/ds/sdc/scylla/data
    sudo find /data/ds/sdc/scylla/commitlog -type f -delete
    sudo find /data/ds/sdc/scylla/hints -type f -delete
    sudo find /data/ds/sdc/scylla/view_hints -type f -delete
    

    Safely Remove a Joining Node

    Sometimes when adding a node to the cluster, it gets stuck in a JOINING state (UJ) and never completes the process to an Up-Normal (UN) state. The only solution is to remove the node. As long as the node did not join the cluster, meaning it never went into UN state, you can stop this node, clean its data, and try again.

    1. Run the nodetool drain command (Scylla stops listening to its connections from the client and other nodes).
    2. Stop the node
    sudo systemctl stop scylla-server
    
    3. Clean the data
    sudo rm -rf /data/ds/sdc/scylla/data
    sudo find /data/ds/sdc/scylla/commitlog -type f -delete
    sudo find /data/ds/sdc/scylla/hints -type f -delete
    sudo find /data/ds/sdc/scylla/view_hints -type f -delete
    
    4. Start the node
    sudo systemctl start scylla-server
    

    Removing an Unavailable Node

    If the node status is Down Normal (DN), you should try to restore it. Once the node is up, use the nodetool decommission command to remove it. If all attempts to restore the node have failed and the node is permanently down, you can remove the node by running the nodetool removenode command, providing the Host ID of the node you are removing. Example:

    nodetool removenode <Host ID>
    

Remove a Seed Node from Seed List

This procedure describes how to remove a seed node from the seed list.

Prerequisites

Verify that the seed node you want to remove is listed as a seed node in the scylla.yaml file by running cat /etc/scylla/scylla.yaml | grep seeds:

Procedure

  1. Update the Scylla configuration file, scylla.yaml, which can be found under /etc/scylla/. For example: Seed list before removing the node:
    - seeds: "172.21.0.61,172.21.0.62,172.21.0.63,172.21.0.64,172.21.0.65"
    
    Seed list after removing the node:
    - seeds: "172.21.0.61,172.21.0.62,172.21.0.63,172.21.0.64"
    
  2. Scylla will read the updated seed list the next time it starts. You can force Scylla to read the list immediately by restarting Scylla as follows:
sudo systemctl restart scylla-server

Replace a Dead Node in a ScyllaDB Cluster

The replace dead node operation causes the other nodes in the cluster to stream data to the replacement node. This operation can take some time, depending on the data size and network bandwidth.

This procedure is for replacing one dead node. To replace more than one dead node, run the full procedure to completion one node at a time.

Prerequisites

Verify the status of the nodes using the nodetool status command. The node with status DN is down and needs to be replaced.

```
Datacenter: pssb-ds-sdc
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address      Load       Tokens       Owns    Host ID                               Rack
DN  172.21.0.64  1.56 GB    256          ?       ef604852-293d-4f9f-8420-d526044a7771  pssb1avm004-rack01
UN  172.21.0.61  1.56 GB    256          ?       e912b5c2-b277-4b70-aacc-637497d4384d  pssb1avm001-rack01
UN  172.21.0.63  1.63 GB    256          ?       cbc270bc-12f9-4d74-b3d8-022afa07feab  pssb1abm003-rack01
UN  172.21.0.62  1.51 GB    256          ?       7a55d4d5-645d-4b83-8d32-65bdf42fbc90  pssb1avm002-rack01

Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless
```

It’s essential to ensure the replaced (dead) node will never come back to the cluster, which might lead to a split-brain situation. Remove the replaced (dead) node from the cluster network or VPC.

Log in to the dead node and manually remove the data if you can. Delete the data with the following commands:

sudo rm -rf /data/ds/sdc/scylla/data
sudo find /data/ds/sdc/scylla/commitlog -type f -delete
sudo find /data/ds/sdc/scylla/hints -type f -delete
sudo find /data/ds/sdc/scylla/view_hints -type f -delete

Log in to one of the nodes in the cluster with (UN) status. Collect the following info from the node:

  • cluster_name - cat /etc/scylla/scylla.yaml | grep cluster_name
  • seeds - cat /etc/scylla/scylla.yaml | grep seeds:
  • endpoint_snitch - cat /etc/scylla/scylla.yaml | grep endpoint_snitch
  • Scylla version - scylla --version

Procedure

  1. Install Scylla on a new node, see Getting Started for further instructions. Follow the Scylla install procedure up to scylla.yaml configuration phase. Ensure that the Scylla version of the new node is identical to the other nodes in the cluster.
  2. In the scylla.yaml file edit the parameters listed below. The file can be found under /etc/scylla/.
  • cluster_name - Set the selected cluster_name
  • listen_address - IP address that Scylla uses to connect to other Scylla nodes in the cluster
  • seeds - Set the seed nodes
  • endpoint_snitch - Set the selected snitch
  • rpc_address - Address for client connection (Thrift, CQL)
  3. Add the replace_node_first_boot parameter to the scylla.yaml config file on the new node. This line can be added anywhere in the config file, and after a successful node replacement there is no need to remove it. (Note: The obsolete parameters “replace_address” and “replace_address_first_boot” are not supported and should not be used.) The value of the replace_node_first_boot parameter should be the Host ID of the node to be replaced.

For example (using the Host ID of the failed node from above):

replace_node_first_boot: 675ed9f4-6564-6dbd-can8-43fddce952gy
  4. Start the Scylla node.
sudo systemctl start scylla-server
  5. Verify that the node has been added to the cluster using the nodetool status command.

For example:

```
Datacenter: pssb-ds-sdc
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address      Load       Tokens       Owns    Host ID                               Rack
DN  172.21.0.64  1.56 GB    256          ?       ef604852-293d-4f9f-8420-d526044a7771  pssb1avm004-rack01
UN  172.21.0.61  1.56 GB    256          ?       e912b5c2-b277-4b70-aacc-637497d4384d  pssb1avm001-rack01
UN  172.21.0.63  1.63 GB    256          ?       cbc270bc-12f9-4d74-b3d8-022afa07feab  pssb1abm003-rack01
UN  172.21.0.62  1.51 GB    256          ?       7a55d4d5-645d-4b83-8d32-65bdf42fbc90  pssb1avm002-rack01

Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless 
```
172.21.0.64 is the dead node.

The replacing node 172.21.0.65 will be bootstrapping data. We will not see 172.21.0.65 in nodetool status during the bootstrap.

Use nodetool gossipinfo to verify that 172.21.0.65 is in NORMAL status:

/172.21.0.65
  generation:1553759984
  heartbeat:104
  HOST_ID:655ae64d-e3fb-45cc-9792-2b648b151b67
  STATUS:NORMAL
  RELEASE_VERSION:3.0.8
  X3:3
  X5:
  NET_VERSION:0
  DC:DC1
  X4:0
  SCHEMA:2790c24e-39ff-3c0a-bf1c-cd61895b6ea1
  RPC_ADDRESS:172.21.0.64
  X2:
  RACK:B1
  INTERNAL_IP:172.21.0.64

/172.21.0.64
  generation:1553759866
  heartbeat:2147483647
  HOST_ID:675ed9f4-6564-6dbd-can8-43fddce952gy
  STATUS:shutdown,true
  RELEASE_VERSION:3.0.8
  X3:3
  X5:0:18446744073709551615:1553759941343
  NET_VERSION:0
  DC:DC1
  X4:1
  SCHEMA:2790c24e-39ff-3c0a-bf1c-cd61895b6ea1
  RPC_ADDRESS:172.21.0.65
  RACK:B1
  LOAD:1.09776e+09
  INTERNAL_IP:172.21.0.65

After the bootstrapping is over, nodetool status will show:

```
Datacenter: pssb-ds-sdc
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address      Load       Tokens       Owns    Host ID                               Rack
UN  172.21.0.65  1.56 GB    256          ?       655ae64d-e3fb-45cc-9792-2b648b151b67  pssb1avm005-rack01
UN  172.21.0.61  1.56 GB    256          ?       e912b5c2-b277-4b70-aacc-637497d4384d  pssb1avm001-rack01
UN  172.21.0.63  1.63 GB    256          ?       cbc270bc-12f9-4d74-b3d8-022afa07feab  pssb1abm003-rack01
UN  172.21.0.62  1.51 GB    256          ?       7a55d4d5-645d-4b83-8d32-65bdf42fbc90  pssb1avm002-rack01

Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless 
```
  6. Run the nodetool repair command on the node that was replaced to make sure that the data is synced with the other nodes in the cluster. You can use Scylla Manager to run the repair.
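
    A minimal invocation on the replaced node might simply be the command below; depending on the data size this can take a long time, and Scylla Manager is the recommended way to schedule and track the repair.

    # Run on the replaced node (here 172.21.0.65) once it reports UN:
    nodetool repair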