PSorbit Gluster Filesystem setup

Setting Up a 3-Node GlusterFS Replication File System for PS-orbit cluster

This document walks through setting up a GlusterFS volume across 3 nodes: psorbit-node01, psorbit-node02, and psorbit-node03. Each node contributes a 300MB brick, and the volume is replicated across all 3 nodes.

Node and Disk Configuration

  • Node Names:

    • psorbit-node01 - 172.21.0.90 (KVM Host 1)
    • psorbit-node02 - 172.21.0.91 (KVM Host 1)
    • psorbit-node03 - 172.21.0.92 (KVM Host 1)
  • Brick Path:

    • For all nodes: /export/vdb/brick
  • Mount Point: /data/ps/orbit

  • Disk Capacity per Node: 300MB per brick
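
The gluster commands below address the nodes by hostname, so every node must be able to resolve the other two. If DNS does not already cover these names, a minimal sketch of the /etc/hosts entries to add on each node:

    172.21.0.90 psorbit-node01
    172.21.0.91 psorbit-node02
    172.21.0.92 psorbit-node03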

Step 1: Install GlusterFS on All Nodes and configure

  1. Install GlusterFS on all 3 nodes:

    • On each node, run the following commands:
     add-apt-repository ppa:gluster/glusterfs-11
     apt update
     apt install glusterfs-server
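
     Note: if add-apt-repository is missing (as on minimal installs), it is provided by the software-properties-common package:
     apt install software-properties-common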
    
  2. Configure port usage:

    • On each node, replace the content in the file /etc/glusterfs/glusterd.vol with:
    volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket
    option transport.socket.keepalive-time 10
    option transport.socket.keepalive-interval 2
    option transport.socket.read-fail-log off
    option transport.socket.listen-port 24007
    option ping-timeout 0
    option event-threads 1
    #option lock-timer 180
    #option transport.address-family inet6
    option base-port 49152
    option max-port  49252
    end-volume
    
  3. Configure firewall rules:

    ufw allow 49152:49252/tcp && ufw allow 24007/tcp
    ufw reload
    ufw status | grep -E '^(491|2400)'
    
  4. Start and Enable GlusterFS on each node:

    systemctl enable --now glusterd
    
  5. Verify the GlusterFS service is running:

    systemctl status glusterd
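
    To additionally confirm that glusterd is listening on the management port configured above (24007), one possible check:
    ss -tlnp | grep 24007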
    

Step 2: Create Physical Disks on KVM Hosts

  1. For KVM Host 1 (Nodes psorbit-node01, psorbit-node02 and psorbit-node03):
    • Create a disk for each node: On the first KVM host (syhydsrv01), execute the following commands to create a 300MB raw disk image per node:
      qemu-img create -f raw /data1/d_disks/psorbit-in-demo1a-brick1.img 300M
      qemu-img create -f raw /data1/d_disks/psorbit-in-demo1a-brick2.img 300M
      qemu-img create -f raw /data1/d_disks/psorbit-in-demo1a-brick3.img 300M
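
      To sanity-check the images, each one can be inspected with qemu-img info, e.g.:
      qemu-img info /data1/d_disks/psorbit-in-demo1a-brick1.img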
      

Step 3: Attach the Disk to Each VM

You can attach the disk via the Cockpit UI or directly by editing the VM’s XML file; the XML procedure is described below.

Using XML Procedure:

For each node VM, follow these steps:

  1. Edit the VM’s XML configuration using the command:

    virsh edit <vm_name>
    
  2. Add a disk entry for the newly created disk under the <devices> section, replacing the path with the appropriate disk image:

    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/data1/d_disks/<brick_name>.img'/>
      <target dev='vdb' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    
  3. Save the file and exit.

  4. Shut down and start the VM to apply the new disk configuration (a virsh reboot alone does not re-read the domain XML):

    virsh shutdown <vm_name>
    virsh start <vm_name>
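
  As an alternative to editing the XML by hand, the same attachment can usually be done in one step with virsh attach-disk (a sketch; confirm the vdb target is free on the VM first):

    virsh attach-disk <vm_name> /data1/d_disks/<brick_name>.img vdb \
      --targetbus virtio --driver qemu --subdriver raw --persistent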
    

Step 4: Format the Attached Disk on Each Node

  1. Check the attached disk:

    lsblk
    
  2. Format the disk with the XFS file system (512-byte inodes leave room for GlusterFS’s extended attributes):

    mkfs.xfs -i size=512 /dev/vdb
    
  3. Mount the disk to the /export/vdb directory:

    mkdir -p /export/vdb && mount /dev/vdb /export/vdb
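
    A quick check that the brick filesystem is formatted and mounted as expected:
    df -h /export/vdb
    xfs_info /export/vdb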
    

Step 5: Persistent Mount Configuration

  1. Add the mount configuration to /etc/fstab:

    echo "/dev/vdb /export/vdb xfs defaults 0 0" >> /etc/fstab
    
  2. Mount all devices:

    mount -a
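
  Note: device names like /dev/vdb can change if more disks are attached to a VM later. An arguably more robust variant of the fstab entry keys on the filesystem UUID instead (a sketch; substitute the UUID reported by blkid):

    blkid /dev/vdb
    echo "UUID=<uuid> /export/vdb xfs defaults 0 0" >> /etc/fstab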
    

Step 6: Create the brick Directory on the Attached Disk on each node

  1. Create the brick directory to store the volume data:
    mkdir -p /export/vdb/brick
    

Step 7: Create the New GlusterFS Volume

Run the following commands from any one node in the pool.
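
The nodes must already form a trusted storage pool before a volume can span them. If they have not been peered yet, a minimal sketch (run from psorbit-node01) is:

    gluster peer probe psorbit-node02
    gluster peer probe psorbit-node03
    gluster peer status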

  1. Create the volume: Use the following command to create the GlusterFS volume, including all the nodes and their respective bricks:
    gluster volume create <volume_name> replica 3 \
    psorbit-node01:/export/vdb/brick \
    psorbit-node02:/export/vdb/brick \
    psorbit-node03:/export/vdb/brick
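
  The client mount commands later in this document assume the volume is named pssb_dfs, so the concrete command would be:

    gluster volume create pssb_dfs replica 3 \
    psorbit-node01:/export/vdb/brick \
    psorbit-node02:/export/vdb/brick \
    psorbit-node03:/export/vdb/brick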
    

Step 8: Start the Volume

  1. Start the GlusterFS volume:
    gluster volume start <volume_name>
    

Step 9: Verify the Volume Information

  1. Check the volume information:
    gluster volume info
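
    To also confirm that all three bricks are online, a further check:
    gluster volume status <volume_name>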
    

Create client mounts and mount them on each node:

Creating the client mount directory (/data/ps/orbit) on every node is required. However, the directories inside the volume, and their ownership, only need to be set up once, from a single node after the volume is mounted there: GlusterFS replicates them to every brick, so they appear on all nodes.

  1. Create the client mount directory, along with the required application directories on the volume:

    mkdir -p /data/ps/orbit
    mkdir -p /data/ps/orbit/playstore
    mkdir -p /data/ps/orbit/health_monitor
    chmod a+rx /data
    
  2. Give ownership of the orbit directory to the tomcat user:

    chown -R tomcat:tomcat /data/ps/orbit
    

Mount the volume to the client mount point

  1. Mount the volume into this directory for each node: On node01:

    mount -t glusterfs psorbit-node01:pssb_dfs /data/ps/orbit
    

    On node02:

    mount -t glusterfs psorbit-node02:pssb_dfs /data/ps/orbit
    

    On node03:

    mount -t glusterfs psorbit-node03:pssb_dfs /data/ps/orbit
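
  With all three clients mounted, a quick smoke test of replication is to create a file through one mount and confirm it appears on the others (the file name here is arbitrary):

    # On node01:
    touch /data/ps/orbit/replication-test
    # On node02 and node03:
    ls -l /data/ps/orbit/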
    
  2. Make the mounts persistent on each node as described below.

Persistent Mounts on Each Node

To ensure the GlusterFS mounts persist across reboots, add the following entries to the /etc/fstab file on each node:

  • On node01:

    psorbit-node01:pssb_dfs /data/ps/orbit glusterfs defaults,_netdev 0 0
    
  • On node02:

    psorbit-node02:pssb_dfs /data/ps/orbit glusterfs defaults,_netdev 0 0
    
  • On node03:

    psorbit-node03:pssb_dfs /data/ps/orbit glusterfs defaults,_netdev 0 0
    
  1. Open the /etc/fstab file on each node using a text editor:

    sudo vi /etc/fstab
    
  2. Add the respective entry for the node.

  3. Save the file and exit.

  4. Test the persistent mount by running:

    sudo mount -a
    

    This command will mount all entries from the /etc/fstab file without requiring a reboot.

  5. Verify the mount using:

    mount | grep /data/ps/orbit
    

Finally, it is crucial to reboot all nodes once the setup is complete, to confirm that the brick and client mounts come back automatically.