Process of setting up a GlusterFS volume with 5 nodes: pssb1avm001, pssb1avm002, pssb1abm003, pssb1avm004, and pssb1avm005. Each node will have 10GB of storage for the brick, and the volume will be replicated across all 5 nodes.
Node Names:
pssb1avm001 - 172.21.0.61 (KVM Node 1)
pssb1avm002 - 172.21.0.62 (KVM Node 1)
pssb1abm003 - 172.21.0.63 (Baremetal Node)
pssb1avm004 - 172.21.0.64 (KVM Node 2)
pssb1avm005 - 172.21.0.65 (KVM Node 2)
Brick Path: /export/vdb/brick
Brick Path (on the baremetal node pssb1abm003): /data/export/vdb/brick
Mount Point: /data/pssb
Disk Capacity per Node: 10GB per brick
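The commands below refer to the nodes by hostname. If DNS does not already resolve these names, one option (an assumption, not part of the original steps) is to map them to the IPs listed above in /etc/hosts on every node:
172.21.0.61 pssb1avm001
172.21.0.62 pssb1avm002
172.21.0.63 pssb1abm003
172.21.0.64 pssb1avm004
172.21.0.65 pssb1avm005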
Install GlusterFS on all 5 nodes:
add-apt-repository ppa:gluster/glusterfs-11
apt update
apt install glusterfs-server
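To confirm which GlusterFS version was installed from the PPA (an optional check, not listed in the original steps):
gluster --version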
Configure port usage:
Edit /etc/glusterfs/glusterd.vol with:
volume management
type mgmt/glusterd
option working-directory /var/lib/glusterd
option transport-type socket
option transport.socket.keepalive-time 10
option transport.socket.keepalive-interval 2
option transport.socket.read-fail-log off
option transport.socket.listen-port 24007
option ping-timeout 0
option event-threads 1
#option lock-timer 180
#option transport.address-family inet6
option base-port 49152
option max-port 49252
end-volume
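If the package installation already started glusterd before this file was edited (an assumption about the installer's behaviour), restart the service so the port settings take effect:
systemctl restart glusterd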
Configure firewall rules:
ufw allow 24007/tcp && ufw allow 49152:49252/tcp
ufw reload
ufw status | grep -E '^(491|2400)'
Start and Enable GlusterFS on each node:
systemctl enable --now glusterd
Verify the GlusterFS service is running:
systemctl status glusterd
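The nodes must also form a trusted storage pool before a volume can span their bricks. This step is not shown above; a minimal sketch, assuming the probes are run from pssb1avm001 and that all hostnames resolve:
gluster peer probe pssb1avm002
gluster peer probe pssb1abm003
gluster peer probe pssb1avm004
gluster peer probe pssb1avm005
gluster peer status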
Create the 10GB brick disk images on the KVM hosts.
For KVM Host 1 (Nodes pssb1avm001 and pssb1avm002):
qemu-img create -f raw /data1/d_disks/pssb1avm001-brick.img 10G
qemu-img create -f raw /data1/d_disks/pssb1avm002-brick.img 10G
For KVM Host 2 (Nodes pssb1avm004 and pssb1avm005):
qemu-img create -f raw /data1/d_disks/pssb1avm004-brick.img 10G
qemu-img create -f raw /data1/d_disks/pssb1avm005-brick.img 10G
You can attach the disk via Cockpit UI (optional) or directly by editing the VM’s XML file.
For each node VM, follow these steps:
Edit the VM’s XML configuration using the command:
virsh edit <vm_name>
Add a disk entry for the newly created disk under the <devices>
section, replacing the path with the appropriate disk image:
<disk type='file' device='disk'>
<driver name='qemu' type='raw'/>
<source file='/data1/d_disks/<brick_name>.img'/>
<target dev='vdb' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>
Save the file and exit.
Restart the VM to apply the new disk configuration:
virsh reboot <vm_name>
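As an alternative to editing the XML by hand, the disk can be attached with virsh attach-disk; a sketch for the first VM, assuming the libvirt domain name matches the node name:
virsh attach-disk pssb1avm001 /data1/d_disks/pssb1avm001-brick.img vdb --subdriver raw --persistent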
Check the attached disk:
lsblk
Format the disk with the XFS file system (the 512-byte inode size leaves room for the extended attributes GlusterFS stores on files):
mkfs.xfs -i size=512 /dev/vdb
Mount the disk to the /export/vdb
directory:
mkdir -p /export/vdb && mount /dev/vdb /export/vdb
Add the mount configuration to /etc/fstab
:
echo "/dev/vdb /export/vdb xfs defaults 0 0" >> /etc/fstab
Mount all devices:
mount -a
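The brick filesystem can be double-checked before GlusterFS uses it (an optional verification, not in the original steps):
df -h /export/vdb
xfs_info /export/vdb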
Create the brick directory on the attached disk on each node.
Create a brick directory to store the volume data:
mkdir -p /export/vdb/brick
On the baremetal node pssb1abm003, the brick path sits under /data instead:
mkdir -p /data/export/vdb/brick
Create the replicated volume (the mount steps below use the name pssb_dfs) across all 5 bricks:
gluster volume create <volume_name> replica 5 \
pssb1avm001:/export/vdb/brick \
pssb1avm002:/export/vdb/brick \
pssb1abm003:/data/export/vdb/brick \
pssb1avm004:/export/vdb/brick \
pssb1avm005:/export/vdb/brick
Start the volume:
gluster volume start <volume_name>
Verify the volume details:
gluster volume info
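In addition to the info output, the running state of every brick and its port can be checked with (an extra verification, not listed above):
gluster volume status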
The client mount directory must be created on each node, but the directories inside the volume (and their ownership) do not need to be created on every node, because the volume is already started and syncs the required directories to every brick.
Create client mount directory along with required directories for application code on the volume:
mkdir -p /data/pssb
mkdir -p /data/pssb/archive
mkdir -p /data/pssb/reports
mkdir -p /data/pssb/health_monitor
chmod a+rx /data
Give ownership of the directory to the tomcat user:
chown -R tomcat:tomcat /data/pssb
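The chown above assumes a tomcat user and group already exist on the node (user creation is not covered here); this can be confirmed with:
id tomcat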
Mount the volume to the client mount point on each node.
On node01:
mount -t glusterfs pssb1avm001:pssb_dfs /data/pssb
On node02:
mount -t glusterfs pssb1avm002:pssb_dfs /data/pssb
On node03:
mount -t glusterfs pssb1abm003:pssb_dfs /data/pssb
On node04:
mount -t glusterfs pssb1avm004:pssb_dfs /data/pssb
On node05:
mount -t glusterfs pssb1avm005:pssb_dfs /data/pssb
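Replication can be spot-checked once the mounts are up (replication_test below is a hypothetical file name, not part of the original procedure). Create a file through the mount on one node and confirm it appears on another:
On node01:
touch /data/pssb/replication_test
On node02:
ls -l /data/pssb/replication_test
rm /data/pssb/replication_test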
Make the mounts persistent on each node:
To ensure the GlusterFS mounts persist across reboots, add the following entries to the /etc/fstab file on each node:
On node01:
pssb1avm001:pssb_dfs /data/pssb glusterfs defaults,_netdev 1 0
On node02:
pssb1avm002:pssb_dfs /data/pssb glusterfs defaults,_netdev 1 0
On node03:
pssb1abm003:pssb_dfs /data/pssb glusterfs defaults,_netdev 1 0
On node04:
pssb1avm004:pssb_dfs /data/pssb glusterfs defaults,_netdev 1 0
On node05:
pssb1avm005:pssb_dfs /data/pssb glusterfs defaults,_netdev 1 0
Open the /etc/fstab
file on each node using a text editor:
sudo vi /etc/fstab
Add the respective entry for the node.
Save the file and exit.
Test the persistent mount by running:
sudo mount -a
This command will mount all entries from the /etc/fstab
file without requiring a reboot.
Verify the mount using:
mount | grep /data/pssb
After the setup is completed, reboot all nodes to confirm that the mounts come back up and are truly persistent.