Process of setting up a GlusterFS volume with 3 nodes: psorbit-node01, psorbit-node02, psorbit-node03. Each node will have 300MB of storage for the brick, and the volume will be replicated across all 3 nodes.
Node Names:
psorbit-node01 - 172.21.0.90 (KVM Node 1)
psorbit-node02 - 172.21.0.91 (KVM Node 1)
psorbit-node03 - 172.21.0.92 (KVM Node 1)
Brick Path: /export/vdb/brick
Mount Point: /data/ps/orbit
Disk Capacity per Node: 300MB per brick
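The GlusterFS commands below address the peers by hostname, so the node names above must be resolvable on every node. If DNS does not cover them, a minimal sketch is to add the entries below to /etc/hosts on each node (IPs taken from the list above):
172.21.0.90 psorbit-node01
172.21.0.91 psorbit-node02
172.21.0.92 psorbit-node03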
Install GlusterFS on all 3 nodes:
add-apt-repository ppa:gluster/glusterfs-11
apt update
apt install glusterfs-server
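To confirm the installation on each node, check the installed version (it should report GlusterFS 11.x from the PPA above):
gluster --version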
Configure port usage in /etc/glusterfs/glusterd.vol with:
volume management
type mgmt/glusterd
option working-directory /var/lib/glusterd
option transport-type socket
option transport.socket.keepalive-time 10
option transport.socket.keepalive-interval 2
option transport.socket.read-fail-log off
option transport.socket.listen-port 24007
option ping-timeout 0
option event-threads 1
#option lock-timer 180
#option transport.address-family inet6
option base-port 49152
option max-port 49252
end-volume
Configure firewall rules (open the brick port range configured above and the glusterd management port 24007):
ufw allow 49152:49252/tcp && ufw allow 24007/tcp
ufw reload
ufw status | grep -E '^(491|2400)'
Start and Enable GlusterFS on each node:
systemctl enable --now glusterd
Verify the GlusterFS service is running:
systemctl status glusterd
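Optionally, confirm that glusterd is listening on the management port configured above (assumes the iproute2 ss utility is available):
ss -tlnp | grep 24007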
Create a 300MB raw disk image on the KVM host for each node VM (psorbit-node01, psorbit-node02 and psorbit-node03):
qemu-img create -f raw /data1/d_disks/psorbit-in-demo1a-brick1.img 300M
qemu-img create -f raw /data1/d_disks/psorbit-in-demo1a-brick2.img 300M
qemu-img create -f raw /data1/d_disks/psorbit-in-demo1a-brick3.img 300M
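To sanity-check an image before attaching it, qemu-img can report its format and virtual size, for example:
qemu-img info /data1/d_disks/psorbit-in-demo1a-brick1.img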
You can attach the disk via Cockpit UI (optional) or directly by editing the VM’s XML file.
For each node VM, follow these steps:
Edit the VM’s XML configuration using the command:
virsh edit <vm_name>
Add a disk entry for the newly created disk under the <devices>
section, replacing the path with the appropriate disk image:
<disk type='file' device='disk'>
<driver name='qemu' type='raw'/>
<source file='/data1/d_disks/<brick_name>.img'/>
<target dev='vdb' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>
Save the file and exit.
Restart the VM to apply the new disk configuration:
virsh reboot <vm_name>
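As an alternative to editing the XML by hand, the disk can also be attached with virsh attach-disk; a minimal sketch, assuming the raw image and the vdb target used above:
virsh attach-disk <vm_name> /data1/d_disks/<brick_name>.img vdb --subdriver raw --persistent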
Check the attached disk:
lsblk
Format the disk with the XFS file system:
mkfs.xfs -i size=512 /dev/vdb
Mount the disk to the /export/vdb directory:
mkdir -p /export/vdb && mount /dev/vdb /export/vdb
Add the mount configuration to /etc/fstab:
echo "/dev/vdb /export/vdb xfs defaults 0 0" >> /etc/fstab
Mount all devices:
mount -a
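A quick check that the brick filesystem is mounted with the expected capacity (roughly 300MB on /export/vdb):
df -h /export/vdb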
Create the brick Directory on the Attached Disk on each node
Create the brick directory to store the volume data:
mkdir -p /export/vdb/brick
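Before the volume can be created, the three nodes must form a trusted storage pool. A minimal sketch, run once from psorbit-node01 (assumes the hostnames resolve as noted earlier):
gluster peer probe psorbit-node02
gluster peer probe psorbit-node03
gluster peer status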
Create the replicated volume across the 3 bricks and start it:
gluster volume create <volume_name> replica 3 \
psorbit-node01:/export/vdb/brick \
psorbit-node02:/export/vdb/brick \
psorbit-node03:/export/vdb/brick
gluster volume start <volume_name>
gluster volume info
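To confirm that all 3 bricks are online before mounting any clients:
gluster volume status <volume_name>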
Creating the client mount directory on each node is necessary, but creating the directories inside the volume (and setting their ownership) only needs to be done once: since the volume is already started, GlusterFS syncs the required directories to every brick.
Create client mount directory along with required directories for application code on the volume:
mkdir -p /data/ps/orbit
mkdir -p /data/ps/orbit/playstore
mkdir -p /data/ps/orbit/health_monitor
chmod a+rx /data
Give ownership of the directory to the tomcat user:
chown -R tomcat:tomcat /data/ps/orbit
Mount the volume to the client mount point
Mount the volume into this directory on each node (pssb_dfs is the volume name created above): On node01:
mount -t glusterfs psorbit-node01:pssb_dfs /data/ps/orbit
On node02:
mount -t glusterfs psorbit-node02:pssb_dfs /data/ps/orbit
On node03:
mount -t glusterfs psorbit-node03:pssb_dfs /data/ps/orbit
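A simple way to verify replication at this point is to create a file through the mount on one node and check that it appears on the others (hypothetical test file name):
touch /data/ps/orbit/replication_test    # on psorbit-node01
ls -l /data/ps/orbit/replication_test    # on psorbit-node02 and psorbit-node03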
Make the mounts persistent on each node
To ensure the GlusterFS mounts persist across reboots, add the following entries to the /etc/fstab file on each node:
On node01:
psorbit-node01:pssb_dfs /data/ps/orbit glusterfs defaults,_netdev 1 0
On node02:
psorbit-node02:pssb_dfs /data/ps/orbit glusterfs defaults,_netdev 1 0
On node03:
psorbit-node03:pssb_dfs /data/ps/orbit glusterfs defaults,_netdev 1 0
Open the /etc/fstab file on each node using a text editor:
sudo vi /etc/fstab
Add the respective entry for the node.
Save the file and exit.
Test the persistent mount by running:
sudo mount -a
This command will mount all entries from the /etc/fstab file without requiring a reboot.
Verify the mount using:
mount | grep /data/ps/orbit
It is crucial to reboot all nodes after the setup is completed to verify that the mounts come back automatically.
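After the reboots, a quick post-check on each node can repeat the verifications used above:
systemctl status glusterd
gluster volume status
mount | grep /data/ps/orbit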