Node Names:
psorbit-node01 - 172.21.0.90 (KVM Node 1)
psorbit-node02 - 172.21.0.91 (KVM Node 1)
psorbit-node03 - 172.21.0.92 (KVM Node 1)
Data directories:
/data/db/psql
/data/db/psql/archive
Repeat the following steps on all 3 nodes.
Download the Percona Release Package
Use the wget command to download the Percona release package for your distribution:
wget https://repo.percona.com/apt/percona-release_latest.$(lsb_release -sc)_all.deb
Install the Percona Release Package
Install the downloaded .deb package using dpkg:
sudo dpkg -i percona-release_latest.$(lsb_release -sc)_all.deb
Update the Package Repository
Update the package lists to include the Percona repository:
sudo apt update
Set Up the PostgreSQL Version (PPG 16)
Configure the Percona PostgreSQL distribution to use version 16:
sudo percona-release setup ppg-16
Install Percona PostgreSQL Server (PPG 16)
Finally, install the Percona PostgreSQL Server version 16:
sudo apt install percona-ppg-server-16
Verify Installation
Check the PostgreSQL version to confirm successful installation:
psql --version
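If the installation succeeded, the client should report a 16.x version and the postgresql service (the same unit restarted later in this guide) should be active. For example:
psql --version                       # expect output similar to: psql (PostgreSQL) 16.x
sudo systemctl status postgresql --no-pager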
Edit hostnames:
Select a master node among the 3 nodes (psorbit-node01, psorbit-node02, psorbit-node03).
Add a new hostname, pg-master, for the selected node and map it to that node's IP address in each of the three nodes' local DNS file (/etc/hosts):
vi /etc/hosts
Add the following line and save the file:
<master-node-ip> pg-master
This change lets us quickly repoint the master hostname in case of a recovery, so the mdm-servers (Tomcats) always reach the correct master.
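Before moving on, it is worth confirming that each node resolves the new hostname; a quick check to run on all three nodes:
getent hosts pg-master   # should print <master-node-ip> pg-master
ping -c 1 pg-master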
Configure the pg_hba.conf file for access management on all 3 nodes:
Add the following content to /etc/postgresql/16/main/pg_hba.conf:
host replication replicator 172.21.0.90/32 md5
host replication replicator 172.21.0.91/32 md5
host replication replicator 172.21.0.92/32 md5
host all hmdm 172.21.0.90/32 md5
host all hmdm 172.21.0.91/32 md5
host all hmdm 172.21.0.92/32 md5
host haproxy_checks haproxy 172.21.0.90/32 md5
host haproxy_checks haproxy 172.21.0.91/32 md5
host haproxy_checks haproxy 172.21.0.92/32 md5
host haproxy_checks haproxy 172.21.0.20/32 md5
host postgres postgres_exporter 172.21.0.90/32 trust
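These pg_hba.conf rules take effect only after a configuration reload (the restart performed later in this guide also works); one way to apply them immediately:
sudo systemctl reload postgresql
# or, from a psql shell: SELECT pg_reload_conf();
Note that the replicator, hmdm, haproxy and postgres_exporter roles referenced above must exist before clients can authenticate; hmdm is created later in this guide, and a sketch for replicator appears after the replication-slot step.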
Change the postgresql data directory:
Make sure to mount the data volume, create the directories /data/db/psql/16/main and /data/db/psql/archive, and copy the old data to the new data directory.
Edit the postgresql configuration file (/etc/postgresql/16/main/postgresql.conf) and modify the data_directory parameter:
data_directory = '/data/db/psql/16/main' # use data in another directory
mount -a
mkdir -p /data/db/psql/16/main
mkdir -p /data/db/psql/archive
Copy old data directory to the new directory:
cp -rdp /var/lib/postgresql/16/main/* /data/db/psql/16/main/
Make sure that the postgres user has the required permissions:
chmod a+rx /data/db/
chown -R postgres:postgres /data/db/psql
Commit the changes by restarting the postgresql service:
systemctl restart postgresql
Check postgresql database operation by logging in:
sudo su postgres
psql
Try reading the databases:
\l
Sample output:
Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
----------------+----------+----------+-----------------+---------+---------+------------+-----------+------------------------------
haproxy_checks | postgres | UTF8 | libc | C.UTF-8 | C.UTF-8 | | |
hmdm | hmdm | UTF8 | libc | C.UTF-8 | C.UTF-8 | | |
postgres | postgres | UTF8 | libc | C.UTF-8 | C.UTF-8 | | | =Tc/postgres +
| | | | | | | | postgres=CTc/postgres +
| | | | | | | | postgres_exporter=c/postgres
template0 | postgres | UTF8 | libc | C.UTF-8 | C.UTF-8 | | | =c/postgres +
| | | | | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | libc | C.UTF-8 | C.UTF-8 | | | =c/postgres +
| | | | | | | | postgres=CTc/postgres
You might not see this many rows in your output, as the application databases have not been created yet.
The master node is the only place where database operations occur. Every mdm-server connects to the master node’s postgresql database using the pg-master hostname, which is updated whenever the master node changes. Regardless of which node is the master, slot usage should follow the mapping below (the configuration files are written so that this order is preserved):
psorbit-node01 --> slot_1
psorbit-node02 --> slot_2
psorbit-node03 --> slot_3
Creating replication slots:
Create replication slots so that each hot standby can connect to a specific slot and receive WALs from the master.
Login to the psql shell interface:
sudo su postgres
psql
Run:
SELECT * FROM pg_create_physical_replication_slot('slot_1');
SELECT * FROM pg_create_physical_replication_slot('slot_2');
SELECT * FROM pg_create_physical_replication_slot('slot_3');
Sample output:
slot_name | lsn
-------------+-----
slot_1 |
(1 row)
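The primary_conninfo strings below connect as a replicator role, which is not created elsewhere in this document. A minimal sketch to create it on the master, using the same password that appears in primary_conninfo, plus a query to verify the slots just created:
CREATE ROLE replicator WITH REPLICATION LOGIN PASSWORD '1!Ps@PgSQLReplica';
SELECT slot_name, slot_type, active FROM pg_replication_slots;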
Configuration file changes:
Select psorbit-node02 (172.21.0.91) as the master node:
Make sure to apply the following changes to the parameters in the master node’s configuration file (/etc/postgresql/16/main/postgresql.conf):
data_directory = '/data/db/psql/16/main' # use data in another directory
listen_addresses = '172.21.0.91' # what IP address(es) to listen on;
port = 5432 # (change requires restart)
max_connections = 100 # (change requires restart)
wal_level = replica # minimal, replica, or logical
max_wal_size = 1GB
min_wal_size = 80MB
archive_mode = always # enables archiving; off, on, or always
archive_command = 'cp %p /data/db/psql/archive/%f' # command to use to archive a WAL file
restore_command = 'cp /data/db/psql/archive/%f %p' # command to use to restore an archived WAL file
max_wal_senders = 10 # max number of walsender processes
# (change requires restart)
max_replication_slots = 10 # max number of replication slots
primary_conninfo = 'user=replicator password=''1!Ps@PgSQLReplica'' channel_binding=prefer host=pg-master port=5432 sslmode=prefer sslcompression=0 sslcertmode=allow sslsni=1 ssl_min_protocol_version=TLSv1.2 gssencmode=prefer krbsrvname=postgres gssdelegation=0 target_session_attrs=any load_balance_hosts=disable' # connection string to sending server
primary_slot_name = 'slot_2' # replication slot on sending server
hot_standby = on # "off" disallows queries during recovery
log_timezone = 'Asia/Kolkata'
# - Process Title -
cluster_name = 'psorbit-pg-cluster' # added to process titles if nonempty
Commit the configuration changes by restarting the postgresql service:
systemctl restart postgresql
Now, the master is ready to receive connections.
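One way to confirm that PostgreSQL is now listening on the configured address and port:
ss -ltn | grep 5432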
Make sure to change the configuration parameters in the slave nodes' files (/etc/postgresql/16/main/postgresql.conf):
For slave-1 (psorbit-node01):
data_directory = '/data/db/psql/16/main' # use data in another directory
listen_addresses = '172.21.0.90' # what IP address(es) to listen on;
port = 5432 # (change requires restart)
max_connections = 100 # (change requires restart)
wal_level = replica # minimal, replica, or logical
max_wal_size = 1GB
min_wal_size = 80MB
archive_mode = always # enables archiving; off, on, or always
archive_command = 'cp %p /data/db/psql/archive/%f' # command to use to archive a WAL file
restore_command = 'cp /data/db/psql/archive/%f %p' # command to use to restore an archived WAL file
max_wal_senders = 10 # max number of walsender processes
# (change requires restart)
max_replication_slots = 10 # max number of replication slots
primary_conninfo = 'user=replicator password=''1!Ps@PgSQLReplica'' channel_binding=prefer host=pg-master port=5432 sslmode=prefer sslcompression=0 sslcertmode=allow sslsni=1 ssl_min_protocol_version=TLSv1.2 gssencmode=prefer krbsrvname=postgres gssdelegation=0 target_session_attrs=any load_balance_hosts=disable' # connection string to sending server
primary_slot_name = 'slot_1' # replication slot on sending server
hot_standby = on # "off" disallows queries during recovery
log_timezone = 'Asia/Kolkata'
# - Process Title -
cluster_name = 'psorbit-pg-cluster' # added to process titles if nonempty
For slave-3 (psorbit-node03):
data_directory = '/data/db/psql/16/main' # use data in another directory
listen_addresses = '172.21.0.92' # what IP address(es) to listen on;
port = 5432 # (change requires restart)
max_connections = 100 # (change requires restart)
wal_level = replica # minimal, replica, or logical
max_wal_size = 1GB
min_wal_size = 80MB
archive_mode = always # enables archiving; off, on, or always
archive_command = 'cp %p /data/db/psql/archive/%f' # command to use to archive a WAL file
restore_command = 'cp /data/db/psql/archive/%f %p' # command to use to restore an archived WAL file
max_wal_senders = 10 # max number of walsender processes
# (change requires restart)
max_replication_slots = 10 # max number of replication slots
primary_conninfo = 'user=replicator password=''1!Ps@PgSQLReplica'' channel_binding=prefer host=pg-master port=5432 sslmode=prefer sslcompression=0 sslcertmode=allow sslsni=1 ssl_min_protocol_version=TLSv1.2 gssencmode=prefer krbsrvname=postgres gssdelegation=0 target_session_attrs=any load_balance_hosts=disable' # connection string to sending server
primary_slot_name = 'slot_3' # replication slot on sending server
hot_standby = on # "off" disallows queries during recovery
log_timezone = 'Asia/Kolkata'
# - Process Title -
cluster_name = 'psorbit-pg-cluster' # added to process titles if nonempty
Get the base data from the master on both slaves:
pg_basebackup -h pg-master -U replicator -p 5432 -D /data/db/psql/16/main -P -Xs -R && chown -R postgres:postgres /data/db/psql
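Note: pg_basebackup refuses to write into a non-empty target directory, and the earlier data-directory migration already populated it on every node. A sketch of what to run on each slave before the pg_basebackup command above (double-check the path before removing anything):
systemctl stop postgresql
rm -rf /data/db/psql/16/main/*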
Restart the postgresql service on both slaves:
systemctl restart postgresql
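To verify that replication is working, the master should show one walsender per slave, each bound to its slot, and the slaves should report that they are in recovery. From a psql shell:
-- on the master:
SELECT client_addr, state, sync_state FROM pg_stat_replication;
SELECT slot_name, active FROM pg_replication_slots;
-- on each slave:
SELECT pg_is_in_recovery();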
Login to psql shell:
sudo su postgres
psql
Create the database (on the master node):
CREATE DATABASE hmdm;
CREATE USER hmdm WITH PASSWORD 'qsjkdk283nfjkasn';
GRANT ALL PRIVILEGES ON DATABASE hmdm TO hmdm;
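A quick way to confirm that the new database is reachable through the pg-master hostname with the hmdm credentials (run from one of the three nodes, since the pg_hba.conf rules above only allow the node IPs):
psql -h pg-master -U hmdm -d hmdm -c '\conninfo'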
Make sure gdown is available on each node. If not, install it using:
pip install gdown
Check that Java is installed:
java --version
Install Java 11 if not present (e.g., using apt or yum depending on your distribution).
Use gdown to download the Apache Tomcat archive file:
gdown 1Hq_o4Hivkfa163Ylo59daureMEH8FRjz
Extract the .tar.gz file to /opt/mdm and set it up as the Tomcat directory:
mkdir -p /opt/mdm
tar -xzvf apache-tomcat*.tar.gz -C /opt/mdm
mv /opt/mdm/apache-tomcat* /opt/mdm/tomcat
Create a systemd unit file /etc/systemd/system/tomcat-mdm.service with the following contents:
[Unit]
Description=Tomcat 9 servlet container
[Service]
Type=forking
User=tomcat
Group=tomcat
Environment="JAVA_HOME=/opt/jdk-11"
Environment="JAVA_OPTS=-Djava.security.egd=file:///dev/urandom -Djava.awt.headless=true"
Environment="CATALINA_BASE=/opt/mdm/tomcat"
Environment="CATALINA_HOME=/opt/mdm/tomcat"
Environment="CATALINA_PID=/opt/mdm/tomcat/temp/tomcat.pid"
Environment="CATALINA_OPTS=-Xms1024M -Xmx3072M -server -XX:+UseParallelGC"
ExecStart=/opt/mdm/tomcat/bin/startup.sh
ExecStop=/opt/mdm/tomcat/bin/shutdown.sh
[Install]
WantedBy=multi-user.target
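The unit file assumes a JDK installed at /opt/jdk-11 and a dedicated tomcat system user, neither of which is created by the steps above; a sketch of the prerequisites (adjust paths to your environment):
ls /opt/jdk-11/bin/java                                    # confirm JAVA_HOME points at the installed JDK
useradd -r -m -U -d /opt/mdm/tomcat -s /bin/false tomcat   # create the service account if it does not exist
chown -R tomcat:tomcat /opt/mdm/tomcat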
Here’s how to perform the outlined steps:
Download the File:
Use gdown to accomplish this:
gdown 1X6dzXhLjilTj6Bl1S8UOrgDzasgxSWDW
Extract the File:
tar -xvzf mdm_package.tar.gz -C ~/
Navigate to the Extracted Folder:
cd ~/hmdm-install
Run the Installation Script:
bash hmdm_install.sh
This step configures the installed Tomcat to work with the hmdm.war file.
systemctl daemon-reload
systemctl enable tomcat-mdm
systemctl start tomcat-mdm
systemctl status tomcat-mdm
Repeat these steps for all Tomcat instances (mdm-servers).
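Once the service is running on a node, a simple check that Tomcat is answering (assuming the default HTTP connector port 8080 was not changed):
curl -I http://localhost:8080/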