MDM server setup

Setting up a 3-node PostgreSQL cluster with 1 master and 2 slaves

Step-1: Installation and Configuration of services

  • Node Names:

    • psorbit-node01 - 172.21.0.90 (KVM Node 1)
    • psorbit-node02 - 172.21.0.91 (KVM Node 1)
    • psorbit-node03 - 172.21.0.92 (KVM Node 1)
  • Data directory:

    • For all nodes: /data/db/psql
    • For archive: /data/db/psql/archive

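The /data path is expected to be a dedicated, mounted volume on each node. A minimal /etc/fstab sketch, assuming a hypothetical ext4-formatted disk at /dev/vdb1 (verify the actual device with lsblk, or use a UUID instead):

/dev/vdb1    /data    ext4    defaults,noatime    0   2
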
Postgresql

Repeat the following steps on all 3 nodes

  1. Download the Percona Release Package
    Use the wget command to download the Percona release package for your distribution:

    wget https://repo.percona.com/apt/percona-release_latest.$(lsb_release -sc)_all.deb
    
  2. Install the Percona Release Package
    Install the downloaded .deb package using dpkg:

    sudo dpkg -i percona-release_latest.$(lsb_release -sc)_all.deb
    
  3. Update the Package Repository
    Update the package lists to include the Percona repository:

    sudo apt update
    
  4. Set Up the PostgreSQL Version (PPG 16)
    Configure the Percona PostgreSQL distribution to use version 16:

    sudo percona-release setup ppg-16
    
  5. Install Percona PostgreSQL Server (PPG 16)
    Finally, install the Percona PostgreSQL Server version 16:

    sudo apt install percona-ppg-server-16
    
  6. Verify Installation
    Check the PostgreSQL version to confirm successful installation:

    psql --version
    
  7. Edit hostnames: Select a master node among the 3 nodes (psorbit-node01, psorbit-node02, psorbit-node03).
    Include a new hostname called pg-master for the selected node and map it to that node's IP address in each of the three nodes' local DNS file (/etc/hosts).

    vi /etc/hosts
    

    Add the following line: <master-node-ip> pg-master and save it.
    This change allows us to quickly switch the master node in case of a recovery, so that the mdm-servers (Tomcats) always reach the correct master.

  8. Configure the pg_hba.conf file for access management on all 3 nodes: Add the following entries to the file /etc/postgresql/16/main/pg_hba.conf:

    host    replication     replicator      172.21.0.90/32          md5
    host    replication     replicator      172.21.0.91/32          md5
    host    replication     replicator      172.21.0.92/32          md5
    host    all             hmdm            172.21.0.90/32          md5
    host    all             hmdm            172.21.0.91/32          md5
    host    all             hmdm            172.21.0.92/32          md5
    host    haproxy_checks  haproxy         172.21.0.90/32          md5
    host    haproxy_checks  haproxy         172.21.0.91/32          md5
    host    haproxy_checks  haproxy         172.21.0.92/32          md5
    host    haproxy_checks  haproxy         172.21.0.20/32          md5
    host    postgres        postgres_exporter       172.21.0.90/32  trust
    
  9. Change the postgresql data directory: Mount the data volume, create the directories /data/db/psql/16/main and /data/db/psql/16/archive, and copy the old data to the new data directory.

    mount -a
    mkdir -p /data/db/psql/16/main
    mkdir -p /data/db/psql/16/archive
    

    Edit the postgresql configuration file (/etc/postgresql/16/main/postgresql.conf) and set the data_directory parameter:

    data_directory = '/data/db/psql/16/main'                # use data in another directory
    

    Copy old data directory to the new directory:

    cp -rdp /var/lib/postgresql/16/main/* /data/db/psql/16/main/
    

    Make sure that postgres user has required permissions:

    chmod a+rx /data/db/
    chown -R postgres:postgres /data/db/psql
    

    Commit the changes by restarting the postgresql service

    systemctl restart postgresql
    
  10. Check postgresql database operation by logging in:

    sudo su postgres
    psql
    

    Try reading the databases:

    \l
    

    Sample output:

            Name      |  Owner   | Encoding | Locale Provider | Collate |  Ctype  | ICU Locale | ICU Rules |      Access privileges       
    ----------------+----------+----------+-----------------+---------+---------+------------+-----------+------------------------------
    haproxy_checks | postgres | UTF8     | libc            | C.UTF-8 | C.UTF-8 |            |           | 
    hmdm           | hmdm     | UTF8     | libc            | C.UTF-8 | C.UTF-8 |            |           | 
    postgres       | postgres | UTF8     | libc            | C.UTF-8 | C.UTF-8 |            |           | =Tc/postgres                +
                    |          |          |                 |         |         |            |           | postgres=CTc/postgres       +
                    |          |          |                 |         |         |            |           | postgres_exporter=c/postgres
    template0      | postgres | UTF8     | libc            | C.UTF-8 | C.UTF-8 |            |           | =c/postgres                 +
                    |          |          |                 |         |         |            |           | postgres=CTc/postgres
    template1      | postgres | UTF8     | libc            | C.UTF-8 | C.UTF-8 |            |           | =c/postgres                 +
                    |          |          |                 |         |         |            |           | postgres=CTc/postgres
    

You might not see this many rows in your output, as the application databases are yet to be created.

Set up the master node:

The master node is the only place where database operations occur. Every mdm-server connects to the master node's postgresql database using the pg-master hostname, which is updated whenever the master node changes. No matter which node is the master, the slot usage should follow the mapping below (the configuration files given here assume this order):

psorbit-node01 --> slot_1
psorbit-node02 --> slot_2
psorbit-node03 --> slot_3
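
Replication connections in this guide use a replicator role (referenced in pg_hba.conf and in primary_conninfo below). If that role does not exist yet, create it first from the psql shell on the master (the same shell used for the slots below); a minimal sketch, using the password that appears in primary_conninfo later in this document:

CREATE ROLE replicator WITH REPLICATION LOGIN PASSWORD '1!Ps@PgSQLReplica';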

Creating replication slots: Create one replication slot per node so that each hot standby connects to its own slot and receives WALs from the master.

Login to the psql shell interface:

sudo su postgres
psql

Run:

SELECT * FROM pg_create_physical_replication_slot('slot_1');
SELECT * FROM pg_create_physical_replication_slot('slot_2');
SELECT * FROM pg_create_physical_replication_slot('slot_3');

Sample output:

  slot_name  | lsn 
-------------+-----
 slot_1 | 
(1 row)
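
To confirm the slots exist (and, later, to see which ones are in use), you can query the pg_replication_slots view:

SELECT slot_name, slot_type, active FROM pg_replication_slots;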

Configuration file changes:

Select psorbit-node02 (172.21.0.91) as the master node:
Make sure to apply the following changes to the parameters in the master node's configuration file (/etc/postgresql/16/main/postgresql.conf):

data_directory = '/data/db/psql/16/main'                # use data in another directory
listen_addresses = '172.21.0.91'                # what IP address(es) to listen on;
port = 5432                             # (change requires restart)
max_connections = 100                   # (change requires restart)
wal_level = replica                     # minimal, replica, or logical
max_wal_size = 1GB
min_wal_size = 80MB
archive_mode = always           # enables archiving; off, on, or always
archive_command = 'cp %p /data/db/psql/archive/%f'              # command to use to archive a WAL file
restore_command = 'cp /data/db/psql/archive/%f %p'              # command to use to restore an archived WAL file
max_wal_senders = 10            # max number of walsender processes
                                # (change requires restart)
max_replication_slots = 10      # max number of replication slots
primary_conninfo = 'user=replicator password=''1!Ps@PgSQLReplica'' channel_binding=prefer host=pg-master port=5432 sslmode=prefer sslcompression=0 sslcertmode=allow sslsni=1 ssl_min_protocol_version=TLSv1.2 gssencmode=prefer krbsrvname=postgres gssdelegation=0 target_session_attrs=any load_balance_hosts=disable'                       # connection string to sending server
primary_slot_name = 'slot_2'                    # replication slot on sending server
hot_standby = on                        # "off" disallows queries during recovery
log_timezone = 'Asia/Kolkata'

# - Process Title -

cluster_name = 'psorbit-pg-cluster'                     # added to process titles if nonempty

Commit the configuration changes by restarting the postgresql service:

systemctl restart postgresql

Now, the master is ready to receive connections.
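
As a quick connectivity check from any of the nodes, pg_isready (shipped with the PostgreSQL client tools) can confirm the master is accepting connections via the pg-master hostname:

pg_isready -h pg-master -p 5432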

Setting up slaves:

Make sure to change the configuration parameters in each slave node's file (/etc/postgresql/16/main/postgresql.conf):

For slave-1 (psorbit-node01):

data_directory = '/data/db/psql/16/main'                # use data in another directory
listen_addresses = '172.21.0.90'                # what IP address(es) to listen on;
port = 5432                             # (change requires restart)
max_connections = 100                   # (change requires restart)
wal_level = replica                     # minimal, replica, or logical
max_wal_size = 1GB
min_wal_size = 80MB
archive_mode = always           # enables archiving; off, on, or always
archive_command = 'cp %p /data/db/psql/archive/%f'              # command to use to archive a WAL file
restore_command = 'cp /data/db/psql/archive/%f %p'              # command to use to restore an archived WAL file
max_wal_senders = 10            # max number of walsender processes
                                # (change requires restart)
max_replication_slots = 10      # max number of replication slots
primary_conninfo = 'user=replicator password=''1!Ps@PgSQLReplica'' channel_binding=prefer host=pg-master port=5432 sslmode=prefer sslcompression=0 sslcertmode=allow sslsni=1 ssl_min_protocol_version=TLSv1.2 gssencmode=prefer krbsrvname=postgres gssdelegation=0 target_session_attrs=any load_balance_hosts=disable'                       # connection string to sending server
primary_slot_name = 'slot_1'                    # replication slot on sending server
hot_standby = on                        # "off" disallows queries during recovery
log_timezone = 'Asia/Kolkata'

# - Process Title -

cluster_name = 'psorbit-pg-cluster'                     # added to process titles if nonempty

For slave-3, on node-3 (psorbit-node03):

data_directory = '/data/db/psql/16/main'                # use data in another directory
listen_addresses = '172.21.0.92'                # what IP address(es) to listen on;
port = 5432                             # (change requires restart)
max_connections = 100                   # (change requires restart)
wal_level = replica                     # minimal, replica, or logical
max_wal_size = 1GB
min_wal_size = 80MB
archive_mode = always           # enables archiving; off, on, or always
archive_command = 'cp %p /data/db/psql/archive/%f'              # command to use to archive a WAL file
restore_command = 'cp /data/db/psql/archive/%f %p'              # command to use to restore an archived WAL file
max_wal_senders = 10            # max number of walsender processes
                                # (change requires restart)
max_replication_slots = 10      # max number of replication slots
primary_conninfo = 'user=replicator password=''1!Ps@PgSQLReplica'' channel_binding=prefer host=pg-master port=5432 sslmode=prefer sslcompression=0 sslcertmode=allow sslsni=1 ssl_min_protocol_version=TLSv1.2 gssencmode=prefer krbsrvname=postgres gssdelegation=0 target_session_attrs=any load_balance_hosts=disable'                       # connection string to sending server
primary_slot_name = 'slot_3'                    # replication slot on sending server
hot_standby = on                        # "off" disallows queries during recovery
log_timezone = 'Asia/Kolkata'

# - Process Title -

cluster_name = 'psorbit-pg-cluster'                     # added to process titles if nonempty

Get the base data from the master on both slaves:

pg_basebackup -h pg-master -U replicator -p 5432 -D /data/db/psql/16/main -P -Xs -R && chown -R postgres:postgres /data/db/psql
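
Note that pg_basebackup refuses to write into a non-empty directory. If the target directory on a slave already contains data (for example from the copy made in step 9), stop the service and clear it before running the command above; a sketch (double-check the path before removing anything):

systemctl stop postgresql
rm -rf /data/db/psql/16/main/*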

Restart the postgresql service on both slaves:

systemctl restart postgresql
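
Once both slaves are back up, replication can be verified. On the master, pg_stat_replication should show one row per connected slave; on a slave, pg_is_in_recovery() should return true:

-- on the master
SELECT client_addr, state, sync_state FROM pg_stat_replication;

-- on a slave
SELECT pg_is_in_recovery();
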
Setting up database for mdm-webserver

Login to psql shell:

sudo su postgres
psql

Create database

CREATE DATABASE hmdm;
CREATE USER hmdm WITH PASSWORD 'qsjkdk283nfjkasn';
GRANT ALL PRIVILEGES ON DATABASE hmdm TO hmdm;
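
On PostgreSQL 15 and later the public schema is no longer writable by ordinary roles by default, so granting privileges on the database alone may not be enough for the application to create tables. If hmdm needs to own its objects, one option (a sketch) is to make it the database owner, or grant it rights on the public schema:

ALTER DATABASE hmdm OWNER TO hmdm;
-- or, alternatively:
\c hmdm
GRANT ALL ON SCHEMA public TO hmdm;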

MDM-webserver

Documentation for Setting Up Tomcat on 3 Nodes (psorbit-node01, psorbit-node02, psorbit-node03)

Prerequisites

  1. Ensure gdown is available on each node. If not, install it using:
    pip install gdown
    

Steps for Tomcat Installation

1. Ensure Java 11 Installation

  1. Verify that Java 11 is installed on each node. If not, install Java 11 before proceeding.
    java --version
    
    Install Java 11 if not present (e.g., using apt or yum depending on your distribution).
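
    For example, on Debian/Ubuntu, OpenJDK 11 can be installed with apt (a sketch; note that the service file in step 3 expects JAVA_HOME at /opt/jdk-11, so either place a JDK there or adjust that path):
    sudo apt install openjdk-11-jdk
    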

2. Download and Extract Apache Tomcat

  1. Use gdown to download the Apache Tomcat archive file:
    gdown 1Hq_o4Hivkfa163Ylo59daureMEH8FRjz
    
  2. Extract the .tar.gz file to /opt/mdm as the Tomcat directory:
    mkdir -p /opt/mdm
    tar -xzvf apache-tomcat*.tar.gz -C /opt/mdm
    mv /opt/mdm/apache-tomcat* /opt/mdm/tomcat
    
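The service file in the next step runs Tomcat as the tomcat user. If that account does not already exist on the node, create it and give it ownership of the Tomcat directory; a minimal sketch, assuming a dedicated system account:

    useradd -r -U -d /opt/mdm/tomcat -s /bin/false tomcat
    chown -R tomcat:tomcat /opt/mdm/tomcat
    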

3. Create the Tomcat Service File

  1. Create a service file for Tomcat at /etc/systemd/system/tomcat-mdm.service with the following contents:
    [Unit]
    Description=Tomcat 9 servlet container
    
    [Service]
    Type=forking
    User=tomcat
    Group=tomcat
    Environment="JAVA_HOME=/opt/jdk-11"
    Environment="JAVA_OPTS=-Djava.security.egd=file:///dev/urandom -Djava.awt.headless=true"
    Environment="CATALINA_BASE=/opt/mdm/tomcat"
    Environment="CATALINA_HOME=/opt/mdm/tomcat"
    Environment="CATALINA_PID=/opt/mdm/tomcat/temp/tomcat.pid"
    Environment="CATALINA_OPTS=-Xms1024M -Xmx3072M -server -XX:+UseParallelGC"
    ExecStart=/opt/mdm/tomcat/bin/startup.sh
    ExecStop=/opt/mdm/tomcat/bin/shutdown.sh
    
    [Install]
    WantedBy=multi-user.target
    

4. Configure hmdm tomcat files

Here’s how to perform the outlined steps:

Steps:

  1. Download the File:

    • Use the Google Drive file ID to download the file. You can use a tool like gdown to accomplish this:
      gdown 1X6dzXhLjilTj6Bl1S8UOrgDzasgxSWDW
      
  2. Extract the File:

    • Extract the downloaded file to any folder:
      tar -xvzf mdm_package.tar.gz -C ~/
      
  3. Navigate to the Extracted Folder:

    • Move into the extracted folder:
      cd ~/hmdm-install
      
  4. Run the Installation Script:

    • Execute the installation script:
      bash hmdm_install.sh
      

This step configures the installed Tomcat to work with the hmdm.war file.

5. Enable and Start the Tomcat Service

  1. Reload the systemd daemon to recognize the new service file:
    systemctl daemon-reload
    
  2. Enable the Tomcat service to start on boot:
    systemctl enable tomcat-mdm
    
  3. Start the Tomcat service:
    systemctl start tomcat-mdm
    
  4. Check the service status to ensure it is running:
    systemctl status tomcat-mdm
    
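    Optionally, confirm that Tomcat is answering HTTP requests (assuming the default connector port 8080; adjust this if your configuration uses a different port):
    curl -I http://localhost:8080/
    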

Repeat these steps on all three Tomcat nodes.