Please read the following documents before addressing the issues, to become familiar with the HAProxy architecture.
Configuration manual
Haproxy monitoring guide
Errors reference
When alerts are triggered, the C3 team receives notifications via email. The C3 team is expected to follow the outlined procedures below.
Data Collection: When an alert is fired, the C3 team should first gather relevant data to understand the source of the issue.
Severity-Based Actions:
Severity-Specific Notifications:
Before taking action on the C3 Remedy, the C3 team should thoroughly review the “Dependent Metrics and Checks” section to ensure all supporting data is understood.
This process ensures effective response and resolution for all alerts based on severity and priority.
The expression (time() - haproxy_process_start_time_seconds) / 60
calculates the uptime of the HAProxy process in minutes. This metric provides information about how long the HAProxy process has been running since its last restart. It is useful for monitoring process stability and identifying unexpected restarts. This alert indicates that HAProxy was restarted very recently. The C3 team should check the logs to identify potential causes of the restart, such as crashes or system reboots, and confirm with the DevOps team whether the restart was intentional. If the restart was not planned, all relevant data, including logs and recent configuration changes, should be provided to the DevOps team for further investigation.
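Where a quick ad hoc check is needed, here is a minimal sketch of evaluating this expression against the Prometheus HTTP API; the <prometheus-host> placeholder is an assumption and should point at your monitoring server.
```
# Evaluate the HAProxy uptime expression via the Prometheus HTTP API (sketch)
curl -sG 'http://<prometheus-host>:9090/api/v1/query' \
     --data-urlencode 'query=(time() - haproxy_process_start_time_seconds) / 60'
```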
Instance name, IP address, and alert age:
HAProxy process uptime:
(time() - haproxy_process_start_time_seconds) / 60
to determine the process uptime in minutes.
Service status:
systemctl status haproxy
and note whether it is active, restarting, or failed.
Process status:
ps aux | grep haproxy
to confirm if the HAProxy process is running and collect the process details.
Configuration validation:
haproxy -c -f /etc/haproxy/haproxy.cfg
to check for configuration errors and collect the output.
Log messages:
tail -100f /var/log/haproxy/haproxy_notice.log
tail -100f /var/log/haproxy/haproxy_debug.log
tail -100f /var/log/haproxy/haproxy_info.log
Port status:
netstat -tlnp | grep haproxy
and collect the output.
Resource availability:
top, free -h, and df -h
to ensure resource availability.
Firewall status:
ufw status && ufw status | grep -E "(80|443)"
to confirm if any ports are blocked and collect the output, if the firewall is active.
When the HAProxy process uptime indicates a recent restart, please check the following metrics/data:
HAProxy Backend Availability:
haproxy_backend_up{instance="$host"}
to verify the status of backend servers connected to the HAProxy instance.
Frontend Request Rates:
rate(haproxy_frontend_http_requests_total{instance="$host"}[<time interval>])
to observe any differences in request patterns after the restart.
Error Rates:
haproxy_frontend_request_errors_total{instance="$host"}
to identify if error rates have spiked following the restart.
Queue Metrics:
haproxy_server_queue_size{instance="$host"}
to ensure backend servers are not overwhelmed and queues are being processed efficiently.
Response Times:
haproxy_server_http_response_time_average_seconds
to detect unusual latency patterns.
By examining these metrics, the team can diagnose the cause of the restart and address underlying issues.
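Where it helps to capture the local data in one pass before escalating, here is a minimal bash sketch that runs the collection commands listed above and writes everything to a single file. It must be run with sufficient privileges, and tail is used without -f so the script does not block; the output path is an arbitrary choice.
```
#!/usr/bin/env bash
# One-shot data collection sketch using the commands from this runbook
OUT="/tmp/haproxy_alert_$(date +%Y%m%d_%H%M%S).txt"
{
  echo "== service status ==";    systemctl status haproxy
  echo "== process status ==";    ps aux | grep '[h]aproxy'
  echo "== config validation =="; haproxy -c -f /etc/haproxy/haproxy.cfg
  echo "== listener ports ==";    netstat -tlnp | grep haproxy
  echo "== resources ==";         free -h; df -h
  echo "== firewall ==";          ufw status
  echo "== notice log ==";        tail -100 /var/log/haproxy/haproxy_notice.log
  echo "== debug log ==";         tail -100 /var/log/haproxy/haproxy_debug.log
  echo "== info log ==";          tail -100 /var/log/haproxy/haproxy_info.log
} > "$OUT" 2>&1
echo "Collected data written to $OUT"
```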
Follow these steps for basic troubleshooting after a recent HAProxy restart. Do not perform any configuration changes or actions that might impact the overall functionality. For unresolved issues, escalate to the DevOps team.
Verify HAProxy Service Status
systemctl status haproxy
Sample Output:
● haproxy.service - HAProxy Load Balancer
Loaded: loaded (/lib/systemd/system/haproxy.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2024-12-02 10:00:00 UTC; 2m ago
Docs: man:haproxy(1)
Main PID: 2345 (haproxy)
Tasks: 4 (limit: 9375)
Memory: 2.0M
CGroup: /system.slice/haproxy.service
├─2345 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg
└─2346 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg
Check Recent Logs
journalctl -u haproxy | tail -n 50
Look for log entries related to configuration issues or errors.
Verify HAProxy Listener Ports
netstat -tlnp | grep haproxy
Sample Output:
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 2345/haproxy
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 2345/haproxy
Ensure the necessary ports (e.g., 80 and 443) are in a LISTEN state.
Inspect Resource Usage
top -b -n1 | grep haproxy
Look for excessive resource usage that might cause instability right after restarts.
Basic Network Checks
ping -c 3 <backend_server>
Confirm the backend servers are accessible without significant packet loss or latency.
Check Backend Health Metrics
curl -I http://172.21.0.61:8182/apm/mon/health
Sample output:
HTTP/1.1 200
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: 0
Strict-Transport-Security: max-age=31536000 ; includeSubDomains
X-XSS-Protection: 1; mode=block
X-Frame-Options: DENY
X-Content-Type-Options: nosniff
vary: accept-encoding
Content-Length: 0
Date: Thu, 05 Dec 2024 13:17:48 GMT
Check the HAProxy stats page at the http://172.21.0.20:9500/stats URL. Look for health check failures or other issues.
Credentials for the stats page:
User: haproxy
Password: 1!Qhaproxy
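A quick way to hit the stats page from the command line, assuming it is protected with HTTP basic auth using the credentials above:
```
# Fetch the HAProxy stats page headers using the credentials from this runbook
curl -s -I -u 'haproxy:1!Qhaproxy' http://172.21.0.20:9500/stats
```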
Frontend Connection Check
curl -I http://172.21.0.61:8182
Sample output:
HTTP/1.1 200
Set-Cookie: pssbtom=56884264BFCD00D15F0ED47ED976D524; Path=/; Secure; HttpOnly
Set-Cookie: pssbhz=HZ5608D764C6224BD397B566FA57BC3CEB; Path=/
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: 0
Strict-Transport-Security: max-age=31536000 ; includeSubDomains
X-XSS-Protection: 1; mode=block
X-Frame-Options: DENY
X-Content-Type-Options: nosniff
vary: accept-encoding
Content-Type: text/html;charset=UTF-8
Transfer-Encoding: chunked
Date: Thu, 05 Dec 2024 13:14:24 GMT
Escalate to the DevOps team for further troubleshooting and configuration changes.
In case the restart of HAProxy was not intentional, perform the following remedies to identify and resolve the root cause:
Check for OOM Killer Involvement
dmesg | grep -i "Out of memory"
dmesg | grep -i haproxy
Sample output:
[12345.678910] Out of memory: Kill process 5432 (haproxy) score 950 or sacrifice child
[12345.678912] Killed process 5432 (haproxy) total-vm:1048576kB, anon-rss:512000kB, file-rss:1024kB, shmem-rss:0kB
If the OOM killer terminated HAProxy, set vm.overcommit_memory to a safe value (2) in /etc/sysctl.conf to prevent aggressive memory allocation. After setting the parameter, apply it with sudo sysctl -p. A value of 2 disables memory overcommit, which helps keep the process stable under memory pressure.
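A minimal sketch of persisting and applying that setting (paths and value as given above):
```
# Persist vm.overcommit_memory=2 and apply it (sketch)
echo "vm.overcommit_memory = 2" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
```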
Analyze Resource Usage
Check if the server faced CPU, memory, or disk bottlenecks:
top -b -n 1 | grep haproxy
free -h
df -h
Make sure that there is sufficient memory and disk space available.
Check whether the limits for the HAProxy process are too low for the load by inspecting the proc filesystem for the process’s information. To get the main PID:
systemctl status haproxy | grep "Main PID"
Sample output:
Main PID: 2793613 (haproxy)
To inspect the limits enforced:
cat /proc/2793613/limits
Sample output:
Limit Soft Limit Hard Limit Units
Max cpu time unlimited unlimited seconds
Max file size unlimited unlimited bytes
Max data size unlimited unlimited bytes
Max stack size 8388608 unlimited bytes
Max core file size 0 unlimited bytes
Max resident set unlimited unlimited bytes
Max processes 772996 772996 processes
Max open files 80334 524288 files
Max locked memory 65536 65536 bytes
Max address space unlimited unlimited bytes
Max file locks unlimited unlimited locks
Max pending signals 772996 772996 signals
Max msgqueue size 819200 819200 bytes
Max nice priority 0 0
Max realtime priority 0 0
Max realtime timeout unlimited unlimited us
If these limits are too low for the expected load, raise them in the HAProxy systemd unit:
[Service]
LimitNOFILE=102400
LimitNPROC=102400
Then apply the change with: systemctl daemon-reload && systemctl restart haproxy
Discuss the new values with the DevOps team and finalize them before adjusting the above parameters.
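After the restart, a small sketch (reusing the Main PID lookup shown above) to confirm the raised limits are actually in effect:
```
# Confirm the new limits are applied to the running HAProxy process
PID=$(systemctl status haproxy | grep "Main PID" | awk '{print $3}')
grep -E "Max open files|Max processes" /proc/${PID}/limits
```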
Investigate Connections Overload
Check the current maxconn limit:
echo "show info" | socat /var/run/haproxy.stat stdio | grep -i maxconn
Sample output:
Maxconn: 40100
Hard_maxconn: 40100
MaxConnRate: 16
MaxConnRate should always be less than Maxconn or Hard_maxconn.
If needed, increase the maxconn value in the HAProxy configuration:
global
maxconn 80200
Check the logs for connection errors:
grep -ir "Connection error" /var/log/haproxy/*
Validate HAProxy Configuration before restarting with the above changes
Ensure the configuration is error-free:
haproxy -c -f /etc/haproxy/haproxy.cfg
Sample output:
Configuration file is valid
You might see some configuration warnings in the above output; these can be ignored with proper justification.
Correct any detected syntax errors and restart.
systemctl restart haproxy
The expression (increase(haproxy_frontend_request_errors_total[1m]) / increase(haproxy_frontend_http_requests_total[1m])) * 100
calculates the percentage of frontend request errors in HAProxy over a one-minute period. This metric helps monitor the health of the HAProxy frontend by showing the ratio of failed requests to total requests. A high error rate may indicate problems such as misconfigurations, server overloads, or network issues. This alert triggers when the error rate exceeds the threshold (>25% for warning, >50% for critical), signaling a potential issue that requires attention. The C3 team should complete the data collection and remedies given below before raising the issue to the DevOps team.
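As a sketch, the alert expression can be evaluated ad hoc and compared against the warning/critical thresholds; <prometheus-host> is a placeholder and jq is assumed to be installed.
```
# Evaluate the error-rate expression and flag it against the 25%/50% thresholds (sketch)
QUERY='(increase(haproxy_frontend_request_errors_total[1m]) / increase(haproxy_frontend_http_requests_total[1m])) * 100'
curl -sG 'http://<prometheus-host>:9090/api/v1/query' --data-urlencode "query=${QUERY}" \
  | jq -r '.data.result[] | "\(.metric.proxy // "all")\t\(.value[1])"' \
  | awk -F'\t' '{ level = ($2 > 50) ? "CRITICAL" : ($2 > 25) ? "WARNING" : "ok";
                  printf "%s: %.1f%% (%s)\n", $1, $2, level }'
```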
Instance name, IP address, and alert age:
HAProxy process uptime:
(time() - haproxy_process_start_time_seconds) / 60
to determine the process uptime in minutes.
Service status:
systemctl status haproxy
and note whether it is active, restarting, or failed.
Process status:
ps aux | grep haproxy
to confirm if the HAProxy process is running and collect the process details.
Configuration validation:
haproxy -c -f /etc/haproxy/haproxy.cfg
to check for configuration errors and collect the output, including any warnings.
Log messages:
tail -100f /var/log/haproxy/haproxy_notice.log
tail -100f /var/log/haproxy/haproxy_debug.log
tail -100f /var/log/haproxy/haproxy_info.log
Resource availability:
top, free -h, and df -h
to ensure resource availability.
Active backends: Use haproxy_backend_status
and check if all backends are up.
Active frontends: Use haproxy_frontend_status
and check if all frontends are in active status.
Queue status: Check the current and maximum queues of servers to see if requests are failing because the servers are receiving more requests than they can handle.
Use: haproxy_server_current_queue
and haproxy_server_max_queue
to find out if the maximum queue limit is reached.
Identify Backends:
haproxy_backend_status
Ping Backends:
ping -c 4 <backend-IP>
Monitor Backend Health Check Status:
health_check_status
codes returned by backends to diagnose the specific service causing the issue.
Interpret Status Codes:
Escalate as Necessary:
Check Backend Status Changes:
haproxy_backend_status
to compare the number of backends up and down over the last hour.
increase(haproxy_backend_up[1h])
Analyze Network Issues:
Escalation:
Review HAProxy Logs:
tail -100 /var/log/haproxy/haproxy_notice.log
tail -100 /var/log/haproxy/haproxy_info.log
tail -100 /var/log/haproxy/haproxy_debug.log
Report Findings:
Upon receiving escalation from the C3 team regarding backend connectivity or health check failures, the DevOps team should proceed with targeted troubleshooting for the identified service(s).
General Troubleshooting for Backend Services
Validate Connectivity to Backends:
telnet <backend-IP> <service-port>
Check Backend Logs:
tail -100 /var/log/mysql/error.log
Verify Resource Availability:
top
free -h
df -h
Service-Specific Remedies
MySQL:
systemctl status mysql
systemctl restart mysql
Scylla:
nodetool status
systemctl restart scylla-server
Redpanda:
systemctl status redpanda
systemctl restart redpanda
GlusterFS:
gluster volume status
systemctl restart glusterd
Check for Split-Brain
gluster volume heal <volume-name> info split-brain
gluster volume heal <volume-name> full
Solve for Volume Inconsistencies
gluster volume heal <volume-name>
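Where several volumes are configured, a read-only sketch to check split-brain status across all of them (gluster volume list enumerates the volumes):
```
# Check split-brain entries for every Gluster volume (read-only)
for vol in $(gluster volume list); do
  echo "== ${vol} =="
  gluster volume heal "${vol}" info split-brain
done
```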
The expression haproxy_server_http_response_time_average_seconds
calculates the average response time of a backend server in seconds. This metric provides insight into the latency of responses from HAProxy backend servers, which can indicate performance issues. This alert triggers when the average response time exceeds 5 seconds, suggesting that backend servers may be experiencing delays or are under heavy load. The C3 team should investigate the affected backend servers, review server logs, and check resource utilization (e.g., CPU, memory, or network) on both the HAProxy node and the backend nodes.
Instance name, IP address, and alert age:
Response time details:
haproxy_server_http_response_time_average_seconds
to evaluate the average response time of backend servers.
Recent log messages:
tail -100 /var/log/haproxy/haproxy_info.log
tail -100 /var/log/haproxy/haproxy_debug.log
tail -100 /var/log/haproxy/haproxy_notice.log
Resource availability:
top or htop
df -h
to check for sufficient space.
Collect HTTP responses:
rate(haproxy_backend_http_responses_total{instance="$host"}[<time_interval>])
to observe changes in backend response rates, especially errors like 5xx and 4xx.
Collect response rates from the last few minutes:
increase(haproxy_backend_http_responses_total[<time interval>])
to check if there is a sudden increase in the number of responses the backend is serving.
Frontend Session Rate:
rate(haproxy_frontend_current_session_rate[<time interval>])
to identify spikes in session creation rates.
Frontend Connections:
rate(haproxy_frontend_connections_total[<time interval>])
to monitor increases in frontend connections over time.
View average queue time:
haproxy_backend_http_queue_time_average_seconds
Collecting and analyzing these metrics helps identify the root cause of the issue.
Investigate High Response Times:
haproxy_server_http_response_time_average_seconds{instance="$host"}
to identify the backend servers with high response times.
ssh devopsadmin@backend-server-ip
top -n 1
free -h
iostat
Collect and analyze CPU, memory, and disk I/O statistics.
Clear Buffer and Cache:
sync; echo 3 > /proc/sys/vm/drop_caches
Confirm memory availability after clearing:
free -h
Validate Backend Health:
haproxy_backend_up{instance="$host"}
to confirm backend servers are marked as healthy.
Check backend application logs:
tail -f /var/log/tomcat/catalina.out
tail -f /var/log/mysql/error.log
Analyze Bandwidth and Traffic:
Use iftop:
iftop -i virbr20
to monitor traffic between the backends and the HAProxy node.
Sample output:
syhydsrv001:48456 => 172.21.0.42 9.03Kb 1.81Kb 462b
<= 1.43Mb 294Kb 73.4Kb
iftop -i testbr
to monitor traffic between frontends and client connections.
Sample output:
syhydsrv001 => 183.82.7.33.actcorp.in 32.4Kb 38.1Kb 47.6Kb
<= 19.3Kb 21.2Kb 30.6Kb
syhydsrv001 => 223.182.53.215 57.9Kb 23.2Kb 8.91Kb
<= 12.7Kb 5.33Kb 2.05Kb
Capture traffic using tcpdump
for detailed packet analysis:
tcpdump -i testbr host <backend-server-ip> -w haproxy_traffic.pcap
Use Wireshark to open .pcap
files for deep inspection.
On the backend servers, monitor bandwidth:
iftop -i eth0
Capture incoming traffic to the backend:
tcpdump -i virbr20 port 3306 -w backend_traffic.pcap
Inspect Network Performance:
ping -c 4 backend-server-ip
traceroute backend-server-ip
Check for latencies.
Monitor Backend Queues:
haproxy_server_current_queue{instance="$host"}
Inspect Logs for Errors:
tail -100f /var/log/haproxy/haproxy_debug.log
Look for error responses (such as 503 and 502) and address root causes.
By following these remedies, the team can systematically diagnose and resolve high response times in HAProxy and its backend servers, ensuring optimal performance.
Tune HAProxy Timeouts:
Adjust the timeout values in /etc/haproxy/haproxy.cfg:
timeout connect 5s
timeout client 50s
timeout server 50s
haproxy -c -f /etc/haproxy/haproxy.cfg
systemctl reload haproxy
Increase resources for Tomcat and other services as required
For tomcat:
sudo vi /etc/systemd/system/<tomcat_name>.service
Locate the Environment line containing CATALINA_OPTS.
Adjust the heap values (-Xms and -Xmx) to suit the application’s requirements. For example, to increase to 1 GB minimum and 4 GB maximum:
Environment="CATALINA_OPTS=-Xms1024M -Xmx4096M -server -XX:+UseParallelGC -javaagent:/opt/tomcat-exporter/jmx_prometheus_javaagent-1.0.1.jar=9115:/opt/tomcat-exporter/config.yaml"
sudo systemctl daemon-reload
sudo systemctl restart tomcat
Verify Changes:
ps aux | grep tomcat
Look for the -Xms and -Xmx values in the process command line.
Monitor Resource Usage:
Use top or htop to monitor memory usage and ensure the changes have stabilized the application:
top -p $(pgrep -d',' -f tomcat)
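A small sketch to pull only the heap flags out of the running process for a quick before/after comparison:
```
# Show only the -Xms/-Xmx flags of the running Tomcat process
ps aux | grep '[t]omcat' | grep -oE '(-Xms|-Xmx)[0-9]+[MG]'
```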
Redpanda:
Locate the Configuration File:
sudo vi /etc/default/redpanda
Modify the Memory Setting:
Locate the START_ARGS line and adjust the --memory value. For example, to allocate 2048 MB:
START_ARGS=--check=true --memory 2048M
Save and Apply Changes:
sudo systemctl daemon-reload
Restart Redpanda Service:
sudo systemctl restart redpanda
Verify Changes:
ps aux | grep redpanda
Look for the --memory parameter in the command line.
Monitor Resource Usage:
Use top or htop to observe Redpanda’s memory usage:
top -p $(pgrep -d',' -f redpanda)
MySQL Buffer Pool Size Adjustment
Locate the Configuration File:
sudo vi /etc/mysql/mysql.conf.d/mysqld.cnf
Modify the Buffer Pool Size:
Locate the innodb_buffer_pool_size line and adjust the value. For example, to set it to 4096 MB:
innodb_buffer_pool_size=4096M
Save and Apply Changes:
sudo systemctl restart mysql
Verify Changes:
mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"
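SHOW VARIABLES reports the value in bytes; a small sketch to read it back in megabytes for an easy comparison with the configured value:
```
# Buffer pool size in MB (the variable is stored in bytes)
mysql -e "SELECT @@innodb_buffer_pool_size / 1024 / 1024 AS buffer_pool_mb;"
```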
Monitor Resource Usage:
Use top or htop to monitor MySQL’s memory usage:
top -p $(pgrep -d',' -f mysql)
Scylla Memory Adjustment
Locate the Configuration File:
sudo vi /etc/default/scylla-server
Modify the Memory Setting:
SCYLLA_ARGS
line and adjust the --memory
value. For example, to allocate 4096 MB:
SCYLLA_ARGS="--log-to-syslog 1 --log-to-stdout 0 --default-log-level info --network-stack posix --memory 4096M"
Save and Apply Changes:
sudo systemctl daemon-reload
Restart Scylla Service:
sudo systemctl restart scylla-server
Verify Changes:
ps aux | grep scylla
Look for the --memory parameter in the command line.
Monitor Resource Usage:
Use top or htop to observe Scylla’s memory usage:
top -p $(pgrep -d',' -f scylla-server)
By following these steps, you can adjust and verify memory allocation for Redpanda, MySQL, and Scylla effectively, ensuring they run optimally based on application demands.
For redpanda:
Increase Resources if Necessary for the nodes:
free -h
lscpu
If response latencies do not improve with any of the above remedies, check with the management/IT team about replacing the LAN connections with hardware that allows more bandwidth.
The backend weight for the pssb_webservers proxy is not equal to the expected value of 5
haproxy_backend_weight{proxy=~"(pssb_webservers)", instance="172.21.0.20"}
backend pssb_webservers
server pssb1avm001 172.21.0.61:8182 maxconn 5000 check inter 55s fall 3 rise 3 cookie pssb1avm001 observe layer4 error-limit 9 on-error mark-down
server pssb1avm002 172.21.0.62:8182 maxconn 5000 check inter 55s fall 3 rise 3 cookie pssb1avm002 observe layer4 error-limit 9 on-error mark-down
server pssb1abm003 172.21.0.63:8182 maxconn 5000 check inter 55s fall 3 rise 3 cookie pssb1abm003 observe layer4 error-limit 9 on-error mark-down
server pssb1avm004 172.21.0.64:8182 maxconn 5000 check inter 55s fall 3 rise 3 cookie pssb1avm004 observe layer4 error-limit 9 on-error mark-down
server pssb1avm005 172.21.0.65:8182 maxconn 5000 check inter 55s fall 3 rise 3 cookie pssb1avm005 observe layer4 error-limit 9 on-error mark-down
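To cross-check the weights HAProxy is actually applying at runtime, a sketch using the stats socket (the /var/run/haproxy.stat path is the one referenced in the maxconn check earlier in this runbook):
```
# Dump runtime state, including per-server weights, for the pssb_webservers backend
echo "show servers state pssb_webservers" | socat /var/run/haproxy.stat stdio
```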
To view stats of haproxy : Haproxy stats
haproxy_backend_active_servers{instance="172.21.0.20", job="haproxy_exporter", proxy="pssb_webservers"}
haproxy_backend_status{proxy="pssb_webservers"}
haproxy_server_status{proxy="pssb_webservers"}
haproxy_backend_downtime_seconds_total{instance="172.21.0.20", job="haproxy_exporter", proxy="pssb_webservers"}
tail -100f /var/log/haproxy/haproxy_notice.log
tail -100f /var/log/haproxy/haproxy_debug.log
tail -100f /var/log/haproxy/haproxy_info.log
systemctl reload haproxy
The backend status for one or more specified backends is not equal to 1.
haproxy_backend_status{proxy=~"(artifacts_panchayatseva|elitical_api|elitical_ui|geomaps|jenkins|mdm_sb_node|monitor|panchayatseva_bot|portereu_api|portereu_ui|portereu_webservers|ps_jacoco|ps_surefire|ps_swagger|psorbit_webservers|pssb_webservers|sonar|stats|survey_artifacts|techdoc)",instance="172.21.0.20",state="UP"}
tail -100f /var/log/haproxy/haproxy_notice.log
tail -100f /var/log/haproxy/haproxy_debug.log
tail -100f /var/log/haproxy/haproxy_info.log
systemctl reload haproxy
The rate of backend health check up/down state changes exceeds the threshold.
rate(haproxy_backend_check_up_down_total{proxy=~"(artifacts_panchayatseva|elitical_api|elitical_ui|geomaps|jenkins|mdm_sb_node|monitor|panchayatseva_bot|portereu_api|portereu_ui|portereu_webservers|ps_jacoco|ps_surefire|ps_swagger|psorbit_webservers|pssb_webservers|sonar|stats|survey_artifacts|techdoc)", instance="172.21.0.20"}[$__rate_interval]) > 5
To view stats of haproxy : Haproxy stats
Check the affected backend at <backend_ip>:<port> (example: 172.21.0.94:8182).
systemctl reload haproxy
The frontend for https_443_frontend or stats is not in the UP state.
haproxy_frontend_status{proxy=~"(https_443_frontend|stats)",instance="172.21.0.20",state="UP"}
tail -100f /var/log/haproxy/haproxy_notice.log
tail -100f /var/log/haproxy/haproxy_debug.log
tail -100f /var/log/haproxy/haproxy_info.log
haproxy_frontend_connections_total
haproxy_frontend_current_sessions
haproxy_frontend_current_session_rate
haproxy_frontend_requests_denied_total
haproxy_frontend_sessions_total
haproxy_frontend_http_requests_total
haproxy_frontend_bytes_in_total and haproxy_frontend_bytes_out_total
haproxy_frontend_http_responses_total
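A sketch to pull each of the metrics above from Prometheus in one pass; <prometheus-host> is a placeholder, and the instance label matches the alert query.
```
# Query each frontend metric listed above for the HAProxy instance at 172.21.0.20
for m in haproxy_frontend_connections_total haproxy_frontend_current_sessions \
         haproxy_frontend_current_session_rate haproxy_frontend_requests_denied_total \
         haproxy_frontend_sessions_total haproxy_frontend_http_requests_total \
         haproxy_frontend_bytes_in_total haproxy_frontend_bytes_out_total \
         haproxy_frontend_http_responses_total; do
  echo "== ${m} =="
  curl -sG 'http://<prometheus-host>:9090/api/v1/query' \
       --data-urlencode "query=${m}{instance=\"172.21.0.20\"}"
  echo
done
```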
netstat -tuln | grep <port>
systemctl restart haproxy
haproxy -c -f /etc/haproxy/haproxy.cfg
systemctl reload haproxy
ufw allow <port>
To enable debug logging, add the following in the global section:
global
log 127.0.0.1 local0 debug
The backend weight for the psorbit_webservers proxy is not equal to the expected value of 3.
haproxy_backend_weight{proxy=~"(psorbit_webservers)", instance="172.21.0.20"} != 3
haproxy_backend_active_servers{instance="172.21.0.20", job="haproxy_exporter", proxy="psorbit_webservers"}
haproxy_backend_status{proxy="psorbit_webservers"}
haproxy_server_status{proxy="psorbit_webservers"}
haproxy_backend_downtime_seconds_total{instance="172.21.0.20", job="haproxy_exporter", proxy="psorbit_webservers"}
systemctl reload haproxy
The backend weight for one or more specified servers is not equal to 1. Alert Query:
haproxy_backend_weight{proxy=~"(artifacts_panchayatseva|elitical_api|elitical_ui|geomaps|jenkins|mdm_sb_node|monitor|panchayatseva_bot|portereu_api|portereu_ui|portereu_webservers|ps_jacoco|ps_surefire|ps_swagger|sonar|stats|survey_artifacts|techdoc)", instance="172.21.0.20"} != 1
tail -100f /var/log/haproxy/haproxy_notice.log
tail -100f /var/log/haproxy/haproxy_debug.log
tail -100f /var/log/haproxy/haproxy_info.log
To view stats of haproxy : Haproxy stats
systemctl reload haproxy
The difference between backend session limits and max sessions is below the threshold.
(haproxy_backend_limit_sessions{proxy=~"(artifacts_panchayatseva|elitical_api|elitical_ui|geomaps|jenkins|mdm_sb_node|monitor|panchayatseva_bot|portereu_api|portereu_ui|portereu_webservers|ps_jacoco|ps_surefire|ps_swagger|psorbit_webservers|pssb_webservers|sonar|stats|survey_artifacts|techdoc)",instance="172.21.0.20"} - haproxy_backend_max_sessions{proxy=~"(artifacts_panchayatseva|elitical_api|elitical_ui|geomaps|jenkins|mdm_sb_node|monitor|panchayatseva_bot|portereu_api|portereu_ui|portereu_webservers|ps_jacoco|ps_surefire|ps_swagger|psorbit_webservers|pssb_webservers|sonar|stats|survey_artifacts|techdoc)",instance="172.21.0.20"}) < 3500
If needed, increase maxconn for the backend servers, for example:
backend <backend_name>
server server1 192.168.x.x:80 check maxconn 10000
The difference between frontend_limit_sessions and frontend_max_sessions is below the threshold.
haproxy_frontend_limit_sessions{proxy=~"(https_443_frontend|stats)",instance="172.21.0.20"} - haproxy_frontend_max_sessions{proxy=~"(https_443_frontend|stats)",instance="172.21.0.20"} < 40000
To view stats of haproxy : Haproxy stats
tail -100f /var/log/haproxy/haproxy_notice.log
tail -100f /var/log/haproxy/haproxy_debug.log
tail -100f /var/log/haproxy/haproxy_info.log