
International Men’s Health Week: Here are 7 tests Every Man Above 40 Should Consider

International Men’s Health Week, which is celebrated annually during the week ending on Father’s Day, honours the importance of the health and wellness of boys and men. International Men’s Health Week provides an opportunity to educate the public about what can be done to improve the state of men’s health.

With today's world full of stress, pressure and health crises, the body wears down earlier than it used to. On the occasion of International Men's Health Week, we take a look at some important health tests men should take to gauge how fit they are and what changes they need to make for a healthier life.

Blood Sugar Test: It measures the amount of glucose in the blood and is an important screening for diabetes or pre-diabetes and insulin resistance. Untreated diabetes can cause problems with eyes, feet, heart, skin, nerves, kidneys and more. It can also affect mental health. The risk of prostate and other cancers also increases with high blood sugar.

Colorectal Cancer Screening: Men above 40 should get screened for colon cancer. Any of the following three tests can help in detection: sigmoidoscopy, colonoscopy, and the faecal occult blood test. A colonoscopy is painless and takes only 15 to 20 minutes. Even better, this test can detect colon cancer early, when it's most treatable.

Cholesterol test: There are three kinds of cholesterol circulating in the blood. Men above forty should get themselves checked for total cholesterol, low-density lipoprotein (LDL, or "bad") cholesterol and high-density lipoprotein (HDL, or "good") cholesterol. High cholesterol is a major risk factor for heart disease.

Bone Density: While osteoporosis may be more common in women, men get it too. According to experts, men over 50 who are in a high-risk group (family history, sedentary lifestyle, etc.) should get themselves tested. A bone density test can determine the strength of a person's bones and the risk of a fracture.

Testosterone test: With age, there is a risk of a dip in libido as well. Low testosterone can cause erectile dysfunction, fatigue, weight gain, loss of muscle, loss of body hair, sleep problems, trouble concentrating, bone loss, and personality changes.

Stool Sample Test: This test helps detect hidden (occult) blood and other abnormalities in the stool, and should be done once every two years after you cross 40.

PSA test: The PSA test is a blood test used primarily to screen for prostate cancer. The test measures the amount of prostate-specific antigen (PSA) in your blood.

Eye test: Getting eye tests done after 40 is pertinent, as the risk of hypermetropia (long-sightedness) as well as myopia increases with age. Diabetes can also increase the risk of both eye ailments.

Tomcat log automatic deletion implementation



In a production environment, Tomcat generates a lot of logs every day. If you don't clean them up, they will eventually fill the disk, and manual cleaning is too much trouble. Therefore, write a script that deletes log files older than 5 days (adjust to your actual situation).

Writing a script

  1. Create the script /usr/local/script/cleanTomcatlog.sh:


#!/bin/bash
# Log directories for the three Tomcat instances
export WEB_TOMCAT1=/usr/local/tomcat1/logs
export WEB_TOMCAT2=/usr/local/tomcat2/logs
export WEB_TOMCAT3=/usr/local/tomcat3/logs
# Truncate the live catalina.out rather than deleting it (Tomcat keeps it open)
echo > ${WEB_TOMCAT1}/catalina.out
echo > ${WEB_TOMCAT2}/catalina.out
echo > ${WEB_TOMCAT3}/catalina.out
# Delete log files not modified in the last 5 days
find ${WEB_TOMCAT1}/* -mtime +5 -type f -exec rm -f {} \;
find ${WEB_TOMCAT2}/* -mtime +5 -type f -exec rm -f {} \;
find ${WEB_TOMCAT3}/* -mtime +5 -type f -exec rm -f {} \;

  2. Make the cleanTomcatlog.sh script executable:
    chmod a+x cleanTomcatlog.sh
  3. Enter the following command on the console:
    crontab -e
  4. Press i to edit the file and enter the following line, which runs the script every day at 4:30 am:

30 04 * * * /usr/local/script/cleanTomcatlog.sh

Press esc to exit editing, then type :wq and press Enter to save.

Restart the cron daemon:

service crond stop
service crond start

Name explanation

An explanation of the crontab command:

You can set a program's execution schedule through crontab, for example to run a program at 8 o'clock every day, or at 10 o'clock every Monday.

crontab -l lists the schedule ("-l" simply views it);
crontab -e edits the schedule (it is really just vi editing a specific file);
crontab -r deletes the whole schedule — this is rarely needed, since it removes all of the user's entries at once; it is usually better to delete entries one at a time with the -e editor.

So how do you edit it? The crontab entry format is: M H D m d CMD. There are 6 fields; the last, CMD, is the program to be executed, such as cleanTomcatlog.sh. The five time fields, separated by spaces, are M: minute (0-59), H: hour (0-23), D: day of the month (1-31), m: month (1-12), d: day of the week (0-6, 0 for Sunday). Each field can be a single number or several numbers separated by commas; if a field is not restricted, the default is "*". For example, to execute cleanTomcatlog.sh every day at 04:30, the entry is 30 04 * * * /usr/local/script/cleanTomcatlog.sh.
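As a sketch of the field syntax, here are a few example crontab entries (the weekly_report.sh path is a made-up illustration):

```
# M   H  D  m  d  command
30   04  *  *  *  /usr/local/script/cleanTomcatlog.sh   # every day at 04:30
0   */2  *  *  *  sh /root/tom_log.sh                   # every 2 hours, on the hour
0    10  *  *  1  /usr/local/script/weekly_report.sh    # every Monday at 10:00
```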

Steps to install Oracle 19c in CentOS 7.6 RPM mode


  1. Download the required installation package:

1.1 preinstall

1.2 Oracle rpm installation package

It is recommended to download them at home, or check whether the company's VPN proxy gives a usable download speed.

  2. Installation.

Install the preinstall package first:

yum localinstall -y oracle-database-preinstall-19c-1.0-1.el7.x86_64.rpm

After it completes, install the database package:

yum localinstall -y oracle-database-ee-19c-1.0-1.x86_64.rpm

Wait for the installation results.

Different servers take different time:

The result of my installation here is:

Total size: 6.9 G
Installed size: 6.9 G
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : oracle-database-ee-19c-1.0-1.x86_64 1/1
[INFO] Executing post installation scripts…
[INFO] Oracle home installed successfully and ready to be configured.
To configure a sample Oracle Database you can execute the following service configuration script as root: /etc/init.d/oracledb_ORCLCDB-19c configure
Verifying : oracle-database-ee-19c-1.0-1.x86_64 1/1

oracle-database-ee-19c.x86_64 0:1.0-1

Note that the configuration after the installation is complete requires the root user.

  3. As with previous blogs, you need to modify the character set and other configuration. The configuration file to modify for Oracle 19c is:

vim /etc/init.d/oracledb_ORCLCDB-19c

The revised content (the part circled in the original screenshot), in text form:

export TEMPLATE_NAME=General_Purpose.dbc
export CREATE_AS_CDB=true
Copy a corresponding parameter file:

cd /etc/sysconfig/

cp oracledb_ORCLCDB-19c.conf oracledb_ORA19C-19c.conf

  4. Configure with the root user.

The root user executes the command:
/etc/init.d/oracledb_ORCLCDB-19c configure
Wait for the Oracle database to perform initialization operations.

  5. Processing after the execution completes.

Add environment variables:

vim /etc/profile.d/oracle19c.sh

Add content as:
export ORACLE_HOME=/opt/oracle/product/19c/dbhome_1
export PATH=$PATH:/opt/oracle/product/19c/dbhome_1/bin
Modify the password of the Oracle user:

passwd oracle
Log in with the oracle user for related processing:

sqlplus / as sysdba
View pdb information

show pdbs
5.1 Create a trigger to automatically open the PDB at startup. (Note: if the PDB is not opened at startup, many programs cannot connect to it; it is recommended to check the status with show pdbs and, if necessary, open it manually. Also, business data cannot be created in the CDB — creating an ordinary user there is rejected unless the user name starts with C##.)

CREATE OR REPLACE TRIGGER open_all_pdbs
AFTER STARTUP ON DATABASE
BEGIN
EXECUTE IMMEDIATE 'alter pluggable database all open';
END open_all_pdbs;
/
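As an alternative sketch (the PDB name ORCLPDB1 is an assumed example, not from this install log), a PDB can also be opened manually and its state saved so it re-opens on the next CDB restart:

```sql
-- ORCLPDB1 is an example name; substitute the name shown by: show pdbs
alter pluggable database ORCLPDB1 open;
-- remember the open state across CDB restarts (available since 12.1.0.2)
alter pluggable database ORCLPDB1 save state;
```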

CentOS 7.6 configures Nginx reverse proxy

Use three CentOS 7 virtual machines to build a simple Nginx reverse-proxy load-balancing cluster. The three virtual machines serve, respectively, as the nginx load balancer, the web01 server and the web02 server.

Install the nginx software (the following operations must be carried out on all three virtual machines).

Some CentOS 7.6 installations do not have the wget command, so install it yourself:

yum -y install wget

Install nginx software: (three servers must be installed)

$ wget http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

$ rpm -ivh epel-release-latest-7.noarch.rpm

$ yum install nginx (direct yum installation)

Installation is so simple and convenient, after the installation is complete, you can use systemctl to control the startup of nginx.

$ systemctl enable nginx (join boot)
$ systemctl start nginx (turn on nginx)
$ systemctl status nginx (view status)

After nginx is installed on all three servers, test that each runs normally and serves web pages. If there is an error, it is probably caused by the firewall; see the last few steps about the firewall.

Modify the nginx configuration file on the proxy server to implement load balancing. As the name implies, requests are distributed across multiple servers, balancing the load and reducing the pressure on any single server.

$ vi /etc/nginx/nginx.conf (modify configuration file, global configuration file)

For more information on configuration, see:

* Official English Documentation: http://nginx.org/en/docs/

* Official Russian Documentation: http://nginx.org/ru/docs/

user nginx;
worker_processes auto;   # default is auto; you can set it yourself, generally no more than the number of CPU cores
error_log /var/log/nginx/error.log;   # error log path
pid /run/nginx.pid;                   # pid file path

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    accept_mutex on;          # serialize accepting of new connections to prevent a thundering herd; default is on
    multi_accept on;          # whether a worker accepts multiple new connections at once; default is off
    worker_connections 1024;  # maximum number of connections per worker process
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile on;
    # tcp_nopush on;           (left commented out here)
    tcp_nodelay on;
    keepalive_timeout 65;      # connection timeout
    types_hash_max_size 2048;
    gzip on;                   # enable compression
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

Here we set up load balancing. nginx ships with multiple strategies: round-robin (polling), weights, ip-hash, response time and so on.

The default is to split the HTTP load by round-robin.

Weighting distributes requests according to weight; a server with a higher weight receives a larger share of the load.

ip-hash allocates requests by client IP, keeping the same IP on the same server.

Response-time balancing (available through third-party modules) preferentially distributes to the server that responds fastest.

These strategies can be combined:
upstream tomcat {        # tomcat is a custom load-balancing (upstream) rule name
    ip_hash;             # use the ip-hash method
    server <backend-address-1> weight=3 fail_timeout=20s;   # placeholder: fill in your backend server address
    server <backend-address-2> weight=4 fail_timeout=20s;   # placeholder
}

You can define multiple sets of rules.


    server {
        listen 80 default_server;   # default listening port 80
        listen localhost;           # listening address
        server_name _;
        root /usr/share/nginx/html;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {   # / matches all requests; different load rules and services can be set for different domain names
            proxy_pass http://tomcat;   # reverse proxy; use your own upstream rule name here
            proxy_redirect off;         # the settings below can be copied directly; omitting them may cause problems such as failed authentication
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_connect_timeout 90;   # these are just timeout settings, optional
            proxy_send_timeout 90;
            proxy_read_timeout 90;
            # location ~ .(gif|jpg|png)$ {   # for example, matching static files by regular expression
            #     root /home/root/images;
            # }
        }

        error_page 404 /404.html;
        location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
Settings for a TLS enabled server.


server {

listen 443 ssl http2 default_server;

listen [::]:443 ssl http2 default_server;

server_name _;

root /usr/share/nginx/html;


ssl_certificate "/etc/pki/nginx/server.crt";

ssl_certificate_key "/etc/pki/nginx/private/server.key";

ssl_session_cache shared:SSL:1m;

ssl_session_timeout 10m;

ssl_ciphers HIGH:!aNULL:!MD5;

ssl_prefer_server_ciphers on;


# Load configuration files for the default server block.

include /etc/nginx/default.d/*.conf;


location / {
}

error_page 404 /404.html;
location = /40x.html {
}

error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}

After the configuration is updated, reloading the configuration takes effect without restarting the service:

nginx -s reload
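A small addition not in the original post: nginx can validate the configuration file before reloading, so a typo does not take down a working proxy:

```
nginx -t && nginx -s reload   # reload only if the configuration test passes
```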

If you can't access it, it may be because the firewall is on and the port is not open:

start: systemctl start firewalld
stop: systemctl stop firewalld
view status: systemctl status firewalld
disable at boot: systemctl disable firewalld
enable at boot: systemctl enable firewalld

Open a port:

firewall-cmd --zone=public --add-port=80/tcp --permanent   (--permanent makes the rule permanent; without it the rule is lost after a restart)
firewall-cmd --reload
firewall-cmd --zone=public --query-port=80/tcp
firewall-cmd --zone=public --remove-port=80/tcp --permanent

selinux nginx

Restarting Nginx fails: bind() failed (13: Permission denied)

First declare: If you do not use SELinux you can skip this article.

The Nginx service is installed on CentOS 7. For a project, the default port 80 of Nginx needs to be changed to 8088. After modifying the configuration file and restarting the Nginx service, the log shows the following error:


9011#0: bind() to failed (13: Permission denied)

The permission was denied. I first thought the port was occupied by another program, but checking the active ports showed none using it. Online answers said root privileges were required, yet I was already running as root. Frustrating — but Google finally gave the answer: SELinux only allows 80, 81, 443, 488, 8008, 8009, 8443 and 9000 as HTTP ports by default.

To view the http port allowed by selinux, you must use the semanage command. First install the semanage command tool first.

Before installing the semanage tool, first install bash-completion, which provides tab completion for subcommands:

yum -y install bash-completion

Installing semanage directly through yum finds no such package:

yum install semanage

No package semanage available.

Then find out which package provides the semanage command:

yum provides semanage

Or use the following command:

yum whatprovides /usr/sbin/semanage

We find that we need to install the policycoreutils-python package to use the semanage command.

Install that package via yum (tab completion now works):

yum install policycoreutils-python.x86_64

Now that semanage is finally usable, let's first look at the ports http is allowed to use:

semanage port -l | grep http_port_t

http_port_t    tcp    80, 81, 443, 488, 8008, 8009, 8443, 9000

Then we will add the port 8088 to be used in the port list:

semanage port -a -t http_port_t -p tcp 8088

semanage port -l | grep http_port_t

http_port_t    tcp    8088, 80, 81, 443, 488, 8008, 8009, 8443, 9000

Ok, now nginx can use port 8088.
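One related note, drawn from general SELinux usage rather than the original post: if the port is already mapped to a different SELinux type, semanage port -a fails with an "already defined" error; in that case modify the existing mapping instead of adding one:

```
semanage port -m -t http_port_t -p tcp 8088
```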

The selinux log is in /var/log/audit/audit.log

But the information recorded in this file is not obvious enough and is hard to read. We can use the audit2why and audit2allow tools to view it; both are also provided by the policycoreutils-python package.

audit2why < /var/log/audit/audit.log

To collect SELinux logs there is also the setroubleshoot tool; the corresponding package is setroubleshoot-server.

Check if a host is alive — bash script

TCP-ping in bash (not tested):

if [ "X$HOSTNAME" == "X" ]; then
    echo "Specify a hostname"
    exit 1
fi
if [ "X$PORT" == "X" ]; then
    echo "Specify a port"
    exit 1
fi
exec 3<>/dev/tcp/$HOSTNAME/$PORT
if [ $? -eq 0 ]; then
    echo "Alive."
else
    echo "Dead."
fi
exec 3>&-
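A hedged variant of the same idea (the tcp_alive function name is my own, not from the original): wrapping the probe in timeout(1) bounds the attempt, so a filtered port cannot hang the shell, and running the redirection in a subshell means a failed exec cannot abort the calling script.

```shell
#!/bin/bash
# tcp_alive HOST PORT
# Prints "Alive." if a TCP connection to HOST:PORT succeeds within 2 seconds,
# "Dead." otherwise. Uses bash's /dev/tcp pseudo-device in a subshell.
tcp_alive() {
    local host=$1 port=$2
    if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
        echo "Alive."
    else
        echo "Dead."
    fi
}

tcp_alive 127.0.0.1 1   # port 1 is closed on virtually every host
```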

Tomcat log cutting script

#!/bin/bash
# Cut catalina.out every 2 hours, naming the copy after the time range it covers
time=$(date +%H)
end_time=`expr $time - 2`
BF_TIME=$(date +%Y%m%d)_$end_time:00-$time:00
cp /usr/local/tomcat8/logs/catalina.out /var/log/tomcat/oldlog/catalina.$BF_TIME.out
echo " " > /usr/local/tomcat8/logs/catalina.out


Create the archive directory and make the script executable:

mkdir -p /var/log/tomcat/oldlog/

chmod +x /root/tom_log.sh

Schedule it every 2 hours with crontab -e:
0 */2 * * * sh /root/tom_log.sh

ls /var/log/tomcat/oldlog/

catalina.20190102_15:00-17:00.out  catalina.20190102_17:00-19:00.out
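For comparison, the same rotation could be delegated to logrotate's copytruncate mode, which copies and then truncates catalina.out in place so Tomcat keeps its open file handle — a sketch, with /etc/logrotate.d/tomcat as an assumed file name:

```
# /etc/logrotate.d/tomcat (assumed path)
/usr/local/tomcat8/logs/catalina.out {
    copytruncate    # copy the log, then truncate the original in place
    daily
    rotate 7        # keep 7 old copies
    compress
    missingok
    notifempty
}
```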

docker tomcat + mysql

Build on a clean CentOS image
CentOS image preparation
Pull the CentOS image on the virtual machine: docker pull centos
Create a container running the CentOS image: docker run -it -d --name mycentos centos /bin/bash
Note: there is an error here [ WARNING: IPv4 forwarding is disabled. Networking will not work. ]

Change the virtual machine file: vim /usr/lib/sysctl.d/00-system.conf
Add the following content: net.ipv4.ip_forward=1
Restart the network: systemctl restart network
Note: There is another problem here, systemctl in docker can not be used normally. Find the following solutions on the official website

Link: https://forums.docker.com/t/systemctl-status-is-not-working-in-my-docker-container/9075/4

Run the image with the following statement:
docker run --privileged -v /sys/fs/cgroup:/sys/fs/cgroup -it -d --name usr_sbin_init_centos centos /usr/sbin/init

1. Must have --privileged

2. Must have -v /sys/fs/cgroup:/sys/fs/cgroup

3. Replace /bin/bash with /usr/sbin/init

Install the JAVA environment
Prepare the JDK tarball to upload to the virtual machine
Put the tarball into the docker container using docker cp
docker cp jdk-11.0.2_linux-x64_bin.tar.gz 41dbc0fbdf3c:/

The usage is the same as the Linux cp command, except that you need to add the container's identifier: its id or name.

Extract the tar package:
tar -xf jdk-11.0.2_linux-x64_bin.tar.gz -C /usr/local/java/jdk
Edit the profile file to export the Java environment variables:

export JAVA_HOME=/usr/local/java/jdk/jdk-11.0.2
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

Run source /etc/profile to make the environment variables take effect.
Test whether it succeeded:
java --version


java 11.0.2 2019-01-15 LTS
Java(TM) SE Runtime Environment 18.9 (build 11.0.2+9-LTS)
Java HotSpot(TM) 64-Bit Server VM 18.9 (build 11.0.2+9-LTS, mixed mode)
Install Tomcat
Prepare the tomcat tar package to upload to the virtual machine and cp to the docker container
Extract to
tar -xf apache-tomcat-8.5.38.tar.gz -C /usr/local/tomcat
Set Tomcat to start at boot by using the rc.local file.

Add the following code to rc.local:

export JAVA_HOME=/usr/local/java/jdk/jdk-11.0.2

Start tomcat by running startup.sh in the /usr/local/tomcat/apache-tomcat-8.5.38/bin/ directory, then verify:
curl localhost:8080

This returns the HTML source of the Tomcat welcome page.

Install mysql
Get the yum source of mysql:
wget http://dev.mysql.com/get/mysql57-community-release-el7-10.noarch.rpm
Install the above yum source:
yum -y install mysql57-community-release-el7-10.noarch.rpm
Install mysql with yum:
yum -y install mysql-community-server
Change the mysql configuration in /etc/my.cnf:
validate_password=OFF   # turn off password verification
# Note: if initialization reports "Initialize specified but the data directory has files in it", empty the data directory first.
# The pre-5.6 default TIMESTAMP behavior is deprecated; silence that warning in the configuration.


Get the mysql initial password

grep "password" /var/log/mysqld.log

[Note] A temporary password is generated for root@localhost: k:nT<dT,t4sF

Use this password to log in to mysql

Go to mysql and proceed


mysql -u root -p

change the password

ALTER USER 'root'@'localhost' IDENTIFIED BY '111111';

Change to make mysql remote access

update user set host = '%' where user = 'root';
Test, you can use physical machine, use navicat to access mysql in docker
Packaging the container
Pushing to the Docker hub

Commit the container as an image:

docker commit -a 'kane' -m 'test' container_id images_name:images_tag

docker push kane0725/tomcat

Local tarball export and import

Export the container to a local tar package:

docker export -o test.tar a404c6c174a2

Import the tar package as an image:

docker import test.tar test_images
Use a Dockerfile
Note: this only builds a tomcat image.

Preparation:
Create a dedicated folder and put the jdk and tomcat files in it.
Create a Dockerfile in this directory, based on the CentOS image.
Document content:
FROM centos
MAINTAINER tomcat mysql
ADD jdk-11.0.2 /usr/local/java
ENV JAVA_HOME /usr/local/java/
ADD apache-tomcat-8.5.38 /usr/local/tomcat8
EXPOSE 8080
RUN /usr/local/tomcat8/bin/startup.sh
Output results using docker build

[root@localhost dockerfile]

# docker build -t tomcats:centos .
Sending build context to Docker daemon 505.8 MB
Step 1/7 : FROM centos
—> 1e1148e4cc2c
Step 2/7 : MAINTAINER tomcat mysql
—> Using cache
—> 889454b28f55
Step 3/7 : ADD jdk-11.0.2 /usr/local/java
—> Using cache
—> 8cad86ae7723
Step 4/7 : ENV JAVA_HOME /usr/local/java/
—> Running in 15d89d66adb4
—> 767983acfaca
Removing intermediate container 15d89d66adb4
Step 5/7 : ADD apache-tomcat-8.5.38 /usr/local/tomcat8
—> 4219d7d611ec
Removing intermediate container 3c2438ecf955
Step 6/7 : EXPOSE 8080
—> Running in 56c4e0c3b326
—> 7c5bd484168a
Removing intermediate container 56c4e0c3b326
Step 7/7 : RUN /usr/local/tomcat8/bin/startup.sh
—> Running in 7a73d0317db3

Tomcat started.
—> b53a6d54bf64
Removing intermediate container 7a73d0317db3
Successfully built b53a6d54bf64
Docker build problem
Be sure to include the trailing "." at the end of the command; otherwise it reports an error:
“docker build” requires exactly 1 argument(s).
Run a container

docker run -it --name tomcats --restart always -p 1234:8080 tomcats /bin/bash



Using CATALINA_BASE: /usr/local/tomcat8
Using CATALINA_HOME: /usr/local/tomcat8
Using CATALINA_TMPDIR: /usr/local/tomcat8/temp
Using JRE_HOME: /usr/local/java/
Using CLASSPATH: /usr/local/tomcat8/bin/bootstrap.jar:/usr/local/tomcat8/bin/tomcat-juli.jar
Tomcat started.
Use docker compose
Install docker compose
Official: https://docs.docker.com/compose/install/

The way I chose is pip installation:


pip install docker-compose

docker-compose –version


docker-compose version 1.23.2, build 1110ad0
Write docker-compose.yml

This yml file builds a mysql container and a tomcat container:

version: "3"
services:
  mysql:
    container_name: mysql
    image: mysql:5.7
    restart: always
    volumes:
      - ./mysql/data/:/var/lib/mysql/
      - ./mysql/conf/:/etc/mysql/mysql.conf.d/
    ports:
      - "6033:3306"
  tomcat:
    container_name: tomcat
    restart: always
    image: tomcat
    ports:
      - 8080:8080
      - 8009:8009
    links:
      - mysql:m1   # connect to the database container

A volume must be a path; you cannot specify a single file.

Specifying an external conf directory for Tomcat did not work for me — I do not know why; the prompt is:

tomcat | Feb 20, 2019 2:23:29 AM org.apache.catalina.startup.Catalina load
tomcat | WARNING: Unable to load server configuration from [/usr/local/tomcat/conf/server.xml]
tomcat | Feb 20, 2019 2:23:29 AM org.apache.catalina.startup.Catalina start
tomcat | SEVERE: Cannot start server. Server instance is not configured.
tomcat exited with code 1
Run command
Note: Must be executed under the directory of the yml file

docker-compose up -d

---------- View docker containers ----------

[root@localhost docker-compose]

# docker ps -a
1a8a0165a3a8 tomcat “catalina.sh run” 7 seconds ago Up 6 seconds>8009/tcp,>8080/tcp tomcat
ddf081e87d67 mysql:5.7 “docker-entrypoint…” 7 seconds ago Up 7 seconds 33060/tcp, 0

How to recover “rpmdb open failed” error in RHEL or Centos Linux

You are updating the system through the yum command and suddenly the power goes down, or the yum process is accidentally killed. After this, when you try to update the system again with yum, you get the below error message related to rpmdb:

error: rpmdb: BDB0113 Thread/process 2196/139984719730496 failed: BDB1507 Thread died in Berkeley DB library
error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
error: cannot open Packages index using db5 – (-30973)
error: cannot open Packages database in /var/lib/rpm

Error: rpmdb open failed
You are also not able to perform rpm queries and get the same error messages on screen:


# rpm -qa
error: rpmdb: BDB0113 Thread/process 7924/139979327153984 failed: BDB1507 Thread died in Berkeley DB library
error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
error: cannot open Packages index using db5 – (-30973)
error: cannot open Packages database in /var/lib/rpm
error: rpmdb: BDB0113 Thread/process 7924/139979327153984 failed: BDB1507 Thread died in Berkeley DB library
error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
error: cannot open Packages database in /var/lib/rpm

[root@testvm ~]

# rpm -Va
error: rpmdb: BDB0113 Thread/process 7924/139979327153984 failed: BDB1507 Thread died in Berkeley DB library
error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
error: cannot open Packages index using db5 – (-30973)
error: cannot open Packages database in /var/lib/rpm
error: rpmdb: BDB0113 Thread/process 7924/139979327153984 failed: BDB1507 Thread died in Berkeley DB library
error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
error: cannot open Packages database in /var/lib/rpm
error: rpmdb: BDB0113 Thread/process 7924/139979327153984 failed: BDB1507 Thread died in Berkeley DB library
error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
error: cannot open Packages database in /var/lib/rpm


The reason for this error is that the rpmdb has become corrupted. No worries — it is easy to recover the rpmdb by following the steps below:

  1. Create a backup directory into which to dump the rpmdb backup:

mkdir /tmp/rpm_db_bak

  2. Move the rpmdb files into the backup directory in /tmp:

mv /var/lib/rpm/__db* /tmp/rpm_db_bak

  3. Clean the yum cache with the below command:

yum clean all
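A sketch of an extra step, not in the original list: if rpm still complains after the stale __db files are removed, the database indexes can be rebuilt explicitly:

```
# rebuild the rpm database indexes from the Packages file
rpm --rebuilddb
```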

  4. Now run the yum update command again. It will rebuild the rpmdb and should fetch and apply the updates from your repository or RHSM (or the CentOS CDN in the case of CentOS Linux).

[root@testvm ~]

# yum update
Loaded plugins: fastestmirror
base | 3.6 kB 00:00
epel/x86_64/metalink | 5.0 kB 00:00
epel | 4.3 kB 00:00
extras | 3.4 kB 00:00
updates | 3.4 kB 00:00
(1/7): base/7/x86_64/group_gz | 155 kB 00:02
(2/7): epel/x86_64/group_gz | 170 kB 00:04
(3/7): extras/7/x86_64/primary_db | 191 kB 00:12
(4/7): epel/x86_64/updateinfo | 809 kB 00:21
(5/7): base/7/x86_64/primary_db | 5.6 MB 00:26
(6/7): epel/x86_64/primary_db | 4.8 MB 00:46
(7/7): updates/7/x86_64/primary_db | 7.8 MB 00:50
Determining fastest mirrors

  • base: mirror.ehost.vn
  • epel: repo.ugm.ac.id
  • extras: mirror.ehost.vn
  • updates: mirror.dhakacom.com
    Resolving Dependencies
    –> Running transaction check
    —> Package NetworkManager.x86_64 1:1.4.0-19.el7_3 will be updated
    —> Package NetworkManager.x86_64 1:1.4.0-20.el7_3 will be an update
    —> Package NetworkManager-adsl.x86_64 1:1.4.0-19.el7_3 will be updated
    –> Finished Dependency Resolution

Dependencies Resolved


Package Arch Version Repository Size

kernel x86_64 3.10.0-514.26.2.el7 updates 37 M
python2-libcomps x86_64 0.1.8-3.el7 epel 46 k
replacing python-libcomps.x86_64 0.1.6-13.el7
NetworkManager x86_64 1:1.4.0-20.el7_3 updates 2.5 M
NetworkManager-adsl x86_64 1:1.4.0-20.el7_3 updates 146 k
NetworkManager-bluetooth x86_64 1:1.4.0-20.el7_3 updates 165 k
NetworkManager-glib x86_64 1:1.4.0-20.el7_3 updates 385 k
NetworkManager-libnm x86_64 1:1.4.0-20.el7_3 updates 443 k
NetworkManager-team x86_64 1:1.4.0-20.el7_3 updates 147 k
python-perf x86_64 3.10.0-514.26.2.el7 updates 4.0 M
sudo x86_64 1.8.6p7-23.el7_3 updates 735 k
systemd x86_64 219-30.el7_3.9 updates 5.2 M
systemd-libs x86_64 219-30.el7_3.9 updates 369 k
systemd-sysv x86_64 219-30.el7_3.9 updates 64 k
tuned noarch 2.7.1-3.el7_3.2 updates 210 k
xfsprogs x86_64 4.5.0-10.el7_3 updates 895 k
kernel x86_64 3.10.0-123.el7 @anaconda 127 M

Transaction Summary

Install 2 Packages
Upgrade 46 Packages
Remove 1 Package

Total download size: 84 M
Is this ok [y/d/N]: y

Simple way to configure Ngnix High Availability Web Server with Pacemaker and Corosync on CentOS7

Pacemaker is an open source cluster manager which provides high availability of resources or services on CentOS 7 or RHEL 7 Linux. It is a scalable and advanced HA cluster manager distributed by ClusterLabs.

Corosync is the core of the Pacemaker cluster manager, as it is responsible for the heartbeat communication between cluster nodes which makes it capable of providing high availability for applications. Corosync is derived from the open source project OpenAIS under the new BSD License.

Pcsd is the daemon behind the Pacemaker command line interface (CLI) and GUI for managing the Pacemaker cluster. Its command, pcs, is used for creating, configuring and adding new nodes to the cluster.

In this tutorial I will use pcs on the CLI to configure an Active/Passive Pacemaker cluster providing high availability of the Nginx web service on CentOS 7. This article gives a basic idea of how to configure a Pacemaker cluster on CentOS 7 (the same applies to RHEL 7, as CentOS mimics RHEL). For this basic cluster configuration I have disabled STONITH and ignored the quorum policy, but for a production environment I suggest using the STONITH feature of Pacemaker.

Here is a short definition of STONITH: STONITH, or Shoot The Other Node In The Head, is the fencing implementation in Pacemaker. Fencing is the isolation of a failed node so that it does not cause disruption to the computer cluster.

For demonstration I have built two VMs (Virtual Machines) on KVM based on my Ubuntu 16.04 base machine and those VMs have private IP addresses.

Note: I am referring my VMs as Cluster node for better presenting them in rest of the topics.

Prerequisites for configuring a Pacemaker cluster

Minimum two CentOS 7 servers
A floating IP address
Root privileges

Below are the steps I will follow to install and configure the two-node Pacemaker cluster:

  1. Mapping of Host File
  2. Installation of Epel Repository and Nginx
  3. Installation and Configuration of Pacemaker, Corosync, and Pcsd
  4. Creation and Configuration of Cluster
  5. Disabling of STONITH and Ignoring Quorum Policy
  6. Adding of Floating-IP and Resources
  7. Testing the Cluster service

Steps for Installation and configuration of pacemaker cluster

  1. Mapping of host files:

In my test lab I am not using DNS to resolve the hostnames of the two Pacemaker cluster nodes, so I have configured the /etc/hosts file on both of them. Even if you do have DNS in your environment, I still recommend adding the entries to /etc/hosts so that heartbeat communication between the cluster nodes does not depend on name resolution.

Edit the /etc/hosts file with your preferred editor on both cluster nodes; below is the /etc/hosts file I configured on both nodes.

[root@webserver01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
<node1-IP>  webserver01
<node2-IP>  webserver02

The file is identical on webserver02. Replace <node1-IP> and <node2-IP> with the private IP addresses of your two nodes.

After configuring the /etc/hosts file, test connectivity between the two cluster nodes with the ping command:

ping -c 3 webserver01
ping -c 3 webserver02
If we get replies like the ones below, our web servers are communicating with each other.

[root@webserver01 ~]# ping -c 3 webserver02
PING webserver02 (<webserver02-IP>) 56(84) bytes of data.
64 bytes from webserver02 (<webserver02-IP>): icmp_seq=1 ttl=64 time=1.10 ms
64 bytes from webserver02 (<webserver02-IP>): icmp_seq=2 ttl=64 time=0.727 ms
64 bytes from webserver02 (<webserver02-IP>): icmp_seq=3 ttl=64 time=0.698 ms

--- webserver02 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 0.698/0.843/1.106/0.188 ms

[root@webserver02 ~]# ping -c 3 webserver01
PING webserver01 (<webserver01-IP>) 56(84) bytes of data.
64 bytes from webserver01 (<webserver01-IP>): icmp_seq=1 ttl=64 time=0.197 ms
64 bytes from webserver01 (<webserver01-IP>): icmp_seq=2 ttl=64 time=0.123 ms
64 bytes from webserver01 (<webserver01-IP>): icmp_seq=3 ttl=64 time=0.114 ms

--- webserver01 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.114/0.144/0.197/0.039 ms



  2. Installation of the EPEL Repository and Nginx

In this step we will install the EPEL (Extra Packages for Enterprise Linux) repository and then Nginx. The EPEL repository must be installed first, since it provides the Nginx package.

yum -y install epel-release
Now install Nginx:

yum -y install nginx
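One caveat worth noting (not in the original steps, but standard practice for cluster-managed services): since Pacemaker will start and stop Nginx itself later in this tutorial, Nginx should not also be enabled in systemd, or systemd and the cluster can fight over the service. A minimal sketch, to run on both nodes:

```shell
# Pacemaker's resource agent will manage Nginx, so make sure
# systemd does not start it on its own at boot:
systemctl stop nginx
systemctl disable nginx
```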

  3. Install and Configure Pacemaker, Corosync, and Pcsd

Now we will install the pacemaker, corosync, and pcs packages with the yum command. These packages do not require a separate repository; they come from the default CentOS repositories.

yum -y install corosync pacemaker pcs
Once the cluster packages are installed, enable the cluster services at startup with the systemctl commands below:

systemctl enable pcsd
systemctl enable corosync
systemctl enable pacemaker
Now start the pcsd service on both cluster nodes:

systemctl start pcsd.service
The pcsd daemon works with the pcs command-line interface to keep the corosync configuration synchronized across all nodes in the cluster.

The hacluster user is created automatically, with a disabled password, during package installation. This account is used as the login credential for syncing the corosync configuration and for starting and stopping cluster services on the other nodes.
In the next step we will set a new password for the hacluster user; use the same password on the other cluster node as well.

passwd hacluster
Changing password for user hacluster.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
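If you are scripting the setup, this step can also be done non-interactively. A sketch, where the password is only an example and the -u/-p flags let the later pcs authorization run without a prompt:

```shell
# Run on both cluster nodes; replace the example password with your own.
echo 'StrongClusterPass' | passwd --stdin hacluster

# The authorization step in the next section can then be scripted too:
# pcs cluster auth webserver01 webserver02 -u hacluster -p 'StrongClusterPass'
```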

  4. Creation and Configuration of the Cluster

Note: Steps 4 to 7 only need to be performed on webserver01.
This step covers creating the new two-node CentOS Linux cluster that will host the Nginx resource and the floating IP address.

First of all, to create the cluster we need to authorize both web servers with the pcs command, using the hacluster user and its password.

[root@webserver01 ~]# pcs cluster auth webserver01 webserver02
Username: hacluster
webserver01: Authorized
webserver02: Authorized

Note: If you get the error below after running the auth command above:

[root@webserver01 ~]# pcs cluster auth webserver01 webserver02
Username: hacluster
webserver01: Authorized
Error: Unable to communicate with webserver02
then you need to define firewalld rules on both nodes to enable communication between the cluster nodes.

Below is an example of adding the rules for the cluster and for Nginx:

[root@webserver01 ~]# firewall-cmd --permanent --add-service=high-availability
[root@webserver01 ~]# firewall-cmd --permanent --add-service=http
[root@webserver01 ~]# firewall-cmd --permanent --add-service=https
[root@webserver01 ~]# firewall-cmd --reload
[root@webserver01 ~]# firewall-cmd --list-all
public (active)
target: default
icmp-block-inversion: no
interfaces: eth0
services: ssh dhcpv6-client high-availability http https
masquerade: no
rich rules:

Now we will define the cluster name and the cluster node members:

pcs cluster setup --name web_cluster webserver01 webserver02
Next, start all the cluster services and enable them at system startup.

[root@webserver01 ~]# pcs cluster start --all
webserver02: Starting Cluster...
webserver01: Starting Cluster...

[root@webserver01 ~]# pcs cluster enable --all
webserver01: Cluster Enabled
webserver02: Cluster Enabled

Run the command below to check the cluster status:

pcs status cluster

[root@webserver01 ~]# pcs status cluster
Cluster Status:
Stack: corosync
Current DC: webserver02 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Tue Sep 4 02:38:20 2018
Last change: Tue Sep 4 02:33:06 2018 by hacluster via crmd on webserver02
2 nodes configured
0 resources configured

PCSD Status:
webserver01: Online
webserver02: Online


  5. Disabling of STONITH and Ignoring Quorum Policy

In this tutorial we will disable STONITH and ignore the quorum policy, because we are not using a fencing device here. If you implement a cluster in a production environment, however, I suggest using fencing and a proper quorum policy.

Disable STONITH:

pcs property set stonith-enabled=false
Ignore the Quorum Policy:

pcs property set no-quorum-policy=ignore
Now check whether STONITH and the quorum policy are disabled with the command below:

pcs property list

[root@webserver01 ~]# pcs property list
Cluster Properties:
cluster-infrastructure: corosync
cluster-name: web_cluster
dc-version: 1.1.18-11.el7_5.3-2b07d5c5a9
have-watchdog: false
no-quorum-policy: ignore
stonith-enabled: false


  6. Adding of Floating IP and Resources

A floating IP address is a cluster virtual IP address that moves automatically from one cluster node to another when the active node hosting the cluster resources fails or is taken down.

In this step we will add Floating IP and Nginx resources:

Adding the floating IP (replace <floating-IP> with your own floating IP address):

pcs resource create virtual_ip ocf:heartbeat:IPaddr2 ip=<floating-IP> cidr_netmask=32 op monitor interval=30s
Adding the Nginx resource:

pcs resource create webserver ocf:heartbeat:nginx configfile=/etc/nginx/nginx.conf op monitor timeout="5s" interval="5s"
Now check the newly added resources with the command below:

pcs status resources

[root@webserver01 ~]# pcs status resources
virtual_ip (ocf::heartbeat:IPaddr2): Started webserver01
webserver (ocf::heartbeat:nginx): Started webserver01
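Note that nothing above forces the two resources onto the same node; Pacemaker happened to start both on webserver01. For an Active/Passive setup it is safer to add colocation and ordering constraints so the floating IP and Nginx always move together. A sketch, using the resource names created above:

```shell
# Keep Nginx on whichever node currently holds the floating IP...
pcs constraint colocation add webserver with virtual_ip INFINITY

# ...and only start Nginx after the floating IP is up.
pcs constraint order virtual_ip then webserver
```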


  7. Testing the Cluster Service

Before testing Nginx web service failover in the event of an active node failure, we will check the status of the cluster services.

To check the running cluster service status, here is the command with example output:

[root@webserver01 ~]# pcs status
Cluster name: web_cluster
Stack: corosync
Current DC: webserver01 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Tue Sep 4 03:55:47 2018
Last change: Tue Sep 4 03:15:29 2018 by root via cibadmin on webserver01

2 nodes configured
2 resources configured

Online: [ webserver01 webserver02 ]

Full list of resources:

virtual_ip (ocf::heartbeat:IPaddr2): Started webserver01
webserver (ocf::heartbeat:nginx): Started webserver01

Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled


To test Nginx web service failover:

First we will create a test web page on each cluster node with the commands below.

On webserver01:

echo 'webserver01 - Web-Cluster-Testing' > /usr/share/nginx/html/index.html

On webserver02:

echo 'webserver02 - Web-Cluster-Testing' > /usr/share/nginx/html/index.html
Now open the web page using the floating IP address that we configured as a cluster resource in the previous step; you will see that the page is currently served from webserver01.

Now stop the cluster service on webserver01 and then open the web page again at the same floating IP address. Below is the command for stopping the Pacemaker cluster on webserver01:

pcs cluster stop webserver01
After stopping the Pacemaker cluster on webserver01, the web page should now be served from webserver02.
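One convenient way to watch the failover as it happens is to poll the floating IP from a third machine while you stop and start the cluster on webserver01. A small sketch; the floating IP below is a placeholder, so substitute the address you configured for the virtual_ip resource:

```shell
# Placeholder address; use the floating IP configured for virtual_ip.
FLOATING_IP=192.168.122.100

# Print which node answers every two seconds; the output should switch
# from webserver01 to webserver02 after "pcs cluster stop webserver01".
while true; do
  curl -s "http://${FLOATING_IP}/"
  sleep 2
done
```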