


Restart Nginx and bind() to 0.0.0.0:8088 failed (13: Permission denied)

Note up front: if you do not use SELinux, you can skip this article.

The Nginx service is installed on CentOS 7. The project requires changing Nginx's default port from 80 to 8088. After modifying the configuration file and restarting the Nginx service, the log shows the following error:

[emerg]

9011#0: bind() to 0.0.0.0:8088 failed (13: Permission denied)

Permission was denied, so I first assumed the port was occupied by another program, but checking the active ports showed nothing using 8088. Online answers said root privileges were required, but I was already running as root. This was puzzling, but Google finally gave the answer: by default SELinux only allows 80, 81, 443, 488, 8008, 8009, 8443 and 9000 as HTTP ports.

To view the HTTP ports allowed by SELinux, you need the semanage command, so install it first.

Before installing the semanage tool, first install bash-completion, which provides tab completion for subcommands:

yum -y install bash-completion

Installing semanage directly via yum fails because there is no package with that name:

yum install semanage

No package semanage available.

Then find out which package provides the semanage command:

yum provides semanage

Or use the following command:

yum whatprovides /usr/sbin/semanage

We find that the policycoreutils-python package must be installed to get the semanage command.

Now that this package is installed via yum, tab completion also works:

yum install policycoreutils-python.x86_64

Now semanage is finally available; first look at the ports HTTP is allowed to use:

semanage port -l | grep http_port_t

http_port_t tcp 80, 81, 443, 488, 8008, 8009, 8443, 9000

Then add port 8088 to the allowed port list:

semanage port -a -t http_port_t -p tcp 8088

semanage port -l | grep http_port_t

http_port_t tcp 8088, 80, 81, 443, 488, 8008, 8009, 8443, 9000

OK, now nginx can use port 8088.

The SELinux log is in /var/log/audit/audit.log

The information recorded in this file is terse and hard to read, so use the audit2why and audit2allow tools to interpret it; both are also provided by the policycoreutils-python package.

audit2why < /var/log/audit/audit.log

For collecting SELinux logs there is another tool, setroubleshoot; the corresponding package is setroubleshoot-server.
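Beyond reading the log, audit2allow can also compile the recorded denials into a loadable policy module. A sketch (the module name nginx_local is my own; the commands need root and a populated audit log, so they are written to a script for review first):

```shell
# Save the commands to a script; run it as root on the affected server.
cat > /tmp/build_nginx_module.sh <<'EOF'
#!/bin/bash
# Collect nginx denials and compile them into a policy module nginx_local.pp
grep nginx /var/log/audit/audit.log | audit2allow -M nginx_local
semodule -i nginx_local.pp   # load the generated module
EOF
chmod +x /tmp/build_nginx_module.sh
```

Only do this after reviewing the generated .te file; blindly allowing every denial defeats the purpose of SELinux.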

Check if a host is alive: bash script

#!/bin/bash
#
# TCP ping in bash (not tested)
HOSTNAME="$1"
PORT="$2"
if [ "X$HOSTNAME" == "X" ]; then
    echo "Specify a hostname"
    exit 1
fi
if [ "X$PORT" == "X" ]; then
    PORT="22"
fi
exec 3<>"/dev/tcp/$HOSTNAME/$PORT"
if [ $? -eq 0 ]; then
    echo "Alive."
else
    echo "Dead."
fi
exec 3>&-
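One caveat: a failed exec redirection can abort the whole script in some shell modes, and a firewalled port that silently drops packets will hang the connect. A variant (a sketch; the function name tcp_ping is my own) runs the connect in a child bash under timeout to avoid both problems:

```shell
#!/bin/bash
# tcp_ping HOST [PORT] - prints "Alive." if a TCP connect succeeds, else "Dead."
# The connect runs in a child shell, so a failed redirection cannot kill the
# caller, and `timeout` gives up after 3 seconds on filtered ports.
tcp_ping() {
    local host="$1" port="${2:-22}"
    if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
        echo "Alive."
    else
        echo "Dead."
    fi
}
```

Usage: tcp_ping webserver01 80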

Tomcat log rotation script

#!/bin/bash
time=$(date +%H)
end_time=$(expr $time - 2)
a=$end_time
BF_TIME=$(date +%Y%m%d)_$a:00-$time:00
cp /usr/local/tomcat8/logs/catalina.out /var/log/tomcat/oldlog/catalina.$BF_TIME.out
echo " " > /usr/local/tomcat8/logs/catalina.out

The copy archives the old catalina.out, and the echo empties the live file in place so Tomcat can keep writing to it.

Create the archive directory, make the script (saved as /root/tom_log.sh) executable, and schedule it every two hours:

mkdir -p /var/log/tomcat/oldlog/

chmod +x /root/tom_log.sh

crontab -e
0 */2 * * * sh /root/tom_log.sh

ls /var/log/tomcat/oldlog/

catalina.20190102_15:00-17:00.out  catalina.20190102_17:00-19:00.out
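One weakness of the script above: expr $time - 2 goes negative between 00:00 and 01:59. A midnight-safe sketch (the helper name log_window is my own) lets GNU date do the arithmetic instead:

```shell
#!/bin/bash
# Build the "YYYYMMDD_HH:00-HH:00" label for the last two-hour window,
# letting `date -d` handle the wrap across midnight.
log_window() {
    local end_h start_h
    end_h=$(date +%H)
    start_h=$(date -d '2 hours ago' +%H)
    echo "$(date +%Y%m%d)_${start_h}:00-${end_h}:00"
}
```

The cp line then becomes: cp /usr/local/tomcat8/logs/catalina.out /var/log/tomcat/oldlog/catalina.$(log_window).out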

docker tomcat + mysql

Build on a clean CentOS image
CentOS image preparation
Pull the CentOS image on the virtual machine: docker pull centos
Create a container from the CentOS image: docker run -it -d --name mycentos centos /bin/bash
Note: There is an error here [ WARNING: IPv4 forwarding is disabled. Networking will not work. ]

Change the virtual machine file: vim /usr/lib/sysctl.d/00-system.conf
Add the following content
net.ipv4.ip_forward=1
Restart the network: systemctl restart network
Note: another problem appears here: systemctl cannot be used normally inside the docker container. The following solution is from the official forums

Link: https://forums.docker.com/t/systemctl-status-is-not-working-in-my-docker-container/9075/4

Run the image with the following command
docker run --privileged -v /sys/fs/cgroup:/sys/fs/cgroup -it -d --name usr_sbin_init_centos centos /usr/sbin/init

1. Must include --privileged

2. Must include -v /sys/fs/cgroup:/sys/fs/cgroup

3. Replace /bin/bash with /usr/sbin/init

Install the JAVA environment
Prepare the JDK tarball to upload to the virtual machine
Put the tarball into the docker container using docker cp
docker cp jdk-11.0.2_linux-x64_bin.tar.gz 41dbc0fbdf3c:/

docker cp follows the same usage as the Linux cp command, except that you add the container's identifier (id or name)

Extract the tar package (create the target directory first)
mkdir -p /usr/local/java/jdk
tar -xf jdk-11.0.2_linux-x64_bin.tar.gz -C /usr/local/java/jdk
Edit the profile file to export the Java environment variables

/etc/profile

export JAVA_HOME=/usr/local/java/jdk/jdk-11.0.2
export PATH=$JAVA_HOME/bin:$PATH
(The JDK 8 style CLASSPATH entries dt.jar and tools.jar no longer exist in JDK 11, so no CLASSPATH setting is needed.)
Run source /etc/profile to make environment variables take effect
Test whether it succeeded
java --version

result

java 11.0.2 2019-01-15 LTS
Java(TM) SE Runtime Environment 18.9 (build 11.0.2+9-LTS)
Java HotSpot(TM) 64-Bit Server VM 18.9 (build 11.0.2+9-LTS, mixed mode)
Install Tomcat
Prepare the tomcat tarball, upload it to the virtual machine, and cp it into the docker container
Extract it
mkdir -p /usr/local/tomcat
tar -xf apache-tomcat-8.5.38.tar.gz -C /usr/local/tomcat
Configure start-at-boot using the rc.local file

Add the following to /etc/rc.d/rc.local (and make sure the file is executable: chmod +x /etc/rc.d/rc.local)

export JAVA_HOME=/usr/local/java/jdk/jdk-11.0.2
/usr/local/tomcat/apache-tomcat-8.5.38/bin/startup.sh
Start tomcat

Run from the /usr/local/tomcat/apache-tomcat-8.5.38/bin/ directory
./startup.sh
Verify
curl localhost:8080

This should return the HTML source of the Tomcat welcome page

Install mysql
Get the mysql yum repository
wget -c http://dev.mysql.com/get/mysql57-community-release-el7-10.noarch.rpm
Install the repository package
yum -y install mysql57-community-release-el7-10.noarch.rpm
Install mysql via yum
yum -y install mysql-community-server
Change the mysql configuration in /etc/my.cnf:
validate_password=OFF # turn off password validation
character-set-server=utf8
collation-server=utf8_general_ci
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
If initialization complains "Initialize specified but the data directory has files in it", empty the data directory first. The pre-5.6 TIMESTAMP default behavior is deprecated; setting explicit_defaults_for_timestamp=true silences that warning.

[client]

default-character-set=utf8
Get the mysql initial password

grep "password" /var/log/mysqld.log

[Note] A temporary password is generated for root@localhost: k:nT<dT,t4sF

Use this temporary password to log in to mysql:

mysql -u root -p

Change the password

ALTER USER 'root'@'localhost' IDENTIFIED BY '111111';

Enable remote access for mysql (the update edits the grant tables directly, so flush privileges afterwards)

use mysql;
update user set host = '%' where user = 'root';
flush privileges;
Test: from a physical machine, use navicat to access mysql inside docker
Packaging the container
Pushing to the docker hub

Commit the container as an image

docker commit -a 'kane' -m 'test' container_id images_name:images_tag

Push to the docker hub
docker push kane0725/tomcat
Distributing a local tarball

Export the container to a tar package

docker export -o test.tar a404c6c174a2

Import the tar package as an image (note that export/import flattens the container and drops image metadata such as CMD and ENV, unlike docker save/load)

docker import test.tar test_images
Use a Dockerfile
Note: this only builds a tomcat image

Preparation
Create a dedicated folder and put the extracted jdk and tomcat directories in it
Create a Dockerfile in this directory
CentOS is the base image
Dockerfile content (the final RUN line matches the seven build steps shown below)
FROM centos
MAINTAINER tomcat mysql
ADD jdk-11.0.2 /usr/local/java
ENV JAVA_HOME /usr/local/java/
ADD apache-tomcat-8.5.38 /usr/local/tomcat8
EXPOSE 8080
RUN /usr/local/tomcat8/bin/startup.sh
Build the image with docker build; the output:

[root@localhost dockerfile]

# docker build -t tomcats:centos .
Sending build context to Docker daemon 505.8 MB
Step 1/7 : FROM centos
 ---> 1e1148e4cc2c
Step 2/7 : MAINTAINER tomcat mysql
 ---> Using cache
 ---> 889454b28f55
Step 3/7 : ADD jdk-11.0.2 /usr/local/java
 ---> Using cache
 ---> 8cad86ae7723
Step 4/7 : ENV JAVA_HOME /usr/local/java/
 ---> Running in 15d89d66adb4
 ---> 767983acfaca
Removing intermediate container 15d89d66adb4
Step 5/7 : ADD apache-tomcat-8.5.38 /usr/local/tomcat8
 ---> 4219d7d611ec
Removing intermediate container 3c2438ecf955
Step 6/7 : EXPOSE 8080
 ---> Running in 56c4e0c3b326
 ---> 7c5bd484168a
Removing intermediate container 56c4e0c3b326
Step 7/7 : RUN /usr/local/tomcat8/bin/startup.sh
 ---> Running in 7a73d0317db3

Tomcat started.
 ---> b53a6d54bf64
Removing intermediate container 7a73d0317db3
Successfully built b53a6d54bf64
Docker build problem
Be sure to include the trailing dot (the build context) at the end of the command. Otherwise it will report an error:
"docker build" requires exactly 1 argument(s).
Run a container

docker run -it --name tomcats --restart always -p 1234:8080 tomcats /bin/bash

/usr/local/tomcat8/bin/startup.sh

result

Using CATALINA_BASE: /usr/local/tomcat8
Using CATALINA_HOME: /usr/local/tomcat8
Using CATALINA_TMPDIR: /usr/local/tomcat8/temp
Using JRE_HOME: /usr/local/java/
Using CLASSPATH: /usr/local/tomcat8/bin/bootstrap.jar:/usr/local/tomcat8/bin/tomcat-juli.jar
Tomcat started.
Use docker compose
Install docker compose
Official: https://docs.docker.com/compose/install/

I chose to install it via pip:

pip install docker-compose

docker-compose --version

-----------------------

docker-compose version 1.23.2, build 1110ad0
Write docker-compose.yml

This yml file builds a mysql and a tomcat container:

version: "3"
services:
  mysql:
    container_name: mysql
    image: mysql:5.7
    restart: always
    volumes:
      - ./mysql/data/:/var/lib/mysql/
      - ./mysql/conf/:/etc/mysql/mysql.conf.d/
    ports:
      - "6033:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=
  tomcat:
    container_name: tomcat
    restart: always
    image: tomcat
    ports:
      - 8080:8080
      - 8009:8009
    links:
      - mysql:m1 # connect to the database container
Note:

A volume must be a path; you cannot mount a single file this way.

Mounting an external conf directory for tomcat did not work for me (I do not know why); it fails with:

tomcat | Feb 20, 2019 2:23:29 AM org.apache.catalina.startup.Catalina load
tomcat | WARNING: Unable to load server configuration from [/usr/local/tomcat/conf/server.xml]
tomcat | Feb 20, 2019 2:23:29 AM org.apache.catalina.startup.Catalina start
tomcat | SEVERE: Cannot start server. Server instance is not configured.
tomcat exited with code 1
Run the command
Note: it must be executed in the directory containing the yml file

docker-compose up -d

———-View docker container——-

[root@localhost docker-compose]

# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1a8a0165a3a8 tomcat “catalina.sh run” 7 seconds ago Up 6 seconds 0.0.0.0:8009->8009/tcp, 0.0.0.0:8080->8080/tcp tomcat
ddf081e87d67 mysql:5.7 “docker-entrypoint…” 7 seconds ago Up 7 seconds 33060/tcp, 0

How to recover from the "rpmdb open failed" error in RHEL or CentOS Linux

You are updating the system with yum and suddenly the power goes down, or the yum process is accidentally killed. When you then try to update the system again, yum shows the following rpmdb error:

error: rpmdb: BDB0113 Thread/process 2196/139984719730496 failed: BDB1507 Thread died in Berkeley DB library
error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
error: cannot open Packages index using db5 - (-30973)
error: cannot open Packages database in /var/lib/rpm
CRITICAL:yum.main:

Error: rpmdb open failed
You are also unable to run rpm queries; the same error appears on screen:

[root@testvm~]

# rpm -qa
error: rpmdb: BDB0113 Thread/process 7924/139979327153984 failed: BDB1507 Thread died in Berkeley DB library
error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
error: cannot open Packages index using db5 - (-30973)
error: cannot open Packages database in /var/lib/rpm
error: rpmdb: BDB0113 Thread/process 7924/139979327153984 failed: BDB1507 Thread died in Berkeley DB library
error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
error: cannot open Packages database in /var/lib/rpm

[root@testvm ~]

# rpm -Va
error: rpmdb: BDB0113 Thread/process 7924/139979327153984 failed: BDB1507 Thread died in Berkeley DB library
error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
error: cannot open Packages index using db5 - (-30973)
error: cannot open Packages database in /var/lib/rpm
error: rpmdb: BDB0113 Thread/process 7924/139979327153984 failed: BDB1507 Thread died in Berkeley DB library
error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
error: cannot open Packages database in /var/lib/rpm
error: rpmdb: BDB0113 Thread/process 7924/139979327153984 failed: BDB1507 Thread died in Berkeley DB library
error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
error: cannot open Packages database in /var/lib/rpm

[root@testvm~]

#
The reason for this error is that the rpmdb is corrupted. No worries: it is easy to recover by following the steps below:

  1. Create a backup directory to hold the rpmdb backup.

mkdir /tmp/rpm_db_bak

  2. Move the rpmdb files into the backup directory created in /tmp:

mv /var/lib/rpm/__db* /tmp/rpm_db_bak

  3. Clean the yum cache with the command below:

yum clean all
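The three recovery steps above can be collected into one small script; writing it to a file first lets you review it before running it as root. The explicit rpm --rebuilddb is my own addition: the subsequent yum run would also rebuild the indexes, but doing it up front confirms the database is healthy again.

```shell
# Write the recovery steps to a script (run it as root on the affected host).
cat > /tmp/rpmdb_recover.sh <<'EOF'
#!/bin/bash
set -e
mkdir -p /tmp/rpm_db_bak                 # step 1: backup directory
mv /var/lib/rpm/__db* /tmp/rpm_db_bak/   # step 2: move stale Berkeley DB files aside
rpm --rebuilddb                          # rebuild the rpmdb indexes
yum clean all                            # step 3: clean the yum cache
EOF
chmod +x /tmp/rpmdb_recover.sh
```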

  4. Now run the yum update command again. It will rebuild the rpmdb and should fetch and apply the updates from your repository or RHSM (or the CentOS CDN in the case of CentOS Linux):

[root@testvm ~]

# yum update
Loaded plugins: fastestmirror
base | 3.6 kB 00:00
epel/x86_64/metalink | 5.0 kB 00:00
epel | 4.3 kB 00:00
extras | 3.4 kB 00:00
updates | 3.4 kB 00:00
(1/7): base/7/x86_64/group_gz | 155 kB 00:02
(2/7): epel/x86_64/group_gz | 170 kB 00:04
(3/7): extras/7/x86_64/primary_db | 191 kB 00:12
(4/7): epel/x86_64/updateinfo | 809 kB 00:21
(5/7): base/7/x86_64/primary_db | 5.6 MB 00:26
(6/7): epel/x86_64/primary_db | 4.8 MB 00:46
(7/7): updates/7/x86_64/primary_db | 7.8 MB 00:50
Determining fastest mirrors

  • base: mirror.ehost.vn
  • epel: repo.ugm.ac.id
  • extras: mirror.ehost.vn
  • updates: mirror.dhakacom.com
    Resolving Dependencies
    --> Running transaction check
    ---> Package NetworkManager.x86_64 1:1.4.0-19.el7_3 will be updated
    ---> Package NetworkManager.x86_64 1:1.4.0-20.el7_3 will be an update
    ---> Package NetworkManager-adsl.x86_64 1:1.4.0-19.el7_3 will be updated
    [...]
    --> Finished Dependency Resolution

Dependencies Resolved

================================================================================

Package Arch Version Repository Size

Installing:
kernel x86_64 3.10.0-514.26.2.el7 updates 37 M
python2-libcomps x86_64 0.1.8-3.el7 epel 46 k
replacing python-libcomps.x86_64 0.1.6-13.el7
Updating:
NetworkManager x86_64 1:1.4.0-20.el7_3 updates 2.5 M
NetworkManager-adsl x86_64 1:1.4.0-20.el7_3 updates 146 k
NetworkManager-bluetooth x86_64 1:1.4.0-20.el7_3 updates 165 k
NetworkManager-glib x86_64 1:1.4.0-20.el7_3 updates 385 k
NetworkManager-libnm x86_64 1:1.4.0-20.el7_3 updates 443 k
NetworkManager-team x86_64 1:1.4.0-20.el7_3 updates 147 k
python-perf x86_64 3.10.0-514.26.2.el7 updates 4.0 M
sudo x86_64 1.8.6p7-23.el7_3 updates 735 k
systemd x86_64 219-30.el7_3.9 updates 5.2 M
systemd-libs x86_64 219-30.el7_3.9 updates 369 k
systemd-sysv x86_64 219-30.el7_3.9 updates 64 k
tuned noarch 2.7.1-3.el7_3.2 updates 210 k
xfsprogs x86_64 4.5.0-10.el7_3 updates 895 k
Removing:
kernel x86_64 3.10.0-123.el7 @anaconda 127 M

Transaction Summary

Install 2 Packages
Upgrade 46 Packages
Remove 1 Package

Total download size: 84 M
Is this ok [y/d/N]: y

Simple way to configure an Nginx High Availability Web Server with Pacemaker and Corosync on CentOS 7

Pacemaker is an open source cluster manager which provides high availability of resources or services on CentOS 7 or RHEL 7 Linux. It is a scalable, advanced HA cluster manager distributed by ClusterLabs.

Corosync is the core of the Pacemaker cluster manager: it generates the heartbeat communication between cluster nodes, which is what makes deploying highly available applications possible. Corosync is derived from the open source OpenAIS project under the new BSD license.

Pcsd is the daemon behind pcs, the Pacemaker command line interface (and GUI) for managing a Pacemaker cluster. The pcs command is used for creating, configuring, and adding new nodes to the cluster.

In this tutorial I will use pcs on the CLI to configure an Active/Passive Pacemaker cluster providing high availability for the Nginx web service on CentOS 7. This article gives a basic idea of how to configure a Pacemaker cluster on CentOS 7 (equally applicable to RHEL 7, of which CentOS is a rebuild). For this basic configuration I have disabled STONITH and ignored quorum, but for a production environment I suggest using Pacemaker's STONITH feature.

Here is a short definition of STONITH: STONITH, or Shoot The Other Node In The Head, is the fencing implementation in Pacemaker. Fencing is the isolation of a failed node so that it does not cause disruption to the computer cluster.

For the demonstration I have built two VMs (virtual machines) on KVM on my Ubuntu 16.04 base machine; the VMs have private IP addresses.

Note: I refer to my VMs as cluster nodes in the rest of the article.

Prerequisites for configuring a pacemaker cluster

Minimum two CentOS 7 servers
webserver01: 192.168.1.33
webserver02: 192.168.1.34
Floating IP address: 192.168.1.30
Root privileges

Below are the points which I will follow for Installing and Configuring two node Pacemaker Cluster:

  1. Mapping of Host File
  2. Installation of Epel Repository and Nginx
  3. Installation and Configuration of Pacemaker, Corosync, and Pcsd
  4. Creation and Configuration of Cluster
  5. Disabling of STONITH and Ignoring Quorum Policy
  6. Adding of Floating-IP and Resources
  7. Testing the Cluster service

Steps for Installation and configuration of pacemaker cluster

  1. Mapping of host files:

In my test lab I am not using DNS to resolve the two pacemaker cluster node hostnames, so I configured the /etc/hosts file on both nodes. Even if you do have DNS for name resolution in your environment, I still suggest configuring /etc/hosts for more reliable Pacemaker heartbeat communication between the cluster nodes.

Edit the /etc/hosts file with your preferred editor on both cluster nodes; below is the /etc/hosts file I configured on both nodes.

[root@webserver01 ~]

# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.33 webserver01
192.168.1.34 webserver02

[root@webserver01 ~]

#

[root@webserver02 ~]

# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.33 webserver01
192.168.1.34 webserver02

[root@webserver02 ~]

#
After configuring /etc/hosts, test the connectivity between the two cluster nodes with the ping command.
Example:

ping -c 3 webserver01
ping -c 3 webserver02
If we get replies like the below, the webservers can communicate with each other.

[root@webserver01 ~]

# ping -c 3 webserver02
PING webserver02 (192.168.1.34) 56(84) bytes of data.
64 bytes from webserver02 (192.168.1.34): icmp_seq=1 ttl=64 time=1.10 ms
64 bytes from webserver02 (192.168.1.34): icmp_seq=2 ttl=64 time=0.727 ms
64 bytes from webserver02 (192.168.1.34): icmp_seq=3 ttl=64 time=0.698 ms

--- webserver02 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 0.698/0.843/1.106/0.188 ms

[root@webserver01 ~]

#

[root@webserver02 ~]

# ping -c 3 webserver01
PING webserver01 (192.168.1.33) 56(84) bytes of data.
64 bytes from webserver01 (192.168.1.33): icmp_seq=1 ttl=64 time=0.197 ms
64 bytes from webserver01 (192.168.1.33): icmp_seq=2 ttl=64 time=0.123 ms
64 bytes from webserver01 (192.168.1.33): icmp_seq=3 ttl=64 time=0.114 ms

--- webserver01 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.114/0.144/0.197/0.039 ms

[root@webserver02~]

#

  2. Installation of Epel Repository and Nginx

In this step we will install the EPEL (Extra Packages for Enterprise Linux) repository and then Nginx; the EPEL repository package is required before Nginx can be installed.

yum -y install epel-release
Now install Nginx:

yum -y install nginx

  3. Install and Configure Pacemaker, Corosync, and Pcsd

Now we will install the pacemaker, pcs and corosync packages with yum. These packages do not require a separate repository; they come from the default CentOS repositories.

yum -y install corosync pacemaker pcs
Once the cluster packages are installed, enable the cluster services at startup with systemctl as below:

systemctl enable pcsd
systemctl enable corosync
systemctl enable pacemaker
Now start the pcsd service on both cluster nodes and enable it at system startup.

systemctl start pcsd.service
systemctl enable pcsd.service
The pcsd daemon works with the pcs command-line interface to manage synchronizing the corosync configuration across all nodes in the cluster.

The user hacluster is created automatically (with a disabled password) during package installation. This account is needed as the login credential for syncing the corosync configuration and for starting and stopping the cluster service on other cluster nodes.
In the next step we create a new password for the hacluster user; use the same password on the other cluster node as well.

passwd hacluster
Changing password for user hacluster.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

  4. Creation and Configuration of Cluster

Note: steps 4 to 7 only need to be performed on the webserver01 server.
This step covers creating the new two-node CentOS Linux cluster, which will host the Nginx resource and the floating IP address.

First of all to create cluster we need to authorize all servers using the pcs command and the hacluster user.

Authorize both cluster webservers with the pcs command and hacluster user and password.

[root@webserver01 ~]

# pcs cluster auth webserver01 webserver02
Username: hacluster
Password:
webserver01: Authorized
webserver02: Authorized

[root@webserver01 ~]

#
Note: If you are getting below error after running above auth command:

[root@webserver01 ~]

# pcs cluster auth webserver01 webserver02
Username: hacluster
Password:
webserver01: Authorized
Error: Unable to communicate with webserver02
Then you need to define firewalld rules on both nodes to enable communication between the cluster nodes.

Below is an example of adding rules for the cluster and for nginx:

[root@webserver01 ~]

# firewall-cmd --permanent --add-service=high-availability
success

[root@webserver01 ~]

# firewall-cmd --permanent --add-service=http
success

[root@webserver01 ~]

# firewall-cmd --permanent --add-service=https
success

[root@webserver01 ~]

# firewall-cmd --reload
success

[root@webserver01 ~]

# firewall-cmd --list-all
public (active)
target: default
icmp-block-inversion: no
interfaces: eth0
sources:
services: ssh dhcpv6-client high-availability http https
ports:
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:

[root@webserver01 ~]

#
Now we will define the cluster name and cluster node members.

pcs cluster setup --name web_cluster webserver01 webserver02
Next start all cluster services and enable them at system startup.

[root@webserver01 ~]

# pcs cluster start --all
webserver02: Starting Cluster…
webserver01: Starting Cluster…

[root@webserver01 ~]

# pcs cluster enable --all
webserver01: Cluster Enabled
webserver02: Cluster Enabled

[root@webserver01 ~]

#
Run the below command to check the Cluster status

pcs status cluster

[root@webserver01 ~]

# pcs status cluster
Cluster Status:
Stack: corosync
Current DC: webserver02 (version 1.1.18-11.el7_5.3-2b07d5c5a9) – partition with quorum
Last updated: Tue Sep 4 02:38:20 2018
Last change: Tue Sep 4 02:33:06 2018 by hacluster via crmd on webserver02
2 nodes configured
0 resources configured

PCSD Status:
webserver01: Online
webserver02: Online

[root@webserver01 ~]

#

  5. Disabling of STONITH and Ignoring Quorum Policy
    In this tutorial we disable STONITH and the quorum policy because we are not using a fencing device. If you implement a cluster in a production environment, I suggest using fencing and a quorum policy.

Disable STONITH:

pcs property set stonith-enabled=false
Ignore the Quorum Policy:

pcs property set no-quorum-policy=ignore
Now check whether STONITH and the quorum policy are disabled with the command below:

pcs property list

[root@webserver01 ~]

# pcs property list
Cluster Properties:
cluster-infrastructure: corosync
cluster-name: web_cluster
dc-version: 1.1.18-11.el7_5.3-2b07d5c5a9
have-watchdog: false
no-quorum-policy: ignore
stonith-enabled: false

[root@webserver01 ~]

#

  6. Adding of Floating-IP and Resources

A floating IP address is a virtual cluster IP that moves automatically from one cluster node to another when the active node hosting the cluster resources fails or is disabled.

In this step we will add Floating IP and Nginx resources:

Adding Floating IP

pcs resource create virtual_ip ocf:heartbeat:IPaddr2 ip=192.168.1.30 cidr_netmask=32 op monitor interval=30s
Adding nginx resources

pcs resource create webserver ocf:heartbeat:nginx configfile=/etc/nginx/nginx.conf op monitor timeout="5s" interval="5s"
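The two resources above are independent by default, so Pacemaker could in principle start them on different nodes. Not part of the original steps, but a common follow-up (a sketch) is to add colocation and ordering constraints so Nginx always runs where the floating IP is, and starts after it; the commands are written to a script for review first, since they need pcs on a cluster node:

```shell
# Save the constraint commands for review; run the script on a cluster node.
cat > /tmp/web_cluster_constraints.sh <<'EOF'
#!/bin/bash
# Keep the webserver resource on the same node as virtual_ip...
pcs constraint colocation add webserver with virtual_ip INFINITY
# ...and bring the IP up before starting nginx.
pcs constraint order start virtual_ip then start webserver
EOF
chmod +x /tmp/web_cluster_constraints.sh
```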
Now check the newly added resources with the command below:

pcs status resources

[root@webserver01 ~]

# pcs status resources
virtual_ip (ocf::heartbeat:IPaddr2): Started webserver01
webserver (ocf::heartbeat:nginx): Started webserver01

[root@webserver01 ~]

#

  7. Testing the Cluster service

To check the cluster service running status
We will check the cluster service status before testing nginx webservice failover in the event of an active cluster node failure.

To check the running cluster service status, below is the command with example output:

[root@webserver01 ~]

# pcs status
Cluster name: web_cluster
Stack: corosync
Current DC: webserver01 (version 1.1.18-11.el7_5.3-2b07d5c5a9) – partition with quorum
Last updated: Tue Sep 4 03:55:47 2018
Last change: Tue Sep 4 03:15:29 2018 by root via cibadmin on webserver01

2 nodes configured
2 resources configured

Online: [ webserver01 webserver02 ]

Full list of resources:

virtual_ip (ocf::heartbeat:IPaddr2): Started webserver01
webserver (ocf::heartbeat:nginx): Started webserver01

Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled

[root@webserver01 ~]

#
To test nginx webservice failover:

First we create a test webpage on each cluster node with the commands below.
On webserver01:

echo 'webserver01 - Web-Cluster-Testing' > /usr/share/nginx/html/index.html
On webserver02:

echo 'webserver02 - Web-Cluster-Testing' > /usr/share/nginx/html/index.html
Now open the web page via the floating IP address (192.168.1.30) configured as a cluster resource in the previous steps; you will see that the page is currently served from webserver01:

Now stop the cluster service on webserver01 and open the webpage again via the same floating IP address. Below is the command for stopping the pacemaker cluster on webserver01:

pcs cluster stop webserver01
After stopping the pacemaker cluster on webserver01, the webpage should now be served from webserver02:

Check Redhat version

The objective of this guide is to provide some hints on how to check the system version of your Redhat Enterprise Linux (RHEL). There are multiple ways to check the system version; however, depending on your system configuration, not all of the examples below may be suitable. For a CentOS-specific guide, see the How to check CentOS version guide.

Requirements

Privileged access to your RHEL system may be required.

Difficulty

EASY

Conventions

  • # – requires given linux commands to be executed with root privileges either directly as a root user or by use of sudo command
  • $ – requires given linux commands to be executed as a regular non-privileged user

Instructions

Using hostnamectl

hostnamectl is most likely the first and last command you need to execute to reveal your RHEL system version:

$ hostnamectl 
   Static hostname: localhost.localdomain
Transient hostname: status
         Icon name: computer-vm
           Chassis: vm
        Machine ID: d731df2da5f644b3b4806f9531d02c11
           Boot ID: 384b6cf4bcfc4df9b7b48efcad4b6280
    Virtualization: xen
  Operating System: Red Hat Enterprise Linux Server 7.3 (Maipo)
       CPE OS Name: cpe:/o:redhat:enterprise_linux:7.3:GA:server
            Kernel: Linux 3.10.0-514.el7.x86_64
      Architecture: x86-64

Query Release Package

Use rpm command to query Redhat’s release package:

RHEL 7
$ rpm --query redhat-release-server
redhat-release-server-7.3-7.el7.x86_64
RHEL 8
$ rpm --query redhat-release
redhat-release-8.0-0.34.el8.x86_64

Common Platform Enumeration

Check Common Platform Enumeration source file:

$ cat /etc/system-release-cpe 
cpe:/o:redhat:enterprise_linux:7.3:ga:server

LSB Release

Depending on whether the redhat-lsb package is installed on your system, you may also use the lsb_release -d command to check the Redhat system version:

$ lsb_release -d
Description:	Red Hat Enterprise Linux Server release 7.3 (Maipo)

Alternatively install redhat-lsb package with:

# yum install redhat-lsb


Check Release Files

There are a number of release files located in the /etc/ directory, namely os-release, redhat-release and system-release:

$ ls /etc/*release
os-release  redhat-release  system-release

Use cat to check the content of each file to reveal your Redhat OS version. Alternatively, use the below for loop for an instant check:

$ for i in $(ls /etc/*release); do echo ===$i===; cat $i; done

Depending on your RHEL version, the output of the above shell for loop may look different:

===os-release===
NAME="Red Hat Enterprise Linux Server"
VERSION="7.3 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="7.3"
PRETTY_NAME="Red Hat Enterprise Linux Server 7.3 (Maipo)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:7.3:GA:server"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"

REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.3
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="7.3"
===redhat-release===
Red Hat Enterprise Linux Server release 7.3 (Maipo)
===system-release===
Red Hat Enterprise Linux Server release 7.3 (Maipo)
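When a script only needs the version number, the VERSION_ID field of /etc/os-release is the easiest to parse. A minimal sketch; the heredoc stands in for /etc/os-release so the example is self-contained:

```shell
# Parse VERSION_ID from os-release-style content. In practice you would
# feed /etc/os-release itself instead of the heredoc sample below.
get_version_id() {
    sed -n 's/^VERSION_ID="\{0,1\}\([^"]*\)"\{0,1\}$/\1/p'
}
get_version_id <<'EOF'
NAME="Red Hat Enterprise Linux Server"
VERSION_ID="7.3"
EOF
```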

Grub Config

The least reliable way to check Redhat’s OS version is by looking at the Grub configuration. Grub configuration may not produce a definitive answer, but it will provide some hints on how the system booted. 

The default locations of grub config files are /boot/grub2/grub.cfg and /etc/grub2.cfg. Use grep command to check for menuentry keyword:

# grep -w menuentry /boot/grub2/grub.cfg /etc/grub2.cfg

Another alternative is to check the value of the “GRUB Environment Block”:

# grep saved_entry /boot/grub2/grubenv 
saved_entry=Red Hat Enterprise Linux Server (3.10.0-514.el7.x86_64) 7.3 (Maipo)

Nginx server configuration

yum -y install make gcc gcc-c++ openssl openssl-devel pcre-devel zlib-devel

wget -c http://nginx.org/download/nginx-1.14.2.tar.gz

tar zxvf nginx-1.14.2.tar.gz

cd nginx-1.14.2

./configure --prefix=/usr/local/nginx

make && make install

cd /usr/local/nginx

./sbin/nginx

ps aux|grep nginx

Nginx load balancing configuration example

Load balancing is achieved either with dedicated hardware devices or with software algorithms. Hardware load balancers give good results, high efficiency and stable performance, but the cost is relatively high. Software load balancing depends mainly on the choice of balancing algorithm and the robustness of the program. Algorithms fall into two common categories: static and dynamic. Static algorithms are relatively simple to implement and achieve good results in typical network environments; they mainly include plain round-robin, weighted round-robin, and priority-based weighted round-robin. Dynamic algorithms adapt better and are more effective in complex network environments; they mainly include least-connections (based on task volume), fastest-response-first (based on performance), prediction algorithms, and dynamic performance allocation algorithms.

The general principle of network load balancing is to use an allocation strategy to distribute the network load evenly across the operating units of a cluster, so that a single heavy task can be split across multiple units for parallel processing, or a large volume of concurrent accesses or data traffic can be shared among multiple units, thereby reducing the user's waiting time for a response.

Nginx server load balancing configuration
The Nginx server implements a static priority-based weighted round-robin algorithm. The main configuration is the proxy_pass command and the upstream command. These contents are actually easy to understand. The key point is that the configuration of the Nginx server is flexible and diverse. How to configure load balancing? At the same time, rationally integrate other functions to form a set of configuration solutions that can meet actual needs.

The following are some basic example fragments. Of course they cannot cover every configuration situation; I hope they serve as a starting point, and we still need to summarize and accumulate experience in actual use. Points to note in each configuration are added as comments.

Configuration Example 1: Load balancing of general polling rules for all requests
In the following example fragment, all servers in the backend server group use the default weight=1, so they receive request tasks in turn according to the plain round-robin policy. This is the simplest configuration for Nginx load balancing: all requests to www.rmohan.com are load balanced across the backend server group. The example code is as follows:

upstream backend   # configure the backend server group
{
    server 192.168.1.2:80;
    server 192.168.1.3:80;
    server 192.168.1.4:80;   # default weight=1
}
server
{
    listen 80;
    server_name www.rmohan.com;
    index index.html index.htm;
    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
    }
}

Configuration Example 2: Load balancing of weighted polling rules for all requests
Compared with Configuration Example 1, in this fragment the servers in the backend server group are given different priority levels: the value of the weight parameter is the “weight” used in the round-robin policy. Here 192.168.1.2:80 has the highest weight and preferentially receives and processes client requests; 192.168.1.4:80 has the lowest weight and receives and processes the fewest client requests; 192.168.1.3:80 falls between the two. All requests to www.rmohan.com are load balanced across the backend server group with these weights. The example code is as follows:

upstream backend   # configure the backend server group
{
    server 192.168.1.2:80 weight=5;
    server 192.168.1.3:80 weight=2;
    server 192.168.1.4:80;   # default weight=1
}
server
{
    listen 80;
    server_name www.rmohan.com;
    index index.html index.htm;
    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
    }
}
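The effect of the weights can be checked without any running servers: over one full polling cycle (weight sum = 8 requests) each server receives a number of requests equal to its weight. A small self-contained illustration of that arithmetic (the per-cycle totals also hold for nginx's smooth weighted round-robin):

```shell
# Per-cycle request distribution for the weights 5, 2 and 1 used above.
awk 'BEGIN {
    w["192.168.1.2:80"] = 5
    w["192.168.1.3:80"] = 2
    w["192.168.1.4:80"] = 1
    for (s in w) total += w[s]
    for (s in w)
        printf "%s handles %d of every %d requests\n", s, w[s], total
}'
```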

Configuration Example 3: Load balancing a specific resource
In this example fragment, we set up two proxied server groups: one named “videobackend” that load balances client requests for video resources, and another named “filebackend” that load balances client requests for file resources. All requests for “http://www.mywebname/video/” are balanced across the videobackend server group, and all requests for “http://www.mywebname/file/” across the filebackend server group. This example implements plain load balancing; for weighted load balancing, refer to Configuration Example 2.

In the location /file/ {...} block, we pass the client's real information to the backend in the "Host", "X-Real-IP", and "X-Forwarded-For" request header fields, so the requests received by the backend server group retain the client's real information rather than the Nginx server's. The example code is as follows:

upstream videobackend   # configure backend server group 1
{
    server 192.168.1.2:80;
    server 192.168.1.3:80;
    server 192.168.1.4:80;
}
upstream filebackend    # configure backend server group 2
{
    server 192.168.1.5:80;
    server 192.168.1.6:80;
    server 192.168.1.7:80;
}
server
{
    listen 80;
    server_name www.rmohan.com;
    index index.html index.htm;
    location /video/ {
        proxy_pass http://videobackend;   # use backend server group 1
        proxy_set_header Host $host;
    }
    location /file/ {
        proxy_pass http://filebackend;    # use backend server group 2
        # retain the real information of the client
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Configuration Example 4: Load balancing different domain names
In this example fragment, we set up two virtual servers and two backend proxied server groups to receive requests for different domain names and load balance them. If the client requests the domain home.rmohan.com, server1 receives the request and load balances it across the homebackend server group; if the client requests the domain bbs.rmohan.com, server2 receives the request and load balances it across the bbsbackend server group. This achieves load balancing per domain name.

Note that server 192.168.1.4:80 appears in both backend server groups. All resources under both domain names need to be deployed on this server so that client requests are served without problems. The example code is as follows:


upstream bbsbackend    # configure backend server group 1
{
    server 192.168.1.2:80 weight=2;
    server 192.168.1.3:80 weight=2;
    server 192.168.1.4:80;
}
upstream homebackend   # configure backend server group 2
{
    server 192.168.1.4:80;
    server 192.168.1.5:80;
    server 192.168.1.6:80;
}
# start to configure server 1
server
{
    listen 80;
    server_name home.rmohan.com;
    index index.html index.htm;
    location / {
        proxy_pass http://homebackend;
        proxy_set_header Host $host;
    }
}
# start to configure server 2
server
{
    listen 80;
    server_name bbs.rmohan.com;
    index index.html index.htm;
    location / {
        proxy_pass http://bbsbackend;
        proxy_set_header Host $host;
    }
}

Configuration Example 5: Implementing load balancing with URL rewriting
First, let’s look at the specific source code. This is a modification made on the basis of instance one:


upstream backend   # configure the backend server group
{
    server 192.168.1.2:80;
    server 192.168.1.3:80;
    server 192.168.1.4:80;   # default weight=1
}
server
{
    listen 80;
    server_name www.rmohan.com;
    index index.html index.htm;

    location /file/ {
        rewrite ^(/file/.*)/media/(.*)\..*$ $1/mp3/$2.mp3 last;
    }

    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
    }
}
Compared with Configuration Example 1, this fragment adds a URL rewriting function for URIs containing “/file/”. For example, when the URL requested by the client is “http://www.rmohan.com/file/download/media/1.mp3”, the virtual server first rewrites it in the location /file/ {…} block and then forwards it to the backend server group, where load balancing is applied. In this way, load balancing with URL rewriting is easily implemented. In this configuration scheme it is necessary to grasp the difference between the last and break flags of the rewrite directive in order to achieve the expected effect.
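The substitution the rewrite rule performs can be tried outside nginx. A sketch with sed, using the same capture groups on the sample URL path:

```shell
# The rewrite pattern ^(/file/.*)/media/(.*)\..*$ -> $1/mp3/$2.mp3,
# applied to the sample request path from the text.
echo "/file/download/media/1.mp3" \
  | sed -E 's|^(/file/.*)/media/(.*)\..*$|\1/mp3/\2.mp3|'
```

This prints /file/download/mp3/1.mp3, which is the URI that is then load balanced across the backend group.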

The above five configuration examples show the basic methods for configuring load balancing on an Nginx server under different conditions. Since the features of the Nginx server build on each other structurally, we can continue to add more functionality on top of these configurations, such as web caching, Gzip compression, identity authentication and rights management. At the same time, when configuring server groups with the upstream directive, you can make full use of each directive's features to configure an Nginx server that meets the requirements and is efficient, stable and feature-rich.

Location configuration
syntax: location [ = | ~ | ~* | ^~ ] uri { ... }
uri is the request string to match; it can be a plain string without regular expressions or a string containing a regular expression.
The optional bracketed modifier changes how the request string is matched against uri:

    = precedes a standard (literal) uri and requires the request string to match it exactly; if the match succeeds, matching stops and the request is processed immediately 

    ~ indicates that the uri contains a regular expression and matching is case-sensitive 

    ~* indicates that the uri contains a regular expression and matching is case-insensitive 

    ^~ tells the server, once the location with the longest literal-prefix match between uri and the request string is found, to use that location directly instead of going on to check regular expressions 
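A sketch of how the four modifiers interact (the paths and backends are made up for illustration):

```nginx
location = /exact.html   { ... }   # literal match: wins immediately if the URI is exactly /exact.html
location ^~ /static/     { ... }   # longest literal prefix: if it wins, regex locations are skipped
location ~* \.(gif|jpg)$ { ... }   # case-insensitive regex, checked before plain prefixes
location /               { ... }   # generic prefix: the fallback for everything else
```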

website error page
1xx: Informational - the request has been received and processing continues
2xx: Success - the request has been successfully received, understood and accepted
3xx: Redirection - further action must be taken to complete the request
4xx: Client error - the request has a syntax error or cannot be fulfilled
5xx: Server error - the server failed to fulfill a legitimate request

Meanings of common HTTP status codes:
301 - the requested data has moved to a new location and the change is permanent
302 - the requested data temporarily resides at a different location
400 - the server can be reached, but the page cannot be found because of a problem with the request
404 - the site can be reached, but the requested page cannot be found
405 - the site can be reached, but the method used in the request is not allowed
500 - a server-side problem prevents the page from being displayed
501 - the server does not support the functionality required to fulfill the request
505 - the protocol version used in the request is not supported

For example:
200 OK // the client request succeeded
400 Bad Request // the request has a syntax error and cannot be understood by the server
401 Unauthorized // the request is unauthorized; this code must be used with the WWW-Authenticate header field
403 Forbidden // the server received the request but refuses to serve it
404 Not Found // the requested resource does not exist, e.g. a wrong URL was entered
500 Internal Server Error // the server encountered an unexpected error
503 Service Unavailable // the server cannot currently handle the client's request and may recover after a while
e.g.: HTTP/1.1 200 OK (CRLF)

Common configuration directives

1. error_log file | stderr [debug | info | notice | warn | error | crit | alert | emerg]

    debug - debugging level, the most complete log output
    info - informational level, outputs prompt messages
    notice - notice level, outputs noteworthy messages
    warn - warning level, outputs insignificant error messages
    error - error level, an error has occurred
    crit - critical level, serious errors that affect normal operation of the service
    alert - alert level, very serious errors
    emerg - emergency level, the most serious errors

The Nginx server log is written to a file or to the standard error output (stderr). The option after the destination is the log level; levels range from low (debug) to high (emerg), and messages at the configured level and above are recorded.

    2. user user [group] 

    configures the user (and optionally the group) that runs the worker processes; omit the directive if any user may start Nginx 

    3. worker_processes number | auto

    specifies how many worker processes Nginx generates; 
    auto lets Nginx detect the number automatically 

    4. pid file 

    specifies the file in which the pid is stored, 
    e.g. pid logs/nginx.pid; take care to set the file name correctly, or it cannot be found 

    5. include file 

    includes other configuration files 

    6. accept_mutex on | off 

    enables serialization of network connection acceptance 

    7. multi_accept on | off 

    sets whether a worker may accept multiple new network connections at the same time 

    8. use method 

    selects the event-driven model 

    9. worker_connections number 

    configures the maximum number of connections allowed per worker process; the default is 1024 

    10. mime-type 

    configures resource types; a MIME type is the media type of a network resource 
    format: default_type MIME-type 

    11. access_log path [format [buffer=size]] 

    defines the server's access log
    path: the path and name of the log file 
    format: optional, a self-defined format string for the log 
    size: the size of the memory buffer used to temporarily store log entries 

    12. log_format name string ...; 

    used in combination with access_log to define the format of the server log, 
    and gives the format a name so that access_log can refer to it easily

    name: the name of the format string; the default is combined 
    string: the format string of the service log

log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for"';


    $remote_addr - the visitor's source address, e.g. 10.0.0.1

    $remote_user - the authentication information shown when the Nginx server performs authenticated access

    [$time_local] - the access time

    "$request" - the request line, e.g. "GET / HTTP/1.1"

    $status - the response status, e.g. 200 (304 is shown when the cache is read)

    $body_bytes_sent - the size of the response body, e.g. 256

    "$http_referer" - the referring link

    "$http_user_agent" - the browser information of the accessing client

    "$http_x_forwarded_for" - known as the XFF header. It carries the real IP of the HTTP requester, i.e. the client. It is only added when the request passes through an HTTP proxy or load-balancing server. It is not a standard request header defined in the RFCs; it can be found in the Squid cache proxy development documentation.

    13. sendfile on | off

    enables or disables sendfile-mode file transfer 

    14. sendfile_max_chunk size 

    configures the maximum amount of data each worker process may transfer per sendfile() call 

    15. keepalive_timeout timeout [header_timeout]; 

    configures the connection timeout 
    timeout: how long the server keeps the connection open 
    header_timeout: the timeout set in the Keep-Alive field of the response header 

    16. keepalive_requests number 

    the number of requests allowed on a single keep-alive connection 

    17. listen - configure network listening 

    There are three forms: 
        listening on an IP address: 
        listen address[:port] [default_server] [setfib=number] [backlog=number] [rcvbuf=size] [sndbuf=size] [deferred] [bind] [ssl] 

        listening on a port: 
        listen port [default_server] [setfib=number] [backlog=number] [rcvbuf=size] [sndbuf=size] [accept_filter=filter] [deferred] [bind] [ssl]

        listening on a UNIX socket: 
        listen unix:path [default_server] [backlog=number] [rcvbuf=size] [sndbuf=size] [accept_filter=filter] [deferred] [bind] [ssl]

    address: IP address
    port: port number
    path: socket file path
    default_server: identifier; makes this virtual host the default host for the given address:port
    setfib=number: only useful on FreeBSD; associates the listening socket with routing table number (since version 0.8.44)
    backlog=number: sets the maximum length of the pending-connection queue for listen(); defaults to -1 on FreeBSD and 511 elsewhere
    rcvbuf=size: receive buffer size of the listening socket
    sndbuf=size: send buffer size of the listening socket
    deferred: identifier; sets accept() to deferred mode
    accept_filter=filter: sets a filter on the listening port to screen incoming requests; only works on FreeBSD and NetBSD 5.0+
    bind: identifier; uses a separate bind() call for this address:port
    ssl: identifier; connections on this port use SSL mode

    18. server_name name ...

    configures name-based virtual hosts. When several server_name values can match a request, the priority is: an exact match first, then a wildcard at the beginning of the name, then a wildcard at the end, then a regular-expression match. If the request matches several server_name values in the same way, the first successful match is used to process the request.

    19. root path

    after receiving a request, the server needs to find the requested resource in the directory specified by root;
    this path specifies the root directory for the request

    20. alias path (used inside a location block) 

    changes the path used to resolve the URI received by the location; it may contain variables.
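The difference between root and alias is easiest to see side by side; a sketch with made-up paths:

```nginx
location /img/ {
    alias /data/images/;   # /img/a.png  is served from /data/images/a.png
}
location /pics/ {
    root /data;            # /pics/a.png is served from /data/pics/a.png
}
```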

    21. index file ...; 

    sets the default home page of the website 

    22. error_page code ... [=[response]] uri 

    sets the error page 

        code: the HTTP error code to handle 
        response: optional; converts the specified error code into a new code 
        uri: the error page path or website address 
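A short sketch of the directive (the page paths are examples):

```nginx
error_page 404 /404.html;               # serve a custom page for 404
error_page 502 503 504 =200 /50x.html;  # respond with code 200 and /50x.html for these errors
```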

    23. allow address | CIDR | all 

    configures IP-based access permission 

    address: the client IP allowed to access; multiple values are not supported 
    CIDR: a CIDR range of allowed clients, e.g. 185.199.110.0/24 
    all: all clients may access 

    24. deny address | CIDR | all 

    configures IP-based access denial 

    address: the client IP denied access; multiple values are not supported 
    CIDR: a CIDR range of denied clients, e.g. 185.199.110.0/24 
    all: all clients are denied        
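allow and deny are evaluated in order, so a common pattern is to permit one network and reject everyone else; a sketch with example addresses:

```nginx
location /admin/ {
    allow 192.168.1.0/24;   # permit the internal network
    deny  all;              # reject all other clients
}
```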


    25. auth_basic string | off 

    configures password-based access to Nginx.

    string: enables authentication and sets the realm text shown in the prompt 

    off: disables authentication 

    26. auth_basic_user_file file 

    configures the password file used for password-based access 

    file: must be given as an absolute path
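Putting directives 25 and 26 together, a minimal password-protected location might look like this (the realm text and file path are examples; the password file itself can be generated with Apache's htpasswd tool):

```nginx
location /private/ {
    auth_basic           "Restricted area";      # enables auth and sets the prompt
    auth_basic_user_file /etc/nginx/.htpasswd;   # absolute path required
}
```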

Docker installation builds Tomcat + MySQL


virtual machine
Virtual machine installation Docker
Build on a clean CentOS image
Centos image preparation
Pull the Centos image on the virtual machine: docker pull centos
Create a container to run the Centos image: docker run -it -d --name mycentos centos /bin/bash
Note: There is an error here [ WARNING: IPv4 forwarding is disabled. Networking will not work. ]

Change the virtual machine file: vim /usr/lib/sysctl.d/00-system.conf
Add the following content
net.ipv4.ip_forward=1
Restart the network: systemctl restart network
Note: There is another problem here, systemctl in docker can not be used normally. Find the following solutions on the official website

Link: https://forums.docker.com/t/systemctl-status-is-not-working-in-my-docker-container/9075/4

Run the image with the following statement:
docker run --privileged -v /sys/fs/cgroup:/sys/fs/cgroup -it -d --name usr_sbin_init_centos centos /usr/sbin/init

1. Must have --privileged

2. Must have -v /sys/fs/cgroup:/sys/fs/cgroup

3. Replace /bin/bash with /usr/sbin/init

Finally, I was able to run a Centos image.

Install the JAVA environment
Prepare the JDK tarball to upload to the virtual machine
Put the tarball into the docker container using docker cp
docker cp jdk-11.0.2_linux-x64_bin.tar.gz 41dbc0fbdf3c:/

Usage is the same as the Linux cp command, except you add the identifier of the container: its id or name.

Extract the tar package
tar -xf jdk-11.0.2_linux-x64_bin.tar.gz -C /usr/local/java/jdk
Edit the profile file to export the Java environment variables

/etc/profile

export JAVA_HOME=/usr/local/java/jdk/jdk-11.0.2
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
Run source /etc/profile to make environment variables take effect
Test whether the installation succeeded:
java --version

result

java 11.0.2 2019-01-15 LTS
Java(TM) SE Runtime Environment 18.9 (build 11.0.2+9-LTS)
Java HotSpot(TM) 64-Bit Server VM 18.9 (build 11.0.2+9-LTS, mixed mode)
Install Tomcat
Prepare the tomcat tar package to upload to the virtual machine and cp to the docker container
Extract it:
tar -xf apache-tomcat-8.5.38.tar.gz -C /usr/local/tomcat
Set Tomcat to start at boot by using the rc.local file.

Add the following code to rc.local:

export JAVA_HOME=/usr/local/java/jdk/jdk-11.0.2
/usr/local/tomcat/apache-tomcat-8.5.38/bin/startup.sh
Start Tomcat by running the following in the /usr/local/tomcat/apache-tomcat-8.5.38/bin/ directory:
./startup.sh
Verify:
curl localhost:8080

Install mysql
Get the yum source of mysql
wget -c http://dev.mysql.com/get/mysql57-community-release-el7-10.noarch.rpm
Install the above yum source
yum -y install mysql57-community-release-el7-10.noarch.rpm
Install mysql with yum
yum -y install mysql-community-server
Change the mysql configuration in /etc/my.cnf:
validate_password=OFF # turn off password validation
character-set-server=utf8
collation-server=utf8_general_ci
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
explicit_defaults_for_timestamp=true # the implicit TIMESTAMP default behavior is deprecated since 5.6; set this to silence the warning
Note: if mysqld reports "--initialize specified but the data directory has files in it", clear the data directory before initializing.

[client]

default-character-set=utf8
Get the mysql initial password
grep "password" /var/log/mysqld.log

[Note] A temporary password is generated for root@localhost: k:nT<dT,t4sF

Use this password to log in to mysql:

mysql -u root -p

Then change the password:

ALTER USER 'root'@'localhost' IDENTIFIED BY '111111';

Change to make mysql remote access

use mysql;
update user set host = '%' where user = 'root';
flush privileges;
Test: from the physical machine, use Navicat to access MySQL running in Docker.
Packaging the container

Commit the container as an image:

docker commit -a 'kane' -m 'test' container_id images_name:images_tag

Push to Docker Hub:
docker push kane0725/tomcat
Local tar package export and import

Export the container to a local tar package:

docker export -o test.tar a404c6c174a2

Import the tar package as an image:

docker import test.tar test_images
Use Dockerfile
Note: this only builds a Tomcat image

Preparation
Create a dedicated folder and put the jdk and tomcat tarballs in it
Create a Dockerfile in this directory
Centos base image
Dockerfile content:
FROM centos
MAINTAINER tomcat mysql
ADD jdk-11.0.2 /usr/local/java
ENV JAVA_HOME /usr/local/java/
ADD apache-tomcat-8.5.38 /usr/local/tomcat8
EXPOSE 8080
RUN /usr/local/tomcat8/bin/startup.sh
Output results using docker build

[root@localhost dockerfile]

# docker build -t tomcats:centos .
Sending build context to Docker daemon 505.8 MB
Step 1/7 : FROM centos
 ---> 1e1148e4cc2c
Step 2/7 : MAINTAINER tomcat mysql
 ---> Using cache
 ---> 889454b28f55
Step 3/7 : ADD jdk-11.0.2 /usr/local/java
 ---> Using cache
 ---> 8cad86ae7723
Step 4/7 : ENV JAVA_HOME /usr/local/java/
 ---> Running in 15d89d66adb4
 ---> 767983acfaca
Removing intermediate container 15d89d66adb4
Step 5/7 : ADD apache-tomcat-8.5.38 /usr/local/tomcat8
 ---> 4219d7d611ec
Removing intermediate container 3c2438ecf955
Step 6/7 : EXPOSE 8080
 ---> Running in 56c4e0c3b326
 ---> 7c5bd484168a
Removing intermediate container 56c4e0c3b326
Step 7/7 : RUN /usr/local/tomcat8/bin/startup.sh
 ---> Running in 7a73d0317db3

Tomcat started.
 ---> b53a6d54bf64
Removing intermediate container 7a73d0317db3
Successfully built b53a6d54bf64
Docker build problem
Be sure to include the trailing '.' (the build context path) in the command; otherwise it reports an error:
"docker build" requires exactly 1 argument(s).
Run a container

docker run -it --name tomcats --restart always -p 1234:8080 tomcats /bin/bash

tomcat startup.sh

/usr/local/tomcat8/bin/startup.sh

result

Using CATALINA_BASE: /usr/local/tomcat8
Using CATALINA_HOME: /usr/local/tomcat8
Using CATALINA_TMPDIR: /usr/local/tomcat8/temp
Using JRE_HOME: /usr/local/java/
Using CLASSPATH: /usr/local/tomcat8/bin/bootstrap.jar:/usr/local/tomcat8/bin/tomcat-juli.jar
Tomcat started.
Use docker compose
Install docker compose
Official: https://docs.docker.com/compose/install/

The method I chose is installation via pip:

pip install docker-compose

docker-compose --version

———————–

docker-compose version 1.23.2, build 1110ad0
Write docker-compose.yml

This yml file builds one MySQL container and one Tomcat container:

version: "3"
services:
  mysql:
    container_name: mysql
    image: mysql:5.7
    restart: always
    volumes:
      - ./mysql/data/:/var/lib/mysql/
      - ./mysql/conf/:/etc/mysql/mysql.conf.d/
    ports:
      - "6033:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=
  tomcat:
    container_name: tomcat
    restart: always
    image: tomcat
    ports:
      - "8080:8080"
      - "8009:8009"
    links:
      - mysql:m1   # link to the database container
Note:

volumes must specify a path; you cannot mount a single file

Mounting an external conf directory for Tomcat did not work (I do not know why); the prompt is:

tomcat | Feb 20, 2019 2:23:29 AM org.apache.catalina.startup.Catalina load
tomcat | WARNING: Unable to load server configuration from [/usr/local/tomcat/conf/server.xml]
tomcat | Feb 20, 2019 2:23:29 AM org.apache.catalina.startup.Catalina start
tomcat | SEVERE: Cannot start server. Server instance is not configured.
tomcat exited with code 1
Run command
Note: Must be executed under the directory of the yml file

docker-compose up -d

---------- view docker containers ----------

[root@localhost docker-compose]

# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1a8a0165a3a8 tomcat “catalina.sh run” 7 seconds ago Up 6 seconds 0.0.0.0:8009->8009/tcp, 0.0.0.0:8080->8080/tcp tomcat
ddf081e87d67 mysql:5.7 “docker-entrypoint…” 7 seconds ago Up 7 seconds 33060/tcp, 0

Kubernetes install centos7

Kubeadm quickly builds a k8s cluster

Environment

Master01: 192.168.1.110 (minimum 2 core CPU)

node01: 192.168.1.100

Planning

Services network: 10.96.0.0/12

Pod network: 10.244.0.0/16

  1. Configure hosts to resolve each host

vim /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.110 master01
192.168.1.100 node01

  2. Synchronize the time on each host

yum install -y ntpdate
ntpdate time.windows.com

14 Mar 16:51:32 ntpdate[46363]: adjust time server 13.65.88.161 offset -0.001108 sec

  3. Turn off swap and disable selinux

swapoff -a

vim /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled

  4. Install docker-ce

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache fast
yum -y install docker-ce

Appears after Docker installation: WARNING: bridge-nf-call-iptables is disabled

vim /etc/sysctl.conf

# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same name in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).

net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-arptables=1

Apply the settings with sysctl -p.

systemctl enable docker && systemctl start docker

  5. Install kubernetes

cat <<EOF > /etc/yum.repos.d/kubernetes.repo

[kubernetes]

name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet

  6. Initialize the cluster

kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.13.1 --pod-network-cidr=10.244.0.0/16

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run “kubectl apply -f [podnetwork].yaml” with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join 192.168.1.110:6443 --token wgrs62.vy0trlpuwtm5jd75 --discovery-token-ca-cert-hash sha256:6e947e63b176acf976899483d41148609a6e109067ed6970b9fbca8d9261c8d0

  7. Manually deploy flannel

Flannel URL: https://github.com/coreos/flannel

for Kubernetes v1.7 +

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

podsecuritypolicy.extensions/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
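Once the manifests are applied, the flannel pods should reach Running on every node before the nodes report Ready. A guarded check, using the kube-system namespace and the `app=flannel` label that kube-flannel.yml assigns:

```shell
# watch the flannel daemonset pods come up
if command -v kubectl >/dev/null 2>&1; then
  kubectl get pods -n kube-system -l app=flannel -o wide 2>/dev/null || echo "cluster not reachable yet"
else
  echo "kubectl not available on this host"
fi
```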

  1. Node deployment

Install docker, kubelet, and kubeadm on each node. Docker installation is the same as step 4; the kubelet and kubeadm installation is the same as step 5.

  1. Join the nodes to the master

kubeadm join 192.168.1.110:6443 --token wgrs62.vy0trlpuwtm5jd75 --discovery-token-ca-cert-hash sha256:6e947e63b176acf976899483d41148609a6e109067ed6970b9fbca8d9261c8d0

kubectl get nodes    # view node status

NAME STATUS ROLES AGE VERSION
localhost.localdomain NotReady 130m v1.13.4
master01 Ready master 4h47m v1.13.4
node01 Ready 94m v1.13.4

kubectl get cs    # view component status

NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health": "true"}

kubectl get ns    # view namespaces

NAME STATUS AGE
default Active 4h41m
kube-public Active 4h41m
kube-system Active 4h41m

kubectl get pods -n kube-system    # view pod status

NAME READY STATUS RESTARTS AGE
coredns-78d4cf999f-bszbk 1/1 Running 0 4h44m
coredns-78d4cf999f-j68hb 1/1 Running 0 4h44m
etcd-master01 1/1 Running 0 4h43m
kube-apiserver-master01 1/1 Running 1 4h43m
kube-controller-manager-master01 1/1 Running 2 4h43m
kube-flannel-ds-amd64-27x59 1/1 Running 1 126m
kube-flannel-ds-amd64-5sxgk 1/1 Running 0 140m
kube-flannel-ds-amd64-xvrbw 1/1 Running 0 91m
kube-proxy-4pbdf 1/1 Running 0 91m
kube-proxy-9fmrl 1/1 Running 0 4h44m
kube-proxy-nwkl9 1/1 Running 0 126m
kube-scheduler-master01 1/1 Running 2 4h43m

What follows is an alternative deployment using the CentOS-packaged Kubernetes (without kubeadm). Environment preparation: three hosts, master01, node01, and node02, all connected to the network; modify the hosts file and confirm that the three hosts resolve each other.

vim /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.201 master01
192.168.1.202 node01
192.168.1.203 node02
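A quick way to confirm the /etc/hosts entries took effect on each machine is to look the names up with getent, which consults the same resolver order the services will use. `check_hosts` is a helper name invented here for the sketch:

```shell
# report whether each hostname resolves on this machine
check_hosts() {
  for h in "$@"; do
    if getent hosts "$h" >/dev/null 2>&1; then
      echo "$h ok"
    else
      echo "$h unresolved"
    fi
  done
}
check_hosts master01 node01 node02
```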

Configure the Aliyun YUM repository on every host:

mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup && curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

Start deploying Kubernetes.

  1. Install etcd on master01

yum install etcd -y

After the installation is complete, modify the etcd configuration file /etc/etcd/etcd.conf

vim /etc/etcd/etcd.conf

ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"    # listen address: 0.0.0.0 for all interfaces, or the host's own address 192.168.1.201

Enable and start the service:

systemctl start etcd && systemctl enable etcd
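Before pointing the apiserver at etcd, it is worth confirming the member answers on its client URL. A guarded sketch using the etcd v2 CLI that ships with the CentOS etcd package (the endpoint matches the listen address configured above):

```shell
# query cluster health against the configured client URL
if command -v etcdctl >/dev/null 2>&1; then
  etcdctl --endpoints http://192.168.1.201:2379 cluster-health || echo "etcd not reachable from this host"
else
  echo "etcdctl not installed on this host"
fi
```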

  1. Install kubernetes on all hosts

yum install kubernetes -y

  1. Configure the master

vim /etc/kubernetes/config

KUBE_MASTER="--master=http://192.168.1.201:8080"    # set the master address

vim /etc/kubernetes/apiserver

KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"    # listen on all interfaces
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.1.201:2379"    # set the etcd address
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"    # ServiceAccount removed from the admission list

Enable and start the services; start order: apiserver first, then scheduler and controller-manager.

systemctl start docker && systemctl enable docker
systemctl start kube-apiserver && systemctl enable kube-apiserver
systemctl start kube-scheduler && systemctl enable kube-scheduler
systemctl start kube-controller-manager && systemctl enable kube-controller-manager
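With the control-plane services up, the packaged kubectl can query component health over the insecure port configured above. A guarded sketch:

```shell
# check scheduler / controller-manager / etcd health via the insecure API port
if command -v kubectl >/dev/null 2>&1; then
  kubectl -s http://192.168.1.201:8080 get componentstatuses 2>/dev/null || echo "apiserver not reachable from this host"
else
  echo "kubectl not installed on this host"
fi
```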

  1. Configure node

vim /etc/kubernetes/config

KUBE_MASTER="--master=http://192.168.1.201:8080"    # set the master address

vim /etc/kubernetes/kubelet

KUBELET_ADDRESS="--address=192.168.1.202"    # kubelet listen address
KUBELET_HOSTNAME="--hostname-override=192.168.1.202"    # kubelet hostname
KUBELET_API_SERVER="--api-servers=http://192.168.1.201:8080"    # apiserver address

Enable and start the services:

systemctl start docker && systemctl enable docker
systemctl start kubelet && systemctl enable kubelet
systemctl start kube-proxy && systemctl enable kube-proxy

  1. Deployment is complete, check the cluster status

kubectl get nodes

[root@node02 kubernetes]# kubectl -s http://192.168.1.201:8080 get nodes -o wide
NAME STATUS AGE EXTERNAL-IP
192.168.1.202 Ready 29s
192.168.1.203 Ready 16m

  1. Install flannel on all hosts

yum install flannel -y

vim /etc/sysconfig/flanneld

FLANNEL_ETCD_ENDPOINTS="http://192.168.1.201:2379"    # set the etcd address

etcdctl mk /atomic.io/network/config '{ "Network": "172.16.0.0/16" }'    # set the container network on the etcd host
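flannel reads this key verbatim and carves per-host subnets out of the Network range, so the value must be valid JSON. A small sketch of the expected value, with a readback command (assuming the same etcdctl v2 CLI) shown as a comment:

```shell
# the JSON flannel expects under /atomic.io/network/config
cfg='{ "Network": "172.16.0.0/16" }'
echo "$cfg"
# verify what etcd actually stored (run on a host with etcdctl):
#   etcdctl get /atomic.io/network/config
```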

Restart the services on the master host:

systemctl start flanneld && systemctl enable flanneld
systemctl restart docker
systemctl restart kube-apiserver
systemctl restart kube-scheduler
systemctl restart kube-controller-manager

Restart the services on the node hosts:

systemctl start flanneld && systemctl enable flanneld
systemctl restart docker
systemctl restart kubelet
systemctl restart kube-proxy
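After the restarts, flanneld writes this host's subnet lease to /run/flannel/subnet.env, and docker0 should pick up an address inside 172.16.0.0/16. A guarded check to run on each host:

```shell
# confirm flannel handed this host a subnet and docker picked it up
if [ -r /run/flannel/subnet.env ]; then
  cat /run/flannel/subnet.env        # FLANNEL_SUBNET=172.16.x.0/24 etc.
  ip -4 addr show docker0 2>/dev/null | grep 'inet ' || echo "docker0 has no IPv4 address yet"
else
  echo "flannel subnet file not present on this host"
fi
```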