
docker install on centos 7.4

hostnamectl set-hostname clusterserver1.rmohan.com
ifconfig -a
vi /etc/selinux/config          # set SELINUX=disabled
systemctl disable firewalld
systemctl stop firewalld
vi /etc/hosts                   # add entries for the cluster hosts
yum install wget
yum update
yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-selinux docker-engine-selinux docker-engine
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum-config-manager --enable docker-ce-edge
yum list docker-ce --showduplicates | sort -r
yum install docker-ce

systemctl enable docker

systemctl start docker

 

Uninstall Docker CE

  1. Uninstall the Docker package:
    $ yum remove docker-ce
    
  2. Images, containers, volumes, or customized configuration files on your host are not automatically removed. To delete all images, containers, and volumes:
    $ rm -rf /var/lib/docker

disable directory browsing apache2.4

To disable directory listings in Apache 2.4, remove Indexes from the Options line of the relevant <Directory> block. Change:
<Directory /var/www/>
        Options Indexes FollowSymLinks
        AllowOverride None
        Require all granted
</Directory>

to:

<Directory /var/www/>
        Options FollowSymLinks
        AllowOverride None
        Require all granted
</Directory>
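Alternatively (a sketch; adjust the path to your own vhost or httpd.conf), you can subtract just the Indexes option so any other inherited Options stay in effect:

```apache
<Directory /var/www/>
        # "-Indexes" disables directory listings without
        # touching the other inherited Options
        Options -Indexes
        AllowOverride None
        Require all granted
</Directory>
```

Reload Apache afterwards (for example, systemctl reload httpd) for the change to take effect.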

Docker on CentOS

Docker is an open-source tool that automates the deployment of applications inside software containers by providing an additional layer of abstraction over operating-system-level virtualization on Linux. Docker makes it easier to create, deploy, and run applications by using containers. Containers allow a developer to package up an application with all of its libraries and dependencies and ship it all out as one package. Unlike a virtual machine, Docker does not create a whole virtual operating system; it allows applications to use the same Linux kernel as the host they run on. This gives a significant performance boost and reduces the size of the application.

Docker is preferred by many system admins and developers for application development and production environments. For system admins, Docker gives flexibility and reduces the number of systems needed because of its small footprint and low overhead.

In this tutorial, we'll learn how to install Docker on CentOS 7.

##Requirements

CentOS-7 64-bit configured with a static IP address. The kernel version must be at least 3.10.
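You can verify the kernel requirement before going further. A minimal sketch (the helper function name is just an example):

```shell
# Sketch: check that the running kernel meets Docker's 3.10 minimum.
# kernel_supported takes a "uname -r"-style string, returns 0 if >= 3.10.
kernel_supported() {
    major="${1%%.*}"        # "3.10.0-693.el7" -> "3"
    rest="${1#*.}"          # -> "10.0-693.el7"
    minor="${rest%%.*}"     # -> "10"
    minor="${minor%%-*}"    # guard against "3.10-..." forms
    [ "$major" -gt 3 ] || { [ "$major" -eq 3 ] && [ "$minor" -ge 10 ]; }
}

if kernel_supported "$(uname -r)"; then
    echo "kernel $(uname -r) is new enough for Docker"
else
    echo "kernel $(uname -r) is older than 3.10; upgrade first" >&2
fi
```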

##Installing Docker

It is recommended to update your system before installing docker.

To update the system, Run:

sudo yum -y update

The Docker package is not available in the default CentOS-7 repositories, so you need to create a repo file for Docker.

You can create docker repo by creating docker.repo file inside /etc/yum.repos.d/ directory.

sudo nano /etc/yum.repos.d/docker.repo

Add the following line:


[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg

After creating docker repo, run the following command to install docker.

sudo yum install docker-engine

Once docker has been installed, it is recommended to start docker service and set docker service to start at boot.

You can do this by running the following command:

sudo systemctl start docker.service
sudo systemctl enable docker.service

You can check the status of Docker by running the following command:

sudo systemctl status docker.service

Finally, you can verify that the latest version of Docker is installed with the following command:

sudo docker version

You should see output something like this:

Client:
 Version:      1.10.0
 API version:  1.22
 Go version:   go1.5.3
 Git commit:   590d5108
 Built:        Thu Feb  4 18:36:33 2016
 OS/Arch:      linux/amd64

Server:
Version: 1.10.0
API version: 1.22
Go version: go1.5.3
Git commit: 590d5108
Built: Thu Feb 4 18:36:33 2016
OS/Arch: linux/amd64

##Working With Docker

###Downloading a Docker Container

Let’s start using Docker. You will need an image present on your host machine from which containers can be created. You can download the CentOS Docker image by running the following command:

sudo docker pull centos

You should see output something like this:

Using default tag: latest
latest: Pulling from library/centos

a3ed95caeb02: Pull complete
196355c4b639: Pull complete
Digest: sha256:381f21e4c7b3724c6f420b2bcfa6e13e47ed155192869a2a04fa10f944c78476
Status: Downloaded newer image for centos:latest

###Running a Docker Container

Now you can run a single command to set up a basic CentOS container with a bash shell.

To run a command in a new container, run:

sudo docker run -i -t centos /bin/bash

Note: the -i flag keeps STDIN open and the -t flag allocates a pseudo-TTY.

That’s it. Now you’re using a bash shell inside of a centos docker container.

There are many images already available on the Docker Hub. You can find an image through a search. For example, to search for CentOS images, run:

sudo docker search centos

You should see output something like this:

NAME                            DESCRIPTION                                     STARS     OFFICIAL   AUTOMATED
centos                          The official build of CentOS.                   2079      [OK]       
jdeathe/centos-ssh              CentOS-6 6.7 x86_64 / CentOS-7 7.2.1511 x8...   19                   [OK]
jdeathe/centos-ssh-apache-php   CentOS-6 6.7 x86_64 / Apache / PHP / PHP M...   14                   [OK]
million12/centos-supervisor     Base CentOS-7 with supervisord launcher, h...   10                   [OK]
blalor/centos                   Bare-bones base CentOS 6.5 image                8                    [OK]
nimmis/java-centos              This is docker images of CentOS 7 with dif...   8                    [OK]
torusware/speedus-centos        Always updated official CentOS docker imag...   7                    [OK]
centos/mariadb55-centos7                                                        3                    [OK]
nickistre/centos-lamp           LAMP on centos setup                            3                    [OK]
nathonfowlie/centos-jre         Latest CentOS image with the JRE pre-insta...   3                    [OK]
consol/sakuli-centos-xfce       Sakuli end-2-end testing and monitoring co...   2                    [OK]
timhughes/centos                Centos with systemd installed and running       1                    [OK]
darksheer/centos                Base Centos Image -- Updated hourly             1                    [OK]
softvisio/centos                Centos                                          1                    [OK]
lighthopper/orientdb-centos     A Dockerfile for creating an OrientDB imag...   1                    [OK]
yajo/centos-epel                CentOS with EPEL and fully updated              1                    [OK]
grayzone/centos                 auto build for centos.                          0                    [OK]
ustclug/centos                   USTC centos                                    0                    [OK]
januswel/centos                 yum update-ed CentOS image                      0                    [OK]
dmglab/centos                   CentOS with some extras - This is for the ...   0                    [OK]
jsmigel/centos-epel             Docker base image of CentOS w/ EPEL installed   0                    [OK]
grossws/centos                  CentOS 6 and 7 base images with gosu and l...   0                    [OK]
labengine/centos                Centos image base                               0                    [OK]
lighthopper/openjdk-centos      A Dockerfile for creating an OpenJDK image...   0                    [OK]
blacklabelops/centos            CentOS Base Image! Built and Updates Daily!     0                    [OK]

You can also list all available images on your system using the following command:

sudo docker images

You should see output something like this:

REPOSITORY                  TAG                 IMAGE ID            CREATED             SIZE
firefox-instance            latest              8e61bff07fa0        3 weeks ago         354.6 MB
centos                      latest              d0e7f81ca65c        4 weeks ago         196.6 MB
debian                      latest              f50f9524513f        4 weeks ago         125.1 MB
apache/ubuntu               latest              196655130bc9        4 weeks ago         224.1 MB
apache-instance             latest              7da78270c5f7        4 weeks ago         224.1 MB
apache-instance             ubuntu              7da78270c5f7        4 weeks ago         224.1 MB
hitjethva/apache-instance   ubuntu              7da78270c5f7        4 weeks ago         224.1 MB

##Docker basic command lines

Let’s start by seeing all the commands Docker makes available.

You can list all available Docker commands by running docker with no arguments:

sudo docker

You should see the following output:

Commands:
```
    attach    Attach to a running container
    build     Build an image from a Dockerfile
    commit    Create a new image from a container's changes
    cp        Copy files/folders between a container and the local filesystem
    create    Create a new container
    diff      Inspect changes on a container's filesystem
    events    Get real time events from the server
    exec      Run a command in a running container
    export    Export a container's filesystem as a tar archive
    history   Show the history of an image
    images    List images
    import    Import the contents from a tarball to create a filesystem image
    info      Display system-wide information
    inspect   Return low-level information on a container or image
    kill      Kill a running container
    load      Load an image from a tar archive or STDIN
    login     Register or log in to a Docker registry
    logout    Log out from a Docker registry
    logs      Fetch the logs of a container
    network   Manage Docker networks
    pause     Pause all processes within a container
    port      List port mappings or a specific mapping for the CONTAINER
    ps        List containers
    pull      Pull an image or a repository from a registry
    push      Push an image or a repository to a registry
    rename    Rename a container
    restart   Restart a container
    rm        Remove one or more containers
    rmi       Remove one or more images
    run       Run a command in a new container
    save      Save an image(s) to a tar archive
    search    Search the Docker Hub for images
    start     Start one or more stopped containers
    stats     Display a live stream of container(s) resource usage statistics
    stop      Stop a running container
    tag       Tag an image into a repository
    top       Display the running processes of a container
    unpause   Unpause all processes within a container
    update    Update resources of one or more containers
    version   Show the Docker version information
    volume    Manage Docker volumes
    wait      Block until a container stops, then print its exit code
```

Run `docker COMMAND --help` for more information on a command.


To check system-wide information on docker, run:

`sudo docker info`

You should see the following output:

Containers: 6
 Running: 0
 Paused: 0
 Stopped: 6
Images: 22
Server Version: 1.10.0
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 35
 Dirperm1 Supported: false
Execution Driver: native-0.2
Logging Driver: json-file
Plugins:
 Volume: local
 Network: bridge null host
Kernel Version: 3.13.0-32-generic
Operating System: Zorin OS 9
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 3.746 GiB
Name: Vyom-PC
ID: G2NZ:3DDJ:KJFV:HC2E:HR3Y:J4JH:TX2D:EX57:K26Y:3AFH:FGKB:XEIF
Username: hitjethva
Registry: https://index.docker.io/v1/
WARNING: No swap limit support


You can use the following command to list all running containers:

`sudo docker ps`

You can see the running container in the following output:

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
534544f2e924        centos              "/bin/bash"         30 seconds ago      Up 29 seconds                           boring_hamilton

You can also list both running and non-running containers by running the following command:

sudo docker ps -a

Sometimes a container stops because its process ends or because you stop it explicitly. In this situation you can start the container again with its container ID:

sudo docker start "container ID"

You can also stop a running container by running the following command:

sudo docker stop "container ID"

Note: You can find the container ID using the sudo docker ps -a command.

If you would like to save the changes you have made to a container, use the commit command to save it as an image.

sudo docker commit "container ID" image_name

This command turns your container into an image. You can roll back to that image whenever you need it.

If you want to interact with a running container, Docker lets you attach to it using the attach command.

You can use the attach command with the container ID:

sudo docker attach "container ID"

To detach again without stopping the container, press Ctrl-p followed by Ctrl-q.

##Stop and Delete all Containers and Images

To stop all running containers, Run:

sudo docker stop $(sudo docker ps -a -q)

To delete all existing containers, Run:

sudo docker rm $(sudo docker ps -a -q)

To delete all existing images, Run:

sudo docker rmi $(sudo docker images -q -a)

##Conclusion

Congratulations! You now have a CentOS server with Docker installed, ready for your application development environment.

How to sync standby database which is lagging behind from primary database

Primary Database cluster: cluster1.rmohan.com
Standby Database cluster: cluster2.rmohan.com

Primary Database: prim
Standby database: stand

Database version:11.2.0.1.0

Reason:
1. It might be due to a network outage between the primary and the standby database, leading to archive gaps. Data Guard can detect archive gaps automatically and fetch the missing logs as soon as the connection is re-established.

2. It could also be due to archive logs missing on the primary database, or the archives being corrupted with no valid backups available.

In such cases where the standby lags far behind from the primary database, incremental backups can be used
as one of the methods to roll forward the physical standby database to have it in sync with the primary database.

At primary database:-
SQL> select status,instance_name,database_role from v$database,v$instance;

STATUS       INSTANCE_NAME    DATABASE_ROLE
------------ ---------------- ----------------
OPEN         prim             PRIMARY

SQL> select thread#,max(sequence#) from v$archived_log group by thread#;

   THREAD# MAX(SEQUENCE#)
---------- --------------
         1            214

At standby database:-
SQL> select status,instance_name,database_role from v$database,v$instance;

STATUS       INSTANCE_NAME    DATABASE_ROLE
------------ ---------------- ----------------
OPEN         stand            PHYSICAL STANDBY

SQL> select thread#,max(sequence#) from v$archived_log group by thread#;

   THREAD# MAX(SEQUENCE#)
---------- --------------
         1             42
So we can see the standby database has an archive gap of around (214-42) = 172 logs.
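As a cross-check on the standby, you can also ask which logs have actually been applied (an illustrative query; the APPLIED column of v$archived_log is available on 11g):

```sql
-- On the standby: highest log sequence actually applied per thread
select thread#, max(sequence#)
  from v$archived_log
 where applied = 'YES'
 group by thread#;
```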

Step 1: Take a note of the Current SCN of the Physical Standby Database.
SQL> select current_scn from v$database;

CURRENT_SCN
-----------
    1022779

Step 2 : Cancel the Managed Recovery Process on the Standby database.
SQL> alter database recover managed standby database cancel;

Database altered.

Step 3: On the Primary database, take the incremental SCN backup from the SCN that is currently recorded on the standby database (1022779)
At primary database:-

RMAN> backup incremental from scn 1022779 database format '/tmp/rman_bkp/stnd_backp_%U.bak';

Starting backup at 28-DEC-14

using channel ORA_DISK_1
backup will be obsolete on date 04-JAN-15
archived logs will not be kept or backed up
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/u01/app/oracle/oradata/prim/system01.dbf
input datafile file number=00002 name=/u01/app/oracle/oradata/prim/sysaux01.dbf
input datafile file number=00005 name=/u01/app/oracle/oradata/prim/example01.dbf
input datafile file number=00003 name=/u01/app/oracle/oradata/prim/undotbs01.dbf
input datafile file number=00004 name=/u01/app/oracle/oradata/prim/users01.dbf
channel ORA_DISK_1: starting piece 1 at 28-DEC-14
channel ORA_DISK_1: finished piece 1 at 28-DEC-14
piece handle=/tmp/rman_bkp/stnd_backp_0cpr8v08_1_1.bak tag=TAG20141228T025048 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:25

using channel ORA_DISK_1
backup will be obsolete on date 04-JAN-15
archived logs will not be kept or backed up
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
channel ORA_DISK_1: starting piece 1 at 28-DEC-14
channel ORA_DISK_1: finished piece 1 at 28-DEC-14
piece handle=/tmp/rman_bkp/stnd_backp_0dpr8v12_1_1.bak tag=TAG20141228T025048 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 28-DEC-14

We took the backup inside the /tmp/rman_bkp directory; ensure that it contains nothing besides the incremental SCN backups.

Step 4: Take the standby controlfile backup of the Primary database controlfile.

At primary database:

RMAN> backup current controlfile for standby format '/tmp/rman_bkp/stnd_%U.ctl';

Starting backup at 28-DEC-14
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including standby control file in backup set
channel ORA_DISK_1: starting piece 1 at 28-DEC-14
channel ORA_DISK_1: finished piece 1 at 28-DEC-14
piece handle=/tmp/rman_bkp/stnd_0epr8v4e_1_1.ctl tag=TAG20141228T025301 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 28-DEC-14

Starting Control File and SPFILE Autobackup at 28-DEC-14
piece handle=/u01/app/oracle/flash_recovery_area/PRIM/autobackup/2014_12_28/o1_mf_s_867466384_b9y8sr8k_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 28-DEC-14

Step 5: Transfer the backups from the Primary cluster to the Standby cluster.
[oracle@cluster1 ~]$ cd /tmp/rman_bkp/
[oracle@cluster1 rman_bkp]$ ls -ltrh
total 24M
-rw-r-----. 1 oracle oinstall 4.2M Dec 28 02:51 stnd_backp_0cpr8v08_1_1.bak
-rw-r-----. 1 oracle oinstall 9.7M Dec 28 02:51 stnd_backp_0dpr8v12_1_1.bak
-rw-r-----. 1 oracle oinstall 9.7M Dec 28 02:53 stnd_0epr8v4e_1_1.ctl

[oracle@cluster1 rman_bkp]$ scp *.* oracle@cluster2:/tmp/rman_bkp/
oracle@cluster2's password:
stnd_0epr8v4e_1_1.ctl          100% 9856KB   9.6MB/s   00:00
stnd_backp_0cpr8v08_1_1.bak    100% 4296KB   4.2MB/s   00:00
stnd_backp_0dpr8v12_1_1.bak    100% 9856KB   9.6MB/s   00:00

Step 6: On the standby cluster, connect the Standby Database through RMAN and catalog the copied
incremental backups so that the Controlfile of the Standby Database would be aware of these
incremental backups.

At standby database:-


[oracle@cluster2 ~]$ rman target /
RMAN> catalog start with '/tmp/rman_bkp';

using target database control file instead of recovery catalog
searching for all files that match the pattern /tmp/rman_bkp

List of Files Unknown to the Database
=====================================
File Name: /tmp/rman_bkp/stnd_0epr8v4e_1_1.ctl
File Name: /tmp/rman_bkp/stnd_backp_0dpr8v12_1_1.bak
File Name: /tmp/rman_bkp/stnd_backp_0cpr8v08_1_1.bak

Do you really want to catalog the above files (enter YES or NO)? yes
cataloging files…
cataloging done

List of Cataloged Files
=======================
File Name: /tmp/rman_bkp/stnd_0epr8v4e_1_1.ctl
File Name: /tmp/rman_bkp/stnd_backp_0dpr8v12_1_1.bak
File Name: /tmp/rman_bkp/stnd_backp_0cpr8v08_1_1.bak

Step 7. Shut down the standby database and start it in mount stage for recovery.
SQL> shut immediate;
SQL> startup mount;

Step 8. Now recover the database:

[oracle@cluster2 ~]$ rman target /
RMAN> recover database noredo;

Starting recover at 28-DEC-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=25 device type=DISK
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00001: /u01/app/oracle/oradata/stand/system01.dbf
destination for restore of datafile 00002: /u01/app/oracle/oradata/stand/sysaux01.dbf
destination for restore of datafile 00003: /u01/app/oracle/oradata/stand/undotbs01.dbf
destination for restore of datafile 00004: /u01/app/oracle/oradata/stand/users01.dbf
destination for restore of datafile 00005: /u01/app/oracle/oradata/stand/example01.dbf
channel ORA_DISK_1: reading from backup piece /tmp/rman_bkp/stnd_backp_0cpr8v08_1_1.bak
channel ORA_DISK_1: piece handle=/tmp/rman_bkp/stnd_backp_0cpr8v08_1_1.bak tag=TAG20141228T025048
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:03

Finished recover at 28-DEC-14
exit.

Step 9: Shut down the physical standby database, start it in nomount stage, and restore the standby controlfile backup taken from the primary database.

SQL> shut immediate;
SQL> startup nomount;

[oracle@cluster2 rman_bkp]$ rman target /

Recovery Manager: Release 11.2.0.1.0 - Production on Sun Dec 28 03:08:45 2014

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: PRIM (not mounted)

RMAN> restore standby controlfile from '/tmp/rman_bkp/stnd_0epr8v4e_1_1.ctl';

Starting restore at 28-DEC-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=20 device type=DISK

channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/u01/app/oracle/oradata/stand/stand.ctl
output file name=/u01/app/oracle/flash_recovery_area/stand/stand.ctl
Finished restore at 28-DEC-14

Step 10: Shut down and mount the standby database, so that it mounts with the new controlfile restored in the previous step.

SQL> shut immediate;
SQL> startup mount;

At standby database:-
SQL> alter database recover managed standby database disconnect from session;

At primary database:-
SQL> select thread#,max(sequence#) from v$archived_log group by thread#;

   THREAD# MAX(SEQUENCE#)
---------- --------------
         1            215

At standby database:-
SQL> select thread#,max(sequence#) from v$archived_log group by thread#;

   THREAD# MAX(SEQUENCE#)
---------- --------------
         1            215

Step 11. Now cancel the recovery and open the database:
SQL> alter database recover managed standby database cancel;

SQL> alter database open;
Database altered.

SQL> alter database recover managed standby database using current logfile disconnect from session;
Database altered.

SQL> select open_mode from v$database;

OPEN_MODE
--------------------
READ ONLY WITH APPLY

Now the standby database is in sync with the primary database.
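Going forward, you can keep an eye on the standby's lag (a sketch using v$dataguard_stats, which is available on 11g physical standbys):

```sql
-- On the standby: current transport and apply lag as interval strings
select name, value, time_computed
  from v$dataguard_stats
 where name in ('transport lag', 'apply lag');
```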

Kubernetes basic concepts study notes

Kubernetes (commonly known as K8s) is an open-source system for automatically deploying, scaling, and managing containerized applications, and is an “open source” version of Borg, Google’s internal cluster management tool.

Kubernetes is currently recognized as the most advanced container cluster management tool. Since the release of version 1.0, Kubernetes has been developing at an ever faster pace and has gained full support from container-ecosystem companies, including CoreOS, Rancher, and many public cloud providers. Vendors such as Huawei also build their container services on infrastructure derived from Kubernetes. It can be said that Kubernetes is Docker’s strongest competitor in container cluster management and service orchestration (Docker Swarm).

Kubernetes defines a set of building blocks that together provide a mechanism for deploying, maintaining, and scaling applications. The components that make up Kubernetes are designed to be loosely coupled and extensible so that they can accommodate a wide variety of workloads. Extensibility is largely provided by the Kubernetes API, which is used both by internal components and by extensions and containers running on Kubernetes.

Because Kubernetes is a system made up of many components, it can still be difficult to install and deploy. Kubernetes is developed by Google, and some of its dependencies are hosted on servers that may be blocked in some regions.

Of course, there are quick installation tools such as kubeadm, the official tool provided by Kubernetes to quickly install and initialize a Kubernetes cluster. It is currently still in an incubating state and is updated in step with each Kubernetes release; for now, kubeadm is not recommended for production environments.
2. Kubernetes features

Kubernetes features:

  • Simple : lightweight, simple, easy to use
  • Portable : public, private, hybrid, multi-cloud
  • Extensible : modular, plug-in, mountable, combinable
  • Self-healing : Automatic placement, automatic restart, automatic replication

In layman's terms:

  • Automated container deployment and replication
  • Expand or shrink containers at any time
  • Organize containers into groups and provide load balancing among containers
  • Easily upgrade new versions of application containers
  • Provide container flexibility to replace a container if it fails

3. Kubernetes terminology

Kubernetes terminology:

  • Master Node : The machine that controls the Kubernetes nodes. All task assignments originate here.
  • Minion Node : A machine that performs the tasks it is assigned, controlled by the master node.
  • Namespace : A namespace is an abstract collection of a set of resources and objects. For example, it can be used to divide the objects in the system into different project groups or user groups. Common objects such as pods, services, replication controllers, and deployments belong to a namespace (by default, default), while objects such as nodes and persistentVolumes do not belong to any namespace.
  • Pod : A container group deployed on a single node, containing one or more containers. A Pod is the smallest deployment unit that Kubernetes creates, schedules, and manages; all containers in the same Pod share the same IP address, IPC namespace, hostname, and other resources. The Pod abstracts network and storage away from the underlying containers so that you can move containers around the cluster more easily.
  • Deployment : Deployment is the new generation of objects for Pod management. Compared with the Replication Controller, it provides more complete functionality and is easier and more convenient to use.
  • Replication Controller : The replication controller manages the lifecycle of pods. It ensures that a specified number of pods are running at any given time, creating or deleting pods as needed.
  • Service : A service provides a single stable name and address for a group of pods. The service decouples the work definition from the pods behind it; the Kubernetes service proxy automatically routes service requests to the correct pod, wherever it moves in the cluster, even if it has been replaced.
  • Label : Labels are used to organize and select groups of objects based on key-value pairs; they are used throughout the Kubernetes components.
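A minimal Pod manifest ties several of these terms together (an illustrative sketch; the name and image below are example values):

```yaml
# A single-container Pod in the default namespace, selectable by its label
apiVersion: v1
kind: Pod
metadata:
  name: web-pod            # hypothetical name
  labels:
    app: web               # a Label that Services can use to select this Pod
spec:
  containers:
    - name: web
      image: nginx:alpine  # any container image works here
      ports:
        - containerPort: 80
```

You would create it with kubectl apply -f pod.yaml.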

In Kubernetes, all containers run in a Pod. A Pod can hold a single container or multiple cooperating containers; in the latter case, the containers in the Pod are guaranteed to be placed on the same machine and can share resources. A Pod can also contain zero or more volumes; a volume can be private to one container or shared between the containers in a Pod. For each Pod created by the user, the system finds a machine that is healthy and has sufficient capacity, and starts the corresponding containers there. If a container fails, it is automatically restarted by Kubernetes' node agent, which is called the Kubelet. However, if the Pod or its machine fails, it will not be automatically moved or restarted unless the user also defines a Replication Controller.

Replicated sets of Pods can collectively form an entire application, a microservice, or one layer of a multi-tier application. Once Pods are created, the system continuously monitors their health and the health of the machines they are running on. If a Pod fails due to a software problem or a machine failure, the Replication Controller automatically creates a new Pod on a healthy machine.

Kubernetes supports a unique network model. Kubernetes encourages the use of flat address spaces and does not dynamically allocate ports, but rather allows users to choose any port that suits them. To achieve this, it assigns each Pod an IP address.

Kubernetes provides a Service abstraction that offers a stable IP address and DNS name corresponding to a dynamic set of pods, such as the pods that make up a microservice. The pod group is defined by a label selector, so you can target any group of pods. When a container running in a Kubernetes Pod connects to this address, the connection is forwarded by a local proxy (the kube-proxy). The proxy runs on the source machine, forwards the connection to a corresponding back-end pod, and picks the exact back-end by a round-robin policy to balance load. The kube-proxy also tracks dynamic changes in the backend pod group, such as a pod being replaced by a new pod on a new machine, so the service's IP and DNS name never need to change.

Every Kubernetes resource, such as a pod, is identified by a URI and has a UID. The key components of the URI are the type of the object (e.g., Pod), the object's name, and the object's namespace. For a particular object type, each name is unique within its namespace; if an object's name is given without a namespace, it is assumed to be in the default namespace. The UID is unique across both time and space.


More about Service:

  • Service is an abstraction of an application service that provides load balancing and service discovery through labels. The Pod IPs and ports that match the labels form the Service's endpoints, across which kube-proxy load-balances traffic sent to the service IP.
  • Each Service is automatically assigned a cluster IP (a virtual address reachable only within the cluster) and a DNS name, through which other containers can access the service without knowing anything about how the backend containers operate.
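As a concrete sketch, a Service selecting Pods by label might look like the following manifest. The names, labels, and ports here are invented for illustration only:

```yaml
# Hypothetical Service manifest: fronts all Pods labeled app=myweb
apiVersion: v1
kind: Service
metadata:
  name: myweb-svc        # DNS name other Pods can use to reach the service
spec:
  selector:
    app: myweb           # label selector defining the backend Pod group
  ports:
  - port: 80             # stable Service port
    targetPort: 8080     # container port on the backend Pods
```

Any Pod carrying the `app: myweb` label automatically joins this Service's endpoints; Pods that disappear are removed, without the Service's IP or DNS name changing.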

Kubernetes components

The main components are:

  • kubectl : the client command-line tool; it formats commands and sends them to kube-apiserver, acting as the operational entry point to the whole system.
  • kube-apiserver : the control entry point for the entire system, exposing its interface as a REST API service.
  • kube-controller-manager : runs the system's background tasks, including tracking node status, maintaining the number of Pods, and managing the association between Pods and Services.
  • kube-scheduler : schedules Pods to Nodes; it manages node resources, accepts Pods created through kube-apiserver, and assigns them to a node.
  • etcd : responsible for service discovery and configuration sharing between nodes.
  • kube-proxy : runs on each compute node and acts as the network proxy for Pods; it periodically fetches Service information from etcd and applies the corresponding forwarding policy.
  • kubelet : runs on each compute node as an agent; it accepts the Pods assigned to its node, manages their containers, and periodically reports container status back to kube-apiserver.
  • DNS : an optional DNS service that creates DNS records for each Service object so that all Pods can reach Services through DNS.
  • flannel : an overlay-network tool designed by the CoreOS team for Kubernetes; it must be downloaded and deployed separately. When Docker starts, it assigns an IP address range used to talk to its containers. Left unmanaged, that range may be identical on every machine, so containers can communicate only on their own host and cannot reach Docker containers on other machines. Flannel re-plans IP address usage across all nodes in the cluster so that containers on different nodes obtain non-conflicting addresses on the same internal network and can communicate with each other directly by IP.
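For intuition, flannel's per-node subnetting can be pictured like this. This is a toy sketch, not real flannel logic (real flannel allocates subnet leases through etcd), and the 10.244.0.0/16 cluster CIDR is just an assumed example:

```shell
# Toy illustration of flannel-style subnet planning, NOT real flannel code.
# Assume a hypothetical cluster CIDR of 10.244.0.0/16; each node gets a /24.
cluster_prefix="10.244"

subnet_for_node() {            # $1 = node index
  echo "${cluster_prefix}.$1.0/24"
}

subnet_for_node 1   # node 1's containers draw IPs from 10.244.1.0/24
subnet_for_node 2   # node 2's containers draw IPs from 10.244.2.0/24
```

Because each node's containers draw addresses from a distinct, non-overlapping range, a container IP is routable from any other node.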

The master node contains the following components:

docker
etcd
kube-apiserver
kube-controller-manager
kubelet
kube-scheduler

A minion (worker) node contains the following components:

Docker
kubelet
kube-proxy


Tomcat log cutting and regular deletion


In a Tomcat environment, if log files are allowed to grow indefinitely, the disk will eventually fill up.
Especially when log files grow quickly, cutting logs by date and deleting old ones is essential maintenance. The following describes a method for cutting Tomcat's log files.
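The core of the approach is that cronolog expands a strftime-style date template into a fresh file name each day, so yesterday's log is left behind as a closed file. You can preview the expansion with `date`, since it uses the same format codes as the template configured in catalina.sh below:

```shell
# Preview today's log file name using the same strftime template that
# cronolog will expand (template matches the catalina.sh change below).
template='catalina-%Y-%m-%d.out'
logfile=$(date +"$template")
echo "$logfile"   # e.g. catalina-2018-02-08.out
```

At midnight the expansion changes, cronolog opens the new file, and the previous day's file is ready to be archived or deleted.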

[root@server1 ~]# cat /etc/redhat-release
CentOS release 6.5 (Final)
[root@server1 ~]# uname -r
2.6.32-431.el6.x86_64
[root@server1 ~]# uname -m
x86_64

[root@server1 ~]# java -version
java version "1.7.0_67"
Java(TM) SE Runtime Environment (build 1.7.0_67-b01)
Java HotSpot(TM) Server VM (build 24.65-b04, mixed mode)

[root@server1 ~]# /opt/gw/tomcat7/bin/catalina.sh version
Using CATALINA_BASE: /opt/gw/tomcat7
Using CATALINA_HOME: /opt/gw/tomcat7
Using CATALINA_TMPDIR: /opt/gw/tomcat7/temp
Using JRE_HOME: /usr/local/jdk1.7
Using CLASSPATH: /opt/gw/tomcat7/bin/bootstrap.jar:/opt/gw/tomcat7/bin/tomcat-juli.jar
Server version: Apache Tomcat/7.0.57
Server built: Nov 3 2014 08:39:16 UTC
Server number: 7.0.57.0
OS Name: Linux
OS Version: 2.6.32-431.el6.x86_64
Architecture: i386
JVM Version: 1.7.0_67-b01
JVM Vendor: Oracle Corporation

cd /usr/local/src
wget https://files.cnblogs.com/files/crazyzero/cronolog-1.6.2.tar.gz
[root@mohan src]# md5sum cronolog-1.6.2.tar.gz
a44564fd5a5b061a5691b9a837d04979 cronolog-1.6.2.tar.gz

[root@mohan src]# tar xf cronolog-1.6.2.tar.gz
[root@mohan src]# cd cronolog-1.6.2
[root@mohan cronolog-1.6.2]# ./configure
[root@mohan cronolog-1.6.2]# make && make install
[root@mohan cronolog-1.6.2]# which cronolog
/usr/local/sbin/cronolog

[root@server1 ~]# which cronolog
/usr/local/sbin/cronolog

Chapter 3: Configuring Tomcat log cutting
To configure log cutting, you simply modify catalina.sh (on Windows it is catalina.bat, which is not covered here).
The relevant lines are at roughly lines 380-390 of the file; change them as follows:

Original:

org.apache.catalina.startup.Bootstrap "$@" start \
>> "$CATALINA_OUT" 2>&1 "&"

Modified to pipe output through cronolog:

org.apache.catalina.startup.Bootstrap "$@" start \
2>&1 |/usr/local/sbin/cronolog "$CATALINA_BASE/logs/catalina-%Y-%m-%d.out" &

Schedule cleanup via cron. To delete logs older than 7 days every midnight:

00 00 * * * /bin/find /opt/gdyy/tomcat7/logs/ -type f -mtime +7 | xargs rm -f &>/dev/null

# archive gw logs older than 7 days instead of deleting, by liutao at 2018-02-08
00 00 * * * /bin/find /opt/gw/tomcat7/logs/ -type f -mtime +7 | xargs -i mv {} /data/bak/gw_log/ &>/dev/null
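The `find -mtime +7` cleanup can be verified safely in a scratch directory before putting it in cron. The file names here are made up, and `touch -d` assumes GNU coreutils:

```shell
# Safe dry run of the age-based cleanup in a throwaway directory.
dir=$(mktemp -d)
touch -d '10 days ago' "$dir/old.log"   # simulate a stale log (GNU touch)
touch "$dir/new.log"                    # a fresh log that must survive

# Same pattern as the cron job: delete files older than 7 days
find "$dir" -type f -mtime +7 | xargs rm -f

ls "$dir"   # only new.log remains
```

Note that `-mtime +7` means "more than 7 whole 24-hour periods ago", so a file exactly 7 days old is kept.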

Tomcat load balancing using Nginx reverse proxies.

This essay focuses on Tomcat clusters and Tomcat load balancing using Nginx reverse proxies.

First, let's cover some basic concepts:
Cluster
Simply put, N servers form a loosely coupled multiprocessor system that appears externally as a single server, communicating internally over the network. The N servers cooperate to jointly carry the request load of a website. In one author's words, a cluster is "the same business, deployed on multiple servers." The most important task in a cluster is task scheduling.
Load Balancing
Requests are distributed to the servers in the cluster according to a load policy, so that the whole server group processes the website's requests and completes the work jointly.
2. The installation environment is as follows:
Tencent Cloud host running CentOS 7.3 (64-bit)
Nginx 1.7.4
JDK8 and Tomcat8

Configure Nginx web reverse proxy to implement two Tomcat load balancing:

-- Install and configure Tomcat
tar -zxvf apache-tomcat-8.5.28.tar.gz
cp -rf apache-tomcat-8.5.28 /usr/local/tomcat1
mv apache-tomcat-8.5.28 /usr/local/tomcat2

-- Modify the tomcat1 port numbers
$ cd /usr/local/tomcat1/conf/
$ cp server.xml server.xml.bak
$ cp web.xml web.xml.bak
$ vi server.xml

-- Modify the tomcat2 port numbers
$ cd /usr/local/tomcat2/conf/
$ cp server.xml server.xml.bak
$ cp web.xml web.xml.bak
$ vi server.xml

-- Set tomcat1 to start automatically at boot
-- Copy /usr/local/tomcat1/bin/catalina.sh to the /etc/init.d directory and rename it to tomcat1:
# cp /usr/local/tomcat1/bin/catalina.sh /etc/init.d/tomcat1
-- Edit the /etc/init.d/tomcat1 file:
# vi /etc/init.d/tomcat1
-- Add the following near the top of the file (otherwise chkconfig reports "service tomcat1 does not support chkconfig"):

#!/bin/sh
# chkconfig: 2345 10 90
# description:Tomcat1 service

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# ...
# $Id: catalina.sh 1498485 2013-07-01 14:37:43Z markt $
# -----------------------------------------------------------------------------

CATALINA_HOME=/usr/local/tomcat1
JAVA_HOME=/usr/local/jdk8

# OS specific support. $var _must_ be set to either true or false.

-- Add the tomcat1 service:
# chkconfig --add tomcat1
-- Set tomcat1 to start at boot:
# chkconfig tomcat1 on

-- Now set up tomcat2 the same way.
-- Copy /usr/local/tomcat2/bin/catalina.sh to the /etc/init.d directory and rename it to tomcat2:
# cp /usr/local/tomcat2/bin/catalina.sh /etc/init.d/tomcat2
-- Edit the /etc/init.d/tomcat2 file:
# vi /etc/init.d/tomcat2
-- Add the following near the top of the file (otherwise chkconfig reports "service tomcat2 does not support chkconfig"):
#!/bin/sh
# chkconfig: 2345 10 90
# description: Tomcat2 Service

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# ...
# -----------------------------------------------------------------------------

CATALINA_HOME=/usr/local/tomcat2
JAVA_HOME=/usr/local/jdk8

# OS specific support. $var must be set to either true or false.

-- Add the tomcat2 service:
# chkconfig --add tomcat2
-- Set tomcat2 to start at boot:
# chkconfig tomcat2 on

Both Tomcat instances are now installed. Start each one and confirm that the environment printed corresponds to the right instance:

[root@VM_177_101_centos src]# service tomcat1 start
Using CATALINA_BASE: /usr/local/tomcat1
Using CATALINA_HOME: /usr/local/tomcat1
Using CATALINA_TMPDIR: /usr/local/tomcat1/temp
Using JRE_HOME: /usr/local/jdk8
Using CLASSPATH: /usr/local/tomcat1/bin/bootstrap.jar:/usr/local/tomcat1/bin/tomcat-juli.jar
Tomcat started.

[root@VM_177_101_centos src]# service tomcat2 start
Using CATALINA_BASE: /usr/local/tomcat2
Using CATALINA_HOME: /usr/local/tomcat2
Using CATALINA_TMPDIR: /usr/local/tomcat2/temp
Using JRE_HOME: /usr/local/jdk8
Using CLASSPATH: /usr/local/tomcat2/bin/bootstrap.jar:/usr/local/tomcat2/bin/tomcat-juli.jar
Tomcat started.

Finally, configure Nginx:

-- Switch to the configuration directory
cd /usr/local/nginx/conf
-- Edit the configuration file
vi nginx.conf

-- Some common configuration directives --
--worker_processes: the number of worker processes; more than one can be configured
--worker_connections: the maximum number of connections per worker process
--server: each server block acts as a virtual/proxy server
--listen: the port to listen on, 80 by default
--server_name: the domain name(s) for this server; multiple names are separated by spaces (we are local, therefore localhost)
--location: path matching; configuring / matches all requests
--index: the default files served when no page is specified; multiple values are separated by spaces
--proxy_pass: forwards requests to a custom server or server list
--upstream name {}: defines a server cluster (upstream group)

-- To access Tomcat through Nginx, modify the server section of the configuration:

server
{
listen 80 default;
charset utf-8;
server_name localhost;
access_log logs/host.access.log;

location / {
proxy_pass http://localhost:8080;
proxy_redirect default;
}
}
-- The proxy is now complete: all requests pass through the proxy server to reach the backend server.

Next, load balancing is implemented. During installation, tomcat1 was configured on port 8080 and tomcat2 on port 8081. Now we need to define the upstream server group in the configuration file:

# The cluster
upstream testcomcat {
# the greater the weight, the greater the probability a request is allocated to that server
server 127.0.0.1:8080 weight=1;
server 127.0.0.1:8081 weight=2;
}
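With weight=1 and weight=2, Nginx sends roughly one request in three to :8080 and two in three to :8081. A minimal sketch of what that weighting means (illustrative only; Nginx internally uses a smooth weighted round-robin algorithm rather than a literal expanded list):

```shell
# Weighted selection is equivalent to a 1:2 pick list over a window of
# 3 requests: the backend ports are repeated according to their weight.
picks="8080 8081 8081"

to_8081=0; total=0
for p in $picks; do
  total=$((total + 1))
  if [ "$p" = "8081" ]; then to_8081=$((to_8081 + 1)); fi
done
echo "$to_8081 of $total requests go to :8081"
```

Raising a server's weight therefore shifts proportionally more traffic to it without changing the configuration structure.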

server
{
listen 80 default;
charset utf-8;
access_log logs/host.access.log;

location / {
proxy_pass http://testcomcat;
proxy_redirect default;
}
}

-- To see that the two instances are not the same, I slightly modified the index.jsp page under each Tomcat root, adding TEST1 and TEST2 respectively for easy distinction. Reload Nginx, enter the IP in the browser address bar, and refresh the page several times; you will see it switch between the two servers:
service nginx reload

Docker WebSphere


Step 1: docker websphere
1. docker pull ibmcom/websphere-traditional:8.5.5.12-profile

2. docker run --name websphere -h test -e UPDATE_HOSTNAME=true -p 9043:9043 -p 9443:9443 -d ibmcom/websphere-traditional:8.5.5.12-profile

3. docker exec websphere cat /tmp/PASSWORD

4. docker run --name test -h test -v $(pwd)/PASSWORD:/tmp/PASSWORD -p 9045:9043 -p 9445:9443 -d ibmcom/websphere-traditional:8.5.5.12-profile

5. WebSphere admin console: https://172.10.21.30:9043/ibm/console/login.do?action=secure

Install image

The ibmcom/websphere-traditional:install Docker image contains IBM WebSphere Application Server traditional for Developers and can be started by:
1. Running the image using default values:

docker run --name test -h test -p 9043:9043 -p 9443:9443 -d \
ibmcom/websphere-traditional:install

2. Running the image while passing values for the environment variables:

docker run --name test -h test -e HOST_NAME=test -p 9043:9043 -p 9443:9443 -d \
-e PROFILE_NAME=AppSrv02 -e CELL_NAME=DefaultCell02 -e NODE_NAME=DefaultNode02 \
ibmcom/websphere-traditional:install

•PROFILE_NAME (optional, default is ‘AppSrv01’)
•CELL_NAME (optional, default is ‘DefaultCell01’)
•NODE_NAME (optional, default is ‘DefaultNode01’)
•HOST_NAME (optional, default is ‘localhost’)

Profile image

The ibmcom/websphere-traditional:profile Docker image contains IBM WebSphere Application Server traditional for Developers with the profile already created and can be started by:
1. Running the image using default values:

docker run --name test -h test -p 9043:9043 -p 9443:9443 -d \
ibmcom/websphere-traditional:profile

2. Running the image while passing values for the environment variables:

docker run --name test -h test -e UPDATE_HOSTNAME=true -p 9043:9043 -p 9443:9443 -d \
ibmcom/websphere-traditional:profile

•UPDATE_HOSTNAME (optional, set to ‘true’ if the hostname should be updated from the default of ‘localhost’ to match the hostname allocated at runtime)

Admin console

In both cases a secure profile is created with an admin user ID of wsadmin and a generated password. The generated password can be retrieved from the container using the following command:
docker exec test cat /tmp/PASSWORD

It is also possible to specify a password when the container is run by mounting a file containing the password to /tmp/PASSWORD. For example:
docker run --name test -h test -v $(pwd)/PASSWORD:/tmp/PASSWORD \
-p 9043:9043 -p 9443:9443 -d ibmcom/websphere-traditional:install

Once the server has started, you can access the admin console at https://localhost:9043/ibm/console/login.do?action=secure. If you are running in Docker Toolbox, use the value returned by docker-machine ip instead of localhost.

How to Set up Nginx High Availability with Pacemaker and Corosync on CentOS 7


We will create an active-passive (failover) cluster for the Nginx web server using Pacemaker on a CentOS 7 system.
Pacemaker is an open source cluster manager software that achieves maximum high availability of your services. It’s an advanced and scalable HA cluster manager distributed by ClusterLabs.
Corosync Cluster Engine is an open source project derived from the OpenAIS project under new BSD License. It’s a group communication system with additional features for implementing High Availability within applications.
There are several front-ends to Pacemaker. pcsd is a Pacemaker command-line interface and GUI for managing Pacemaker; with its command pcs, we can create and configure a cluster or add new nodes.
Prerequisites

[root@clusterserver1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.20 clusterserver1.rmohan.com clusterserver1
192.168.1.21 clusterserver2.rmohan.com clusterserver2
192.168.1.22 clusterserver3.rmohan.com clusterserver3
[root@clusterserver1 ~]#

Floating IP Address 192.168.1.25
Root Privileges

Now test the hosts’ mapping configuration.

ping -c 3 clusterserver1
ping -c 3 clusterserver2
ping -c 3 clusterserver3

Install Epel Repository and Nginx
In this step, we will install the epel repository and then install the Nginx web server. EPEL or Extra Packages for Enterprise Linux repository is needed for installing Nginx packages.

Install EPEL Repository using the following yum command.

yum -y install epel-release

Now install Nginx web server from the EPEL repository.

yum -y install nginx

systemctl start nginx
systemctl enable nginx
systemctl status nginx

#Run Command on 'clusterserver1'
echo '<h1>clusterserver1 - TEST SERVER1</h1>' > /usr/share/nginx/html/index.html

#Run Command on 'clusterserver2'
echo '<h1>clusterserver2 - TEST SERVER2</h1>' > /usr/share/nginx/html/index.html

#Run Command on 'clusterserver3'
echo '<h1>clusterserver3 - TEST SERVER3</h1>' > /usr/share/nginx/html/index.html

Install and configure Pacemaker, Corosync, and Pcsd
Pacemaker, Corosync, and Pcsd are available in the default system repository. So they all can be installed from the CentOS repository using the following yum command.
yum -y install corosync pacemaker pcs
After the installation has been completed, enable all services to launch automatically at system boot using the systemctl commands below.
systemctl enable pcsd
systemctl enable corosync
systemctl enable pacemaker
Now start the pcsd Pacemaker command line interface on all servers.
systemctl start pcsd
Next, create a new password for ‘hacluster’ user and use the same password for all servers. This user has been created automatically during software installation.
Here’s how you configure a password for the ‘hacluster’ user.
passwd hacluster
Enter new password:
The high-availability software stack (Pacemaker, Corosync, and pcsd) is now installed on the system.

Create and Configure the Cluster

In this step, we will create a new cluster of 3 CentOS servers, then configure the floating IP address and add the Nginx resources.
To create the cluster, we first need to authorize all servers using the pcs command and the hacluster user.
Authorize all servers with the pcs command, using the hacluster user and password.
pcs cluster auth clusterserver1 clusterserver2 clusterserver3
Username: hacluster
Password: test123

[root@clusterserver1 ~]# pcs cluster auth clusterserver1 clusterserver2 clusterserver3
Username: hacluster
Password:
clusterserver3: Authorized
clusterserver2: Authorized
clusterserver1: Authorized
[root@clusterserver1 ~]#

Now it's time to set up the cluster. Define the cluster name and all the servers that will be part of the cluster.

pcs cluster setup --name mohan_cluster clusterserver1 clusterserver2 clusterserver3

[root@clusterserver1 ~]# pcs cluster setup --name mohan_cluster clusterserver1 clusterserver2 clusterserver3
Destroying cluster on nodes: clusterserver1, clusterserver2, clusterserver3…
clusterserver1: Stopping Cluster (pacemaker)…
clusterserver2: Stopping Cluster (pacemaker)…
clusterserver3: Stopping Cluster (pacemaker)…
clusterserver1: Successfully destroyed cluster
clusterserver3: Successfully destroyed cluster
clusterserver2: Successfully destroyed cluster

Sending ‘pacemaker_remote authkey’ to ‘clusterserver1’, ‘clusterserver2’, ‘clusterserver3’
clusterserver3: successful distribution of the file ‘pacemaker_remote authkey’
clusterserver1: successful distribution of the file ‘pacemaker_remote authkey’
clusterserver2: successful distribution of the file ‘pacemaker_remote authkey’
Sending cluster config files to the nodes…
clusterserver1: Succeeded
clusterserver2: Succeeded
clusterserver3: Succeeded

Synchronizing pcsd certificates on nodes clusterserver1, clusterserver2, clusterserver3…
clusterserver3: Success
clusterserver2: Success
clusterserver1: Success
Restarting pcsd on the nodes in order to reload the certificates…
clusterserver3: Success
clusterserver2: Success

[root@clusterserver1 ~]# pcs cluster start --all
clusterserver3: Starting Cluster…
clusterserver1: Starting Cluster…
clusterserver2: Starting Cluster…
[root@clusterserver1 ~]#

[root@clusterserver1 ~]# pcs status cluster
Cluster Status:
Stack: unknown
Current DC: NONE
Last updated: Fri Mar 10 03:56:45 2017
Last change: Fri Mar 10 03:56:26 2017 by hacluster via crmd on clusterserver1
3 nodes configured
0 resources configured

PCSD Status:
clusterserver1: Online
clusterserver2: Online
clusterserver3: Online
[root@clusterserver1 ~]#

Disable STONITH and Ignore the Quorum Policy
Since we're not using a fencing device, we will disable STONITH. STONITH (Shoot The Other Node In The Head) is Pacemaker's fencing implementation. In production, it's better to leave STONITH enabled.
Disable STONITH with the following pcs command.

pcs property set stonith-enabled=false
Next, for the Quorum policy, ignore it.
pcs property set no-quorum-policy=ignore
Check the property list and make sure stonith and the quorum policy are disabled.

pcs property list

[root@clusterserver1 ~]# pcs property list
Cluster Properties:
cluster-infrastructure: corosync
cluster-name: mohan_cluster
dc-version: 1.1.16-12.el7_4.5-94ff4df
have-watchdog: false
no-quorum-policy: ignore
stonith-enabled: false
[root@clusterserver1 ~]#

The STONITH and Quorum Policy is disabled.

Add the Floating-IP and Resources
Floating IP is an IP address that can be migrated/moved automatically from one server to another in the same data center. We've already defined the floating IP address for Pacemaker high availability as '192.168.1.25'. Now we will add two resources: a floating-IP resource named 'virtual_ip' and a resource for the Nginx web server named 'webserver'.
Add the new resource floating IP address ‘virtual_ip’ using the pcs command as shown below.

pcs resource create virtual_ip ocf:heartbeat:IPaddr2 ip=192.168.1.25 cidr_netmask=32 op monitor interval=30s

Next, add a new resource for the Nginx ‘webserver’.

pcs resource create webserver ocf:heartbeat:nginx configfile=/etc/nginx/nginx.conf op monitor timeout="5s" interval="5s"

Make sure you got no error result, then check the resources available.

pcs status
pcs status resources

You will see two resources ‘virtual_ip’ and a ‘webserver’. New resources for the Floating IP and Nginx web server have been added.

Add Constraint Rules to the Cluster

In this step, we will set up high-availability rules by adding resource constraints with the pcs command-line interface.
Set a colocation constraint for the webserver and virtual_ip resources with score 'INFINITY', so that the webserver always runs on the same node as virtual_ip.

pcs constraint colocation add webserver virtual_ip INFINITY

Next, ensure that the virtual_ip resource is started before the webserver:

pcs constraint order virtual_ip then webserver

Next, stop the cluster and then start again.

pcs cluster stop --all
pcs cluster start --all

[root@clusterserver1 ~]# pcs cluster stop --all
clusterserver1: Stopping Cluster (pacemaker)…
clusterserver3: Stopping Cluster (pacemaker)…
clusterserver2: Stopping Cluster (pacemaker)…
clusterserver3: Stopping Cluster (corosync)…
clusterserver1: Stopping Cluster (corosync)…
clusterserver2: Stopping Cluster (corosync)…
[root@clusterserver1 ~]# pcs cluster start --all
clusterserver1: Starting Cluster…
clusterserver2: Starting Cluster…
clusterserver3: Starting Cluster…
[root@clusterserver1 ~]#

Testing
In this step, we'll run some tests on the cluster: check node status ('Online' or 'Offline'), check the corosync members and status, and then test the high availability of the Nginx web server by accessing the floating IP address.
Test node status with the following command.

[root@clusterserver1 ~]# pcs status nodes
Pacemaker Nodes:
Online: clusterserver1
Standby:
Maintenance:
Offline: clusterserver2 clusterserver3
Pacemaker Remote Nodes:
Online:
Standby:
Maintenance:
Offline:
[root@clusterserver1 ~]#

Test the corosync members.
corosync-cmapctl | grep members
You will see the Corosync members' IP addresses:

[root@clusterserver1 ~]# corosync-cmapctl | grep members
runtime.totem.pg.mrp.srp.members.1.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.1.ip (str) = r(0) ip(192.168.1.20)
runtime.totem.pg.mrp.srp.members.1.join_count (u32) = 1
runtime.totem.pg.mrp.srp.members.1.status (str) = joined

[root@clusterserver1 ~]# pcs status
Cluster name: mohan_cluster
Stack: corosync
Current DC: clusterserver1 (version 1.1.16-12.el7_4.5-94ff4df) – partition WITHOUT quorum
Last updated: Fri Mar 10 04:12:59 2017
Last change: Fri Mar 10 04:11:59 2017 by root via cibadmin on clusterserver1

3 nodes configured
2 resources configured

Online: [ clusterserver1 ]
OFFLINE: [ clusterserver2 clusterserver3 ]

Full list of resources:

virtual_ip (ocf::heartbeat:IPaddr2): Started clusterserver1
webserver (ocf::heartbeat:nginx): Started clusterserver1

Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
[root@clusterserver1 ~]#

Docker installation on RHEL 7

Objective

The objective is to install Docker engine on Redhat 7 Linux using native docker script.

Requirements

Internet connection as well as a privileged access to your Redhat 7 Linux is required.

Difficulty

EASY

Conventions

  • # – requires given command to be executed with root privileges either directly as a root user or by use of sudo command
  • $ – given command to be executed as a regular non-privileged user

Instructions

Install docker

Installing Docker with the native Docker script is a straightforward, one-command process. Before you run the docker installation command below, ensure that the curl package is installed on your system:


yum  -y remove  docker-common docker container-selinux docker-selinux docker-engine


yum -y install lvm2 device-mapper device-mapper-persistent-data device-mapper-event device-mapper-libs device-mapper-event-libs

# curl --version
curl 7.29.0 (x86_64-redhat-linux-gnu)

Once ready, install Docker using the curl command below, which downloads and executes the native Docker installation script:

# curl -sSL https://get.docker.com/ | sh
+ sh -c 'sleep 3; yum -y -q install docker-engine'
warning: /var/cache/yum/x86_64/7Server/docker-main-repo/packages/docker-engine-1.12.3-1.el7.centos.x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID 2c52609d: NOKEY
Importing GPG key 0x2C52609D:
 Userid     : "Docker Release Tool (releasedocker) <docker@docker.com>"
 Fingerprint: 5811 8e89 f3a9 1289 7c07 0adb f762 2157 2c52 609d
 From       : https://yum.dockerproject.org/gpg
If you would like to use Docker as a non-root user, you should now consider
adding your user to the "docker" group with something like:

  sudo usermod -aG docker your-user

Remember that you will have to log out and back in for this to take effect!

Enable and Start docker

To enable docker to start on your Redhat 7 Linux after reboot run the following command:

# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

To start docker daemon run:

# systemctl start docker
 wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-17.12.1.ce-1.el7.centos.x86_64.rpm

 wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm


 yum install --setopt=obsoletes=0 docker-ce-17.12.1.ce-1.el7.centos.x86_64.rpm


[root@ip-192-168-4-100 software]# yum install --setopt=obsoletes=0 docker-ce-17.12.1.ce-1.el7.centos.x86_64.rpm
Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
Examining docker-ce-17.12.1.ce-1.el7.centos.x86_64.rpm: docker-ce-17.12.1.ce-1.el7.centos.x86_64
Marking docker-ce-17.12.1.ce-1.el7.centos.x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package docker-ce.x86_64 0:17.12.1.ce-1.el7.centos will be installed
--> Processing Dependency: container-selinux >= 2.9 for package: docker-ce-17.12.1.ce-1.el7.centos.x86_64
--> Processing Dependency: libltdl.so.7()(64bit) for package: docker-ce-17.12.1.ce-1.el7.centos.x86_64
--> Running transaction check
---> Package container-selinux.noarch 2:2.36-1.gitff95335.el7 will be installed
---> Package libtool-ltdl.x86_64 0:2.4.2-22.el7_3 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================================================================================================================================================
Package Arch Version Repository Size
================================================================================================================================================================================================================
Installing:
docker-ce x86_64 17.12.1.ce-1.el7.centos /docker-ce-17.12.1.ce-1.el7.centos.x86_64 123 M
Installing for dependencies:
container-selinux noarch 2:2.36-1.gitff95335.el7 rhui-REGION-rhel-server-extras 31 k
libtool-ltdl x86_64 2.4.2-22.el7_3 rhui-REGION-rhel-server-releases 49 k

Transaction Summary
================================================================================================================================================================================================================
Install 1 Package (+2 Dependent packages)

Total size: 123 M
Total download size: 80 k
Installed size: 123 M

Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : 2:container-selinux-2.36-1.gitff95335.el7.noarch 1/3
setsebool: SELinux is disabled.
Installing : libtool-ltdl-2.4.2-22.el7_3.x86_64 2/3
Installing : docker-ce-17.12.1.ce-1.el7.centos.x86_64 3/3
Verifying : libtool-ltdl-2.4.2-22.el7_3.x86_64 1/3
Verifying : 2:container-selinux-2.36-1.gitff95335.el7.noarch 2/3
Verifying : docker-ce-17.12.1.ce-1.el7.centos.x86_64 3/3

Installed:
docker-ce.x86_64 0:17.12.1.ce-1.el7.centos

Dependency Installed:
container-selinux.noarch 2:2.36-1.gitff95335.el7 libtool-ltdl.x86_64 0:2.4.2-22.el7_3

Complete!


How to enable docker service

$ sudo systemctl enable docker.service
Sample outputs:

Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

How to start/stop/restart docker service on CentOS7/RHEL7

$ sudo systemctl start docker.service ## <-- Start docker ##
$ sudo systemctl stop docker.service ## <-- Stop docker ##
$ sudo systemctl restart docker.service ## <-- Restart docker ##
$ sudo systemctl status docker.service ## <-- Get status of docker ##

 # verify operation 
docker ps -a 
docker images 
docker version 
docker info 


The process below works fine on an AWS RHEL 7.x instance:

 yum install --setopt=obsoletes=0 docker-ce-17.03.2.ce-1.el7.centos.x86_64 docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch
 yum install -y yum-utils device-mapper-persistent-data lvm2
 yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
 yum-config-manager --enable docker-ce-edge
 yum makecache fast
 yum -y --enablerepo=rhui-REGION-rhel-server-extras install container-selinux
 yum -y install docker-ce