
Install Mod Security on Nginx for CentOS 6 and 7

Introduction

ModSecurity is a toolkit for real-time web application monitoring, logging, and access control. You can think of it as an enabler: there are no hard rules telling you what to do; instead, it is up to you to choose your own path through the available features. The freedom to choose what to do is an essential part of ModSecurity’s identity and goes very well with its open source nature. With full access to the source code, your freedom to choose extends to the ability to customize and extend the tool itself to fit your needs.

We assume that you have root permissions; otherwise, prefix the commands with “sudo”.

 

Attention

Building ModSecurity into an Nginx server takes some effort because you have to download and compile both of them yourself; installing them through a package installer is not possible for now. You also have to use an earlier Nginx release that is compatible with the ModSecurity standalone module.

Download Nginx and ModSecurity

You can download the compatible version of Nginx and ModSecurity easily with “Wget”:

wget http://nginx.org/download/nginx-1.8.0.tar.gz
wget https://www.modsecurity.org/tarball/2.9.1/modsecurity-2.9.1.tar.gz

Extract them as well:

tar xvzf nginx-1.8.0.tar.gz
tar xvzf modsecurity-2.9.1.tar.gz

Install the dependencies needed to compile them:

yum install gcc make automake autoconf libtool pcre pcre-devel libxml2 libxml2-devel curl curl-devel httpd-devel

Compiling ModSecurity with Nginx

Enter the ModSecurity directory:

cd modsecurity-2.9.1
./configure --enable-standalone-module
make

Then we are going to configure Nginx with the ModSecurity module:

cd nginx-1.8.0
./configure \
--user=nginx \
--group=nginx \
--sbin-path=/usr/sbin/nginx \
--conf-path=/etc/nginx/nginx.conf \
--pid-path=/var/run/nginx.pid \
--lock-path=/var/run/nginx.lock \
--error-log-path=/var/log/nginx/error.log \
--http-log-path=/var/log/nginx/access.log \
--add-module=../modsecurity-2.9.1/nginx/modsecurity

Now we can compile and install Nginx:

make
make install

Configure Nginx and ModSecurity

We have to copy the ModSecurity config files to the main Nginx directory; execute the commands below:

cp modsecurity-2.9.1/modsecurity.conf-recommended /etc/nginx/
cp modsecurity-2.9.1/unicode.mapping /etc/nginx/

Now we have to rename the ModSecurity config file:

cd /etc/nginx/
mv modsecurity.conf-recommended modsecurity.conf
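Note that the recommended configuration ships with ModSecurity in detection-only mode: it logs suspicious requests but does not block them. If you want blocking behaviour, flip the SecRuleEngine directive; a small sketch, assuming the default “SecRuleEngine DetectionOnly” line is present in the file:

# switch ModSecurity from detection-only to blocking mode
sed -i 's/^SecRuleEngine DetectionOnly/SecRuleEngine On/' /etc/nginx/modsecurity.conf
# confirm the change
grep ^SecRuleEngine /etc/nginx/modsecurity.conf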

Open “nginx.conf” and add the following lines inside the “location /” directive (around line 47):

nano nginx.conf

ModSecurityEnabled on;
ModSecurityConfig modsecurity.conf;

Save and exit.

Create the nginx user with the command below:

useradd -r nginx

We can test our Nginx config file to check that everything is OK:

cd /usr/sbin/
./nginx -t

You should get something like below:


nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

Creating the Nginx Service

It’s time to create the Nginx service so you can start, stop, and check the status of your service:

Create the init.d script file with your text editor in the following path:

nano /etc/init.d/nginx

Paste the following script in your file then save and exit:


#!/bin/sh
#
# nginx - this script starts and stops the nginx daemon
#
# chkconfig:   - 85 15
# description:  NGINX is an HTTP(S) server, HTTP(S) reverse \
#               proxy and IMAP/POP3 proxy server
# processname: nginx
# config:      /etc/nginx/nginx.conf
# config:      /etc/sysconfig/nginx
# pidfile:     /var/run/nginx.pid

# Source function library.
. /etc/rc.d/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0

nginx="/usr/sbin/nginx"
prog=$(basename $nginx)

NGINX_CONF_FILE="/etc/nginx/nginx.conf"

[ -f /etc/sysconfig/nginx ] && . /etc/sysconfig/nginx

lockfile=/var/lock/subsys/nginx

make_dirs() {
   # make required directories
   user=`$nginx -V 2>&1 | grep "configure arguments:.*--user=" | sed 's/[^*]*--user=\([^ ]*\).*/\1/g' -`
   if [ -n "$user" ]; then
      if [ -z "`grep $user /etc/passwd`" ]; then
         useradd -M -s /bin/nologin $user
      fi
      options=`$nginx -V 2>&1 | grep 'configure arguments:'`
      for opt in $options; do
          if [ `echo $opt | grep '.*-temp-path'` ]; then
              value=`echo $opt | cut -d "=" -f 2`
              if [ ! -d "$value" ]; then
                  # echo "creating" $value
                  mkdir -p $value && chown -R $user $value
              fi
          fi
       done
    fi
}

start() {
    [ -x $nginx ] || exit 5
    [ -f $NGINX_CONF_FILE ] || exit 6
    make_dirs
    echo -n $"Starting $prog: "
    daemon $nginx -c $NGINX_CONF_FILE
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

stop() {
    echo -n $"Stopping $prog: "
    killproc $prog -QUIT
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

restart() {
    configtest || return $?
    stop
    sleep 1
    start
}

reload() {
    configtest || return $?
    echo -n $"Reloading $prog: "
    killproc $nginx -HUP
    RETVAL=$?
    echo
}

force_reload() {
    restart
}

configtest() {
  $nginx -t -c $NGINX_CONF_FILE
}

rh_status() {
    status $prog
}

rh_status_q() {
    rh_status >/dev/null 2>&1
}

case "$1" in
    start)
        rh_status_q && exit 0
        $1
        ;;
    stop)
        rh_status_q || exit 0
        $1
        ;;
    restart|configtest)
        $1
        ;;
    reload)
        rh_status_q || exit 7
        $1
        ;;
    force-reload)
        force_reload
        ;;
    status)
        rh_status
        ;;
    condrestart|try-restart)
        rh_status_q || exit 0
        ;;
    *)
        echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload|configtest}"
        exit 2
esac

Create the “nginx.service” file in the following path:

nano /lib/systemd/system/nginx.service

Paste the following script then save and exit:

[Unit]
Description=The NGINX HTTP and reverse proxy server
After=syslog.target network.target remote-fs.target nss-lookup.target

[Service]
Type=forking
PIDFile=/run/nginx.pid
ExecStartPre=/usr/sbin/nginx -t
ExecStart=/usr/sbin/nginx
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s QUIT $MAINPID
PrivateTmp=true

[Install]
WantedBy=multi-user.target

Now you can easily use the following commands to control your Nginx service:

systemctl enable nginx
systemctl start nginx
systemctl restart nginx
systemctl status nginx
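Note: if systemd was already running when you created the unit file, reload its configuration first so the new service definition is picked up:

systemctl daemon-reload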

Verify that ModSecurity is working with Nginx

 

cd /usr/sbin/
./nginx -V

If you get output like the following, your Nginx was compiled with ModSecurity successfully:


built by gcc 4.4.7 20120313 (Red Hat 4.4.7-18) (GCC)
configure arguments: --user=nginx --group=nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --add-module=../modsecurity-2.9.1/nginx/modsecurity

To check whether the ModSecurity module has been loaded into Nginx successfully, check the last lines of Nginx’s error log:

cd /var/log/nginx/
tail error.log

Look for a line like the one below:

[notice] 13285#0: ModSecurity: PCRE compiled version="7.8 "; loaded version="7.8 2008-09-05"

Rule-Set Recommendation
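A good starting point is the OWASP ModSecurity Core Rule Set (CRS), a generic set of rules that detects common attack categories such as SQL injection and XSS. A minimal sketch of wiring it in (the repository URL and file layout are assumptions; check the CRS documentation for the release matching your ModSecurity version):

cd /etc/nginx/
git clone https://github.com/coreruleset/coreruleset.git
cp coreruleset/crs-setup.conf.example coreruleset/crs-setup.conf

Then add these lines to the end of /etc/nginx/modsecurity.conf so the rules are loaded:

Include coreruleset/crs-setup.conf
Include coreruleset/rules/*.conf

Test with ./nginx -t and restart Nginx afterwards.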

Start a docker container on CentOS at boot time as a linux service

Note: If the Docker daemon does not start at boot, you might want to enable the docker service:

systemctl enable docker.service

Here are the steps.

Create the file /etc/systemd/system/docker_demo_container.service

[Unit]
Wants=docker.service
After=docker.service
[Service]
RemainAfterExit=yes
ExecStart=/usr/bin/docker start my_container_name
ExecStop=/usr/bin/docker stop my_container_name
[Install]
WantedBy=multi-user.target

Now I can start the service:

systemctl start docker_demo_container

And I can enable the service so it is executed at boot

systemctl enable docker_demo_container

That’s it, my container is started at boot.

Here is a fuller example: a unit that keeps a MariaDB container running and restarts it on failure:

[Unit]
Description=MariaDB container
Requires=docker.service
After=docker.service
[Service]
User=php
Restart=always
RestartSec=10
# ExecStartPre=-/usr/bin/docker kill database
# ExecStartPre=-/usr/bin/docker rm database
ExecStart=/usr/bin/docker start -a database
ExecStop=/usr/bin/docker stop -t 2 database
[Install]
WantedBy=multi-user.target
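After saving a unit like the one above (for example as /etc/systemd/system/mariadb_container.service; the file name is just an example), reload systemd and enable it the same way:

systemctl daemon-reload
systemctl enable mariadb_container.service
systemctl start mariadb_container.service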

 

 

My entrypoint.sh is a minimal wrapper that just executes whatever command it is given:

#!/bin/bash
exec "$@"

and I have the following in my Dockerfile:

ADD entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["date"]
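With this entrypoint, whatever command the container is given is passed straight to exec "$@", so the default CMD can be overridden at run time. A quick sketch, assuming the image is built with the tag "demo":

docker build -t demo .
docker run demo              # runs the default CMD and prints the date
docker run demo echo hello   # overrides CMD and prints "hello"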

docker install on centos 7.4

hostnamectl set-hostname clusterserver1.rmohan.com
ifconfig -a
vi /etc/selinux/config
systemctl disable firewalld
systemctl stop firewalld
vi /etc/hosts
yum install wget
yum update
yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-selinux docker-engine-selinux docker-engine
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum-config-manager --enable docker-ce-edge
yum list docker-ce --showduplicates | sort -r
yum install docker-ce

systemctl enable docker

systemctl start docker

 

Uninstall Docker CE

  1. Uninstall the Docker package:
    $ yum remove docker-ce

  2. Images, containers, volumes, or customized configuration files on your host are not automatically removed. To delete all images, containers, and volumes:
    $ rm -rf /var/lib/docker

disable directory browsing apache2.4

<Directory /var/www/>
        Options Indexes FollowSymLinks
        AllowOverride None
        Require all granted
</Directory>

to the following (removing “Indexes” disables directory listings):

<Directory /var/www/>
        Options FollowSymLinks
        AllowOverride None
        Require all granted
</Directory>
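After removing “Indexes”, it is a good idea to validate the configuration and reload Apache. A sketch for CentOS 7, assuming the stock httpd service:

apachectl configtest
systemctl reload httpd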

docker centos

Docker is an open-source tool that automates the deployment of applications inside software containers by providing an additional layer of abstraction of operating-system-level virtualization on Linux. Docker makes it easier to create, deploy, and run applications by using containers. Containers allow a developer to package up an application with all of its libraries and dependencies, and ship it all out as one package. Unlike a virtual machine, rather than creating a whole virtual operating system, Docker allows applications to use the same Linux kernel as the system that they are running on. This gives a significant performance boost and reduces the size of the application.

Docker is preferred by many system admins and developers because it gives them more freedom in an app production environment. For system admins, Docker gives flexibility and reduces the number of systems needed because of its small footprint and low overhead.

In this tutorial, We’ll learn how to install Docker on CentOS-7.

## Requirements

CentOS-7 64-bit configured with static IP address. Kernel version must be 3.10 at minimum.

## Installing Docker

It is recommended to update your system before installing docker.

To update the system, Run:

sudo yum -y update

The Docker package is not available in the default CentOS-7 repository, so you need to create a repo for Docker.

You can create the docker repo by creating a docker.repo file inside the /etc/yum.repos.d/ directory.

sudo nano /etc/yum.repos.d/docker.repo

Add the following line:


[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg

After creating the docker repo, run the following command to install Docker.

sudo yum install docker-engine

Once Docker has been installed, it is recommended to start the docker service and set it to start at boot.

You can do this by running the following command:

sudo systemctl start docker.service
sudo systemctl enable docker.service

You can check the status of Docker by running the following command:

sudo systemctl status docker.service

Finally, you can verify that the latest version of Docker is installed with the following command:

sudo docker version

You should see output something like this:

Client:
 Version:      1.10.0
 API version:  1.22
 Go version:   go1.5.3
 Git commit:   590d5108
 Built:        Thu Feb  4 18:36:33 2016
 OS/Arch:      linux/amd64

Server:
Version: 1.10.0
API version: 1.22
Go version: go1.5.3
Git commit: 590d5108
Built: Thu Feb 4 18:36:33 2016
OS/Arch: linux/amd64

## Working With Docker

### Downloading a Docker Container

Let’s start using Docker. You will need an image present on your host machine from which containers can be created. You can download the CentOS Docker image by running the following command.

sudo docker pull centos

You should see output something like this:

Using default tag: latest
latest: Pulling from library/centos

a3ed95caeb02: Pull complete
196355c4b639: Pull complete
Digest: sha256:381f21e4c7b3724c6f420b2bcfa6e13e47ed155192869a2a04fa10f944c78476
Status: Downloaded newer image for centos:latest

### Running a Docker Container

Now you can run a single command to set up a basic CentOS container with a bash shell.

To run a command in a new container, Run:

sudo docker run -i -t centos /bin/bash

Note: the -i flag attaches stdin and stdout and the -t flag allocates a tty.

That’s it. Now you’re using a bash shell inside of a centos docker container.
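To leave the container, note the difference between stopping and detaching:

exit    # ends the bash process, which stops the container
# or press Ctrl-P followed by Ctrl-Q to detach and leave the container running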

There are many containers already available on the Docker website. You can find any container through a search. For example, to search for CentOS containers, Run:

sudo docker search centos

You should see output something like this:

NAME                            DESCRIPTION                                     STARS     OFFICIAL   AUTOMATED
centos                          The official build of CentOS.                   2079      [OK]       
jdeathe/centos-ssh              CentOS-6 6.7 x86_64 / CentOS-7 7.2.1511 x8...   19                   [OK]
jdeathe/centos-ssh-apache-php   CentOS-6 6.7 x86_64 / Apache / PHP / PHP M...   14                   [OK]
million12/centos-supervisor     Base CentOS-7 with supervisord launcher, h...   10                   [OK]
blalor/centos                   Bare-bones base CentOS 6.5 image                8                    [OK]
nimmis/java-centos              This is docker images of CentOS 7 with dif...   8                    [OK]
torusware/speedus-centos        Always updated official CentOS docker imag...   7                    [OK]
centos/mariadb55-centos7                                                        3                    [OK]
nickistre/centos-lamp           LAMP on centos setup                            3                    [OK]
nathonfowlie/centos-jre         Latest CentOS image with the JRE pre-insta...   3                    [OK]
consol/sakuli-centos-xfce       Sakuli end-2-end testing and monitoring co...   2                    [OK]
timhughes/centos                Centos with systemd installed and running       1                    [OK]
darksheer/centos                Base Centos Image -- Updated hourly             1                    [OK]
softvisio/centos                Centos                                          1                    [OK]
lighthopper/orientdb-centos     A Dockerfile for creating an OrientDB imag...   1                    [OK]
yajo/centos-epel                CentOS with EPEL and fully updated              1                    [OK]
grayzone/centos                 auto build for centos.                          0                    [OK]
ustclug/centos                   USTC centos                                    0                    [OK]
januswel/centos                 yum update-ed CentOS image                      0                    [OK]
dmglab/centos                   CentOS with some extras - This is for the ...   0                    [OK]
jsmigel/centos-epel             Docker base image of CentOS w/ EPEL installed   0                    [OK]
grossws/centos                  CentOS 6 and 7 base images with gosu and l...   0                    [OK]
labengine/centos                Centos image base                               0                    [OK]
lighthopper/openjdk-centos      A Dockerfile for creating an OpenJDK image...   0                    [OK]
blacklabelops/centos            CentOS Base Image! Built and Updates Daily!     0                    [OK]

You can also list all available images on your system using the following command:

sudo docker images

You should see output something like this:

REPOSITORY                  TAG                 IMAGE ID            CREATED             SIZE
firefox-instance            latest              8e61bff07fa0        3 weeks ago         354.6 MB
centos                      latest              d0e7f81ca65c        4 weeks ago         196.6 MB
debian                      latest              f50f9524513f        4 weeks ago         125.1 MB
apache/ubuntu               latest              196655130bc9        4 weeks ago         224.1 MB
apache-instance             latest              7da78270c5f7        4 weeks ago         224.1 MB
apache-instance             ubuntu              7da78270c5f7        4 weeks ago         224.1 MB
hitjethva/apache-instance   ubuntu              7da78270c5f7        4 weeks ago         224.1 MB

## Docker basic command lines

Let’s start by seeing all the available commands Docker has.

You can list all available Docker commands by running the following command:

sudo docker

You should see the following output:

Commands:
    attach    Attach to a running container
    build     Build an image from a Dockerfile
    commit    Create a new image from a container's changes
    cp        Copy files/folders between a container and the local filesystem
    create    Create a new container
    diff      Inspect changes on a container's filesystem
    events    Get real time events from the server
    exec      Run a command in a running container
    export    Export a container's filesystem as a tar archive
    history   Show the history of an image
    images    List images
    import    Import the contents from a tarball to create a filesystem image
    info      Display system-wide information
    inspect   Return low-level information on a container or image
    kill      Kill a running container
    load      Load an image from a tar archive or STDIN
    login     Register or log in to a Docker registry
    logout    Log out from a Docker registry
    logs      Fetch the logs of a container
    network   Manage Docker networks
    pause     Pause all processes within a container
    port      List port mappings or a specific mapping for the CONTAINER
    ps        List containers
    pull      Pull an image or a repository from a registry
    push      Push an image or a repository to a registry
    rename    Rename a container
    restart   Restart a container
    rm        Remove one or more containers
    rmi       Remove one or more images
    run       Run a command in a new container
    save      Save an image(s) to a tar archive
    search    Search the Docker Hub for images
    start     Start one or more stopped containers
    stats     Display a live stream of container(s) resource usage statistics
    stop      Stop a running container
    tag       Tag an image into a repository
    top       Display the running processes of a container
    unpause   Unpause all processes within a container
    update    Update resources of one or more containers
    version   Show the Docker version information
    volume    Manage Docker volumes
    wait      Block until a container stops, then print its exit code

Run ‘docker COMMAND --help’ for more information on a command.


To check system-wide information on docker, run:

`sudo docker info`

You should see the following output:

Containers: 6
 Running: 0
 Paused: 0
 Stopped: 6
Images: 22
Server Version: 1.10.0
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 35
 Dirperm1 Supported: false
Execution Driver: native-0.2
Logging Driver: json-file
Plugins:
 Volume: local
 Network: bridge null host
Kernel Version: 3.13.0-32-generic
Operating System: Zorin OS 9
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 3.746 GiB
Name: Vyom-PC
ID: G2NZ:3DDJ:KJFV:HC2E:HR3Y:J4JH:TX2D:EX57:K26Y:3AFH:FGKB:XEIF
Username: hitjethva
Registry: https://index.docker.io/v1/
WARNING: No swap limit support


You can use the following command to list all running containers:

`sudo docker ps`

You should see the running container listed:

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
534544f2e924        centos              "/bin/bash"         30 seconds ago      Up 29 seconds                           boring_hamilton

You can also list both running and non-running containers by running the following command:

sudo docker ps -a

Sometimes a container stops because its process ends or because you stop it explicitly. In this situation you can start the container again using its container ID:

sudo docker start "container ID"

You can also stop a running container by running the following command:

sudo docker stop "container ID"

Note: You can find container IDs using the sudo docker ps -a command.

If you would like to save the changes you have made in a container, use the commit command to save it as an image:

sudo docker commit "container ID" image_name

This command turns your container into an image. You can roll back to it whenever you need.

If you want to attach to a running container, Docker allows you to interact with running containers using the attach command.

You can use the attach command with the container ID:

sudo docker attach "container ID"

## Stop and Delete all Containers and Images

To stop all running containers, Run:

sudo docker stop $(docker ps -a -q)

To delete all existing containers, Run:

sudo docker rm $(docker ps -a -q)

To delete all existing images, Run:

sudo docker rmi $(docker images -q -a)
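On newer Docker releases (1.13 and later; the 1.10 version shown above predates this) the same cleanup can be done in one step, removing all stopped containers, unused networks, and unused images:

sudo docker system prune -a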

## Conclusion

Congratulations! You now have a CentOS server with a Docker platform for your application development environment.

How to sync standby database which is lagging behind from primary database

Primary Database cluster: cluster1.rmohan.com
Standby Database cluster: cluster2.rmohan.com

Primary Database: prim
Standby database: stand

Database version:11.2.0.1.0

Reason:-
1. Might be due to the network outage between the primary and the standby database leading to the archive
gaps. Data guard would be able to detect the archive gaps automatically and can fetch the missing logs as
soon as the connection is re-established.

2. It could also be due to archive logs getting missed out on the primary database or the archives getting
corrupted and there would be no valid backups.

In such cases where the standby lags far behind from the primary database, incremental backups can be used
as one of the methods to roll forward the physical standby database to have it in sync with the primary database.

At primary database:-
SQL> select status,instance_name,database_role from v$database,v$instance;

STATUS       INSTANCE_NAME    DATABASE_ROLE
------------ ---------------- ----------------
OPEN         prim             PRIMARY

SQL> select thread#,max(sequence#) from v$archived_log group by thread#;

THREAD#    MAX(SEQUENCE#)
---------- --------------
1          214

At standby database:-
SQL> select status,instance_name,database_role from v$database,v$instance;

STATUS       INSTANCE_NAME    DATABASE_ROLE
------------ ---------------- ----------------
OPEN         stand            PHYSICAL STANDBY

SQL> select thread#,max(sequence#) from v$archived_log group by thread#;

THREAD#    MAX(SEQUENCE#)
---------- --------------
1          42
So we can see that the standby database has an archive gap of about 172 logs (214 - 42).

Step 1: Take a note of the Current SCN of the Physical Standby Database.
SQL> select current_scn from v$database;

CURRENT_SCN
-----------
1022779

Step 2 : Cancel the Managed Recovery Process on the Standby database.
SQL> alter database recover managed standby database cancel;

Database altered.

Step 3: On the Primary database, take the incremental SCN backup from the SCN that is currently recorded on the standby database (1022779)
At primary database:-

RMAN> backup incremental from scn 1022779 database format '/tmp/rman_bkp/stnd_backp_%U.bak';

Starting backup at 28-DEC-14

using channel ORA_DISK_1
backup will be obsolete on date 04-JAN-15
archived logs will not be kept or backed up
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/u01/app/oracle/oradata/prim/system01.dbf
input datafile file number=00002 name=/u01/app/oracle/oradata/prim/sysaux01.dbf
input datafile file number=00005 name=/u01/app/oracle/oradata/prim/example01.dbf
input datafile file number=00003 name=/u01/app/oracle/oradata/prim/undotbs01.dbf
input datafile file number=00004 name=/u01/app/oracle/oradata/prim/users01.dbf
channel ORA_DISK_1: starting piece 1 at 28-DEC-14
channel ORA_DISK_1: finished piece 1 at 28-DEC-14
piece handle=/tmp/rman_bkp/stnd_backp_0cpr8v08_1_1.bak tag=TAG20141228T025048 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:25

using channel ORA_DISK_1
backup will be obsolete on date 04-JAN-15
archived logs will not be kept or backed up
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
channel ORA_DISK_1: starting piece 1 at 28-DEC-14
channel ORA_DISK_1: finished piece 1 at 28-DEC-14
piece handle=/tmp/rman_bkp/stnd_backp_0dpr8v12_1_1.bak tag=TAG20141228T025048 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 28-DEC-14

We took the backup in the /tmp/rman_bkp directory; ensure that it contains nothing besides the incremental SCN backups.

Step 4: Take the standby controlfile backup of the Primary database controlfile.

At primary database:

RMAN> backup current controlfile for standby format '/tmp/rman_bkp/stnd_%U.ctl';

Starting backup at 28-DEC-14
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including standby control file in backup set
channel ORA_DISK_1: starting piece 1 at 28-DEC-14
channel ORA_DISK_1: finished piece 1 at 28-DEC-14
piece handle=/tmp/rman_bkp/stnd_0epr8v4e_1_1.ctl tag=TAG20141228T025301 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 28-DEC-14

Starting Control File and SPFILE Autobackup at 28-DEC-14
piece handle=/u01/app/oracle/flash_recovery_area/PRIM/autobackup/2014_12_28/o1_mf_s_867466384_b9y8sr8k_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 28-DEC-14

Step 5: Transfer the backups from the Primary cluster to the Standby cluster.
[oracle@cluster1 ~]$ cd /tmp/rman_bkp/
[oracle@cluster1 rman_bkp]$ ls -ltrh
total 24M
-rw-r-----. 1 oracle oinstall 4.2M Dec 28 02:51 stnd_backp_0cpr8v08_1_1.bak
-rw-r-----. 1 oracle oinstall 9.7M Dec 28 02:51 stnd_backp_0dpr8v12_1_1.bak
-rw-r-----. 1 oracle oinstall 9.7M Dec 28 02:53 stnd_0epr8v4e_1_1.ctl

[oracle@cluster1 rman_bkp]$ scp *.* oracle@cluster2:/tmp/rman_bkp/
oracle@cluster2's password:
stnd_0epr8v4e_1_1.ctl 100% 9856KB 9.6MB/s 00:00
stnd_backp_0cpr8v08_1_1.bak 100% 4296KB 4.2MB/s 00:00
stnd_backp_0dpr8v12_1_1.bak 100% 9856KB 9.6MB/s 00:00

Step 6: On the standby cluster, connect the Standby Database through RMAN and catalog the copied
incremental backups so that the Controlfile of the Standby Database would be aware of these
incremental backups.

At standby database:-

[oracle@cluster2 ~]$ rman target /
RMAN> catalog start with '/tmp/rman_bkp';

using target database control file instead of recovery catalog
searching for all files that match the pattern /tmp/rman_bkp

List of Files Unknown to the Database
=====================================
File Name: /tmp/rman_bkp/stnd_0epr8v4e_1_1.ctl
File Name: /tmp/rman_bkp/stnd_backp_0dpr8v12_1_1.bak
File Name: /tmp/rman_bkp/stnd_backp_0cpr8v08_1_1.bak

Do you really want to catalog the above files (enter YES or NO)? yes
cataloging files…
cataloging done

List of Cataloged Files
=======================
File Name: /tmp/rman_bkp/stnd_0epr8v4e_1_1.ctl
File Name: /tmp/rman_bkp/stnd_backp_0dpr8v12_1_1.bak
File Name: /tmp/rman_bkp/stnd_backp_0cpr8v08_1_1.bak

Step 7. Shutdown the database and open it in mount stage for recovery purpose.
SQL> shut immediate;
SQL> startup mount;

Step 8. Now recover the database:

[oracle@cluster2 ~]$ rman target /
RMAN> recover database noredo;

Starting recover at 28-DEC-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=25 device type=DISK
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00001: /u01/app/oracle/oradata/stand/system01.dbf
destination for restore of datafile 00002: /u01/app/oracle/oradata/stand/sysaux01.dbf
destination for restore of datafile 00003: /u01/app/oracle/oradata/stand/undotbs01.dbf
destination for restore of datafile 00004: /u01/app/oracle/oradata/stand/users01.dbf
destination for restore of datafile 00005: /u01/app/oracle/oradata/stand/example01.dbf
channel ORA_DISK_1: reading from backup piece /tmp/rman_bkp/stnd_backp_0cpr8v08_1_1.bak
channel ORA_DISK_1: piece handle=/tmp/rman_bkp/stnd_backp_0cpr8v08_1_1.bak tag=TAG20141228T025048
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:03

Finished recover at 28-DEC-14
exit.

Step 9 : Shutdown the physical standby database, start it in nomount stage and restore the standby controlfile
backup that we had taken from the primary database.

SQL> shut immediate;
SQL> startup nomount;

[oracle@cluster2 rman_bkp]$ rman target /

Recovery Manager: Release 11.2.0.1.0 - Production on Sun Dec 28 03:08:45 2014

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: PRIM (not mounted)

RMAN> restore standby controlfile from '/tmp/rman_bkp/stnd_0epr8v4e_1_1.ctl';

Starting restore at 28-DEC-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=20 device type=DISK

channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/u01/app/oracle/oradata/stand/stand.ctl
output file name=/u01/app/oracle/flash_recovery_area/stand/stand.ctl
Finished restore at 28-DEC-14

Step 10: Shutdown the standby database and mount the standby database, so that the standby database would
be mounted with the new controlfile that was restored in the previous step.

SQL> shut immediate;
SQL> startup mount;

At standby database:-
SQL> alter database recover managed standby database disconnect from session;

At primary database:-
SQL> select thread#,max(sequence#) from v$archived_log group by thread#;

THREAD#    MAX(SEQUENCE#)
---------- --------------
1          215

At standby database:-
SQL> select thread#,max(sequence#) from v$archived_log group by thread#;

THREAD#    MAX(SEQUENCE#)
---------- --------------
1          215

Step 11. Now we will cancel the recovery to open the database:
SQL> alter database recover managed standby database cancel;

SQL> alter database open;
Database altered.

SQL> alter database recover managed standby database using current logfile disconnect from session;
Database altered.

SQL> select open_mode from v$database;

OPEN_MODE
--------------------
READ ONLY WITH APPLY

Now standby database is in sync with the Primary Database.
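Going forward, you can watch the apply progress on the standby with a quick check against v$managed_standby and v$archived_log, for example:

SQL> select process, status, sequence# from v$managed_standby;
SQL> select max(sequence#) from v$archived_log where applied = 'YES';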

Kubernetes basic concepts study notes

Kubernetes (commonly known as K8s) is an open source system for automatically deploying, extending, and managing containerized applications and is an “open source” version of Borg, Google’s internal tool.

Kubernetes is currently recognized as the most advanced container cluster management tool. After the release of version 1.0, Kubernetes has been developing at a faster pace and has been fully supported by container ecosystem companies, including CoreOS, Rancher, and many others. Public cloud vendors also provide infrastructure services based on secondary development of Kubernetes when offering container services, Huawei for example. It can be said that Kubernetes is also the strongest competitor to Docker's own container cluster management and service orchestration tool (Docker Swarm).

Kubernetes defines a set of building blocks that together provide a mechanism for deploying, maintaining, and extending applications. The components that make up Kubernetes are designed to be loosely coupled and scalable so that they can meet a variety of different workloads. Scalability is largely provided by the Kubernetes API – it is used as an internal component of extensions and containers that run on Kubernetes.

Because Kubernetes is a system made up of many components, it is still difficult to install and deploy. In addition, Kubernetes is developed by Google, and many of its internal dependencies are hosted on servers that may require a proxy to reach from behind restrictive firewalls.

Of course, there are quick installation tools, such as kubeadm, the official tool provided by Kubernetes to quickly install and initialize a Kubernetes cluster. It is currently still in incubator status and is updated in step with each Kubernetes release. For now, kubeadm cannot be used in a production environment.

 

 

2. Kubernetes features

Kubernetes features:

  • Simple : lightweight, simple, easy to use
  • Portable : public, private, hybrid, multi-cloud
  • Extensible : modular, plug-in, mountable, combinable
  • Self-healing : Automatic placement, automatic restart, automatic replication

In layman's terms:

  • Automated container deployment and replication
  • Expand or shrink containers at any time
  • Organize containers into groups and provide load balancing among containers
  • Easily upgrade new versions of application containers
  • Provide container resilience, replacing a container if it fails

3. Kubernetes terminology

Kubernetes terminology:

  • Master Node : The computer used to control the Kubernetes node. All task assignments come from this.
  • Minion Node : The computers that perform the requested tasks and assignments. The Kubernetes master is responsible for controlling the nodes.
  • Namespace : Namespace is an abstract set of a set of resources and objects. For example, it can be used to divide the internal objects of the system into different project groups or user groups. The common pods, services, replication controllers, and deployments are all part of a namespace (the default is default), while node, persistentVolumes, etc. do not belong to any namespace.
  • Pod : A container group deployed on a single node, containing one or more containers. A Pod is the minimum deployment unit that Kubernetes can create, schedule, and manage; all containers in the same Pod share an IP address, IPC, hostname, and other resources. Pods abstract the network and storage away from the underlying container so that you can move containers around the cluster more easily.
  • Deployment : Deployment is a new generation of objects for Pod management. Compared with Replication Controller, it provides more complete functionality and is easier and more convenient to use.
  • Replication Controller : The replication controller manages the lifecycle of pods. They ensure that a specified number of pods are running at any given time. They do this by creating or deleting pods.
  • Service : The service provides a single stable name and address for a group of pods and separates the work definition from the container set. The Kubernetes service proxy automatically routes service requests to the correct container set, regardless of where it moves in the cluster, even if it has been replaced.
  • Label : Labels are used to organize and select groups of objects based on key-value pairs; they are used by every Kubernetes component.

In Kubernetes, all containers run in a Pod. A Pod can hold a single container or multiple cooperating containers; in the latter case, the containers in the Pod are guaranteed to be placed on the same machine and can share resources. A Pod can also contain zero or more volumes; a volume can be private to one container or shared between containers in the Pod. For each Pod created by the user, the system finds a machine that is healthy and has sufficient capacity and starts the corresponding containers there. If a container fails, it is automatically restarted by Kubernetes’ node agent, called a Kubelet. However, if the Pod or its machine fails, the Pod is not automatically moved or restarted unless the user also defines a Replication Controller.

Replica sets of Pods can collectively form an entire application, a microservice, or a layer of a multi-tier application. Once Pods are created, the system continuously monitors their health and the health of the machine they are running on. If a Pod has problems due to a software issue or a machine failure, the Replication Controller automatically creates a new Pod on a healthy machine.

Kubernetes supports a unique network model. Kubernetes encourages the use of flat address spaces and does not dynamically allocate ports, but rather allows users to choose any port that suits them. To achieve this, it assigns each Pod an IP address.

Kubernetes provides an abstraction of Service that provides a stable IP address and DNS name to correspond to a set of dynamic pods, such as a set of pods that make up a microservice. This Pod group is defined by the Label selector, because you can specify any Pod group. When a container running in a Kubernetes Pod connects to this address, the connection is forwarded by the local proxy (called the kube proxy). The agent runs on the source machine, the forwarding destination is a corresponding back-end container, and the exact back-end is selected by the round-robin policy to balance the load. The kube proxy will also track the dynamic changes of the backend Pod group, such as when the Pod is replaced by a new Pod on the new machine, so the IP and DNS names of the service do not need to be changed.

Every Kubernetes resource, such as a Pod, is identified by a URI and has a UID. The key components of the URI are the type of the object (e.g., Pod), the name of the object, and the namespace of the object. For a particular object type, each name is unique within its namespace; if the namespace is not given for a name, it defaults to the default namespace. UIDs are unique across both time and space.


More about Service:

  • Service is an application-service abstraction that provides load balancing and service discovery for the application through labels. The list of Pod IPs and ports that match the labels constitutes the endpoints, across which kube-proxy load-balances the service IP.
  • Each Service automatically assigns a cluster IP (virtual address accessible only within the cluster) and a DNS name from which other containers can access the service without having to know about the backend container’s operation.
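As a concrete sketch of these concepts, the following kubectl commands (image and names are only examples) create a Deployment and expose it as a Service with a stable cluster IP:

kubectl create deployment hello --image=nginx
kubectl expose deployment hello --port=80
kubectl get pods -l app=hello
kubectl get service hello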

 

 

Kubernetes components

Kubernetes components:

  • kubectl : the client command-line tool; it formats commands and sends them to kube-apiserver, serving as the operational entry point for the entire system.
  • Kube-apiserver : Serves as a control entry for the entire system, providing interfaces with REST API services.
  • kube-controller-manager : Used to perform background tasks in the entire system, including the status of nodes, the number of pods, and the association between Pods and Service.
  • kube-scheduler (schedules Pods to Nodes): Responsible for node resource management; it accepts Pods created by kube-apiserver and assigns them to a node.
  • etcd : Responsible for service discovery and configuration sharing between nodes.
  • kube-proxy : Runs on each compute node and is responsible for the Pod network proxy. It periodically fetches service information from etcd and applies the appropriate policy.
  • kubelet : runs on each compute node, acts as an agent, accepts Pods tasks and management containers that are assigned to this node, periodically obtains container status and feeds back to kube-apiserver.
  • DNS : An optional DNS service for creating DNS records for each Service object so that all pods can access services through DNS.
  • flannel : Flannel is an overlay network tool designed by the CoreOS team for Kubernetes; it must be downloaded and deployed separately. When Docker starts, each host gets an IP address for interacting with its containers. If nothing manages this, the address ranges may be identical on all machines, and containers can only communicate within their own host, not with Docker containers on other machines. Flannel’s purpose is to re-plan the IP address usage of all nodes in the cluster so that containers on different nodes obtain non-conflicting IP addresses that belong to the same private network and can communicate with each other directly by IP.

The master node contains the following components:

docker
etcd
kube-apiserver
kube-controller-manager
kubelet
kube-scheduler

The minion node contains the following components:

Docker
kubelet
kube-proxy

 

 

Tomcat log cutting and regular deletion


In a Tomcat environment, if we allow log files to grow indefinitely, one day the disk will fill up.
Especially where log files grow very quickly, rotating and deleting them is necessary work. The following describes how to split the log files with cronolog.

[root@server1 ~]# cat /etc/redhat-release
CentOS release 6.5 (Final)
[root@server1 ~]# uname -r
2.6.32-431.el6.x86_64
[root@server1 ~]# uname -m
x86_64

[root@server1 ~]# java -version
java version "1.7.0_67"
Java(TM) SE Runtime Environment (build 1.7.0_67-b01)
Java HotSpot(TM) Server VM (build 24.65-b04, mixed mode)

[root@server1 ~]# /opt/gw/tomcat7/bin/catalina.sh version
Using CATALINA_BASE: /opt/gw/tomcat7
Using CATALINA_HOME: /opt/gw/tomcat7
Using CATALINA_TMPDIR: /opt/gw/tomcat7/temp
Using JRE_HOME: /usr/local/jdk1.7
Using CLASSPATH: /opt/gw/tomcat7/bin/bootstrap.jar:/opt/gw/tomcat7/bin/tomcat-juli.jar
Server version: Apache Tomcat/7.0.57
Server built: Nov 3 2014 08:39:16 UTC
Server number: 7.0.57.0
OS Name: Linux
OS Version: 2.6.32-431.el6.x86_64
Architecture: i386
JVM Version: 1.7.0_67-b01
JVM Vendor: Oracle Corporation

cd /usr/local/src
wget https://files.cnblogs.com/files/crazyzero/cronolog-1.6.2.tar.gz
[root@mohan src]# md5sum cronolog-1.6.2.tar.gz
a44564fd5a5b061a5691b9a837d04979 cronolog-1.6.2.tar.gz

[root@mohan src]# tar xf cronolog-1.6.2.tar.gz
[root@mohan src]# cd cronolog-1.6.2
[root@mohan cronolog-1.6.2]# ./configure
[root@mohan cronolog-1.6.2]# make && make install
[root@mohan cronolog-1.6.2]# which cronolog
/usr/local/sbin/cronolog

[root@server1 ~]# which cronolog
/usr/local/sbin/cronolog

Chapter 3: Configure Tomcat log cutting
To configure log cutting, simply modify the configuration file catalina.sh (on Windows it is catalina.bat; the Windows case is not covered here).
Around lines 380-390 of catalina.sh, change the following:

org.apache.catalina.startup.Bootstrap "$@" start \
>> "$CATALINA_OUT" 2>&1 "&"

to:

org.apache.catalina.startup.Bootstrap "$@" start \
2>&1 |/usr/local/sbin/cronolog "$CATALINA_BASE/logs/catalina-%Y-%m-%d.out" &
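After restarting Tomcat, you can confirm that cronolog is writing a dated log file; the path below assumes the installation above:

ls -lh /opt/gw/tomcat7/logs/catalina-$(date +%Y-%m-%d).out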

Finally, schedule cleanup of old logs with cron. For example, delete logs older than 7 days:

00 00 * * * /bin/find /opt/gdyy/tomcat7/logs/ -type f -mtime +7 | xargs rm -f &>/dev/null

# remove gw logs older than 7 days (added by liutao at 2018-02-08)
00 00 * * * /bin/find /opt/gw/tomcat7/logs/ -type f -mtime +7 | xargs -i mv {} /data/bak/gw_log/ &>/dev/null

Tomcat load balancing using Nginx reverse proxies.

This essay focuses on Tomcat clusters and Tomcat load balancing using Nginx reverse proxies.

First, let's cover a few basic concepts:
Cluster
Simply put, N servers form a loosely coupled multiprocessor system (externally it appears as one server) and communicate internally over the network. The N servers cooperate with each other to jointly carry the request load of a website. In short, a cluster is “the same service, deployed on multiple servers.” The most important task in a cluster is task scheduling.
Load Balance
Load balancing simply means that requests are distributed to the servers in the cluster according to a certain load policy, so that the whole server group processes the website’s requests and jointly completes the task.
The installation environment is as follows:
Tencent cloud host running CentOS 7.3 64-bit
Nginx 1.7.4
JDK8 and Tomcat8

Configure Nginx web reverse proxy to implement two Tomcat load balancing:

– Install and configure Tomcat
tar -zxvf apache-tomcat-8.5.28.tar.gz
cp -rf apache-tomcat-8.5.28 /usr/local/tomcat1
mv apache-tomcat-8.5.28 /usr/local/tomcat2

– Modify the tomcat1 port number
$ cd /usr/local/tomcat1/conf/
$ cp server.xml server.xml.bak
$ cp web.xml web.xml.bak
$ vi server.xml

– Modify the Tomcat2 port number
$ cd /usr/local/tomcat2/conf/
$ cp server.xml server.xml.bak
$ cp web.xml web.xml.bak
$ vi server.xml

– Add Tomcat1 to boot automatically
– Copy the /usr/local/tomcat1/bin/catalina.sh file to the /etc/init.d directory and rename it to tomcat1:
# cp /usr/local/tomcat1/bin/catalina.sh /etc/init.d/tomcat1
– Modify the /etc/init.d/tomcat1 file:
# vi /etc/init.d/tomcat1
– Enter the following in the first lines (otherwise error: The tomcat service does not support chkconfig):

#!/bin/sh
# chkconfig: 2345 10 90
# description: Tomcat1 service

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# ...
# $Id: catalina.sh 1498485 2013-07-01 14:37:43Z markt $
# -----------------------------------------------------------------------------

CATALINA_HOME=/usr/local/tomcat1
JAVA_HOME=/usr/local/jdk8

# OS specific support. $var _must_ be set to either true or false.

– Add the tomcat1 service:
# chkconfig --add tomcat1
– Set tomcat1 to start at boot:
# chkconfig tomcat1 on
– Now set up Tomcat2 the same way.
– Copy the /usr/local/tomcat2/bin/catalina.sh file to the /etc/init.d directory and rename it to tomcat2:
# cp /usr/local/tomcat2/bin/catalina.sh /etc/init.d/tomcat2
– Modify the /etc/init.d/tomcat2 file:
# vi /etc/init.d/tomcat2
– Enter the following in the first lines (otherwise error: The tomcat service does not support chkconfig):
#!/bin/sh
# chkconfig: 2345 10 90
# description: Tomcat2 service

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# ...
# -----------------------------------------------------------------------------

CATALINA_HOME=/usr/local/tomcat2
JAVA_HOME=/usr/local/jdk8

# OS specific support. $var must be set to either true or false.

– Add the tomcat2 service:
# chkconfig --add tomcat2
– Set tomcat2 to start at boot:
# chkconfig tomcat2 on

At this point both Tomcats are installed. Start each of them and check that the printed environment matches the right instance:

[root@VM_177_101_centos src]# service tomcat1 start
Using CATALINA_BASE: /usr/local/tomcat1
Using CATALINA_HOME: /usr/local/tomcat1
Using CATALINA_TMPDIR: /usr/local/tomcat1/temp
Using JRE_HOME: /usr/local/jdk8
Using CLASSPATH: /usr/local/tomcat1/bin/bootstrap.jar:/usr/local/tomcat1/bin/tomcat-juli.jar
Tomcat started.

[root@VM_177_101_centos src]# service tomcat2 start
Using CATALINA_BASE: /usr/local/tomcat2
Using CATALINA_HOME: /usr/local/tomcat2
Using CATALINA_TMPDIR: /usr/local/tomcat2/temp
Using JRE_HOME: /usr/local/jdk8
Using CLASSPATH: /usr/local/tomcat2/bin/bootstrap.jar:/usr/local/tomcat2/bin/tomcat-juli.jar
Tomcat started.

Finally, configure Nginx:

– Switch to the directory:
cd /usr/local/nginx/conf
– Modify the configuration file:
vi nginx.conf

– Some common configuration directives:
– worker_processes: the number of worker processes; more than one can be configured
– worker_connections: the maximum number of connections per process
– server: each server block is equivalent to a proxy server
– listen: the port to listen on, 80 by default
– server_name: the domain name of the current service; there can be more than one, separated by spaces (we are local and therefore use localhost)
– location: path matching; configuring / means all requests are matched here
– index: if no page is specified, the specified file(s) will be served by default; multiple entries are separated by spaces
– proxy_pass: forward requests to a custom server or server list
– upstream name {}: defines a server cluster name

– Now, to access Tomcat through Nginx, you need to modify the server section of the configuration:

server
{
    listen 80 default;
    charset utf-8;
    server_name localhost;
    access_log logs/host.access.log;

    location / {
        proxy_pass http://localhost:8080;
        proxy_redirect default;
    }
}
– The proxy is now complete; all requests go through the proxy server to reach the backend server.

Next, implement load balancing. During installation, tomcat1 was configured on port 8080 and tomcat2 on port 8081, so we need to define an upstream server group in the configuration file:

# The cluster
upstream testcomcat {
    # The greater the weight, the greater the probability of allocation
    server 127.0.0.1:8080 weight=1;
    server 127.0.0.1:8081 weight=2;
}

server
{
    listen 80 default;
    charset utf-8;
    access_log logs/host.access.log;

    location / {
        proxy_pass http://testcomcat;
        proxy_redirect default;
    }
}

– To see that it works, I slightly changed the index.jsp page under each Tomcat root, adding TEST1 and TEST2 respectively to tell them apart. Reload Nginx, enter the IP in the browser address bar, and refresh the page several times; you will see the responses switch between the two servers:
service nginx reload

Docker WebSphere


Step 1: docker websphere
1. docker pull ibmcom/websphere-traditional:8.5.5.12-profile

2. docker run --name websphere -h test -e UPDATE_HOSTNAME=true -p 9043:9043 -p 9443:9443 -d ibmcom/websphere-traditional:8.5.5.12-profile

3. docker exec websphere cat /tmp/PASSWORD

4. docker run --name test -h test -v $(pwd)/PASSWORD:/tmp/PASSWORD -p 9045:9043 -p 9445:9443 -d ibmcom/websphere-traditional:8.5.5.12-profile

5. Open the admin console: https://172.10.21.30:9043/ibm/console/login.do?action=secure

Install image

The ibmcom/websphere-traditional:install Docker image contains IBM WebSphere Application Server traditional for Developers and can be started by:
1. Running the image using default values:
docker run --name test -h test -p 9043:9043 -p 9443:9443 -d \
ibmcom/websphere-traditional:install

2. Running the image by passing values for the environment variables:
docker run --name test -h test -e HOST_NAME=test -p 9043:9043 -p 9443:9443 -d \
-e PROFILE_NAME=AppSrv02 -e CELL_NAME=DefaultCell02 -e NODE_NAME=DefaultNode02 \
ibmcom/websphere-traditional:install

• PROFILE_NAME (optional, default is 'AppSrv01')
• CELL_NAME (optional, default is 'DefaultCell01')
• NODE_NAME (optional, default is 'DefaultNode01')
• HOST_NAME (optional, default is 'localhost')

Profile image

The ibmcom/websphere-traditional:profile Docker image contains IBM WebSphere Application Server traditional for Developers with the profile already created and can be started by:
1. Running the image using default values:
docker run --name test -h test -p 9043:9043 -p 9443:9443 -d \
ibmcom/websphere-traditional:profile

2. Running the image by passing values for the environment variables:
docker run --name test -h test -e UPDATE_HOSTNAME=true -p 9043:9043 -p 9443:9443 -d \
ibmcom/websphere-traditional:profile

• UPDATE_HOSTNAME (optional, set to 'true' if the hostname should be updated from the default of 'localhost' to match the hostname allocated at runtime)

Admin console

In both cases a secure profile is created with an admin user ID of wsadmin and a generated password. The generated password can be retrieved from the container using the following command:
docker exec test cat /tmp/PASSWORD

It is also possible to specify a password when the container is run by mounting a file containing the password to /tmp/PASSWORD. For example:
docker run --name test -h test -v $(pwd)/PASSWORD:/tmp/PASSWORD \
-p 9043:9043 -p 9443:9443 -d ibmcom/websphere-traditional:install

Once the server has started, you can access the admin console at https://localhost:9043/ibm/console/login.do?action=secure. If you are running in Docker Toolbox, use the value returned by docker-machine ip instead of localhost.