
Check Redhat version

The objective of this guide is to provide some hints on how to check the system version of your Redhat Enterprise Linux (RHEL). There are multiple ways to check the system version; however, depending on your system configuration, not all of the examples described below may be suitable. For a CentOS-specific guide, visit the How to check CentOS version guide.

Requirements

Privileged access to your RHEL system may be required.

Difficulty

EASY

Conventions

  • # – requires the given linux command to be executed with root privileges, either directly as the root user or by use of the sudo command
  • $ – requires the given linux command to be executed as a regular non-privileged user

Instructions

Using hostnamectl

hostnamectl is most likely the first and last command you need to execute to reveal your RHEL system version:

$ hostnamectl 
   Static hostname: localhost.localdomain
Transient hostname: status
         Icon name: computer-vm
           Chassis: vm
        Machine ID: d731df2da5f644b3b4806f9531d02c11
           Boot ID: 384b6cf4bcfc4df9b7b48efcad4b6280
    Virtualization: xen
  Operating System: Red Hat Enterprise Linux Server 7.3 (Maipo)
       CPE OS Name: cpe:/o:redhat:enterprise_linux:7.3:GA:server
            Kernel: Linux 3.10.0-514.el7.x86_64
      Architecture: x86-64

Query Release Package

Use the rpm command to query Redhat’s release package:

RHEL 7
$ rpm --query redhat-release-server
redhat-release-server-7.3-7.el7.x86_64
RHEL 8
$ rpm --query redhat-release
redhat-release-8.0-0.34.el8.x86_64

Common Platform Enumeration

Check Common Platform Enumeration source file:

$ cat /etc/system-release-cpe 
cpe:/o:redhat:enterprise_linux:7.3:ga:server

LSB Release

Depending on whether the redhat-lsb package is installed on your system, you may also use the lsb_release -d command to check Redhat’s system version:

$ lsb_release -d
Description:	Red Hat Enterprise Linux Server release 7.3 (Maipo)

Alternatively, install the redhat-lsb package with:

# yum install redhat-lsb


Check Release Files

There are a number of release files located in the /etc/ directory, namely os-release, redhat-release and system-release:

$ ls /etc/*release
os-release  redhat-release  system-release

Use cat to check the content of each file to reveal your Redhat OS version. Alternatively, use the below for loop for an instant check:

$ for i in $(ls /etc/*release); do echo ===$i===; cat $i; done

Depending on your RHEL version, the output of the above shell for loop may look different:

===os-release===
NAME="Red Hat Enterprise Linux Server"
VERSION="7.3 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="7.3"
PRETTY_NAME="Red Hat Enterprise Linux Server 7.3 (Maipo)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:7.3:GA:server"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"

REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.3
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="7.3"
===redhat-release===
Red Hat Enterprise Linux Server release 7.3 (Maipo)
===system-release===
Red Hat Enterprise Linux Server release 7.3 (Maipo)
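The fields in os-release can also be extracted programmatically. A minimal sketch, run here against a sample document rather than the real /etc/os-release so it works anywhere (parse_release is a made-up helper name):

```shell
# Print the unquoted value of a key from os-release-formatted input.
parse_release() {
    sed -n "s/^$1=\"\{0,1\}\([^\"]*\)\"\{0,1\}\$/\1/p"
}

# Sample standing in for /etc/os-release:
sample='NAME="Red Hat Enterprise Linux Server"
VERSION_ID="7.3"'

echo "$sample" | parse_release NAME        # Red Hat Enterprise Linux Server
echo "$sample" | parse_release VERSION_ID  # 7.3
```

On a real system you would pipe /etc/os-release into the same helper instead of the here-string.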

Grub Config

The least reliable way to check Redhat’s OS version is by looking at the GRUB configuration. It may not produce a definitive answer, but it provides some hints about how the system booted.

The default locations of the GRUB config files are /boot/grub2/grub.cfg and /etc/grub2.cfg. Use the grep command to check for the menuentry keyword:

# grep -w menuentry /boot/grub2/grub.cfg /etc/grub2.cfg

Another alternative is to check the value of the “GRUB Environment Block”:

# grep saved_entry /boot/grub2/grubenv 
saved_entry=Red Hat Enterprise Linux Server (3.10.0-514.el7.x86_64) 7.3 (Maipo)

Nginx server configuration

yum -y install make gcc gcc-c++ openssl openssl-devel pcre-devel zlib-devel

wget -c http://nginx.org/download/nginx-1.14.2.tar.gz

tar zxvf nginx-1.14.2.tar.gz

cd nginx-1.14.2

./configure --prefix=/usr/local/nginx

make && make install

cd /usr/local/nginx

./sbin/nginx

ps aux|grep nginx

Nginx load balancing configuration example

Load balancing is achieved either with dedicated hardware devices or with software algorithms. Hardware load balancing is effective, efficient and stable, but relatively expensive. Software load balancing depends mainly on the choice of balancing algorithm and the robustness of the program. Balancing algorithms are diverse, with two common families: static and dynamic. Static algorithms are relatively simple to implement and achieve good results in ordinary network environments; they mainly include plain round-robin, ratio-based weighted round-robin and priority-based weighted round-robin. Dynamic algorithms adapt better to complex network environments; they mainly include a least-connections-first algorithm based on task volume, a fastest-response-first algorithm based on performance, prediction algorithms and dynamic performance allocation algorithms.

The general principle of network load balancing is to use an allocation strategy to distribute network load evenly across the operating units of a cluster, so that a single heavy task can be split across multiple units for parallel processing, or a large volume of concurrent accesses or data traffic can be shared across multiple units, thereby reducing the user’s waiting time for a response.

Nginx server load balancing configuration
The Nginx server implements a static, priority-based weighted round-robin algorithm. The main directives involved are proxy_pass and upstream. The directives themselves are easy to understand; the key point is that Nginx configuration is flexible and diverse, so the question is how to configure load balancing while sensibly integrating other functions into a configuration that meets actual needs.
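A quick way to see how static weights translate into request share is to expand each "server:weight" pair into a dispatch cycle. This shell sketch is only an illustration (nginx interleaves the cycle more smoothly; dispatch_cycle is a made-up helper name):

```shell
# Expand each "server:weight" argument into a naive weighted
# round-robin cycle: each server appears `weight` times.
dispatch_cycle() {
    for entry in "$@"; do
        server=${entry%:*}     # text before the last colon
        weight=${entry##*:}    # text after the last colon
        i=0
        while [ "$i" -lt "$weight" ]; do
            printf '%s\n' "$server"
            i=$((i + 1))
        done
    done
}

# With weights 5, 2 and 1, eight consecutive requests split 5:2:1:
dispatch_cycle 192.168.1.2:5 192.168.1.3:2 192.168.1.4:1
```

The weight values correspond to the weight= parameters used in the upstream blocks below.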

The following are some basic example fragments. They cannot cover every configuration situation, but hopefully they can serve as a starting point; the rest comes from summarizing and accumulating experience in actual use. Points worth noting in the configuration are added as comments.

Configuration Example 1: Round-robin load balancing for all requests
In the fragment below, all servers in the backend group use the default weight=1, so they receive request tasks in turn according to the plain round-robin policy. This is the simplest configuration for Nginx load balancing: all requests for www.rmohan.com are balanced across the backend server group. The example code is as follows:

upstream backend # configure the backend server group
{
server 192.168.1.2:80;
server 192.168.1.3:80;
server 192.168.1.4:80; # default weight=1
}
server
{
listen 80;
server_name www.rmohan.com;
index index.html index.htm;
location / {
proxy_pass http://backend;
proxy_set_header Host $host;
}
}

Configuration Example 2: Weighted round-robin load balancing for all requests
Compared with Example 1, in this fragment the servers in the backend group are given different priority levels; the value of the weight parameter is the “weight” used in the round-robin policy. 192.168.1.2:80 has the highest weight and preferentially receives and processes client requests; 192.168.1.4:80 has the lowest weight and receives and processes the fewest requests; 192.168.1.3:80 sits in between. All requests for www.rmohan.com are balanced across the backend group with these weights. The example code is as follows:

upstream backend # configure the backend server group
{
server 192.168.1.2:80 weight=5;
server 192.168.1.3:80 weight=2;
server 192.168.1.4:80; # default weight=1
}
server
{
listen 80;
server_name www.rmohan.com;
index index.html index.htm;
location / {
proxy_pass http://backend;
proxy_set_header Host $host;
}
}

Configuration Example 3: Load balancing specific resources
In this fragment we set up two proxy server groups: one named videobackend for balancing client requests for video resources, and another named filebackend for requests for file resources. All requests for “http://www.mywebname/video/” are balanced across the videobackend group, and all requests for “http://www.mywebname/file/” across the filebackend group. This example shows plain load balancing; for weighted load balancing, refer to Example 2.

In the location /file/ {...} block, we populate the "Host", "X-Real-IP" and "X-Forwarded-For" header fields of the forwarded request with the client's real information, so the requests received by the backend group carry the real client information rather than the Nginx server's. The example code is as follows:

upstream videobackend # configure backend server group 1
{
server 192.168.1.2:80;
server 192.168.1.3:80;
server 192.168.1.4:80;
}
upstream filebackend # configure backend server group 2
{
server 192.168.1.5:80;
server 192.168.1.6:80;
server 192.168.1.7:80;
}
server
{
listen 80;
server_name www.rmohan.com;
index index.html index.htm;
location /video/ {
proxy_pass http://videobackend; # use backend server group 1
proxy_set_header Host $host;
}
location /file/ {
proxy_pass http://filebackend; # use backend server group 2
# retain the real information of the client
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}

Configuration Example 4: Load balancing different domain names
In this fragment we set up two virtual servers and two backend proxy server groups to receive requests for different domain names and balance them. If the client requests the domain “home.rmohan.com”, server1 receives it and forwards it to the homebackend group for load balancing; if the client requests “bbs.rmohan.com”, server2 receives it and forwards it to the bbsbackend group. This achieves load balancing across different domain names.

Note that the server 192.168.1.4:80 appears in both backend server groups. All resources of both domains must be deployed on this server, or client requests may run into problems. The example code is as follows:


upstream bbsbackend # configure backend server group 1
{
server 192.168.1.2:80 weight=2;
server 192.168.1.3:80 weight=2;
server 192.168.1.4:80;
}
upstream homebackend # configure backend server group 2
{
server 192.168.1.4:80;
server 192.168.1.5:80;
server 192.168.1.6:80;
}
# start configuring server 1
server
{
listen 80;
server_name home.rmohan.com;
index index.html index.htm;
location / {
proxy_pass http://homebackend;
proxy_set_header Host $host;
}
}
# start configuring server 2
server
{
listen 80;
server_name bbs.rmohan.com;
index index.html index.htm;
location / {
proxy_pass http://bbsbackend;
proxy_set_header Host $host;
}
}

Configuration Example 5: Load balancing with URL rewriting
First, let’s look at the source code; this is a modification of Example 1:


upstream backend # configure the backend server group
{
server 192.168.1.2:80;
server 192.168.1.3:80;
server 192.168.1.4:80; # default weight=1
}
server
{
listen 80;
server_name www.rmohan.com;
index index.html index.htm;

location /file/ {
rewrite ^(/file/.*)/media/(.*)\..*$ $1/mp3/$2.mp3 last;
}

location / {
proxy_pass http://backend;
proxy_set_header Host $host;
}
}
Compared with Example 1, this fragment adds a URL rewriting function for URIs containing “/file/”. For example, when the client requests the URL “http://www.rmohan.com/file/download/media/1.mp3”, the virtual server first uses the location /file/ {...} block to rewrite it and then forwards it to the backend server group for load balancing. In this way, load balancing combined with URL rewriting is easily implemented. In this configuration scheme, it is necessary to grasp the difference between the last flag and the break flag of the rewrite directive in order to achieve the expected effect.

The five configuration examples above show the basic methods for configuring Nginx load balancing under different conditions. Since the functions of the Nginx server are structured incrementally, we can continue to add more features on top of these configurations, such as web caching, gzip compression, identity authentication and rights management. At the same time, when configuring server groups with the upstream directive, each directive’s functions can be fully used to configure an Nginx server that meets the requirements and is efficient, stable and feature-rich.

Location configuration
Syntax: location [ = | ~ | ~* | ^~ ] uri { ... }
The uri is the request string to match; it can be a plain string without regular expressions, or a string containing a regular expression.
The optional bracketed modifier changes how the request string is matched against the uri:

    = placed before a standard uri, requires the request string to match the uri exactly; if the match succeeds, matching stops and the request is processed immediately

    ~ indicates that the uri contains a regular expression and is case-sensitive

    ~* indicates that the uri contains a regular expression and is case-insensitive

    ^~ requires finding the location with the highest match between the uri and the request string, and then processes the request with it (no further regular-expression matching)
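The modifiers above might be combined as follows (the paths and responses are illustrative, not from the original article):

```nginx
server {
    listen 80;
    # exact match: only "/" itself
    location = / { return 200 "exact\n"; }
    # prefix match protected from regex checks
    location ^~ /static/ { root /var/www; }
    # case-insensitive regex on image extensions
    location ~* \.(gif|jpg|png)$ { expires 30d; }
    # plain prefix match, lowest priority
    location / { proxy_pass http://backend; }
}
```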

Website error pages
1xx: Informational - the request has been received and processing continues
2xx: Success - the request has been successfully received, understood and accepted
3xx: Redirection - further action must be taken to complete the request
4xx: Client error - the request has a syntax error or cannot be fulfilled
5xx: Server error - the server failed to fulfill a legitimate request

HTTP status code meanings:
301 - Moved permanently: the requested data has a new location and the change is permanent
302 - Found: the requested data has temporarily changed location
400 - Bad request: the server can be reached, but the page cannot be found due to a problem with the address
404 - Not found: the website can be reached, but the page cannot be found
405 - Method not allowed: the website can be reached, but the page content cannot be downloaded because the request method is not allowed
500 - Internal server error: the page cannot be displayed because of a server problem
501 - Not implemented: the server does not support the functionality required to fulfill the request
505 - HTTP version not supported: the protocol version in the request is not supported

Common examples:
200 OK // the client request succeeded
400 Bad Request // the client request has a syntax error and cannot be understood by the server
401 Unauthorized // the request is unauthorized; this status code must be used with the WWW-Authenticate header field
403 Forbidden // the server received the request but refuses to provide the service
404 Not Found // the requested resource does not exist, e.g. a wrong URL was entered
500 Internal Server Error // an unexpected error occurred on the server
503 Server Unavailable // the server currently cannot process the client's request and may recover after a while
e.g. HTTP/1.1 200 OK (CRLF)
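The status-code classes listed above can be sketched as a small shell helper (http_class is a made-up name; the mapping follows the 1xx-5xx classes):

```shell
# http_class: print the class of an HTTP status code.
http_class() {
    case "$1" in
        1??) echo "informational" ;;
        2??) echo "success" ;;
        3??) echo "redirection" ;;
        4??) echo "client error" ;;
        5??) echo "server error" ;;
        *)   echo "unknown" ;;
    esac
}

http_class 200   # success
http_class 404   # client error
http_class 503   # server error
```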

Common configuration directives
1. error_log file | stderr [debug | info | notice | warn | error | crit | alert | emerg]
    debug: the most complete debugging output
    info: informational messages
    notice: noteworthy events
    warn: warnings about insignificant errors
    error: an error has occurred
    crit: serious errors affecting the normal operation of the service
    alert: very serious errors
    emerg: the most serious level
    The nginx log can be written to a file or to the standard error output (stderr). The option after the file is the log level; the levels run from debug (lowest) to emerg (highest), and setting a level also records everything at the more severe levels.

    2. user user [group]

    Configures the user (and group) that worker processes run as; omit the directive if any user should be able to start nginx.

    3. worker_processes number | auto

    Specifies how many worker processes nginx spawns;
    auto lets nginx detect the number automatically.

    4. pid file

    Specifies the file where the master process PID is stored,
    e.g. pid logs/nginx.pid. Take care to set the file name correctly, or nginx cannot find it.

    5. include file

    Includes another configuration file, merging other configuration in.

    6. accept_mutex on | off

    Serializes the acceptance of new network connections.

    7. multi_accept on | off

    Allows a worker to accept multiple new network connections at the same time.

    8. use method

    Selects the event-driven model.

    9. worker_connections number

    Configures the maximum number of connections each worker process may open; the default is 1024.

    10. mime-type

    Maps network resources to media (MIME) types.
    Format: default_type MIME-type

    11. access_log path [format [buffer=size]]

    Defines the server's access log.
    path: the path and name of the server log file
    format: optional, the name of a log format string defined with log_format
    size: the size of the in-memory buffer for temporarily storing log entries

    12. log_format name string ...;

    Used in combination with access_log, dedicated to defining the format of the server log,
    and gives the format a name so access_log can refer to it easily.

    name: the name of the format string; the default is combined
    string: the format string of the server log

log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for"';


    $remote_addr (e.g. 10.0.0.1) - the visitor's source address

    $remote_user - the authentication information shown when the nginx server prompts for authenticated access

    [$time_local] - the access time

    "$request" (e.g. "GET / HTTP/1.1") - the request line

    $status (e.g. 200) - the status information; 304 indicates the response was served from cache

    $body_bytes_sent (e.g. 256) - the size of the response body

    "$http_referer" - the referring link

    "$http_user_agent" - the client's browser information

    "$http_x_forwarded_for" - known as the XFF header; it carries the real IP of the HTTP requester (the client) and is only added when the request passes through an HTTP proxy or load balancing server. It is not a standard request header defined in the RFC; it can be found in the Squid cache proxy documentation.
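With the main format defined above, a matching access_log directive might look like this (the path and buffer size are illustrative):

```nginx
# Use the named "main" format for the access log, buffering
# 32k of entries in memory before writing them to disk.
access_log /var/log/nginx/access.log main buffer=32k;
```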

    13. sendfile on | off

    Configures whether files may be transferred in sendfile() mode.

    14. sendfile_max_chunk size

    Configures the maximum amount of data each worker process may transfer per sendfile() call.

    15. keepalive_timeout timeout [header_timeout]

    Configures the connection timeout.
    timeout: how long the server keeps the connection open
    header_timeout: sets the timeout reported in the Keep-Alive field of the response header

    16. keepalive_requests number

    Sets the maximum number of requests allowed on a single keep-alive connection.

    17. listen

    Configures network listening. There are three forms:

    Listening on an IP address:
    listen address[:port] [default_server] [setfib=number] [backlog=number] [rcvbuf=size] [sndbuf=size] [deferred]

    Listening on a port:
    listen port [default_server] [setfib=number] [backlog=number] [rcvbuf=size] [sndbuf=size] [accept_filter=filter] [deferred]

    Listening on a UNIX socket:
    listen unix:path [default_server] [backlog=number] [rcvbuf=size] [sndbuf=size] [accept_filter=filter] [deferred]

    address: the IP address
    port: the port number
    path: the socket file path
    default_server: identifier; makes this address:port the default virtual host
    setfib=number: only useful on FreeBSD; since version 0.8.44, associates the listening socket with a routing table
    backlog=number: sets the maximum queue of pending connections for listen(); defaults to -1 on FreeBSD and 511 elsewhere
    rcvbuf=size: the receive buffer size of the listening socket
    sndbuf=size: the send buffer size of the listening socket
    deferred: identifier; sets accept() to deferred mode
    accept_filter=filter: sets a filter on the listening port to filter requests; only useful on FreeBSD and NetBSD 5.0+
    bind: identifier; uses a separate bind() to handle this address:port
    ssl: identifier; sets the listening port to use SSL mode

    18. server_name name ...

    Configures name-based virtual hosts. When multiple server_name entries match, the priority is: an exact match first, then a wildcard at the beginning of the name, then a wildcard at the end, then a regular expression match. If a request matches several entries at the same priority in the above matching order, the first matching entry processes the request.

    19. root path

    After receiving a request, the server needs to find the requested resource in the directory specified by root;
    this path specifies the document root directory.

    20. alias path (used inside a location block)

    Changes the request path of the URI received by the location; it may contain variables.

    21. index file ...;

    Sets the default home page of the website.

    22. error_page code ... [=[response]] uri

    Sets the error page information.

        code: the http error code to handle
        response: optional; replaces the error code with a new response code
        uri: the path or website address of the error page

    23. allow address | CIDR | all

    Configures ip-based access permission.

    address: a single client ip that is allowed access (ranges are not supported here)
    CIDR: a network of allowed clients, such as 185.199.110.153/24
    all: all clients may access

    24. deny address | CIDR | all

    Configures ip-based access denial.

    address: a single client ip that is denied access (ranges are not supported here)
    CIDR: a network of denied clients, such as 185.199.110.153/24
    all: all clients are denied

    25. auth_basic string | off

    Configures password-based access to nginx.

    string: enables authentication and sets the realm shown in the prompt

    off: disables authentication

    26. auth_basic_user_file file

    Configures the password file used for password-based access to nginx.

    file: the password file; an absolute path is required
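Directives 23-26 are often combined; a small fragment sketching ip restrictions plus basic authentication (the /admin/ path and password-file location are illustrative; such a file can be created with htpasswd from the httpd-tools package):

```nginx
# Restrict /admin/ to one subnet AND require a password.
location /admin/ {
    allow 185.199.110.0/24;
    deny  all;
    auth_basic           "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;
}
```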

Docker installation builds Tomcat + MySQL


Virtual machine
Install Docker on the virtual machine and build on a clean CentOS image.
CentOS image preparation
Pull the CentOS image on the virtual machine: docker pull centos
Create a container running the CentOS image: docker run -it -d --name mycentos centos /bin/bash
Note: an error appears here [ WARNING: IPv4 forwarding is disabled. Networking will not work. ]

Change the virtual machine file: vim /usr/lib/sysctl.d/00-system.conf
Add the following content
net.ipv4.ip_forward=1
Restart the network: systemctl restart network
Note: another problem occurs here: systemctl cannot be used normally inside the docker container. The following solution was found on the official forums:

Link: https://forums.docker.com/t/systemctl-status-is-not-working-in-my-docker-container/9075/4

Run the image with the following command:
docker run --privileged -v /sys/fs/cgroup:/sys/fs/cgroup -it -d --name usr_sbin_init_centos centos /usr/sbin/init

1. Must have --privileged

2. Must have -v /sys/fs/cgroup:/sys/fs/cgroup

3. Replace /bin/bash with /usr/sbin/init

Finally, I was able to run a Centos image.

Install the JAVA environment
Prepare the JDK tarball to upload to the virtual machine
Put the tarball into the docker container using docker cp
docker cp jdk-11.0.2_linux-x64_bin.tar.gz 41dbc0fbdf3c:/

Usage is the same as the linux cp command, except that you must add the container's identifier: its id or name.

Extract the tar package
tar -xf jdk-11.0.2_linux-x64_bin.tar.gz -C /usr/local/java/jdk
Edit the /etc/profile file to export the Java environment variables:

export JAVA_HOME=/usr/local/java/jdk/jdk-11.0.2
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
Run source /etc/profile to make the environment variables take effect.
Test whether it succeeded:
java --version

result

java 11.0.2 2019-01-15 LTS
Java(TM) SE Runtime Environment 18.9 (build 11.0.2+9-LTS)
Java HotSpot(TM) 64-Bit Server VM 18.9 (build 11.0.2+9-LTS, mixed mode)
Install Tomcat
Prepare the tomcat tar package to upload to the virtual machine and cp to the docker container
Extract to
tar -xf apache-tomcat-8.5.38.tar.gz -C /usr/local/tomcat
Set Tomcat to start at boot via the rc.local file.

Add the following code to rc.local:

export JAVA_HOME=/usr/local/java/jdk/jdk-11.0.2
/usr/local/tomcat/apache-tomcat-8.5.38/bin/startup.sh
Open tomcat
/usr/local/tomcat/apache-tomcat-8.5.38/bin/ directory running
./startup.sh
Detection
curl localhost:8080

Install mysql
Get the MySQL yum repository:
wget -c http://dev.mysql.com/get/mysql57-community-release-el7-10.noarch.rpm
Install the above yum source
yum -y install mysql57-community-release-el7-10.noarch.rpm
Install MySQL with yum:
yum -y install mysql-community-server
Change the MySQL configuration in /etc/my.cnf:
validate_password=OFF # turn off password validation
character-set-server=utf8
collation-server=utf8_general_ci
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
Note: if initialization fails with "Initialize specified but the data directory has files in it", empty the data directory first; the pre-5.6 default timestamp behavior is deprecated, so that warning also needs to be silenced.

[client]

default-character-set=utf8
Get the initial MySQL password:
grep "password" /var/log/mysqld.log

[Note] A temporary password is generated for root@localhost: k:nT<dT,t4sF

Use this password to log in to mysql

Go to mysql and proceed

Enter

mysql -u root -p

Change the password:

ALTER USER 'root'@'localhost' IDENTIFIED BY '111111';

Enable remote access to MySQL (run against the mysql system database):

update user set host = '%' where user = 'root';
Test: from a physical machine, you can use Navicat to access MySQL inside docker.
Packaging the container
To publish on Docker Hub:

Commit the container as an image:

docker commit -a 'kane' -m 'test' container_id images_name:images_tag

dockerhub
docker push kane0725/tomcat
Local tarball export and import

Export the container to a tar package:

docker export -o test.tar a404c6c174a2

Import the tar package as an image:

docker import test.tar test_images
Use Dockerfile
Note: this only builds a Tomcat image.

Preparation
Create a dedicated folder and put the jdk and tomcat tarballs in it.
Create a Dockerfile in that directory, based on the CentOS image.
File content:
FROM centos
MAINTAINER tomcat mysql
ADD jdk-11.0.2 /usr/local/java
ENV JAVA_HOME /usr/local/java/
ADD apache-tomcat-8.5.38 /usr/local/tomcat8
EXPOSE 8080
RUN /usr/local/tomcat8/bin/startup.sh
Build output from docker build:

[root@localhost dockerfile]

# docker build -t tomcats:centos .
Sending build context to Docker daemon 505.8 MB
Step 1/7 : FROM centos
 ---> 1e1148e4cc2c
Step 2/7 : MAINTAINER tomcat mysql
 ---> Using cache
 ---> 889454b28f55
Step 3/7 : ADD jdk-11.0.2 /usr/local/java
 ---> Using cache
 ---> 8cad86ae7723
Step 4/7 : ENV JAVA_HOME /usr/local/java/
 ---> Running in 15d89d66adb4
 ---> 767983acfaca
Removing intermediate container 15d89d66adb4
Step 5/7 : ADD apache-tomcat-8.5.38 /usr/local/tomcat8
 ---> 4219d7d611ec
Removing intermediate container 3c2438ecf955
Step 6/7 : EXPOSE 8080
 ---> Running in 56c4e0c3b326
 ---> 7c5bd484168a
Removing intermediate container 56c4e0c3b326
Step 7/7 : RUN /usr/local/tomcat8/bin/startup.sh
 ---> Running in 7a73d0317db3

Tomcat started.
 ---> b53a6d54bf64
Removing intermediate container 7a73d0317db3
Successfully built b53a6d54bf64
Docker build problem
Be sure to include the trailing . (the build context) at the end of the command, otherwise it reports an error:
"docker build" requires exactly 1 argument(s).
Run a container:

docker run -it --name tomcats --restart always -p 1234:8080 tomcats /bin/bash

tomcat startup.sh

/usr/local/tomcat8/bin/startup.sh

result

Using CATALINA_BASE: /usr/local/tomcat8
Using CATALINA_HOME: /usr/local/tomcat8
Using CATALINA_TMPDIR: /usr/local/tomcat8/temp
Using JRE_HOME: /usr/local/java/
Using CLASSPATH: /usr/local/tomcat8/bin/bootstrap.jar:/usr/local/tomcat8/bin/tomcat-juli.jar
Tomcat started.
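A note on the RUN startup.sh step above: RUN executes only at build time, so the container still needs a foreground process when started (which is why /bin/bash plus a manual startup.sh is used above). A sketch of the usual alternative, mirroring what the official tomcat image does:

```dockerfile
# Start Tomcat in the foreground when the container runs;
# unlike RUN, CMD executes at `docker run` time.
CMD ["/usr/local/tomcat8/bin/catalina.sh", "run"]
```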
Use docker compose
Install docker compose
Official: https://docs.docker.com/compose/install/

I chose the pip installation:

pip install docker-compose

docker-compose --version

———————–

docker-compose version 1.23.2, build 1110ad0
Write docker-compose.yml

This yml file builds a MySQL container and a Tomcat container:

version: "3"
services:
  mysql:
    container_name: mysql
    image: mysql:5.7
    restart: always
    volumes:
      - ./mysql/data/:/var/lib/mysql/
      - ./mysql/conf/:/etc/mysql/mysql.conf.d/
    ports:
      - "6033:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=
  tomcat:
    container_name: tomcat
    restart: always
    image: tomcat
    ports:
      - 8080:8080
      - 8009:8009
    links:
      - mysql:m1 # connect to the database container
Note:

A volume must be a path; you cannot specify a single file.

Specifying an external conf directory for Tomcat never worked (reason unknown); it prompts:

tomcat | Feb 20, 2019 2:23:29 AM org.apache.catalina.startup.Catalina load
tomcat | WARNING: Unable to load server configuration from [/usr/local/tomcat/conf/server.xml]
tomcat | Feb 20, 2019 2:23:29 AM org.apache.catalina.startup.Catalina start
tomcat | SEVERE: Cannot start server. Server instance is not configured.
tomcat exited with code 1
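If you still want to externalize the Tomcat configuration, one workaround is to mount the whole conf directory rather than a single file. A sketch for the compose file above; the ./tomcat/conf host path is an assumption, and it must first be populated with a complete conf set (server.xml, web.xml, etc.) copied out of the image:

```yaml
# added under the tomcat service in docker-compose.yml (hypothetical host path)
    volumes:
      - ./tomcat/conf/:/usr/local/tomcat/conf/
```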
Run command
Note: Must be executed under the directory of the yml file

docker-compose up -d

---------- View docker containers ----------

[root@localhost docker-compose]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1a8a0165a3a8 tomcat “catalina.sh run” 7 seconds ago Up 6 seconds 0.0.0.0:8009->8009/tcp, 0.0.0.0:8080->8080/tcp tomcat
ddf081e87d67 mysql:5.7 “docker-entrypoint…” 7 seconds ago Up 7 seconds 33060/tcp, 0

Kubernetes install centos7

Kubeadm quickly builds a k8s cluster

Environment

Master01: 192.168.1.110 (minimum 2 core CPU)

node01: 192.168.1.100

planning

Services network: 10.96.0.0/12

Pod network: 10.244.0.0/16

  1. Configure hosts to resolve each host

vim /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.110 master01
192.168.1.100 node01

  2. Synchronize each host's time

yum install -y ntpdate
ntpdate time.windows.com

14 Mar 16:51:32 ntpdate[46363]: adjust time server 13.65.88.161 offset -0.001108 sec

  3. Close swap and disable SELinux

swapoff -a

vim /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled

  4. Install docker-ce

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache fast
yum -y install docker-ce

After installing Docker, this warning may appear: WARNING: bridge-nf-call-iptables is disabled

vim /etc/sysctl.conf

# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same name in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).

net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-arptables=1

systemctl enable docker && systemctl start docker

  5. Install kubernetes

cat <<EOF > /etc/yum.repos.d/kubernetes.repo

[kubernetes]

name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet

  6. Initialize the cluster

kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.13.1 --pod-network-cidr=10.244.0.0/16

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run “kubectl apply -f [podnetwork].yaml” with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join 192.168.1.110:6443 --token wgrs62.vy0trlpuwtm5jd75 --discovery-token-ca-cert-hash sha256:6e947e63b176acf976899483d41148609a6e109067ed6970b9fbca8d9261c8d0

  7. Manually deploy flannel

Flannel URL: https://github.com/coreos/flannel

For Kubernetes v1.7+:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

podsecuritypolicy.extensions/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

  8. Node placement

Install docker kubelet kubeadm

Docker installation is the same as step 4; kubelet and kubeadm installation are the same as step 5.

  9. Node joins the master

kubeadm join 192.168.1.110:6443 --token wgrs62.vy0trlpuwtm5jd75 --discovery-token-ca-cert-hash sha256:6e947e63b176acf976899483d41148609a6e109067ed6970b9fbca8d9261c8d0

kubectl get nodes    # view node status

NAME STATUS ROLES AGE VERSION
localhost.localdomain NotReady 130m v1.13.4
master01 Ready master 4h47m v1.13.4
node01 Ready 94m v1.13.4

kubectl get cs    # view component status

NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health": "true"}

kubectl get ns    # view namespaces

NAME STATUS AGE
default Active 4h41m
kube-public Active 4h41m
kube-system Active 4h41m

kubectl get pods -n kube-system    # view pod status

NAME READY STATUS RESTARTS AGE
coredns-78d4cf999f-bszbk 1/1 Running 0 4h44m
coredns-78d4cf999f-j68hb 1/1 Running 0 4h44m
etcd-master01 1/1 Running 0 4h43m
kube-apiserver-master01 1/1 Running 1 4h43m
kube-controller-manager-master01 1/1 Running 2 4h43m
kube-flannel-ds-amd64-27x59 1/1 Running 1 126m
kube-flannel-ds-amd64-5sxgk 1/1 Running 0 140m
kube-flannel-ds-amd64-xvrbw 1/1 Running 0 91m
kube-proxy-4pbdf 1/1 Running 0 91m
kube-proxy-9fmrl 1/1 Running 0 4h44m
kube-proxy-nwkl9 1/1 Running 0 126m
kube-scheduler-master01 1/1 Running 2 4h43m

Environment preparation: master01, node01, node02; connect them to the network, modify the hosts file, and confirm that the three hosts resolve each other.

vim /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.201 master01
192.168.1.202 node01
192.168.1.203 node02

Configure the Alibaba yum source on each host

mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup && curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

Start deploying kubernetes

  1. Install etcd on master01

yum install etcd -y

After the installation is complete, modify the etcd configuration file /etc/etcd/etcd.conf

vim /etc/etcd/etcd.conf

ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"

Modify the listening address to the local etcd address: ETCD_LISTEN_CLIENT_URLS="http://192.168.1.201:2379"

Set service startup

systemctl start etcd && systemctl enable etcd

  2. Install kubernetes on all hosts

yum install kubernetes -y

  3. Configure the master

vim /etc/kubernetes/config

KUBE_MASTER="--master=http://192.168.1.201:8080"   # modify the kube master address

vim /etc/kubernetes/apiserver

KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"   # modify the listening address

KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.1.201:2379"   # modify the etcd address

KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"   # delete the ServiceAccount authentication parameter

Set the service startup, start sequence apiserver>scheduler=controller-manager

systemctl start docker && systemctl enable docker
systemctl start kube-apiserver && systemctl enable kube-apiserver
systemctl start kube-scheduler && systemctl enable kube-scheduler
systemctl start kube-controller-manager && systemctl enable kube-controller-manager

  4. Configure the nodes

vim /etc/kubernetes/config

KUBE_MASTER="--master=http://192.168.1.201:8080"   # modify the master address

vim /etc/kubernetes/kubelet

KUBELET_ADDRESS="--address=192.168.1.202"   # modify the kubelet address
KUBELET_HOSTNAME="--hostname-override=192.168.1.202"   # modify the kubelet hostname
KUBELET_API_SERVER="--api-servers=http://192.168.1.201:8080"   # modify the apiserver address

Set service startup

systemctl start docker && systemctl enable docker
systemctl start kubelet && systemctl enable kubelet
systemctl start kube-proxy && systemctl enable kube-proxy

  5. Deployment is complete; check the cluster status

kubectl get nodes

[root@node02 kubernetes]

# kubectl -s http://192.168.1.201:8080 get nodes -o wide
NAME STATUS AGE EXTERNAL-IP
192.168.1.202 Ready 29s
192.168.1.203 Ready 16m

  6. Install flannel on all hosts

yum install flannel -y

vim /etc/sysconfig/flanneld

FLANNEL_ETCD_ENDPOINTS="http://192.168.1.201:2379"   # modify the etcd address

etcdctl mk /atomic.io/network/config '{ "Network": "172.16.0.0/16" }'   # set the container network in the etcd host
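To confirm the network configuration was written, a quick check against the same key path:

```shell
etcdctl get /atomic.io/network/config
```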

Master host restart service

systemctl start flanneld && systemctl enable flanneld
systemctl restart docker
systemctl restart kube-apiserver
systemctl restart kube-scheduler
systemctl restart kube-controller-manager

Node host restart service

systemctl start flanneld && systemctl enable flanneld
systemctl restart docker
systemctl restart kubelet
systemctl restart kube-proxy

pyenv + pyenv-virtualenv (CentOS 7)

Overview
This summarizes the installation on CentOS.
It differs slightly from the OS X setup because the underlying environment is different.

environment
I am using the following environment.

$ uname -r
3.10.0-229.14.1.el7.x86_64
$ lsb_release -a
LSB Version: :core-4.1-amd64:core-4.1-noarch:cxx-4.1-amd64:cxx-4.1-noarch:desktop-4.1-amd64:desktop-4.1-noarch:languages-4.1-amd64:languages-4.1-noarch:printing-4.1-amd64:printing-4.1-noarch
Distributor ID: CentOS
Description: CentOS Linux release 7.1.1503 (Core)
Release: 7.1.1503
Codename: Core

Installing packages

Installation of packages required for CentOS

yum -y install git
yum -y groupinstall “Development Tools”
yum -y install readline-devel zlib-devel bzip2-devel sqlite-devel openssl-devel

Installation of pyenv

git clone https://github.com/yyuu/pyenv.git ~/.pyenv
echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.bash_profile
echo 'export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.bash_profile
echo 'eval "$(pyenv init -)"' >> ~/.bash_profile
source ~/.bash_profile
exec $SHELL -l

pyenv-virtualenv

git clone https://github.com/yyuu/pyenv-virtualenv.git ~/.pyenv/plugins/pyenv-virtualenv
echo 'eval "$(pyenv virtualenv-init -)"' >> ~/.bash_profile
exec $SHELL -l

How to use pyenv

List installable distributions and versions:

pyenv install -l

Installation

Install the versions you want to use:

pyenv install 2.7.10
pyenv install 3.5.0

$ pyenv versions

* system (set by /home/saitou/.pyenv/version)
  2.7.10
  3.5.0

Changing the version to be used by default

Do not change the Python on the system side.
Below, the additionally installed 2.7.10 is made the default for this user's environment.

pyenv global 2.7.10
pip install -U pip

If you want to use the system-side Python, specify system:

$ python -V
Python 2.7.5
$ pyenv global 2.7.10
$ python -V
Python 2.7.10
$ pyenv global system
$ python -V
Python 2.7.5

How to use pyenv + pyenv-virtualenv

pyenv virtualenv

I want to use different versions for each project.
In this way you can allocate specific directories to specific versions and use them separately.

$ pyenv virtualenv 3.5.0 new_env
$ mkdir -p work/new_project && cd work/new_project/
$ pyenv versions
* system (set by /home/saitou/.pyenv/version)
  2.7.10
  3.5.0
  new_env
$ pyenv local new_env
pyenv-virtualenv: activate new_env
$ python -V
Python 3.5.0
$ cd ..
pyenv-virtualenv: deactivate new_env
$ python -V
Python 2.7.5
$ cd new_project/
pyenv-virtualenv: activate new_env
$ python -V
Python 3.5.0

Yellowdog Updater, Modified (yum)

Yum (Yellowdog Updater, Modified) is the RPM package management utility used on CentOS and Red Hat systems. Server patching is one of the important tasks of a Linux system administrator, keeping systems stable and performing well. Vendors release security and vulnerability patches frequently, and the affected packages must be updated in order to limit potential security risks.

The yum history command allows the administrator to roll back the system to a previous state, but due to some limitations rollback does not succeed in every situation: sometimes the yum command may do nothing, and sometimes it may remove packages you do not expect.

I advise you to take a full system backup before performing any update or upgrade; yum history is not a replacement for system backups. A system backup lets you restore the machine to an arbitrary point in time.

In some cases, what should you do if a patched application stops working or throws errors (possibly due to library incompatibilities or the package upgrade)? Talk to the application development team, pinpoint the problem library or package, then use the yum history command to roll back.

yum update
Loaded plugins: fastestmirror, security
Setting up Update Process
Loading mirror speeds from cached hostfile
epel/metalink | 12 kB 00:00

  • epel: mirror.csclub.uwaterloo.ca
    base | 3.7 kB 00:00
    dockerrepo | 2.9 kB 00:00
    draios | 2.9 kB 00:00
    draios/primary_db | 13 kB 00:00
    epel | 4.3 kB 00:00
    epel/primary_db | 5.9 MB 00:00
    extras | 3.4 kB 00:00
    updates | 3.4 kB 00:00
    updates/primary_db | 2.5 MB 00:00
    Resolving Dependencies
    --> Running transaction check
    ---> Package git.x86_64 0:1.7.1-8.el6 will be updated
    ---> Package git.x86_64 0:1.7.1-9.el6_9 will be an update
    ---> Package httpd.x86_64 0:2.2.15-60.el6.centos.4 will be updated
    ---> Package httpd.x86_64 0:2.2.15-60.el6.centos.5 will be an update
    ---> Package httpd-tools.x86_64 0:2.2.15-60.el6.centos.4 will be updated
    ---> Package httpd-tools.x86_64 0:2.2.15-60.el6.centos.5 will be an update
    ---> Package perl-Git.noarch 0:1.7.1-8.el6 will be updated
    ---> Package perl-Git.noarch 0:1.7.1-9.el6_9 will be an update
    --> Finished Dependency Resolution

Dependencies Resolved

=================================================================================================

Package Arch Version Repository Size

Updating:
git x86_64 1.7.1-9.el6_9 updates 4.6 M
httpd x86_64 2.2.15-60.el6.centos.5 updates 836 k
httpd-tools x86_64 2.2.15-60.el6.centos.5 updates 80 k
perl-Git noarch 1.7.1-9.el6_9 updates 29 k

Transaction Summary

Upgrade 4 Package(s)

Total download size: 5.5 M
Is this ok [y/N]: n

As you can see in the above output, a git package update is available, so we will take that one. Run the following command to see version information for the package (the currently installed version and the available update version).

yum list git
Loaded plugins: fastestmirror, security
Setting up Update Process
Loading mirror speeds from cached hostfile

  • epel: mirror.csclub.uwaterloo.ca
    Installed Packages
    git.x86_64 1.7.1-8.el6 @base
    Available Packages
    git.x86_64

Run the following command to update git package from 1.7.1-8 to 1.7.1-9.

yum update git

Loaded plugins: fastestmirror, presto
Setting up Update Process
Loading mirror speeds from cached hostfile

  • base: repos.lax.quadranet.com
  • epel: fedora.mirrors.pair.com
  • extras: mirrors.seas.harvard.edu
  • updates: mirror.sesp.northwestern.edu
    Resolving Dependencies
    --> Running transaction check
    ---> Package git.x86_64 0:1.7.1-8.el6 will be updated
    --> Processing Dependency: git = 1.7.1-8.el6 for package: perl-Git-1.7.1-8.el6.noarch
    ---> Package git.x86_64 0:1.7.1-9.el6_9 will be an update
    --> Running transaction check
    ---> Package perl-Git.noarch 0:1.7.1-8.el6 will be updated
    ---> Package perl-Git.noarch 0:1.7.1-9.el6_9 will be an update
    --> Finished Dependency Resolution

Dependencies Resolved

=================================================================================================

Package Arch Version Repository Size

Updating:
git x86_64 1.7.1-9.el6_9 updates 4.6 M
Updating for dependencies:
perl-Git noarch 1.7.1-9.el6_9 updates 29 k

Transaction Summary

Upgrade 2 Package(s)

Total download size: 4.6 M
Is this ok [y/N]: y
Downloading Packages:
Setting up and reading Presto delta metadata
Processing delta metadata
Package(s) data still to download: 4.6 M
(1/2): git-1.7.1-9.el6_9.x86_64.rpm | 4.6 MB 00:00

(2/2): perl-Git-1.7.1-9.el6_9.noarch.rpm | 29 kB 00:00

Total 5.8 MB/s | 4.6 MB 00:00
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Updating : perl-Git-1.7.1-9.el6_9.noarch 1/4
Updating : git-1.7.1-9.el6_9.x86_64 2/4
Cleanup : perl-Git-1.7.1-8.el6.noarch 3/4
Cleanup : git-1.7.1-8.el6.x86_64 4/4
Verifying : git-1.7.1-9.el6_9.x86_64 1/4
Verifying : perl-Git-1.7.1-9.el6_9.noarch 2/4
Verifying : git-1.7.1-8.el6.x86_64 3/4
Verifying : perl-Git-1.7.1-8.el6.noarch 4/4

Updated:
git.x86_64 0:1.7.1-9.el6_9

Dependency Updated:
perl-Git.noarch 0:1.7.1-9.el6_9

Complete!

Verify updated version of git package.

yum list git

Installed Packages
git.x86_64 1.7.1-9.el6_9 @updates

or

rpm -q git

git-1.7.1-9.el6_9.x86_64

As of now, we have successfully completed the package update and have a transaction to roll back. Just follow the steps below for the rollback mechanism.

First, get the yum transaction ID with the following command. The output shows all the required information: the transaction ID, the user who ran the transaction, date and time, actions (install or update), and how many packages were altered in the transaction.

yum history

or

yum history list all

Loaded plugins: fastestmirror, presto

ID | Login user | Date and time | Action(s) | Altered

13 | root               | 2017-08-18 13:30 | Update         |    2
12 | root               | 2017-08-10 07:46 | Install        |    1
11 | root               | 2017-07-28 17:10 | E, I, U        |   28 EE
10 | root               | 2017-04-21 09:16 | E, I, U        |  162 EE
 9 | root               | 2017-02-09 17:09 | E, I, U        |   20 EE
 8 | root               | 2017-02-02 10:45 | Install        |    1
 7 | root               | 2016-12-15 06:48 | Update         |    1
 6 | root               | 2016-12-15 06:43 | Install        |    1
 5 | root               | 2016-12-02 10:28 | E, I, U        |   23 EE
 4 | root               | 2016-10-28 05:37 | E, I, U        |   13 EE
 3 | root               | 2016-10-18 12:53 | Install        |    1
 2 | root               | 2016-09-30 10:28 | E, I, U        |   31 EE
 1 | root               | 2016-07-26 11:40 | E, I, U        |  160 EE

The above output shows that two packages were altered, because git also updated its dependency package perl-Git. Run the following command to view detailed information about the transaction.

yum history info 13

Loaded plugins: fastestmirror, presto
Transaction ID : 13
Begin time : Fri Aug 18 13:30:52 2017
Begin rpmdb : 420:f5c5f9184f44cf317de64d3a35199e894ad71188
End time : 13:30:54 2017 (2 seconds)
End rpmdb : 420:d04a95c25d4526ef87598f0dcaec66d3f99b98d4
User : root
Return-Code : Success
Command Line : update git
Transaction performed with:
Installed rpm-4.8.0-55.el6.x86_64 @base
Installed yum-3.2.29-81.el6.centos.noarch @base
Installed yum-plugin-fastestmirror-1.1.30-40.el6.noarch @base
Installed yum-presto-0.6.2-1.el6.noarch @anaconda-CentOS-201207061011.x86_64/6.3
Packages Altered:
Updated git-1.7.1-8.el6.x86_64 @base
Update 1.7.1-9.el6_9.x86_64 @updates
Updated perl-Git-1.7.1-8.el6.noarch @base
Update 1.7.1-9.el6_9.noarch @updates

Run the following command to roll back the git package to the previous version.

yum history undo 13

Loaded plugins: fastestmirror, presto
Undoing transaction 13, from Fri Aug 18 13:30:52 2017
Updated git-1.7.1-8.el6.x86_64 @base
Update 1.7.1-9.el6_9.x86_64 @updates
Updated perl-Git-1.7.1-8.el6.noarch @base
Update 1.7.1-9.el6_9.noarch @updates
Loading mirror speeds from cached hostfile

  • base: repos.lax.quadranet.com
  • epel: fedora.mirrors.pair.com
  • extras: repo1.dal.innoscale.net
  • updates: mirror.vtti.vt.edu
    Resolving Dependencies
    --> Running transaction check
    ---> Package git.x86_64 0:1.7.1-8.el6 will be a downgrade
    ---> Package git.x86_64 0:1.7.1-9.el6_9 will be erased
    ---> Package perl-Git.noarch 0:1.7.1-8.el6 will be a downgrade
    ---> Package perl-Git.noarch 0:1.7.1-9.el6_9 will be erased
    --> Finished Dependency Resolution

Dependencies Resolved

=================================================================================================

Package Arch Version Repository Size

Downgrading:
git x86_64 1.7.1-8.el6 base 4.6 M
perl-Git noarch 1.7.1-8.el6 base 29 k

Transaction Summary

Downgrade 2 Package(s)

Total download size: 4.6 M
Is this ok [y/N]: y
Downloading Packages:
Setting up and reading Presto delta metadata
Processing delta metadata
Package(s) data still to download: 4.6 M
(1/2): git-1.7.1-8.el6.x86_64.rpm | 4.6 MB 00:00

(2/2): perl-Git-1.7.1-8.el6.noarch.rpm | 29 kB 00:00

Total 3.4 MB/s | 4.6 MB 00:01
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : perl-Git-1.7.1-8.el6.noarch 1/4
Installing : git-1.7.1-8.el6.x86_64 2/4
Cleanup : perl-Git-1.7.1-9.el6_9.noarch 3/4
Cleanup : git-1.7.1-9.el6_9.x86_64 4/4
Verifying : git-1.7.1-8.el6.x86_64 1/4
Verifying : perl-Git-1.7.1-8.el6.noarch 2/4
Verifying : git-1.7.1-9.el6_9.x86_64 3/4
Verifying : perl-Git-1.7.1-9.el6_9.noarch 4/4

Removed:
git.x86_64 0:1.7.1-9.el6_9 perl-Git.noarch 0:1.7.1-9.el6_9

Installed:
git.x86_64 0:1.7.1-8.el6 perl-Git.noarch 0:1.7.1-8.el6

Complete!
After rollback, re-check the downgraded package version with rpm -q git (or yum list git).

Rollback Updates using YUM downgrade command
Alternatively, we can roll back an update using the yum downgrade command.

yum downgrade git-1.7.1-8.el6 perl-Git-1.7.1-8.el6

Loaded plugins: search-disabled-repos, security, ulninfo
Setting up Downgrade Process
Resolving Dependencies
--> Running transaction check
---> Package git.x86_64 0:1.7.1-8.el6 will be a downgrade
---> Package git.x86_64 0:1.7.1-9.el6_9 will be erased
---> Package perl-Git.noarch 0:1.7.1-8.el6 will be a downgrade
---> Package perl-Git.noarch 0:1.7.1-9.el6_9 will be erased
--> Finished Dependency Resolution

Dependencies Resolved

=================================================================================================

Package Arch Version Repository Size

Downgrading:
git x86_64 1.7.1-8.el6 base 4.6 M
perl-Git noarch 1.7.1-8.el6 base 29 k

Transaction Summary

Downgrade 2 Package(s)

Total download size: 4.6 M
Is this ok [y/N]: y
Downloading Packages:
(1/2): git-1.7.1-8.el6.x86_64.rpm | 4.6 MB 00:00

(2/2): perl-Git-1.7.1-8.el6.noarch.rpm | 28 kB 00:00

Total 3.7 MB/s | 4.6 MB 00:01
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : perl-Git-1.7.1-8.el6.noarch 1/4
Installing : git-1.7.1-8.el6.x86_64 2/4
Cleanup : perl-Git-1.7.1-9.el6_9.noarch 3/4
Cleanup : git-1.7.1-9.el6_9.x86_64 4/4
Verifying : git-1.7.1-8.el6.x86_64 1/4
Verifying : perl-Git-1.7.1-8.el6.noarch 2/4
Verifying : git-1.7.1-9.el6_9.x86_64 3/4
Verifying : perl-Git-1.7.1-9.el6_9.noarch 4/4

Removed:
git.x86_64 0:1.7.1-9.el6_9 perl-Git.noarch 0:1.7.1-9.el6_9

Installed:
git.x86_64 0:1.7.1-8.el6 perl-Git.noarch 0:1.7.1-8.el6

Complete!
Note: You must downgrade the dependency packages too; otherwise the command will remove the current version of the dependency packages instead of downgrading them, because the downgrade alone cannot satisfy the dependency.

For Fedora Users
Use the same commands as above, changing the package manager from yum to dnf.

dnf list git

dnf history

dnf history info

dnf history undo

dnf list git

dnf downgrade git-1.7.1-8.el6 perl-Git-1.7.1-8.el6


How to Backup and Restore RabbitMQ Data & Configurations

In this post, I'd like us to look at how to perform a backup of RabbitMQ configurations and data. This also includes information on restoring a RabbitMQ backup into a new deployment.

Get Cluster Status

$ rabbitmqctl cluster_status
Cluster status of node rabbit@computingforgeeks-centos7 ...
[{nodes,[{disc,['rabbit@computingforgeeks-centos7']}]},
{running_nodes,['rabbit@computingforgeeks-centos7']},
{cluster_name,<<"rabbit@computingforgeeks-centos7">>},
{partitions,[]},
{alarms,[{'rabbit@computingforgeeks-centos7',[]}]}]
How to Backup RabbitMQ Configurations
Please note this backup doesn't include messages, since they are stored in a separate message store. It only backs up RabbitMQ users, vhosts, queues, exchanges, and bindings. The backup file is a JSON representation of RabbitMQ metadata. We will do the backup using the rabbitmqadmin command line tool.

The management plugin ships with a command line tool rabbitmqadmin. You need to enable the management plugin:

rabbitmq-plugins enable rabbitmq_management
This plugin performs some of the same actions as the web-based UI, which may be more convenient for automation tasks.

Download rabbitmqadmin
Once you enable the management plugin, download the rabbitmqadmin Python command line tool, which interacts with the HTTP API. It can be downloaded from any RabbitMQ node that has the management plugin enabled, at:

http://{node-hostname}:15672/cli/
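For example, fetched from the local node (localhost is an assumption; substitute your node's hostname):

```shell
# download the rabbitmqadmin script from the management plugin's HTTP endpoint
curl -o rabbitmqadmin http://localhost:15672/cli/rabbitmqadmin
```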
Once downloaded, make the file executable and move it to /usr/local/bin directory:

chmod +x rabbitmqadmin
sudo mv rabbitmqadmin /usr/local/bin
To backup RabbitMQ configurations, use the command:

rabbitmqadmin export
Example:

$ rabbitmqadmin export rabbitmq-backup-config.json
Exported definitions for localhost to “rabbitmq-backup-config.json”
The export will be written to the file rabbitmq-backup-config.json.

How to Restore RabbitMQ Configurations backup
If you ever want to restore your RabbitMQ configurations from a backup, use the command:

rabbitmqadmin import
Example

$ rabbitmqadmin import rabbitmq-backup.json
Imported definitions for localhost from “rabbitmq-backup.json”
How to Backup RabbitMQ Data
RabbitMQ Definitions and Messages are stored in an internal database located in the node’s data directory. To get the directory path, run the following command against a running RabbitMQ node:

rabbitmqctl eval 'rabbit_mnesia:dir().'

Sample output:

"/var/lib/rabbitmq/mnesia/rabbit@computingforgeeks-server1"
This directory contains many files:

ls /var/lib/rabbitmq/mnesia/rabbit@computingforgeeks-centos7

cluster_nodes.config nodes_running_at_shutdown rabbit_durable_route.DCD rabbit_user.DCD schema.DAT
DECISION_TAB.LOG rabbit_durable_exchange.DCD rabbit_runtime_parameters.DCD rabbit_user_permission.DCD schema_version
LATEST.LOG rabbit_durable_exchange.DCL rabbit_serial rabbit_vhost.DCD
msg_stores rabbit_durable_queue.DCD rabbit_topic_permission.DCD rabbit_vhost.DCL
In RabbitMQ versions starting with 3.7.0 all messages data is combined in the msg_stores/vhosts directory and stored in a subdirectory per vhost. Each vhost directory is named with a hash and contains a .vhost file with the vhost name, so a specific vhost’s message set can be backed up separately.

To back up RabbitMQ definitions and message data, copy or archive this directory and its contents. But first, stop the RabbitMQ service:

sudo systemctl stop rabbitmq-server.service
The example below will create an archive:

tar cvf rabbitmq-backup.tgz /var/lib/rabbitmq/mnesia/rabbit@computingforgeeks-centos7
How to Restore RabbitMQ Data
To restore from a backup, extract the files from the backup archive into the data directory.
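A sketch of the restore, assuming the archive was created with the tar command above (tar strips the leading "/" when archiving, so paths inside it are relative to /) and the service is stopped first:

```shell
sudo systemctl stop rabbitmq-server.service
# extract from / so the files land back in /var/lib/rabbitmq/mnesia/...
sudo tar xvf rabbitmq-backup.tgz -C /
sudo systemctl start rabbitmq-server.service
```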

The internal node database stores the node's name in certain records. If the node name changes, the database must first be updated to reflect the change, using the following rabbitmqctl command:

rabbitmqctl rename_cluster_node
When a new node starts with a backed up directory and a matching node name, it should perform the upgrade steps as needed and proceed to boot.

Installing RabbitMQ on CentOS 7

I’ll take you through the installation of RabbitMQ on CentOS 7 / Fedora 29 / Fedora 28.

RabbitMQ is an open source message broker software that implements the Advanced Message Queuing Protocol (AMQP).

It receives messages from publishers (applications that publish them) and routes them to consumers (applications that process them).

Follow the steps below to install RabbitMQ on Fedora 29 / Fedora 28.

Step 1: Install Erlang
Before installing RabbitMQ, you must install a supported version of Erlang/OTP. The version of Erlang package available on EPEL repository should be sufficient.

sudo dnf -y install erlang
Confirm the installation by running the erl command:

$ erl
Erlang/OTP 20 [erts-9.3.3.3] [source] [64-bit] [smp:1:1] [ds:1:1:10] [async-threads:10] [hipe] [kernel-poll:false]

Eshell V9.3.3.3 (abort with ^G)
1>
Step 2: Add PackageCloud Yum Repository
A Yum repository with RabbitMQ packages is available from PackageCloud.

Create a new Repository file for RabbitMQ.

sudo vim /etc/yum.repos.d/rabbitmq_rabbitmq-server.repo
Add:

[rabbitmq_rabbitmq-server]

name=rabbitmq_rabbitmq-server
baseurl=https://packagecloud.io/rabbitmq/rabbitmq-server/el/7/$basearch
repo_gpgcheck=1
gpgcheck=0
enabled=1
gpgkey=https://packagecloud.io/rabbitmq/rabbitmq-server/gpgkey
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
metadata_expire=300

[rabbitmq_rabbitmq-server-source]

name=rabbitmq_rabbitmq-server-source
baseurl=https://packagecloud.io/rabbitmq/rabbitmq-server/el/7/SRPMS
repo_gpgcheck=1
gpgcheck=0
enabled=1
gpgkey=https://packagecloud.io/rabbitmq/rabbitmq-server/gpgkey
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
metadata_expire=300
Step 3: Install RabbitMQ on Fedora 29 / Fedora 28
The last step is the actual installation of RabbitMQ:

sudo dnf makecache -y --disablerepo='*' --enablerepo='rabbitmq_rabbitmq-server'
sudo dnf -y install rabbitmq-server
Confirm version of RabbitMQ installed:

$ rpm -qi rabbitmq-server
Name : rabbitmq-server
Version : 3.7.8
Release : 1.el7
Architecture: noarch
Install Date: Thu 15 Nov 2018 01:32:16 PM UTC
Group : Development/Libraries
Size : 10858832
License : MPLv1.1 and MIT and ASL 2.0 and BSD
Signature : RSA/SHA1, Thu 20 Sep 2018 03:32:57 PM UTC, Key ID 6b73a36e6026dfca
Source RPM : rabbitmq-server-3.7.8-1.el7.src.rpm
Build Date : Thu 20 Sep 2018 03:32:56 PM UTC
Build Host : 17dd9d9d-9199-4429-59e6-dc265f3581e9
Relocations : (not relocatable)
URL : http://www.rabbitmq.com/
Summary : The RabbitMQ server
Description :
RabbitMQ is an open source multi-protocol messaging broker.
Step 4: Start RabbitMQ Service
Now that you have RabbitMQ installed on your Fedora, start and enable the service to start on system boot.

sudo systemctl start rabbitmq-server
sudo systemctl enable rabbitmq-server

Step 5: Enable the RabbitMQ Management Dashboard (Optional)
You can optionally enable the RabbitMQ Management Web dashboard for easy management.

sudo rabbitmq-plugins enable rabbitmq_management
The Web service should be listening on TCP port 15672

ss -tunelp | grep 15672

tcp LISTEN 0 128 0.0.0.0:15672 0.0.0.0:* users:((“beam.smp”,pid=9525,fd=71)) uid:111 ino:39934 sk:9 <->
If you have an active Firewalld service, allow ports 5672 and 15672

sudo firewall-cmd --add-port={5672,15672}/tcp --permanent

By default, the guest user exists and can connect only from localhost. You can log in with this user locally with the password “guest”

To be able to login on the network, create an admin user like below:

rabbitmqctl add_user admin StrongPassword
rabbitmqctl set_user_tags admin administrator
Log in with this admin username and the password you assigned.

RabbitMQ User Management Commands
Delete User:

rabbitmqctl delete_user user
Change User Password:

rabbitmqctl change_password user strongpassword
Create new Virtualhost:

rabbitmqctl add_vhost /my_vhost
List available Virtualhosts:

rabbitmqctl list_vhosts
Delete a virtualhost:

rabbitmqctl delete_vhost /myvhost
Grant user permissions for vhost:

rabbitmqctl set_permissions -p /myvhost user ".*" ".*" ".*"
List vhost permissions:

rabbitmqctl list_permissions -p /myvhost
To list user permissions:

rabbitmqctl list_user_permissions user
Delete user permissions:

rabbitmqctl clear_permissions -p /myvhost user
Reload firewalld for the new rule to take effect:

sudo firewall-cmd --reload

Access the dashboard by opening the URL http://[server IP|Hostname]:15672

Ansible YAML files in Vim

To get consistent two-space indentation for Ansible YAML files in Vim, and to highlight the cursor column (handy for checking YAML alignment), add to your vimrc:

autocmd FileType yaml setlocal ai ts=2 sw=2 et
set cursorcolumn

iptables tips and tricks

Tip #1: Take a backup of your iptables configuration before you start working on it.

Back up your configuration with the command:

/sbin/iptables-save > /root/iptables-works

Tip #2: Even better, include a timestamp in the filename.

Add the timestamp with the command:

/sbin/iptables-save > /root/iptables-works-`date +%F`

You get a file with a name like:

/root/iptables-works-2018-09-11
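Generating the date-stamped path once, in a variable, keeps the backup and any later restore in agreement. A minimal sketch (it writes under /tmp so it can run unprivileged; the actual iptables-save call is shown commented out since it needs root):

```shell
# Build a date-stamped backup path; $(date +%F) expands to YYYY-MM-DD.
backup="/tmp/iptables-works-$(date +%F)"
# On a real system, as root, you would then run:
#   /sbin/iptables-save > "$backup"
echo "$backup"
```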

If you do something that prevents your system from working, you can quickly restore it:

/sbin/iptables-restore < /root/iptables-works-2018-09-11
Tip #3: Keep a symlink pointing at the latest backup.

ln -s /root/iptables-works-`date +%F` /root/iptables-works-latest

Tip #4: Put specific rules at the top of the policy and generic rules at the bottom.

Avoid generic rules like this at the top of the policy rules:

iptables -A INPUT -p tcp --dport 22 -j DROP

The more criteria you specify in the rule, the less chance you will have of locking yourself out. Instead of the very generic rule above, use something like this:

iptables -A INPUT -p tcp --dport 22 -s 10.0.0.0/8 -d 192.168.100.101 -j DROP

This rule appends (-A) to the INPUT chain a rule that will DROP any packets originating from the CIDR block 10.0.0.0/8 on TCP (-p tcp) port 22 (--dport 22) destined for IP address 192.168.100.101 (-d 192.168.100.101).

There are plenty of ways you can be more specific. For example, using -i eth0 will limit the processing to a single NIC in your server. This way, the filtering actions will not apply the rule to eth1.

Tip #5: Whitelist your IP address at the top of your policy rules.

This is a very effective method of not locking yourself out. Everybody else, not so much.

iptables -I INPUT -s <your IP> -j ACCEPT

You need to put this as the first rule for it to work properly. Remember, -I inserts it as the first rule; -A appends it to the end of the list.
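The difference between -I and -A is purely list position, and you can make that visible without touching a live firewall. A small simulation, where a bash array stands in for the chain and plain strings stand in for rules:

```shell
# Simulate a chain: -A appends to the end, -I (with no index) inserts at position 1.
chain=()
chain+=("DROP tcp dpt:22")                      # appended, like iptables -A
chain=("ACCEPT src <your IP>" "${chain[@]}")    # inserted at the top, like iptables -I
printf '%s\n' "${chain[@]}"
# The ACCEPT rule prints first, so it matches before the DROP rule.
```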

Tip #6: Know and understand all the rules in your current policy.

Not making a mistake in the first place is half the battle. If you understand the inner workings behind your iptables policy, it will make your life easier. Draw a flowchart if you must. Also remember: What the policy does and what it is supposed to do can be two different things.

Set up a workstation firewall policy

Scenario: You want to set up a workstation with a restrictive firewall policy.

Tip #1: Set the default policy as DROP.

# Set a default policy of DROP
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT DROP [0:0]

Tip #2: Allow users the minimum amount of services needed to get their work done.

The iptables rules need to allow the workstation to get an IP address, netmask, and other important information via DHCP (-p udp --dport 67:68 --sport 67:68). For remote management, the rules need to allow inbound SSH (--dport 22), outbound mail (--dport 25), DNS (--dport 53), outbound ping (-p icmp), Network Time Protocol (--dport 123 --sport 123), and outbound HTTP (--dport 80) and HTTPS (--dport 443).

# Set a default policy of DROP
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT DROP [0:0]

# Accept any related or established connections
-I INPUT  1 -m state --state RELATED,ESTABLISHED -j ACCEPT
-I OUTPUT 1 -m state --state RELATED,ESTABLISHED -j ACCEPT

# Allow all traffic on the loopback interface
-A INPUT -i lo -j ACCEPT
-A OUTPUT -o lo -j ACCEPT

# Allow outbound DHCP request
-A OUTPUT -o eth0 -p udp --dport 67:68 --sport 67:68 -j ACCEPT

# Allow inbound SSH
-A INPUT -i eth0 -p tcp -m tcp --dport 22 -m state --state NEW -j ACCEPT

# Allow outbound email
-A OUTPUT -o eth0 -p tcp -m tcp --dport 25 -m state --state NEW -j ACCEPT

# Outbound DNS lookups
-A OUTPUT -o eth0 -p udp -m udp --dport 53 -j ACCEPT

# Outbound PING requests
-A OUTPUT -o eth0 -p icmp -j ACCEPT

# Outbound Network Time Protocol (NTP) requests
-A OUTPUT -o eth0 -p udp --dport 123 --sport 123 -j ACCEPT

# Outbound HTTP
-A OUTPUT -o eth0 -p tcp -m tcp --dport 80 -m state --state NEW -j ACCEPT
-A OUTPUT -o eth0 -p tcp -m tcp --dport 443 -m state --state NEW -j ACCEPT

COMMIT

Restrict an IP address range

Scenario: The CEO of your company thinks the employees are spending too much time on Facebook and not getting any work done. The CEO tells the CIO to do something about the employees wasting time on Facebook. The CIO tells the CISO to do something about employees wasting time on Facebook. Eventually, you are told the employees are wasting too much time on Facebook, and you have to do something about it. You decide to block all access to Facebook. First, find out Facebook’s IP address by using the host and whois commands.

host -t a www.facebook.com
www.facebook.com is an alias for star.c10r.facebook.com.
star.c10r.facebook.com has address 31.13.65.17
whois 31.13.65.17 | grep inetnum
inetnum:        31.13.64.0 - 31.13.127.255

Then convert that range to CIDR notation by using the CIDR to IPv4 Conversion page. You get 31.13.64.0/18. To prevent outgoing access to www.facebook.com, enter:

iptables -A OUTPUT -p tcp -o eth0 -d 31.13.64.0/18 -j DROP
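You can sanity-check that CIDR conversion yourself: a /18 keeps the top 18 bits of the address, so the network containing 31.13.65.17 should come out as 31.13.64.0. A minimal sketch in bash arithmetic (ip_to_net is a hypothetical helper, not an iptables feature):

```shell
# Compute the network address of an IPv4 address for a given prefix length.
ip_to_net() {
  local ip=$1 prefix=$2 a b c d n mask
  IFS=. read -r a b c d <<< "$ip"
  n=$(( (a << 24) | (b << 16) | (c << 8) | d ))        # pack octets into 32 bits
  mask=$(( 0xFFFFFFFF << (32 - prefix) & 0xFFFFFFFF )) # e.g. /18 -> 0xFFFFC000
  n=$(( n & mask ))                                    # zero the host bits
  echo "$(( (n >> 24) & 255 )).$(( (n >> 16) & 255 )).$(( (n >> 8) & 255 )).$(( n & 255 ))"
}
ip_to_net 31.13.65.17 18   # → 31.13.64.0
```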

Regulate by time

Scenario: The backlash from the company’s employees over denying access to Facebook access causes the CEO to relent a little (that and his administrative assistant’s reminding him that she keeps HIS Facebook page up-to-date). The CEO decides to allow access to Facebook.com only at lunchtime (12PM to 1PM). Assuming the default policy is DROP, use iptables’ time features to open up access.

iptables -A OUTPUT -p tcp -m multiport --dports http,https -o eth0 -m time --timestart 12:00 --timestop 13:00 -d 31.13.64.0/18 -j ACCEPT

This command sets the policy to allow (-j ACCEPT) http and https (-m multiport --dports http,https) between noon (--timestart 12:00) and 1PM (--timestop 13:00) to Facebook.com (-d 31.13.64.0/18).

Regulate by time—Take 2

Scenario: During planned downtime for system maintenance, you need to deny all TCP and UDP traffic between the hours of 2AM and 3AM so maintenance tasks won’t be disrupted by incoming traffic. This will take two iptables rules:

iptables -A INPUT -p tcp -m time --timestart 02:00 --timestop 03:00 -j DROP
iptables -A INPUT -p udp -m time --timestart 02:00 --timestop 03:00 -j DROP

With these rules, TCP and UDP traffic (-p tcp and -p udp) are denied (-j DROP) between the hours of 2AM (--timestart 02:00) and 3AM (--timestop 03:00) on input (-A INPUT).
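The same window logic is easy to mirror in a script, for example so a cron job only acts during the maintenance window. A minimal sketch (in_window is a hypothetical helper; it relies on zero-padded HH:MM strings comparing correctly as text):

```shell
# Succeed if $1 (HH:MM) falls in [start, stop) -- same-day window,
# mirroring the semantics of iptables -m time --timestart/--timestop.
in_window() {
  local now=$1 start=$2 stop=$3
  ! [[ "$now" < "$start" ]] && [[ "$now" < "$stop" ]]
}
in_window 02:30 02:00 03:00 && echo "02:30 is inside the window"
in_window 04:00 02:00 03:00 || echo "04:00 is outside the window"
```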

Limit connections with iptables

Scenario: Your internet-connected web servers are under attack by bad actors from around the world attempting to DoS (Denial of Service) them. To mitigate these attacks, you restrict the number of connections a single IP address can have to your web server:

iptables -A INPUT -p tcp --syn -m multiport --dports http,https -m connlimit --connlimit-above 20 -j REJECT --reject-with tcp-reset

Let’s look at what this rule does. If a host holds more than 20 simultaneous connections (--connlimit-above 20) opened as new TCP connections (-p tcp --syn) to the web servers (--dports http,https), reject the new connection (-j REJECT) and tell the connecting host you are rejecting it with a TCP reset (--reject-with tcp-reset).

Monitor iptables rules

Scenario: Since iptables operates on a “first match wins” basis as packets traverse the rules in a chain, frequently matched rules should be near the top of the policy and less frequently matched rules should be near the bottom. How do you know which rules are traversed the most or the least so they can be ordered nearer the top or the bottom?

Tip #1: See how many times each rule has been hit.

Use this command:

iptables -L -v -n --line-numbers

The command will list all the rules in the chain (-L). Since no chain was specified, all the chains will be listed with verbose output (-v) showing packet and byte counters in numeric format (-n) with line numbers at the beginning of each rule corresponding to that rule’s position in the chain.

Using the packet and bytes counts, you can order the most frequently traversed rules to the top and the least frequently traversed rules towards the bottom.

Tip #2: Remove unnecessary rules.

Which rules aren’t getting any matches at all? These would be good candidates for removal from the policy. You can find that out with this command:

iptables -nvL | grep -v "0     0"

Note: that’s not a tab between the zeros; there are five spaces between the zeros.
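Because that grep depends on the exact spacing between the zero counters, a field-based filter is more robust. A sketch using awk, where the heredoc stands in for live `iptables -nvL` output:

```shell
# Keep only rules whose packet counter (field 1) is non-zero.
# The heredoc mimics two counter lines from `iptables -nvL`.
out=$(awk '$1 ~ /^[0-9]+$/ && $1 != 0' <<'EOF'
 4821  290K ACCEPT  tcp  --  *  *  0.0.0.0/0  0.0.0.0/0  tcp dpt:22
    0     0 DROP    tcp  --  *  *  0.0.0.0/0  0.0.0.0/0  tcp dpt:23
EOF
)
echo "$out"
# Only the dpt:22 rule survives; the zero-count dpt:23 rule is filtered out.
```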

Tip #3: Monitor what’s going on.

You would like to monitor what’s going on with iptables in real time, as with top. Use this command to monitor iptables activity dynamically, showing only the rules that are actively being traversed:

watch --interval=5 'iptables -nvL | grep -v "0     0"'

watch runs 'iptables -nvL | grep -v "0     0"' every five seconds and displays the first screen of its output. This allows you to watch the packet and byte counts change over time.

Report on iptables

Scenario: Your manager thinks this iptables firewall stuff is just great, but a daily activity report would be even better. Sometimes it’s more important to write a report than to do the work.

Use the packet filter/firewall/IDS log analyzer FWLogwatch to create reports based on the iptables firewall logs. FWLogwatch supports many log formats and offers many analysis options. It generates daily and monthly summaries of the log files, allowing the security administrator to free up substantial time, maintain better control over network security, and reduce unnoticed attacks.