
Ansible YAML file

When editing Ansible YAML files in Vim, this autocmd sets auto-indent and two-space indentation:

autocmd FileType yaml setlocal ai ts=2 sw=2 et

Highlighting the current column also helps keep YAML indentation aligned (in Vim: :set cursorcolumn).

iptables tips and tricks

Tip #1: Take a backup of your iptables configuration before you start working on it.

Back up your configuration with the command:

/sbin/iptables-save > /root/iptables-works

Tip #2: Even better, include a timestamp in the filename.

Add the timestamp with the command:

/sbin/iptables-save > /root/iptables-works-`date +%F`

You get a file with a name like:

/root/iptables-works-2018-09-11
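To make this automatic, a cron entry along these lines (a sketch; adjust the schedule and destination path to your environment, and note the escaped % that crontab requires) saves a timestamped copy every night:

0 2 * * * /sbin/iptables-save > /root/iptables-works-$(date +\%F)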

If you do something that prevents your system from working, you can quickly restore it:

/sbin/iptables-restore < /root/iptables-works-2018-09-11

Tip #3: Keep a symlink pointing to the most recent working copy.

ln -s /root/iptables-works-`date +%F` /root/iptables-works-latest

Tip #4: Put specific rules at the top of the policy and generic rules at the bottom.

Avoid generic rules like this at the top of the policy rules:

iptables -A INPUT -p tcp --dport 22 -j DROP

The more criteria you specify in the rule, the less chance you will have of locking yourself out. Instead of the very generic rule above, use something like this:

iptables -A INPUT -p tcp --dport 22 -s 10.0.0.0/8 -d 192.168.100.101 -j DROP

This rule appends (-A) to the INPUT chain a rule that will DROP any packets originating from the CIDR block 10.0.0.0/8 (-s 10.0.0.0/8) on TCP (-p tcp) port 22 (--dport 22) destined for IP address 192.168.100.101 (-d 192.168.100.101).

There are plenty of ways you can be more specific. For example, using -i eth0 will limit the processing to a single NIC in your server. This way, the filtering actions will not apply the rule to eth1.
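For example, a variation of the rule above restricted to eth0 might look like this (shown only as an illustration of adding more criteria):

iptables -A INPUT -i eth0 -p tcp --dport 22 -s 10.0.0.0/8 -d 192.168.100.101 -j DROP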

Tip #5: Whitelist your IP address at the top of your policy rules.

This is a very effective method of not locking yourself out. Everybody else, not so much.

iptables -I INPUT -s <your IP> -j ACCEPT

You need to put this as the first rule for it to work properly. Remember, -I inserts it as the first rule; -A appends it to the end of the list.

Tip #6: Know and understand all the rules in your current policy.

Not making a mistake in the first place is half the battle. If you understand the inner workings behind your iptables policy, it will make your life easier. Draw a flowchart if you must. Also remember: What the policy does and what it is supposed to do can be two different things.
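A quick way to review what is actually loaded (helpful before drawing that flowchart) is to dump the live rule set:

iptables -S             # rules printed in iptables command syntax
iptables-save -c        # the complete policy, including packet/byte counters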

Set up a workstation firewall policy

Scenario: You want to set up a workstation with a restrictive firewall policy.

Tip #1: Set the default policy as DROP.

# Set a default policy of DROP
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT DROP [0:0]

Tip #2: Allow users the minimum amount of services needed to get their work done.

The iptables rules need to allow the workstation to get an IP address, netmask, and other important information via DHCP (-p udp --dport 67:68 --sport 67:68). For remote management, the rules need to allow inbound SSH (--dport 22), outbound mail (--dport 25), DNS (--dport 53), outbound ping (-p icmp), Network Time Protocol (--dport 123 --sport 123), and outbound HTTP (--dport 80) and HTTPS (--dport 443).

# Set a default policy of DROP
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT DROP [0:0]

# Accept any related or established connections
-I INPUT  1 -m state --state RELATED,ESTABLISHED -j ACCEPT
-I OUTPUT 1 -m state --state RELATED,ESTABLISHED -j ACCEPT

# Allow all traffic on the loopback interface
-A INPUT -i lo -j ACCEPT
-A OUTPUT -o lo -j ACCEPT

# Allow outbound DHCP request
-A OUTPUT -o eth0 -p udp --dport 67:68 --sport 67:68 -j ACCEPT

# Allow inbound SSH
-A INPUT -i eth0 -p tcp -m tcp --dport 22 -m state --state NEW -j ACCEPT

# Allow outbound email
-A OUTPUT -o eth0 -p tcp -m tcp --dport 25 -m state --state NEW -j ACCEPT

# Outbound DNS lookups
-A OUTPUT -o eth0 -p udp -m udp --dport 53 -j ACCEPT

# Outbound PING requests
-A OUTPUT -o eth0 -p icmp -j ACCEPT

# Outbound Network Time Protocol (NTP) requests
-A OUTPUT -o eth0 -p udp --dport 123 --sport 123 -j ACCEPT

# Outbound HTTP
-A OUTPUT -o eth0 -p tcp -m tcp --dport 80 -m state --state NEW -j ACCEPT
-A OUTPUT -o eth0 -p tcp -m tcp --dport 443 -m state --state NEW -j ACCEPT

COMMIT
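The block above is in iptables-save format, so one way to apply it, assuming you saved it to a file such as /etc/sysconfig/iptables.workstation (a hypothetical path), is:

/sbin/iptables-restore < /etc/sysconfig/iptables.workstation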

Restrict an IP address range

Scenario: The CEO of your company thinks the employees are spending too much time on Facebook and not getting any work done. The CEO tells the CIO to do something about the employees wasting time on Facebook. The CIO tells the CISO to do something about employees wasting time on Facebook. Eventually, you are told the employees are wasting too much time on Facebook, and you have to do something about it. You decide to block all access to Facebook. First, find out Facebook’s IP address by using the host and whois commands.

host -t a www.facebook.com
www.facebook.com is an alias for star.c10r.facebook.com.
star.c10r.facebook.com has address 31.13.65.17
whois 31.13.65.17 | grep inetnum
inetnum:        31.13.64.0 - 31.13.127.255

Then convert that range to CIDR notation by using the CIDR to IPv4 Conversion page. You get 31.13.64.0/18. To prevent outgoing access to www.facebook.com, enter:

iptables -A OUTPUT -p tcp -i eth0 -o eth1 -d 31.13.64.0/18 -j DROP

Regulate by time

Scenario: The backlash from the company’s employees over denying access to Facebook access causes the CEO to relent a little (that and his administrative assistant’s reminding him that she keeps HIS Facebook page up-to-date). The CEO decides to allow access to Facebook.com only at lunchtime (12PM to 1PM). Assuming the default policy is DROP, use iptables’ time features to open up access.

iptables -A OUTPUT -p tcp -m multiport --dport http,https -i eth0 -o eth1 -m time --timestart 12:00 --timestop 13:00 -d 31.13.64.0/18 -j ACCEPT

This command sets the policy to allow (-j ACCEPT) http and https (-m multiport --dport http,https) between noon (--timestart 12:00) and 1PM (--timestop 13:00) to Facebook.com (-d 31.13.64.0/18).

Regulate by time—Take 2

Scenario: During planned downtime for system maintenance, you need to deny all TCP and UDP traffic between the hours of 2AM and 3AM so maintenance tasks won’t be disrupted by incoming traffic. This will take two iptables rules:

iptables -A INPUT -p tcp -m time --timestart 02:00 --timestop 03:00 -j DROP
iptables -A INPUT -p udp -m time --timestart 02:00 --timestop 03:00 -j DROP

With these rules, TCP and UDP traffic (-p tcp and -p udp) are denied (-j DROP) between the hours of 2AM (--timestart 02:00) and 3AM (--timestop 03:00) on input (-A INPUT).

Limit connections with iptables

Scenario: Your internet-connected web servers are under attack by bad actors from around the world attempting to DoS (Denial of Service) them. To mitigate these attacks, you restrict the number of connections a single IP address can have to your web server:

iptables -A INPUT -p tcp --syn -m multiport --dport http,https -m connlimit --connlimit-above 20 -j REJECT --reject-with tcp-reset

Let's look at what this rule does. If a host has more than 20 (--connlimit-above 20) new connections (-p tcp --syn) open to the web servers (--dport http,https), reject the new connection (-j REJECT) and tell the connecting host you are rejecting the connection (--reject-with tcp-reset).

Monitor iptables rules

Scenario: Since iptables operates on a “first match wins” basis as packets traverse the rules in a chain, frequently matched rules should be near the top of the policy and less frequently matched rules should be near the bottom. How do you know which rules are traversed the most or the least so they can be ordered nearer the top or the bottom?

Tip #1: See how many times each rule has been hit.

Use this command:

iptables -L -v -n --line-numbers

The command will list all the rules in the chain (-L). Since no chain was specified, all the chains will be listed with verbose output (-v) showing packet and byte counters in numeric format (-n) with line numbers at the beginning of each rule corresponding to that rule’s position in the chain.

Using the packet and bytes counts, you can order the most frequently traversed rules to the top and the least frequently traversed rules towards the bottom.
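As a sketch, moving a frequently hit rule from, say, position 7 to position 1 of the INPUT chain can be done by deleting it by number and re-inserting its full specification at the top (the rule and numbers here are only an example):

iptables -D INPUT 7
iptables -I INPUT 1 -p tcp --dport 443 -j ACCEPT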

Tip #2: Remove unnecessary rules.

Which rules aren’t getting any matches at all? These would be good candidates for removal from the policy. You can find that out with this command:

iptables -nvL | grep -v "0     0"

Note: that’s not a tab between the zeros; there are five spaces between the zeros.
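Once you have identified an unused rule and its line number, it can be removed by chain and number, for example (the number here is illustrative):

iptables -D INPUT 5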

Tip #3: Monitor what’s going on.

You would like to monitor what’s going on with iptables in real time, like with top. Use this command to monitor the activity of iptables activity dynamically and show only the rules that are actively being traversed:

watch --interval=5 'iptables -nvL | grep -v "0     0"'

watch runs 'iptables -nvL | grep -v "0     0"' every five seconds and displays the first screen of its output. This allows you to watch the packet and byte counts change over time.

Report on iptables

Scenario: Your manager thinks this iptables firewall stuff is just great, but a daily activity report would be even better. Sometimes it’s more important to write a report than to do the work.

Use the packet filter/firewall/IDS log analyzer FWLogwatch to create reports based on the iptables firewall logs. FWLogwatch supports many log formats and offers many analysis options. It generates daily and monthly summaries of the log files, allowing the security administrator to free up substantial time, maintain better control over network security, and reduce unnoticed attacks.

SSL and TLS 1.3 on Nginx

I heard that TLS 1.3 was out and have been itching to try it. In the past not many browsers supported it and not many people on the Internet had tried it, but some large sites have already deployed TLS 1.3, and many bloggers have upgraded their blogs to it and left valuable notes on their experience. I can't hold back any longer, so let's take a look at it today. OpenSSL 1.1.1 LTS has been released, bringing official support for the final version of TLS 1.3.

Software versions:
Nginx: nginx-1.15.4
OpenSSL: openssl-1.1.1 (LTS)

Tutorial

Installation dependency

sudo apt update
sudo apt install -y build-essential libpcre3 libpcre3-dev zlib1g-dev liblua5.1-dev libluajit-5.1-dev libgeoip-dev google-perftools libgoogle-perftools-dev

Download and unzip the required software

wget https://nginx.org/download/nginx-1.15.4.tar.gz
tar zxf nginx-1.15.4.tar.gz
wget https://www.openssl.org/source/openssl-1.1.1.tar.gz
tar zxf openssl-1.1.1.tar.gz

OpenSSL patching

pushd openssl-1.1.1
# TLS 1.3 Draft 23, 26, 28, Final patch
curl https://raw.githubusercontent.com/hakasenyang/openssl-patch/master/openssl-equal-1.1.1_ciphers.patch | patch -p1
# Ignore Strict-SNI log patch
curl https://raw.githubusercontent.com/hakasenyang/openssl-patch/master/openssl-ignore_log_strict-sni.patch | patch -p1
popd

Nginx patch

pushd nginx-1.15.4
# SPDY, HTTP2 HPACK, Dynamic TLS Record, Fix HTTP2 Push Error, PRIORITIZE_CHACHA patch
curl https://raw.githubusercontent.com/kn007/patch/43f2d869b209756b442cfbfa861d653d993f16fe/nginx.patch | patch -p1
curl https://raw.githubusercontent.com/kn007/patch/c59592bc1269ba666b3bb471243c5212b50fd608/nginx_auto_using_PRIORITIZE_CHACHA.patch | patch -p1
# Strict-SNI patch
curl https://raw.githubusercontent.com/hakasenyang/openssl-patch/master/nginx_strict-sni.patch | patch -p1
popd

Compile and install Nginx

If you have previously compiled and installed Nginx, you can run nginx -V to see the configure options that were used, then reuse them together with the additional parameters required below.

Key parameters:
Add --with-openssl=../openssl-1.1.1 to specify the OpenSSL path.
HTTP2 HPACK needs the --with-http_v2_hpack_enc parameter.
SPDY needs the --with-http_spdy_module parameter.

Note that the --with-openssl parameter should be changed to point to your own OpenSSL source directory.

My full configure command is as follows; adapt it to your own setup.

cd nginx-1.15.4

./configure \
--user=www \
--group=www \
--prefix=/usr/local/nginx \
--with-http_stub_status_module \
--with-threads \
--with-file-aio \
--with-pcre-jit \
--with-http_ssl_module \
--with-http_v2_module \
--with-http_gzip_static_module \
--with-http_sub_module \
--with-http_flv_module \
--with-http_mp4_module \
--with-http_gunzip_module \
--with-http_realip_module \
--with-http_addition_module \
--with-stream \
--with-stream_ssl_module \
--with-stream_ssl_preread_module \
--with-stream_realip_module \
--with-http_slice_module \
--with-http_geoip_module \
--with-google_perftools_module \
--with-openssl=../openssl-1.1.1 \
--with-http_v2_hpack_enc \
--with-http_spdy_module

After configure is complete, enter the following statement to start compiling.

make

After the compilation is completed, if no error is reported, enter the following to install.

make install

Configuring Nginx Web Hosting

Add the following to the appropriate location in your conf file to replace the original content. I removed TLS1 and TLS1.1 due to security upgrade considerations. In addition, the new cipher suite for TLS 1.3 can only be used in TLS 1.3, and the old cipher suite cannot be used for TLS 1.3. It seems that all virtual hosts must be configured to use TLS1.3.

ssl_early_data on;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers [TLS13+AESGCM+AES128|TLS13+AESGCM+AES256|TLS13+CHACHA20]:[EECDH+ECDSA+AESGCM+AES128|EECDH+ECDSA+CHACHA20]:EECDH+ECDSA+AESGCM+AES256:EECDH+ECDSA+AES128+SHA:EECDH+ECDSA+AES256+SHA:[EECDH+aRSA+AESGCM+AES128|EECDH+aRSA+CHACHA20]:EECDH+aRSA+AESGCM+AES256:EECDH+aRSA+AES128+SHA:EECDH+aRSA+AES256+SHA:RSA+AES128+SHA:RSA+AES256+SHA:RSA+3DES;
ssl_ecdh_curve X25519:P-256:P-384;
ssl_prefer_server_ciphers on;

Finally, use nginx -t to test the correctness of the nginx configuration.

success

Restart Nginx and you will find that your website is already connected to TLS1.3.
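You can also verify from the command line with the newly built OpenSSL 1.1.1 (replace example.com with your own host name); if TLS 1.3 was negotiated, the output reports a TLSv1.3 cipher:

openssl s_client -connect example.com:443 -tls1_3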

To create a self-signed certificate for testing, open a terminal window and follow these steps:

1. Generate the private key using the command sudo openssl genrsa -out ca.key 2048

2. Generate a CSR using the command sudo openssl req -new -key ca.key -out ca.csr

3. Use the command sudo openssl x509 -req -days 365 -in ca.csr -signkey ca.key -out ca.crt to generate a self-signed key

Now we need to copy the newly generated file to the correct location with the following command:

sudo cp ca.crt /etc/ssl/certs/
sudo cp ca.key /etc/ssl/private/
sudo cp ca.csr /etc/ssl/private/

Create an Nginx configuration

Remember, we want to enable SSL via TLS support. To do this, we must create a new Nginx configuration file with the following command:

sudo nano /etc/nginx/conf.d/ssl.conf

In the file, paste the following:

server {

    listen 443 ssl;
    server_name www.example.com;

    ssl_certificate /etc/ssl/certs/ca.crt;
    ssl_certificate_key /etc/ssl/private/ca.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
    ssl_ciphers TLS-CHACHA20-POLY1305-SHA256:TLS-AES-256-GCM-SHA384:TLS-AES-128-GCM-SHA256:HIGH:!aNULL:!MD5;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }

}

Note: Be sure to change the root location to reflect your Nginx installation. However, if you followed the steps above to build an Nginx with TLS support, the configuration above should work.

Save and close the file. Test the new Nginx configuration file with the following command:

sudo nginx -t

You should see the test passed.

Restart and test

Now we need to restart NGINX. Use the following command to do this:

sudo systemctl restart nginx

Point your browser to https://SERVER_IP and you should see the NGINX welcome screen.
To ensure that your site is delivered with TLS 1.3 enabled, you can use the browser’s built-in tools.
For example, in Firefox, open the page and click the security button (the lock icon to the left of the address bar).
Click the right arrow associated with the page, then click More Info.
In the resulting window, you should see that the connection uses TLS 1.3 encryption.

That is all it takes to enable SSL and TLS on an Nginx website.
Remember that for production you should use an SSL certificate from a reputable certificate authority,
but a self-signed certificate is perfectly fine for testing purposes.
Once you are confident in this process, purchase a certificate and deploy it to your Nginx site.

tomcat tuning

Sync disks and drop caches

sync                                  # flush file system buffers to disk first
echo 3 > /proc/sys/vm/drop_caches     # drop the page cache, dentries, and inodes

Tomcat8 final configuration

1. In ${tomcat}/bin/catalina.sh add:

JAVA_OPTS="-Djava.awt.headless=true -Dfile.encoding=UTF-8 -server -Xms1G -Xmx1G -Xss256k -XX:NewSize=1G -XX:MaxNewSize=1G -XX:PermSize=128m -XX:MaxPermSize=128m -XX:+DisableExplicitGC"

An alternative set of options, with GC logging and heap dumps enabled:

JAVA_OPTS="$JAVA_OPTS -server -Xms3G -Xmx3G -Xss256k -XX:PermSize=128m -XX:MaxPermSize=128m -XX:+UseParallelOldGC -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/usr/aaa/dump -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/usr/tomcat/dump/heap_trace.txt -XX:NewSize=1G -XX:MaxNewSize=1G"

2. In ${tomcat}/conf/server.xml

Uncomment the Executor element:

<Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
maxThreads="300" minSpareThreads="50"/>

Add the options not already present in the Connector:

<Connector port="80" protocol="org.apache.coyote.http11.Http11NioProtocol"
executor="tomcatThreadPool"
minSpareThreads="50"
maxSpareThreads="500"
enableLookups="false"
acceptCount="500"
debug="0"
connectionTimeout="10000"
redirectPort="8443"
compression="on"
compressableMimeType="text/html,text/xml,text/plain,text/javascript,text/css"
disableUploadTimeout="true"
URIEncoding="UTF-8"
useBodyEncodingForURI="true"
/>

Detailed explanation of each parameter:

-Xms: Set the JVM initial memory size (default is 1/64 of physical memory)

-Xmx: Set the maximum memory that the JVM can use (default is 1/4 of physical memory, recommended: 80% of physical memory)

-Xmn: Set the size of the JVM young generation (128-256m is usually enough; generally not set)

By default, when free heap memory drops below 40%, the JVM grows the heap up to the -Xmx limit; when free heap memory exceeds 70%, the JVM shrinks the heap down to the -Xms limit. On servers, -Xms and -Xmx are therefore usually set equal to avoid resizing the heap after every GC.

In larger applications the default memory is not enough and may cause the system to fail. A common symptom is the Tomcat memory overflow error "java.lang.OutOfMemoryError: Java heap space", which causes clients to see 500 errors.

-XX:PermSize : Perm memory size when starting the JVM

-XX:MaxPermSize : is the maximum available Perm memory size (default is 32M)

-XX:MaxNewSize, default is 16M

PermGen space (Permanent Generation Space) is the permanent storage area of memory, used mainly by the JVM to store Class and Meta information; classes are placed in PermGen space when they are loaded by the class loader. Unlike the heap area that stores instances, garbage collection does not normally clean up PermGen space while the main program is running, so if your application loads very many classes, a "java.lang.OutOfMemoryError: PermGen space" error is likely to appear.

For web projects, as the JVM loads classes the objects in the permanent generation grow sharply, forcing the JVM to keep resizing it. To avoid these adjustments, set the size explicitly with the parameters above. If your web app uses a large number of third-party jars whose total size exceeds the JVM default, this error message will be generated.

Other parameters:

-XX:NewSize: The default is 2M. Setting this to a larger value enlarges the new object area and reduces the number of Full GCs.

-XX:NewRatio: Changes the ratio between the new and old space; a value of 8 means the new space is 1/8 the size of the old space (default is 8).

-XX:SurvivorRatio: Changes the size ratio between the Eden space and the survivor spaces; Eden is survivorRatio times the size of a single survivor space (default is 10).

-XX:+UseParNewGC can be used to enable the parallel young-generation collector [multiple CPUs]

-XX:ParallelGCThreads can be used to increase the degree of parallelism [multiple CPUs]

-XX:+UseParallelGC enables the parallel scavenging collector [multiple CPUs]

maxThreads
The maximum number of request processing threads to be created by this Connector, which therefore determines the maximum number of simultaneous requests that can be handled. If not specified, this attribute is set to 200. If an executor is associated with this connector, this attribute is ignored as the connector will execute tasks using the executor rather than an internal thread pool.
300
minSpareThreads
The minimum number of threads always kept running. If not specified, the default of 10 is used.
50
connectionTimeout
The number of milliseconds this Connector will wait, after accepting a connection, for the request URI line to be presented. Use a value of -1 to indicate no (i.e. infinite) timeout. The default value is 60000 (i.e. 60 seconds) but note that the standard server.xml that ships with Tomcat sets this to 20000 (i.e. 20 seconds). Unless disableUploadTimeout is set to false, this timeout will also be used when reading the request body (if any).
tcpNoDelay
If set to true, the TCP_NO_DELAY option will be set on the server socket, which improves performance under most circumstances. This is set to true by default.
socketBuffer
The size (in bytes) of the buffer to be provided for socket output buffering. -1 can be specified to disable the use of a buffer. By default, a buffer of 9000 bytes will be used.
server
Overrides the Server header for the http response. If set, the value for this attribute overrides the Tomcat default and any Server header set by a web application. If not set, any value specified by the application is used. If the application does not specify a value then Apache-Coyote/1.1 is used. Unless you are paranoid, you won’t need this feature.
maxHttpHeaderSize
The maximum size of the request and response HTTP header, specified in bytes. If not specified, this attribute is set to 8192 (8 KB).
maxKeepAliveRequests
The maximum number of HTTP requests which can be pipelined until the connection is closed by the server. Setting this attribute to 1 will disable HTTP/1.0 keep-alive, as well as HTTP/1.1 keep-alive and pipelining. Setting this to -1 will allow an unlimited amount of pipelined or keep-alive HTTP requests. If not specified, this attribute is set to 100.
maxConnections
For BIO the default is the value of maxThreads unless an Executor is used in which case the default will be the value of maxThreads from the executor. For NIO the default is 10000. For APR/native, the default is 8192.
keepAliveTimeout
The number of milliseconds this Connector will wait for another HTTP request before closing the connection. The default value is to use the value that has been set for the connectionTimeout attribute. Use a value of -1 to indicate no (i.e. infinite) timeout.

Database Pool Configuration

<Context>
<Resource name="jdbc/productdb" auth="Container" type="javax.sql.DataSource"
maxTotal="10" maxIdle="30" maxWaitMillis="10000" logAbandoned="true"
username="root" password="admin" driverClassName="com.mysql.jdbc.Driver"
url="jdbc:mysql://localhost:3306/products"/>
</Context>

JVM Settings
We have set the minimum and maximum heap size to 1GB respectively as below:

export CATALINA_OPTS="-Xms1024m -Xmx1024m"

-Xms – Specifies the initial heap memory
-Xmx – Specifies the maximum heap memory
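Rather than exporting CATALINA_OPTS in the shell, a common place to keep it is ${tomcat}/bin/setenv.sh, which catalina.sh sources automatically if it exists; a minimal sketch:

# ${tomcat}/bin/setenv.sh
CATALINA_OPTS="-Xms1024m -Xmx1024m"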

AJP Connector configuration
The AJP connector configuration below is configured so that there are two threads allocated to accept new connections.
This should be configured to the number of processors on the machine however two should be suffice here.
We have also allocated 400 threads to process requests, the default value is 200.
The "acceptCount" is set to 200, which denotes the maximum queue length to be used for incoming connections.
The default value is 10. Lastly we have set the minimum threads to 20 so that there are always 20 threads running in the pool to service requests:

<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" acceptorThreadCount="2" maxThreads="400" acceptCount="200" minSpareThreads="20"/>

Database Pool Configuration
We have modified the maximum number of pooled connections to 200 so that there are ample connections in the pool to service requests.

<Context>
<Resource name="jdbc/productdb" auth="Container" type="javax.sql.DataSource"
maxTotal="200" maxIdle="30" maxWaitMillis="10000" logAbandoned="true"
username="xxxx" password="xxxx" driverClassName="com.mysql.jdbc.Driver"
url="jdbc:mysql://localhost:3306/products"/>
</Context>

JVM Settings
Since we have increased the maximum number of pooled connections and AJP connector thread thresholds above,
we should increase the heap size appropriately. We have set the minimum and maximum heap size to 2GB respectively as below:

export CATALINA_OPTS="-Xms2048m -Xmx2048m"

JVM Heap Monitoring and Tuning

Specifying appropriate JVM heap parameters to service your deployed applications on Tomcat is paramount to application performance.
There are a number of different ways which we can monitor JVM heap usage including using JDK hotspot tools such as jstat, JConsole etc. –
however to gather detailed data on when and how garbage collection is being performed, it is useful to turn on GC logging on the Tomcat instance.
We can turn on GC logging by modifying the catalina start up script with the following command:

JAVA_OPTS="$JAVA_OPTS -verbose:gc -Xloggc:gc.log -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps"

We can set the minimum and maximum heap size, the size of the young generation, and the maximum amount of memory allocated to the permanent generation (used to store application class metadata) by setting the CATALINA_OPTS parameter with this command:

export CATALINA_OPTS="-Xms1024m -Xmx2048m -XX:MaxNewSize=512m -XX:MaxPermSize=256m"

This configuration is optimized for REST/HTTP API calls and does not use a reverse proxy such as Apache or Nginx; a simple L4 switch sits in front of the Tomcat groups.

In addition we do not use Tomcat clustering or sessions, so the clustering configuration is omitted.

Listener Setting
<Listener className="org.apache.catalina.security.SecurityListener" checkedOsUsers="root" />

The checkedOsUsers setting means the Unix system user "root" cannot start Tomcat. If Tomcat is started as root, the log files are created with root permissions, and the tomcat user can then no longer delete them.

<Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />

This listener helps detect memory leaks.

Connector Setting
protocol="org.apache.coyote.http11.Http11Protocol"

This makes Tomcat use BIO. Tomcat has several IO options (BIO, NIO, APR). APR is the fastest because it uses the Apache web server's IO module, but since it is C code called through JNI it carries a risk of killing the Tomcat instance with a core dump. APR is roughly 10% faster than BIO, but BIO is more stable, so we use BIO (the default).

acceptCount="10"

This specifies the server request queue length. If messages are queued in the request queue, the server cannot handle the incoming load (it is overloaded); requests wait for an idle thread and stay pending. This setting reduces the total size of the request queue to 10. If the queue overflows, the client gets an error; this protects the server from heavy overload and lets the system manager know the server has been overloaded.

enableLookups="false"

In Java servlet code, the application can look up the origin of a request (IP address or host name).

For example, if a user at yahoo.com sends a request, Tomcat tries to resolve the incoming request's IP address. The enableLookups option makes this return a DNS name instead of an IP address, and the DNS lookup it requires degrades performance. Setting it to false removes the DNS lookup stage and increases performance.

compression="off"

We are serving a REST protocol, not normal web content such as HTML or images. This option compresses HTTP messages; it consumes computing power but reduces the network payload. In our environment compression is not required, and it is better to save the computing power. In some particular telco networks compression is also not supported.

connectionTimeout="10000"

This is the HTTP connection timeout (client to server) in milliseconds (10,000 = 10 sec).

If the server cannot complete a connection from the client within 10 seconds, it throws an HTTP timeout error. In a normal situation our API response time is under 5 seconds, so 10 seconds means the server is overloaded. The reason the timeout is raised to 10 seconds is that, depending on network conditions, connection time can be deferred.

maxConnections="8192"

The maximum number of connections Tomcat can handle, meaning Tomcat can handle at most 8192 socket connections at a time. This value is also restricted by the Unix limit on open file descriptors ("ulimit -n"; you can check it in a Unix console).

maxKeepAliveRequests="1"

As mentioned above, this configuration is optimized for REST API requests, not a common web system. The client only sends a REST API call: it sends the request, gets a response, and will not send another request for a while, so the connection cannot be reused. This setting therefore turns off HTTP keep-alive (after responding to the request, Tomcat disconnects the connection immediately).

maxThreads="100"

This defines the total number of threads in Tomcat and represents the maximum number of active users at a time. Usually 50~500 is good for performance, and 100~200 is usually best (it differs per use-case scenario).

Please test with values of 100 and 200 and find the better-performing value. This parameter is also affected by the DB connection pool setting: even with many threads, if the total number of DB connections is not enough, threads will wait to acquire a connection.

tcpNoDelay="true"

This enables TCP_NODELAY at the TCP/IP layer, which sends small packets without delay. Normally, to reduce congestion from small packets, TCP gathers them in a buffer until it is full and only then sends them; the TCP_NODELAY option sends small packets immediately even if the TCP buffer is not full.

JVM Tuning
Java Virtual Machine tuning is also a very important factor in running Tomcat.

The focus of JVM tuning is reducing Full GC time.

-server

This option makes the JVM optimize for server applications; it tunes the HotSpot compiler and other internals. It is very important and effectively mandatory for server-side applications.

-Xmx1024m -Xms1024m -XX:MaxNewSize=384m -XX:MaxPermSize=128m

These are the memory tuning options. Our infrastructure uses a c1.medium Amazon instance, so about 1.7 GB of memory is available in total. The heap has a fixed size: max 1 GB, min 1 GB. NewSize is 384 MB (about 1/3 of the total heap size), which usually gives the best performance. Perm size defines the memory area used for loading classes; 64 MB is enough, but we start with 128 MB and tune it later based on GC log analysis.

Total physical memory consumption is 1 GB heap + 128 MB perm = 1.128 GB, and the JVM itself uses about 350~500 MB of additional memory, so the total estimated memory requirement is about 1.5 GB.

As mentioned, the c1.medium size has only 1.7 GB of physical memory. If consumed memory exceeds the actual physical memory, the system starts swapping to disk, and if JVM memory is swapped out the performance degrades significantly. Take care that swapping does not occur.

-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=./java_pid<pid>.hprof

These options are for troubleshooting OOM (Java Out Of Memory) errors. If an out-of-memory error occurs, the memory layout is dumped to disk; the location of the dump file is specified by the -XX:HeapDumpPath option.

-XX:ParallelGCThreads=2 -XX:+UseConcMarkSweepGC

These options specify the GC strategy. ParallelGC is used for minor collections, with 2 threads for the minor GC, and concurrent GC (CMS) is used for the old area, which reduces Full GC time.

-XX:+PrintGC -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+TraceClassUnloading -XX:+TraceClassLoading

These options enable GC logging. They log GC details to stderr (console output), showing the usage trend of the Java heap memory, timestamps, etc. (including old, new & perm area usage).

In particular, the class loading & unloading options show which classes are loaded into and unloaded from memory, which helps trace Perm out-of-memory errors.

CentOS 7.3 compile and install Nginx 1.12.2

CentOS 7.3 compile and install Nginx 1.12.2

1. Introduction to Nginx

Nginx (pronounced "engine x") was developed with performance in mind. Its best-known advantages are its stability, low system resource consumption, and high capacity for concurrent connections (a single physical server can support 30,000 to 50,000 concurrent connections). It is a high-performance HTTP and reverse proxy server, as well as an IMAP/POP3/SMTP proxy server.

Linux system: CentOS 7.3

2. Installation preparation
2.1 gcc installation

To install nginx, you need to compile the source code downloaded from the official website first, and compile it depends on the gcc environment. If there is no gcc environment, you need to install it:

[root@nginx ~]# yum -y install gcc-c++

2.2 pcre installation

PCRE (Perl Compatible Regular Expressions) is a Perl-compatible regular expression library. Nginx's http module uses PCRE to parse regular expressions, so you need to install the pcre library on Linux together with pcre-devel, the development library built on PCRE that Nginx also needs.

[root@nginx ~]# yum -y install pcre pcre-devel

2.3 zlib installation

The zlib library provides a variety of ways to compress and decompress. nginx uses zlib to gzip the contents of the http package, so you need to install the zlib library on Centos.

[root@nginx ~]# yum -y install zlib zlib-devel

2.4 OpenSSL installation

OpenSSL is a powerful Secure Sockets Layer cryptography library that includes major cryptographic algorithms, common key and certificate encapsulation management functions, and SSL protocols, and provides a rich set of applications for testing or other purposes.
Nginx supports not only the http protocol, but also https (that is, http on the ssl protocol), so you need to install the OpenSSL library in Centos.

[root@nginx ~]# yum -y install openssl openssl-devel

3. Nginx installation

3.1 Nginx version

Download URL: https://nginx.org/en/download.html

Select the latest stable version, nginx-1.12.2.

Release notes on the version lines:

Mainline version: the version Nginx is actively developing; essentially the development branch
Stable version: the latest stable version, recommended for production environments
Legacy versions: older releases kept for legacy use

3.2 Nginx Download

Use the wget command to download

[root@nginx ~]# wget -c https://nginx.org/download/nginx-1.12.2.tar.gz

Install without the wget command:

[root@nginx ~]# yum -y install wget

3.3 Decompression

[root@nginx ~]# tar -zxvf nginx-1.12.2.tar.gz

3.4 Installation and Configuration

3.4.1 Creating a New nginx User and Group

[root@nginx include]# groupadd nginx
[root@nginx include]# useradd -g nginx -d /home/nginx nginx
[root@nginx include]# passwd nginx

3.4.2 Third-party module installation

This article uses the third-party module sticky as an example; the version is 1.2.5.

You can download it from the Linux Community Resource Station at http://linux.linuxidc.com/ (username and password are both www.linuxidc.com). The download method is described at http://www.linuxidc.com/Linux/2013-07/87684.htm

Upload and unzip:

[root@nginx ~]# tar -zxvf nginx-goodies-nginx-sticky-module-ng-08a395c66e42.tar.gz
[root@nginx ~]# mv nginx-goodies-nginx-sticky-module-ng-08a395c66e42 nginx-sticky-1.2.5

3.4.3 Installation

[root@nginx ~]# cd nginx-1.12.2
[root@nginx nginx-1.12.2]# ./configure --add-module=/root/nginx-sticky-1.2.5

Specify user, path, and module configuration (optional):

./configure \
--user=nginx --group=nginx \
--prefix=/usr/local/nginx \
--with-http_stub_status_module \
--with-http_ssl_module \
--with-http_sub_module \
--with-http_gzip_static_module \
--add-module=/root/nginx-sticky-1.2.5

# --user/--group: the user and group nginx runs as
# --prefix: the installation path
# --with-http_stub_status_module: monitor nginx state (must also be enabled in nginx.conf)
# --with-http_ssl_module: HTTPS support
# --with-http_sub_module: URL redirection support
# --with-http_gzip_static_module: static compression
# --add-module: build the sticky module

3.5 compilation

[root@nginx nginx-1.12.2]# make && make install

Error:

/root/nginx-sticky-1.2.5//ngx_http_sticky_misc.c: In function 'ngx_http_sticky_misc_sha1':
/root/nginx-sticky-1.2.5//ngx_http_sticky_misc.c:176:15: error: 'SHA_DIGEST_LENGTH' undeclared (first use in this function)
u_char hash[SHA_DIGEST_LENGTH];
^
/root/nginx-sticky-1.2.5//ngx_http_sticky_misc.c:176:15: note: each undeclared identifier is reported only once for each function it appears in
/root/nginx-sticky-1.2.5//ngx_http_sticky_misc.c:176:10: error: unused variable 'hash' [-Werror=unused-variable]
u_char hash[SHA_DIGEST_LENGTH];
^
/root/nginx-sticky-1.2.5//ngx_http_sticky_misc.c: In function 'ngx_http_sticky_misc_hmac_sha1':
/root/nginx-sticky-1.2.5//ngx_http_sticky_misc.c:242:15: error: 'SHA_DIGEST_LENGTH' undeclared (first use in this function)
u_char hash[SHA_DIGEST_LENGTH];

Solution:

Modify the ngx_http_sticky_misc.c file to add the #include <openssl/sha.h> and #include <openssl/md5.h> headers:

[root@nginx nginx-1.12.2]# sed -i '12a #include <openssl/sha.h>' /root/nginx-sticky-1.2.5/ngx_http_sticky_misc.c
[root@nginx nginx-1.12.2]# sed -i '12a #include <openssl/md5.h>' /root/nginx-sticky-1.2.5/ngx_http_sticky_misc.c

Recompile:

[root@nginx nginx-1.12.2]# make && make install

3.6 nginx command global execution settings

[root@nginx bin]# cd /usr/local/nginx/sbin/
[root@nginx sbin]# ln -s /usr/local/nginx/sbin/nginx /usr/local/bin/nginx

4. Nginx related commands

4.1 version view

[root@nginx ~]# nginx -v
nginx version: nginx/1.12.2

4.2 Viewing Loaded Modules

[root@nginx ~]# nginx -V
nginx version: nginx/1.12.2
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-28) (GCC)
configure arguments: --add-module=/root/nginx-sticky-1.2.5/

4.3 Start and stop command

4.3.1 Starting

[root@nginx nginx-1.12.2]# nginx

4.3.2 Stop

[root@nginx nginx-1.12.2]# nginx -s stop
[root@nginx nginx-1.12.2]# nginx -s quit

4.3.3 Dynamic loading

[root@nginx nginx-1.12.2]# nginx -s reload

4.3.4 Testing the correctness of the configuration file nginx.conf

[root@nginx ~]# nginx -t
nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful

nginx -s quit: this stops the process gracefully; nginx finishes processing its current tasks and then stops.
nginx -s stop: this is equivalent to first finding the nginx process ID and then using the kill command to force the process to stop.

nginx -s reload: dynamic reload. When the configuration file nginx.conf changes, this command loads it without stopping the server.

4.4 Boot from boot

Edit the /etc/rc.d/rc.local file and add a line /usr/local/nginx/sbin/nginx

[root@nginx rc.d]# cd /etc/rc.d
[root@nginx rc.d]# sed -i '13a /usr/local/nginx/sbin/nginx' /etc/rc.d/rc.local
[root@nginx rc.d]# chmod u+x rc.local

5. Change the default port

Edit the configuration file /usr/local/nginx/conf/nginx.conf and change the default port 80 to 81:

[root@nginx ~]# vi /usr/local/nginx/conf/nginx.conf
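The relevant change is the listen directive inside the server block; a minimal sketch of the edited section looks like this:

server {
    listen       81;
    server_name  localhost;
    ...
}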

Load configuration:

[root@nginx ~]# nginx -s reload

6. Visit Nginx

6.1 Turn off the firewall

[root@nginx ~]# firewall-cmd --state
running
[root@nginx ~]# systemctl stop firewalld.service
[root@nginx ~]# firewall-cmd --state
not running

6.2 Accessing Nginx

http://localhost:81

CentOS 7 deploys rsync backup server

1.1 rsync (official address http://wwww.samba.org/ftp/rsync/rsync.html)

A remote data synchronization tool that quickly synchronizes files between multiple hosts over a LAN/WAN. Rsync uses the so-called "rsync algorithm" to synchronize files between a local and a remote host. The algorithm transfers only the parts of the two files that differ, rather than the whole files, so it is quite fast.

1.2 rsync backup modes

1) Local backup

rsync [OPTION...] SRC... DEST
(rsync, its options, the data to back up, and the location where the backup is saved)

2) Remote backup

Pull: rsync [OPTION...] [USER@]HOST:SRC... [DEST]
(rsync, its options, the remote host to pull data from, and the local path to save the pulled data)
Push: rsync [OPTION...] SRC... [USER@]HOST:DEST
(rsync, its options, the local data to push, and the remote host to push it to)

3) Daemon mode

Pull: rsync [OPTION...] [USER@]HOST::SRC... [DEST]
(rsync, its options, the authenticated user and host to pull data from, and the local path to save it)
Push: rsync [OPTION...] SRC... [USER@]HOST::DEST
(rsync, its options, the local data to push, and the authenticated user and host to push it to)

2. Environmental preparation

[root@backup ~]# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
[root@backup ~]# uname -r
3.10.0-327.el7.x86_64
[root@backup ~]# getenforce
Disabled
[root@backup ~]# systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
Active: inactive (dead)
[root@backup ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.0.0.41 netmask 255.255.255.0 broadcast 10.0.0.255
inet6 fe80::20c:29ff:fe40:1a4e prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:40:1a:4e txqueuelen 1000 (Ethernet)
RX packets 1607 bytes 355312 (346.9 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 358 bytes 47574 (46.4 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.16.1.41 netmask 255.255.255.0 broadcast 172.16.1.255
inet6 fe80::20c:29ff:fe40:1a58 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:40:1a:58 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 23 bytes 1698 (1.6 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 0 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

3. Configure the rsync server

3.1 Check if rsync is installed

[root@backup ~]# rpm -qa rsync

rsync-3.0.9-17.el7.x86_64

3.2 Writing rsync configuration files

[root@backup ~]# cat /etc/rsyncd.conf
#rsync_config
#created by fengyu 2018-3-16
# uid/gid: user and group the daemon operates as
uid = rsync
gid = rsync
# related to security
use chroot = no
# maximum number of connections
max connections = 200
# timeout in seconds
timeout = 300
# pid, lock and log files
pid file = /var/run/rsyncd.pid
lock file = /var/run/rsync.lock
log file = /var/log/rsyncd.log
# [backup] is the module name
[backup]
# module location
path = /backup
# ignore I/O errors
ignore errors
# not read-only, so clients can push
read only = false
# do not allow listing of modules
list = false
# network segments allowed / denied access
hosts allow = 172.16.1.0/24
hosts deny = 0.0.0.0/32
# virtual user that does not exist on the system, used only for authentication
auth users = rsync_backup
# password file used when the user authenticates
secrets file = /etc/rsync.password

3.3 Create an administrative user

[root@backup ~]# useradd -s /sbin/nologin -M rsync

3.4 Creating an Authentication User Password File

[root@backup ~]# echo "rsync_backup:123456" > /etc/rsync.password
[root@backup ~]# chmod 600 /etc/rsync.password

3.5 Create a backup directory

[root@backup ~]# mkdir /backup
[root@backup ~]# chown -R rsync.rsync /backup/

3.6 start daemon

[root@backup ~]# rsync --daemon
[root@backup ~]# netstat -lntup | grep rsync
tcp 0 0 0.0.0.0:873 0.0.0.0:* LISTEN 3286/rsync
tcp6 0 0 :::873 :::* LISTEN 3286/rsync

4. Configure the rsync daemon client (here the NFS storage server is used as the example client; in practice, the rsync backup server and the NFS server are used together)

4.1 Creating a Password Authentication File

[root@nfs01 ~]# echo "123456" > /etc/rsync.password

[root@nfs01 ~]# chmod 600 /etc/rsync.password
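Before wiring up the real-time script, it is worth testing a manual push from the client to confirm that the daemon, module, and password file all work (the source path here is just an example):

[root@nfs01 ~]# rsync -avz /data/ rsync_backup@172.16.1.41::backup --password-file=/etc/rsync.password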

4.2 Writing real-time monitoring push scripts

[root@nfs01 backup]# cat /server/scripts/inotify.sh
#!/bin/bash
inotifywait -mrq --format "%w%f" -e create,close_write,delete,moved_to /data/ |\
while read fy
do
rsync -az --delete /data/ rsync_backup@172.16.1.41::backup --password-file=/etc/rsync.password
done
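Note that the script relies on inotifywait, so the inotify-tools package (available from the EPEL repository on CentOS 7) must be installed on the client first:

[root@nfs01 ~]# yum -y install inotify-tools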

4.3 Put the script execution command into /etc/rc.local (on CentOS 7, /etc/rc.local must be given execute permission for this to run at boot)

[root@nfs01 ~]# echo "/usr/bin/sh /server/scripts/inotify.sh" >> /etc/rc.local

MySQL master-slave and proxy server

MySQL master-slave principle and process

principle

MySQL Replication is an asynchronous replication process (MySQL 5.1.7 and later distinguishes asynchronous and semi-synchronous modes) that copies from one MySQL instance (which we call the Master) to another MySQL instance (which we call the Slave). The entire replication process between Master and Slave is carried out mainly by three threads: two on the Slave side (the SQL thread and the IO thread) and one on the Master side (the IO thread).

To implement MySQL Replication you must first enable the binary log (mysql-bin.xxxxxx) on the Master side, otherwise replication cannot work, because the whole copy process is really the Slave fetching the operations recorded in that log from the Master and then replaying them in the same order on itself. Enable the binary log by adding the "--log-bin" option when starting MySQL Server, or by adding the "log-bin" parameter to the [mysqld] section of the configuration file.

Basic process

1. The IO thread on the Slave connects to the Master and requests the log contents from a specified position in a specified log file (or from the very beginning of the log);

2. After the Master receives the request from the Slave's IO thread, its own IO thread responsible for replication reads the log information after the specified position of the specified log and returns it to the Slave's IO thread. Besides the log contents, the returned information includes the name of the binary log file on the Master side and the position within it.

3. After receiving the information, the Slave's IO thread appends the received log content to the end of the relay log file (mysql-relay-bin.xxxxxx) on the Slave side, and records the Master's bin-log file name and position in the master-info file, so that on the next read it can tell the Master exactly "from which position in which bin-log I need the log; please send it to me".

4. The Slave's SQL thread detects the newly added content in the relay log, immediately parses the log contents into the executable query statements that were executed on the Master side, and executes those queries itself. In this way the same query is executed on both the Master and the Slave, so the data at both ends is exactly the same.

Several modes of MySQL replication

Starting with MySQL 5.1.12, you can do this in three modes:

– based on statement-based replication (SBR),

– row-based replication (RBR),

– mixed-based replication (MBR)

Accordingly, there are three formats for the binlog: STATEMENT, ROW, and MIXED. In MBR (mixed) mode, SBR is used by default.

Set master-slave replication mode:
log-bin=mysql-bin

#binlog_format="STATEMENT"
#binlog_format="ROW"
binlog_format="MIXED"

It is also possible to modify the binlog format dynamically at runtime, for example:

mysql> SET SESSION binlog_format = 'STATEMENT';
mysql> SET SESSION binlog_format = 'ROW';
mysql> SET SESSION binlog_format = 'MIXED';

mysql> SET GLOBAL binlog_format = 'STATEMENT';

Mysql master-slave replication configuration

Version: mysql5.7 CentOS 7.2

Scenario description:
Primary database server: 192.168.1.100, MySQL is installed, and there is no application data.
From the database server: 192.168.1.200, MySQL is already installed and there is no application data.

1 Operations on the primary server

Start mysql service
service mysqld start

Log in to the MySQL server via the command line
mysql -uroot -p'new-password'

Authorize copy permissions to the database server 192.168.1.200
mysql> GRANT REPLICATION SLAVE ON *.* to 'rep1'@'192.168.1.200' identified by 'password';

Query the status of the primary database

You will need this information when configuring the slave server.

mysql> show master status;
+-------------------------+----------+--------------+------------------+-------------------+
| File                    | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+-------------------------+----------+--------------+------------------+-------------------+
| mysql-master-bin.000001 |      154 |              |                  |                   |
+-------------------------+----------+--------------+------------------+-------------------+

Pay attention here: if the query returns

mysql> show master status;
Empty set (0.01 sec)

this is because the bin-log is not enabled, and you need to modify the /etc/my.cnf file:
server-id =1
log-bin=mysql-master-bin

Also note that, starting with MySQL 5.7, you must specify a server-id when you enable the binlog; otherwise you will get an error.

2 Configuring the slave server

Modify the configuration file from the server /opt/mysql/etc/my.cnf

Change server-id = 1 to server-id = 2 and make sure the ID is not used by other MySQL services.

Start mysql service
service mysqld start

Login to manage MySQL server
mysql -uroot -p'new-password'

change master to
master_host='192.168.1.100',
master_user='root',
master_password='mohan..',
master_log_file='mysql-master-bin.000001',
master_log_pos=154;

Start the slave synchronization process:

mysql> start slave;

Note that there is another pitfall here.
Even if start slave succeeds, master-slave replication may still fail.
1. Error message
mysql> show slave status;

Last_IO_Error: Fatal error: The slave I/O thread stops because master and slave have equal MySQL server UUIDs;
these UUIDs must be different for replication to work.

2. View the server_id variable on the master and the slave

master_mysql> show variables like 'server_id';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     |    33 |
+---------------+-------+

slave_mysql> show variables like 'server_id';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     |    11 |
+---------------+-------+

-- From the above, the master and slave already use different server_id values

3. Resolve the fault on the slave

### View the auto.cnf file on the master (it holds the server UUID)
[root@dbsrv1 ~]# cat /data/mysqldata/auto.cnf
[auto]
server-uuid=62ee10aa-b1f7-11e4-90ae-080027615026

### View auto.cnf on the slave: the UUID really is duplicated, because the virtual machine was cloned and only server_id was changed
[root@dbsrv2 ~]# more /data/mysqldata/auto.cnf
[auto]
server-uuid=62ee10aa-b1f7-11e4-90ae-080027615026

[root@dbsrv2 ~]# mv /data/mysqldata/auto.cnf /data/mysqldata/auto.cnf.bk   ### rename the file
[root@dbsrv2 ~]# service mysql restart   ### restart mysql
Shutting down MySQL.[ OK ]
Starting MySQL.[ OK ]
[root@dbsrv2 ~]# more /data/mysqldata/auto.cnf   ### a new auto.cnf with a new UUID is generated automatically after the restart
[auto]
server-uuid=6ac0fdae-b5d7-11e4-a9f3-0800278ce5c9

Check replication on the slave again:

[root@dbsrv1 ~]# mysql -uroot -pxxx -e "show slave status\G" | grep Running
Warning: Using a password on the command line interface can be insecure.
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it

### Check the UUIDs on the master
master_mysql> show variables like 'server_uuid';
+---------------+--------------------------------------+
| Variable_name | Value                                |
+---------------+--------------------------------------+
| server_uuid   | 62ee10aa-b1f7-11e4-90ae-080027615026 |
+---------------+--------------------------------------+
1 row in set (0.00 sec)

master_mysql> show slave hosts;
+-----------+------+------+-----------+--------------------------------------+
| Server_id | Host | Port | Master_id | Slave_UUID                           |
+-----------+------+------+-----------+--------------------------------------+
|        33 |      | 3306 |        11 | 62ee10aa-b1f7-11e4-90ae-080027615030 |
|        22 |      | 3306 |        11 | 6ac0fdae-b5d7-11e4-a9f3-0800278ce5c9 |
+-----------+------+------+-----------+--------------------------------------+

The values of Slave_IO_Running and Slave_SQL_Running must both be Yes for the replication status to be normal.

If the application data already exists on the primary server, the following processing is required when performing the master-slave replication:
(1) The primary database performs the lock table operation, and the data is not allowed to be written again.
mysql> FLUSH TABLES WITH READ LOCK;

(2) View the status of the main database
mysql> show master status;

(3) Record the values of File and Position.
Copy the data files of the primary server (the entire /opt/mysql/data directory) to the secondary server. It is recommended to compress them with tar and then transfer the archive to the secondary server.
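A sketch of that copy, assuming the data directory is /opt/mysql/data and the slave is 192.168.1.200 (stop mysqld on the slave, unpack, fix ownership, then start it again):

tar czf /tmp/mysql-data.tar.gz -C /opt/mysql data
scp /tmp/mysql-data.tar.gz root@192.168.1.200:/tmp/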

(4) Release the lock on the primary database
mysql> UNLOCK TABLES;

3 Verify master-slave replication

Create the database first_db on the primary server
mysql> create database first_db;
Query Ok, 1 row affected (0.01 sec)

Create a table first_tb on the primary server
mysql> create table first_tb(id int(3),name char(10));
Query Ok, 1 row affected (0.00 sec)

Insert a record into the table first_tb on the primary server
mysql> insert into first_tb values (001, "myself");
Query Ok, 1 row affected (0.00 sec)

View the result from the slave server

mysql> show databases;

MySQL read and write separation configuration under CentOS 7.2
MySQL read and write separation configuration

Environment: CentOS 7.2 MySQL 5.7

Scene Description:
Database Master Primary Server: 192.168.1.100
Database Slave Slave Server: 192.168.1.200
MySQL-Proxy Dispatch Server: 192.168.1.210

The following operations are performed on the 192.168.1.210 MySQL-Proxy scheduling server.

1. Check the software package required by the system

You need to configure the EPEL YUM source before installation:

wget https://mirrors.ustc.edu.cn/epel//7/x86_64/Packages/e/epel-release-7-11.noarch.rpm
rpm -ivh epel-release-7-11.noarch.rpm
yum clean all
yum update

yum install -y gcc* gcc-c++* autoconf* automake* zlib* libxml* ncurses-devel* libmcrypt* libtool* flex* pkgconfig* libevent* glib*

2. Compile and install lua

The read-write separation of MySQL-Proxy is mainly implemented by the rw-splitting.lua script, so you need to install lua.

You can download the Lua source package from http://www.lua.org/download.html, or search for the relevant rpm package on rpm.pbone.net:

download.fedora.redhat.com/pub/fedora/epel/5/i386/lua-5.1.4-4.el5.i386.rpm
download.fedora.redhat.com/pub/fedora/epel/5/x86_64/lua-5.1.4-4.el5.x86_64.rpm

Here we recommend to use the source package to install
cd /opt/install
wget http://www.lua.org/ftp/lua-5.1.4.tar.gz
tar zvfx lua-5.1.4.tar.gz
cd lua-5.1 .4
make linux
make install
mkdir /usr/lib/pkgconfig/
cp /opt/install/lua-5.1.4/etc/lua.pc /usr/lib/pkgconfig/
export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig
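A quick sanity check that Lua and its pkg-config file are in place (assuming the PKG_CONFIG_PATH export above is still in effect in the current shell):

lua -v                                    # should print: Lua 5.1.4 ...
pkg-config --exists lua && echo "lua.pc found"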

Note on a compile problem

Compilation may fail because the readline dependency is missing, and readline itself depends on ncurses, so install both packages first:
yum install -y readline-devel ncurses-devel

3. Install and configure MySQL-Proxy

Download mysql-proxy

http://dev.mysql.com/downloads/mysql-proxy/
wget https://downloads.mysql.com/archives/get/file/mysql-proxy-0.8.5-linux-glibc2.3-x86-64bit.tar.gz
tar zxvf mysql-proxy-0.8.5-linux-glibc2.3-x86-64bit.tar.gz
mv mysql-proxy-0.8.5-linux-glibc2.3-x86-64bit /usr/local/mysql-proxy

Configure mysql-proxy and create the main configuration file:
cd /usr/local/mysql-proxy
mkdir lua      # create the script directory
mkdir logs     # create the log directory
cp share/doc/mysql-proxy/rw-splitting.lua ./lua    # copy the read-write splitting script
vi /etc/mysql-proxy.cnf                            # create the configuration file
[mysql-proxy]
user=root                                                      # user that runs mysql-proxy
admin-username=proxyuser                                       # admin MySQL user
admin-password=123456                                          # admin user's password
proxy-address=192.168.1.210:4040                               # IP and port mysql-proxy listens on (default port 4040)
proxy-read-only-backend-addresses=192.168.1.200                # backend slave that serves reads
proxy-backend-addresses=192.168.1.100                          # backend master that receives writes
proxy-lua-script=/usr/local/mysql-proxy/lua/rw-splitting.lua   # read-write splitting script
admin-lua-script=/usr/local/mysql-proxy/lua/admin.lua          # admin script
log-file=/var/log/mysql-proxy.log                              # log location
log-level=info                                                 # log level
daemon=true                                                    # run in daemon mode
keepalive=true                                                 # restart mysql-proxy if it crashes

There is a pitfall here.

The comments in the configuration file should be removed completely, otherwise they may introduce characters that mysql-proxy cannot parse.
Even worse: even after you delete the comments and remove the extra whitespace, you may still get errors such as the following:
2018-09-21 06:39:40: (critical) Key file contains key "daemon" which has a value that cannot be interpreted.

2018-09-21 06:52:22: (critical) Key file contains key "keepalive" which has a value that cannot be interpreted.

The cause is that daemon=true and keepalive=true are not accepted in this form; they have to be changed to:
daemon=1
keepalive=1

Set permissions on the configuration file:

chmod 660 /etc/mysql-proxy.cnf
Configuring the admin.lua file

The /etc/mysql-proxy.cnf configuration file refers to a management script at /usr/local/mysql-proxy/lua/admin.lua that does not exist yet, so you need to create it. For mysql-proxy 0.8.5, the following admin.lua script works:

vim /usr/local/mysql-proxy/lua/admin.lua
function set_error(errmsg)
    proxy.response = {
        type = proxy.MYSQLD_PACKET_ERR,
        errmsg = errmsg or "error"
    }
end

function read_query(packet)
    if packet:byte() ~= proxy.COM_QUERY then
        set_error("[admin] we only handle text-based queries (COM_QUERY)")
        return proxy.PROXY_SEND_RESULT
    end

    local query = packet:sub(2)
    local rows = { }
    local fields = { }

    if query:lower() == "select * from backends" then
        fields = {
            { name = "backend_ndx",
              type = proxy.MYSQL_TYPE_LONG },
            { name = "address",
              type = proxy.MYSQL_TYPE_STRING },
            { name = "state",
              type = proxy.MYSQL_TYPE_STRING },
            { name = "type",
              type = proxy.MYSQL_TYPE_STRING },
            { name = "uuid",
              type = proxy.MYSQL_TYPE_STRING },
            { name = "connected_clients",
              type = proxy.MYSQL_TYPE_LONG },
        }

        for i = 1, #proxy.global.backends do
            local states = {
                "unknown",
                "up",
                "down"
            }
            local types = {
                "unknown",
                "rw",
                "ro"
            }
            local b = proxy.global.backends[i]

            rows[#rows + 1] = {
                i,
                b.dst.name,           -- configured backend address
                states[b.state + 1],  -- the C-id is pushed down starting at 0
                types[b.type + 1],    -- the C-id is pushed down starting at 0
                b.uuid,               -- the MySQL Server's UUID if it is managed
                b.connected_clients   -- currently connected clients
            }
        end
    elseif query:lower() == "select * from help" then
        fields = {
            { name = "command",
              type = proxy.MYSQL_TYPE_STRING },
            { name = "description",
              type = proxy.MYSQL_TYPE_STRING },
        }
        rows[#rows + 1] = { "SELECT * FROM help", "shows this help" }
        rows[#rows + 1] = { "SELECT * FROM backends", "lists the backends and their state" }
    else
        set_error("use 'SELECT * FROM help' to see the supported commands")
        return proxy.PROXY_SEND_RESULT
    end

    proxy.response = {
        type = proxy.MYSQLD_PACKET_OK,
        resultset = {
            fields = fields,
            rows = rows
        }
    }
    return proxy.PROXY_SEND_RESULT
end

Modify the read-write splitting configuration file:
vim /usr/local/mysql-proxy/lua/rw-splitting.lua

if not proxy.global.config.rwsplit then
    proxy.global.config.rwsplit = {
        min_idle_connections = 1,  -- default 4: splitting starts once there are more connections than this; changed to 1
        max_idle_connections = 1,  -- default 8, changed to 1
        is_debug = false
    }
end

Start mysql-proxy:
/usr/local/mysql-proxy/bin/mysql-proxy --defaults-file=/etc/mysql-proxy.cnf

netstat -tupln | grep 4040    # check that mysql-proxy is listening
killall -9 mysql-proxy        # stop mysql-proxy if needed
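With the proxy running, you can point clients at it instead of at the backends. A minimal check, assuming a MySQL account (here called appuser, a hypothetical name) that exists on both backends and is allowed to connect from the proxy host:

# Connect through mysql-proxy (port 4040) rather than to MySQL directly
mysql -uappuser -p -h 192.168.1.210 -P 4040 -e "select @@hostname;"
# Writes still go to the master; once enough connections are open,
# reads are routed to the read-only backend per rw-splitting.lua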

AWS How to Copy EBS Volumes to Different Account

It is common for an organization to have multiple AWS accounts. In my opinion, it is a best practice to have separate accounts for DEV, QA, and PROD environments; one reason is that in case of a security compromise, your other accounts remain unaffected.

Managing multiple accounts can be a challenge as well. Recently, I published an article on cross-account copying of EC2 instances. I thought it would be useful to share how to do the same with EBS volumes, so here we go.

Obtaining Target AWS Account ID

To obtain the AWS account ID, a simple approach is the following (a CLI alternative is shown after the list):

  1. Log in to the AWS console on the target account
  2. Click on the top right corner Support > Support Center
  3. Copy the AWS Account ID and paste it into your favorite notepad; we will need it later
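If you prefer the command line and have the AWS CLI configured with the target account's credentials, the same ID can be read with:

# Prints the 12-digit account ID of the credentials currently in use
aws sts get-caller-identity --query Account --output text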


Create a Snapshot of EBS Volume

  1. To create a snapshot of an EBS volume, log in to the AWS console and click on Volumes under EC2 > Elastic Block Store
  2. Select the volume of your choice, then right-click or choose Create Snapshot from the Actions menu
  3. Enter a snapshot description and click Create Snapshot
  4. Verify the snapshot was created, then share it with the target account as shown below
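Before the target account can see the snapshot, it must be shared: in the console that is Snapshots > Actions > Modify Permissions, adding the target account ID copied earlier. A CLI sketch of the same step (the snapshot ID and account ID below are placeholders) would be:

# Run in the source account to share the snapshot with the target account
aws ec2 modify-snapshot-attribute \
    --snapshot-id snap-0123456789abcdef0 \
    --attribute createVolumePermission \
    --operation-type add \
    --user-ids 111122223333

Note that for encrypted snapshots you would also need to grant the target account access to the KMS key used to encrypt them.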


Verify Snapshot in Target Account

  1. Log in to the AWS console using the target account and click on EC2 > Elastic Block Store > Snapshots
  2. Choose Private Snapshots from the filter
  3. There you go; you can now create a new EBS volume in the target account using the shared snapshot (a CLI sketch follows)
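The same verification can be scripted with the AWS CLI from the target account (the snapshot ID and availability zone below are placeholders):

# List snapshots other accounts have shared with you
aws ec2 describe-snapshots --restorable-by-user-ids self \
    --query 'Snapshots[].{ID:SnapshotId,Desc:Description}' --output table

# Create a new volume in the target account from the shared snapshot
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 \
    --availability-zone us-east-1a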


How to Fix Redis CLI Error Connection Reset by Peer

Overview

Recently we faced an issue with an AWS ElastiCache Redis instance: when testing the connection from an EC2 instance using the Redis CLI, we got the following error

$ ./redis-cli -c -h my-redis-server -p 6379
my-redis-server:6379> set a "hello"
Error: Connection reset by peer


Problem

On investigation, we found that the ElastiCache Redis instance uses encryption in-transit and encryption at-rest, and by design the Redis CLI does not support those encrypted connections directly.


Solution

To test connectivity and use the Redis CLI with ElastiCache in-transit encryption, we need to configure stunnel.

Stunnel is a proxy designed to add TLS encryption functionality to existing clients and servers without any changes to the programs' code.

With stunnel, the client creates an SSL tunnel to the Redis nodes and uses redis-cli to connect through the tunnel to access data from the encrypted Redis nodes.

Here is how to set everything up. We are using Amazon Linux in this example, but the same steps should work on Red Hat Linux.

1. Install stunnel


$ sudo yum install stunnel  -y

2. Configure SSL tunnel for redis-cli


$ sudo vi /etc/stunnel/redis-cli.conf
Set the following properties in the redis-cli.conf file:
fips = no
setuid = root
setgid = root
pid = /var/run/stunnel.pid
debug = 7
options = NO_SSLv2
options = NO_SSLv3
[redis-cli]
client = yes
accept = 127.0.0.1:6379
connect = my-redis-server:6379

3. Start Stunnel


$ sudo stunnel /etc/stunnel/redis-cli.conf

4. Verify the tunnel is running


$ sudo netstat -tulnp | grep -i stunnel
You should see output like the following from the above command:
tcp 0 0 127.0.0.1:6379 0.0.0.0:* LISTEN 1314 stunnel

5. Finally, connect to the Redis cluster with the Redis CLI through the SSL tunnel (yes, it connects via the localhost tunnel)


redis-cli -h localhost -p 6379

Note: To install the Redis CLI on Linux, see the AWS documentation

6. Run a few Redis commands to see if it works



$ ./redis-cli -h localhost -p 6379
localhost:6379> set a hello
OK
localhost:6379> get a
"hello"
localhost:6379>

Hope you find this post useful. Please leave a comment and let us know what topics you would like to see.

Elasticsearch cluster deployment under CentOS7.3

 

Elasticsearch is built on top of the open-source library Lucene. However, you cannot use Lucene directly; you have to write your own code to call its interface. Elasticsearch is a wrapper around Lucene that provides an operational interface through a REST API, out of the box.

First, basic concepts in ES
Cluster
Represents a cluster, which consists of multiple nodes; one of them is the master node, which is chosen by election. Master and slave roles only matter inside the cluster. A core concept of ES is decentralization, which applies to how the cluster looks from the outside: seen from outside, the ES cluster is logically a single whole, and communicating with any node is equivalent to communicating with the entire cluster.

Shards
Represents index shards. ES can split a complete index into multiple shards, which makes it possible to spread a large index across multiple nodes and form a distributed search. The number of shards can only be specified before the index is created and cannot be changed afterwards.

Replica
Represents index replicas. ES can keep multiple replicas of an index. Replicas improve the fault tolerance of the system: when a shard on a node is damaged or lost, it can be recovered from a replica. They also improve query efficiency, since ES automatically load-balances search requests across them.

Recovery
Represents data recovery or data redistribution. ES reallocates index shards according to machine load when a node joins or leaves the cluster, and performs data recovery when a failed node restarts.

River
Represents a data source for ES; it is a way of synchronizing data into ES from other storage systems (such as databases). It exists as a plugin service that reads data from the river and indexes it into ES. Official rivers include CouchDB, RabbitMQ, Twitter, and Wikipedia.

Gateway
Represents the storage mode for ES index snapshots. ES first keeps the index in memory and persists it to the local disk when memory is full. The gateway stores index snapshots; when the ES cluster is shut down and restarted, the index data is restored from the gateway. ES supports multiple gateway types, including the local file system (the default), distributed file systems, Hadoop HDFS, and the Amazon S3 cloud storage service.

Discovery.zen
Represents ES's automatic node discovery mechanism. ES is a P2P-based system: it first discovers existing nodes via broadcast, and nodes then communicate via multicast protocols; point-to-point interaction is also supported.

Transport
Represents how ES nodes interact internally and how the cluster interacts with clients. Internal interaction uses the TCP protocol by default; the HTTP protocol (JSON format) is also supported, as are thrift, servlet, memcached, zeroMQ and other transport protocols (integrated through plugins).

Second, the deployment environment
This deployment uses three CentOS 7.3 machines for the Elasticsearch cluster. Since index sharding matters when deploying an Elasticsearch cluster, here is a brief introduction to it first.

System      Node name   IP
CentOS7.3   Els1        172.18.68.11
CentOS7.3   Els2        172.18.68.12
CentOS7.3   Els3        172.18.68.13
An index in an ES cluster may consist of multiple shards, and each shard can have multiple replicas. By dividing a single index into multiple shards, we can handle indexes that are too large to run on a single server; the limiting factor is usually memory or storage. Since each shard can have multiple replicas, you can increase query capacity by placing the replicas on multiple servers. An example of setting both at index-creation time is shown below.
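As a concrete illustration (my_index is a hypothetical index name; run this against any node once the cluster is up), shard and replica counts are set when the index is created:

# Create an index with 3 primary shards, each with 1 replica
curl -XPUT 'http://172.18.68.11:9200/my_index?pretty' \
     -H 'Content-Type: application/json' \
     -d '{ "settings": { "number_of_shards": 3, "number_of_replicas": 1 } }'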

Third, deploy Elasticsearch cluster
1. Install JDK
Elasticsearch is developed in Java and runs in the JVM, so the first step is to install a JDK.

yum install -y java-1.8.0-openjdk-devel
2. Download elasticsearch
https://artifacts.elastic.co/downloads/elasticsearch/ is the official download site for Elasticsearch. If you need the latest version, download it from the official website. You can download it to a local computer and then copy it to CentOS, or download it directly from CentOS.

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.0.1.rpm
3. Configuration directory
After the installation is complete, many files are created, including configuration files, log files, and so on. The most important configuration paths are the following.

/etc/elasticsearch/elasticsearch.yml    # main Elasticsearch configuration
/etc/elasticsearch/jvm.options          # JVM settings
/etc/elasticsearch/log4j2.properties    # logging configuration
/var/lib/elasticsearch                  # default data directory
4. Create a directory for storing data and logs
Data files grow quickly as the system runs, so the default log and data paths may not meet our needs. Manually create the log and data paths; you can use NFS, RAID, etc. to make future management and expansion easier.

mkdir -p /els/{log,data}
chown -R elasticsearch.elasticsearch /els/*
5. Cluster configuration
The two most important cluster settings are node.name and network.host; each node's values must be unique. node.name is the node name, used mainly in Elasticsearch's own logs to distinguish the nodes, and network.host is the address the node binds to and publishes (use each node's own IP).
discovery.zen.ping.unicast.hosts lists the nodes in the cluster. You can use IP addresses or host names (they must be resolvable).

vim /etc/elasticsearch/elasticsearch.yml
cluster.name: aubin-cluster
node.name: els1

path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch

network.host: 172.18.68.11
http.port: 9200

discovery.zen.ping.unicast.hosts: ["172.18.68.11", "172.18.68.12", "172.18.68.13"]

discovery.zen.minimum_master_nodes: 2
6.JVM configuration
Since Elasticsearch is developed in Java, you can adjust the JVM settings through the configuration file /etc/elasticsearch/jvm.options. If there is no special requirement, the defaults are fine.
However, the two most important settings are the minimum and maximum heap sizes, -Xms1g and -Xmx1g. If they are too small, Elasticsearch will stop as soon as it starts; if they are too large, they will slow down the system itself.

vim /etc/elasticsearch/jvm.options
-Xms1g    # minimum JVM heap size
-Xmx1g    # maximum JVM heap size
7. Start Elasticsearch
Enable and start the Elasticsearch service. Since starting Elasticsearch already triggers a daemon-reload, the last command can be omitted.

systemctl enable elasticsearch.service
systemctl start elasticsearch
systemctl daemon-reload
8. Testing
Elasticsearch exposes an HTTP interface directly, so you can view some cluster-related information with the curl command.

You can use the curl command to get information about the cluster.

_cat stands for the "cat" information APIs.
nodes shows node information; by default it is printed on a single line, so append ?pretty to format the output.
?pretty makes the output display in a more readable way.
curl -XGET 'http://172.18.68.11:9200/_cat/nodes?pretty'
172.18.68.12 18 68 0 0.07 0.06 0.05 mdi - els2
172.18.68.13 25 67 0 0.01 0.02 0.05 mdi * els3
172.18.68.11 7 95 0 0.02 0.04 0.05 mdi - els1
If you want to see more, such as cluster information, current node statistics, and so on, the following command lists all the information endpoints you can query.

curl -XGET 'http://172.18.68.11:9200/_cat?pretty'
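Another useful quick check is the cluster health endpoint; once all three nodes have joined and the shards are allocated, the status field should report green:

curl -XGET 'http://172.18.68.11:9200/_cluster/health?pretty'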