CentOS 7 MongoDB 3.4

Install MongoDB 3.4 with yum on a CentOS 7 system.

The first step is to check whether a MongoDB yum repository is already configured.

Switch to the yum directory: cd /etc/yum.repos.d/

List the files: ls

The second step: if the repository does not exist, add a yum source.

Create the file: touch mongodb-3.4.repo

Edit the file: vi mongodb-3.4.repo

Content (cat /etc/yum.repos.d/mongodb-3.4.repo):

[mongodb-org-3.4]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.4/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.4.asc

You can set gpgcheck=0 here to skip GPG verification.

Update all packages before installing (optional): yum update

Then install: yum install -y mongodb-org

Check where mongo was installed: whereis mongod

Check and adjust the configuration file: vi /etc/mongod.conf

Start mongod: systemctl start mongod.service

Stop mongod: systemctl stop mongod.service

To allow access from the external network, the firewall needs to be stopped:

CentOS 7 uses firewalld as its default firewall (replacing the old iptables service).

Close the firewall:

systemctl stop firewalld.service # stop the firewall

systemctl disable firewalld.service # prevent the firewall from starting at boot

Connect to MongoDB: mongo 192.168.60.102:27017

>use admin

>show dbs

>show collections

After restarting MongoDB, log in to the admin database and create a user with super privileges:

use admin

db.createUser({user:'root',pwd:'root',roles:[{ "role" : "root", "db" : "admin" }]});
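A quick way to verify the new account, as a sketch (it assumes authentication has been enabled, e.g. auth=true in /etc/mongod.conf, and mongod restarted; the address and credentials follow the example above):

mongo 192.168.60.102:27017/admin -u root -p root
> db.runCommand({ connectionStatus: 1 })   // shows the authenticated user and its roles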

Configuration

fork=true                 # run the process in the background
#auth=true                # enable authentication
logpath=/data/db/mongodb/logs/mongodb.log    # log file
logappend=true            # append to the log; the default is to overwrite it
dbpath=/data/db/mongodb/data/                # data storage directory
pidfilepath=/data/db/mongodb/logs/mongodb.pid    # process ID file; if not specified, no PID file is written at startup
port=27017
#bind_ip=192.168.2.73     # bind address; the default 127.0.0.1 allows local connections only
directoryperdb=true       # store each database in its own folder under dbpath, so databases can be placed
                          # on different disk devices to increase write throughput or disk capacity;
                          # default is false, and it is best to configure this option from the beginning
nojournal=true            # disable the journal; keep journaling enabled if you need write and data consistency
                          # (a journal directory is created under dbpath)
maxConns=1024             # maximum number of connections; the default depends on system limits (ulimit, file descriptors).
                          # Values above the system limit are ignored, and the value cannot exceed 20000.
                          # Set it higher than the connection pool size and total expected connections to handle peaks.
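To apply a configuration like this, a minimal sketch (assuming the directories above exist and are writable by the mongod user) is simply to point mongod at the file and confirm it is listening:

mongod -f /etc/mongod.conf      # start mongod with this configuration file
ss -tnlp | grep 27017           # confirm mongod is listening on the configured port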

Application Load Balancer

 

 

Create an Application Load Balancer
The Application Load Balancer is a flavor of the Elastic Load Balancing (ELB) service. It works more or less the same as a Classic Load Balancer, but it has several additional features and some new concepts you need to understand, so this lab covers those first.
AWS has great documentation to help you get started, so let’s start by referencing it:

The load balancer serves as the single point of contact for clients. You add one or more listeners to your load balancer.

A listener checks for connection requests from clients, using the protocol and port that you configure, and forwards requests to one or more target groups, based on the rules that you define. Each rule specifies a target group, condition, and priority.
When the condition is met, the traffic is forwarded to the target group. You must define a default rule for each listener, and you can add rules that specify different target groups based on the content of the request (also known as content-based routing).
Each target group routes requests to one or more registered targets, such as EC2 instances, using the protocol and port number that you specify. You can register a target with multiple target groups. You can configure health checks on a per target group basis.
Health checks are performed on all targets registered to a target group that is specified in a listener rule for your load balancer.
The following diagram illustrates the basic components. Notice that each listener contains a default rule, and one listener contains another rule that routes requests to a different target group. One target is registered with two target groups.
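As a rough CLI illustration only (the names, subnet, security-group, and target IDs below are placeholders, not values from this lab), the same components can be created with the AWS CLI:

aws elbv2 create-load-balancer --name my-alb \
    --subnets subnet-aaaa1111 subnet-bbbb2222 --security-groups sg-0123456789abcdef0
aws elbv2 create-target-group --name my-targets --protocol HTTP --port 80 --vpc-id vpc-0123456789abcdef0
aws elbv2 register-targets --target-group-arn <target-group-arn> --targets Id=i-0123456789abcdef0
aws elbv2 create-listener --load-balancer-arn <load-balancer-arn> --protocol HTTP --port 80 \
    --default-actions Type=forward,TargetGroupArn=<target-group-arn>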

 

Recommended Network ACL Rules for Your VPC

Recommended Rules for Scenario 1

Scenario 1 is a single subnet with instances that can receive and send Internet traffic. For more information, see Scenario 1: VPC with a Single Public Subnet.

The following table shows the rules we recommend. They block all traffic except that which is explicitly required.

Inbound
Rule # Source IP Protocol Port Allow/Deny Comments
100 0.0.0.0/0 TCP 80 ALLOW Allows inbound HTTP traffic from any IPv4 address.
110 0.0.0.0/0 TCP 443 ALLOW Allows inbound HTTPS traffic from any IPv4 address.
120 Public IPv4 address range of your home network TCP 22 ALLOW Allows inbound SSH traffic from your home network (over the Internet gateway).
130 Public IPv4 address range of your home network TCP 3389 ALLOW Allows inbound RDP traffic from your home network (over the Internet gateway).
140 0.0.0.0/0 TCP 32768-65535 ALLOW Allows inbound return traffic from hosts on the Internet that are responding to requests originating in the subnet.

This range is an example only. For information about choosing the correct ephemeral ports for your configuration, see Ephemeral Ports.

* 0.0.0.0/0 all all DENY Denies all inbound IPv4 traffic not already handled by a preceding rule (not modifiable).
Outbound
Rule # Dest IP Protocol Port Allow/Deny Comments
100 0.0.0.0/0 TCP 80 ALLOW Allows outbound HTTP traffic from the subnet to the Internet.
110 0.0.0.0/0 TCP 443 ALLOW Allows outbound HTTPS traffic from the subnet to the Internet.
120 0.0.0.0/0 TCP 32768-65535 ALLOW Allows outbound responses to clients on the Internet (for example, serving web pages to people visiting the web servers in the subnet).

This range is an example only. For information about choosing the correct ephemeral ports for your configuration, see Ephemeral Ports.

* 0.0.0.0/0 all all DENY Denies all outbound IPv4 traffic not already handled by a preceding rule (not modifiable).
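For reference, here is a hedged sketch of how one of the inbound entries above could be created with the AWS CLI (the network ACL ID is a placeholder):

aws ec2 create-network-acl-entry --network-acl-id acl-0123456789abcdef0 \
    --ingress --rule-number 100 --protocol tcp --port-range From=80,To=80 \
    --cidr-block 0.0.0.0/0 --rule-action allow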

Recommended Rules for IPv6

If you implemented scenario 1 with IPv6 support and created a VPC and subnet with associated IPv6 CIDR blocks, you must add separate rules to your network ACL to control inbound and outbound IPv6 traffic.

The following are the IPv6-specific rules for your network ACL (which are in addition to the rules listed above).

Inbound
Rule # Source IP Protocol Port Allow/Deny Comments
150 ::/0 TCP 80 ALLOW Allows inbound HTTP traffic from any IPv6 address.
160 ::/0 TCP 443 ALLOW Allows inbound HTTPS traffic from any IPv6 address.
170 IPv6 address range of your home network TCP 22 ALLOW Allows inbound SSH traffic from your home network (over the Internet gateway).
180 IPv6 address range of your home network TCP 3389 ALLOW Allows inbound RDP traffic from your home network (over the Internet gateway).
190 ::/0 TCP 32768-65535 ALLOW Allows inbound return traffic from hosts on the Internet that are responding to requests originating in the subnet.

This range is an example only. For information about choosing the correct ephemeral ports for your configuration, see Ephemeral Ports.

* ::/0 all all DENY Denies all inbound IPv6 traffic not already handled by a preceding rule (not modifiable).
Outbound
Rule # Dest IP Protocol Port Allow/Deny Comments
130 ::/0 TCP 80 ALLOW Allows outbound HTTP traffic from the subnet to the Internet.
140 ::/0 TCP 443 ALLOW Allows outbound HTTPS traffic from the subnet to the Internet.
150 ::/0 TCP 32768-65535 ALLOW Allows outbound responses to clients on the Internet (for example, serving web pages to people visiting the web servers in the subnet).

This range is an example only. For information about choosing the correct ephemeral ports for your configuration, see Ephemeral Ports.

* ::/0 all all DENY Denies all outbound IPv6 traffic not already handled by a preceding rule (not modifiable).

Recommended Rules for Scenario 2

Scenario 2 is a public subnet with instances that can receive and send Internet traffic, and a private subnet that can’t receive traffic directly from the Internet. However, it can initiate traffic to the Internet (and receive responses) through a NAT gateway or NAT instance in the public subnet. For more information, see Scenario 2: VPC with Public and Private Subnets (NAT).

For this scenario you have a network ACL for the public subnet, and a separate one for the private subnet. The following table shows the rules we recommend for each ACL. They block all traffic except that which is explicitly required. They mostly mimic the security group rules for the scenario.

ACL Rules for the Public Subnet

Inbound
Rule # Source IP Protocol Port Allow/Deny Comments
100 0.0.0.0/0 TCP 80 ALLOW Allows inbound HTTP traffic from any IPv4 address.
110 0.0.0.0/0 TCP 443 ALLOW Allows inbound HTTPS traffic from any IPv4 address.
120 Public IP address range of your home network TCP 22 ALLOW Allows inbound SSH traffic from your home network (over the Internet gateway).
130 Public IP address range of your home network TCP 3389 ALLOW Allows inbound RDP traffic from your home network (over the Internet gateway).
140 0.0.0.0/0 TCP 1024-65535 ALLOW Allows inbound return traffic from hosts on the Internet that are responding to requests originating in the subnet.

This range is an example only. For information about choosing the correct ephemeral ports for your configuration, see Ephemeral Ports.

* 0.0.0.0/0 all all DENY Denies all inbound IPv4 traffic not already handled by a preceding rule (not modifiable).
Outbound
Rule # Dest IP Protocol Port Allow/Deny Comments
100 0.0.0.0/0 TCP 80 ALLOW Allows outbound HTTP traffic from the subnet to the Internet.
110 0.0.0.0/0 TCP 443 ALLOW Allows outbound HTTPS traffic from the subnet to the Internet.
120 10.0.1.0/24 TCP 1433 ALLOW Allows outbound MS SQL access to database servers in the private subnet.

This port number is an example only. Other examples include 3306 for MySQL/Aurora access, 5432 for PostgreSQL access, 5439 for Amazon Redshift access, and 1521 for Oracle access.

140 0.0.0.0/0 TCP 32768-65535 ALLOW Allows outbound responses to clients on the Internet (for example, serving web pages to people visiting the web servers in the subnet).

This range is an example only. For information about choosing the correct ephemeral ports for your configuration, see Ephemeral Ports.

150 10.0.1.0/24 TCP 22 ALLOW Allows outbound SSH access to instances in your private subnet (from an SSH bastion, if you have one).
* 0.0.0.0/0 all all DENY Denies all outbound IPv4 traffic not already handled by a preceding rule (not modifiable).

ACL Rules for the Private Subnet

Inbound
Rule # Source IP Protocol Port Allow/Deny Comments
100 10.0.0.0/24 TCP 1433 ALLOW Allows web servers in the public subnet to read and write to MS SQL servers in the private subnet.

This port number is an example only. Other examples include 3306 for MySQL/Aurora access, 5432 for PostgreSQL access, 5439 for Amazon Redshift access, and 1521 for Oracle access.

120 10.0.0.0/24 TCP 22 ALLOW Allows inbound SSH traffic from an SSH bastion in the public subnet (if you have one).
130 10.0.0.0/24 TCP 3389 ALLOW Allows inbound RDP traffic from the Microsoft Terminal Services gateway in the public subnet.
140 0.0.0.0/0 TCP 1024-65535 ALLOW Allows inbound return traffic from the NAT device in the public subnet for requests originating in the private subnet.

For information about specifying the correct ephemeral ports, see the important note at the beginning of this topic.

* 0.0.0.0/0 all all DENY Denies all IPv4 inbound traffic not already handled by a preceding rule (not modifiable).
Outbound
Rule # Dest IP Protocol Port Allow/Deny Comments
100 0.0.0.0/0 TCP 80 ALLOW Allows outbound HTTP traffic from the subnet to the Internet.
110 0.0.0.0/0 TCP 443 ALLOW Allows outbound HTTPS traffic from the subnet to the Internet.
120 10.0.0.0/24 TCP 32768-65535 ALLOW Allows outbound responses to the public subnet (for example, responses to web servers in the public subnet that are communicating with DB servers in the private subnet).

This range is an example only. For information about choosing the correct ephemeral ports for your configuration, see Ephemeral Ports.

* 0.0.0.0/0 all all DENY Denies all outbound IPv4 traffic not already handled by a preceding rule (not modifiable).

Recommended Rules for IPv6

If you implemented scenario 2 with IPv6 support and created a VPC and subnets with associated IPv6 CIDR blocks, you must add separate rules to your network ACLs to control inbound and outbound IPv6 traffic.

The following are the IPv6-specific rules for your network ACLs (which are in addition to the rules listed above).

ACL Rules for the Public Subnet

Inbound
Rule # Source IP Protocol Port Allow/Deny Comments
150 ::/0 TCP 80 ALLOW Allows inbound HTTP traffic from any IPv6 address.
160 ::/0 TCP 443 ALLOW Allows inbound HTTPS traffic from any IPv6 address.
170 IPv6 address range of your home network TCP 22 ALLOW Allows inbound SSH traffic over IPv6 from your home network (over the Internet gateway).
180 IPv6 address range of your home network TCP 3389 ALLOW Allows inbound RDP traffic over IPv6 from your home network (over the Internet gateway).
190 ::/0 TCP 1024-65535 ALLOW Allows inbound return traffic from hosts on the Internet that are responding to requests originating in the subnet.

This range is an example only. For information about choosing the correct ephemeral ports for your configuration, see Ephemeral Ports.

* ::/0 all all DENY Denies all inbound IPv6 traffic not already handled by a preceding rule (not modifiable).
Outbound
Rule # Dest IP Protocol Port Allow/Deny Comments
160 ::/0 TCP 80 ALLOW Allows outbound HTTP traffic from the subnet to the Internet.
170 ::/0 TCP 443 ALLOW Allows outbound HTTPS traffic from the subnet to the Internet
180 2001:db8:1234:1a01::/64 TCP 1433 ALLOW Allows outbound MS SQL access to database servers in the private subnet.

This port number is an example only. Other examples include 3306 for MySQL/Aurora access, 5432 for PostgreSQL access, 5439 for Amazon Redshift access, and 1521 for Oracle access.

200 ::/0 TCP 32768-65535 ALLOW Allows outbound responses to clients on the Internet (for example, serving web pages to people visiting the web servers in the subnet)

This range is an example only. For information about choosing the correct ephemeral ports for your configuration, see Ephemeral Ports.

210 2001:db8:1234:1a01::/64 TCP 22 ALLOW Allows outbound SSH access to instances in your private subnet (from an SSH bastion, if you have one).
* ::/0 all all DENY Denies all outbound IPv6 traffic not already handled by a preceding rule (not modifiable).

ACL Rules for the Private Subnet

Inbound
Rule # Source IP Protocol Port Allow/Deny Comments
150 2001:db8:1234:1a00::/64 TCP 1433 ALLOW Allows web servers in the public subnet to read and write to MS SQL servers in the private subnet.

This port number is an example only. Other examples include 3306 for MySQL/Aurora access, 5432 for PostgreSQL access, 5439 for Amazon Redshift access, and 1521 for Oracle access.

170 2001:db8:1234:1a00::/64 TCP 22 ALLOW Allows inbound SSH traffic from an SSH bastion in the public subnet (if applicable).
180 2001:db8:1234:1a00::/64 TCP 3389 ALLOW Allows inbound RDP traffic from a Microsoft Terminal Services gateway in the public subnet, if applicable.
190 ::/0 TCP 1024-65535 ALLOW Allows inbound return traffic from the egress-only Internet gateway for requests originating in the private subnet.

This range is an example only. For information about choosing the correct ephemeral ports for your configuration, see Ephemeral Ports.

* ::/0 all all DENY Denies all inbound IPv6 traffic not already handled by a preceding rule (not modifiable).
Outbound
Rule # Dest IP Protocol Port Allow/Deny Comments
130 ::/0 TCP 80 ALLOW Allows outbound HTTP traffic from the subnet to the Internet.
140 ::/0 TCP 443 ALLOW Allows outbound HTTPS traffic from the subnet to the Internet.
150 2001:db8:1234:1a00::/64 TCP 32768-65535 ALLOW Allows outbound responses to the public subnet (for example, responses to web servers in the public subnet that are communicating with DB servers in the private subnet).

This range is an example only. For information about choosing the correct ephemeral ports for your configuration, see Ephemeral Ports.

* ::/0 all all DENY Denies all outbound IPv6 traffic not already handled by a preceding rule (not modifiable).

Recommended Rules for Scenario 3

Scenario 3 is a public subnet with instances that can receive and send Internet traffic, and a VPN-only subnet with instances that can communicate only with your home network over the VPN connection. For more information, see Scenario 3: VPC with Public and Private Subnets and AWS Managed VPN Access.

For this scenario you have a network ACL for the public subnet, and a separate one for the VPN-only subnet. The following table shows the rules we recommend for each ACL. They block all traffic except that which is explicitly required.

ACL Rules for the Public Subnet

Inbound
Rule # Source IP Protocol Port Allow/Deny Comments
100 0.0.0.0/0 TCP 80 ALLOW Allows inbound HTTP traffic to the web servers from any IPv4 address.
110 0.0.0.0/0 TCP 443 ALLOW Allows inbound HTTPS traffic to the web servers from any IPv4 address.
120 Public IPv4 address range of your home network TCP 22 ALLOW Allows inbound SSH traffic to the web servers from your home network (over the Internet gateway).
130 Public IPv4 address range of your home network TCP 3389 ALLOW Allows inbound RDP traffic to the web servers from your home network (over the Internet gateway).
140 0.0.0.0/0 TCP 32768-65535 ALLOW Allows inbound return traffic from hosts on the Internet that are responding to requests originating in the subnet.

This range is an example only. For information about choosing the correct ephemeral ports for your configuration, see Ephemeral Ports.

* 0.0.0.0/0 all all DENY Denies all inbound IPv4 traffic not already handled by a preceding rule (not modifiable).
Outbound
Rule # Dest IP Protocol Port Allow/Deny Comments
100 0.0.0.0/0 TCP 80 ALLOW Allows outbound HTTP traffic from the subnet to the Internet.
110 0.0.0.0/0 TCP 443 ALLOW Allows outbound HTTPS traffic from the subnet to the Internet.
120 10.0.1.0/24 TCP 1433 ALLOW Allows outbound MS SQL access to database servers in the VPN-only subnet.

This port number is an example only. Other examples include 3306 for MySQL/Aurora access, 5432 for PostgreSQL access, 5439 for Amazon Redshift access, and 1521 for Oracle access.

140 0.0.0.0/0 TCP 32768-65535 ALLOW Allows outbound IPv4 responses to clients on the Internet (for example, serving web pages to people visiting the web servers in the subnet)

This range is an example only. For information about choosing the correct ephemeral ports for your configuration, see Ephemeral Ports.

* 0.0.0.0/0 all all DENY Denies all outbound traffic not already handled by a preceding rule (not modifiable).

ACL Settings for the VPN-Only Subnet

Inbound
Rule # Source IP Protocol Port Allow/Deny Comments
100 10.0.0.0/24 TCP 1433 ALLOW Allows web servers in the public subnet to read and write to MS SQL servers in the VPN-only subnet.

This port number is an example only. Other examples include 3306 for MySQL/Aurora access, 5432 for PostgreSQL access, 5439 for Amazon Redshift access, and 1521 for Oracle access.

120 Private IPv4 address range of your home network TCP 22 ALLOW Allows inbound SSH traffic from the home network (over the virtual private gateway).
130 Private IPv4 address range of your home network TCP 3389 ALLOW Allows inbound RDP traffic from the home network (over the virtual private gateway).
140 Private IP address range of your home network TCP 32768-65535 ALLOW Allows inbound return traffic from clients in the home network (over the virtual private gateway)

This range is an example only. For information about choosing the correct ephemeral ports for your configuration, see Ephemeral Ports.

* 0.0.0.0/0 all all DENY Denies all inbound traffic not already handled by a preceding rule (not modifiable).
Outbound
Rule # Dest IP Protocol Port Allow/Deny Comments
100 Private IP address range of your home network All All ALLOW Allows all outbound traffic from the subnet to your home network (over the virtual private gateway). This rule also covers rule 120; however, you can make this rule more restrictive by using a specific protocol type and port number. If you make this rule more restrictive, then you must include rule 120 in your network ACL to ensure that outbound responses are not blocked.
110 10.0.0.0/24 TCP 32768-65535 ALLOW Allows outbound responses to the web servers in the public subnet.

This range is an example only. For information about choosing the correct ephemeral ports for your configuration, see Ephemeral Ports.

120 Private IP address range of your home network TCP 32768-65535 ALLOW Allows outbound responses to clients in the home network (over the virtual private gateway).

This range is an example only. For information about choosing the correct ephemeral ports for your configuration, see Ephemeral Ports.

* 0.0.0.0/0 all all DENY Denies all outbound traffic not already handled by a preceding rule (not modifiable).

Recommended Rules for IPv6

If you implemented scenario 3 with IPv6 support and created a VPC and subnets with associated IPv6 CIDR blocks, you must add separate rules to your network ACLs to control inbound and outbound IPv6 traffic.

The following are the IPv6-specific rules for your network ACLs (which are in addition to the rules listed above).

ACL Rules for the Public Subnet

Inbound
Rule # Source IP Protocol Port Allow/Deny Comments
150 ::/0 TCP 80 ALLOW Allows inbound HTTP traffic from any IPv6 address.
160 ::/0 TCP 443 ALLOW Allows inbound HTTPS traffic from any IPv6 address.
170 IPv6 address range of your home network TCP 22 ALLOW Allows inbound SSH traffic over IPv6 from your home network (over the Internet gateway).
180 IPv6 address range of your home network TCP 3389 ALLOW Allows inbound RDP traffic over IPv6 from your home network (over the Internet gateway).
190 ::/0 TCP 1024-65535 ALLOW Allows inbound return traffic from hosts on the Internet that are responding to requests originating in the subnet.

This range is an example only. For information about choosing the correct ephemeral ports for your configuration, see Ephemeral Ports.

* ::/0 all all DENY Denies all inbound IPv6 traffic not already handled by a preceding rule (not modifiable).
Outbound
Rule # Dest IP Protocol Port Allow/Deny Comments
150 ::/0 TCP 80 ALLOW Allows outbound HTTP traffic from the subnet to the Internet.
160 ::/0 TCP 443 ALLOW Allows outbound HTTPS traffic from the subnet to the Internet.
170 2001:db8:1234:1a01::/64 TCP 1433 ALLOW Allows outbound MS SQL access to database servers in the private subnet.

This port number is an example only. Other examples include 3306 for MySQL/Aurora access, 5432 for PostgreSQL access, 5439 for Amazon Redshift access, and 1521 for Oracle access.

190 ::/0 TCP 32768-65535 ALLOW Allows outbound responses to clients on the Internet (for example, serving web pages to people visiting the web servers in the subnet)

This range is an example only. For information about choosing the correct ephemeral ports for your configuration, see Ephemeral Ports.

* ::/0 all all DENY Denies all outbound IPv6 traffic not already handled by a preceding rule (not modifiable).

ACL Rules for the VPN-only Subnet

Inbound
Rule # Source IP Protocol Port Allow/Deny Comments
150 2001:db8:1234:1a00::/64 TCP 1433 ALLOW Allows web servers in the public subnet to read and write to MS SQL servers in the private subnet.

This port number is an example only. Other examples include 3306 for MySQL/Aurora access, 5432 for PostgreSQL access, 5439 for Amazon Redshift access, and 1521 for Oracle access.

* ::/0 all all DENY Denies all inbound IPv6 traffic not already handled by a preceding rule (not modifiable).
Outbound
Rule # Dest IP Protocol Port Allow/Deny Comments
130 2001:db8:1234:1a00::/64 TCP 32768-65535 ALLOW Allows outbound responses to the public subnet (for example, responses to web servers in the public subnet that are communicating with DB servers in the private subnet).

This range is an example only. For information about choosing the correct ephemeral ports for your configuration, see Ephemeral Ports.

* ::/0 all all DENY Denies all outbound IPv6 traffic not already handled by a preceding rule (not modifiable).

Recommended Rules for Scenario 4

Scenario 4 is a single subnet with instances that can communicate only with your home network over a VPN connection. For more information, see Scenario 4: VPC with a Private Subnet Only and AWS Managed VPN Access.

The following table shows the rules we recommend. They block all traffic except that which is explicitly required.

Inbound
Rule # Source IP Protocol Port Allow/Deny Comments
100 Private IP address range of your home network TCP 22 ALLOW Allows inbound SSH traffic to the subnet from your home network.
110 Private IP address range of your home network TCP 3389 ALLOW Allows inbound RDP traffic to the subnet from your home network.
120 Private IP address range of your home network TCP 32768-65535 ALLOW Allows inbound return traffic from requests originating in the subnet.

This range is an example only. For information about choosing the correct ephemeral ports for your configuration, see Ephemeral Ports.

* 0.0.0.0/0 all all DENY Denies all inbound traffic not already handled by a preceding rule (not modifiable).
Outbound
Rule # Dest IP Protocol Port Allow/Deny Comments
100 Private IP address range of your home network All All ALLOW Allows all outbound traffic from the subnet to your home network. This rule also covers rule 120; however, you can make this rule more restrictive by using a specific protocol type and port number. If you make this rule more restrictive, then you must include rule 120 in your network ACL to ensure that outbound responses are not blocked.
120 Private IP address range of your home network TCP 32768-65535 ALLOW Allows outbound responses to clients in the home network.

This range is an example only. For information about choosing the correct ephemeral ports for your configuration, see Ephemeral Ports.

* 0.0.0.0/0 all all DENY Denies all outbound traffic not already handled by a preceding rule (not modifiable).

Recommended Rules for IPv6

If you implemented scenario 4 with IPv6 support and created a VPC and subnet with associated IPv6 CIDR blocks, you must add separate rules to your network ACL to control inbound and outbound IPv6 traffic.

In this scenario, the database servers cannot be reached over the VPN connection via IPv6, so no additional network ACL rules are required. The following are the default rules that deny IPv6 traffic to and from the subnet.

ACL Rules for the VPN-only Subnet

Inbound
Rule # Source IP Protocol Port Allow/Deny Comments
* ::/0 all all DENY Denies all inbound IPv6 traffic not already handled by a preceding rule (not modifiable).
Outbound
Rule # Dest IP Protocol Port Allow/Deny Comments
* ::/0 all all DENY Denies all outbound IPv6 traffic not already handled by a preceding rule (not modifiable).

MySQL 5.7 on CentOS 7

  1. systemd is now used to manage MySQL instead of mysqld_safe (which is why you get the "-bash: mysqld_safe: command not found" error: it is not installed)
  2. The user table structure has changed.

So to reset the root password, you still start MySQL with the --skip-grant-tables option and update the user table, but how you do it has changed.

1. Stop mysql:
systemctl stop mysqld

2. Set the MySQL environment option
systemctl set-environment MYSQLD_OPTS="--skip-grant-tables"

3. Start mysql using the options you just set
systemctl start mysqld

4. Login as root
mysql -u root

5. Update the root user password with these mysql commands
mysql> UPDATE mysql.user SET authentication_string = PASSWORD('MyNewPassword')
    -> WHERE User = 'root' AND Host = 'localhost';
mysql> FLUSH PRIVILEGES;
mysql> quit

*** Edit ***
As mentioned by shokulei in the comments, for 5.7.6 and later, you should use
   mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'MyNewPass';
Or you'll get a warning

6. Stop mysql
systemctl stop mysqld

7. Unset the MySQL environment option so it starts normally next time
systemctl unset-environment MYSQLD_OPTS

8. Start mysql normally:
systemctl start mysqld

Try to login using your new password:
mysql -u root -p
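As a quick sanity check (a sketch; enter the password you just set when prompted):

mysql -u root -p -e "SELECT VERSION(), CURRENT_USER();"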

docker swarm

Starting from version 1.12.0, Docker Engine integrates Docker Swarm natively. The cluster can be operated directly with the docker service commands, which is very convenient and greatly simplifies operations. For the average developer, Docker Swarm's biggest advantage is its native support for load balancing, which makes it easy to scale a service out. Thanks to the Raft consensus algorithm, the system is very robust and can tolerate up to (n-1)/2 failed nodes.
Build Swarm Cluster

Install the latest docker

curl -sSL https://get.docker.com/ | sh
On CentOS 7, open the ports required by Swarm:
firewall-cmd --permanent --zone=trusted --add-port=2377/tcp && \
firewall-cmd --permanent --zone=trusted --add-port=7946/tcp && \
firewall-cmd --permanent --zone=trusted --add-port=7946/udp && \
firewall-cmd --permanent --zone=trusted --add-port=4789/udp && \
firewall-cmd --reload 

Create a management node

$ docker swarm init --advertise-addr 192.168.99.100
Swarm initialized: current node (dxn1zf6l61qsb1josjja83ngz) is now a manager.

To add a worker to this swarm, run the following command:
    docker swarm join \
    --token SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2e7c \
    192.168.99.100:2377

To add a manager to this swarm, run the following command:
    docker swarm join \
    --token SWMTKN-1-61ztec5kyafptydic6jfc1i33t37flcl4nuipzcusor96k7kby-5vy9t8u35tuqm7vh67lrz9xp6 \
    192.168.99.100:2377

When the management node is created, we can view the node creation status through the docker info and docker node ls commands.

$ docker info

Containers: 2
Running: 0
Paused: 0
Stopped: 2
  ...snip...
Swarm: active
  NodeID: dxn1zf6l61qsb1josjja83ngz
  Is Manager: true
  Managers: 1
  Nodes: 1
  ...snip...
$ docker node ls

ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
dxn1zf6l61qsb1josjja83ngz *  manager1  Ready   Active        Leader
Add worker nodes

Following the prompt in the command output above, we now add two workers to the cluster. Remember to replace the token and IP address with your actual values when running the commands.

$ docker swarm join \
  --token  SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2e7c \
  192.168.99.100:2377

This node joined a swarm as a worker.
$ docker swarm join \
  --token SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2e7c \
  192.168.99.100:2377

This node joined a swarm as a worker.
# If needed, set the hostname first: `hostnamectl set-hostname worker2`

Now we can see all the nodes in the cluster on the manager1 node

$ docker node ls

ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
3g1y59jwfg7cf99w4lt0f662    worker2   Ready   Active
j68exjopxe7wfl6yuxml7a7j    worker1   Ready   Active
dxn1zf6l61qsb1josjja83ngz *  manager1  Ready   Active        Leader

So far, the cluster environment has been set up.

Deployment Test Service

We deployed nginx as an example to test the Swarm cluster we built.

$ docker service create --replicas 3 --publish 8080:80 --name helloworld nginx

The --replicas parameter specifies how many nginx instances to deploy. Because there are three physical machines and replicas is set to 3, Swarm deploys one instance on each machine. If you later want to change the number of instances, use the following command.

docker service scale helloworld=5

We can check the deployment of nginx through a series of commands, such as

$ docker service inspect --pretty helloworld
$ docker service ps helloworld

Deleting a service is also very simple and you can simply execute rm.

$ docker service rm helloworld

Let’s look at a docker-compose.yml file first. It doesn’t matter what this is doing. It’s just a format that is easy to explain:

version: '2'
services:
  web:
    image: dockercloud/hello-world
    ports:
      - 8080
    networks:
      - front-tier
      - back-tier

  redis:
    image: redis
    links:
      - web
    networks:
      - back-tier

  lb:
    image: dockercloud/haproxy
    ports:
      - 80:80
    links:
      - web
    networks:
      - front-tier
      - back-tier
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock 

networks:
  front-tier:
    driver: bridge
  back-tier:
    driver: bridge

It can be seen that a standard configuration file contains three major parts: version, services, and networks. The most critical parts are services and networks. Let's first look at the rules for writing services.
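Before going through the individual tags, note that a file like the one above is normally driven with a handful of commands (a sketch; it assumes the file is saved as docker-compose.yml in the current directory):

docker-compose up -d        # create and start all services in the background
docker-compose ps           # list the containers belonging to this project
docker-compose logs lb      # show the logs of one service (here the lb service defined above)
docker-compose down         # stop and remove the project's containers and networks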

  1. image
    services:
      web:
        image: hello-world

    The second-level tag under services (here web) is named by the user; it is the service name.
    image specifies the image name or image ID for the service. If the image does not exist locally, Compose will try to pull it.
    For example, the following formats are all possible:

    image: redis
    image: ubuntu:14.04
    image: tutum/influxdb
    image: example-registry.com:4000/postgresql
    image: a4bc65fd
  2. build

A service can be based on an image or built from a Dockerfile. In the latter case, the build tag specifies the path of the build context (the folder containing the Dockerfile); when you run docker-compose up, Compose uses it to build the image automatically and then starts the service container from that image.

build: /path/to/build/dir

It can also be a relative path, and the Dockerfile can be read as long as the context is determined.

build: ./dir

Set the context root and specify the Dockerfile as the target.

build:
  context: ../
  dockerfile: path/of/Dockerfile

Note that build points to a directory; if you want to specify the Dockerfile itself, use the dockerfile tag nested under the build tag, as in the example above.
If you specify both the image and build tags, Compose will build the image and name it with the value given in image.

build: ./dir
image: webapp:tag

Since you can define build tasks in docker-compose.yml, there is also an args tag. Like the ARG instruction in a Dockerfile, it specifies environment variables that exist only during the build and are dropped once the build succeeds. docker-compose.yml supports the following notation:

build:
  context: .
  args:
    buildno: 1
    password: secret

The following notation is also supported and is generally easier to read.

build:
  context: .
  args:
    - buildno=1
    - password=secret

Unlike ENV, ARG allows null values. E.g:

args:
  - buildno
  - password

This way the build process can assign values to them.

Note: YAML Boolean values (true, false, yes, no, on, off) must be quoted (either single or double quotes), otherwise they will be parsed as strings.

  3. command

Use command to override the default command executed after the container starts.

command: bundle exec thin -p 3000

It can also be written in a format similar to Dockerfile:

command: [bundle, exec, thin, -p, 3000]

4. container_name

As mentioned earlier, Compose names containers in the format <project name>_<service name>_<serial number>.
You can customize the project name and the service name, but if you want to fully control the container's name, use this tag:

container_name: app

The name of this container is specified as app.

5. depends_on

One of Compose's biggest advantages is that it reduces the number of commands needed to start a project, but the containers of a project usually have to start in a certain order; starting them blindly from top to bottom may fail because of dependencies between containers.
For example, if the application container starts while the database container is not yet up, the application container will exit because it cannot find the database. To avoid this, we add the depends_on tag, which resolves the dependency and startup-order problem.
For example, the following container will start two services redis and db, and finally start the web service:

version: '2'
services:
  web:
    build: .
    depends_on:
      - db
      - redis
  redis:
    image: redis
  db:
    image: postgres

Note that when launching a web service using the docker-compose up web method by default, both the redis and db services are started because the dependencies are defined in the configuration file.

6. dns

Same as the --dns parameter; the format is as follows:

dns: 8.8.8.8
It can also be a list:

dns:
  - 8.8.8.8
  - 9.9.9.9

In addition, the configuration of dns_search is similar:

dns_search: example.com
dns_search:
  - dc1.example.com
  - dc2.example.com
  7. tmpfs

Mounting a temporary directory inside the container has the same effect as the run parameter:

tmpfs: /run
tmpfs:
  - /run
  - /tmp
  8. entrypoint

The Dockerfile has an ENTRYPOINT instruction that specifies the entry point (Chapter 4 covers how it differs from CMD).
The entry point can also be defined in docker-compose.yml, overriding the definition in the Dockerfile:

entrypoint: /code/entrypoint.sh

The format is similar to Docker's, but it can also be written as a list:

entrypoint:
    - php
    - -d
    - zend_extension=/usr/local/lib/php/extensions/no-debug-non-zts-20100525/xdebug.so
    - -d
    - memory_limit=-1
    - vendor/bin/phpunit

9. env_file

Remember the .env file mentioned earlier? That file can set Compose variables. In docker-compose.yml you can also point to a file that stores variables.
If the configuration file is specified with docker-compose -f FILE, the env_file path is resolved relative to that configuration file.

If there is a conflict between the variable name and the environment instruction, the latter will prevail. The format is as follows:

env_file: .env
Or list several files in docker-compose.yml:

env_file:
  - ./common.env
  - ./apps/web.env
  - /opt/secrets.env

Note that the environment variables mentioned here belong to the host side of Compose. If the configuration file contains a build operation, these variables do not enter the build process; to use variables during a build, use the args tag instead.

  10. environment

Unlike the env_file tag above (and somewhat like args), this tag sets variables that are saved into the image, which means the started container also contains these variable settings; that is the biggest difference from args.
args variables are only used during the build, whereas environment, like the ENV instruction in a Dockerfile, keeps the variables in the image and the container, similar to the effect of docker run -e.

environment:
  RACK_ENV: development
  SHOW: 'true'
  SESSION_SECRET:

environment:
  - RACK_ENV=development
  - SHOW=true
  - SESSION_SECRET
  11. expose

This tag is the same as the EXPOSE instruction in the Dockerfile: it declares exposed ports, but only as documentation. For actual port mapping, docker-compose.yml still uses the ports tag.

expose:
 - "3000"
 - "8000"
  12. external_links

In day-to-day Docker use, many containers are started with docker run alone. To let Compose connect to containers that are not defined in docker-compose.yml, there is a special tag, external_links, which links containers inside the Compose project to containers outside the project configuration (provided that at least one of the external containers is attached to the same network as a service in the project).
The format is as follows:

external_links:
 - redis_1
 - project_db_1:mysql
 - project_db_1:postgresql
  13. extra_hosts

Adds hostname mappings, i.e., appends records to the container's /etc/hosts file, similar to --add-host of the Docker client:

extra_hosts:
 - "somehost:162.242.195.82"
 - "otherhost:50.31.209.229"

View the internal hosts of the container after startup:

162.242.195.82  somehost
50.31.209.229   otherhost
  14. labels

Adds metadata to the container, with the same meaning as the Dockerfile's LABEL instruction. The format is as follows:

labels:
  com.example.description: "Accounting webapp"
  com.example.department: "Finance"
  com.example.label-with-empty-value: ""
labels:
  - "com.example.description=Accounting webapp"
  - "com.example.department=Finance"
  - "com.example.label-with-empty-value"
  15. links

Remember depends_on above? That tag solves the startup-order problem; links solves the container connection problem. It works like the Docker client's --link and connects to containers in other services.
The format is as follows:

links:
 - db
 - db:database
 - redis

The alias used will be automatically created in /etc/hosts in the service container. E.g:

172.12.2.186  db
172.12.2.186  database
172.12.2.187  redis

The corresponding environment variable will also be created.

  16. logging

This tag is used to configure the log service. The format is as follows:

logging:
  driver: syslog
  options:
    syslog-address: "tcp://192.168.0.42:123"

The default driver is json-file. Only json-file and journald can display logs through docker-compose logs; other drivers have their own ways of viewing logs, but Compose does not support them. Driver-specific values are set with options.
For more information on this you can read the official documentation:
https://docs.docker.com/engine/admin/logging/overview/
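For example, a minimal sketch (web stands for any service name defined in the file):

docker-compose logs -f web    # follow the logs of the web service (works with the json-file and journald drivers)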

  17. pid

pid: "host"

This sets the PID mode to host mode, sharing the process namespace with the host system. Containers using this tag can see and interact with the process namespaces of other containers and of the host.

  18. ports

The tag for mapping ports.
Use the HOST:CONTAINER format, or specify only the container port and the host will map a random port to it.

ports:
 - "3000"
 - "8000:8000"
 - "49100:22"
 - "127.0.0.1:8001:8001"

Note: when using the HOST:CONTAINER format to map ports, a container port lower than 60 that is not quoted may give a wrong result, because YAML parses numbers in the xx:yy format as base-60 values. Therefore, it is recommended to always use the string format.

  19. security_opt

Overrides the default labeling scheme for each container. For example, set the user label of the service to USER and the role label to ROLE:

security_opt:
  - label:user:USER
  - label:role:ROLE
  20. stop_signal

Sets an alternative signal for stopping the container. By default SIGTERM is used; use the stop_signal tag to choose another signal, for example:

stop_signal: SIGUSR1

  21. volumes

Mounts a directory or an existing data-volume container, either in the form [HOST:CONTAINER] or [HOST:CONTAINER:ro]; the latter is read-only for the container, which helps protect the host's file system.
Compose volume paths can be relative, using . or .. to refer to directories relative to the Compose file.
The format of the data volume can be in the following forms:

volumes:
  # Just a container path: Docker automatically creates an anonymous data volume for it.
  - /var/lib/mysql
  # Mount a host directory as a data volume using an absolute path.
  - /opt/data:/var/lib/mysql
  # A path relative to the Compose configuration file, mounted into the container.
  - ./cache:/tmp/cache
  # A path relative to the user's home directory (~/ is /home/<user>/ or /root/), mounted read-only.
  - ~/configs:/etc/configs/:ro
  # An existing named data volume.
  - datavolume:/var/lib/mysql

If you do not use the host’s path, you can specify a volume_driver.

volume_driver: mydriver
  22. volumes_from

Mounts data volumes from other containers or services. Optional suffixes are :ro and :rw; the former makes the data volume read-only for the container, the latter read-write, which is the default.

volumes_from:
  - service_name
  - service_name:ro
  - container:container_name
  - container:container_name:rw
  23. cap_add, cap_drop

Add or remove the container’s kernel features. Detailed information is explained in the previous section of the container and will not be repeated here.

cap_add:
  - ALL

cap_drop:
  - NET_ADMIN
  - SYS_ADMIN
  24. cgroup_parent

Specifies the parent cgroup of a container.

cgroup_parent: m-executor-abcd

  25. devices

List of device mappings. Similar to the --device parameter of the Docker client.

devices:
  - "/dev/ttyUSB0:/dev/ttyUSB0"
  26. extends

This tag extends another service. The extended content can come from the current file or from another file; for the same service, later values override the original configuration.

extends:
  file: common.yml
  service: webapp

You can use this tag anywhere, as long as it contains both the file and service values. The value of file can be a relative or absolute path; if you do not specify file, Compose reads the current YML file.
More details of the operation are described later in subsection 12.3.4.

  27. network_mode

The network mode, similar to the --net parameter of the Docker client, except that it additionally accepts the service:[service name] format.
E.g:

network_mode: "bridge"
network_mode: "host"
network_mode: "none"
network_mode: "service:[service name]"
network_mode: "container:[container name/id]"

You can specify the network that uses the service or container.

  28. networks

Join the specified network in the following format:

services:
  some-service:
    networks:
     - some-network
     - other-network

There is also a special child tag aliases for this tag. This is a tag to set the service alias, for example:

services:
  some-service:
    networks:
      some-network:
        aliases:
         - alias1
         - alias3
      other-network:
        aliases:
         - alias2

The same service can have different aliases on different networks.

  29. Other tags

There are also these tags: cpu_shares, cpu_quota, cpuset, domainname, hostname, ipc, mac_address, mem_limit, memswap_limit, privileged, read_only, restart, shm_size, stdin_open, tty, user, working_dir.
These are all single-value tags, similar to the corresponding docker run options.

cpu_shares: 73
cpu_quota: 50000
cpuset: 0,1

user: postgresql
working_dir: /code

domainname: foo.com
hostname: foo
ipc: host
mac_address: 02:42:ac:11:65:43

mem_limit: 1000000000
memswap_limit: 2000000000
privileged: true

restart: always

read_only: true
shm_size: 64M
stdin_open: true
tty: true

Docker import and export images

Docker Import and Export Image
Docker allows you to export an image to a local file. The export command is docker save. First, let's look at how this command is used:
$ sudo docker save --help

You can see that the docker save command is very simple to use, with a -o parameter to specify which file to output the image to.
Earlier we downloaded some images; here we export the ubuntu:14.04 image to the file ubuntu1404.tar.
$ sudo docker save -o ubuntu1404.tar ubuntu:14.04
After a successful export, you can find the file in the local directory.

Import Image
Docker uses the docker load command to import an exported file back into the local image library.

For example, we can import the exported image file ubuntu1404.tar back into the local image library:
$ sudo docker load -i ubuntu1404.tar

Remove images
The command to remove images is docker rmi.

docker rmi can remove one or more images at a time, specified by image ID or image name. Here we use the centos image that was imported earlier.
$ sudo docker rmi centos:centos6

You can see that the centos image has been deleted from the local repository.
Before removing an image, make sure there are no containers based on it (including stopped containers); otherwise the image cannot be deleted. You must use docker rm to delete all containers based on the image before you can remove the image itself.
For example, the image ubuntu:14.04 cannot be removed directly because containers still depend on it.
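A sketch of that cleanup, using only commands shown in this article:

$ sudo docker ps -a                      # find the containers created from ubuntu:14.04
$ sudo docker rm <container id>          # remove each of them
$ sudo docker rmi ubuntu:14.04           # now the image can be removed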

That is all about images for now.

Docker centos 6.9

Docker requires Linux kernel version 3.8 or higher, so you must check the kernel version of the host operating system before installation. If the kernel is older than 3.8, Docker may still install successfully, but it will exit automatically as soon as you enter it.

1. Download and install CentOS 6.9

The latest release in the CentOS 6 series is 6.9. Because Docker only runs on 64-bit systems, download the CentOS 6.9 64-bit image from the official CentOS website.

2. Upgrade the CentOS Linux kernel

The default Linux kernel of CentOS 6.9 is 2.6, while CentOS 7 ships with 3.10, so on CentOS 6.9 you need to upgrade the kernel.

1) Open the kernel update page: http://elrepo.org/tiki/tiki-index.php

2) Follow the instructions there to update the kernel, executing the following commands as root

(1) import public key

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org

(2) Install ELRepo

For CentOS 6,

rpm -Uvh http://www.elrepo.org/elrepo-release-6-6.el6.elrepo.noarch.rpm

For CentOS 7,

rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm

(3) Install the kernel

Long-term supported version, stable (recommended)

yum --enablerepo=elrepo-kernel install -y kernel-lt

Mainline version (mainline)

yum --enablerepo=elrepo-kernel install -y kernel-ml

(4) Modify the GRUB boot order so that the newly installed kernel starts by default

Edit grub.conf file

vi /etc/grub.conf

Change default to the index of the newly installed kernel entry (entries are counted from 0)

# grub.conf generated by anaconda
#
default=0    
timeout=5
splashimage=(hd0,0)/boot/grub/splash.xpm.gz
hiddenmenu
title CentOS (3.10.28-1.el6.elrepo.x86_64)
        root (hd0,0)
        kernel /boot/vmlinuz-3.10.28-1.el6.elrepo.x86_64 ro root=UUID=0a05411f-16f2-4d69-beb0-2db4cefd3613 rd_NO_LUKS  KEYBOARDTYPE=pc KEYTABLE=us rd_NO_MD crashkernel=auto.UTF-8 rd_NO_LVM rd_NO_DM rhgb quiet
        initrd /boot/initramfs-3.10.28-1.el6.elrepo.x86_64.img
title CentOS (2.6.32-431.3.1.el6.x86_64)
        root (hd0,0)
        kernel /boot/vmlinuz-2.6.32-431.3.1.el6.x86_64 ro root=UUID=0a05411f-16f2-4d69-beb0-2db4cefd3613 rd_NO_LUKS  KEYBOARDTYPE=pc KEYTABLE=us rd_NO_MD crashkernel=auto.UTF-8 rd_NO_LVM rd_NO_DM rhgb quiet
        initrd /boot/initramfs-2.6.32-431.3.1.el6.x86_64.img

(5) Restart, kernel upgrade completed

reboot
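After the reboot you can confirm that the new kernel is running, for example:

uname -r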

3. Install Docker

(1) Disable SELinux

Because SELinux conflicts with LXC, disable it:

vi /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#       enforcing - SELinux security policy is enforced.
#       permissive - SELinux prints warnings instead of enforcing.
#       disabled - SELinux is fully disabled.
SELINUX=disabled
# SELINUXTYPE= type of policy in use. Possible values are:
#       targeted - Only targeted network daemons are protected.
#       strict - Full SELinux protection.
SELINUXTYPE=targeted

(2) Configure Fedora EPEL source

Installing Docker differs slightly between CentOS 6.x and 7.x. On CentOS 6.x the Docker package is called docker-io and comes from the Fedora EPEL repository, which maintains a large number of packages not included in the distribution, so EPEL must be installed first. On CentOS 7.x, docker is included directly in the Extras repository of the official image source (the [extras] section in CentOS-Base.repo with enabled=1).

yum -y install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

(3) Install docker

Install docker-io

yum install -y docker-io

(4) start docker

service docker start

(5) Check Docker Version

docker version
Client version: 1.7.1
Client API version: 1.19
Go version (client): go1.4.2
Git commit (client): 786b29d/1.7.1
OS/Arch (client): linux/amd64
Server version: 1.7.1
Server API version: 1.19
Go version (server): go1.4.2
Git commit (server): 786b29d/1.7.1
OS/Arch (server): linux/amd64

(6) Run docker hello-world

Pull hello-world image

docker pull hello-world

Run hello-world

docker run hello-world
Hello from Docker.
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (Assuming it was not already locally available.)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

For more examples and ideas, visit:
 http://docs.docker.com/userguide/

The output above indicates that Docker has been installed successfully.

4. Uninstall Docker

Uninstalling Docker is simple. First, check the installed Docker packages:

yum list installed | grep docker

Then delete the installation package

yum -y remove docker-io.x86_64

Delete the images and containers:

rm -rf /var/lib/docker

Docker creates Tomcat/Weblogic cluster

Install Tomcat image

Put the required software (JDK, Tomcat, and so on) into the home directory and start a container:
docker run -t -i -v /home:/opt/data --name mk_tomcat ubuntu /bin/bash

This command mounts the local home directory to the container's /opt/data directory; if the directory does not exist inside the container, it is created automatically. Next comes the basic Tomcat configuration: after setting the JDK environment variables, place the Tomcat program in /opt/apache-tomcat, then edit the /etc/supervisor/conf.d/supervisor.conf file and add the tomcat entry.

[supervisord]
nodaemon=true

[program:tomcat]
command=/opt/apache-tomcat/bin/startup.sh

[program:sshd]
command=/usr/sbin/sshd -D

Commit the configured container as a new image:
docker commit ac6474aeb31d tomcat

Create a new tomcat folder and create a new Dockerfile.
FROM mk_tomcat
EXPOSE 22 8080
CMD ["/usr/bin/supervisord"]

Create an image based on the Dockerfile:
docker build -t tomcat tomcat/
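
To sanity-check the result (a quick sketch, assuming the image was tagged tomcat as above), list the image and start a throwaway container from it:

docker images | grep tomcat
docker run -d -p 8081:8080 --name tomcat_test tomcat
docker ps | grep tomcat_test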

Install weblogic image

The steps are basically the same as for Tomcat; the configuration files are posted here.
supervisor.conf
[supervisord]
nodaemon=true

[program:weblogic]
command=/opt/Middleware/user_projects/domains/base_domain/bin/startWebLogic.sh

[program:sshd]
command=/usr/sbin/sshd -D
Dockerfile
FROM weblogic
EXPOSE 22 7001
CMD ["/usr/bin/supervisord"]

Use of the tomcat/weblogic image

Use of storage

At startup, use the -v parameter
-v, --volume=[] Bind mount a volume (e.g. from the host: -v /host:/container, from docker: -v /container)

Mapping a local disk directory into the container keeps the host and the container in sync in real time, so to update a program or upload code we only need to update the directory on the physical host.
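
A quick way to see the mapping in action (a sketch, assuming a container such as the mk_tomcat container created earlier is running with /home mounted at /opt/data): create a file on the host and read it from inside the container.

echo "hello from host" > /home/test.txt
docker exec mk_tomcat cat /opt/data/test.txt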

Implementation of tomcat and weblogic clusters

For Tomcat, simply start multiple containers:
docker run -d -p 204:22 -p 7003:8080 -v /home/data:/opt/data --name tm1 tomcat /usr/bin/supervisord
docker run -d -p 205:22 -p 7004:8080 -v /home/data:/opt/data --name tm2 tomcat /usr/bin/supervisord
docker run -d -p 206:22 -p 7005:8080 -v /home/data:/opt/data --name tm3 tomcat /usr/bin/supervisord

A note on the WebLogic configuration: as we all know, WebLogic has the concept of a domain. To deploy with the usual AdminServer + Managed Server (node) approach, the startup scripts for the AdminServer and each server have to be written into supervisord separately. The advantages of doing this are:

  • You can use weblogic clustering, synchronization and other concepts
  • To deploy a clustered application, you only need to install the application once on the cluster.

The disadvantages are:

  • Docker configuration is complicated
  • There is no way to expand the cluster's computing capacity automatically. To add a node, you first have to create the node on the AdminServer, then configure a supervisor startup script for a new container, and then start that container

Another method is to install everything on the AdminServer; when more capacity is needed, simply start additional nodes. Its advantages and disadvantages are the opposite of the previous method. (This approach is recommended for development and test environments.)
docker run -d -p 204:22 -p 7001:7001 -v /home/data:/opt/data --name node1 weblogic /usr/bin/supervisord
docker run -d -p 205:22 -p 7002:7001 -v /home/data:/opt/data --name node2 weblogic /usr/bin/supervisord
docker run -d -p 206:22 -p 7003:7001 -v /home/data:/opt/data --name node3 weblogic /usr/bin/supervisord

In this way, using nginx as the load balancer in the front end can complete the configuration.
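
A minimal nginx reverse-proxy sketch for the three Tomcat containers above (assuming nginx runs on the same host and the published ports 7003-7005 are reachable; the file name and upstream name are illustrative):

cat > /etc/nginx/conf.d/tomcat_cluster.conf <<'EOF'
upstream tomcat_cluster {
    server 127.0.0.1:7003;
    server 127.0.0.1:7004;
    server 127.0.0.1:7005;
}
server {
    listen 80;
    location / {
        proxy_pass http://tomcat_cluster;
    }
}
EOF
nginx -s reload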

Kubernetes basic concepts study notes

Kubernetes (often called K8s) is an open source system for automatically deploying, extending, and managing containerized applications, and is an “open source version” of Google’s internal tools, Borg.

Kubernetes is currently recognized as the most advanced container cluster management tool. Since the 1.0 release it has been developing at an ever faster pace and has gained full support from container-ecosystem vendors such as CoreOS and Rancher. Many public cloud providers, Huawei among them, also offer infrastructure services built on secondary development of Kubernetes when they provide container services. It is fair to say that Kubernetes is the strongest competitor to Docker's own entry into container cluster management and service orchestration, Docker Swarm.

Kubernetes defines a set of building blocks that together provide a mechanism for deploying, maintaining, and scaling applications. The components that make up Kubernetes are designed to be loosely coupled and extensible so that they can serve a wide variety of workloads. That extensibility is provided largely by the Kubernetes API, which is used both by internal components and by the extensions and containers that run on Kubernetes.

Because Kubernetes is a system made up of many components, it is still fairly difficult to install and deploy. In addition, Kubernetes is developed by Google, and many of its internal dependencies have to be fetched from sites that are only reachable by going over the firewall.

Of course, there are quick installation tools, such as kubeadm. kubeadm is the official tool provided by Kubernetes for quickly installing and initializing a cluster. It is currently still in an incubating state; a matching kubeadm release is published alongside each Kubernetes release, but for now kubeadm cannot be used in a production environment.
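
For reference, the kubeadm workflow boils down to two commands (a rough sketch only; flags and the join syntax vary between releases, and the token and master address are placeholders):

kubeadm init --pod-network-cidr=10.244.0.0/16        # run on the master node
kubeadm join --token <token> <master-ip>:6443        # run on each minion node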

1. Kubernetes architecture

(Kubernetes architecture diagram)

2. Kubernetes features

Kubernetes features:

  • Simple: lightweight, simple, easy to use
  • Portable: public, private, hybrid, multi-cloud
  • Extensible: modular, pluggable, mountable, combinable
  • Self-healing: automatic placement, automatic restart, automatic replication

In layman's terms:

  • Automated container deployment and replication
  • Expand or shrink containers at any time
  • Organize containers into groups and provide load balancing among containers
  • Easily upgrade new versions of application containers
  • Provide container elasticity: if a container fails, replace it

3. Kubernetes terminology

Kubernetes terminology:

  • Master Node : The machine used to control the Kubernetes nodes; all task assignments originate here.
  • Minion Node : The machines that carry out the requested, assigned tasks; the Kubernetes master controls these nodes.
  • Namespace : An abstract collection of resources and objects; for example, it can be used to divide the objects inside the system into different project groups or user groups. Common objects such as pods, services, replication controllers, and deployments belong to a namespace (default by default), while node, persistentVolumes, and the like do not belong to any namespace.
  • Pod : A group of one or more containers deployed on a single node. A Pod is the smallest deployment unit that Kubernetes creates, schedules, and manages; all containers in the same Pod share the IP address, IPC, hostname, and other resources. Pods abstract the network and storage away from the underlying container, which makes it easier to move containers around the cluster.
  • Deployment : Deployment is a new generation of object for managing Pods. Compared with the Replication Controller, it provides more complete functionality and is easier and more convenient to use.
  • Replication Controller : The replication controller manages the lifecycle of Pods. It ensures that a specified number of Pods are running at any given time, creating or deleting Pods as needed.
  • Service : The service provides a single stable name and address for a group of Pods and separates the work definition from the Pods. The Kubernetes service proxy automatically routes service requests to the correct Pod, no matter where it moves within the cluster, even if it has been replaced.
  • Label : Labels are key/value pairs used to organize and select groups of objects; they are used throughout the Kubernetes components.

In Kubernetes, all containers run inside Pods; a Pod holds a single container or multiple cooperating containers. In the latter case, the containers in the Pod are guaranteed to be placed on the same machine and can share resources. A Pod can also contain zero or more volumes; a volume may be private to one container or shared between the containers in the Pod. For each Pod the user creates, the system finds a machine that is healthy and has sufficient capacity and starts the corresponding containers there. If a container fails, it is automatically restarted by Kubernetes' node agent, called the Kubelet. However, if the Pod or its machine fails, it is not automatically moved or restarted unless the user has also defined a Replication Controller.

Replica sets of Pods can collectively form an entire application, a microservice, or one layer of a multi-tiered application. Once Pods are created, the system continuously monitors their health and the health of the machines they run on. If a Pod fails because of a software problem or a machine failure, the Replication Controller automatically creates a new Pod on a healthy machine.

Kubernetes supports a unique network model. Kubernetes encourages the use of flat address spaces and does not dynamically allocate ports, but instead allows users to choose any port that suits them. To achieve this, it assigns each Pod an IP address.

Kubernetes provides a Service abstraction that offers a stable IP address and DNS name for a dynamic set of Pods, such as the Pods that make up a microservice. The set is defined by a Label selector, so any group of Pods can be addressed this way. When a container running in a Kubernetes Pod connects to this address, the connection is forwarded by a local proxy (the kube-proxy) running on the source machine; the destination is one of the corresponding back-end containers, chosen by a round-robin policy to balance the load. The kube-proxy also tracks dynamic changes in the backend Pod set, for example when a Pod is replaced by a new Pod on a new machine, so the IP and DNS name of the service never need to change.

Each Kubernetes resource, such as a Pod, is identified by a URI and has a UID. The general components of a URI are the object's type (e.g., Pod), the object's name, and the object's namespace. For a given object type, every name is unique within its namespace; if an object name is given without a namespace, the default namespace is assumed. The UID is unique across time and space.


More about Service:

  • Service is an abstraction of application services. It provides load balancing and service discovery for applications through labels. The list of Pod IPs and ports that match the labels constitutes endpoints, and kube-proxy is responsible for load balancing service IPs to these endpoints.
  • Each Service automatically assigns a cluster IP (a virtual address that is accessible only within the cluster) and a DNS name through which other containers can access the service without needing to know about the operation of the backend container.
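
Putting the Pod, Deployment, and Service terms together, a typical interaction looks roughly like this (a sketch only; the nginx image and port are illustrative, and the exact kubectl flags depend on the version):

kubectl run nginx --image=nginx --replicas=3      # creates a Deployment that keeps 3 Pods running
kubectl expose deployment nginx --port=80         # creates a Service with a stable cluster IP
kubectl get pods -l run=nginx                     # the Pods selected by the run=nginx label
kubectl get svc nginx                             # the Service and its cluster IP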


4. Kubernetes components

Kubernetes component:

  • Kubectl : The client command line tool that formats the accepted command and sends it to kube-apiserver as an operation entry for the entire system.
  • Kube-apiserver : Serves as a control entry for the entire system, providing interfaces with REST API services.
  • Kube-controller-manager : Performs the background tasks of the whole system, including node status monitoring, the number of Pods, the association between Pods and Services, and so on.
  • Kube-scheduler (distributes Pods to Nodes) : Responsible for node resource management; it accepts Pod creation tasks from kube-apiserver and assigns each Pod to a node.
  • Etcd : Responsible for service discovery and configuration sharing between nodes.
  • Kube-proxy : Runs on each compute node and is responsible for the Pod network proxy. It periodically obtains Service information from etcd and applies the corresponding forwarding rules.
  • Kubelet : Runs on each compute node. As the node agent, it accepts the Pods assigned to its node, manages their containers, periodically collects container status, and reports it back to kube-apiserver.
  • DNS : An optional DNS service that creates a DNS record for each Service object so that all Pods can reach services by name.
  • flannel : Flannel is an overlay-network tool designed by the CoreOS team for Kubernetes; it has to be downloaded and deployed separately. When Docker starts, each host gets an IP range used to talk to its containers; left unmanaged, that range may be identical on every machine and is limited to communication on that machine, so containers on other machines cannot be reached. The purpose of Flannel is to re-plan IP address usage across all nodes in the cluster, so that containers on different nodes obtain non-duplicated addresses belonging to the same internal network and can communicate with each other directly by IP.

The master node contains the components:

Docker
etcd
kube-apiserver
kube-controller-manager
kubelet
kube-scheduler

The minion node contains components:

Docker
kubelet
kube-proxy
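
A quick way to confirm this split on a running cluster (a sketch, assuming the components are installed as systemd services under these names):

systemctl status etcd kube-apiserver kube-controller-manager kube-scheduler   # on the master node
systemctl status docker kubelet kube-proxy                                    # on each minion node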

Ansible Playbooks

root@controller:~$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/root/.ssh/id_rsa.
Your public key has been saved in /home/root/.ssh/id_rsa.pub.
The key fingerprint is:
33:b8:4d:8f:95:bc:ed:1a:12:f3:6c:09:9f:52:23:d0 root@controller
The key’s randomart image is:
+–[ RSA 2048]—-+
| |
| . |
| . E |
| o . . |
| . S * |
| + / * |
| . = @ . |
| + o |
| … |
+—————–+
root@controller:~$ cat .ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCpEZIWC8UGJXA5uGDRj6vUd5KlwIvE0cat3opG4ZUajmrZU382PbdO3JG6DJa3beZXylYSOYeKtRVT9DxbeKgeTKQ4m8uamM81NAMRf0ZaiqSsZ9r56h1urlJCfD4y5nXwnUTvoBzZpTvTYwcevBzpNcI/VnBIgpcKQWJq11iHHrcmybbFreujgotHg1XUwCv9BdpXbPnA50XbUyX97uqCE9EzIk7WnSNpTtsmASxMPSWoHB9seOas1mq7UBKo7Xfu7qaJJLIEnMisBLKHPb0hM23BNV2SiacJEpHSB5eJKULtMGDej38HbmBsQI3u+lzcWSRppDIt6BvO05brW5C5 root@controller

Copy the key to the Ansible deployment server host for passwordless authentication.
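
A minimal way to push the key (assuming ssh-copy-id is available and that the ansible host alias shown in /etc/hosts below resolves):

ssh-copy-id root@ansible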

root@controller:~$ cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 controller
104.198.143.3 ansible
root@controller:~$ ssh ansible
Last login: Wed Jan 18 02:51:09 2017 from 115.113.77.105
[root@ansible ~]$

——————————-
[root@ansible ~]$ sudo vim /etc/ansible/hosts
[web]
192.168.1.23
[web]
192.168.1.21
——————————-
[root@ansible ~]$ vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.128.0.2 ansible.c.rich-operand-154505.internal ansible # Added by Google
169.254.169.254 metadata.google.internal # Added by Google
192.168.1.23 ansible1
192.168.1.22 ansible2
104.198.143.3 ansible

——————————-
[root@ansible ~]$ ansible -m ping web
192.168.1.23 | SUCCESS => {
“changed”: false,
“ping”: “pong”
}
——————————-
[root@ansible ~]$ ansible -m ping web
192.168.1.22| SUCCESS => {
“changed”: false,
“ping”: “pong”
}

——————————-
[root@ansible ~]$ ansible -m ping all -u root
192.168.1.23 | SUCCESS => {
“changed”: false,
“ping”: “pong”
}
192.168.1.22| SUCCESS => {
“changed”: false,
“ping”: “pong”
}
——————————-
[root@ansible ~]$ ansible -m ping all -u root
192.168.1.23 | UNREACHABLE! => {
“changed”: false,
“msg”: “Failed to connect to the host via ssh: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).\r\n”,
“unreachable”: true
}
192.168.1.22| UNREACHABLE! => {
“changed”: false,
“msg”: “Failed to connect to the host via ssh: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).\r\n”,
“unreachable”: true
}
——————————-
[root@ansible ~]$ ansible -m ping all -b
192.168.1.23 | SUCCESS => {
“changed”: false,
“ping”: “pong”
}
192.168.1.22| SUCCESS => {
“changed”: false,
“ping”: “pong”
}
——————————-
[root@ansible ~]$ ansible -s -m ping all
192.168.1.23 | SUCCESS => {
“changed”: false,
“ping”: “pong”
}
192.168.1.22| SUCCESS => {
“changed”: false,
“ping”: “pong”
}
——————————-
[root@ansible ~]$ vim playbook1.yml

- hosts: all
  tasks:
    - name: installing telnet package
      yum: name=telnet state=present

[root@ansible ~]$ ansible-playbook playbook1.yml

PLAY [all] *********************************************************************

TASK [setup] *******************************************************************
ok: [192.168.1.21]
ok: [192.168.1.23]

TASK [installing telnet package] ***********************************************
fatal: [192.168.1.23]: FAILED! => {“changed”: true, “failed”: true, “msg”: “You need to be root to perform this command.\n”, “rc”: 1, “results”: [“Loaded plugins: fastestmirror\n”]}
fatal: [192.168.1.21]: FAILED! => {“changed”: true, “failed”: true, “msg”: “You need to be root to perform this command.\n”, “rc”: 1, “results”: [“Loaded plugins: fastestmirror\n”]}
to retry, use: --limit @/home/root/playbook1.retry

PLAY RECAP *********************************************************************
192.168.1.22: ok=1 changed=0 unreachable=0 failed=1
192.168.1.23 : ok=1 changed=0 unreachable=0 failed=1

[root@ansible ~]$ ansible-playbook playbook1.yml -b

PLAY [all] *********************************************************************

TASK [setup] *******************************************************************
ok: [192.168.1.21]
ok: [192.168.1.23]

TASK [installing telnet package] ***********************************************
changed: [192.168.1.23]
changed: [192.168.1.21]

PLAY RECAP *********************************************************************
192.168.1.22: ok=2 changed=1 unreachable=0 failed=0
192.168.1.23 : ok=2 changed=1 unreachable=0 failed=0
——————————-
[root@ansible ~]$ vim playbook2.yml

- hosts: all
  tasks:
    - name: inatalling nfs package
      yum: name=nfs-utils state=present

    - name: statrting nfs service
      service: name=nfs state=started enabled=yes

[root@ansible ~]$ ansible-playbook playbook2.yml --syntax-check

playbook: playbook2.yml
[root@ansible ~]$ ansible-playbook playbook2.yml --check

PLAY [all] *********************************************************************

TASK [setup] *******************************************************************
ok: [192.168.1.21]
ok: [192.168.1.23]

TASK [inatalling nfs package] **************************************************
changed: [192.168.1.23]
changed: [192.168.1.21]

TASK [statrting nfs service] ***************************************************
fatal: [192.168.1.21]: FAILED! => {“changed”: false, “failed”: true, “msg”: “Could not find the requested service \”‘nfs’\”: “}
fatal: [192.168.1.23]: FAILED! => {“changed”: false, “failed”: true, “msg”: “Could not find the requested service \”‘nfs’\”: “}
to retry, use: --limit @/home/root/playbook2.retry

PLAY RECAP *********************************************************************
192.168.1.22: ok=2 changed=1 unreachable=0 failed=1
192.168.1.23 : ok=2 changed=1 unreachable=0 failed=1

————————
[root@ansible ~]$ ansible-playbook playbook2.yml -b

PLAY [all] *********************************************************************

TASK [setup] *******************************************************************
ok: [192.168.1.21]
ok: [192.168.1.23]

TASK [inatalling nfs package] **************************************************
changed: [192.168.1.21]
changed: [192.168.1.23]

TASK [statrting nfs service] ***************************************************
changed: [192.168.1.21]
changed: [192.168.1.23]

PLAY RECAP *********************************************************************
192.168.1.22: ok=3 changed=2 unreachable=0 failed=0
192.168.1.23 : ok=3 changed=2 unreachable=0 failed=0
—————-

Running the same playbook again and again leaves the configuration unchanged, because the tasks are idempotent:
[root@ansible ~]$ ansible-playbook playbook2.yml -b

PLAY [all] *********************************************************************

TASK [setup] *******************************************************************
ok: [192.168.1.21]
ok: [192.168.1.23]

TASK [inatalling nfs package] **************************************************
ok: [192.168.1.21]
ok: [192.168.1.23]

TASK [statrting nfs service] ***************************************************
ok: [192.168.1.21]
ok: [192.168.1.23]

PLAY RECAP *********************************************************************
192.168.1.22: ok=3 changed=0 unreachable=0 failed=0
192.168.1.23 : ok=3 changed=0 unreachable=0 failed=0

=================================================
[root@ansible ~]$ ansible all -a "service nfs status" -b
192.168.1.23 | SUCCESS | rc=0 >>
? nfs-server.service – NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
Active: active (exited) since Wed 2017-01-18 04:14:18 UTC; 2min 13s ago
Process: 12036 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
Process: 12035 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
Main PID: 12036 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/nfs-server.service

192.168.1.22| SUCCESS | rc=0 >>
? nfs-server.service – NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
Active: active (exited) since Wed 2017-01-18 04:14:18 UTC; 2min 13s ago
Process: 6738 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
Process: 6737 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
Main PID: 6738 (code=exited, status=0/SUCCESS)
Memory: 0B
CGroup: /system.slice/nfs-server.service
—————————————————
[root@ansible ~]$ vim playbook3.yml

- hosts: all
  become: yes
  tasks:
    - name: Install Apache.
      yum: name={{ item }} state=present
      with_items:
        - httpd
        - httpd-devel

    - name: Copy configuration files.
      copy:
        src: "{{ item.src }}"
        dest: "{{ item.dest }}"
        owner: root
        group: root
        mode: 0644
      with_items:
        - src: "httpd.conf"
          dest: "/etc/httpd/conf/httpd.conf"
        - src: "httpd-vhosts.conf"
          dest: "/etc/httpd/conf/httpd-vhosts.conf"

    - name: Make sure Apache is started now and at boot.
      service: name=httpd state=started enabled=yes
[root@ansible ~]$ ls -l
total 40
-rw-r–r–. 1 root root 11753 Jan 18 06:27 httpd.conf
-rw-r–r–. 1 root root 824 Jan 18 06:27 httpd-vhosts.conf

[root@ansible ~]$ ansible-playbook playbook3.yml

PLAY [all] *********************************************************************

TASK [setup] *******************************************************************
ok: [192.168.1.21]
ok: [192.168.1.23]

TASK [Install Apache.] *********************************************************
changed: [192.168.1.23] => (item=[u’httpd’, u’httpd-devel’])
changed: [192.168.1.21] => (item=[u’httpd’, u’httpd-devel’])

TASK [Copy configuration files.] ***********************************************
ok: [192.168.1.21] => (item={u’dest’: u’/etc/httpd/conf/httpd.conf’, u’src’: u’httpd.conf’})
ok: [192.168.1.23] => (item={u’dest’: u’/etc/httpd/conf/httpd.conf’, u’src’: u’httpd.conf’})
ok: [192.168.1.21] => (item={u’dest’: u’/etc/httpd/conf/httpd-vhosts.conf’, u’src’: u’httpd-vhosts.conf’})
ok: [192.168.1.23] => (item={u’dest’: u’/etc/httpd/conf/httpd-vhosts.conf’, u’src’: u’httpd-vhosts.conf’})

TASK [Make sure Apache is started now and at boot.] ****************************
changed: [192.168.1.21]
changed: [192.168.1.23]

PLAY RECAP *********************************************************************
192.168.1.22: ok=4 changed=2 unreachable=0 failed=0
192.168.1.23 : ok=4 changed=2 unreachable=0 failed=0

[root@ansible ~]$ sudo vim /etc/ansible/hosts
[web]
192.168.1.23
[web]
192.168.1.21
[multi:children]
web
web

[root@ansible ~]$ ansible multi -m ping
192.168.1.23 | SUCCESS => {
“changed”: false,
“ping”: “pong”
}
192.168.1.22| SUCCESS => {
“changed”: false,
“ping”: “pong”
}
—————————————————

[root@ansible ~]$ ansible multi -a hostname
192.168.1.22| SUCCESS | rc=0 >>
ansible2.c.rich-operand-154505.internal

192.168.1.23 | SUCCESS | rc=0 >>
ansible1.c.rich-operand-154505.internal
—————————————-
[root@ansible ~]$ ansible multi -a 'free -m'
192.168.1.22| SUCCESS | rc=0 >>
total used free shared buff/cache available
Mem: 3700 229 1673 178 1796 2978
Swap: 0 0 0

192.168.1.23 | SUCCESS | rc=0 >>
total used free shared buff/cache available
Mem: 3700 329 265 16 3105 3031
Swap: 0 0 0
—————————————-
[root@ansible ~]$ ansible multi -a "du -h"
192.168.1.23 | SUCCESS | rc=0 >>
4.0K ./.ssh
56K ./.ansible/tmp/ansible-tmp-1484714028.87-63512995370206
56K ./.ansible/tmp
56K ./.ansible
0 ./.puppetlabs/var
0 ./.puppetlabs/etc
0 ./.puppetlabs/opt/puppet
0 ./.puppetlabs/opt
0 ./.puppetlabs
80K .

192.168.1.22| SUCCESS | rc=0 >>
4.0K ./.ssh
56K ./.ansible/tmp/ansible-tmp-1484714028.87-38086154108105
56K ./.ansible/tmp
56K ./.ansible
80K .
————————————-
[root@ansible ~]$ ansible multi -a 'service httpd status' -b
192.168.1.22| FAILED | rc=4 >>
Redirecting to /bin/systemctl status httpd.service
Unit httpd.service could not be found.

192.168.1.23 | FAILED | rc=4 >>
Redirecting to /bin/systemctl status httpd.service
Unit httpd.service could not be found.
————————————-
[root@ansible ~]$ ansible multi -a 'netstat -tlpn' -s
192.168.1.22| SUCCESS | rc=0 >>
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:2380 0.0.0.0:* LISTEN 15329/etcd
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 1/systemd
tcp 0 0 0.0.0.0:20048 0.0.0.0:* LISTEN 6736/rpc.mountd
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 994/sshd
tcp 0 0 0.0.0.0:34457 0.0.0.0:* LISTEN –
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1041/master
tcp 0 0 0.0.0.0:2049 0.0.0.0:* LISTEN –
tcp 0 0 0.0.0.0:43009 0.0.0.0:* LISTEN 6724/rpc.statd
tcp6 0 0 :::10251 :::* LISTEN 15502/kube-schedule
tcp6 0 0 :::6443 :::* LISTEN 15445/kube-apiserve
tcp6 0 0 :::2379 :::* LISTEN 15329/etcd
tcp6 0 0 :::10252 :::* LISTEN 15463/kube-controll
tcp6 0 0 :::111 :::* LISTEN 6524/rpcbind
tcp6 0 0 :::20048 :::* LISTEN 6736/rpc.mountd
tcp6 0 0 :::8080 :::* LISTEN 15445/kube-apiserve
tcp6 0 0 :::22 :::* LISTEN 994/sshd
tcp6 0 0 ::1:25 :::* LISTEN 1041/master
tcp6 0 0 :::36474 :::* LISTEN –
tcp6 0 0 :::2049 :::* LISTEN –
tcp6 0 0 :::54309 :::* LISTEN 6724/rpc.statd

192.168.1.23 | SUCCESS | rc=0 >>
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 1/systemd
tcp 0 0 0.0.0.0:9999 0.0.0.0:* LISTEN 7715/java
tcp 0 0 0.0.0.0:20048 0.0.0.0:* LISTEN 12034/rpc.mountd
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 7715/java
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 990/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1032/master
tcp 0 0 0.0.0.0:4447 0.0.0.0:* LISTEN 7715/java
tcp 0 0 0.0.0.0:45185 0.0.0.0:* LISTEN –
tcp 0 0 0.0.0.0:2049 0.0.0.0:* LISTEN –
tcp 0 0 0.0.0.0:9990 0.0.0.0:* LISTEN 7715/java
tcp 0 0 0.0.0.0:53095 0.0.0.0:* LISTEN 12018/rpc.statd
tcp6 0 0 :::111 :::* LISTEN 11822/rpcbind
tcp6 0 0 :::20048 :::* LISTEN 12034/rpc.mountd
tcp6 0 0 :::22 :::* LISTEN 990/sshd
tcp6 0 0 :::43255 :::* LISTEN –
tcp6 0 0 :::55927 :::* LISTEN 12018/rpc.statd
tcp6 0 0 ::1:25 :::* LISTEN 1032/master
tcp6 0 0 :::2049 :::* LISTEN –

[root@ansible ~]$ ansible multi -s -m yum -a "name=ntp state=present"
192.168.1.23 | SUCCESS => {
“changed”: false,
“msg”: “”,
“rc”: 0,
“results”: [
“ntp-4.2.6p5-25.el7.centos.x86_64 providing ntp is already installed”
]
}
192.168.1.22| SUCCESS => {
“changed”: false,
“msg”: “”,
“rc”: 0,
“results”: [
“ntp-4.2.6p5-25.el7.centos.x86_64 providing ntp is already installed”
]
}

————————————–
[root@ansible ~]$ ansible multi -s -m service -a "name=ntpd state=started enabled=yes"
————————————–
[root@ansible ~]$ ntpdate
18 Jan 04:57:35 ntpdate[4532]: no servers can be used, exiting
————————————–
[root@ansible ~]$ ansible multi -s -a "service ntpd stop"
192.168.1.23 | SUCCESS | rc=0 >>
Redirecting to /bin/systemctl stop ntpd.service

192.168.1.22| SUCCESS | rc=0 >>
Redirecting to /bin/systemctl stop ntpd.service
————————————–
[root@ansible ~]$ ansible multi -s -a "ntpdate -q 0.rhel.pool.ntp.org"
192.168.1.22| SUCCESS | rc=0 >>
server 138.236.128.112, stratum 2, offset -0.003149, delay 0.05275
server 71.210.146.228, stratum 2, offset 0.003796, delay 0.04633
server 128.138.141.172, stratum 1, offset -0.000194, delay 0.03752
server 69.89.207.199, stratum 2, offset -0.000211, delay 0.05193
18 Jan 04:58:22 ntpdate[10370]: adjust time server 128.138.141.172 offset -0.000194 sec

192.168.1.23 | SUCCESS | rc=0 >>
server 173.230.144.109, stratum 2, offset 0.000549, delay 0.06175
server 45.127.113.2, stratum 3, offset 0.000591, delay 0.06134
server 4.53.160.75, stratum 2, offset -0.000900, delay 0.04163
server 50.116.52.97, stratum 2, offset -0.001006, delay 0.05426
18 Jan 04:58:22 ntpdate[15477]: adjust time server 4.53.160.75 offset -0.000900 sec
————————————–
[root@ansible ~]$ ansible web -s -m yum -a "name=MySQL-python state=present"
192.168.1.23 | SUCCESS => {
“changed”: true,
“msg”: “”,
“rc”: 0,
“results”: [
“Loaded plugins: fastestmirror\nLoading mirror speeds from cached hostfile\n * base: bay.uchicago.edu\n * epel: mirror.steadfast.net\n * extras: mirror.tzulo.com\n * updates: mirror.team-cymru.org\nResolving Dependencies\n–> Running transaction check\n—> Package MySQL-python.x86_64 0:1.2.5-1.el7 will be installed\n–> Finished Dependency Resolution\n\nDependencies Resolved\n\n================================================================================\n Package Arch Version Repository Size\n================================================================================\nInstalling:\n MySQL-python x86_64 1.2.5-1.el7 base 90 k\n\nTransaction Summary\n================================================================================\nInstall 1 Package\n\nTotal download size: 90 k\nInstalled size: 284 k\nDownloading packages:\nRunning transaction check\nRunning transaction test\nTransaction test succeeded\nRunning transaction\n Installing : MySQL-python-1.2.5-1.el7.x86_64 1/1 \n Verifying : MySQL-python-1.2.5-1.el7.x86_64 1/1 \n\nInstalled:\n MySQL-python.x86_64 0:1.2.5-1.el7 \n\nComplete!\n”
]
}

[root@ansible ~]$ ansible web -s -m yum -a "name=python-setuptools state=present"
192.168.1.23 | SUCCESS => {
“changed”: false,
“msg”: “”,
“rc”: 0,
“results”: [
“python-setuptools-0.9.8-4.el7.noarch providing python-setuptools is already installed”
]
}

[root@ansible ~]$ ansible web -s -m easy_install -a "name=django state=present"
192.168.1.23 | SUCCESS => {
“binary”: “/bin/easy_install”,
“changed”: true,
“name”: “django”,
“virtualenv”: null
}
————————————–
[root@ansible ~]$ ansible web -s -m user -a "name=admin state=present"
192.168.1.23 | SUCCESS => {
“changed”: true,
“comment”: “”,
“createhome”: true,
“group”: 1004,
“home”: “/home/admin”,
“name”: “admin”,
“shell”: “/bin/bash”,
“state”: “present”,
“system”: false,
“uid”: 1003
}
[root@ansible ~]$ ansible web -s -m group -a "name=admin state=present"
192.168.1.23 | SUCCESS => {
“changed”: false,
“gid”: 1004,
“name”: “admin”,
“state”: “present”,
“system”: false
}
[root@ansible ~]$ ansible web -s -m user -a "name=first group=admin state=present"
192.168.1.23 | SUCCESS => {
“changed”: true,
“comment”: “”,
“createhome”: true,
“group”: 1004,
“home”: “/home/first”,
“name”: “first”,
“shell”: “/bin/bash”,
“state”: “present”,
“system”: false,
“uid”: 1004
}
[root@ansible ~]$ ansible web -a "tail /etc/passwd"
192.168.1.23 | SUCCESS | rc=0 >>
systemd-network:x:192:192:systemd Network Management:/:/sbin/nologin
tss:x:59:59:Account used by the trousers package to sandbox the tcsd daemon:/dev/null:/sbin/nologin
root:x:1000:1001::/home/root:/bin/bash
test:x:1001:1002::/home/test:/bin/bash
jboss:x:1002:1003::/home/jboss:/bin/bash
rpc:x:32:32:Rpcbind Daemon:/var/lib/rpcbind:/sbin/nologin
rpcuser:x:29:29:RPC Service User:/var/lib/nfs:/sbin/nologin
nfsnobody:x:65534:65534:Anonymous NFS User:/var/lib/nfs:/sbin/nologin
admin:x:1003:1004::/home/admin:/bin/bash
first:x:1004:1004::/home/first:/bin/bash

[root@ansible ~]$ ansible web -a "tail /etc/shadow"
192.168.1.23 | FAILED | rc=1 >>
tail: cannot open ‘/etc/shadow’ for reading: Permission denied

[root@ansible ~]$ ansible web -a "tail /etc/shadow" -b
192.168.1.23 | SUCCESS | rc=0 >>
systemd-network:!!:17176::::::
tss:!!:17176::::::
root:*:17178:0:99999:7:::
test:*:17178:0:99999:7:::
jboss:!!:17182:0:99999:7:::
rpc:!!:17184:0:99999:7:::
rpcuser:!!:17184::::::
nfsnobody:!!:17184::::::
admin:!!:17184:0:99999:7:::
first:!!:17184:0:99999:7:::
——————————–

[root@ansible ~]$ ansible web -m stat -a "path=/etc/hosts"
192.168.1.23 | SUCCESS => {
“changed”: false,
“stat”: {
“atime”: 1484635843.2532218,
“checksum”: “5fed7929fb7be9b1046252b6b7e0e2263fbc0738”,
“ctime”: 1484203757.175483,
“dev”: 2049,
“executable”: false,
“exists”: true,
“gid”: 0,
“gr_name”: “root”,
“inode”: 240,
“isblk”: false,
“ischr”: false,
“isdir”: false,
“isfifo”: false,
“isgid”: false,
“islnk”: false,
“isreg”: true,
“issock”: false,
“isuid”: false,
“md5”: “10f391742a450d220ff00269216eff8a”,
“mode”: “0644”,
“mtime”: 1484203757.175483,
“nlink”: 1,
“path”: “/etc/hosts”,
“pw_name”: “root”,
“readable”: true,
“rgrp”: true,
“roth”: true,
“rusr”: true,
“size”: 297,
“uid”: 0,
“wgrp”: false,
“woth”: false,
“writeable”: false,
“wusr”: true,
“xgrp”: false,
“xoth”: false,
“xusr”: false
}
}

========================================
[root@ansible ~]$ ansible multi -m copy -a "src=/etc/hosts dest=/tmp/hosts"
192.168.1.23 | SUCCESS => {
“changed”: true,
“checksum”: “08aa54eecc8a866b53d38351ea72e5bb97718005”,
“dest”: “/tmp/hosts”,
“gid”: 1001,
“group”: “root”,
“md5sum”: “72ff7a2085a5186d0cab74f14bae1483”,
“mode”: “0664”,
“owner”: “root”,
“secontext”: “unconfined_u:object_r:user_home_t:s0”,
“size”: 369,
“src”: “/home/root/.ansible/tmp/ansible-tmp-1484717608.47-178441141048946/source”,
“state”: “file”,
“uid”: 1000
}
192.168.1.22| SUCCESS => {
“changed”: true,
“checksum”: “08aa54eecc8a866b53d38351ea72e5bb97718005”,
“dest”: “/tmp/hosts”,
“gid”: 1001,
“group”: “root”,
“md5sum”: “72ff7a2085a5186d0cab74f14bae1483”,
“mode”: “0664”,
“owner”: “root”,
“secontext”: “unconfined_u:object_r:user_home_t:s0”,
“size”: 369,
“src”: “/home/root/.ansible/tmp/ansible-tmp-1484717608.85-272034831244848/source”,
“state”: “file”,
“uid”: 1000
}
==========================================
[root@ansible ~]$ ansible multi -s -m fetch -a "src=/etc/hosts dest=/tmp"
192.168.1.23 | SUCCESS => {
“changed”: true,
“checksum”: “5fed7929fb7be9b1046252b6b7e0e2263fbc0738”,
“dest”: “/tmp/192.168.1.23/etc/hosts”,
“md5sum”: “10f391742a450d220ff00269216eff8a”,
“remote_checksum”: “5fed7929fb7be9b1046252b6b7e0e2263fbc0738”,
“remote_md5sum”: null
}
192.168.1.22| SUCCESS => {
“changed”: true,
“checksum”: “0b24c9ee4a888defdf6769d5e72f65761e882f1f”,
“dest”: “/tmp/192.168.1.21/etc/hosts”,
“md5sum”: “e2dd8ef8a5f58f35d7a3f3dce7f2f2bf”,
“remote_checksum”: “0b24c9ee4a888defdf6769d5e72f65761e882f1f”,
“remote_md5sum”: null
}
============================================
[root@ansible ~]$ ls -l /tmp/
total 16
drwxrwxr-x. 3 root root 16 Jan 18 05:34 192.168.1.21
drwxrwxr-x. 3 root root 16 Jan 18 05:34 192.168.1.23

[root@ansible ~]$ cat /tmp/192.168.1.23/etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.128.0.3 ansible1.c.rich-operand-154505.internal ansible1 # Added by Google
169.254.169.254 metadata.google.internal # Added by Google
==========================================
[root@ansible ~]$ ansible multi -m file -a "dest=/tmp/test mode=644 state=directory"
192.168.1.23 | SUCCESS => {
“changed”: true,
“gid”: 1001,
“group”: “root”,
“mode”: “0644”,
“owner”: “root”,
“path”: “/tmp/test”,
“secontext”: “unconfined_u:object_r:user_tmp_t:s0”,
“size”: 6,
“state”: “directory”,
“uid”: 1000
}
192.168.1.22| SUCCESS => {
“changed”: true,
“gid”: 1001,
“group”: “root”,
“mode”: “0644”,
“owner”: “root”,
“path”: “/tmp/test”,
“secontext”: “unconfined_u:object_r:user_tmp_t:s0”,
“size”: 6,
“state”: “directory”,
“uid”: 1000
}
===========================================
[root@ansible ~]$ ansible multi -s -m file -a "dest=/tmp/test mode=644 owner=root state=directory"
192.168.1.22| SUCCESS => {
“changed”: true,
“gid”: 1001,
“group”: “root”,
“mode”: “0644”,
“owner”: “root”,
“path”: “/tmp/test”,
“secontext”: “unconfined_u:object_r:user_tmp_t:s0”,
“size”: 6,
“state”: “directory”,
“uid”: 0
}
192.168.1.23 | SUCCESS => {
“changed”: true,
“gid”: 1001,
“group”: “root”,
“mode”: “0644”,
“owner”: “root”,
“path”: “/tmp/test”,
“secontext”: “unconfined_u:object_r:user_tmp_t:s0”,
“size”: 6,
“state”: “directory”,
“uid”: 0
}
================================================
[root@ansible ~]$ ansible multi -s -B 3600 -a "yum -y update"
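
The -B 3600 flag runs yum in the background with a 3600-second time limit; adding -P 0 would return immediately and print a job id per host. As a sketch (the job id below is a placeholder), such a background job can then be polled with the async_status module:

[root@ansible ~]$ ansible multi -s -m async_status -a "jid=<job-id>"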
=================================================
[root@ansible ~]$ ansible 192.168.1.23 -s -a "tail /var/log/messages"
192.168.1.23 | SUCCESS | rc=0 >>
Jan 18 05:41:03 ansible1 ansible-async_wrapper.py: Return async_wrapper task started.
Jan 18 05:41:03 ansible1 ansible-async_wrapper.py: Starting module and watcher
Jan 18 05:41:03 ansible1 ansible-async_wrapper.py: Start watching 18305 (3600)
Jan 18 05:41:03 ansible1 ansible-async_wrapper.py: Start module (18305)
Jan 18 05:41:03 ansible1 ansible-async_wrapper.py: Module complete (18305)
Jan 18 05:41:08 ansible1 ansible-async_wrapper.py: Done in kid B.
Jan 18 05:42:04 ansible1 systemd-logind: Removed session 180.
Jan 18 05:42:04 ansible1 systemd: Started Session 181 of user root.
Jan 18 05:42:04 ansible1 systemd-logind: New session 181 of user root.
Jan 18 05:42:04 ansible1 systemd: Starting Session 181 of user root.
=================================================
[root@ansible ~]$ ansible multi -s -m shell -a "tail /var/log/messages | grep ansible-command | wc -l"
192.168.1.23 | SUCCESS | rc=0 >>
0

192.168.1.22| SUCCESS | rc=0 >>
0
=================================
[root@ansible ~]$ ansible web -s -m git -a "repo=git://web.com/path/to/repo.git dest=/opt/myapp update=yes version=1.2.4"
192.168.1.23 | FAILED! => {
“changed”: false,
“failed”: true,
“msg”: “Failed to find required executable git”
}
[root@ansible ~]$ ansible web -s -m yum -a "name=git state=present"
192.168.1.23 | SUCCESS => {
“changed”: true,
“msg”: “”,
“rc”: 0,
“results”: [
“Loaded plugins: fastestmirror\nLoading mirror speeds from cached hostfile\n * base: bay.uchicago.edu\n * epel: mirror.steadfast.net\n * extras: mirror.tzulo.com\n * updates: mirror.team-cymru.org\nResolving Dependencies\n–> Running transaction check\n—> Package git.x86_64 0:1.8.3.1-6.el7_2.1 will be installed\n–> Processing Dependency: perl-Git = 1.8.3.1-6.el7_2.1 for package: git-1.8.3.1-6.el7_2.1.x86_64\n–> Processing Dependency: perl(Term::ReadKey) for package: git-1.8.3.1-6.el7_2.1.x86_64\n–> Processing Dependency: perl(Git) for package: git-1.8.3.1-6.el7_2.1.x86_64\n–> Processing Dependency: perl(Error) for package: git-1.8.3.1-6.el7_2.1.x86_64\n–> Processing Dependency: libgnome-keyring.so.0()(64bit) for package: git-1.8.3.1-6.el7_2.1.x86_64\n–> Running transaction check\n—> Package libgnome-keyring.x86_64 0:3.8.0-3.el7 will be installed\n—> Package perl-Error.noarch 1:0.17020-2.el7 will be installed\n—> Package perl-Git.noarch 0:1.8.3.1-6.el7_2.1 will be installed\n—> Package perl-TermReadKey.x86_64 0:2.30-20.el7 will be installed\n–> Finished Dependency Resolution\n\nDependencies Resolved\n\n================================================================================\n Package Arch Version Repository Size\n================================================================================\nInstalling:\n git x86_64 1.8.3.1-6.el7_2.1 base 4.4 M\nInstalling for dependencies:\n libgnome-keyring x86_64 3.8.0-3.el7 base 109 k\n perl-Error noarch 1:0.17020-2.el7 base 32 k\n perl-Git noarch 1.8.3.1-6.el7_2.1 base 53 k\n perl-TermReadKey x86_64 2.30-20.el7 base 31 k\n\nTransaction Summary\n================================================================================\nInstall 1 Package (+4 Dependent packages)\n\nTotal download size: 4.6 M\nInstalled size: 23 M\nDownloading packages:\n——————————————————————————–\nTotal 12 MB/s | 4.6 MB 00:00 \nRunning transaction check\nRunning transaction test\nTransaction test succeeded\nRunning transaction\n Installing : 1:perl-Error-0.17020-2.el7.noarch 1/5 \n Installing : libgnome-keyring-3.8.0-3.el7.x86_64 2/5 \n Installing : perl-TermReadKey-2.30-20.el7.x86_64 3/5 \n Installing : git-1.8.3.1-6.el7_2.1.x86_64 4/5 \n Installing : perl-Git-1.8.3.1-6.el7_2.1.noarch 5/5 \n Verifying : perl-Git-1.8.3.1-6.el7_2.1.noarch 1/5 \n Verifying : perl-TermReadKey-2.30-20.el7.x86_64 2/5 \n Verifying : libgnome-keyring-3.8.0-3.el7.x86_64 3/5 \n Verifying : 1:perl-Error-0.17020-2.el7.noarch 4/5 \n Verifying : git-1.8.3.1-6.el7_2.1.x86_64 5/5 \n\nInstalled:\n git.x86_64 0:1.8.3.1-6.el7_2.1 \n\nDependency Installed:\n libgnome-keyring.x86_64 0:3.8.0-3.el7 perl-Error.noarch 1:0.17020-2.el7 \n perl-Git.noarch 0:1.8.3.1-6.el7_2.1 perl-TermReadKey.x86_64 0:2.30-20.el7 \n\nComplete!\n”
]
}
