Create additional database in Oracle Express edition
In my Windows machine I have installed Oracle 11g Express Edition. I want to create a new database for testing purposes, but Express Edition does not support the DBCA utility, so let us discuss how to create an additional database in Oracle Express Edition, i.e., how to create a database manually in an Oracle 11g Windows environment.

S-1:

Create the required admin directories

 

C:\Windows\system32>mkdir C:\oraclexe\app\oracle\admin\TEST

C:\Windows\system32>cd C:\oraclexe\app\oracle\admin\TEST



C:\oraclexe\app\oracle\admin\TEST>mkdir adump

C:\oraclexe\app\oracle\admin\TEST>mkdir bdump

C:\oraclexe\app\oracle\admin\TEST>mkdir dpdump

C:\oraclexe\app\oracle\admin\TEST>mkdir pfile

 

S-2:

Create new instance


  
C:\Windows\System32>oradim -new -sid test

Instance created.

 

S-3:

Create the pfile and password file as shown below

 

C:\Windows\System32>orapwd file=C:\oraclexe\app\oracle\product\11.2.0\server\database\PWDTEST.ora password=oracle

 

Note: I copied the pfile (initXE.ora) from the XE database into the pfile location of the new database (my manual database), renamed it from "initXE.ora" to "initTEST.ora", and opened that file.
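A minimal sketch of that copy step, assuming the default XE pfile location under the server\database directory:

C:\Windows\System32>copy C:\oraclexe\app\oracle\product\11.2.0\server\database\initXE.ora C:\oraclexe\app\oracle\admin\TEST\pfile\initTEST.ora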

 

S-4:

Open the pfile “InitTEST.ora”



xe.__db_cache_size=411041792
xe.__java_pool_size=4194304
xe.__large_pool_size=4194304
xe.__oracle_base='C:\oraclexe\app\oracle'#ORACLE_BASE set from environment
xe.__pga_aggregate_target=432013312
xe.__sga_target=641728512
xe.__shared_io_pool_size=0
xe.__shared_pool_size=205520896
xe.__streams_pool_size=8388608
*.audit_file_dest='C:\oraclexe\app\oracle\admin\XE\adump'
*.compatible='11.2.0.0.0'
*.control_files='C:\oraclexe\app\oracle\oradata\XE\control.dbf'
*.db_name='XE'
*.DB_RECOVERY_FILE_DEST_SIZE=10G
*.DB_RECOVERY_FILE_DEST='C:\oraclexe\app\oracle\fast_recovery_area'
*.diagnostic_dest='C:\oraclexe\app\oracle\.'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=XEXDB)'
*.job_queue_processes=4
*.memory_target=1024M
*.open_cursors=300
*.remote_login_passwordfile='EXCLUSIVE'
*.sessions=20
*.shared_servers=4
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'

 

S-5:

Change The parameter like below



*.audit_file_dest='C:\oraclexe\app\oracle\admin\TEST\adump'
*.compatible='11.2.0.0.0'
*.control_files='C:\oraclexe\app\oracle\oradata\TEST\control.dbf'
*.db_name='TEST'
*.DB_RECOVERY_FILE_DEST_SIZE=10G
*.DB_RECOVERY_FILE_DEST='C:\oraclexe\app\oracle\fast_recovery_area'
*.diagnostic_dest='C:\oraclexe\app\oracle\.'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=TESTXDB)'
*.job_queue_processes=4
*.memory_target=1024M
*.open_cursors=300
*.remote_login_passwordfile='EXCLUSIVE'
*.sessions=20
*.shared_servers=4
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'

 

S-6:

After modifying the pfile, start the new instance as shown below



  C:\Windows\System32>set ORACLE_SID=TEST

C:\Windows\System32>sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Wed Sep 20 12:41:38 2017

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Enter user-name: / as sysdba
Connected to an idle instance.

 

S-7:

Start the database in nomount stage using pfile



SQL> startup nomount pfile='C:\oraclexe\app\oracle\admin\TEST\pfile\initTEST.ora'
ORACLE instance started.

Total System Global Area  644468736 bytes
Fixed Size                  1385488 bytes
Variable Size             192941040 bytes
Database Buffers          444596224 bytes
Redo Buffers                5545984 bytes

 

S-8:

Create the database creation script and save it as C:\oraclexe\app\oracle\createdatabase.sql (it is executed in the next step)


create database TEST
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
MAXINSTANCES 8
MAXLOGHISTORY 292
LOGFILE
  GROUP 1 'C:\oraclexe\app\oracle\oradata\TEST\REDO01.LOG'  SIZE 50M BLOCKSIZE 512,
  GROUP 2 'C:\oraclexe\app\oracle\oradata\TEST\REDO02.LOG'  SIZE 50M BLOCKSIZE 512
DATAFILE'C:\oraclexe\app\oracle\oradata\TEST\SYSTEM.DBF' size 100m autoextend on
sysaux datafile 'C:\oraclexe\app\oracle\oradata\TEST\SYSAUX.DBF' size 100m autoextend on
undo tablespace undotbs1 datafile  'C:\oraclexe\app\oracle\oradata\TEST\UNDOTBS1.DBF' size 100m autoextend on
CHARACTER SET AL32UTF8
;

 

S-9:

Execute the @createdatabase.sql file


SQL> @C:\oraclexe\app\oracle\CREATEDATABASE.SQL

Database created.

 

S-10:

Verify the database name and instance status



SQL> select status from v$instance;

STATUS
------------
OPEN

SQL> select * from V$version;

BANNER
--------------------------------------------------------------------------------

Oracle Database 11g Express Edition Release 11.2.0.2.0 - Production
PL/SQL Release 11.2.0.2.0 - Production
CORE    11.2.0.2.0      Production
TNS for 32-bit Windows: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production

SQL> select name from V$database;

NAME
---------
TEST

 

S-11:

Execute the two scripts below (catalog.sql builds the data dictionary views and catproc.sql builds the PL/SQL packages)


  SQL> @C:\oraclexe\app\oracle\product\11.2.0\server\rdbms\admin\catalog.sql

  SQL> @C:\oraclexe\app\oracle\product\11.2.0\server\rdbms\admin\catproc.sql

How to list all database names in Oracle

1) To view database

select * from v$database;

2) To view instance

select * from v$instance;

3) To view all users

select * from all_users;


4) To view table and columns for a particular user

select tc.table_name Table_name
,tc.column_id Column_id
,lower(tc.column_name) Column_name
,lower(tc.data_type) Data_type
,nvl(tc.data_precision,tc.data_length) Length
,lower(tc.data_scale) Data_scale
,tc.nullable nullable
FROM all_tab_columns tc
,all_tables t
WHERE tc.table_name = t.table_name
AND tc.owner = t.owner;



select owner from dba_tables
union
select owner from dba_views;


select username from dba_users;


SQL> SELECT TABLESPACE_NAME FROM USER_TABLESPACES;

Resulting in:

SYSTEM
SYSAUX
UNDOTBS1
TEMP
USERS
EXAMPLE
DEV_DB

It is also possible to query the users in all tablespaces:

SQL> select USERNAME, DEFAULT_TABLESPACE from DBA_USERS;

Or within a specific tablespace (using my DEV_DB tablespace as an example):

SQL> select USERNAME, DEFAULT_TABLESPACE from DBA_USERS where DEFAULT_TABLESPACE = 'DEV_DB';

USERNAME   DEFAULT_TABLESPACE
---------- ------------------
ROLES      DEV_DB
DATAWARE   DEV_DB
DATAMART   DEV_DB
STAGING    DEV_DB

EXP-00002: error in writing to export file

EXP-00002: error in writing to export file

While exporting a table or schema using the exp/imp utility you may come across the error below.
Most of the time this error occurs due to insufficient space available on disk, so confirm that enough space is available where you are taking the export dump and re-run the export.
[oracle@DEV admin]$ exp test/test@DEV tables=t1,t2,t3,t4 file=exp_tables.dmp log=exp_tables.log
Export: Release 9.2.0.8.0 – Production on Thu Sep 10 12:25:52 2015
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Connected to: Oracle9i Enterprise Edition Release 9.2.0.8.0 – Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.8.0 – Production
Export done in US7ASCII character set and AL16UTF16 NCHAR character set
About to export specified tables via Conventional Path …
. . exporting table t1 1270880 rows exported
. . exporting table t2 2248883 rows exported
. . exporting table t3 2864492 rows exported
. . exporting table t4
EXP-00002: error in writing to export file
EXP-00002: error in writing to export file
EXP-00000: Export terminated unsuccessfully
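A quick way to confirm free space on the dump destination before re-running the export (a minimal sketch; the directory and dump file name are the ones used above):

[oracle@DEV admin]$ df -h .                 # free space on the filesystem holding the dump file
[oracle@DEV admin]$ du -sh exp_tables.dmp   # size of the partial dump written so far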

CentOS 7 Installs Docker Application Container Engine

Docker is an open source application container engine based on the Go language and open sourced under the Apache 2.0 license.

Docker allows developers to package their applications and dependencies into a lightweight, portable container that can then be published to any popular Linux machine or virtualized.

Containers are completely sandboxed and do not have any interfaces with each other (similar to the iPhone’s app). More importantly, the container’s performance overhead is extremely low.

Docker application scenarios

  • Web application automation packaged and released.
  • Automated testing and continuous integration, release.
  • Deploy and tune databases or other back-end applications in a service-oriented environment.
  • Build or extend your existing OpenShift or Cloud Foundry platform from scratch to build your own PaaS environment.

Docker advantages

  • 1. Simplified deployment:

Docker lets developers package their applications and dependencies into a portable container and then publish it to any popular Linux machine for virtualization. Docker has changed the way virtualization is done, so developers can put their work directly into Docker for management. Convenience is Docker's biggest advantage: tasks that used to take days or even weeks can be finished in seconds inside a Docker container.

  • 2. Avoid choice paralysis:

A Docker image contains the runtime environment and its configuration, so Docker simplifies deploying all kinds of application instances. Web applications, background applications, database applications, big data applications such as Hadoop clusters, message queues, and so on can all be packaged into a single image and deployed.

  • 3. Lower costs:

On the one hand, with the arrival of the cloud computing era, developers no longer have to buy expensive hardware to get good results; Docker changes the mindset that high performance inevitably means a high price. Combining Docker with the cloud makes cloud capacity more fully utilized; it not only solves the problem of hardware management but also changes the way virtualization is done.

1.Docker’s installation:

Docker supports the following CentOS versions:

  • CentOS 7 (64-bit)
  • CentOS 6.5 (64-bit) or later

1.2 Prerequisites

Currently, Docker is supported only on the stock kernels shipped with the CentOS distribution.

Docker runs on CentOS 7 and requires a 64-bit system with a 3.10 or higher kernel version.

Docker runs on CentOS-6.5 or higher versions of CentOS. It requires a 64-bit system with a kernel version of 2.6.32-431 or higher.
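A quick way to verify these prerequisites (a minimal sketch):

[root@localhost ~]# uname -r     # kernel version, should be 3.10 or higher on CentOS 7
[root@localhost ~]# uname -m     # architecture, should be x86_64 (64-bit)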

 

My operating system version:

The official instructions for installing Docker CE on CentOS 7: https://docs.docker.com/install/linux/docker-ce/centos/

Docker packages and dependencies are included in the default CentOS-Extras source. You can update the yum source to install yum install, or you can choose to download the rpm package.

The official download address: https://download.docker.com/linux/centos/7/x86_64/stable/Packages/

 

1.3 Use wget command to download rpm package

Create a downloads folder and use the wget command to download

[root@sungeek downloads]# wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-18.03.1.ce-1.el7.centos.x86_64.rpm
[root@sungeek downloads]# yum install docker-ce-18.03.1.ce-1.el7.centos.x86_64.rpm

 

 

1.4 Use yum install to install directly

I use yum update to refresh the yum repository metadata and then install with yum install:

 

[root@rmohan downloads]# yum update
[root@rmohan downloads]# yum -y install docker-io
[root@localhost downloads]# docker --version     # view the version; the wget method above lets you pick the latest installation package
Docker version 1.13.1, build 94f4240/1.13.1

1.5 Start the Docker Process
[root@localhost downloads]# systemctl start docker
[root@localhost downloads]# systemctl status docker
[root@localhost downloads]# systemctl enable docker     # start the docker service automatically at boot

1.6 Verifying Successful Installation

Run docker without arguments to list the command's usage.

 

Test Run First Container: hello-world

Since there is no hello-world image locally, a hello-world image will be downloaded and run inside the container.
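A minimal check, assuming the host can reach Docker Hub:

[root@localhost downloads]# docker run hello-world     # pulls the hello-world image on first use and runs it in a new container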

 

 

2. The most commonly used commands

2.1 Use docker images to view images

[root@localhost downloads]# docker images
REPOSITORY              TAG      IMAGE ID       CREATED        SIZE
docker.io/hello-world   latest   e38bc07ac18e   2 months ago   1.85 kB

2.2 Use docker ps to view containers

docker ps is also one of the most commonly used commands; -a displays all containers, including those that are not running.
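For example:

[root@localhost downloads]# docker ps -a     # lists all containers, including stopped ones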

2.3 Use docker logs to view the container console output

Get the container’s log

docker logs [container]
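For example (the container name is illustrative):

[root@localhost downloads]# docker logs -f mycontainer     # -f follows the log output, similar to tail -f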

 

 

 

 

Bond Technology Load Balancing in Linux

Problem introduction

When an enterprise server provides NFS, Samba, or vsftpd services, the system must provide network transmission 7x24 hours. The maximum transmission speed a single link can provide is 100 MB/s, but when a large number of users are accessing the server, the access pressure is very high and the network transmission rate becomes particularly slow.

Solution

Therefore, we can use bond technology to achieve load balancing of multiple network cards to ensure automatic backup and load balancing of the network. In this way, the reliability of the network and the high-speed transmission of files in the actual operation and maintenance work are guaranteed.

There are seven (0~6) NIC bonding modes: mode0 through mode6.
The commonly used bonding driver modes are the following three:

  • Mode0 Balanced load mode: Usually two network cards work and are automatically backed up, but port aggregation is required on the switch devices connected to the server’s local network card to support bonding technology.
  • Mode1 automatic backup technology: usually only one network card works, after it fails, it is automatically replaced with another network card;
  • Mode6 Balanced load mode: Normally, both network cards work, and they are automatically backed up. There is no need for the switch device to provide auxiliary support.

This article mainly describes the mode6 bonding driver mode, because this mode lets both network cards work at the same time, fails over automatically when one of the network cards fails, and does not need switch support, thus providing reliable network transmission.

The following is the bond binding operation of the network card in RHEL 7 in the VMware virtual machine

  1. Add another network card device to the virtual machine and set both network cards to the same network connection mode, as shown in the following figure. Only network cards in the same mode can be bonded; otherwise the two network cards cannot send data to each other.
  2. Configure the bonding parameters of the network card devices. Note that the physical network cards need to be configured as "slave" network cards serving the "master" (bond) device and should not have their own IP addresses. After the initialization below, they can support network card bonding.
    cd /etc/sysconfig/network-scripts/

    vim ifcfg-eno16777728 # Edit NIC 1 configuration file

    TYPE=Ethernet
    BOOTPROTO=none
    DEVICE=eno16777728
    ONBOOT=yes
    HWADDR=00:0C:29:E2:25:2D
    USERCTL=no
    MASTER=bond0
    SLAVE=yes

    vim ifcfg-eno33554968 # Edit NIC 2 configuration file

    TYPE=Ethernet
    BOOTPROTO=none
    DEVICE=eno33554968
    ONBOOT=yes
    HWADDR=00:0C:29:E2:25:2D
    MASTER=bond0
    SLAVE=yes

    3. Create a new network card device file ifcfg-bond0 and configure the IP address and other information in it. This way, when users access the corresponding service, the two network card devices provide the service together.

    vim ifcfg-bond0 # Create a new ifcfg-bond0 configuration file in the current directory.

    TYPE=Ethernet
    BOOTPROTO=none
    ONBOOT=yes
    USERCTL=no
    DEVICE=bond0
    IPADDR=192.168.100.5
    PREFIX=24
    DNS=192.168.100.1
    NM_CONTROLLED=no

    4. Modify the network card bonding driver mode; here we use mode6 (balanced load mode).

    vim /etc/modprobe.d/bond.conf # Configure the mode of the NIC driver

    alias bond0 bonding
    options bond0 miimon=100 mode=6

    5. Restart the network service so that the configuration takes effect.

    systemctl restart network

    6. Test

First, bonding technology

Bonding is a network card binding technology in Linux systems. It can abstract (bind) several physical NICs on a server into one logical network card, which improves network throughput and provides network redundancy, load balancing, and other benefits.

Bonding technology is implemented at the kernel level of the Linux system as a kernel module (driver). To use it, the system needs to have this module; we can use the modinfo command to view the module's information. Generally it is available.

# modinfo bonding
filename:       /lib/modules/2.6.32-642.1.1.el6.x86_64/kernel/drivers/net/bonding/bonding.ko
author:         Thomas Davis, tadavis@lbl.gov and many others
description:    Ethernet Channel Bonding Driver, v3.7.1
version:        3.7.1
license:        GPL
alias:          rtnl-link-bond
srcversion:     F6C1815876DCB3094C27C71
depends:        
vermagic:       2.6.32-642.1.1.el6.x86_64 SMP mod_unload modversions 
parm:           max_bonds:Max number of bonded devices (int)
parm:           tx_queues:Max number of transmit queues (default = 16) (int)
parm:           num_grat_arp:Number of peer notifications to send on failover event (alias of num_unsol_na) (int)
parm:           num_unsol_na:Number of peer notifications to send on failover event (alias of num_grat_arp) (int)
parm:           miimon:Link check interval in milliseconds (int)
parm:           updelay:Delay before considering link up, in milliseconds (int)
parm:           downdelay:Delay before considering link down, in milliseconds (int)
parm:           use_carrier:Use netif_carrier_ok (vs MII ioctls) in miimon; 0 for off, 1 for on (default) (int)
parm:           mode:Mode of operation; 0 for balance-rr, 1 for active-backup, 2 for balance-xor, 3 for broadcast, 4 for 802.3ad, 5 for balance-tlb, 6 for balance-alb (charp)
parm:           primary:Primary network device to use (charp)
parm:           primary_reselect:Reselect primary slave once it comes up; 0 for always (default), 1 for only if speed of primary is better, 2 for only on active slave failure (charp)
parm:           lacp_rate:LACPDU tx rate to request from 802.3ad partner; 0 for slow, 1 for fast (charp)
parm:           ad_select:803.ad aggregation selection logic; 0 for stable (default), 1 for bandwidth, 2 for count (charp)
parm:           min_links:Minimum number of available links before turning on carrier (int)
parm:           xmit_hash_policy:balance-xor and 802.3ad hashing method; 0 for layer 2 (default), 1 for layer 3+4, 2 for layer 2+3 (charp)
parm:           arp_interval:arp interval in milliseconds (int)
parm:           arp_ip_target:arp targets in n.n.n.n form (array of charp)
parm:           arp_validate:validate src/dst of ARP probes; 0 for none (default), 1 for active, 2 for backup, 3 for all (charp)
parm:           arp_all_targets:fail on any/all arp targets timeout; 0 for any (default), 1 for all (charp)
parm:           fail_over_mac:For active-backup, do not set all slaves to the same MAC; 0 for none (default), 1 for active, 2 for follow (charp)
parm:           all_slaves_active:Keep all frames received on an interface by setting active flag for all slaves; 0 for never (default), 1 for always. (int)
parm:           resend_igmp:Number of IGMP membership reports to send on link failure (int)
parm:           packets_per_slave:Packets to send per slave in balance-rr mode; 0 for a random slave, 1 packet per slave (default), >1 packets per slave. (int)
parm:           lp_interval:The number of seconds between instances where the bonding driver sends learning packets to each slaves peer switch. The default is 1. (uint)

The seven working modes of bonding: 

Bonding technology provides seven operating modes that need to be specified when used. Each has its own advantages and disadvantages.

  1. balance-rr (mode=0): the default. Provides high availability (fault tolerance) and load balancing, requires switch configuration, and sends packets round-robin (balanced traffic distribution).
  2. active-backup (mode=1): provides only high availability (fault tolerance) and does not require switch configuration. In this mode only one network card works and only one MAC address is visible externally; the disadvantage is that port utilization is relatively low.
  3. balance-xor (mode=2): not commonly used.
  4. broadcast (mode=3): not commonly used.
  5. 802.3ad (mode=4): IEEE 802.3ad dynamic link aggregation; requires switch configuration; never used.
  6. balance-tlb (mode=5): not commonly used.
  7. balance-alb (mode=6): provides high availability (fault tolerance) and load balancing and does not require switch configuration (traffic distribution across the interfaces is not perfectly balanced).

There is plenty of information about the specifics on the Internet; understand the characteristics of each mode and choose according to your needs. Generally modes 0, 1, 4, and 6 are the ones used.

Second, CentOS 7 configuration bonding

Environment:

System: Centos7
Network card: em1, em2
Bond0: 172.16.0.183 
Load Mode: mode6( adaptive load balancing )

The two physical network cards em1 and em2 on the server are bound to a logical network card bond0, and the bonding mode selects mode6.

Note: The IP address is configured on bond0; the physical network cards do not need IP addresses configured.

1. Stop and disable the NetworkManager service

systemctl stop NetworkManager.service       # Stop the NetworkManager service
systemctl disable NetworkManager.service    # Prevent NetworkManager from starting at boot

PS: It must be stopped so that it does not interfere with the bonding configuration.

2. Load the bonding module

modprobe --first-time bonding

There is no output if the module loads successfully. If you see "modprobe: ERROR: could not insert 'bonding': Module already in kernel", the module was already loaded.

You can also use lsmod | grep bonding to see if the module is loaded

lsmod | grep bonding
bonding               136705  0

3. Create a configuration file for the bond0 interface

vim /etc/sysconfig/network-scripts/ifcfg-bond0

Modify it as follows, depending on your situation:

DEVICE=bond0
TYPE=Bond
IPADDR=172.16.0.183
NETMASK=255.255.255.0
GATEWAY=172.16.0.1
DNS1=114.114.114.114
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
BONDING_MASTER=yes
BONDING_OPTS="mode=6 miimon=100"

The BONDING_OPTS="mode=6 miimon=100" above indicates that the configured working mode is mode6 (adaptive load balancing), and miimon indicates how frequently the network link is monitored, in milliseconds. We set the interval to 100 milliseconds; adjust it to your needs, and mode can be set to a different load mode.

4. Modify the em1 interface configuration file

vim /etc/sysconfig/network-scripts/ifcfg-em1

Modify it as follows:

DEVICE=em1
USERCTL=no
ONBOOT=yes
MASTER=bond0                   # must match the DEVICE value in the ifcfg-bond0 configuration file above
SLAVE=yes
BOOTPROTO=none

5. Modify the em2 interface configuration file

vim /etc/sysconfig/network-scripts/ifcfg-em2

Modify it as follows:

DEVICE=em2
USERCTL=no
ONBOOT=yes
MASTER=bond0                   # must match the DEVICE value in the ifcfg-bond0 configuration file above
SLAVE=yes
BOOTPROTO=none

6. Test

Restart network service

systemctl restart network

View the interface status information of bond0 (if an error is shown, it is most likely that the bond0 interface is not up)

# cat /proc/net/bonding/bond0

Bonding Mode: adaptive load balancing     // Bonding mode: currently alb mode (mode 6), i.e. high availability plus load balancing
Primary Slave: None
Currently Active Slave: em1
MII Status: up                            // Interface status: up (MII stands for Media Independent Interface)
MII Polling Interval (ms): 100            // Interface polling interval (here 100 ms)
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: em1                      // Slave interface: em1
MII Status: up                            // Interface status: up
Speed: 1000 Mbps                          // Port speed is 1000 Mbps
Duplex: full                              // Full duplex
Link Failure Count: 0                     // Number of link failures: 0
Permanent HW addr: 84:2b:2b:6a:76:d4      // Permanent MAC address
Slave queue ID: 0

Slave Interface: em2                      // Slave interface: em2
MII Status: up                            // Interface status: up
Speed: 1000 Mbps
Duplex: full                              // Full duplex
Link Failure Count: 0                     // Number of link failures: 0
Permanent HW addr: 84:2b:2b:6a:76:d5      // Permanent MAC address
Slave queue ID: 0
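The same mode and polling interval can also be read through sysfs once the bond is up (a quick check, assuming the bond0 name used above):

# cat /sys/class/net/bond0/bonding/mode      # prints the active mode, e.g. "balance-alb 6"
# cat /sys/class/net/bond0/bonding/miimon    # prints the configured miimon interval in milliseconds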

Check the interface information of the network through the ifconfig command

# ifconfig

bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST>  mtu 1500
        inet 172.16.0.183  netmask 255.255.255.0  broadcast 172.16.0.255
        inet6 fe80::862b:2bff:fe6a:76d4  prefixlen 64  scopeid 0x20<link>
        ether 84:2b:2b:6a:76:d4  txqueuelen 0  (Ethernet)
        RX packets 11183  bytes 1050708 (1.0 MiB)
        RX errors 0  dropped 5152  overruns 0  frame 0
        TX packets 5329  bytes 452979 (442.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

em1: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 84:2b:2b:6a:76:d4  txqueuelen 1000  (Ethernet)
        RX packets 3505  bytes 335210 (327.3 KiB)
        RX errors 0  dropped 1  overruns 0  frame 0
        TX packets 2852  bytes 259910 (253.8 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

em2: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 84:2b:2b:6a:76:d5  txqueuelen 1000  (Ethernet)
        RX packets 5356  bytes 495583 (483.9 KiB)
        RX errors 0  dropped 4390  overruns 0  frame 0
        TX packets 1546  bytes 110385 (107.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 17  bytes 2196 (2.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 17  bytes 2196 (2.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
We tested high availability by unplugging one of the network cables. The conclusions are:
  • In the current mode=6, one packet is lost when the cable is pulled. When the network is restored (the cable is plugged back in), about 5-6 packets are lost. This indicates that the high-availability function works, but packet loss is higher when the link recovers.
  • In mode=1 one packet was lost when the cable was pulled; when the network was restored (the cable was plugged back in), there was basically no packet loss, indicating that both failover and recovery behave well.
  • Mode6 is a very good load-balancing mode except for the packet loss when a fault recovers; if that can be tolerated, this mode can be used. Mode1 fails over and recovers quickly with essentially no packet loss or delay, but its port utilization is relatively low, because in this active-backup mode only one network card works at a time.
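A simple way to watch the packet loss during such a test; a minimal sketch run from another host on the same subnet against the bond IP configured above:

ping 172.16.0.183      # keep this running while unplugging and re-plugging one cable, then stop it
                       # with Ctrl+C and read the "packet loss" figure in the ping summary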

Third, CentOS 6 configuration bonding

CentOS 6 bonding configuration is basically the same as the CentOS 7 configuration above, but it differs slightly.

System: Centos6
Network card: em1, em2
Bond0: 172.16.0.183 
Load Mode: mode1 (active-backup)   # Here the load mode is 1, that is, active/standby mode.

1. Stop and disable the NetworkManager service

service  NetworkManager stop
chkconfig NetworkManager off

PS: If NetworkManager is installed, stop it; if the error message indicates it is not installed, you can skip this step.

2. Load the bonding module

modprobe --first-time bonding

3. Create the bond0 interface configuration file

vim /etc/sysconfig/network-scripts/ifcfg-bond0

Modify the following (according to your needs):

DEVICE=bond0
TYPE=Bond
BOOTPROTO=none
ONBOOT=yes
IPADDR=172.16.0.183
NETMASK=255.255.255.0
GATEWAY=172.16.0.1
DNS1=114.114.114.114
USERCTL=no
BONDING_OPTS="mode=1 miimon=100"

4. Load the bond0 interface into the kernel

vi /etc/modprobe.d/bonding.conf

Modify it as follows:

alias bond0 bonding

5. Edit the em1 and em2 interface files

vim /etc/sysconfig/network-scripts/ifcfg-em1

Modify it as follows:

DEVICE=em1
MASTER=bond0
SLAVE=yes
USERCTL=no
ONBOOT=yes
BOOTPROTO=none

vim /etc/sysconfig/network-scripts/ifcfg-em2

Modify it as follows:

DEVICE=em2
MASTER=bond0
SLAVE=yes
USERCTL=no
ONBOOT=yes
BOOTPROTO=none

6. Load the module, restart the network and test

modprobe bonding
service network restart

Check the status of the bond0 interface

cat /proc/net/bonding/bond0
Bonding Mode: fault-tolerance (active-backup)   # The current load mode of the bond0 interface is active/backup mode
Primary Slave: None
Currently Active Slave: em2
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: em1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 2
Permanent HW addr: 84:2b:2b:6a:76:d4
Slave queue ID: 0

Slave Interface: em2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 84:2b:2b:6a:76:d5
Slave queue ID: 0

Use the ifconfig command to view the interface status. You will find that in mode=1 all the MAC addresses are identical, indicating that externally the bond presents a single MAC address.

ifconfig 
bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST>  mtu 1500
        inet6 fe80::862b:2bff:fe6a:76d4  prefixlen 64  scopeid 0x20<link>
        ether 84:2b:2b:6a:76:d4  txqueuelen 0  (Ethernet)
        RX packets 147436  bytes 14519215 (13.8 MiB)
        RX errors 0  dropped 70285  overruns 0  frame 0
        TX packets 10344  bytes 970333 (947.5 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

em1: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 84:2b:2b:6a:76:d4  txqueuelen 1000  (Ethernet)
        RX packets 63702  bytes 6302768 (6.0 MiB)
        RX errors 0  dropped 64285  overruns 0  frame 0
        TX packets 344  bytes 35116 (34.2 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

em2: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 84:2b:2b:6a:76:d4  txqueuelen 1000  (Ethernet)
        RX packets 65658  bytes 6508173 (6.2 MiB)
        RX errors 0  dropped 6001  overruns 0  frame 0
        TX packets 1708  bytes 187627 (183.2 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 31  bytes 3126 (3.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 31  bytes 3126 (3.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Perform a high availability test: unplug one of the cables and observe the packet loss and delay, then plug the network cable back in (simulating recovery) and observe the packet loss and delay again.

centos7 openldap

systemctl stop firewalld.service
setenforce 0

sed -i s/^SELINUX=enforcing/SELINUX=disabled/g /etc/selinux/config

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.70 master.apple.com master
192.168.1.71 slave.apple.com slave
192.168.1.73 client1.apple.com client1
192.168.1.74 client2.apple.com client2

yum -y install openldap compat-openldap openldap-clients openldap-servers openldap-servers-sql openldap-devel migrationtools vim

cd /etc/openldap/slapd.d

rm -rf *     # remove the existing configuration under /etc/openldap/slapd.d

cp /usr/share/openldap-servers/slapd.ldif /root/ldap/

Set OpenLDAP admin password.
# generate encrypted password

slappasswd -s redhat -n > /etc/openldap/passwd

slappasswd -h {SSHA} -s ldppassword

[root@ldap ~]# slappasswd

New password:

Re-enter new password:

[root@master ldap]# cat slapd.ldif
#
# See slapd-config(5) for details on configuration options.
# This file should NOT be world readable.
#

dn: cn=config
objectClass: olcGlobal
cn: config
olcArgsFile: /var/run/openldap/slapd.args
olcPidFile: /var/run/openldap/slapd.pid
#
# TLS settings
#
olcTLSCACertificatePath: /etc/openldap/certs
olcTLSCertificateFile: "OpenLDAP Server"
olcTLSCertificateKeyFile: /etc/openldap/certs/password
#
# Do not enable referrals until AFTER you have a working directory
# service AND an understanding of referrals.
#
#olcReferral: ldap://root.openldap.org
#
# Sample security restrictions
# Require integrity protection (prevent hijacking)
# Require 112-bit (3DES or better) encryption for updates
# Require 64-bit encryption for simple bind
#
#olcSecurity: ssf=1 update_ssf=112 simple_bind=64

#
# Load dynamic backend modules:
# - modulepath is architecture dependent value (32/64-bit system)
# - back_sql.la backend requires openldap-servers-sql package
# - dyngroup.la and dynlist.la cannot be used at the same time
#

#dn: cn=module,cn=config
#objectClass: olcModuleList
#cn: module
#olcModulepath: /usr/lib/openldap
#olcModulepath: /usr/lib64/openldap
#olcModuleload: accesslog.la
#olcModuleload: auditlog.la
#olcModuleload: back_dnssrv.la
#olcModuleload: back_ldap.la
#olcModuleload: back_mdb.la
#olcModuleload: back_meta.la
#olcModuleload: back_null.la
#olcModuleload: back_passwd.la
#olcModuleload: back_relay.la
#olcModuleload: back_shell.la
#olcModuleload: back_sock.la
#olcModuleload: collect.la
#olcModuleload: constraint.la
#olcModuleload: dds.la
#olcModuleload: deref.la
#olcModuleload: dyngroup.la
#olcModuleload: dynlist.la
#olcModuleload: memberof.la
#olcModuleload: pcache.la
#olcModuleload: ppolicy.la
#olcModuleload: refint.la
#olcModuleload: retcode.la
#olcModuleload: rwm.la
#olcModuleload: seqmod.la
#olcModuleload: smbk5pwd.la
#olcModuleload: sssvlv.la
#olcModuleload: syncprov.la
#olcModuleload: translucent.la
#olcModuleload: unique.la
#olcModuleload: valsort.la

#
# Schema settings
#

dn: cn=schema,cn=config
objectClass: olcSchemaConfig
cn: schema

include: file:///etc/openldap/schema/corba.ldif
include: file:///etc/openldap/schema/core.ldif
include: file:///etc/openldap/schema/cosine.ldif
include: file:///etc/openldap/schema/duaconf.ldif
include: file:///etc/openldap/schema/dyngroup.ldif
include: file:///etc/openldap/schema/inetorgperson.ldif
include: file:///etc/openldap/schema/java.ldif
include: file:///etc/openldap/schema/misc.ldif
include: file:///etc/openldap/schema/nis.ldif
include: file:///etc/openldap/schema/openldap.ldif
include: file:///etc/openldap/schema/ppolicy.ldif
include: file:///etc/openldap/schema/collective.ldif

#
# Frontend settings
#

dn: olcDatabase=frontend,cn=config
objectClass: olcDatabaseConfig
objectClass: olcFrontendConfig
olcDatabase: frontend
#
# Sample global access control policy:
# Root DSE: allow anyone to read it
# Subschema (sub)entry DSE: allow anyone to read it
# Other DSEs:
# Allow self write access
# Allow authenticated users read access
# Allow anonymous users to authenticate
#
#olcAccess: to dn.base="" by * read
#olcAccess: to dn.base="cn=Subschema" by * read
#olcAccess: to *
# by self write
# by users read
# by anonymous auth
#
# if no access controls are present, the default policy
# allows anyone and everyone to read anything but restricts
# updates to rootdn. (e.g., "access to * by * read")
#
# rootdn can always read and write EVERYTHING!
#

#
# Configuration database
#

dn: olcDatabase=config,cn=config
objectClass: olcDatabaseConfig
olcDatabase: config
olcAccess: to * by dn.base="gidNumber=0+uidNumber=0,cn=peercred,cn=external,c
 n=auth" manage by * none

#
# Server status monitoring
#

dn: olcDatabase=monitor,cn=config
objectClass: olcDatabaseConfig
olcDatabase: monitor
olcAccess: to * by dn.base="gidNumber=0+uidNumber=0,cn=peercred,cn=external,c
 n=auth" read by dn.base="cn=Manager,dc=apple,dc=com" read by * none

#
# Backend database definitions
#

dn: olcDatabase=hdb,cn=config
objectClass: olcDatabaseConfig
objectClass: olcHdbConfig
olcDatabase: hdb
olcSuffix: dc=apple,dc=com
olcRootDN: cn=Manager,dc=apple,dc=com
olcRootPW: {SSHA}cc+n64r5WNtLivZppJmYvWWMo3DIhcAy
olcDbDirectory: /var/lib/ldap
olcDbIndex: objectClass eq,pres
olcDbIndex: ou,cn,mail,surname,givenname eq,pres,sub

rm -rf /etc/openldap/slapd.d/*

[root@master ldap]# slapadd -F /etc/openldap/slapd.d/ -n 0 -l /root/ldap/slapd.ldif
_#################### 100.00% eta none elapsed none fast!
Closing DB…
[root@master ldap]#

[root@server home]# slaptest -u -F /etc/openldap/slapd.d/
config file testing succeeded

chown -Rv ldap.ldap /etc/openldap/slapd.d

[root@server slapd.d]# chown -Rv ldap.ldap /etc/openldap/slapd.d/

[root@master ldap]# cd /etc/openldap/slapd.d/
[root@master slapd.d]# ls -la
total 4
drwxr-x---  3 ldap ldap  45 Jun 15 19:18 .
drwxr-xr-x. 5 root root  92 Jun 15 19:07 ..
drwxr-x---  3 ldap ldap 182 Jun 15 19:18 cn=config
-rw-------  1 ldap ldap 589 Jun 15 19:18 cn=config.ldif
[root@master slapd.d]#

[root@master ldap]# systemctl start slapd.service
[root@master ldap]# systemctl status slapd.service
[root@master ldap]# systemctl enable slapd.service

[root@master ldap]# cat create_user.sh
#!/bin/bash
USER_LIST=ldapuser.txt
HOME_ldap=/home/ldapuser
mkdir -pv $HOME_ldap
for USERID in `awk '{print $1}' $USER_LIST`; do
USERNAME="`grep "$USERID" $USER_LIST | awk '{print $2}'`"
HOMEDIR=${HOME_ldap}/${USERNAME}
useradd $USERNAME -u $USERID -d $HOMEDIR
grep "$USERID" $USER_LIST | awk '{print $3}' | passwd --stdin $USERNAME
done

[root@master ldap]# cat ldapuser.txt
5000 lduser1 123456
5001 lduser2 123456
5002 lduser3 123456
5003 lduser4 123456
5004 lduser5 123456
5005 lduser6 123456
[root@master ldap]#
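Running the script then creates the six local accounts listed in ldapuser.txt (a minimal sketch):

[root@master ldap]# bash create_user.sh     # creates lduser1..lduser6 with home directories under /home/ldapuser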

vim /usr/share/migrationtools/migrate_common.ph

# Default DNS domain
$DEFAULT_MAIL_DOMAIN = "apple.com";

# Default base
$DEFAULT_BASE = "dc=apple,dc=com";

/usr/share/migrationtools/migrate_base.pl > /root/ldap/base.ldif

/usr/share/migrationtools/migrate_passwd.pl /etc/passwd > /root/ldap/user.ldif
cat /root/ldap/user.ldif
/usr/share/migrationtools/migrate_group.pl /etc/group > /root/ldap/group.ldif

ldapadd -D "cn=Manager,dc=apple,dc=com" -W -x -f base.ldif
ldapadd -D "cn=Manager,dc=apple,dc=com" -W -x -f user.ldif
ldapadd -D "cn=Manager,dc=apple,dc=com" -W -x -f group.ldif

yum -y install nfs-utils

[root@server ~]# cat /etc/exports
/home/ldapuser 192.168.1.0/24(rw,sync)

[root@server ~]# systemctl start nfs-server.service

[root@client home]# exportfs -rv
exporting *:/home/ldapuser

systemctl enable nfs-server.service

vi /etc/rsyslog.conf

local4.* /var/log/ldap.log
touch /var/log/ldap.log

Restart rsyslog so that it picks up the new rule:

systemctl restart rsyslog.service

By default slapd also logs through syslog to /var/log/messages, so the service state and log can be checked with:

systemctl status slapd.service -l

tail -f /var/log/messages

SSL SETUP IN LDAP

1. Generate the private key and certificate signing request (CSR)

openssl req -nodes -sha256 -newkey rsa:2048 -keyout PrivateKey.key -out CertificateRequest.csr

2. Optional: Check that the CSR really uses a SHA-256 signature

openssl req -in CertificateRequest.csr -text -noout

You should see “Signature Algorithm: sha256WithRSAEncryption”

3. Create the certificate

We use the CSR and sign it with the private key and create a public certificate

openssl req -nodes -sha256 -newkey rsa:2048 -keyout PrivateKey.key -out CertificateRequest.csr
openssl req -in CertificateRequest.csr -text -noout
openssl x509 -req -days 365 -sha256 -in CertificateRequest.csr -signkey PrivateKey.key -out my256.crt

cp my256.crt /etc/openldap/certs/server.crt

cp PrivateKey.key /etc/openldap/certs/server.key

cp /etc/pki/tls/certs/ca-bundle.crt /etc/openldap/certs/
cat my256.crt PrivateKey.key >> master.pem
cat my256.crt PrivateKey.key >> slave.pem

vi mod_ssl.ldif

# create new
dn: cn=config
changetype: modify
add: olcTLSCACertificateFile
olcTLSCACertificateFile: /etc/openldap/certs/ca-bundle.crt
-
replace: olcTLSCertificateFile
olcTLSCertificateFile: /etc/openldap/certs/server.crt
-
replace: olcTLSCertificateKeyFile
olcTLSCertificateKeyFile: /etc/openldap/certs/server.key

[root@master ldap]# ldapmodify -Y EXTERNAL -H ldapi:/// -f mod_ssl.ldif
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
modifying entry “cn=config”

[root@master ldap]# vi /etc/sysconfig/slapd
add
SLAPD_URLS="ldapi:/// ldap:/// ldaps:///"

systemctl restart slapd

client end

[root@master ldap]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.70 master.apple.com master
192.168.1.71 slave.apple.com slave
192.168.1.73 client1.apple.com client1
192.168.1.74 client2.apple.com client2

yum -y install sssd-ldap nss-pam-ldapd nfs-utils

authconfig-tui

authconfig --enableldap --enableldapauth --ldapserver=ldaps://master.apple.com --ldapbasedn="dc=apple,dc=com" --enablemkhomedir --disableldaptls --update
authconfig --enableldap --enableldapauth --ldapserver=ldaps://slave.apple.com --ldapbasedn="dc=apple,dc=com" --enablemkhomedir --disableldaptls --update

Create the c_hash of the CA certificate.

/etc/pki/tls/misc/c_hash /etc/openldap/cacerts/server.pem

Output:

997ee4fb.0 => /etc/openldap/cacerts/server.pem

Now, symlink server.pem to the 8-digit hex name shown.

ln -s /etc/openldap/cacerts/server.pem 997ee4fb.0

[root@client1 ~]# echo "TLS_REQCERT allow" >> /etc/openldap/ldap.conf

[root@client1 ~]# echo "tls_reqcert allow" >> /etc/nslcd.conf

Restart the LDAP client service.

systemctl restart nslcd

[root@client1 ~]# getent passwd lduser1
lduser1:x:5000:5000:lduser1:/home/ldapuser/lduser1:/bin/bash

[root@client1 /]# mkdir -p /home/ldapuser
[root@client1 /]# mount -t nfs 192.168.1.70:/home/ldapuser/ /home/ldapuser/
[root@client1/]# cd /home/ldapuser/
[root@client ldapuser]# ls
lduser1 lduser2 lduser3 lduser4 lduser5 lduser6
[root@client ldapuser]# su - lduser1
Last login: Sat May 20 23:11:00 EDT 2017 on pts/0

Configure LDAP Client for TLS connection.
[root@client1 ~]# echo "TLS_REQCERT allow" >> /etc/openldap/ldap.conf

[root@client1 ~]# echo "tls_reqcert allow" >> /etc/nslcd.conf

[root@client1 ~]# authconfig --enableldaptls --update

getsebool: SELinux is disabled

scp server.pem root@client1:/tmp/

Enable debug logging on CentOS 7 LDAP Server

vi /root/ldap/logging.ldif
——
cat logging.ldif
dn: cn=config
replace: olcLogLevel
olcLogLevel: -1
——

# apply
ldapmodify -Y EXTERNAL -H ldapi:/// -f /root/ldap/logging.ldif

# verify
ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config -s base|grep -i LOG

systemctl restart slapd

vi /etc/rsyslog.conf
——
local4.* -/var/log/slapd.log
——

systemctl restart rsyslog

vi /etc/logrotate.d/syslog
—–
# add this line
/var/log/slapd.log

Master slave replication
[root@master ldap]# cat rpuser.ldif
dn: uid=rpuser,dc=apple,dc=com
objectClass: simpleSecurityObject
objectclass: account
uid: rpuser
description: Replication User
userPassword: root1234

[root@master ldap]# ldapadd -x -W -D cn=Manager,dc=apple,dc=com -f rpuser.ldif
Enter LDAP Password:
adding new entry “uid=rpuser,dc=apple,dc=com”

Configure LDAP Provider. Add syncprov module.
[root@master ~]# vi mod_syncprov.ldif
# create new

dn: cn=module,cn=config
objectClass: olcModuleList
cn: module
olcModulePath: /usr/lib64/openldap
olcModuleLoad: syncprov.la

[root@master ~]# ldapadd -Y EXTERNAL -H ldapi:/// -f mod_syncprov.ldif

SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
adding new entry “cn=module,cn=config”

[root@master ~]# vi syncprov.ldif
# create new

dn: olcOverlay=syncprov,olcDatabase={2}hdb,cn=config
objectClass: olcOverlayConfig
objectClass: olcSyncProvConfig
olcOverlay: syncprov
olcSpSessionLog: 100

[root@master ~]# ldapadd -Y EXTERNAL -H ldapi:/// -f syncprov.ldif

SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
adding new entry “olcOverlay=syncprov,olcDatabase={2}hdb,cn=config”

vi syncrepl.ldif

[root@slave ldap]# cat syncrepl.ldif
dn: olcDatabase={2}hdb,cn=config
changetype: modify
add: olcSyncRepl
olcSyncRepl: rid=001
 provider=ldap://192.168.1.70:389/
 bindmethod=simple
 binddn="uid=rpuser,dc=apple,dc=com"
 credentials=root1234
 searchbase="dc=apple,dc=com"
 scope=sub
 schemachecking=on
 type=refreshAndPersist
 retry="30 5 300 3"
 interval=00:00:05:00
[root@slave ldap]#

ldapadd -Y EXTERNAL -H ldapi:/// -f syncrepl.ldif

Test the LDAP replication:

Let's create a user in LDAP called "ldaprptest"; to do that, create a .ldif file on the master LDAP server.

[root@master ~]# vi ldaprptest.ldif

Update the above file with below content.

dn: uid=ldaprptest,ou=People,dc=apple,dc=com
objectClass: top
objectClass: account
objectClass: posixAccount
objectClass: shadowAccount
cn: ldaprptest
uid: ldaprptest
uidNumber: 9988
gidNumber: 100
homeDirectory: /home/ldaprptest
loginShell: /bin/bash
gecos: LDAP Replication Test User
userPassword: redhat123
shadowLastChange: 17058
shadowMin: 0
shadowMax: 99999
shadowWarning: 7
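The entry then has to be added on the master (a minimal sketch, reusing the Manager bind DN from earlier); the ldapsearch below can afterwards be run on the slave to confirm the entry replicated:

[root@master ~]# ldapadd -x -W -D cn=Manager,dc=apple,dc=com -f ldaprptest.ldif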

ldapsearch -x cn=ldaprptest -b dc=apple,dc=com

[root@master ldap]# slappasswd
New password:
Re-enter new password:
{SSHA}hbfwS2+203V3p+P6CB5n7nHVZpRB6ns+
[root@master ldap]# vi adduser.ldif
[root@master ldap]# cat adduser.ldif
dn: uid=mohan,ou=People,dc=apple,dc=com
objectClass: top
objectClass: account
objectClass: posixAccount
objectClass: shadowAccount
cn: mohan
uid: mohan
uidNumber: 9999
gidNumber: 100
homeDirectory: /home/mohan
loginShell: /bin/bash
gecos: Mohan [Admin (at) Apple]
userPassword: {SSHA}uWC6jFxw/4nY3GEQfwf4Eh/cq13lvyKy
shadowLastChange: 17058
shadowMin: 0
shadowMax: 99999
shadowWarning: 7
[root@master ldap]# ldapadd -x -W -D cn=Manager,dc=apple,dc=com -f adduser.ldif
Enter LDAP Password:
adding new entry “uid=mohan,ou=People,dc=apple,dc=com”

Backup LDAP with slapcat on CentOS 7

#!/bin/bash
export PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
set -e
KEEP=7
BASE_DN=’dc=apple,dc=com’
LDAPBK=”ldap-$( date +%y%m%d-%H%M ).ldif”
BACKUPDIR=’/root/ldap-backup’
test -d “$BACKUPDIR” || mkdir -p “$BACKUPDIR”
slapcat -b “$BASE_DN” -l “$BACKUPDIR/$LDAPBK”
gzip -9 “$BACKUPDIR/$LDAPBK”
ls -1tr $BACKUPDIR/*.ldif.gz | head -n-$KEEP | xargs -r rm -f     # delete backups older than the newest $KEEP
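To run this backup nightly you could add a cron entry; a sketch, the script path is an assumption:

0 2 * * * /root/ldap-backup.sh     # run the slapcat backup script at 02:00 every night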

ldapmodify -Y EXTERNAL -H ldapi:/// -f monitor.ldif

cp /usr/share/openldap-servers/DB_CONFIG.example /var/lib/ldap/DB_CONFIG
chown ldap:ldap /var/lib/ldap/*

Add the cosine and nis LDAP schemas.

ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/cosine.ldif
ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/nis.ldif
ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/inetorgperson.ldif

Generate base.ldif file for your domain.

vi base.ldif

Use the below information. You can modify it according to your requirement.

dn: dc=apple,dc=com
dc: apple
objectClass: top
objectClass: domain

dn: cn=ldapadm,dc=apple,dc=com
objectClass: organizationalRole
cn: ldapadm
description: LDAP Manager

dn: cn=Manager,dc=apple,dc=com
objectClass: organizationalRole
cn: Manager
description: Directory Manager

dn: ou=People,dc=apple,dc=com
objectClass: organizationalUnit
ou: People

dn: ou=Group,dc=apple,dc=com
objectClass: organizationalUnit
ou: Group

Build the directory structure.

ldapadd -x -W -D cn=Manager,dc=apple,dc=com -f base.ldif

ldapsearch -x -W -D 'cn=Manager,dc=apple,dc=com' -b "" -s base

ldapsearch -x -b '' -s base '(objectclass=*)' namingContexts

Backup and Restore Files in Red Hat / CentOS 7

This procedure is to backup XFS-based file systems in Red Hat Enterprise Linux 7.x
             

Backing up a file system (/etc)

1. To back up the /etc filesystem to a file called "/tmp/mybackup", issue the following command:

# cd /tmp

# xfsdump -0 -f mybackup -s etc /dev/mapper/rhel-root

where:

-f: do the dump to the file “mybackup”

-0: do full backup (1 means incremental)

-s: backup the etc subtree from the filesystem /dev/mapper/rhel-root

 

Restoring the file system (/etc/)

1. To list all the created dumps with detailed information:

# xfsrestore -I

2. To restore a particular dump:

# xfsrestore -f mybackup /tmp

where:

-f: restore from the file “mybackup”

 

Interactive Restore

1. This exercise enables you to navigate through the backup file "mybackup", and allows you to restore selected files instead of performing a full restore:

# cd /tmp

# xfsrestore -if mybackup /tmp

2. Once you get the restore prompt, you may use the ls and cd commands to navigate and list the files inside the backup file.

3. Select the files you want to restore by using the add command:

# add yum.conf

4. Once the selection is complete, type extract to restore the selected files.
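Putting the interactive steps together, a session might look like the sketch below (yum.conf is just the example file selected above):

# xfsrestore -if mybackup /tmp
 xfsrestore> ls          # list entries at the current level of the dump
 xfsrestore> cd etc      # descend into the etc subtree
 xfsrestore> add yum.conf
 xfsrestore> extract     # restore the selected files under /tmp
 xfsrestore> quit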

ANSIBLE Practice

ANSIBLE Practice
Ansible playbooks

root@controller:~$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/root/.ssh/id_rsa.
Your public key has been saved in /home/root/.ssh/id_rsa.pub.
The key fingerprint is:
33:b8:4d:8f:95:bc:ed:1a:12:f3:6c:09:9f:52:23:d0 root@controller
The key's randomart image is:
+--[ RSA 2048]----+
| |
| . |
| . E |
| o . . |
| . S * |
| + / * |
| . = @ . |
| + o |
| … |
+-----------------+
root@controller:~$ cat .ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCpEZIWC8UGJXA5uGDRj6vUd5KlwIvE0cat3opG4ZUajmrZU382PbdO3JG6DJa3beZXylYSOYeKtRVT9DxbeKgeTKQ4m8uamM81NAMRf0ZaiqSsZ9r56h1urlJCfD4y5nXwnUTvoBzZpTvTYwcevBzpNcI/VnBIgpcKQWJq11iHHrcmybbFreujgotHg1XUwCv9BdpXbPnA50XbUyX97uqCE9EzIk7WnSNpTtsmASxMPSWoHB9seOas1mq7UBKo7Xfu7qaJJLIEnMisBLKHPb0hM23BNV2SiacJEpHSB5eJKULtMGDej38HbmBsQI3u+lzcWSRppDIt6BvO05brW5C5 root@controller

Copy the key to the ansible deployment server host for passwordless auth
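One way to do the copy (a minimal sketch, assuming the "ansible" host alias from /etc/hosts below):

root@controller:~$ ssh-copy-id root@ansible     # appends ~/.ssh/id_rsa.pub to the remote authorized_keys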

root@controller:~$ cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 controller
104.198.143.3 ansible
root@controller:~$ ssh ansible
Last login: Wed Jan 18 02:51:09 2017 from 115.113.77.105
[root@ansible ~]$

——————————-
[root@ansible ~]$ sudo vim /etc/ansible/hosts
[web]
192.168.1.21
192.168.1.23
——————————-
[root@ansible ~]$ vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.128.0.2 ansible.c.rich-operand-154505.internal ansible # Added by Google
169.254.169.254 metadata.google.internal # Added by Google
192.168.1.23 ansible1
192.168.1.22 ansible2
104.198.143.3 ansible

——————————-
[root@ansible ~]$ ansible -m ping web
192.168.1.23 | SUCCESS => {
“changed”: false,
“ping”: “pong”
}
——————————-
[root@ansible ~]$ ansible -m ping web
192.168.1.22| SUCCESS => {
“changed”: false,
“ping”: “pong”
}

——————————-
[root@ansible ~]$ ansible -m ping all -u root
192.168.1.23 | SUCCESS => {
“changed”: false,
“ping”: “pong”
}
192.168.1.22| SUCCESS => {
“changed”: false,
“ping”: “pong”
}
——————————-
[root@ansible ~]$ ansible -m ping all -u root
192.168.1.23 | UNREACHABLE! => {
“changed”: false,
“msg”: “Failed to connect to the host via ssh: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).\r\n”,
“unreachable”: true
}
192.168.1.22| UNREACHABLE! => {
“changed”: false,
“msg”: “Failed to connect to the host via ssh: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).\r\n”,
“unreachable”: true
}
——————————-
[root@ansible ~]$ ansible -m ping all -b
192.168.1.23 | SUCCESS => {
“changed”: false,
“ping”: “pong”
}
192.168.1.22| SUCCESS => {
“changed”: false,
“ping”: “pong”
}
——————————-
[root@ansible ~]$ ansible -s -m ping all
192.168.1.23 | SUCCESS => {
“changed”: false,
“ping”: “pong”
}
192.168.1.22| SUCCESS => {
“changed”: false,
“ping”: “pong”
}
——————————-
[root@ansible ~]$ vim playbook1.yml

- hosts: all
  tasks:
    - name: installing telnet package
      yum: name=telnet state=present

[root@ansible ~]$ ansible-playbook playbook1.yml

PLAY [all] *********************************************************************

TASK [setup] *******************************************************************
ok: [192.168.1.21]
ok: [192.168.1.23]

TASK [installing telnet package] ***********************************************
fatal: [192.168.1.23]: FAILED! => {“changed”: true, “failed”: true, “msg”: “You need to be root to perform this command.\n”, “rc”: 1, “results”: [“Loaded plugins: fastestmirror\n”]}
fatal: [192.168.1.21]: FAILED! => {“changed”: true, “failed”: true, “msg”: “You need to be root to perform this command.\n”, “rc”: 1, “results”: [“Loaded plugins: fastestmirror\n”]}
to retry, use: –limit @/home/root/playbook1.retry

PLAY RECAP *********************************************************************
192.168.1.22: ok=1 changed=0 unreachable=0 failed=1
192.168.1.23 : ok=1 changed=0 unreachable=0 failed=1

[root@ansible ~]$ ansible-playbook playbook1.yml -b

PLAY [all] *********************************************************************

TASK [setup] *******************************************************************
ok: [192.168.1.21]
ok: [192.168.1.23]

TASK [installing telnet package] ***********************************************
changed: [192.168.1.23]
changed: [192.168.1.21]

PLAY RECAP *********************************************************************
192.168.1.22: ok=2 changed=1 unreachable=0 failed=0
192.168.1.23 : ok=2 changed=1 unreachable=0 failed=0
——————————-
[root@ansible ~]$ vim playbook2.yml

- hosts: all
  tasks:
    - name: inatalling nfs package
      yum: name=nfs-utils state=present

    - name: statrting nfs service
      service: name=nfs state=started enabled=yes

[root@ansible ~]$ ansible-playbook playbook2.yml --syntax-check

playbook: playbook2.yml
[root@ansible ~]$ ansible-playbook playbook2.yml --check

PLAY [all] *********************************************************************

TASK [setup] *******************************************************************
ok: [192.168.1.21]
ok: [192.168.1.23]

TASK [inatalling nfs package] **************************************************
changed: [192.168.1.23]
changed: [192.168.1.21]

TASK [statrting nfs service] ***************************************************
fatal: [192.168.1.21]: FAILED! => {“changed”: false, “failed”: true, “msg”: “Could not find the requested service \”‘nfs’\”: “}
fatal: [192.168.1.23]: FAILED! => {“changed”: false, “failed”: true, “msg”: “Could not find the requested service \”‘nfs’\”: “}
to retry, use: –limit @/home/root/playbook2.retry

PLAY RECAP *********************************************************************
192.168.1.22: ok=2 changed=1 unreachable=0 failed=1
192.168.1.23 : ok=2 changed=1 unreachable=0 failed=1

————————
[root@ansible ~]$ ansible-playbook playbook2.yml -b

PLAY [all] *********************************************************************

TASK [setup] *******************************************************************
ok: [192.168.1.21]
ok: [192.168.1.23]

TASK [inatalling nfs package] **************************************************
changed: [192.168.1.21]
changed: [192.168.1.23]

TASK [starting nfs service] ***************************************************
changed: [192.168.1.21]
changed: [192.168.1.23]

PLAY RECAP *********************************************************************
192.168.1.22: ok=3 changed=2 unreachable=0 failed=0
192.168.1.23 : ok=3 changed=2 unreachable=0 failed=0
—————-

Running the same playbook again and again leaves the configuration unchanged, because playbooks are idempotent:
[root@ansible ~]$ ansible-playbook playbook2.yml -b

PLAY [all] *********************************************************************

TASK [setup] *******************************************************************
ok: [192.168.1.21]
ok: [192.168.1.23]

TASK [installing nfs package] **************************************************
ok: [192.168.1.21]
ok: [192.168.1.23]

TASK [starting nfs service] ***************************************************
ok: [192.168.1.21]
ok: [192.168.1.23]

PLAY RECAP *********************************************************************
192.168.1.22: ok=3 changed=0 unreachable=0 failed=0
192.168.1.23 : ok=3 changed=0 unreachable=0 failed=0

=================================================
[root@ansible ~]$ ansible all -a “service nfs status” -b
192.168.1.23 | SUCCESS | rc=0 >>
? nfs-server.service – NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
Active: active (exited) since Wed 2017-01-18 04:14:18 UTC; 2min 13s ago
Process: 12036 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
Process: 12035 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
Main PID: 12036 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/nfs-server.service

192.168.1.22| SUCCESS | rc=0 >>
? nfs-server.service – NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
Active: active (exited) since Wed 2017-01-18 04:14:18 UTC; 2min 13s ago
Process: 6738 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
Process: 6737 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
Main PID: 6738 (code=exited, status=0/SUCCESS)
Memory: 0B
CGroup: /system.slice/nfs-server.service
—————————————————
[root@ansible ~]$ vim playbook3.yml

- hosts: all
  become: yes
  tasks:
    - name: Install Apache.
      yum: name={{ item }} state=present
      with_items:
        - httpd
        - httpd-devel

    - name: Copy configuration files.
      copy:
        src: "{{ item.src }}"
        dest: "{{ item.dest }}"
        owner: root
        group: root
        mode: 0644
      with_items:
        - src: "httpd.conf"
          dest: "/etc/httpd/conf/httpd.conf"
        - src: "httpd-vhosts.conf"
          dest: "/etc/httpd/conf/httpd-vhosts.conf"

    - name: Make sure Apache is started now and at boot.
      service: name=httpd state=started enabled=yes
[root@ansible ~]$ ls -l
total 40
-rw-r–r–. 1 root root 11753 Jan 18 06:27 httpd.conf
-rw-r–r–. 1 root root 824 Jan 18 06:27 httpd-vhosts.conf

[root@ansible ~]$ ansible-playbook playbook3.yml

PLAY [all] *********************************************************************

TASK [setup] *******************************************************************
ok: [192.168.1.21]
ok: [192.168.1.23]

TASK [Install Apache.] *********************************************************
changed: [192.168.1.23] => (item=[u’httpd’, u’httpd-devel’])
changed: [192.168.1.21] => (item=[u’httpd’, u’httpd-devel’])

TASK [Copy configuration files.] ***********************************************
ok: [192.168.1.21] => (item={u’dest’: u’/etc/httpd/conf/httpd.conf’, u’src’: u’httpd.conf’})
ok: [192.168.1.23] => (item={u’dest’: u’/etc/httpd/conf/httpd.conf’, u’src’: u’httpd.conf’})
ok: [192.168.1.21] => (item={u’dest’: u’/etc/httpd/conf/httpd-vhosts.conf’, u’src’: u’httpd-vhosts.conf’})
ok: [192.168.1.23] => (item={u’dest’: u’/etc/httpd/conf/httpd-vhosts.conf’, u’src’: u’httpd-vhosts.conf’})

TASK [Make sure Apache is started now and at boot.] ****************************
changed: [192.168.1.21]
changed: [192.168.1.23]

PLAY RECAP *********************************************************************
192.168.1.22: ok=4 changed=2 unreachable=0 failed=0
192.168.1.23 : ok=4 changed=2 unreachable=0 failed=0

[root@ansible ~]$ sudo vim /etc/ansible/hosts
[web]
192.168.1.23
[web]
192.168.1.21
[multi:children]
web
web

[root@ansible ~]$ ansible multi -m ping
192.168.1.23 | SUCCESS => {
“changed”: false,
“ping”: “pong”
}
192.168.1.22| SUCCESS => {
“changed”: false,
“ping”: “pong”
}
—————————————————

[root@ansible ~]$ ansible multi -a hostname
192.168.1.22| SUCCESS | rc=0 >>
ansible2.c.rich-operand-154505.internal

192.168.1.23 | SUCCESS | rc=0 >>
ansible1.c.rich-operand-154505.internal
—————————————-
[root@ansible ~]$ ansible multi -a 'free -m'
192.168.1.22| SUCCESS | rc=0 >>
total used free shared buff/cache available
Mem: 3700 229 1673 178 1796 2978
Swap: 0 0 0

192.168.1.23 | SUCCESS | rc=0 >>
total used free shared buff/cache available
Mem: 3700 329 265 16 3105 3031
Swap: 0 0 0
—————————————-
[root@ansible ~]$ ansible multi -a "du -h"
192.168.1.23 | SUCCESS | rc=0 >>
4.0K ./.ssh
56K ./.ansible/tmp/ansible-tmp-1484714028.87-63512995370206
56K ./.ansible/tmp
56K ./.ansible
0 ./.puppetlabs/var
0 ./.puppetlabs/etc
0 ./.puppetlabs/opt/puppet
0 ./.puppetlabs/opt
0 ./.puppetlabs
80K .

192.168.1.22| SUCCESS | rc=0 >>
4.0K ./.ssh
56K ./.ansible/tmp/ansible-tmp-1484714028.87-38086154108105
56K ./.ansible/tmp
56K ./.ansible
80K .
————————————-
[root@ansible ~]$ ansible multi -a 'service httpd status' -b
192.168.1.22| FAILED | rc=4 >>
Redirecting to /bin/systemctl status httpd.service
Unit httpd.service could not be found.

192.168.1.23 | FAILED | rc=4 >>
Redirecting to /bin/systemctl status httpd.service
Unit httpd.service could not be found.
————————————-
[root@ansible ~]$ ansible multi -a 'netstat -tlpn' -s
192.168.1.22| SUCCESS | rc=0 >>
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:2380 0.0.0.0:* LISTEN 15329/etcd
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 1/systemd
tcp 0 0 0.0.0.0:20048 0.0.0.0:* LISTEN 6736/rpc.mountd
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 994/sshd
tcp 0 0 0.0.0.0:34457 0.0.0.0:* LISTEN –
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1041/master
tcp 0 0 0.0.0.0:2049 0.0.0.0:* LISTEN –
tcp 0 0 0.0.0.0:43009 0.0.0.0:* LISTEN 6724/rpc.statd
tcp6 0 0 :::10251 :::* LISTEN 15502/kube-schedule
tcp6 0 0 :::6443 :::* LISTEN 15445/kube-apiserve
tcp6 0 0 :::2379 :::* LISTEN 15329/etcd
tcp6 0 0 :::10252 :::* LISTEN 15463/kube-controll
tcp6 0 0 :::111 :::* LISTEN 6524/rpcbind
tcp6 0 0 :::20048 :::* LISTEN 6736/rpc.mountd
tcp6 0 0 :::8080 :::* LISTEN 15445/kube-apiserve
tcp6 0 0 :::22 :::* LISTEN 994/sshd
tcp6 0 0 ::1:25 :::* LISTEN 1041/master
tcp6 0 0 :::36474 :::* LISTEN –
tcp6 0 0 :::2049 :::* LISTEN –
tcp6 0 0 :::54309 :::* LISTEN 6724/rpc.statd

192.168.1.23 | SUCCESS | rc=0 >>
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 1/systemd
tcp 0 0 0.0.0.0:9999 0.0.0.0:* LISTEN 7715/java
tcp 0 0 0.0.0.0:20048 0.0.0.0:* LISTEN 12034/rpc.mountd
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 7715/java
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 990/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1032/master
tcp 0 0 0.0.0.0:4447 0.0.0.0:* LISTEN 7715/java
tcp 0 0 0.0.0.0:45185 0.0.0.0:* LISTEN –
tcp 0 0 0.0.0.0:2049 0.0.0.0:* LISTEN –
tcp 0 0 0.0.0.0:9990 0.0.0.0:* LISTEN 7715/java
tcp 0 0 0.0.0.0:53095 0.0.0.0:* LISTEN 12018/rpc.statd
tcp6 0 0 :::111 :::* LISTEN 11822/rpcbind
tcp6 0 0 :::20048 :::* LISTEN 12034/rpc.mountd
tcp6 0 0 :::22 :::* LISTEN 990/sshd
tcp6 0 0 :::43255 :::* LISTEN –
tcp6 0 0 :::55927 :::* LISTEN 12018/rpc.statd
tcp6 0 0 ::1:25 :::* LISTEN 1032/master
tcp6 0 0 :::2049 :::* LISTEN –

[root@ansible ~]$ ansible multi -s -m yum -a "name=ntp state=present"
192.168.1.23 | SUCCESS => {
“changed”: false,
“msg”: “”,
“rc”: 0,
“results”: [
“ntp-4.2.6p5-25.el7.centos.x86_64 providing ntp is already installed”
]
}
192.168.1.22| SUCCESS => {
“changed”: false,
“msg”: “”,
“rc”: 0,
“results”: [
“ntp-4.2.6p5-25.el7.centos.x86_64 providing ntp is already installed”
]
}

————————————–
[root@ansible ~]$ ansible multi -s -m service -a "name=ntpd state=started enabled=yes"
————————————–
[root@ansible ~]$ ntpdate
18 Jan 04:57:35 ntpdate[4532]: no servers can be used, exiting
————————————–
[root@ansible ~]$ ansible multi -s -a "service ntpd stop"
192.168.1.23 | SUCCESS | rc=0 >>
Redirecting to /bin/systemctl stop ntpd.service

192.168.1.22| SUCCESS | rc=0 >>
Redirecting to /bin/systemctl stop ntpd.service
————————————–
[root@ansible ~]$ ansible multi -s -a "ntpdate -q 0.rhel.pool.ntp.org"
192.168.1.22| SUCCESS | rc=0 >>
server 138.236.128.112, stratum 2, offset -0.003149, delay 0.05275
server 71.210.146.228, stratum 2, offset 0.003796, delay 0.04633
server 128.138.141.172, stratum 1, offset -0.000194, delay 0.03752
server 69.89.207.199, stratum 2, offset -0.000211, delay 0.05193
18 Jan 04:58:22 ntpdate[10370]: adjust time server 128.138.141.172 offset -0.000194 sec

192.168.1.23 | SUCCESS | rc=0 >>
server 173.230.144.109, stratum 2, offset 0.000549, delay 0.06175
server 45.127.113.2, stratum 3, offset 0.000591, delay 0.06134
server 4.53.160.75, stratum 2, offset -0.000900, delay 0.04163
server 50.116.52.97, stratum 2, offset -0.001006, delay 0.05426
18 Jan 04:58:22 ntpdate[15477]: adjust time server 4.53.160.75 offset -0.000900 sec
————————————–
[root@ansible ~]$ ansible web -s -m yum -a "name=MySQL-python state=present"
192.168.1.23 | SUCCESS => {
“changed”: true,
“msg”: “”,
“rc”: 0,
“results”: [
“Loaded plugins: fastestmirror\nLoading mirror speeds from cached hostfile\n * base: bay.uchicago.edu\n * epel: mirror.steadfast.net\n * extras: mirror.tzulo.com\n * updates: mirror.team-cymru.org\nResolving Dependencies\n–> Running transaction check\n—> Package MySQL-python.x86_64 0:1.2.5-1.el7 will be installed\n–> Finished Dependency Resolution\n\nDependencies Resolved\n\n================================================================================\n Package Arch Version Repository Size\n================================================================================\nInstalling:\n MySQL-python x86_64 1.2.5-1.el7 base 90 k\n\nTransaction Summary\n================================================================================\nInstall 1 Package\n\nTotal download size: 90 k\nInstalled size: 284 k\nDownloading packages:\nRunning transaction check\nRunning transaction test\nTransaction test succeeded\nRunning transaction\n Installing : MySQL-python-1.2.5-1.el7.x86_64 1/1 \n Verifying : MySQL-python-1.2.5-1.el7.x86_64 1/1 \n\nInstalled:\n MySQL-python.x86_64 0:1.2.5-1.el7 \n\nComplete!\n”
]
}

[root@ansible ~]$ ansible web -s -m yum -a "name=python-setuptools state=present"
192.168.1.23 | SUCCESS => {
“changed”: false,
“msg”: “”,
“rc”: 0,
“results”: [
“python-setuptools-0.9.8-4.el7.noarch providing python-setuptools is already installed”
]
}

[root@ansible ~]$ ansible web -s -m easy_install -a "name=django state=present"
192.168.1.23 | SUCCESS => {
“binary”: “/bin/easy_install”,
“changed”: true,
“name”: “django”,
“virtualenv”: null
}
————————————–
[root@ansible ~]$ ansible web -s -m user -a "name=admin state=present"
192.168.1.23 | SUCCESS => {
“changed”: true,
“comment”: “”,
“createhome”: true,
“group”: 1004,
“home”: “/home/admin”,
“name”: “admin”,
“shell”: “/bin/bash”,
“state”: “present”,
“system”: false,
“uid”: 1003
}
[root@ansible ~]$ ansible web -s -m group -a "name=admin state=present"
192.168.1.23 | SUCCESS => {
“changed”: false,
“gid”: 1004,
“name”: “admin”,
“state”: “present”,
“system”: false
}
[root@ansible ~]$ ansible web -s -m user -a "name=first group=admin state=present"
192.168.1.23 | SUCCESS => {
“changed”: true,
“comment”: “”,
“createhome”: true,
“group”: 1004,
“home”: “/home/first”,
“name”: “first”,
“shell”: “/bin/bash”,
“state”: “present”,
“system”: false,
“uid”: 1004
}
[root@ansible ~]$ ansible web -a "tail /etc/passwd"
192.168.1.23 | SUCCESS | rc=0 >>
systemd-network:x:192:192:systemd Network Management:/:/sbin/nologin
tss:x:59:59:Account used by the trousers package to sandbox the tcsd daemon:/dev/null:/sbin/nologin
root:x:1000:1001::/home/root:/bin/bash
test:x:1001:1002::/home/test:/bin/bash
jboss:x:1002:1003::/home/jboss:/bin/bash
rpc:x:32:32:Rpcbind Daemon:/var/lib/rpcbind:/sbin/nologin
rpcuser:x:29:29:RPC Service User:/var/lib/nfs:/sbin/nologin
nfsnobody:x:65534:65534:Anonymous NFS User:/var/lib/nfs:/sbin/nologin
admin:x:1003:1004::/home/admin:/bin/bash
first:x:1004:1004::/home/first:/bin/bash

[root@ansible ~]$ ansible web -a "tail /etc/shadow"
192.168.1.23 | FAILED | rc=1 >>
tail: cannot open ‘/etc/shadow’ for reading: Permission denied

[root@ansible ~]$ ansible web -a "tail /etc/shadow" -b
192.168.1.23 | SUCCESS | rc=0 >>
systemd-network:!!:17176::::::
tss:!!:17176::::::
root:*:17178:0:99999:7:::
test:*:17178:0:99999:7:::
jboss:!!:17182:0:99999:7:::
rpc:!!:17184:0:99999:7:::
rpcuser:!!:17184::::::
nfsnobody:!!:17184::::::
admin:!!:17184:0:99999:7:::
first:!!:17184:0:99999:7:::
——————————–

[root@ansible ~]$ ansible web -m stat -a "path=/etc/hosts"
192.168.1.23 | SUCCESS => {
“changed”: false,
“stat”: {
“atime”: 1484635843.2532218,
“checksum”: “5fed7929fb7be9b1046252b6b7e0e2263fbc0738”,
“ctime”: 1484203757.175483,
“dev”: 2049,
“executable”: false,
“exists”: true,
“gid”: 0,
“gr_name”: “root”,
“inode”: 240,
“isblk”: false,
“ischr”: false,
“isdir”: false,
“isfifo”: false,
“isgid”: false,
“islnk”: false,
“isreg”: true,
“issock”: false,
“isuid”: false,
“md5”: “10f391742a450d220ff00269216eff8a”,
“mode”: “0644”,
“mtime”: 1484203757.175483,
“nlink”: 1,
“path”: “/etc/hosts”,
“pw_name”: “root”,
“readable”: true,
“rgrp”: true,
“roth”: true,
“rusr”: true,
“size”: 297,
“uid”: 0,
“wgrp”: false,
“woth”: false,
“writeable”: false,
“wusr”: true,
“xgrp”: false,
“xoth”: false,
“xusr”: false
}
}

========================================
[root@ansible ~]$ ansible multi -m copy -a "src=/etc/hosts dest=/tmp/hosts"
192.168.1.23 | SUCCESS => {
“changed”: true,
“checksum”: “08aa54eecc8a866b53d38351ea72e5bb97718005”,
“dest”: “/tmp/hosts”,
“gid”: 1001,
“group”: “root”,
“md5sum”: “72ff7a2085a5186d0cab74f14bae1483”,
“mode”: “0664”,
“owner”: “root”,
“secontext”: “unconfined_u:object_r:user_home_t:s0”,
“size”: 369,
“src”: “/home/root/.ansible/tmp/ansible-tmp-1484717608.47-178441141048946/source”,
“state”: “file”,
“uid”: 1000
}
192.168.1.22| SUCCESS => {
“changed”: true,
“checksum”: “08aa54eecc8a866b53d38351ea72e5bb97718005”,
“dest”: “/tmp/hosts”,
“gid”: 1001,
“group”: “root”,
“md5sum”: “72ff7a2085a5186d0cab74f14bae1483”,
“mode”: “0664”,
“owner”: “root”,
“secontext”: “unconfined_u:object_r:user_home_t:s0”,
“size”: 369,
“src”: “/home/root/.ansible/tmp/ansible-tmp-1484717608.85-272034831244848/source”,
“state”: “file”,
“uid”: 1000
}
==========================================
[root@ansible ~]$ ansible multi -s -m fetch -a "src=/etc/hosts dest=/tmp"
192.168.1.23 | SUCCESS => {
“changed”: true,
“checksum”: “5fed7929fb7be9b1046252b6b7e0e2263fbc0738”,
“dest”: “/tmp/192.168.1.23/etc/hosts”,
“md5sum”: “10f391742a450d220ff00269216eff8a”,
“remote_checksum”: “5fed7929fb7be9b1046252b6b7e0e2263fbc0738”,
“remote_md5sum”: null
}
192.168.1.22| SUCCESS => {
“changed”: true,
“checksum”: “0b24c9ee4a888defdf6769d5e72f65761e882f1f”,
“dest”: “/tmp/192.168.1.21/etc/hosts”,
“md5sum”: “e2dd8ef8a5f58f35d7a3f3dce7f2f2bf”,
“remote_checksum”: “0b24c9ee4a888defdf6769d5e72f65761e882f1f”,
“remote_md5sum”: null
}
============================================
[root@ansible ~]$ ls -l /tmp/
total 16
drwxrwxr-x. 3 root root 16 Jan 18 05:34 192.168.1.21
drwxrwxr-x. 3 root root 16 Jan 18 05:34 192.168.1.23

[root@ansible ~]$ cat /tmp/192.168.1.23/etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.128.0.3 ansible1.c.rich-operand-154505.internal ansible1 # Added by Google
169.254.169.254 metadata.google.internal # Added by Google
==========================================
[root@ansible ~]$ ansible multi -m file -a "dest=/tmp/test mode=644 state=directory"
192.168.1.23 | SUCCESS => {
“changed”: true,
“gid”: 1001,
“group”: “root”,
“mode”: “0644”,
“owner”: “root”,
“path”: “/tmp/test”,
“secontext”: “unconfined_u:object_r:user_tmp_t:s0”,
“size”: 6,
“state”: “directory”,
“uid”: 1000
}
192.168.1.22| SUCCESS => {
“changed”: true,
“gid”: 1001,
“group”: “root”,
“mode”: “0644”,
“owner”: “root”,
“path”: “/tmp/test”,
“secontext”: “unconfined_u:object_r:user_tmp_t:s0”,
“size”: 6,
“state”: “directory”,
“uid”: 1000
}
===========================================
[root@ansible ~]$ ansible multi -s -m file -a "dest=/tmp/test mode=644 owner=root state=directory"
192.168.1.22| SUCCESS => {
“changed”: true,
“gid”: 1001,
“group”: “root”,
“mode”: “0644”,
“owner”: “root”,
“path”: “/tmp/test”,
“secontext”: “unconfined_u:object_r:user_tmp_t:s0”,
“size”: 6,
“state”: “directory”,
“uid”: 0
}
192.168.1.23 | SUCCESS => {
“changed”: true,
“gid”: 1001,
“group”: “root”,
“mode”: “0644”,
“owner”: “root”,
“path”: “/tmp/test”,
“secontext”: “unconfined_u:object_r:user_tmp_t:s0”,
“size”: 6,
“state”: “directory”,
“uid”: 0
}
================================================
[root@ansible ~]$ ansible multi -s -B 3600 -a "yum -y update"
=================================================
[root@ansible ~]$ ansible 192.168.1.23 -s -a "tail /var/log/messages"
192.168.1.23 | SUCCESS | rc=0 >>
Jan 18 05:41:03 ansible1 ansible-async_wrapper.py: Return async_wrapper task started.
Jan 18 05:41:03 ansible1 ansible-async_wrapper.py: Starting module and watcher
Jan 18 05:41:03 ansible1 ansible-async_wrapper.py: Start watching 18305 (3600)
Jan 18 05:41:03 ansible1 ansible-async_wrapper.py: Start module (18305)
Jan 18 05:41:03 ansible1 ansible-async_wrapper.py: Module complete (18305)
Jan 18 05:41:08 ansible1 ansible-async_wrapper.py: Done in kid B.
Jan 18 05:42:04 ansible1 systemd-logind: Removed session 180.
Jan 18 05:42:04 ansible1 systemd: Started Session 181 of user root.
Jan 18 05:42:04 ansible1 systemd-logind: New session 181 of user root.
Jan 18 05:42:04 ansible1 systemd: Starting Session 181 of user root.
=================================================
[root@ansible ~]$ ansible multi -s -m shell -a "tail /var/log/messages | grep ansible-command | wc -l"
192.168.1.23 | SUCCESS | rc=0 >>
0

192.168.1.22| SUCCESS | rc=0 >>
0
=================================
[root@ansible ~]$ ansible web -s -m git -a "repo=git://web.com/path/to/repo.git dest=/opt/myapp update=yes version=1.2.4"
192.168.1.23 | FAILED! => {
“changed”: false,
“failed”: true,
“msg”: “Failed to find required executable git”
}
[root@ansible ~]$ ansible web -s -m yum -a "name=git state=present"
192.168.1.23 | SUCCESS => {
“changed”: true,
“msg”: “”,
“rc”: 0,
“results”: [
“Loaded plugins: fastestmirror\nLoading mirror speeds from cached hostfile\n * base: bay.uchicago.edu\n * epel: mirror.steadfast.net\n * extras: mirror.tzulo.com\n * updates: mirror.team-cymru.org\nResolving Dependencies\n–> Running transaction check\n—> Package git.x86_64 0:1.8.3.1-6.el7_2.1 will be installed\n–> Processing Dependency: perl-Git = 1.8.3.1-6.el7_2.1 for package: git-1.8.3.1-6.el7_2.1.x86_64\n–> Processing Dependency: perl(Term::ReadKey) for package: git-1.8.3.1-6.el7_2.1.x86_64\n–> Processing Dependency: perl(Git) for package: git-1.8.3.1-6.el7_2.1.x86_64\n–> Processing Dependency: perl(Error) for package: git-1.8.3.1-6.el7_2.1.x86_64\n–> Processing Dependency: libgnome-keyring.so.0()(64bit) for package: git-1.8.3.1-6.el7_2.1.x86_64\n–> Running transaction check\n—> Package libgnome-keyring.x86_64 0:3.8.0-3.el7 will be installed\n—> Package perl-Error.noarch 1:0.17020-2.el7 will be installed\n—> Package perl-Git.noarch 0:1.8.3.1-6.el7_2.1 will be installed\n—> Package perl-TermReadKey.x86_64 0:2.30-20.el7 will be installed\n–> Finished Dependency Resolution\n\nDependencies Resolved\n\n================================================================================\n Package Arch Version Repository Size\n================================================================================\nInstalling:\n git x86_64 1.8.3.1-6.el7_2.1 base 4.4 M\nInstalling for dependencies:\n libgnome-keyring x86_64 3.8.0-3.el7 base 109 k\n perl-Error noarch 1:0.17020-2.el7 base 32 k\n perl-Git noarch 1.8.3.1-6.el7_2.1 base 53 k\n perl-TermReadKey x86_64 2.30-20.el7 base 31 k\n\nTransaction Summary\n================================================================================\nInstall 1 Package (+4 Dependent packages)\n\nTotal download size: 4.6 M\nInstalled size: 23 M\nDownloading packages:\n——————————————————————————–\nTotal 12 MB/s | 4.6 MB 00:00 \nRunning transaction check\nRunning transaction test\nTransaction test succeeded\nRunning transaction\n Installing : 1:perl-Error-0.17020-2.el7.noarch 1/5 \n Installing : libgnome-keyring-3.8.0-3.el7.x86_64 2/5 \n Installing : perl-TermReadKey-2.30-20.el7.x86_64 3/5 \n Installing : git-1.8.3.1-6.el7_2.1.x86_64 4/5 \n Installing : perl-Git-1.8.3.1-6.el7_2.1.noarch 5/5 \n Verifying : perl-Git-1.8.3.1-6.el7_2.1.noarch 1/5 \n Verifying : perl-TermReadKey-2.30-20.el7.x86_64 2/5 \n Verifying : libgnome-keyring-3.8.0-3.el7.x86_64 3/5 \n Verifying : 1:perl-Error-0.17020-2.el7.noarch 4/5 \n Verifying : git-1.8.3.1-6.el7_2.1.x86_64 5/5 \n\nInstalled:\n git.x86_64 0:1.8.3.1-6.el7_2.1 \n\nDependency Installed:\n libgnome-keyring.x86_64 0:3.8.0-3.el7 perl-Error.noarch 1:0.17020-2.el7 \n perl-Git.noarch 0:1.8.3.1-6.el7_2.1 perl-TermReadKey.x86_64 0:2.30-20.el7 \n\nComplete!\n”
]
}
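With git now installed, the earlier git module command can be retried. This is just a sketch reusing the placeholder repository URL from above; it will only succeed if that repository actually exists and is reachable:

ansible web -s -m git -a "repo=git://web.com/path/to/repo.git dest=/opt/myapp update=yes version=1.2.4"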

Kubernetes on CentOS 7.3

Last login: Thu Mar 1 00:45:37 2018 from 192.168.1.248
[root@clusterserver1 ~]# uname -a
Linux clusterserver1.rmohan.com 3.10.0-693.17.1.el7.x86_64 #1 SMP Thu Jan 25 20:13:58 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
[root@clusterserver1 ~]#

Master : 192.168.1.20
Worker1 : 192.168.1.21
Worker2 : 192.168.1.23
worker3 : 192.168.1.24

hostnamectl set-hostname 'clusterserver1.rmohan.com'
exec bash
setenforce 0
sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux

[root@clusterserver1 ~]# cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)
[root@clusterserver1 ~]#

CentOS Linux release 7.4.1708 (Core)
[root@clusterserver1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.20 clusterserver1.rmohan.com clusterserver1 master
192.168.1.21 clusterserver2.rmohan.com clusterserver2 worker1
192.168.1.22 clusterserver3.rmohan.com clusterserver3 worker2
192.168.1.23 clusterserver4.rmohan.com clusterserver4 worker3
[root@clusterserver1 ~]#

[root@clusterserver1 ~]# systemctl stop firewalld && systemctl disable firewalld

[root@clusterserver1 ~]# yum update -y
[root@clusterserver1 ~]# modprobe br_netfilter
[root@clusterserver1 ~]# sysctl net.bridge.bridge-nf-call-iptables=1
[root@clusterserver1 ~]# sysctl net.bridge.bridge-nf-call-ip6tables=1

[root@clusterserver1 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@clusterserver1 ~]# sysctl --system
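Optionally, confirm that the bridge settings took effect (a quick check, not part of the original steps):

[root@clusterserver1 ~]# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables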

cat <<EOF > /etc/yum.repos.d/virt7-dockerrelease.repo
[virt7-docker-common-release]
name=virt7-docker-common-release
baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
gpgcheck=0
EOF

cat <<eof > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
eof

[root@clusterserver1 ~]# yum install ebtables ethtool

[root@clusterserver1 ~]# yum install -y kubelet kubeadm kubectl docker

#systemctl enable kubelet && systemctl start kubelet

After this, check whether the kubelet service is running:

#systemctl status kubelet -l

[root@clusterserver1 ~]# systemctl status kubelet -l
? kubelet.service – kubelet: The Kubernetes Node Agent
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/kubelet.service.d
??10-kubeadm.conf
Active: activating (auto-restart) (Result: exit-code) since Thu 2018-03-01 16:16:40 +08; 8s ago
Docs: http://kubernetes.io/docs/
Process: 2504 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
Main PID: 2504 (code=exited, status=1/FAILURE)

Mar 01 16:16:40 clusterserver1.rmohan.com systemd[1]: kubelet.service: main process exited, code=exited, status=1/FAILURE
Mar 01 16:16:40 clusterserver1.rmohan.com systemd[1]: Unit kubelet.service entered failed state.
Mar 01 16:16:40 clusterserver1.rmohan.com systemd[1]: kubelet.service failed.
[root@clusterserver1 ~]# systemctl status kubelet -l
? kubelet.service – kubelet: The Kubernetes Node Agent
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/kubelet.service.d
??10-kubeadm.conf
Active: activating (auto-restart) (Result: exit-code) since Thu 2018-03-01 16:17:01 +08; 1s ago
Docs: http://kubernetes.io/docs/
Process: 2517 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
Main PID: 2517 (code=exited, status=1/FAILURE)

You can verify the errors in /var/log/messages:
[root@clusterserver1 ~]# tail -f /var/log/messages
Mar 1 16:17:11 clusterserver1 systemd: kubelet.service holdoff time over, scheduling restart.
Mar 1 16:17:11 clusterserver1 systemd: Started kubelet: The Kubernetes Node Agent.
Mar 1 16:17:11 clusterserver1 systemd: Starting kubelet: The Kubernetes Node Agent…
Mar 1 16:17:11 clusterserver1 kubelet: I0301 16:17:11.580912 2523 feature_gate.go:226] feature gates: &{{} map[]}
Mar 1 16:17:11 clusterserver1 kubelet: I0301 16:17:11.581022 2523 controller.go:114] kubelet config controller: starting controller
Mar 1 16:17:11 clusterserver1 kubelet: I0301 16:17:11.581030 2523 controller.go:118] kubelet config controller: validating combination of defaults and flags
Mar 1 16:17:11 clusterserver1 kubelet: error: unable to load client CA file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory
Mar 1 16:17:11 clusterserver1 systemd: kubelet.service: main process exited, code=exited, status=1/FAILURE
Mar 1 16:17:11 clusterserver1 systemd: Unit kubelet.service entered failed state.
Mar 1 16:17:11 clusterserver1 systemd: kubelet.service failed.
Mar 1 16:17:21 clusterserver1 systemd: kubelet.service holdoff time over, scheduling restart.
Mar 1 16:17:21 clusterserver1 systemd: Started kubelet: The Kubernetes Node Agent.
Mar 1 16:17:21 clusterserver1 systemd: Starting kubelet: The Kubernetes Node Agent…
Mar 1 16:17:21 clusterserver1 kubelet: I0301 16:17:21.830092 2535 feature_gate.go:226] feature gates: &{{} map[]}
Mar 1 16:17:21 clusterserver1 kubelet: I0301 16:17:21.830201 2535 controller.go:114] kubelet config controller: starting controller
Mar 1 16:17:21 clusterserver1 kubelet: I0301 16:17:21.830209 2535 controller.go:118] kubelet config controller: validating combination of defaults and flags
Mar 1 16:17:21 clusterserver1 kubelet: error: unable to load client CA file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory
Mar 1 16:17:21 clusterserver1 systemd: kubelet.service: main process exited, code=exited, status=1/FAILURE
Mar 1 16:17:21 clusterserver1 systemd: Unit kubelet.service entered failed state.
Mar 1 16:17:21 clusterserver1 systemd: kubelet.service failed.

So the issue here is "error: unable to load client CA file /etc/kubernetes/pki/ca.crt", which appears because kubeadm init has not been run yet, so the cluster certificates do not exist.

If you are seeing this on a worker node and kubeadm has already been set up on the master, you can try copying the required files from the master.
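A minimal sketch of copying the cluster CA from an already-initialized master to a worker; the hostname "master" is a placeholder and root SSH access between the nodes is assumed:

mkdir -p /etc/kubernetes/pki
scp root@master:/etc/kubernetes/pki/ca.crt /etc/kubernetes/pki/ca.crt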

If this is your master node, start the master node setup as follows.

If Kubernetes was installed or attempted on this machine at any earlier point, you first need to reset kubeadm. Run this command on the master node:

[root@clusterserver1 ~]# kubeadm reset
[preflight] Running pre-flight checks.
[reset] Stopping the kubelet service.
[reset] Unmounting mounted directories in “/var/lib/kubelet”
[reset] Removing kubernetes-managed containers.
[reset] Docker doesn’t seem to be running. Skipping the removal of running Kubernetes containers.
[reset] No etcd manifest found in “/etc/kubernetes/manifests/etcd.yaml”. Assuming external etcd.
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[root@clusterserver1 ~]#

[root@clusterserver1 ~]# kubeadm init
[init] Using Kubernetes version: v1.9.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 3.10.0-693.17.1.el7.x86_64
CONFIG_NAMESPACES: enabled
CONFIG_NET_NS: enabled
CONFIG_PID_NS: enabled
CONFIG_IPC_NS: enabled
CONFIG_UTS_NS: enabled
CONFIG_CGROUPS: enabled
CONFIG_CGROUP_CPUACCT: enabled
CONFIG_CGROUP_DEVICE: enabled
CONFIG_CGROUP_FREEZER: enabled
CONFIG_CGROUP_SCHED: enabled
CONFIG_CPUSETS: enabled
CONFIG_MEMCG: enabled
CONFIG_INET: enabled
CONFIG_EXT4_FS: enabled (as module)
CONFIG_PROC_FS: enabled
CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled (as module)
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled (as module)
CONFIG_OVERLAY_FS: enabled (as module)
CONFIG_AUFS_FS: not set – Required for aufs.
CONFIG_BLK_DEV_DM: enabled (as module)
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[WARNING FileExisting-crictl]: crictl not found in system path
[WARNING Service-Docker]: docker service is not enabled, please run ‘systemctl enable docker.service’
[preflight] Some fatal errors occurred:
[ERROR SystemVerification]: failed to get docker info: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
[ERROR Swap]: running with swap on is not supported. Please disable swap
[ERROR Service-Docker]: docker service is not active, please run ‘systemctl start docker.service’
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

Disable swap

swapoff -a
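Note that swapoff -a only turns swap off until the next reboot. To keep it disabled permanently, the swap entry in /etc/fstab can also be commented out (an optional extra step, not part of the original instructions):

sed -i '/ swap / s/^/#/' /etc/fstab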

Enable and start the Docker service.
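On a systemd host this corresponds to, for example:

systemctl enable docker.service
systemctl start docker.service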

[root@clusterserver1 ~]# kubeadm init
[init] Using Kubernetes version: v1.9.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [clusterserver1.rmohan.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.20]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in “/etc/kubernetes/pki”
[kubeconfig] Wrote KubeConfig file to disk: “admin.conf”
[kubeconfig] Wrote KubeConfig file to disk: “kubelet.conf”
[kubeconfig] Wrote KubeConfig file to disk: “controller-manager.conf”
[kubeconfig] Wrote KubeConfig file to disk: “scheduler.conf”
[controlplane] Wrote Static Pod manifest for component kube-apiserver to “/etc/kubernetes/manifests/kube-apiserver.yaml”
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to “/etc/kubernetes/manifests/kube-controller-manager.yaml”
[controlplane] Wrote Static Pod manifest for component kube-scheduler to “/etc/kubernetes/manifests/kube-scheduler.yaml”
[etcd] Wrote Static Pod manifest for a local etcd instance to “/etc/kubernetes/manifests/etcd.yaml”
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory “/etc/kubernetes/manifests”.
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 85.002972 seconds
[uploadconfig] Storing the configuration used in ConfigMap “kubeadm-config” in the “kube-system” Namespace
[markmaster] Will mark node clusterserver1.rmohan.com as master by adding a label and a taint
[markmaster] Master clusterserver1.rmohan.com tainted and labelled with key/value: node-role.kubernetes.io/master=””
[bootstraptoken] Using token: 8c45ed.70033b8135e5439a
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the “cluster-info” ConfigMap in the “kube-public” namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run “kubectl apply -f [podnetwork].yaml” with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join --token 8c45ed.70033b8135e5439a 192.168.1.20:6443 --discovery-token-ca-cert-hash sha256:1df9c0250f28e5a4d137f29307b787954948fc417f2fa9a06a195d65f41b959d

[root@clusterserver1 ~]# mkdir -p $HOME/.kube
[root@clusterserver1 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@clusterserver1 ~]# chown $(id -u):$(id -g) $HOME/.kube/config

[root@clusterserver1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
clusterserver1.rmohan.com NotReady master 2m v1.9.3
[root@clusterserver1 ~]#
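The master reports NotReady until a pod network add-on has been applied, as the kubeadm init output above notes. A sketch using Flannel, one common choice (the manifest URL is an assumption and may have moved since this was written):

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml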

Install the kubeadm and docker packages on all the worker nodes

[root@clusterserver2 ~]# yum install kubeadm docker -y
[root@clusterserver3 ~]# yum install kubeadm docker -y
[root@clusterserver4 ~]# yum install kubeadm docker -y

Start and enable docker service

[root@clusterserver2 ~]# systemctl restart docker && systemctl enable docker
[root@clusterserver3 ~]# systemctl restart docker && systemctl enable docker
[root@clusterserver4 ~]# systemctl restart docker && systemctl enable docker

Step 4: Now join the worker nodes to the master node

To join worker nodes to the master node, a token is required. When the Kubernetes master is initialized, the join command and token appear in its output. Copy that command and run it on each worker node.

[root@worker-node1 ~]# kubeadm join --token 8c45ed.70033b8135e5439a 192.168.1.20:6443 --discovery-token-ca-cert-hash sha256:1df9c0250f28e5a4d137f29307b787954948fc417f2fa9a06a195d65f41b959d
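Once the join command has completed on each worker, you can confirm from the master that the nodes have registered (node names and readiness will vary until the pod network is up):

[root@clusterserver1 ~]# kubectl get nodes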

yum install epel-release -y

yum install ansible -y

How to Setup a WebDAV Server Using Apache on CentOS 7


WebDAV stands for "Web-based Distributed Authoring and Versioning". It's an extension of the HTTP protocol that allows users to manage and share files stored on a WebDAV-enabled web server.

This tutorial will show you how to setup a WebDAV server using Apache on a Vultr CentOS 7 server instance.

Prerequisites
A Vultr CentOS 7 server instance.
A non-root sudo user. You can learn more about how to create a sudo user in this Vultr tutorial.
Step one: Update the system
sudo yum install epel-release
sudo yum update -y
sudo shutdown -r now
After the reboot, use the same sudo user to log in.

Step two: Install Apache
Install Apache using YUM:

sudo yum install httpd
Disable Apache's default welcome page:

sudo sed -i 's/^/#&/g' /etc/httpd/conf.d/welcome.conf
Prevent the Apache web server from displaying files within the web directory:

sudo sed -i "s/Options Indexes FollowSymLinks/Options FollowSymLinks/" /etc/httpd/conf/httpd.conf
Start the Apache web server:

sudo systemctl start httpd.service
sudo systemctl enable httpd.service
Step three: Setup WebDAV
For Apache, three WebDAV-related modules are loaded by default when the Apache web server starts. You can confirm this with the following command:

sudo httpd -M | grep dav
You should be presented with:

dav_module (shared)
dav_fs_module (shared)
dav_lock_module (shared)
Next, create a dedicated directory for WebDAV:

sudo mkdir /var/www/html/webdav
sudo chown -R apache:apache /var/www/html
sudo chmod -R 755 /var/www/html
For security purposes, you need to create a user account, for example "user001", to access the WebDAV server, and then enter your desired password. Later, you will use this account to log in to your WebDAV server.

sudo htpasswd -c /etc/httpd/.htpasswd user001
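The -c flag creates the password file; to add more users later, run the same command without -c (a hypothetical second user is shown here):

sudo htpasswd /etc/httpd/.htpasswd user002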
Modify the owner and permissions in order to enhance security:

sudo chown root:apache /etc/httpd/.htpasswd
sudo chmod 640 /etc/httpd/.htpasswd
Step four: Create a virtual host for WebDAV
sudo vi /etc/httpd/conf.d/webdav.conf
Populate the file with:

DavLockDB /var/www/html/DavLock
<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/html/webdav/
    ErrorLog /var/log/httpd/error.log
    CustomLog /var/log/httpd/access.log combined
    Alias /webdav /var/www/html/webdav
    <Directory /var/www/html/webdav>
        DAV On
        AuthType Basic
        AuthName "webdav"
        AuthUserFile /etc/httpd/.htpasswd
        Require valid-user
    </Directory>
</VirtualHost>
Save and quit:

:wq!
Restart Apache to put your changes into effect:

sudo systemctl restart httpd.service
Step five: Modify firewall rules
sudo firewall-cmd --zone=public --permanent --add-service=http
sudo firewall-cmd --reload
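Optionally, verify that the http service is now allowed through the firewall:

sudo firewall-cmd --zone=public --list-services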
Step six: Test the functionality of the WebDAV server from a local machine
In order to take advantage of WebDAV, you need to use a qualified client. For example, you can install a program called cadaver on a CentOS 7 desktop:

sudo yum install cadaver
Having cadaver installed, use the following command to access the WebDAV server:

cadaver http://<your-server-ip>/webdav/
Use the username “user001” and the password you setup earlier to log in.

In the cadaver shell, you can upload and organize files as you wish. Here are some examples.

To upload a local file “/home/user/abc.txt” to the WebDAV server:

dav:/webdav/> put /home/user/abc.txt
To create a directory “dir1” on the WebDAV server:

dav:/webdav/> mkdir dir1
To quit the cadaver shell:

dav:/webdav/> exit
If you want to learn more about cadaver, you can look up the cadaver manual in the Bash shell:

man cadaver
or

cadaver -h

Once Apache is installed, you can disable its default welcome page.

[root@linuxhelp ~]# sed -i 's/^/#&/g' /etc/httpd/conf.d/welcome.conf

Also, prevent the Apache web server from displaying files within the web directory.

[root@linuxhelp ~]# sed -i "s/Options Indexes FollowSymLinks/Options FollowSymLinks/" /etc/httpd/conf/httpd.conf

After that, start and enable the Apache web server.

[root@linuxhelp ~]# systemctl start httpd.service
[root@linuxhelp ~]# systemctl enable httpd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service.
Setup WebDAV

For Apache, three WebDAV-related modules are loaded by default when the Apache web server starts.

[root@linuxhelp ~]# httpd -M | grep dav
dav_module (shared)
dav_fs_module (shared)
dav_lock_module (shared)

Next, create a dedicated directory for WebDAV:

[root@linuxhelp ~]# mkdir /var/www/html/webdav
[root@linuxhelp ~]# chown -R apache:apache /var/www/html
[root@linuxhelp ~]# chmod -R 755 /var/www/html

For security purposes, you need to create a user account.

[root@linuxhelp ~]# htpasswd -c /etc/httpd/.htpasswd user1
New password:
Re-type new password:
Adding password for user user1

And also, you need to modify the owner and permissions in order to enhance security

[root@linuxhelp ~]# chown root:apache /etc/httpd/.htpasswd
[root@linuxhelp ~]# chmod 640 /etc/httpd/.htpasswd

Once it is done, you need to create a VirtualHost for WebDAV.

[root@linuxhelp ~]# vi /etc/httpd/conf.d/webdav.conf
DavLockDB /var/www/html/DavLock
<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/html/webdav/
    ErrorLog /var/log/httpd/error.log
    CustomLog /var/log/httpd/access.log combined
    Alias /webdav /var/www/html/webdav
    <Directory /var/www/html/webdav>
        DAV On
        AuthType Basic
        AuthName "webdav"
        AuthUserFile /etc/httpd/.htpasswd
        Require valid-user
    </Directory>
</VirtualHost>

Once the VirtualHost is configured, you need to restart Apache to put your changes into effect.

[root@linuxhelp ~]# systemctl restart httpd.service

Test the functionality of the WebDAV server from a local machine. In order to take advantage of WebDAV, you need to use a qualified client. For example, you can install a program called cadaver on a CentOS 7 desktop

[root@linuxhelp ~]# yum install cadaver
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: mirror.dhakacom.com
* epel: mirror2.totbb.net
* extras: mirrors.viethosting.com
* updates: centos-hn.viettelidc.com.vn
Resolving Dependencies
–> Running transaction check
—> Package cadaver.x86_64 0:0.23.3-9.el7 will be installed
–> Finished Dependency Resolution

.
.
Running transaction
Installing : cadaver-0.23.3-9.el7.x86_64 1/1
Verifying : cadaver-0.23.3-9.el7.x86_64 1/1

Installed:
cadaver.x86_64 0:0.23.3-9.el7

Complete!

Having cadaver installed, use the following command to access the WebDAV server.

[root@linuxhelp ~]# cadaver http://192.168.7.234/webdav/
Authentication required for webdav on server `192.168.7.234′:
Username: user1
Password:
dav:/webdav/>

In the cadaver shell, you can upload and organize files as you wish. Here are some examples. To upload a local file

dav:/webdav/> put /root/Desktop/linuxhelp.txt
Uploading /root/Desktop/linuxhelp.txt to `/webdav/linuxhelp.txt’: succeeded.

To create a directory “dir1” on the WebDAV server

dav:/webdav/> mkdir dir

To quit the cadaver shell

dav:/webdav/> exit
Connection to `192.168.7.234′ closed.
If you want to learn more about cadaver, you can look up the cadaver manual in the Bash shell. With this, the tutorial on setting up a WebDAV Server Using Apache on CentOS 7 comes to an end.