
MongoDB Replica Set Configuration

Master-Slave replication: start one mongod with the --master option, start the other with the --slave and --source options, and the two will synchronize.

Recent versions of MongoDB no longer recommend this scheme.

Replica Sets: the replica set, introduced in MongoDB 1.6, is a stronger replacement for the earlier replication features. It adds automatic failover and automatic recovery of member nodes, keeps the data identical across all members, and greatly reduces maintenance cost. Auto-sharding explicitly does not support master-slave replication pairs; replica sets are recommended instead, and replica set failover is completely automatic.

A replica set is structured much like a cluster: if one node fails, the other nodes immediately take over the workload without any service interruption.
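The automatic failover just described depends on a majority of voting members staying reachable: a replica set can elect a new primary only while more than half of its members are up, which is why three-node sets are the usual minimum. A minimal illustration of the majority rule (plain Python, not MongoDB code):

```python
def can_elect_primary(total_members: int, reachable_members: int) -> bool:
    """A replica set can elect a primary only if a strict majority
    of its voting members is reachable."""
    return reachable_members > total_members // 2

# With 3 members, losing one node still leaves a majority ...
print(can_elect_primary(3, 2))   # True
# ... but losing two does not, so the set becomes read-only.
print(can_elect_primary(3, 1))   # False
# An even-sized set tolerates no more failures than the next odd size down.
print(can_elect_primary(4, 2))   # False
```

This is also why an even number of data nodes is usually paired with an arbiter: it restores an odd number of votes without storing data.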

Installing MongoDB

[root@node1 ~]# vim /etc/yum.repos.d/Mongodb.repo

[mongodb-org-3.4]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.4/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.4.asc

[root@node1 ~]# yum install -y mongodb-org

[root@node1 ~]# service mongod start
Starting mongod: [ OK ]

[root@node1 ~]# ps aux|grep mong
mongod 1361 5.7 14.8 351180 35104 ? Sl 01:26 0:01 /usr/bin/mongod -f /etc/mongod.conf

[root@node1 ~]# mkdir -p /mongodb/data
[root@node1 ~]# chown -R mongod:mongod /mongodb/
[root@node1 ~]# ll /mongodb/
total 4
drwxr-xr-x 2 mongod mongod 4096 May 18 02:04 data

[root@node1 ~]# grep -v "^#" /etc/mongod.conf | grep -v "^$"
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log
storage:
  dbPath: /mongodb/data
  journal:
    enabled: true
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongodb/mongod.pid  # location of pidfile
net:
  port: 27017
  bindIp: 0.0.0.0  # listen on all interfaces

[root@node1 ~]# service mongod start
Starting mongod: [ OK ]

Repeat the installation and preparation steps above on node2 and node3.

Replica set startup options:

--oplogSize
--dbpath
--logpath
--port
--replSet
--replSet test/
--maxConns
--fork
--logappend
--keyFile

The following applies to MongoDB v3.4.4.

[root@node1 ~]# vim /etc/mongod.conf
replication:
  oplogSizeMB: 1024
  replSetName: rs0

Keyfile

openssl rand -base64 756 > <path-to-keyfile>
chmod 400 <path-to-keyfile>

Configuration File
If using a configuration file, set the security.keyFile option to the keyfile's path, and the replication.replSetName option to the replica set name:
security:
  keyFile: <path-to-keyfile>
replication:
  replSetName: <replica-set-name>

Command Line
If using command-line options, start mongod with the --keyFile and --replSet parameters:

mongod --keyFile <path-to-keyfile> --replSet <replica-set-name>

Configuring the replica set:

[root@node1 ~]# openssl rand -base64 756 > /mongodb/mongokey
[root@node1 ~]# cat /mongodb/mongokey
gxpcgjyFj2qE8b9TB/0XbdRVYH9VDb55NY03AHwxCFU58MUjJMeez844i1gaUo/t
…..
…..

[root@node1 ~]# chmod 400 /mongodb/mongokey
[root@node1 ~]# chown mongod:mongod /mongodb/mongokey
[root@node1 ~]# ll /mongodb/
total 8
drwxr-xr-x 4 mongod mongod 4096 May 19 18:39 data
-r——– 1 mongod mongod 1024 May 19 18:29 mongokey
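The keyfile steps above generalize to any path; this sketch repeats them against a throwaway temporary directory and verifies the result (the paths here are placeholders for the demo, not the /mongodb/mongokey used in this article):

```shell
#!/bin/sh
# Generate a keyfile for replica set internal authentication and
# lock down its permissions, using a throwaway directory.
KEYDIR=$(mktemp -d)
KEYFILE="$KEYDIR/mongokey"

openssl rand -base64 756 > "$KEYFILE"   # 756 random bytes, base64-encoded
chmod 400 "$KEYFILE"                    # mongod rejects group/world-readable keyfiles

# Show the resulting permissions and size.
ls -l "$KEYFILE"
```

Every member of the set must use an identical copy of this file, owned by the mongod user, which is why the article rsyncs it to the other nodes below.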

[root@node1 ~]# vim /etc/mongod.conf
security:
  keyFile: /mongodb/mongokey

#operationProfiling:

replication:
  oplogSizeMB: 1024
  replSetName: rs0

[root@node1 ~]# service mongod restart
Stopping mongod: [ OK ]
Starting mongod: [ OK ]

[root@node1 ~]# iptables -I INPUT 4 -m state --state NEW -p tcp --dport 27017 -j ACCEPT

Distribute /etc/hosts, the keyfile, and the configuration file to the other nodes:
[root@node1 ~]# rsync -avH --progress -e 'ssh -p 22' /etc/hosts root@node2.rmohan.com:/mongodb/
[root@node1 ~]# rsync -avH --progress -e 'ssh -p 22' /etc/hosts root@node3.rmohan.com:/mongodb/

[root@node1 ~]# rsync -avH --progress -e 'ssh -p 22' /mongodb/mongokey root@node2.rmohan.com:/mongodb/
[root@node1 ~]# rsync -avH --progress -e 'ssh -p 22' /mongodb/mongokey root@node3.rmohan.com:/mongodb/

[root@node1 ~]# rsync -avH --progress -e 'ssh -p 22' /etc/mongod.conf root@node2.rmohan.com:/etc/
[root@node1 ~]# rsync -avH --progress -e 'ssh -p 22' /etc/mongod.conf root@node3.rmohan.com:/etc/

Note: rsync and openssh-clients must be installed on all nodes.

[root@node1 ~]# mongo
> help
db.help() help on db methods
db.mycoll.help() help on collection methods
sh.help() sharding helpers
rs.help() replica set helpers
…..

> rs.help()
rs.status() { replSetGetStatus : 1 } checks repl set status
rs.initiate() { replSetInitiate : null } initiates set with default settings
rs.initiate(cfg) { replSetInitiate : cfg } initiates set with configuration cfg
rs.conf() get the current configuration object from local.system.replset
…..

> rs.status()
{
"info" : "run rs.initiate(...) if not yet done for the set",
"ok" : 0,
"errmsg" : "no replset config has been received",
"code" : 94,
"codeName" : "NotYetInitialized"
}

> rs.initiate()
{
"info2" : "no configuration specified. Using a default configuration for the set",
"me" : "node1.rmohan.com:27017",
"ok" : 1
}

rs0:OTHER>
rs0:PRIMARY> rs.status()
{
“set” : “rs0”,
“date” : ISODate(“2017-05-18T17:00:49.868Z”),
“myState” : 1,
“term” : NumberLong(1),
“heartbeatIntervalMillis” : NumberLong(2000),
“optimes” : {
“lastCommittedOpTime” : {
“ts” : Timestamp(1495126845, 1),
“t” : NumberLong(1)
},
“appliedOpTime” : {
“ts” : Timestamp(1495126845, 1),
“t” : NumberLong(1)
},
“durableOpTime” : {
“ts” : Timestamp(1495126845, 1),
“t” : NumberLong(1)
}
},
“members” : [
{
“_id” : 0,
“name” : “node1.rmohan.com:27017”,
“health” : 1,
“state” : 1,
“stateStr” : “PRIMARY”,
“uptime” : 1239,
“optime” : {
“ts” : Timestamp(1495126845, 1),
“t” : NumberLong(1)
},
“optimeDate” : ISODate(“2017-05-18T17:00:45Z”),
“infoMessage” : “could not find member to sync from”,
“electionTime” : Timestamp(1495126824, 2),
“electionDate” : ISODate(“2017-05-18T17:00:24Z”),
“configVersion” : 1,
“self” : true
}
],
“ok” : 1
}

rs0:PRIMARY> rs.add(“node2.rmohan.com”)
{ “ok” : 1 }
rs0:PRIMARY> rs.add(“node3.rmohan.com”)
{ “ok” : 1 }
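The rs.initiate() plus rs.add() sequence above can also be collapsed into a single step by passing an explicit configuration document to rs.initiate(). This is a sketch using this article's hostnames; it is a configuration document (data, not a script), executed once on one member only, and its _id must match the replSetName in mongod.conf:

```js
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "node1.rmohan.com:27017" },
    { _id: 1, host: "node2.rmohan.com:27017" },
    { _id: 2, host: "node3.rmohan.com:27017" }
  ]
})
```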
rs0:PRIMARY> rs.status()
{
“set” : “rs0”,
“date” : ISODate(“2017-05-18T17:08:47.724Z”),
“myState” : 1,
“term” : NumberLong(1),
“heartbeatIntervalMillis” : NumberLong(2000),
“optimes” : {
“lastCommittedOpTime” : {
“ts” : Timestamp(1495127325, 1),
“t” : NumberLong(1)
},
“appliedOpTime” : {
“ts” : Timestamp(1495127325, 1),
“t” : NumberLong(1)
},
“durableOpTime” : {
“ts” : Timestamp(1495127325, 1),
“t” : NumberLong(1)
}
},
“members” : [
{
“_id” : 0,
“name” : “node1.rmohan.com:27017”,
"health" : 1, // 1 = up, 0 = down
"state" : 1, // 1 = PRIMARY, 2 = SECONDARY
"stateStr" : "PRIMARY", // human-readable state
“uptime” : 1717,
“optime” : {
“ts” : Timestamp(1495127325, 1),
“t” : NumberLong(1)
},
“optimeDate” : ISODate(“2017-05-18T17:08:45Z”),
“electionTime” : Timestamp(1495126824, 2),
“electionDate” : ISODate(“2017-05-18T17:00:24Z”),
“configVersion” : 3,
“self” : true
},
{
“_id” : 1,
“name” : “node2.rmohan.com:27017”,
“health” : 1,
“state” : 2,
“stateStr” : “SECONDARY”,
“uptime” : 64,
“optime” : {
“ts” : Timestamp(1495127325, 1),
“t” : NumberLong(1)
},
“optimeDurable” : {
“ts” : Timestamp(1495127325, 1),
“t” : NumberLong(1)
},
“optimeDate” : ISODate(“2017-05-18T17:08:45Z”),
“optimeDurableDate” : ISODate(“2017-05-18T17:08:45Z”),
“lastHeartbeat” : ISODate(“2017-05-18T17:08:46.106Z”),
“lastHeartbeatRecv” : ISODate(“2017-05-18T17:08:47.141Z”),
“pingMs” : NumberLong(0),
“syncingTo” : “node1.rmohan.com:27017”,
“configVersion” : 3
},
{
“_id” : 2,
“name” : “node3.rmohan.com:27017”,
“health” : 1,
“state” : 2,
“stateStr” : “SECONDARY”,
“uptime” : 55,
“optime” : {
“ts” : Timestamp(1495127325, 1),
“t” : NumberLong(1)
},
“optimeDurable” : {
“ts” : Timestamp(1495127325, 1),
“t” : NumberLong(1)
},
“optimeDate” : ISODate(“2017-05-18T17:08:45Z”),
“optimeDurableDate” : ISODate(“2017-05-18T17:08:45Z”),
“lastHeartbeat” : ISODate(“2017-05-18T17:08:46.195Z”),
“lastHeartbeatRecv” : ISODate(“2017-05-18T17:08:46.924Z”),
“pingMs” : NumberLong(0),
“syncingTo” : “node2.rmohan.com:27017”,
“configVersion” : 3
}
],
“ok” : 1
}

rs0:PRIMARY> db.isMaster()
{
“hosts” : [
“node1.rmohan.com:27017”,
“node2.rmohan.com:27017”,
“node3.rmohan.com:27017”
],
“setName” : “rs0”,
“setVersion” : 3,
“ismaster” : true,
“secondary” : false,
“primary” : “node1.rmohan.com:27017”,
“me” : “node1.rmohan.com:27017”,
“electionId” : ObjectId(“7fffffff0000000000000001”),
“lastWrite” : {
“opTime” : {
“ts” : Timestamp(1495127705, 1),
“t” : NumberLong(1)
},
“lastWriteDate” : ISODate(“2017-05-18T17:15:05Z”)
},
“maxBsonObjectSize” : 16777216,
“maxMessageSizeBytes” : 48000000,
“maxWriteBatchSize” : 1000,
“localTime” : ISODate(“2017-05-18T17:15:11.146Z”),
“maxWireVersion” : 5,
“minWireVersion” : 0,
“readOnly” : false,
“ok” : 1
}

rs0:PRIMARY> use testdb
rs0:PRIMARY> show collections
testcoll
rs0:PRIMARY> db.testcoll.find()
{ "_id" : ObjectId("591dd9f965cc255a5373aefa"), "name" : "tom", "age" : 25 }

node2:
rs0:SECONDARY> rs.slaveOk()
rs0:SECONDARY> show dbs
admin 0.000GB
local 0.000GB
testdb 0.000GB
rs0:SECONDARY> use testdb
switched to db testdb
rs0:SECONDARY> show collections
testcoll
rs0:SECONDARY> db.testcoll.find()
{ "_id" : ObjectId("591dd9f965cc255a5373aefa"), "name" : "tom", "age" : 25 }
rs0:SECONDARY>

node3:

rs0:SECONDARY> rs.slaveOk()
rs0:SECONDARY> show dbs
admin 0.000GB
local 0.000GB
testdb 0.000GB
rs0:SECONDARY> use testdb
switched to db testdb
rs0:SECONDARY> show collections
testcoll
rs0:SECONDARY> db.testcoll.find()
{ "_id" : ObjectId("591dd9f965cc255a5373aefa"), "name" : "tom", "age" : 25 }
rs0:SECONDARY>

rs0:PRIMARY> use local
switched to db local
rs0:PRIMARY> show collections
me
oplog.rs
replset.election
replset.minvalid
startup_log
system.replset
rs0:PRIMARY> db.oplog.rs.find()
{ "ts" : Timestamp(1495126824, 1), "h" : NumberLong("3056083863196084673"), "v" : 2, "op" : "n", "ns" : "", "o" : { "msg" : "initiating set" } }
{ "ts" : Timestamp(1495126825, 1), "t" : NumberLong(1), "h" : NumberLong("7195178065440751511"), "v" : 2, "op" : "n", "ns" : "", "o" : { "msg" : "new primary" } }
{ "ts" : Timestamp(1495126835, 1), "t" : NumberLong(1), "h" : NumberLong("5723995478292318850"), "v" : 2, "op" : "n", "ns" : "", "o" : { "msg" : "periodic noop" } }
{ "ts" : Timestamp(1495126845, 1), "t" : NumberLong(1), "h" : NumberLong("-3772304067699003381"), "v" : 2, "op" : "n", "ns" : "", "o"

rs0:PRIMARY> db.printReplicationInfo()
configured oplog size: 1024MB
log length start to end: 2541secs (0.71hrs)
oplog first event time: Fri May 19 2017 01:00:24 GMT+0800 (CST)
oplog last event time: Fri May 19 2017 01:42:45 GMT+0800 (CST)
now: Fri May 19 2017 01:42:48 GMT+0800 (CST)
rs0:PRIMARY>
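The "log length start to end" figure above is simply the difference between the first and last oplog event times; a quick check of the arithmetic using the two timestamps from the output:

```python
# Reproduce db.printReplicationInfo()'s window calculation from the
# two oplog event times shown above (same clock, same day).
from datetime import datetime

first_event = datetime(2017, 5, 19, 1, 0, 24)
last_event = datetime(2017, 5, 19, 1, 42, 45)

window = (last_event - first_event).total_seconds()
print(f"log length start to end: {window:.0f}secs ({window / 3600:.2f}hrs)")
# prints: log length start to end: 2541secs (0.71hrs)
```

This window is how long a secondary may be offline and still catch up from the oplog; once it falls further behind than the window, it needs a full resync.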

Commands for inspecting replication:
db.oplog.rs.find(): view the oplog contents
db.printReplicationInfo(): view the oplog size and time window
db.printSlaveReplicationInfo(): view each secondary's sync status

rs0:PRIMARY> db.printSlaveReplicationInfo()
source: node2.rmohan.com:27017
syncedTo: Fri May 19 2017 01:47:15 GMT+0800 (CST)
0 secs (0 hrs) behind the primary
source: node3.rmohan.com:27017
syncedTo: Fri May 19 2017 01:47:15 GMT+0800 (CST)
0 secs (0 hrs) behind the primary

The replica set configuration itself is stored in local.system.replset and can be viewed with db.system.replset.find():

rs0:PRIMARY> db.system.replset.find()
{ “_id” : “rs0”, “version” : 3, “protocolVersion” : NumberLong(1), “members” : [ { “_id” : 0, “host” : “node1.rmohan.com:27017”, “arbiterOnly” : false, “buildIndexes” : true, “hidden” : false, “priority” : 1, “tags” : { }, “slaveDelay” : NumberLong(0), “votes” : 1 }, { “_id” : 1, “host” : “node2.rmohan.com:27017”, “arbiterOnly” : false, “buildIndexes” : true, “hidden” : false, “priority” : 1, “tags” : { }, “slaveDelay” : NumberLong(0), “votes” : 1 }, { “_id” : 2, “host” : “node3.rmohan.com:27017”, “arbiterOnly” : false, “buildIndexes” : true, “hidden” : false, “priority” : 1, “tags” : { }, “slaveDelay” : NumberLong(0), “votes” : 1 } ], “settings” : { “chainingAllowed” : true, “heartbeatIntervalMillis” : 2000, “heartbeatTimeoutSecs” : 10, “electionTimeoutMillis” : 10000, “catchUpTimeoutMillis” : 2000, “getLastErrorModes” : { }, “getLastErrorDefaults” : { “w” : 1, “wtimeout” : 0 }, “replicaSetId” : ObjectId(“591dd3284fc6957e660dc933”) } }

rs0:PRIMARY> db.system.replset.find().forEach(printjson)

1. Freeze node3 so that it cannot seek election for 30 seconds:

rs0:SECONDARY> rs.freeze(30)
{ “ok” : 1 }

2. Step down node1, the current PRIMARY, for 30 seconds:
rs0:PRIMARY> rs.stepDown(30)
2017-05-19T02:09:27.945+0800 E QUERY [thread1] Error: error doing query: failed: network error while attempting to run command 'replSetStepDown' on host '127.0.0.1:27017' :
DB.prototype.runCommand@src/mongo/shell/db.js:132:1
DB.prototype.adminCommand@src/mongo/shell/db.js:150:16
rs.stepDown@src/mongo/shell/utils.js:1261:12
@(shell):1:1
2017-05-19T02:09:27.947+0800 I NETWORK [thread1] trying reconnect to 127.0.0.1:27017 (127.0.0.1) failed
2017-05-19T02:09:27.949+0800 I NETWORK [thread1] reconnect 127.0.0.1:27017 (127.0.0.1) ok

Within 30 seconds a new primary is elected; rs.status() now shows node2 as PRIMARY:
rs0:SECONDARY> rs.status()
{
“set” : “rs0”,
“date” : ISODate(“2017-05-18T18:12:09.732Z”),
“myState” : 2,
“term” : NumberLong(2),
“syncingTo” : “node2.rmohan.com:27017”,
“heartbeatIntervalMillis” : NumberLong(2000),
“optimes” : {
“lastCommittedOpTime” : {
“ts” : Timestamp(1495131128, 1),
“t” : NumberLong(2)
},
“appliedOpTime” : {
“ts” : Timestamp(1495131128, 1),
“t” : NumberLong(2)
},
“durableOpTime” : {
“ts” : Timestamp(1495131128, 1),
“t” : NumberLong(2)
}
},
“members” : [
{
“_id” : 0,
“name” : “node1.rmohan.com:27017”,
“health” : 1,
“state” : 2,
“stateStr” : “SECONDARY”,
“uptime” : 5519,
“optime” : {
“ts” : Timestamp(1495131128, 1),
“t” : NumberLong(2)
},
“optimeDate” : ISODate(“2017-05-18T18:12:08Z”),
“syncingTo” : “node2.rmohan.com:27017”,
“configVersion” : 3,
“self” : true
},
{
“_id” : 1,
“name” : “node2.rmohan.com:27017”,
“health” : 1,
“state” : 1,
“stateStr” : “PRIMARY”,
“uptime” : 3866,
“optime” : {
“ts” : Timestamp(1495131118, 1),
“t” : NumberLong(2)
},
“optimeDurable” : {
“ts” : Timestamp(1495131118, 1),
“t” : NumberLong(2)
},
“optimeDate” : ISODate(“2017-05-18T18:11:58Z”),
“optimeDurableDate” : ISODate(“2017-05-18T18:11:58Z”),
“lastHeartbeat” : ISODate(“2017-05-18T18:12:08.333Z”),
“lastHeartbeatRecv” : ISODate(“2017-05-18T18:12:08.196Z”),
“pingMs” : NumberLong(0),
“electionTime” : Timestamp(1495130977, 1),
“electionDate” : ISODate(“2017-05-18T18:09:37Z”),
“configVersion” : 3
},
{
“_id” : 2,
“name” : “node3.rmohan.com:27017”,
“health” : 1,
“state” : 2,
“stateStr” : “SECONDARY”,
“uptime” : 3857,
“optime” : {
“ts” : Timestamp(1495131118, 1),
“t” : NumberLong(2)
},
“optimeDurable” : {
“ts” : Timestamp(1495131118, 1),
“t” : NumberLong(2)
},
“optimeDate” : ISODate(“2017-05-18T18:11:58Z”),
“optimeDurableDate” : ISODate(“2017-05-18T18:11:58Z”),
“lastHeartbeat” : ISODate(“2017-05-18T18:12:08.486Z”),
“lastHeartbeatRecv” : ISODate(“2017-05-18T18:12:08.116Z”),
“pingMs” : NumberLong(0),
“syncingTo” : “node2.rmohan.com:27017”,
“configVersion” : 3
}
],
“ok” : 1
}
rs0:SECONDARY>

To add a new member, first distribute the hosts file, keyfile, and configuration to node4 and open the firewall port:
[root@node1 ~]# rsync -avH --progress -e 'ssh -p 22' /etc/hosts root@node4.rmohan.com:/etc/

[root@node1 ~]# rsync -avH --progress -e 'ssh -p 22' /mongodb/mongokey root@node4.rmohan.com:/mongodb/

[root@node1 ~]# rsync -avH --progress -e 'ssh -p 22' /etc/mongod.conf root@node4.rmohan.com:/etc/

[root@node4 ~]# iptables -I INPUT 4 -m state --state NEW -p tcp --dport 27017 -j ACCEPT

rs0:PRIMARY> rs.add(“node4.rmohan.com”)
{ “ok” : 1 }

rs0:PRIMARY> rs.status()
{
“set” : “rs0”,
“date” : ISODate(“2017-05-19T12:12:57.697Z”),
“myState” : 1,
“term” : NumberLong(8),
“heartbeatIntervalMillis” : NumberLong(2000),
“optimes” : {
“lastCommittedOpTime” : {
“ts” : Timestamp(1495195971, 1),
“t” : NumberLong(8)
},
“appliedOpTime” : {
“ts” : Timestamp(1495195971, 1),
“t” : NumberLong(8)
},
“durableOpTime” : {
“ts” : Timestamp(1495195971, 1),
“t” : NumberLong(8)
}
},
“members” : [
{
“_id” : 0,
“name” : “node1.rmohan.com:27017”,
“health” : 1,
“state” : 2,
“stateStr” : “SECONDARY”,
“uptime” : 159,
“optime” : {
“ts” : Timestamp(1495195971, 1),
“t” : NumberLong(8)
},
“optimeDurable” : {
“ts” : Timestamp(1495195971, 1),
“t” : NumberLong(8)
},
“optimeDate” : ISODate(“2017-05-19T12:12:51Z”),
“optimeDurableDate” : ISODate(“2017-05-19T12:12:51Z”),
“lastHeartbeat” : ISODate(“2017-05-19T12:12:56.111Z”),
“lastHeartbeatRecv” : ISODate(“2017-05-19T12:12:57.101Z”),
“pingMs” : NumberLong(0),
“syncingTo” : “node3.rmohan.com:27017”,
“configVersion” : 4
},
{
“_id” : 1,
“name” : “node2.rmohan.com:27017”,
“health” : 1,
“state” : 2,
“stateStr” : “SECONDARY”,
“uptime” : 189,
“optime” : {
“ts” : Timestamp(1495195971, 1),
“t” : NumberLong(8)
},
“optimeDurable” : {
“ts” : Timestamp(1495195971, 1),
“t” : NumberLong(8)
},
“optimeDate” : ISODate(“2017-05-19T12:12:51Z”),
“optimeDurableDate” : ISODate(“2017-05-19T12:12:51Z”),
“lastHeartbeat” : ISODate(“2017-05-19T12:12:56.111Z”),
“lastHeartbeatRecv” : ISODate(“2017-05-19T12:12:57.103Z”),
“pingMs” : NumberLong(0),
“syncingTo” : “node3.rmohan.com:27017”,
“configVersion” : 4
},
{
“_id” : 2,
“name” : “node3.rmohan.com:27017”,
“health” : 1,
“state” : 1,
“stateStr” : “PRIMARY”,
“uptime” : 191,
“optime” : {
“ts” : Timestamp(1495195971, 1),
“t” : NumberLong(8)
},
“optimeDate” : ISODate(“2017-05-19T12:12:51Z”),
“electionTime” : Timestamp(1495195800, 1),
“electionDate” : ISODate(“2017-05-19T12:10:00Z”),
“configVersion” : 4,
“self” : true
},
{
“_id” : 3,
“name” : “node4.rmohan.com:27017”,
“health” : 1,
“state” : 2,
“stateStr” : “SECONDARY”,
“uptime” : 71,
“optime” : {
“ts” : Timestamp(1495195971, 1),
“t” : NumberLong(8)
},
“optimeDurable” : {
“ts” : Timestamp(1495195971, 1),
“t” : NumberLong(8)
},
“optimeDate” : ISODate(“2017-05-19T12:12:51Z”),
“optimeDurableDate” : ISODate(“2017-05-19T12:12:51Z”),
“lastHeartbeat” : ISODate(“2017-05-19T12:12:56.122Z”),
“lastHeartbeatRecv” : ISODate(“2017-05-19T12:12:56.821Z”),
“pingMs” : NumberLong(1),
“syncingTo” : “node3.rmohan.com:27017”,
“configVersion” : 4
}
],
“ok” : 1
}

rs0:SECONDARY> rs.slaveOk()
rs0:SECONDARY> show dbs
admin 0.000GB
local 0.000GB
testdb 0.000GB
rs0:SECONDARY> use testdb
switched to db testdb
rs0:SECONDARY> show collections
testcoll
rs0:SECONDARY> db.testcoll.find()
{ "_id" : ObjectId("591dd9f965cc255a5373aefa"), "name" : "tom", "age" : 25 }
rs0:SECONDARY>
rs0:SECONDARY> db.isMaster()
{
“hosts” : [
“node1.rmohan.com:27017”,
“node2.rmohan.com:27017”,
“node3.rmohan.com:27017”,
“node4.rmohan.com:27017”
],
“setName” : “rs0”,
“setVersion” : 4,
“ismaster” : false,
“secondary” : true,
“primary” : “node3.rmohan.com:27017”,
“me” : “node4.rmohan.com:27017”,
“lastWrite” : {
“opTime” : {
“ts” : Timestamp(1495196261, 1),
“t” : NumberLong(8)
},
“lastWriteDate” : ISODate(“2017-05-19T12:17:41Z”)
},
“maxBsonObjectSize” : 16777216,
“maxMessageSizeBytes” : 48000000,
“maxWriteBatchSize” : 1000,
“localTime” : ISODate(“2017-05-19T12:17:44.104Z”),
“maxWireVersion” : 5,
“minWireVersion” : 0,
“readOnly” : false,
“ok” : 1
}
rs0:SECONDARY>

2. Removing a member:

rs0:PRIMARY> rs.remove(“node4.rmohan.com:27017”)
{ “ok” : 1 }

rs0:PRIMARY> db.isMaster()
{
“hosts” : [
“node1.rmohan.com:27017”,
“node2.rmohan.com:27017”,
“node3.rmohan.com:27017”
],
“setName” : “rs0”,
“setVersion” : 5,
“ismaster” : true,
“secondary” : false,
“primary” : “node3.rmohan.com:27017”,
“me” : “node3.rmohan.com:27017”,
“electionId” : ObjectId(“7fffffff0000000000000008”),
“lastWrite” : {
“opTime” : {
“ts” : Timestamp(1495196531, 1),
“t” : NumberLong(8)
},
“lastWriteDate” : ISODate(“2017-05-19T12:22:11Z”)
},
“maxBsonObjectSize” : 16777216,
“maxMessageSizeBytes” : 48000000,
“maxWriteBatchSize” : 1000,
“localTime” : ISODate(“2017-05-19T12:22:19.874Z”),
“maxWireVersion” : 5,
“minWireVersion” : 0,
“readOnly” : false,
“ok” : 1
}
rs0:PRIMARY>

Passwordless SSH Login to a Remote Linux Server

Description

Logging in to a remote server over SSH normally requires typing a password. Key-based login removes this requirement and lays the groundwork for automated batch deployment of hosts later on.

The environment is as follows:

Server: 192.168.1.10/24, CentOS 6.5 x86
Client: 192.168.1.129/24, Ubuntu 16.04 x86

1. The client generates a key pair

Generate key pair:

rmohan@rmohan:~$ ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/home/rmohan/.ssh/id_rsa):
Created directory ‘/home/rmohan/.ssh’.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/rmohan/.ssh/id_rsa.
Your public key has been saved in /home/rmohan/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:eLssyXJLzUCfSN5mu6nqNH9dB/gOyXSvWBwQdNssIYE rmohan@rmohan
The key's randomart image is:
+---[RSA 2048]----+
| o=oo |
| E .o = |
| o oo o |
| + = .o +. |
| = So = + |
| B o+ = o |
| o...=. * o |
| ..+=..+o o |
| .o++== |
+----[SHA256]-----+
View the generated key pair:

rmohan@rmohan:~$ ls .ssh
id_rsa id_rsa.pub

# id_rsa is the private key and must be kept secret; id_rsa.pub is the public key and may be shared.

2. Upload the public key to the server

Use the scp command to:

rmohan@rmohan:~$ scp .ssh/id_rsa.pub root@192.168.1.129:/root
The authenticity of host '192.168.1.129 (192.168.1.129)' can't be established.
RSA key fingerprint is SHA256:0Tpm11wruaQXyvOfEB1maIkEwxmjT2AklWb198Vrln0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.1.129' (RSA) to the list of known hosts.
root@192.168.1.129's password:
id_rsa.pub 100% 393 0.4KB/s 00:00
3. Server-side operation

Append the public key uploaded from the client to ~/.ssh/authorized_keys:

[root@rmohan ~]# cat id_rsa.pub >> .ssh/authorized_keys
[root@rmohan ~]# chmod 600 .ssh/authorized_keys

# authorized_keys must be mode 600, otherwise sshd will ignore it
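The append-and-chmod step can be rehearsed safely against a throwaway directory before touching a real account (the key string and paths below are placeholders for the demo):

```shell
#!/bin/sh
# Simulate installing a public key into authorized_keys, using a
# throwaway directory instead of a real ~/.ssh.
DEMO=$(mktemp -d)
mkdir -p "$DEMO/.ssh"
chmod 700 "$DEMO/.ssh"                   # sshd also requires a private .ssh dir

# A fake public key standing in for the uploaded id_rsa.pub.
echo "ssh-rsa AAAAB3...fake-demo-key rmohan@rmohan" > "$DEMO/id_rsa.pub"

cat "$DEMO/id_rsa.pub" >> "$DEMO/.ssh/authorized_keys"
chmod 600 "$DEMO/.ssh/authorized_keys"   # sshd ignores loosely-permissioned files

ls -l "$DEMO/.ssh/authorized_keys"
```

Appending with >> (rather than overwriting with >) matters: one authorized_keys file can hold one key per line for many clients.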
Edit the SSH daemon configuration file /etc/ssh/sshd_config and find the line:

PubkeyAuthentication no

Change it to:

PubkeyAuthentication yes
4. Test

Log on to the server using the key on the client:

rmohan@rmohan:~$ ssh -i .ssh/id_rsa root@192.168.1.129
Last login: Tue May 9 15:14:01 2017 from 192.168.1.129

[root@rmohan ~]#
5. Notes
SELinux must be disabled (or configured appropriately) on the server, otherwise key-based login will fail.
The scp step also requires an SSH client on the server side. Alternatively, ssh-copy-id root@192.168.1.129 replaces the scp step entirely: it uploads the key and writes authorized_keys in one command, so no manual work in the server's .ssh directory is needed.

MariaDB Galera Cluster Deployment (how to quickly deploy a MariaDB cluster)

MariaDB is a fork of MySQL and is already widely used in open-source projects such as OpenStack. To ensure high availability of services while increasing the load capacity of the system, clustered deployment is essential.

MariaDB Galera Cluster Introduction

MariaDB Galera Cluster is a synchronous multi-master cluster. It supports only the XtraDB/InnoDB storage engines (with experimental MyISAM support; see the wsrep_replicate_myisam system variable).

Main features:

Synchronous replication
True multi-master: all nodes can read and write the database at the same time
Automatic membership control; failed nodes are removed from the cluster automatically
Data is copied to newly joining nodes automatically
True parallel replication, at row level
Clients connect directly to the cluster; the experience is exactly the same as with plain MySQL

Advantages:

Because every node is a master, there is no slave lag
No transactions are lost
Read and write capacity both scale out
Smaller client latencies
Data is synchronized between nodes, whereas Master/Slave replication is asynchronous and the binlogs on different slaves may diverge
How it works:

Galera Cluster replication is implemented by the Galera library; the wsrep API was developed specifically so that MySQL can communicate with the Galera library.

Galera Cluster keeps data consistent through certification-based replication, which works as follows:

When the client issues a commit, before the transaction is committed, all changes made to the database are collected into a write-set, and the write-set is sent to the other nodes.

The write-set undergoes a certification test on every node, and the test result determines whether the node applies the write-set's changes.

If the certification test fails, the node discards the write-set; if it succeeds, the transaction is committed.
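The certification test can be pictured as a conflict check between a transaction's write-set and the write-sets already applied since that transaction began. This toy Python model illustrates the idea only (it is not Galera's actual implementation): a write-set certifies only if it touches no row modified concurrently.

```python
def certify(write_set, concurrently_applied):
    """Toy model of Galera's certification test.

    write_set: set of row keys the committing transaction modified.
    concurrently_applied: row keys modified by write-sets applied
    since this transaction took its snapshot.

    Certification succeeds only if the two sets do not overlap.
    """
    return write_set.isdisjoint(concurrently_applied)

# Transaction A updated rows 1 and 2; nothing conflicting was applied.
print(certify({1, 2}, set()))   # True  -> commit
# Transaction B updated rows 2 and 3, but row 2 changed concurrently.
print(certify({2, 3}, {2}))     # False -> discard write-set, roll back
```

Because every node runs the same deterministic test on the same ordered stream of write-sets, all nodes reach the same commit/abort decision without a central lock manager.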

1. Installation Environment Preparation

Installing a MariaDB cluster requires at least three servers (a two-node setup needs special configuration; see the official documentation).

Here, I list the configuration of the test machine:

Operating system version: CentOS 7

node4: 192.168.1.16  node5: 192.168.1.17  node6: 192.168.1.18

Taking the first entry as an example, node4 is the hostname and 192.168.1.16 its IP. Modify /etc/hosts on all three machines; mine reads:
192.168.1.16 node4
192.168.1.17 node5
192.168.1.18 node6

To ensure the nodes can communicate with each other, disable the firewall (if you need a firewall, see the official site for the rules to add).

Run on all three nodes:
systemctl stop firewalld

Then set SELINUX=disabled in /etc/sysconfig/selinux. The environment preparation is now complete.

2. Install the Cluster Galera MariaDB
[root@node4 ~]# yum install -y mariadb mariadb-galera-server mariadb-galera-common galera rsync

[root@node5 ~]# yum install -y mariadb mariadb-galera-server mariadb-galera-common galera rsync

[root@node6 ~]# yum install -y mariadb mariadb-galera-server mariadb-galera-common galera rsync

3. Initialize MariaDB Galera Cluster

Initialize the database service on one node only:

[root@node4 mariadb]# systemctl start mariadb
[root@node4 mariadb]# mysql_secure_installation

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user. If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none):
OK, successfully used password, moving on…

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n]
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
… Success!

By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them. This is intended only for testing, and to make the installation
go a bit smoother. You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] n
… skipping.

Normally, root should only be allowed to connect from 'localhost'. This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] y
… Success!

By default, MariaDB comes with a database named 'test' that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] n
… skipping.

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] y
… Success!

Cleaning up…

All done! If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!

Stop the service and edit /etc/my.cnf.d/galera.cnf:
[root@node4 mariadb]# systemctl stop mariadb
[root@node4 ~]# vim /etc/my.cnf.d/galera.cnf

[mysqld]
……
wsrep_provider = /usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address = "gcomm://node4,node5,node6"
wsrep_node_name = node4
wsrep_node_address = 192.168.1.16
#wsrep_provider_options="socket.ssl_key=/etc/pki/galera/galera.key;socket.ssl_cert=/etc/pki/galera/galera.crt;"

Tip: if you have no way to use SSL certificates, leave wsrep_provider_options commented out.

Copy this file to node5 and node6, changing wsrep_node_name and wsrep_node_address to each node's own hostname and IP.

4. Start the MariaDB Galera Cluster service
[root@node4 ~]# /usr/libexec/mysqld --wsrep-new-cluster --user=root &

Watch the log to confirm the first node came up:
[root@node4 ~]# tail -f /var/log/mariadb/mariadb.log

150701 19:54:17 [Note] WSREP: wsrep_load(): loading provider library 'none'
150701 19:54:17 [Note] /usr/libexec/mysqld: ready for connections.
Version: '5.5.40-MariaDB-wsrep' socket: '/var/lib/mysql/mysql.sock' port: 3306 MariaDB Server, wsrep_25.11.r4026

Once "ready for connections" appears, the first node started successfully; continue by starting the other nodes:
[root@node5 ~]# systemctl start mariadb
[root@node6 ~]# systemctl start mariadb

You can view /var/log/mariadb/mariadb.log, the log can see the nodes are added to the cluster.

Warning: the --wsrep-new-cluster parameter is only for cluster initialization and may be used on one node only.

5. Check the cluster status

Run SHOW STATUS LIKE 'wsrep%' and focus on a few key variables:

wsrep_connected = ON: the node is connected to the cluster
wsrep_local_index = 1: the node's index within the cluster
wsrep_cluster_size = 3: the number of nodes in the cluster
wsrep_incoming_addresses = 192.168.1.17:3306,192.168.1.16:3306,192.168.1.18:3306: the addresses of the cluster nodes
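When read non-interactively (for example via mysql -e "SHOW STATUS LIKE 'wsrep%'"), these variables come back as tab-separated name/value lines, which makes them easy to check from a monitoring script. A small parsing sketch, using sample lines that mirror the values above:

```python
# Parse the tab-separated output of the mysql command-line client
# into a dict and check the health indicators discussed above.
sample_output = """\
wsrep_connected\tON
wsrep_local_index\t1
wsrep_cluster_size\t3
wsrep_incoming_addresses\t192.168.1.17:3306,192.168.1.16:3306,192.168.1.18:3306
"""

status = dict(line.split("\t", 1) for line in sample_output.splitlines())

assert status["wsrep_connected"] == "ON"        # node is connected
assert int(status["wsrep_cluster_size"]) == 3   # all three nodes joined
print("cluster healthy,", status["wsrep_cluster_size"], "nodes")
```

In a real check the sample_output string would be replaced by the captured command output; the parsing is the same.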

6. Verify data synchronization

Create a database galera_test on node4, then run a query on node5 and node6; if the galera_test database shows up on both, synchronization works and the cluster is operating normally.

[root@node4 ~]# mysql -uroot -proot -e "create database galera_test"

[root@node5 ~]# mysql -uroot -proot -e "show databases"
+--------------------+
| Database |
+--------------------+
| information_schema |
| galera_test |
| mysql |
| performance_schema |
+--------------------+

[root@node6 ~]# mysql -uroot -proot -e "show databases"
+--------------------+
| Database |
+--------------------+
| information_schema |
| galera_test |
| mysql |
| performance_schema |
+--------------------+

At this point, our MariaDB Galera Cluster has been successfully deployed.


CentOS 6.9: Compiling and Installing Nginx 1.4.7

1. Install dependencies and stop the firewall

[root@rmohan.com ~]# yum install -y openssl
[root@rmohan.com ~]# service iptables stop
iptables: Setting chains to policy ACCEPT: filter [ OK ]
iptables: Flushing firewall rules: [ OK ]
iptables: Unloading modules:
2. Download nginx source package to a local

[root@rmohan.com ~]# ll nginx-1.4.7.tar.gz
-rw-r--r--. 1 root root 769153 Jun 1 2017 nginx-1.4.7.tar.gz
3. Extract nginx source package

[root@rmohan.com ~]# tar -xf nginx-1.4.7.tar.gz
4. Go to the extracted directory

[root@rmohan.com ~]# cd nginx-1.4.7
5. Start compilation to generate the makefile

[root@rmohan.com nginx-1.4.7]# ./configure --prefix=/usr \
--sbin-path=/usr/sbin/nginx \
--conf-path=/etc/nginx/nginx.conf \
--error-log-path=/var/log/nginx/error.log \
--http-log-path=/var/log/nginx/access.log \
--pid-path=/var/run/nginx/nginx.pid \
--lock-path=/var/lock/nginx.lock \
--user=nginx --group=nginx \
--with-http_flv_module \
--with-http_stub_status_module \
--with-http_gzip_static_module \
--http-client-body-temp-path=/var/tmp/nginx/client \
--http-proxy-temp-path=/var/tmp/nginx/proxy \
--http-fastcgi-temp-path=/var/tmp/nginx/fcgi/ \
--http-uwsgi-temp-path=/var/tmp/nginx/uwsgi \
--http-scgi-temp-path=/var/tmp/nginx/scgi \
--with-pcre \
--with-http_ssl_module
…..
checking for socklen_t … found
checking for in_addr_t … found
checking for in_port_t … found
checking for rlim_t … found
checking for uintptr_t … uintptr_t found
checking for system byte ordering … little endian
checking for size_t size … 8 bytes
checking for off_t size … 8 bytes
checking for time_t size … 8 bytes
checking for setproctitle() … not found
checking for pread() … found
checking for pwrite() … found
checking for sys_nerr … found
checking for localtime_r() … found
checking for posix_memalign() … found
checking for memalign() … found
checking for mmap(MAP_ANON|MAP_SHARED) … found
checking for mmap("/dev/zero", MAP_SHARED) … found
checking for System V shared memory … found
checking for POSIX semaphores … not found
checking for POSIX semaphores in libpthread … found
checking for struct msghdr.msg_control … found
checking for ioctl(FIONBIO) … found
checking for struct tm.tm_gmtoff … found
checking for struct dirent.d_namlen … not found
checking for struct dirent.d_type … found
checking for sysconf(_SC_NPROCESSORS_ONLN) … found
checking for openat(), fstatat() … found
checking for getaddrinfo() … found
checking for PCRE library … found
checking for PCRE JIT support … not found
checking for OpenSSL library … found
checking for zlib library … found
creating objs/Makefile

Configuration summary
+ using system PCRE library
+ using system OpenSSL library
+ md5: using OpenSSL library
+ sha1: using OpenSSL library
+ using system zlib library

nginx path prefix: “/usr”
nginx binary file: “/usr/sbin/nginx”
nginx configuration prefix: “/etc/nginx”
nginx configuration file: “/etc/nginx/nginx.conf”
nginx pid file: “/var/run/nginx/nginx.pid”
nginx error log file: “/var/log/nginx/error.log”
nginx http access log file: “/var/log/nginx/access.log”
nginx http client request body temporary files: “/var/tmp/nginx/client”
nginx http proxy temporary files: “/var/tmp/nginx/proxy”
nginx http fastcgi temporary files: “/var/tmp/nginx/fcgi/”
nginx http uwsgi temporary files: “/var/tmp/nginx/uwsgi”
nginx http scgi temporary files: “/var/tmp/nginx/scgi”

CentOS 7.2 install the MySQL 5.7.18 with the rpm package

Description

This article uses MySQL-5.7.18. The operating system is 64-bit CentOS Linux release 7.2.1511 (Core), installed as a desktop.

Uninstall MariaDB

CentOS 7 ships with MariaDB instead of MySQL, and the default yum repositories no longer carry the MySQL packages. Because MariaDB and MySQL may conflict, uninstall MariaDB first.

List the installed MariaDB-related rpm packages.

rpm -qa | grep mariadb
List the installed MariaDB-related yum package; determine the package name from the output of the rpm command above.

yum list mariadb-libs
Remove the installed MariaDB-related yum package; determine the package name from the yum list output above. This step requires root privileges.

yum remove mariadb-libs
Download the MySQL rpm package

Since the package is large, you may prefer to download it first with another tool (such as Thunder). An advantage of rpm is that it can install without network access, which yum cannot do. To install a different version of MySQL, search the official website for the corresponding rpm download link.

wget https://cdn.mysql.com//Downloads/MySQL-5.7/mysql-5.7.18-1.el7.x86_64.rpm-bundle.tar
Use the rpm package to install MySQL

The following steps require root privileges. Because of the dependencies between the packages, the rpm commands must be executed in the order shown.

mkdir mysql-5.7.18
tar -xv -f mysql-5.7.18-1.el7.x86_64.rpm-bundle.tar -C mysql-5.7.18
cd mysql-5.7.18/
rpm -ivh mysql-community-common-5.7.18-1.el7.x86_64.rpm
rpm -ivh mysql-community-libs-5.7.18-1.el7.x86_64.rpm
rpm -ivh mysql-community-client-5.7.18-1.el7.x86_64.rpm
rpm -ivh mysql-community-server-5.7.18-1.el7.x86_64.rpm
After the installation is successful, you can also delete the installation files and temporary files.

cd ..
rm -rf mysql-5.7.18
rm mysql-5.7.18-1.el7.x86_64.rpm-bundle.tar
Modify the MySQL initial password

The following steps require root privileges.

Since the initial password is unknown, first edit the configuration file /etc/my.cnf so that MySQL skips the privilege check at login. Add the line:

skip-grant-tables
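The line belongs in the [mysqld] section. A sketch of how the relevant part of /etc/my.cnf might look; the surrounding settings are the usual rpm-package defaults and are shown here only as an assumption:

```ini
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
# temporary - remove this line again after resetting the root password
skip-grant-tables
```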
Restart MySQL.

service mysqld restart
Log in to MySQL without a password.

mysql
In the mysql client, execute the following statements to change the root password.

use mysql;
UPDATE user SET authentication_string = password('your-password') WHERE host = 'localhost' AND user = 'root';
quit;
Edit /etc/my.cnf again, delete the skip-grant-tables line added earlier, and restart MySQL. This step is very important; skipping it leaves a serious security hole.
Log in using the password you just set.

mysql -u root -p
MySQL will force you to change the password again before running other statements, and the new password cannot follow a trivially simple pattern.

ALTER USER root@localhost IDENTIFIED BY 'your-new-password';
These steps are a little cumbersome; until a better way turns up, this one works.

rsync command

Occasionally, rsync is too slow when transferring a very large number of files. This article discusses a workaround.
Recently I had to copy about 10 TByte of data from one server to another. Normally I would use rsync and just let it run for whatever time it takes, but for this particular system I could get only a transfer speed of 10 – 20 MByte per second. As both systems were connected with a 1 GBit network, I was expecting a performance of about 100 MByte per second.
It turned out that the bottleneck was not the network speed, but the fact that the source system contained a very large number of smaller files. Rsync doesn’t seem to be the optimal solution in this case. Also, the destination system was empty, so there was no benefit in choosing rsync over scp or tar.
After some experiments, I found a command that improved the overall file copy performance significantly. It looks like this:
root@s2200:/home/backup# tar cf - * | mbuffer -m 1024M | ssh 10.1.1.207 '(cd /home/backup; tar xf -)'
in @ 121.3 MB/s, out @ 85.4 MB/s, 841.3 GB total, buffer 78% full
Using this method, it is possible to transfer data with a speed near the network bandwidth. The trick is the mbuffer command. It allocates a very large buffer of 1024 MByte which sits between the tar command and the ssh command.
When there are a few large files available to transfer, the tar command would copy the data faster than it can be transferred over the network. So, the buffer fills up to 100% even though data is transmitted with the full network speed.
However, when there is a directory with a large number of smaller files, reading those files from the storage is relatively slow, so the buffer is emptied faster than it is refilled by the tar command. But as long as it is not completely empty, data is still transferred at the maximum network speed.
With a bit of luck there are enough large files to keep the buffer filled. If the buffer is always near 100% full, this means that the bottleneck is the network (or the destination system). In this case it is worth trying the -z option to both tar commands. This would compress the data before transmission. However if the buffer is mostly near 0% full, this means that the source system is the bottleneck. Data can’t be read from the local storage fast enough, and spending more CPU to compress it would probably not help.
Of course, the command above only makes sense if the destination server is empty. If some of the files already exist in the destination location, rsync would simply skip over them (if they are actually identical). There are two rsync options that can be used to speed up rsync somewhat: (todo)
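The same tar-pipe pattern can be exercised locally with the mbuffer and ssh stages removed; a minimal sketch using temporary directories (no network or mbuffer required):

```shell
#!/bin/sh
set -e
src=$(mktemp -d)
dst=$(mktemp -d)
mkdir -p "$src/dir1"
echo "hello" > "$src/dir1/file1"
echo "world" > "$src/file2"

# Same shape as: tar cf - * | mbuffer -m 1024M | ssh host '(cd /dest; tar xf -)'
# but with the buffer and the ssh hop taken out of the pipeline.
(cd "$src" && tar cf - .) | (cd "$dst" && tar xf -)

cat "$dst/dir1/file1" "$dst/file2"   # prints "hello" then "world"
```

On a real transfer, mbuffer sits between the two tar processes so that slow reads on the source side do not stall the network.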

CentOS / RHEL 7 : Enable NTP to start at boot after fresh install (disable chrony)

Since RHEL 7, chrony has replaced ntp as the default time synchronization package, so if you configure NTP during the installation process, it enables the chronyd service, not the ntpd service.

# systemctl status ntpd.service
ntpd.service - Network Time Service
Loaded: loaded (/usr/lib/systemd/system/ntpd.service; enabled)
Active: inactive (dead)
Even when you have enabled NTP to start on boot, it will not start while chrony is enabled. So to enable NTP to start on boot, we have to disable the chrony service.
In case you want to use NTP only, the procedure below shows how to do so:

Please follow steps below to enable NTP service on RHEL 7:
1. Disable chronyd service.
To stop chronyd, issue the following command as root:

# systemctl stop chronyd
To prevent chronyd from starting automatically at system start, issue the following command as root:

# systemctl disable chronyd
2. Install ntp using yum:

# yum install ntp
3. Then enable and start ntpd service:

# systemctl enable ntpd.service
# systemctl start ntpd.service
4. Reboot and verify.

# systemctl status ntpd.service
ntpd.service - Network Time Service
Loaded: loaded (/usr/lib/systemd/system/ntpd.service; enabled)
Active: active (running) since Fri 2015-01-09 16:14:00 EST; 53s ago
Process: 664 ExecStart=/usr/sbin/ntpd -u ntp:ntp $OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 700 (ntpd)
CGroup: /system.slice/ntpd.service
└─700 /usr/sbin/ntpd -u ntp:ntp -g

CentOS / RHEL 7 : Configuring NTP using chrony

– Chrony provides another implementation of NTP.
– Chrony is designed for systems that are often powered down or disconnected from the network.
– The main configuration file is /etc/chrony.conf.
– Parameters are similar to those in the /etc/ntp.conf file.
– chronyd is the daemon that runs in user space.
– chronyc is a command-line program that provides a command prompt and a number of commands. Examples:
tracking: Displays system time information
sources: Displays information about current sources.

Installing Chrony

Install the chrony package by using the following command:

# yum install chrony
Use the following commands to start chronyd and to ensure chronyd starts at boot time:

# systemctl start chronyd
# systemctl enable chronyd
Configuring Chrony

A sample configuration would look like below :

# cat /etc/chrony.conf
server a.b.c offline
server d.e.f offline
server g.h.i offline
keyfile /etc/chrony.keys
generatecommandkey
driftfile /var/lib/chrony/drift
makestep 10 3
The parameters are described as follows:
server: Identifies the NTP servers you want to use. The offline keyword indicates that the servers are not contacted until chronyd receives notification that the link to the Internet is present.
keyfile: File containing administrator password. Password allows chronyc to log in to chronyd and notify chronyd of the presence of the link to the Internet.
generatecommandkey: Generates a random password automatically on the first chronyd start.
driftfile: Location and name of file containing drift data.
makestep: Step (start anew) system clock if a large correction is needed. The parameters 10 and 3 would step the system clock if the adjustment is larger than 10 seconds, but only in the first three clock updates.

Not all of these parameters are required. For the purpose of this post, only the two lines below are used in the configuration file.

# cat /etc/chrony.conf
server 192.0.2.1
allow 192.0.2/24
Starting chrony

Use the systemctl command to start the chrony daemon, chronyd.

# systemctl start chronyd
Verify

To check if chrony is synchronized, use the tracking, sources, and sourcestats commands. Run the chronyc tracking command to check chrony tracking. Alternatively you could run chronyc to display a chronyc> prompt, and then run the tracking command from the chronyc> prompt.

# chronyc tracking
Reference ID : 192.0.2.1 (192.0.2.1)
Stratum : 12
Ref time (UTC) : Fri Aug 05 19:06:51 2016
System time : 0.000823375 seconds fast of NTP time
Last offset : 0.001989304 seconds
RMS offset : 0.060942811 seconds
Frequency : 1728.043 ppm slow
Residual freq : 1.100 ppm
Skew : 94.293 ppm
Root delay : 0.000207 seconds
Root dispersion : 0.016767 seconds
Update interval : 65.1 seconds
Leap status : Normal
Some of the important fields are :
Reference ID: This is the reference ID and name (or IP address) if available, of the server to which the computer is currently synchronized.
Stratum: The stratum indicates how many hops away from a computer with an attached reference clock you are.
Ref time: This is the time (UTC) at which the last measurement from the reference source was processed.

Run the chronyc sources command to display information about the current time sources that chronyd is accessing.

# chronyc sources
210 Number of sources = 1
MS Name/IP address Stratum Poll Reach LastRx Last sample
=============================================================================
^* 192.0.2.1 11 6 377 63 +1827us[+6783us]…
Some of the fields are described:
M: The mode of the source. ^ means a server, = means a peer, and # indicates
a locally connected reference clock.
S: The state of the sources. '*' indicates the source to which chronyd is currently synchronized. '+' indicates acceptable sources that are combined with the selected source. '-' indicates acceptable sources that are excluded by the combining algorithm. '?' indicates sources to which connectivity has been lost or whose packets do not pass all tests. 'x' indicates a clock that chronyd thinks is a false ticker, that is, its time is inconsistent with a majority of other sources. '~' indicates a source whose time appears to have too much variability. The '?' condition is also shown at start-up, until at least three samples have been gathered from it.
Name/IP address: This shows the name or the IP address of the source, or reference ID for reference clocks.

Run the chronyc sourcestats command. This command displays information about the drift rate and offset estimation
process for each of the sources currently being examined by chronyd.

# chronyc sourcestats
210 Number of sources = 1
Name/IP Address NP NR Span Frequency Freq Skew Offset Std Dev
==================================================================================
192.0.2.1 5 4 259 -747.564 1623.869 -2873us 30ms
Stop chrony

Use the systemctl command to stop the chrony daemon, chronyd.

# systemctl stop chronyd
Run the chronyc tracking command and notice chronyc cannot talk to the chronyd daemon.

# chronyc tracking
506 Cannot talk to daemon

CentOS / RHEL 7 : Tips on Troubleshooting NTP / chrony Issues

The chrony service does not change the time
A common misconception is that the chrony service sets the time to the one given by the NTP server. This is incorrect: what actually happens is that, based on the answer from the NTP server, chrony just tells the system clock to go faster or slower. For this reason, even when the time is wrong and the NTP server is working, the time sometimes does not get corrected immediately.
Only time when chrony sets time

When the chrony service starts, there are some settings in the /etc/chrony.conf file that tell it to actually set the time if specific conditions occur:

# Force system clock correction at boot time.
makestep 1000 10
which means that if chrony detects during the first 10 measurements after its start that the time is off by more than 1000 seconds it will set the clock.
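For comparison, a much smaller threshold is also common; an illustrative variant of the same directive (the values here are an example, check your own chrony.conf):

```
# Step the system clock if the offset exceeds 1 second,
# but only during the first 3 clock updates after startup.
makestep 1.0 3
```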

Some useful commands

Below are some useful commands which can be used for the troubleshooting of chrony related issues.

# chronyc tracking
# chronyc sources
# chronyc sourcestats
# systemctl status chronyd
# chronyc activity
# timedatectl
Check chronyd status

To check the status of the chronyd daemon :

# systemctl status -l chronyd
● chronyd.service - NTP client/server
Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2016-08-12 13:22:22 IST; 1s ago
Process: 33263 ExecStartPost=/usr/libexec/chrony-helper update-daemon (code=exited, status=0/SUCCESS)
Process: 33259 ExecStart=/usr/sbin/chronyd $OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 33261 (chronyd)
CGroup: /system.slice/chronyd.service
└─33261 /usr/sbin/chronyd

Aug 12 13:22:22 NVMBD1S11BKPMED03 systemd[1]: Starting NTP client/server…
Aug 12 13:22:22 NVMBD1S11BKPMED03 chronyd[33261]: chronyd version 2.1.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +DEBUG +ASYNCDNS +IPV6 +SECHASH)
Aug 12 13:22:22 NVMBD1S11BKPMED03 chronyd[33261]: Frequency 0.000 +/- 1000000.000 ppm read from /var/lib/chrony/drift
Aug 12 13:22:22 NVMBD1S11BKPMED03 systemd[1]: Started NTP client/server.
The chronyc sources command

Running chronyc sources -v shows the current state of the NTP server/s configured in the system. Here is an example output, in which ntp.example.com shows as a valid server which is online:

# chronyc sources -v
210 Number of sources = 1

.-- Source mode '^' = server, '=' = peer, '#' = local clock.
/ .- Source state '*' = current synced, '+' = OK for sync, '?' = unreachable,
| / 'x' = time may be in error, '~' = time is too variable.
|| .- xxxx [ yyyy ] +/- zzzz
|| / xxxx = adjusted offset,
|| Log2(Polling interval) -. | yyyy = measured offset,
|| \ | zzzz = estimated error.
|| | |
MS Name/IP address Stratum Poll LastRx Last sample
============================================================================
^* ntp.example.com 3 6 40 +31us[ -98us] +/- 118ms
Note that a Source state different than '*' usually indicates a problem with the NTP server.

Source state '~' means that the time is too variable
If the Source state is '~', it probably means that the server is accessible but the time is too variable. This can happen if the server responds too slowly, or responds sometimes slower and sometimes faster. You could check the response time of pings to the server to see if they are slow or variable. This state has also been noticed when the server is running on virtual machines which are too slow, causing timing issues.

Chrony check and restart every hour

Once an hour, a cron job checks the output of the chronyc sources -v command by running the script /usr/sbin/palladion_chrony_healthcheck, which runs /usr/sbin/palladion_check_chrony and checks its output:

If /usr/sbin/palladion_check_chrony returns 1, there was no online source (no source with state '*'), so chrony is restarted in an attempt to re-initialize the server status.
If /usr/sbin/palladion_check_chrony returns 0, everything is OK; chrony does not need to be restarted because it already has a valid online source.
# cat /etc/cron.d/chrony
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
#
# Check chrony every hour and restart if necessary.
#
16 * * * * root /usr/sbin/palladion_chrony_healthcheck
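The palladion scripts themselves are not shown here, but the core of such a check can be sketched: look for a currently synchronized source (a line starting with "^*") in the chronyc sources output. This is an assumed reimplementation of the idea, not the actual script:

```shell
#!/bin/sh
# Succeed (exit 0) if the chronyc sources output read on stdin
# contains a currently synchronized source, i.e. a line starting
# with "^*"; fail otherwise.
check_chrony_synced() {
  grep -q '^\^\*'
}

# Example with captured output (taken from the chronyc sources example above)
# instead of running chronyc directly:
sample='^* 192.0.2.1 11 6 377 63 +1827us'

if printf '%s\n' "$sample" | check_chrony_synced; then
  echo "synced"
else
  echo "not synced; the health check would restart chronyd"
fi
```

A real health check would pipe `chronyc sources` into the function and restart chronyd on failure.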
Chrony logs

There are several chrony logs that can be used for troubleshooting. Most of them are located in /var/log/chrony/. Note that the latest file is not always the *.log one; sometimes the *.log.2 or *.log.3 files are the more recent ones. Here is an example listing sorted with the most recent files first:

# ls -lisaht /var/log/chrony/
total 1.5M
3801115 580K -rw-r--r-- 1 root root 574K Oct 21 14:56 measurements.log.3
3801131 544K -rw-r--r-- 1 root root 540K Oct 21 14:56 statistics.log.3
3801166 356K -rw-r--r-- 1 root root 350K Oct 21 14:56 tracking.log.3
3801089 4.0K drwxr-xr-x 16 root root 4.0K Oct 21 00:01 ..
3801114 4.0K drwxr-xr-x 2 root root 4.0K Oct 21 00:01 .
3801128 0 -rw-r--r-- 1 root root 0 Oct 21 00:01 tracking.log
3801110 0 -rw-r--r-- 1 root root 0 Oct 21 00:01 measurements.log
3801120 0 -rw-r--r-- 1 root root 0 Oct 21 00:01 statistics.log
3801167 0 -rw-r--r-- 1 root root 0 Oct 20 00:01 tracking.log.1
3801165 0 -rw-r--r-- 1 root root 0 Oct 20 00:01 statistics.log.1
3801159 0 -rw-r--r-- 1 root root 0 Oct 20 00:01 measurements.log.1
…………
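To pick out the most recent log regardless of its suffix, sorting by modification time is enough; a small sketch where temporary files stand in for the real /var/log/chrony contents:

```shell
#!/bin/sh
# Find the most recently modified file in a directory, the same way
# 'ls -t /var/log/chrony | head -1' would on the real system.
set -e
dir=$(mktemp -d)

# Simulate rotated logs: give the .log files an old timestamp and
# make tracking.log.3 the most recent, as in the listing above.
touch -t 202001010000 "$dir/tracking.log" "$dir/tracking.log.1"
touch "$dir/tracking.log.3"

newest=$(ls -t "$dir" | head -n 1)
echo "$newest"   # prints: tracking.log.3
```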
Try setting only one NTP server by entering its IP address

If until now you have been using two or more NTP servers (either because they were configured explicitly or because you entered an FQDN that resolves to different IP addresses), try setting one single NTP server by entering only one IP address. This may solve your NTP-related issue.

Tracing the communication with the NTP server

To double check if the NTP server is answering or not, it is possible to trace the traffic between chrony and the NTP server for a period of time while monitoring the server:
1. Start a pcap trace with tcpdump on the NTP port 123 and leave it running until the issue appears (run it in 'screen' or with 'nohup' so it is not stopped if you disconnect from the shell).
2. As soon as the issue reappears, get a System Diagnostics covering the entire history from when you set the server to the DNS name until the gap reoccurred. If this produces a file that is too big, get the System Diagnostics for current data only and, in addition, copy all the files from /var/log/chrony/ and all files named /var/log/syslog*. Remember to stop the trace started in step 1.

RHEL 7 – RHCSA Notes – input / output redirection

Three standard file descriptors :

1. stdin 0 – Standard input to the program.
2. stdout 1 – Standard output from the program.
3. stderr 2 – Standard error output from the program.
PURPOSE                                      COMMAND
redirect std output to filename              > filename or 1> filename
append std out to filename                   >> filename
append std out and std err to filename       >> filename 2>&1 or 1>> filename 2>&1
take input from filename                     < filename or 0< filename
redirect std error to filename               2> filename
redirect std out and std error to filename   1> filename 2>&1 or > filename 2>&1
Some examples of using I/O redirection

# cat goodfile badfile 1> output 2> errors
This command redirects the normal output (contents of goodfile) to the file output and sends any errors (about badfile not existing, for example) to the file errors.

# mail user_id < textfile 2> errors
This command redirects the input for the mail command to come from file textfile and any errors are redirected to the file errors.

# find / -name xyz -print 1> abc 2>&1
This command redirects the normal output to the file abc. The construct “2>&1” says “send error output to the same place we directed normal output”.

Note that the order is important; command 2>&1 1>file does not do the same as command 1>file 2>&1. This is because the 2>&1 construction means redirect standard error to the place where standard output currently goes. The construction command 2>&1 1>file will first redirect standard error to where standard output goes (probably the terminal, which is where standard error goes by default anyway) then will redirect standard output to file. This is probably not what was intended.
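The difference in ordering can be observed directly; a small self-contained demonstration (the path and temporary file names are arbitrary):

```shell
#!/bin/sh
# Show why '1>file 2>&1' and '2>&1 1>file' behave differently.
out1=$(mktemp)
out2=$(mktemp)

# Redirect stdout to the file first, then send stderr to the same place:
# the error message ends up in the file.
ls /nonexistent_path_for_demo 1>"$out1" 2>&1

# Send stderr to where stdout *currently* points (the terminal), then
# redirect stdout to the file: the error never reaches the file.
ls /nonexistent_path_for_demo 2>&1 1>"$out2"

test -s "$out1" && echo "out1 contains the error message"
test -s "$out2" || echo "out2 is empty"
```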
# ( grep Bob filex > out ) 2> err
Any output of the grep command is sent to the file out, and any errors are sent to the file err.

# find . -name xyz -print 2>/dev/null
This runs the find command, but sends any error output (due to inaccessible directories, for example), to /dev/null. Use with care, unless error output really is of no interest.
