EC2 instance

volume == hard disk
security groups == virtual firewalls
EC2 instance types (mnemonic: 'Dr mc gift pix'):
m - main choice (general purpose)
f - FPGA (field-programmable gate array)
t - cheap, general purpose (e.g. t2.micro)

termination protection is off by default.
by default, the EBS root volume is deleted when the EC2 instance is terminated.
the EBS root volume of a default AMI can't be encrypted at launch, but third-party tools can be used to encrypt it, or you can encrypt an image copy of the instance.
additional volumes can be encrypted.

EBS: the block storage associated with an EC2 instance

pricing models:
-on demand
-dedicated host

instances are charged by the hour, rounded up - unless AWS terminated the instance, in which case the partial hour is rounded down.

EBS volume types:
 SSD, general purpose - GP2 - (up to 10,000 IOPS)
 SSD, provisioned IOPS - IO1 - more than 10,000 IOPS
 HDD, throughput optimized - ST1 - frequently accessed, throughput-intensive workloads
 HDD, cold - SC1 - less frequently accessed data
 HDD, magnetic - standard - cheap, for infrequently accessed data
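The GP2/IO1 split above comes down to the IOPS requirement. As a toy illustration (plain bash, not an AWS tool), a helper that picks the SSD type from a target IOPS figure:

```shell
# Illustrative helper mirroring the ~10,000 IOPS gp2/io1 boundary above.
choose_ebs_type() {
  local iops=$1
  if [ "$iops" -gt 10000 ]; then
    echo "io1"   # provisioned IOPS SSD
  else
    echo "gp2"   # general purpose SSD
  fi
}

choose_ebs_type 3000    # -> gp2
choose_ebs_type 20000   # -> io1
```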

*can't attach more than one EC2 instance to one EBS volume. use EFS for that.

instance termination protection is turned off by default.
the root EBS volume (where the OS is installed) is deleted when the EC2 instance is terminated by default.
root volumes can't be encrypted at launch unless using third-party tools, but other EBS volumes can be.
volumes exist on EBS - they are virtual hard disks.
snapshots (point-in-time copies of volumes) exist on S3.
taking a snapshot of a volume will store that volume on S3.
snapshots are incremental - only data that changed since the last snapshot is moved to S3.
snapshots of encrypted volumes are encrypted automatically.
volumes restored from encrypted snapshots are encrypted automatically.
you can share snapshots, but only if they are unencrypted.
you should stop the instance before taking a snapshot of a root volume.
security rules:
all inbound traffic is blocked by default
all outbound traffic is allowed by default
can have any number of instances within a security group
can have multiple security groups attached to an EC2 instance
there are no 'deny' rules, only allow
security groups are stateful - if you allow a rule for traffic in, that traffic is also allowed out. (access lists are not)
you can't block specific IP addresses with security groups.

roles: more secure than storing the access and secret keys on the EC2 instance, and easier to manage.
can be assigned to instances after they are provisioned.
roles can be updated while in use.
RAID = redundant array of independent disks, acting as one disk to the OS.
RAID 0 - striped, good performance, no data redundancy
RAID 1 - mirrored, data redundancy
RAID 5 - AWS doesn't recommend this for EBS. good for reads, bad for writes
RAID 10 - good redundancy and performance.

to increase IO - stripe multiple volumes together as a RAID array.
taking a snapshot of a RAID array (application-consistent snapshot):
stop the application from writing to disk by freezing the file system / unmounting the RAID array / shutting down the EC2 instance
flush all caches to disk
when taking a snapshot of an encrypted volume, the snapshot is encrypted by default
volumes restored from encrypted snapshots are also encrypted
can't share encrypted snapshots.
AMI - Amazon Machine Image.
AMIs are available on the AWS Marketplace.
AMIs are regional - an AMI can only be launched in the region it is stored in, but you can copy AMIs to other regions using the CLI, API or console.
AMI types: EBS backed and Instance Store backed (also called ephemeral).
Instance Store instances can't attach additional Instance Store volumes after launching.
EBS-backed instances can be stopped and re-run on a different hypervisor in case of a problem; Instance Store instances can't.
Instance Stores are less durable - if their host fails, the instance is lost. (ephemeral)
EBS volumes are created from a snapshot; Instance Store volumes are created from a template stored on S3.
both can be rebooted without losing data.
elastic load balancer
classic / application
*a subnet == availability zone
has health checks
no IP address, only DNS names.
standard monitoring - 5 min intervals; detailed monitoring - 1 min
can create dashboards, alarms, events, logs
CloudWatch is for logging and monitoring; CloudTrail is for auditing an entire AWS environment/accounts.
credentials are normally stored on an instance under the .aws folder. this is a security risk.
roles allow instances not to have the credentials written to a file on the instance, and are therefore safer.
roles are global, not per zone.

bash script example (user data - installs Apache and pulls the site content from S3):

#!/bin/bash
yum update -y
yum install httpd -y
service httpd start
chkconfig httpd on
aws s3 cp s3://mywebsitebucket-acg2/index.html /var/www/html


Autoscaling groups require a launch configuration before launching.
placement group - a grouping of instances within a single availability zone, used for low network latency and high performance.
only certain instance types can be part of a placement group.
recommended to use homogeneous instance types (same size and family).
can't merge groups.
can't move existing instances into an existing group.
EFS - elastic file system.
pay only for the storage you use.
allows a single storage volume to be used across multiple EC2 instances - can serve as central file storage.
data is stored across multiple AZs.
read-after-write consistency.
*not available in all zones yet
Lambda - no servers, auto scaling, very cheap


** 1 subnet == 1 AZ.
ACL = access control list
SN = subnet
IGW = internet gateway
CIDR = classless inter-domain routing - how we assign IP ranges
NAT = network address translation
internal IP address ranges
(RFC 1918) - (10/8 prefix) - (172.16/12 prefix) - (192.168/16 prefix)
we will always use a /16 network address
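As a small illustration (plain bash, nothing AWS-specific), a check for whether an IPv4 address falls inside one of the RFC 1918 private ranges listed above:

```shell
# Is an IPv4 address inside one of the RFC 1918 private ranges?
is_rfc1918() {
  local ip=$1 a b c d
  IFS=. read -r a b c d <<< "$ip"
  if [ "$a" -eq 10 ]; then return 0; fi                                         # 10.0.0.0/8
  if [ "$a" -eq 172 ] && [ "$b" -ge 16 ] && [ "$b" -le 31 ]; then return 0; fi  # 172.16.0.0/12
  if [ "$a" -eq 192 ] && [ "$b" -eq 168 ]; then return 0; fi                    # 192.168.0.0/16
  return 1
}

is_rfc1918 10.0.1.5 && echo private   # -> private
is_rfc1918 8.8.8.8 || echo public     # -> public
```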
VPC - virtual private cloud
think of a VPC as a logical data center.
you provision a section of the AWS cloud as a virtual network, and you can easily customize that network.

example - a public-facing subnet for web servers, and a private subnet for backend DB servers with no internet connection.

you can create a hardware virtual private network (VPN) between your corporate datacenter and the VPC to leverage AWS as an extension of the data center (hybrid cloud).
what can you do with a VPC?

launch instances into a subnet
assign custom IP ranges in each subnet
configure route tables between subnets
create an internet gateway - only 1 per VPC
better security over AWS resources
security groups (stateful - incoming rules are automatically allowed as outgoing)
subnet ACLs (stateless - everything needs to be configured)
default VPC
defaults have a route out to the internet
each EC2 instance gets a public and a private IP
the only way to restore a deleted default VPC is to contact AWS.

security groups, an ACL and a default route table are created by default.
subnets and IGWs are not created by default.
VPC peering: connect one VPC to another using private IP addresses (not over the internet).
instances behave as if they are on the same network.
can also peer a VPC with another AWS account's VPC, within a SINGLE REGION.
star configuration - 1 central VPC peered with 4 others. NO TRANSITIVE PEERING == networks must be directly connected.
example: VPC-A is connected to VPC-B and VPC-C. VPC-B and VPC-C can't talk to each other through VPC-A (transitively); they must be directly peered.
CIDR blocks for the private IPs must be different between peering VPCs - VPC-A can't peer with VPC-B if it has an overlapping CIDR block.
NAT instances - the traditional way of allowing an EC2 instance with no internet connection to reach the internet for updates, installing DBs, etc. we use an EC2 instance from the community AMIs (search for NAT).
remember to disable the source/destination check on the instance.
must be in a public subnet.
in the route table, ensure there's a route out through the NAT instance; it's found in the default route table.
the bandwidth the NAT instance supports depends on the instance type.
to create high availability you need autoscaling groups, multiple subnets in different AZs, and scripts to automate failover
(lots of work..)
need to set a security group

NAT gateways - easier, preferred. scales automatically, and no need to manage security groups.
if a NAT instance goes down, so does our internet connection; with NAT gateways AWS takes care of that automatically. supports bandwidth up to 10 Gbps.

building a VPC (not using the wizard):

1. start VPC, your VPCs, create VPC
2. name, CIDR block (our IP range), tenancy (shared or dedicated hardware).
3. a default route table, ACL and security group are created
4. subnets -> create -> name, VPC (select the newly created one), AZ, CIDR (10.0.1.0/24, which will give us 10.0.1.xxx)
5. create another subnet -> name, VPC (same as above), AZ (different than above), CIDR (a second range, e.g. 10.0.2.0/24)
6. internet gateways -> create -> name
7. attach to VPC (select the newly created VPC)
8. route tables -> (the main route table is private by default)
9. create new table -> name, VPC
10. edit -> add a route that is open to the internet (0.0.0.0/0 -> the IGW)
11. subnet associations -> edit -> select the subnet that will be the public one
12. subnets -> select the public one -> actions -> modify auto-assign public IP.
13. deploy 2 EC2 instances: one public web server (can use a script) and one private SQL server (watch the auto-assign public IP setting). for the private instance, set a new security group allowing SSH (22) and MySQL/Aurora (3306).
14. for the MySQL server, add another rule for All ICMP from the same address range - this allows ping.
15. copy the content of the privateKey.pem file.
16. ssh into the web server -> create (echo or nano) a new privateKey.pem file and paste the content.
17. chmod 0600 the privateKey.pem file (read and write permissions for the owner only).
18. ssh into the SQL server using the newly created file.
19. (NAT instance) launch an EC2 instance -> community AMIs -> search for nat
20. deploy into our VPC, in the public subnet.
21. use a public-facing security group
22. actions -> networking -> change source/destination check -> disable
23. VPC -> route tables -> select the main route table (nameless) -> add a route with our newly created EC2 instance as the target (this is the table associated with the private subnet)
24. (NAT gateway - replaces steps 19-23) VPC -> NAT gateways -> create -> subnet (public facing), elastic IP (create new EIP)
25. route tables -> main route table -> add a route with the newly created gateway as the target.
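The console steps above can be sketched with the AWS CLI. The CIDR ranges follow the walkthrough (10.0.0.0/16 and 10.0.1.0/24 for the public subnet; 10.0.2.0/24 is an assumed value for the second subnet), and the sketch is guarded behind an environment flag so it never creates real resources by accident:

```shell
# Sketch only: set RUN_VPC_SKETCH=1 with configured AWS credentials to run.
[ "${RUN_VPC_SKETCH:-0}" = "1" ] || exit 0
command -v aws >/dev/null 2>&1 || exit 0

VPC_ID=$(aws ec2 create-vpc --cidr-block 10.0.0.0/16 --query 'Vpc.VpcId' --output text)
SUBNET_PUB=$(aws ec2 create-subnet --vpc-id "$VPC_ID" --cidr-block 10.0.1.0/24 --query 'Subnet.SubnetId' --output text)
SUBNET_PRIV=$(aws ec2 create-subnet --vpc-id "$VPC_ID" --cidr-block 10.0.2.0/24 --query 'Subnet.SubnetId' --output text)
IGW_ID=$(aws ec2 create-internet-gateway --query 'InternetGateway.InternetGatewayId' --output text)
aws ec2 attach-internet-gateway --internet-gateway-id "$IGW_ID" --vpc-id "$VPC_ID"
RT_ID=$(aws ec2 create-route-table --vpc-id "$VPC_ID" --query 'RouteTable.RouteTableId' --output text)
aws ec2 create-route --route-table-id "$RT_ID" --destination-cidr-block 0.0.0.0/0 --gateway-id "$IGW_ID"   # route out to the internet
aws ec2 associate-route-table --route-table-id "$RT_ID" --subnet-id "$SUBNET_PUB"
aws ec2 modify-subnet-attribute --subnet-id "$SUBNET_PUB" --map-public-ip-on-launch   # auto-assign public IP
```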

security groups vs NACLs:

a security group acts as the first layer of defence. operates at the instance level. stateful.
a N(etwork)ACL operates at the subnet level. stateless. denies all traffic by default.

a subnet can only be associated with one NACL, but an NACL can be associated with many subnets.
if you add an ACL to a subnet that is already associated with an ACL, the new ACL will simply replace the old one.

rules are evaluated in numerical order.
the lowest-numbered rules take precedence over later rules.
rule 99 blocks my IP
rule 100 allows all IPs
==> my IP is still blocked.
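The rule-ordering example can be modeled in a few lines of bash (a toy model, not an AWS API): rules are checked in ascending rule-number order and the first match wins, which is why rule 99 (deny) beats rule 100 (allow).

```shell
# Toy NACL evaluator. Args: an IP, then "num:glob-pattern:action" rules
# already sorted by rule number; the first rule whose pattern matches wins.
nacl_eval() {
  local ip=$1; shift
  local rule pat action
  for rule in "$@"; do
    pat=${rule#*:}; pat=${pat%:*}
    action=${rule##*:}
    case "$ip" in
      $pat) echo "$action"; return ;;
    esac
  done
  echo deny   # implicit final rule: deny everything not matched
}

nacl_eval 1.2.3.4 "99:1.2.3.*:deny" "100:*:allow"   # -> deny (rule 99 wins)
nacl_eval 5.6.7.8 "99:1.2.3.*:deny" "100:*:allow"   # -> allow
```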

you can't create deny rules with a security group.
when setting up an ELB, for good availability you need at least two AZs or subnets - so check whether your VPC actually has more than one public subnet.

Bastion host - used to securely administer EC2 instances in private subnets (using SSH, or RDP - remote desktop protocol). used instead of NAT.
for our purposes, we used the NAT EC2 instance as a bastion.

Flow logs - enable you to capture IP traffic flow information for the network interfaces of your resources and log it in CloudWatch.

Redis master-slave + KeepAlived achieve high availability


Redis is a non-relational database that we currently use quite frequently. It supports diverse data types, handles high concurrency well, and runs in memory, making reads and writes fast. Given how much we rely on Redis, how do we make sure it can survive a node going down during operation?

So today I'm summarizing how to build a Redis master-slave high-availability setup. Many of the guides floating around online are full of pitfalls, so I'm sharing a working one in the hope that it helps.

Redis Features
Redis is completely open source and free, complies with the BSD license, and is a high-performance key-value database.

Compared with other key-value cache products, Redis has the following three characteristics:

Redis supports persistence: it can save in-memory data to disk and load it again on restart.

Redis supports not only simple key-value data but also richer data structures: strings, hashes (maps), lists, sets, and sorted sets.

Redis supports data backup, i.e. master-slave replication.

Redis advantages:

Extremely high performance – Redis can read around 100K+ times/s and write around 80K+ times/s.

Rich data types – Redis supports binary-safe strings, lists, hashes, sets, and sorted sets.

Atomicity – all individual Redis operations are atomic, and Redis also supports grouping several operations into one atomic transaction.

Rich features – Redis also supports publish/subscribe, notifications, key expiration, and more.
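A quick tour of those data types with redis-cli, assuming a Redis server on the default local port (the block skips silently when redis-cli is unavailable or no server is running):

```shell
command -v redis-cli >/dev/null 2>&1 || exit 0
redis-cli ping 2>/dev/null | grep -q PONG || exit 0

redis-cli del greeting queue tags user:1 board >/dev/null   # start clean

redis-cli set greeting "hello"           # string
redis-cli rpush queue a b c              # list
redis-cli sadd tags linux redis          # set
redis-cli hset user:1 name "alice"       # hash (map)
redis-cli zadd board 10 alice 20 bob     # sorted set
redis-cli type queue                     # -> list
```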

Prepare environment

CentOS 7 (the .140 host) -> master Redis -> master Keepalived

CentOS 7 (the .141 host) -> slave Redis -> backup Keepalived

VIP -> the .139 address

Redis (version 3.0 or later)

Keepalived (installed directly from the online repos)

Redis compile and install

cd /opt
tar -zxvf redis-4.0.6.tar.gz
mv redis-4.0.6 redis
cd redis
make PREFIX=/usr/local/redis install

2, configure the redis startup script

vim /etc/init.d/redis

#!/bin/sh
# chkconfig: 2345 80 90
# Simple Redis init.d script conceived to work on Linux systems
# as it does use of the /proc filesystem.

# the redis port number
REDISPORT=6379
# the redis startup command path
EXE=/usr/local/redis/bin/redis-server
# the redis connection command path
CLIEXE=/usr/local/redis/bin/redis-cli
# the redis run PID path
PIDFILE=/var/run/redis_6379.pid
# the path of the redis configuration file
CONF="/etc/redis/redis.conf"
# the connection authentication password for redis
PASSWORD="123456"

function start() {
    if [ -f $PIDFILE ]; then
        echo "$PIDFILE exists, process is already running or crashed"
    else
        echo "Starting Redis server..."
        $EXE $CONF
    fi
}

function stop() {
    if [ ! -f $PIDFILE ]; then
        echo "$PIDFILE does not exist, process is not running"
    else
        PID=$(cat $PIDFILE)
        echo "Stopping ..."
        $CLIEXE -p $REDISPORT -a $PASSWORD shutdown
        while [ -x /proc/${PID} ]; do
            echo "Waiting for Redis to shutdown ..."
            sleep 1
        done
        echo "Redis stopped"
    fi
}

function restart() {
    stop
    sleep 3
    start
}

case "$1" in
    start)
        start ;;
    stop)
        stop ;;
    restart)
        restart ;;
    *)
        echo -e "\e[31m Please use $0 [start|stop|restart] as first argument \e[0m" ;;
esac

Grant execution permissions:

chmod +x /etc/init.d/redis

Add boot start:

chkconfig --add redis

chkconfig redis on

Check: chkconfig --list | grep redis

For this test, the firewall and SELinux were disabled beforehand. In a production environment, keep the firewall enabled and open only the ports you need.

3, add the redis commands to the environment variables

vi /etc/profile
# add the following line:
export PATH="$PATH:/usr/local/redis/bin"
# make the new variable take effect:
source /etc/profile

4, start the redis service

service redis start
# check:
ps -ef | grep redis

Note: perform the same steps on both servers. Once redis is installed on both, we move straight on to configuring the master-slave environment.

Redis master-slave configuration

Going back to the design: the idea is to use the .140 host as the master, the .141 host as the slave, and the .139 address as the VIP (virtual IP). Applications access the Redis database through port 6379 on the .139 address.

In normal operation, if the master node (.140) goes down, the VIP floats to .141; .141 then takes over as the master node, .140 becomes the slave node, and reads and writes continue uninterrupted.

When .140 comes back, it first synchronizes data from .141, so .140's original data is not lost and the data written to .141 in the meantime is synchronized over. After the data synchronization completes,

the VIP returns to the .140 node because of its higher priority, and it becomes the master again; .141 loses the VIP, becomes the slave again, and the pair is back in its initial state, continuing to provide uninterrupted read and write service.

1, configure the redis configuration file

Master-140 configuration file

vim /etc/redis/redis.conf
port 6379
daemonize yes
requirepass 123456
slave-serve-stale-data yes
slave-read-only no

Slave-141 configuration file

vim /etc/redis/redis.conf
port 6379
daemonize yes
slaveof <master-ip> 6379    # the master's (.140) address; the IP was omitted in the original
masterauth 123456
slave-serve-stale-data yes
slave-read-only no

2. Restart the redis service after the configuration is complete! Verify that the master and slave are normal.

Master node 140 terminal login test:

[root@localhost ~]# redis-cli -a 123456 info replication
# Replication

Login test from node 141 terminal:

[root@localhost ~]# redis-cli -a 123456 info replication
# Replication
3, replication sync test

Master node 140

The master-slave replication part of this Redis setup is now complete!
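A minimal replication smoke test for the setup above: write a key on the master and read it back on the slave. The hostnames are placeholders for the .140 and .141 hosts (the real IPs were omitted in the original); the block skips silently when they don't resolve.

```shell
command -v redis-cli >/dev/null 2>&1 || exit 0
MASTER=redis-master   # placeholder for the .140 address
SLAVE=redis-slave     # placeholder for the .141 address
getent hosts "$MASTER" >/dev/null 2>&1 || exit 0
getent hosts "$SLAVE"  >/dev/null 2>&1 || exit 0

redis-cli -h "$MASTER" -a 123456 set repl_test ok
sleep 1   # give replication a moment to propagate
redis-cli -h "$SLAVE" -a 123456 get repl_test
```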

KeepAlived configuration to achieve dual hot standby

Use Keepalived to implement the VIP, and achieve failover through the notify_master, notify_backup, notify_fault and notify_stop scripts.

1, configure Keepalived configuration file

Master Keepalived Profile

vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_connect_timeout 30
    router_id redis01
}

vrrp_script chk_redis {
    script "/etc/keepalived/script/redis_check.sh"
    interval 2
}

vrrp_instance VI_1 {
    state MASTER
    interface eno16777984
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_redis
    }
    virtual_ipaddress {
        # the VIP (the .139 address) goes here; it was omitted in the original
    }
    notify_master /etc/keepalived/script/redis_master.sh
    notify_backup /etc/keepalived/script/redis_backup.sh
    notify_fault /etc/keepalived/script/redis_fault.sh
    notify_stop /etc/keepalived/script/redis_stop.sh
}

Spare Keepalived Profile

vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_connect_timeout 30
    router_id redis02
}

vrrp_script chk_redis {
    script "/etc/keepalived/script/redis_check.sh"
    interval 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface eno16777984
    virtual_router_id 51
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_redis
    }
    virtual_ipaddress {
        # the VIP (the .139 address) goes here; it was omitted in the original
    }
    notify_master /etc/keepalived/script/redis_master.sh
    notify_backup /etc/keepalived/script/redis_backup.sh
    notify_fault /etc/keepalived/script/redis_fault.sh
    notify_stop /etc/keepalived/script/redis_stop.sh
}

2, configure the script

Master KeepAlived — 140

Create a script directory: mkdir -p /etc/keepalived/script/

cd /etc/keepalived/script/

[root@localhost script]# cat redis_check.sh
#!/bin/bash
ALIVE=`/usr/local/redis/bin/redis-cli -a 123456 PING`
if [ "$ALIVE" == "PONG" ]; then
    echo $ALIVE
    exit 0
else
    echo $ALIVE
    exit 1
fi


[root@localhost script]# cat redis_master.sh
#!/bin/bash
REDISCLI="/usr/local/redis/bin/redis-cli -a 123456"
LOGFILE="/var/log/keepalived-redis-state.log"   # log path was omitted in the original; adjust as needed
sleep 15
echo "[master]" >> $LOGFILE
date >> $LOGFILE
echo "Being master...." >> $LOGFILE 2>&1
echo "Run SLAVEOF cmd ..." >> $LOGFILE
$REDISCLI SLAVEOF <other-node-ip> 6379 >> $LOGFILE 2>&1   # first sync from the current master; the IP was omitted in the original
if [ $? -ne 0 ]; then
    echo "data rsync fail." >> $LOGFILE 2>&1
else
    echo "data rsync OK." >> $LOGFILE 2>&1
fi
sleep 10   # wait 10 seconds for the data sync to complete before cancelling replication
echo "Run SLAVEOF NO ONE cmd ..." >> $LOGFILE
$REDISCLI SLAVEOF NO ONE >> $LOGFILE 2>&1
if [ $? -ne 0 ]; then
    echo "Run SLAVEOF NO ONE cmd fail." >> $LOGFILE 2>&1
else
    echo "Run SLAVEOF NO ONE cmd OK." >> $LOGFILE 2>&1
fi

[root@localhost script]# cat redis_backup.sh
#!/bin/bash
REDISCLI="/usr/local/redis/bin/redis-cli -a 123456"
LOGFILE="/var/log/keepalived-redis-state.log"   # log path was omitted in the original; adjust as needed

echo "[backup]" >> $LOGFILE
date >> $LOGFILE
echo "Being slave...." >> $LOGFILE 2>&1
sleep 15   # wait 15 seconds until the data is synced to the other side, then switch master-slave roles
echo "Run SLAVEOF cmd ..." >> $LOGFILE
$REDISCLI SLAVEOF <other-node-ip> 6379 >> $LOGFILE 2>&1   # the IP was omitted in the original

[root@localhost script]# cat redis_fault.sh
#!/bin/bash
LOGFILE="/var/log/keepalived-redis-state.log"

echo "[fault]" >> $LOGFILE
date >> $LOGFILE

[root@localhost script]# cat redis_stop.sh
#!/bin/bash
LOGFILE="/var/log/keepalived-redis-state.log"

echo "[stop]" >> $LOGFILE
date >> $LOGFILE

Slave KeepAlived — 141

mkdir -p /etc/keepalived/script/

cd /etc/keepalived/script/

The five scripts (redis_check.sh, redis_master.sh, redis_backup.sh, redis_fault.sh, redis_stop.sh) are identical to the ones on the master, except that the SLAVEOF target in redis_master.sh and redis_backup.sh points at the other node (the .140 host).

systemctl start keepalived

systemctl enable keepalived

ps -ef | grep keepalived

Full understanding of the new features of MySQL 8.0


First, features added in MySQL 8.0

1, new transactional data dictionary

An integrated, transactional data dictionary stores information about database objects; all metadata is stored using the InnoDB engine.

2, atomic DDL operations

DDL on InnoDB tables is now transactional: it either succeeds completely or is rolled back. Rollback information for DDL operations is written to the data dictionary table mysql.innodb_ddl_log.

3, security and user management

The caching_sha2_password authentication plugin was added and is now the default, improving both performance and security.

Privileges now support roles.

A new password-history feature restricts the reuse of previous passwords.

4, resource management

Supports the creation and management of resource groups, allowing server threads to be assigned to specific groups so that they run within the resources available to that group.

5, InnoDB enhancements

The auto-increment counter is now persistent, fixing the long-standing MySQL bug #199: previously, on restart the counter was reset to max(id)+1 from the table, so for archive-style tables, or after rows were deleted, auto-increment values could be reused.

Added INFORMATION_SCHEMA.INNODB_CACHED_INDEXES to show the number of index pages cached in the InnoDB buffer pool for each index.

InnoDB temporary tables are now created in the shared temporary tablespace ibtmp1.

InnoDB supports NOWAIT and SKIP LOCKED for SELECT ... FOR SHARE and SELECT ... FOR UPDATE statements.

The minimum value of innodb_undo_tablespaces is 2, and setting it to 0 is no longer allowed. A minimum of 2 ensures that rollback segments are always created in undo tablespaces, not in the system tablespace.

Added innodb_dedicated_server, which lets InnoDB automatically configure innodb_buffer_pool_size, innodb_log_file_size and innodb_flush_method according to the amount of memory detected on the server.

A new dynamic option, innodb_deadlock_detect, can disable deadlock detection, because in high-concurrency systems deadlock detection can significantly slow down the database when many threads wait for the same lock.

Supports the innodb_directories option to move or restore tablespace files to a new location while the server is offline.

6, better support for document stores and JSON

7, optimizer

Invisible indexes: an index can be marked invisible while tuning SQL (similar to Oracle's feature), and the optimizer will not use an invisible index.

Descending indexes: DESC can now be defined on an index. Previously an index could only be scanned in reverse, which hurt performance; a descending index serves descending order efficiently.
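A hedged sketch of the descending-index feature described above, assuming a local MySQL 8.0 server reachable as root without a password (adjust the credentials for your setup; `demo` and `t` are throwaway names). The block skips silently when no server is reachable:

```shell
command -v mysql >/dev/null 2>&1 || exit 0
mysql -uroot -e 'select 1' >/dev/null 2>&1 || exit 0

mysql -uroot <<'SQL'
CREATE DATABASE IF NOT EXISTS demo;
USE demo;
-- KEY (ts DESC): only meaningful on 8.0+; older versions parsed but ignored DESC
CREATE TABLE IF NOT EXISTS t (id INT, ts DATETIME, KEY idx_ts_desc (ts DESC));
-- the optimizer can now serve ORDER BY ts DESC directly from idx_ts_desc
EXPLAIN SELECT id FROM t ORDER BY ts DESC LIMIT 10;
SQL
```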

8, window functions such as RANK(), LAG() and NTILE()

9, regular expression enhancements: REGEXP_LIKE(), REGEXP_INSTR(), REGEXP_REPLACE(), REGEXP_SUBSTR() and other functions

10, backup locks, which allow DML during an online backup while preventing operations that could result in an inconsistent snapshot. Backup locks are taken with the LOCK INSTANCE FOR BACKUP and UNLOCK INSTANCE syntax.

11, the default character set changed from latin1 to utf8mb4

12, configuration enhancements

MySQL 8.0 supports persisting global parameters online. With the PERSIST keyword, a change is also written to a configuration file, so the latest value still applies after a restart. When a parameter is set with PERSIST, MySQL generates a mysqld-auto.cnf file containing JSON data. For example:

SET PERSIST expire_logs_days=10;  # modifies memory and the JSON file; survives a restart

SET GLOBAL expire_logs_days=10;   # modifies memory only; lost on restart

The system generates a mysqld-auto.cnf file in the data directory containing:

{ "mysql_server": { "expire_logs_days": "10" } }

When my.cnf and mysqld-auto.cnf both exist, the latter has higher priority.

13, histograms

MySQL 8.0 supports the long-awaited histograms. The optimizer uses the column_statistics data to determine the distribution of column values and produce a more accurate execution plan.

Use ANALYZE TABLE table_name [UPDATE HISTOGRAM ON col_name WITH N BUCKETS | DROP HISTOGRAM ON col_name] to collect or delete histogram information.

14, session-level SET_VAR hints to dynamically adjust certain parameters for a single statement, which helps improve statement performance:

SELECT /*+ SET_VAR(sort_buffer_size = 16M) */ id FROM test ORDER BY id;

INSERT /*+ SET_VAR(foreign_key_checks=OFF) */ INTO test(name) VALUES(1);

15, adjusted default parameters

The default value of back_log now follows max_connections, improving the ability to handle bursts of connections.

event_scheduler now defaults to ON; previously it was disabled by default.

The default value of max_allowed_packet increased from 4M to 64M.

bin_log and log_slave_updates now default to on.

The binlog retention (expire_logs_days) default increased from 7 to 30 days. In production, check this parameter so the binlog doesn't consume too much space.

innodb_undo_log_truncate now defaults to ON.

The default value of innodb_undo_tablespaces is now 2.

innodb_max_dirty_pages_pct_lwm now defaults to 10.

The default value of innodb_max_dirty_pages_pct is now 90.

innodb_autoinc_lock_mode now defaults to 2.

16, InnoDB performance improvements

The buffer pool mutex was split into multiple mutexes to increase concurrency.

Splitting the LOCK_thd_list and LOCK_thd_remove mutexes increases threading efficiency by approximately 5%.

17, row buffer

The MySQL 8.0 optimizer can estimate the number of rows to be read, so it can give the storage engine an appropriately sized row buffer for the required data. Large continuous data scans benefit from the larger record buffer.

18, improved scan performance

Improved InnoDB range-query performance raises full-table and range-query performance by 5-20%.

19, cost model

InnoDB can estimate how much of a table or index is already in the buffer pool, which lets the optimizer choose an access method knowing whether the data can be read from memory or must come from disk.

20, refactored SQL parser

The SQL parser was improved. The old parser had serious limitations due to its grammatical complexity and top-down approach, which made it difficult to maintain and extend.

Second, features deprecated in MySQL 8.0

  • Deprecated the validate_password plugin
  • Deprecated JSON_MERGE() -> use JSON_MERGE_PRESERVE() instead
  • Deprecated the have_query_cache system variable

Third, features removed in MySQL 8.0

The query cache was removed, along with its related system variables.

mysql_install_db is replaced by mysqld --initialize or --initialize-insecure.

The INNODB_LOCKS and INNODB_LOCK_WAITS tables under INFORMATION_SCHEMA were deleted, replaced by the Performance Schema data_locks and data_lock_waits tables.


InnoDB no longer supports compressed temporary tables.

The PROCEDURE ANALYSE() syntax is no longer supported.

The InnoDB INFORMATION_SCHEMA views were renamed (old name -> new name).

Removed server options:

Removed configuration options:

Removed or renamed system variables:

information_schema_stats -> information_schema_stats_expiry
time_format
global.sql_log_bin (session.sql_log_bin is kept)
log_warnings -> log_error_verbosity
tx_isolation -> transaction_isolation
tx_read_only -> transaction_read_only
query_cache_limit
query_cache_min_res_unit
query_cache_size
query_cache_type
innodb_undo_logs -> innodb_rollback_segments

Removed status variables:

Slave_retried_transactions, Slave_running
Innodb_available_undo_logs

Removed functions:

Removed client options:

--ssl and --ssl-verify-server-cert were deleted; use --ssl-mode=DISABLED | REQUIRED | VERIFY_IDENTITY instead.




MySQL 8 official version 8.0.11 has been released. Officially, MySQL 8 is claimed to be up to 2x faster than MySQL 5.7, and it also brings many improvements.

The following is a record of my installation on 2018-04-23. The whole process takes about an hour; the make && make install step takes the longest.

I. Environment

CentOS 7.4 64-bit Minimal Installation

II. Preparation

1. Install dependencies

yum -y install wget cmake gcc gcc-c++ ncurses ncurses-devel libaio-devel openssl openssl-devel

2. Download the source package

wget https://cdn.mysql.com//Downloads/MySQL-8.0/mysql-boost-8.0.11.tar.gz  (this version bundles boost)

3. Create the mysql user

groupadd mysql
useradd -r -g mysql -s /bin/false mysql

4. Create the installation and data directories

mkdir -p /usr/local/mysql
mkdir -p /data/mysql

III. Install MySQL 8.0.11

1. Extract the source package

tar -zxf mysql-boost-8.0.11.tar.gz -C /usr/local

2. Compile & install

cd /usr/local/mysql-8.0.11
cmake . -DCMAKE_INSTALL_PREFIX=/usr/local/mysql -DMYSQL_DATADIR=/usr/local/mysql/data -DSYSCONFDIR=/etc -DMYSQL_TCP_PORT=3306 -DWITH_BOOST=/usr/local/mysql-8.0.11/boost
make && make install

3. Configure the my.cnf file

cat /etc/my.cnf

## add parameters according to your actual situation

4. Fix directory permissions

chown -R mysql:mysql /usr/local/mysql
chown -R mysql:mysql /data/mysql
chmod 755 /usr/local/mysql -R
chmod 755 /data/mysql -R

5. Initialization

bin/mysqld --initialize --user=mysql --datadir=/data/mysql/

6. Start mysql

bin/mysqld_safe --user=mysql &

7. Change the root password

bin/mysql -uroot -p
mysql> alter user 'root'@'localhost' identified by '123456';

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.00 sec)

## Add a remote account

mysql> create user root@'%' identified by '123456';
Query OK, 0 rows affected (0.08 sec)

mysql> grant all privileges on *.* to root@'%';
Query OK, 0 rows affected (0.04 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.01 sec)

8. Create a soft link (optional)

ln -s /usr/local/mysql/bin/* /usr/local/bin/

mysql -h <host> -P 3306 -uroot -p123456 -e "select version();"
mysql: [Warning] Using a password on the command line interface can be insecure.
+-----------+
| version() |
+-----------+
| 8.0.11    |
+-----------+

9. Add to startup (optional)

cp support-files/mysql.server /etc/init.d/mysql.server

Note: MySQL officially recommends installing from binaries (per the official documentation).

Nginx load balancing and configuration


1 Load Balancing Overview

Load balancing arose because a server that receives a large volume of traffic in a short time comes under great pressure, and when the load exceeds its capacity the server crashes. To avoid crashes and give users a better experience, load balancing was born to share the pressure across servers.

Load balancing is essentially built on the reverse-proxy principle. It is a technique for optimizing server resources and handling high concurrency: it balances the pressure across servers, reduces user request wait time, and provides fault tolerance. Nginx is commonly used as an efficient HTTP load balancer to distribute traffic across multiple application servers, improving performance, scalability, and availability.

Principle: a large number of servers on the internal network form a server cluster. When a user visits the site, the request first reaches a public-facing intermediate server, which dispatches it to an intranet server according to the chosen algorithm. This keeps the pressure on each server in the cluster roughly balanced, sharing the load and avoiding the collapse of any single server.

The nginx reverse proxy implementation includes the following load balancing HTTP, HTTPS, FastCGI, uwsgi, SCGI, and memcached.
To configure HTTPS load balancing, simply use 'https' as the protocol.
When you want to set up load balancing for FastCGI, uwsgi, SCGI, or memcached, use the fastcgi_pass, uwsgi_pass, scgi_pass, and memcached_pass directives, respectively.

2 Common Balancing Mechanisms of Load Balancing

1 round-robin: requests are distributed to the servers in turn. Each request is assigned to a different back-end server in order; if a back-end server goes down, it is removed automatically so that service continues normally.

Configuration 1:
upstream server_back {   # nginx distributes service requests
    server srv1.rmohan.com;
    server srv2.rmohan.com;
}

Configuration 2:
http {
    upstream servergroup {   # the service group accepts requests; nginx distributes them round-robin
        server srv1.rmohan.com;
        server srv2.rmohan.com;
        server srv3.rmohan.com;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://servergroup;   # all requests are proxied to the servergroup service group
        }
    }
}
proxy_pass is followed by the proxy server address: an IP, a hostname, a domain name, or ip:port.
upstream sets the list of back-end servers for load balancing.
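The polling behaviour can be illustrated outside nginx with a few lines of shell. The server names mirror the upstream block above, and the modulo arithmetic stands in for nginx's internal request counter (a simplification, not nginx's actual code):

```shell
# Simulate round-robin: six successive requests cycle through the
# three upstream servers in order.
servers=(srv1.rmohan.com srv2.rmohan.com srv3.rmohan.com)
picked=()
for req in 0 1 2 3 4 5; do
  picked+=("${servers[$((req % 3))]}")
done
echo "${picked[*]}"
```

Requests 1 and 4 both land on srv1, 2 and 5 on srv2, and so on, which is exactly the "assigned in chronological order" behaviour described above.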

2 Weighted load balancing: if no weight is configured, each server carries the same load. When server performance is uneven, weighted polling is used: the weight parameter sets the share of the load that the server receives; the greater the weight, the greater the load.
upstream server_back {
    server srv1.rmohan.com weight=3;
    server srv2.rmohan.com weight=7;
}
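The effect of the weights is simple arithmetic: a server with weight w receives w divided by the sum of all weights of the requests. A quick check for the weights 3 and 7 used above:

```shell
# With weights 3 and 7, the first server gets 3/10 = 30% of requests
# and the second gets 7/10 = 70%.
w1=3; w2=7
total=$((w1 + w2))
share1=$((100 * w1 / total))
share2=$((100 * w2 / total))
echo "${share1}% ${share2}%"
```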

3 least-connected: the next request is assigned to the server with the fewest active connections. When some requests take longer to respond, least-connected balances the load across application instances more fairly: nginx forwards the request to a less-loaded server.
upstream servergroup {
    least_conn;
    server srv1.rmohan.com;
    server srv2.rmohan.com;
    server srv3.rmohan.com;
}

4 ip-hash: based on the client IP address. With the mechanisms above, each request may land on a different server in the cluster, so a user who has logged in on one server can be relocated to another and lose the login state, which is clearly not acceptable. ip_hash solves this problem: once a client has accessed a server, subsequent requests from that client are routed to the same server by the hash algorithm.

Each request is assigned according to the result of hashing the client IP, so requests from one client stick to a fixed back-end server, which also solves the session problem.
upstream servergroup {
    ip_hash;
    server srv1.rmohan.com;
    server srv2.rmohan.com;
    server srv3.rmohan.com;
}
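The idea behind ip_hash can be sketched with a toy hash. Note this is only an illustration of "hash the client IP, map it to a backend": nginx's real algorithm hashes the first three octets of the IPv4 address differently.

```shell
# Toy ip-hash: sum the octets of the client IP and take the result
# modulo the number of backends, so the same IP always maps to the
# same server.
ip="203.0.113.42"
sum=0
for octet in $(echo "$ip" | tr '.' ' '); do
  sum=$((sum + octet))
done
backend=$((sum % 3 + 1))
echo "srv${backend}.rmohan.com"
```

Because the mapping depends only on the IP, repeated requests from 203.0.113.42 always reach the same srv host, which is what keeps the session alive.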

Attach an example:

#user nobody;
worker_processes 4;

events {
    # maximum number of concurrent connections per worker
    worker_connections 1024;
}

http {
    # the list of back-end servers
    upstream myserver {
        # add the ip_hash directive here to bring the same user to the same server
        server srv1.rmohan.com max_fails=3 fail_timeout=60s;   # taken out of rotation for 60s after max_fails failures
        server srv2.rmohan.com max_fails=3 fail_timeout=60s;
    }

    server {
        # listening port
        listen 80;
        location / {
            # select which server list to use
            proxy_pass http://myserver;
        }
    }
}

max_fails: the number of request failures allowed; defaults to 1
fail_timeout=60s: how long the server is considered unavailable after it fails
down: marks the current server as not participating in the load
backup: requests go to backup machines only when all non-backup
machines are busy, so their load is the lightest.

Solution Architect Associate

1. Messaging
2. Desktop and App Streaming
3. Security and Identity
4. Management Tools
5. Storage
6. Databases
7. Networking & Content Delivery
8. Compute
9. AWS Global Infrastructure

10. Migration
11. Analytics
12. Application services
13. Developer Tools

1. Messaging:
SNS - Simple Notification Service (Text, Email, Http)
SQS - Simple queue Service 

3. Security and Identity:
IAM - Identity Access Management
Inspector - Agent installed on a VM, provides security reports
Certificate Manager - Provides free certificates for domain names.
Directory Services - Provides Microsoft Active Directory
WAF - Web Application Firewall (application-layer security: cross-site scripting / SQL injection)
Artifact - Get compliance certificates: ISO, PCI, HIPAA

4. Management Tools:
Cloud Watch - Performance of the AWS environment (disk, CPU, RAM utilization)
Cloud Formation - Turn your AWS infrastructure into code. (Document)
Cloud Trail - Auditing your AWS environment
OpsWorks - Automated deployment using Chef
Config - Monitor your environment, set alerts
Service Catalog - Catalogs services authorized by the enterprise
Trusted Advisor - Scans the environment and suggests performance optimizations, security optimizations, and fault-tolerance improvements

5. Storage
S3 - Simple Storage Service: object-based storage (like Dropbox)
Glacier - Not instant access; 4-5 hours to recover archived files.
EFS - Elastic File System - file storage service
Storage Gateway - Not in exam

6. Databases
RDS - Relational Database Service (SQL Server, MySQL, MariaDB, PostgreSQL, Aurora, and Oracle)
DynamoDB - Non-relational database (high performance, scalable)
RedShift - Data warehouse service (huge data queries)
ElastiCache - Caching data in the cloud (quicker than fetching from the database)

7. Networking & Content Delivery:
VPC - Virtual Private Cloud
Route 53 - Amazon's DNS service
Cloud Front - Content Delivery Network
Direct Connect - Connect your data center over a dedicated physical line

8. Compute
EC2 - Elastic Compute Cloud
EC2 Container Services - Not in Exam
Elastic Beanstalk - developer's code on an infrastructure that is automatically provisioned to host that code
Lambda - allows you to run code without having to worry about provisioning any underlying resources.
Lightsail - New Service

9. AWS Global Infrastructure :
14 Regions and 38 Availability Zones
4 more Regions and 11 more Availability Zones in 2017
66 Edge Locations
Regions - Physical geographical areas (an independent collection of AWS computing resources in a defined geography.)
Availability Zones - Logical data centers (distinct locations within an AWS region that are engineered to be isolated from failures.)
Edge Location - Content Delivery Network (CDN) endpoints for CloudFront (very large media objects)
10. Migration:
Snowball - Attach disks and transfer data into the cloud, e.g. to S3
DMS - Database Migration Service (migrate Oracle / SQL Server / MySQL to the cloud)
SMS - Server Migration Service (migrate VMware to the cloud)

11. Analytics:
Athena - SQL queries on S3
EMR - Elastic MapReduce is specifically designed to assist you in processing large data sets
	Big data processing (big data, log analysis, analyzing financial markets)
Cloud Search - Fully managed search service
Elastic Search - Open source
Kinesis - Process terabits of data and analyze it (financial transactions, social-media sentiment analysis)
Data Pipeline - Move data from S3 to DynamoDB and vice versa
Quick Sight - Business analytics tool.

12. Application services:
Step Functions - Coordinate the microservices used by your applications
SWF - Simple Workflow Service (coordinate physical and automated tasks)
API Gateway - Publish, monitor, and scale APIs
AppStream - Streaming desktop applications.
Elastic Transcoder - Helps run video on different form factors and resolutions

13. Developer Tools:
CodeCommit - Git in the cloud
CodeBuild - Compiling the code
CodeDeploy - A way to deploy code to EC2 instances
CodePipeline - Track different versions of code through UAT

Amazon Web Services (AWS)

Amazon Web Services (AWS)

  • Extensive set of cloud services available via the Internet
  • On-demand, virtually endless, elastic resources
  • Pay-per-use with no up-front costs (with optional commitment)
  • Self-serviced and programmable




Elastic Compute Cloud (EC2)

  • One of the core services of AWS
  • Virtual machines (or instances) as a service
  • Dozens of instance types that vary in performance and cost
  • Instance is created from an Amazon Machine Image (AMI), which in turn can be created again from instances




Regions and Availability Zones (AZ)

Notes: We will only use Ireland (eu-west-1) region in this workshop. See also A Rare Peek Into The Massive Scale of AWS.

Networking in AWS

Exercise: Launch an EC2 instance

  1. Log-in to gofore-crew.signin.aws.amazon.com/console
  2. Switch to Ireland region and go to EC2 dashboard
  3. Launch a new EC2 instance according to the instructor's guidance
  • In “Configure Instance Details”, pass a User Data script under Advanced
  • In “Configure Security Group”, use a recognizable, unique name

#!/bin/bash
# When passed as User Data, this script will be run on first boot
touch /new_empty_file_we_created.txt
echo "It works!" > /it_works.txt

Exercise: SSH into the instance

SSH into the instance (find the IP address in the EC2 console)

# Windows Putty users must convert key to .ppk (see notes)
ssh -i your_ssh_key.pem ubuntu@instance_ip_address

View instance metadata


View your User Data and find the changes your script made

ls -la /

Notes: You will have to reduce keyfile permissions chmod og-xrw mykeyfile.pem. If you are on Windows and use Putty, you will have to convert the .pem key to .ppk key using puttygen (Conversions -> Import key -> *.pem file -> Save private key. Now you can use your *.ppk key with Putty: Connection -> SSH -> Auth -> Private key file)

Exercise: Security groups

Setup a web server that hosts the id of the instance

mkdir ~/webserver && cd ~/webserver
curl http://169.254.169.254/latest/meta-data/instance-id > index.html
python -m SimpleHTTPServer

Configure the security group of your instance to allow inbound requests to your web server from anywhere. Check that you can access the page with your browser.
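Before opening the security group, the server can be smoke-tested locally. A minimal reproduction (assumptions: python3 is available, and its http.server module replaces the older SimpleHTTPServer; the instance id here is a made-up stand-in, and port 8123 is arbitrary):

```shell
# Serve a fake instance id on a local port and fetch it back,
# mimicking the exercise without an EC2 instance.
mkdir -p /tmp/webserver && cd /tmp/webserver
echo "i-0123456789abcdef0" > index.html   # stand-in for the real instance id
python3 -m http.server 8123 >/dev/null 2>&1 &
srv=$!
sleep 1
body=$(curl -s http://127.0.0.1:8123/index.html)
kill $srv
echo "$body"
```

If the same request fails from your browser once the server is on EC2, the security group (not the web server) is the usual culprit.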

Exercise: Security groups

Delete the previous rule. Ask a neighbor for the name of their security group, and allow requests to your server from your neighbor’s security group.

Have your neighbor access your web server from his/her instance.

# Private IP address of the web server (this should work)
curl 172.31.???.???:8000
# Public IP address of the web server (how about this one?)
curl 52.??.???.???:8000

Speaking of IP addresses, there is also Elastic IP Address. Later on, we will see use cases for this, as well as better alternatives.

Also, notice the monitoring metrics. These come from CloudWatch. Later on, we will create alarms based on the metrics.

Elastic Block Store (EBS)

  • Block storage service (virtual hard drives) with speed and encryption options
  • Disks (or volumes) are attached to EC2 instances
  • Snapshots can be taken from volumes
  • Alternative to EBS is ephemeral instance store

EC2 cost

Identity and Access Management

Identity and Access Management (IAM)

Notes: Always use roles inside instances (do not store credentials there), or something bad might happen.

Quiz: Users on many levels

Imagine running a content management system, discussion board, or blog web application in EC2. How many different types of user accounts might you have?

Virtual Private Cloud

Virtual Private Cloud (VPC)

  • Heavy-weight virtual IP networking for EC2 and RDS instances. Integral part of modern AWS, all instances are launched into VPCs (not true for EC2-classic)
  • An AWS root account can have many VPCs, each in a specific region
  • Each VPC is divided into subnets, each bound to an availability zone
  • Each instance connects to a subnet with an Elastic Network Interface





VPC with Public and Private Subnets

Access Control Lists





Auto Scaling



Provisioning capacity as needed

  • Changing the instance type is vertical scaling (scale up, scale down)
  • Adding or removing instances is horizontal scaling (scale out, scale in)
  • 1 instance 10 hours = 10 instances 1 hour
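The last bullet is the core of per-hour cloud pricing: scaling out costs the same as running longer. Checking the arithmetic with a hypothetical rate of 10 cents per instance-hour:

```shell
# 1 instance x 10 hours vs 10 instances x 1 hour, at 10 cents/hour.
rate_cents=10
cost_long=$((1 * 10 * rate_cents))   # one instance, ten hours
cost_wide=$((10 * 1 * rate_cents))   # ten instances, one hour
echo "${cost_long} cents vs ${cost_wide} cents"
```

Both come to the same total, so horizontal scaling buys you a 10x faster result for the same money.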

Auto Scaling instances

  • Launch Configuration describes the configuration of the instance. Having a good AMI and bootstrapping is crucial.
  • Auto Scaling Group contains instances whose lifecycles are automatically managed by CloudWatch alarms or schedule
  • Scaling Plan refers to when scaling happens and what triggers it.

Scaling Plans

  • Maintain current number of instances
  • Manual scaling by user interaction or via API
  • Scheduled scaling
  • Dynamic Auto Scaling. A scaling policy describes how the group scales in or out. You should always have policies for both directions. Policy cooldowns control the rate at which scaling happens.

Auto Scaling Group Lifecycle

Auto Scaling Group Lifecycle

Elastic Load Balancer

  • Route traffic to an Auto Scaling Group (ASG)
  • Runs health checks on instances to decide whether to route traffic to them
  • Spread instances over multiple AZs for higher availability
  • ELB scales itself. Never use the ELB IP address; use its DNS name. Pre-warm before flash traffic.


Public networking

Route 53

  • Domain Name System (DNS)
  • Manage DNS records of hosted zones
  • Round Robin, Weighted Round Robin and Latency-based routing


CloudFront

  • Content Delivery Network (CDN)
  • Replicate static content from S3 to edge locations
  • Also supports dynamic and streaming content

EC2 instance

chmod 400 SeniorServer.pem

Server: ssh -i "SeniorServer.pem" ec2-user@ec2-54-149-37-172.us-west-2.compute.amazonaws.com
password for root:qwe123456

server2: ssh -i "senior_design_victor.pem" ec2-user@ec2-54-69-160-179.us-west-2.compute.amazonaws.com

controller: ssh -i "zheng.pem" ec2-user@ec2-52-34-59-51.us-west-2.compute.amazonaws.com

DB: mysql -h seniordesign.c9btkcvedeon.us-west-2.rds.amazonaws.com -P 3306 -u root -p
username: root
password: qwe123456

use command "screen" to keep running server codes and controller codes on AWS
screen  #create a new screen session
screen -ls  #check running screens
screen -r screenID   #resume to a screen
screen -X -S screenID kill   #end a screen

create table transaction (username varchar(20), history varchar(20));
insert into property (username,password,money) values ("client1","123",100);

To create more replicated server/db:

Master (RDS) - all in mysql, no Bash - remember to remove [] from statements
	1. Create new slave
	2. Give it access 

Slave (Server) - 
	1. [Bash] Import the database from master
		mysqldump -h seniordesign.c9btkcvedeon.us-west-2.rds.amazonaws.com -u root -p senior_design > dump.sql

	2. [Bash] Import the dump.sql into your database 
		mysql senior_design < dump.sql

	3. [Bash] Edit /etc/my.cnf - will require root access; add the following lines
	**Remember to keep server-id unique (currently using 10, so next is 11, etc...)
		# Give the server a unique ID
		server-id               = #CHANGE THIS NUMBER#
		# Ensure there's a relay log
		relay-log               = /var/lib/mysql/mysql-relay-bin.log
		# Keep the binary log on
		log_bin                 = /var/lib/mysql/mysql-bin.log
		replicate_do_db            = senior_design

	4. [Bash] Restart mysqld
		service mysqld restart

Master-Slave Connection Creation
	1. On master (RDS) - type
		show master status;
		** We will need to keep note of File and Position
		| File                       | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
		| mysql-bin-changelog.010934 |      400 |              |                  |                   |
	2. On the slave, enter mysql, then run the CHANGE MASTER TO statement shown further down, using the File and Position values noted above
	3. On the slave, enter "START SLAVE;"
	4. Make sure the slave started - "SHOW SLAVE STATUS\G;"
	5. You can always triple check by adding a new row to senior_design in master then see if it updates in slave.

	- If for some reason you mess up the slave in step 2:
		[mysql] on the slave side:
			reset slave;
		then repeat steps 2 - 5
	- If SHOW SLAVE STATUS\G shows an error:
		stop slave; set global sql_slave_skip_counter = 1; start slave;
		The error should be gone, but this will only skip the error; the error may still re-appear

use senior_design;
select count(*) from property;

Slave user pass:
Slave 1 - username: slave1 pass: slave1pwd
Slave 2 - username: slave2 pass: [slave2]

CHANGE MASTER TO MASTER_HOST='seniordesign.c9btkcvedeon.us-west-2.rds.amazonaws.com', MASTER_USER='slave1', MASTER_PASSWORD='slave1pwd', MASTER_LOG_FILE='mysql-bin-changelog.011030', MASTER_LOG_POS= 400;

CHANGE MASTER TO MASTER_HOST='seniordesign.c9btkcvedeon.us-west-2.rds.amazonaws.com', MASTER_USER='slave2', MASTER_PASSWORD='[slave2]', MASTER_LOG_FILE='mysql-bin-changelog.011030', MASTER_LOG_POS= 400;

Redis install


$ wget http://download.redis.io/redis-stable.tar.gz
$ tar xvzf redis-stable.tar.gz
$ cd redis-stable
$ make
$ make test # optional
# yum install wget tcl
# wget http://download.redis.io/releases/redis-3.2.5.tar.gz
# tar xzf redis-3.2.5.tar.gz
# cd redis-3.2.5
# make
# make test
$ sudo make install
$ sudo cp src/redis-server /usr/local/bin/
$ sudo cp src/redis-cli /usr/local/bin/
$ sudo mkdir /etc/redis
$ sudo mkdir /var/redis
$ sudo cp utils/redis_init_script /etc/init.d/redis_6379
$ sudo cp redis.conf /etc/redis/6379.conf
$ sudo mkdir /var/redis/6379
$ sudo update-rc.d redis_6379 defaults # OR sudo chkconfig --add redis_6379
$ sudo /etc/init.d/redis_6379 start

RESP (REdis Serialization Protocol)
In RESP, the type of a payload is determined by its first byte:
Simple Strings "+"
Errors "-"
Integers ":"
Bulk Strings "$"
Arrays "*"
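A client command is itself sent as a RESP Array of Bulk Strings. Framing a bare PING by hand shows the byte layout: the `*1` array header, the `$4` bulk-string length prefix, and CRLF terminators throughout.

```shell
# Hand-build the RESP encoding of the PING command:
#   *1\r\n$4\r\nPING\r\n
CRLF=$'\r\n'
msg="*1${CRLF}\$4${CRLF}PING${CRLF}"
bytes=$(printf '%s' "$msg" | wc -c | tr -d ' ')
first=${msg:0:1}
echo "first byte: $first, total bytes: $bytes"
```

The leading `*` marks the payload as an Array, matching the type table above; the whole frame is 14 bytes.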
Redis append-only file feature (AOF)

$ redis-server # start server
/etc/init.d/redis_PORT start
$ redis-cli [-p PORT] shutdown # stop server
/etc/init.d/redis_PORT stop
$ redis-cli ping # check if working
$ redis-cli --stat [-i interval] # continuous stats mode
$ redis-cli --bigkeys # scan for big keys
$ redis-cli [-p port] --scan [--pattern REGEX] # get a list of keys
$ redis-cli [-p port] monitor # monitor commands
$ redis-cli [-p port] --latency # monitor latency of instances
$ redis-cli [-p port] --latency-history [-i interval]
$ redis-cli [-p port] --latency-dist # spectrum of latencies
$ redis-cli --intrinsic-latency [test-time] # latency of the local system
$ redis-cli --intrinsic-latency 5
$ redis-cli --rdb <dest-filename> # remote RDB backup ($?=0 success)
$ redis-cli --rdb /tmp/dump.rdb
$ redis-cli --slave # slave mode (monitor master -> slave writes)
$ redis-cli --lru-test 10000000 # Least Recently Used (LRU) simulation
# used to help configure 'maxmemory' for LRU
$ redis-cli save # save dump file (dump.rdb) to $dir
$ redis-cli select <DB_NUMBER> # select DB
$ redis-cli dbsize # show size of DB
$ redis-cli connect <SERVER> <PORT> # connect to different servers/ports
$ redis-cli debug restart
$ redis-cli --version
$ redis-cli pubsub channels [PATTERN]
$ redis-cli pubsub numsub [channel1 ... channelN]
$ redis-cli subscribe/psubscribe/publish
$ redis-cli slowlog get [N]|len
a. unique progressive identifier for every slow log entry.
b. timestamp at which the logged command was processed.
c. amount of time needed for its execution, in microseconds.
d. array composing the arguments of the command.
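Field (c) is in microseconds, which is easy to misread as milliseconds. Converting the default `slowlog-log-slower-than` threshold of 10000 (visible in the config further down) makes the scale clear:

```shell
# 10000 microseconds is only 10 milliseconds: queries slower than
# 10 ms land in the slowlog by default.
threshold_usec=10000
threshold_msec=$((threshold_usec / 1000))
echo "${threshold_usec} us = ${threshold_msec} ms"
```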

config file: /etc/redis/6371.conf
dbfilename dump.rdb
dir /var/lib/redis/6371
pidfile /var/run/redis/6371.pid
DB saved to:

--raw, --no-raw

redis.conf # well documented
default ports: 6379 / 16379 (cluster mode) / 26379 (Sentinel)

daemonize no
pidfile /var/run/redis_6379.pid
port 6379
loglevel info
logfile /var/log/redis_6379.log
dir /var/redis/6379

keyword argument1 argument2 … argumentN

slaveof <master-ip> 6380
requirepass "hello world"
maxmemory 2mb
maxmemory-policy allkeys-lru
masterauth <password>
daemonize no # when run under daemontools


vm.overcommit_memory = 1 # sysctl vm.overcommit_memory=1
echo never > /sys/kernel/mm/transparent_hugepage/enabled

# passing arguments via CLI
$ ./redis-server --port 6380 --slaveof <master-ip> 6379

redis> config get GLOB
redis> config set slaveof 6380
redis> config rewrite

— —
Actual config

daemonize yes
pidfile /var/run/redis/6371.pid
port 6371
tcp-backlog 511
timeout 0
tcp-keepalive 0
loglevel notice
logfile ""
syslog-enabled yes
syslog-ident redis
syslog-facility USER
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /var/lib/redis/6371
slaveof <master-ip> 6371 # only on slave(s)
slave-serve-stale-data yes
slave-read-only yes
repl-disable-tcp-nodelay no
slave-priority 100
maxclients 10000
maxmemory-policy noeviction
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes

redis> config set masterauth <password>
persistence = enabled OR automatic-restarts = disabled

slaveof <master-ip> 6379
masterauth <password> # config set masterauth <password>
min-slaves-to-write <number of slaves>
min-slaves-max-lag <number of seconds>
slave-announce-port 1234

Redis Sentinel (26379)
– Monitoring
– Notification
– Automatic failover
– Configuration provider

redis-sentinel /path/to/sentinel.conf
redis-server /path/to/sentinel.conf --sentinel

# typical minimal config

# sentinel monitor <master-group-name> <ip> <port> <quorum>
# sentinel down-after-milliseconds <master-name> <milliseconds> # default 30 secs
# sentinel failover-timeout <master-name> <milliseconds> # default 3 minutes
# sentinel parallel-syncs <master-name> <numslaves>

# example typical minimal config:

sentinel monitor mymaster <master-ip> 6379 2
sentinel down-after-milliseconds mymaster 60000
sentinel failover-timeout mymaster 180000
sentinel parallel-syncs mymaster 1
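The quorum of 2 above interacts with the sentinel count: quorum sentinels must agree that the master is down before it is flagged objectively down, but a majority of all sentinels must then authorize the actual failover. With three sentinels, both thresholds happen to be 2:

```shell
# quorum  = sentinels that must agree the master is unreachable
# majority = sentinels needed to authorize the failover itself
sentinels=3
quorum=2
majority=$((sentinels / 2 + 1))
echo "quorum=${quorum} majority=${majority}"
```

This is why an odd number of sentinels (at least 3) is the usual deployment: with only 2, the majority is still 2, so losing either sentinel blocks failover entirely.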

# additional configs

# bind
# protected-mode no
# sentinel announce-ip <ip>
# sentinel announce-port <port>
# dir <working-directory>
# syntax: sentinel <option_name> <master_name> <option_value>
# sentinel auth-pass <master-name> <password>
# sentinel down-after-milliseconds <master-name> <milliseconds> # default 30 secs
# sentinel parallel-syncs <master-name> <numslaves>
# sentinel failover-timeout <master-name> <milliseconds> # default 3 minutes
# sentinel notification-script <master-name> <script-path>
# passed: <event type> <event description>
# sentinel client-reconfig-script <master-name> <script-path>
# passed: <master-name> <role> <state> <from-ip> <from-port> <to-ip> <to-port>
# <state> is currently always "failover"
# <role> is either "leader" or "observer"
— — —
# actual config

# redis-sentinel 2.8.9 configuration file
# sentinel_26371.conf
daemonize no
dir "/var/lib/redis/sentinel_26371"
pidfile "/var/run/redis/sentinel_26371.pid"
port 26371
sentinel monitor iowa_master_staging <master-ip> 6375 2
sentinel config-epoch iowa_master_staging 0
sentinel leader-epoch iowa_master_staging 0
sentinel known-slave iowa_master_staging <slave-ip> 6375
logfile ""
syslog-enabled yes
syslog-ident "sentinel_26371"
syslog-facility user
# sentinel messages/events

+monitor master <group-name> <ip> quorum <N>

# Testing
$ redis-cli -p PORT
> SENTINEL master mymaster # info about master
> SENTINEL slaves mymaster # info about slave(s)
> SENTINEL sentinels mymaster # info about sentinel(s)
> SENTINEL get-master-addr-by-name mymaster # get address of master
$ redis-cli -p 6379 DEBUG sleep 30 # simulate master hanging

SENTINEL masters # get list of monitored masters and their state
SENTINEL master <master name>
SENTINEL slaves <master name>
SENTINEL sentinels <master name>
SENTINEL get-master-addr-by-name <master name>
SENTINEL reset <pattern> # reset all masters matching pattern
SENTINEL failover <master name> # force a failover
SENTINEL ckquorum <master name> # check if current config is able to failover
SENTINEL flushconfig # rewrite config file
SENTINEL monitor <name> <ip> <port> <quorum> # start monitoring a new master
SENTINEL remove <name> # stop monitoring master
SENTINEL SET <name> <option> <value>

# Commands

Redis Cluster
port 7000
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes

redis1:$ WTFI redis-cli => /nb/redis/bin/redis-cli

redis-cli -h <hostname> -p <port> -r <repeat (-1=forever)> -i <interval (secs)> -n <DB_NUM> <COMMAND>
redis-cli -p 6371|26371 info [server|clients|memory|persistence|stats|replication|cpu|keyspace|sentinel]
redis-cli -p 6371 ping
Upgrading or restarting a Redis instance without downtime
Check out: https://redis.io/topics/admin (bottom)
for p in $(grep ^port /etc/redis/* | awk '{print $NF}'); do echo "---- port: $p ----"; /nb/redis/bin/redis-cli -p $p info | grep stat; done

========== tool (redis_monit.sh) [begin] ==========
# get status of redis servers
# get the list of ports configured
ports=$(ls /etc/redis/*.conf | tr -d '[a-z/.]')
for port in $ports; do
    echo "---- port: $port ----"
    if [ -e /etc/redis/$port.conf ]; then
        $REDIS_CLI_CMD -p $port info | grep stat
    else
        echo "no config (/etc/redis/$port.conf)"
    fi
done
========== tool (redis_monit.sh) [end] ==========

# update the monitored masters - "live"
sentinel monitor mymaster <master-ip> 6379 2
sentinel monitor redis2 <master-ip> 6379 2
sentinel failover redis1
sentinel masters
sentinel slaves redis1

# manual failover

$ redis-cli -p 7002 debug segfault

s3 aws instance

creating a bucket:
S3 > Create bucket > unique name + region > create
bucket > select > upload > upload file or drag n drop

Backup Files to Amazon S3 using the AWS CLI
Step 1: create login for aws console:
IAM > Users > Create > username: AWS_Admin > Permissions > Attach policy > AdministratorAccess
> Manage password > Auto generated, uncheck require password change > apply
> Download credentials > credentials.csv

Step 2: install and configure aws cli
download AWSCLI64.msi > install > windows run > cmd

Type aws configure and press enter. Enter the following when prompted:

AWS Access Key ID [None]: enter the Access Key Id from the credentials.csv file you downloaded in step 1 part d

Note: this should look something like AKIAPWINCOKAO3U4FWTN
AWS Secret Access Key [None]: enter the Secret Access Key from the credentials.csv file you downloaded in step 1 part d

Note: this should look something like 5dqQFBaGuPNf5z7NhFrgou4V5JJNaWPy1XFzBfX3

Default region name [None]: enter us-east-1
Default output format [None]: enter json

Step 3: Using the AWS CLI with Amazon S3
a. Creating a bucket is optional if you already have a bucket created that you want to use.
To create a new bucket named my-first-backup-bucket type aws s3 mb s3://my-first-backup-bucket

Note: bucket naming has some restrictions; one of them is that bucket names must be globally unique (i.e. two different AWS users cannot have the same bucket name);
because of this, if you try the command above you will get a BucketAlreadyExists error.
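A common way around the global-uniqueness restriction is to append a suffix that is unlikely to collide; the date-plus-random scheme here is just one example, not an AWS requirement:

```shell
# Derive a likely-unique bucket name from a date plus a random number.
suffix="$(date +%Y%m%d)-$RANDOM"
bucket="my-first-backup-bucket-${suffix}"
echo "$bucket"
```

You would then create it with `aws s3 mb s3://$bucket`.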

b. To upload the file my-first-backup.bak located in the local directory to the S3 bucket my-first-backup-bucket,
you would use the following command:
aws s3 cp my-first-backup.bak s3://my-first-backup-bucket/

c. To download my-first-backup.bak from S3 to the local directory we would reverse the order of the commands as follows:
aws s3 cp s3://my-first-backup-bucket/my-first-backup.bak ./

d. To delete my-first-backup.bak from your my-first-backup-bucket bucket, use the following command:
aws s3 rm s3://my-first-backup-bucket/my-first-backup.bak

additional commands
recursively copying local files to s3
aws s3 cp myDir s3://mybucket/ --recursive --exclude "*.jpg"

recursively remove files (with caution!)
aws s3 rm s3://mybucket/ --recursive --exclude "*.jpg"

list files:
aws s3 ls s3://mybucket

remove bucket:
$ aws s3 rb s3://bucket-name
$ aws s3 rb s3://bucket-name --force