
VPC - AWS

** 1 subnet == 1 AZ.
ACL = access control list
SN = subnet
IGW = internet gateway
CIDR = classless inter-domain routing - where we assign IP ranges
NAT = network address translation
------------------
internal ip address ranges
(rfc 1918)
10.0.0.0 -10.255.255.255 (10/8 prefix)
172.16.0.0 - 172.31.255.255 (172.16/12 prefix)
192.168.0.0 - 192.168.255.255 (192.168 / 16 prefix)
we will always use a /16 network address
--------------------
vpc - virtual private cloud
think of vpc as a logical data center.
you provision a section of the aws cloud in a virtual network. you can easily customize your network.

example - a public-facing subnet for webservers, and private-facing backend db servers with no internet connection.

you can create a hardware virtual private network (VPN) between your corporate datacenter and the VPC to leverage aws as an extension of the corporate datacenter (hybrid cloud)
--------------
what can you do with a VPC?

launch instances into a subnet
assign custom IP ranges in each subnet
configure route tables between subnets
create internet gateway. only 1 per VPC
better security over aws resources
security groups (are stateful - incoming rules are automatically allowed as outgoing)
subnet ACL (not stateful. everything needs to be configured)
------------------
default VPC 
default VPCs have a route out to the internet
each EC2 has a public and private ip
the only way to restore a default VPC is to contact aws.

security groups , ACL , default Route table are created by default.
subnets and IGW are not created by default.
---------
Peering
connect one VPC to another using private ip addresses. (not over the internet)
instances behave as if they are on same network.
can also peer a VPC with another AWS account's VPC within a SINGLE REGION
star configuration - 1 central VPC peers with 4 others. NO TRANSITIVE PEERING == the networks must be directly connected.
example: VPC-a is connected to VPC-b and VPC-c. VPC-b and VPC-c can't talk to each other through VPC-a (transitively); they must be directly peered.
CIDR blocks must not overlap between peered VPCs - VPC A with 10.0.0.0/16 can't peer with VPC B if it has 10.0.0.0/24 (that range falls inside VPC A's range)
----------------
NAT 
NAT instances - the traditional way to give an EC2 instance in a private subnet access to the internet for updates, installing databases, etc. We use an EC2 instance from the community AMIs (search for NAT).
remember to disable source/destination check on the instance.
must be in a public subnet
in the route table ensure there's a route out to the NAT instance. it's found in the default route table.
the bandwidth the NAT instance supports depends on the instance type.
to create high availability you need to use autoscaling groups, multiple subnets in different AZ and scripts to automate failover
(lots of work..)
need to set security group

NAT gateways - easier to manage and preferred. Scales automatically and there is no need to set security groups.
if a NAT instance goes down, so does our internet connection, but with NAT gateways AWS takes care of that automatically. supports bandwidth up to 10 Gbps

---------------
building a VPC step by step (not using the wizard); a CLI sketch of the same steps follows the list:

1. start vpc , your VPC, create VPC
2. name, CIDR block (our ip ranges. we used 10.0.0.0/16), tenancy (shared or dedicated hardware).
3. default route table, ACL, security groups are created
4. subnets-> create -> name, vpc (select the newly created one), AZ , CIDR (we used 10.0.1.0/24 which will give us 10.0.1.xxx)
5. create another subnet -> name, vpc (same as above), AZ (different than above), CIDR (10.0.2.0/24)
6. internet gateways ->create ->name
7. attach to vpc (select newly created vpc)
8. route tables -> (main route table is private by default)
9. create new table -> name, vpc
10. edit -> add route that is open to the internet
11. subnet associations -> edit -> select a subnet that will be the public one
12. subnets -> selected the public one -> actions -> modify autoassign public ip.
13. deploy 2 EC2 instances. one is a public web server (can use a script), one is a private sql server. (watch the auto-assign public ip setting.) for the private instance, create a new security group allowing ssh from 10.0.1.0/24 and mysql/aurora from 10.0.1.0/24
14. for the mysqlserver add another rule for all ICMP with same ip address - this allows ping.
15. copy the content of the privateKey.pem file.
16. ssh into the web server -> create (echo or nano) new privateKey.pem file and paste the content.
17. chmod 0600 the privateKey.pem file (gives only the owner read and write permission on that file).
18. ssh into sql server using the newly created file.
19. (NAT instance) launch an EC2 instance -> community AMIs -> search for nat
20. deploy into our VPC, and put into the public subnet.
21. use a public facing security group
22. actions -> networking -> change source/destination check->disable
23. VPC->route tables -> select the main route (nameless) -> add 0.0.0.0/0 target- our newly created EC2
 (?? associate public subnet )
24. (NAT gateway - replaces steps 19-23) VPC -> NAT gateways -> create -> subnet(public facing), elastic ip(create new EIP)
25. route tables -> main route table ->add 0.0.0.0/0 target - the newly created gateway.
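
The same steps can be scripted with the AWS CLI. This is only a sketch, assuming the CLI is already configured; the AZ names (us-east-1a/us-east-1b) are placeholders, and the resource IDs are captured into shell variables as each call returns them:

# 2. create the VPC with a /16 range
VPC_ID=$(aws ec2 create-vpc --cidr-block 10.0.0.0/16 --query 'Vpc.VpcId' --output text)
# 4-5. one public and one private subnet, in different AZs
PUB_SUBNET=$(aws ec2 create-subnet --vpc-id "$VPC_ID" --cidr-block 10.0.1.0/24 --availability-zone us-east-1a --query 'Subnet.SubnetId' --output text)
PRIV_SUBNET=$(aws ec2 create-subnet --vpc-id "$VPC_ID" --cidr-block 10.0.2.0/24 --availability-zone us-east-1b --query 'Subnet.SubnetId' --output text)
# 6-7. internet gateway, attached to the VPC (only 1 per VPC)
IGW_ID=$(aws ec2 create-internet-gateway --query 'InternetGateway.InternetGatewayId' --output text)
aws ec2 attach-internet-gateway --internet-gateway-id "$IGW_ID" --vpc-id "$VPC_ID"
# 9-11. a new route table with a route out to the internet, associated with the public subnet
RTB_ID=$(aws ec2 create-route-table --vpc-id "$VPC_ID" --query 'RouteTable.RouteTableId' --output text)
aws ec2 create-route --route-table-id "$RTB_ID" --destination-cidr-block 0.0.0.0/0 --gateway-id "$IGW_ID"
aws ec2 associate-route-table --route-table-id "$RTB_ID" --subnet-id "$PUB_SUBNET"
# 12. auto-assign public IPs in the public subnet
aws ec2 modify-subnet-attribute --subnet-id "$PUB_SUBNET" --map-public-ip-on-launch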

--------------
security groups vs NACL:

security group acts as the first layer of defence. operates at the instance level. stateful
N(network)ACL operates at the subnet level. stateless. denies all traffic by default

a subnet can only be associated with one NACL, but an NACL can be associated with many subnets.
if you try to add an ACL to a subnet that is already associated with an ACL, the new ACL will just replace the old one.

rules are evaluated in numerical order.
the lowest-numbered rules take precedence over later rules.
example:
rule 99 blocks my ip
rule 100 allows all ips
==my ip is still blocked.

you can't block a specific IP with a security group (it only has allow rules); use an NACL deny rule for that
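
For example, with the AWS CLI (a sketch only; acl-xxxxxxxx and the IP 203.0.113.10 are placeholders), a low-numbered deny rule wins over a later allow-all rule:

# rule 99: deny a single IP - evaluated before rule 100
aws ec2 create-network-acl-entry --network-acl-id acl-xxxxxxxx --ingress --rule-number 99 --protocol -1 --rule-action deny --cidr-block 203.0.113.10/32
# rule 100: allow everything else
aws ec2 create-network-acl-entry --network-acl-id acl-xxxxxxxx --ingress --rule-number 100 --protocol -1 --rule-action allow --cidr-block 0.0.0.0/0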
---------------------------------------------
notes:
when setting up an ELB, to get good availability you need at least two AZs or subnets. So check whether your VPC actually has more than one public subnet

Bastion - used to securely administer EC2 instances in private subnets (using SSH or RDP, the Remote Desktop Protocol). Used instead of a NAT for administrative access (a NAT handles outbound traffic; a bastion handles inbound administration).
for our purposes, we used the nat-EC2 as a Bastion

Flow logs - enable you to capture IP traffic flow information for the network interfaces of your resources and log it in CloudWatch.

Redis master-slave + KeepAlived achieve high availability

Redis is a non-relational database that we currently use quite often. It supports diverse data types and high concurrency, and because it runs in memory its reads and writes are fast. Given this excellent performance, how do we make sure Redis can cope with a node going down while in operation?

So here is a summary of how to build a Redis master-slave high-availability setup. Many of the write-ups found online have pitfalls, so I am sharing this one in the hope that it helps everyone.

Redis Features
Redis is completely open source and free, is released under the BSD license, and is a high-performance key-value database.

Redis and other key-value cache products have the following three characteristics:

Redis supports data persistence: it can save the in-memory data to disk and load it again on restart.

Redis not only supports simple key-value data, but also provides data structures such as strings, hashes (maps), lists, sets, and sorted sets.

Redis supports data backup, i.e. master-slave replication.

Redis advantages

Extremely high performance - Redis can handle 100K+ reads per second and 80K+ writes per second.

Rich data types - Redis supports operations on strings, lists, hashes, sets, and sorted sets, and values are binary-safe.

Atomicity - all Redis operations are atomic, and Redis can also execute a group of operations atomically (transactions).

Rich features - Redis also supports publish/subscribe, notifications, key expiration, and more.

Prepare environment

CentOS 7 --> 172.16.81.140 --> Master Redis --> Master Keepalived

CentOS 7 --> 172.16.81.141 --> Slave Redis --> Backup Keepalived

VIP --> 172.16.81.139

Redis (preferably 3.0 or later)

KeepAlived (installed directly from the distribution packages)

Redis compile and install

cd /opt
tar -zxvf redis-4.0.6.tar.gz
mv redis-4.0.6 redis
cd redis
make MALLOC=libc
make PREFIX=/usr/local/redis install

2. Configure the redis startup script

vim /etc/init.d/redis

#!/bin/sh
#
# chkconfig: 2345 80 90
# Simple Redis init.d script conceived to work on Linux systems
# as it does use of the /proc filesystem.

# Redis port number
REDISPORT=6379
# Path to the redis server binary
EXE=/usr/local/redis/bin/redis-server
# Path to the redis client binary
CLIEXE=/usr/local/redis/bin/redis-cli
# Redis PID file
PIDFILE=/var/run/redis_6379.pid
# Path to the redis configuration file
CONF="/etc/redis/redis.conf"
# Connection/authentication password for redis
REDISPASSWORD=123456

start() {
    if [ -f $PIDFILE ]
    then
        echo "$PIDFILE exists, process is already running or crashed"
    else
        echo "Starting Redis server..."
        $EXE $CONF &
    fi
}

stop() {
    if [ ! -f $PIDFILE ]
    then
        echo "$PIDFILE does not exist, process is not running"
    else
        PID=$(cat $PIDFILE)
        echo "Stopping ..."
        $CLIEXE -p $REDISPORT -a $REDISPASSWORD shutdown
        while [ -x /proc/${PID} ]
        do
            echo "Waiting for Redis to shutdown ..."
            sleep 1
        done
        echo "Redis stopped"
    fi
}

restart() {
    stop
    sleep 3
    start
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart)
        restart
        ;;
    *)
        echo -e "\e[31m Please use $0 [start|stop|restart] as first argument \e[0m"
        ;;
esac

Grant execution permissions:

chmod +x /etc/init.d/redis

Add boot start:

chkconfig --add redis

chkconfig redis on

Check: chkconfig --list | grep redis

For this test, the firewall and SELinux were turned off beforehand. In production it is recommended to keep the firewall enabled and open only the required ports.

3. Add the redis commands to the PATH

vi /etc/profile
# add the following line
export PATH="$PATH:/usr/local/redis/bin"
# make the environment variable take effect
source /etc/profile

4. Start the redis service

service redis start

# check that it started
ps -ef | grep redis

Note: perform the same steps on both servers so that redis is installed on each. Once installation is complete, we go straight into configuring the master-slave environment.

Redis master-slave configuration

Going back to the design above, the idea is to use 140 as the master, 141 as the slave, and 139 as the VIP (virtual IP). The application accesses the redis database through port 6379 on 139.

In normal operation, if master node 140 goes down, the VIP floats over to 141; 141 then takes over as the master node and continues to serve reads and writes.

When 140 comes back, it first synchronizes data from 141, so its original data is not lost and the writes that went to 141 in the meantime are replicated back. After the data synchronization is complete, the VIP returns to the 140 node (because of its higher priority) and 140 becomes the master again; 141 loses the VIP and becomes the slave again, restoring the initial state and providing uninterrupted read and write service.

1. Configure the redis configuration files

Master-140 configuration file

vim /etc/redis/redis.conf
bind 0.0.0.0
port 6379
daemonize yes
requirepass 123456
slave-serve-stale-data yes
slave-read-only no

Slave-141 configuration file

vim /etc/redis/redis.conf
bind 0.0.0.0
port 6379
daemonize yes
slaveof 172.16.81.140 6379
masterauth 123456
slave-serve-stale-data yes
slave-read-only no

2. Restart the redis service after the configuration is complete! Verify that the master and slave are normal.

Master node 140 terminal login test:

[root@localhost ~]# redis-cli -a 123456
127.0.0.1:6379> INFO
.
.
.
# Replication
role:master
connected_slaves:1
slave0:ip=172.16.81.141,port=6379,state=online,offset=105768,lag=1
master_replid:f83fcc3c98614d770f2205831fef1e877fa3f482
master_replid2:1f25604997a4ad3eb8344e8155990e78acd93312
master_repl_offset:105768
second_repl_offset:447
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:447
repl_backlog_histlen:105322

Login test from node 141 terminal:

[root@localhost ~]# redis-cli -a 123456
127.0.0.1:6379> info
.
.
.
# Replication
role:slave
master_host:172.16.81.140
master_port:6379
master_link_status:up
master_last_io_seconds_ago:5
master_sync_in_progress:0
slave_repl_offset:105992
slave_priority:100
slave_read_only:0
connected_slaves:0
master_replid:f83fcc3c98614d770f2205831fef1e877fa3f482
master_replid2:1f25604997a4ad3eb8344e8155990e78acd93312
master_repl_offset:105992
second_repl_offset:447
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:239
repl_backlog_histlen:105754
3. Synchronization test

Write a key on master node 140 and confirm it can be read back on slave node 141 (a quick sketch follows).

The redis master-slave setup is now complete!
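
A minimal check, assuming the password 123456 configured above and a throwaway key name:

# on the master 172.16.81.140
redis-cli -a 123456 set repl_test "hello"
# on the slave 172.16.81.141 - the key should appear almost immediately
redis-cli -a 123456 get repl_test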

KeepAlived configuration to achieve dual hot standby

Use Keepalived to implement VIP, and achieve disaster recovery through notify_master, notify_backup, notify_fault, notify_stop.

1. Configure the Keepalived configuration files

Master Keepalived configuration file

vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
notification_email {
acassen@firewall.loc
failover@firewall.loc
sysadmin@firewall.loc
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 192.168.200.1
smtp_connect_timeout 30
router_id redis01
}

vrrp_script chk_redis {
script "/etc/keepalived/script/redis_check.sh"
interval 2
}

vrrp_instance VI_1 {
state MASTER
interface eno16777984
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}

track_script {
chk_redis
}
virtual_ipaddress {
172.16.81.139
}

notify_master /etc/keepalived/script/redis_master.sh
notify_backup /etc/keepalived/script/redis_backup.sh
notify_fault /etc/keepalived/script/redis_fault.sh
notify_stop /etc/keepalived/script/redis_stop.sh
}

Backup Keepalived configuration file

vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
notification_email {
acassen@firewall.loc
failover@firewall.loc
sysadmin@firewall.loc
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 192.168.200.1
smtp_connect_timeout 30
router_id redis02
}

vrrp_script chk_redis {
script "/etc/keepalived/script/redis_check.sh"
interval 2
}

vrrp_instance VI_1 {
state BACKUP
interface eno16777984
virtual_router_id 51
priority 99
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}

track_script {
chk_redis
}
virtual_ipaddress {
172.16.81.139
}

notify_master /etc/keepalived/script/redis_master.sh
notify_backup /etc/keepalived/script/redis_backup.sh
notify_fault /etc/keepalived/script/redis_fault.sh
notify_stop /etc/keepalived/script/redis_stop.sh
}

2. Configure the scripts

Master KeepAlived — 140

Create a script directory: mkdir -p /etc/keepalived/script/

cd /etc/keepalived/script/

[root@localhost script]# cat redis_check.sh
#!/bin/bash

ALIVE=`/usr/local/redis/bin/redis-cli -a 123456 PING`

if [ "$ALIVE" == "PONG" ]; then

echo $ALIVE

exit 0

else

echo $ALIVE

exit 1

fi

[root@localhost script]# cat redis_master.sh
#!/bin/bash
REDISCLI="/usr/local/redis/bin/redis-cli -a 123456"
LOGFILE="/var/log/keepalived-redis-state.log"
sleep 15
echo "[master]" >> $LOGFILE
date >> $LOGFILE
echo "Being master...." >> $LOGFILE 2>&1
echo "Run SLAVEOF cmd ..." >> $LOGFILE
$REDISCLI SLAVEOF 172.16.81.141 6379 >> $LOGFILE 2>&1
if [ $? -ne 0 ]; then
echo "data rsync fail." >> $LOGFILE 2>&1
else
echo "data rsync OK." >> $LOGFILE 2>&1
fi

sleep 10 # wait 10 seconds for the data synchronization to complete before cancelling it

echo "Run SLAVEOF NO ONE cmd ..." >> $LOGFILE

$REDISCLI SLAVEOF NO ONE >> $LOGFILE 2>&1
if [ $? -ne 0 ]; then
echo "Run SLAVEOF NO ONE cmd fail." >> $LOGFILE 2>&1
else
echo "Run SLAVEOF NO ONE cmd OK." >> $LOGFILE 2>&1
fi

[root@localhost script]# cat redis_backup.sh
#!/bin/bash

REDISCLI="/usr/local/redis/bin/redis-cli -a 123456"

LOGFILE="/var/log/keepalived-redis-state.log"

echo "[backup]" >> $LOGFILE

date >> $LOGFILE

echo "Being slave...." >> $LOGFILE 2>&1

sleep 15 # wait 15 seconds until the data has been synchronized to the other side, then switch the master-slave role

echo "Run SLAVEOF cmd ..." >> $LOGFILE

$REDISCLI SLAVEOF 172.16.81.141 6379 >> $LOGFILE 2>&1

[root@localhost script]# cat redis_fault.sh
#!/bin/bash

LOGFILE=/var/log/keepalived-redis-state.log

echo "[fault]" >> $LOGFILE

date >> $LOGFILE

[root@localhost script]# cat redis_stop.sh
#!/bin/bash

LOGFILE=/var/log/keepalived-redis-state.log

echo "[stop]" >> $LOGFILE

date >> $LOGFILE

Slave KeepAlived — 141

mkdir -p /etc/keepalived/script/

cd /etc/keepalived/script/

[root@localhost script]# cat redis_check.sh
#!/bin/bash

ALIVE=`/usr/local/redis/bin/redis-cli -a 123456 PING`

if [ "$ALIVE" == "PONG" ]; then

echo $ALIVE

exit 0

else

echo $ALIVE

exit 1

fi

[root@localhost script]# cat redis_master.sh
#!/bin/bash

REDISCLI="/usr/local/redis/bin/redis-cli -a 123456"

LOGFILE="/var/log/keepalived-redis-state.log"

echo "[master]" >> $LOGFILE

date >> $LOGFILE

echo "Being master...." >> $LOGFILE 2>&1

echo "Run SLAVEOF cmd ..." >> $LOGFILE

$REDISCLI SLAVEOF 172.16.81.140 6379 >> $LOGFILE 2>&1

sleep 10 # wait for the data synchronization to complete before cancelling it

echo "Run SLAVEOF NO ONE cmd ..." >> $LOGFILE

$REDISCLI SLAVEOF NO ONE >> $LOGFILE 2>&1

[root@localhost script]# cat redis_backup.sh
#!/bin/bash

REDISCLI="/usr/local/redis/bin/redis-cli -a 123456"

LOGFILE="/var/log/keepalived-redis-state.log"

echo "[backup]" >> $LOGFILE

date >> $LOGFILE

echo "Being slave...." >> $LOGFILE 2>&1

sleep 15 # wait until the data has been synchronized to the other side before switching roles

echo "Run SLAVEOF cmd ..." >> $LOGFILE

$REDISCLI SLAVEOF 172.16.81.140 6379 >> $LOGFILE 2>&1

[root@localhost script]# cat redis_fault.sh
#!/bin/bash

LOGFILE=/var/log/keepalived-redis-state.log

echo "[fault]" >> $LOGFILE

date >> $LOGFILE

[root@localhost script]# cat redis_stop.sh
#!/bin/bash

LOGFILE=/var/log/keepalived-redis-state.log

echo "[stop]" >> $LOGFILE

date >> $LOGFILE

systemctl start keepalived

systemctl enable keepalived

ps -ef | grep keepalived
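
Once keepalived is running on both nodes, a rough failover check looks like this (a sketch; eno16777984 is the interface name used in the configs above):

# on the master: the VIP 172.16.81.139 should be bound to the interface
ip addr show eno16777984 | grep 172.16.81.139
# simulate a failure: stop redis on the master so redis_check.sh starts returning non-zero
service redis stop
# a few seconds later, on the slave, the VIP should have moved here
ip addr show eno16777984 | grep 172.16.81.139
# clients keep talking to the VIP the whole time
redis-cli -h 172.16.81.139 -a 123456 ping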

Full understanding of the new features of MySQL 8.0

First: features added in MySQL 8.0

1. New transactional data dictionary

An integrated, transactional data dictionary stores information about database objects; all metadata is stored in the InnoDB engine

2. Atomic DDL operations

DDL on InnoDB tables is transactional: it either succeeds completely or rolls back. The DDL rollback log is written to the data dictionary table mysql.innodb_ddl_log so the operation can be rolled back

3. Security and user management

Added the caching_sha2_password authentication plugin, which is now the default; it improves performance and security

Privileges now support roles

New password history feature restricts reuse of previous passwords
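
A minimal sketch of the role and password-history features (the database appdb, role app_read, and user bob are hypothetical names), run through the mysql client:

mysql -u root -p <<'SQL'
CREATE ROLE 'app_read';
GRANT SELECT ON appdb.* TO 'app_read';
-- the user may not reuse any of their last 5 passwords
CREATE USER 'bob'@'%' IDENTIFIED BY 'S0me-Strong-Pass!' PASSWORD HISTORY 5;
GRANT 'app_read' TO 'bob'@'%';
SET DEFAULT ROLE ALL TO 'bob'@'%';
SQL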

4. Resource management

Supports creating and managing resource groups, and allows threads running in the server to be assigned to a specific group so that they execute according to the resources available to that group

5. InnoDB enhancements

The auto-increment counter is now persisted, fixing MySQL bug #199: previously, on restart the server took max(id) from the table as the counter, so after rows were deleted or archived the next insert could reuse an auto-increment value. With the counter persisted across restarts, values are no longer reused

Added INFORMATION_SCHEMA.INNODB_CACHED_INDEXES to see the index pages of each index cache in the InnoDB buffer pool

InnoDB temporary tables will be created in the shared temporary table space ibtmp1

InnoDB supports NOWAIT and SKIP LOCKED for SELECT … FOR SHARE and SELECT … FOR UPDATE statements

The minimum value of innodb_undo_tablespaces is 2, and it is no longer allowed to set innodb_undo_tablespaces to 0. Min 2 ensures that rollback segments are always created in the undo tablespace, not in the system tablespace

ALTER TABLESPACE … RENAME TO syntax

Added innodb_dedicated_server, which lets InnoDB automatically configure innodb_buffer_pool_size, innodb_log_file_size, and innodb_flush_method according to the amount of memory detected on the server

New INFORMATION_SCHEMA.INNODB_TABLESPACES_BRIEF view

A new dynamic configuration item, innodb_deadlock_detect, is used to disable deadlock checking, because in high-concurrency systems, when a large number of threads wait for the same lock, deadlock checking can significantly slow down the database.

Supports use of the innodb_directories option to move or restore tablespace files to a new location when the server is offline

6. Better support for the document store and JSON

7. Optimizer

Invisible indexes (similar to Oracle): while tuning SQL you can mark an index invisible and the optimizer will not use it

Descending indexes: DESC can now be defined on an index. Previously an index could only be scanned in reverse, which hurt performance; a descending index handles this efficiently

8. Window functions such as RANK(), LAG(), NTILE()

9. Regular expression enhancements: REGEXP_LIKE(), REGEXP_INSTR(), REGEXP_REPLACE(), REGEXP_SUBSTR() and other functions

10. Add a backup lock to allow DML during online backup while preventing operations that may result in inconsistent snapshots. Backup locks supported by LOCK INSTANCE FOR BACKUP and UNLOCK INSTANCE syntax

11. Character set: the default character set changed from latin1 to utf8mb4

12. Configuration persistence

MySQL 8.0 supports persisting global parameter changes online. With the PERSIST keyword, the change is written to a new configuration file (mysqld-auto.cnf, in JSON format) in the data directory, so the value still applies after a restart. For example:

SET PERSIST expire_logs_days=10;  # changes memory and the JSON file, still effective after restart

SET GLOBAL expire_logs_days=10;   # changes memory only, lost on restart

The system will generate a file containing the following in the data directory mysqld-auto.cnf:

{ "mysql_server": { "expire_logs_days": "10" } }

When my.cnf and mysqld-auto.cnf exist at the same time, the latter has a high priority.
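
A quick way to see where a setting came from and to undo a persisted value (a sketch using the same expire_logs_days example):

mysql -u root -p <<'SQL'
SET PERSIST expire_logs_days = 10;
-- VARIABLE_SOURCE should now show PERSISTED for this variable
SELECT VARIABLE_NAME, VARIABLE_SOURCE FROM performance_schema.variables_info WHERE VARIABLE_NAME = 'expire_logs_days';
-- remove the entry from mysqld-auto.cnf again
RESET PERSIST expire_logs_days;
SQL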

13. Histogram

MySQL 8.0 starts to support the long-awaited histograms. The optimizer uses the column statistics data to determine the distribution of column values and produce a more accurate execution plan.

You can use ANALYZE TABLE table_name [UPDATE HISTOGRAM ON col_name WITH N BUCKETS | DROP HISTOGRAM ON col_name] to collect or delete histogram information
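
A small sketch (the table orders and column status are hypothetical):

mysql -u root -p <<'SQL'
-- collect a 16-bucket histogram on orders.status
ANALYZE TABLE orders UPDATE HISTOGRAM ON status WITH 16 BUCKETS;
-- the stored histogram can be inspected here
SELECT SCHEMA_NAME, TABLE_NAME, COLUMN_NAME FROM information_schema.COLUMN_STATISTICS WHERE TABLE_NAME = 'orders';
-- drop it again
ANALYZE TABLE orders DROP HISTOGRAM ON status;
SQL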

14. Session-level SET_VAR optimizer hints dynamically adjust certain parameters for a single statement, which helps improve statement performance.

SELECT /*+ SET_VAR(sort_buffer_size = 16M) */ id FROM test ORDER BY id;

INSERT /*+ SET_VAR(foreign_key_checks=OFF) */ INTO test(name) VALUES (1);

15. Adjusted default parameter values

The default of back_log now follows max_connections, increasing the capacity to handle bursts of connections.

event_scheduler now defaults to ON (it was previously disabled by default).

The default of max_allowed_packet is raised from 4M to 64M.

binlog and log_slave_updates are now on by default.

expire_logs_days now defaults to 30 days (7 days in the old version). In production, check this parameter so binlogs do not take up too much space.

innodb_undo_log_truncate now defaults to ON.

innodb_undo_tablespaces now defaults to 2.

innodb_max_dirty_pages_pct_lwm now defaults to 10.

innodb_max_dirty_pages_pct now defaults to 90.

innodb_autoinc_lock_mode now defaults to 2.

16. InnoDB performance improvements

The single buffer pool mutex has been split into several mutexes, increasing concurrency

Splitting the two mutexes LOCK_thd_list and LOCK_thd_remove can increase the threading efficiency by approximately 5%.

17. Row buffer

The MySQL 8.0 optimizer can estimate the number of rows to be read, so it can give the storage engine an appropriately sized row buffer for the required data. Large sequential scans benefit from the bigger record buffers

18. Improved scan performance

InnoDB range-query improvements raise the performance of full-table scans and range queries by 5-20%.

19. Cost model

InnoDB can now estimate how much of each table and index is in the buffer pool, which lets the optimizer choose an access method knowing whether the data is in memory or must be read from disk.

20. Refactored SQL parser

The SQL parser has been improved. The old parser had serious limitations due to its grammatical complexity and top-down approach, which made it difficult to maintain and extend.

Second: features deprecated in MySQL 8.0

  • Deprecated validate_password plugin
  • Deprecation of ALTER TABLESPACE and DROP TABLESPACE ENGINE Clauses
  • JSON_MERGE() is deprecated; use JSON_MERGE_PRESERVE() instead
  • Deprecated the have_query_cache system variable

Third: features removed in MySQL 8.0

Query cache functionality was removed and related system variables were also removed

mysql_install_db is replaced by mysqld --initialize or --initialize-insecure

The INNODB_LOCKS and INNODB_LOCK_WAITS tables under INFORMATION_SCHEMA have been deleted. Replaced with Performance Schema data_locks and data_lock_waits tables

Four tables under INFORMATION_SCHEMA removed: GLOBAL_VARIABLES, SESSION_VARIABLES, GLOBAL_STATUS, SESSION_STATUS

InnoDB no longer supports compressed temporary tables.

PROCEDURE ANALYSE() syntax is no longer supported

InnoDB Information Schema views renamed (old name -> new name):
INNODB_SYS_COLUMNS -> INNODB_COLUMNS
INNODB_SYS_DATAFILES -> INNODB_DATAFILES
INNODB_SYS_FIELDS -> INNODB_FIELDS
INNODB_SYS_FOREIGN -> INNODB_FOREIGN
INNODB_SYS_FOREIGN_COLS -> INNODB_FOREIGN_COLS
INNODB_SYS_INDEXES -> INNODB_INDEXES
INNODB_SYS_TABLES -> INNODB_TABLES
INNODB_SYS_TABLESPACES -> INNODB_TABLESPACES
INNODB_SYS_TABLESTATS -> INNODB_TABLESTATS
INNODB_SYS_VIRTUAL -> INNODB_VIRTUAL

Removed server options:

--temp-pool
--ignore-builtin-innodb
--des-key-file
--log-warnings
--ignore-db-dir

Removed configuration options:

innodb_file_format
innodb_file_format_check
innodb_file_format_max
innodb_large_prefix

Removed system variables:

information_schema_stats -> information_schema_stats_expiry
ignore_builtin_innodb
innodb_support_xa
show_compatibility_56
have_crypt
date_format
datetime_format
time_format
max_tmp_tables
global.sql_log_bin (session.sql_log_bin is retained)
log_warnings -> log_error_verbosity
multi_range_count
secure_auth
sync_frm
tx_isolation -> transaction_isolation
tx_read_only -> transaction_read_only
ignore_db_dirs
query_cache_limit
query_cache_min_res_unit
query_cache_size
query_cache_type
query_cache_wlock_invalidate
innodb_undo_logs -> innodb_rollback_segments

Removed status variables:

Com_alter_db_upgrade
Slave_heartbeat_period
Slave_last_heartbeat
Slave_received_heartbeats
Slave_retried_transactions, Slave_running
Qcache_free_blocks
Qcache_free_memory
Qcache_hits
Qcache_inserts
Qcache_lowmem_prunes
Qcache_not_cached
Qcache_queries_in_cache
Qcache_total_blocks
Innodb_available_undo_logs

Removed functions:

JSON_APPEND() -> JSON_ARRAY_APPEND()

ENCODE()

DECODE()

DES_ENCRYPT()

DES_DECRYPT()

Removed client options:

--ssl and --ssl-verify-server-cert are removed; use --ssl-mode=DISABLED|REQUIRED|VERIFY_IDENTITY instead

--secure-auth

 

 

The official MySQL 8 release, version 8.0.11, is out. It is officially stated that MySQL 8 is 2 times faster than MySQL 5.7, and it also brings many improvements.

The following is a record of an installation performed on 2018-04-23. The entire process takes about an hour; the make && make install step takes the longest.

I. Environment

CentOS 7.4 64-bit Minimal Installation

II. Preparation

1. Install dependencies

yum -y install wget cmake gcc gcc-c++ ncurses ncurses-devel libaio-devel openssl openssl-devel

2. Download the source package

wget https://cdn.mysql.com//Downloads/MySQL-8.0/mysql-boost-8.0.11.tar.gz   # this version comes with boost

3. Create the mysql user

groupadd mysql
useradd -r -g mysql -s /bin/false mysql

4. Create the installation directory and data directory

mkdir -p /usr/local/mysql
mkdir -p /data/mysql

III. Install MySQL 8.0.11

1. Extract the source package

tar -zxf mysql-boost-8.0.11.tar.gz -C /usr/local

2. Compile & install

cd /usr/local/mysql-8.0.11
cmake . -DCMAKE_INSTALL_PREFIX=/usr/local/mysql -DMYSQL_DATADIR=/usr/local/mysql/data -DSYSCONFDIR=/etc -DMYSQL_TCP_PORT=3306 -DWITH_BOOST=/usr/local/mysql-8.0.11/boost
make && make install

3. Configure the my.cnf file

cat /etc/my.cnf
[mysqld]
server-id=1
port=3306
basedir=/usr/local/mysql
datadir=/data/mysql

## Please add parameters according to the actual situation

4. Modify directory permissions

chown -R mysql:mysql /usr/local/mysql
chown -R mysql:mysql /data/mysql
chmod 755 /usr/local/mysql -R
chmod 755 /data/mysql -R

5. Initialization

bin/mysqld --initialize --user=mysql --datadir=/data/mysql/
bin/mysql_ssl_rsa_setup

6. Start mysql

bin/mysqld_safe --user=mysql &

7. Change the root account password

bin/mysql -uroot -p
mysql> alter user 'root'@'localhost' identified by "123456";

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.00 sec)

## Add a remote account

mysql> create user root@'%' identified by '123456';
Query OK, 0 rows affected (0.08 sec)

mysql> grant all privileges on *.* to root@'%';
Query OK, 0 rows affected (0.04 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.01 sec)

8. Create a soft link (optional)

ln -s /usr/local/mysql/bin/* /usr/local/bin/

mysql -h 127.0.0.1 -P 3306 -uroot -p123456 -e "select version();"
mysql: [Warning] Using a password on the command line interface can be insecure.
+-----------+
| version() |
+-----------+
| 8.0.11    |
+-----------+

9. Add to startup (optional)

cp support-files/mysql.server /etc/init.d/mysql.server

Note: MySQL officially recommends using the binary distribution for installation.

Nginx load balancing and configuration

1. Load balancing overview

Load balancing exists because a single server under heavy traffic comes under great pressure and, once the traffic exceeds its capacity, it crashes. To avoid crashes and give users a better experience, load balancing was created to share the load across servers.

Load balancing is essentially implemented with the reverse-proxy principle. It is a technique for making good use of server resources and handling high concurrency: it balances the load across servers, reduces request waiting time, and provides fault tolerance. Nginx is commonly used as an efficient HTTP load balancer to distribute traffic across multiple application servers and improve performance, scalability, and availability.

Principle: a large number of servers on the internal network form a cluster. When a user visits the site, the request first reaches the public-facing intermediate server, which assigns it to an intranet server according to the chosen algorithm. Each request is therefore spread across the cluster, the pressure on each server stays roughly balanced, and no single server is pushed to the point of collapse.

The nginx reverse proxy supports load balancing for HTTP, HTTPS, FastCGI, uwsgi, SCGI, and memcached.
To configure HTTPS load balancing, simply use 'https' as the protocol.
For FastCGI, uwsgi, SCGI, or memcached, use the fastcgi_pass, uwsgi_pass, scgi_pass, and memcached_pass directives, respectively.

2. Common load-balancing mechanisms

1) round-robin: requests are distributed to the servers in turn. Each request is assigned to a different back-end server in order, and if a back-end server goes down it is automatically removed so service continues normally.

Configuration 1:
upstream server_back {   # nginx distributes requests across this server group
server 192.168.2.49;
server 192.168.2.50;
}

Configuration 2:
http {
upstream servergroup {   # server group that accepts requests; nginx distributes them round-robin
server srv1.rmohan.com;
server srv2.rmohan.com;
server srv3.rmohan.com;
}
server {
listen 80;
location / {
proxy_pass http://servergroup;   # all requests are proxied to the servergroup group
}
}
}
proxy_pass is followed by the address of the proxied server group: an IP, a hostname or domain name, or an ip:port pair
upstream defines the list of back-end servers to balance across
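
A quick way to confirm the round-robin behaviour after editing the config (a sketch; assumes nginx listens on port 80 of this host and each back end returns something that identifies it):

nginx -t && nginx -s reload          # check syntax, then reload
for i in $(seq 1 6); do curl -s http://localhost/; done   # responses should alternate between the back ends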

2) weighted load balancing: if no weight is configured, each server gets the same load. When server performance is uneven, weighted polling is used: the weight parameter defines the proportion of requests a server receives, so a higher weight means more load.
upstream server_back {
server 192.168.2.49 weight=3;
server 192.168.2.50 weight=7;
}

3) least-connected: the next request is sent to the server with the fewest active connections. When some requests take longer to respond, least-connected shares the load across application instances more fairly: nginx forwards the request to the less loaded server.
upstream servergroup {
least_conn;
server srv1.rmohan.com;
server srv2.rmohan.com;
server srv3.rmohan.com;
}

4) ip-hash: based on the client IP address. With plain load balancing, each request can land on a different server in the cluster, so a user who logged in on one server may be sent to another and lose their login state. That is clearly not acceptable; ip_hash solves it. Once a client has been mapped to a server, later requests from that client are hashed to the same server.

Each request is assigned according to the hash of the client IP, so a given client sticks to one back-end server, which also solves the session problem
upstream servergroup {
ip_hash;
server srv1.rmohan.com;
server srv2.rmohan.com;
server srv3.rmohan.com;
}

Attach an example:
#user nobody;
worker_processes 4;
events {
    # maximum number of concurrent connections per worker
    worker_connections 1024;
}
http {
    # the list of back-end servers
    upstream myserver {
        # the ip_hash directive keeps the same user on the same server
        ip_hash;
        server 125.219.42.4 fail_timeout=60s;   # after max_fails failures the server is paused for 60s
        server 172.31.2.183;
    }

    server {
        # listening port
        listen 80;
        location / {
            # which upstream group to proxy to
            proxy_pass http://myserver;
        }
    }
}

max_fails - number of failed requests allowed before the server is marked unavailable; defaults to 1
fail_timeout=60s - how long the server is considered unavailable after max_fails failures
down - marks the server as not participating in load balancing
backup - the server only receives requests when all non-backup machines are busy or down, so its load is the lightest

Solution Architect Associate

1. Messaging
2. Desktop and App Streaming
3. Security and Identity
4. Management Tools
5. Storage
6. Databases
7. Networking & Content Delivery
8. Compute
9. AWS Global Infrastructure

10. Migration
11. Analytics
12. Application services
13. Developer Tools

1. Messaging:
SNS - Simple Notification Service (Text, Email, Http)
SQS - Simple Queue Service

3. Security and Identity:
IAM - Identity Access Management
Inspector - Agent installed on VM provides security reports
Certificate Manager - Provides free certificate for domain name.
Directory Services - Provides Microsoft Active Directory
WAF - Web Application Firewall (Application layer security: cross-site scripting / SQL injection)
Artifact - Get compliance certificates (ISO, PCI, HIPAA)

4. Management Tools:
CloudWatch - Performance of your AWS environment (disk, CPU, RAM utilization)
CloudFormation - Turn your AWS infrastructure into code (a document)
CloudTrail - Auditing your AWS environment
OpsWorks - Automated deployment using Chef
Config - Monitor your environment, set alerts
Service Catalog - Catalog of the services the enterprise has authorized
Trusted Advisor - Scans your environment and suggests performance optimization, security, and fault-tolerance improvements

5. Storage
S3 - Simple Storage Service, object-based storage (like Dropbox)
Glacier - Not instant access; 4-5 hours to recover archived files
EFS - Elastic File System, file storage service
Storage Gateway - Not in exam

6. Databases
RDS - Relational Database Service (SQL Server, MySQL, MariaDB, PostgreSQL, Aurora, and Oracle)
DynamoDB - Non-relational database (high performance, scalable)
Redshift - Data warehouse service (huge data queries)
ElastiCache - Caching data in the cloud (quicker than fetching from the database)

7. Networking & Content Delivery:
VPC - Virtual Private Cloud
Route 53 - Amazon's DNS service
CloudFront - Content Delivery Network
Direct Connect - Connect your data center to AWS over a dedicated physical line

8. Compute
EC2 - Elastic Compute Cloud
EC2 Container Service - Not in exam
Elastic Beanstalk - deploy developer code onto infrastructure that is automatically provisioned to host it
Lambda - allows you to run code without having to worry about provisioning any underlying resources.
Lightsail - New Service

9. AWS Global Infrastructure :
14 regions and 38 Availability Zones
4 more regions and 11 more Availability Zones planned for 2017
66 edge locations
Regions - Physical geographical areas (an independent collection of AWS computing resources in a defined geography)
Availability Zones - Logical data centers (distinct locations within an AWS region that are engineered to be isolated from failures)
Edge Locations - Content Delivery Network (CDN) endpoints for CloudFront (very large media objects)
----------------------------------------------------------------------------------------------------------------
10. Migration:
Snowball - Physical appliance used to ship large amounts of data into the cloud (e.g. S3)
DMS - Database Migration Service (Migrate Oracle SQL MySQL to cloud)
SMS - Server Migration Service (Migrate VMWare to cloud)

11. Analytics:
Athena - SQL queries on S3
EMR - Elastic Map Reduce is specifically designed to assist you in processing large data sets
	Big data processing (big data, log analysis, analysing financial markets)
CloudSearch - Fully managed search service
Elasticsearch Service - Open-source search
Kinesis - Process streaming data at terabit scale and analyse it (financial transactions, social media sentiment analysis)
Data Pipeline - Move data from S3 to DynamoDB and vice versa
Quick Sight - Business analysis tool.

12. Application services:
Step Functions - Coordinate the microservices used by your applications
SWF - Simple workflow service (co-ordinate physical and automated tasks)
API Gateway - Publish and monitor API and scale
AppStream - streaming desktop applications.
Elastic Transcoder - Converts video for different form factors and resolutions

13. Developer Tools:
CodeCommit - Cloud git
CodeBuild - Compiling the code 
CodeDeploy - Way to deploy code to EC2 instances 
CodePipeline - Track code through different versions/stages (e.g. UAT)

Amazon Web Services (AWS)

  • Extensive set of cloud services available via the Internet
  • On-demand, virtually endless, elastic resources
  • Pay-per-use with no up-front costs (with optional commitment)
  • Self-serviced and programmable

 

 

 

Elastic Compute Cloud (EC2)

  • One of the core services of AWS
  • Virtual machines (or instances) as a service
  • Dozens of instance types that vary in performance and cost
  • Instance is created from an Amazon Machine Image (AMI), which in turn can be created again from instances

 

 

 

Regions and Availability Zones (AZ)

Notes: We will only use Ireland (eu-west-1) region in this workshop. See also A Rare Peek Into The Massive Scale of AWS.

Networking in AWS

Exercise: Launch an EC2 instance

  1. Log-in to gofore-crew.signin.aws.amazon.com/console
  2. Switch to Ireland region and go to EC2 dashboard
  3. Launch a new EC2 instance according to instructor guidance
  • In “Configure Instance Details”, pass a User Data script under Advanced
  • In “Configure Security Group”, use a recognizable, unique name

#!/bin/sh
# When passed as User Data, this script will be run on boot
touch /new_empty_file_we_created.txt
echo "It works!" > /it_works.txt

Exercise: SSH into the instance

SSH into the instance (find the IP address in the EC2 console)

# Windows Putty users must convert key to .ppk (see notes)
ssh -i your_ssh_key.pem ubuntu@instance_ip_address

View instance metadata

curl http://169.254.169.254/latest/meta-data/

View your User Data and find the changes your script made

curl http://169.254.169.254/latest/user-data/
ls -la /

Notes: You will have to reduce keyfile permissions chmod og-xrw mykeyfile.pem. If you are on Windows and use Putty, you will have to convert the .pem key to .ppk key using puttygen (Conversions -> Import key -> *.pem file -> Save private key. Now you can use your *.ppk key with Putty: Connection -> SSH -> Auth -> Private key file)

Exercise: Security groups

Setup a web server that hosts the id of the instance

mkdir ~/webserver && cd ~/webserver
curl http://169.254.169.254/latest/meta-data/instance-id > index.html
python -m SimpleHTTPServer

Configure the security group of your instance to allow inbound requests to your web server from anywhere. Check that you can access the page with your browser.

Exercise: Security groups

Delete the previous rule. Ask a neighbor for the name of their security group, and allow requests to your server from your neighbor’s security group.

Have your neighbor access your web server from his/her instance.

# Private IP address of the web server (this should work)
curl 172.31.???.???:8000
# Public IP address of the web server (how about this one?)
curl 52.??.???.???:8000

Speaking of IP addresses, there is also Elastic IP Address. Later on, we will see use cases for this, as well as better alternatives.

Also, notice the monitoring metrics. These come from CloudWatch. Later on, we will create alarms based on the metrics.

Elastic Block Store (EBS)

  • Block storage service (virtual hard drives) with speed and encryption options
  • Disks (or volumes) are attached to EC2 instances
  • Snapshots can be taken from volumes
  • Alternative to EBS is ephemeral instance store

EC2 cost


Identity and Access Management

Identity and Access Management (IAM)

Notes: Always use roles inside instances (do not store credentials there), or something bad might happen.

Quiz: Users on many levels

Imagine running a content management system, discussion board or blog web application in EC2. How many different types of user accounts might you have?


Virtual Private Cloud

Virtual Private Cloud (VPC)

  • Heavy-weight virtual IP networking for EC2 and RDS instances. Integral part of modern AWS, all instances are launched into VPCs (not true for EC2-classic)
  • An AWS root account can have many VPCs, each in a specific region
  • Each VPC is divided into subnets, each bound to an availability zone
  • Each instance connects to a subnet with an Elastic Network Interface

 

 

 

 

VPC with Public and Private Subnets

Access Control Lists

 

 

 

 

Auto Scaling

 

 

Provisioning capacity as needed

  • Changing the instance type is vertical scaling (scale up, scale down)
  • Adding or removing instances is horizontal scaling (scale out, scale in)
  • 1 instance 10 hours = 10 instances 1 hour

Auto Scaling instances

  • Launch Configuration describes the configuration of the instance. Having a good AMI and bootstrapping is crucial.
  • Auto Scaling Group contains instances whose lifecycles are automatically managed by CloudWatch alarms or schedule
  • Scaling Plan refers when scaling happens and what triggers it.

Scaling Plans

  • Maintain current number of instances
  • Manual scaling by user interaction or via API
  • Scheduled scaling
  • Dynamic Auto Scaling. A scaling policy describes how the group scales in or out. You should always have policies for both directions. Policy cooldowns control the rate at which scaling happens.

Auto Scaling Group Lifecycle


Elastic Load Balancer

  • Route traffic to an Auto Scaling Group (ASG)
  • Runs health checks to instances to decide whether to route traffic to them
  • Spread instances over multiple AZs for higher availability
  • ELB scales itself. Never use ELB IP address. Pre-warm before flash traffic.

 

Public networking

Route 53

  • Domain Name System (DNS)
  • Manage DNS records of hosted zones
  • Round Robin, Weighted Round Robin and Latency-based routing

CloudFront

  • Content Delivery Network (CDN)
  • Replicate static content from S3 to edge locations
  • Also supports dynamic and streaming content

EC2 instance

chmod 400 SeniorServer.pem

Server: ssh -i "SeniorServer.pem" ec2-user@ec2-54-149-37-172.us-west-2.compute.amazonaws.com
password for root:qwe123456

server2: ssh -i "senior_design_victor.pem" ec2-user@ec2-54-69-160-179.us-west-2.compute.amazonaws.com

controller: ssh -i "zheng.pem" ec2-user@ec2-52-34-59-51.us-west-2.compute.amazonaws.com


DB: mysql -h seniordesign.c9btkcvedeon.us-west-2.rds.amazonaws.com -P 3306 -u root -p
username: root
password: qwe123456


use the "screen" command to keep the server code and controller code running on AWS
screen  #create a new screen session
screen -ls  #check running screens
screen -r screenID   #resume to a screen
screen -X -S screenID kill   #end a screen


MySQL:
create table transaction (username varchar(20), history varchar(20));
insert into property (username,password,money) values ("client1","123",100);


To create more replicated server/db:

Master (RDS) - all in mysql, no Bash - remember to remove [] from statements
	1. Create new slave
		CREATE USER '[SLAVE USERNAME]'@'%' IDENTIFIED BY '[SLAVE PASSWORD]'; 
	2. Give it access 
		GRANT REPLICATION SLAVE ON *.* TO '[SLAVE USERNAME]'@'%';  

Slave (Server) - 
	1. [Bash] Import the database from master
		mysqldump -h seniordesign.c9btkcvedeon.us-west-2.rds.amazonaws.com -u root -p senior_design > dump.sql

	2. [Bash] Import the dump.sql into your database 
		mysql senior_design < dump.sql

	3. [Bash] Edit /etc/my.cnf - will require root access; add the following lines
	**Remember to keep server-id different (currently using 10, so next is 11, etc...)
		# Give the server a unique ID
		server-id               = #CHANGE THIS NUMBER#
		#
		# Ensure there's a relay log
		relay-log               = /var/lib/mysql/mysql-relay-bin.log
		#
		# Keep the binary log on
		log_bin                 = /var/lib/mysql/mysql-bin.log
		replicate_do_db            = senior_design

	4. [Bash] Restart mysqld
		service mysqld restart

Master-Slave Connection Creation
	1. On master (RDS) - type
		show master status;
		** We will need to keep note of File and Position
		+----------------------------+----------+--------------+------------------+-------------------+
		| File                       | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
		+----------------------------+----------+--------------+------------------+-------------------+
		| mysql-bin-changelog.010934 |      400 |              |                  |                   |
		+----------------------------+----------+--------------+------------------+-------------------+
	2. On the slave, enter mysql then enter
		CHANGE MASTER TO MASTER_HOST='seniordesign.c9btkcvedeon.us-west-2.rds.amazonaws.com', MASTER_USER='[SLAVE NAME]', MASTER_PASSWORD='[SLAVE PWD]', MASTER_LOG_FILE='[MASTER FILE] ', MASTER_LOG_POS= [MASTER POSITION];
	3. On the slave, enter "START SLAVE;"
	4. Make sure the slave started - "SHOW SLAVE STATUS\G;"
	5. You can always triple check by adding a new row to senior_design in master then see if it updates in slave.

	TROUBLESHOOTING 
	- If for some reason you mess up the slave in step 2.
		[mysql] on the slave side
			reset slave;
		then repeat step 2 - 5
	- If in SHOW SLAVE STATUS\G shows error
		try 
			STOP SLAVE;
			SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
			START SLAVE;
		error should be gone, but this will only skip the error; the error may still re-appear
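
	A quick health check on the slave side (just pulls the interesting fields out of SHOW SLAVE STATUS):
		mysql -e "SHOW SLAVE STATUS\G" | grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master|Last_.*Error'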

use senior_design;
select count(*) from property;


Slave user pass:
Slave 1 - username: slave1 pass: slave1pwd
Slave 2 - username: slave2 pass: [slave2]

CHANGE MASTER TO MASTER_HOST='seniordesign.c9btkcvedeon.us-west-2.rds.amazonaws.com', MASTER_USER='slave1', MASTER_PASSWORD='slave1pwd', MASTER_LOG_FILE='mysql-bin-changelog.011030', MASTER_LOG_POS= 400;

CHANGE MASTER TO MASTER_HOST='seniordesign.c9btkcvedeon.us-west-2.rds.amazonaws.com', MASTER_USER='slave2', MASTER_PASSWORD='[slave2]', MASTER_LOG_FILE='mysql-bin-changelog.011030', MASTER_LOG_POS= 400;

Redis install

Install
——-
http://download.redis.io/redis-stable.tar.gz

$ wget http://download.redis.io/redis-stable.tar.gz
$ tar xvzf redis-stable.tar.gz
$ cd redis-stable
$ make
$ make test # optional
————–
# yum install wget tcl
# wget http://download.redis.io/releases/redis-3.2.5.tar.gz
# tar xzf redis-3.2.5.tar.gz
# cd redis-3.2.5
# make
# make test
————–
$ sudo make install
OR
$ sudo cp src/redis-server /usr/local/bin/
$ sudo cp src/redis-cli /usr/local/bin/
$ sudo mkdir /etc/redis
$ sudo mkdir /var/redis
$ sudo cp utils/redis_init_script /etc/init.d/redis_6379
$ sudo cp redis.conf /etc/redis/6379.conf
$ sudo mkdir /var/redis/6379
$ sudo update-rc.d redis_6379 defaults # OR sudo chkconfig --add redis_6379
$ sudo /etc/init.d/redis_6379 start
————–

TERMS
—–
RESP (REdis Serialization Protocol)
In RESP, the type of a piece of data depends on the first byte:
Simple Strings "+"
Errors "-"
Integers ":"
Bulk Strings "$"
Arrays "*"
Redis append-only file feature (AOF)
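
To see RESP on the wire (a sketch; assumes redis is listening on localhost:6379 with no requirepass, and uses bash's /dev/tcp): PING is sent as an array of one bulk string and the reply comes back as a simple string.

exec 3<>/dev/tcp/localhost/6379      # open a TCP connection to redis on fd 3
printf '*1\r\n$4\r\nPING\r\n' >&3    # array of 1 element, bulk string of 4 bytes: PING
head -c 7 <&3                        # prints the raw reply: +PONG
exec 3>&- 3<&-                       # close the connection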

COMMANDS
——–
$ redis-server # start server
/etc/init.d/redis_PORT start
$ redis-cli [-p PORT] shutdown # stop server
/etc/init.d/redis_PORT stop
$ redis-cli ping # check if working
$ redis-cli --stat [-i interval] # continuous stats mode
$ redis-cli --bigkeys # scan for big keys
$ redis-cli [-p port ] --scan [--pattern REGEX] # get a list of keys
$ redis-cli [-p port ] monitor # monitor commands
$ redis-cli [-p port ] --latency # monitor latency of instances
$ redis-cli [-p port ] --latency-history [-i interval]
$ redis-cli [-p port ] --latency-dist # spectrum of latencies
$ redis-cli --intrinsic-latency [-p port ] [test-time] # latency of system
$ redis-cli --intrinsic-latency 5
$ redis-cli --rdb <dest-filename> # remote RDB backup ($?=0 success)
$ redis-cli --rdb /tmp/dump.rdb
$ redis-cli --slave # slave mode (monitor master -> slave writes)
$ redis-cli --lru-test 10000000 # Least Recently Used (LRU) simulation
# used to help configure 'maxmemory' for LRU
$ redis-cli save # save dump file (dump.rdb) to $dir
$ redis-cli select <DB_NUMBER> # select DB
$ redis-cli dbsize # show size of DB
$ redis-cli connect <SERVER> <PORT> # connect to different servers/ports
$ redis-cli debug restart
$ redis-cli --version
$ redis-cli pubsub channels [PATTERN]
$ redis-cli pubsub numsub [channel1 … channelN]
$ redis-cli subscribe/psubscribe/publish
$ redis-cli slowlog get [N]|len
a. unique progressive identifier for every slow log entry.
b. timestamp at which the logged command was processed.
c. amount of time needed for its execution, in microseconds.
d. array composing the arguments of the command.

FILES
—–
config file: /etc/redis/6371.conf
dbfilename dump.rdb
dir /var/lib/redis/6371
pidfile /var/run/redis/6371.pid
DB saved to:
/var/lib/redis/6371/dump.rdb

OPTIONS
——-
--raw, --no-raw

Configuration
————-
redis.conf # well documented
default ports: 6379 / 16379 (cluster mode) / 26379 (Sentinel)

daemonize no
pidfile /var/run/redis_6379.pid
port 6379
loglevel info
logfile /var/log/redis_6379.log
dir /var/redis/6379

keyword argument1 argument2 … argumentN

slaveof 127.0.0.1 6380
requirepass "hello world"
maxmemory 2mb
maxmemory-policy allkeys-lru
masterauth <password>
daemonize no # when run under daemontools

Administration
————–
/etc/sysctl.conf

vm.overcommit_memory = 1 # sysctl vm.overcommit_memory=1
echo never > /sys/kernel/mm/transparent_hugepage/enabled

# passing arguments via CLI
$ ./redis-server --port 6380 --slaveof 127.0.0.1 6379

redis> config get GLOB
redis> config set slaveof 127.0.0.1 6380
redis> config rewrite

— —
Actual config

daemonize yes
pidfile /var/run/redis/6371.pid
port 6371
tcp-backlog 511
bind 0.0.0.0
timeout 0
tcp-keepalive 0
loglevel notice
logfile ""
syslog-enabled yes
syslog-ident redis
syslog-facility USER
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /var/lib/redis/6371
slaveof 10.200.18.115 6371 # only on slave(s)
slave-serve-stale-data yes
slave-read-only yes
repl-disable-tcp-nodelay no
slave-priority 100
maxclients 10000
maxmemory-policy noeviction
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes

Replication
———–
redis> config set masterauth <password>
persistence = enabled OR automatic-restarts = disabled

slaveof 192.168.1.1 6379
repl-diskless-sync
repl-diskless-sync-delay
slave-read-only
masterauth <password> # config set masterauth <password>
min-slaves-to-write <number of slaves>
min-slaves-max-lag <number of seconds>
slave-announce-ip 5.5.5.5
slave-announce-port 1234

Redis Sentinel (26379)
————–
– Monitoring
– Notification
– Automatic failover
– Configuration provider

redis-sentinel /path/to/sentinel.conf
OR
redis-server /path/to/sentinel.conf --sentinel

# typical minimal config

# sentinel monitor <master-group-name> <ip> <port> <quorum>
# sentinel down-after-milliseconds <master-name> <milliseconds> # default 30 secs
# sentinel failover-timeout <master-name> <milliseconds> # default 3 minutes
# sentinel parallel-syncs <master-name> <numslaves>

# example typical minimal config:

sentinel monitor mymaster 127.0.0.1 6379 2
sentinel down-after-milliseconds mymaster 60000
sentinel failover-timeout mymaster 180000
sentinel parallel-syncs mymaster 1

# additional configs

# bind 127.0.0.1 192.168.1.1
# protected-mode no
# sentinel announce-ip <ip>
# sentinel announce-port <port>
# dir <working-directory>
# syntax: sentinel <option_name> <master_name> <option_value>
# sentinel auth-pass <master-name> <password>
# sentinel down-after-milliseconds <master-name> <milliseconds> # default 30 secs
# sentinel parallel-syncs <master-name> <numslaves>
# sentinel failover-timeout <master-name> <milliseconds> # default 3 minutes
# sentinel notification-script <master-name> <script-path>
# passed: <event type> <event description>
# sentinel client-reconfig-script <master-name> <script-path>
# passed: <master-name> <role> <state> <from-ip> <from-port> <to-ip> <to-port>
# <state> is currently always "failover"
# <role> is either "leader" or "observer"
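
A sketch of a notification script that simply logs every event Sentinel reports (the path and log file are assumptions; the script must be executable and should exit 0):

#!/bin/bash
# /etc/redis/sentinel_notify.sh, wired in with:
#   sentinel notification-script mymaster /etc/redis/sentinel_notify.sh
# Sentinel invokes it as: script <event type> <event description>
echo "$(date '+%F %T') sentinel event: $1 $2" >> /var/log/redis/sentinel_events.log
exit 0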
--- --- ---
# actual config

/etc/redis/sentinel_26371.conf
# redis-sentinel 2.8.9 configuration file
# sentinel_26371.conf
daemonize no
dir "/var/lib/redis/sentinel_26371"
pidfile "/var/run/redis/sentinel_26371.pid"
port 26371
bind 0.0.0.0
sentinel monitor iowa_master_staging 10.200.18.115 6375 2
sentinel config-epoch iowa_master_staging 0
sentinel leader-epoch iowa_master_staging 0
sentinel known-slave iowa_master_staging 10.200.20.234 6375
logfile ""
syslog-enabled yes
syslog-ident "sentinel_26371"
syslog-facility user
-------------
# sentinel messages/events

+monitor master <group-name> <ip> <port> quorum <N>

# Testing
$ redis-cli -p PORT
127.0.0.1:PORT> SENTINEL master mymaster # info about master
127.0.0.1:PORT> SENTINEL slaves mymaster # info about slave(s)
127.0.0.1:PORT> SENTINEL sentinels mymaster # info about sentinel(s)
127.0.0.1:PORT> SENTINEL get-master-addr-by-name mymaster # get address of master
$ redis-cli -p 6379 DEBUG sleep 30 # simulate master hanging

ping
SENTINEL masters # get list of monitored masters and their state
SENTINEL master <master name>
SENTINEL slaves <master name>
SENTINEL sentinels <master name>
SENTINEL get-master-addr-by-name <master name>
SENTINEL reset <pattern> # reset all masters matching pattern
SENTINEL failover <master name> # force a failover
SENTINEL ckquorum <master name> # check if current config is able to failover
SENTINEL flushconfig # rewrite config file
SENTINEL monitor <name> <ip> <port> <quorum> # start monitoring a new master
SENTINEL remove <name> # stop monitoring master
SENTINEL SET <name> <option> <value>
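
A sketch of a failover drill built from the commands above (sentinel port 26379 and master name mymaster are assumptions; with the config above they would be 26371 / iowa_master_staging):

$ redis-cli -p 26379 SENTINEL ckquorum mymaster                  # enough sentinels reachable to fail over?
$ redis-cli -p 26379 SENTINEL failover mymaster                  # force promotion of a slave
$ redis-cli -p 26379 SENTINEL get-master-addr-by-name mymaster   # should now report the new master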

# Commands
$

Redis Cluster
-------------
port 7000
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
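
Once six nodes (7000-7005) are running with a config like the one above, the cluster can be created roughly as follows (redis-cli --cluster is Redis 5+ syntax; older releases ship an equivalent redis-trib.rb script):

$ redis-cli --cluster create 127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 \
    127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 --cluster-replicas 1
$ redis-cli -p 7000 cluster info | grep cluster_state            # expect cluster_state:ok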

=====================================
redis1:$ WTFI redis-cli => /nb/redis/bin/redis-cli

redis-cli -h <hostname> -p <port> -r <repeat (-1=forever)> -i <interval (secs)> -n <DB_NUM> <COMMAND>
redis-cli -p 6371|26371 info [server|clients|memory|persistence|stats|replication|cpu|keyspace|sentinel]
redis-cli -p 6371 ping
=====================================
Upgrading or restarting a Redis instance without downtime
Check out: https://redis.io/topics/admin (bottom)
=====================================
for p in $(grep ^port /etc/redis/* | awk '{print $NF}'); do echo "---- port: $p ----"; /nb/redis/bin/redis-cli -p $p info | grep stat; done

========== tool (redis_monit.sh) [begin] ==========
#!/bin/bash
# get status of redis servers
REDIS_CLI_CMD=/nb/redis/bin/redis-cli
# get the list of ports configured
ports=$(ls /etc/redis/*.conf | tr -d '[a-z/.]')
for port in $ports; do
    echo "---- port: $port ----"
    if [ -e /etc/redis/$port.conf ]; then
        $REDIS_CLI_CMD -p $port info | grep stat
    else
        echo "no config (/etc/redis/$port.conf)"
    fi
done
========== tool (redis_monit.sh) [end] ==========

# update the monitor hosts - "live"
sentinel monitor mymaster 127.0.0.1 6379 2
sentinel monitor redis2 10.204.21.219 6379 2
sentinel failover redis1
sentinel masters
sentinel slaves redis1

# manual failover
# CLUSTER FAILOVER [FORCE|TAKEOVER]

$ redis-cli -p 7002 debug segfault

S3 (AWS)

creating a bucket:
-------------------
S3 > Create bucket > unique name + region > create
bucket > select > upload > upload file or drag n drop

Backup Files to Amazon S3 using the AWS CLI
---------------------------------------------
Step 1: create login for aws console:
IAM > Users > Create > username: AWS_Admin > Permissions > Attach policy > AdministratorAccess
> Manage password > Auto generated, uncheck require password change > apply
> Download credentials > credentials.csv

Step 2: install and configure aws cli
download AWSCLI64.msi > install > windows run > cmd

Type aws configure and press enter. Enter the following when prompted:

AWS Access Key ID [None]: enter the Access Key Id from the credentials.csv file you downloaded in step 1 part d

Note: this should look something like AKIAPWINCOKAO3U4FWTN
AWS Secret Access Key [None]: enter the Secret Access Key from the credentials.csv file you downloaded in step 1 part d

Note: this should look something like 5dqQFBaGuPNf5z7NhFrgou4V5JJNaWPy1XFzBfX3

Default region name [None]: enter us-east-1
Default output format [None]: enter json
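
For reference, aws configure writes those answers to two files in the user's home directory (the key values below are placeholders):

# ~/.aws/credentials
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# ~/.aws/config
[default]
region = us-east-1
output = json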

Step 3: Using the AWS CLI with Amazon S3
a. Creating a bucket is optional if you already have a bucket created that you want to use.
To create a new bucket named my-first-backup-bucket type aws s3 mb s3://my-first-backup-bucket

Note: bucket naming has some restrictions; one of them is that bucket names must be globally unique (i.e. two different AWS users cannot have the same bucket name);
because of this, if you try the example name above as-is you will get a BucketAlreadyExists error.

b. To upload the file my-first-backup.bak located in the local directory to the S3 bucket my-first-backup-bucket,
you would use the following command:
aws s3 cp my-first-backup.bak s3://my-first-backup-bucket/

c. To download my-first-backup.bak from S3 to the local directory we would reverse the order of the commands as follows:
aws s3 cp s3://my-first-backup-bucket/my-first-backup.bak ./

d. To delete my-first-backup.bak from your my-first-backup-bucket bucket, use the following command:
aws s3 rm s3://my-first-backup-bucket/my-first-backup.bak

additional commands
recursively copying local files to s3
aws s3 cp myDir s3://mybucket/ --recursive --exclude "*.jpg"

recursively remove files (with caution!)
aws s3 rm s3://mybucket/ --recursive --exclude "*.jpg"

list files:
aws s3 ls s3://mybucket

remove bucket:
$ aws s3 rb s3://bucket-name
or
$ aws s3 rb s3://bucket-name --force
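
For recurring backups, aws s3 sync only transfers new or changed files; a sketch (the local path and bucket name are placeholders):

$ aws s3 sync ./backups s3://my-first-backup-bucket/backups
$ aws s3 sync ./backups s3://my-first-backup-bucket/backups --delete   # also remove objects deleted locally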

EC2 instance (AWS)

Launch a Linux Virtual Machine
===============================
Step 1: Launch an Amazon EC2 Instance
EC2 console > launch instance

Step 2: Configure your Instance
a. Amazon Linux AMI
b. t2.micro > default options
c. review and launch 
d. create new key pair > "MyKeyPair" > Download key pair
Windows users: We recommend saving your key pair in your user directory in a sub-directory called .ssh (ex. C:\user\{yourusername}\.ssh\MyKeyPair.pem).
Mac/Linux users: We recommend saving your key pair in the .ssh sub-directory from your home directory (ex. ~/.ssh/MyKeyPair.pem).
Note: On Mac, the key pair is downloaded to your Downloads directory by default. 
To move the key pair into the .ssh sub-directory, enter the following command in a terminal window: mv ~/Downloads/MyKeyPair.pem ~/.ssh/MyKeyPair.pem

click Launch instance

e. EC2 > View instances
f. make note of public ip address of the new instance
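
The public IP can also be read from the CLI (assumes the AWS CLI is configured as in the S3 steps above):

$ aws ec2 describe-instances \
    --query 'Reservations[].Instances[].[InstanceId,PublicIpAddress,State.Name]' \
    --output table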

Step 3: Connect to your Instance
Windows users:  Select Windows below to see instructions for installing Git Bash.
Mac/Linux user: Select Mac / Linux below to see instructions for opening a terminal window.

a. instructions to install git bash
b. open git bash to run ssh command
c. connect to your instance

Windows users: Enter ssh -i 'c:\Users\yourusername\.ssh\MyKeyPair.pem' ec2-user@{IP_Address} (ex. ssh -i 'c:\Users\adamglic\.ssh\MyKeyPair.pem' ec2-user@52.27.212.125)
Mac/Linux users: Enter ssh -i ~/.ssh/MyKeyPair.pem ec2-user@{IP_Address} (ex. ssh -i ~/.ssh/MyKeyPair.pem ec2-user@52.27.212.125)

Note: if you started a Linux instance that isn't Amazon Linux, a different user name may be used.
common user names include ec2-user, root, ubuntu, and fedora. 
If you are unsure what the login user name is, check with your AMI provider.

You'll see a response similar to the following:

The authenticity of host 'ec2-198-51-100-1.compute-1.amazonaws.com (10.254.142.33)' can't be established. 
RSA key fingerprint is 1f:51:ae:28:df:63:e9:d8:cf:38:5d:87:2d:7b:b8:ca:9f:f5:b1:6f. 
Are you sure you want to continue connecting (yes/no)?

Type yes and press enter.

You'll see a response similar to the following:

Warning: Permanently added 'ec2-198-51-100-1.compute-1.amazonaws.com' (RSA) to the list of known hosts.

You should then see the welcome screen for your instance and you are now connected to your AWS Linux virtual machine in the cloud.


Step 4: Terminate Your Instance
a. EC2 console > instance > actions > Instance state > Terminate
b. confirm yes to terminate
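
The same termination can be done from the CLI (the instance id below is a placeholder):

$ aws ec2 terminate-instances --instance-ids i-0123456789abcdef0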


Launch a Windows Virtual Machine
================================
Step 1: Enter the EC2 Dashboard
EC2 > console

Step 2: Create and Configure Your Virtual Machine
a. launch instance
b. Microsoft Windows Server 2012 R2 Base > select 
c. instance type > t2.micro > Review and launch
d. default options > launch

Step 3: Create a Key Pair and Launch Your Instance
a. popover > select "create a new key pair" > name: "MyKeyPair" > Download key pair > MyFirstKey.pem

Windows users: We recommend saving your key pair in your user directory in a sub-directory called .ssh (ex.C:\user\{yourusername}\.ssh\MyFirstKey.pem).
Mac/Linux users: We recommend saving your key pair in the .ssh sub-directory from your home directory (ex.~/.ssh/MyFirstKey.pem). 

b. Launch instance
c. on the next screen "View instances" > status

Step 4: Connect to Your Instance
connect using RDP

a. select instance > connect
b. login
The User name defaults to Administrator
To receive your password, click Get Password
c. choose MyFirstKey.pem > Decrypt password
d. save decrypted password in a safe location
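
Password retrieval can also be scripted (the instance id is a placeholder; the key file must be the one chosen at launch):

$ aws ec2 get-password-data --instance-id i-0123456789abcdef0 --priv-launch-key ~/.ssh/MyFirstKey.pem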

Step 5: Connect to Your Instance
RDP client
a. Click download remote desktop file
b. enter username and password
you should be logged in!!

Step 6: Terminate Your Windows VM
a. EC2 Console > select instance > actions > Instance state > Terminate
b. confirm yes to terminate