
OS Tuning

CentOS and RHEL Performance Tuning Utilities and Daemons

Tuned and Ktune

Tuned is a daemon that monitors and collects data on system load and activity. By default, tuned won’t dynamically change settings, but you can modify how the daemon behaves and allow it to adjust settings on the fly based on activity. I prefer to leave the dynamic monitoring disabled and just use tuned-adm to set the profile once and be done with it. By using the latency-performance profile, tuned can significantly improve performance on CentOS 6 and CentOS 7 servers.

You can install tuned on a CentOS 6.x server by using the command below. For CentOS 7, tuned is installed and activated by default.

yum install tuned

To activate the latency-performance profile, use the “tuned-adm” command to set the profile. Personally, I’ve found that latency-performance is one of the best profiles to use if you want high disk IO and low latency. Who doesn’t want better performance?

tuned-adm profile latency-performance

To check which tuned profile is currently in use, use the “active” option, which lists the active tuned profile.

tuned-adm active

Ktune provides many different profiles that tuned can use to optimize performance.

tuned-adm has the ability to set the following profiles on CentOS 6 or CentOS 7:

  • default – The default power saving profile; this is the most basic. It enables only the disk and CPU plugins. This is not the same as turning tuned-adm off.
  • latency-performance – Turns off power saving features; cpuspeed mode is set to performance. The I/O elevator is changed to deadline, and a cpu_dma_latency requirement value of 0 is registered.
  • throughput-performance – Recommended if the system is not using “enterprise class” storage. Same as “latency-performance” except:
kernel.sched_min_granularity_ns is set to 10 ms,
kernel.sched_wakeup_granularity_ns is set to 15 ms,
vm.dirty_ratio is set to 40%, and transparent huge pages are enabled.
  • enterprise-storage – Recommended for enterprise class storage servers that have BBU RAID controller cache protection and management of on-disk cache. Same as “throughput-performance” except:
file systems are re-mounted with barrier=0
  • virtual-guest – Same as “enterprise-storage” but sets the readahead value to 4x the default. Non boot/root file systems are remounted with barrier=0.
  • virtual-host – Based on “enterprise storage”. This reduces swappiness of virtual memory and enables more aggressive writeback of dirty pages. Recommended for KVM hosts.

perf

Basic command to get some info on a running process:

perf stat -p $PID

You can also view process info in real time by using perf’s top-like command:

perf top -p $PID
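To capture data for later analysis instead of watching live, you can record samples and then browse them. A minimal sketch using standard perf subcommands (the 30-second window is arbitrary):

perf record -p $PID -- sleep 30
perf report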

SystemTap

Lots of stuff here, will be going over this later.

https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html-single/SystemTap_Beginners_Guide/index.html

CPU Overview

[Additional CPU and Numa Information]

SMP and NUMA

Older systems had only a few CPUs per system; this was known as SMP (Symmetric Multi-Processor). Each CPU in the system had more or less the same access to available memory, achieved by laying out physical connections between the CPUs and RAM. These connections were known as parallel buses.

Newer systems have many more CPUs (and multiple cores per CPU), so giving them all the same access to memory becomes expensive in terms of the board space needed for all the physical connections. Newer designs instead give each CPU its own local bank of memory; this is known as NUMA (Non-Uniform Memory Access).

  • AMD has used this for a long time with Hyper Transport Interconnects (HT).
  • Intel started implementing NUMA with the QuickPath Interconnect (QPI)
  • Tuning applications depends on whether a system is using SMP or NUMA. Most modern systems use NUMA, but it really only becomes a factor on multi-socket systems. A server with a single E3-1240 processor will not need the same tuning as a 4-socket AMD Opteron 6230 system.

Threads

A unit of execution is known as a thread. The OS schedules threads on CPUs, and it is mainly concerned with keeping all the CPUs as busy as possible all the time. The issue with this is that the OS could decide to start a thread on a CPU that does not have the process’s memory in its local bank, which introduces latency and can reduce performance.

Interrupts (IRQs)

An interrupt (IRQ) can impact an application’s performance. The OS handles these events, which are used by peripherals to signal the arrival of data or the completion of an operation. IRQs don’t affect an application’s functionality, but they can cause performance issues.

Parallel and Serial Buses

  • Early CPUs were designed to have the same path to a single pool of memory on the system, so each CPU could access the same pool of RAM at the same speed. This worked up to a point, but as more CPUs were added, more physical connections had to be added to the board.
  • These paths are known as parallel buses; once enough CPUs were added, they took up too much board space and left no room for new features.
  • On newer systems, a serial bus is used: a single-wire communication path with a very high clock rate. Serial buses connect CPUs and pools of RAM. So instead of having 8–16 parallel buses for each CPU, there is now one bus that carries information to and from the CPU and RAM. All the CPUs talk to each other through this path; bursts of information are sent so that multiple CPUs can issue requests, much like how the Internet works.

Questions to ask yourself while performance tuning

If we have a two-socket motherboard and two quad-core CPUs, each socket would also have its own bank of RAM. Let’s say that each CPU socket has 4 GB of RAM in its local bank. The local bank includes the RAM, along with a built-in memory controller.

  • Socket One has 4 CPUs, known as CPUs 0-3. It also has 4 GB of RAM in its local bank, with its own memory controller.
  • Socket Two has 4 CPUs, known as CPUs 4-7. It also has 4 GB of RAM in its local bank, with its own memory controller.

Each socket in this example has a corresponding NUMA node, which means that we have two NUMA nodes on this system.

The ideal arrangement for a CPU in Socket One would be to execute processes on CPUs 0-3 and have those processes use the RAM located in Socket One’s RAM bank.

However, if there is not enough RAM in Socket One’s bank, or the OS does not realize that NUMA is in use, then it is possible for a CPU in Socket One to use the RAM from Socket Two. This is not optimal because twice as many steps are involved in accessing that RAM; the additional hops cause latency, which is bad.

To optimize performance for applications, you must ask the following questions:

  • 1) What is the topology of the system? (Is this NUMA, or not?)
  • 2) Where is the application currently executing? (Using one socket, or all of them?)
  • 3) If the application is using one CPU, what is the closest bank of RAM to it?

NUMA in action

The process for a CPU to get access to local memory is as follows:

1) A CPU from Socket One tells its local memory controller the address it wants to access.

2) The memory controller sets up access to this address for that CPU.

3) The CPU then starts doing work with this address.

The process for a CPU to get access to Remote memory is as follows:

1) A CPU from Socket One tells its local memory controller the address it wants to access.

2) Since the process is unable to use the local bank (not enough space, or it started elsewhere), the local memory controller passes the request to Socket Two's memory controller.

3) Socket Two's (remote) memory controller then sets up access for that CPU on Socket Two's RAM.

4) The CPU then starts doing work with this address.

You can see that there are extra steps involved, which can really add up over time and slow down performance.

CPU Affinity with Taskset

This utility can be used to retrieve and set the CPU affinity of a running process. You can also use it to launch a process on specific CPUs. However, taskset will not ensure that local memory is used by whichever CPUs are set.

To control this, you would use numactl instead.

CPU affinity is represented as a bitmask. The lowest order bits represent the first logical CPU, and the highest order bits would represent the last logical CPU.

Examples: 0x00001 = processor 0, and 0x00003 = processors 0 and 1

Command used to set the CPU affinity of a running process would be:

taskset -p $CPU_bit_mask $PID_of_task_to_set

Command used to launch a process with a set affinity:

taskset $CPU_bit_mask $program_to_launch

You can also just specify logical CPU numbers with this option:

taskset -c 0,1,2,3 $program_to_launch
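As a quick sketch of both forms together (the dd workload is just a stand-in), this launches a process pinned to CPUs 0 and 1, then reads back its affinity:

taskset -c 0,1 dd if=/dev/zero of=/dev/null &
taskset -p $!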

CPU Scheduling

The Linux scheduler has a few different policies that it uses to determine where, and for how long, a thread will run.

The two main categories are:

1) Realtime Policies

  • SCHED_FIFO
  • SCHED_RR

2) Normal Policies

  • SCHED_OTHER
  • SCHED_BATCH
  • SCHED_IDLE

Realtime scheduling policies:

Realtime threads are scheduled first, and normal threads are scheduled to run after realtime threads.

Realtime policies are used for time-critical tasks that must complete without interruptions.

SCHED_FIFO

This policy defines a fixed priority for each thread (1-99). The scheduler scans the list of these threads and runs the highest priority threads first. The threads then run until they block, exit, or another thread comes along that has a higher priority.

Even a low priority thread using this policy will run before any other thread under a normal policy.

SCHED_RR

This uses a round-robin style, and it load balances between threads that have the same priority.

Memory

Transparent HugePage Overview

Most modern CPUs can take advantage of multiple page sizes, but the Linux kernel will usually stick with the smallest page size unless you tell it otherwise. The smallest page size is 4096 bytes, or 4KB. Any page size above 4KB is considered a huge page. Larger pages can improve performance in some cases because they reduce the number of page faults needed to access data in memory.

While this might be an oversimplified example, it should explain why huge pages can be awesome. If an application needs, say, 2MB of data and I am using 4KB pages, there would need to be 512 page faults to read all the data from memory (2048KB / 4KB = 512). If I were using 2MB pages instead, only 1 page fault would be needed, since all the data fits inside a single “huge page”.

In addition to reducing page faults, huge pages boost performance by reducing the cost of virtual-to-physical address translation, since fewer pages need to be accessed to obtain all the data from memory. With fewer lookups and translations going on, the CPU caches stay warmer, improving performance.

The kernel maps its own address space with huge pages to help reduce TLB pressure, but traditionally the only way a user space application could utilize huge pages was through hugetlbfs, which can be a huge pain in the ass to configure. Transparent Huge Pages came along and rescued the day by automating some of the use of huge pages in a safe way that won’t break an application.

The Transparent HugePage patch was added to the 2.6.38 kernel by Andrea Arcangeli. It introduced a new thread called khugepaged, which scans for pages that could be merged into a single large page; once the pages have been merged into a huge page, the smaller pages are removed and freed. If a huge page needs to be swapped to disk, it is automatically split back up into smaller pages and written out. So THP is basically awesome.

How to Configure Transparent HugePages

You can see if your OS is configured to use transparent huge pages by cat-ing the file below. The file should exist, and most of the time the value will be “always”.

cat /sys/kernel/mm/transparent_hugepage/enabled

You can disable transparent huge pages by echoing “never” into the file, although I do not suggest disabling THP unless you really know what you are doing.

echo never > /sys/kernel/mm/transparent_hugepage/enabled
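The echo above only lasts until the next reboot. One way to make it persistent on CentOS 7, assuming a standard grub2/grubby setup, is to add the setting to the kernel command line:

grubby --update-kernel=ALL --args="transparent_hugepage=never"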

If you want to see whether huge pages are active, check /proc/meminfo. In the output below, the AnonHugePages line shows transparent huge pages in use, while the HugePages_* counters refer to static huge pages (hugetlbfs), which are zero here.

cat /proc/meminfo  | grep -i huge

AnonHugePages:    301056 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

NUMA

Controlling NUMA with numactl

If you are running CentOS 6 or CentOS 7, you can use numactl to run processes with specific scheduling or memory placement policies. These policies then apply to the process and any child processes it creates.

You can check the following locations to determine how the CPUs talk to each other, along with NUMA node info.

/sys/devices/system/cpu
/sys/devices/system/node

Long running applications that require high performance should be configured so that they only use memory that is located as close to the CPU as possible.

Applications that are multi-threaded should be locked down to a specific NUMA node, not a CPU core. Because of the way the CPU caches instructions, it is better to reserve a set of cores for specific threads; otherwise another application’s thread may run and evict the previous cache contents, and the CPU then wastes time re-caching everything.
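For example, a sketch that locks a multi-threaded service to NUMA node 0 for both CPU and memory (the mysqld path is just a placeholder):

numactl --cpunodebind=0 --membind=0 /usr/sbin/mysqld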

To display the current NUMA policy settings for the current process:

numactl --show

To display available NUMA nodes on a CentOS 7 system

numactl --hardware

To only allocate memory to a process from specific NUMA nodes, use the following command

numactl --membind=$nodes $program_to_run

To only run a process on specific CPU nodes, use the following command

numactl --cpunodebind=$nodes $program_to_run

To run a process on specific CPUs (not NUMA nodes), run this command

numactl --physcpubind=$CPU $program_to_run

Viewing NUMA stats with numastat

The numastat command can be used to view memory statistics for CentOS, showing which processes are running on which NUMA node, broken down on a per-node basis. If you want to know whether your server is achieving optimal CPU performance by avoiding remote NUMA access, use numastat and look for low numa_miss and numa_foreign values. If you notice a lot of NUMA misses (say 5% or more), you may want to try pinning processes to specific nodes to improve performance.

You can run numastat without any options and it will display quite a lot of information. If you run numastat on CentOS 7, the output might look slightly different than it does on CentOS 6.

numastat

The default tracking categories are as follows:

  • numa_hit – The number of successful allocations to this node.
  • numa_miss – The number of memory accesses / attempted allocations to another NUMA node that were instead allocated on this node. If this value is high, it means that some NUMA nodes are using remote memory instead of local memory. This hurts performance badly, since there is additional latency for every remote memory access, and many remote accesses slow down the application. Lots of NUMA misses can be caused by a lack of available memory, so if your server is close to using all of its memory you may want to add more, or use numactl/taskset to balance process placement across NUMA nodes.
  • numa_foreign – Similar to numa_miss, but instead shows the number of allocations that were initially intended for this node but were moved to another node. High values here are also bad. This value should mirror numa_miss on the other node.
  • interleave_hit – The number of attempted interleave allocations to this node that were a success.
  • local_node – The number of times a process on this node successfully allocated memory on this node.
  • other_node – The number of times a process on another node allocated memory on this node.

NUMA Affinity Management Daemon (numad)

Numad is a daemon that automatically monitors NUMA topology and resource usage within the OS; it can run on CentOS 6 or CentOS 7 servers. Numad can dynamically improve NUMA resource allocation and management by watching which CPUs processes run on and which NUMA nodes those processes access. Over time, numad will balance processes so that they access data from the local memory bank, which helps lower latency.

The NUMA daemon looks for significant, long-running processes and then attempts to pin each one to certain NUMA nodes, so that the CPU and memory reside in the same NUMA node. This assumes there is enough free memory for the process to use.

You will see significant performance improvements on systems that have long-running processes that consume a lot of resources, but not enough to occupy multiple NUMA nodes. For instance, MySQL is a prime candidate for NUMA balancing; Varnish would be another good candidate.

Applications such as large in-memory databases may not see any improvement, because you cannot lock down resources to a single NUMA node if all the system RAM is being used by one process.

To start the numad process on CentOS 6 or CentOS 7

service numad start

To ensure NUMAD starts on reboot, use chkconfig to set numad to “on”

chkconfig numad on

File System Optimization

Barriers

Write barriers are a kernel mechanism used to ensure that file system metadata is correctly written and ordered on persistent storage. When enabled, they ensure that any data transmitted using fsync persists across power outages. Barriers are enabled by default.

If a system’s storage does not have a volatile write cache (for example, a battery-backed RAID controller cache), barriers can be disabled to help improve performance.

Mount option:

nobarrier

Access Time

Whenever a file is read, the access time for that file must be updated in the inode metadata, which usually involves additional write I/O. If this is not needed, you can disable access time updates to help IO performance.

Mount option:

noatime
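As a sketch, an /etc/fstab entry that applies noatime (and nobarrier, assuming your controller has a battery-backed cache) might look like this; the device and mount point are placeholders:

/dev/mapper/vg_data-lv_data  /data  ext4  defaults,noatime,nobarrier  0 0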

Increased Read-Ahead Support

Read-ahead speeds up file access by pre-fetching data and loading it into the page cache so that it can be available earlier in memory instead of from disk.

To view the current read-ahead value for a particular block device, run:

blockdev --getra device

To modify the read-ahead value for that block device, run the following command, where N is the number of 512-byte sectors.

blockdev --setra N device

This will not persist across reboots, so add it to an init script such as /etc/rc.local to reapply it after reboots.
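For example, appending a line like this to /etc/rc.local reapplies the setting at boot (assuming sdb is the device you tuned and 8192 sectors, 4MB, is the value you want):

blockdev --setra 8192 /dev/sdb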

Syscall Utilities

strace

strace can be used to watch system calls made by a program.

strace $program
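A couple of standard strace variations are also handy: -p attaches to a running PID, and -c prints a summary table of syscall counts and times instead of the full trace.

strace -p $PID
strace -c $program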

sysdig

Sysdig is newer than strace and there is a lot you can do with it. By default it shows all system calls on a server, but you can filter for specific applications if you want.

sysdig proc.name=$program

To install sysdig on CentOS 6:

rpm --import https://s3.amazonaws.com/download.draios.com/DRAIOS-GPG-KEY.public  
curl -s -o /etc/yum.repos.d/draios.repo http://download.draios.com/stable/rpm/draios.repo
rpm -i http://mirror.us.leaseweb.net/epel/6/i386/epel-release-6-8.noarch.rpm
yum -y install kernel-devel-$(uname -r)
yum -y install sysdig

System Calls

http://sysdigcloud.com/fascinating-world-linux-system-calls/

A system call is how a program requests a service from the OS’s kernel. System calls can ask for access to a hard drive, open a file, create a new process, and so on.

System calls require a switch from user mode to kernel mode.

Clone

The clone syscall creates new processes and threads. It is one of the more complex system calls and can be expensive to run, so if you notice tons of these syscalls and performance is low, you may want to reduce how often this happens by increasing process lifetime or reducing the number of processes in general.

sysdig filter for clone

sysdig evt.type=clone

Execve

This syscall executes programs; typically you will see this call right after the clone syscall. Everything that gets executed goes through this call.

sysdig filter for execve

sysdig evt.type=execve

Chdir

This syscall changes the process working directory. If anything changes directory, you can see it by filtering on this syscall.

sysdig filter for chdir

sysdig evt.type=chdir

open/creat

These syscalls open files and can also create them. If you trace them you can view file creation and see who is touching what.

sysdig filter for open and creat

sysdig evt.type=open
sysdig evt.type=creat

connect

This syscall initiates a connection on a socket. It is the only syscall that can establish a network connection.

sysdig filter for connect. You can also specify a port or IP to view specific services or IPs.

sysdig evt.type=connect
sysdig evt.type=connect and fd.port=80

accept

This syscall accepts a connection on a socket. You will see it on the listening side whenever a client calls connect.

sysdig filter for accept. You can also specify a port or IP to view specific services or IPs.

sysdig evt.type=accept
sysdig evt.type=accept and fd.port=80

read/write

These syscalls read data from, or write data to, a file descriptor.

sysdig filter for IO

sysdig evt.is_io=true

You can also use the echo_fds chisel to view IO for certain files, ports, or programs. For example:

sysdig -c echo_fds fd.name=/var/lib/mysql/

sysdig -c echo_fds proc.name=httpd and fd.port!=80

unlink/rename

These syscalls delete or rename files.

sysdig evt.type=unlink
sysdig evt.type=rename

Links

Performance Tuning in CentOS 7

Tuned

In RedHat (and thus CentOS) 7.0, a daemon called “tuned” was introduced as a unified system for applying tunings to Linux. tuned operates with simple, file-based tuning “profiles” and provides an admin command-line interface named “tuned-adm” for applying, listing and even recommending tuned profiles.

Some operational benefits of tuned:

  • File-based configuration – Profile tunings are contained in simple, consolidated files
  • Swappable profiles – Profiles are easily changed back/forth
  • Standards compliance – Using tuned profiles ensures tunings are not overridden or ignored

Note: If you use configuration management systems like Puppet, Chef, Salt, Ansible, etc., I suggest you configure those systems to deploy tunings via tuned profiles instead of applying tunings directly, as tuned will likely fight that automation and override the changes.

The default available tuned profiles (as of  RedHat 7.2.1511) are:

  • balanced
  • desktop
  • latency-performance
  • network-latency
  • network-throughput
  • powersave
  • throughput-performance
  • virtual-guest
  • virtual-host

The profiles that are generally interesting for database usage are:

  • latency-performance

    “A server profile for typical latency performance tuning. This profile disables dynamic tuning mechanisms and transparent hugepages. It uses the performance governor for p-states through cpuspeed, and sets the I/O scheduler to deadline.”

  • throughput-performance

    “A server profile for typical throughput performance tuning. It disables tuned and ktune power saving mechanisms, enables sysctl settings that improve the throughput performance of your disk and network I/O, and switches to the deadline scheduler. CPU governor is set to performance.”

  • network-latency – Includes “latency-performance,” disables transparent_hugepages, disables NUMA balancing and enables some latency-based network tunings.
  • network-throughput – Includes “throughput-performance” and increases network stack buffer sizes.

I find “network-latency” is the closest match to our recommended tunings, but some additional changes are still required.

tuned-adm

Tuning a server according to specific requirements is not an easy task. You need to know a lot of system parameters and how to change them in an intelligent manner.
Red Hat offers a tool called tuned-adm that makes these changes easy by using tuning profiles.

The tuned-adm command requires the tuned package (if not already installed):

# yum install -y tuned

Tuning Profiles

A tuning profile consists of a list of system changes corresponding to a specific requirement.
To get the list of available tuning profiles, type:

# tuned-adm list
Available profiles:
- balanced
- desktop
- latency-performance
- network-latency
- network-throughput
- powersave
- sap
- throughput-performance
- virtual-guest
- virtual-host
Current active profile: virtual-guest

Note: All these tuning profiles are explained in detail in the tuned-adm man page.

To only get the active profile, type:

# tuned-adm active
Current active profile: virtual-guest

To get the recommended tuning profile for your current configuration, type:

# tuned-adm recommend
virtual-guest

To apply a different tuning profile (here throughput-performance), type:

# tuned-adm profile throughput-performance

cpu setting

tuned-adm profile throughput-performance
tuned-adm active
cpupower idle-set -d 4
cpupower idle-set -d 3
cpupower idle-set -d 2
cpupower frequency-set -g performance
# for more info /usr/lib/tuned/throughput-performance/tuned.conf

 

sysctl

kernel.numa_balancing=0

net.core.netdev_max_backlog = 300000
net.ipv4.tcp_sack = 0
net.core.netdev_budget=600
net.ipv4.tcp_timestamps=1
net.ipv4.tcp_low_latency=1
net.ipv4.tcp_rmem=16384 349520 16777216
net.ipv4.tcp_wmem=16384 349520 16777216
net.ipv4.tcp_mem=2314209 3085613 4628418
net.core.rmem_max=16777216
net.core.wmem_max=16777216
net.core.somaxconn=2048
net.ipv4.tcp_adv_win_scale=1
net.ipv4.tcp_window_scaling = 1
#UDP buffer
net.core.rmem_max=16777216
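To make settings like these persist across reboots on CentOS 7, save them to a file under /etc/sysctl.d/ and reload; the filename below is arbitrary:

# put the settings above into /etc/sysctl.d/99-tuning.conf, then:
sysctl --system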

Linux Kernel Tuning for Centos 7

 

tuned should already be installed on CentOS 7, and the default profile is balanced.

tuned-adm profiles can be found in this directory

ls /usr/lib/tuned/
 
balanced/               latency-performance/    powersave/              virtual-guest/
desktop/                network-latency/        recommend.conf          virtual-host/
functions               network-throughput/     throughput-performance/

To see what the active profile is:

tuned-adm active

To activate tuned profile xxx

tuned-adm profile xxx

 

latency-performance

  • latency-performance
    • Profile for low latency performance tuning.
    • Disables power saving mechanisms.
    • CPU governor is set to performance and locked to the low C states (by PM QoS).
    • CPU energy performance bias to performance.
    • This profile is the Parent profile to “network-latency”.

Activate tuned latency-performance for CentOS 7

tuned-adm profile latency-performance

For CentOS 7, the latency-performance profile includes the following tweaks

cat /usr/lib/tuned/latency-performance/tuned.conf
[cpu]
force_latency=1
governor=performance
energy_perf_bias=performance
min_perf_pct=100
 
[sysctl]
kernel.sched_min_granularity_ns=10000000
vm.dirty_ratio=10
vm.dirty_background_ratio=3
vm.swappiness=10
kernel.sched_migration_cost_ns=5000000

network-latency

  • network-latency
    • This is a Child profile of “latency-performance”.
    • What this means is that if you were to activate the network-latency profile via tuned, it would automatically enable latency-performance, then make some additional tweaks to improve network latency.
    • Disables transparent hugepages, and makes some net.core kernel tweaks.

 

cat /usr/lib/tuned/network-latency/tuned.conf
[main]
include=latency-performance
 
[vm]
transparent_hugepages=never
 
[sysctl]
net.core.busy_read=50
net.core.busy_poll=50
net.ipv4.tcp_fastopen=3
kernel.numa_balancing=0

throughput-performance

  • throughput-performance
    • This is the Parent profile to virtual-guest, virtual-host and network-throughput.
    • This profile is optimized for large, streaming files or any high throughput workloads.

 

cat /usr/lib/tuned/throughput-performance/tuned.conf
[cpu]
governor=performance
energy_perf_bias=performance
min_perf_pct=100
 
[vm]
transparent_hugepages=always
 
[disk]
readahead=>4096
 
[sysctl]
kernel.sched_min_granularity_ns = 10000000
kernel.sched_wakeup_granularity_ns = 15000000
vm.dirty_ratio = 40
vm.dirty_background_ratio = 10
vm.swappiness=10

virtual-guest

  • virtual-guest
    • Profile optimized for virtual guests based on throughput-performance profile.
    • It additionally decreases virtual memory swappiness and increases dirty_ratio settings.

 

cat /usr/lib/tuned/virtual-guest/tuned.conf
[main]
include=throughput-performance
 
[sysctl]
vm.dirty_ratio = 30
vm.swappiness = 30

virtual-host

  • virtual-host
    • Profile optimized for virtual hosts based on throughput-performance profile.
    • It additionally enables more aggressive write-back of dirty pages.

 

cat /usr/lib/tuned/virtual-host/tuned.conf
[main]
include=throughput-performance
 
[sysctl]
vm.dirty_background_ratio = 5
kernel.sched_migration_cost_ns = 5000000

I/O scheduler

echo 'deadline' > /sys/block/sda/queue/scheduler
vim /etc/grub2.cfg

menuentry 'CAKE 3.0, with Linux 3.10.0-229.1.2.el7.x86_64'
set root='hd0,msdos1'
linux16 /vmlinuz-3.10.0-229.1.2.el7.x86_64 root= …. elevator=deadline
initrd16 /initramfs-3.10.0-229.1.2.el7.x86_64.img
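Instead of hand-editing grub2.cfg (which gets regenerated), a safer sketch on CentOS 7 is to let grubby append the parameter to every installed kernel:

grubby --update-kernel=ALL --args="elevator=deadline"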

WinPE 10-8 Sergei Strelec

WinPE 10-8 Sergei Strelec (x86/x64/Native x86) 2018.01.05 English Version | File size: 2.89 GB

Bootable disk Windows 10 and 8 PE – for maintenance of computers, hard disks and partitions, backup and restore disks and partitions, computer diagnostics, data recovery, Windows installation.

WinPE 10-8 Sergei Strelec (x86/x64/Native x86) 2017.10.03 English Version

The World’s Most Advanced Microsoft Windows 10 USB Bootable OS

The Portable Microsoft Windows 10 Operating System For Cyber Agent

 

 

Backup and restore
Acronis True Image 2017 20.0 Build 8058
Acronis True Image Premium 2014 Build 6673
Acronis Backup Advanced 11.7.50064
Active Disk Image Professional 7.0.4
StorageCraft Recovery Environment 5.2.5.37836
FarStone Recovery Manager 10.10
QILING Disk Master 4.3.6.20170806
R-Drive Image 6.1 Build 6109
Symantec Veritas System Recovery 16.0.2.56166
Symantec Ghost 12.0.0.10561
TeraByte Image for Windows 3.15
AOMEI Backupper 4.0.6
Drive SnapShot 1.45.0.17689
Macrium Reflect 7.1.2801
Disk2vhd 2.01
Vhd2disk v0.2

Hard disk
Acronis Disk Director 12.0.3297
EASEUS Partition Master 12.5 WinPE Edition
Paragon Hard Disk Manager 15 10.1.25.1137
MiniTool Partition Wizard 10.2.2
AOMEI Partition Assistant 6.6.0
AOMEI Dynamic Disk Manager 1.2.0
Eassos PartitionGuru 4.9.5.508
Defraggler 2.21.993
Auslogics Disk Defrag 7.1.0
HDD Low Level Format Tool 4.40
Active KillDisk 10.1.1
FarStone DriveClone 11.10 Build 20150825 (WinPE10)

Diagnostics
HD Tune Pro 5.70
Check Disk GUI
Victoria 4.47
HDD Regenerator 2011
HDDScan 3.3
Hard Disk Sentinel Pro 5.01 Build 8557
Western Digital Data LifeGuard Diagnostics 1.31.0
CrystalDiskInfo 7.5.1
CrystalDiskMark 6.0.0
AIDA64 Extreme Edition 5.92.4300
BurnInTest Pro 8.1 Build 1025
PerformanceTest 9.0 Build 1022
ATTO Disk Benchmark 3.05
RWEverything 1.7
CPU-Z 1.82.1
PassMark MonitorTest 3.2 Build 1004
HWiNFO32 5.70 Build 3300
OCCT Perestroika 4.5.1
Keyboard Test Utility 1.4.0

Network programs
Opera 46
PENetwork 0.58.2
TeamViewer 6
Ammyy Admin 3.5
AeroAdmin 4.1 Build 2767
µTorrent 3.1.3
FileZilla 3.24.0
Internet Download Accelerator 6.10.1.1527
OpenVpn 2.4.4
PuTTY 0.70

Other programs
Active Password Changer 7.0.9.1
Reset Windows Password 4.2.0.470
PCUnlocker 3.8.0
UltraISO 9.7.0 Build 3476
Total Commander 9.00
Remote Registry (x86/64)
FastStone Capture 7.7
IrfanView 4.38
STDU Viewer
Bootice 1.3.4
Unlocker 1.9.2
7-ZIP
WinNTSetup 3.8.8.3
Double Driver 4.1.0
Imagex
GImageX 2.1.1
Media Player Classic
EasyBCD 2.3
EasyUEFI 3.0
SoftMaker Office
Far Manager 3.0 Build 5100
BitLocker
78Setup (author conty9)
Dism++ 10.1.1000.52B
WinHex 19.3
FastCopy 3.40
UltraSearch 2.12
Everything 1.4.1.877
Linux Reader 2.6
WinDirStat 1.1.2
Recover Keys 10.0.4.198
NirLauncher 1.20.25
Remote Registry Editor
Windows Recovery Environment (WinPE 10)

Data Recovery
R-Studio 8.5 Build 170117
Active File Recovery 15.0.7
Active Partition Recovery 15.0.0
Runtime GetDataBack for NTFS 4.33
Runtime GetDataBack for FAT 4.33
DM Disk Editor and Data Recovery 2.10.0
UFS Explorer Professional Recovery 5.23.1
Hetman Partition Recovery 2.7
Eassos Recovery 4.2.1.297
EaseUS Data Recovery Wizard 11.9

 

 

How-to extend a root LVM partition online

This guide will explain how to extend a root LVM partition online.

There is also a quick remedy for the emergency situation when your root partition runs out of disk space. A feature specific to ext3 and ext4 can help here: unless explicitly changed during filesystem creation, both reserve five percent (5%) of the volume capacity for the superuser (root) by default.

# df -Th
Filesystem    Type    Size  Used Avail Use% Mounted on
/dev/mapper/vg_main-lv_root
              ext4    8.4G  8.0G  952K 100% /
tmpfs        tmpfs    499M     0  499M   0% /dev/shm
/dev/vda1     ext4    485M   33M  428M   8% /boot

# dumpe2fs /dev/vg_main/lv_root | grep 'Reserved block count'
dumpe2fs 1.41.12 (17-May-2010)
Reserved block count:     111513

It turned out that 111513 4KB blocks were reserved for the superuser, which was exactly five percent of the volume capacity.

How to enable it?

# tune2fs -m 0 /dev/vg_main/lv_root 
tune2fs 1.41.12 (17-May-2010)
Setting reserved blocks percentage to 0% (0 blocks)
# df -Th
Filesystem    Type    Size  Used Avail Use% Mounted on
/dev/mapper/vg_main-lv_root
              ext4    8.4G  8.0G  437M  95% /
tmpfs        tmpfs    499M     0  499M   0% /dev/shm
/dev/vda1     ext4    485M   33M  428M   8% /boot

Now that we have some free space on the root partition to work on we can extend the LVM partition:

Create a new partition of appropriate size using fdisk

fdisk /dev/sdb

This is the key sequence to create a new LVM-type (8e) partition:

n, p, 1, enter (accept default first sector), enter (accept default last sector), t, 8e, w
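For illustration only, the same keystrokes can be piped in non-interactively; this sketch assumes a blank /dev/sdb and is fragile across fdisk versions, so the interactive session above is the safer route:

printf 'n\np\n1\n\n\nt\n8e\nw\n' | fdisk /dev/sdb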

Create a new Physical Volume

# pvcreate /dev/sdb1
  Writing physical volume data to disk "/dev/sdb1"
  Physical volume "/dev/sdb1" successfully created

Extend a Volume Group

# vgextend vg_main /dev/sdb1
  Volume group "vg_main" successfully extended

Extend your LVM

– extend the size of your LVM by the amount of free space on PV

# lvextend /dev/vg_main/lv_root /dev/sdb1
  Extending logical volume lv_root to 18.50 GiB
  Logical volume lv_root successfully resized

– or with a given size

lvextend -L +10G /dev/vg_main/lv_root
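Note: recent LVM versions can grow the filesystem in the same step with the -r (--resizefs) flag, which folds the resize2fs step below into the lvextend:

lvextend -r -L +10G /dev/vg_main/lv_root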

Finally resize the file system online

# resize2fs /dev/vg_main/lv_root
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/vg_main/lv_root is mounted on /; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 2
Performing an on-line resize of /dev/vg_main/lv_root to 4850688 (4k) blocks.
The filesystem on /dev/vg_main/lv_root is now 4850688 blocks long.

Now we can set the reserved blocks back to the default percentage – 5%

tune2fs -m 5 /dev/mapper/vg_main-lv_root

Results:

# df -Th
Filesystem    Type    Size  Used Avail Use% Mounted on
/dev/mapper/vg_main-lv_root
              ext4     19G  8.0G  9.4G  46% /
tmpfs        tmpfs    499M     0  499M   0% /dev/shm
/dev/vda1     ext4    485M   33M  428M   8% /boot

Firewalld Centos-7

As of CentOS 7 the default firewall application has changed from iptables to firewalld. FirewallD provides dynamic filtering versus the static rules of iptables. You can read more about the details of the features at the Fedora project page here and/or on their official homepage here.

This page will help me unlearn iptables and remember the firewalld commands.

Get Initial information

  • Get the status of firewalld
 firewall-cmd --state
  • Reload the firewall without losing state information:
 firewall-cmd --reload
  • Get a list of all supported zones
 firewall-cmd --get-zones
  • Get a list of all supported services
 firewall-cmd --get-services
  • Get a list of all supported icmptypes
 firewall-cmd --get-icmptypes
  • List all zones with the enabled features.
 firewall-cmd --list-all-zones
  • Print zone <zone> with the enabled features. If zone is omitted, the default zone will be used.
 firewall-cmd [--zone=<zone>] --list-all
  • Get the default zone set for network connections
 firewall-cmd --get-default-zone
  • Set the default zone
 firewall-cmd --set-default-zone=<zone>

All interfaces that are located in the default zone will be pushed into the new default zone, which defines the limitations for new externally initiated connection attempts. Active connections are not affected.

  • Get active zones
 firewall-cmd --get-active-zones
  • Get zone related to an interface
 firewall-cmd --get-zone-of-interface=<interface>

 

This prints the zone name if the interface is part of a zone.

Update the basic rules

  • Add an interface to a zone
 firewall-cmd [--zone=<zone>] --add-interface=<interface>

Add an interface to a zone, if it was not in a zone before. If the zone option is omitted, the default zone will be used. The interfaces are reapplied after reloads.

  • Change the zone an interface belongs to
 firewall-cmd [--zone=<zone>] --change-interface=<interface>

This is similar to the --add-interface option, but pushes the interface into the new zone even if it was in another zone before.

  • Remove an interface from a zone
 firewall-cmd [--zone=<zone>] --remove-interface=<interface>
  • Query if an interface is in a zone
 firewall-cmd [--zone=<zone>] --query-interface=<interface>

Returns if the interface is in the zone. There is no output.

  • List the enabled services in a zone
 firewall-cmd [ --zone=<zone> ] --list-services
  • Enable panic mode to block all network traffic in case of emergency
 firewall-cmd --enable-panic
  • Disable panic mode
 firewall-cmd --disable-panic
  • Query panic mode
 firewall-cmd --query-panic

This returns the state of the panic mode; there is no output. To get a visual state, use

 firewall-cmd --query-panic && echo "On" || echo "Off"

Runtime zone handling

In the runtime mode the changes to zones are not permanent. The changes will be gone after reload or restart.

  • Enable a service in a zone
 firewall-cmd [--zone=<zone>] --add-service=<service> [--timeout=<seconds>]

This enables a service in a zone. If zone is not set, the default zone will be used. If a timeout is set, the service will only be enabled for that number of seconds in the zone. If the service is already active, there will be no warning message.

  • Example: Enable ipp-client service for 60 seconds in the home zone:
 firewall-cmd --zone=home --add-service=ipp-client --timeout=60
  • Example: Enable the http service in the default zone:
 firewall-cmd --add-service=http
  • Disable a service in a zone
 firewall-cmd [--zone=<zone>] --remove-service=<service>

This disables a service in a zone. If zone is not set, the default zone will be used.

  • Example: Disable http service in the home zone:
 firewall-cmd --zone=home --remove-service=http

The service will be disabled in the zone. If the service is not enabled in the zone, there will be a warning message.

  • Query if a service is enabled in a zone
 firewall-cmd [--zone=<zone>] --query-service=<service>

This returns 0 if the service is enabled in the zone, otherwise 1. There is no output.

  • Enable a port and protocol combination in a zone
 firewall-cmd [--zone=<zone>] --add-port=<port>[-<port>]/<protocol> [--timeout=<seconds>]

This enables a port and protocol combination. The port can be a single port or a port range <port>-<port>. The protocol can be either tcp or udp.
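  • Example (hypothetical port and timeout): enable TCP port 8080 in the default zone for 300 seconds:
 firewall-cmd --add-port=8080/tcp --timeout=300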

  • Disable a port and protocol combination in a zone
 firewall-cmd [--zone=<zone>] --remove-port=<port>[-<port>]/<protocol>
  • Query whether a port and protocol combination is enabled in a zone
 firewall-cmd [--zone=<zone>] --query-port=<port>[-<port>]/<protocol>

This command returns if it is enabled, there is no output.

Masquerading

This is used to hide internal addresses behind a public IP or port.

  • Enable masquerading in a zone
 firewall-cmd [--zone=<zone>] --add-masquerade

This enables masquerading for the zone. The addresses of a private network are mapped to and hidden behind a public IP address. This is a form of address translation and mostly used in routers. Masquerading is IPv4 only because of kernel limitations.

  • Disable masquerading in a zone
 firewall-cmd [--zone=<zone>] --remove-masquerade
  • Query masquerading in a zone
 firewall-cmd [--zone=<zone>] --query-masquerade

This command returns if it is enabled, there is no output.

  • Enable ICMP blocks in a zone
 firewall-cmd [--zone=<zone>] --add-icmp-block=<icmptype>

This enables blocking of a selected Internet Control Message Protocol (ICMP) message type. ICMP messages are either information requests, replies to information requests, or error conditions.

  • Disable ICMP blocks in a zone
 firewall-cmd [--zone=<zone>] --remove-icmp-block=<icmptype>
  • Query ICMP blocks in a zone
 firewall-cmd [--zone=<zone>] --query-icmp-block=<icmptype>

This command returns if it is enabled, there is no output.

  • Example: Block echo-reply messages in the public zone:
 firewall-cmd --zone=public --add-icmp-block=echo-reply
  • Enable port forwarding or port mapping in a zone
 firewall-cmd [--zone=<zone>] --add-forward-port=port=<port>[-<port>]:proto=<protocol> { :toport=<port>[-<port>] | :toaddr=<address> | :toport=<port>[-<port>]:toaddr=<address> }

The port is either mapped to the same port on another host, to another port on the same host, or to another port on another host. The port can be a single port <port> or a port range <port>-<port>. The protocol is either tcp or udp. toport is either a port or a port range <port>-<port>. toaddr is an IPv4 address. Port forwarding is IPv4 only because of kernel limitations.

  • Disable port forwarding or port mapping in a zone
 firewall-cmd [--zone=<zone>] --remove-forward-port=port=<port>[-<port>]:proto=<protocol> { :toport=<port>[-<port>] | :toaddr=<address> | :toport=<port>[-<port>]:toaddr=<address> }
  • Query port forwarding or port mapping in a zone
 firewall-cmd [--zone=<zone>] --query-forward-port=port=<port>[-<port>]:proto=<protocol> { :toport=<port>[-<port>] | :toaddr=<address> | :toport=<port>[-<port>]:toaddr=<address> }

This command returns if it is enabled, there is no output.

  • Example: Forward ssh to host 127.0.0.2 in the home zone
 firewall-cmd --zone=home --add-forward-port=port=22:proto=tcp:toaddr=127.0.0.2

Permanent zone handling

The permanent options do not affect runtime directly; they only take effect after a reload or restart. To change both runtime and permanent settings, you need to supply both commands. The --permanent option needs to be the first option for all permanent calls.

  • Get a list of supported permanent services
 firewall-cmd --permanent --get-services
  • Get a list of supported permanent icmptypes
 firewall-cmd --permanent --get-icmptypes
  • Get a list of supported permanent zones
 firewall-cmd --permanent --get-zones
  • Enable a service in a zone
 firewall-cmd --permanent [--zone=<zone>] --add-service=<service>

This enables the service in the zone permanently. If the zone option is omitted, the default zone is used.

  • Disable a service in a zone
 firewall-cmd --permanent [--zone=<zone>] --remove-service=<service>
  • Query if a service is enabled in a zone
 firewall-cmd --permanent [--zone=<zone>] --query-service=<service>

This command returns if it is enabled, there is no output.

  • Example: Enable service ipp-client permanently in the home zone
 firewall-cmd --permanent --zone=home --add-service=ipp-client
  • Enable a port and protocol combination permanently in a zone
 firewall-cmd --permanent [--zone=<zone>] --add-port=<port>[-<port>]/<protocol>
  • Disable a port and protocol combination permanently in a zone
 firewall-cmd --permanent [--zone=<zone>] --remove-port=<port>[-<port>]/<protocol>
  • Query if a port and protocol combination is enabled permanently in a zone
 firewall-cmd --permanent [--zone=<zone>] --query-port=<port>[-<port>]/<protocol>

This command returns if it is enabled, there is no output.

  • Example: Enable port 443/tcp for https permanently in the home zone
 firewall-cmd --permanent --zone=home --add-port=443/tcp
  • Enable masquerading permanently in a zone
 firewall-cmd --permanent [--zone=<zone>] --add-masquerade

This enables masquerading for the zone. The addresses of a private network are mapped to and hidden behind a public IP address. This is a form of address translation and mostly used in routers. Masquerading is IPv4 only because of kernel limitations.

  • Disable masquerading permanently in a zone
 firewall-cmd --permanent [--zone=<zone>] --remove-masquerade
  • Query masquerading permanently in a zone
 firewall-cmd --permanent [--zone=<zone>] --query-masquerade

This command returns if it is enabled, there is no output.

  • Enable ICMP blocks permanently in a zone
 firewall-cmd --permanent [--zone=<zone>] --add-icmp-block=<icmptype>

This enables blocking of a selected Internet Control Message Protocol (ICMP) message type. ICMP messages are either information requests, replies to information requests, or error conditions.

  • Disable ICMP blocks permanently in a zone
 firewall-cmd --permanent [--zone=<zone>] --remove-icmp-block=<icmptype>
  • Query ICMP blocks permanently in a zone
 firewall-cmd --permanent [--zone=<zone>] --query-icmp-block=<icmptype>

This command returns if it is enabled, there is no output.

  • Example: Block echo-reply messages in the public zone:
 firewall-cmd --permanent --zone=public --add-icmp-block=echo-reply
  • Enable port forwarding or port mapping permanently in a zone
 firewall-cmd --permanent [--zone=<zone>] --add-forward-port=port=<port>[-<port>]:proto=<protocol> { :toport=<port>[-<port>] | :toaddr=<address> | :toport=<port>[-<port>]:toaddr=<address> }

The port is either mapped to the same port on another host, to another port on the same host, or to another port on another host. The port can be a single port <port> or a port range <port>-<port>. The protocol is either tcp or udp. toport is either a port or a port range <port>-<port>. toaddr is an IPv4 address. Port forwarding is IPv4 only because of kernel limitations.

  • Disable port forwarding or port mapping permanently in a zone
 firewall-cmd --permanent [--zone=<zone>] --remove-forward-port=port=<port>[-<port>]:proto=<protocol> { :toport=<port>[-<port>] | :toaddr=<address> | :toport=<port>[-<port>]:toaddr=<address> }
  • Query port forwarding or port mapping permanently in a zone
 firewall-cmd --permanent [--zone=<zone>] --query-forward-port=port=<port>[-<port>]:proto=<protocol> { :toport=<port>[-<port>] | :toaddr=<address> | :toport=<port>[-<port>]:toaddr=<address> }

This command returns if it is enabled, there is no output.

  • Example: Forward ssh to host 127.0.0.2 in the home zone
 firewall-cmd --permanent --zone=home --add-forward-port=port=22:proto=tcp:toaddr=127.0.0.2

Direct options

The direct options give more direct access to the firewall. They require the user to know basic iptables concepts, i.e. tables (filter/mangle/nat/…), chains (INPUT/OUTPUT/FORWARD/…), commands (-A/-D/-I/…), parameters (-p/-s/-d/-j/…) and targets (ACCEPT/DROP/REJECT/…). Direct options should be used only as a last resort when it’s not possible to use, for example, --add-service=service or --add-rich-rule='rule'. The first argument of each option has to be ipv4, ipv6 or eb. With ipv4 it will be for IPv4 (iptables(8)), with ipv6 for IPv6 (ip6tables(8)) and with eb for ethernet bridges (ebtables(8)).

  • Pass a command through to the firewall. <args> can be all iptables, ip6tables and ebtables command line arguments
 firewall-cmd --direct --passthrough { ipv4 | ipv6 | eb } <args>
  • Add a new chain <chain> to a table <table>.
 firewall-cmd [--permanent] --direct --add-chain { ipv4 | ipv6 | eb } <table> <chain>
  • Remove a chain with name <chain> from table <table>.
 firewall-cmd [--permanent] --direct --remove-chain { ipv4 | ipv6 | eb } <table> <chain>
  • Query if a chain with name <chain> exists in table <table>. Returns 0 if true, 1 otherwise.
 firewall-cmd [--permanent] --direct --query-chain { ipv4 | ipv6 | eb } <table> <chain>

This command returns if it is enabled, there is no output.

  • Get all chains added to table <table> as a space separated list.
 firewall-cmd [--permanent] --direct --get-chains { ipv4 | ipv6 | eb } <table>
  • Add a rule with the arguments <args> to chain <chain> in table <table> with priority <priority>.
 firewall-cmd [--permanent] --direct --add-rule { ipv4 | ipv6 | eb } <table> <chain> <priority> <args>
  • Remove a rule with the arguments <args> from chain <chain> in table <table>.
 firewall-cmd [--permanent] --direct --remove-rule { ipv4 | ipv6 | eb } <table> <chain> <args>
  • Query if a rule with the arguments <args> exists in chain <chain> in table <table>. Returns 0 if true, 1 otherwise.
 firewall-cmd [--permanent] --direct --query-rule { ipv4 | ipv6 | eb } <table> <chain> <args>

This command returns if it is enabled, there is no output.

  • Get all rules added to chain <chain> in table <table> as a newline separated list of arguments.
 firewall-cmd [--permanent] --direct --get-rules { ipv4 | ipv6 | eb } <table> <chain>

Docker centos7

Introduction

In previous posts we have seen the installation and usage of Docker. This post will explain the installation of Docker Community Edition (CE) on CentOS. If you are looking for the Ubuntu installation, check out this post.

 

Step 1 | Remove Old Versions

$ sudo yum remove docker docker-common docker-selinux docker-engine

Step 2 | Install Required Packages

$ sudo yum install -y yum-utils device-mapper-persistent-data lvm2

Step 3 | Setup the Docker CE Repository

$ sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

Step 4 | Update the Packages

$ sudo yum update

Step 5 | Install a specific version (recommended for production)

$ sudo yum list docker-ce.x86_64  --showduplicates | sort -r

Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
docker-ce.x86_64            17.06.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.03.2.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.03.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.03.0.ce-1.el7.centos             docker-ce-stable
Available Packages

$ sudo yum install docker-ce-<VERSION>

Step 6 | Install through repository

$ sudo yum install docker-ce

Step 7 | Start Docker

$ sudo systemctl start docker

Step 8 | Verify the Installation

$ sudo docker run hello-world

This installs Docker on CentOS.
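To have Docker start automatically at boot as well (standard systemd usage):

$ sudo systemctl enable docker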

Redis

What is Redis

Redis is a key-value storage system. Like Memcached, it supports storing many value types, including strings, lists, sets, and zsets (sorted sets).
These data types support push/pop, add/remove, and intersection, union, and difference operations, among richer ones, and all of them are atomic.
On this basis, Redis supports a variety of different sorts. Like Memcached, data is cached in memory for efficiency.
The difference is that Redis periodically writes updated data to disk, or appends modifications to a log file, and implements master-slave synchronization on top of this. Redis is a high-performance key-value database.
The emergence of Redis compensates to a large extent for the shortcomings of key-value stores like Memcached, and in some cases it complements a relational database well.
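A few redis-cli one-liners sketching those value types and their atomic operations (key names are arbitrary):

redis-cli SET counter 10          # string
redis-cli INCR counter            # atomic increment -> 11
redis-cli LPUSH queue job1 job2   # list push
redis-cli SADD tags linux redis   # set add
redis-cli ZADD scores 100 alice   # zset (sorted set) add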

 

1, Redis installation

1.1 Pre-installation environment description

Using CentOS 7, with a
master IP of 192.168.1.110 and a
slave IP of 192.168.1.111.

1.2 Download Redis

Redis can be downloaded from the official website: https://redis.io/download; the latest stable version is now 4.0.
Used here is redis-4.0.1.tar.gz.

1.3 installation steps

$ wget http://download.redis.io/releases/redis-4.0.1.tar.gz
$ tar xzf redis-4.0.1.tar.gz -C /usr/local/
$ cd /usr/local/redis-4.0.1
$ make && make test

Possible make errors:

make[1]: Leaving directory `/usr/local/redis-4.0.1/src'
make[1]: Entering directory `/usr/local/redis-4.0.1/src'
You need tcl 8.5 or newer in order to run the Redis test
make[1]: *** [test] Error 1
make[1]: Leaving directory `/usr/local/redis-4.0.1/src'
make: *** [test] Error 2
Solution:

yum install -y tcl

2, Redis simple configuration

All configuration changes are in this configuration file

/usr/local/redis-4.0.1/redis.conf

 

2.2 Bind host address

Add the IP the server listens on after the bind directive; Redis clients will connect through this IP.

bind 127.0.0.1 192.168.1.110

2.3 Set Redis password

The password is set here:

# requirepass foobared
requirepass mohan

2.4 Set the Redis port number

The default port is 6379:

port 6379

3, test Redis

Start the server (from /usr/local/redis-4.0.1):

src/redis-server
src/redis-server redis.conf

Client connection:

src/redis-cli
src/redis-cli -a mohan

Shut down:

src/redis-cli shutdown
src/redis-cli -p 6666 shutdown

 

4, Redis master-slave replication configuration

Redis master-slave replication is very powerful: a master can have multiple slaves, and a slave can in turn have multiple slaves of its own, and so on, forming a powerful multi-level server cluster architecture. The configuration below is simple.

Modify the slave’s redis configuration file.

On the master, no replication-specific changes are needed in the configuration file beyond the bind address set earlier.

On the slave, set slaveof 10.211.55.3 6379 in the configuration file (pointing at the master server, with 6379 as the port number).
This can also be set dynamically: connect to the slave node with redis-cli and execute the following command.
slaveof 10.211.55.3 6379

If the master has an authentication password set, you also need to configure masterauth on the slave. Here I set the master authentication password to javen, so configure masterauth javen.

After configuring the slave, start the Redis service; the master-slave configuration is then complete (it really is that simple).
To test it, run the info command on the master and slave respectively; the results are as follows:

slave:

[root@centos-linux-2 redis-4.0.1]# src/redis-cli
127.0.0.1:6379> info

 

 

5, Redis remote connection

Usage: redis-cli [OPTIONS] [cmd [arg [arg …]]]

-h <host ip>, the default is 127.0.0.1

-p <port>, the default is 6379

-a <password>, if Redis requires authentication, pass the password here

--help, show help information

redis-cli -h 10.211.55.4 -p 6379 -a javen

 

Master and slave MySQL versions are MySQL 5.6.31

Master server IP: 192.168.1.178

Slave server IP: 192.168.1.145

Master and slave are able to ping each other.

A (192.168.1.178): Master
B (192.168.1.145): Slave

 

service mysqld stop
service mysqld start
service mysqld restart

 

mysql> grant replication slave on *.* to 'mohan'@'192.168.1.145' identified by '123456';
mysql> flush privileges;

vi /etc/my.cnf

port=3306

binlog-ignore-db=mysql
server-id=1
expire-logs-days=7
binlog-ignore-db=information_schema
binlog-ignore-db=performance_schema
binlog-ignore-db=sys
binlog-ignore-db=gogs

service mysqld restart

mysql -u root -proot -P3306
mysql> flush tables with read lock;

mysql> show master status;
+------------------+----------+--------------+------------------------------------------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB                                     | Executed_Gtid_Set |
+------------------+----------+--------------+------------------------------------------------------+-------------------+
| mysql-bin.000001 |      154 |              | mysql,information_schema,performance_schema,sys,gogs |                   |
+------------------+----------+--------------+------------------------------------------------------+-------------------+
1 row in set (0.00 sec)

Slave configuration (/etc/my.cnf)

# For advice on how to change settings please see
# http://dev.mysql.com/doc/refman/5.7/en/server-configuration-defaults.html

[mysqld]
#
# Remove leading # and set to the amount of RAM for the most important data
# cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
# innodb_buffer_pool_size = 128M
#
# Remove leading # to turn on a very important data integrity option: logging
# changes to the binary log between backups.
# log_bin
#
# Remove leading # to set options mainly useful for reporting servers.
# The server defaults are faster for transactions and fast SELECTs.
# Adjust sizes as needed, experiment to find the optimal values.
# join_buffer_size = 128M
# sort_buffer_size = 2M
# read_rnd_buffer_size = 2M
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock

log-bin=mysql-bin
server-id=3
binlog-ignore-db = mysql
binlog-ignore-db = information_schema
binlog-ignore-db = performance_schema
binlog-ignore-db = sys
log-slave-updates
slave-skip-errors=all
slave-net-timeout=60

# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0

log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

 

service mysqld restart

 

mysql> stop slave;
mysql> change master to master_host='192.168.1.178', master_user='mohan', master_password='123456', master_log_file='mysql-bin.000001', master_log_pos=154;
mysql> show slave status\G

mysql> unlock tables;
Query OK, 0 rows affected (0.00 sec)

mysql> start slave;
Query OK, 0 rows affected (0.00 sec)

mysql>

Setting up a MySQL server on an AWS EC2 instance

Launch an EC2 instance of type Amazon Linux AMI from your aws console.

SSH into your EC2 instance
ssh -i <your-key.pem> ec2-user@my_ec2_ip_address

Update the instance
sudo yum update -y

Install the mysqld server
sudo yum install -y mysql55-server

start the mysqld instance
sudo service mysqld start

the following command ensures mysqld launches on server restart
sudo chkconfig mysqld on

run the following command to set a password for the root user and delete the test databases.
sudo mysql_secure_installation

Make a note of the root password.

Let’s try to create a user and database. This way we can control the database access levels.
mysql -uroot -pmy_root_password

I’m going to create a database db_demo with a user demo_user whose password is demo123.
CREATE DATABASE db_demo;
USE db_demo;
CREATE USER 'demo_user'@'localhost' IDENTIFIED BY 'demo123';
GRANT ALL PRIVILEGES ON *.* TO 'demo_user'@'localhost' WITH GRANT OPTION;
CREATE USER 'demo_user'@'%' IDENTIFIED BY 'demo123';
GRANT ALL PRIVILEGES ON *.* TO 'demo_user'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;

That’s it. You are all set.
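To verify, log in as the new user against the new database (an inline password is fine for a throwaway demo):
mysql -udemo_user -pdemo123 db_demo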

Note:

    • details on mysql privileges can be found here
    • Don’t forget to open the default port 3306 (e.g. in the instance’s security group) if you want to access the database from outside the EC2 instance
    • useful mysqld commands

  • sudo service mysqld start
  • sudo service mysqld stop
  • sudo service mysqld restart
  • sudo service mysqld status

 

Setting up git on an AWS EC2 instance

Launch an EC2 instance of type Amazon Linux AMI from your aws console.

SSH into your EC2 instance
ssh -i <your-key.pem> ec2-user@my_ec2_ip_address

Update the instance
sudo yum update -y

install developer tools
sudo yum groupinstall -y "Development Tools"

install git
sudo yum install git

checkout the source code
git clone https://my.git.repo.git
cd my_local_git_folder
git checkout -f branch_to_checkout
