
Spectre and Meltdown explained

By now, most of you have probably heard of the biggest disaster in the history of IT: the Meltdown and Spectre security vulnerabilities, which affect all modern CPUs, from those in desktops and servers to the ones found in smartphones. Unfortunately, there's much confusion about the level of threat we're dealing with here, because some of the impacted vendors need reasons to explain the still-missing security patches. And even those who did release a patch avoid mentioning that it only partially addresses the threat. There's also no good explanation of these vulnerabilities at the right level (not aimed at developers), something that just about anyone working in IT could understand and use to draw their own conclusions. So, I decided to give it a shot and deliver just that.

First, some essential background. Both vulnerabilities leverage "speculative execution", a feature central to modern CPU architecture. Without it, processors would idle most of the time, just waiting to receive I/O results from various peripheral devices, which are all at least 10x slower than processors. For example, RAM, which we think of as the fastest thing out there, runs at frequencies comparable to the CPU, but as every overclocking enthusiast knows, RAM I/O involves multiple stages, each taking multiple CPU cycles. And hard disks are at least a hundred times slower than RAM. So, instead of waiting for the real result of some IF clause to be calculated, the processor assumes the most probable result and continues execution accordingly. Then, many cycles later, when the actual result of said IF is known, if it was "guessed" right, we're already way ahead in the program's execution path and didn't waste all those cycles waiting for the I/O operation to complete. If the assumption turns out to be incorrect, the execution state of that "parallel universe" is simply discarded, and program execution restarts from said IF clause (as if speculative execution did not exist). But since those prediction algorithms are pretty smart and polished, more often than not the guesses are right, which gives a significant boost to execution performance for some software. Processors have had speculative execution for two decades now, which is also why virtually any CPU still in service today is affected.

Now, while the two vulnerabilities are distinctly different, they have one thing in common: they exploit the cornerstone of computer security, namely process isolation. The security of all operating systems and software depends entirely on the CPU's native ability to ensure complete process isolation, meaning processes cannot access each other's memory. How exactly is such isolation achieved? Instead of having direct physical RAM access, all processes operate in virtual address spaces, which are mapped to physical RAM so that they do not overlap. These memory allocations are performed and controlled in hardware, in the so-called Memory Management Unit (MMU) of the CPU.

At this point, you already know enough to understand Meltdown. This vulnerability is essentially a bug in the MMU logic, caused by skipping address checks during speculative execution (rumor has it there's a source code comment saying this was done "not to break optimizations"). So, how can this vulnerability be exploited? Pretty easily, in fact. First, the malicious code tricks the processor onto a speculative execution path, and from there performs an unrestricted read of another process's memory. Simple as that. Now, you may rightfully wonder: wouldn't the results of such speculative execution be discarded completely as soon as the CPU finds out it "took a wrong turn"? You're absolutely correct, they are in fact discarded... with one exception: they remain in the CPU cache, a completely dumb component that simply caches everything the CPU accesses. And while no process can read the contents of the CPU cache directly, there's a technique to "read" it implicitly: perform legitimate RAM reads within your own process and measure the response times (anything stored in the CPU cache will obviously be served much faster). You may have already heard that browser vendors are currently busy releasing patches that make JavaScript timers more "coarse"; now you know why (but more on this later).

As far as the impact goes, Meltdown is limited to Intel and ARM processors, with AMD CPUs unaffected. But for Intel, Meltdown is extremely nasty, because it is so easy to exploit: one of our enthusiasts compiled the exploit literally over a morning coffee and confirmed it works on every single computer he had access to (in his case, mostly Linux-based). And the possibilities Meltdown opens are truly terrifying. For example, how about obtaining the admin password as it is being typed into another process running on the same OS? Or accessing your precious bitcoin wallet? Of course, you'll say that the exploit must first be delivered to the attacked computer and executed there, which is fair, but here's the catch: JavaScript from some website running in your browser will do just fine too, so the delivery part is the easiest for now. By the way, keep in mind that those 3rd-party ads displayed on legitimate websites often include JavaScript too, so it's really a good idea to install an ad blocker now, if you haven't already! And for those using Chrome, enabling the Site Isolation feature is also a good idea.

OK, so let's switch to Spectre next. This vulnerability is known to affect all modern CPUs, albeit to different extents. It is not based on a bug per se, but rather on a design peculiarity of the execution path prediction logic, implemented by the so-called Branch Prediction Unit (BPU). Essentially, what the BPU does is accumulate statistics to estimate the probable outcome of IF clauses. For example, if a certain IF clause that compares some variable to zero has returned FALSE 100 times in a row, you can predict with high probability that it will return FALSE on the 101st call, and speculatively move along the corresponding branch without even having to load the actual variable. Makes perfect sense, right? However, the problem is that while collecting these statistics, the BPU does NOT distinguish between different processes, for added "learning" effectiveness. This makes sense too, because computer programs share much in common (common algorithms, implementation best practices for common constructs, and so on). And this is exactly what the exploit is based on: this peculiarity allows the malicious code to "train" the BPU by running a construct identical to one in the attacked process hundreds of times, effectively letting it control the speculative execution of the attacked process once it hits its own respective construct, making it dump the "good stuff" into the CPU cache. Pretty awesome find, right?

But here comes the major difference between Meltdown and Spectre, one which significantly complicates the implementation of Spectre-based exploits. While Meltdown can "scan" the CPU cache directly (since the sought-after value was put there from within the scope of the process running the Meltdown exploit), with Spectre it is the victim process itself that puts the value into the CPU cache. Thus, only the victim process is able to perform that timing-based CPU cache "scan". Luckily for hackers, we live in an API-first world, where every decent app has an API you can call to make it do the things you need, again measuring how long each API call takes. Getting the actual value, though, requires deep analysis of the specific application, so this approach is only worth pursuing against open-source apps. But the "beauty" of Spectre is that there are apparently many ways to make the victim process leak its data into the CPU cache through speculative execution in a way that lets the attacking process "pick it up". Google engineers found and documented a few, but unfortunately many more are expected to exist. Who will find them first?

Of course, all of this only sounds easy at a conceptual level; real-world implementations against actual apps are extremely complex, and when I say "extremely" I really mean it. For example, Google engineers created a Spectre exploit PoC that, running inside a KVM guest, can read host kernel memory at a rate of over 1500 bytes/second. However, before the attack can be performed, the exploit requires an initialization that takes 30 minutes! So clearly, there's a lot of math involved. But if Google engineers could do it, hackers will be able to as well, because looking at how advanced some of last year's ransomware was, one might wonder if it was written by folks to whom Google could not offer the salary or position they wanted. It's also worth mentioning that a JavaScript-based PoC already exists, making the browser a viable attack vector for Spectre.

Now, the most important part: what do we do about these vulnerabilities? Well, it would appear that Intel and Google disclosed the vulnerabilities to all major vendors in advance, so by now most have already released patches. By the way, we really owe a big "thank you" to all the dev and QC folks who were working hard on patches while the rest of us were celebrating; just imagine the amount of work and testing required when changes are made to the holy grail of the operating system. Anyway, after reading the above, I hope you agree that vulnerabilities do not get more critical than these two, so be sure to install those patches ASAP. And aside from the most obvious stuff like your operating systems and hypervisors, be sure not to overlook storage, network and other appliances, as they all run on some OS that also needs to be patched against these vulnerabilities. And don't forget your smartphones! By the way, here's one good community tracker for all the security bulletins (Microsoft is not listed there, but they did push the corresponding emergency update to Windows Update back on January 3rd).
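On a patched Linux system you can ask the running kernel which mitigations are active; a quick sketch, assuming a kernel of roughly 4.15 or later that exposes this information under sysfs:

```shell
# Kernel-reported status of Meltdown/Spectre mitigations (Linux 4.15+).
for f in /sys/devices/system/cpu/vulnerabilities/*; do
  # On older kernels the directory is missing and the glob stays unexpanded.
  [ -e "$f" ] || { echo "sysfs vulnerability reporting not available"; break; }
  printf '%s: %s\n' "$(basename "$f")" "$(cat "$f")"
done
```

A line such as `spectre_v2: Vulnerable` indicates a missing patch, while `Mitigation: ...` means the kernel-side fix is active.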

Having said that, there are a couple of important things to keep in mind about those patches. First, they do come with a performance impact. Again, some folks will want you to think the impact is negligible, but that's only true for applications with low I/O activity; many enterprise apps will take a hit big enough to account for. For example, installing the patch resulted in an almost 20% performance drop in a PostgreSQL benchmark. And then there is the major cloud service that saw CPU usage double after installing the patch on one of its servers. The impact comes from the patch adding significant overhead to so-called syscalls, which are what computer programs must use for any interaction with the outside world.

Last but not least, know that while those patches fully address Meltdown, they only address the few currently known attack vectors that Spectre enables. Most security specialists agree that Spectre opens a whole slew of "opportunities" for hackers, and that a solid fix can only be delivered in CPU hardware. That in turn probably means at least two years until the first such processor appears, and then a few more years until you replace the last impacted CPU. Until that happens, it sounds like we should all look forward to many fun years of jumping on yet another critical patch against some newly discovered Spectre-based attack.

Jenkins on CentOS 7

Jenkins is a free and open-source CI (Continuous Integration) tool written in Java. Jenkins is widely used for project development, deployment, and automation; it lets you automate the non-human part of the software development process. It supports version control tools including AccuRev, CVS, Subversion, Git, Mercurial, Perforce, ClearCase and RTC, and can execute Apache Ant, Apache Maven and sbt based projects as well as arbitrary shell scripts and Windows batch commands. The creator of Jenkins is Kohsuke Kawaguchi, and it is released under the MIT License. Jenkins' security depends on two factors: access control and protection from external threats.

  1. Access control can be customized in two ways: user authentication and authorization.
  2. Protection from external threats such as CSRF attacks and malicious builds is supported as well.


It does not require any special hardware; you'll only need a CentOS 7 server and root access to it. You can switch from a non-root user to root using the sudo -i command.

Update System

It is highly recommended to install Jenkins on a freshly updated server. To upgrade the system and all available packages, run the command below:

yum -y update

Install Java

Before going through the installation of Jenkins you'll need to set up a Java Virtual Machine or JDK on your system. Simply run the following command to install Java:

yum -y install java-1.8.0-openjdk.x86_64

Once the installation is finished, confirm it by checking the version with the following command:

java -version

The above command shows the installation details of Java; you should see the following result on your terminal:

openjdk version "1.8.0_131"
OpenJDK Runtime Environment (build 1.8.0_131-b12)
OpenJDK 64-Bit Server VM (build 25.131-b12, mixed mode)

Next, you'll need to set up two environment variables, JAVA_HOME and JRE_HOME; to do so, run the following commands one by one:

cp /etc/profile /etc/profile_backup
echo 'export JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk' | sudo tee -a /etc/profile
echo 'export JRE_HOME=/usr/lib/jvm/jre' | sudo tee -a /etc/profile
source /etc/profile

Finally, print them for review using the following commands:

echo $JAVA_HOME

You should see the following output.

echo $JRE_HOME

Shell script


yum install -y java

wget -O /etc/yum.repos.d/jenkins.repo

rpm --import

yum install -y jenkins

yum install -y git

service jenkins start/stop/restart

chkconfig jenkins on



java -version

java -jar jenkins.war --httpPort=8080

# get jenkins-cli

wget http://localhost:8080/jnlpJars/jenkins-cli.jar

# jenkins-cli install plugin

java -jar jenkins-cli.jar -s http://localhost:8080 install-plugin checkstyle cloverphp crap4j dry htmlpublisher jdepend plot pmd violations warnings xunit --username=yang --password=lljkl

# safe restart

java -jar jenkins-cli.jar -s http://localhost:8080 safe-restart --username=yang --password=lljkl

Install Jenkins

We have installed all the dependencies required by Jenkins and are now ready to install Jenkins itself. Run the following commands to install the latest stable release of Jenkins.

wget -O /etc/yum.repos.d/jenkins.repo <repository URL>
rpm --import <repository key URL>

The above two commands add the Jenkins repository and import its signing key. If you have previously imported the key from Jenkins, the rpm --import will fail because you already have the key. Ignore that and move on.

Now run following command to install Jenkins on your server.

yum -y install jenkins

Next, you'll need to start the Jenkins service and set it to run at boot time; to do so, use the following commands:

systemctl start jenkins.service
systemctl enable jenkins.service
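Once the service is up, a few quick sanity checks help; a sketch assuming the default port 8080 and the standard paths used by the Jenkins RPM package:

```shell
# Is the service running and listening on its default port?
systemctl status jenkins.service
ss -tlnp | grep 8080
# Password for the initial setup wizard shown in the browser:
cat /var/lib/jenkins/secrets/initialAdminPassword
```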

Connecting to a Git Repo

You will probably want to connect to a git repository next. This is also somewhat dependent on the operating system you use, so I provide the steps to do this on CentOS as well:

  • Install git
sudo yum install git
  • Generate an SSH key on the server
ssh-keygen -t rsa
  • When prompted, save the SSH key under the following path (I got this idea from reading the comments here)
  • Assure that the .ssh directory is owned by the Jenkins user:
sudo chown -R jenkins:jenkins /var/lib/jenkins/.ssh
  • Copy the public generated key to your git server (or add it in the GitHub/BitBucket web interface)
  • Assure your git server is listed in the known_hosts file. In my case, since I am using BitBucket my /var/lib/jenkins/.ssh/known_hosts file contains something like the following, ssh-rsa [...]
  • You can now create a new project and use Git as the SCM. You don’t need to provide any git credentials; Jenkins pulls these automatically from the /var/lib/jenkins/.ssh directory. There are good instructions for this available here.
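The steps above can be sketched as a single script; a minimal sketch assuming the Jenkins home is /var/lib/jenkins and the git server is bitbucket.org (adjust both for your environment):

```shell
# Install git and prepare SSH access for the jenkins user.
sudo yum install -y git
sudo mkdir -p /var/lib/jenkins/.ssh
sudo chown -R jenkins:jenkins /var/lib/jenkins/.ssh
# Generate a key pair without a passphrase, owned by the jenkins user.
sudo -u jenkins ssh-keygen -t rsa -f /var/lib/jenkins/.ssh/id_rsa -N ''
# Record the git server's host key instead of editing known_hosts by hand.
sudo -u jenkins sh -c 'ssh-keyscan bitbucket.org >> /var/lib/jenkins/.ssh/known_hosts'
# Finally, add /var/lib/jenkins/.ssh/id_rsa.pub in the GitHub/BitBucket web UI.
```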

Connecting to GitHub

  • In the Jenkins web interface, click on Credentials and then select the Jenkins Global credentials. Add a credential for GitHub which includes your GitHub username and password.
  • In the Jenkins web interface, click on Manage Jenkins and then on Configure System. Then scroll down to GitHub and then under GitHub servers click the Advanced Button. Then click the button Manage additional GitHub actions.

  • In the popup select Convert login and password to token and follow the prompts. This will result in a new credential having been created. Save and reload the page.
  • Now go back to the GitHub servers section and click to add an additional server. As the credential, select the one you have just created.
  • In the Jenkins web interface, click on New Item and then select GitHub organisation and connect it to your user account.

Any of your GitHub projects that contain a Jenkinsfile will be added to Jenkins automatically.

Connect with BitBucket

  • First, you will need to install the BitBucket plugin.
  • After it is installed, create a normal git project.
  • Go to the Configuration for this project and select the following option:

  • Log into BitBucket and create a webhook in the settings for your project pointing to your Jenkins server as follows: (note the slash at the end)

Testing a Java Project

Chances are high that you would like to run tests against a Java project; in the following, some instructions to get that working:

XFS Filesystem has duplicate UUID problem

If you cannot mount your XFS partition, getting the classic "wrong fs type, bad superblock" error, and you see a message like this in the kernel logs (dmesg):

XFS: Filesystem sdb7 has duplicate UUID - can't mount

you can still mount the filesystem with the nouuid option, as below:

mount -o nouuid /dev/sdb7 disk-7

But you would have to provide the nouuid option on every mount. For a permanent solution, generate a new UUID for the partition with the xfs_admin utility:

xfs_admin -U generate /dev/sdb7
Clearing log and setting UUID
writing all SBs
new UUID = 01fbb5f2-1ee0-4cce-94fc-024efb3cd3a4
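You can then verify that the partition really picked up the new UUID; a quick check, reusing /dev/sdb7 from the example above:

```shell
# Report the filesystem's UUID (should match the "new UUID" printed above).
xfs_admin -u /dev/sdb7
blkid /dev/sdb7
# The partition should now mount without the nouuid workaround:
mount /dev/sdb7 disk-7
```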

Yum (Yellowdog Updater, Modified)

Patching servers is an important task for Linux system administrators, making systems more stable and secure. Vendors frequently release security and high-risk patches, and the affected software must be upgraded to guard against potential security risks.

Yum (Yellowdog Updater, Modified) is the RPM package management tool used on CentOS and Red Hat systems. The yum history command allows a system administrator to roll back the system to a previous state, but due to some limitations, rollback does not succeed in every case: sometimes the yum command may do nothing, and sometimes it may remove packages you do not expect.

I suggest you still take a complete system backup before you upgrade; yum history cannot replace system backups. A system backup allows you to restore the system to an arbitrary point in time.

What should you do if an installed application no longer works or throws errors after being patched (possibly due to library incompatibilities or package upgrades)? Talk to the application development team, find out where the problem lies with libraries and packages, then use the yum history command to roll back.


yum update
Loaded plugins: fastestmirror, security
Setting up Update Process
Loading mirror speeds from cached hostfile
epel/metalink | 12 kB 00:00
* epel:
base | 3.7 kB 00:00
dockerrepo | 2.9 kB 00:00
draios | 2.9 kB 00:00
draios/primary_db | 13 kB 00:00
epel | 4.3 kB 00:00
epel/primary_db | 5.9 MB 00:00
extras | 3.4 kB 00:00
updates | 3.4 kB 00:00
updates/primary_db | 2.5 MB 00:00
Resolving Dependencies
--> Running transaction check
---> Package git.x86_64 0:1.7.1-8.el6 will be updated
---> Package git.x86_64 0:1.7.1-9.el6_9 will be an update
---> Package httpd.x86_64 0:2.2.15-60.el6.centos.4 will be updated
---> Package httpd.x86_64 0:2.2.15-60.el6.centos.5 will be an update
---> Package httpd-tools.x86_64 0:2.2.15-60.el6.centos.4 will be updated
---> Package httpd-tools.x86_64 0:2.2.15-60.el6.centos.5 will be an update
---> Package perl-Git.noarch 0:1.7.1-8.el6 will be updated
---> Package perl-Git.noarch 0:1.7.1-9.el6_9 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

Package Arch Version Repository Size
git x86_64 1.7.1-9.el6_9 updates 4.6 M
httpd x86_64 2.2.15-60.el6.centos.5 updates 836 k
httpd-tools x86_64 2.2.15-60.el6.centos.5 updates 80 k
perl-Git noarch 1.7.1-9.el6_9 updates 29 k

Transaction Summary
Upgrade 4 Package(s)

Total download size: 5.5 M
Is this ok [y/N]: n

As you can see in the above output, a git package update is available, so we are going to take it. Run the following command to see version information for the package (the currently installed version and the available update).

yum list git
Loaded plugins: fastestmirror, security
Setting up Update Process
Loading mirror speeds from cached hostfile
* epel:
Installed Packages
git.x86_64 1.7.1-8.el6 @base
Available Packages

Run the following command to update git package from 1.7.1-8 to 1.7.1-9.

# yum update git
Loaded plugins: fastestmirror, presto
Setting up Update Process
Loading mirror speeds from cached hostfile
* base:
* epel:
* extras:
* updates:
Resolving Dependencies
--> Running transaction check
---> Package git.x86_64 0:1.7.1-8.el6 will be updated
--> Processing Dependency: git = 1.7.1-8.el6 for package: perl-Git-1.7.1-8.el6.noarch
---> Package git.x86_64 0:1.7.1-9.el6_9 will be an update
--> Running transaction check
---> Package perl-Git.noarch 0:1.7.1-8.el6 will be updated
---> Package perl-Git.noarch 0:1.7.1-9.el6_9 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

Package Arch Version Repository Size
git x86_64 1.7.1-9.el6_9 updates 4.6 M
Updating for dependencies:
perl-Git noarch 1.7.1-9.el6_9 updates 29 k

Transaction Summary
Upgrade 2 Package(s)

Total download size: 4.6 M
Is this ok [y/N]: y
Downloading Packages:
Setting up and reading Presto delta metadata
Processing delta metadata
Package(s) data still to download: 4.6 M
(1/2): git-1.7.1-9.el6_9.x86_64.rpm | 4.6 MB 00:00
(2/2): perl-Git-1.7.1-9.el6_9.noarch.rpm | 29 kB 00:00
Total 5.8 MB/s | 4.6 MB 00:00
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Updating : perl-Git-1.7.1-9.el6_9.noarch 1/4
Updating : git-1.7.1-9.el6_9.x86_64 2/4
Cleanup : perl-Git-1.7.1-8.el6.noarch 3/4
Cleanup : git-1.7.1-8.el6.x86_64 4/4
Verifying : git-1.7.1-9.el6_9.x86_64 1/4
Verifying : perl-Git-1.7.1-9.el6_9.noarch 2/4
Verifying : git-1.7.1-8.el6.x86_64 3/4
Verifying : perl-Git-1.7.1-8.el6.noarch 4/4

Updated:
git.x86_64 0:1.7.1-9.el6_9

Dependency Updated:
perl-Git.noarch 0:1.7.1-9.el6_9


Verify updated version of git package.

# yum list git
Installed Packages
git.x86_64 1.7.1-9.el6_9 @updates

# rpm -q git

As of now, we have successfully completed a package update and have a package to roll back. Just follow the steps below for the rollback procedure.

First, get the yum transaction ID using the following command. The output below shows all the required information: the transaction ID, who performed the transaction (the username), the date and time, the action (install or update), and how many packages were altered in the transaction.

# yum history
# yum history list all
Loaded plugins: fastestmirror, presto
ID | Login user | Date and time | Action(s) | Altered
13 | root | 2017-08-18 13:30 | Update | 2
12 | root | 2017-08-10 07:46 | Install | 1
11 | root | 2017-07-28 17:10 | E, I, U | 28 EE
10 | root | 2017-04-21 09:16 | E, I, U | 162 EE
9 | root | 2017-02-09 17:09 | E, I, U | 20 EE
8 | root | 2017-02-02 10:45 | Install | 1
7 | root | 2016-12-15 06:48 | Update | 1
6 | root | 2016-12-15 06:43 | Install | 1
5 | root | 2016-12-02 10:28 | E, I, U | 23 EE
4 | root | 2016-10-28 05:37 | E, I, U | 13 EE
3 | root | 2016-10-18 12:53 | Install | 1
2 | root | 2016-09-30 10:28 | E, I, U | 31 EE
1 | root | 2016-07-26 11:40 | E, I, U | 160 EE

The above output shows that two packages were altered, because updating git also updated its dependency package, perl-Git. Run the following command to view detailed information about the transaction.

# yum history info 13
Loaded plugins: fastestmirror, presto
Transaction ID : 13
Begin time : Fri Aug 18 13:30:52 2017
Begin rpmdb : 420:f5c5f9184f44cf317de64d3a35199e894ad71188
End time : 13:30:54 2017 (2 seconds)
End rpmdb : 420:d04a95c25d4526ef87598f0dcaec66d3f99b98d4
User : root
Return-Code : Success
Command Line : update git
Transaction performed with:
Installed rpm-4.8.0-55.el6.x86_64 @base
Installed yum-3.2.29-81.el6.centos.noarch @base
Installed yum-plugin-fastestmirror-1.1.30-40.el6.noarch @base
Installed yum-presto-0.6.2-1.el6.noarch @anaconda-CentOS-201207061011.x86_64/6.3
Packages Altered:
Updated git-1.7.1-8.el6.x86_64 @base
Update 1.7.1-9.el6_9.x86_64 @updates
Updated perl-Git-1.7.1-8.el6.noarch @base
Update 1.7.1-9.el6_9.noarch @updates

Run the following command to roll back the git package to the previous version.

# yum history undo 13
Loaded plugins: fastestmirror, presto
Undoing transaction 13, from Fri Aug 18 13:30:52 2017
Updated git-1.7.1-8.el6.x86_64 @base
Update 1.7.1-9.el6_9.x86_64 @updates
Updated perl-Git-1.7.1-8.el6.noarch @base
Update 1.7.1-9.el6_9.noarch @updates
Loading mirror speeds from cached hostfile
* base:
* epel:
* extras:
* updates:
Resolving Dependencies
--> Running transaction check
---> Package git.x86_64 0:1.7.1-8.el6 will be a downgrade
---> Package git.x86_64 0:1.7.1-9.el6_9 will be erased
---> Package perl-Git.noarch 0:1.7.1-8.el6 will be a downgrade
---> Package perl-Git.noarch 0:1.7.1-9.el6_9 will be erased
--> Finished Dependency Resolution

Dependencies Resolved

Package Arch Version Repository Size
git x86_64 1.7.1-8.el6 base 4.6 M
perl-Git noarch 1.7.1-8.el6 base 29 k

Transaction Summary
Downgrade 2 Package(s)

Total download size: 4.6 M
Is this ok [y/N]: y
Downloading Packages:
Setting up and reading Presto delta metadata
Processing delta metadata
Package(s) data still to download: 4.6 M
(1/2): git-1.7.1-8.el6.x86_64.rpm | 4.6 MB 00:00
(2/2): perl-Git-1.7.1-8.el6.noarch.rpm | 29 kB 00:00
Total 3.4 MB/s | 4.6 MB 00:01
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : perl-Git-1.7.1-8.el6.noarch 1/4
Installing : git-1.7.1-8.el6.x86_64 2/4
Cleanup : perl-Git-1.7.1-9.el6_9.noarch 3/4
Cleanup : git-1.7.1-9.el6_9.x86_64 4/4
Verifying : git-1.7.1-8.el6.x86_64 1/4
Verifying : perl-Git-1.7.1-8.el6.noarch 2/4
Verifying : git-1.7.1-9.el6_9.x86_64 3/4
Verifying : perl-Git-1.7.1-9.el6_9.noarch 4/4

Removed:
git.x86_64 0:1.7.1-9.el6_9 perl-Git.noarch 0:1.7.1-9.el6_9

Installed:
git.x86_64 0:1.7.1-8.el6 perl-Git.noarch 0:1.7.1-8.el6

After the rollback, re-check the downgraded package version with yum list git or rpm -q git.

Rollback Updates using YUM downgrade command
Alternatively, we can roll back an update using the YUM downgrade command.

# yum downgrade git-1.7.1-8.el6 perl-Git-1.7.1-8.el6
Loaded plugins: search-disabled-repos, security, ulninfo
Setting up Downgrade Process
Resolving Dependencies
--> Running transaction check
---> Package git.x86_64 0:1.7.1-8.el6 will be a downgrade
---> Package git.x86_64 0:1.7.1-9.el6_9 will be erased
---> Package perl-Git.noarch 0:1.7.1-8.el6 will be a downgrade
---> Package perl-Git.noarch 0:1.7.1-9.el6_9 will be erased
--> Finished Dependency Resolution

Dependencies Resolved

Package Arch Version Repository Size
git x86_64 1.7.1-8.el6 base 4.6 M
perl-Git noarch 1.7.1-8.el6 base 29 k

Transaction Summary
Downgrade 2 Package(s)

Total download size: 4.6 M
Is this ok [y/N]: y
Downloading Packages:
(1/2): git-1.7.1-8.el6.x86_64.rpm | 4.6 MB 00:00
(2/2): perl-Git-1.7.1-8.el6.noarch.rpm | 28 kB 00:00
Total 3.7 MB/s | 4.6 MB 00:01
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : perl-Git-1.7.1-8.el6.noarch 1/4
Installing : git-1.7.1-8.el6.x86_64 2/4
Cleanup : perl-Git-1.7.1-9.el6_9.noarch 3/4
Cleanup : git-1.7.1-9.el6_9.x86_64 4/4
Verifying : git-1.7.1-8.el6.x86_64 1/4
Verifying : perl-Git-1.7.1-8.el6.noarch 2/4
Verifying : git-1.7.1-9.el6_9.x86_64 3/4
Verifying : perl-Git-1.7.1-9.el6_9.noarch 4/4

Removed:
git.x86_64 0:1.7.1-9.el6_9 perl-Git.noarch 0:1.7.1-9.el6_9

Installed:
git.x86_64 0:1.7.1-8.el6 perl-Git.noarch 0:1.7.1-8.el6

Note: You have to downgrade the dependency packages too; otherwise yum will remove the current version of the dependency packages instead of downgrading them, because the downgrade command cannot satisfy the dependency.
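Before downgrading, you can list which installed packages depend on the one you are rolling back, so you can include them in the same downgrade command; a sketch using the git package from the example:

```shell
# Installed packages that require the "git" capability.
rpm -q --whatrequires git
# Equivalent query via repoquery (provided by the yum-utils package):
repoquery --installed --whatrequires git
```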

For Fedora Users
Use the same commands as above, changing the package manager from YUM to DNF.

# dnf list git
# dnf history
# dnf history info
# dnf history undo
# dnf list git
# dnf downgrade git-1.7.1-8.el6 perl-Git-1.7.1-8.el6


MySQL backup command mysqldump parameters and examples


1. Syntax and options
-h, --host=name
Host name of the MySQL server to connect to

-P port_num, --port=port_num
The TCP/IP port number to use when connecting to the MySQL server

--master-data[=#]
This option writes the binlog position and file name to the output. If set to 1, it is printed as a CHANGE MASTER command; if set to 2, a comment prefix is added. This option automatically turns on --lock-all-tables, unless --single-transaction is also set (in which case the global read lock is held only briefly at the start of the dump; do not forget to read the --single-transaction section). In all cases, any action on the logs happens at the exact moment of the export. This option automatically turns off --lock-tables.

-x, --lock-all-tables
Locks all tables in all databases. This is achieved by holding a global read lock for the duration of the dump. Automatically turns off --single-transaction and --lock-tables.

--single-transaction
Exports a consistent snapshot of the data by wrapping the export operation in a single transaction. Works only with tables that use a storage engine supporting MVCC (currently only InnoDB); for other engines a consistent export cannot be guaranteed. When --single-transaction is enabled, to make sure the export file is valid (correct table data and binary log position), ensure that no other connections execute the following statements: ALTER TABLE, DROP TABLE, RENAME TABLE, TRUNCATE TABLE; these would invalidate the consistent snapshot. This option automatically turns off --lock-tables.

-l, --lock-tables
Takes a read lock on all tables. (On by default; use --skip-lock-tables to turn it off. The options above automatically turn off -l.)

-F, --flush-logs
Flushes the server's log files before starting the export. Note that if you export many databases at once (using --databases= or --all-databases), the logs are flushed as each database is exported. The exception is when using --lock-all-tables or --master-data: in that case the logs are flushed only once, while all tables are locked. So if you want your export and the log flush to happen at exactly the same moment, use --flush-logs together with --lock-all-tables or --master-data.

--delete-master-logs
Deletes the logs on the master after the backup is complete. This option automatically turns on --master-data.

--opt
Equivalent to --add-drop-table --add-locks --create-options --quick --extended-insert --lock-tables --set-charset --disable-keys. (On by default; --skip-opt returns these options to their defaults.) It should give you the fastest possible export for reading into a MySQL server; --compact disables almost all of the options above.

-q, --quick
Does not buffer the query; exports directly to stdout. (On by default; use --skip-quick to turn it off.) Useful for dumping large tables.

--set-charset
Adds SET NAMES default_character_set to the output. Enabled by default. To suppress the SET NAMES statement, use --skip-set-charset.

--add-drop-table
Adds a DROP TABLE statement before each CREATE TABLE statement. On by default.

--add-locks
Adds LOCK TABLES before, and UNLOCK TABLES after, each table's export (to make inserting into MySQL faster). On by default.

--create-options
Includes all MySQL-specific table options in the CREATE TABLE statements. On by default; use --skip-create-options to turn it off.

-e, --extended-insert
Uses the new multi-row INSERT syntax. On by default (gives more compact and faster insert statements).

-d, --no-data
Does not write any row information for the tables. Useful if you only want to export the structure of a table.

--add-drop-database
Adds a DROP DATABASE statement before each CREATE DATABASE. Off by default, so you generally need to make sure the database does not already exist when importing.

--default-character-set=charset
The default character set to use. If not specified, mysqldump uses utf8.

-B, --databases
Dumps several databases. Normally, mysqldump treats the first name argument on the command line as a database name and the following names as table names. With this option, all name arguments are treated as database names. CREATE DATABASE IF NOT EXISTS db_name and USE db_name statements are included in the output before each new database.

--tables
Overrides the --databases option. All arguments following this option are treated as table names.

-u[name], --user=name
MySQL user name to use when connecting to the server.

-p[password], --password[=password]
The password to use when connecting to the server. If you use the short form (-p), there can be no space between the option and the password. If you omit the password value after --password or -p on the command line, you will be prompted for one.

Export a database:

$ mysqldump -h localhost -uroot -ppassword \
    --master-data=2 --single-transaction --add-drop-table --create-options --quick \
    --extended-insert --default-character-set=utf8 \
    --databases discuz > backup-file.sql
Export a table:

$ mysqldump -u pak -p --opt --flush-logs pak t_user > pak-t_user.sql
Compress backup files:

$ mysqldump -hhostname -uusername -ppassword --databases dbname | gzip > backup-file.sql.gz
The corresponding restore operation is:
$ gunzip < backup-file.sql.gz | mysql -uusername -ppassword dbname
Import the database:

mysql> use target_dbname
mysql> source /mysql/backup/path/backup-file.sql
$ mysql target_dbname <backup-file.sql
For importing there is also a mysqlimport command, which is not covered here.

Dump directly from one database to another:

mysqldump -u username -p --opt dbname | mysql --host remote_host -C dbname2

Multiple network card bonding with nmcli team on CentOS 7

Run the ip link command to view the interfaces available on the system.
1. Create the team interface
nmcli con add type team con-name team0 ifname team0 config '{"runner":{"name":"roundrobin"}}'
various modes:
the METHOD is one of the following: broadcast, activebackup, roundrobin, loadbalance, or lacp.
For comparison, the classic bonding driver modes are:
mode 0 (balance-rr): round-robin policy (requires Eth-Trunk configuration on the switch)
mode 1 (active-backup): active-backup policy
mode 2 (balance-xor): XOR policy
mode 3 (broadcast): broadcast policy
mode 4 (802.3ad): IEEE 802.3ad dynamic link aggregation
mode 5 (balance-tlb): adaptive transmit load balancing
mode 6 (balance-alb): adaptive load balancing
2. View the created connections
nmcli con show
3. Add the slave ports
nmcli con add type team-slave con-name team0-port1 ifname em1 master team0
nmcli con add type team-slave con-name team0-port2 ifname em4 master team0
4. Configure IP address and gateway
nmcli con mod team0 ipv4.addresses "171.16.41.x/24"
nmcli con mod team0 ipv4.gateway "171.16.41.x"
nmcli con mod team0 ipv4.method manual
nmcli con up team0
5. Restart the network service
systemctl restart network
6. Check the bonding status
teamdctl team0 state
7. Toggle a bonded port
nmcli dev dis em1    # take the port out of the team
nmcli dev con em1    # bring the port back into the team

OS Tuning

CentOS and RHEL Performance Tuning Utilities and Daemons

Tuned and Ktune

Tuned is a daemon that monitors and collects data on system load and activity. By default tuned won't dynamically change settings; however, you can modify how the tuned daemon behaves and allow it to adjust settings on the fly based on activity. I prefer to leave the dynamic monitoring disabled and just use tuned-adm to set the profile once and be done with it. By using the latency-performance profile, tuned can significantly improve performance on CentOS 6 and CentOS 7 servers.

You can install tuned on a CentOS 6.x server by using the command below. For CentOS 7, tuned is installed and activated by default.

yum install tuned

To activate the latency-performance profile, you can use the "tuned-adm" command to set the profile. Personally I've found that latency-performance is one of the best profiles to use if you want high disk IO and low latency. Who doesn't want better performance?

tuned-adm profile latency-performance

To check what tuned profile is currently in use you can use the “active” option which lists the active tuned profile.

tuned-adm active

Ktune provides many different profiles that tuned can use to optimize performance.

tuned-adm has the ability to set the following profiles on CentOS 6 or CentOS 7:

  • default – Default power-saving profile; this is the most basic. It enables only the disk and cpu plugins. This is not the same as turning tuned-adm off.
  • latency-performance – Turns off power-saving features, cpuspeed mode is set to performance, the I/O elevator is changed to deadline, and a cpu_dma_latency requirement value of 0 is registered.
  • throughput-performance – Recommended if the system is not using "enterprise class" storage. Same as "latency-performance" except:
kernel.sched_min_granularity_ns is set to 10 ms
kernel.sched_wakeup_granularity_ns is set to 15 ms
vm.dirty_ratio is set to 40% and transparent huge pages are enabled.
  • enterprise-storage – Recommended for enterprise-class storage servers that have BBU RAID controller cache protection and management of on-disk cache. Same as "throughput-performance" except:
file systems are re-mounted with barrier=0
  • virtual-guest – Same as "enterprise-storage" but sets the readahead value to 4x the normal value. Non-boot/root file systems are remounted with barrier=0.
  • virtual-host – Based on "enterprise-storage". Reduces the swappiness of virtual memory and enables more aggressive writeback of dirty pages. Recommended for KVM hosts.


Basic command to get some info on a running process:

perf stat -p $PID

You can also view process info in real time by using perf's top-like command:

perf top -p $PID


Lots of stuff here, will be going over this later.

CPU Overview

[Additional CPU and Numa Information]


Older systems used to have only a few CPUs per system; this was known as SMP (Symmetric Multi-Processor). It means that each CPU in the system had more or less the same access to available memory, achieved by laying out physical connections between the CPUs and RAM. These connections were known as parallel buses.

Newer systems have many more CPUs (and multiple cores per CPU), so giving them all the same access to memory can be expensive in terms of space needed to draw all the physical connections. This is known as NUMA (Non-Uniform Memory Access).

  • AMD has used this for a long time with Hyper Transport Interconnects (HT).
  • Intel started implementing NUMA with Quick Path Interconnect (QPI)
  • Tuning applications depends on whether or not a system is using SMP or NUMA. Most modern systems will be using NUMA, however this really only becomes a factor on multi socket systems. A server that has a single E3-1240 processor will not need the same tuning as a 4 socket AMD Opteron 6230.


A unit of execution is known as a thread. The OS schedules threads for the CPU. The OS is mainly concerned with keeping all the CPUs as busy as possible all the time. The issue with this is that the OS could decide to start a thread on a CPU that does not have the process's memory in its local bank, which introduces latency and can reduce performance.

Interrupts (IRQs)

An interrupt (known as an IRQ) can impact an application's performance. The OS handles these events, which are used by peripherals to signal the arrival of data or the completion of an operation. IRQs don't affect an application's functionality, however they can cause performance issues.

Parallel and Serial Buses

  • Early CPUs were designed to have the same path to a single pool of memory on the system. This meant that each CPU could access the same pool of RAM at the same speed. This worked up to a point, however as more CPUs were added, more physical connections needed to be added to the board, which take up more space, and don’t allow for new features to be added to the board.
  • However, once more CPUs were added, there was a space issue on the board, and the paths were taking up too much space. These paths are known as parallel buses.
  • On newer systems, a serial bus is used, which is a single-wire communication path with a very high clock rate. Serial buses are used to connect CPUs and pools of RAM. So instead of having 8 to 16 parallel buses for each CPU, there is now one bus that carries information to and from the CPU and RAM. This means that all the CPUs talk to each other through this path; bursts of information are sent so that multiple CPUs can issue requests, much like how the Internet works.

Questions to ask yourself while performance tuning

If we have a two-socket motherboard and two quad-core CPUs, each socket would also have its own bank of RAM. Let's say that each CPU socket has 4 GB of RAM in its local bank. The local bank includes the RAM, along with a built-in memory controller.

  • Socket One has 4 CPUs, known as CPU 0-3. It also has 4 GB of RAM in its local bank, with its own memory controller.
  • Socket Two has 4 CPUs, known as CPU 4-7. It also has 4 GB of RAM in its local bank, with its own memory controller.

Each socket in this example has a corresponding NUMA node, which means that we have two NUMA nodes on this system.

Ideally, a process on Socket One would execute on CPUs 0-3 and also use the RAM that is located in Socket One's RAM bank.

However, if there is not enough RAM in Socket One's bank, or the OS does not realize that NUMA is in use, then it is possible for a CPU in Socket One to use the RAM from Socket Two. This is not optimal because there are twice as many steps involved in accessing that RAM. Having to make additional hops causes latency, which is bad.

To optimize performance for applications, you must ask the following questions:

  • 1) What is the topology of the system? (Is this NUMA, or not?)
  • 2) Where is the application currently executing? (Using one socket, or all of them?)
  • 3) If the application is using one CPU, what is the closest bank of RAM to it?
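You can answer all three questions from the shell. A quick sketch (here we inspect the current shell itself; numactl comes from the numactl package and may not be installed):

```shell
# 1) Topology: how many NUMA nodes does this system have?
lscpu | grep -i numa || true

# 2) Where is the application executing? The PSR column is the CPU the
#    process last ran on (here we inspect the current shell).
ps -o pid,psr,comm -p $$

# 3) Which node owns that CPU? numactl --hardware lists each node's CPUs
#    and memory banks (requires the numactl package).
command -v numactl >/dev/null && numactl --hardware || true
```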

NUMA in action

The process for a CPU to get access to local memory is as follows:

1) A CPU from Socket One tells its local memory controller the address it wants to access.

2) The memory controller sets up access to this address for that CPU.

3) The CPU then starts doing work with this address.

The process for a CPU to get access to remote memory is as follows:

1) A CPU from Socket One tells its local memory controller the address it wants to access.

2) Since the process is unable to use the local bank (not enough space, or it was started elsewhere), the local memory controller passes the request to Socket Two's memory controller.

3) Socket Two's (remote) memory controller then sets up access for that CPU on Socket Two's RAM.

4) The CPU then starts doing work with this address.

You can see that there are extra steps involved, which can really add up over time and slow down performance.

CPU Affinity with Taskset

This utility can be used to retrieve and set CPU affinity of a running process. You can also use this utility to launch a process on specific CPUs. However, this command will not ensure that local memory is used by whatever CPU is set.

To control this, you would use numactl instead.

CPU affinity is represented as a bitmask. The lowest order bits represent the first logical CPU, and the highest order bits would represent the last logical CPU.

Examples would be 0x00001 = processor 0, and 0x00003 = processors 0 and 1.

Command used to set the CPU affinity of a running process would be:

taskset -p $CPU_bit_mask $PID_of_task_to_set

Command used to launch a process with a set affinity:

taskset $CPU_bit_mask $program_to_launch

You can also just specify logical CPU numbers with this option:

taskset -c 0,1,2,3 $program_to_launch
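A quick sanity check of the affinity commands above (the CPU numbers here are just an example):

```shell
# Launch a child restricted to CPU 0 and print the CPU it actually runs on;
# with the affinity set, the PSR column should read 0.
taskset -c 0 sh -c 'ps -o psr= -p $$'

# Query the affinity mask of an existing process (here, the current shell):
taskset -p $$
```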

CPU Scheduling

The Linux scheduler has a few different policies that it uses to determine where, and for how long a thread will run.

The two main categories are:

1) Realtime Policies


2) Normal Policies


Realtime scheduling policies:

Realtime threads are scheduled first, and normal threads are scheduled to run after realtime threads.

Realtime policies are used for time-critical tasks that must complete without interruptions.


SCHED_FIFO: This policy defines a fixed priority for each thread (1 – 99). The scheduler scans the list of these threads and runs the highest-priority threads first. The threads then run until they block, exit, or another thread with a higher priority comes along.

Even a low priority thread using this policy will run before any other thread under a normal policy.


SCHED_RR: This uses a round-robin style, and it load-balances between threads with the same priority.
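These policies can be inspected and set with chrt from util-linux (a sketch; actually setting a realtime priority normally requires root):

```shell
# Valid priority ranges per policy (SCHED_FIFO and SCHED_RR use 1-99):
chrt -m

# Inspect the policy and priority of a running process (here, this shell):
chrt -p $$

# Run a command under SCHED_FIFO at priority 10 (needs root / CAP_SYS_NICE):
# chrt -f 10 my_command
```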


Transparent HugePage Overview

Most modern CPUs can take advantage of multiple page sizes, however the Linux Kernel will usually stick with the smallest page size unless you tell it otherwise. The smallest page size is 4096 bytes or 4KB. Any page size above 4KB is considered a huge page; larger pages can improve performance in some cases because they reduce the number of page faults needed to access data in memory.

While this might be an over simplified example, it should explain why huge pages can be awesome. If an application needs, say, 2MB of data and I am using 4KB page sizes, there would need to be 512 page faults to read all the data from memory (2048KB / 4KB = 512). Now, if I was using 2MB page sizes, there would only need to be 1 page fault since all the data can fit inside a single “huge page”.
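The arithmetic above, spelled out:

```shell
# 2 MB of data is 2048 KB; with 4 KB pages that takes 2048 / 4 = 512 pages
# (and up to 512 page faults), versus a single 2 MB huge page.
echo $((2048 / 4))   # prints 512
```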

In addition to reduced page faults, huge pages also boost performance by reducing the cost of virtual-to-physical address translation, since fewer pages need to be accessed to obtain all the data from memory. With fewer lookups and translations going on, the CPU caches stay warmer, improving performance further.

The Kernel will map its own address space with hugepages to help reduce TLB pressure; however, the only way a user-space application can utilize huge pages is through hugetlbfs, which can be a huge pain in the ass to configure. Transparent Huge Pages came along and rescued the day by automating some of the use of huge pages in a safe way that won't break an application.

The Transparent HugePage patch was added to the 2.6.38 Kernel by Andrea Arcangeli. It introduced a new kernel thread called khugepaged, which scans for pages that can be merged into a single large page; once the pages have been merged into a single huge page, the smaller pages are removed and freed. If the huge page needs to be swapped to disk, it is automatically split back up into smaller pages and written to disk. So THP basically is awesome.

How to Configure Transparent HugePages

You can see if your OS is configured to use transparent huge pages by cat-ing the file below. The file should exist, and most of the time the value will be "always".

cat /sys/kernel/mm/transparent_hugepage/enabled

You can disable transparent huge pages by echoing "never" into the value, although I do not suggest disabling THP unless you really know what you are doing.

echo never > /sys/kernel/mm/transparent_hugepage/enabled

If you want to see if there are any active huge pages, you can check /proc/meminfo. In the output below, static huge pages are not configured (HugePages_Total is 0), but the non-zero AnonHugePages value shows that Transparent Huge Pages are in use.

cat /proc/meminfo  | grep -i huge

AnonHugePages:    301056 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
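The AnonHugePages line in /proc/meminfo is system-wide; you can also sum a single process's THP usage from its smaps file. A sketch, using the current shell as the target process:

```shell
# Sum the AnonHugePages entries across one process's mappings (the current
# shell here); prints 0 kB if the process has no THP-backed memory.
awk '/AnonHugePages/ {sum += $2} END {print sum + 0, "kB"}' /proc/$$/smaps
```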


Controlling NUMA with numactl

If you are running CentOS 6 or CentOS 7, you can use numactl to run processes with specific scheduling or memory-placement policies. These policies then apply to the process and any child processes it creates.

You can check the following locations to determine how the CPUs talk to each other, along with NUMA node info.


Long running applications that require high performance should be configured so that they only use memory that is located as close to the CPU as possible.

Applications that are multi-threaded should be locked down to a specific NUMA node, not a CPU core. This is because of the way the CPU caches instructions: it is better to reserve a set of cores for specific threads; otherwise, it's possible that another application's thread will run and clear out the previous cache, and the CPU then wastes time re-caching things.

To display the current NUMA policy settings for a current process:

numactl --show

To display available NUMA nodes on a CentOS 7 system

numactl --hardware

To only allocate memory to a process from specific NUMA nodes, use the following command

numactl --membind=$nodes $program_to_run

To only run a process on specific CPU nodes, use the following command

numactl --cpunodebind=$nodes $program_to_run

To run a process on specific CPUs, (not NUMA nodes), run this command

numactl --physcpubind=$CPU $program_to_run

Viewing NUMA stats with numastat

The numastat command can be used to view memory statistics for CentOS and which processes are running on which NUMA node, broken down on a per-NUMA-node basis. If you want to know whether your server is achieving optimal CPU performance by avoiding NUMA misses, use numastat and look for low numa_miss and numa_foreign values. If you notice a lot of NUMA misses (like 5% or more), you may want to try pinning the process to specific nodes to improve performance.

You can run numastat without any options and it will display quite a lot of information. If you run numastat on CentOS 7, the output might look slightly different than it does on CentOS 6.


The default tracking categories are as follows:

  • numa_hit – The number of successful allocations to this node.
  • numa_miss – The number of attempted allocations to another NUMA node that were instead allocated to this node. If this value is high, it means that some NUMA nodes are using remote memory instead of local memory. This will hurt performance badly since there is additional latency for every remote memory access; many remote accesses begin to slow down the application. Lots of NUMA misses could be caused by a lack of available memory, so if your server is close to utilizing all memory you may want to look into adding more memory, or using taskset to balance process placement across NUMA nodes.
  • numa_foreign – Similar to numa_miss, but instead it shows the number of allocations that were initially intended for this node but were moved to another node. High values here are also bad. This value should mirror numa_miss.
  • interleave_hit – The number of attempted interleave allocations to this node that were a success.
  • local_node – The number of times a process on this node successfully allocated memory on this node.
  • other_node – The number of times a process on another node allocated memory on this node.
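The counters described above are also exposed per node under sysfs, which is handy for scripting (a sketch; node0 exists on every Linux system, and numastat itself comes from the numactl package):

```shell
# Raw counters for node 0 (numa_hit, numa_miss, numa_foreign, ...):
cat /sys/devices/system/node/node0/numastat

# Per-process, per-node usage in MB, if the numactl package is installed:
command -v numastat >/dev/null && numastat -p $$ || true
```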

NUMA Affinity Management Daemon (numad)

Numad is a daemon that automatically monitors NUMA topology and resource usage within the OS, numad can run on CentOS 6 or CentOS 7 servers. Numad can help to dynamically improve NUMA resource allocation and management, which can help to improve performance by watching what CPU processes run on, and what NUMA nodes those processes access. Over time NUMAD will balance out processes so that they are able to access data from the local memory bank which helps to lower latency.

The NUMA daemon looks for significant, long-running processes and then attempts to pin each one to certain NUMA nodes, so that the CPU and memory reside in the same NUMA node. This assumes there is enough free memory for the process to use.

You will see significant performance improvements on systems that have long running processes that consume a lot of resources, but not enough to occupy multiple NUMA nodes. For instance, MySQL is a prime candidate for NUMA balancing, Varnish would be another good candidate.

Applications such as large, in memory databases may not see any improvement, this is because you cannot lock down resources to a single NUMA node if all the system RAM is being used by one process.

To start the numad process on CentOS 6 or CentOS 7

service numad start

To ensure NUMAD starts on reboot, use chkconfig to set numad to “on”

chkconfig numad on

File System Optimization


Write barriers are a kernel mechanism used to ensure that file system metadata is correctly written and ordered on persistent storage. When enabled, they ensure that any data transmitted via fsync persists across power outages. This is enabled by default.

If a system does not have volatile write caches, write barriers can be disabled to help improve performance.

Mount option:


Access Time

Whenever a file is read, the access time for that file must be updated in the inode metadata, which usually involves additional write I/O. If this is not needed, you can disable access-time updates to help with I/O performance.

Mount option:


Increased Read-Ahead Support

Read-ahead speeds up file access by pre-fetching data and loading it into the page cache so that it can be available earlier in memory instead of from disk.

To view the current read-ahead value for a particular block device, run:

blockdev --getra device

To modify the read-ahead value for that block device, run the following command. N represents the number of 512-byte sectors.

blockdev --setra N device

This will not persist across reboots, so add it to a run-level init.d script to reapply it after reboots.
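blockdev counts in 512-byte sectors; the same setting is visible in kilobytes under sysfs. A sketch (the /dev/sda device name is an assumption):

```shell
# Current read-ahead for each block device in kB (2 sectors = 1 kB):
for f in /sys/block/*/queue/read_ahead_kb; do
    [ -e "$f" ] || continue
    printf '%s: %s kB\n' "$f" "$(cat "$f")"
done

# Equivalent blockdev calls for a hypothetical /dev/sda:
# blockdev --getra /dev/sda        # prints sectors
# blockdev --setra 4096 /dev/sda   # 4096 * 512 bytes = 2 MB of read-ahead
```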

Syscall Utilities


strace can be used to watch system calls made by a program.

strace $program


Sysdig is newer than strace and there is a lot you can do with it. By default it shows all system calls on a server, but you can filter down to certain applications if you want.


For more information on how to install and use sysdig:

To install on CentOS 6

rpm --import  
curl -s -o /etc/yum.repos.d/draios.repo
rpm -i
yum -y install kernel-devel-$(uname -r)
yum -y install sysdig

System Calls

A system call is how a program requests a service from the OS's kernel. System calls can ask for access to a hard drive, open a file, create a new process, and so on.

System calls require a switch from user mode to kernel mode.


The clone syscall creates new processes and threads. It is one of the more complex system calls and can be expensive to run, so if you notice tons of these syscalls and performance is low, you may want to reduce how often this happens by increasing process lifetimes or reducing the number of processes in general.

sysdig filter for clone

sysdig evt.type=clone


This syscall executes programs, typically you will see this call after the clone syscall. Everything that gets executed goes through this call.

sysdig filter for execve

sysdig evt.type=execve


This syscall changes the process working directory. If anything changes directory you can see it by filtering this syscall.

sysdig filter for chdir

sysdig evt.type=chdir


These syscalls open files and can also create them. If you trace them you can view file creation and see who is touching what.

sysdig filter for open and creat

sysdig evt.type=open
sysdig evt.type=creat


This syscall initiates a connection on a socket. It is the only syscall that can establish a network connection.

sysdig filter for connect. You can also specify a port or IP to view specific services or IPs.

sysdig evt.type=connect
sysdig evt.type=connect and fd.port=80


This syscall accepts a connection on a socket. On the server side, you will see this syscall whenever a client calls connect.

sysdig filter for accept. You can also specify a port or IP to view specific services or IPs.

sysdig evt.type=accept
sysdig evt.type=accept and fd.port=80


These syscalls read or write data to or from a file.

sysdig filter for IO

sysdig evt.is_io=true

You can also use a chisel to view IO for certain files, ports, or programs, for example:

sysdig -c echo_fds

sysdig -c echo_fds fd.port!=80


These syscalls delete or rename files.

sysdig evt.type=unlink
sysdig evt.type=rename


Performance Tuning in CentOS 7


In RedHat (and thus CentOS) 7.0, a daemon called “tuned” was introduced as a unified system for applying tunings to Linux. tuned operates with simple, file-based tuning “profiles” and provides an admin command-line interface named “tuned-adm” for applying, listing and even recommending tuned profiles.

Some operational benefits of tuned:

  • File-based configuration – Profile tunings are contained in simple, consolidated files
  • Swappable profiles – Profiles are easily changed back/forth
  • Standards compliance – Using tuned profiles ensures tunings are not overridden or ignored

Note: If you use configuration management systems like Puppet, Chef, Salt, Ansible, etc., I suggest you configure those systems to deploy tunings via tuned profiles instead of applying tunings directly, as tuned will likely start to fight this automation, overriding the changes.

The default available tuned profiles (as of  RedHat 7.2.1511) are:

  • balanced
  • desktop
  • latency-performance
  • network-latency
  • network-throughput
  • powersave
  • throughput-performance
  • virtual-guest
  • virtual-host

The profiles that are generally interesting for database usage are:

  • latency-performance

    “A server profile for typical latency performance tuning. This profile disables dynamic tuning mechanisms and transparent hugepages. It uses the performance governor for p-states through cpuspeed, and sets the I/O scheduler to deadline.”

  • throughput-performance

    “A server profile for typical throughput performance tuning. It disables tuned and ktune power saving mechanisms, enables sysctl settings that improve the throughput performance of your disk and network I/O, and switches to the deadline scheduler. CPU governor is set to performance.”

  • network-latency – Includes “latency-performance,” disables transparent_hugepages, disables NUMA balancing and enables some latency-based network tunings.
  • network-throughput – Includes “throughput-performance” and increases network stack buffer sizes.

I find “network-latency” is the closest match to our recommended tunings, but some additional changes are still required.


Tuning a server according to specific requirements is not an easy task. You need to know a lot of system parameters and how to change them in an intelligent manner.
Red Hat offers a tool called tuned-adm that makes these changes easy by using tuning profiles.

The tuned-adm command requires the tuned package (if not already installed):

# yum install -y tuned

Tuning Profiles

A tuning profile consists of a list of system changes corresponding to a specific requirement.
To get the list of the available tuning profiles, type:

# tuned-adm list
Available profiles:
- balanced
- desktop
- latency-performance
- network-latency
- network-throughput
- powersave
- sap
- throughput-performance
- virtual-guest
- virtual-host
Current active profile: virtual-guest

Note: All these tuning profiles are explained in detail in the tuned-adm man page.

To only get the active profile, type:

# tuned-adm active
Current active profile: virtual-guest

To get the recommended tuning profile in your current configuration, type:

# tuned-adm recommend

To apply a different tuning profile (here throughput-performance), type:

# tuned-adm profile throughput-performance

cpu setting

tuned-adm profile throughput-performance
tuned-adm active
cpupower idle-set -d 4
cpupower idle-set -d 3
cpupower idle-set -d 2
cpupower frequency-set -g performance
# for more info /usr/lib/tuned/throughput-performance/tuned.conf




net.core.netdev_max_backlog = 300000
net.ipv4.tcp_sack = 0
net.ipv4.tcp_rmem = 16384 349520 16777216
net.ipv4.tcp_wmem = 16384 349520 16777216
net.ipv4.tcp_mem = 2314209 3085613 4628418
net.ipv4.tcp_window_scaling = 1
#UDP buffer

Linux Kernel Tuning for Centos 7


tuned should already be installed on CentOS 7, and the default profile is balanced.

tuned-adm profiles can be found in this directory

ls /usr/lib/tuned/
balanced/               latency-performance/    powersave/              virtual-guest/
desktop/                network-latency/        recommend.conf          virtual-host/
functions               network-throughput/     throughput-performance/

To see what the active profile is:

tuned-adm active

To activate a given tuned profile (replace xxx with the profile name):

tuned-adm profile xxx



  • latency-performance
    • Profile for low latency performance tuning.
    • Disables power saving mechanisms.
    • CPU governor is set to performance and locked to the low C states (by PM QoS).
    • CPU energy performance bias to performance.
    • This profile is the Parent profile to “network-latency”.

Activate tuned latency-performance for CentOS 7

tuned-adm profile latency-performance

For CentOS 7, the latency-performance profile includes the following tweaks

cat /usr/lib/tuned/latency-performance/tuned.conf


  • network-latency
    • This is a Child profile of “latency-performance”.
    • This means that activating the network-latency profile via tuned automatically enables latency-performance, then makes some additional tweaks to improve network latency.
    • Disables transparent hugepages, and makes some net.core kernel tweaks.


cat /usr/lib/tuned/network-latency/tuned.conf


  • throughput-performance
    • This is the Parent profile to virtual-guest, virtual-host and network-throughput.
    • This profile is optimized for large, streaming files or any high throughput workloads.


cat /usr/lib/tuned/throughput-performance/tuned.conf
kernel.sched_min_granularity_ns = 10000000
kernel.sched_wakeup_granularity_ns = 15000000
vm.dirty_ratio = 40
vm.dirty_background_ratio = 10
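You can compare the profile's values against what the kernel is actually using, since sysctl -n prints just the value for a key. A small sketch (show_sysctls is a throwaway helper invented for this example):

```shell
#!/bin/sh
# Print the current kernel value for each sysctl key given as an argument.
# show_sysctls is a helper made up for this sketch; keys missing on the
# running kernel are reported as "n/a".
show_sysctls() {
    for key in "$@"; do
        printf '%s = %s\n' "$key" "$(sysctl -n "$key" 2>/dev/null || echo 'n/a')"
    done
}

show_sysctls kernel.sched_min_granularity_ns kernel.sched_wakeup_granularity_ns \
             vm.dirty_ratio vm.dirty_background_ratio
```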


  • virtual-guest
    • Profile optimized for virtual guests based on throughput-performance profile.
    • It additionally decreases virtual memory swappiness and sets its own dirty_ratio value.


cat /usr/lib/tuned/virtual-guest/tuned.conf
vm.dirty_ratio = 30
vm.swappiness = 30


  • virtual-host
    • Profile optimized for virtual hosts based on throughput-performance profile.
    • It additionally enables more aggressive write-back of dirty pages.


cat /usr/lib/tuned/virtual-host/tuned.conf
vm.dirty_background_ratio = 5
kernel.sched_migration_cost_ns = 5000000

I/O scheduler

echo 'deadline' > /sys/block/sda/queue/scheduler
vim /etc/grub2.cfg

menuentry 'CAKE 3.0, with Linux 3.10.0-229.1.2.el7.x86_64'
set root='hd0,msdos1'
linux16 /vmlinuz-3.10.0-229.1.2.el7.x86_64 root= …. elevator=deadline
initrd16 /initramfs-3.10.0-229.1.2.el7.x86_64.img
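Note that the echo takes effect immediately but is lost on reboot, while the grub change persists; on CentOS 7 grubby can add the elevator argument without hand-editing grub2.cfg. A sketch (active_scheduler is a helper invented here, and sda is an example device):

```shell
#!/bin/sh
# active_scheduler: parse the bracketed (active) entry out of a
# queue/scheduler file, e.g. "noop [deadline] cfq" -> "deadline".
# Helper name invented for this sketch.
active_scheduler() {
    sed -n 's/.*\[\(.*\)\].*/\1/p' "$1"
}

# On a real system:
#   active_scheduler /sys/block/sda/queue/scheduler   # e.g. prints "deadline"
#   echo deadline > /sys/block/sda/queue/scheduler    # runtime only, root required
#   grubby --update-kernel=ALL --args="elevator=deadline"   # persistent
```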

WinPE 10-8 Sergei Strelec

WinPE 10-8 Sergei Strelec (x86/x64/Native x86) 2018.01.05 English Version | File size: 2.89 GB

A bootable Windows 10 and 8 PE disk for computer maintenance: working with hard disks and partitions, backing up and restoring disks and partitions, computer diagnostics, data recovery, and Windows installation.

WinPE 10-8 Sergei Strelec (x86/x64/Native x86) 2017.10.03 English Version

The World’s Most Advanced Microsoft Windows 10 USB Bootable OS

The Portable Microsoft Windows 10 Operating System For Cyber Agent



Backup and restore
Acronis True Image 2017 20.0 Build 8058
Acronis True Image Premium 2014 Build 6673
Acronis Backup Advanced 11.7.50064
Active Disk Image Professional 7.0.4
StorageCraft Recovery Environment
FarStone Recovery Manager 10.10
QILING Disk Master
R-Drive Image 6.1 Build 6109
Symantec Veritas System Recovery
Symantec Ghost
TeraByte Image for Windows 3.15
AOMEI Backupper 4.0.6
Drive SnapShot
Macrium Reflect 7.1.2801
Disk2vhd 2.01
Vhd2disk v0.2

Hard disk
Acronis Disk Director 12.0.3297
EASEUS Partition Master 12.5 WinPE Edition
Paragon Hard Disk Manager 15
MiniTool Partition Wizard 10.2.2
AOMEI Partition Assistant 6.6.0
AOMEI Dynamic Disk Manager 1.2.0
Eassos PartitionGuru
Defraggler 2.21.993
Auslogics Disk Defrag 7.1.0
HDD Low Level Format Tool 4.40
Active KillDisk 10.1.1
FarStone DriveClone 11.10 Build 20150825 (WinPE10)

HD Tune Pro 5.70
Check Disk GUI
Victoria 4.47
HDD Regenerator 2011
HDDScan 3.3
Hard Disk Sentinel Pro 5.01 Build 8557
Western Digital Data LifeGuard Diagnostics 1.31.0
CrystalDiskInfo 7.5.1
CrystalDiskMark 6.0.0
AIDA64 Extreme Edition 5.92.4300
BurnInTest Pro 8.1 Build 1025
PerformanceTest 9.0 Build 1022
ATTO Disk Benchmark 3.05
RWEverything 1.7
CPU-Z 1.82.1
PassMark MonitorTest 3.2 Build 1004
HWiNFO32 5.70 Build 3300
OCCT Perestroika 4.5.1
Keyboard Test Utility 1.4.0

Network programs
Opera 46
PENetwork 0.58.2
TeamViewer 6
Ammyy Admin 3.5
AeroAdmin 4.1 Build 2767
µTorrent 3.1.3
FileZilla 3.24.0
Internet Download Accelerator
OpenVpn 2.4.4
PuTTY 0.70

Other programs
Active Password Changer
Reset Windows Password
PCUnlocker 3.8.0
UltraISO 9.7.0 Build 3476
Total Commander 9.00
Remote Registry (x86/64)
FastStone Capture 7.7
IrfanView 4.38
STDU Viewer
Bootice 1.3.4
Unlocker 1.9.2
Double Driver 4.1.0
GImageX 2.1.1
Media Player Classic
EasyBCD 2.3
EasyUEFI 3.0
SoftMaker Office
Far Manager 3.0 Build 5100
78Setup (author conty9)
Dism++ 10.1.1000.52B
WinHex 19.3
FastCopy 3.40
UltraSearch 2.12
Linux Reader 2.6
WinDirStat 1.1.2
Recover Keys
NirLauncher 1.20.25
Remote Registry Editor
Windows Recovery Environment (WinPE 10)

Data Recovery
R-Studio 8.5 Build 170117
Active File Recovery 15.0.7
Active Partition Recovery 15.0.0
Runtime GetDataBack for NTFS 4.33
Runtime GetDataBack for FAT 4.33
DM Disk Editor and Data Recovery 2.10.0
UFS Explorer Professional Recovery 5.23.1
Hetman Partition Recovery 2.7
Eassos Recovery
EaseUS Data Recovery Wizard 11.9



How-to extend a root LVM partition online


This guide explains how to extend a root LVM partition online.

There is also a quick remedy for the emergency situation when your root partition runs out of disk space. ext3 and ext4 both have a feature that can help here: unless explicitly changed during filesystem creation, they reserve five percent (5%) of the volume capacity for the superuser (root), and that reserve can be released temporarily.

# df -Th
Filesystem    Type    Size  Used Avail Use% Mounted on
/dev/mapper/vg_main-lv_root
              ext4    8.4G  8.0G  952K 100% /
tmpfs        tmpfs    499M     0  499M   0% /dev/shm
/dev/vda1     ext4    485M   33M  428M   8% /boot

# dumpe2fs /dev/vg_main/lv_root | grep 'Reserved block count'
dumpe2fs 1.41.12 (17-May-2010)
Reserved block count:     111513

It turned out that 111513 4 KB blocks were reserved for the superuser, which is exactly five percent of the volume capacity.
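The five percent figure is easy to sanity-check with shell arithmetic, using the block count from the dumpe2fs output above and the 4 KiB block size:

```shell
#!/bin/sh
# Sanity-check the reservation: reserved blocks * block size, in MiB.
RESERVED_BLOCKS=111513   # "Reserved block count" from dumpe2fs
BLOCK_SIZE=4096          # 4 KiB filesystem blocks
reserved_mib=$(( RESERVED_BLOCKS * BLOCK_SIZE / 1024 / 1024 ))
echo "Reserved for root: ${reserved_mib} MiB"
# roughly 435 MiB, i.e. about 5% of the ~8.5 GB volume shown by df
```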

How do we reclaim that reserved space? Temporarily set the reserved percentage to zero:

# tune2fs -m 0 /dev/vg_main/lv_root 
tune2fs 1.41.12 (17-May-2010)
Setting reserved blocks percentage to 0% (0 blocks)
# df -Th
Filesystem    Type    Size  Used Avail Use% Mounted on
/dev/mapper/vg_main-lv_root
              ext4    8.4G  8.0G  437M  95% /
tmpfs        tmpfs    499M     0  499M   0% /dev/shm
/dev/vda1     ext4    485M   33M  428M   8% /boot

Now that we have some free space on the root partition to work with, we can extend the LVM volume:

Create a new partition of appropriate size using fdisk

fdisk /dev/sdb

Use this key sequence to create a new primary partition and set its type to LVM (8e):

n, p, 1, enter (accept default first sector), enter (accept default last sector), t, 8e, w
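The same keystrokes can be fed to fdisk non-interactively. This is a sketch only; double-check the target device first, since it rewrites the partition table (fdisk_keys is a helper made up for this example):

```shell
#!/bin/sh
# fdisk_keys emits the n, p, 1, <Enter>, <Enter>, t, 8e, w sequence,
# one answer per line, suitable for piping into fdisk.
# Helper name invented for this sketch.
fdisk_keys() {
    printf 'n\np\n1\n\n\nt\n8e\nw\n'
}

# On a real system (destructive; run against the whole disk, not a partition):
#   fdisk_keys | fdisk /dev/sdb
```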

Create a new Physical Volume

# pvcreate /dev/sdb1
  Writing physical volume data to disk "/dev/sdb1"
  Physical volume "/dev/sdb1" successfully created

Extend a Volume Group

# vgextend vg_main /dev/sdb1
  Volume group "vg_main" successfully extended

Extend your LVM

– extend the logical volume by all of the free space on the new PV

# lvextend /dev/vg_main/lv_root /dev/sdb1
  Extending logical volume lv_root to 18.50 GiB
  Logical volume lv_root successfully resized

– or with a given size

lvextend -L +10G /dev/vg_main/lv_root

Finally resize the file system online

# resize2fs /dev/vg_main/lv_root
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/vg_main/lv_root is mounted on /; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 2
Performing an on-line resize of /dev/vg_main/lv_root to 4850688 (4k) blocks.
The filesystem on /dev/vg_main/lv_root is now 4850688 blocks long.

Now we can set the reserved blocks back to the default percentage – 5%

tune2fs -m 5 /dev/mapper/vg_main-lv_root


# df -Th
Filesystem    Type    Size  Used Avail Use% Mounted on
/dev/mapper/vg_main-lv_root
              ext4     19G  8.0G  9.4G  46% /
tmpfs        tmpfs    499M     0  499M   0% /dev/shm
/dev/vda1     ext4    485M   33M  428M   8% /boot