
Performance Co-Pilot (PCP) on my RHEL server to capture performance logs

# run these commands on a Centos 7 server as root
yum install -y pcp pcp-webapi pcp-system-tools
chkconfig pmcd on
service pmcd start
chkconfig pmlogger on
service pmlogger start
chkconfig pmwebd on
service pmwebd start
# open port 44323 in the firewall
# To start vector on your laptop
docker run -d --name vector -p 80:80 netflixoss/vector:latest

open http://localhost

Performance Co-Pilot (PCP) on my RHEL server to capture performance logs

RHEL 7 (prior to RHEL 7.4)

PCP is included in the base RHEL and Fedora distributions. A minimal installation requires just the pcp package (and its dependencies) to enable performance data logs to be collected for later analysis:

yum install pcp
systemctl enable pmcd
systemctl enable pmlogger
systemctl start pmcd
systemctl start pmlogger



Some of the command-line tools shipped with PCP:

pcp

pmstat

pmatop

pmcollectl


Vector currently ships with the following widgets and dashboards, which can be easily extended. Here is a short list of the metrics available by default.

CPU

  • Load Average
  • Runnable
  • CPU Utilization
  • Per-CPU Utilization
  • Context Switches

Memory

  • Memory Utilization
  • Page Faults

Disk

  • Disk IOPS
  • Disk Throughput
  • Disk Utilization
  • Disk Latency

Network

  • Network Drops
  • TCP Retransmits
  • TCP Connections
  • Network Throughput
  • Network Packets

“This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.”

There are two ways to resolve this:

  1. disable the plugin in its configuration file

    vim /etc/yum/pluginconf.d/subscription-manager.conf

    enabled=0

  2. or register the system with Red Hat Subscription Management or your Satellite server
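The first option can also be done non-interactively; this is a sketch using sed against the same plugin file edited above:

```shell
# Flip enabled=1 to enabled=0 in the subscription-manager yum plugin config
sed -i 's/^enabled=1/enabled=0/' /etc/yum/pluginconf.d/subscription-manager.conf

# Confirm the change
grep '^enabled' /etc/yum/pluginconf.d/subscription-manager.conf
```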

ps command #1 – Basic

ps is a Linux command for monitoring the processes consuming resources on a server. If you work on Linux systems, it is good to have at least a basic understanding of it.

The ps program, when run, takes a snapshot of the processes running at that moment and displays it on the terminal. This can be used to analyse system performance or to identify a problematic process that could put the system at risk.

ps Command:


When we run a plain ps command, it displays very basic information:

$ ps
  PID TTY          TIME CMD
22396 pts/0    00:00:00 su
22402 pts/0    00:00:00 bash
22417 pts/0    00:00:00 su
22420 pts/0    00:00:00 bash
23332 pts/0    00:00:00 ps
PID – process ID
TTY – terminal in which the process is running
TIME – total CPU time consumed so far
CMD – command

Let’s try it with one argument, -f (full format):

$ ps -f
UID PID PPID C STIME TTY TIME CMD
root 22396 22377 0 09:49 pts/0 00:00:00 su
root 22402 22396 0 09:49 pts/0 00:00:00 bash
root 22417 22402 0 09:50 pts/0 00:00:00 su
root 22420 22417 0 09:50 pts/0 00:00:00 bash
root 23337 22420 0 11:02 pts/0 00:00:00 ps -f

This output includes some additional fields:
UID – user ID of the process owner
PPID – parent process ID
STIME – process start time

Let’s try a few more arguments and see what the output looks like:

$ ps -ef

mohan 7585 1 0 18:29 ? 00:00:00 /usr/libexec/gvfsd-http --spawner :1.7 /org/gtk/gvfs/exec_spaw/2
root 16991 1 0 Dec04 ? 00:00:00 /usr/sbin/bluetoothd --udev
mohan 17099 1 0 Dec04 ? 00:09:26 /usr/lib64/firefox/firefox
mohan 22246 1 0 19:28 ? 00:00:05 gnome-terminal
mohan 22248 22246 0 19:28 ? 00:00:00 gnome-pty-helper
mohan 22377 22246 0 19:35 pts/0 00:00:00 bash
root 22396 22377 0 19:35 pts/0 00:00:00 su
root 22402 22396 0 19:35 pts/0 00:00:00 bash
root 22417 22402 0 19:35 pts/0 00:00:00 su
root 22420 22417 0 19:35 pts/0 00:00:00 bash
mohan 22937 1 0 20:06 ? 00:00:00 gedit
root 23282 1899 0 20:47 ? 00:00:00 /usr/libexec/hald-addon-rfkill-killswitch
root 24348 1810 0 22:07 ? 00:00:00 /sbin/dhclient -d -4 -sf /usr/libexec/nm-dhcp-client.action -pf /var/run/dhclient-eth1.pid -lf /var/lib/dhclient/dhclient-e738be73-e337-4f64-865e-aa936ac77c14-eth1.lease -cf /var/run/nm-dhclient-eth1.conf eth1
mohan 27098 1236 2 22:23 ? 00:00:00 /usr/lib/rstudio-server/bin/rsession -u mohan
root 27112 1 0 22:23 ? 00:00:00 /usr/libexec/fprintd

All processes
To see all processes on the system (along with the command line arguments used to start each process) you could use:

$ ps aux

Processes for User
To see all processes for a particular user (along with the command line arguments for each process) you could use:

$ ps U <username> u

$ ps U mohan u

USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
mohan 2695 0.0 0.0 229128 672 ? Sl Dec03 0:00 /usr/bin/gnome-keyring-daemon --daemonize --login
mohan 2705 0.0 0.1 253264 1888 ? Ssl Dec03 0:01 gnome-session
mohan 2713 0.0 0.0 20040 128 ? S Dec03 0:00 dbus-launch --sh-syntax --exit-with-session
mohan 2714 0.0 0.1 32476 1356 ? Ssl Dec03 0:01 /bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session
mohan 2732 0.0 0.3 133360 3636 ? S Dec03 0:06 /usr/libexec/gconfd-2
mohan 2740 0.0 0.3 507280 3408 ? Ssl Dec03 0:26 /usr/libexec/gnome-settings-daemon
mohan 2741 0.0 0.1 286220 1624 ? Ss Dec03 0:00 seahorse-daemon
mohan 2746 0.0 0.0 137388 844 ? S Dec03 0:00 /usr/libexec/gvfsd
mohan 2760 0.0 0.5 447048 5116 ? Sl Dec03 0:25 metacity
mohan 2767 0.0 0.7 502416 7600 ? Sl Dec03 0:34 gnome-panel
mohan 2769 0.0 0.3 450232 3156 ? S<sl Dec03 0:35 /usr/bin/pulseaudio --start --log-target=syslog
mohan 2772 0.0 0.0 94828 252 ? S Dec03 0:00 /usr/libexec/pulse/gconf-helper
mohan 2773 0.0 5.6 1199004 57544 ? Sl Dec03 1:18 nautilus
mohan 2775 0.0 0.0 696412 256 ? Ssl Dec03 0:00 /usr/libexec/bonobo-activation-server --ac-activate --ior-output-fd=18
mohan 2778 0.0 0.2 30400 2212 ? S Dec03 0:00 /usr/sbin/restorecond -u
mohan 2783 0.0 0.4 469076 4244 ? Sl Dec03 0:02 gpk-update-icon
mohan 2786 0.0 0.0 146404 900 ? S Dec03 0:00 /usr/libexec/gvfs-gdu-volume-monitor
mohan 2787 0.0 0.2 375072 2924 ? S Dec03 0:00 gnome-volume-control-applet
mohan 2788 0.0 0.5 331480 5988 ? S Dec03 0:48 /usr/libexec/wnck-applet --oaf-activate-iid=OAFIID:GNOME_Wncklet_Factory --oaf-ior-fd=18
mohan 2789 0.0 0.2 476996 2900 ? Sl Dec03 0:00 /usr/libexec/trashapplet --oaf-activate-iid=OAFIID:GNOME_Panel_TrashApplet_Factory --oaf-ior-fd=24

Process tree
A process tree shows the child/parent relationships between processes. (When a process spawns another process, the spawned one is called the child process and the spawning one the parent.)

$ ps afjx


Usually, when monitoring processes, we are looking for something specific that could impact server performance. For that we grep the ps output.

This is how we can list all http processes:

$ ps aux | grep http
atul      7585  0.0  0.0 177676   592 ?        S    Dec06   0:00 /usr/libexec/gvfsd-http --spawner :1.7 /org/gtk/gvfs/exec_spaw/2
root     28848  0.0  0.0   2700   168 pts/0    D+   02:49   0:00 grep http

You can filter the ps output by any keyword in the same way.
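One common annoyance with this approach is that the grep process itself shows up in the output (as in the last line above). A well-known trick is to wrap the first character of the pattern in brackets, so grep's own command line no longer matches the pattern:

```shell
# '[h]ttp' still matches the literal string "http" in other processes,
# but the string "[h]ttp" in grep's own command line does not contain
# "http", so the grep process is excluded from the results.
ps aux | grep '[h]ttp'
```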

There are also ps options that give you a customized output:

To see every process on the system using standard syntax:

$ ps -e
$ ps -ef
$ ps -eF
$ ps -ely
 To see every process on the system using BSD syntax:

$ ps ax
$ ps axu

To print a process tree:

$ ps -ejH
$ ps axjf

To get info about threads:

$ ps -eLf
$ ps axms

To get security info:

$ ps -eo euser,ruser,suser,fuser,f,comm,label
$ ps axZ
$ ps -eM

To see every process running as root (real & effective ID) in user format:

$ ps -U root -u root u

To see every process with a user-defined format:

$ ps -eo pid,tid,class,rtprio,ni,pri,psr,pcpu,stat,wchan:14,comm
$ ps axo stat,euid,ruid,tty,tpgid,sess,pgrp,ppid,pid,pcpu,comm
$ ps -eopid,tt,user,fname,tmout,f,wchan

Print only the process IDs of process syslogd:

$ ps -C syslogd -o pid=
 #ps -C <process_name> -o pid=

Print only the name of PID 42:

$ ps -p 42 -o comm=
  #ps -p <process_id> -o comm=
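These trailing-`=` forms suppress the header line, which makes them handy inside scripts; for example (syslogd is just an illustrative process name here):

```shell
# Capture a PID into a variable; the = after pid suppresses the header
pid=$(ps -C syslogd -o pid=)

# Use it in another command, e.g. to send a signal
[ -n "$pid" ] && kill -HUP "$pid"
```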

We can also sort the ps output with the Unix sort command, which is easy to use: pipe the ps output into sort with the proper arguments and voilà, you get the output sorted the way you want.

Let’s see how this works (sort arguments can differ depending on your Linux flavour and version).
I am using CentOS 6.3.

1. Display the top CPU consuming process (Column 3 – %CPU)

$ ps aux | head -1; ps aux | sort -k3 -nr | grep -v 'USER' | head

USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND

mohan 21210 2.0 0.1 110232 1140 pts/3 R+ 00:13 0:00 ps aux
hduser 2671 0.8 4.1 960428 42436 pts/1 Sl+ Aug22 5:29 mongod
root 1447 0.2 0.3 185112 3384 ? Sl Aug22 1:36 /usr/sbin/vmtoolsd
mohan 2478 0.2 2.1 448120 21876 ? Sl Aug22 1:51 /usr/lib/vmware-tools/sbin64/vmtoolsd -n vmusr --blockFd 3
rtkit 2359 0.1 0.1 168448 1204 ? SNl Aug22 0:44 /usr/libexec/rtkit-daemon
root 7 0.1 0.0 0 0 ? S Aug22 0:53 [events/0]
root 2204 0.1 4.3 147500 43872 tty1 Ss+ Aug22 0:45 /usr/bin/Xorg :0 -nr -verbose -audit 4 -auth /var/run/gdm/auth-for-gdm-wEmBs1/database -nolisten tcp vt1
root 920 0.0 0.0 0 0 ? S Aug22 0:00 [bluetooth]
root 9 0.0 0.0 0 0 ? S Aug22 0:00 [khelper]
root 8 0.0 0.0 0 0 ? S Aug22 0:00 [cgroup]

On my system the sort arguments are:
-kN ==> selects column N, e.g. -k4 for column 4
-n ==> treat the column as numeric
-r ==> reverse order

sort -k3 -nr ==> sort the third column numerically in reverse order (largest to smallest)
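As an aside, recent procps versions of ps can sort on their own, without piping through sort; check `man ps` on your system before relying on this:

```shell
# Built-in sorting: prefix the column with - for descending order
ps aux --sort=-%cpu | head   # top CPU consumers
ps aux --sort=-%mem | head   # top memory consumers
```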

2. Display the top 10 memory consuming process (Column 4 – %MEM)

$ ps aux | head -1; ps aux | sort -k4 -nr | grep -v 'USER' | head

USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND

root 2204 0.1 4.3 147500 43872 tty1 Ss+ Aug22 0:46 /usr/bin/Xorg :0 -nr -verbose -audit 4 -auth /var/run/gdm/auth-for-gdm-wEmBs1/database -nolisten tcp vt1
hduser 2671 0.8 4.1 960428 42436 pts/1 Sl+ Aug22 5:32 mongod
mohan 2458 0.0 2.3 943204 23624 ? S Aug22 0:16 nautilus
mohan 2516 0.0 2.2 275280 22316 ? Ss Aug22 0:06 gnome-screensaver
mohan 2478 0.2 2.1 448120 21876 ? Sl Aug22 1:52 /usr/lib/vmware-tools/sbin64/vmtoolsd -n vmusr --blockFd 3
mohan 2507 0.0 1.6 321388 16680 ? S Aug22 0:01 python /usr/share/system-config-printer/applet.py
mohan 2589 0.0 1.4 292556 14600 ? Sl Aug22 0:14 gnome-terminal
mohan 2536 0.0 1.3 395832 13372 ? S Aug22 0:00 /usr/bin/gnote --panel-applet --oaf-activate-iid=OAFIID:GnoteApplet_Factory --oaf-ior-fd=22
mohan 2502 0.0 1.3 474620 13952 ? Sl Aug22 0:01 gpk-update-icon
mohan 2537 0.0 1.2 459964 12736 ? S Aug22 0:10 /usr/libexec/clock-applet --oaf-activate-iid=OAFIID:GNOME_ClockApplet_Factory --oaf-ior-fd=34

3. Display the process by time (Column 4 – TIME)

$ ps vx | head -1; ps vx | sort -k4 -r | grep -v 'PID' | head

PID TTY STAT TIME MAJFL TRS DRS RSS %MEM COMMAND
2478 ? Sl 1:52 351 593 447526 21876 2.1 /usr/lib/vmware-tools/sbin64/vmtoolsd -n vmusr --blockFd 3
2458 ? S 0:16 228 1763 941440 23624 2.3 nautilus
2589 ? Sl 0:15 28 296 292259 14600 1.4 gnome-terminal
2421 ? Ssl 0:14 22 34 500541 9676 0.9 /usr/libexec/gnome-settings-daemon
2479 ? S 0:13 23 403 310472 11996 1.1 nm-applet --sm-disable
2537 ? S 0:10 37 168 459795 12736 1.2 /usr/libexec/clock-applet --oaf-activate-iid=OAFIID:GNOME_ClockApplet_Factory --oaf-ior-fd=34
2444 ? Ssl 0:10 25 64 445791 4872 0.4 /usr/bin/pulseaudio --start --log-target=syslog
2445 ? S 0:07 51 593 322206 12684 1.2 gnome-panel
2516 ? Ss 0:06 4 151 275128 22316 2.2 gnome-screensaver
2522 ? Sl 0:05 5 41 231870 1960 0.1 /usr/libexec/gvfs-afc-volume-monitor

4. Display the top 10 real memory usage process (Column 8 – RSS)

$ ps vx | head -1; ps vx | sort -k8 -nr | grep -v 'PID' | head

PID TTY STAT TIME MAJFL TRS DRS RSS %MEM COMMAND
2458 ? S 0:16 228 1763 941440 23624 2.3 nautilus
2516 ? Ss 0:06 4 151 275128 22316 2.2 gnome-screensaver
2478 ? Sl 1:52 351 593 447526 21876 2.1 /usr/lib/vmware-tools/sbin64/vmtoolsd -n vmusr --blockFd 3
2507 ? S 0:01 73 2 321385 16680 1.6 python /usr/share/system-config-printer/applet.py
2589 ? Sl 0:15 28 296 292259 14600 1.4 gnome-terminal
2502 ? Sl 0:01 29 257 474362 13952 1.3 gpk-update-icon
2536 ? S 0:00 92 1607 394224 13372 1.3 /usr/bin/gnote --panel-applet --oaf-activate-iid=OAFIID:GnoteApplet_Factory --oaf-ior-fd=22
2537 ? S 0:10 37 168 459795 12744 1.2 /usr/libexec/clock-applet --oaf-activate-iid=OAFIID:GNOME_ClockApplet_Factory --oaf-ior-fd=34
2445 ? S 0:07 51 593 322206 12684 1.2 gnome-panel
2438 ? Sl 0:03 30 542 433105 12512 1.2 metacity

Following the examples above, you can build many one-liners of your own. Before using any of them, check how ps and sort behave on your system.
Almost every platform has its own arguments for ps and sort, but the basics are the same. To sort any command's output by a particular column, first understand that output and column, then apply the sort command.

memory usage in Linux per process

Not because I really want to; but because I just don’t have the money to spend on a 2+ GB RAM VPS and I would like to run Jira.

In order to do this I keep a close eye on the processes running and how much memory each takes.

For this I found (and tweaked) the following bash command (originally found here):

ps -eo size,pid,user,command --sort -size | awk '{ hr=$1/1024 ; printf("%13.2f Mb ",hr) } { for ( x=4 ; x<=NF ; x++ ) { printf("%s ",$x) } print "" }'

This command will output every process and its memory usage in human readable (thus megabytes) format. For your convenience it is sorted by memory size descending (from highest to lowest).
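Note that the `size` column is an estimate of swappable virtual memory and can overstate real usage; a variant of the same one-liner keyed on resident set size (RSS) may be closer to what you actually want to watch:

```shell
# Same idea, but sorted by resident memory (RSS, reported by ps in KiB)
ps -eo rss,pid,user,command --sort -rss | awk '{ hr=$1/1024 ; printf("%13.2f Mb ",hr) } { for ( x=4 ; x<=NF ; x++ ) { printf("%s ",$x) } print "" }'
```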

An example of its use:

test@rmohan.com:~$ ps -eo size,pid,user,command --sort -size | awk '{ hr=$1/1024 ; printf("%13.2f Mb ",hr) } { for ( x=4 ; x<=NF ; x++ ) { printf("%s ",$x) } print "" }'
0.00 Mb COMMAND
116.18 Mb /usr/sbin/mysqld
11.58 Mb /usr/sbin/apache2 -k start
11.58 Mb /usr/sbin/apache2 -k start
11.58 Mb /usr/sbin/apache2 -k start

(I am sure I can tune the above to use less memory; this is merely an example)

I hope it will be as useful to you as it is to me,

Top Show Running Processes

The Linux top command is used to show all the running processes within your Linux environment. This guide shows you how to use the top command by explaining the different switches available and the information that is displayed:

How To Run The Top Command

In its basic form all you need to do to show the current processes is type the following in a Linux terminal:

top

What Information Is Shown:

The following information is displayed when you run the Linux top command:

Line 1

  • The time
  • How long the computer has been running
  • Number of users
  • Load average

The load average shows the system load time for the last 1, 5 and 15 minutes.
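The same three figures are also available from /proc/loadavg, which is convenient in scripts:

```shell
# Fields: 1, 5 and 15 minute load averages, running/total tasks, last PID
cat /proc/loadavg
```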

Line 2

  • Total number of tasks
  • Number of running tasks
  • Number of sleeping tasks
  • Number of stopped tasks
  • Number of zombie tasks

Line 3

  • CPU usage as a percentage by the user
  • CPU usage as a percentage by system
  • CPU usage as a percentage by low priority processes
  • CPU usage as a percentage by idle processes
  • CPU usage as a percentage by io wait
  • CPU usage as a percentage by hardware interrupts
  • CPU usage as a percentage by software interrupts
  • CPU usage as a percentage by steal time

This guide gives a definition of what CPU usage means.

Line 4

  • Total memory available
  • Total memory free
  • Total memory used
  • Buffer/cache usage

Line 5

  • Total swap available
  • Total swap free
  • Total swap used
  • Available memory

This guide gives a description of swap partitions and whether you need them.

Main Table

  • Process ID
  • User
  • Priority
  • Nice level
  • Virtual memory used by process
  • Resident memory used by a process
  • Shareable memory
  • CPU used by process as a percentage
  • Memory used by process as a percentage
  • Time process has been running
  • Command

Here is a good guide discussing computer memory.

Keep Linux Top Running All The Time In The Background

You can keep the top command easily available without having to type the word top each time into your terminal window.

To pause top so that you can continue using the terminal, press CTRL and Z on the keyboard.

To bring top back to the foreground, type fg.

Key Switches For The Top Command:

  • -h – Show the current version
  • -c – This toggles the command column between showing command and program name
  • -d – Specify the delay time between refreshing the screen
  • -o – Sorts by the named field
  • -p – Only show processes with specified process IDs
  • -u – Show only processes by the specified user
  • -i – Do not show idle tasks

Show The Current Version

Type the following to show the current version details for top:

top -h

The output is of the form: procps-ng version 3.3.10

Specify A Delay Time Between Screen Refreshes

To specify a delay between the screen refreshes whilst using top type the following:

top -d

To refresh every 5 seconds type top -d 5
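The delay switch pairs nicely with batch mode (-b), which makes top print to standard output instead of taking over the terminal; this is useful for logging snapshots or piping to other commands:

```shell
# -b = batch mode, -n 1 = a single iteration; output can be piped or saved
top -b -n 1 | head -15
```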

Obtain A List Of Columns To Sort By

To get a list of the columns that you can sort the top command by, type the following:

top -O

There are a lot of columns so you might wish to pipe the output to less as follows:

top -O | less

Sort The Columns In The Top Command By A Column Name

Use the previous section to find a column to sort by and then use the following syntax to sort by that column:

top -o

To sort by %CPU type the following:

top -o %CPU

Only Show The Processes For A Specific User

To show only the processes that a specific user is running use the following syntax:

top -u

For example to show all the processes that the user gary is running type the following:

top -u gary

Hide Idle Tasks

The default top view can seem cluttered; if you want to see only active processes (i.e. those that are not idle), you can run the top command as follows:

top -i

Adding Extra Columns To The Top Display

Whilst running top you can press the ‘f’ key, which shows the list of fields that can be displayed in the table:

Use the arrow keys to move up and down the list of fields.

To set a field so that it is displayed on the screen press the ‘D’ key. To remove the field press “D” on it again. An asterisk (*) will appear next to displayed fields.

You can set the field to sort the table by simply by pressing the “S” key on the field you wish to sort by.

Press the enter key to commit your changes and press “Q” to quit.

Toggling Modes

Whilst running top you can press the “A” key to toggle between the standard display and an alternate display.

Changing Colors

Press the “Z” key to change the colors of the values within top.

There are three stages required to change the colors:

  1. Press either S for summary data, M for messages, H for column headings or T for task information to target that area for a color change
  2. Choose a color for that target, 0 for black, 1 for red, 2 for green, 3 for yellow, 4 for blue, 5 for magenta, 6 for cyan and 7 for white
  3. Enter to commit

Press the “B” key to make text bold.

Change The Display Whilst Running Top

Whilst the top command is running you can toggle many of the features on and off by pressing relevant keys whilst it is running.

The following table shows the key to press and the function it provides:

Function Keys
Function Key Description
A Alternative display (default off)
d Refresh screen after specified delay in seconds (default 1.5 seconds)
H Threads mode (default off), summarises tasks
p PID Monitoring (default off), show all processes
B Bold enable (default on), values are shown in bold text
l Display load average (default on)
t Determines how tasks are displayed (default 1+1)
m Determines how memory usage is displayed (default 2 lines)
1 Single cpu (default off) – i.e. shows for multiple CPUs
J Align numbers to the right (default on)
j Align text to the right (default off)
R Reverse sort (default on) – Highest processes to lowest processes
S Cumulative time (default off)
u User filter (default off) show euid only
U User filter (default off) show any uid
V Forest view (default on) show as branches
x Column highlight (default off)
z Color or mono (default on) show colors

Summary

There are more switches available and you can read more about them by typing the following into your terminal window:

man top

Install Microsoft SQL Server On CentOS Linux

In December 2016 Microsoft made their SQL Server database available in Linux. Here we’ll cover how to install and perform basic setup of MSSQL in the RHEL based Linux distribution CentOS.

Install MSSQL In CentOS 7

First we’ll set up the repository file; Microsoft provides a copy of this for RHEL here: https://packages.microsoft.com/config/rhel/7/mssql-server.repo

We’ll use the wget command to copy this file to the /etc/yum.repos.d/ directory so that we can use it with the yum or dnf package manager.

[root@centos7 ~]# wget https://packages.microsoft.com/config/rhel/7/mssql-server.repo -O /etc/yum.repos.d/mssql-server.repo

Now that the repository file is in place, installation is as simple as running the following command. At the time of writing the total size of the package was a 139 MB download.


[root@centos7 ~]# yum install mssql-server -y
...
+-------------------------------------------------------------------+
| Please run /opt/mssql/bin/sqlservr-setup to complete the setup of |
|                  Microsoft(R) SQL Server(R).                      |
+-------------------------------------------------------------------+

Once the installation has completed, we are advised to run the /opt/mssql/bin/sqlservr-setup bash script to complete the setup process.

During my first installation attempt, I got the following error as my virtual machine was only running with 2GB of memory, so be sure that you have enough memory before proceeding.

sqlservr: This program requires a machine with at least 3250 megabytes of memory.
Microsoft(R) SQL Server(R) setup failed with error code 1.

You’ll be able to proceed once you have adequate memory available.

[root@centos7 ~]# /opt/mssql/bin/sqlservr-setup
Microsoft(R) SQL Server(R) Setup

You can abort setup at anytime by pressing Ctrl-C. Start this program
with the --help option for information about running it in unattended
mode.

The license terms for this product can be downloaded from
http://go.microsoft.com/fwlink/?LinkId=746388 and found
in /usr/share/doc/mssql-server/LICENSE.TXT.

Do you accept the license terms? If so, please type "YES": YES

Please enter a password for the system administrator (SA) account:
Please confirm the password for the system administrator (SA) account:

Setting system administrator (SA) account password...

Do you wish to start the SQL Server service now? [y/n]: y
Do you wish to enable SQL Server to start on boot? [y/n]: y
Created symlink from /etc/systemd/system/multi-user.target.wants/mssql-server.service to /usr/lib/systemd/system/mssql-server.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/mssql-server-telemetry.service to /usr/lib/systemd/system/mssql-server-telemetry.service.

Setup completed successfully.


That’s it, Microsoft SQL Server is now running successfully and listening for traffic on TCP port 1434.

[root@centos7 ~]# systemctl status mssql-server
● mssql-server.service - Microsoft(R) SQL Server(R) Database Engine
   Loaded: loaded (/usr/lib/systemd/system/mssql-server.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2016-12-30 02:26:37 PST; 38s ago
 Main PID: 2974 (sqlservr)
   CGroup: /system.slice/mssql-server.service
           ├─2974 /opt/mssql/bin/sqlservr
           └─2995 /opt/mssql/bin/sqlservr

[root@centos7 ~]# netstat -antp | grep 1434
tcp        0      0 127.0.0.1:1434          0.0.0.0:*               LISTEN      2995/sqlservr
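On CentOS 7 the netstat command comes from the deprecated net-tools package; the equivalent check with ss (from iproute, installed by default) would be:

```shell
# -l listening sockets, -t TCP, -n numeric ports, -p owning process (root)
ss -ltnp | grep sqlservr
```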

Connecting To MSSQL

In order to actually connect to the server from Linux we need to install the mssql-tools package, which comes from a different repository than the one that we just set up. It can be found here: https://packages.microsoft.com/config/rhel/7/prod.repo

First we’ll download a copy of the prod.repo file and place it into the /etc/yum.repos.d/ directory.

[root@centos7 ~]# wget https://packages.microsoft.com/config/rhel/7/prod.repo -O /etc/yum.repos.d/prod.repo

We can now proceed with installing the mssql-tools package, as shown below.

[root@centos7 ~]# yum install mssql-tools -y

Once this is installed we can use the sqlcmd command to interact with the database. To see how to run sqlcmd, simply run it with the -? option for help.

Unfortunately it appears that when you specify the -P option for the password, the password must be provided in the command line with no option of being prompted for it later. Keep in mind that your password will be stored in your bash history running it this way.

[root@centos7 ~]# sqlcmd -U SA -P password
1> create database test;
2> go
1> use test;
2> go
Changed database context to 'test'.
1> create table websites(domain varchar(255));
2> go
1> insert into websites (domain)
2> values ('rootusers.com');
3> go

(1 rows affected)
1> select domain
2> from websites;
3> go
domain
rootusers.com
(1 rows affected)

In this example we create a test database with a table named websites and a column for domain names. We then insert a domain name and pull it back out with select, confirming both that we are able to connect and that basic SQL queries appear to be working as expected.

Summary

Microsoft’s SQL Server is now available for installation on Linux. Personally I don’t think I’ll ever use this over other alternatives such as MariaDB or PostgreSQL, so hopefully someone somewhere actually finds this information useful!

amazon linux redis install

Redis Install

Using epel Repository

sudo yum --enablerepo=epel install redis
=====================================================================================================================================================================
 Package                              Arch                                  Version                                        Repository                           Size
=====================================================================================================================================================================
Installing:
 redis x86_64 2.4.10-1.el6 warm 213 k

Transaction Summary
=====================================================================================================================================================================
Install  1 Package

Version 2.4.10-1.el6 was installed (as of January 30, 2016).

Using remi Repository

sudo rpm -Uvh http://rpms.famillecollet.com/enterprise/remi-release-6.rpm
sudo yum --enablerepo=remi install redis
=====================================================================================================================================================================
 Package                              Arch                               Version                                         Repository                             Size
=====================================================================================================================================================================
Installing:
 redis x86_64 3.0.7-1.el6.remi remi 442 k
Installing for dependencies:
 jemalloc                             x86_64                             3.3.1-1.8.amzn1                                 amzn-main                             111 k

Transaction Summary
=====================================================================================================================================================================
Install  1 Package (+1 Dependent package)

Version 3.0.7-1.el6.remi was installed (as of January 30, 2016).

Reference: Install redis on AWS EC2

Service Setup

Start the service and enable automatic startup:

sudo service redis start
sudo chkconfig --level 35 redis on
sudo chkconfig --list | grep redis

Execution result

$ sudo service redis start
Starting redis-server:   [  OK  ]
$ sudo chkconfig --level 35 redis on
$ sudo chkconfig --list | grep redis
redis           0:off   1:off   2:off   3:on    4:off   5:on    6:off
redis-sentinel  0:off   1:off   2:off   3:off   4:off   5:off   6:off

Linux memory management

I think this is a common question for every Linux user sooner or later in their career as a desktop or server administrator: “Why does Linux use all my RAM while not doing much?”. To that, today I’ll add another question that I’m sure is common for many Linux system administrators: “Why does the free command show swap used when I have so much free RAM?”. So, from my study of SwapCached, I present some useful (or at least I hope so) information on memory management in a Linux system.




Linux has this basic rule: a page of free RAM is wasted RAM. RAM is used for a lot more than just user application data. It also stores data for the kernel itself and, most importantly, can mirror data stored on disk for super-fast access; this is usually reported as “buffers/cache”, “disk cache” or “cached” by top. Cached memory is essentially free, in that it can be reclaimed quickly if a running (or newly starting) program needs the memory.

Keeping the cache means that if something needs the same data again, there’s a good chance it will still be in the cache in memory.

So, as a first step, you can use the free command to get an idea of how your RAM is being used.

This is the output on my old laptop with Xubuntu:

xubuntu-home:~# free -m
             total       used       free     shared    buffers     cached
Mem:          1506       1373        133          0         40        359
-/+ buffers/cache:        972        534
Swap:          486         24        462


The -/+ buffers/cache line shows how much memory is used and free from the perspective of the applications. In this example 972 MB of RAM are used and 534 MB are available for applications.
Generally speaking, if little swap is being used, memory usage isn’t impacting performance at all.

But if you want some more information about your memory, the file to check is /proc/meminfo. This is mine on Xubuntu 12.04 with a 3.2.0-25-generic kernel:

xubuntu-home:~# cat /proc/meminfo 
MemTotal:        1543148 kB
MemFree:          152928 kB
Buffers:           41776 kB
Cached:           353612 kB
SwapCached:         8880 kB
Active:           629268 kB
Inactive:         665188 kB
Active(anon):     432424 kB
Inactive(anon):   474704 kB
Active(file):     196844 kB
Inactive(file):   190484 kB
Unevictable:         160 kB
Mlocked:             160 kB
HighTotal:        662920 kB
HighFree:          20476 kB
LowTotal:         880228 kB
LowFree:          132452 kB
SwapTotal:        498684 kB
SwapFree:         470020 kB
Dirty:                44 kB
Writeback:             0 kB
AnonPages:        891472 kB
Mapped:           122284 kB
Shmem:              8060 kB
Slab:              56416 kB
SReclaimable:      44068 kB
SUnreclaim:        12348 kB
KernelStack:        3208 kB
PageTables:        10380 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     1270256 kB
Committed_AS:    2903848 kB
VmallocTotal:     122880 kB
VmallocUsed:        8116 kB
VmallocChunk:     113344 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       4096 kB
DirectMap4k:       98296 kB
DirectMap4M:      811008 kB


MemTotal and MemFree are self-explanatory; here is the meaning of some of the other values:

Cached
The Linux Page Cache (“Cached:” from meminfo ) is the largest single consumer of RAM on most systems. Any time you do a read() from a file on disk, that data is read into memory, and goes into the page cache. After this read() completes, the kernel has the option to simply throw the page away since it is not being used. However, if you do a second read of the same area in a file, the data will be read directly out of memory and no trip to the disk will be taken. This is an incredible speedup and is the reason why Linux uses its page cache so extensively: it is betting that after you access a page on disk a single time, you will soon access it again.
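The read-twice behavior is easy to observe: time two consecutive reads of the same file and watch the second one be served from RAM. A small sketch (the file name and size are arbitrary for this demo):

```shell
# Create a 64 MB test file (name and size are arbitrary for this demo)
dd if=/dev/zero of=/tmp/pagecache-demo bs=1M count=64 2>/dev/null
sync
# First read: the data may have to come from disk
time cat /tmp/pagecache-demo > /dev/null
# Second read: served from the page cache, usually much faster
time cat /tmp/pagecache-demo > /dev/null
# The cached data is accounted for under "Cached:" in /proc/meminfo
grep '^Cached:' /proc/meminfo
rm -f /tmp/pagecache-demo
```

If you want to repeat the experiment from a cold cache, `echo 3 > /proc/sys/vm/drop_caches` (as root) empties the page, dentry and inode caches.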

dentry/inode caches
Each time you do an ‘ls’ (or any other operation: open(), stat(), etc…) on a filesystem, the kernel needs data which are on the disk. The kernel parses these data on the disk and puts it in some filesystem-independent structures so that it can be handled in the same way across all different filesystems. In the same fashion as the page cache in the above examples, the kernel has the option of throwing away these structures once the ‘ls’ is completed. However, it makes the same bets as before: if you read it once, you’re bound to read it again. The kernel stores this information in several “caches” called the dentry and inode caches. dentries are common across all filesystems, but each filesystem has its own cache for inodes.

This RAM is accounted for under “Slab:” in meminfo.

You can view the different caches and their sizes by executing this command:

head -2 /proc/slabinfo; egrep 'dentry|inode' /proc/slabinfo


Buffer Cache
The buffer cache (“Buffers:” in meminfo) is a close relative of the dentry/inode caches. The dentries and inodes in memory represent structures on disk, but are laid out very differently. This might be because the in-memory copy contains a kernel structure such as a pointer that does not exist on disk. It might also happen that the on-disk format uses a different endianness than the CPU.

Memory mapping in top: VIRT, RES and SHR

When you run top there are three fields related to memory usage. In order to assess your server’s memory requirements you have to understand their meaning.

VIRT stands for the virtual size of a process, which is the sum of the memory it is actually using, memory it has mapped into itself (for instance the video card’s RAM for the X server), files on disk that have been mapped into it (most notably shared libraries), and memory shared with other processes. VIRT represents how much memory the program is able to access at the present moment.

RES stands for the resident size, which is an accurate representation of how much actual physical memory a process is consuming. (This also corresponds directly to the %MEM column.) This will virtually always be less than the VIRT size, since most programs depend on the C library.

SHR indicates how much of the VIRT size is actually sharable (memory or libraries). In the case of libraries, it does not necessarily mean that the entire library is resident. For example, if a program only uses a few functions in a library, the whole library is mapped and will be counted in VIRT and SHR, but only the parts of the library file containing the functions being used will actually be loaded in and be counted under RES.
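You can inspect the same numbers outside of top with ps: VSZ corresponds to VIRT and RSS to RES (both in KiB). For example, for the current shell:

```shell
# VSZ ~ VIRT (virtual size), RSS ~ RES (resident set), both in KiB
ps -o pid,vsz,rss,comm -p $$
```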

Swap

Now we have seen some information on our RAM, but what happens when there is no more free RAM? If I have no memory free, and I need a page for the page cache, inode cache, or dentry cache, where do I get it?

First of all, the kernel tries not to let you get close to 0 bytes of free RAM. This is because, to free up RAM, you usually need to allocate more: the kernel needs a kind of “working space” for its own housekeeping, so if it ever reached zero free RAM it could not do anything more.

Based on the amount of RAM and the different types (high/low memory), the kernel comes up with a heuristic for the amount of memory that it feels comfortable with as its working space. When it reaches this watermark, the kernel starts to reclaim memory from the different uses described above. The kernel can get memory back from any of these.
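These watermarks can be seen per memory zone in /proc/zoneinfo (the exact fields vary a bit between kernel versions):

```shell
# Show each zone's free page count and its min/low/high reclaim watermarks
grep -E 'zone|pages free|min|low|high' /proc/zoneinfo | head -20
```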

However, there is another user of memory that we may have forgotten about by now: user application data.
When the kernel decides not to get memory from any of the other sources we’ve described so far, it starts to swap. During this process it takes user application data and writes it to a special place (or places) on the disk. Note that this happens not only when RAM comes close to being full: the kernel may also decide to move to swap some data in RAM that has not been used for some time (see swappiness).
For this reason, even a system with vast amounts of RAM (even when properly tuned) can swap. There are lots of pages of memory which are user application data, but are rarely used. All of these are targets for being swapped in favor of other uses for the RAM.

You can check whether swap is used with the free command; the last line of the output shows information about the swap space. Taking the free output I used in the example above:

xubuntu-home:~# free
             total       used       free     shared    buffers     cached
Mem:          1506       1373        133          0         40        359
-/+ buffers/cache:        972        534
Swap:          486         24        462


We can see that on this computer there are 24 MB of swap used and 462 MB available.

So the mere presence of used swap is not evidence of a system that has too little RAM for its workload. The best way to determine that is with the vmstat command: if you see many pages being swapped in (si) and out (so), it means that swap is actively used and that the system is “thrashing”, i.e. it needs new RAM as fast as it can swap out application data.

This is an output on my gentoo laptop, while it’s idle:

~ # vmstat 5 5
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 1  0      0 2802448  25856 731076    0    0    99    14  365  478  7  3 88  3
 0  0      0 2820556  25868 713388    0    0     0     9  675  906  2  2 96  0
 0  0      0 2820736  25868 713388    0    0     0     0  675  925  3  1 96  0
 2  0      0 2820388  25868 713548    0    0     0     2  671  901  3  1 96  0
 0  0      0 2820668  25868 713320    0    0     0     0  681  920  2  1 96  0


Note that the output of the free command gives just two values about swap, free and used, but there is another important value for the swap space: the swap cache.

Swap Cache

The swap cache is very similar in concept to the page cache. A page of user application data written to disk is very similar to a page of file data on the disk. Any time a page is read in from swap (“si” in vmstat), it is placed in the swap cache. Just like the page cache, this is a bet on the kernel’s part. It is betting that we might need to swap this page out _again_. If that need arises, we can detect that there is already a copy on the disk and simply throw the page in memory away immediately. This saves us the cost of re-writing the page to the disk.

The swap cache is really only useful when we are reading data from swap and never writing to it. If we write to the page, the copy on the disk is no longer in sync with the copy in memory; if this happens, we have to write the page out to disk to swap it out again, just as we did the first time. However, the benefit of avoiding _any_ write to disk is great, so even if only a small portion of the swap cache is ever reused without being written to, the system will perform better.

So, to know the swap that is really used, we should subtract the SwapCached value from the SwapUsed value; you can find both in /proc/meminfo.
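Since free does not print SwapCached, a small awk sketch over /proc/meminfo can do the subtraction (SwapUsed itself being SwapTotal - SwapFree):

```shell
# Swap really used = (SwapTotal - SwapFree) - SwapCached, all values in kB
awk '/^SwapTotal:/ {t=$2} /^SwapFree:/ {f=$2} /^SwapCached:/ {c=$2}
     END {printf "Swap really used: %d kB\n", t - f - c}' /proc/meminfo
```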

Swappiness

When an application needs memory and all the RAM is fully occupied, the kernel has two ways to free some memory at its disposal: it can either reduce the disk cache in the RAM by eliminating the oldest data or it may swap some less used portions (pages) of programs out to the swap partition on disk. It is not easy to predict which method would be more efficient. The kernel makes a choice by roughly guessing the effectiveness of the two methods at a given instant, based on the recent history of activity.

Before the 2.6 kernels, the user had no means to influence these calculations, and situations could arise where the kernel often made the wrong choice, leading to thrashing and slow performance. The addition of swappiness in 2.6 changed this.

Swappiness takes a value between 0 and 100 to change the balance between swapping applications and freeing cache. At 100, the kernel will always prefer to find inactive pages and swap them out; in other cases, whether a swapout occurs depends on how much application memory is in use and how poorly the cache is doing at finding and releasing inactive items.

The default swappiness is 60. A value of 0 gives something close to the old behavior where applications that wanted memory could shrink the cache to a tiny fraction of RAM. For laptops which would prefer to let their disk spin down, a value of 20 or less is recommended.
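You can inspect and tune the value through /proc or sysctl; a quick sketch (writing the value requires root):

```shell
# Read the current swappiness (0-100, default 60)
cat /proc/sys/vm/swappiness
# Change it for the running system (root required)
sysctl vm.swappiness=20
# Make the change survive a reboot
echo 'vm.swappiness=20' >> /etc/sysctl.conf
```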

Conclusions

In this article I have collected some information that I have found useful in my work as a system administrator; I hope it can be useful to you as well.

Reference
Most of this article is based on work found on other pages around the web.

Spectre and Meltdown explained

By now, most of you have probably already heard of the biggest disaster in the history of IT – Meltdown and Spectre security vulnerabilities which affect all modern CPUs, from those in desktops and servers, to ones found in smartphones. Unfortunately, there’s much confusion about the level of threat we’re dealing with here, because some of the impacted vendors need reasons to explain the still-missing security patches. But even those who did release a patch, avoid mentioning that it only partially addresses the threat. And, there’s no good explanation of these vulnerabilities on the right level (not for developers), something that just about anyone working in IT could understand to make their own conclusion. So, I decided to give it a shot and deliver just that.

First, some essential background. Both vulnerabilities leverage “speculative execution”, a feature central to the modern CPU architecture. Without it, processors would idle most of the time, just waiting to receive I/O results from various peripheral devices, which are all at least 10x slower than processors. For example, RAM – kind of the fastest thing out there in our mind – runs at frequencies comparable to the CPU’s, but all overclocking enthusiasts know that RAM I/O involves multiple stages, each taking multiple CPU cycles. And hard disks are at least a hundred times slower than RAM. So, instead of waiting for the real result of some IF clause to be calculated, the processor assumes the most probable result and continues execution according to that assumption. Then, many cycles later, when the actual result of said IF is known: if it was “guessed” right, we’re already way ahead in the program code execution path and didn’t waste all those cycles waiting for the I/O operation to complete. If the assumption was incorrect, the execution state of that “parallel universe” is simply discarded, and program execution restarts from said IF clause (as if speculative execution did not exist). Since the prediction algorithms are pretty smart and polished, more often than not the guesses are right, which adds a significant boost to execution performance for some software. Speculative execution is a feature processors have had for two decades now, which is also why any CPU that is still able to run these days is affected.

Now, while the two vulnerabilities are distinctly different, they share one thing in common: they exploit the cornerstone of computer security, namely process isolation. Basically, the security of all operating systems and software depends completely on the native ability of CPUs to ensure complete process isolation, meaning processes cannot access each other’s memory. How exactly is such isolation achieved? Instead of having direct physical RAM access, all processes operate in virtual address spaces, which are mapped to physical RAM in such a way that they do not overlap. These memory allocations are performed and controlled in hardware, in the so-called Memory Management Unit (MMU) of the CPU.

At this point, you already know enough to understand Meltdown. This vulnerability is basically a bug in the MMU logic, caused by skipping address checks during speculative execution (rumor has it there’s a source code comment saying this was done “not to break optimizations”). So, how can this vulnerability be exploited? Pretty easily, in fact. First, the malicious code tricks the processor into a speculative execution path, and from there performs an unrestricted read of another process’ memory. Simple as that. Now, you may rightfully wonder: wouldn’t the results obtained from such speculative execution be discarded completely as soon as the CPU finds out it “took a wrong turn”? You’re absolutely correct, they are in fact discarded… with one exception – they remain in the CPU cache, which simply caches everything the CPU accesses. And while no process can read the content of the CPU cache directly, there is a technique by which you can “read” it implicitly: perform legitimate RAM reads within your process and measure the response times (anything stored in the CPU cache will obviously be served much faster). You may have already heard that browser vendors are currently busy releasing patches that make JavaScript timers more “coarse” – now you know why (but more on this later).

As far as the impact goes, Meltdown is limited to Intel and ARM processors, with AMD CPUs unaffected. But for Intel, Meltdown is extremely nasty, because it is so easy to exploit – one of our enthusiasts compiled the exploit literally over a morning coffee and confirmed it works on every single computer he had access to (in his case, most are Linux-based). And the possibilities Meltdown opens are truly terrifying: for example, how about obtaining an admin password as it is being typed in another process running on the same OS? Or accessing your precious bitcoin wallet? Of course, you’ll say that the exploit must first be delivered to the attacked computer and executed there – which is fair, but here’s the catch: JavaScript from some web site running in your browser will do just fine too, so the delivery part is the easiest for now. By the way, keep in mind that those 3rd-party ads displayed on legitimate web sites often include JavaScript too – so it’s really a good idea to install an ad blocker now, if you haven’t already! And for those using Chrome, enabling the Site Isolation feature is also a good idea.

OK, so let’s switch to Spectre next. This vulnerability is known to affect all modern CPUs, albeit to different extents. It is not based on a bug per se, but rather on a design peculiarity of the execution path prediction logic, which is implemented by the so-called Branch Prediction Unit (BPU). Essentially, what the BPU does is accumulate statistics to estimate the probability of IF clause results. For example, if a certain IF clause that compares some variable to zero returned FALSE 100 times in a row, you can predict with high probability that the clause will return FALSE when called for the 101st time, and speculatively move along the corresponding code execution branch even without having to load the actual variable. Makes perfect sense, right? However, the problem is that while collecting these statistics, the BPU does NOT distinguish between different processes, for added “learning” effectiveness – which makes sense too, because computer programs share much in common (common algorithms, implementation best practices and so on). And this is exactly what the exploit is based on: this peculiarity allows the malicious code to “train” the BPU by running a construct identical to one in the attacked process hundreds of times, effectively enabling it to control speculative execution of the attacked process once it hits its own respective construct, making it dump the “good stuff” into the CPU cache. Pretty awesome find, right?

But here comes the major difference between Meltdown and Spectre, which significantly complicates implementing Spectre-based exploits. While Meltdown can “scan” the CPU cache directly (since the sought-after value was put there from within the scope of the process running the Meltdown exploit), in the case of Spectre it is the victim process itself that puts this value into the CPU cache. Thus, only the victim process itself is able to perform that timing-based CPU cache “scan”. Luckily for hackers, we live in an API-first world, where every decent app has an API you can call to make it do the things you need, again measuring how long the execution of each API call takes. Getting the actual value still requires deep analysis of the specific application, though, so this approach is only worth pursuing with open-source apps. But the “beauty” of Spectre is that, apparently, there are many ways to make the victim process leak its data to the CPU cache through speculative execution in a way that allows the attacking process to “pick it up”. Google engineers found and documented a few, but unfortunately many more are expected to exist. Who will find them first?

Of course, all of that only sounds easy at a conceptual level – real-world implementations against actual apps are extremely complex, and when I say “extremely” I really mean it. For example, Google engineers created a Spectre exploit POC that, running inside a KVM guest, can read host kernel memory at a rate of over 1500 bytes/second. However, before the attack can be performed, the exploit requires an initialization that takes 30 minutes! So clearly, there’s a lot of math involved. But if Google engineers could do that, hackers will be able to as well – looking at how advanced some of the ransomware we saw last year was, one might wonder if it was written by folks Google could not offer the salary or the position they wanted. It’s also worth mentioning that a JavaScript-based POC already exists, making the browser a viable attack vector for Spectre.

Now, the most important part – what do we do about these vulnerabilities? Well, it would appear that Intel and Google disclosed the vulnerabilities to all major vendors in advance, so by now most have already released patches. By the way, we really owe a big “thank you” to all those dev and QC folks who were working hard on patches while we were celebrating – just imagine the amount of work and testing required when changes are made to the holy grail of the operating system. Anyway, after reading the above, I hope you agree that vulnerabilities do not get more critical than these two, so be sure to install those patches ASAP. And, aside from the most obvious stuff like your operating systems and hypervisors, be sure not to overlook any storage, network and other appliances – they all run on some OS that also needs to be patched against these vulnerabilities. And don’t forget your smartphones! By the way, there is a good community tracker for all security bulletins (Microsoft is not listed there, but they did push the corresponding emergency update to Windows Update back on January 3rd).

Having said that, there are a couple of important things to keep in mind about those patches. First, they come with a performance impact. Some folks will want you to think the impact is negligible, but that is only true for applications with low I/O activity; many enterprise apps will take a hit big enough to account for. For example, installing the patch resulted in an almost 20% performance drop in a PostgreSQL benchmark, and one major cloud service saw CPU usage double after installing the patch on one of its servers. This impact is caused by the patch adding significant overhead to so-called syscalls, which are what computer programs must use for any interaction with the outside world.

Last but not least, do know that while those patches fully address Meltdown, they only address a few currently known attack vectors that Spectre enables. Most security specialists agree that the Spectre vulnerability opens a whole slew of “opportunities” for hackers, and that a solid fix can only be delivered in CPU hardware. That in turn probably means at least two years until the first such processor appears – and then a few more years until you replace the last impacted CPU. Until that happens, it sounds like we should all be looking forward to many fun years of jumping on yet another critical patch against some newly discovered Spectre-based attack.

Jenkins on CentOS 7

Jenkins is a free and open source CI (Continuous Integration) tool written in Java. It is widely used for project development, deployment, and automation, and allows you to automate the non-human part of the software development process. It supports version control tools including AccuRev, CVS, Subversion, Git, Mercurial, Perforce, ClearCase and RTC, and can execute Apache Ant, Apache Maven and sbt based projects as well as arbitrary shell scripts and Windows batch commands. Jenkins was created by Kohsuke Kawaguchi and is released under the MIT License. Jenkins’ security depends on two factors: access control and protection from external threats.

  1. Access control can be customized via two ways, user authentication and authorization.
  2. Protection from external threats such as CSRF attacks and malicious builds is supported as well.

Requirements

It does not require any special hardware; you’ll only need a CentOS 7 server and root access to it. You can switch from a non-root user to root using the sudo -i command.

Update System

It is highly recommended to install Jenkins on a freshly updated server. To upgrade the system and all available packages, run the command given below.

yum -y update

Install Java

Before going through the Jenkins installation you’ll need to set up a Java Virtual Machine (JDK) on your system. Simply run the following command to install Java.

yum -y install java-1.8.0-openjdk.x86_64

Once the installation is finished, you can confirm it by checking the version with the following command.

java -version

The above command prints the installation details of Java; you should see something like the following on your terminal screen.

openjdk version "1.8.0_131"
OpenJDK Runtime Environment (build 1.8.0_131-b12)
OpenJDK 64-Bit Server VM (build 25.131-b12, mixed mode)

Next, you’ll need to set up two environment variables, JAVA_HOME and JRE_HOME; to do so, run the following commands one by one.

cp /etc/profile /etc/profile_backup
echo 'export JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk' | sudo tee -a /etc/profile
echo 'export JRE_HOME=/usr/lib/jvm/jre' | sudo tee -a /etc/profile
source /etc/profile

Finally, print them for review using the following commands.

echo $JAVA_HOME

You should see the following output.

/usr/lib/jvm/jre-1.8.0-openjdk

echo $JRE_HOME

Shell script

#!/bin/sh

yum install -y java

wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat/jenkins.repo

rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key

yum install -y jenkins

yum install -y git

service jenkins start/stop/restart

chkconfig jenkins on

 

java

java -version

java -jar jenkins.war --httpPort=8080

# get jenkins-cli

wget http://localhost:8080/jnlpJars/jenkins-cli.jar

# jenkins-cli install plugin

java -jar jenkins-cli.jar -s http://localhost:8080 install-plugin checkstyle cloverphp crap4j dry htmlpublisher jdepend plot pmd violations warnings xunit --username=yang --password=lljkl

# safe restart

java -jar jenkins-cli.jar -s http://localhost:8080 safe-restart --username=yang --password=lljkl

Install Jenkins

We have installed all the dependencies required by Jenkins and are now ready to install it. Run the following commands to install the latest stable release of Jenkins.

wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo
rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key

The above two commands add the Jenkins repository and import its signing key. If you have previously imported the key from Jenkins, the rpm --import command will fail because you already have the key; ignore that and move on.

Now run following command to install Jenkins on your server.

yum -y install jenkins

Next, you’ll need to start the Jenkins service and set it to run at boot time; to do so, use the following commands.

systemctl start jenkins.service
systemctl enable jenkins.service
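To verify the service came up, and to fetch the password Jenkins generates for the first-run unlock screen, something like the following can be used (the secrets path is the one recent Jenkins packages use):

```shell
# Check whether the Jenkins service is active and, if so, print the
# generated initial admin password shown on the first-run unlock page
if systemctl is-active jenkins.service >/dev/null 2>&1; then
    echo "Jenkins is running"
    cat /var/lib/jenkins/secrets/initialAdminPassword
else
    echo "Jenkins is not running yet"
fi
```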

Connecting to a Git Repo

You will probably want to connect to a git repository next. This is also somewhat dependent on the operating system you use, so I provide the steps to do this on CentOS as well:

  • Install git
sudo yum install git
  • Generate an SSH key on the server
ssh-keygen -t rsa
  • When prompted, save the SSH key under the following path:
/var/lib/jenkins/.ssh
  • Assure that the .ssh directory is owned by the Jenkins user:
sudo chown -R jenkins:jenkins /var/lib/jenkins/.ssh
  • Copy the public generated key to your git server (or add it in the GitHub/BitBucket web interface)
  • Assure your git server is listed in the known_hosts file. In my case, since I am using BitBucket my /var/lib/jenkins/.ssh/known_hosts file contains something like the following
bitbucket.org,104.192.143.3 ssh-rsa [...]
  • You can now create a new project and use Git as the SCM. You don’t need to provide any git credentials. Jenkins pulls these automatically from the /var/lib/jenkins/.ssh directory. There are good instructions for this available here.

Connecting to GitHub

  • In the Jenkins web interface, click on Credentials and then select the Jenkins Global credentials. Add a credential for GitHub which includes your GitHub username and password.
  • In the Jenkins web interface, click on Manage Jenkins and then on Configure System. Then scroll down to GitHub and then under GitHub servers click the Advanced Button. Then click the button Manage additional GitHub actions.

  • In the popup select Convert login and password to token and follow the prompts. This will result in a new credential having been created. Save and reload the page.
  • Now go back to the GitHub servers section and click to add an additional server. As credential, select the credential which you have just created.
  • In the Jenkins web interface, click on New Item and then select GitHub organisation and connect it to your user account.

Any of your GitHub projects will be automatically added to Jenkins, if they contain a Jenkinsfile. Here is an example.
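A minimal declarative Jenkinsfile of the kind meant here might look like the following sketch (the stage names and the Maven commands are illustrative, not a requirement):

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Illustrative build step; replace with your build tool
                sh 'mvn -B clean package'
            }
        }
        stage('Test') {
            steps {
                // Illustrative test step
                sh 'mvn -B test'
            }
        }
    }
}
```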

Connect with BitBucket

  • First, you will need to install the BitBucket plugin.
  • After it is installed, create a normal git project.
  • Go to the Configuration for this project and select the following option:

  • Log into BitBucket and create a webhook in the settings for your project pointing to your Jenkins server as follows: http://yourserver.com/bitbucket-hook/ (note the slash at the end)

Testing a Java Project

Chances are high that you would like to run tests against a Java project; the following instructions will get that working: