
Swappiness in Linux

Most Linux users who have installed a distribution before will have noticed the existence of the “swap space” during the partitioning phase (it is usually found on a partition such as /dev/sda5). This is a dedicated space on your hard drive, usually set to at least twice the capacity of your RAM, which together with your RAM constitutes the total virtual memory of your system. From time to time, the Linux kernel utilizes this swap space by copying chunks from your RAM to the swap, allowing active processes that require more memory than is physically available to keep running.

Swappiness is the kernel parameter that defines how much (and how often) your Linux kernel will copy RAM contents to swap. This parameter’s default value is “60” and it can take anything from “0” to “100”. The higher the value of the swappiness parameter, the more aggressively your kernel will swap.

Why change it?
The default value is a one-size-fits-all solution that can’t possibly be equally efficient across all of the individual use cases, hardware specifications, and user needs. Moreover, the swappiness of a system is a primary factor in the overall responsiveness and speed of the OS. That said, it is important to understand how swappiness works and how different configurations of this parameter can improve the operation of your system and thus your everyday usage experience.

As RAM is so much larger and cheaper than it used to be, many users nowadays have enough memory to almost never need the swap file. The obvious benefit is that no system resources are occupied by the swapping process and that cached files are not moved back and forth between RAM and swap for no reason.

How to change Swappiness?
The swappiness parameter value is stored in a simple configuration text file located in /proc/sys/vm and named “swappiness”. If you navigate there through the file manager, you will be able to locate the file and open it to check your system’s swappiness. You can also check or change it through the terminal (which is faster); to change it, type the following command:

sudo sysctl vm.swappiness=10
or any other value between “0” and “100” instead of the “10” that I used. To verify that the swappiness value was changed to the desired one, simply type:

cat /proc/sys/vm/swappiness
on the terminal again and the active value will be outputted.

This change has an immediate effect on your system’s operation, so no reboot is required. In fact, rebooting will revert the swappiness back to its default value (60). If you have thoroughly tested your desired swappiness value and found that it works reliably, you can make the change permanent in /etc/sysctl.conf, yet another text configuration file. Open it as root (administrator) and add the following line at the bottom: vm.swappiness=<your desired value here>.

Then, save the text file and you’re done!
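For reference, here is the whole sequence as a minimal sketch, assuming a target value of 10:

# set swappiness for the running system (reverts on reboot)
sudo sysctl vm.swappiness=10
# verify the active value
cat /proc/sys/vm/swappiness
# make it permanent: append to /etc/sysctl.conf and reload
echo 'vm.swappiness=10' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p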

Install SSM Agent on Ubuntu EC2 instances

sudo apt update
sudo apt install snapd
sudo snap install amazon-ssm-agent --classic

sudo systemctl start snap.amazon-ssm-agent.amazon-ssm-agent.service
sudo systemctl stop snap.amazon-ssm-agent.amazon-ssm-agent.service
sudo systemctl status snap.amazon-ssm-agent.amazon-ssm-agent.service
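To confirm the agent has registered with Systems Manager, one optional check (assuming the AWS CLI is installed and configured with suitable IAM permissions) is:

# the instance ID should appear here once the agent is online
aws ssm describe-instance-information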

Converting your virtual machine to AWS EC2 AMI

Using AWS is not limited to the AMIs provided by Amazon (or third parties/the community); it is also possible to instantiate an EC2 workload starting from your own image, converted to an AMI.

The steps to create your custom AMI starting from VMware run through these macro steps:

  • create VM template (ova)
  • create S3 bucket and upload the template
  • convert with awscli

OVA creation and upload in S3

This is the easiest part of this how-to, so I won’t explain how to export the virtual machine OVA from your vInfrastructure or Workstation/Fusion. Anyway, IMHO the best method to manage VM templates is OVA; starting from the OVF and VMDK files, you can simply convert them to OVA using ovftool (https://www.vmware.com/support/developer/ovf/) by executing the following command:

ovftool <vm_image>.ovf <vm_image>.ova

Create an S3 bucket and upload the OVA template via the web console, keeping in mind the name of the bucket and the name of the OVA.
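If you prefer the CLI for this step, a sketch using the bucket and key names assumed in the examples below would be:

# create the bucket and upload the template
aws s3 mb s3://mohanawss3
aws s3 cp <vm_image>.ova s3://mohanawss3/awsmohan.ova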

AMI conversion

Prepare the trust policy document trust-policy.json:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "vmie.amazonaws.com" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:Externalid": "vmimport"
        }
      }
    }
  ]
}

Then, create the role:

aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json

…and prepare the role policy document named role-policy.json:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::mohanawss3",
        "arn:aws:s3:::mohanawss3/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:ModifySnapshotAttribute",
        "ec2:CopySnapshot",
        "ec2:RegisterImage",
        "ec2:Describe*"
      ],
      "Resource": "*"
    }
  ]
}

After creating the role, attach the role policy:

aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file://role-policy.json

Finally, we can proceed with the actual conversion, using the OVA file uploaded to the S3 bucket and creating the “container” description file.

The containers.json file will look like this:

[
  {
    "Description": "mycentos OVA",
    "Format": "ova",
    "UserBucket": {
      "S3Bucket": "mohanawss3",
      "S3Key": "awsmohan.ova"
    }
  }
]

Then execute the command:

aws ec2 import-image --description "Mohanaws" --license-type BYOL --disk-containers file://containers.json

The process is asynchronous; to check the state of the task, simply issue the following command, using the “import-ami-xxxxxx” value as the task id:

aws ec2 describe-import-image-tasks --import-task-ids import-ami-xxxx

According to the official documentation (http://docs.aws.amazon.com/vm-import/latest/userguide/vmimport-image-import.html), the states are:

  • active — The import task is in progress.
  • deleting — The import task is being canceled.
  • deleted — The import task is canceled.
  • updating — Import status is updating.
  • validating — The imported image is being validated.
  • converting — The imported image is being converted into an AMI.
  • completed — The import task is completed and the AMI is ready to use.

When the conversion is completed, you can start your first EC2 instance from the new AMI to verify that everything went well.


Installing Kubernetes 1.8.1 on centos 7 with flannel

Prerequisites:-

You should have at least two VMs (1 master and 1 slave) before creating the cluster, in order to test the full functionality of k8s.

1] Master :-

Minimum of 1 GB RAM, 1 CPU core and 50 GB HDD (suggested)

2] Slave :-

Minimum of 1 GB RAM, 1 CPU core and 50 GB HDD (suggested)

3] Also, make sure of the following things.

Network interconnectivity between VMs.
Hostnames
Prefer to give static IPs.
DNS entries
Disable SELinux (set SELINUX=disabled):
$ vi /etc/selinux/config

Disable and stop the firewall (if you are not familiar with firewalld):
$ systemctl stop firewalld

$ systemctl disable firewalld
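If you’d rather script the SELinux change, a one-liner along these lines (assuming the stock SELINUX=enforcing entry) makes it persistent across reboots:

$ sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config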

The following steps create a k8s cluster on the above VMs using kubeadm on CentOS 7.

Step 1] Installing kubelet and kubeadm on all your hosts

$ ARCH=x86_64

$ cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-${ARCH}
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

$ setenforce 0

$ yum install -y docker kubelet kubeadm kubectl kubernetes-cni

$ systemctl enable docker && systemctl start docker

$ systemctl enable kubelet && systemctl start kubelet

You might have an issue where the kubelet service does not start. You can see the error in /var/log/messages. If you have an error as follows:
Oct 16 09:55:33 k8s-master kubelet: error: unable to load client CA file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory
Oct 16 09:55:33 k8s-master systemd: kubelet.service: main process exited, code=exited, status=1/FAILURE

Then you will have to initialize kubeadm first, as in the next step, and then start the kubelet service.

Step 2.1] Initializing your master

$ kubeadm init

Note:-

Execute the above command on the master node. This command will select one of the interfaces to be used as the API server. If you want to use another interface, provide “--apiserver-advertise-address=<ip-address>” as an argument, so the whole command will be like this:
$ kubeadm init --apiserver-advertise-address=<ip-address>

K8s provides the flexibility to use a network of your choice, like flannel, calico, etc. I am using the flannel network. For flannel we need to pass the pod network CIDR explicitly, so now the whole command will be:
$ kubeadm init --apiserver-advertise-address=<ip-address> --pod-network-cidr=10.244.0.0/16

Example: $ kubeadm init --apiserver-advertise-address=172.31.14.55 --pod-network-cidr=10.244.0.0/16

Step 2.2] Start using cluster

$ sudo cp /etc/kubernetes/admin.conf $HOME/
$ sudo chown $(id -u):$(id -g) $HOME/admin.conf
$ export KUBECONFIG=$HOME/admin.conf
-> Use the same network CIDR, as it is also configured in the flannel yaml file that we are going to apply in step 3.

-> At the end you will get a token along with the join command; make a note of it, as it will be used to join the slaves.

Step 3] Installing a pod network

Different networks are supported by k8s, and the choice depends on the user. For this demo I am using the flannel network. As of k8s 1.6, the cluster is more secure by default: it uses RBAC (Role Based Access Control), so make sure the network you are going to use supports RBAC and k8s 1.6.

Create RBAC Pods :
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml

Check whether the pods are being created:

$ kubectl get pods --all-namespaces

Create Flannel pods :
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Check whether the pods are being created:

$ kubectl get pods --all-namespaces -o wide

-> At this stage, all your pods should be in the Running state.

-> The “-o wide” option gives more details, like the IP and the slave where each pod is deployed.

Step 4] Joining your nodes

SSH to the slave and execute the following command to join the existing cluster.

$ kubeadm join --token <token> <master-ip>:<master-port>

The join command may also include a --discovery-token-ca-cert-hash; make sure you copy the entire join command from the kubeadm init output to join the nodes.

Go to the master node and check whether the new slave has joined:

$ kubectl get nodes

-> If the slave is not ready, wait a few seconds; the new slave will join soon.

Step 5] Verify your cluster by running sample nginx application

$ vi sample_nginx.yaml

———————————————

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2 # tells deployment to run 2 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      # unlike pod-nginx.yaml, the name is not included in the metadata as a unique name is
      # generated from the deployment name
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

——————————————————

$ kubectl create -f sample_nginx.yaml

Verify that the pods are being created:

$ kubectl get pods

$ kubectl get deployments

Now, let’s expose the deployment so that the service will be accessible to other pods in the cluster.

$ kubectl expose deployment nginx-deployment --name=nginx-service --port=80 --target-port=80 --type=NodePort

The above command will create a service with the name “nginx-service”. The service will be accessible on the port given by the “--port” option, forwarding to the “--target-port” of the pods. By default, a service is accessible within the cluster only; in order to access it using your host IP, the “NodePort” type is used.

--type=NodePort :- when this option is given, k8s tries to find a free port in the range 30000-32767 on all the VMs of the cluster and binds the underlying service to it. If no such port is found, the command returns an error.

Check whether the service was created:

$ kubectl get svc
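To grab just the allocated NodePort (handy for the curl checks below), a jsonpath query like this should work, assuming the service name used above:

$ kubectl get svc nginx-service -o jsonpath='{.spec.ports[0].nodePort}'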

Try to curl the service’s cluster IP:

$ curl <cluster-ip>:80

from all the VMs, including the master. The Nginx welcome page should be accessible.

Then curl the NodePort:

$ curl <node-ip>:<nodePort>

Execute this from all the VMs. The Nginx welcome page should be accessible.

Also, access the nginx home page using a browser.

CentOS 7.6 configures Nginx reverse proxy

First, the experiment introduction
We use three CentOS 7 virtual machines to build a simple Nginx reverse-proxy load-balancing cluster. The three virtual machine addresses and roles:
192.168.1.76 nginx load balancer
192.168.1.82 web01 server
192.168.1.78 web02 server
Second, install the nginx software (the following operations must be carried out on all three virtual machines)
Some CentOS 7.6 installations do not have the wget command, so install it yourself:
yum -y install wget

Install nginx software: (three servers must be installed)

$ wget http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
$ rpm -ivh epel-release-latest-7.noarch.rpm
$ yum install nginx (direct yum installation)

Installation is that simple and convenient. After the installation is complete, you can use systemctl to control nginx:
$ systemctl enable nginx (enable at boot)
$ systemctl start nginx (start nginx)
$ systemctl status nginx (view status)
After nginx is installed on all three servers, test that it runs normally and provides web service. If there is an error, it is probably caused by the firewall; please see the firewall steps at the end.
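To make the balancing visible during testing, a quick optional trick is to give each backend a distinguishable home page (the path assumes the default nginx docroot):

# on web01
echo "web01" > /usr/share/nginx/html/index.html
# on web02
echo "web02" > /usr/share/nginx/html/index.html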
Modify the nginx configuration file on the proxy server to implement load balancing. As the name implies, requests are distributed to different servers, balancing the load and reducing the pressure on any single server.

$ vi /etc/nginx/nginx.conf (modify configuration file, global configuration file)
For more information on configuration, see:
* Official English Documentation: http://nginx.org/en/docs/
* Official Russian Documentation: http://nginx.org/ru/docs/
user nginx;
worker_processes auto; (default is auto; generally no more than the number of CPU cores)
error_log /var/log/nginx/error.log; (error log path)
pid /run/nginx.pid; (pid file path)
# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;
events {
    accept_mutex on; (serialize connection accepts to prevent the thundering herd; default is on)
    multi_accept on; (whether a worker process accepts multiple new connections at once; default is off)
    worker_connections 1024; (maximum number of connections per worker process)
}
http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    # tcp_nopush on; (left commented out here)
    tcp_nodelay on;
    keepalive_timeout 65; (connection timeout)
    types_hash_max_size 2048;
    gzip on; (enable compression)
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include for more information.
    include /etc/nginx/conf.d/*.conf;
Here we configure load balancing. Load balancing has multiple strategies; nginx comes with round-robin polling, weights, ip-hash, response time, and so on:
The default splits the HTTP load by polling each server in turn.
Weight distributes requests according to weight; servers with a higher weight receive a larger share of the load.
ip-hash allocates by client IP, keeping the same IP on the same server.
Response time distributes preferentially to the server that responds to nginx fastest.
These strategies can be combined, as in the upstream block below.
    upstream tomcat { (tomcat is a custom load balancing rule name)
        ip_hash; (use the ip-hash method)
        server 192.168.1.78:80 weight=3 fail_timeout=20s;
        server 192.168.1.82:80 weight=4 fail_timeout=20s;
        (multiple sets of rules can be defined)
    }
    server {
        listen 80 default_server; (default listening port 80)
        listen localhost; (listening address)
        server_name _;
        root /usr/share/nginx/html;
        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;
        location / { (/ matches all requests; different load rules and services can be set for different domain names)
            proxy_pass http://tomcat; (reverse proxy; fill in your own load balancing rule name)
            proxy_redirect off; (The following settings can usually be copied directly; without them you may hit problems such as failed authentication.)
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_connect_timeout 90; (these are just some timeout settings; optional)
            proxy_send_timeout 90;
            proxy_read_timeout 90;
        }
        # location ~ .(gif|jpg|png)$ { (for example, written as a regular expression)
        #     root /home/root/images;
        # }
        error_page 404 /404.html;
        location = /40x.html { }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html { }
    }
# Settings for a TLS enabled server.
#
server {
listen 443 ssl http2 default_server;
listen [::]:443 ssl http2 default_server;
server_name _;
root /usr/share/nginx/html;
#
ssl_certificate "/etc/pki/nginx/server.crt";
ssl_certificate_key "/etc/pki/nginx/private/server.key";
ssl_session_cache shared:SSL:1m;
ssl_session_timeout 10m;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
#
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
#
location / {
}
#
error_page 404 /404.html;
location = /40x.html {
}
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
}
After the configuration is updated, reload it to take effect without restarting the service:
nginx -s reload
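Before reloading, it is worth validating the syntax with nginx’s built-in check:

nginx -t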
If you can't access the service, it may be because the firewall is running and the port is not open:
Start: systemctl start firewalld
Stop: systemctl stop firewalld
View status: systemctl status firewalld
Disable at boot: systemctl disable firewalld
Enable at boot: systemctl enable firewalld
Open a port:
Add:
firewall-cmd --zone=public --add-port=80/tcp --permanent (--permanent makes the rule persist across restarts)
Reload:
firewall-cmd --reload
View:
firewall-cmd --zone=public --query-port=80/tcp
Delete:
firewall-cmd --zone=public --remove-port=80/tcp --permanent

Apache-based Web virtual host under Linux

A web virtual host runs multiple web sites on the same server, none of which actually occupies the entire server by itself; hence the name “virtual” web host. Virtual web hosting makes full use of the server’s hardware resources.

Using httpd makes it easy to set up a virtual host server: a single httpd service can support a large number of web sites at the same time. There are three types of virtual hosts supported by httpd (like the Windows IIS service):

  1. A virtual host with the same IP, port number, and different domain name;
  2. Virtual host with the same IP and different port numbers;
  3. Virtual hosts with different IP addresses and the same port number;

Most O&M personnel adopt the first solution when setting up virtual hosts: name-based virtual hosting with different domain names on the same IP, which is also the most user-friendly solution.

First, start building a domain-based virtual host:

  1. Provide domain name resolution for virtual hosts

[root@localhost /]# vim /etc/named.conf

zone "mohan1.com" in {
type master;
file "mohan1.com.zone";
};

zone "mohan2.com" in {
type master;
file "mohan2.com.zone";
};

[root@localhost /]# vim /var/named/mohan1.com.zone

    in      ns      www.mohan1.com.
www in      a       192.168.1.1

[root@localhost /]# vim /var/named/mohan2.com.zone

    in      ns      www.mohan2.com.
www in      a       192.168.1.1

2. Prepare web documents for the virtual hosts

Prepare a website directory and web documents for each virtual web host. For ease of testing, each virtual web host is given a different home page file:

[root@localhost named]# mkdir -p /var/www/mohan1com

[root@localhost named]# mkdir -p /var/www/mohan2com

[root@localhost named]# echo "www.mohan1.com" > /var/www/mohan1com/index.html

[root@localhost named]# echo "www.mohan2.com" > /var/www/mohan2com/index.html

3. Add the virtual host configuration

[root@localhost named]# vim /usr/local/httpd/conf/extra/httpd-vhosts.conf

<VirtualHost *:80>
    ServerAdmin admin@mohan.com
    DocumentRoot "/var/www/mohan1com"
    ServerName www.mohan1.com
    ErrorLog "logs/mohan1-error_log"
    CustomLog "logs/mohan1-access_log" common
    <Directory "/var/www/mohan1com">
        Require all granted
    </Directory>
</VirtualHost>

<VirtualHost *:80>
    ServerAdmin admin@mohan.com
    DocumentRoot "/var/www/mohan2com"
    ServerName www.mohan2.com
    ErrorLog "logs/mohan2-error_log"
    CustomLog "logs/mohan2-access_log" common
    <Directory "/var/www/mohan2com">
        Require all granted
    </Directory>
</VirtualHost>

[root@localhost named]# vim /usr/local/httpd/conf/httpd.conf

Include conf/extra/httpd-vhosts.conf

[root@localhost named]# systemctl restart httpd

  4. Access the virtual web hosts from a client

Verify it; each domain should serve its own page, as in the sketch below.
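A quick check from a client that resolves both names (via the DNS configured above, or matching /etc/hosts entries):

curl http://www.mohan1.com    # should return www.mohan1.com
curl http://www.mohan2.com    # should return www.mohan2.com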

Second, the virtual host based on IP address:

(I really didn’t want to write this part, because it’s easy to understand and rarely used, but… just in case, here it is.)

Note that the three methods are independent of each other; don’t confuse IP-based virtual hosts with domain-based ones.

[root@localhost named]# vim /usr/local/httpd/conf/extra/httpd-vhosts.conf
…………..
<VirtualHost 192.168.1.1:80>
    ServerAdmin admin@mohan.com
    DocumentRoot "/var/www/mohan1com"
    ErrorLog "mohan1-error_log"
    CustomLog "mohan1-access_log" common
    <Directory "/var/www/mohan1com">
        Require all granted
    </Directory>
</VirtualHost>

<VirtualHost 192.168.1.2:80>
    ServerAdmin admin@mohan.com
    DocumentRoot "/var/www/mohan2com"
    ErrorLog "mohan2-error_log"
    CustomLog "mohan2-access_log" common
    <Directory "/var/www/mohan2com">
        Require all granted
    </Directory>
</VirtualHost>

[root@localhost named]# vim /usr/local/httpd/conf/httpd.conf
………………….
Include conf/extra/httpd-vhosts.conf

[root@localhost named]# systemctl restart httpd

Third, the port-based virtual host:

[root@localhost named]# vim /usr/local/httpd/conf/extra/httpd-vhosts.conf

Listen 8000

<VirtualHost 192.168.1.1:80>
    ServerAdmin admin@mohan.com
    DocumentRoot "/var/www/mohan1com"
    ErrorLog "mohan1-error_log"
    CustomLog "mohan1-access_log" common
    <Directory "/var/www/mohan1com">
        Require all granted
    </Directory>
</VirtualHost>

<VirtualHost 192.168.1.1:8000>
    ServerAdmin admin@mohan.com
    DocumentRoot "/var/www/mohan2com"
    ErrorLog "mohan2-error_log"
    CustomLog "mohan2-access_log" common
    <Directory "/var/www/mohan2com">
        Require all granted
    </Directory>
</VirtualHost>

[root@localhost named]# vim /usr/local/httpd/conf/httpd.conf
………………….
Include conf/extra/httpd-vhosts.conf

[root@localhost named]# systemctl restart httpd
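A quick verification sketch for the port-based setup (the IP here is assumed from the DNS examples above):

curl http://192.168.1.1:80      # should serve /var/www/mohan1com
curl http://192.168.1.1:8000    # should serve /var/www/mohan2com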

Create an SSH server alias on a Linux system

If you frequently access many different remote systems via SSH, this technique will save you some time. You can create SSH aliases for frequently accessed systems via SSH, so you don’t have to remember all the different usernames, hostnames, SSH port numbers, and IP addresses. In addition, it avoids repeatedly entering the same username, hostname, IP address, and port number when SSHing to a Linux server.

Create an SSH alias in Linux
Before I learned this trick, I usually used one of the following methods to connect to a remote system via SSH.

Use IP address:

$ ssh 192.168.225.22
Or use the port number, username, and IP address:

$ ssh -p 22 ec2-user@192.168.225.22
Or use the port number, username, and hostname:

$ ssh -p 22 ec2-user@server.example.com
Here:

22 is the port number,
ec2-user is the username of the remote system,
192.168.225.22 is the IP of my remote system,
server.example.com is the hostname of the remote system.

I believe that most Linux novices and some administrators connect to remote systems via SSH this way. However, if you connect to many different systems via SSH, remembering all the hostnames or IP addresses, as well as the usernames, is difficult unless you write them down on paper or save them in a text file. Don’t worry! This can be easily solved by creating an alias (or shortcut) for the SSH connection.

We can create aliases for SSH commands in two ways .

Method 1 – Use an SSH Profile
This is my preferred method of creating an alias.

We can use the SSH default configuration file to create an SSH alias. To do this, edit the ~/.ssh/config file (if this file doesn’t exist, just create one):

$ vi ~/.ssh/config
Add details for all remote hosts as follows:
Host webserver
HostName 192.168.225.22
User ec2-user

Host dns
HostName server.example.com
User root

Host dhcp
HostName 192.168.225.25
User ec2-user
Port 2233

Create an SSH alias in Linux using an SSH configuration file

Replace the values of the Host, HostName, User, and Port configuration with your own. After adding the details of all remote hosts, save and exit the file.

Now you can access the system via SSH using the following command :

$ ssh webserver
$ ssh dns
$ ssh dhcp
It’s that simple!
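To double-check what OpenSSH actually resolves for an alias (available in OpenSSH 6.8 and later), the -G flag prints the effective configuration without connecting:

$ ssh -G webserver | head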

Access remote systems using SSH aliases

See? I simply use an alias (such as webserver) to access a remote system with the IP address 192.168.225.22.

Please note that this is only for the current user. If you want to provide an alias for all users (system-wide), add the above line to the /etc/ssh/ssh_config file.

You can also add a lot of other content to your SSH configuration file. For example, if you have configured SSH key-based authentication, the location of the SSH key file is as follows:

Host ubuntu
HostName 192.168.225.50
User senthil
IdentityFile ~/.ssh/id_rsa_remotesystem

Make sure you replace the hostname, username, and SSH key file path with your own values.
Now connect to the remote server using the following command:

$ ssh ubuntu
This way, you can add as many remote hosts as you want to access via SSH and quickly reach them using their aliases.

Method 2 – Use a Bash Alias
This is a quick workaround for creating SSH aliases. You can make this task easier with the alias command.

Open the ~/.bashrc or ~/.bash_profile file and add aliases like the following:

alias webserver='ssh ec2-user@server.example.com'
alias dns='ssh ec2-user@server.example.com'
alias dhcp='ssh ec2-user@server.example.com -p 2233'
alias ubuntu='ssh ec2-user@server.example.com -i ~/.ssh/id_rsa_remotesystem'
Again, make sure you have replaced the alias, hostname, port number, and IP address with your own values. Save the file and exit.
Then, apply the changes with one of the following commands:

$ source ~/.bashrc
or
$ source ~/.bash_profile
With this method, you don’t even need to type the ssh command. Instead, just use the alias, as shown below.
$ webserver
$ dns
$ dhcp
$ ubuntu

How to create a TCP listener or open ports in Unix OS

You can create a port listener using Netcat .

yum install nc -y

root@rmohan:~# nc -l 5000
You can also check whether the port is open using the netstat command:

root@vm-rmohan:~# netstat -tulpen | grep nc
tcp 0 0 0.0.0.0:5000 0.0.0.0:* LISTEN 0 710327 17533/nc
You can also check with nc:

Netcat Server listener :

nc -l localhost 5000
Netcat Client :

root@vm-rmohan:~# nc -v localhost 5000
Connection to localhost 5000 port [tcp/*] succeeded!
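To push actual data through the listener, a minimal round trip looks like this:

# terminal 1: listen
nc -l 5000
# terminal 2: send a line; it appears on the listener
echo "hello" | nc localhost 5000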

INSTALLING KUBERNETES ON CENTOS7

[RUN ALL BELOW COMMANDS ON ALL NODES]

yum update
yum install -y epel-release

yum install docker (v1.11, 1.12, or 1.13)

Set up the Kubernetes repos for kubeadm, kubectl, and kubelet:

[root@kubmaster yum.repos.d]# cat kubernetes.repo

[kubernetes]

name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

setenforce 0

yum install -y kubelet kubeadm kubectl

  • Add host entry in /etc/hosts

systemctl start docker
swapoff /dev/centos/swap
systemctl enable kubelet.service
systemctl enable docker

echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
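To make this bridge setting survive reboots, a common sketch (the file name is just a convention) is:

echo 'net.bridge.bridge-nf-call-iptables = 1' > /etc/sysctl.d/k8s.conf
sysctl --system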

useradd kubeadmin
ifdown enp0s3

NOTE: On VirtualBox, disable the NAT network interface before running init,
or else port 6443 will get bound to the NAT IP;
disconnect NAT from the console and reboot.

kubeadm init --pod-network-cidr=10.244.0.0/16

Note: If you have multiple IPs/hostnames to bind, run the following to add the name/IP to the certificate:

kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address 192.168.56.240 --apiserver-cert-extra-sans kubemaster.mhn.com

Create User

su - kubeadmin

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config


Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join 192.168.56.240:6443 --token wxf3y9.ci2txlf7ja04svyg --discovery-token-ca-cert-hash sha256:ea3eeb5de0ffd9efe6d0f304f4fd9853c005ee98902ad7a7c110425c23eeab04


In order for your pods to communicate with one another, you’ll need to install pod networking. We are going to use Flannel for our Container Network Interface (CNI) because it’s easy to install and reliable. Enter this command:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

If you see an error like the one below:

The connection to the server localhost:8080 was refused – did you specify the right host or port?

do the following as the normal user:

su - kubeadmin

sudo cp /etc/kubernetes/admin.conf $HOME/

sudo chown $(id -u):$(id -g) $HOME/admin.conf

export KUBECONFIG=$HOME/admin.conf

[root@kubmaster ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset.extensions "kube-flannel-ds" created


[kubeadmin@kubmaster ~]$ kubectl get pods
No resources found.

[kubeadmin@kubmaster ~]$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-kubmaster 1/1 Running 0 47m
kube-system kube-apiserver-kubmaster 1/1 Running 0 47m
kube-system kube-controller-manager-kubmaster 1/1 Running 0 47m
kube-system kube-dns-86f4d74b45-mrq4d 3/3 Running 0 1h
kube-system kube-flannel-ds-854ns 1/1 Running 0 47m
kube-system kube-proxy-rlpbc 1/1 Running 0 1h
kube-system kube-scheduler-kubmaster 1/1 Running 0 47m

[kubeadmin@kubmaster ~]$

k8s ansible install

Ansible role to set up a 1 master + 2 node Kubernetes cluster (more nodes can be added)

Set up the CentOS VMs
Configure hostnames
Update the hosts file template in ../roles/kubernetes-deploy/files/hosts.template with hostnames and IP addresses
Set up passwordless auth between your Ansible host and the Kubernetes nodes:

$ ssh-copy-id root@<kube-node>

Set up the Ansible inventory:

kube-master.rmohan.com hostrole=master
kube-node1.rmohan.com hostrole=node
kube-node2.rmohan.com hostrole=node

Run Ansible Role

$ ansible-playbook install-kubernetes-centos7.yml
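If your inventory lives in a separate file (say hosts, a name assumed here), pass it explicitly:

$ ansible-playbook -i hosts install-kubernetes-centos7.yml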

The role does the following:

  • updates the OS
  • reboots
  • sets up the Kubernetes environment

Upon completion of the Ansible play, copy the following command from the stdout of the play and run it on all nodes as root:

kubeadm join 192.168.1.240:6443 --token ce2b82.hbu4u9x12luwbhyr --discovery-token-ca-cert-hash sha256:510573c7ec722ac20674e96403517e97696e2110635d57455d869bae06ffefaa

  • Validation on the master

kubectl get nodes (check node status)

kubectl get pods --all-namespaces (you may need to wait some time for the containers to come up)
