How to install Apache, PHP 7.3 and MySQL on CentOS 7.6

First, I will add the EPEL repo, which is needed later to install the latest phpMyAdmin:

rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY*
yum -y install epel-release

Installing MySQL / MariaDB
MariaDB is a MySQL fork by Monty Widenius, the original MySQL developer. MariaDB is compatible with MySQL, and I've chosen to use MariaDB here instead of MySQL. Run this command to install MariaDB with yum:

yum -y install mariadb-server mariadb
Then we create the system startup links for MySQL (so that MySQL starts automatically whenever the system boots) and start the MySQL server:

systemctl start mariadb.service
systemctl enable mariadb.service
Set passwords for the MySQL root account:

mysql_secure_installation

[root@server1 ~]

# mysql_secure_installation

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we’ll need the current
password for the root user. If you’ve just installed MariaDB, and
you haven’t set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none): <–ENTER
OK, successfully used password, moving on…

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n]
New password: <–yourmariadbpassword
Re-enter new password: <–yourmariadbpassword
Password updated successfully!
Reloading privilege tables..
… Success!

By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them. This is intended only for testing, and to make the installation
go a bit smoother. You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] <–ENTER
… Success!

Normally, root should only be allowed to connect from ‘localhost’. This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] <–ENTER
… Success!

By default, MariaDB comes with a database named ‘test’ that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] <–ENTER

  • Dropping test database…
    … Success!
  • Removing privileges on test database…
    … Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] <–ENTER
… Success!

Cleaning up…

All done! If you’ve completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!

[root@server1 ~]

#
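Optionally, you can confirm that the new root password works and that the test database is gone by logging in once from the shell (a quick sanity check; the -e flag just runs a single statement):

mysql -u root -p -e "SHOW DATABASES;"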
3 Installing Apache
CentOS 7 ships with Apache 2.4. Apache is directly available as a CentOS 7 package, therefore we can install it like this:

yum -y install httpd

Now configure your system to start Apache at boot time…

systemctl start httpd.service
systemctl enable httpd.service
To be able to access the webserver from outside, we have to open the HTTP (80) and HTTPS (443) ports in the firewall. The default firewall on CentOS is firewalld, which can be configured with the firewall-cmd command.

firewall-cmd --permanent --zone=public --add-service=http
firewall-cmd --permanent --zone=public --add-service=https
firewall-cmd --reload
Now direct your browser to the IP address of your server, in my case http://192.168.1.100, and you should see the Apache placeholder page:

Installing PHP
The PHP version that ships with CentOS by default is quite old (PHP 5.4). Therefore, I will show you in this chapter some options to install newer PHP versions, like PHP 7.0 to 7.3, from the Remi repository.

Add the Remi CentOS repository.

rpm -Uvh http://rpms.remirepo.net/enterprise/remi-release-7.rpm
Install yum-utils as we need the yum-config-manager utility.

yum -y install yum-utils
and run yum update

yum update
Now you have to choose which PHP version you want to use on the server. If you would like to use PHP 5.4, then proceed to chapter 4.1. To install PHP 7.0, follow the commands in chapter 4.2, for PHP 7.1 chapter 4.3, for PHP 7.2 use chapter 4.4, and for PHP 7.3 follow chapter 4.5 instead. Follow just one of the 4.x chapters and not all of them, as you can only use one PHP version at a time with Apache mod_php.

4.1 Install PHP 5.4
To install PHP 5.4, run this command:

yum -y install php
4.2 Install PHP 7.0
We can install PHP 7.0 and the Apache PHP 7.0 module as follows:

yum-config-manager --enable remi-php70
yum -y install php php-opcache
4.3 Install PHP 7.1
If you want to use PHP 7.1 instead, use:

yum-config-manager --enable remi-php71
yum -y install php php-opcache
4.4 Install PHP 7.2
If you want to use PHP 7.2 instead, use:

yum-config-manager --enable remi-php72
yum -y install php php-opcache
4.5 Install PHP 7.3
If you want to use PHP 7.3 instead, use:

yum-config-manager --enable remi-php73
yum -y install php php-opcache
In this example and in the downloadable virtual machine, I’ll use PHP 7.3.

We must restart Apache to apply the changes:

systemctl restart httpd.service
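To double-check which PHP version is now active, you can query the PHP CLI binary and the loaded Apache modules (a quick sanity check; this assumes the php package from the chosen Remi repository is installed):

php -v
httpd -M | grep php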
5 Testing PHP / Getting Details About Your PHP Installation
The document root of the default website is /var/www/html. We will create a small PHP file (info.php) in that directory and call it in a browser to test the PHP installation. The file will display lots of useful details about our PHP installation, such as the installed PHP version.

nano /var/www/html/info.php
<?php
phpinfo();
Now we call that file in a browser (e.g. http://192.168.1.100/info.php)

Getting MySQL Support In PHP
To get MySQL support in PHP, we can install the php-mysqlnd package. It's a good idea to install some other PHP modules as well, as you might need them for your applications. You can search for available PHP modules like this:

yum search php
Pick the ones you need and install them like this:

yum -y install php-mysqlnd php-pdo
In the next step I will install some common PHP modules that are required by CMS Systems like WordPress, Joomla, and Drupal:

yum -y install php-gd php-ldap php-odbc php-pear php-xml php-xmlrpc php-mbstring php-soap curl curl-devel
Now restart Apache web server:

systemctl restart httpd.service
Now reload http://192.168.1.100/info.php in your browser and scroll down to the modules section again. You should now see many new modules, such as curl, listed there.

If you don’t need the PHP info output anymore, then delete that file for security reasons.

rm /var/www/html/info.php

7 phpMyAdmin installation

phpMyAdmin is a web interface through which you can manage your MySQL databases.
phpMyAdmin can now be installed as follows:

yum -y install phpMyAdmin

Now we configure phpMyAdmin. We change the Apache configuration so that phpMyAdmin allows connections not just from localhost, by commenting out the 'Require ip' lines and adding a 'Require all granted' line:

nano /etc/httpd/conf.d/phpMyAdmin.conf
[...]
Alias /phpMyAdmin /usr/share/phpMyAdmin
Alias /phpmyadmin /usr/share/phpMyAdmin

<Directory /usr/share/phpMyAdmin/>
   AddDefaultCharset UTF-8

   <IfModule mod_authz_core.c>
     # Apache 2.4
     <RequireAny>
       #Require ip 127.0.0.1
       #Require ip ::1
       Require all granted
     </RequireAny>
   </IfModule>
   <IfModule !mod_authz_core.c>
     # Apache 2.2
     Order Deny,Allow
     Deny from All
     Allow from 127.0.0.1
     Allow from ::1
   </IfModule>
</Directory>

<Directory /usr/share/phpMyAdmin/setup/>
   Options none
   AllowOverride Limit
   Require all granted
</Directory>
[...]
Restart Apache to apply the configuration changes:

systemctl restart httpd.service

Afterwards, you can access phpMyAdmin under http://192.168.1.100/phpmyadmin/

aws-cli

Description

The AWS Command Line Interface (CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.

The AWS CLI introduces a new set of simple file commands for efficient file transfers to and from Amazon S3.

Supported Services

For a list of the available services you can use with AWS Command Line Interface, see Available Services in the AWS CLI Command Reference.

AWS Command Line Interface on GitHub

You can view—and fork—the source code for the AWS CLI on GitHub in the https://github.com/aws/aws-cli project.

Versions#

Version     Release Date
1.10.38     2016-06-14
1.10.35     2016-06-03
1.10.33     2016-05-25
1.10.30     2016-05-18

AWS CLI Cheat sheet – List of All CLI commands#

Setup#

Install AWS CLI#

The AWS CLI is a common CLI tool for managing AWS resources. With this single tool you can manage all of your AWS resources.

sudo apt-get install -y python-dev python-pip
sudo pip install awscli
aws --version
aws configure
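After configuring, a quick way to verify that the credentials work is to ask STS which identity you are using (it prints the account ID and ARN of the configured user):

aws sts get-caller-identity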

Bash one-liners#

cat <file> # output a file
tee # split output into a file
cut -f 2 # print the 2nd column, per line
sed -n '5{p;q}' # print the 5th line in a file
sed 1d # print all lines, except the first
tail -n +2 # print all lines, starting on the 2nd
head -n 5 # print the first 5 lines
tail -n 5 # print the last 5 lines

expand # convert tabs to spaces
unexpand -a # convert spaces to tabs
wc # word count
tr ' ' \\t # translate / convert characters to other characters

sort # sort data
uniq # show only unique entries
paste # combine rows of text, by line
join # combine rows of text, by initial column value
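These one-liners combine naturally with the CLI's text output. For example, a small pipeline that extracts and sorts IAM usernames (field 6 matches the list-users example further below):

aws iam list-users --output text | cut -f 6 | sort | uniq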

Cloudtrail – Logging and Auditing#

http://docs.aws.amazon.com/cli/latest/reference/cloudtrail/
Limit: 5 trails total, with support for resource-level permissions

# list all trails
aws cloudtrail describe-trails

# list all S3 buckets
aws s3 ls

# create a new trail
aws cloudtrail create-subscription \
    --name awslog \
    --s3-new-bucket awslog2016

# list the names of all trails
aws cloudtrail describe-trails --output text | cut -f 8

# get the status of a trail
aws cloudtrail get-trail-status \
    --name awslog

# delete a trail
aws cloudtrail delete-trail \
    --name awslog

# delete the S3 bucket of a trail
aws s3 rb s3://awslog2016 --force

# add tags to a trail, up to 10 tags
aws cloudtrail add-tags \
    --resource-id awslog \
    --tags-list "Key=log-type,Value=all"

# list the tags of a trail
aws cloudtrail list-tags \
    --resource-id-list 

# remove a tag from a trail
aws cloudtrail remove-tags \
    --resource-id awslog \
    --tags-list "Key=log-type,Value=all"

IAM#

Users#

https://blogs.aws.amazon.com/security/post/Tx15CIT22V4J8RP/How-to-rotate-access-keys-for-IAM-users
http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_iam-limits.html
Limits: 5000 users, 100 groups, 250 roles, 2 access keys / user

http://docs.aws.amazon.com/cli/latest/reference/iam/index.html
# list all user's info
aws iam list-users

# list all user's usernames
aws iam list-users --output text | cut -f 6

# list current user's info
aws iam get-user

# list current user's access keys
aws iam list-access-keys

# create a new user
aws iam create-user \
    --user-name aws-admin2

# create multiple new users, from a file
allUsers=$(cat ./user-names.txt)
for userName in $allUsers; do
    aws iam create-user \
        --user-name $userName
done

# list all users
aws iam list-users --no-paginate

# get a specific user's info
aws iam get-user \
    --user-name aws-admin2

# delete one user
aws iam delete-user \
    --user-name aws-admin2

# delete all users
# allUsers=$(aws iam list-users --output text | cut -f 6);
allUsers=$(cat ./user-names.txt)
for userName in $allUsers; do
    aws iam delete-user \
        --user-name $userName
done

Password policy#

http://docs.aws.amazon.com/cli/latest/reference/iam/
# list policy
# http://docs.aws.amazon.com/cli/latest/reference/iam/get-account-password-policy.html
aws iam get-account-password-policy

# set policy
# http://docs.aws.amazon.com/cli/latest/reference/iam/update-account-password-policy.html
aws iam update-account-password-policy \
    --minimum-password-length 12 \
    --require-symbols \
    --require-numbers \
    --require-uppercase-characters \
    --require-lowercase-characters \
    --allow-users-to-change-password

# delete policy
# http://docs.aws.amazon.com/cli/latest/reference/iam/delete-account-password-policy.html
aws iam delete-account-password-policy

Access Keys#

http://docs.aws.amazon.com/cli/latest/reference/iam/
# list all access keys
aws iam list-access-keys

# list access keys of a specific user
aws iam list-access-keys \
    --user-name aws-admin2

# create a new access key
aws iam create-access-key \
    --user-name aws-admin2 \
    --output text | tee aws-admin2.txt

# list last access time of an access key
aws iam get-access-key-last-used \
    --access-key-id AKIAINA6AJZY4EXAMPLE

# deactivate an access key
aws iam update-access-key \
    --access-key-id AKIAI44QH8DHBEXAMPLE \
    --status Inactive \
    --user-name aws-admin2

# delete an access key
aws iam delete-access-key \
    --access-key-id AKIAI44QH8DHBEXAMPLE \
    --user-name aws-admin2

Groups, Policies, Managed Policies#

http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html
http://docs.aws.amazon.com/cli/latest/reference/iam/
# list all groups
aws iam list-groups

# create a group
aws iam create-group --group-name FullAdmins

# delete a group
aws iam delete-group \
    --group-name FullAdmins

# list all policies
aws iam list-policies

# get a specific policy
aws iam get-policy \
    --policy-arn <value>

# list all users, groups, and roles, for a given policy
aws iam list-entities-for-policy \
    --policy-arn <value>

# list policies, for a given group
aws iam list-attached-group-policies \
    --group-name FullAdmins

# add a policy to a group
aws iam attach-group-policy \
    --group-name FullAdmins \
    --policy-arn arn:aws:iam::aws:policy/AdministratorAccess

# add a user to a group
aws iam add-user-to-group \
    --group-name FullAdmins \
    --user-name aws-admin2

# list users, for a given group
aws iam get-group \
    --group-name FullAdmins

# list groups, for a given user
aws iam list-groups-for-user \
    --user-name aws-admin2

# remove a user from a group
aws iam remove-user-from-group \
    --group-name FullAdmins \
    --user-name aws-admin2

# remove a policy from a group
aws iam detach-group-policy \
    --group-name FullAdmins \
    --policy-arn arn:aws:iam::aws:policy/AdministratorAccess

# delete a group
aws iam delete-group \
    --group-name FullAdmins

EC2#

keypairs#

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html
# list all keypairs
# http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-key-pairs.html
aws ec2 describe-key-pairs

# create a keypair
# http://docs.aws.amazon.com/cli/latest/reference/ec2/create-key-pair.html
aws ec2 create-key-pair \
    --key-name <value>

# create a new private / public keypair, using RSA 2048-bit
ssh-keygen -t rsa -b 2048

# import an existing keypair
# http://docs.aws.amazon.com/cli/latest/reference/ec2/import-key-pair.html
aws ec2 import-key-pair \
    --key-name keyname_test \
    --public-key-material file:///home/apollo/id_rsa.pub

# delete a keypair
# http://docs.aws.amazon.com/cli/latest/reference/ec2/delete-key-pair.html
aws ec2 delete-key-pair \
    --key-name <value>

Security Groups#

http://docs.aws.amazon.com/cli/latest/reference/ec2/index.html
# list all security groups
aws ec2 describe-security-groups

# create a security group
aws ec2 create-security-group \
    --vpc-id vpc-1a2b3c4d \
    --group-name web-access \
    --description "web access"

# list details about a security group
aws ec2 describe-security-groups \
    --group-id sg-0000000

# open port 80, for everyone
aws ec2 authorize-security-group-ingress \
    --group-id sg-0000000 \
    --protocol tcp \
    --port 80 \
    --cidr 0.0.0.0/0

# get my public ip
my_ip=$(dig +short myip.opendns.com @resolver1.opendns.com);
echo $my_ip

# open port 22, just for my ip
aws ec2 authorize-security-group-ingress \
    --group-id sg-0000000 \
    --protocol tcp \
    --port 22 \
    --cidr $my_ip/32

# remove a firewall rule from a group
aws ec2 revoke-security-group-ingress \
    --group-id sg-0000000 \
    --protocol tcp \
    --port 80 \
    --cidr 0.0.0.0/0

# delete a security group
aws ec2 delete-security-group \
    --group-id sg-00000000

Instances#

http://docs.aws.amazon.com/cli/latest/reference/ec2/index.html
# list all instances (running, and not running)
# http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-instances.html
aws ec2 describe-instances

# create a new instance
# http://docs.aws.amazon.com/cli/latest/reference/ec2/run-instances.html
aws ec2 run-instances \
    --image-id ami-f0e7d19a \    
    --instance-type t2.micro \
    --security-group-ids sg-00000000 \
    --dry-run
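
# once the dry run succeeds, drop --dry-run to actually launch; you can then
# block until the instance is up with the wait subcommand of the same ec2 command set
aws ec2 wait instance-running \
    --instance-ids <instance_id>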

# terminate an instance
# http://docs.aws.amazon.com/cli/latest/reference/ec2/terminate-instances.html
aws ec2 terminate-instances \
    --instance-ids <instance_id>

# list status of all instances
# http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-instance-status.html
aws ec2 describe-instance-status

# list status of a specific instance
aws ec2 describe-instance-status \
    --instance-ids <instance_id>

Tags#

# list the tags of an instance
# http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-tags.html
aws ec2 describe-tags

# add a tag to an instance
# http://docs.aws.amazon.com/cli/latest/reference/ec2/create-tags.html
aws ec2 create-tags \
    --resources "ami-1a2b3c4d" \
    --tags Key=name,Value=debian

# delete a tag on an instance
# http://docs.aws.amazon.com/cli/latest/reference/ec2/delete-tags.html
aws ec2 delete-tags \
    --resources "ami-1a2b3c4d" \
    --tags Key=Name,Value=

Cloudwatch#

Log Groups#

http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/WhatIsCloudWatchLogs.html
http://docs.aws.amazon.com/cli/latest/reference/logs/index.html#cli-aws-logs

create a group

http://docs.aws.amazon.com/cli/latest/reference/logs/create-log-group.html
aws logs create-log-group \
    --log-group-name "DefaultGroup"

list all log groups

http://docs.aws.amazon.com/cli/latest/reference/logs/describe-log-groups.html
aws logs describe-log-groups

aws logs describe-log-groups \
    --log-group-name-prefix "Default"

delete a group

http://docs.aws.amazon.com/cli/latest/reference/logs/delete-log-group.html
aws logs delete-log-group \
    --log-group-name "DefaultGroup"

Log Streams#

# Log group names can be between 1 and 512 characters long. Allowed
# characters include a-z, A-Z, 0-9, '_' (underscore), '-' (hyphen),
# '/' (forward slash), and '.' (period).

# create a log stream
# http://docs.aws.amazon.com/cli/latest/reference/logs/create-log-stream.html
aws logs create-log-stream \
    --log-group-name "DefaultGroup" \
    --log-stream-name "syslog"

# list details on the log streams in a group
# http://docs.aws.amazon.com/cli/latest/reference/logs/describe-log-streams.html
aws logs describe-log-streams \
    --log-group-name "DefaultGroup"

aws logs describe-log-streams \
    --log-group-name "DefaultGroup" \
    --log-stream-name-prefix "syslog"

# delete a log stream
# http://docs.aws.amazon.com/cli/latest/reference/logs/delete-log-stream.html
aws logs delete-log-stream \
    --log-group-name "DefaultGroup" \
    --log-stream-name "Default Stream"

AWS completer for Ubuntu with Bash#

The following utility can be used for auto-completion of commands:

$ which aws_completer
/usr/bin/aws_completer

$ complete -C '/usr/bin/aws_completer' aws

For future shell sessions, consider adding this to your ~/.bashrc:

$ echo "complete -C '/usr/bin/aws_completer' aws" >> ~/.bashrc

To check, type:

$ aws ec

Press the [TAB] key and it should add the 2 automatically:

$ aws ec2

Creating a New Profile#

To set up a new credential profile with the name myprofile:

$ aws configure --profile myprofile
AWS Access Key ID [None]: ACCESSKEY
AWS Secret Access Key [None]: SECRETKEY
Default region name [None]: REGIONNAME
Default output format [None]: text | table | json

For the AWS access key id and secret, create an IAM user in the AWS console and generate keys for it.

Region will be the default region for commands, in the format eu-west-1 or us-east-1.

The default output format can be either text, table, or json.

You can now use the profile name in other commands by using the --profile option, e.g.:

$ aws ec2 describe-instances --profile myprofile

AWS libraries for other languages (e.g. aws-sdk for Ruby or boto3 for Python) also have options to use the profile you create with this method. For example, creating a new session in boto3 can be done like this: boto3.Session(profile_name='myprofile'), and it will use the credentials you created for the profile.

The details of your aws-cli configuration can be found in ~/.aws/config and ~/.aws/credentials (on Linux and macOS). These details can be edited manually there.
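As a rough illustration, assuming the profile created above, those two files typically look like this (the keys shown are placeholders):

# ~/.aws/credentials
[myprofile]
aws_access_key_id = ACCESSKEY
aws_secret_access_key = SECRETKEY

# ~/.aws/config
[profile myprofile]
region = eu-west-1
output = json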

Installation and setup#

There are a number of different ways to install the AWS CLI on your machine, depending on what operating system and environment you are using:

On Microsoft Windows – use the MSI installer. On Linux, OS X, or Unix – use pip (a package manager for Python software) or install manually with the bundled installer.

Install using pip:

You will need Python installed (version 2.6.5+ for Python 2, or 3.3+ for Python 3). Check with:

python --version

pip --help

Given that both of these are installed, use the following command to install the AWS CLI:

sudo pip install awscli

Install on Windows

The AWS CLI is supported on Microsoft Windows XP or later. For Windows users, the MSI installation package offers a familiar and convenient way to install the AWS CLI without installing any other prerequisites. Windows users should use the MSI installer unless they are already using pip for package management.

Run the downloaded MSI installer. Follow the instructions that appear.

To install the AWS CLI using the bundled installer

Prerequisites:

  • Linux, OS X, or Unix
  • Python 2 version 2.6.5+ or Python 3 version 3.3+
  1. Download the AWS CLI Bundled Installer using wget or curl.
  2. Unzip the package.
  3. Run the install executable.

On Linux and OS X, here are the three commands that correspond to each step:

$ curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
$ unzip awscli-bundle.zip
$ sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws

Install using HomeBrew on OS X:

Another option for OS X

brew install awscli

Test the AWS CLI Installation

Confirm that the CLI is installed correctly by viewing the help file. Open a terminal, shell or command prompt, enter aws help and press Enter:

$ aws help

Configuring the AWS CLI

Once you have finished the installation you need to configure it. You’ll need your access key and secret key that you get when you create your account on aws. You can also specify a default region name and a default output type (text|table|json).

$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-west-2
Default output format [None]: ENTER

Updating the CLI tool

Amazon periodically releases new versions of the AWS CLI. If the tool was installed using pip, the following command will check the remote repository for updates and apply them to your local system.

$ pip install awscli --upgrade

List S3 buckets#

aws s3 ls

Use a named profile

aws --profile myprofile s3 ls

List all objects in a bucket, including objects in folders, with sizes in human-readable format and a summary of the bucket's properties at the end:

aws s3 ls --recursive --summarize --human-readable s3://<bucket_name>/
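The S3 file commands mentioned in the description above work the same way; for example (bucket and file names here are placeholders):

# copy a local file into a bucket
aws s3 cp ./backup.tar.gz s3://<bucket_name>/

# download an object from a bucket
aws s3 cp s3://<bucket_name>/backup.tar.gz .

# mirror a local directory to a bucket
aws s3 sync ./website s3://<bucket_name>/website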

Using aws cli commands#

The syntax for using the aws cli is as follows:

aws [options] <command> <subcommand> [parameters]

Some examples using the ‘ec2’ command and the ‘describe-instances’ subcommand:

aws ec2 describe-instances

aws ec2 describe-instances --instance-ids <your-id>

Example with a fake id:

aws ec2 describe-instances --instance-ids i-c71r246a

Swappiness in Linux

Most Linux users who have installed a distribution before will have noticed the existence of the "swap space" during the partitioning phase (it is usually found as /dev/sda5). This is a dedicated space on your hard drive that is usually set to at least twice the capacity of your RAM, and together with RAM it constitutes the total virtual memory of your system. From time to time, the Linux kernel utilizes this swap space by copying chunks from your RAM to the swap, allowing active processes that require more memory than is physically available to run.

Swappiness is the kernel parameter that defines how much (and how often) your Linux kernel will copy RAM contents to swap. This parameter’s default value is “60” and it can take anything from “0” to “100”. The higher the value of the swappiness parameter, the more aggressively your kernel will swap.

Why change it?
The default value is a one-size-fits-all solution that can't possibly be equally efficient for all individual use cases, hardware specifications, and user needs. Moreover, the swappiness of a system is a primary factor that determines the overall functionality and speed of an OS. That said, it is very important to understand how swappiness works and how the various configurations of this parameter could improve the operation of your system and thus your everyday usage experience.

As RAM is so much larger and cheaper than it used to be, many users nowadays have enough memory to almost never need the swap file. The obvious benefit is that no system resources are occupied by the swapping process and that cached files are not moved back and forth between RAM and swap for no reason.

How to change Swappiness?
The swappiness parameter value is stored in a simple configuration text file located in /proc/sys/vm named "swappiness". If you navigate there through the file manager, you will be able to locate the file and open it to check your system's swappiness. You can also change it through the terminal (which is faster) by typing the following command:

sudo sysctl vm.swappiness=10
or any other value between "0" and "100" instead of the "10" that I used. To ensure that the swappiness value was changed correctly, simply type:

cat /proc/sys/vm/swappiness
on the terminal again and the active value will be outputted.

This change has an immediate effect on your system's operation, so no reboot is required. In fact, rebooting will revert the swappiness back to its default value (60). If you have thoroughly tested your desired swappiness value and found that it works reliably, you can make the change permanent by editing /etc/sysctl.conf, which is yet another text configuration file. Open it as root (administrator) and add the following line at the bottom: vm.swappiness="your desired value here".

Then, save the text file and you’re done!
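For example, assuming a target value of 10, the same permanent change can be made entirely from the terminal (this appends the setting to /etc/sysctl.conf and reloads it):

echo "vm.swappiness=10" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p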

Install SSM Agent on Ubuntu EC2 instances

sudo apt update
sudo apt install snapd
sudo snap install amazon-ssm-agent --classic

sudo systemctl start snap.amazon-ssm-agent.amazon-ssm-agent.service
sudo systemctl stop snap.amazon-ssm-agent.amazon-ssm-agent.service
sudo systemctl status snap.amazon-ssm-agent.amazon-ssm-agent.service
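If the instance has an IAM instance profile with the SSM permissions attached, you can also verify from the AWS CLI side that the agent has registered (this lists the managed instances visible to Systems Manager):

aws ssm describe-instance-information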

Converting your virtual machine to AWS EC2 AMI

Using AWS is not limited to the AMIs provided by Amazon (or by third parties/the community); it is also possible to instantiate an EC2 workload starting from your own image by converting it to an AMI.

Creating your custom AMI starting from VMware runs through these macro steps:

  • create VM template (ova)
  • create S3 bucket and upload the template
  • convert with awscli

OVA creation and upload in S3

This is the easiest part of this how-to. I won't explain how to export the virtual machine OVA from your virtual infrastructure or Workstation/Fusion; anyway, IMHO the best way to manage VM templates is with OVA. Starting from the ovf and vmdk files, you can simply convert them to an OVA using ovftool (https://www.vmware.com/support/developer/ovf/) and the following command:

ovftool <vm_image>.ovf <vm_image>.ova

Create an S3 bucket and upload the OVA template via the web console, keeping in mind the name of the bucket and the name of the OVA.
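If you prefer the CLI over the web console, the bucket creation and upload can also be done with awscli (the bucket and file names here match the ones used in the policy and container files below):

aws s3 mb s3://mohanawss3
aws s3 cp awsmohan.ova s3://mohanawss3/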

AMI conversion

Prepare the policy document trust-policy.json:

{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Principal": { "Service": "vmie.amazonaws.com" },
         "Action": "sts:AssumeRole",
         "Condition": {
            "StringEquals": {
               "sts:Externalid": "vmimport"
            }
         }
      }
   ]
}

Then, create the role:

aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json

…and prepare the role policy document named role-policy.json:

{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Action": [
            "s3:GetBucketLocation",
            "s3:GetObject",
            "s3:ListBucket"
         ],
         "Resource": [
            "arn:aws:s3:::mohanawss3",
            "arn:aws:s3:::mohanawss3/*"
         ]
      },
      {
         "Effect": "Allow",
         "Action": [
            "ec2:ModifySnapshotAttribute",
            "ec2:CopySnapshot",
            "ec2:RegisterImage",
            "ec2:Describe*"
         ],
         "Resource": "*"
      }
   ]
}

After creating the role, attach the role policy:

aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file://role-policy.json

Finally, we can proceed with the real conversion: with the OVA file uploaded to the S3 bucket, create the "container" description file.

The containers.json will look like this:

[
   {
      "Description": "mycentos OVA",
      "Format": "ova",
      "UserBucket": {
         "S3Bucket": "mohanawss3",
         "S3Key": "awsmohan.ova"
      }
   }
]

Then execute the command:

aws ec2 import-image --description "Mohanaws" --license-type BYOL --disk-containers file://containers.json

The process is asynchronous; to see the state of the task, simply issue the following command, using the "import-ami-xxxxxx" value as the task id:

aws ec2 describe-import-image-tasks --import-task-ids import-ami-xxxx

Following the official documentation ( http://docs.aws.amazon.com/vm-import/latest/userguide/vmimport-image-import.html ) the states are:

  • active — The import task is in progress.
  • deleting — The import task is being canceled.
  • deleted — The import task is canceled.
  • updating — Import status is updating.
  • validating — The imported image is being validated.
  • converting — The imported image is being converted into an AMI.
  • completed — The import task is completed and the AMI is ready to use.

When the conversion is completed, you can start your first EC2 instance from the new AMI to see if everything has gone well.


Installing Kubernetes 1.8.1 on centos 7 with flannel

Prerequisites:-

You should have at least two VMs (1 master and 1 slave) before creating the cluster, in order to test the full functionality of k8s.

1] Master :-

Minimum of 1 Gb RAM, 1 CPU core and 50 Gb HDD ( suggested )

2] Slave :-

Minimum of 1 Gb RAM, 1 CPU core and 50 Gb HDD ( suggested )

3] Also, make sure of the following:

Network interconnectivity between the VMs.
Hostnames set.
Prefer static IPs.
DNS entries in place.
Disable SELinux:
$ vi /etc/selinux/config

Disable and stop the firewall (if you are not familiar with firewalld):
$ systemctl stop firewalld

$ systemctl disable firewalld
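For the SELinux step above, a minimal non-interactive way to do it (this makes the setting permanent in the config file and switches to permissive mode immediately, so no reboot is needed right away):

$ sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
$ setenforce 0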

The following steps create a k8s cluster on the above VMs using kubeadm on CentOS 7.

Step 1] Installing kubelet and kubeadm on all your hosts

$ ARCH=x86_64

$ cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-${ARCH}
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

$ setenforce 0

$ yum install -y docker kubelet kubeadm kubectl kubernetes-cni

$ systemctl enable docker && systemctl start docker

$ systemctl enable kubelet && systemctl start kubelet

You might have an issue where the kubelet service does not start. You can see the error in /var/log/messages. If you have an error as follows:
Oct 16 09:55:33 k8s-master kubelet: error: unable to load client CA file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory
Oct 16 09:55:33 k8s-master systemd: kubelet.service: main process exited, code=exited, status=1/FAILURE

Then you will have to initialize kubeadm first, as in the next step, and then start the kubelet service.

Step 2.1] Initializing your master

$ kubeadm init

Note:-

Execute the above command on the master node. This command will select one of the interfaces to be used as the API server. If you want to use another interface, provide "--apiserver-advertise-address=<ip-address>" as an argument. So the whole command will be like this:
$ kubeadm init --apiserver-advertise-address=<ip-address>

K8s provides the flexibility to use a pod network of your choice, such as flannel, calico, etc. I am using the flannel network. For the flannel network we need to pass the network CIDR explicitly, so the whole command becomes:
$ kubeadm init --apiserver-advertise-address=<ip-address> --pod-network-cidr=10.244.0.0/16

Example: $ kubeadm init --apiserver-advertise-address=172.31.14.55 --pod-network-cidr=10.244.0.0/16

Step 2.2] Start using cluster

$ sudo cp /etc/kubernetes/admin.conf $HOME/
$ sudo chown $(id -u):$(id -g) $HOME/admin.conf
$ export KUBECONFIG=$HOME/admin.conf
-> Use the same network CIDR, as it is also configured in the flannel yaml file that we are going to apply in step 3.

-> At the end you will get a token along with a join command; make a note of it, as it will be used to join the slaves.

Step 3] Installing a pod network

Different pod networks are supported by k8s; the choice is up to the user. For this demo I am using the flannel network. As of k8s 1.6, the cluster is more secure by default. It uses RBAC (Role Based Access Control), so make sure the network you are going to use supports RBAC and k8s 1.6.

Create RBAC Pods :
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml

Check whether pods are creating or not :

$ kubectl get pods --all-namespaces

Create Flannel pods :
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Check whether pods are creating or not :

$ kubectl get pods --all-namespaces -o wide

-> at this stage all your pods should be in running state.

-> option “-o wide” will give more details like IP and slave where it is deployed.

Step 4] Joining your nodes

SSH to slave and execute following command to join the existing cluster.

$ kubeadm join --token <token> <master-ip>:<master-port>

You may also have a ca-cert-hash; make sure you copy the entire join command from the kubeadm init output when joining the nodes.

Go to the master node and check whether the new slave has joined:

$ kubectl get nodes

-> If the slave is not ready, wait a few seconds; the new slave will become ready soon.

Step 5] Verify your cluster by running sample nginx application

$ vi sample_nginx.yaml

———————————————

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2 # tells deployment to run 2 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      # unlike pod-nginx.yaml, the name is not included in the meta data as a unique name is
      # generated from the deployment name
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

——————————————————

$ kubectl create -f sample_nginx.yaml

Verify pods are getting created or not.

$ kubectl get pods

$ kubectl get deployments

Now , lets expose the deployment so that the service will be accessible to other pods in the cluster.

$ kubectl expose deployment nginx-deployment --name=nginx-service --port=80 --target-port=80 --type=NodePort

The above command will create a service with the name "nginx-service". The service will be accessible on the port given by the "--port" option, forwarding to the "--target-port" on the pods. The service is accessible within the cluster only; in order to access it using your host IP, the "NodePort" type is used.

--type=NodePort :- when this option is given, k8s tries to find a free port in the range 30000-32767 on all the VMs of the cluster and binds the underlying service to it. If no such port is found, it returns an error.

Check service is created or not

$ kubectl get svc

Try to curl the pod IP:

$ curl <pod-ip>:80

From all the VMs, including the master, the nginx welcome page should be accessible.

$ curl <master-ip>:<nodePort>

$ curl <node-ip>:<nodePort>

Execute this from all the VMs. The nginx welcome page should be accessible.

Also, Access nginx home page by using browser.

CentOS 7.6 configures Nginx reverse proxy

First, the experiment introduction
We use three CentOS 7 virtual machines to build a simple Nginx reverse-proxy load-balancing cluster. The three virtual machine addresses and roles are:
192.168.1.76 nginx load balancer
192.168.1.82 web01 server
192.168.1.78 web02 server
Second, install the nginx software (the following operations must be carried out on all three virtual machines)
Some CentOS 7.6 installations do not have the wget command installed, so install it yourself:
yum -y install wget

Install the nginx software (it must be installed on all three servers):

$ wget http://dl.Fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
$ rpm -ivh epel-release-latest-7.noarch.rpm
$ yum install nginx (direct yum installation)

Installation is that simple and convenient. After the installation is complete, you can use systemctl to control nginx:
$ systemctl enable nginx (enable at boot)
$ systemctl start nginx (start nginx)
$ systemctl status nginx (view status)
After nginx is installed on each of the three servers, test that it runs normally and serves web pages. If there are errors, the firewall is the likely cause; see the firewall steps at the end.
Now modify the nginx configuration file on the proxy server to implement load balancing. As the name implies, multiple requests are distributed to different servers to balance the load and reduce the pressure on any single server.

$ vi /etc/nginx/nginx.conf (modify configuration file, global configuration file)
For more information on configuration, see:
* Official English Documentation: http://nginx.org/en/docs/
* Official Russian Documentation: http://nginx.org/ru/docs/
user nginx;
worker_processes auto;    # default is auto; you can set it yourself, generally no more than the number of CPU cores
error_log /var/log/nginx/error.log;    # error log path
pid /run/nginx.pid;    # pid file path

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    accept_mutex on;          # serialize accepts to avoid a thundering herd; default is on
    multi_accept on;          # whether a worker accepts multiple new connections at once; default is off
    worker_connections 1024;  # maximum number of connections per worker process
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile            on;
    # tcp_nopush        on;    (left commented out here)
    tcp_nodelay         on;
    keepalive_timeout   65;    # connection timeout
    types_hash_max_size 2048;
    gzip                on;    # enable compression

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;
Here we configure load balancing. Load balancing has multiple strategies; nginx comes with round-robin, weights, ip-hash, response time, and so on.
The default splits the HTTP load by round-robin.
Weights distribute requests according to weight; servers with a higher weight receive a larger share of the load.
ip-hash allocates by client IP, keeping the same IP on the same server.
Response time distributes preferentially to the server that responds fastest, based on nginx's measurements.
These strategies can be combined.
    upstream tomcat {    # "tomcat" is a custom load-balancing (upstream) rule name
        ip_hash;         # use the ip-hash method
        server 192.168.1.78:80 weight=3 fail_timeout=20s;
        server 192.168.1.82:80 weight=4 fail_timeout=20s;
    }    # multiple sets of rules can be defined
    server {
        listen 80 default_server;    # listen on port 80 by default
        listen localhost;            # listening server
        server_name _;
        root /usr/share/nginx/html;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {    # "/" matches all requests; different load rules and services can be set for different domain names
            proxy_pass http://tomcat;    # reverse proxy; fill in your own load-balancing rule name
            proxy_redirect off;          # the settings below can be copied directly; omitting them may lead to problems such as failed authentication
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_connect_timeout 90;    # these are just timeout settings and can be left out
            proxy_send_timeout 90;
            proxy_read_timeout 90;
        }

        # location ~ \.(gif|jpg|png)$ {    # for example, matching by regular expression
        #     root /home/root/images;
        # }

        error_page 404 /404.html;
        location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
    # Settings for a TLS enabled server.
#
server {
listen 443 ssl http2 default_server;
listen [::]:443 ssl http2 default_server;
server_name _;
root /usr/share/nginx/html;
#
ssl_certificate "/etc/pki/nginx/server.crt";
ssl_certificate_key "/etc/pki/nginx/private/server.key";
ssl_session_cache shared:SSL:1m;
ssl_session_timeout 10m;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
#
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
#
location / {
}
#
error_page 404 /404.html;
location = /40x.html {
}
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
}
After the configuration is updated, reload it so the changes take effect without restarting the service:
nginx -s reload
If you can't access it, it may be because the firewall is running and the port is not open:
Start: systemctl start firewalld
Stop: systemctl stop firewalld
View status: systemctl status firewalld
Disable at boot: systemctl disable firewalld
Enable at boot: systemctl enable firewalld
Open a port:
Add:
firewall-cmd --zone=public --add-port=80/tcp --permanent (--permanent makes the rule permanent; without it the rule is lost after a restart)
Reload:
firewall-cmd --reload
View:
firewall-cmd --zone=public --query-port=80/tcp
Delete:
firewall-cmd --zone=public --remove-port=80/tcp --permanent

Apache-based Web virtual host under Linux

A web virtual host refers to running multiple web sites on the same server, where no single site occupies the entire server. It is therefore called a "virtual" web host, and virtual web hosting makes full use of the server's hardware resources.

Using httpd makes it easy to set up a virtual host server: a single httpd service can support a large number of web sites at the same time. There are three types of virtual hosts supported by httpd (similar to the Windows IIS service):

  1. A virtual host with the same IP, port number, and different domain name;
  2. Virtual host with the same IP and different port numbers;
  3. Virtual hosts with different IP addresses and the same port number;

Most operations personnel adopt the first approach when setting up a virtual host: name-based virtual hosts with different domain names, which is also the most user-friendly solution.

First, start building a domain-based virtual host:

  1. Provide domain name resolution for virtual hosts

[root@localhost /]

# vim /etc/named.conf

zone "mohan1.com" in {
type master;
file "mohan1.com.zone";
};

zone "mohan2.com" in {
type master;
file "mohan2.com.zone";
};

[root@localhost /]# vim /var/named/mohan1.com.zone
in ns www.mohan1.com.
www in a 192.168.1.1

[root@localhost /]

# vim /var/named/mohan2.com.zone

    in      ns      www.mohan2.com.

www in a 192.168.1.1

2, prepare web documents for the virtual host

Prepare website directories and web documents for each virtual web host. For convenience of testing, each virtual web host is given a different home page file:

[root@localhost named]

# mkdir -p /var/www/mohan1com

[root@localhost named]

# mkdir -p /var/www/mohan2com

[root@localhost named]

# echo "www.mohan1.com" > /var/www/mohan1com/index.html

[root@localhost named]

# echo "www.mohan2.com" > /var/www/mohan2com/index.html

3, add virtual host configuration

[root@localhost named]

# vim /usr/local/httpd/conf/extra/httpd-vhosts.conf

<VirtualHost *:80>
    ServerAdmin admin@mohan.com
    DocumentRoot "/var/www/mohan1com"
    ServerName www.mohan1.com
    ErrorLog "logs/mohan1-error_log"
    CustomLog "logs/mohan1-access_log" common
    <Directory "/var/www/mohan1com">
        Require all granted
    </Directory>
</VirtualHost>

<VirtualHost *:80>
    ServerAdmin admin@mohan.com
    DocumentRoot "/var/www/mohan2com"
    ServerName www.mohan2.com
    ErrorLog "logs/mohan2-error_log"
    CustomLog "logs/mohan2-access_log" common
    <Directory "/var/www/mohan2com">
        Require all granted
    </Directory>
</VirtualHost>

[root@localhost named]

# vim /usr/local/httpd/conf/httpd.conf

Include conf/extra/httpd-vhosts.conf

[root@localhost named]

# systemctl restart httpd

  1. Access the virtual web host in the client

Verify it, the result is as follows:
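If you don't have a graphical browser on the client, a quick check can be done with curl by forcing the Host header (this assumes the web server is reachable at 192.168.1.1, as configured above):

curl -H "Host: www.mohan1.com" http://192.168.1.1/
curl -H "Host: www.mohan2.com" http://192.168.1.1/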

Second, the virtual host based on IP address:

(I really didn't want to write this part down, since once the previous section is understood this one is rarely used, but... just in case, here it is.)

Note that there is no connection between each method. Don’t confuse IP-based virtual hosts with domain-based ones.

[root@localhost named]

# vim /usr/local/httpd/conf/extra/httpd-vhosts.conf
…………..
ServerAdmin admin@mohan.com
DocumentRoot "/var/www/mohan1com"
ErrorLog "mohan1-error_log"
CustomLog "mohan1-access_log" common
Require all granted

ServerAdmin admin@mohan.com
DocumentRoot "/var/www/mohan2com"
ErrorLog "mohan2-error_log"
CustomLog "mohan2-access_log" common
Require all granted

[root@localhost named]

# vim /usr/local/httpd/conf/httpd.conf
………………….
Include conf/extra/httpd-vhosts.conf

[root@localhost named]

# systemctl restart httpd

Third, the port-based virtual host:

[root@localhost named]

# vim /usr/local/httpd/conf/extra/httpd-vhosts.conf

ServerAdmin admin@mohan.com
DocumentRoot "/var/www/mohan1com"
ErrorLog "mohan1-error_log"
CustomLog "mohan1-access_log" common
Require all granted

ServerAdmin admin@mohan.com
DocumentRoot "/var/www/mohan2com"
ErrorLog "mohan2-error_log"
CustomLog "mohan2-access_log" common
Require all granted

Listen 8000

[root@localhost named]

# vim /usr/local/httpd/conf/httpd.conf
………………….
Include conf/extra/httpd-vhosts.conf

[root@localhost named]

# systemctl restart httpd

Create an SSH server alias on a Linux system

If you frequently access many different remote systems via SSH, this technique will save you some time. You can create SSH aliases for frequently accessed systems via SSH, so you don’t have to remember all the different usernames, hostnames, SSH port numbers, and IP addresses. In addition, it avoids repeatedly entering the same username, hostname, IP address, and port number when SSHing to a Linux server.

Create an SSH alias in Linux
Before I know this trick, I usually use one of the following methods to connect to a remote system via SSH.

Use IP address:

$ ssh 192.168.225.22
Or use the port number, username, and IP address:

$ ssh -p 22 ec2-user@192.168.225.22
Or use the port number, username, and hostname:

$ ssh -p 22 ec2-user@server.example.com
Here

22 is the port number,
ec2-user is the username of the remote system.
192.168.225.22 is the IP of my remote system,
server.example.com is the host name of the remote system.
I believe that most Linux novices and/or some administrators connect to remote systems via SSH in this way. However, if you connect to multiple different systems via SSH, remembering all the hostnames or IP addresses, as well as usernames, is difficult unless you write them down on paper or save them in a text file. Do not worry! This can be easily solved by creating an alias (or shortcut) for the SSH connection.

We can create aliases for SSH commands in two ways .

Method 1 – Use an SSH Profile
This is my preferred method of creating an alias.

We can use the SSH default configuration file to create an SSH alias. To do this, edit the ~/.ssh/config file (if this file doesn’t exist, just create one):

$ vi ~/.ssh/config
Add details for all remote hosts as follows:
Host webserver
HostName 192.168.225.22
User ec2-user

Host dns
HostName server.example.com
User root

Host dhcp
HostName 192.168.225.25
User ec2-user
Port 2233

Create an SSH alias in Linux using an SSH configuration file

Replace the values of the Host, Hostname, User, and Port configuration with your own values. After adding the details of all remote hosts, save and exit the file.

Now you can access the system via SSH using the following command :

$ ssh webserver
$ ssh dns
$ ssh dhcp
It’s that simple!

Access remote systems using SSH aliases

See it? I only used an alias (webserver) to access a remote system with the IP address 192.168.225.22.

Please note that this is only for the current user. If you want to provide an alias for all users (system-wide), add the above line to the /etc/ssh/ssh_config file.

You can also add a lot of other content to your SSH configuration file. For example, if you have configured SSH key-based authentication, the location of the SSH key file is as follows:

Host ubuntu
HostName 192.168.225.50
User senthil
IdentityFile ~/.ssh/id_rsa_remotesystem
Make sure you have replaced the hostname, username, and SSH key file path with your own values.
Now connect to the remote server using the following command:

$ ssh ubuntu
This way, you can add as many remote hosts you want to access via SSH and quickly access them using aliases.

Method 2 – Use a Bash Alias
This is a quick workaround for creating SSH aliases that speeds things up. You can make this task easier with the alias command.

Open the ~/.bashrc or ~/.bash_profile file and add aliases like these:

alias webserver='ssh ec2-user@server.example.com'
alias dns='ssh ec2-user@server.example.com'
alias dhcp='ssh ec2-user@server.example.com -p 2233'
alias ubuntu='ssh ec2-user@server.example.com -i ~/.ssh/id_rsa_remotesystem'
Again, make sure you have replaced the host, hostname, port number, and IP address with your own values. Save the file and exit.
Then, use the command to apply the changes:

$ source ~/.bashrc
or
$ source ~/.bash_profile
In this method, you don't even need to type the ssh command. Instead, just use the alias as shown below.
$ webserver
$ dns
$ dhcp
$ ubuntu

How to create a TCP listener or open ports in unix os

You can create a port listener using Netcat .

yum install nc -y

root@rmohan:~# nc -l 5000
You can also check whether the port is open using the netstat command:

root@vm-rmohan:~# netstat -tulpen | grep nc
tcp 0 0 0.0.0.0:5000 0.0.0.0:* LISTEN 0 710327 17533/nc
You can also check with nc:

Netcat Server listener :

nc -l localhost 5000
Netcat Client :

root@vm-rmohan:~# nc -v localhost 5000
Connection to localhost 5000 port [tcp/*] succeeded!
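Beyond a simple connectivity check, the same listener pattern can be used to move a file between two hosts (a common netcat trick; the host name below is a placeholder):

# on the receiving host: listen and write incoming data to a file
nc -l 5000 > received.txt

# on the sending host: push a file to the listener
nc <receiver-host> 5000 < file.txt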