AWS Notes 2

Limits:
======
VPCs per region: 5
Subnets per VPC: 200
IGW per region: 5
VGW per region: 5
CGW per region: 50
VPN connections per region: 50
Route tables per VPC: 200 (including main route table)
Entries per route table: 50
EIP per region for each account: 5
Security groups per VPC: 100
Rules per security group: 50 (per network interface max limit: 250)
Security groups per network interface: 5
Network ACLs per VPC: 200
Rules per ACL: 20
BGP advertised routes per VPN Connection: 100
Active VPC peering connections per VPC: 50
Outstanding VPC peering connection requests: 25

IAM:
—-
Interfaces:
1. AWS management console
2. CLI
3. IAM Query API
4. Existing Libraries
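As a hedged sketch of interface 2 (the CLI), the commands below create an IAM user, attach an AWS-managed policy, and issue access keys; the user name is a placeholder:

# create a new IAM user
aws iam create-user --user-name devops1
# attach an AWS-managed policy (example policy only)
aws iam attach-user-policy --user-name devops1 --policy-arn arn:aws:iam::aws:policy/ReadOnlyAccess
# generate an access key / secret key pair for programmatic access
aws iam create-access-key --user-name devops1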

MyISAM (non transaction db)
—————————
Steps to create a read replica:
1. Stop all DDL and DML operations on non-transactional tables and wait for them to complete. SELECT statements can continue running.
2. Flush and lock those tables.
3. Create the read replica using the CreateDBInstanceReadReplica API.
4. Check the progress of replication using the DescribeDBInstances API.
5. Once the replica is available, unlock the tables and resume normal database operations (see the CLI sketch below).
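A hedged CLI sketch of steps 2 to 5 (the instance identifiers are placeholders; the SQL statements run on the source database):

# step 2: on the source MySQL server, flush and lock the MyISAM tables
#   mysql> FLUSH TABLES WITH READ LOCK;
# step 3: create the read replica (CLI equivalent of CreateDBInstanceReadReplica)
aws rds create-db-instance-read-replica --db-instance-identifier mydb-replica --source-db-instance-identifier mydb
# step 4: check replication progress (CLI equivalent of DescribeDBInstances)
aws rds describe-db-instances --db-instance-identifier mydb-replica
# step 5: once the replica is available, unlock the tables on the source
#   mysql> UNLOCK TABLES;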

CloudFront – Alternate Domain Names:
————————————-
/images/image.jpg > http://www.mydomain.com/images/image.jpg
instead of http://d111111abcdef8.cloudfront.net/images/image.jpg

1. Add www.mydomain.com as an alternate domain name (CNAME) to your distribution
2. Update or create a CNAME record with your DNS service to route queries for www.mydomain.com to the distribution (see the Route 53 sketch below)
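If your DNS is hosted in Route 53, step 2 might look roughly like this (the hosted zone ID is a placeholder; the distribution domain is the example one above):

cat > cname.json <<'EOF'
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.mydomain.com",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [ { "Value": "d111111abcdef8.cloudfront.net" } ]
      }
    }
  ]
}
EOF
aws route53 change-resource-record-sets --hosted-zone-id Z1D633PJN98FT9 --change-batch file://cname.json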

Elasticity:
———-
3 ways of implementing:
1. Proactive cycle scaling – periodic scaling that occurs at fixed intervals (daily, weekly, monthly, quarterly)
2. Proactive event-based scaling – scaling based on an event like a big surge of traffic requests due to a scheduled business event (new product launch, marketing campaigns, etc.)
3. Auto-scaling based on demand: by using a monitoring service, your system can send triggers to take appropriate actions, scale up or down based on metrics (utilization of servers, network I/O)
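A hedged sketch of option 3 using the AWS CLI, wiring a CloudWatch CPU alarm to a simple scale-out policy (the group name, thresholds, and the policy ARN placeholder are assumptions):

# simple scaling policy: add one instance to the group (returns a PolicyARN)
aws autoscaling put-scaling-policy --auto-scaling-group-name my-asg --policy-name scale-out --scaling-adjustment 1 --adjustment-type ChangeInCapacity
# alarm that triggers the policy when average CPU stays above 70% for 10 minutes
aws cloudwatch put-metric-alarm --alarm-name my-asg-cpu-high --namespace AWS/EC2 --metric-name CPUUtilization --statistic Average --period 300 --evaluation-periods 2 --threshold 70 --comparison-operator GreaterThanThreshold --dimensions Name=AutoScalingGroupName,Value=my-asg --alarm-actions <PolicyARN-returned-by-the-previous-command>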

Instance stores:
——————-
Data on instance stores persists only during the life of the instance.
If an instance reboots, the data persists.

Data is lost under following scenarios:
– Failure of an underlying drive
– Stopping an Amazon EBS-backed instance
– Terminating an instance

Therefore, do not rely on instance store volumes for long-term data; instead, keep a replication strategy across multiple instances, store data on S3, or use EBS volumes.

CloudFormation logical and physical IDs:
———————-
Logical ID and physical ID
The physical ID can be used to view the instance and its properties through the EC2 console, but it is only available after CloudFormation has created the resources.
The logical ID is used for mapping resources in the template, e.g. attaching an EBS volume to an instance: you reference the logical IDs of both the EBS volume and the EC2 instance to specify the mapping.

Elastic load balancer:
———————-
Internet facing load balancer – DNS name, public IP and IGW (internet gateway)
Internal load balancer – DNS name and private IP
DNS of both load balancers are publicly resolvable
Flow: Internet facing load balancer > DNS resolve > Webservers > Internal load balancer > DNS resolve > private IPs > backend instances within the private subnet
application instances behind the load balancer do not need to be in the same subnet

Auto-scaling AMI:
——————
AMI ID used in Auto scaling policy is configured in the “launch configuration”
There are differences between creating a launch configuration from scratch and creating a launch configuration from an existing EC2 instance. When you create a launch configuration from scratch, you specify the image ID, instance type, optional resources (such as storage devices), and optional settings (like monitoring). When you create a launch configuration from a running instance, by default Auto Scaling derives attributes for the launch configuration from the specified instance, plus the block device mapping for the AMI that the instance was launched from (ignoring any additional block devices that were added to the instance after launch).

When you create a launch configuration using a running instance, you can override the following attributes by specifying them as part of the same request: AMI, block devices, key pair, instance profile, instance type, kernel, monitoring, placement tenancy, ramdisk, security groups, Spot price, user data, whether a public IP address is associated with the instance, and whether the instance is EBS-optimized.
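Hedged CLI sketches of both approaches (the names, AMI ID, security group, and instance ID are placeholders):

# launch configuration from scratch
aws autoscaling create-launch-configuration --launch-configuration-name my-lc --image-id ami-12345678 --instance-type t2.micro --key-name MyKeyPair --security-groups sg-3fdcc241
# launch configuration derived from a running instance (attributes are copied; some can be overridden)
aws autoscaling create-launch-configuration --launch-configuration-name my-lc-from-instance --instance-id i-0abcd1234abcd1234 --instance-type t2.small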

Amazon RDS – High Availability (Multi-AZ)
—————————————–

Amazon RDS provides high availability and failover support for DB instances using Multi-AZ deployments. Multi-AZ deployments for Oracle, PostgreSQL, MySQL, and MariaDB DB instances use Amazon technology, while SQL Server DB instances use SQL Server Mirroring.

Note
Amazon Aurora stores copies of the data in a DB cluster across multiple Availability Zones in a single region, regardless of whether the instances in the DB cluster span multiple Availability Zones. For more information on Amazon Aurora, see Aurora on Amazon RDS.
In a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone. The primary DB instance is synchronously replicated across Availability Zones to a standby replica to provide data redundancy, eliminate I/O freezes, and minimize latency spikes during system backups. Running a DB instance with high availability can enhance availability during planned system maintenance, and help protect your databases against DB instance failure and Availability Zone disruption. For more information on Availability Zones, see Regions and Availability Zones.

Note
The high-availability feature is not a scaling solution for read-only scenarios; you cannot use a standby replica to serve read traffic. To service read-only traffic, you should use a Read Replica. For more information, see Working with PostgreSQL, MySQL, and MariaDB Read Replicas.
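As a rough CLI sketch (the DB instance identifier is a placeholder), Multi-AZ is just an attribute of the instance; an existing instance can be converted like this:

# enable Multi-AZ during the next maintenance window
aws rds modify-db-instance --db-instance-identifier mydb --multi-az
# or apply the change immediately
aws rds modify-db-instance --db-instance-identifier mydb --multi-az --apply-immediately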

AWS Storage Gateway:
———————
AWS Storage Gateway connects an on-premises software appliance with cloud-based storage to provide seamless integration with data security features between your on-premises IT environment and the Amazon Web Services (AWS) storage infrastructure. You can use the service to store data in the AWS Cloud for scalable and cost-effective storage that helps maintain data security. AWS Storage Gateway offers file-based, volume-based and tape-based storage solutions
1. File Gateway: a file interface to S3
2. Volume Gateway: iSCSI block devices on premises
– cached volumes: primary data in S3, with frequently accessed data cached locally for low-latency access
– stored volumes: complete data stored locally, asynchronously backed up to S3 as point-in-time snapshots
3. Tape Gateway: a virtual tape library for backing up data to Amazon Glacier

AWS Cloud Formation:
——————–
AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion.

You can use AWS CloudFormation's sample templates or create your own templates to describe the AWS resources, and any associated dependencies or runtime parameters, required to run your application. You don't need to figure out the order for provisioning AWS services or the subtleties of making those dependencies work. CloudFormation takes care of this for you. After the AWS resources are deployed, you can modify and update them in a controlled and predictable way, in effect applying version control to your AWS infrastructure the same way you do with your software. You can also visualize your templates as diagrams and edit them using a drag-and-drop interface with the AWS CloudFormation Designer.
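A minimal CLI sketch of that lifecycle (the stack name, template file, and parameter are assumptions):

# create a stack from a template
aws cloudformation create-stack --stack-name my-stack --template-body file://template.yaml --parameters ParameterKey=KeyName,ParameterValue=MyKeyPair
# watch provisioning progress
aws cloudformation describe-stack-events --stack-name my-stack
# later, apply a modified template in a controlled update
aws cloudformation update-stack --stack-name my-stack --template-body file://template.yaml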

ELB limitations – pre-warming:
—————————–
For pre-warming, the following details are required from the customer:
1. Start or end dates of your test or expected flash traffic
2. Expected request rate per second
3. Total size of the request/response you’ll be testing

Note: To ensure there is no outage during this setup we recommend Multi-AZ setup

Auto-scaling health checks:
—————————
Health Checks for Auto Scaling Instances

Auto Scaling periodically performs health checks on the instances in your Auto Scaling group and identifies any instances that are unhealthy. After Auto Scaling marks an instance as unhealthy, it is scheduled for replacement. For more information, see Replacing Unhealthy Instances.

Instance Health Status

An Auto Scaling instance is either healthy or unhealthy. Auto Scaling determines the health status of an instance using one or more of the following:

Status checks provided by Amazon EC2. For more information, see Status Checks for Your Instances in the Amazon EC2 User Guide for Linux Instances.
Health checks provided by Elastic Load Balancing. For more information, see Health Checks for Your Target Groups in the Application Load Balancer Guide or Configure Health Checks for Your Classic Load Balancer in the Classic Load Balancer Guide.
Custom health checks. For more information, see Instance Health Status and Custom Health Checks.
By default, Auto Scaling health checks use the results of the status checks to determine the health status of an instance. Auto Scaling marks an instance as unhealthy if its instance status is any value other than running or its system status is impaired.

If you have attached a load balancer to your Auto Scaling group, you can optionally have Auto Scaling include the results of Elastic Load Balancing health checks when determining the health status of an instance. After you add these health checks, Auto Scaling also marks an instance as unhealthy if Elastic Load Balancing reports the instance state as OutOfService. For more information, see Adding Health Checks to Your Auto Scaling Group.

Health Check Grace Period

Frequently, an Auto Scaling instance that has just come into service needs to warm up before it can pass the Auto Scaling health check. Auto Scaling waits until the health check grace period ends before checking the health status of the instance. While the EC2 status checks and ELB health checks can complete before the health check grace period expires, Auto Scaling does not act on them until the health check grace period expires. To provide ample warm-up time for your instances, ensure that the health check grace period covers the expected startup time for your application. Note that if you add a lifecycle hook to perform actions as your instances launch, the health check grace period does not start until the lifecycle hook is completed and the instance enters the InService state.

Instance Health Status and Custom Health Checks

If you have custom health checks, you can send the information from your health checks to Auto Scaling so that Auto Scaling can use this information. For example, if you determine that an instance is not functioning as expected, you can set the health status of the instance to Unhealthy. The next time that Auto Scaling performs a health check on the instance, it will determine that the instance is unhealthy and then launch a replacement instance.
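A minimal sketch of feeding a custom health-check result back to Auto Scaling (the instance ID is a placeholder):

# mark an instance unhealthy so Auto Scaling replaces it on the next health check
aws autoscaling set-instance-health --instance-id i-1234567890abcdef0 --health-status Unhealthy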

Internet Gateways
—————–

An Internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in your VPC and the Internet. It therefore imposes no availability risks or bandwidth constraints on your network traffic.

An Internet gateway serves two purposes: to provide a target in your VPC route tables for Internet-routable traffic, and to perform network address translation (NAT) for instances that have been assigned public IPv4 addresses.

An Internet gateway supports IPv4 and IPv6 traffic.

Enabling Internet Access

To enable access to or from the Internet for instances in a VPC subnet, you must do the following:

Attach an Internet gateway to your VPC.
Ensure that your subnet’s route table points to the Internet gateway.
Ensure that instances in your subnet have a globally unique IP address (public IPv4 address, Elastic IP address, or IPv6 address).
Ensure that your network access control and security group rules allow the relevant traffic to flow to and from your instance.
To use an Internet gateway, your subnet’s route table must contain a route that directs Internet-bound traffic to the Internet gateway. You can scope the route to all destinations not explicitly known to the route table (0.0.0.0/0 for IPv4 or ::/0 for IPv6), or you can scope the route to a narrower range of IP addresses; for example, the public IPv4 addresses of your company's public endpoints outside of AWS, or the Elastic IP addresses of other Amazon EC2 instances outside your VPC. If your subnet is associated with a route table that has a route to an Internet gateway, it’s known as a public subnet.

To enable communication over the Internet for IPv4, your instance must have a public IPv4 address or an Elastic IP address that’s associated with a private IPv4 address on your instance. Your instance is only aware of the private (internal) IP address space defined within the VPC and subnet. The Internet gateway logically provides the one-to-one NAT on behalf of your instance, so that when traffic leaves your VPC subnet and goes to the Internet, the reply address field is set to the public IPv4 address or Elastic IP address of your instance, and not its private IP address. Conversely, traffic that’s destined for the public IPv4 address or Elastic IP address of your instance has its destination address translated into the instance’s private IPv4 address before the traffic is delivered to the VPC.

To enable communication over the Internet for IPv6, your VPC and subnet must have an associated IPv6 CIDR block, and your instance must be assigned an IPv6 address from the range of the subnet. IPv6 addresses are globally unique, and therefore public by default.

In the following diagram, Subnet 1 in the VPC is associated with a custom route table that points all Internet-bound IPv4 traffic to an Internet gateway. The instance has an Elastic IP address, which enables communication with the Internet.

You can deploy and update a template and its associated collection of resources (called a stack) by using the AWS Management Console, AWS Command Line Interface, or APIs. CloudFormation is available at no additional charge, and you pay only for the AWS resources needed to run your applications.

 

 

EC2 stop running instance:
————————–
When you stop a running instance, the following happens:

The instance performs a normal shutdown and stops running; its status changes to stopping and then stopped.
Any Amazon EBS volumes remain attached to the instance, and their data persists.
Any data stored in the RAM of the host computer or the instance store volumes of the host computer is gone.
In most cases, the instance is migrated to a new underlying host computer when it’s started.
EC2-Classic: We release the public and private IPv4 addresses for the instance when you stop the instance, and assign new ones when you restart it.
EC2-VPC: The instance retains its private IPv4 addresses and any IPv6 addresses when stopped and restarted. We release the public IPv4 address and assign a new one when you restart it.
EC2-Classic: We disassociate any Elastic IP address that’s associated with the instance. You’re charged for Elastic IP addresses that aren’t associated with an instance. When you restart the instance, you must associate the Elastic IP address with the instance; we don’t do this automatically.
EC2-VPC: The instance retains its associated Elastic IP addresses. You’re charged for any Elastic IP addresses associated with a stopped instance.
When you stop and start a Windows instance, the EC2Config service performs tasks on the instance such as changing the drive letters for any attached Amazon EBS volumes. For more information about these defaults and how you can change them, see Configuring a Windows Instance Using the EC2Config Service in the Amazon EC2 User Guide for Windows Instances.
If you’ve registered the instance with a load balancer, it’s likely that the load balancer won’t be able to route traffic to your instance after you’ve stopped and restarted it. You must de-register the instance from the load balancer after stopping the instance, and then re-register after starting the instance. For more information, see Register or Deregister EC2 Instances for Your Classic Load Balancer in the Classic Load Balancer Guide.
If your instance is in an Auto Scaling group, the Auto Scaling service marks the stopped instance as unhealthy, and may terminate it and launch a replacement instance. For more information, see Health Checks for Auto Scaling Instances in the Auto Scaling User Guide.
When you stop a ClassicLink instance, it’s unlinked from the VPC to which it was linked. You must link the instance to the VPC again after restarting it. For more information about ClassicLink, see ClassicLink.
For more information, see Differences Between Reboot, Stop, and Terminate.

You can modify the following attributes of an instance only when it is stopped:

Instance type
User data
Kernel
RAM disk
If you try to modify these attributes while the instance is running, Amazon EC2 returns the IncorrectInstanceState error.

Stopping and Starting Your Instances

You can start and stop your Amazon EBS-backed instance using the console or the command line.

By default, when you initiate a shutdown from an Amazon EBS-backed instance (using the shutdown, halt, or poweroff command), the instance stops. You can change this behavior so that it terminates instead. For more information, see Changing the Instance Initiated Shutdown Behavior.

To stop and start an Amazon EBS-backed instance using the console

In the navigation pane, choose Instances, and select the instance.

[EC2-Classic] If the instance has an associated Elastic IP address, write down the Elastic IP address and the instance ID shown in the details pane.

Choose Actions, select Instance State, and then choose Stop. If Stop is disabled, either the instance is already stopped or its root device is an instance store volume.

Warning
When you stop an instance, the data on any instance store volumes is erased. Therefore, if you have any data on instance store volumes that you want to keep, be sure to back it up to persistent storage.
In the confirmation dialog box, choose Yes, Stop. It can take a few minutes for the instance to stop.

[EC2-Classic] When the instance state becomes stopped, the Elastic IP, Public DNS (IPv4), Private DNS, and Private IPs fields in the details pane are blank to indicate that the old values are no longer associated with the instance.

While your instance is stopped, you can modify certain instance attributes. For more information, see Modifying a Stopped Instance.

To restart the stopped instance, select the instance, choose Actions, select Instance State, and then choose Start.

In the confirmation dialog box, choose Yes, Start. It can take a few minutes for the instance to enter the running state.

[EC2-Classic] When the instance state becomes running, the Public DNS (IPv4), Private DNS, and Private IPs fields in the details pane contain the new values that we assigned to the instance.

[EC2-Classic] If your instance had an associated Elastic IP address, you must reassociate it as follows:

In the navigation pane, choose Elastic IPs.

Select the Elastic IP address that you wrote down before you stopped the instance.

Choose Actions, and then select Associate address.

Select the instance ID that you wrote down before you stopped the instance, and then choose Associate.

To stop and start an Amazon EBS-backed instance using the command line

You can use one of the following commands. For more information about these command line interfaces, see Accessing Amazon EC2.

stop-instances and start-instances (AWS CLI)
Stop-EC2Instance and Start-EC2Instance (AWS Tools for Windows PowerShell)
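For example, with the AWS CLI (the instance ID is a placeholder):

aws ec2 stop-instances --instance-ids i-1234567890abcdef0
aws ec2 describe-instances --instance-ids i-1234567890abcdef0 --query 'Reservations[].Instances[].State.Name'
aws ec2 start-instances --instance-ids i-1234567890abcdef0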
Modifying a Stopped Instance

You can change the instance type, user data, and EBS-optimization attributes of a stopped instance using the AWS Management Console or the command line interface. You can’t use the AWS Management Console to modify the DeleteOnTermination, kernel, or RAM disk attributes.

To modify an instance attribute

To change the instance type, see Resizing Your Instance.
To change the user data for your instance, see Configuring Instances with User Data.
To enable or disable EBS-optimization for your instance, see Modifying EBS-Optimization.
To change the DeleteOnTermination attribute of the root volume for your instance, see Updating the Block Device Mapping of a Running Instance.
To modify an instance attribute using the command line

You can use one of the following commands. For more information about these command line interfaces, see Accessing Amazon EC2.

modify-instance-attribute (AWS CLI)
Edit-EC2InstanceAttribute (AWS Tools for Windows PowerShell)
Troubleshooting

If you have stopped your Amazon EBS-backed instance and it appears “stuck” in the stopping state, you can forcibly stop it. For more information, see Troubleshooting Stopping Your Instance.

AWS – Amazon Web Service – Concepts

AWS: Amazon Web Services is a cloud service provider, also known as Infrastructure as a Service (IaaS).
– Storage, Computing Power, Databases, Networking, Analytics, Developer Tools, Virtualization, Security.

Major Terminology/Reason/Advantages:
####################################
– High Availability
– Fault Tolerance
– Scalability (grow automatically/dynamically)
– Elasticity (shrink automatically/dynamically)

– Instance (Server)

Services:
########

VPCs: Virtual Private Cloud
*****************************
It is your private section of AWS, where you can place AWS Resources, and allow/restrict access to them.

EC2 (compute power): Elastic Compute Cloud
*******************************************
It is a virtual instance/server/computer that you can use for whatever you like.
ex: a common use is web hosting

EC2- Part#2:
************
It is good for any type of “processing” activity.
ex: at Netflix, video stream encoding and transcoding happen on EC2 instances (the stream is loaded from S3)

Amazon RDS:
***********
It is AWS's provisioned database service, commonly used for things like storing customer account information and cataloging inventory.

AWS S3:
*******
It is a massive, long-term storage service (objects stored in buckets).

 

AWS – Essentials:

 

IAM: Identity & Access Management
**********************************
It is where you manage your AWS users and their access to the AWS account and its services.

Common use:
Users
Group
IAM Access policies
Roles

The user created when you created the AWS account is called the “root” user.

By default, root user has FULL administrative rights and access to every part of AWS

By default, any newly created user has no access to any AWS service (except the ability to log in); permissions must be granted explicitly.

Best Practice: Security Status should be green for all configurations.
***********************************************************************

Activate MFA: Multi-Factor Authentication – similar to an RSA token (available as a virtual or hardware fob)
***************************************************************************************

Create individual IAM users:
*****************************
– As a best practice, do not use the root user for day-to-day work, including administration; create individual IAM users instead

User groups to assign permission:
*********************************
– Create custom groups (e.g. an admin group) and assign permissions at the group level

VPC – Virtual Private Clouds
****************************

Global Infrastructure:
*********************
AWS Regions:
Availability Zones – physical data centers (multiple Availability Zones provide backups, redundancy, high availability, and fault tolerance)

VPC Basics: when you create an account with AWS, a default VPC is created for you, and it includes the following standard components:
************************************************************************************************************************
(1) Internet Gateway – a VPC can have only one IGW; once active AWS resources exist in the VPC, the IGW cannot be detached.
– It is a horizontally scaled, redundant, and highly available VPC component
– Allows communication between instances in your VPC and the internet

Rules/Details for Internet Gateway:
– Only 1 IGW can be attached to a VPC at a time
– An IGW cannot be detached from a VPC while there are active AWS resources in the VPC (such as EC2 instances, RDS databases, etc.)

(2) A Route Table (with predefined routes to the default subnets)
– It contains set of rules, called routes, that are used to determine where network traffic is directed.
– The default VPC already has a ‘main’ route table.

Rules/Details for Route Tables:
– Unlike an IGW, you can have multiple route tables in a VPC
– You cannot delete a route table if it has dependencies (associated subnets)

(3) A network access control list (NACL) (with predefined rules for access)
– It is an optional layer of security for the VPC that acts as a firewall for controlling traffic in and out of one or more subnets.

– The default VPC already has a NACL in place, associated with the default subnets.

Rules/Details for NACL:
– Rules are evaluated lowest to highest based on rule number.
– The first rule found that applies to the traffic type is applied immediately, regardless of any higher-numbered rule that comes after it.
– The default NACL allows all traffic to the default subnets.
– Any newly created NACL denies all traffic by default.
– A subnet can be only associated with ONE NACL at a time.

(4) Subnet to provision AWS resources in (such as EC2 instances)
– A subnet (sub-network) is a sub-section of the VPC's network.

Rules/Details for subnets:
– It must be associated with a route table
– A public subnet has a route to the internet
– A private subnet does not have a route to the internet
– A subnet is located in a specific Availability Zone.

Simple Storage Service (S3)
***************************
– An online, bulk storage service that you can access from almost any device

– It has a simple web service interface that you can use to store and retrieve any amount of data, at any time, from anywhere on the web. It gives any user access to the same highly scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its own global network of websites. The service aims to maximize benefits of scale and to pass those benefits on to users.

– Default 5 GB of storage is free (free tier)

(1) S3 Storage Classes:
These classes are defined based on object/file availability and durability (protection against corruption/loss).

Standard: Default Storage Options
– General all purpose storage
– 99.999999999% object durability (“eleven nines”)
– 99.99% Object availability
– Most expensive

Reduced Redundancy Storage (RRS) – backups
– Designed for non-critical, reproducible objects
– 99.99% object durability
– 99.99% object availability
– less expensive than Standard

Infrequent Access (S3-IA) – not accessed on a day-to-day basis; maybe weekly or monthly
– Designed for objects that you do not access frequently but must be immediately available when accessed
– 99.999999999% object durability (“eleven nines”)
– 99.9% object availability
– Less expensive than Standard/RRS

Glacier
– Designed for long-term archival storage
– May take several hours for objects stored in Glacier to be retrieved
– 99.999999999% object durability (“eleven nines”)
– cheapest S3 Storage (very low cost)

(2) Object Lifecycle:

– It is configured at the bucket level

– However, it can be applied to:
– the entire bucket (applies to all the objects in the bucket)
– one specific folder within a bucket (applies to all the objects in that folder)
– one specific object within a bucket

– You can always delete a lifecycle policy or manually change the storage class back to whatever you like (a CLI sketch follows below)
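A hedged sketch of such a lifecycle policy applied to one folder (prefix) of a bucket: transition to S3-IA after 30 days, to Glacier after 90, expire after 365 (the bucket name, prefix, and periods are assumptions):

cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "archive-logs",
      "Filter": { "Prefix": "logs/" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 365 }
    }
  ]
}
EOF
aws s3api put-bucket-lifecycle-configuration --bucket my-bucket --lifecycle-configuration file://lifecycle.json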

(3) Permissions:

– Permissions can be set at the bucket or object level

– On bucket level you can control
List: who can see the bucket name
Upload/Delete: who can upload objects to, or delete objects from, the bucket
View Permission
Edit Permission

Bucket-level permissions are generally used for “internal” access control

– On the object level, you can control (for each object individually)
Open/download
View permissions
Edit Permissions

You can share specific objects via a link with anyone in the world.

(4) Object Versioning

– S3 versioning is a feature that keeps track of and stores all old/new versions of an object so that you can access and use an older version whenever you like

– Versioning is either ON or OFF
– Once it is turned ON, you can only “suspend” versioning. It can not be fully turned OFF.
– Suspending versioning only prevents versioning going forward. All previously versioned objects will still keep their older versions.
– Versioning can only be set on the bucket level and applies to ALL objects in the bucket
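A rough CLI sketch of turning versioning on and suspending it (bucket and object names are placeholders):

# turn versioning ON for the whole bucket
aws s3api put-bucket-versioning --bucket my-bucket --versioning-configuration Status=Enabled
# suspend it later (older versions are kept)
aws s3api put-bucket-versioning --bucket my-bucket --versioning-configuration Status=Suspended
# list all versions of an object
aws s3api list-object-versions --bucket my-bucket --prefix reports/q1.xlsx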

Elastic Compute Cloud (EC2)
***************************

– Think of EC2 as your basic computer (with an OS, CPU, hard drive, network card, firewall, and RAM)

– EC2 provides scalable computing capacity in AWS Cloud
– It can be used to launch as many or as few virtual servers as you need, configure security and networking, and manage storage

(1) AMI’s – Amazon Machine Images
– A preconfigured package required to launch an EC2 instance;
it includes the OS, software packages, and other required settings.

– you specify an AMI when you launch an instance, and you can launch as many instances from the AMI as you need.
you can also launch instances from as many different AMIs as you need.

(2) Instance Types:
– Determines the CPU/cores available to the instance
– Each instance offers different compute, memory and storage capabilities

(3) Elastic Block Store (EBS)
– Storage volume for an EC2 instance (like hard drive)

– IOPS – input/output operations per second – More IOPS means better volume performance

(4) Security Groups
– Security groups are similar to NACLs in that they allow/deny traffic.
– Security groups are applied at the instance level (as opposed to the subnet level)

– Virtual firewall that controls the traffic for one or more instances
– when you launch instances, you associate one or more security groups with the instance

(5) IP Addressing:
– Private IP addressing for EC2 instances
– By default, all EC2 instances are created with a private IP address,
– which allows instances to communicate with each other as long as they are located in the same VPC

Public IP addressing for EC2 instance
– Instances can be launched with or without a public IP Address (by default) depending on VPC/Subnet settings.
– A public IP address is REQUIRED for the instance to communicate with the internet.

RDS and DynamoDB
******************

RDS – Relational SQL databases (Amazon Aurora, SQL Server, ORACLE, PostgreSQL, MySQL)
DynamoDB – non-relational, NoSQL database (DynamoDB is the only managed NoSQL offering; you can also install/download MongoDB, Cassandra, or Oracle NoSQL yourself)

Simple Notification Service (SNS): in other words, it is an alerting service
***********************************

An AWS service that allows you to automate the sending of email or text message notifications based on events that happen in your AWS account.
Topic – the subject of the notification (e.g. “EC2 instance crashed”)
Subscriber – Person/Group who gets the notification
Publisher – Cloudwatch/human/alarm
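A rough CLI sketch of the topic/subscriber/publisher flow (the topic name, account ID, and email address are placeholders):

# create a topic (returns the topic ARN)
aws sns create-topic --name ec2-alerts
# subscribe a person/group by email (they must confirm the subscription)
aws sns subscribe --topic-arn arn:aws:sns:us-east-1:123456789012:ec2-alerts --protocol email --notification-endpoint ops@example.com
# a publisher (a script, a human, or a CloudWatch alarm) sends the notification
aws sns publish --topic-arn arn:aws:sns:us-east-1:123456789012:ec2-alerts --subject "EC2 alert" --message "Instance i-1234567890abcdef0 crashed"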

AWS CloudWatch: in other words, it is a monitoring service.
****************

It is a service that allows you to monitor various elements of your AWS account.

These alerts are distributed using the SNS service.

Examples:
– set up alerts for monthly billing exceeding a certain amount
– set up alerts for EC2 instance CPU utilization (see the sketch below)
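For example, a CPU-utilization alarm on a single instance that notifies an SNS topic like the one sketched above (the instance ID, threshold, and topic ARN are assumptions):

aws cloudwatch put-metric-alarm --alarm-name i-cpu-high --namespace AWS/EC2 --metric-name CPUUtilization --statistic Average --period 300 --evaluation-periods 2 --threshold 80 --comparison-operator GreaterThanThreshold --dimensions Name=InstanceId,Value=i-1234567890abcdef0 --alarm-actions arn:aws:sns:us-east-1:123456789012:ec2-alerts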

Elastic Load Balancer (ELB) (Classic) :
*************************************
An ELB evenly distributes traffic between EC2 instances that are associated with it.

Auto Scaling:
**************
– Auto Scaling is the process of adding (scaling up) or removing (scaling down) EC2 instances based on the traffic demand for your application.

– It handles the load for your application through Auto Scaling Groups.

– It is a service, not a physical part of the infrastructure.

Lambda – Serverless Computing
******************************

 

AWS – Cloud Computing

 

The AWS Cloud Platform is divided into the following categories:

– Compute and Networking (ex: virtual server and vpc)
EC2 – RHEL, CentOS, Ubuntu, Debian, Fedora, Amazon Linux, Oracle Linux, Microsoft Windows Server
Route53 – DNS system which we configure on AWS
VPC – Virtual Private Cloud
– Storage and CDN (ex: various storage services, plus content that lives on the network edge)
Amazon S3 (store your images, contents and even static websites)
Amazon Glacier (archival system – more economical compared to S3)
Amazon CloudFront
– Databases
Amazon RDS:
– MySQL
– MS SQL Server
– Oracle
– Application Services (notification services, email services, etc.)
– Amazon SES (mass emailing, e.g. e-advertisements)
– Amazon SNS (monitoring/alert emails).
– Deployment and Management (CI-CD)
– Amazon CloudWatch (monitoring service for resources such as servers, storage, even billing, DNS, RDS databases)
– Amazon IAM (manage users and groups using Identity and Access Management)

aws cli part -1

1. Create a VPC

aws ec2 create-vpc --cidr-block 10.0.0.0/16

2. Create a VPC with dedicated tenancy

aws ec2 create-vpc --cidr-block 10.0.0.0/16 --instance-tenancy dedicated

3. Create a VPC with an IPv6 CIDR block

aws ec2 create-vpc --cidr-block 10.16.0.0/16 --amazon-provided-ipv6-cidr-block >> /root/awscreateVPC.json

4. Create a subnet within the VPC

aws ec2 create-subnet --vpc-id vpc-b774aace --cidr-block 10.16.1.0/24 >> /root/awscreateSubnet1.json

aws ec2 create-subnet --vpc-id "vpc-b774aace" --cidr-block "10.16.2.0/24" --availability-zone "us-east-1a" >> /root/awscreateSubnet2.json

6. Delete VPC

aws ec2 delete-vpc --vpc-id vpc-7c6ab405

7. Create route table (a default route table is created during vpc creation)

aws ec2 create-route-table --vpc-id vpc-b774aace >> /root/awscreateRouteTable.json

8. Associate subnet (say our subnet2 id = subnet-2b8a2c07) with the above route table (say route table id = rtb-0068f078)

aws ec2 associate-route-table --route-table-id rtb-0068f078 --subnet-id subnet-2b8a2c07 >> /root/awsassociateRouteTable.json

9. Dissociate subnet from route table

aws ec2 disassociate-route-table --association-id rtbassoc-802b6efb

10. Create Internet Gateway

aws ec2 create-internet-gateway >> /root/awscreateInternetGateway.json

11. Attach Internet Gateway to VPC (an Internet gateway already attached to a VPC cannot be attached to another VPC)

aws ec2 attach-internet-gateway --internet-gateway-id igw-b946d3df --vpc-id vpc-b774aace >> /root/awsattachInternetGateway.json

12. Detach Internet Gateway

aws ec2 detach-internet-gateway --internet-gateway-id igw-b946d3df --vpc-id vpc-b774aace

13. Create Route (to create a new route you need an Internet Gateway, Network Interface, or Virtual Private Gateway as the target)

aws ec2 create-route --route-table-id rtb-714cd209 --destination-cidr-block 0.0.0.0/0 --gateway-id igw-b946d3df

14. Create NACL

aws ec2 create-network-acl --vpc-id vpc-b774aace >> /root/awscreateNetworkACL.json

15. Create NACL entry (to add an allow or deny rule)

aws ec2 create-network-acl-entry --network-acl-id acl-f769128e --ingress --rule-number 25 --protocol tcp --port-range From=22,To=22 --cidr-block 0.0.0.0/0 --rule-action allow

aws ec2 create-network-acl-entry --network-acl-id acl-f769128e --ingress --rule-number 35 --protocol tcp --port-range From=80,To=80 --cidr-block 0.0.0.0/0 --rule-action allow

aws ec2 create-network-acl-entry --network-acl-id acl-f769128e --ingress --rule-number 50 --protocol all --port-range From=0,To=65535 --cidr-block 10.16.2.251/32 --rule-action deny

aws ec2 create-network-acl-entry --network-acl-id acl-f769128e --egress --rule-number 50 --protocol all --port-range From=0,To=65535 --cidr-block 10.16.2.251/32 --rule-action deny

16. Modify NACL Entry

aws ec2 replace-network-acl-entry --network-acl-id acl-f769128e --ingress --rule-number 100 --protocol all --port-range From=0,To=65535 --cidr-block 10.16.2.0/24 --rule-action allow

17. create security group

aws ec2 create-security-group --group-name mySG1 --description "my security group" --vpc-id vpc-b774aace

18. Create SG inbound (To add a rule that allows inbound SSH traffic)

aws ec2 authorize-security-group-ingress --group-id sg-3fdcc241 --protocol tcp --port 22 --cidr 0.0.0.0/0

19. Create SG inbound (to add a rule that allows inbound HTTP traffic from anywhere)

aws ec2 authorize-security-group-ingress --group-id sg-3fdcc241 --protocol tcp --port 80 --cidr 0.0.0.0/0

Note: for https use port 443

20. Create key pair

aws ec2 create-key-pair --key-name MyKeyPair --query 'KeyMaterial' --output text >> /root/awsMyKeyPair.pem

aws ec2 create-key-pair --key-name MyKeyPair --query 'KeyMaterial' --output text | out-file -encoding ascii -filepath MyKeyPair.pem  [Windows PowerShell]

21. Launches the specified number of instances using an AMI for which you have permissions.

aws ec2 run-instances
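The bare command above needs at least an AMI; a fuller illustrative invocation, reusing the key pair, security group, and subnet created in the earlier steps (the AMI ID is a placeholder), might be:

aws ec2 run-instances --image-id ami-0abcdef1234567890 --count 1 --instance-type t2.micro --key-name MyKeyPair --security-group-ids sg-3fdcc241 --subnet-id subnet-2b8a2c07 >> /root/awsrunInstances.json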

15. Delete route table

aws ec2 delete-route-table --route-table-id rtb-4069f138

9. aws ec2 associate-route-table --route-table-id rtb-22574640 --subnet-id subnet-9d4a7b6c
4. To create an endpoint

aws ec2 create-vpc-endpoint --vpc-id vpc-1a2b3c4d --service-name com.amazonaws.us-east-1.s3 --route-table-ids rtb-11aa22bb

This example creates a VPC endpoint between VPC vpc-1a2b3c4d and Amazon S3 in the us-east-1 region, and associates route table rtb-11aa22bb with the endpoint.

5. To create a VPC peering connection between your VPCs

aws ec2 create-vpc-peering-connection --vpc-id vpc-1a2b3c4d --peer-vpc-id vpc-11122233

6. To create a VPC peering connection with a VPC in another account

aws ec2 create-vpc-peering-connection --vpc-id vpc-1a2b3c4d --peer-vpc-id vpc-11122233 --peer-owner-id 123456789012

7. To create a VPN connection with dynamic routing

aws ec2 create-vpn-connection --type ipsec.1 --customer-gateway-id cgw-0e11f167 --vpn-gateway-id vgw-9a4cacf3

8. To create a static route for a VPN connection

aws ec2 create-vpn-connection-route --vpn-connection-id vpn-40f41529 --destination-cidr-block 11.12.0.0/16

9. To create a virtual private gateway
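The command for this step is missing above; a sketch, reusing the gateway and VPC IDs from the earlier examples, would be:

aws ec2 create-vpn-gateway --type ipsec.1 >> /root/awscreateVPNGateway.json
aws ec2 attach-vpn-gateway --vpn-gateway-id vgw-9a4cacf3 --vpc-id vpc-1a2b3c4d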

mod_jk or mod_proxy_ajp ?


A Tomcat servlet container can be put behind an Apache web server using the AJP protocol, which carries all request information from Apache to Tomcat. There are two implementations of the AJP module:

  • mod_jk which must be installed separately
  • mod_proxy_ajp which is a standard module since Apache 2.2

They both use protocol AJP, so they both provide the same functionality.

The advantage of mod_jk is its JkEnvVar directive, which allows you to send any environment variable from Apache to Tomcat as a request attribute. If you need to get, for example, the SSL_CLIENT_S_DN variable with the SSL certificate DN provided by mod_ssl, or the AUTHENTICATE_CN variable provided by mod_ldap, then mod_jk can be directed to send it simply using:

<IfModule mod_jk.c>
   JkEnvVar SSL_CLIENT_S_DN
</IfModule>

while for mod_proxy_ajp, you have to use mod_rewrite to prepend the AJP_ prefix to the variables that you want to send:

<IfModule mod_proxy_ajp.c>
   RewriteRule .* - [E=AJP_SSL_CLIENT_S_DN:%{SSL:SSL_CLIENT_S_DN}]
</IfModule>

which is more complicated and forces you to activate mod_rewrite.

The advantage of mod_proxy_ajp is that it is a standard Apache module, so you do not need to compile and install it yourself.

An example configuration of mod_jk in the Apache httpd.conf file is as follows:

<IfModule mod_jk.c>
 # a list of Tomcat instances
 JkWorkerProperty worker.list=tomcatA,tomcatB
 # connection properties to instance A on localhost
 JkWorkerProperty worker.tomcatA.type=ajp13
 JkWorkerProperty worker.tomcatA.host=localhost
 JkWorkerProperty worker.tomcatA.port=8009
 # connection properties to instance B on some other machine
 JkWorkerProperty worker.tomcatB.type=ajp13
 JkWorkerProperty worker.tomcatB.host=zeus.example.com
 JkWorkerProperty worker.tomcatB.port=8009
 # some other configuration
 JkLogFile "|/usr/bin/cronolog /var/log/apache2/%Y/%m/%d/mod_jk.log"
 JkLogLevel error
 JkShmFile /var/log/apache2/jk.shm
 JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories
 # forwarding URL prefixes to Tomcat instances
 JkMount /opencms tomcatA
 JkMount /otherapp tomcatB
</IfModule>

An example configuration of mod_proxy_ajp is here:

<IfModule mod_proxy_ajp.c>
 <Location "/opencms">
   Allow from all
   ProxyPass ajp://localhost:8009/opencms
 </Location>
 <Location "/otherapp">
   Allow from all
   ProxyPass ajp://zeus.example.com:8009/otherapp
 </Location>
</IfModule>

So mod_jk has a more flexible configuration, but it needs a separate installation and its configuration is more complex. If you have no special requirements, go for mod_proxy_ajp. If you need something special, like using authentication modules from Apache to secure applications in Tomcat, go for mod_jk.

New site configuration

If you are running OpenCms (6.0 or greater) in Tomcat using an Apache front end (with mod_jk or mod_proxy_ajp, NOT MOD_PROXY IN HTTP MODE), there are three basic steps to configuring a new site in your implementation:

Create the containing folder for the site in the OpenCms Explorer

In the OpenCms Explorer view, change to the ‘/’ site, go into the ‘sites’ folder, and create a new folder. The folder name is case-sensitive, so keep track of exactly what you entered. For the examples that follow, we’ll assume the creation of a /sites/MyNewSite folder.

Add site information to OpenCms’s configuration

In order to make your new site available within OpenCms (i.e. displayed in the site list of the workplace), we need to modify the opencms-system.xml configuration file, located in <opencmsroot>/WEB-INF/config/.

Find the section of opencms-system.xml that looks like:

 <sites>
    <workplace-server>http://www.mysite.com</workplace-server>
    <default-uri>/sites/default/</default-uri>
    <site server="www.mysite.com" uri="/sites/default/"/>
 </sites>

and add another site definition as follows:

    <site server="www.mynewsite.com" uri="/sites/MyNewSite/"/>

This tells OpenCms that when it receives a request for www.mynewsite.com, it should serve that request out of the MyNewSite container. I believe you have to restart tomcat or reload opencms for this config file to be reread.

Adjust OpenCms automatic link generation (static export, module-resources)

This configuration is only valid if OpenCms is installed as the ROOT application in Tomcat. Edit the file “WEB-INF/config/opencms-importexport.xml” in your OpenCms installation and change the content of the <vfs-prefix> tag to empty:

<rendersettings>
  <rfs-prefix>${CONTEXT_NAME}/export</rfs-prefix>
  <vfs-prefix></vfs-prefix>
</rendersettings>

Then all links will have empty prefix, i.e. a link to the file /dir/file.html will be /dir/file.html instead of /opencms/dir/file.html.

Configuring the Apache WebServer

httpd.conf

Add the following lines to the httpd.conf file if needed (if not already done) to load the required modules. Other Apache distributions recommend configuring the modules to load in different locations; for Apache 2.2 on a SuSE release this is e.g. done in /etc/sysconfig/apache2, and on Debian you use the a2enmod command to link the files from /etc/apache2/mods-available to /etc/apache2/mods-enabled. In the end, the following lines need to be somehow included in the Apache configuration:

LoadModule jk_module modules/mod_jk.so
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
LoadModule rewrite_module modules/mod_rewrite.so

After the modules are loaded they have to be configured.

mod_jk

If you use mod_jk, put there the following:

<IfModule mod_jk.c>
 JkWorkerProperty worker.list=ocms
 JkWorkerProperty worker.ocms.type=ajp13
 JkWorkerProperty worker.ocms.host=localhost
 JkWorkerProperty worker.ocms.port=8009
 JkLogFile "|/usr/bin/cronolog /var/log/apache2/%Y/%m/%d/mod_jk.log"
 JkLogLevel error
 JkShmFile /var/log/apache2/jk.shm
 JkOptions +RejectUnsafeURI
 JkMount /opencms/* ocms
 JkMount /export/* ocms
 JkMount /resources/* ocms
 JkMountCopy All
</IfModule>

The JkMount directives forward requests for the OpenCms servlet at /opencms and for the /export and /resources directories to Tomcat. The JkMountCopy All directive applies these mounts to all virtual servers. If you plan to use some virtual servers without OpenCms, do not put the JkMount directives here, but mount the prefixes in each virtual server instead.

mod_proxy_ajp

If you use mod_proxy_ajp, put there the following:

  <IfModule mod_proxy_ajp.c>
   <Location "/opencms">
    Allow from all
    ProxyPass ajp://localhost:8009/opencms
   </Location>
   <Location "/export">
    Allow from all
    ProxyPass ajp://localhost:8009/export
   </Location>
   <Location "/resources">
    Allow from all
    ProxyPass ajp://localhost:8009/resources
   </Location>
   <Location "/update">
    Allow from all
    ProxyPass ajp://localhost:8009/resources
   </Location>
  </IfModule>

Defining the virtual hosts

This configuration is for an OpenCms installation which is installed as the ROOT application in Tomcat.

<VirtualHost *:80>
  ServerName www.mysite.com
  ServerAdmin admin@example.com
  DocumentRoot "C:/Tomcat5.5/webapps/ROOT"
  ErrorLog logs/error.log

  # Allow accessing the document root directory 
  <Directory "C:/Tomcat5.5/webapps/ROOT">
    Options FollowSymlinks
    AllowOverride All
    Order allow,deny
    Allow from all
  </Directory>
  
  # If the requested URI is located in the resources folder, do not forward the request
  SetEnvIfNoCase Request_URI ^/resources/.*$ no-jk
  
  # If the requested URI is static content do not forward the request
  SetEnvIfNoCase Request_URI ^/export/.*$ no-jk
  RewriteEngine On
  RewriteLog logs/rewrite.log
  RewriteLogLevel 1

  # Deny access to php files
  RewriteCond %{REQUEST_FILENAME} (.+)\.php(.*)
  RewriteRule (.*) / [F]

  # If the requested URI is NOT located in the resources folder.
  # Prepend /opencms to everything that does not already start with it
  # and force the result to be handled by the next URI-handler ([PT]) (JkMount in this case)
  RewriteCond %{REQUEST_URI} !^/resources/.*$
  RewriteCond %{REQUEST_URI} !^/export/.*$
  RewriteCond %{REQUEST_URI} !^/webdav.*$
  RewriteRule !^/opencms/(.*)$ /opencms%{REQUEST_URI} [PT]

  # These are the settings for static export. If the requested resource is not already
  # statically exported create a new request to the opencms404 handler. This has to be
  # a new request, because the current one would not get through mod_jk because of the "no-jk" var.
  RewriteCond %{REQUEST_URI} ^/export/.*$
  RewriteCond "%{DOCUMENT_ROOT}%{REQUEST_FILENAME}" !-f
  RewriteCond "%{DOCUMENT_ROOT}%{REQUEST_FILENAME}/index_export.html" !-f
  RewriteRule .* /opencms/handle404?exporturi=%{REQUEST_URI}&%{QUERY_STRING} [P]
  
  JkMount /* ocms
</VirtualHost>

This redirect doesn’t work with opencms 7.5.1 for static export.

RewriteRule .* /opencms/handle404?exporturi=%{REQUEST_URI}&%{QUERY_STRING} [P]

so I change it to:

RewriteRule .* http://127.0.0.1:8080/opencms/handle404?exporturi=%{REQUEST_URI}&%{QUERY_STRING} [P]

After the configuration is finished the Apache WebServer needs to be restarted.

Alternative definition

The previous definition is too complex; here is my simpler definition, which works for me:

<VirtualHost 147.251.9.183:80 >
   ServerAdmin admin@example.com
   ServerName www.mysite.com
   DocumentRoot /var/www/mysite
   <Directory /var/www/mysite>
       Options Indexes MultiViews
       AllowOverride None
       Order allow,deny
       allow from all
   </Directory>
   RewriteEngine On
   RewriteRule ^/$ /opencms/ [passthrough]
   RewriteCond %{REQUEST_URI} !^/opencms/.*$
   RewriteCond %{REQUEST_URI} !^/export/.*$
   RewriteCond %{REQUEST_URI} !^/resources/.*$
   RewriteCond %{REQUEST_URI} !^/error/.*$
   RewriteCond %{REQUEST_URI} !^/icons/.*$
   RewriteCond %{REQUEST_URI} !^/update/.*$
   RewriteRule .* /opencms%{REQUEST_URI} [QSA,passthrough]
</VirtualHost>

The configuration rewrites all requests by adding /opencms in front of them, except requests that already have the prefix, or that go to static files, Apache error pages, or Apache file icons.

Configuring Tomcat

Make sure the connector to be used by Apache mod_jk is configured in the server.xml file.

<!-- Define an AJP 1.3 Connector on port 8009 -->
<Connector port="8009"
enableLookups="false" redirectPort="8443" protocol="AJP/1.3" />

Mysql backup

#!/usr/bin/env bash

USER=""
PASSWORD=""
OUTPUTDIR=""
DAYS_TO_KEEP=60

databases=`mysql -u$USER -p$PASSWORD -e "SHOW DATABASES;" | tr -d "| " | grep -v Database`
cd $OUTPUTDIR
for db in $databases; do
    if [[ "$db" != "information_schema" ]] && [[ "$db" != "performance_schema" ]] && [[ "$db" != "mysql" ]] && [[ "$db" != _* ]] ; then
        echo "Dumping database: $db"
        mysqldump -u$USER -p$PASSWORD --databases --events $db | gzip > `date +%Y%m%d`.$db.sql.gz
    fi
done
# delete dumps older than $DAYS_TO_KEEP days
find "$OUTPUTDIR" -name "*.gz" -type f -ctime +$DAYS_TO_KEEP -exec rm '{}' ';'
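To run this backup nightly, a crontab entry along these lines can be used (the script path is an assumption):

# run the dump every night at 02:00 and log its output
0 2 * * * /root/scripts/mysql_backup.sh >> /var/log/mysql_backup.log 2>&1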

How to install Fail2ban in rhel 6 & 7


What is fail2ban?

Fail2ban works by scanning and monitoring log files for selected entries and then bans IPs that show malicious signs, like too many password failures, probing for exploits, etc.


1. Install Fail2Ban

For RHEL 6

rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

For RHEL 7

rpm -Uvh http://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/epel-release-7-11.noarch.rpm

yum install fail2ban

2. Copy the Configuration File

The default fail2ban configuration file is located at /etc/fail2ban/jail.conf. The configuration work should not be done in that file, since it can be modified by package upgrades, but rather copy it so that we can make our changes safely.

We need to copy this to a file called jail.local for fail2ban to find it:


cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local


3. Configure defaults in Jail.Local

The first section of defaults covers the basic rules that fail2ban applies to all services enabled for fail2ban, unless they are overridden in the service's own section. If you want to set up more nuanced protection for your server, you can customize the details in each section.

You can see the default section below.

[DEFAULT]

# “ignoreip” can be an IP address, a CIDR mask or a DNS host. Fail2ban will not
# ban a host which matches an address in this list. Several addresses can be
# defined using space separator.
ignoreip = 127.0.0.1

# “bantime” is the number of seconds that a host is banned.
bantime  = 3600

# A host is banned if it has generated “maxretry” during the last “findtime”
# seconds.
findtime  = 600

# “maxretry” is the number of failures before a host get banned.
maxretry = 3

4. Add a jail file to protect SSH

Although you can add these parameters in the global jail.local file, it is good practice to create separate jail files for each of the services we want to protect with Fail2Ban.

So let's create a new jail for SSH with the vi editor.

vi /etc/fail2ban/jail.d/sshd.local

In the above file, add the following lines of code:

[sshd]
enabled = true
port = ssh
action = iptables-multiport
logpath = /var/log/secure
maxretry = 3
bantime = 3600

5. Restart Fail2Ban

service fail2ban restart

iptables -L
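On RHEL 7, fail2ban is managed by systemd, so the equivalent commands are the ones below; enabling the service at boot is also a good idea on both releases:

# RHEL 7
systemctl enable fail2ban
systemctl restart fail2ban
systemctl status fail2ban
# RHEL 6
chkconfig fail2ban on
service fail2ban restart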

Check Fail2Ban Status

Use fail2ban-client command to query the overall status of the Fail2Ban jails.


fail2ban-client status

You can also query a specific jail status using the following command:

fail2ban-client status sshd

Manually Unban IP Banned by Fail2Ban

If for some reason you want to grant access to an IP that is banned, use the following expression to manually unban an IP address banned by fail2ban:

fail2ban-client set JAIL unbanip IP

e.g. unban IP 192.168.1.101, which was banned by the [sshd] jail:

fail2ban-client set sshd unbanip 192.168.1.101

Nginx environment: force http to jump to https

The company intends to replace http with https in the Nginx environment, which requires http to be force-redirected (jumped) to https. After searching on the Internet, here is a basic summary.
Configure rewrite ^(.*)$ https://$host$1 permanent;

Or in the server configuration return 301 https://$server_name$request_uri;

Or use an if block in the server section (this is for when multiple domain names need to be configured):

if ($host ~* "^rmohan.com$") {

rewrite ^/(.*)$ https://dev.rmohan.com/ permanent;

}

Or in the server configuration error_page 497 https://$host$uri?$args;

Basically, with any of the above methods, visiting the website works fine and the jump is OK.

After the configuration was successful, we prepared to change the address used by the app's interface to https, and this is where a problem appeared.

The investigation found that the first GET request received its response fine, but nothing was passed in for POST requests. I configured $request_body in the nginx log, and the log showed that no parameters arrived; looking at the earlier log entries, the POST had been turned into a GET. That was the key to the problem.

Searching online showed that this was caused by the 301 redirect; replacing it with 307 solved the problem.

301 Moved Permanently: The requested resource has been permanently moved to a new location, and any future references to this resource should use one of the several URIs returned by this response.

307 Temporary Redirect: The requested resource temporarily responds to requests from a different URI. Because such redirection is temporary, the client should continue to send future requests to the original address.

From the above we can see that 301 jump is a permanent redirect, and 307 is a temporary redirect. This is the difference between 301 jumps and 307 jumps.

If the above is not very clear, the difference can be expressed simply and directly:

return 307 https://$server_name$request_uri;

307: For a POST request, indicating that the request has not yet been processed, the client should re-initiate a POST request to the URI in Location.

Changing to the 307 status code forces the redirected request to keep its original method.

The following configuration lets ports 80 and 443 coexist:

They need to be configured in a single server block: add ssl to the 443 listen directive and comment out “ssl on;”, as follows:

server {
listen 80;
listen 443 ssl;
server_name testapp.***.com;
root /data/vhost/test-app;
index index.html index.htm index.shtml index.php;
#ssl on;
ssl_certificate /usr/local/nginx/https/***.crt;
ssl_certificate_key /usr/local/nginx/https/***.key;
ssl_session_timeout 5m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers ECDHE-RSA-AES256-SHA384:AES256-SHA256:RC4:HIGH:!MD5:!aNULL:!eNULL:!NULL:!DH:!EDH:!AESGCM;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
error_page 404 /404.html;
location ~ [^/]\.php(/|$) {
fastcgi_index index.php;
include fastcgi.conf;
fastcgi_pass 127.0.0.1:9000;
#include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
access_log /data/logs/nginx/access.log access;
error_log /data/logs/nginx/error.log crit;
}

The two-server variant:

server {
listen 80;
server_name testapp.***.com;
rewrite ^(.*) https://$server_name$1 permanent;
}

server {
listen 443;
server_name testapp.***.com;
root /data/vhost/test-app;
index index.html index.htm index.shtml index.php;
ssl on;
ssl_certificate /usr/local/nginx/https/***.crt;
ssl_certificate_key /usr/local/nginx/https/***.key;
ssl_session_timeout 5m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers ECDHE-RSA-AES256-SHA384:AES256-SHA256:RC4:HIGH:!MD5:!aNULL:!eNULL:!NULL:!DH:!EDH:!AESGCM;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
error_page 404 /404.html;
location ~ [^/]\.php(/|$) {
fastcgi_index index.php;
include fastcgi.conf;
fastcgi_pass 127.0.0.1:9000;
#include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
access_log /data/logs/nginx/access.log access;
error_log /data/logs/nginx/error.log crit;
}

Offer ssl optimization, the following can be used according to business, not all configuration, the general configuration of the red part on the line

Ssl on;
ssl_certificate /usr/local/https/www.localhost.com.crt;
ssl_certificate_key /usr/local/https/www.localhost.com.key;

Ssl_protocols TLSv1 TLSv1.1 TLSv1.2; #allows only TLS protocol
ssl_ciphers ECDHE-RSA-AES256-SHA384:AES256-SHA256:RC4:HIGH:!MD5:!aNULL:!eNULL:!NULL:!DH:!EDH:! AESGCM; # cipher suite, here used CloudFlare’s Internet facing SSL cipher configurationssl_prefer_server_ciphers on; # negotiated the best encryption algorithm for the server ssl_session_cache builtin: 1000 shared: SSL: 10m;
# Session Cache, the Session cache to the server, which may take up More server resources ssl_session_tickets on; # Open the browser’s Session Ticket cache ssl_session_timeout 10m; # SSL session expiration time ssl_stapling on;
# OCSP Stapling is ON, OCSP is a service for online query certificate revocation, using OCSP Stapling can certificate The valid state information is cached to the server to increase the TLS handshake speed ssl_stapling_verify on; #OCSP Stapling verification opens the resolver 8.8.8.8 8.8.4.4 valid=300s; # is used to query the DNS resolver_timeout 5s of the OCSP server; # query domain timeout time

ERROR 1820 (HY000): You must reset your password using ALTER USER statement before executing this statement.

Create a user in MySQL on Linux

Log in to MySQL as root:

mysql -u root -p

Now create the user with the following command:

CREATE USER 'testdb'@'localhost' IDENTIFIED BY 'test123';

If you get the ERROR 1820 shown above, you have to reset the root password so that it satisfies the MySQL password policy. Simply use the command below to set the root password:

ALTER USER 'root'@'localhost' IDENTIFIED BY 'Root@1234';

It will then report something like "Query OK, 0 rows affected (0.00 sec)".

Now retry the CREATE USER step with a password that satisfies the policy.

If you do not want the password policy and would rather create the user with a simple password, follow the steps below.

Log in to MySQL as root:

mysql -u root -p

Then check the policy status with the command below:

SHOW VARIABLES LIKE 'validate_password%';

The output will look like the listing below.

You can see that validate_password_policy is MEDIUM.

Now you have to change it to LOW so you can proceed in your own way. Set the policy rule to LOW with the following command:

SET GLOBAL validate_password_policy=LOW;

Now check the password policy again as above; you will get output like the listing below.

mysql> SET GLOBAL validate_password_length = 4;
Query OK, 0 rows affected (0.01 sec)

mysql> SHOW VARIABLES LIKE 'validate_password%';
+--------------------------------------+--------+
| Variable_name                        | Value  |
+--------------------------------------+--------+
| validate_password_dictionary_file    |        |
| validate_password_length             | 4      |
| validate_password_mixed_case_count   | 1      |
| validate_password_number_count       | 1      |
| validate_password_policy             | MEDIUM |
| validate_password_special_char_count | 1      |
+--------------------------------------+--------+
6 rows in set (0.00 sec)

mysql> SET GLOBAL validate_password_policy = LOW;
Query OK, 0 rows affected (0.01 sec)
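Putting it together, a minimal sketch of the low-policy path; the user name, password and grant are only examples:

SET GLOBAL validate_password_policy = LOW;
SET GLOBAL validate_password_length = 4;

-- a simple password is now accepted
CREATE USER 'testdb'@'localhost' IDENTIFIED BY 'test123';
-- optional: give the new user rights on its own schema
GRANT ALL PRIVILEGES ON testdb.* TO 'testdb'@'localhost';
FLUSH PRIVILEGES;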

 

The Performance Schema may be disabled by default, depending on the MySQL version and build.

To check, you can run the command:

SHOW VARIABLES LIKE 'performance_schema';

Suppose the output now shows OFF.

To enable it, start the server with the performance_schema variable enabled. For example, use these lines in your my.cnf file:

[mysqld]
performance_schema=ON

More details can be found in the official documentation:

https://dev.mysql.com/doc/refman/en/performance-schema-q
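After restarting mysqld with that setting, a quick sanity check; the digest query is just an illustrative example of the kind of data the Performance Schema exposes:

-- should now report ON
SHOW VARIABLES LIKE 'performance_schema';

-- example: the five statement digests with the highest average latency
SELECT digest_text, count_star, avg_timer_wait
FROM performance_schema.events_statements_summary_by_digest
ORDER BY avg_timer_wait DESC
LIMIT 5;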

MySQL Slave Failed to Open the Relay Log

This problem is a little tricky; there are possible fixes documented on the MySQL website, but sadly the ones I read in the forums and docs did not fix my problem. What I encountered was that the relay-bin logs on my MySQL slave server had already been 'rotated', meaning deleted from the folder. This happens when the slave has been disconnected from the master for quite a long time and has not replicated anything. A simple way to fix this is to flush the logs, but make sure the slave is stopped before using this command…

FLUSH LOGS;

Bring in a fresh copy of the database from the master server and update the slave server's database. THIS IS IMPORTANT! If you don't update the slave database, you will not have the data from the time you were disconnected until you reset the relay logs. So UPDATE YOUR SLAVE WITH THE LATEST DATABASE FROM THE MASTER!

Now that the logs are flushed, all the relay-bin logs will be deleted when the slave is started again. Usually this fixes the problem, but if you start the slave and the failed relay log error is still there, you have to take a more drastic measure: reset the slave. This is what I had to do to fully restore my MySQL slave server. Resetting the slave restores all its settings to defaults (password, username, relay log, port, tables to replicate, etc.), so keep a copy of your settings before actually doing a slave reset. When you're ready to reset the slave, run the command…

RESET SLAVE;

after which you should restore all your settings with a command something like…

CHANGE MASTER TO MASTER_HOST=.....

now start the slave with…

START SLAVE;

check your slave server with…

SHOW SLAVE STATUS\G

look for …

Slave_IO_Running: Yes
Slave_SQL_Running: Yes

Both should be Yes; if not, check your syslog for any other errors. I'll leave it here, since this is what I encountered and how I was able to fix it.
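For reference, the whole recovery sequence looks roughly like this; the host, user, password and log coordinates are placeholders you must replace with your own values:

STOP SLAVE;
FLUSH LOGS;
-- reload the slave from a fresh dump of the master before continuing
RESET SLAVE;
CHANGE MASTER TO
    MASTER_HOST='master.example.com',
    MASTER_USER='repl',
    MASTER_PASSWORD='repl_password',
    MASTER_LOG_FILE='mysql-bin.000123',
    MASTER_LOG_POS=4;
START SLAVE;
SHOW SLAVE STATUS\G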

Edit 5/14/11:

It is possible that after executing the CHANGE MASTER command you'll receive the error below…

ERROR 1201 (HY000): Could not initialize master info structure; more error messages can be found in the MySQL error log

This can occur when the relay logs under /var/lib/mysql were not properly cleaned up and are still there. The next step is to delete them manually, log back in to MySQL, flush the logs, reset the slave and then execute the CHANGE MASTER command again. The file to delete is relay-log.info. This should work now; sometimes I don't know why MySQL can't reset the slave logs on its own.
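A rough sketch of that manual cleanup, assuming the default datadir of /var/lib/mysql and the default relay log naming (adjust both to your installation):

# stop the slave threads first
mysql -u root -p -e "STOP SLAVE;"

# remove the stale relay log metadata and relay logs
rm -f /var/lib/mysql/relay-log.info /var/lib/mysql/*-relay-bin.*

# flush logs, reset the slave, then re-run your CHANGE MASTER TO ... and START SLAVE; statements
mysql -u root -p -e "FLUSH LOGS; RESET SLAVE;"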

Nginx: forcing an HTTP to HTTPS jump turns interface POST requests into GET

The company intends to replace HTTP with HTTPS in the Nginx environment, which requires HTTP to be force-redirected to HTTPS. Searching the Internet, the options basically come down to the following.

Configure a rewrite: rewrite ^(.*)$ https://$host$1 permanent;

Or, in the server block, configure: return 301 https://$server_name$request_uri;

Or use an if block inside the server, which is the approach when multiple domain names need to be configured:

if ($host ~* "^rmohan.com$") {
    rewrite ^/(.*)$ https://dev.rmohan.com/$1 permanent;
}

Or, in the server block, configure: error_page 497 https://$host$uri?$args;
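For reference, here is a minimal sketch of the plain port-80 redirect server using the return method; the server_name below is only an illustrative placeholder:

server {
    listen 80;
    server_name www.rmohan.com;
    # permanently redirect every request to the same host and URI over HTTPS
    return 301 https://$host$request_uri;
}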

With any of the methods above, visiting the website is no problem; the jump to HTTPS works fine.

After the configuration succeeded, we prepared to change the APP interface address to HTTPS, and that is where the problem appeared.

Investigation found that GET requests still returned data, but POST requests came through with no information. I added $request_body to the Nginx log format, and the logged requests arrived without parameters; the frontend logs showed that the POST had been turned into a GET. That was the key to the problem.

An online search revealed that this was caused by the 301 redirect; replacing it with 307 solved the problem.

301 Moved Permanently: the requested resource has been permanently moved to a new location, and any future references to this resource should use one of the URIs returned by this response.

307 Temporary Redirect: the requested resource temporarily responds to requests from a different URI. Because the redirection is temporary, the client should continue to send future requests to the original address.

From the above we can see that a 301 jump is a permanent redirect while a 307 is a temporary redirect; that is the formal difference between them.

That may still not look very clear, so to express the difference simply and directly:

return 307 https://$server_name$request_uri;

307: for a POST request, it indicates that the request has not yet been processed and the client should re-issue the POST to the URI given in the Location header.

Changing to the 307 status code forces the redirect to keep the original request method instead of changing it.
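To make that concrete, here is a minimal sketch of the port-80 server for the app domain using the method-preserving redirect (return with code 307 is supported on reasonably recent Nginx versions; the server_name is a placeholder):

server {
    listen 80;
    server_name testapp.rmohan.com;
    # 307 keeps the original method, so a POST stays a POST after the jump to HTTPS
    return 307 https://$server_name$request_uri;
}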

The following configuration lets ports 80 and 443 coexist:

Both need to be configured in one server block, with ssl added to the 443 listen directive and ssl on; commented out, as follows:

server {
    listen 80;
    listen 443 ssl;
    server_name testapp.***.com;
    root /data/vhost/test-app;
    index index.html index.htm index.shtml index.php;
    #ssl on;
    ssl_certificate /usr/local/nginx/https/***.crt;
    ssl_certificate_key /usr/local/nginx/https/***.key;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ECDHE-RSA-AES256-SHA384:AES256-SHA256:RC4:HIGH:!MD5:!aNULL:!eNULL:!NULL:!DH:!EDH:!AESGCM;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    error_page 404 /404.html;
    location ~ [^/]\.php(/|$) {
        fastcgi_index index.php;
        include fastcgi.conf;
        fastcgi_pass 127.0.0.1:9000;
        #include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
    access_log /data/logs/nginx/access.log access;
    error_log /data/logs/nginx/error.log crit;
}

The alternative with two server blocks:

server {
    listen 80;
    server_name testapp.***.com;
    rewrite ^(.*) https://$server_name$1 permanent;
}

server {
    listen 443;
    server_name testapp.***.com;
    root /data/vhost/test-app;
    index index.html index.htm index.shtml index.php;
    ssl on;
    ssl_certificate /usr/local/nginx/https/***.crt;
    ssl_certificate_key /usr/local/nginx/https/***.key;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ECDHE-RSA-AES256-SHA384:AES256-SHA256:RC4:HIGH:!MD5:!aNULL:!eNULL:!NULL:!DH:!EDH:!AESGCM;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    error_page 404 /404.html;
    location ~ [^/]\.php(/|$) {
        fastcgi_index index.php;
        include fastcgi.conf;
        fastcgi_pass 127.0.0.1:9000;
        #include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
    access_log /data/logs/nginx/access.log access;
    error_log /data/logs/nginx/error.log crit;
}
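A quick way to verify what the redirect does to a POST is to watch the status code and the request curl re-issues; the hostname and payload below are placeholders:

# show the status code and Location header returned for a POST
curl -s -o /dev/null -D - -X POST -d 'a=1' http://testapp.example.com/api

# follow the redirect verbosely: with 301 curl switches the second request to GET,
# with 307 it repeats the POST with the same body
curl -sv -o /dev/null -X POST -d 'a=1' -L http://testapp.example.com/api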

Finally, some SSL optimisation settings. Pick what fits your business rather than configuring everything; the certificate, protocol and cipher directives at the top are generally enough.

ssl on;
ssl_certificate /usr/local/https/www.localhost.com.crt;
ssl_certificate_key /usr/local/https/www.localhost.com.key;

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;            # allow only TLS protocols
ssl_ciphers ECDHE-RSA-AES256-SHA384:AES256-SHA256:RC4:HIGH:!MD5:!aNULL:!eNULL:!NULL:!DH:!EDH:!AESGCM;  # cipher suite, based on CloudFlare's Internet-facing SSL cipher configuration
ssl_prefer_server_ciphers on;                   # prefer the server's cipher order during negotiation
ssl_session_cache builtin:1000 shared:SSL:10m;  # session cache on the server; the builtin cache may use more server resources
ssl_session_tickets on;                         # enable the browser's session-ticket cache
ssl_session_timeout 10m;                        # SSL session expiration time
ssl_stapling on;                                # OCSP stapling: cache the certificate's revocation status on the server to speed up the TLS handshake
ssl_stapling_verify on;                         # verify the stapled OCSP response
resolver 8.8.8.8 8.8.4.4 valid=300s;            # DNS resolver used to query the OCSP server
resolver_timeout 5s;                            # DNS query timeout
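To check that OCSP stapling is actually being served, something like the following can be used (the hostname matches the placeholder certificate above):

# look for "OCSP Response Status: successful" in the handshake output
echo | openssl s_client -connect www.localhost.com:443 -status 2>/dev/null | grep -A 2 'OCSP Response Status'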