
glibc Update and Downgrade Using YUM

1). Check the existing RPM versions and take a backup
#rpm -qa | grep glibc
compat-glibc-headers-2.3.4-2.26
glibc-common-2.5-81
glibc-devel-2.5-81
compat-glibc-2.3.4-2.26
compat-glibc-2.3.4-2.26
glibc-2.5-81
glibc-headers-2.5-81
glibc-devel-2.5-81
glibc-2.5-81
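
The step title mentions a backup; one minimal way to keep copies of the currently installed packages before updating is to download them into a local directory (assuming the yum-utils package, which provides yumdownloader, is available):

# mkdir -p /root/glibc-backup
# yumdownloader --destdir=/root/glibc-backup glibc glibc-common glibc-devel glibc-headers nscd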

2). Create repository metadata with createrepo
Place the new glibc RPM packages in /usr/local/src/new_glibc:

# pwd
/usr/local/src/new_glibc

#createrepo ./
12/12 - glibc-devel-2.5-123.el5_11.1.i386.rpm
Saving Primary metadata
Saving file lists metadata
Saving other metadata

3). Create the new_glibc.repo file
#vim /etc/yum.repos.d/new_glibc.repo
[new-glibc]
baseurl=file:///usr/local/src/new_glibc/
enabled=1
gpgcheck=0
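
The yum output below warns that the repository is "missing name in configuration"; adding a name= line to the repo file would silence that warning. It can also help to clear cached metadata first so yum picks up the new repository (standard yum commands):

# yum clean all
# yum makecache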

# yum repolist
Loaded plugins: katello, product-id, security, subscription-manager
Updating certificate-based repositories.
Repository 'new-glibc' is missing name in configuration, using id
Unable to read consumer identity
new-glibc          | 951 B   00:00
new-glibc/primary  | 10 kB   00:00
new-glibc                    12/12
repo id    repo name                                          status
new-glibc  new-glibc                                          12
rhel-DVD   Red Hat Enterprise Linux 5Server - x86_64 - DVD    3,285
repolist: 3,297

# yum update glibc
Loaded plugins: katello, product-id, security, subscription-manager
Updating certificate-based repositories.
Repository 'new-glibc' is missing name in configuration, using id
Unable to read consumer identity
Skipping security plugin, no data
Setting up Update Process
Resolving Dependencies
Skipping security plugin, no data
--> Running transaction check
--> Processing Dependency: glibc = 2.5-81 for package: glibc-devel
--> Processing Dependency: glibc = 2.5-81 for package: glibc-headers
--> Processing Dependency: glibc = 2.5-81 for package: nscd
--> Processing Dependency: glibc = 2.5-81 for package: glibc-devel
---> Package glibc.i686 0:2.5-123.el5_11.1 set to be updated
--> Processing Dependency: glibc-common = 2.5-123.el5_11.1 for package: glibc
---> Package glibc.x86_64 0:2.5-123.el5_11.1 set to be updated
--> Running transaction check
---> Package glibc-common.x86_64 0:2.5-123.el5_11.1 set to be updated
---> Package glibc-devel.i386 0:2.5-123.el5_11.1 set to be updated
---> Package glibc-devel.x86_64 0:2.5-123.el5_11.1 set to be updated
---> Package glibc-headers.x86_64 0:2.5-123.el5_11.1 set to be updated
---> Package nscd.x86_64 0:2.5-123.el5_11.1 set to be updated
--> Finished Dependency Resolution

Dependencies Resolved

=====================================================================
Package Arch Version Repository Size
=====================================================================
Updating:
glibc i686 2.5-123.el5_11.1 new-glibc 5.4 M
glibc x86_64 2.5-123.el5_11.1 new-glibc 4.8 M
Updating for dependencies:
glibc-common x86_64 2.5-123.el5_11.1 new-glibc 16 M
glibc-devel i386 2.5-123.el5_11.1 new-glibc 2.1 M
glibc-devel x86_64 2.5-123.el5_11.1 new-glibc 2.4 M
glibc-headers x86_64 2.5-123.el5_11.1 new-glibc 602 k
nscd x86_64 2.5-123.el5_11.1 new-glibc 178 k

Transaction Summary
====================================================================
Install 0 Package(s)
Upgrade 7 Package(s)

Total download size: 32 M
Is this ok [y/N]: y
Downloading Packages:
-------------------------------------------------------------------
Total 14 GB/s | 32 MB 00:00
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Updating : glibc-common 1/14
Updating : glibc 2/14
Updating : nscd 3/14
Updating : glibc-headers 4/14
Updating : glibc-devel 5/14
Updating : glibc 6/14
Updating : glibc-devel 7/14
Cleanup : glibc-headers 8/14
Cleanup : glibc-common 9/14
Cleanup : glibc 10/14
Cleanup : glibc 11/14
Cleanup : nscd 12/14
Cleanup : glibc-devel 13/14
Cleanup : glibc-devel 14/14
Installed products updated.

Updated:
glibc.i686 0:2.5-123.el5_11.1
glibc.x86_64 0:2.5-123.el5_11.1

Dependency Updated:
glibc-common.x86_64 0:2.5-123.el5_11.1
glibc-devel.i386 0:2.5-123.el5_11.1
glibc-devel.x86_64 0:2.5-123.el5_11.1
glibc-headers.x86_64 0:2.5-123.el5_11.1
nscd.x86_64 0:2.5-123.el5_11.1

Complete!

#rpm -qa | grep glibc
glibc-devel-2.5-123.el5_11.1
compat-glibc-headers-2.3.4-2.26
compat-glibc-2.3.4-2.26
compat-glibc-2.3.4-2.26
glibc-2.5-123.el5_11.1
glibc-2.5-123.el5_11.1
glibc-devel-2.5-123.el5_11.1
glibc-headers-2.5-123.el5_11.1
glibc-common-2.5-123.el5_11.1

To downgrade back to the previous glibc version:

1). Verify the currently installed (updated) versions

#rpm -qa | grep glibc
glibc-devel-2.5-123.el5_11.1
compat-glibc-headers-2.3.4-2.26
compat-glibc-2.3.4-2.26
compat-glibc-2.3.4-2.26
glibc-2.5-123.el5_11.1
glibc-2.5-123.el5_11.1
glibc-devel-2.5-123.el5_11.1
glibc-headers-2.5-123.el5_11.1
glibc-common-2.5-123.el5_11.1

2). yum downgrade

# yum downgrade glibc glibc-devel glibc-headers glibc-common nscd
Loaded plugins: katello, product-id, security, subscription-manager
Updating certificate-based repositories.
Repository 'new-glibc' is missing name in configuration, using id
Unable to read consumer identity
Setting up Downgrade Process
No Match for available package: nscd-2.5-81.x86_64
Resolving Dependencies
--> Running transaction check
---> Package glibc.i686 0:2.5-81 set to be updated
---> Package glibc.x86_64 0:2.5-81 set to be updated
---> Package glibc.i686 0:2.5-123.el5_11.1 set to be erased
---> Package glibc.x86_64 0:2.5-123.el5_11.1 set to be erased
---> Package glibc-common.x86_64 0:2.5-81 set to be updated
---> Package glibc-common.x86_64 0:2.5-123.el5_11.1 set to be erased
---> Package glibc-devel.i386 0:2.5-81 set to be updated
---> Package glibc-devel.x86_64 0:2.5-81 set to be updated
---> Package glibc-devel.i386 0:2.5-123.el5_11.1 set to be erased
---> Package glibc-devel.x86_64 0:2.5-123.el5_11.1 set to be erased
---> Package glibc-headers.x86_64 0:2.5-81 set to be updated
---> Package glibc-headers.x86_64 0:2.5-123.el5_11.1 set to be erased
--> Finished Dependency Resolution

Dependencies Resolved

===========================================
Package Arch Version Repository Size
===========================================
Downgrading:
glibc i686 2.5-81 rhel-DVD 5.3 M
glibc x86_64 2.5-81 rhel-DVD 4.8 M
glibc-common x86_64 2.5-81 rhel-DVD 16 M
glibc-devel i386 2.5-81 rhel-DVD 2.0 M
glibc-devel x86_64 2.5-81 rhel-DVD 2.4 M
glibc-headers x86_64 2.5-81 rhel-DVD 596 k

Transaction Summary
===========================================
Remove 0 Package(s)
Reinstall 0 Package(s)
Downgrade 6 Package(s)

Total download size: 32 M
Is this ok [y/N]: y
Downloading Packages:
-------------------------------------------
Total 10 GB/s | 32 MB 00:00
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : glibc-common 1/12
Installing : 2/12
Installing : glibc-headers 3/12
Installing : glibc-devel 4/12
Installing : glibc 5/12
Installing : glibc-devel 6/12
Cleanup : glibc-headers 7/12
Cleanup : glibc-common 8/12
Cleanup : glibc 9/12
Cleanup : glibc 10/12
Cleanup : glibc-devel 11/12
Cleanup : glibc-devel 12/12
Installed products updated.

Removed:
glibc.i686 0:2.5-123.el5_11.1
glibc.x86_64 0:2.5-123.el5_11.1
glibc-common.x86_64 0:2.5-123.el5_11.1
glibc-devel.i386 0:2.5-123.el5_11.1
glibc-devel.x86_64 0:2.5-123.el5_11.1
glibc-headers.x86_64 0:2.5-123.el5_11.1

Installed:
glibc.i686 0:2.5-81
glibc.x86_64 0:2.5-81
glibc-common.x86_64 0:2.5-81
glibc-devel.i386 0:2.5-81
glibc-devel.x86_64 0:2.5-81
glibc-headers.x86_64 0:2.5-81

Complete!

3). Verify the downgraded versions
# rpm -qa | grep glibc
glibc-2.5-81
glibc-2.5-81
compat-glibc-headers-2.3.4-2.26
compat-glibc-2.3.4-2.26
compat-glibc-2.3.4-2.26
glibc-headers-2.5-81
glibc-devel-2.5-81
glibc-common-2.5-81
glibc-devel-2.5-81

http://www.cgplacenta.to/
http://www.99res.com/?s=Linux+Academy
http://www.99res.com/archives/312346.html

Tomcat preparation guide:
http://documents.tips/documents/tomcatppt.html

http://ebookee.org/Linux-Academy-Red-Hat-Certified-System-Administrator-rhcsa-V7-professional-Level_2778727.html

http://www.tnctr.com/topic/320006-linux-academy-docker-deep-dive/

http://www.heroturko.info/tutorials/9023-linux-academy-aws-certified-solutions-architect-professional-level.html

http://nitroflare.com/view/7815FEA616C50FA/6._LinuxAcademy_-_Jenkins_and_Build_Automation.part1.rar

http://youbookpdf.com/e-learning/260653-linux-academy-centos-7-enterprise-linux-server-update-professional-level.html

http://youbookpdf.com/e-learning/260656-linux-academy-postgresql-94-administration-professional-level.html


http://dlebook.me/index.php?do=search

http://www.downturk.biz/index.php?do=search

MARIADB

http://itfish.net/article/40768.html

How to setup MariaDB Galera Cluster 10.0 on CentOS

RHEL 7 APACHE CLUSTER

Configure High-Availability Cluster on CentOS 7 / RHEL 7

tomcat 8

https://www.ntu.edu.sg/home/ehchua/programming/howto/Tomcat_HowTo.html


Nginx

The company intends to replace HTTP with HTTPS in the Nginx environment, forcing HTTP requests to redirect to HTTPS. Searching online turns up the following basic approaches.

Configure a rewrite:
rewrite ^(.*)$ https://$host$1 permanent;

Or, in the server block, configure:
return 301 https://$server_name$request_uri;

Or use an if inside the server block; this is needed when multiple domain names are configured (note the $1 to preserve the request path):

if ($host ~* "^rmohan.com$") {
    rewrite ^/(.*)$ https://dev.rmohan.com/$1 permanent;
}

Or configure in the server block:
error_page 497 https://$host$uri?$args;

These are basically the available methods; with any of them, visiting the website works and the redirect itself is fine.

After the configuration succeeded, the plan was to switch the app's API addresses to HTTPS, and that is where a problem appeared.

Investigation showed that the first GET request returned data, but POSTs came through with no payload. After adding $request_body to the nginx log format, the log showed the requests arriving without parameters; checking earlier log entries, the POST requests had been turned into GETs. That was the key to the problem.

An online search revealed that this was caused by the 301 redirect; replacing it with 307 solved the problem.

301 Moved Permanently
The requested resource has been permanently moved to a new location, and any future references to this resource should use one of the URIs returned by this response.

307 Temporary Redirect
The requested resource temporarily responds to requests from a different URI. Because such redirection is temporary, the client should continue to send future requests to the original address.

From the above we can see that 301 is a permanent redirect while 307 is a temporary one; that is the difference between them.

Put more simply and directly, the practical difference is:

return 307 https://$server_name$request_uri;

307: for a POST request, it indicates that the request has not been processed yet and the client should re-issue the POST to the URI given in Location.

Switching to the 307 status code forces the client to keep the original request method when it follows the redirect.
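
A quick way to observe the difference from a client (hypothetical hostname; assumes curl is available) is to send a POST to the HTTP port and print the status code and redirect target; with a 301 the method may be changed to GET on the follow-up request, with a 307 it is preserved:

# curl -s -o /dev/null -w "%{http_code} %{redirect_url}\n" -X POST -d "k=v" http://testapp.example.com/api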

The following configuration lets ports 80 and 443 coexist:

Both listeners go in a single server block, with ssl added to the 443 listen directive. Comment out ssl on;, as follows:

server {
    listen 80;
    listen 443 ssl;
    server_name testapp.***.com;
    root /data/vhost/test-app;
    index index.html index.htm index.shtml index.php;
    #ssl on;
    ssl_certificate /usr/local/nginx/https/***.crt;
    ssl_certificate_key /usr/local/nginx/https/***.key;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ECDHE-RSA-AES256-SHA384:AES256-SHA256:RC4:HIGH:!MD5:!aNULL:!eNULL:!NULL:!DH:!EDH:!AESGCM;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    error_page 404 /404.html;
    location ~ [^/]\.php(/|$) {
        fastcgi_index index.php;
        include fastcgi.conf;
        fastcgi_pass 127.0.0.1:9000;
        #include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
    access_log /data/logs/nginx/access.log access;
    error_log /data/logs/nginx/error.log crit;
}

The two-server-block version:

server {
    listen 80;
    server_name testapp.***.com;
    rewrite ^(.*) https://$server_name$1 permanent;
}

server {
    listen 443;
    server_name testapp.***.com;
    root /data/vhost/test-app;
    index index.html index.htm index.shtml index.php;
    ssl on;
    ssl_certificate /usr/local/nginx/https/***.crt;
    ssl_certificate_key /usr/local/nginx/https/***.key;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ECDHE-RSA-AES256-SHA384:AES256-SHA256:RC4:HIGH:!MD5:!aNULL:!eNULL:!NULL:!DH:!EDH:!AESGCM;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    error_page 404 /404.html;
    location ~ [^/]\.php(/|$) {
        fastcgi_index index.php;
        include fastcgi.conf;
        fastcgi_pass 127.0.0.1:9000;
        #include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
    access_log /data/logs/nginx/access.log access;
    error_log /data/logs/nginx/error.log crit;
}

Finally, some SSL optimizations. The following directives can be applied as your business needs dictate; you do not need all of them, and the core certificate/protocol/cipher lines are usually enough.

ssl on;
ssl_certificate /usr/local/https/www.localhost.com.crt;
ssl_certificate_key /usr/local/https/www.localhost.com.key;

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;            # allow only TLS protocols
ssl_ciphers ECDHE-RSA-AES256-SHA384:AES256-SHA256:RC4:HIGH:!MD5:!aNULL:!eNULL:!NULL:!DH:!EDH:!AESGCM;  # cipher suite; CloudFlare's Internet-facing SSL cipher configuration
ssl_prefer_server_ciphers on;                   # negotiate the best cipher from the server side
ssl_session_cache builtin:1000 shared:SSL:10m;  # session cache on the server; may take up more server resources
ssl_session_tickets on;                         # enable the browser's session ticket cache
ssl_session_timeout 10m;                        # SSL session expiration time
ssl_stapling on;                                # OCSP stapling: cache certificate revocation status on the server to speed up the TLS handshake
ssl_stapling_verify on;                         # verify the stapled OCSP response
resolver 8.8.8.8 8.8.4.4 valid=300s;            # DNS used to query the OCSP server
resolver_timeout 5s;                            # DNS query timeout
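
After editing, it is worth validating the configuration and reloading (standard nginx commands):

# nginx -t
# nginx -s reload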

How to Install and Configure AWS CLI (Linux, OS X, or Unix)

AWS CLI (Command Line Interface)

The AWS Command Line Interface is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.

Steps to Install AWS CLI (Linux, OS X, or Unix)

Prerequisites

1)Linux Machine

2) Python 2.6.5 or above

Here are my machine details:

1)Fedora release 20 (Heisenbug) Linux rmohan 3.16.6-200.fc20.x86_64

2)[root@rmohan ~]# python --version
Python 2.7.5

Download the AWS CLI Bundled Installer

[root@rmohan tmp]# wget https://s3.amazonaws.com/aws-cli/awscli-bundle.zip

--2016-02-10 15:58:20-- https://s3.amazonaws.com/aws-cli/awscli-bundle.zip
Resolving s3.amazonaws.com (s3.amazonaws.com)... 54.231.81.252
Connecting to s3.amazonaws.com (s3.amazonaws.com)|54.231.81.252|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 6678296 (6.4M) [application/zip]
Saving to: 'awscli-bundle.zip'

awscli-bundle.zip 100%[========================================================>] 6.37M 122KB/s in 57s

2016-02-10 15:59:18 (114 KB/s) - 'awscli-bundle.zip' saved [6678296/6678296]

Unzip the package.

[root@rmohan tmp]# unzip awscli-bundle.zip
Archive: awscli-bundle.zip
inflating: awscli-bundle/install
inflating: awscli-bundle/packages/argparse-1.2.1.tar.gz
inflating: awscli-bundle/packages/awscli-1.10.3.tar.gz
inflating: awscli-bundle/packages/botocore-1.3.25.tar.gz
inflating: awscli-bundle/packages/colorama-0.3.3.tar.gz
inflating: awscli-bundle/packages/docutils-0.12.tar.gz
inflating: awscli-bundle/packages/futures-3.0.4.tar.gz
inflating: awscli-bundle/packages/jmespath-0.9.0.tar.gz
inflating: awscli-bundle/packages/ordereddict-1.1.tar.gz
inflating: awscli-bundle/packages/pyasn1-0.1.9.tar.gz
inflating: awscli-bundle/packages/python-dateutil-2.4.2.tar.gz
inflating: awscli-bundle/packages/rsa-3.3.tar.gz
inflating: awscli-bundle/packages/s3transfer-0.0.1.tar.gz
inflating: awscli-bundle/packages/simplejson-3.3.0.tar.gz
inflating: awscli-bundle/packages/six-1.10.0.tar.gz
inflating: awscli-bundle/packages/virtualenv-13.0.3.tar.gz

Switch to the directory and run the installer.

[root@rmohan tmp]# cd awscli-bundle/

[root@rmohan awscli-bundle]# ll
total 8
-rwxr-xr-x 1 root root 4528 Feb 9 14:57 install
drwxr-xr-x 2 root root 340 Feb 10 16:01 packages

[root@rmohan awscli-bundle]# ./install -i /usr/local/aws -b /usr/local/bin/aws
Running cmd: /bin/python virtualenv.py --python /bin/python /usr/local/aws
Running cmd: /usr/local/aws/bin/pip install --no-index --find-links file:///tmp/awscli-bundle/packages awscli-1.10.3.tar.gz
You can now run: /usr/local/bin/aws --version

Verify the same.

[root@rmohan awscli-bundle]# aws --version
aws-cli/1.10.3 Python/2.7.5 Linux/3.16.6-200.fc20.x86_64 botocore/1.3.25

[rmohan@rmohan ~]$ aws help

Before going further, make sure AWS IAM credentials are in place.

Now it is time to configure the AWS CLI.

[rmohan@rmohan ~]$ aws configure
AWS Access Key ID [None]: AKIA***********4OA
AWS Secret Access Key [None]: zdi*******************iZG2oej
Default region name [None]: ap-southeast-1a
Default output format [None]: json

After this, when I tested it, it threw the below error.

[rmohan@rmohan ~]$ aws ec2 describe-instances

Could not connect to the endpoint URL: "https://ec2.ap-southeast-1a.amazonaws.com/"

Fix: the region I configured was wrong; it should end in 1, not 1a (ap-southeast-1 is the region, ap-southeast-1a is an availability zone).

Open the following file to fix it:

[rmohan@rmohan ~]$ vi .aws/config

[rmohan@rmohan ~]$ aws ec2 describe-instances
{
“Reservations”: []
}

Two IMP Conf file..Just for ref.

[rmohan@rmohan ~]$ cat .aws/config
[default]
output = json
region = ap-southeast-1

[rmohan@rmohan ~]$ cat .aws/credentials
[default]
aws_access_key_id = AKIA***********4OA
aws_secret_access_key = zdi*******************iZG2oejT

[rmohan@rmohan ~]$ aws ec2 create-security-group --group-name rk-sg --description "test security group"
{
"GroupId": "sg-33777***56"
}


[rmohan@rmohan ~]$ aws ec2 describe-instances --output table --region ap-southeast-1
-------------------
|DescribeInstances|
+-----------------+
More AWS CLI commands:
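
For example (standard AWS CLI subcommands; output depends on your account):

[rmohan@rmohan ~]$ aws ec2 describe-regions --output table
[rmohan@rmohan ~]$ aws s3 ls
[rmohan@rmohan ~]$ aws ec2 describe-instances --filters "Name=instance-state-name,Values=running"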

Redis master-slave + KeepAlived to achieve high availability

Redis is a non-relational database that we use frequently. It supports diverse data types, handles high concurrency well, and runs in memory, making reads and writes fast. Given how much depends on Redis, how do we make sure it can cope with a node going down while in operation?

So today I am summarizing how to build a Redis master-slave high-availability system. I consulted several online write-ups and found many of them full of pitfalls, so I am sharing a version that works, hoping it helps.

Redis features
Redis is completely open source and free, complies with the BSD license, and is a high-performance key-value database.

Redis and other key-value cache products have the following three characteristics:

Redis supports the persistence of data, can keep the data in memory on the disk, and can be loaded again for use when restarted.

Redis not only supports simple key-value data, but also provides storage for structures such as strings, hashes, lists, sets, and sorted sets.

Redis supports data backup, that is, data backup in master-slave mode.

Redis advantages
Extremely high performance - Redis can read at 100K+ ops/s and write at 80K+ ops/s.

Rich data types - Redis supports Strings, Lists, Hashes, Sets, and Sorted Sets, all binary-safe.

Atomicity - every single Redis operation is atomic, and Redis also supports executing groups of operations atomically (transactions).

Rich features - Redis also supports publish/subscribe, notifications, key expiration, and more.

Prepare the environment

CentOS 7 --> 172.16.81.140 --> Master Redis --> Master Keepalived

CentOS 7 --> 172.16.81.141 --> Slave Redis --> Backup Keepalived

VIP --> 172.16.81.139

Redis (normally 3.0 or later)

KeepAlived (installed directly from the online repositories)

Redis compile and install

cd /opt
tar -zxvf redis-4.0.6.tar.gz
mv redis-4.0.6 redis
cd redis
make MALLOC=libc
make PREFIX=/usr/local/redis install
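
A quick sanity check that the build produced working binaries (redis-server supports --version):

/usr/local/redis/bin/redis-server --version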

2, configure the redis startup script

vim /etc/init.d/redis

#!/bin/sh

# chkconfig: 2345 80 90
# Simple Redis init.d script conceived to work on Linux systems
# as it does use of the /proc filesystem.

# Redis port number
REDISPORT=6379
# Path to the redis-server binary
EXE=/usr/local/redis/bin/redis-server
# Path to the redis-cli binary
CLIEXE=/usr/local/redis/bin/redis-cli
# Redis run PID path
PIDFILE=/var/run/redis_6379.pid
# Path to the redis configuration file
CONF="/etc/redis/redis.conf"
# Redis connection authentication password
REDISPASSWORD=123456

function start () {
    if [ -f $PIDFILE ]
    then
        echo "$PIDFILE exists, process is already running or crashed"
    else
        echo "Starting Redis server..."
        $EXE $CONF &
    fi
}

function stop () {
    if [ ! -f $PIDFILE ]
    then
        echo "$PIDFILE does not exist, process is not running"
    else
        PID=$(cat $PIDFILE)
        echo "Stopping ..."
        $CLIEXE -p $REDISPORT -a $REDISPASSWORD shutdown
        while [ -x /proc/${PID} ]
        do
            echo "Waiting for Redis to shutdown ..."
            sleep 1
        done
        echo "Redis stopped"
    fi
}

function restart () {
    stop
    sleep 3
    start
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart)
        restart
        ;;
    *)
        echo -e "\e[31m Please use $0 [start|stop|restart] as first argument \e[0m"
        ;;
esac

Grant execution permissions:

chmod +x /etc/init.d/redis

Add boot start:

chkconfig --add redis

chkconfig redis on

See: chkconfig --list | grep redis

For this test, the firewall and SELinux were turned off beforehand; in a production environment it is recommended to keep the firewall enabled and open only the required ports.

3, add redis command environment variables

# vi /etc/profile
# Add the following line:
export PATH="$PATH:/usr/local/redis/bin"
# Make the environment variable take effect:
source /etc/profile

4. Start redis service

service redis start

# Check that it started:
ps -ef | grep redis

Note: perform the same steps on both servers to complete the Redis installation. Once installation is finished, we go straight into configuring the master-slave environment.

Redis master-slave configuration

Going back to the design above: the idea is to use 140 as the master, 141 as the slave, and 139 as the VIP (the floating address). The application accesses the Redis database through port 6379 on 139.

In normal operation, when master node 140 goes down, the VIP floats to 141, which takes over as the master node and continues to provide read and write operations.

When 140 comes back, it first synchronizes data from 141; its original data is not lost, and whatever was written to 141 in the meantime is synchronized over. After the data synchronization completes,

the VIP returns to node 140, which becomes the master again because of its higher priority; 141 loses the VIP and becomes the slave node again, restoring the initial state and continuing to provide uninterrupted read and write service.

1, configure the redis configuration file

Master-140 configuration file

vim /etc/redis/redis.conf
bind 0.0.0.0
port 6379
daemonize yes
requirepass 123456
slave-serve-stale-data yes
slave-read-only no

Slave-141 configuration file

vim /etc/redis/redis.conf
bind 0.0.0.0
port 6379
daemonize yes
slaveof 172.16.81.140 6379
masterauth 123456
slave-serve-stale-data yes
slave-read-only no

2. Restart the redis service after the configuration is complete! Verify that the master and slave are normal.

Master node 140 terminal login test:

[root@localhost ~]# redis-cli -a 123456
127.0.0.1:6379> INFO
.
.
.
# Replication
role:master
connected_slaves:1
slave0:ip=172.16.81.141,port=6379,state=online,offset=105768,lag=1
master_replid:f83fcc3c98614d770f2205831fef1e877fa3f482
master_replid2:1f25604997a4ad3eb8344e8155990e78acd93312
master_repl_offset:105768
second_repl_offset:447
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:447
repl_backlog_histlen:105322

Login test from node 141 terminal:

[root@localhost ~]# redis-cli -a 123456
127.0.0.1:6379> info
.
.
.
# Replication
role:slave
master_host:172.16.81.140
master_port:6379
master_link_status:up
master_last_io_seconds_ago:5
master_sync_in_progress:0
slave_repl_offset:105992
slave_priority:100
slave_read_only:0
connected_slaves:0
master_replid:f83fcc3c98614d770f2205831fef1e877fa3f482
master_replid2:1f25604997a4ad3eb8344e8155990e78acd93312
master_repl_offset:105992
second_repl_offset:447
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:239
repl_backlog_histlen:105754
3, synchronization test

Write a key on master node 140 and read it back from slave 141, as sketched below.
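
A minimal check, using a hypothetical key name and the password configured above:

On 140:
redis-cli -a 123456 set testkey hello

On 141:
redis-cli -a 123456 get testkey    # should return "hello"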

If the value written on the master can be read from the slave, the Redis master-slave setup is complete!

KeepAlived configuration to achieve dual hot standby

Use Keepalived to implement VIP, and achieve disaster recovery through notify_master, notify_backup, notify_fault, notify_stop.

1, configure Keepalived configuration file

Master Keepalived Profile

vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
notification_email {
acassen@firewall.loc
failover@firewall.loc
sysadmin@firewall.loc
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 192.168.200.1
smtp_connect_timeout 30
router_id redis01
}

vrrp_script chk_redis {
script "/etc/keepalived/script/redis_check.sh"
interval 2
}

vrrp_instance VI_1 {
state MASTER
interface eno16777984
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}

track_script {
chk_redis
}
virtual_ipaddress {
172.16.81.139
}

notify_master /etc/keepalived/script/redis_master.sh
notify_backup /etc/keepalived/script/redis_backup.sh
notify_fault /etc/keepalived/script/redis_fault.sh
notify_stop /etc/keepalived/script/redis_stop.sh
}

Backup Keepalived configuration

vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
notification_email {
acassen@firewall.loc
failover@firewall.loc
sysadmin@firewall.loc
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 192.168.200.1
smtp_connect_timeout 30
router_id redis02
}

vrrp_script chk_redis {
script "/etc/keepalived/script/redis_check.sh"
interval 2
}

vrrp_instance VI_1 {
state BACKUP
interface eno16777984
virtual_router_id 51
priority 99
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}

track_script {
chk_redis
}
virtual_ipaddress {
172.16.81.139
}

notify_master /etc/keepalived/script/redis_master.sh
notify_backup /etc/keepalived/script/redis_backup.sh
notify_fault /etc/keepalived/script/redis_fault.sh
notify_stop /etc/keepalived/script/redis_stop.sh
}

2, configure the script

Master KeepAlived — 140

Create a script directory: mkdir -p /etc/keepalived/script/

cd /etc/keepalived/script/

[root@localhost script]# cat redis_check.sh
#!/bin/bash

ALIVE=`/usr/local/redis/bin/redis-cli -a 123456 PING`

if [ "$ALIVE" == "PONG" ]; then

echo $ALIVE

exit 0

else

echo $ALIVE

exit 1

fi

[root@localhost script]# cat redis_master.sh
#!/bin/bash
REDISCLI="/usr/local/redis/bin/redis-cli -a 123456"
LOGFILE="/var/log/keepalived-redis-state.log"
sleep 15
echo "[master]" >> $LOGFILE
date >> $LOGFILE
echo "Being master...." >> $LOGFILE 2>&1
echo "Run SLAVEOF cmd ..." >> $LOGFILE
$REDISCLI SLAVEOF 172.16.81.141 6379 >> $LOGFILE 2>&1
if [ $? -ne 0 ]; then
    echo "data rsync fail." >> $LOGFILE 2>&1
else
    echo "data rsync OK." >> $LOGFILE 2>&1
fi

sleep 10    # wait 10 seconds for the data synchronization to complete before cancelling it

echo "Run SLAVEOF NO ONE cmd ..." >> $LOGFILE

$REDISCLI SLAVEOF NO ONE >> $LOGFILE 2>&1
if [ $? -ne 0 ]; then
    echo "Run SLAVEOF NO ONE cmd fail." >> $LOGFILE 2>&1
else
    echo "Run SLAVEOF NO ONE cmd OK." >> $LOGFILE 2>&1
fi

[root@localhost script]# cat redis_backup.sh
#!/bin/bash

REDISCLI="/usr/local/redis/bin/redis-cli -a 123456"

LOGFILE="/var/log/keepalived-redis-state.log"

echo "[backup]" >> $LOGFILE

date >> $LOGFILE

echo "Being slave...." >> $LOGFILE 2>&1

sleep 15    # wait 15 seconds until the data is synchronized to the other side before switching the master-slave role

echo "Run SLAVEOF cmd ..." >> $LOGFILE

$REDISCLI SLAVEOF 172.16.81.141 6379 >> $LOGFILE 2>&1

[root@localhost script]# cat redis_fault.sh
#!/bin/bash

LOGFILE=/var/log/keepalived-redis-state.log

echo "[fault]" >> $LOGFILE

date >> $LOGFILE

[root@localhost script]# cat redis_stop.sh
#!/bin/bash

LOGFILE=/var/log/keepalived-redis-state.log

echo "[stop]" >> $LOGFILE

date >> $LOGFILE

Slave KeepAlived — 141

mkdir -p /etc/keepalived/script/

cd /etc/keepalived/script/

[root@localhost script]# cat redis_check.sh
#!/bin/bash

ALIVE=`/usr/local/redis/bin/redis-cli -a 123456 PING`

if [ "$ALIVE" == "PONG" ]; then

echo $ALIVE

exit 0

else

echo $ALIVE

exit 1

fi

[root@localhost script]# cat redis_master.sh
#!/bin/bash

REDISCLI="/usr/local/redis/bin/redis-cli -a 123456"

LOGFILE="/var/log/keepalived-redis-state.log"

echo "[master]" >> $LOGFILE

date >> $LOGFILE

echo "Being master...." >> $LOGFILE 2>&1

echo "Run SLAVEOF cmd ..." >> $LOGFILE

$REDISCLI SLAVEOF 172.16.81.140 6379 >> $LOGFILE 2>&1

sleep 10    # wait for the data synchronization to complete before cancelling it

echo "Run SLAVEOF NO ONE cmd ..." >> $LOGFILE

$REDISCLI SLAVEOF NO ONE >> $LOGFILE 2>&1

[root@localhost script]# cat redis_backup.sh
#!/bin/bash

REDISCLI="/usr/local/redis/bin/redis-cli -a 123456"

LOGFILE="/var/log/keepalived-redis-state.log"

echo "[backup]" >> $LOGFILE

date >> $LOGFILE

echo "Being slave...." >> $LOGFILE 2>&1

sleep 15    # wait until the data is synchronized before switching roles

echo "Run SLAVEOF cmd ..." >> $LOGFILE

$REDISCLI SLAVEOF 172.16.81.140 6379 >> $LOGFILE 2>&1

[root@localhost script]# cat redis_fault.sh
#!/bin/bash

LOGFILE=/var/log/keepalived-redis-state.log

echo "[fault]" >> $LOGFILE

date >> $LOGFILE

[root@localhost script]# cat redis_stop.sh
#!/bin/bash

LOGFILE=/var/log/keepalived-redis-state.log

echo "[stop]" >> $LOGFILE

date >> $LOGFILE

systemctl start keepalived

systemctl enable keepalived

ps -ef | grep keepalived
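
To verify failover, one simple check (interface name taken from the keepalived configuration above) is to see which node holds the VIP, stop Redis on the master, and watch the VIP and the state log move to the other node:

# ip addr show eno16777984 | grep 172.16.81.139
# service redis stop
# tail -f /var/log/keepalived-redis-state.log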

JBoss Too Many Files Open Error

 

First of all, you want to determine which file(s) remain open. I'm assuming your server runs Linux, so once you know JBoss's PID

ps ax | grep something-that-makes-your-jboss-process-unique

you can do

ls -l /proc/<jboss-pid>/fd

to get a nice list of files that are open at that instant.
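
To just count them rather than list them (same PID as above; standard proc filesystem usage):

ls /proc/<jboss-pid>/fd | wc -l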

What you're going to do next depends a bit on what you see here:

  1. you may just need to up the number of files the server can open a bit with ulimit (also look at systemwide limits on your server)
  2. maybe you spot a number of files your application forgot to close
  3. ….

 

Also, the system-wide max limit is high:

linux-server:~# cat /proc/sys/fs/file-max
202989

and the max ever occupied is well below the limit:
cat /proc/sys/fs/file-nr
6304 0 202989

all users return the same limit (including the jboss user who initiates the app server):
jboss@linux-server:/home$ ulimit -n

A way to have a look at this is to run the lsof command (only as root); it will show you all the open file descriptors.

To fix this, edit /etc/security/limits.conf, add the following lines, and restart your JBoss.

jboss          soft    nofile          16384
jboss          hard    nofile          16384

(assuming your jboss is run by the “jboss” user)

 

 

  • Settings in /etc/security/limits.conf take the following form:
    # vi /etc/security/limits.conf
    #<domain>        <type>  <item>  <value>
    
    *               -       core             <value>
    *               -       data             <value>
    *               -       priority         <value>
    *               -       fsize            <value>
    *               soft    sigpending       <value> eg:57344
    *               hard    sigpending       <value> eg:57444
    *               -       memlock          <value>
    *               -       nofile           <value> eg:1024
    *               -       msgqueue         <value> eg:819200
    *               -       locks            <value>
    *               soft    core             <value>
    *               hard    nofile           <value>
    @<group>        hard    nproc            <value>
    <user>          soft    nproc            <value>
    %<group>        hard    nproc            <value>
    <user>          hard    nproc            <value>
    @<group>        -       maxlogins        <value>
    <user>          hard    cpu              <value>
    <user>          soft    cpu              <value>
    <user>          hard    locks            <value>
    
    • <domain> can be:
      • an user name
      • a group name, with @group syntax
      • the wildcard *, for default entry
      • the wildcard %, can be also used with %group syntax, for maxlogin limit
    • <type> can have the two values:
      • “soft” for enforcing the soft limits
      • “hard” for enforcing hard limits
    • <item> can be one of the following:
      • core – limits the core file size (KB)
      • data – max data size (KB)
      • fsize – maximum filesize (KB)
      • memlock – max locked-in-memory address space (KB)
      • nofile – max number of open files
      • rss – max resident set size (KB)
      • stack – max stack size (KB)
      • cpu – max CPU time (MIN)
      • nproc – max number of processes
      • as – address space limit (KB)
      • maxlogins – max number of logins for this user
      • maxsyslogins – max number of logins on the system
      • priority – the priority to run user process with
      • locks – max number of file locks the user can hold
      • sigpending – max number of pending signals
      • msgqueue – max memory used by POSIX message queues (bytes)
      • nice – max nice priority allowed to raise to values: [-20, 19]
      • rtprio – max realtime priority
  • Exit and re-login from the terminal for the change to take effect.
  • More details can be found from below command:
# man limits.conf
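
After re-login, a quick way to confirm the new limit for the service account (assuming the "jboss" user from the example above):

# su - jboss -c 'ulimit -n'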

Diagnostic Steps

  • To improve performance, we can safely set the process limit for the super-user root to unlimited. Edit the .bashrc file (vi /root/.bashrc) and add the following line:
# vi /root/.bashrc
ulimit -u unlimited

HTTPS TLS performance optimization details


HTTPS (HTTP over SSL) is a security-oriented HTTP channel and can be understood as HTTP + SSL/TLS, that is, adding an SSL/TLS layer under HTTP as a security foundation. The predecessor of TLS is SSL. Currently, TLS 1.2 is widely used.

 

 

TLS performance tuning
TLS is widely believed to slow services down, mainly because early CPUs were slow and only a few sites could afford cryptographic services. But computing power is no longer the bottleneck for TLS. In 2010, Google enabled encryption by default on its e-mail service, after which they stated that SSL/TLS is no longer computationally expensive:

In our front-end services, SSL/TLS calculations account for less than 1% of CPU load, less than 10KB of memory per connection, and less than 2% of network overhead.

1. Latency and connection management
The speed of network communication is determined by two major factors: bandwidth and latency.

Bandwidth: a measure of how much data can be sent per unit of time.
Latency: the time required for a message to travel from one end to the other.
Of the two, bandwidth is the secondary factor, because you can usually buy more bandwidth at any time; latency is unavoidable, because it is a limit imposed by the path the data travels over the network connection.

Latency has a particularly large impact on TLS because of its elaborate handshake, which adds up to two extra round trips during connection initialization.

1.1 TCP Optimization 
Each TCP connection has a speed limit called a congestion window that is initially small and grows over time with guaranteed reliability. This mechanism is called slow start.

Therefore every TCP connection starts out slow, and it is worse for a TLS connection, because the TLS handshake consumes precious initial connection bytes while the congestion window is still small. If the congestion window is large enough, there is no additional delay from slow start. However, if a long handshake exceeds the size of the congestion window, the sender must split it into two blocks, send one block first, wait for acknowledgement (one round trip), grow the congestion window, and then send the rest.

1.1.1 Congestion window tuning
The startup speed limit is called the initial congestion window. RFC 6928 recommends setting the initial congestion window to 10 segments (approximately 15 KB); earlier advice was to start with 2-4 segments.

 

 

 

 

On older Linux platforms, you can change the initial congestion window of the route:

# ip route | while read p; do ip route change $p initcwnd 10; done

1.1.2 Preventing slow start after idle
Slow start can kick in again on a connection that has carried no traffic for a while, reducing its speed, and the speed drops very quickly. On Linux you can disable slow start on idle connections:

# sysctl -w net.ipv4.tcp_slow_start_after_idle=0

This can be made permanent by adding the setting to the /etc/sysctl.conf configuration.
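
A minimal way to persist the setting across reboots (standard sysctl usage):

# echo "net.ipv4.tcp_slow_start_after_idle = 0" >> /etc/sysctl.conf
# sysctl -p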

1.2 Long connections
In most cases the TLS performance impact is concentrated in the handshake at the start of each connection. An important optimization technique is to keep each connection alive and reuse it for as many requests as possible.

The current trend is to use an event-driven web server that handles all communication with a fixed thread pool (even a single thread), reducing the cost of each connection as well as the exposure to attack.

The disadvantage of long connections is that after the last HTTP request completes, the server waits for some time before closing the connection; an idle connection does not consume many resources, but it does reduce the overall scalability of the server. Long connections suit scenarios where a client sends bursts of many requests.

When configuring large long-connection timeouts, it is important to limit the number of concurrent connections to avoid server overload; test and tune the server so it runs within its capacity. If TLS is handled by OpenSSL, make sure the server correctly sets the SSL_MODE_RELEASE_BUFFERS flag.

1.3 HTTP/2.0 
HTTP/2.0 is a binary protocol that provides features such as multiplexing and header compression to improve performance.

1.4 CDNs
Use a CDN to achieve world-class performance: geographically dispersed servers provide edge caching and traffic optimization.

The farther the user is from your server, the slower the network access, and in that case connection establishment is a big limiting factor. To be as close to end users as possible, a CDN operates a large number of geographically distributed servers, which provide two ways to reduce latency: edge caching and connection management.

1.4.1 Edge caching
Because the CDN server is close to the user, it can serve your files as if your server were really there.

1.4.2 Connection management
If your content is dynamic and user-specific, it cannot be cached by the CDN for long. However, a good CDN helps with connection management even without any caching: it can eliminate most of the connection-establishment cost through long-lived connections it maintains itself.

Most of the time spent establishing a connection is spent waiting. To minimize waiting, the CDN routes traffic through its own infrastructure to the point closest to the destination. Because these are the CDN's own fully controlled servers, it can maintain long-lived internal connections.

When using a CDN, the user connects to the nearest CDN node over a short distance, so the network delay of the TLS handshake is very short, and the existing long-distance connection between the CDN and the origin server can be reused directly. In effect, the user gets a fast initial TLS handshake with the CDN plus an already-established connection to the server behind it.

2. TLS protocol optimization
With connection management taken care of, we can focus on the performance characteristics of TLS itself and on tuning the protocol for both security and speed.

2.1 Key exchange
Apart from latency, the biggest cost of TLS is the CPU-intensive cryptography used to negotiate security parameters. This part of the communication is called the key exchange. Its CPU consumption largely depends on the server's chosen private key algorithm, private key length, and key exchange algorithm.

Key length 
The difficulty of cracking a key depends on the length of the key. The longer the key, the more secure it is. However, a longer key also means that it takes more time for encryption and decryption.

Key Algorithms 
There are currently two key algorithms in use: RSA and ECDSA. For RSA, the current recommended minimum length is 2048 bits (112-bit security strength), with 3072 bits (128-bit security strength) to be deployed in the future. ECDSA is superior to RSA in both performance and security: 256-bit ECDSA (128-bit security strength) provides the same security as 3072-bit RSA, with better performance.
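
As a concrete comparison point, a P-256 (secp256r1/prime256v1) ECDSA key and CSR can be generated with OpenSSL (standard openssl subcommands; the file names are arbitrary):

# openssl ecparam -genkey -name prime256v1 -out ecdsa.key
# openssl req -new -key ecdsa.key -out ecdsa.csr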

Key exchange
There are currently two key exchange algorithms available: DHE and ECDHE. DHE is too slow and is not recommended. The performance of the key exchange depends on the length of the configured negotiation parameters: for DHE, the commonly used 1024- and 2048-bit parameters provide 80- and 112-bit security levels respectively; for ECDHE, security and performance depend on the chosen curve, and secp256r1 provides a 128-bit security level.

In practice you cannot combine key and key exchange algorithms at will; only the combinations specified by the protocol can be used.

2.2 Certificates
During a full TLS handshake, the server sends its certificate chain to the client for authentication. The length and correctness of that chain have a large influence on handshake performance.

Use as few certificates as possible
Each certificate in the chain enlarges the handshake. Too many certificates in the chain may overflow the initial TCP congestion window.

Include only required certificates
It is a common mistake to include unnecessary certificates in the chain; each one adds an extra 1-2 KB to the handshake.

Provide a complete certificate chain
The server must provide a complete chain leading to a trusted root certificate.

Use elliptic curve certificate chains
Because ECDSA keys use fewer bits, ECDSA certificates can be smaller.

Avoid binding too many domain names to the same certificate
Each additional domain name increases the size of the certificate; with a large number of names the impact is significant.

2.3 Revocation checks
Certificate revocation status changes constantly, and user agents differ greatly in how they check it; as a server, the only thing you can do is deliver revocation information as quickly as possible.

Use certificates with OCSP information
OCSP is designed for real-time queries, letting the user agent request revocation information for just the website it is visiting with a brief, fast query (one HTTP request). By contrast, a CRL is a list containing a large number of revoked certificates.

Use CAs with fast and reliable OCSP responders
OCSP responder performance differs between CAs, so check a CA's responder track record before committing to it. Another criterion for choosing a CA is how quickly it updates its OCSP responders.

Deploy OCSP stapling
OCSP stapling is a protocol feature that allows the revocation information (the entire OCSP response) to be included in the TLS handshake. With stapling enabled, the user agent gets everything it needs for the revocation check up front and can skip opening a separate connection to the CA's OCSP responder, which improves performance.
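
One way to check from a client whether stapling is actually being served (standard openssl s_client flags; substitute your own host name):

# openssl s_client -connect www.example.com:443 -status < /dev/null 2>/dev/null | grep -i "OCSP"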

2.4 Protocol Compatibility 
If your server is incompatible with features of newer protocol versions (e.g. TLS 1.2), the browser may need several attempts before it can negotiate an encrypted connection. The best way to ensure good TLS performance is to upgrade to the latest TLS protocol stack, with support for newer protocol versions and extensions.

2.5 Hardware Acceleration 
As the CPU speed continues to increase, software-based TLS implementations have run fast enough on normal CPUs to process large numbers of HTTPS requests without specialized encryption hardware. However, installing an accelerator card may increase speed.

Oracle XE backup

ORACLE_BASE=/u01/app/oracle
export ORACLE_BASE
ORACLE_HOME=/u01/app/oracle/product/11.2.0/xe
export ORACLE_HOME
ORACLE_SID=XE
export ORACLE_SID
PATH=$ORACLE_HOME/bin:$PATH
export PATH

BKUP_DEST=/home/mohan/backups
find $BKUP_DEST -name 'backup*.dmp' -mtime +10 -exec rm {} \;



cd /home/mohan/backups && /u01/app/oracle/product/11.2.0/xe/bin/exp schema/password FILE=backup_`date +'%Y%m%d-%H%M'`.dmp STATISTICS=NONE
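
To run this nightly, the environment setup, the cleanup find, and the exp command above can be wrapped in a script and scheduled from cron (hypothetical script path; standard crontab syntax):

0 2 * * * /home/mohan/bin/xe_backup.sh >> /home/mohan/backups/backup.log 2>&1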


startup nomount pfile=/u01/app/oracle/product/11.2.0/xe/dbs/initXE.ora
/

create database
        maxinstances 1
        maxloghistory 2
        maxlogfiles 16
        maxlogmembers 2
        maxdatafiles 30
      datafile '/u01/app/oracle/oradata/XE/system.dbf'
        size 200M reuse autoextend on next 10M maxsize 600M
        extent management local
      sysaux datafile '/u01/app/oracle/oradata/XE/sysaux.dbf'
        size 10M reuse autoextend on next  10M
      default temporary tablespace temp tempfile 
 '/u01/app/oracle/oradata/XE/temp.dbf'
        size 20M reuse autoextend on next  10M maxsize 500M
      undo tablespace undotbs1 datafile '/u01/app/oracle/oradata/XE/undotbs1.dbf'
        size 50M reuse autoextend on next  5M maxsize 500M
       character set cl8mswin1251
       national character set al16utf16
       set time_zone='00:00'
       controlfile reuse
       logfile '/u01/app/oracle/oradata/XE/log1.dbf' size 50m reuse
             , '/u01/app/oracle/oradata/XE/log2.dbf' size 50m reuse
             , '/u01/app/oracle/oradata/XE/log3.dbf' size 50m reuse
      user system identified by oracle
      user sys identified by oracle
/

create tablespace users
       datafile '/u01/app/oracle/oradata/XE/users.dbf'
       size 100M reuse autoextend on next 10M maxsize 11G
       extent management local
/

exit;

 Create additional database in Oracle Express edition 


On my Windows machine I have installed Oracle 11g Express Edition and want to create a new database for testing, but Express Edition does not ship the DBCA utility. So let us discuss how to create an additional database in Oracle Express Edition, i.e. how to create a database manually in an Oracle 11g Windows environment.

S-1:

create directory

 

C:\Windows\system32>mkdir C:\oraclexe\app\oracle\admin\TEST

C:\Windows\system32>cd C:\oraclexe\app\oracle\admin\TEST



C:\oraclexe\app\oracle\admin\TEST>mkdir adump

C:\oraclexe\app\oracle\admin\TEST>mkdir bdump

C:\oraclexe\app\oracle\admin\TEST>mkdir dpdump

C:\oraclexe\app\oracle\admin\TEST>mkdir pfile

 

S-2:

Create new instance


  
C:\Windows\System32>oradim -new -sid test

Instance created.

 

S-3:

create pfile and Password file like below

 

C:\Windows\System32>orapwd file=C:\oraclexe\app\oracle\product\11.2.0\server\database\PWDTEST.ora password=oracle

 

Note: I just copied the pfile (initXE.ora) from the XE database into the new database's pfile location (for my manual database), renamed it from "initXE.ora" to "initTEST.ora", and opened that file.

 

S-4:

Open the pfile "initTEST.ora"



xe.__db_cache_size=411041792
xe.__java_pool_size=4194304
xe.__large_pool_size=4194304
xe.__oracle_base='C:\oraclexe\app\oracle'#ORACLE_BASE set from environment
xe.__pga_aggregate_target=432013312
xe.__sga_target=641728512
xe.__shared_io_pool_size=0
xe.__shared_pool_size=205520896
xe.__streams_pool_size=8388608
*.audit_file_dest='C:\oraclexe\app\oracle\admin\XE\adump'
*.compatible='11.2.0.0.0'
*.control_files='C:\oraclexe\app\oracle\oradata\XE\control.dbf'
*.db_name='XE'
*.DB_RECOVERY_FILE_DEST_SIZE=10G
*.DB_RECOVERY_FILE_DEST='C:\oraclexe\app\oracle\fast_recovery_area'
*.diagnostic_dest='C:\oraclexe\app\oracle\.'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=XEXDB)'
*.job_queue_processes=4
*.memory_target=1024M
*.open_cursors=300
*.remote_login_passwordfile='EXCLUSIVE'
*.sessions=20
*.shared_servers=4
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'

 

S-5:

Change the parameters like below



*.audit_file_dest='C:\oraclexe\app\oracle\admin\TEST\adump'
*.compatible='11.2.0.0.0'
*.control_files='C:\oraclexe\app\oracle\oradata\TEST\control.dbf'
*.db_name='TEST'
*.DB_RECOVERY_FILE_DEST_SIZE=10G
*.DB_RECOVERY_FILE_DEST='C:\oraclexe\app\oracle\fast_recovery_area'
*.diagnostic_dest='C:\oraclexe\app\oracle\.'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=TESTXDB)'
*.job_queue_processes=4
*.memory_target=1024M
*.open_cursors=300
*.remote_login_passwordfile='EXCLUSIVE'
*.sessions=20
*.shared_servers=4
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'

 

S-6:

After modifying the pfile, I started the new instance like below



  C:\Windows\System32>set ORACLE_SID=TEST

C:\Windows\System32>sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Wed Sep 20 12:41:38 2017

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Enter user-name: / as sysdba
Connected to an idle instance.

 

S-7:

Start the database in nomount stage using pfile



SQL> startup nomount pfile='C:\oraclexe\app\oracle\admin\TEST\pfile\initTEST.ora'
ORACLE instance started.

Total System Global Area  644468736 bytes
Fixed Size                  1385488 bytes
Variable Size             192941040 bytes
Database Buffers          444596224 bytes
Redo Buffers                5545984 bytes

 

S-8:

Create the database script


create database TEST
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
MAXINSTANCES 8
MAXLOGHISTORY 292
LOGFILE
  GROUP 1 'C:\oraclexe\app\oracle\oradata\TEST\REDO01.LOG'  SIZE 50M BLOCKSIZE 512,
  GROUP 2 'C:\oraclexe\app\oracle\oradata\TEST\REDO02.LOG'  SIZE 50M BLOCKSIZE 512
DATAFILE'C:\oraclexe\app\oracle\oradata\TEST\SYSTEM.DBF' size 100m autoextend on
sysaux datafile 'C:\oraclexe\app\oracle\oradata\TEST\SYSAUX.DBF' size 100m autoextend on
undo tablespace undotbs1 datafile  'C:\oraclexe\app\oracle\oradata\TEST\UNDOTBS1.DBF' size 100m autoextend on
CHARACTER SET AL32UTF8
;

 

S-9:

Execute the @createdatabase.sql file


SQL> @C:\oraclexe\app\oracle\CREATEDATABASE.SQL

Database created.

 

S-10:

Check the database name and instance status



SQL> select status from v$instance;

STATUS
------------
OPEN

SQL> select * from V$version;

BANNER
--------------------------------------------------------------------------------

Oracle Database 11g Express Edition Release 11.2.0.2.0 - Production
PL/SQL Release 11.2.0.2.0 - Production
CORE    11.2.0.2.0      Production
TNS for 32-bit Windows: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production

SQL> select name from V$database;

NAME
---------
TEST

 

S-11:

Execute the two scripts below


  SQL> @C:\oraclexe\app\oracle\product\11.2.0\server\rdbms\admin\catalog.sql

  SQL> @C:\oraclexe\app\oracle\product\11.2.0\server\rdbms\admin\catproc.sql

How to list all database names in Oracle

1) To view database

select * from v$database;

2) To view instance

select * from v$instance;

3) To view all users

select * from all_users;


4) To view table and columns for a particular user

select tc.table_name Table_name
,tc.column_id Column_id
,lower(tc.column_name) Column_name
,lower(tc.data_type) Data_type
,nvl(tc.data_precision,tc.data_length) Length
,lower(tc.data_scale) Data_scale
,tc.nullable nullable
FROM all_tab_columns tc
,all_tables t
WHERE tc.table_name = t.table_name;



select owner from dba_tables
union
select owner from dba_views;


select username from dba_users;


SQL> SELECT TABLESPACE_NAME FROM USER_TABLESPACES;

Resulting in:

SYSTEM
SYSAUX
UNDOTBS1
TEMP
USERS
EXAMPLE
DEV_DB

It is also possible to query the users in all tablespaces:

SQL> select USERNAME, DEFAULT_TABLESPACE from DBA_USERS;

Or within a specific tablespace (using my DEV_DB tablespace as an example):

SQL> select USERNAME, DEFAULT_TABLESPACE from DBA_USERS where DEFAULT_TABLESPACE = 'DEV_DB';

ROLES DEV_DB
DATAWARE DEV_DB
DATAMART DEV_DB
STAGING DEV_DB

EXP-00002: error in writing to export file


While exporting a table or schema using the exp/imp utility you may come across the below error.
Most of the time this error occurs due to insufficient space available on disk, so confirm that space is available where you are writing the export dump and re-run the export.
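
A quick check before re-running the export (standard commands; substitute your own dump directory):

$ df -h /path/of/dump/directory
$ du -sh /path/of/dump/directory/*.dmp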
[oracle@DEV admin]$ exp test/test@DEV tables=t1,t2,t3,t4 file=exp_tables.dmp log=exp_tables.log
Export: Release 9.2.0.8.0 - Production on Thu Sep 10 12:25:52 2015
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Connected to: Oracle9i Enterprise Edition Release 9.2.0.8.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.8.0 - Production
Export done in US7ASCII character set and AL16UTF16 NCHAR character set
About to export specified tables via Conventional Path …
. . exporting table t1 1270880 rows exported
. . exporting table t2 2248883 rows exported
. . exporting table t3 2864492 rows exported
. . exporting table t4
EXP-00002: error in writing to export file
EXP-00002: error in writing to export file
EXP-00000: Export terminated unsuccessfully