
ZFS installation and configuration in CentOS 6 & 7
ZFS is a combined file system and logical volume manager designed by Sun Microsystems.
The features of ZFS include protection against data corruption, support for high storage capacities,
efficient data compression, integration of the concepts of file system and volume management,
snapshots and copy-on-write clones, continuous integrity checking and automatic repair,
RAID-Z and native NFSv4 ACLs.
Installation:
CentOS 6
# yum install epel-release -y
# yum localinstall --nogpgcheck http://archive.zfsonlinux.org/epel/zfs-release.el6.noarch.rpm
# yum install kernel-devel zfs -y
CentOS 7
# yum localinstall --nogpgcheck http://archive.zfsonlinux.org/epel/zfs-release.el7.noarch.rpm
# yum install epel-release -y
# yum install kernel-devel zfs -y
Check whether the ZFS kernel module is loaded:
# lsmod | grep zfs
Usage :
Add three or more disks to your machine and check the disk labels. Suppose we selected three disks named sda, sdb and sdc.
# zpool create -f mypool raidz sda sdb sdc
# zpool status
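
Once the pool is online you can create datasets and snapshots on it. A minimal sketch (the dataset name "data" and the lz4 compression setting are just examples, not part of the original steps):

# zfs create mypool/data
# zfs set compression=lz4 mypool/data
# zfs snapshot mypool/data@initial
# zfs list -t snapshot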

IPV6 DHCP Server on CentOS 7

Let's go straight to configuring an IPv6 DHCP server.

IPV6 NS : 2015:9:19:ffff::1

IPV6 DHCP SERVER : 2015:9:19:ffff::2

 

# yum install dhcp -y

# cp /usr/share/doc/dhcp-4.2.5/dhcpd6.conf.example /etc/dhcp/dhcpd6.conf

# cat /etc/dhcp/dhcpd6.conf

default-lease-time 2592000;
preferred-lifetime 604800;
option dhcp-renewal-time 3600;
option dhcp-rebinding-time 7200;
allow leasequery;
option dhcp6.info-refresh-time 21600;
dhcpv6-lease-file-name "/var/lib/dhcpd/dhcpd6.leases";

 

subnet6 2015:9:19:ffff::/64 {
range6 2015:9:19:ffff::10 2015:9:19:ffff::1000;
option dhcp6.name-servers 2015:9:19:ffff::1;
option dhcp6.domain-search "iptechnics.ae";
}

# cat /etc/sysconfig/network-scripts/ifcfg-enp3s0

DEVICE=enp3s0
BOOTPROTO=none
ONBOOT=yes
NM_CONTROLLED=no
TYPE=Ethernet
NETMASK=255.255.255.192
IPADDR=10.254.254.154
GATEWAY=10.254.254.129
IPV6INIT=yes
IPV6ADDR="2015:9:19:ffff::2/64"
IPV6_AUTOCONF=yes

# systemctl start dhcpd6

# systemctl enable dhcpd6
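
To confirm the server is actually handing out leases, a quick sanity check (assuming firewalld is running and the client interface is eth0; adjust the names for your setup):

# ss -ulnp | grep ':547'
# firewall-cmd --permanent --add-service=dhcpv6
# firewall-cmd --reload

On a test client:

# dhclient -6 eth0
# ip -6 addr show eth0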

BTRFS in CentOS 7

These days there are many file systems in use across operating systems and block devices. Compared with them, BTRFS is a newcomer: a copy-on-write (CoW) filesystem for Linux aimed at implementing advanced features while focusing on fault tolerance, repair and easy administration.

CentOS/RHEL now supports BTRFS as well.

So let's start the tutorial.

Consider we have one disk /dev/sdb .

# fdisk /dev/sdb

Create a partition called /dev/sdb1.

Format it using BTRFS:

# mkfs.btrfs /dev/sdb1

For testing purposes we will mount it at /mnt:

# mount /dev/sdb1 /mnt

Now the BTRFS filesystem is mounted under /mnt. Verify with the df command.

BTRFS works with subvolumes. First we need to create a subvolume; we don't need to unmount the file system for that.

# btrfs sub create /mnt/volume1

# tree /mnt

# btrfs sub list /mnt

Now we need to remount with "volume1" as the mounted subvolume. For that:

# umount /mnt

# mount -o subvol=volume1 /dev/sdb1 /mnt

With these commands we can mount particular subvolumes at our mount points.

# btrfs sub get-default /mnt

The above command shows the default subvolume.

To create new sub volume under volume1 ,

# btrfs sub create /mnt/newsub1

# btrfs sub create /mnt/newsub2

The above commands create new subvolumes newsub1 and newsub2 under volume1.

Now the subvolume tree looks like:

VOLUME1 -> NEWSUB1
        -> NEWSUB2

OK. Now we need to create another top-level subvolume like volume1. For that:

# umount /mnt

# mount -o subvolid=0 /dev/sdb1 /mnt

# btrfs sub create /mnt/volume2

# cd /mnt

# ls

volume1    volume2

 

Snapshot using BTRFS

Now we know how to create a BTRFS filesystem and subvolumes. This is only the beginning: with BTRFS we can also configure RAID levels, remote transfers and more. We can discuss those mechanisms later.

How can we take a snapshot? It's simple.

Suppose we need to take a snapshot of volume2.

# cd /mnt

# btrfs sub snap volume2 volume2.snap

The format is: btrfs sub snap <source> <destination>

# btrfs sub list /mnt

This command lists all subvolumes, including snapshots.
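
Because a snapshot is just another subvolume, restoring a single file is an ordinary copy. A sketch, assuming the top level is still mounted at /mnt and "somefile" is a hypothetical file that existed in volume2 when the snapshot was taken:

# cp /mnt/volume2.snap/somefile /mnt/volume2/somefile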

 

How to take CentOS 7 root (/) snapshot ?

By default CentOS/RHEL installs on the XFS file system. Try installing CentOS with the BTRFS file system instead. (Converting an existing file system to BTRFS will be covered in a later section.)

After installing CentOS/RHEL with a BTRFS filesystem:

# btrfs sub list /

ID 257 gen 871 top level 5 path root
ID 260 gen 41 top level 257 path var/lib/machines

From this we know that root and var/lib/machines are subvolumes. But the command below shows that the default subvolume mounted under / is ID 5 (FS_TREE), the top level. For more detail you can check /etc/fstab.

# btrfs sub get-default /

ID 5 (FS_TREE)

To take a snapshot we need to mount the BTRFS partition at another folder; first we need to know which partition that is.

# df -h

Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        18G  1.1G   15G   7% /

From this we can see that /dev/sda3 is the partition we need to mount.

# mkdir /btrfs

# mount -o subvolid=0 /dev/sda3 /btrfs

# cd /btrfs

# ls

root

To make a snapshot:

# btrfs sub snap root root-$(date +%Y%m%d)

# ls

root  root-20160607

# btrfs sub list /

ID 257 gen 946 top level 5 path root
ID 260 gen 41 top level 257 path root/var/lib/machines
ID 262 gen 946 top level 5 path root-20160607

Now we have one snapshot of our root file system .

To boot from our new snapshot we need to change some configuration.

Remove all the "rootflags=subvol=root" arguments from /boot/grub2/grub.cfg. If you don't do this, GRUB will disregard the default subvolume ID we are about to set and always boot into the root subvolume.

# sed -i 's/rootflags=subvol=root //' /boot/grub2/grub.cfg

# btrfs sub set-default 262 /.

# reboot

After booting up, check the root subvolume:

# btrfs subvolume show /.

Name:                   root-20160607
uuid:                   1a65c55f-4e07-134c-8e34-f17574d2f4ac
Parent uuid:            dfd7589a-0fa4-5c4b-8b95-312d6c619f69
Creation time:          2016-06-07 02:05:05
Object ID:              262
Generation (Gen):       958
Gen at creation:        946
Parent:                 5
Top Level:              5
Flags:                  -
Snapshot(s):

Success. Now we know how to create a snapshot and set it as the default, right?

So, how can we delete unwanted subvolumes? It's easy.

# mount -o subvolid=0 /dev/sda3 /btrfs

# cd /btrfs

# ls

root root-20160607

# btrfs sub delete root

The ordinary rm command will not work here; a subvolume must be deleted with btrfs sub delete.

 

Snapper Utility

Snapper is an advanced tool for snapshotting BTRFS systems.

To install ,

# yum install snapper -y

The snapper utility works with a configuration file and a hidden .snapshots directory. Configurations are stored under /etc/snapper/configs/.

When we create a snapper configuration it automatically creates the config file under /etc/snapper/configs/ and a .snapshots directory under that subvolume.

For example ,

We will use /dev/sdb1 with volume1 as the default subvolume.

# mount -o subvol=volume1 /dev/sdb1 /mnt

# cd /mnt

# snapper -c volume1 create-config /mnt/

This command creates the volume1 config file under /etc/snapper/configs/ and a .snapshots folder at this location.

To take a snapshot using snapper ,

# snapper -c volume1 create -d "First snapshot"

# snapper -c volume1 list

To see the differences between snapshots:

# snapper -c volume1 status 1..0

This compares snapshot ID 1 with 0 (the current state) and lists the differences.

If you need to recover files from a previous snapshot:

# snapper -c volume1 undochange 1..0

Task to try yourself: create some files under the subvolume, take a snapshot, then revert using snapper. A sketch of one possible solution follows.
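
One possible solution, sketched out (the snapshot IDs will differ on your system, so check them with the list command first; important.txt is just an example file):

# touch /mnt/important.txt
# snapper -c volume1 create -d "before accident"
# rm /mnt/important.txt
# snapper -c volume1 list
# snapper -c volume1 undochange 2..0
# ls /mnt

Here 2 is assumed to be the ID of the "before accident" snapshot; undochange 2..0 reverts the changes made between that snapshot and the current state, restoring the deleted file.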

GlusterFS and NFS with High Availability on CentOS 7
Technical requirements
3 x CentOS 7 machines
4 IPs
An additional hard drive of the same size on each machine (e.g. /dev/sdb)
SELinux disabled

Installation
Install CentOS 7 Minimal on the 3 machines and assign static IPs:
node1.rmohan.com = 192.168.1.10
node2.rmohan.com = 192.168.1.11
node3.rmohan.com = 192.168.1.12

Add the hostnames to the /etc/hosts file on each machine.

Install GlusterFS packages on all servers.

# yum install centos-release-gluster -y
# yum install glusterfs-server -y
# systemctl start glusterd
# systemctl enable glusterd

Create partition,

# fdisk /dev/sdb
# mkfs.ext4 /dev/sdb1
# mkdir /data
# mount /dev/sdb1 /data (Add this to /etc/fstab)
# mkdir /data/br0

Configure GlusterFS,

node1# gluster peer probe node2
node1# gluster peer probe node3
node1# gluster volume create br0 replica 3 node1:/data/br0 node2:/data/br0 node3:/data/br0
node1# gluster volume start br0
node1# gluster volume info br0
node1# gluster volume set br0 nfs.disable off

Install and Configure HA

Install below packages on all servers,

# yum -y install corosync pacemaker pcs
# systemctl enable pcsd.service
# systemctl start pcsd.service
# systemctl enable corosync.service
# systemctl enable pacemaker.service

These packages create a "hacluster" user on each machine. We need to assign a password to this user, and it must be the same on all servers.

node1# passwd hacluster
node2# passwd hacluster
node3# passwd hacluster

Then run the commands below on a single server:

node1# pcs cluster auth node1 node2 node3
node1# pcs cluster setup --name NFS node1 node2 node3
node1# pcs cluster start --all
node1# pcs property set no-quorum-policy=ignore
node1# pcs property set stonith-enabled=false

Create a virtual heartbeat IP resource:

node1# pcs resource create virtual_ip ocf:heartbeat:IPaddr2 ip=192.168.1.13 cidr_netmask=24 op monitor interval=20s
node1# pcs status

Done.

Client Configuration

client# yum install nfs-utils -y
client# mount -o mountproto=tcp -t nfs 192.168.1.13:/br0 /gluster
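
A quick sanity check from both sides (not part of the original steps; testfile is just an example name). With a replica 3 volume the file should appear in every node's brick:

client# showmount -e 192.168.1.13
client# df -h /gluster
client# touch /gluster/testfile
node1# ls /data/br0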

Disaster Recovery

If a GlusterFS server goes down (assume node2 goes down), then after repairing everything we need to start that machine and rejoin it:
node2# pcs cluster start node2
node2# pcs status
These commands automatically rejoin the lost machine to the existing PCS cluster.
Also check the Gluster auto-healer:
node2# gluster volume heal br0 info
Note: if 2 machines go down, the remaining machine will act as a read-only file system. At least 2 live machines are needed.

Samba in CentOS 6.8 as Secondary DC with Microsoft Active Directory 2012R2

1 . https://bugzilla.samba.org/show_bug.cgi?id=10265
It's necessary to manually lower the domain and forest functional levels on the Windows 2012 server first, via PowerShell:
Set-ADForestMode -Identity "mydom.local" -ForestMode Windows2008R2Forest
Set-ADDomainMode -Identity "mydom.local" -DomainMode Windows2008R2Domain
2. You need a freshly installed minimal CentOS 6.x OS. Disable SELinux and the firewall, and update the software packages.
Please follow the notes above exactly. Let's start.
Primary AD ( Microsoft ) : 192.168.1.10 / ad.rmohan.com
Secondary DC (CentOS ) : 192.168.1.11 / ldap.rmohan.com
Login to Linux server ,
# cat /etc/resolv.conf
search rmohan.com
nameserver 192.168.1.10
nameserver 192.168.1.11
# yum groupinstall "development tools" -y
# yum install python-devel libgnutls-dev gnutls-devel libacl1-dev libacl-devel libldap2-dev openldap-devel wget gcc gcc-c++ krb5-server krb5-workstation -y
# wget https://download.samba.org/pub/samba/stable/samba-4.5.0.tar.gz
# tar -xvzf samba-4.5.0.tar.gz
# cd samba-4.5.0
# ./configure
# make
# make install
Now we have successfully compiled the Samba source package. We need to remove the default Samba configuration first, then remount the file system (sometimes the AD join will throw an ACL error otherwise).
# rm -rf /usr/local/samba/etc/smb.conf
# mount -o remount,acl,user_xattr /dev/mapper/vg_ldap-lv_root
Now we are ready to add our Linux machine to Windows AD .
# /usr/local/samba/bin/samba-tool domain join rmohan.com DC -Uadministrator --realm=rmohan.com
Now we have successfully added our Linux system to Active Directory as a secondary DC, but we still need to configure a few more settings. Let's check authentication.
Before that, check the time on both systems (NTP); if they are not in sync, authentication will fail.
# yum install ntp -y
# service ntpd start
# chkconfig ntpd on
Add our primary DC as the NTP server:
# vi /etc/ntp.conf
server ad.rmohan.com iburst
# service ntpd restart
Now we need to replace the Kerberos configuration file:
# rm -rf /etc/krb5.conf
# cp -vr /usr/local/samba/private/krb5.conf /etc/krb5.conf
# kinit administrator@rmohan.com
# klist
For successful AD replication we need to add an A record and a CNAME record in Microsoft AD.
# /usr/local/samba/bin/ldbsearch -H /usr/local/samba/private/sam.ldb '(invocationid=*)' --cross-ncs objectguid
# record 1
dn: CN=NTDS Settings,CN=LDAP,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=example,DC=com
objectGUID: 640bcd46-cbc3-4451-8d82-cb37a255cbe1
# record 2
dn: CN=NTDS Settings,CN=AD,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=example,DC=com
objectGUID: 89f017ee-dacf-4d51-a19b-fe54da97a79a

Copy that objectGUID and go to Microsoft Active Directory.
First create an A record for ldap.rmohan.com.
Then go to Forward Lookup Zones > _msdcs.rmohan.com.
Create a CNAME there with our host's objectGUID. In my case it looks like this:

640bcd46-cbc3-4451-8d82-cb37a255cbe1 Alias(CNAME) ldap.rmohan.com
Now authentication is working fine. Next we need to start DC replication; every user created on either DC needs to be replicated.
# /usr/local/samba/sbin/samba
# /usr/local/samba/bin/samba-tool drs showrepl
Default-First-Site-Name\LDAP
DSA Options: 0x00000001
DSA object GUID: 640bcd46-cbc3-4451-8d82-cb37a255cbe1
DSA invocationId: 4c115875-28b5-4c91-bcf0-66f4d74d935b
==== INBOUND NEIGHBORS ====
DC=DomainDnsZones,DC=example,DC=com
Default-First-Site-Name\AD01 via RPC
DSA object GUID: 89f017ee-dacf-4d51-a19b-fe54da97a79a
Last attempt @ Tue Oct 11 03:13:07 2016 EDT was successful
0 consecutive failure(s).
Last success @ Tue Oct 11 03:13:07 2016 EDT
Now we can see that replication is working fine. Let's verify.
List all AD users:
# /usr/local/samba/bin/samba-tool user list
Create a new user in Active Directory and check again. If it shows up, all is good; your secondary server is ready to go.
List all member computers:
# /usr/local/samba/bin/pdbedit -L -w | grep '\[[WI]'
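
Since Samba was compiled from source, no init script was installed. One simple way to start it at boot (my own suggestion, not part of the upstream steps) is rc.local:

# echo '/usr/local/samba/sbin/samba' >> /etc/rc.local
# chmod +x /etc/rc.local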

This setup is very useful if you have a single Windows license and need an Active Directory replica.
Enjoy.

Python code for disk space

#!/usr/bin/env python
# coding: utf-8
# Print the available space of every mounted filesystem as JSON.
import json
import subprocess

space = []
df = subprocess.Popen(["df", "-P", "-k"], stdout=subprocess.PIPE)
output = df.communicate()[0]
for line in output.split("\n")[1:]:
    if len(line):
        try:
            device, size, used, available, percent, mountpoint = line.split()
            space.append(dict(mountpoint=mountpoint, available=available))
        except ValueError:
            # skip any line that does not have exactly six columns
            pass
print json.dumps(dict(space=space), indent=4)
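
Saved as, say, disk_space.py (the filename is arbitrary), a run looks roughly like this; the values are whatever df reports on your system:

# python disk_space.py
{
    "space": [
        {
            "available": "15234056",
            "mountpoint": "/"
        }
    ]
}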

Install Cassandra 3.9 on CentOS 6.8

Install Java 8

$  yum install java-1.8.0-openjdk

Add DataStax Repo for Apache Cassandra

$  vi /etc/yum.repos.d/datastax.repo

Add the following lines to the new file:

[datastax-ddc]
name = DataStax Repo for Apache Cassandra
baseurl = http://rpm.datastax.com/datastax-ddc/3.9
enabled = 1
gpgcheck = 0

Save the file above and run:

$  yum install datastax-ddc
$  service cassandra start
$  nodetool status

Add Python 2.7

$ cd /usr/src
$  wget http://python.org/ftp/python/2.7.6/Python-2.7.6.tgz
$  tar -xvzf Python-2.7.6.tgz
$ cd Python-2.7.6
$  ./configure --prefix=/usr/local
$  make
$  make install

Fix python libs:

$ cd /usr/lib/python2.7/
$ mv * /usr/local/lib/python2.7/site-packages/
$ rm -R site-packages
$ ln -s /usr/local/lib/python2.7/site-packages ./

There should now be a symlink in /usr/lib/python2.7 that points site-packages to /usr/local/lib/python2.7/site-packages.

site-packages -> /usr/local/lib/python2.7/site-packages

In the /usr/local/lib/python2.7/site-packages you should see a cqlshlib folder.
Run cqlsh

# cqlsh
cqlsh> DESCRIBE keyspaces;
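
To try a few CQL statements non-interactively, you can pass them to cqlsh with -e. A small sketch (the keyspace name "demo" is just an example):

$ cqlsh -e "CREATE KEYSPACE demo WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};"
$ cqlsh -e "CREATE TABLE demo.users (id int PRIMARY KEY, name text);"
$ cqlsh -e "DESCRIBE keyspaces;"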

Apache Cassandra Centos 7

Apache Cassandra is a NoSQL database intended for storing large amounts of data in a decentralized, highly available cluster.
Cassandra or Apache Cassandra is a distributed database system which manages large amounts of structured data across different commodity servers by providing highly available service with no point of failure.
NoSQL refers to a database with a data model other than the tabular relations used in relational databases such as MySQL, PostgreSQL, and Microsoft SQL.
The Apache Cassandra database is the right choice when you need scalability and high availability without compromising performance.

Cassandra is one of the most popular and robust distributed database management systems. It is known for providing high availability with no single point of failure, and for asynchronous, masterless replication between multiple nodes.
Cassandra is a reliable, clusterable, highly-scalable database capable of handling large quantities of data on commodity hardware.
If you have big data needs, and are looking for a proven open source solution that has received battle testing from many large companies, then Cassandra may be exactly what you’re looking for.

If you have a CentOS 7 server, this guide will get you up and running with a single Cassandra node.
It will use pre-packaged Cassandra distributions built for CentOS, making installation and upgrades a snap.
You can then build it out by performing additional installations on other servers, then clustering the resulting instances for higher scalability and reliability.

This article will guide you on how to install Apache Cassandra on CentOS 7 Server.

CentOS 7 Cassandra systemd service file

# /usr/lib/systemd/system/cassandra.service

[Unit]
Description=Cassandra
After=network.target

[Service]
PIDFile=/var/run/cassandra/cassandra.pid
User=cassandra
Group=cassandra
ExecStart=/usr/sbin/cassandra -f -p /var/run/cassandra/cassandra.pid
StandardOutput=journal
StandardError=journal
LimitNOFILE=100000
LimitMEMLOCK=infinity
LimitNPROC=32768
LimitAS=infinity
Restart=always

[Install]
WantedBy=multi-user.target
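
After dropping this unit file into place, tell systemd to pick it up and use it (standard systemd workflow; this assumes the cassandra user and the /var/run/cassandra PID directory referenced by the unit exist):

# systemctl daemon-reload
# systemctl start cassandra
# systemctl status cassandra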

How To Install Cassandra on CentOS 7

Step 1: Install Java

First, you’ll follow a simple best practice: ensuring the list of available packages is up to date before installing anything new.

yum -y update

At this point, installing Java is as simple as running just one command:

yum -y install java

Alternatively, install the Oracle JDK tarball manually:

tar -zxvf jdk-8u121-linux-x64.tar.gz

mv jdk-8u121-linux-x64.tar.gz /root/software/

vi /etc/profile.d/java.sh
# Uncomment the following line to set the JAVA_HOME variable a
#JAVA_HOME=/usr/lib/jvm/jre
export JAVA_HOME=/usr/java/jdk1.8.0_121
export JRE_HOME=/usr/java/jdk1.8.0_121/jre
export PATH=$PATH:/usr/java/jdk1.8.0_121/bin
export CLASSPATH=./:/usr/java/jdk1.8.0_121/lib:/usr/java/jdk1.8.0_121/jre/lib

source /etc/profile.d/java.sh
export
alternatives --config java
alternatives --install /usr/bin/java java /usr/java/jdk1.8.0_121/bin/java 5
alternatives --config java
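
To confirm the environment and the alternatives switch took effect (a quick check, assuming the JDK was unpacked to /usr/java/jdk1.8.0_121 as above):

java -version
echo $JAVA_HOME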

Step #2: Add the DataStax Community Repository

vim /etc/yum.repos.d/datastax.repo

Add the following information to the file you’ve created, using i to insert:

[datastax]
name = DataStax Repo for Apache Cassandra
baseurl = http://rpm.datastax.com/community
enabled = 1
gpgcheck = 0

Step #3: Install Apache Cassandra 2

At this point, installing Cassandra is as simple as running just one command:

yum -y install dsc20

Step #4: Get Cassandra Running

Start-Up Cassandra

systemctl start cassandra

Check Cassandra Service Status

systemctl status cassandra

Enable Cassandra to Start at Boot

systemctl enable cassandra

Enter the Cassandra Command Line

cqlsh

With Cassandra installed, we must now start the daemon. The package ships a SysV init script, which systemd wraps in a generated unit, so either of these works:

/etc/init.d/cassandra start

systemctl start cassandra

While the database should now be running, it is not yet configured to launch on boot. Let's tell systemd that Cassandra should automatically launch whenever the system boots:

systemctl enable cassandra.service

Check the service status:

[root@cassandra ~] systemctl status cassandra

cassandra.service – SYSV: Starts and stops Cassandra
Loaded: loaded (/etc/rc.d/init.d/cassandra)
Active: active (exited) since Thu 2016-09-15 04:36:47 UTC; 14s ago
Docs: man:systemd-sysv-generator(8)
Process: 9413 ExecStart=/etc/rc.d/init.d/cassandra start (code=exited, status=0/SUCCESS)

Let’s ensure that Cassandra is running using this command.

[root@cassandra ~] cqlsh

Connected to Test Cluster at localhost:9160.
[cqlsh 4.1.1 | Cassandra 2.0.17 | CQL spec 3.1.1 | Thrift protocol 19.39.0]
Use HELP for help.
cqlsh>

Cassandra ships with a powerful command line utility, cqlsh. Launch it to perform various vital tasks with your database.

[root@rmohan.com ~] nodetool status

Datacenter: rmohan
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 127.0.0.1 46.21 KB 256 100.0% 7dd2b7d9-404e-4a77-a36d-cc8f55168c0d rack1
[root@rmohan.com ~]#

Restart Cassandra

systemctl restart cassandra

Oracle ASM 12c on Linux

How to setup Oracle ASM 12c on Linux
Software used:
1. VMware Workstation 10
2. Red Hat Enterprise Linux 6.5 (64-bit)
3. Oracle Database 12c (64-bit)
4. Oracle Grid Infrastructure 12c (64-bit)

What to set up:

1. Oracle Grid Infrastructure for a standalone server ("ASM")
2. Oracle Database

Update /etc/sysctl.conf
[root@server1]# vi /etc/sysctl.conf
Scroll to the bottom and add the following:

fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 4398046511104
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
:wq

Run the following command to change the current kernel parameters.
/sbin/sysctl -p

Update /etc/security/limits.conf
[root@server1]# vi /etc/security/limits.conf
Scroll to the bottom and above the “# End of file” line, add:

oracle   soft   nofile    1024
oracle   hard   nofile    65536
oracle   soft   nproc    16384
oracle   hard   nproc    16384
oracle   soft   stack    10240
oracle   hard   stack    32768

:wq

Amend the "/etc/security/limits.d/90-nproc.conf" file as described below.
# Change this
*          soft    nproc    1024

# To this
*          soft    nproc    16384

**IMPORTANT: Make sure selinux is disabled.

Packages required for oracle database installation:-

[root@server1] yum -y install binutils-2.17.50.0.6
[root@server1] yum -y install compat-libstdc++-33-3.2.3 (*)
[root@server1] yum -y install elfutils-libelf-0.125
[root@server1] yum -y install elfutils-libelf-devel-0.125 (*)
[root@server1] yum -y install gcc-4.1.2
[root@server1] yum -y install gcc-c++-4.1.2 (*)
[root@server1] yum -y install glibc-2.5-24
[root@server1] yum -y install glibc-common-2.5
[root@server1] yum -y install glibc-devel-2.5
[root@server1] yum -y install glibc-headers-2.5
[root@server1] yum -y install ksh-20060214 (*)
[root@server1] yum -y install libaio-0.3.106
[root@server1] yum -y install libaio-devel-0.3.106
[root@server1] yum -y install libgcc-4.1.2
[root@server1] yum -y install libgomp-4.1.2
[root@server1] yum -y install libstdc++-4.1.2
[root@server1] yum -y install libstdc++-devel-4.1.2
[root@server1] yum -y install make-3.81
[root@server1] yum -y install numactl-devel-0.9.8.i386 (*)
[root@server1] yum -y install sysstat-7.0.2 (*)

Check the kernel version:-
[root@server1 var]# uname -r
2.6.32-358.el6.x86_64

Now we need to install the RPMs required for the ASM installation. These are the packages required:

- oracleasm
- oracleasm-support
- oracleasmlib

The last 2 packages can be found at the following link:
http://www.oracle.com/technetwork/server-storage/linux/asmlib/rhel6-1940776.html

oracle kmod-oracleasm rpm download link for el6
http://public-yum.oracle.com/repo/OracleLinux/OL6/latest/x86_64/getPackage/kmod-oracleasm-2.0.6.rh1-2.el6.x86_64.rpm

[root@server3 ~]# rpm -Uvh oracleasm-support-2.1.8-1.el6.x86_64.rpm
[root@server3 ~]# rpm -Uvh kmod-oracleasm-2.0.6.rh1-2.el6.x86_64.rpm
[root@server3 ~]# rpm -Uvh oracleasmlib-2.0.4-1.el6.x86_64.rpm

Create groups:-
[root@server3 tmp]# groupadd -g 1000 oinstall
[root@server3 tmp]# groupadd -g 1200 dba
[root@server3 tmp]# useradd -g oinstall -G dba -d /home/oracle oracle

Create directory structures:-
[root@server3 u01]# mkdir -p /u01/app/oracle/product/12.1.0/grid
[root@server3 u01]# mkdir -p /u01/app/oracle/product/12.1.0/db_1

Assigning proper permission:-
[root@server3 u01]# chown -Rf oracle:oinstall /u01/
[root@server3 u01]# chmod -Rf 775 /u01/

Set up the oracle user environment
For oracle user:-
[root@server1 var]# su - oracle

[oracle@server1]#vi .bash_profile
#export PATH
# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
ORACLE_HOSTNAME=server3.soumya.com; export ORACLE_HOSTNAME
ORACLE_UNQNAME=orcl; export ORACLE_UNQNAME
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
GRID_HOME=/u01/app/oracle/product/12.1.0/grid; export GRID_HOME
DB_HOME=$ORACLE_BASE/product/12.1.0/db_1; export DB_HOME
ORACLE_HOME=$DB_HOME; export ORACLE_HOME
ORACLE_SID=orcl; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
BASE_PATH=/usr/sbin:$PATH; export BASE_PATH
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib;
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH
if [ $USER = "oracle" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
fi

alias grid_env='. /home/oracle/grid_env'
alias db_env='. /home/oracle/db_env'

:wq(save & exit)

[oracle@server1 ~]$ . .bash_profile

Create a file called "/home/oracle/db_env" with the following contents:

[oracle@server1 ~]$vi /home/oracle/db_env

ORACLE_SID=orcl; export ORACLE_SID
ORACLE_HOME=$DB_HOME; export ORACLE_HOME
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH

:wq(save & exit)

Create a file called "/home/oracle/grid_env" with the following contents:

[oracle@server1 ~]$vi /home/oracle/grid_env
ORACLE_SID=+ASM; export ORACLE_SID
ORACLE_HOME=$GRID_HOME; export ORACLE_HOME
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH

:wq(save & exit)

[oracle@server1 ~]$ chmod 775 /home/oracle/db_env
[oracle@server1 ~]$ chmod 775 /home/oracle/grid_env

Now you can switch environments between the database and ASM instances as follows.
[oracle@server3 ~]$ db_env
[oracle@server3 ~]$ echo $ORACLE_SID
orcl
[oracle@server3 ~]$ echo $ORACLE_HOME
/u01/app/oracle/product/12.1.0/db_1
[oracle@server3 ~]$ grid_env
[oracle@server3 ~]$ echo $ORACLE_HOME
/u01/app/oracle/product/12.1.0/grid
[oracle@server3 ~]$ echo $ORACLE_SID
+ASM

Now we will add 3 disks using VMware.
Open VMware Workstation, go to Settings and add hard disks from there: add 3 different disks of at least 10 GB each.

[root@server1]# echo "- - -" > /sys/class/scsi_host/host0/scan

******
P.S. If the above command doesn't show the newly added disks, try this:
[root@server1]# grep mpt /sys/class/scsi_host/host?/proc_name
/sys/class/scsi_host/host2/proc_name:mptspi

then run this:
[root@server1]# echo "- - -" > /sys/class/scsi_host/host2/scan
******

Using the above command we can avoid rebooting the machine to detect the new hard disks.

[root@server1 ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xa4bd7fb9.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').

Command (m for help): n
Command action
e   extended
p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-261, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-261, default 261):
Using default value 261

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

[root@server1 ~]# fdisk /dev/sdc
[root@server1 ~]# fdisk /dev/sdd

[root@server1 dev]# fdisk -l
Disk /dev/sda: 64.4 GB, 64424509440 bytes
255 heads, 63 sectors/track, 7832 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0006f980

Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1        5737    46080000   83  Linux
/dev/sda2   *        5737        6885     9216000   83  Linux
/dev/sda3            6885        7458     4608000   83  Linux
/dev/sda4            7458        7833     3009536    5  Extended
/dev/sda5            7459        7731     2188288   82  Linux swap / Solaris
/dev/sda6            7731        7833      818176   83  Linux

Disk /dev/sdb: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x44ac96a0

Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         261     2096451   83  Linux

Disk /dev/sdc: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x004b1011

Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1         261     2096451   83  Linux

Disk /dev/sdd: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xf5898159

Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1         261     2096451   83  Linux

Give proper ownership and permissions to the new partitions:
chown -Rf oracle:oinstall /dev/sdb1
chown -Rf oracle:oinstall /dev/sdc1
chown -Rf oracle:oinstall /dev/sdd1

chmod -Rf 664 /dev/sdb1
chmod -Rf 664 /dev/sdc1
chmod -Rf 664 /dev/sdd1

Now configure ASM and create ASM disks:-

[root@server1 ~]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: oinstall
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [  OK  ]
Scanning the system for Oracle ASMLib disks: [  OK  ]

To create the ASM disks:
[root@server1 ~]#/etc/init.d/oracleasm createdisk VOL1 /dev/sdb1
[root@server1 ~]#/etc/init.d/oracleasm createdisk VOL2 /dev/sdc1
[root@server1 ~]#/etc/init.d/oracleasm createdisk VOL3 /dev/sdd1
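
To confirm the ASM disks were created and are visible (on another node you would run scandisks first to pick them up):

[root@server1 ~]# /etc/init.d/oracleasm scandisks
[root@server1 ~]# /etc/init.d/oracleasm listdisks
VOL1
VOL2
VOL3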

Now we will install the Grid Infrastructure software.
Give proper permissions to the software files.

[root@server1 ] chown -Rf oracle:oinstall /u01/linuxamd64_12102_grid_1of2.zip
[root@server1 ] chown -Rf oracle:oinstall /u01/linuxamd64_12102_grid_2of2.zip

[root@server1 u01]# unzip linuxamd64_12102_grid_1of2.zip
[root@server1 u01]# unzip linuxamd64_12102_grid_2of2.zip
[root@server1 u01]#su – oracle
[oracle@server1 u01]$ cd grid/
[oracle@server1 grid]$ sh runInstaller

Select "Install and configure Grid Infrastructure for a standalone server" -> Next -> select the 3 disks from the candidate disks option, leaving the rest unchanged ->
select "Use same password for these accounts" and provide a password -> specify OS groups: OSDBA - oinstall, OSOPER - oinstall, OSASM - oinstall ->
select install locations: Oracle base - /u01/app/oracle, Software location - /u01/app/oracle/product/12.1.0/grid -> Next and start the installation ->
execute the "/u01/app/oracle/product/12.1.0/grid/root.sh" script as root from another terminal.

I got this error while installing Grid Infrastructure; the steps to fix it are below:
**INFO: Read: ORA-00845: MEMORY_TARGET not supported on this system

To increase the size
# mount -o remount,size=3G /dev/shm
Verify the size
# df -h
To make the change permanent, update your fstab:
# vi /etc/fstab
tmpfs  /dev/shm  tmpfs  defaults,size=3G  0 0

[root@server1 u01]#

[grid@server3 app]$ sqlplus  / as sysasm
SQL> select instance_name from v$instance;

INSTANCE_NAME
—————-
+ASM

Now we will set up the Oracle database.

[oracle@server1 u01]$ cd /u01/database/
[oracle@server3 u01]$ sh runInstaller

select "Create & configure a database" -> Server Class -> Single instance database installation -> Advanced Install -> Next -> Enterprise Edition ->
select Oracle base - /u01/app/oracle, Software location - /u01/app/oracle/product/12.1.0/db_1 -> select "General purpose" -> Global database
name - orcl, SID name - orcl -> Next -> select "Oracle Automatic Storage Management" -> Next -> Next -> Next -> select "Use same password for all accounts" ->
Next -> Next -> Install

Done!

grubby fatal error: unable to find a suitable template

Updating   : selinux-policy-3.7.19-292.el6_8.3.noarch                                                                                                             8/28
Updating   : selinux-policy-targeted-3.7.19-292.el6_8.3.noarch                                                                                                    9/28
Installing : kernel-2.6.32-642.15.1.el6.x86_64                                                                                                                   10/28
grubby fatal error: unable to find a suitable template
Updating   : ntp-4.2.6p5-10.el6.centos.2.x86_64                                                                                                                  11/28
Updating   : libtiff-3.9.4-21.el6_8.x86_64                                                                                                                       12/28
Updating   : kernel-headers-2.6.32-642.15.1.el6.x86_64                                                                                                           13/28
Updating   : tzdata-2017a-1.el6.noarch                                                                                                                           14/28
Cleanup    : kernel-2.6.32-573.26.1.el6.x86_64                                                                                                                   15/28
warning:    erase unlink of /lib/modules/2.6.32-573.26.1.el6.x86_64/weak-updates failed: No such file or directory
warning:    erase unlink of /lib/modules/2.6.32-573.26.1.el6.x86_64/modules.order failed: No such file or directory
warning:    erase unlink of /lib/modules/2.6.32-573.26.1.el6.x86_64/modules.networking failed: No such file or directory
warning:    erase unlink of /lib/modules/2.6.32-573.26.1.el6.x86_64/modules.modesetting failed: No such file or directory
warning:    erase unlink of /lib/modules/2.6.32-573.26.1.el6.x86_64/modules.drm failed: No such file or directory
warning:    erase unlink of /lib/modules/2.6.32-573.26.1.el6.x86_64/modules.block failed: No such file or directory
Cleanup    : selinux-policy-targeted-3.7.19-292.el6_8.2.noarch

mv /boot/grub/grub.conf /boot/grub/bk_grub.conf
yum -y update && yum -y reinstall kernel

Add an entry to grub.conf, for example:


title CentOS (2.6.32-431.el6.x86_64)
        root (hd0,0)
        kernel /boot/vmlinuz-2.6.32-431.el6.x86_64 ro root=UUID=c5f51db1-bfef-4480-868f-dc6049906512 rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto  KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
        initrd /boot/initramfs-2.6.32-431.el6.x86_64.img
        
        
        
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title CentOS (2.6.32-642.15.1.el6.x86_64)
        root (hd0,0)
        kernel /boot/vmlinuz-2.6.32-642.15.1.el6.x86_64 ro root=UUID=c5f51db1-bfef-4480-868f-dc6049906512 rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto  KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
        initrd /initramfs-2.6.32-642.15.1.el6.x86_64.img
title CentOS (2.6.32-573.3.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-573.3.1.el6.x86_64 ro root=/dev/mapper/vg_db2-lv_root rd_NO_LUKS LANG=en_US.UTF-8 rd_LVM_LV=vg_db2/lv_swap rd_NO_MD rd_LVM_LV=vg_db2/lv_root SYSFONT=latarcyrheb-sun16 crashkernel=auto  KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
        initrd /initramfs-2.6.32-573.3.1.el6.x86_64.img
title CentOS (2.6.32-431.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-431.el6.x86_64 ro root=/dev/mapper/vg_db2-lv_root rd_NO_LUKS LANG=en_US.UTF-8 rd_LVM_LV=vg_db2/lv_swap rd_NO_MD rd_LVM_LV=vg_db2/lv_root SYSFONT=latarcyrheb-sun16 crashkernel=auto  KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
        initrd /initramfs-2.6.32-431.el6.x86_64.img


or 

It is absolutely impossible to regenerate a grub.conf from scratch with any of the tools delivered by CentOS. My solution:

  1. boot your system via Install-Disk or by grub command line prompt
  2. create an empty new /boot/grub/grub.conf
  3. add the next code snippet to your grub.conf
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title CentOS (2.6.32-279.22.1.el6.x86_64)
  root (hd0,0)
  kernel /vmlinuz-2.6.32-279.22.1.el6.x86_64 ro root=/dev/sda3
  initrd /initramfs-2.6.32-279.22.1.el6.x86_64.img

N O T E:
I have a separate /boot partition on my systems. In the standard configuration delivered by CentOS, /boot and / are on the same partition; in that case the paths to the kernel and initrd will start with /boot/vmlinuz... and /boot/initramfs..., and the root partition will usually be root=/dev/sda1.

Try to boot your system with your manually built grub.conf. If everything works fine, you can add new boot entries with CentOS' grubby tool. For example:

root@host:~ $ grubby --add-kernel="/boot/vmlinuz-2.6.32-279.22.1.el6.x86_64"\
--initrd="/boot/initramfs-2.6.32-279.22.1.el6.x86_64.img"\
--title="CentOS (2.6.32-279.22.1.el6.x86_64)" --copy-default --make-default

The tool grubby will replace the /dev/sda? device file with the UUID string of the partition.
You can use the next line to generate an entry for each kernel image in /boot/:

for kernel in /boot/vmlinuz-*; do \
version=`echo $kernel | awk -F'vmlinuz-' '{print $NF}'`; \
grubby --add-kernel="/boot/vmlinuz-${version}" \
--initrd="/boot/initramfs-${version}.img" \
--title="CentOS (${version})" \
--copy-default --make-default; \
done

You should check /etc/grub.conf for duplicate entries, and you may want to re-sort the boot order. Reboot your system to check that everything works again.


Issue

    When I install a kernel from RHN, I am getting the error: grubby fatal error: unable to find a suitable template


    [root@rhel5 ~]# rpm -vhi kernel-2.6.18-274.el5.x86_64.rpm 
    Preparing...                ########################################### [100%]
       1:kernel                 ########################################### [100%]
    grubby fatal error: unable to find a suitable template 

Resolution

    Several things can cause this error. One is when /boot is not currently mounted; remounting /boot allows the kernel to be installed properly. First remove the failed kernel package:


        [root@rhel5 ~]# rpm -e kernel-2.6.18-274.el5

If the above command fails because of installed dependencies, use it in the following form:

        [root@rhel5 ~]# rpm -e --nodeps kernel-2.6.18-274.el5

Afterwards, ensure /boot is mounted and proceed to reinstall the kernel:

        [root@rhel5 ~]# mount /boot
        [root@rhel5 ~]# rpm -ivh kernel-2.6.18-274.el5.x86_64.rpm 
        Preparing...                ########################################### [100%]
           1:kernel                 ########################################### [100%]
        [root@rhel5 ~]# 

If you don't have the RPM available, you can always use yum:

        [root@rhel5 ~]# yum install kernel

    This error can also happen when there are multiple filesystems with the same label for the root device and the root device is specified with LABEL= in grub.conf. In that case, change the label to a unique one, or use the device name or UUID= to specify the root device in grub.conf. To change the label on /dev/sdb2 to /root-1, for example:


# e2label /dev/sdb2 /root-1

    Another cause for this error can be a bad path to the initrd in grub.conf.

    An invalid root device in your kernel line will also cause this message. To resolve this, edit /boot/grub/grub.conf, changing the root entry in the most recent kernel entry to point to the correct root device.

Diagnostic Steps

    To check if you have multiple filesystems with the same label, run the following command (UUIDs simplified for clarity):


# blkid
/dev/sda1: LABEL="/boot" UUID="aaaaa" TYPE="ext3" SEC_TYPE="ext2" 
/dev/sdb1: LABEL="/boot" UUID="aaaaa" TYPE="ext3" SEC_TYPE="ext2" 
/dev/sda2: LABEL="/" UUID="bbbbb" SEC_TYPE="ext2" TYPE="ext3" 
/dev/sdb2: LABEL="/" UUID="bbbbb" SEC_TYPE="ext2" TYPE="ext3" 

We can see above that there are 2 disks with the same label. Check whether they have the same WWID (these commands are for RHEL5; for RHEL6 use scsi_id --whitelisted /dev/sd* instead):

# scsi_id -gus /block/sda
3600001234567
# scsi_id -gus /block/sdb
HITATCHI-abc123

The above clearly shows that they are different disks (i.e. not multiple paths to the same device).