
Nginx for serving files bigger than 1GB

I realized that nginx was not serving files larger than 1 GB. After some investigation I found that it was due to the proxy_max_temp_file_size directive, which is configured by default to serve up to 1024 MB max.

This directive sets the maximum size of the temporary file nginx writes to disk when a proxied response does not fit into the in-memory proxy buffers. Once the temporary file reaches this limit, the rest of the response is passed to the client synchronously, as it arrives from the upstream server, without further disk buffering. Setting proxy_max_temp_file_size to 0 disables temporary files entirely.

For this fix it was enough to set the directive inside the location block, although it can also be used in the server and http blocks. With this configuration nginx can serve files larger than 1 GB:

location / {
    ...
    proxy_max_temp_file_size 1924m;
    ...
}

Restart nginx to apply the change:

service nginx restart
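As a quick sanity check (plain shell arithmetic, not an nginx command), the 1924m value above can be converted to GiB to confirm it comfortably exceeds the 1 GB files that were failing:

```shell
# proxy_max_temp_file_size of 1924m means 1924 MiB;
# convert to GiB to compare against the 1 GiB files being served.
mib=1924
awk -v m="$mib" 'BEGIN { printf "%.2f GiB\n", m / 1024 }'
# prints "1.88 GiB"
```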

Find large files on Linux

Find large files on Fedora / CentOS / RHEL

Search for big files (50 MB or larger) in your current directory:

find ./ -type f -size +50000k -exec ls -lh {} \; | awk '{ print $9 ": " $5 }'

Output:

[root@my.server.com:~]pwd
/home

[root@my.server.com:~]find . -type f -size +50000k -exec ls -lh {} \; | awk '{ print $9 ": " $5 }'
./user1/tmp/analog/cache: 79M
./syscall8/public_html/wp.zip: 146M
./bob54/public_html/adserver/var/debug.log: 86M
./marqu35/logs/adserver.site.com-May-2014.gz: 70M
./astrolab72/tmp/analog/cache: 75M

Search the /var directory for files of 80 MB or larger:

find /var -type f -size +80000k -exec ls -lh {} \; | awk '{ print $9 ": " $5 }'

Find large files on Debian / Ubuntu Linux

Search in current directory:

find ./ -type f -size +10000k -exec ls -lh {} \; | awk '{ print $9 ": " $5 }'

Search the /home directory:

find /home -type f -size +10000k -exec ls -lh {} \; | awk '{ print $9 ": " $5 }'

If you know other ways to quickly find large files on Linux, please share them with us.
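One self-contained way to see the -size syntax in action: the sketch below builds a scratch directory with made-up file names and sizes, then lists only the file over the 50000k threshold.

```shell
# Create a scratch directory with one small and one large sparse file.
dir=$(mktemp -d)
truncate -s 10M "$dir/small.log"
truncate -s 60M "$dir/big.iso"

# +50000k matches files larger than 50000 KiB (~49 MiB),
# so only big.iso is reported.
find "$dir" -type f -size +50000k -exec ls -lh {} \; | awk '{ print $9 ": " $5 }'

rm -r "$dir"
```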

Creating a partition size larger than 2TB on Linux

I had to mount and use a 3 TB SATA drive on Linux and, as I suspected, I was not able to format it and create a new partition using the fdisk command.

The fdisk tool won't create partitions larger than 2 TB, because the MBR partition table it writes stores sector counts as 32-bit values. That is fine for desktop users, but on a production server you may need a larger partition.
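The 2 TB ceiling follows directly from the MBR format: partition offsets and lengths are 32-bit sector counts, so with 512-byte sectors the largest addressable size works out as follows:

```shell
# 32-bit sector count x 512-byte sectors = maximum MBR-addressable bytes
max_bytes=$(( 2 ** 32 * 512 ))
echo "$(( max_bytes / 1024 ** 4 )) TiB"   # prints "2 TiB"
```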

The way to solve this was to use the GNU parted command with a GPT partition table. Here is the full how-to in case you need it:

Create 3TB partition size on Linux

Run parted as follows, replacing sdb with your real disk name:

parted /dev/sdb
Output:

GNU Parted 2.3
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted)
Now type "mklabel gpt", as shown below:

(parted) mklabel gpt
Output should be something like this:

Warning: The existing disk label on /dev/sdb will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? yes
(parted)
Now, let's set the default size unit to TB; type "unit TB" and press enter:

(parted) unit TB
Type mkpart primary 0.00TB 3.00TB to create a 3 TB partition:

(parted) mkpart primary 0.00TB 3.00TB
To print the current partitions, type p:

(parted) p
Sample output:

Disk /dev/sdb: 3.00TB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number  Start   End     Size    File system  Name     Flags
 1      0.00TB  3.00TB  3.00TB  ext4         primary
Now type quit to exit the parted console.

(parted) quit
Now format the new partition:

mkfs.ext4 /dev/sdb1
Mount the new drive:

mkdir /mnt/appdisk
mount /dev/sdb1 /mnt/appdisk
Now add that mount point to /etc/fstab, example:

/dev/sdb1 /mnt/appdisk ext4 defaults 0 1
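Each fstab line has six whitespace-separated fields: device, mount point, filesystem type, mount options, dump flag, and fsck pass number. A quick awk split of the example line (run against a literal string, so it is safe to try anywhere):

```shell
line='/dev/sdb1 /mnt/appdisk ext4 defaults 0 1'
echo "$line" | awk '{ printf "device=%s mount=%s type=%s opts=%s dump=%s pass=%s\n", $1, $2, $3, $4, $5, $6 }'
# prints "device=/dev/sdb1 mount=/mnt/appdisk type=ext4 opts=defaults dump=0 pass=1"
```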
Now type df -ah to verify the drive is mounted properly; you should see something like this:

/dev/sdb1 3.0T 321M 2.9T 1% /mnt/appdisk

HylaFAX with AvantFAX in CentOS 7

This post covers installing HylaFAX with AvantFAX on CentOS 7.

Without further ado, let's start.

Install the needed packages:

# yum install epel-release -y

# yum update -y

# yum groupinstall "Development Tools" -y

# yum install hylafax wget -y

Download AvantFAX from its website; AvantFAX provides the web GUI for HylaFAX:

# wget http://jaist.dl.sourceforge.net/project/avantfax/avantfax-3.3.5.tgz

# tar -xvzf avantfax-3.3.5.tgz

# cd avantfax-3.3.5

# mv avantfax /var/www/

# yum install mariadb mariadb-server httpd php php-devel php-pear php-mysql php-mbstring php-pecl-Fileinfo ImageMagick-devel -y

# vi /etc/httpd/conf.d/avantfax.conf

<VirtualHost *:80>
ServerName fax.example.com
ServerAlias example.com
DocumentRoot /var/www/avantfax
ErrorLog /var/www/avantfax/error.log
CustomLog /var/www/avantfax/requests.log combined
</VirtualHost>

# chown -R apache:apache /var/www/avantfax

# chmod -R 0770 /var/www/avantfax/tmp /var/www/avantfax/faxes

# chown -R apache:uucp /var/www/avantfax/tmp /var/www/avantfax/faxes

# ln -s /var/www/avantfax/includes/faxrcvd.php /var/spool/hylafax/bin/faxrcvd.php

# ln -s /var/www/avantfax/includes/dynconf.php /var/spool/hylafax/bin/dynconf.php

# ln -s /var/www/avantfax/includes/notify.php /var/spool/hylafax/bin/notify.php

# pear install Mail Net_SMTP Mail_mime MDB2_driver_mysql

Asterisk 13 video call configuration on CentOS 7

Now we can configure video calling through Asterisk. It is simple and easy.
# vi /etc/asterisk/sip.conf
[general]
videosupport=yes
And add the configuration below under your context section. My context is [LocalExt].
[LocalExt]
disallow=all
allow=ulaw
allow=alaw
allow=h263
allow=h264
allow=h263p
Save the file and reload Asterisk.
Use softphones like X-Lite or Zoiper and see the magic!

ZFS installation and configuration in CentOS 6 & 7

ZFS is a combined file system and logical volume manager designed by Sun Microsystems.
The features of ZFS include protection against data corruption, support for high storage capacities,
efficient data compression, integration of the concepts of file system and volume management,
snapshots and copy-on-write clones, continuous integrity checking and automatic repair,
RAID-Z and native NFSv4 ACLs.
Installation:
CentOS 6
# yum install epel-release -y
# yum localinstall --nogpgcheck http://archive.zfsonlinux.org/epel/zfs-release.el6.noarch.rpm
# yum install kernel-devel zfs -y
CentOS 7
# yum localinstall --nogpgcheck http://archive.zfsonlinux.org/epel/zfs-release.el7.noarch.rpm
# yum install epel-release -y
# yum install kernel-devel zfs -y
And check whether the ZFS kernel modules are loaded:
# lsmod | grep zfs
Usage:
Add 3 or more disks to your machine and check the disk labels. Say we selected three disks named sda, sdb, and sdc.
# zpool create -f mypool raidz sda sdb sdc
# zpool status
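A rough capacity note, sketched with plain shell arithmetic (the 1000 GB disk size is a made-up example): raidz in the command above is raidz1, which keeps one disk's worth of parity, so usable space is roughly (disks - 1) times the disk size.

```shell
disks=3
disk_gb=1000   # hypothetical size of each disk, in GB
echo "usable ~ $(( (disks - 1) * disk_gb )) GB of $(( disks * disk_gb )) GB raw"
# prints "usable ~ 2000 GB of 3000 GB raw"
```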

IPV6 DHCP Server on CentOS 7

Without further ado, let's configure the IPv6 DHCP server.

IPV6 NS : 2015:9:19:ffff::1

IPV6 DHCP SERVER : 2015:9:19:ffff::2

 

# yum install dhcp -y

# cp /usr/share/doc/dhcp-4.2.5/dhcpd6.conf.example /etc/dhcp/dhcpd6.conf

# cat /etc/dhcp/dhcpd6.conf

default-lease-time 2592000;
preferred-lifetime 604800;
option dhcp-renewal-time 3600;
option dhcp-rebinding-time 7200;
allow leasequery;
option dhcp6.info-refresh-time 21600;
dhcpv6-lease-file-name "/var/lib/dhcpd/dhcpd6.leases";

 

subnet6 2015:9:19:ffff::/64 {
range6 2015:9:19:ffff::10 2015:9:19:ffff::1000;
option dhcp6.name-servers 2015:9:19:ffff::1;
option dhcp6.domain-search "iptechnics.ae";
}
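A quick check of how many leases the range6 line above allows: the range runs from ::10 to ::1000 in hex, which shell arithmetic can count directly:

```shell
# Inclusive count of addresses between ::10 and ::1000 (hex)
echo $(( 0x1000 - 0x10 + 1 ))   # prints 4081
```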

# cat /etc/sysconfig/network-scripts/ifcfg-enp3s0

DEVICE=enp3s0
BOOTPROTO=none
ONBOOT=yes
NM_CONTROLLED=no
TYPE=Ethernet
NETMASK=255.255.255.192
IPADDR=10.254.254.154
GATEWAY=10.254.254.129
IPV6INIT=yes
IPV6ADDR="2015:9:19:ffff::2/64"
IPV6_AUTOCONF=yes

# systemctl start dhcpd6

# systemctl enable dhcpd6

BTRFS in CentOS 7

These days there are many file systems, used across many operating systems and block devices. Compared with them, Btrfs is a newcomer: a copy-on-write (CoW) filesystem for Linux aimed at implementing advanced features while focusing on fault tolerance, repair, and easy administration.

CentOS/RHEL now also supports Btrfs.

So let's start the tutorial.

Assume we have one disk, /dev/sdb.

# fdisk /dev/sdb

Create a partition called /dev/sdb1.

Format it with Btrfs:

# mkfs.btrfs /dev/sdb1

For testing purposes we mount it on /mnt:

# mount /dev/sdb1 /mnt

Now the Btrfs filesystem is mounted under /mnt; check with the df command.

Btrfs fundamentally works with subvolumes, so first we need to create one. The filesystem does not need to be unmounted for that:

# btrfs sub create /mnt/volume1

# tree /mnt

# btrfs sub list /mnt

Now we need to remount with volume1 as our default mount point. For that:

# umount /mnt

# mount -o subvol=volume1 /dev/sdb1 /mnt

With these commands we can mount a particular subvolume at our mount point.

# btrfs sub get-default /mnt

The command above shows the default subvolume.

To create new subvolumes under volume1:

# btrfs sub create /mnt/newsub1

# btrfs sub create /mnt/newsub2

The commands above create the subvolumes newsub1 and newsub2 under volume1.

Now the subvolume tree looks like:

VOLUME1 -> NEWSUB1
        -> NEWSUB2

OK. Now we need to create another top-level subvolume like volume1. For that:

# umount /mnt

# mount -o subvolid=0 /dev/sdb1 /mnt

# btrfs sub create /mnt/volume2

# cd /mnt

# ls

volume1    volume2

 

Snapshot using BTRFS

Now we know how to create a Btrfs filesystem and subvolumes. This is just the beginning: with Btrfs we can also configure RAID levels, remote transfers, and more. We can discuss those mechanisms later.

How can we take a snapshot? It's simple.

Say we need to take a snapshot of volume2:

# cd /mnt

# btrfs sub snap volume2 volume2.snap

The format is: btrfs sub snap <source> <destination>

# btrfs sub list /mnt

This command lists all subvolumes, including snapshots.

 

How to take a snapshot of the CentOS 7 root (/)?

By default CentOS/RHEL installs on the XFS file system, so install CentOS using the Btrfs file system instead. (Converting an existing file system to Btrfs will be covered in a later section.)

After installing CentOS/RHEL on a Btrfs filesystem:

# btrfs sub list /

ID 257 gen 871 top level 5 path root
ID 260 gen 41 top level 257 path var/lib/machines

From this we know that root and var/lib/machines are subvolumes. But the command below shows that the default subvolume mounted at / is the top-level FS_TREE (ID 5). For more clarification, check /etc/fstab.

# btrfs sub get-default /

ID 5 (FS_TREE)

To take a snapshot we need to mount the Btrfs partition on another folder, so first we need to know which partition it is:

# df -h

Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        18G  1.1G   15G   7% /

From this we can see that /dev/sda3 is the partition we need to mount.

# mkdir /btrfs

# mount -o subvolid=0 /dev/sda3 /btrfs

# cd /btrfs

# ls

root

To take the snapshot:

# btrfs sub snap root root-$(date +%Y%m%d)

# ls

root  root-20160607

# btrfs sub list /

ID 257 gen 946 top level 5 path root
ID 260 gen 41 top level 257 path root/var/lib/machines
ID 262 gen 946 top level 5 path root-20160607

Now we have a snapshot of our root file system.

To boot into the new snapshot we need to adjust some configuration.

Remove all the "rootflags=subvol=root" arguments from /boot/grub2/grub.cfg. If you don't, GRUB will disregard the default subvolume ID we are about to set and always boot into the root subvolume.

# sed -i 's/rootflags=subvol=root //' /boot/grub2/grub.cfg

# btrfs sub set-default 262 /.

# reboot

After booting up, check the root subvolume:

# btrfs subvolume show /.

Name:                   root-20160607
uuid:                   1a65c55f-4e07-134c-8e34-f17574d2f4ac
Parent uuid:            dfd7589a-0fa4-5c4b-8b95-312d6c619f69
Creation time:          2016-06-07 02:05:05
Object ID:              262
Generation (Gen):       958
Gen at creation:        946
Parent:                 5
Top Level:              5
Flags:                  -
Snapshot(s):

Success. Now we know how to create a snapshot and make it the default.

So, how can we delete unwanted subvolumes? It's easy.

# mount -o subvolid=0 /dev/sda3 /btrfs

# cd /btrfs

# ls

root root-20160607

# btrfs sub delete root

Note that a plain rm command will not remove a subvolume; use btrfs sub delete as above.
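Because the snapshots above carry a sortable date stamp (root-YYYYMMDD), plain shell can pick out the old ones to delete. The sketch below uses ordinary directories as stand-ins for subvolumes, so it is safe to run anywhere; on a real system, btrfs sub delete would replace rmdir:

```shell
# Scratch directory with three date-stamped "snapshots" (made-up names).
dir=$(mktemp -d)
mkdir "$dir/root-20160501" "$dir/root-20160520" "$dir/root-20160607"

# Sort by name (= by date) and keep the newest; the rest are candidates
# for deletion.
old=$(ls -d "$dir"/root-* | sort | head -n -1)
echo "$old"
# On a real system: for s in $old; do btrfs sub delete "$s"; done

rm -r "$dir"
```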

 

Snapper Utility

Snapper is an advanced tool for snapshotting Btrfs systems.

To install ,

# yum install snapper -y

Snapper works with per-volume configuration files and a hidden .snapshots directory. Configurations are stored under /etc/snapper/configs/.

When we create a snapper configuration it automatically writes a config file under /etc/snapper/configs/ and creates a .snapshots directory inside that subvolume.

For example ,

We will use /dev/sdb1, with volume1 as the default subvolume:

# mount -o subvol=volume1 /dev/sdb1 /mnt

# cd /mnt

# snapper -c volume1 create-config /mnt/

This command creates the volume1 config file under /etc/snapper/configs/ and a .snapshots folder at this location.

To take a snapshot using snapper ,

# snapper -c volume1 create -d "First snapshot"

# snapper -c volume1 list

To see the difference between snapshots ,

# snapper -c volume1 status 1..0

This means snapper compares snapshot ID 1 with ID 0 and lists the differences.

If you need to recover files from a previous snapshot:

# snapper -c volume1 undochange 1..0

Task to try yourself: create some files under the subvolume, take a snapshot, then revert using snapper.
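Conceptually, snapper's status/undochange pair works like comparing two directory trees. The sketch below mimics the idea with plain directories and diff rather than snapper itself (the file names are made up for the demo):

```shell
dir=$(mktemp -d)
mkdir "$dir/snap0" "$dir/snap1"
echo one > "$dir/snap0/keep.txt"
echo one > "$dir/snap1/keep.txt"
echo new > "$dir/snap1/added.txt"

# Like 'snapper status 1..0': show what changed between the two states.
# (diff exits non-zero when the trees differ, hence the || true.)
diff -rq "$dir/snap0" "$dir/snap1" || true

rm -r "$dir"
```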

GlusterFS and NFS with High Availability on CentOS 7

Technical requirements
3 x CentOS 7 machines
4 IPs
An additional hard drive of the same size on each machine (e.g. /dev/sdb)
SELinux disabled

Installation
Install CentOS 7 Minimal on the 3 machines and assign static IPs:
node1.rmohan.com = 192.168.1.10
node2.rmohan.com = 192.168.1.11
node3.rmohan.com = 192.168.1.12

Add the hostnames to the /etc/hosts file.

Install GlusterFS packages on all servers.

# yum install centos-release-gluster -y
# yum install glusterfs-server -y
# systemctl start glusterd
# systemctl enable glusterd

Create partition,

# fdisk /dev/sdb
# mkfs.ext4 /dev/sdb1
# mkdir /data
# mount /dev/sdb1 /data (Add this to /etc/fstab)
# mkdir /data/br0

Configure GlusterFS,

node1# gluster peer probe node2
node1# gluster peer probe node3
node1# gluster volume create br0 replica 3 node1:/data/br0 node2:/data/br0 node3:/data/br0
node1# gluster volume start br0
node1# gluster volume info br0
node1# gluster volume set br0 nfs.disable off
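One sizing note worth checking before going live: with replica 3, every file is stored on all three bricks, so usable capacity equals one brick, not the sum. A quick arithmetic sketch (the 100 GB brick size is a made-up example):

```shell
bricks=3
brick_gb=100   # hypothetical size of each brick, in GB
echo "raw: $(( bricks * brick_gb )) GB, usable with replica 3: $brick_gb GB"
# prints "raw: 300 GB, usable with replica 3: 100 GB"
```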

Install and Configure HA

Install below packages on all servers,

# yum -y install corosync pacemaker pcs
# systemctl enable pcsd.service
# systemctl start pcsd.service
# systemctl enable corosync.service
# systemctl enable pacemaker.service

These packages create a "hacluster" user on each machine. We need to assign this user a password, and it must be the same on all servers.

node1# passwd hacluster
node2# passwd hacluster
node3# passwd hacluster

And run the commands below on a single server:

node1# pcs cluster auth node1 node2 node3
node1# pcs cluster setup --name NFS node1 node2 node3
node1# pcs cluster start --all
node1# pcs property set no-quorum-policy=ignore
node1# pcs property set stonith-enabled=false

Create virtual heartbeat IP resource,

node1# pcs resource create virtual_ip ocf:heartbeat:IPaddr2 ip=192.168.1.13 cidr_netmask=24 op monitor interval=20s
node1# pcs status

Done.

Client Configuration

client# yum install nfs-utils -y
client# mount -o mountproto=tcp -t nfs 192.168.1.13:/br0 /gluster

Disaster Recovery

If a GlusterFS server goes down:
Assume node2 goes down; after repairing it, start the machine again.
node2# pcs cluster start node2
node2# pcs status
These commands automatically rejoin the lost machine to the existing PCS cluster.
Also check the Gluster auto-healer:
node2# gluster volume heal br0 info
Note: if 2 machines go down, the remaining machine acts as a read-only file system. At least 2 live machines are needed.

Samba in CentOS 6.8 as Secondary DC with Microsoft Active Directory 2012R2

1. https://bugzilla.samba.org/show_bug.cgi?id=10265
It's necessary to manually lower the domain and forest functional levels on the Windows 2012 server first, via PowerShell:
Set-ADForestMode -Identity "mydom.local" -ForestMode Windows2008R2Forest
Set-ADDomainMode -Identity "mydom.local" -DomainMode Windows2008R2Domain
2. You need a freshly installed minimal CentOS 6.x OS. Disable SELinux and the firewall, and update the software packages.
Please check the notes above and follow them as-is. Let's start:
Primary AD ( Microsoft ) : 192.168.1.10 / ad.rmohan.com
Secondary DC (CentOS ) : 192.168.1.11 / ldap.rmohan.com
Log in to the Linux server:
# cat /etc/resolv.conf
search rmohan.com
nameserver 192.168.1.10
nameserver 192.168.1.11
# yum groupinstall "Development Tools" -y
# yum install python-devel gnutls-devel libacl-devel openldap-devel wget gcc gcc-c++ krb5-server krb5-workstation -y
# wget https://download.samba.org/pub/samba/stable/samba-4.5.0.tar.gz
# tar -xvzf samba-4.5.0.tar.gz
# cd samba-4.5.0
# ./configure
# make
# make install
Now we have successfully compiled the Samba source package. We need to remove the default Samba configuration first and then remount the file system (otherwise the AD join can pop up an ACL error):
# rm -rf /usr/local/samba/etc/smb.conf
# mount -o remount,acl,user_xattr /dev/mapper/vg_ldap-lv_root
Now we are ready to add our Linux machine to Windows AD .
# /usr/local/samba/bin/samba-tool domain join rmohan.com DC -Uadministrator --realm=rmohan.com
Now we have successfully added our Linux system to Active Directory as a secondary DC, but a few more settings need to be configured. Let's check authentication.
Before that, make sure both systems' clocks are in sync (NTP); if they are not, authentication will fail.
# yum install ntp -y
# service ntpd start
# chkconfig ntpd on
Add our primary DC as the NTP server:
# vi /etc/ntp.conf
server ad.rmohan.com iburst
# service ntpd restart
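Kerberos rejects tickets when clocks drift too far apart: 300 seconds by default (the clockskew setting in MIT krb5). A shell sketch of the check, with made-up timestamps:

```shell
server_time=1476169987   # hypothetical epoch seconds on the DC
client_time=1476169907   # hypothetical epoch seconds on this host

# Absolute difference between the two clocks, in seconds.
skew=$(( server_time - client_time ))
[ "$skew" -lt 0 ] && skew=$(( -skew ))

if [ "$skew" -le 300 ]; then
    echo "skew ${skew}s: within the default 300s Kerberos tolerance"
else
    echo "skew ${skew}s: too large, authentication will fail"
fi
```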
Now we need to replace the Kerberos configuration file:
# rm -rf /etc/krb5.conf
# cp -vr /usr/local/samba/private/krb5.conf /etc/krb5.conf
# kinit administrator@rmohan.com
# klist
For successful AD replication we need to add an A record and a CNAME record in the Microsoft AD DNS.
# /usr/local/samba/bin/ldbsearch -H /usr/local/samba/private/sam.ldb '(invocationid=*)' --cross-ncs objectguid
# record 1
dn: CN=NTDS Settings,CN=LDAP,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=example,DC=com
objectGUID: 640bcd46-cbc3-4451-8d82-cb37a255cbe1
# record 2
dn: CN=NTDS Settings,CN=AD,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=example,DC=com
objectGUID: 89f017ee-dacf-4d51-a19b-fe54da97a79a

Copy that objectGUID and go to the Microsoft Active Directory DNS console.
First create an A record for ldap.rmohan.com.
Then go to Forward Lookup Zones > _msdcs.rmohan.com.
Create a CNAME there pointing at our host's objectGUID. In my case it looks like this:

640bcd46-cbc3-4451-8d82-cb37a255cbe1 Alias(CNAME) ldap.rmohan.com
Now authentication is working fine, so we need to start DC replication: every user created on the master or the slave must be replicated.
# /usr/local/samba/sbin/samba
# /usr/local/samba/bin/samba-tool drs showrepl
Default-First-Site-Name\LDAP
DSA Options: 0x00000001
DSA object GUID: 640bcd46-cbc3-4451-8d82-cb37a255cbe1
DSA invocationId: 4c115875-28b5-4c91-bcf0-66f4d74d935b
==== INBOUND NEIGHBORS ====
DC=DomainDnsZones,DC=example,DC=com
Default-First-Site-Name\AD01 via RPC
DSA object GUID: 89f017ee-dacf-4d51-a19b-fe54da97a79a
Last attempt @ Tue Oct 11 03:13:07 2016 EDT was successful
0 consecutive failure(s).
Last success @ Tue Oct 11 03:13:07 2016 EDT
Now we can see that replication is working fine. Let's verify.
List all AD users:
# /usr/local/samba/bin/samba-tool user list
Create a new user in Active Directory and check again; if it shows up, all is good and your secondary server is ready to go.
List all member computers:
# /usr/local/samba/bin/pdbedit -L -w | grep '\[[WI]'

This setup is very useful if you have a single Windows license but need an Active Directory replica.
Enjoy.
