Compare Linux XFS vs EXT4 File System

The Linux operating system offers many different file systems, with ext4 the most commonly used default. File systems are used to manage how data is stored after a program is no longer using it, how access to that data is controlled, and what other information (metadata) is attached to the data itself. This article helps you understand the difference between the Linux XFS and EXT4 file systems.

Ext4 File System:
Ext4 is short for fourth extended file system and was introduced in 2008. It is a reliable file system that has been the default choice for most distributions for the past several years, although it is built on an aging code base. Using several techniques, ext4 improved on the speed of ext3. It is a journaling file system, which means it maintains a journal of where files are located on the disk as well as of any other changes that occur to the disk. If your system crashes, journaling reduces the chance of file system corruption.

Maximum individual file size: 16 GB to 16 TB
Maximum file system size: 1 EB (exabyte)
Maximum number of subdirectories: 64,000 (32,000 in ext3)

XFS File System:
XFS is a high-performance 64-bit journaling file system that was designed by SGI for its IRIX platform. XFS has a variety of features that make it stand out from the file system crowd, for example journaling of metadata operations, scalable/parallel I/O, the ability to suspend and resume I/O, online defragmentation, and delayed allocation for performance.

XFS was merged into the Linux kernel around 2002, and in 2009 RHEL 5.4 added support for the XFS file system. XFS has long been a preferred option for many enterprise systems, particularly those with massive amounts of data, because of its high performance, architectural scalability, and robustness. RHEL/CentOS 7 and Oracle Linux now use XFS as their default file system.

Maximum individual file size: 16 TB to 16 EB (exabytes)
Maximum file system size: 8 EB (exabytes)
Drawbacks: an XFS file system cannot be shrunk, and performance is poor when deleting large numbers of files.

Common commands for ext3/ext4 and XFS:

Task                                ext3/ext4                  XFS
Create a file system                mkfs.ext3 / mkfs.ext4      mkfs.xfs
File system check                   e2fsck                     xfs_repair
Resize a file system                resize2fs                  xfs_growfs
Save an image of a file system      e2image                    xfs_metadump / xfs_mdrestore
Label or tune a file system         tune2fs                    xfs_admin
Back up a file system               dump / restore             xfsdump / xfsrestore
Generic tools for ext4 and XFS:

Task            ext4        XFS
Quota           quota       xfs_quota
File mapping    filefrag    xfs_bmap
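As a quick illustration (a hedged sketch; /dev/sdb1 and /mnt/data are placeholder names, not taken from the article), here is how the create-and-grow workflow compares on the two file systems. For ext4, create the file system and later grow it into a larger underlying device:

mkfs.ext4 /dev/sdb1
resize2fs /dev/sdb1

For XFS, create it, mount it, and grow it through the mount point (remember that XFS cannot be shrunk):

mkfs.xfs /dev/sdb1
mount /dev/sdb1 /mnt/data
xfs_growfs /mnt/data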
I hope this article has helped you understand the differences between the Linux XFS and EXT4 file systems. Thank you for reading! Feel free to share it on social media if you found it useful.

LVM(LOGICAL VOLUME MANAGER)

—————————————–

1> First, create a partition

#partprobe -s

#pvcreate /dev/sda?

#pvcreate /dev/sda?

#vgcreate vg0 /dev/sda? /dev/sda?

#vgdisplay

#pvdisplay

#lvdisplay

#lvcreate -n lvm0 -L 150M vg0 (where -n is the LV name and -L is the size)

#lvdisplay

#mkfs.ext3 /dev/vg0/lvm0

#mkdir /mnt/lvm0

#mount /dev/vg0/lvm0 /mnt/lvm0

#vim /etc/fstab
/dev/vg0/lvm0 /mnt/lvm0 ext3 defaults 0 0

#mount -a
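To confirm the stack after mounting, the usual LVM reporting commands can be used (a quick sketch; the exact output depends on your disk layout):

#pvs
#vgs
#lvs
#df -h /mnt/lvm0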

——————————————————————————–

Steps To Extend The LVM
—————————–

#lvextend -L 300M /dev/vg0/lvm0
(OR)
#lvextend -L +150M /dev/vg0/lvm0

#lvdisplay

#df -h

#resize2fs /dev/vg0/lvm0 (grow the file system to use the new LV size)

#df -h
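On newer lvm2 versions the two steps above can be combined: the -r/--resizefs option tells lvextend to run the file system resize for you (a hedged alternative; check that your lvm2 build supports it before relying on it):

#lvextend -r -L +150M /dev/vg0/lvm0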
——————————————————————————–

Steps To Reduce The Size Of LVM
———————————–

#umount /mnt/lvm0

#e2fsck -f /dev/vg0/lvm0 (force a file system check before resizing)

#resize2fs /dev/vg0/lvm0 200M

#lvreduce -L 200M /dev/vg0/lvm0

#lvdisplay /dev/vg0/lvm0

#mount /dev/vg0/lvm0 /mnt/lvm0

#df -h
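Shrinking is the riskier direction, so always back up first. On recent lvm2 releases the same reduction can be done in one step with the -r/--resizefs option, which runs the file system check and resize for you before reducing the LV (a sketch, not a replacement for the careful sequence above):

#umount /mnt/lvm0
#lvreduce -r -L 200M /dev/vg0/lvm0
#mount /dev/vg0/lvm0 /mnt/lvm0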
——————————————————————————–

To Extend Existing VG
—————————
1> Create a partition

#partprobe -s

#pvcreate /dev/sda?

#vgextend vg0 /dev/sda?

#vgdisplay

Linux Boot Process

***************
Summary
=======

A) START – BIOS (Basic Input Output System) = loading the BIOS into RAM is called BOOT STRAPPING – CMOS = this part of the process is called KERNEL LANDING –
MBR on the hard disk – BOOTLOADER – KERNEL – INITRD IMAGE – inittab = the process up to this point is called USER LANDING

B) There are two types of boot loader in Linux:

a) LILO
i) 1st stage
ii) 2nd stage

b) GRUB
i) 1st stage
ii) 1.5 stage(This stage is optional)
iii) 2nd stage

Scratch \ start
============
A) Power on by USER

B) Power goes into SMPS.

C) Power goes into MOTHERBOARD.

D) Power goes then into CPU’s single pin.

E) Then CPU awakes BIOS.

F) The BIOS goes into RAM and starts POST (power-on self test), which means the BIOS
checks all hardware and peripherals. This step is called BOOTSTRAPPING.

G) Then the BIOS goes to CMOS (complementary metal-oxide-semiconductor) to check
which device should be loaded in RAM. We can change the CMOS settings. CMOS is a small program
which takes its information from the BIOS. The CMOS battery supplies power to the CMOS
program, so the CMOS battery should be changed roughly every 6 months.
The BIOS reads the CMOS program to decide which device to load.

H) The BIOS loads the first sector / first track / cylinder 0 of the hard disk, i.e. the MBR (master boot record), into RAM.

a) The MBR is 512 bytes in size. It is divided into three parts:
1) Boot sector (446 bytes), partition table (64 bytes), magic number (2 bytes).
The boot sector contains the boot loader (LILO or GRUB for Linux and NTLDR
for Windows).
2) The partition table contains 4 entries of 16 bytes each, so we can create only 4
partitions on a hard disk, i.e. four primary partitions; the last primary can be
an extended partition subdivided into logical partitions.
3) The magic number of 2 bytes contains 0 or 1, where 0 means no and 1 means yes.
If the boot sector or partition table in the MBR contains errors, the magic
number will be 0, otherwise 1.

##############################################################################
MBR (Master Boot Record) – first sector of the hard disk = 512 bytes
********************************************
446 bytes – Boot sector –
LILO or GRUB for linux and NTLDR for windows
===========================================================================

64 bytes – Partition table –
1st primary partition 16 bytes
2nd primary partition 16 bytes
3rd primary partition 16 bytes
4th primary partition 16 bytes / extended partition can be subdivided into logical partitions
===========================================================================
2 bytes – Magic number –
If the boot sector or partition table in the MBR contains errors,
then the magic number will be 0, otherwise 1.

#################################################################################
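To see this structure for yourself, the first 512 bytes of the disk can be copied out with dd (an illustrative sketch; /dev/sda is an assumed disk name, and writing such an image back is destructive, so treat the copy as read-only reference material):

# dd if=/dev/sda of=/root/mbr-backup.img bs=512 count=1
# file /root/mbr-backup.img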

I) The BIOS first checks whether the magic number in the MBR says yes or no.
If yes, it looks for the active partition in the partition table; otherwise it displays
a boot failure error. The active partition is denoted by * in the partition table
on Linux; on Windows it is the C drive by default.

J) The BIOS then goes to the first sector of the active partition and loads whatever it finds
there into RAM. But that is the same MBR, so the BIOS goes to the boot sector of the MBR.
In the boot sector there is a boot loader: LILO or GRUB for Linux and NTLDR for Windows.
The BIOS loads LILO/GRUB into RAM for Linux, or NTLDR for Windows. NTLDR then loads Windows.

K) On Linux, the ANACONDA installer writes LILO/GRUB to the MBR at installation
time. LILO/GRUB is more powerful than NTLDR. In a dual-boot setup, install
Windows first and then install Linux.

L) The BIOS loads LILO/GRUB into RAM. This is called the FIRST STAGE OF THE BOOT
PROCESS. The purpose of the first stage is to load the second stage, i.e. boot.b.

M) LILO has a map code file, which holds the address of 'boot.b' in CHS
format. LILO cannot read this format itself, so it calls the BIOS to read
the file. If you change the path of LILO in the configuration file, you
need to re-read the LILO conf file with the command 'lilo -v'.

N) The BIOS reads the map code and loads the BOOT.B file into RAM. This step is called
the SECOND STAGE OF THE BOOT PROCESS.

O) The map and message files are referenced from the boot.b file. Once again, boot.b
cannot read these two files itself, so it calls the BIOS to read them and
load them into RAM. The message file shows the splash screen used to
select the OS.

##########################################
Prompt message
************
Timeout (in seconds)
Default OS (if you do not select any OS)

=============================================
/boot/boot.b (2nd stage of boot loading)
=============================================
/boot/map | /boot/message
/boot/boot.0300 | /boot/boot.0800
=============================================

###########################################

P) After you select the Linux OS, boot.b loads the Linux kernel into RAM. The kernel is
located at /boot/vmlinuz-2.4.20-8, in compressed format. At load time
LILO uncompresses it and loads it into RAM.
This step is called KERNEL LANDING.

Q) Once the kernel is loaded into RAM, the kernel itself loads the INITRD IMAGE into RAM.
The initrd image is also compressed. The initrd image contains a linuxrc script,
and the initrd image runs that linuxrc script.

R) The linuxrc script loads ext3.o, jbd.o, the hard disk driver and the other
*.o drivers. The linuxrc script then mounts '/' as a read-only partition.

S) init – the system daemon; this script is run by the linuxrc script
(located at /sbin/init)

a) inittab – this is a script file at /etc/inittab

1) check default runlevel

2) RC.SYSINIT – this is an important file for boot
processing. This file is read only once, at
boot time, and is located at /etc/rc.d/rc.sysinit
(run as a sub-shell). This script is called 'systemv'.

i) Network – reloads the /etc/sysconfig/network file
and sets the hostname. This step runs the command $ hostname;
if a hostname is found it is set, otherwise the default
hostname 'localhost' is used.

ii) mounts ‘/proc’ filesystem (command mount -n -t proc /proc /proc)

iii) /etc/init.d/functions (same env)
After the proc mounting finishes, inittab runs this file.
It sets up around 23 functions.

iv) global UMASK
v) global PATH
vi) defines 17 shell functions
1) success
2) failure
3) passed
4) warning
5) echo_success
6) echo_failure
7) echo_passed
8) echo_warning
9) killproc
10) pidofproc
11) pidfileofproc
12) action
13) checkpid
14) confirm
15) status
16) strstr
17) daemon
vii) /etc/redhat-release
This command displays the Red Hat version
installed on your system.

viii) Then the interactive mode prompt is
displayed to customize startup. Press 'I'
to boot interactively.

ix) Set Localtime /etc/localtime
‘hwclock’ & ‘date’ cmnd change
/etc/localtime & /etc/adjtime file.

x) /proc -to check kernel parameters

xi) /etc/sysctl.conf -for kernel tuning

xii) Keyboard Mapping

xiii) FASTBOOT – if we create an empty
file named 'fastboot' in '/', then
fsck is not run

xiv) /etc/sysconfig/readonly-root
– rwtab file, all file system
mounted as a read only

xv) /etc/rwtab.d/* – some exceptional
files in read write mode

xvi) FSCK – reads fstab and checks the
fsck field (6th column); if it is
'1' then fsck is run, otherwise the
next line is checked.

xvii) Mounting tmpfs – /dev/shm shared
application memory.

xviii) Reads the /etc/fstab file and remounts
'/' in read-write mode along with all
other partitions.

xix) Quota on.

xx) Enabling the SWAP partition.

xxi) /bin/dmesg – /var/log/dmesg
collect hardware info from
BIOS and display.

b) RC scripts – /etc/rc.d/rc (run as a subshell)
RC manages which services are started and which are not.
This script is read on every runlevel change.

i) checks runlevel ( function )

ii) finding previous runlevel

iii) /etc/init.d/functions (same env)

iv) checks user confirmation mode and
interactive mode / startup. Setting new runlevel

v) /etc/rc.d/rc3.d/K* (K – means stop)
These scripts are used to stop services. Each supports 5 functions:
1) STOP
2) START
3) RESTART
4) STATUS
5) CONDRESTART

vi) /etc/rc.d/rc3.d/S* (S – means start)
These scripts are used to start services only; same as above. The
files in this directory are symlinks to files located
in '/etc/init.d/'. These files are called init script
files and are run via /etc/init.d/functions.

vii) /usr/bin/rhgb-client functions,used only stop
( start
stop
restart
status
condrestart )

viii) /etc/rc.d/rc.local – this is the last file
run by the rc script. If you want to run some commands
automatically at boot, you can add those commands or scripts
to this file.

c) Now /sbin/init reads the /etc/inittab file. inittab defines, among others:
/sbin/shutdown -t3 -r now
poweroff considerations – /sbin/shutdown -f -h +2
poweron considerations – /sbin/shutdown -c
power ok wait
Then inittab reads the following line in the file:
1:2345:respawn:/sbin/mingetty tty1
If the runlevel is 5, then /etc/X11/prefdm -nodaemon is run.

I) inittab calls MINGETTY
a) loads /dev/tty1
b) Reads /etc/issue file
c) /bin/login
1) /usr/bin/passwd
i) PAM (Pluggable Authentication Modules) security.
ii) /etc/passwd
iii) /etc/shadow
iv) /etc/group
v) /etc/gshadow

d) puts login daemon in sleep state

e) Does root/.hushlogin exist? If yes, mail, motd and lastlog are NOT RUN!

f) /etc/motd (Message of the day)

g) lastlog* using /var/log/lastlog

h) Checks user’s mail – /var/spool/mail/root

i) wakes the /bin/login process which forks off as
independent application daemon & mingetty goes to Zombie state.

2) login calls /bin/bash
puts login to sleep state

3) /etc/profile (global sets HOSTNAME, HISTSIZE, PATH etc)
a) /etc/inputrc (sets keyboard mappings)
b) /etc/termcap (sets term. capabilities)
c) /etc/profile.d/*.sh ( 13 files )
customize *.sh files
( colorls.sh
vim.sh
glib2.sh
gnome-ssh-askpass.sh
krb5.sh
lam.sh
lang.sh
less.sh
mc.sh
pvm.sh
qt.sh
vim.sh
which-2.sh
xpvm.sh )

J) /etc/bashrc – global shell settings

a) umask – permission set
1) root -0 – 0022
2) above uid 99 (user) – 0022

b) PS1 – defines the prompt variable (root@localhost#). We can change '#' to '$',
i.e. 'root@localhost$'.

K) Users profile
a) /root/.bash_profile
b) /root/.bashrc
c) /root/.bash_history
d) /root/.bash_logout

========================
END OF THE BOOT PROCESS.
========================

PXE Network Installation with Kick start

Just configured and tested PXE network installation. It's amazing!!

Please bear with my rough notes on this Topic :

>> Mount your CentOS DVD to /media.

>>>yum -y install dhcp tftp-server syslinux vsftpd system-config-kickstart

>>> Configure DHCP Server as below :

#vi /etc/dhcpd.conf

ddns-update-style interim;

ignore client-updates;

allow booting;

allow bootp;

authoritative;

subnet 192.168.182.0 netmask 255.255.255.0 {

range dynamic-bootp 192.168.182.138 192.168.182.254;

default-lease-time 21600;

max-lease-time 43200;

next-server 192.168.182.137;

filename "pxelinux.0";

}

>>>> Configure TFTP Server

# vi /etc/xinetd.d/tftp

service tftp

{

socket_type = dgram

protocol = udp

wait = yes

user = root

server = /usr/sbin/in.tftpd

server_args = -s /tftpboot

disable = no

per_source = 11

cps = 100 2

flags = IPv4

}

>>> Copy all the required boot files to the tftp server's home directory

#cp -a /media/isolinux/* /tftpboot/

>>> create a new directory at /tftpboot/pxelinux.cfg

>>> # cp /tftpboot/isolinux.cfg /tftpboot/pxelinux.cfg/default

>>> #cp /usr/lib/syslinux/pxelinux.0 /tftpboot/

>>> # service xinetd start

>>>>> Configure FTP server to provide CD Dump.

# cp -vr /media/* /var/ftp/pub

>>> Ensure that anonymous_enable=YES in /etc/vsftpd/vsftpd.conf and start the vsftpd service.

>>> On the client machine, go into the BIOS settings and change the boot device priority to put "Network boot from xxxx" in first position.

>>> The above steps give you a network installation in attended mode, i.e. you still have to do package selection and other steps manually.

To configure an unattended network installation, follow these steps:

1. Go to your PXE server and check the kickstart file at /root/anaconda-ks.cfg, then uncomment the following lines for the partitions (note: you can also edit this file with the system-config-kickstart utility as per your requirements):

clearpart --linux

part /boot --fstype ext3 --size=100

part swap --size=2000

part / --fstype ext3 --size=100 --grow

Save this file and copy it to /var/ftp/pub:

#cp /root/anaconda-ks.cfg /var/ftp/pub (please ensure the required permissions are granted on this kickstart file)

2. Go to your client machine, boot again, and at the boot prompt enter:

linux ks=ftp://192.168.182.137/pub/anaconda-ks.cfg

and there you go: your installation will be unattended.

NOTE: You can make the installation completely unattended, meaning you don't even have to type the boot option (e.g. linux ks=ftp://192.168.182.137/pub/anaconda-ks.cfg), so the installation is completely INDEPENDENT of you. For this, follow these steps:

1. Edit /tftpboot/pxelinux.cfg/default file and add the following lines :

label linux

kernel vmlinuz

append initrd=initrd.img linux ks=ftp://192.168.182.137/pub/anaconda-ks.cfg

2. Save the above file and start your client machine's installation.
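For reference, a complete /tftpboot/pxelinux.cfg/default might look like the sketch below (the default, prompt and timeout values are assumptions, and the ks= URL must match your FTP server address):

default linux
prompt 1
timeout 30
label linux
kernel vmlinuz
append initrd=initrd.img ks=ftp://192.168.182.137/pub/anaconda-ks.cfg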

ENJOY PXE!!!!!!

Thanks

SSH passwordless multiple login


I've already written about how to log in on your local system and make passwordless ssh connections using the ssh-keygen command. However, you cannot just follow those instructions over and over again, as you will overwrite the previous keys.
It is also possible to upload multiple public keys to your remote server, allowing one or more users to log in without a password from different computers.
Step # 1: Generate first ssh key
Type the following command to generate your first public and private key on a local workstation. Next provide the required input or accept the defaults. Please do not change the filename and directory location.
workstation#1 $ ssh-keygen -t rsa
Finally, copy your public key to your remote server using scp
workstation#1 $ scp ~/.ssh/id_rsa.pub user@remote.server.com:.ssh/authorized_keys
Step # 2: Generate next/multiple ssh key
a) Login to 2nd workstation
b) Download the original authorized_keys file from the remote server using scp:
workstation#2 $ scp user@remote.server.com:.ssh/authorized_keys ~/.ssh
c) Now create the new pub/private key:
workstation#2 $ ssh-keygen -t rsa
d) Now you have a new public key. APPEND this key to the downloaded authorized_keys file using the cat command:
workstation#2 $ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
e) Finally upload authorized_keys to remote server again:
workstation#2 $ scp ~/.ssh/authorized_keys user@remote.server.com:.ssh/
You can repeat step #2 for each additional user or workstation that needs access to the remote server.
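On many systems the whole copy-and-append routine can be done with ssh-copy-id, which appends your public key to the remote authorized_keys for you (a convenient alternative sketch; the tool ships with most openssh-clients packages):
workstation#2 $ ssh-copy-id user@remote.server.com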
Step #3: Test your setup
Now try to login from Workstation #1, #2 and so on to remote server. You should not be asked for a password:
workstation#1 $ ssh user@remote.server.com
workstation#2 $ ssh user@remote.server.com
Updated for accuracy.

LVM Snapshot : Backup & restore LVM Partition in linux

An LVM snapshot is an exact mirror copy of an LVM partition which has all the data from the LVM volume from the time the snapshot was created. The main advantage of LVM snapshots is that they can reduce the amount of time that your services / application are down during backups because a snapshot is usually created in fractions of a second. After the snapshot has been created, we can back up the snapshot while our services and applications are in normal operation.

An LVM snapshot is a feature provided by LVM (Logical Volume Manager) in Linux. While creating an LVM snapshot, one of the most common questions is: what should the size of the snapshot be?

"The snapshot size can vary depending on your requirements, but a minimum recommended size is 30% of the logical volume you are taking the snapshot of. If you think you might end up changing all the data in the logical volume, make the snapshot the same size as the logical volume."

Scenario: We will take a snapshot of /home, which is an LVM-based partition.

[root@localhost ~]# df -h /home/
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_home
5.0G 139M 4.6G 3% /home

Taking Snapshot of ‘/dev/mapper/VolGroup-lv_home’ partition.

An LVM snapshot is created using the lvcreate command. You must have enough free space in the volume group, otherwise the snapshot cannot be created. The exact syntax is given below:

# lvcreate -s -n <snapshot-name> -L <size> <logical-volume-path>

Example :

[root@localhost ~]# lvcreate -s -n home_snap -L1G /dev/mapper/VolGroup-lv_home
Logical volume “home_snap” created

Now verify the newly created LVM snapshot 'home_snap' using the lvdisplay command.

Now create the mount point (directory) and mount it:
[root@localhost ~]# mkdir /mnt/home-backup
[root@localhost ~]# mount /dev/mapper/VolGroup-home_snap /mnt/home-backup/
[root@localhost ~]# ls -l /mnt/home-backup/

The above command will show all the directories and files that we know from our /home partition.

Now take a backup of the snapshot into the /opt folder.

[root@localhost ~]# tar zcpvf /opt/home-backup.tgz /mnt/home-backup/

If you want a bitwise backup, then use the command below:

[root@localhost ~]# dd if=/dev/mapper/VolGroup-home_snap of=/opt/bitwise-home-backup
10485760+0 records in
10485760+0 records out
5368709120 bytes (5.4 GB) copied, 79.5741 s, 67.5 MB/s

Restoring Snapshot Backup :

If anything goes wrong with your /home file system, you can restore the backup taken in the steps above. You can also mount the LVM snapshot on the /home folder.
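A minimal restore sketch using the snapshot itself (assuming the volume names created above): merging the snapshot back into its origin reverts /home to the state it had when the snapshot was taken. lvconvert removes the snapshot once the merge completes, and if the origin is still in use the merge is deferred until the volume is next activated:

[root@localhost ~]# umount /home
[root@localhost ~]# lvconvert --merge /dev/mapper/VolGroup-home_snap
[root@localhost ~]# mount /home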

Remove LVM snapshot

Once you are done with the LVM snapshot backup and restore activity, you should unmount and remove the LVM snapshot partition using the commands below, since the snapshot consumes system resources such as disk space in its volume group.

[root@localhost ~]# umount /mnt/home-backup/
[root@localhost ~]# lvremove /dev/mapper/VolGroup-home_snap
Do you really want to remove active logical volume home_snap? [y/n]: y
Logical volume “home_snap” successfully removed

pam_tally2 command – lock & unlock ssh failed logins in linux


The pam_tally2 command is used to lock and unlock failed ssh logins in Linux-like operating systems. To implement a security policy where a user's account is locked after a number of failed login attempts, we can use the PAM module pam_tally2. This module can display users' login attempt counts, set counts on an individual basis, and unlock all user counts.

pam_tally2 comes in two parts: pam_tally2.so and pam_tally2. The former is the PAM module and the latter, a stand-alone program. pam_tally2 is an application which can be used to interrogate and manipulate the counter file

In this article we will discuss how to lock and unlock a user's account after a fixed number of failed ssh attempts on RHEL 6.x / CentOS 6.x.

By default the pam_tally2 module is already installed on Linux. To set the lock and unlock rules, edit the two files '/etc/pam.d/system-auth' and '/etc/pam.d/password-auth' and add the line below at the start of the auth section in both files:

auth required pam_tally2.so file=/var/log/tallylog deny=3 even_deny_root unlock_time=120

And then add the below line in the account Section in both the files

account required pam_tally2.so

Sample File of /etc/pam.d/system-auth

Sample File of /etc/pam.d/password-auth
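Since the sample files are referenced here only by name, a representative excerpt of the relevant sections might look like the sketch below (illustrative only; the stock lines shipped by your release will differ, and the pam_tally2 lines are the additions described above):

auth        required      pam_tally2.so file=/var/log/tallylog deny=3 even_deny_root unlock_time=120
auth        required      pam_env.so
auth        sufficient    pam_unix.so nullok try_first_pass

account     required      pam_tally2.so
account     required      pam_unix.so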

where:
file=/var/log/tallylog – default log file which keeps the login counts.
deny=3 – deny access after 3 attempts and lock the user.
even_deny_root – the policy also applies to the root user.
unlock_time=120 – the account stays locked for 120 seconds (2 minutes), after which it is unlocked automatically.
Now try to log in to the Linux box with an incorrect password:

Now check user’s login attempts using pam_tally2 Command
[root@localhost ~]# pam_tally2 -u nextstep4it
Login Failures Latest failure From
nextstep4it 3 06/14/14 02:01:25 192.168.1.8

Now reset or unlock the user's account using the pam_tally2 command:
[root@localhost ~]# pam_tally2 --user nextstep4it --reset
Login Failures Latest failure From
nextstep4it 4 06/14/14 02:20:42 192.168.1.8

Now verify that the login attempt counter has been reset:
[root@localhost ~]# pam_tally2 --user nextstep4it
Login Failures Latest failure From
nextstep4it 0

Hardware Serial Numbers Linux Command to Retrieve


Ever needed to obtain the serial number (or other details) for a remote server? Couldn’t be bothered to walk/run/drive/fly all the way there just to read a sticky label on the back or bottom of said server? Read on then.
The command you want to run, as root, is dmidecode. For example, to get the make and model and serial number of a server, do this:
dmidecode -t system
The result will be similar to:
# dmidecode 2.11
SMBIOS 2.5 present.

Handle 0x0002, DMI type 1, 27 bytes
System Information
Manufacturer: Dell Inc.
Product Name: Vostro 1720
Version: Null
Serial Number: 996C4L1
UUID: Not Settable
Wake-up Type: Power Switch
SKU Number: Null
Family: Vostro

Handle 0x000F, DMI type 12, 5 bytes
System Configuration Options
Option 1: Jumper settings can be described here.

Handle 0x0018, DMI type 32, 20 bytes
System Boot Information
Status: No errors detected
Other options for the -t parameter are:
bios – tells you all about your bios.
system – tells you about the system hardware.
baseboard – all about the mother board.
chassis – all you need to know about the “box” the system is made up of.
processor – fairly obvious.
memory – again, fairly obvious.
cache – information about your CPU cache.
connector – what sockets are present on the computer. USB, firewire, ethernet etc.
slot – appears to be the bus information, and voltages present, supplied etc.
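If you only need a single value, the -s option prints just that DMI string, which is handy in scripts (the keyword below is one of dmidecode's standard string keywords; the serial shown is the one from the example above):
dmidecode -s system-serial-number
996C4L1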
There’s brief help available:
dmidecode --help

Usage: dmidecode [OPTIONS]
Options are:
-d, --dev-mem FILE Read memory from device FILE (default: /dev/mem)
-h, --help Display this help text and exit
-q, --quiet Less verbose output
-s, --string KEYWORD Only display the value of the given DMI string
-t, --type TYPE Only display the entries of given type
-u, --dump Do not decode the entries
--dump-bin FILE Dump the DMI data to a binary file
--from-dump FILE Read the DMI data from a binary file
-V, --version Display the version and exit
However, to find out the different types you can supply, you need to supply an erroneous type:
dmidecode -t left_leg

Invalid type keyword: left_leg
Valid type keywords are:
bios
system
baseboard
chassis
processor
memory
cache
connector
slot

Raid Partition how to

RAID (Redundant Array of Inexpensive Disks)

Create 3 partitions for implementing RAID using fdisk command.

e.g. #fdisk /dev/hda

Press n to create the 3 new partitions each of 100Mb in size.

Press p to see the partition table.

Press t to change the partition id of all three partitions you created to fd
(Linux raid autodetect).

Press wq to save and exit from fdisk utility in linux.

#partprobe

Use fdisk -l to list the partition table.

Creating RAID

# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/hda6 /dev/hda7 /dev/hda8

Press y to create the arrays.

To see the details of raid use the following command: –

# cat /proc/mdstat

# mdadm --detail /dev/md0

Creating the file system for your RAID devices

#mkfs.ext3 /dev/md0

Mounting the RAID partition

#mkdir data

# mount /dev/md0 data

#df -h /root/data (Command is used to see the space allocation).

Crashing the raid devices

# mdadm --manage /dev/md0 --fail /dev/hda8

Removing raid devices

# mdadm --manage /dev/md0 --remove /dev/hda8

Adding raid devices

# mdadm --manage /dev/md0 --add /dev/hda8

View failed and working raid devices

# cat /proc/mdstat

# mdadm --detail /dev/md0

# tail /var/log/messages

To remove the RAID follow these steps: –

1) unmount the mounted directory where raid is mounted.

e.g. umount data

2) Stop the device

e.g. mdadm --stop /dev/md0

3) View the details of your raid level using following command: –

#cat /proc/mdstat

#mdadm --detail /dev/md0
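Optionally, if you want to discard the array metadata for good so the partitions can be reused, wipe the md superblock from each former member (a hedged final step; only do this once you are sure the data is no longer needed):

# mdadm --zero-superblock /dev/hda6 /dev/hda7 /dev/hda8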

Kdump for Linux Kernel Crash Analysis

Kdump is a utility used to capture the system core dump in the event of a system crash.
These captured core dumps can be used later to analyze the exact cause of the system failure and implement the necessary fix to prevent crashes in the future.
Kdump reserves a small portion of memory for a secondary kernel called the crash kernel.
This secondary (crash) kernel is used to capture the core dump image whenever the system crashes.
1. Install Kdump Tools

First, install the kdump, which is part of kexec-tools package.
# yum install kexec-tools

2. Set crashkernel in grub.conf

Once the package is installed, edit the /boot/grub/grub.conf file and set the amount of memory to be reserved for the kdump crash kernel.
You can edit /boot/grub/grub.conf, find the crashkernel value, and set it either to auto or to a user-specified value. It is recommended to use a minimum of 128M for a machine with 2G of memory or more.
In the following example, look for the line that starts with "kernel", where it is set to "crashkernel=auto".
# vi /boot/grub/grub.conf
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux (2.6.32-419.el6.x86_64)
root (hd0,0)
kernel /vmlinuz-2.6.32-419.el6.x86_64 ro root=/dev/mapper/VolGroup-lv_root rd_NO_LUKS LANG=en_US.UTF-8 rd_NO_MD rd_LVM_LV=VolGroup/lv_swap SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_LVM_LV=VolGroup/lv_root KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
initrd /initramfs-2.6.32-419.el6.x86_64.img
3. Configure Dump Location

Once the kernel crashes, the core dump can be captured to a local file system or a remote file system (NFS) based on the settings defined in /etc/kdump.conf (on the SLES operating system the path is /etc/sysconfig/kdump).
This file is automatically created when the kexec-tools package is installed.
All the entries in this file are commented out by default. You can uncomment the ones that best fit your needs.
# vi /etc/kdump.conf
#raw /dev/sda5
#ext4 /dev/sda3
#ext4 LABEL=/boot
#ext4 UUID=03138356-5e61-4ab3-b58e-27507ac41937
#net my.server.com:/export/tmp
#net user@my.server.com
path /var/crash
core_collector makedumpfile -c --message-level 1 -d 31
#core_collector scp
#core_collector cp --sparse=always
#extra_bins /bin/cp
#link_delay 60
#kdump_post /var/crash/scripts/kdump-post.sh
#extra_bins /usr/bin/lftp
#disk_timeout 30
#extra_modules gfs2
#options modulename options
#default shell
#debug_mem_level 0
#force_rebuild 1
#sshkey /root/.ssh/kdump_id_rsa
In the above file:
To write the dump to a raw device, you can uncomment "raw /dev/sda5" and change it to point to the correct dump location.
If you want to change the path of the dump location, uncomment and change “path /var/crash” to point to the new location.
For NFS, you can uncomment “#net my.server.com:/export/tmp” and point to the current NFS server location.
4. Configure Core Collector

The next step is to configure the core collector in Kdump configuration file. It is important to compress the data captured and filter all the unnecessary information from the captured core file.
To enable the core collector, uncomment the following line that starts with core_collector.
core_collector makedumpfile -c --message-level 1 -d 31

makedumpfile specified in the core_collector actually makes a small DUMPFILE by compressing the data.
makedumpfile provides two DUMPFILE formats (the ELF format and the kdump-compressed format).
By default, makedumpfile makes a DUMPFILE in the kdump-compressed format.
The kdump-compressed format can be read only with the crash utility, and it can be smaller than the ELF format because of the compression support.
The ELF format is readable with GDB and the crash utility.
-c compresses the dump data page by page.
-d sets the dump level, i.e. which types of unnecessary pages are excluded (31 filters out most of them).
If you uncomment the line #default shell, then a shell is invoked if kdump fails to collect the core, so the administrator can manually take the core dump using makedumpfile commands.
5. Restart kdump Services

Once kdump is configured, restart the kdump services,
# service kdump restart
Stopping kdump: [ OK ]
Starting kdump: [ OK ]

# service kdump status
Kdump is operational
If you have any issues starting the service, then the kdump module or the crashkernel parameter has not been set up properly. Verify /proc/cmdline and make sure it includes the crashkernel value.
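A quick way to confirm that the crash kernel memory was actually reserved (a sketch; the exact sizes and offsets in the output will differ on your system):

# cat /proc/cmdline
# dmesg | grep -i crashkernel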
6. Manually Trigger the Core Dump

You can manually trigger the core dump using the following commands:
echo 1 > /proc/sys/kernel/sysrq
echo c > /proc/sysrq-trigger
The server will reboot itself and the crash dump will be generated.

7. View the Core Files

Once the server has rebooted, you will see that the core file has been generated under the location defined by the path setting in /etc/kdump.conf (/var/crash by default).
You will see a vmcore and a vmcore-dmesg.txt file:
# ls -lR /var/crash
drwxr-xr-x. 2 root root 4096 Mar 26 11:06 127.0.0.1-2014-03-26-11:06:43

/var/crash/127.0.0.1-2014-03-26-11:06:43:
-rw——-. 1 root root 33595159 Mar 26 11:06 vmcore
-rw-r–r–. 1 root root 79498 Mar 26 11:06 vmcore-dmesg.txt
8. Kdump analysis using crash

Crash utility is used to analyze the core file captured by kdump.
It can also be used to analyze the core files created by other dump utilities like netdump, diskdump, xendump.
You need to ensure the “kernel-debuginfo” package is present and it is at the same level as the kernel.
Launch the crash tool as shown below. After you run this command, you will get a crash prompt, where you can execute crash commands:
# crash /var/crash/127.0.0.1-2014-03-26-12\:24\:39/vmcore /usr/lib/debug/lib/modules/ /vmlinux

crash>

9. View the Process when System Crashed

Execute ps command at the crash prompt, which will display all the running process when the system crashed.
crash> ps
PID PPID CPU TASK ST %MEM VSZ RSS COMM
0 0 0 ffffffff81a8d020 RU 0.0 0 0 [swapper]
1 0 0 ffff88013e7db500 IN 0.0 19356 1544 init
2 0 0 ffff88013e7daaa0 IN 0.0 0 0 [kthreadd]
3 2 0 ffff88013e7da040 IN 0.0 0 0 [migration/0]
4 2 0 ffff88013e7e9540 IN 0.0 0 0 [ksoftirqd/0]
7 2 0 ffff88013dc19500 IN 0.0 0 0 [events/0]

10. View Swap space when System Crashed

Execute swap command at the crash prompt, which will display the swap space usage when the system crashed.
crash> swap
FILENAME TYPE SIZE USED PCT PRIORITY
/dm-1 PARTITION 2064376k 0k 0% -1
11. View IPCS when System Crashed

Execute ipcs command at the crash prompt, which will display the shared memory usage when the system crashed.
crash> ipcs
SHMID_KERNEL KEY SHMID UID PERMS BYTES NATTCH STATUS
(none allocated)

SEM_ARRAY KEY SEMID UID PERMS NSEMS
ffff8801394c0990 00000000 0 0 600 1
ffff880138f09bd0 00000000 65537 0 600 1

MSG_QUEUE KEY MSQID UID PERMS USED-BYTES MESSAGES
(none allocated)

12. View IRQ when System Crashed

Execute irq command at the crash prompt, which will display the IRQ stats when the system crashed.
crash> irq -s
CPU0
0: 149 IO-APIC-edge timer
1: 453 IO-APIC-edge i8042
7: 0 IO-APIC-edge parport0
8: 0 IO-APIC-edge rtc0
9: 0 IO-APIC-fasteoi acpi
12: 111 IO-APIC-edge i8042
14: 108 IO-APIC-edge ata_piix
.
.

vtop – This command translates a user or kernel virtual address to its physical address.
foreach – This command displays data for multiple tasks in the system
waitq – This command displays all the tasks queued on a wait queue.
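Two other crash commands worth knowing at this point are bt, which prints the kernel stack backtrace of the task that panicked (usually the first thing to look at), and log, which dumps the kernel message buffer from the core file:
crash> bt
crash> log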
13. View the Virtual Memory when System Crashed

Execute vm command at the crash prompt, which will display the virtual memory usage when the system crashed.
crash> vm
PID: 5210 TASK: ffff8801396f6aa0 CPU: 0 COMMAND: “bash”
MM PGD RSS TOTAL_VM
ffff88013975d880 ffff88013a0c5000 1808k 108340k
VMA START END FLAGS FILE
ffff88013a0c4ed0 400000 4d4000 8001875 /bin/bash
ffff88013cd63210 3804800000 3804820000 8000875 /lib64/ld-2.12.so
ffff880138cf8ed0 3804c00000 3804c02000 8000075 /lib64/libdl-2.12.so
14. View the Open Files when System Crashed

Execute files command at the crash prompt, which will display the open files when the system crashed.
crash> files
PID: 5210 TASK: ffff8801396f6aa0 CPU: 0 COMMAND: “bash”
ROOT: / CWD: /root
FD FILE DENTRY INODE TYPE PATH
0 ffff88013cf76d40 ffff88013a836480 ffff880139b70d48 CHR /tty1
1 ffff88013c4a5d80 ffff88013c90a440 ffff880135992308 REG /proc/sysrq-trigger
255 ffff88013cf76d40 ffff88013a836480 ffff880139b70d48 CHR /tty1
..

15. View System Information when System Crashed

Execute sys command at the crash prompt, which will display system information when the system crashed.
crash> sys
KERNEL: /usr/lib/debug/lib/modules/2.6.32-431.5.1.el6.x86_64/vmlinux
DUMPFILE: /var/crash/127.0.0.1-2014-03-26-12:24:39/vmcore [PARTIAL DUMP]
CPUS: 1
DATE: Wed Mar 26 12:24:36 2014
UPTIME: 00:01:32
LOAD AVERAGE: 0.17, 0.09, 0.03
TASKS: 159
NODENAME: elserver1.abc.com
RELEASE: 2.6.32-431.5.1.el6.x86_64
VERSION: #1 SMP Fri Jan 10 14:46:43 EST 2014
MACHINE: x86_64 (2132 Mhz)
MEMORY: 4 GB
PANIC: “Oops: 0002 [#1] SMP ” (check log for details)

Note: for kernel debugging we need the following packages to be installed:
kernel-debuginfo-2.6.32-220.el6.i686.rpm
kernel-debuginfo-common-i686-2.6.32-220.el6.i686.rpm

Listing 2: Panic Routine for NMI Event
# cat /proc/sys/kernel/unknown_nmi_panic
1
# sysctl kernel.unknown_nmi_panic
kernel.unknown_nmi_panic = 1
# grep nmi /etc/sysctl.conf
kernel.unknown_nmi_panic = 1
