
Shorewall Firewall

Installing and configuring Shorewall in CentOS

Netfilter is the packet filtering framework in the Linux 2.4.x and 2.6.x kernels. It provides packet filtering (by network address and port), NAT, and other packet-mangling features, and is a redesign and major improvement over ipchains in the 2.2.x kernels and ipfwadm in the 2.0.x kernels.

Netfilter is a set of hooks within the kernel that allows kernel modules to register callbacks with the network stack.

Each rule returns a verdict (target) that decides the fate of the packet, such as ACCEPT, DROP, or REJECT.

DROP – discards the packet silently, without sending any message back to the source.

REJECT – does the same as DROP, except that it sends an ICMP "port unreachable" (icmp-port-unreachable) message back to the source machine.
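As an illustrative sketch (Shorewall generates rules like these for you; the port number is only an example), the difference looks like this in raw iptables:

iptables -A INPUT -p tcp --dport 23 -j DROP
iptables -A INPUT -p tcp --dport 23 -j REJECT --reject-with icmp-port-unreachable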

iptables is the table-based structure used to define rulesets. Each rule within a table matches a set of packets and specifies the action (target) to take on them.

Netfilter, ip_tables, connection tracking (ip_conntrack, nf_conntrack), and the NAT subsystem together form the main parts of the framework.

Reference: http://www.netfilter.org

 

Shoreline Firewall (Shorewall), built on netfilter
Project site: www.shorewall.net/index.htm
The Documentation section of the site walks through the configuration item by item and covers other features you can add to Shorewall.
Using Shorewall you are still using iptables, just in an easier way.

 

yum install shorewall

Checking which services are set to start at boot:

# chkconfig --list

iptables   0:off 1:off 2:on  3:on  4:on  5:on  6:off
shorewall  0:off 1:off 2:off 3:off 4:off 5:off 6:off

Note that Shorewall is off, meaning that whenever the machine is rebooted, Shorewall will not start.

# chkconfig shorewall on

With Shorewall enabled, the output looks like this:

shorewall  0:off 1:off 2:on  3:on  4:on  5:on  6:off

Shorewall configuration files:

/etc/shorewall/shorewall.conf
/etc/shorewall/interfaces
/etc/shorewall/masq
/etc/shorewall/policy
/etc/shorewall/rules
/etc/shorewall/zones

where:

interfaces – definition of each interface the firewall will use
masq – definition of masquerading/SNAT (eth0, eth1, eth2 …)
policy – policies (ACCEPT, DROP, REJECT …)
rules – firewall rules
zones – zone declarations

 

Configuring Shorewall
Change the following in shorewall.conf:

STARTUP_ENABLED=Yes

and change the compiler setting from SHOREWALL_COMPILER= (empty) to:

SHOREWALL_COMPILER=perl

 

cat /etc/shorewall/interfaces

#
# Shorewall version 4 – Interfaces File
#
# For information about entries in this file, type “man shorewall-interfaces”
#
# The manpage is also online at
# http://www.shorewall.net/manpages/shorewall-interfaces.html
#
#######################################
#ZONE   INTERFACE       BROADCAST       OPTIONS
net     eth0      detect        tcpflags,dhcp,routefilter,nosmurfs,logmartians
loc     eth1      detect        tcpflags,nosmurfs
#LAST LINE — ADD YOUR ENTRIES BEFORE THIS ONE — DO NOT REMOVE

# cat masq
#
# Shorewall version 4 – Masq file
#
# For information about entries in this file, type “man shorewall-masq”
#
# The manpage is also online at
# http://www.shorewall.net/manpages/shorewall-masq.html
#
#######################################
#INTERFACE              SOURCE          ADDRESS         PROTO   PORT(S) IPSEC   MARK
eth0    203.x.x.x
eth0:1  203.y.y.y
eth1    10.x.y.z
#LAST LINE — ADD YOUR ENTRIES ABOVE THIS LINE — DO NOT REMOVE

Note: Read the fine manual; it has several more options you can configure.

 

cat policy

#
# Shorewall version 4 – Policy File
#
# For information about entries in this file, type “man shorewall-policy”
#
# The manpage is also online at
# http://www.shorewall.net/manpages/shorewall-policy.html
#
#######################################
#SOURCE         DEST            POLICY          LOG             LIMIT:BURST
#                                               LEVEL
fw      loc     ACCEPT
fw      net     ACCEPT
fw      fw      ACCEPT
loc     fw      ACCEPT
loc     net     ACCEPT
loc     loc     ACCEPT
net     fw      DROP    info
net     loc     DROP    info
net     net     DROP    info

all     all     DROP    info
#LAST LINE — DO NOT REMOVE

Note: I left that last line (all all DROP info) so that anything not explicitly allowed is dropped by the firewall and logged.
cat rules

#
# Shorewall version 4 – Rules File
#
# For information on the settings in this file, type “man shorewall-rules”
#
# The manpage is also online at
# http://www.shorewall.net/manpages/shorewall-rules.html
#
############################################
#ACTION         SOURCE          DEST            PROTO   DEST    SOURCE          ORIGINAL        RATE            USER/   MARK
#                                                       PORT    PORT(S)         DEST            LIMIT           GROUP
#SECTION ESTABLISHED
#SECTION RELATED
SECTION NEW

### NET to FW

ACCEPT  net     fw   icmp  echo-request
ACCEPT  net     fw   tcp   80
ACCEPT  net     fw   tcp   22

### LOC to FW

ACCEPT  loc     fw   tcp   ssh
ACCEPT  loc     fw   icmp  echo-request
ACCEPT  loc     fw   udp   snmp

### LOC to NET

ACCEPT  loc     net  udp   domain
ACCEPT  loc     net  tcp   domain
ACCEPT  loc     net  tcp   http,https
ACCEPT  loc     net  icmp  echo-request
#LAST LINE — ADD YOUR ENTRIES BEFORE THIS ONE — DO NOT REMOVE

# cat zones
#
# Shorewall version 4 – Zones File
#
# For information about this file, type “man shorewall-zones”
#
# The manpage is also online at
# http://www.shorewall.net/manpages/shorewall-zones.html
#
#######################################
#ZONE   TYPE            OPTIONS         IN                      OUT
#                                       OPTIONS                 OPTIONS
fw      firewall
net     ipv4
loc     ipv4
#LAST LINE – ADD YOUR ENTRIES ABOVE THIS ONE – DO NOT REMOVE

# service shorewall start

If you get any errors, run "shorewall debug restart" so you can see exactly where the error occurs.
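A minimal sketch of a typical debugging session (the log file path is just an example):

# shorewall check
# shorewall debug restart 2>&1 | tee /tmp/shorewall-debug.log

shorewall check validates the configuration files without touching the running firewall, which is usually the quickest way to locate a syntax error.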

Setting up Clustering on Apache/Jboss/Tomcat using Jakarta mod_jk

JBoss is clustered for two main reasons: load balancing and failover.

Fail-over

Fail-over is probably the most important issue for web applications. If the front end load balancer detects that one of the nodes has gone down it will redirect all the traffic to the second instance and your clients, apart from any on the failed node, won’t even notice. Well that’s the theory – if the site is under high load you may find that the surviving Tomcat cannot cope with the increased load and fails in short order. You need to size and correctly configure your hardware and software.

You can also manually fail over a system to upgrade the hardware, JVM, or other software without end users being aware. If your management doesn’t tolerate any downtime this is an extremely important feature. Your JVMs can be installed on separate physical hosts or on the same one; the latter leaves you exposed to hardware problems.

Load-balancing

There are two easy ways to increase application performance without tuning the application software itself. Buy faster hardware or add more hardware. If you add a second server running Jboss you can use load balancing to distribute the work between the two servers. Requests come in to the Apache front end, static content is served directly by Apache and any dynamic requests forwarded to the Tomcats based on some algorithm. The Apache Jakarta mod_jk connector is the most popular way of doing this.

In this exercise we will configure two physical computers, each running one Apache web server and two Tomcat instances, for both failover and load balancing. The example ignores any back-end data storage considerations but does discuss load balancing between the two computers.

Front End Load Balancing

DNS Round Robin

The simplest technique for load balancing between the two Apache front end servers is the DNS round robin strategy. This performs load balancing by returning a different server IP address each time a (DNS) name server is queried by a client. It is usually found as a front-end service on large clusters of servers to provide a server address that is geographically close to the client. Normally an intelligent name server such as lbnamed is used, which polls the web servers in the list to check whether they are online and not overloaded.

DNS Load balancing has some significant drawbacks. Addresses may be cached by clients and local DNS resolvers and it can be hard to accurately manage load. The system simply shuffles the order of the address records each time the name server is queried. No account is taken of real server load or network congestion.
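As a hedged illustration, plain round-robin A records in a BIND zone file might look like this (the names and addresses are made up); by default the name server rotates the order of the records it returns:

www    IN    A    192.0.2.10
www    IN    A    192.0.2.11
www    IN    A    192.0.2.12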

Load Balanced

The advantage of using a load balancer compared to round robin DNS is that it takes into account the load on the web server nodes and directs requests to the node with the least load. It can also remove failed nodes from the list. Some round robin DNS software can do this too; the problem is that it has little control over how clients cache IP addresses.

Many web applications (e.g. forum software, shopping carts, etc.) make use of sessions. If you are in a session on Apache node 1, you would lose that session if suddenly node 2 served your requests. This assumes that the web servers don’t share their session data, say via a database.

For DNS load balancing this is not a problem, the IP address will resolve to a single server. However a front end load balancer must be aware of sessions and always redirect requests from the same user to the same backend server.

There are a number of packages to support front end load balancing. For example RedHat Piranha or Ultramonkey. There are also hardware/firmware solutions from companies such as Cisco.

 

Apache httpd

MaxClients

MaxClients limits the maximum simultaneous connections that Apache can serve. This value should be tuned carefully based on the amount of memory available on the system.

MaxClients ≈ (RAM – size_of_all_other_processes) / size_of_an_apache_process

If Apache is using the prefork MPM, a separate Apache process will be created for each connection, so the total number of processes will be MaxClients + 1. Each process will have a single connection in the mod_jk pool to talk to a JBoss instance. The worker MPM uses a thread model to handle client requests and should be slightly more efficient.

As a guide, Apache's MaxClients and the number of available JBoss threads configured in server.xml should be equal. mod_jk maintains persistent connections between Apache and JBoss; if the Apache front end accepts more users than JBoss can handle you will see connection errors. In a clustered environment we also have to allow for the errors the load balancer makes when it tracks the number of concurrent threads to the Tomcat back-ends, so the following formula is a starting point:

Apache.MaxClients = 0.8 * (Tomcat-1.maxThreads + … Tomcat-n.maxThreads)

i.e.

400 = 0.8 * (250 + 250)

Use:

  • httpd -V to see how Apache is compiled
  • ps or top to see how much memory is being used on the system (see the sketch after this list)
  • vmstat to tune memory usage
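For example, a rough way to estimate the average resident size of the Apache processes (assuming the process name is httpd, as on CentOS) is:

ps -o rss= -C httpd | awk '{sum+=$1; n++} END {if (n) printf "%.1f MB average per process\n", sum/n/1024}'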

The Sezame Apache front end also serves static content and a number of client connections will be used for this purpose for each requested page of content.

MaxRequestsPerChild

Sets the total number of requests that each Apache child process will handle before it exits. This is useful for controlling memory leak issues in loaded modules. Because mod_jk maintains permanent connections in its pool between Apache and JBoss, we set this to zero, which means each child can handle an unlimited number of requests.
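A minimal httpd.conf sketch of the settings discussed above (the values are illustrative, not recommendations):

# Prefork MPM sizing; 0 means a child is never recycled
MaxClients           400
MaxRequestsPerChild  0
KeepAlive            On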

Redirecting requests to the mod_jk load balancer

We will need to load the mod_jk module (a shared library) into Apache when it starts and tell the module where to find the workers.properties file and where to write logs (useful for debugging/crashes).
/etc/httpd/conf/httpd.conf

1. httpd.conf:
———————————–
# Include mod_jk’s specific configuration file
Include conf/mod-jk.conf

2. mod-jk.conf:
———————————
# Load mod_jk module
# Specify the filename of the mod_jk lib
LoadModule jk_module modules/mod_jk.so
# Where to find workers.properties
JkWorkersFile conf/workers.properties
# Where to put jk logs
JkLogFile logs/mod_jk.log
# Set the jk log level [debug/error/info]
JkLogLevel info
# Select the log format
JkLogStampFormat "[%a %b %d %H:%M:%S %Y]"
# JkOptions indicates to send SSL KEY SIZE
JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories
# JkRequestLogFormat
JkRequestLogFormat "%w %V %T"
# Mount your applications
JkMount /myapp/* loadbalancer
JkMount /jspellhtml2k4/* loadbalancer
JkMount /jspelliframe2k4/* loadbalancer
JkMount /jspellhtml2k4 loadbalancer
JkMount /jspelliframe2k4 loadbalancer

# You can use external file for mount points.
# It will be checked for updates each 60 seconds.
# The format of the file is: /url=worker
# /examples/*=loadbalancer
JkMountFile conf/uriworkermap.properties

# Add shared memory.
# This directive is present with 1.2.10 and
# later versions of mod_jk, and is needed
# for load balancing to work properly
# Note: Replaced JkShmFile logs/jk.shm due to SELinux issues. Refer to
# https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=225452
JkShmFile run/jk.shm

# Add jkstatus for managing runtime data
<Location /jkstatus>
JkMount status
Order deny,allow
Deny from all
Allow from 127.0.0.1
</Location>

3. uriworkermap.properties
————————————————-
# Simple worker configuration file
# Mount the Servlet context to the ajp13 worker
/jmx-console=loadbalancer
/jmx-console/*=loadbalancer
/web-console=loadbalancer
/web-console/*=loadbalancer
/myapp/*=loadbalancer
/myapp=loadbalancer
/jspellhtml2k4/*=loadbalancer
/jspellhtml2k4=loadbalancer
/jspelliframe2k4/*=loadbalancer
/jspelliframe2k4=loadbalancer

The following is a suggested configuration for two physical servers with a pair of Tomcat instances running on each. The rationale is that the front-end load balancer distributes load between the two physical hosts. The front-end load balancer is itself configured for fail-over, using a virtual IP address and a heartbeat-type mechanism to determine whether the balancer is still operational.

Each physical server runs an Apache web server and two Tomcat servlet engines. Apache serves static content either from an NFS-mounted disk or from a replicated file system using something like Web. Apache load-balances between the two local JBoss instances, but if these become overloaded or crash it can forward requests on to the other physical server.

The Apache mod_jk load-balancer can distribute work using one of three methods:

  • B (busyness): choose the worker with the lowest number of requests currently in progress
  • R (requests): choose the worker that has processed the lowest number of requests overall
  • T (traffic): choose the worker that transmitted the lowest number of bytes

The sticky_session is set to true to maintain a session between the client and Tomcat.

worker.list=loadbalancer

worker.loadbalancer.type=lb
worker.loadbalancer.method=B
worker.loadbalancer.balance_workers=worker1,worker2,worker3,worker4
worker.loadbalancer.sticky_session=true

This is the common configuration for all workers. Note that if the prefork MPM is used you will want to set the cachesize property; it should reflect the estimated average number of concurrent users for the Tomcat instance. cache_timeout should be set so that cached sockets are closed, which reduces the number of idle threads on the Tomcat server. Note that connection_pool_size and connection_pool_timeout should be used with mod_jk 1.2.16 and later.

If there is a firewall between Apache and JBoss, the socket_keepalive property should be used. socket_timeout tells Apache to close an ajp13 connection after some inactivity time, which also reduces idle threads on Tomcat.

# common config
worker.basic.type=ajp13
worker.basic.socket_timeout=300
worker.basic.lbfactor=5

4. workers.properties
————————————————-
# The configuration directives are valid
# for the mod_jk version 1.2.18 and later
#
worker.list=loadbalancer,status

# Define Node1
# modify the host as your host IP or DNS name.
worker.node1.port=8009
worker.node1.host=myapp-myapp.myapp.com
worker.node1.type=ajp13
worker.node1.lbfactor=1
worker.node1.connection_pool_size=10
worker.node1.cachesize=10

# Define Node2
# modify the host as your host IP or DNS name.
worker.node2.port=8009
worker.node2.host=myapp-myapp1.myapp.com
worker.node2.type=ajp13
worker.node2.lbfactor=1
worker.node2.connection_pool_size=10
worker.node2.cachesize=10

# Load-balancing behaviour
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=node1,node2
worker.loadbalancer.sticky_session=1

# Status worker for managing load balancer
worker.status.type=status

2.    Make sure all of the settings given below are present in the JBoss configuration on every node of the JBoss cluster.

JBoss Server Side Configuration Changes:
———————————————–

1. Remove/comment out the two ports below in the ja.properties file.
———————————————————————-
vi /opt/jboss/jboss-5.0/jboss-as/server/template_cluster/conf/ja.properties
Search for the following two lines:
OAPort=3528
OASSLPort=3529
Comment out both lines as shown below and save the file:
#OAPort=3528
#OASSLPort=3529

2. Add jvmRoute
——————
vi /opt/jboss/jboss-5.06/jboss-as/server/template_cluster/deploy/jboss-web.deployer/server.xml

Search for the following text:
<Engine name="jboss.web" defaultHost="localhost">
Change it to:
<Engine name="jboss.web" defaultHost="localhost" jvmRoute="node1">

Note: This configuration needs to be done in all the nodes and all the configuration files need to be changed accordingly.
Check the workers.properties file to see which host is node1, node2, and so on.
Example: for node2 the jvmRoute will be node2 (the host name can be found in the workers.properties file):
<Engine name="jboss.web" defaultHost="localhost" jvmRoute="node2">

3. Change UseJK to true
————————-
vi /opt/jboss/jboss-5.0/jboss-as/server/template_cluster/deploy/jboss-web.deployer/META-INF/jboss-service.xml

Search for the following text:
<attribute name="UseJK">false</attribute>
Change it to:
<attribute name="UseJK">true</attribute>

Note: This configuration needs to be done in all the nodes and all the configuration files need to be changed accordingly.

4. Make sure the properties below are present in the jboss-service.xml config file.
——————————————————————————
vi /opt/jboss/jboss-5.0/jboss-as/server/template_cluster/deploy/jboss-web-cluster.sar/META-INF/jboss-service.xml

<mbean code="org.jboss.cache.aop.TreeCacheAop" name="jboss.cache:service=TomcatClusteringCache">
<depends>jboss:service=Naming</depends>
<depends>jboss:service=TransactionManager</depends>
<depends>jboss.aop:service=AspectDeployer</depends>
<attribute name="TransactionManagerLookupClass">org.jboss.cache.BatchModeTransactionManagerLookup</attribute>
<attribute name="IsolationLevel">REPEATABLE_READ</attribute>
<attribute name="CacheMode">REPL_ASYNC</attribute>
<attribute name="ClusterName">Tomcat-${jboss.partition.name:Cluster}</attribute>
<attribute name="UseMarshalling">false</attribute>
<attribute name="InactiveOnStartup">false</attribute>
<attribute name="ClusterConfig">
… …
</attribute>
<attribute name="LockAcquisitionTimeout">15000</attribute>
</mbean>


These are the four worker definitions (two per physical server). Notice the use of the local_worker property: the load-balancer will always favour workers with local_worker set to 1.

worker.worker1.host=server1
worker.worker1.port=8009
worker.worker1.local_worker=1
worker.worker1.reference=worker.basic

worker.worker2.host=server1
worker.worker2.port=8010
worker.worker2.local_worker=1
worker.worker2.reference=worker.basic

worker.worker3.host=server2
worker.worker3.port=8009
worker.worker3.local_worker=0
worker.worker3.reference=worker.basic

worker.worker4.host=server2
worker.worker4.port=8010
worker.worker4.local_worker=0
worker.worker4.reference=worker.basic

Tomcat Configuration

HTTP access directly to Tomcat is only required by administrators, for checking whether a Tomcat instance is still running by connecting to the specified port. The firewall should block client access to this port.

<Connector port="8899"  minSpareThreads="5" maxThreads="10" enableLookups="false" redirectPort="8443" acceptCount="10" debug="0" connectionTimeout="60000"/>

The Tomcat instance must listen on the same port as specified in the corresponding worker's section of workers.properties.

<Connector port="8009" enableLookups="false" redirectPort="8443" 
protocol="AJP/1.3" maxThreads="400" minSpareThreads="50"  maxSpareThreads="300" />

maxThreads should be configured so that the CPU can be fully loaded without the JVM reporting error messages about insufficient threads.

The Engine jvmRoute property should correspond to the worker name in the workers.properties file, or the load balancer will not be able to handle stickiness.

<Engine name="Catalina" defaultHost="localhost" jvmRoute="worker1">

 

 

Three Tier Architecture Diagram


The application solution is designed to be compliant with web standards, which recommend a minimum of three server tiers and three network zones for enterprise-grade secure applications.

The first tier, the presentation layer, provides an interface for user interaction and secure access (HTTPS). This layer is implemented by HTTP servers. The HTTP servers host all the static content, provide certificates for secure network access, and direct user requests to the web application servers. When configured to do so, this tier also provides high availability/load balancing among the web application servers.

The next layer, the application layer, is provided by a J2EE application server which implements the solution's business processes; these update business data for the various applications and provide notifications, reports, and logs as needed.

The core application web tier is the server platform which handles all aspects of the applications: security, data entry, reporting, operations, and management.

The database layer (the data integration layer together with the data storage) makes up the third tier, which hosts application data in a reliable and robust way.

Each layer is hosted by a single hardware server or a set of hardware servers running any of the server operating systems. These servers are connected by IP-based LANs, which can be isolated using firewalls if needed.

Below is a sample architecture diagram of all layers

[Figure: Three Tier Architecture Diagram]

 

 

 

Apache Server Load Balancing for Multiple Virtual Hosts Tomcat


 

vi /etc/httpd/conf/httpd.conf

NameVirtualHost *:443
NameVirtualHost *:80

<VirtualHost *:80>
ServerName server1.rmohan.com
Redirect permanent / https://server1.rmohan.com/
</VirtualHost>

<VirtualHost *:443>
ServerName server1.rmohan.com
SSLProxyEngine On
KeepAlive On

<Proxy balancer://Cluster>
BalancerMember ajp://localhost:18009 disablereuse=On route=jvm1
BalancerMember ajp://localhost:28009 disablereuse=On route=jvm2
</Proxy>

ProxyPass / balancer://Cluster/ stickysession=JSESSIONID
</VirtualHost>

<VirtualHost *:80>
ServerName server2.rmohan.com
Redirect permanent / https://server2.rmohan.com/
</VirtualHost>

<VirtualHost *:443>
ServerName server2.rmohan.com
SSLProxyEngine On
KeepAlive On

<Proxy balancer://Cluster>
BalancerMember ajp://localhost:18009 disablereuse=On route=jvm1
BalancerMember ajp://localhost:28009 disablereuse=On route=jvm2
</Proxy>

ProxyPass / balancer://Cluster/ stickysession=JSESSIONID
</VirtualHost>

These VirtualHost blocks go on for as many servers as needed. I know it doesn't look pretty and, in fact, it's incorrect, since a VirtualHost for an SSL connection shouldn't rely on the ServerName directive unless you're using SNI with mod_gnutls or a specific combination of server and browser, such as Apache 2.2.12 or later with Mozilla Firefox 2.0 or later. In my environment it did work, so this configuration was used for a while. The first improvement I attempted looked like this:

NameVirtualHost *:80

<VirtualHost *:80>
ServerName server1.rmohan.com
Redirect permanent / https://server1.rmohan.com/
</VirtualHost>

<VirtualHost *:80>
ServerName server2.rmohan.com
Redirect permanent / https://server2.rmohan.com/
</VirtualHost>

KeepAlive On

<Proxy balancer://wwwCluster>
BalancerMember ajp://localhost:18009 route=jvm1
BalancerMember ajp://localhost:28009 route=jvm2
</Proxy>

NameVirtualHost *:443

<VirtualHost *:443>
ProxyPass / balancer://Cluster/ stickysession=JSESSIONID
</VirtualHost>


At first it seemed OK, but then I noticed that it produced a strange side effect.
The balancer worked, but the redirect from HTTP to HTTPS didn’t. Therefore this configuration was not acceptable.
Thinking about it some more, I realized that I don't actually need to know the name of the host when redirecting to my backend server, because the backend will process the Host header and return the appropriate result. Searching for an alternative way to redirect to HTTPS, I found a page suggesting that Apache's RewriteEngine could be used to achieve this. So here's my final configuration: it redirects HTTP to HTTPS, the load balancer's sticky sessions work properly, and everything is forwarded to one of the backend servers chosen by the balancer regardless of the hostname used to access it. In other words, it works the way I need it to.


NameVirtualHost *:80

<VirtualHost *:80>
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
</VirtualHost>

ProxyPass / balancer://Cluster/ stickysession=JSESSIONID

<Proxy balancer://Cluster>
BalancerMember ajp://localhost:18009 route=jvm1
BalancerMember ajp://localhost:28009 route=jvm2
</Proxy>

NameVirtualHost *:443

<VirtualHost *:443>
SSLEngine on
</VirtualHost>


And if you're using the Apache server with PHP rather than Java, I found an article that is a good read on setting up a load balancer with sticky sessions for PHP.

<Proxy balancer://cluster>
BalancerMember http://server1.rmohan.com:8080/
BalancerMember http://server2.rmohan.com:8080/
BalancerMember http://server3.rmohan.com:8080/
</Proxy>

ProxyPass / balancer://cluster/ lbmethod=bytraffic
ProxyPassReverse / http://www.rmohan.com/

 

Apache mod_proxy as Load Balancer


This article presents an example of load balancing using Apache2.

For example, take the host 192.168.1.100 as the balancer; this host will run the Apache front end that accepts all requests and balances the load across the other hosts.

The load is distributed to the hosts 192.168.1.101, 192.168.1.102, 192.168.1.103 and 192.168.1.104.

Configuring apache2

First of all, we need to install the Apache server on the host acting as the front end.

On 192.168.1.100, install Apache2:

yum install httpd httpd-devel

yum install mod_proxy

grep -i proxy /etc/httpd/conf/httpd.conf
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule proxy_ftp_module modules/mod_proxy_ftp.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_connect_module modules/mod_proxy_connect.so

proxy Settings

On 192.168.1.100, create /etc/httpd/conf.d/proxy.conf; this file will contain the nodes across which the incoming load is balanced.

<Proxy balancer://mycluster>
BalancerMember http://192.168.1.101
BalancerMember http://192.168.1.102
BalancerMember http://192.168.1.103
BalancerMember http://192.168.1.104
Options FollowSymLinks
Order allow,deny
Allow from all
</Proxy>
ProxyPass /myCluster balancer://mycluster

[root@www1 ~]# vi /etc/httpd/conf.d/proxy.conf
# create new

<IfModule mod_proxy.c>
ProxyRequests Off
<Proxy *>
Order deny,allow
Allow from all
</Proxy>

# specify the way of load balancing with “lbmethod”. it’s also possible to set “bytraffic”.

ProxyPass /proxy balancer://cluster lbmethod=byrequests
<proxy balancer://cluster>
BalancerMember http://www1.rmohan.com loadfactor=1
BalancerMember http://www2.rmohan.com loadfactor=1
</proxy>
</IfModule>
[root@www ~]# /etc/rc.d/init.d/httpd restart

Stopping httpd:
[  OK  ]

Starting httpd:
[  OK  ]

Performance Benchmarking a Web Server: Apache Concurrency


Apache Benchmark Procedures

Tweaks include the KeepAlive, KeepAliveTimeout, and MaxKeepAliveRequests settings. Recommended settings, which can all be set in the httpd.conf file, are:

Code:
ServerLimit 128
MaxClients 128
KeepAlive On
KeepAliveTimeout 2
MaxKeepAliveRequests 100

Testing a stock Apache configuration (MaxClients is 256, ServerLimit is 256, KeepAliveTimeout is 15) with ab would look as follows, here making 1000 requests with a concurrency of 5 simultaneous requests.

 

$ ab -n 1000 -c 5 http://10.4.85.106/index.html
Where,

-n 1000: ab will send 1000 requests to the server 10.4.85.106 for the benchmarking session
-c 5: the concurrency level, i.e. ab will issue 5 requests at a time to the server 10.4.85.106

For example, if you want to send 10 requests, type the following command:
$ ab -n 10 -c 2 http://www.test.com/

Please note that 1000 requests is a small number; you should send a larger number of requests (i.e. the number of hits you want to test). For example, the following command will send 50000 requests:
$ ab -k -n 50000 -c 2 http://10.4.85.106/index.html

How do I carry out Web server Static KeepAlive test?

Use the -k option, which enables the HTTP KeepAlive feature in ab. For example:
$ ab -k -n 1000 -c 5 http://10.4.85.106/index.html

How do I save result as a Comma separated value?

Use the -e option, which writes a comma-separated values (CSV) file containing, for each percentage from 1% to 100%, the time (in milliseconds) it took to serve that percentage of the requests:
$ ab -k -n 50000 -c 2 -e apache2r1.csv http://10.4.85.106/index.html
How do I import the results into Excel or gnuplot so that I can create graphs?

Use the above command, or the -g (gnuplot) option as follows:
$ ab -k -n 50000 -c 2 -g apache2r3.txt http://10.4.85.106/index.html

Sample psql.php (php+mysql) file

<?php
$link = mysql_connect("localhost", "USERNAME", "PASSWORD");
mysql_select_db("DATABASE");

$query = "SELECT * FROM TABLENAME";
$result = mysql_query($query);

while ($line = mysql_fetch_array($result))
{
foreach ($line as $value)
{
print "$value\n";
}
}

mysql_close($link);
?>
Run ab command as follows:
$ ab -n 1000 -c 5 http://10.4.85.106/psql.php

Script For Load test Apache

Here is my bash script that use wget command to bombard your web servers:

#!/bin/bash
for((i=0;i<=100;i++))
do
wget http://192.168.1.7/index.html
done
rm -f index.html*

 

 

apache bench max concurrency
echo "10240" > /proc/sys/net/core/somaxconn

or set (to make permanent)

net.core.somaxconn = 10240 (or any number over what you are using as max connections)

in

/etc/sysctl.conf

then run

/sbin/sysctl -p /etc/sysctl.conf

Also set your local ulimit to

ulimit -n 65535

just to have some headroom in case you want to increase the number of concurrent connections in ab.

Also, if you move to the worker MPM (you will have to rebuild PHP to be thread-safe if you use PHP), it might lower your CPU usage in exchange for higher RAM usage, depending on what you are running off your server.

Tomcat


The reference is taken from Apache Tomcat 7 Essentials By: Tanuj Khare

Tomcat
Apache Tomcat is an open source Java-based web and servlet container, which is used to host Java-based applications. It was first developed for Jakarta Tomcat. Due to an increase in demand, it was later hosted as a separate project called Apache Tomcat, which is supported by The Apache Software Foundation. It was initially developed by James Duncan Davidson, a software architect at Sun Microsystems. He later helped make this project open source and played a key role in donating this project from Sun Microsystems to The Apache Software Foundation. Tomcat implements the Java Servlet and the JavaServer Pages (JSP) specifications from Sun Microsystems, and provides a “pure Java” HTTP web server environment for Java code to run.


Installation of Tomcat 7

In the previous section, we have discussed the new enhancements in Apache Tomcat 7. Now, it’s time to move on to the Tomcat installation.
How to download the Tomcat software

Perform the following steps to download the software:

Before we start the installation of the Apache Tomcat 7 software, the first things that come to mind are where can you download the software from and how much does the license cost? By default, Apache comes with the Apache License, Version 2.0, which is compatible with the GPL (General Public License). In simple terms, it is free of cost! For more information on licenses, you can visit http://www.apache.org/licenses/. Now, the second question is how to download the software.
It is always recommended to download the software from its official site, http://tomcat.apache.org/download-70.cgi. By default, on http://tomcat.apache.org/, we get the latest stable version of Tomcat package and we have to download the package based on the operating system, where we want to install it.

Common problems and troubleshooting in installation

There are multiple issues which may arise during the installation of Tomcat 7. Let’s discuss these issues:
Error: Permission denied for the Java binary

Scenario 1: The Java installation is not working, while executing the Java binary.

[root@localhost opt]# ./jdk-6u24-linux-i586.bin
-bash: ./jdk-6u24-linux-i586.bin: Permission denied

Issue: The Java binary doesn’t have execute permissions with a specific user.

Fix: Change the permission to 0755 for ./jdk-6u24-linux-i586.bin using the following command:

chmod 0755 jdk-6u24-linux-i586.bin

chmod 0755 is equivalent to u=rwx (4+2+1), go=rx (4+1). The leading 0 specifies no special modes.
Error: Tomcat is not able to find JAVA_HOME

Performance tuning for Tomcat 7

Performance tuning plays a vital role to run a web application without downtime. Also, it helps in improving the performance of Tomcat while running the applications. Tuning of the Tomcat server may vary from application to application. Since every application has its own requirements, it is a very tricky task to tune Tomcat 7 for every application. In this topic, we will discuss tuning of various components of Tomcat, and how they are useful in the server performance. Before we start with the configuration changes, let us quickly discuss why we need to tune Tomcat.
Why do we need performance tuning?

People always ask, why do we need to do performance tuning for Tomcat 7 when, by default, Tomcat 7 packages are customized for the production run.

How to start performance tuning

Performance tuning starts from the day the application deployment stage begins. One may ask, as the application is only in the development phase, why do we need to do performance tuning now?

At the time of application development, we are in a position to define the architecture of the application, how the application is going to perform in reality, and how many resources it requires. There are no predefined steps for performance tuning, but there are certain rules of thumb which all administrators should follow.

The following figure shows the process flow for performance tuning:

Tomcat components tuning

In Tomcat 7, you can do many configurations to improve the server performance, threads tuning, port customization, and JVM tuning. Let’s quickly discuss the major components of Tomcat 7 which are important for performance improvement.
Types of connectors for Tomcat 7

Connectors can be defined as a point of intersection where requests are accepted and responses are returned. There are three types of connectors used in Tomcat 7, as shown in the following figure. These connectors are used according to different application requirements. Let’s discuss the usability of each connector:

Java HTTP Connector

The Java HTTP Connector is based on the HTTP Protocol, which supports only the HTTP/1.1 protocol. It makes the Tomcat server act as a standalone web server and also enables the functionality to host the JSP/servlet.

JVM tuning

Before we start with JVM tuning, we should note that there are various vendors available in the market for JVM. Based on the application requirement, we should select the JDK from the vendor.

Sun JDK is widely used in the IT industries.
Why do we need to tune the JDK for Tomcat?

Tomcat 7 comes with a heap size of 256 MB. Applications today need a large memory to run. In order to run the application, we have to tune the JVM parameter for Tomcat 7. Let’s quickly discuss the default JVM configurations for Tomcat. The following steps describe the method to find out the Tomcat Process ID (PID) and memory values as shown in the next screenshot:

Run the following command on the terminal in Linux:

ps -ef |grep java
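A minimal sketch of how the heap is usually raised for Tomcat 7, assuming a bin/setenv.sh file is used (the values are illustrative only, not recommendations):

# TOMCAT_HOME/bin/setenv.sh - picked up automatically by catalina.sh
export CATALINA_OPTS="-Xms512m -Xmx1024m -XX:MaxPermSize=256m"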

OS tuning

Every OS has its own prerequisites to run Tomcat 7 and the system has to be tuned based on the application’s requirement, but there are some similarities between each OS. Let’s discuss the common module used for optimization of Tomcat 7 for every OS. The OS plays a vital role for increasing the performance. Depending on the hardware, the application’s performance will increase or decrease. Some of the points which are very much useful for the application are:

Performance characteristics of the 64 bit versus 32 bit VM: The benefits of using 64 bit VMs are being able to address larger amounts of memory, which comes with a small performance loss in 64 bit VMs versus running the same application on a 32 bit VM. You can allocate more than 4 GB JVM for a memory-intensive application.

Integration of Tomcat with the Apache Web Server

The Apache HTTP server is one of the most widely used frontend web servers across the IT industry. This project was open sourced in 1995 and is owned by The Apache Software Foundation.

This chapter is very useful for the web administrator who works on enterprise-level web integration. It gives a very good idea about how integration is implemented in IT organizations. So, if you are thinking of enhancing your career in enterprise-level integrations of applications, then read this chapter carefully.

In this chapter, we will discuss the following topics:

The Apache HTTP installation
The various modules of Apache
Integration of Apache with Tomcat 7
How IT industry environments are set up

User request flow (web/application level)

Before we discuss the installation of Apache, let’s discuss a high-level overview of how the request flows from the web and application server for an application in IT industries. The following figure shows the process flow for a user request, in a web application. The step-by-step involvement of each component is as follows:

The user hits the URL in the browser and the request goes to the HTTP server instead of Tomcat.
The HTTP server accepts the request and redirects it to Tomcat for business logic processing.
Tomcat internally contacts the database server to fetch the data, and sends the response back to the user through the same channel of request:

Why the Apache HTTP server

The Apache HTTP server is one of the most successful and common web servers used in IT industries. The reason being that it is supported by open source communities. In IT industries, the Apache HTTP server is heavily used as a frontend web server for the following reasons:

Efficiently serves static content: Static content such as images, JS, CSS, and HTML files are more efficiently served by the HTTP server in a heavy user environment. Tomcat is also capable, but it increases the response time.
Increase the speed by 10 percent: As compared to Tomcat, Apache serves static content 10 percent more efficiently. Integration of Apache is very helpful in the scenario of a high user load.
Clustering: Apache is one of the most cost-effective and stable solutions to connect multiple instances of Tomcat. The biggest advantage of this feature is that the application will be online in case one of the instances goes down. Also, during deployment, we can deploy the code on one instance while the other instance is still online, serving requests to users. In simple terms, there is no downtime for the application.

Installation of the Apache HTTP

The Apache installation can be done using various methods, based on the requirement of the infrastructure. For example, if you want to run multiple Apache instances on a single machine, then the Source installation will be used. There are mainly three types of installations done in various web environments:

Source
Binary
RPM/exe

Source is preferred by web administrators, as it can be customized based on system requirements.
Apache HTTP installation on Windows

In this topic, we will discuss the installation of the Apache HTTP as a service. The installation of the Apache HTTP server on the Windows platform is quite simple. Following are the steps to be performed:

The Apache HTTP server can be downloaded from various different sites, but it is always recommended to download it from its official site http://httpd.apache.org/download.cgi. On this site, you can find the stable and beta release details. Download the latest Win32 Binary without crypto (no mod_ssl) (MSI Installer) given in the website. Click on httpd-2.2.X-win32-x86-no_ssl.msi to begin the download. Here 2.2 is the major version and X is the minor version, which changes almost every month. The following screenshot shows the different versions available for the download:

Apache Jserv protocol

This protocol was mainly developed to transfer data over the network in binary format instead of plain text. It uses TCP and a packet-based protocol, hence, increasing the performance of the web servers. Another informational point is that decryption of requests is done on the web server end so that the application server doesn’t have a high load.

If you are using AJP, the network traffic is reduced, as the traffic passes in a binary format over the TCP protocol.

mod_jk and mod_proxy are based on the AJP protocol. They are also helpful in transmitting a high content response over the browser.

If we use the latest version of mod_jk for integration of Apache and Tomcat, then we can store the response header of 64k in the web browsers. This process is very useful in the case of SSO enabled applications or storing Java session values in the browser.

Securing Tomcat 7

The Internet has created a revolution in the 21st century; it provides us the capability of collecting information in seconds, whereas it would have taken months to collect the information previously. This has also raised security concerns for information privacy and has created the requirement of securing information over the Internet.

Everyday, new technologies are emerging to improve Internet usage for applications. With these technologies in the market, it becomes a tricky job for hackers and other communities to access secure information.

In this chapter, we will discuss the following topics:

Tomcat security permissions
Catalina properties
SSL implementation on Tomcat 7

Tomcat Manager

The security being a major concern for IT companies, a separate department for IT security administration is created in every company. Their major responsibility is to make sure that there are no vulnerabilities in terms of the networks, web, and OS infrastructure.

We should download Tomcat from the Tomcat website or any secure, known host; there is a chance that malicious software is shipped with Tomcat if we download it from an unknown source. Once the download is complete, verify the integrity of Tomcat using MD5/PGP. On Linux, the signatures can be verified with OpenPGP (Open Specification for Pretty Good Privacy). This is a must for production systems.
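As an illustrative sketch (the file names are examples; the checksum and signature files are published alongside each release on the Tomcat download page):

md5sum apache-tomcat-7.0.x.tar.gz
gpg --verify apache-tomcat-7.0.x.tar.gz.asc apache-tomcat-7.0.x.tar.gz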

Tomcat security permissions

Apache Tomcat comes with good security-enabled options, but every environment has its own requirement for security, based on the usage of the application. For example, banking sites require a high level of security, on the other hand, user-based applications require little security.

In Tomcat 7, the default permissions are configured in the TOMCAT_HOME/conf directory. Security is a collective effort of four files which make up the system. Let's discuss each file and its functionality.
catalina.properties

This file contains information related to package access, package definitions, the common loader, the shared loader, and a list of JAR files that do not need to be scanned at Tomcat startup. Adding JARs that don't need scanning to the skip list helps improve startup performance and memory consumption. If you want to add any common JAR, you have to define it under catalina.properties.
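A hedged example of such an entry in conf/catalina.properties for Tomcat 7 (the JAR names are placeholders; the additional names are appended to the existing jarsToSkip list):

tomcat.util.scan.DefaultJarScanner.jarsToSkip=mylib-*.jar,another-lib.jar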

Enabling Tomcat Manager

By default, the Tomcat Manager is disabled in Tomcat 7. It is a very powerful tool, but if it falls into the wrong hands it can create problems for the system administrator or the application administrator. So it's very important that you enable the Tomcat Manager with proper security.
How to enable the Tomcat Manager

To enable the Manager, we have to edit tomcat-users.xml, which is present in TOMCAT_HOME/conf. You will see that the Tomcat user entries are commented out by default.

Uncomment the user entry (or add one), save the file, and then restart Apache Tomcat 7.
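A minimal sketch of the uncommented entries in conf/tomcat-users.xml (the user name and password are placeholders; manager-gui is the Tomcat 7 role for the HTML manager interface):

<tomcat-users>
  <role rolename="manager-gui"/>
  <user username="admin" password="ChangeMe" roles="manager-gui"/>
</tomcat-users>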

If you enable Tomcat Manager in a production environment, make sure it can be accessed only from the internal environment and not the DMZ.

Securing Tomcat 7 for production

In this topic, we will discuss the best practices used for securing Tomcat 7. Securing Tomcat does not mean only Tomcat, it includes both Tomcat configurations and other infrastructure configurations. Let’s first start with the Tomcat configurations.
Tomcat settings

There are different methods of securing Tomcat 7 and these come into picture based on the application’s requirement and the security policy used by an IT organization.

Every organization has their own security policies and the IT administrator follows them while implementing the security in Tomcat.

In Tomcat 7, there are different configurations, which need to be changed or enabled in order to secure Tomcat for the external environment. Let’s discuss each configuration and their usage for a real-time environment.

SSL configuration on Tomcat 7

Secure Socket Layer (SSL) is another way of securing data communication. It is a cryptographic protocol in which data travels through a secure channel. The server sends a secure key to the client browser, the client browser decrypts it, and a handshake takes place between the server and the client; in other words, it is a two-way handshake over the secure layer.

When is SSL required for Tomcat?

SSL will be more efficient if you are using Tomcat as a frontend server. In case you are using Apache or IIS, then it’s recommended to install SSL on Apache or the IIS server.
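For reference, a minimal sketch of an HTTPS connector in Tomcat 7's server.xml, assuming a JSSE keystore (the keystore path and password are placeholders):

<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
           maxThreads="150" scheme="https" secure="true"
           keystoreFile="conf/keystore.jks" keystorePass="changeit"
           clientAuth="false" sslProtocol="TLS" />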
Types of SSL certificates

Before we go ahead and install SSL, let’s discuss the two types of SSL certificates, which are explained as follows:


Logging in Tomcat 7

Logging services play a vital role in the life of the administrator and developer to manage the application from the phase of development to production issues. It’s the logging services that help you to find the actual problem in the web application. Also, it plays an essential role in performance tuning for many applications.

In this chapter, we will discuss:

Logging services in Tomcat 7
JULI
Log4j
Log level
Valve component
Analysis of logs

JULI

Previous versions of Tomcat (up to 5.x) used the Apache Commons Logging services for generating logs. A major disadvantage of this mechanism is that it can handle only a single JVM-wide configuration, which makes it difficult to configure separate logging for each class loader of independent applications. To resolve this issue, the Tomcat developers introduced a separate API in Tomcat 6 that can capture each class loader's activity in the Tomcat logs. It is based on the java.util.logging framework.

By default, Tomcat 7 uses its own Java logging API to implement logging services. This is also called JULI. This API can be found in TOMCAT_HOME/bin of the Tomcat 7 directory structures (tomcat-juli.jar). The following screenshot shows the directory structure of the bin directory where tomcat-juli.jar is placed. JULI also provides the feature for custom logging for each web application, and it also supports private per-application logging configurations. With the enhanced feature of separate class loader logging, it also helps in detecting memory issues while unloading the classes at runtime.
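A hedged sketch of a per-application JULI configuration, placed in the web application's WEB-INF/classes/logging.properties (the handler level and file prefix are illustrative):

handlers = org.apache.juli.FileHandler
org.apache.juli.FileHandler.level = FINE
org.apache.juli.FileHandler.directory = ${catalina.base}/logs
org.apache.juli.FileHandler.prefix = myapp.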

Loggers, appenders, and layouts

There are some important components of logging which we use at the time of implementing the logging mechanism for applications. Each term has its individual importance in tracking the events of the application. Let’s discuss each term individually to find out their usage:

Loggers: It can be defined as the logical name for the log file. This logical name is written in the application code. We can configure an independent logger for each application.
Appenders: The process of generating logs is handled by appenders. There are many types of appenders, such as FileAppender, ConsoleAppender, SocketAppender, and so on, which are available in log4j. The following are some examples of appenders for log4j:

log4j.appender.CATALINA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.CATALINA.File=${catalina.base}/logs/catalina.out
log4j.appender.CATALINA.Append=true
log4j.appender.CATALINA.Encoding=UTF-8

The previous four lines of appender define the DailyRollingFileAppender in log4j, where the filename is catalina.out. These logs will have UTF-8 encoding enabled.

If log4j.appender.CATALINA.Append=false, the log file is overwritten instead of appended to when Tomcat starts.

# Roll-over the log once per day
log4j.appender.CATALINA.DatePattern='.'dd-MM-yyyy'.log'
log4j.appender.CATALINA.layout = org.apache.log4j.PatternLayout
log4j.appender.CATALINA.layout.ConversionPattern = %d [%t] %-5p %c- %m%n

The previous three lines of code show the roll-over of the log once per day.

Types of logging in Tomcat 7

We can enable logging in Tomcat 7 in different ways, based on the requirement. There are a total of five types of logging that we can configure in Tomcat, such as application, server, console, and so on. The following figure shows the different types of logging for Tomcat 7. These methods are used in combination with each other based on the environment needs. For example, if you have issues where Tomcat services are not displayed, then the console logs are very helpful to identify the issue, as we can verify the real-time boot sequence. Let’s discuss each logging method briefly:

Application log

These logs are used to capture application events while the application processes transactions. They are very useful for identifying application-level issues. For example, if your application's performance is slow on a particular transaction, the details of that transaction can only be traced in the application log. The biggest advantage of application logs is that we can configure separate log levels and log files for each application, which makes it very easy for administrators to troubleshoot the application.

Types of log levels in Tomcat 7

There are seven levels defined for Tomcat logging services (JULI). They can be set based on the application’s requirement. The following figure shows the sequence of the log levels for JULI:

Every log level in JULI has its own functionality. The following table shows the functionality of each log level in JULI:

Log4j

Log4j is the project run by The Apache Software Foundation. This project helps in enabling the logs at the various levels of the server and application.

The major advantage of log4j is manageability. It provides the developer a freedom to change the log level at the configuration file level. Also, you can enable/disable logs at the configuration level, so there is no need to change the code. We can customize the log pattern based on the application, separately. Log4j has six log levels. The following figure shows the different types of log levels in log4j:

Values for Tomcat 7

Pattern values are identifiers that change the format of the string written to the log. Suppose you want to know the IP address of the remote host that accessed the website; you then add the corresponding values to the pattern in the log appender. For example, let's customize the access logs for Tomcat 7. By default, the Tomcat access log pattern is defined in server.xml.

We want to change the log pattern to show the time taken to process each request, so we add %T to the pattern. A sketch of the changed configuration is shown below.
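A hedged sketch of the AccessLogValve entry in server.xml with %T appended to the common pattern (directory and prefix are the Tomcat 7 defaults; %T logs the time taken to process the request, in seconds):

<Valve className="org.apache.catalina.valves.AccessLogValve"
       directory="logs" prefix="localhost_access_log." suffix=".txt"
       pattern="%h %l %u %t &quot;%r&quot; %s %b %T" />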

Troubleshooting in Tomcat

Every day, IT administrators face new problems with servers in the production environment. Administrators have to troubleshoot these issues to make sure that the applications work perfectly.

Troubleshooting is an art of solving critical issues in the environment. It comes with experience and the number of issues you have come across in your career. But there is a set of rules for fixing the issues. We will discuss the real-time issues, which may occur in the production environment. We will also discuss tips and tricks for resolving issues.

In this chapter, we will discuss:

Common issues
Third-party tools for thread dump analysis
Tomcat specific issues related to the OS, JVM, and database
How to troubleshoot a problem

Common problem areas for web administrators

Web administrators often find issues with applications that are caused not by a failure of the Tomcat server itself, but by other components that make the application malfunction. The following figure shows the different components of a typical middleware environment:

Let's briefly discuss the issues encountered by web administrators in real-time production support:

Application: These issues occur when an application doesn’t work correctly due to reasons such as class loader conflicts, application deployment conflicts, configuration parameters missing, and so on.
Database: Database issues are very critical for the web administrator, and it is very difficult to find issues related to the DB. Some of them are: JNDI name not found, broken pipe errors, and so on.

How to obtain a thread dump in Tomcat 7

The thread dump is a way through which we can determine the application-level thread status for any Java process. There are many ways to obtain a thread dump in Tomcat; here we will discuss two different ways which are widely used in the IT environment.
Thread dump using Kill command

This command generates a thread dump and redirects it to the catalina.out log. The limitation of this command is that it only works in non-Windows environments such as Linux, Unix, and so on.

kill -3 <java process id>

For example:

kill -3 10638
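Another common way, not necessarily the one the original text had in mind, is the jstack utility that ships with the Sun/Oracle JDK; it writes the dump wherever you redirect it (the PID and output path are examples):

jstack -l 10638 > /tmp/threaddump-10638.txt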

Web server benchmarking

Now we know how to troubleshoot problems and find potential solutions in the systems. There is one more point left to discuss: web server benchmarking. Without it, troubleshooting in Tomcat 7 cannot be considered complete. It is the process through which we gauge the performance of a web server, also known as load testing. In this process, we run the server under a heavy simulated load and estimate its real-world performance. This is very useful for capacity planning for the web server. There are many tools available for load testing, such as ApacheBench (ab), JMeter, LoadRunner, OpenSTA, and so on. Let's discuss the commonly used open source tools, ApacheBench and JMeter. If we benchmark the server before the go-live stage, we will face fewer issues in production support; it also helps in improving performance and designing a scalable environment architecture.

Monitoring and Management of Tomcat 7

Monitoring plays a vital role in an IT administrator’s life. It makes the life of a web/infrastructure engineer predictable. When I started my career in web infrastructure support, I always wondered, how does my boss know that a process is 90 percent utilized for a particular system or how does he know that a particular process will die after about 90 minutes from now, without logging into the application? Later, I found out that they have set up a monitoring system using various third-party tools available in the market for servers and application monitoring.

In this chapter, we will discuss:

How to monitor Tomcat 7
Management of applications using the Tomcat Manager
A third-party utility used for monitoring Tomcat 7

Different ways of monitoring

In today's world of growing infrastructure, it becomes very difficult for administrators to manage servers. In order to identify issues beforehand and to minimize downtime, monitors are configured on the systems. We can configure multi-level monitoring based on the infrastructure requirements, for example at the OS, web, application, and database server levels, as well as at the individual application level. There are different ways of configuring multi-level monitoring. The following figure shows different ways to configure monitoring for any infrastructure:


Monitoring can be mainly done in three ways on a system, which are as follows:

Third-party tools
Monitoring setups are configured using third-party tools present in the market, such as Wily, SiteScope, Nagios, and so on.

Monitoring setup for a web application and database server

In the previous section, we discussed the types of monitors, but we still don't know which monitors are configured on these systems and why. Let's prepare a table of the monitors for the different infrastructure systems. The following table shows the basic monitors that are normally configured for web, application, and database servers:

Tomcat Manager in Tomcat 7

The Tomcat Manager is the default tool for managing operations of Apache Tomcat 7. It gives IT administrators the freedom to manage applications and monitor the systems remotely. Following are the advantages of the Tomcat Manager:

Allows remote deployment, rollback, start, and stop of applications for the administrator.
Provides detailed monitoring status for the application and the server.
Administrators need not stay in the office 24x7; in case of any issue, they can log in to the Tomcat Manager to resolve it. In short, remote administration of Tomcat becomes very easy.

It is not recommended to expose the Tomcat Manager to the Internet. If you have to do so, enforce strong security policies on Tomcat 7 or configure a Virtual Private Network (VPN) for the administrators.
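One common way to enforce such a restriction, sketched here with an assumed local address range, is a RemoteAddrValve in the Manager application's webapps/manager/META-INF/context.xml:

<Context antiResourceLocking="false" privileged="true">
  <Valve className="org.apache.catalina.valves.RemoteAddrValve"
         allow="127\.0\.0\.1|192\.168\.0\.\d+" />
</Context>

Requests from any other address then receive a 403 response.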

Monitoring in Tomcat 7

Monitoring in Tomcat 7 can be done using the Tomcat Manager. By default, the Tomcat Manager provides the status of the server with a detailed description of requests and their status. This information is very useful to administrators at the time of troubleshooting, and the administrator does not even need to log in to the machine to collect it. Checking the server status manually takes at least 30 minutes to gather the complete information about an application, but with the Tomcat Manager you get it online, which is a great help for IT administrators.

Let’s discuss the various components used for monitoring that are available in the Tomcat Manager.

JConsole configuration on Tomcat 7

JConsole is one of the best monitoring utilities and ships with JDK 1.5 or later. Its full name is the Java Monitoring and Management Console. It is a graphical tool that gives complete details of application and server performance. It gives us the following information about an application hosted in Tomcat 7:

Detect low memory
Enable or disable the GC and class loading verbose tracing
Detect deadlocks
Control the log level of any loggers in an application
Access the OS resources—Sun’s platform extension
Manage an application’s Managed Beans (MBeans)

Remote JMX enabling

In order to use JConsole for Tomcat 7 monitoring, we have to enable the Java Management Extensions (JMX) on Tomcat 7. By doing this, we can monitor the Tomcat 7 server remotely from a desktop machine without logging into the server itself, which gives the administrator great flexibility to work from any location and troubleshoot problems. To enable it in Tomcat 7, we have to add the CATALINA_OPTS parameter in catalina.sh. The following values are added to enable the details:
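A typical set of values added to CATALINA_OPTS in catalina.sh looks like the following; the port number is an example, and the disabled SSL and authentication settings are only suitable for a trusted network:

CATALINA_OPTS="-Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9090 \
  -Dcom.sun.management.jmxremote.ssl=false \
  -Dcom.sun.management.jmxremote.authenticate=false"

After restarting Tomcat, JConsole can be pointed at the server with jconsole <server-host>:9090.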

Clustering in Tomcat 7

I would like to start this topic with a story. There were two teams, A and B, in an IT organization, managing different systems. Both teams consisted of highly qualified middleware experts. One day, the CEO of the organization called a meeting of both teams and assigned one middleware environment to team A and another to team B; each team had to follow its own approach to fixing environmental issues. After 3 months, a process review was performed for each environment, and the results surprised higher management: team A had maintained 50 percent uptime for its application, while team B had maintained 99 percent uptime for the application hosted in its environment.

What is a cluster?

A cluster is a group of servers or computers connected together that perform similar functions in the system. These systems are normally connected to each other through high-speed Ethernet. Clusters are used where quick processing or high availability is required; some examples where clusters are heavily used are the financial sector, banking, security areas, and so on. The following figure shows J2EE containers clustered in the environment:

Cluster topology varies from environment to environment, based on the application requirements.
Benefits of clustering

There are many advantages of clustering in a middleware environment. It also depends on which cluster techniques we are using. We will discuss the various advantages of clustering:

Clustering architecture

In this topic, we will discuss the various architectures of clustering used by IT industries. These architectures may vary on each implementation, depending on the application and business requirements. There are basically two types of clustering architectures implemented in a real-time IT infrastructure:

Vertical clustering
Horizontal clustering

By default, Apache Tomcat 7 supports both horizontal and vertical clustering. In the next section, we will discuss the implementation of both types of clustering in Apache Tomcat 7. Before that, let’s discuss clustering architectures, where they can be implemented, and their advantages.
Vertical clustering

Vertical clustering consists of a single piece of hardware running multiple instances that share the system's resources, such as CPU and RAM. This kind of setup is mainly used in development and quality systems, where developers test the functionality of the application, but it can also be implemented in production in cases where hardware resources are scarce. The following figure shows a pictorial representation of vertical clustering:

Vertical clustering in Apache Tomcat 7

In the previous topics, we have discussed the different types of cluster architecture, supported by Apache Tomcat 7. It’s time to take a real-time challenge to implement clustering. Let’s start with vertical clustering.

For vertical clustering, we have to configure at least two instances of Apache Tomcat, and the complete process consists of three stages (a sample port layout is sketched after this list):

Installation of the Tomcat instance.
Configuration of the cluster.
Apache HTTP web server configuration for the vertical cluster.
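As a sketch, the second instance's conf/server.xml only needs its listener ports shifted so that they do not clash with the first instance; the port numbers and jvmRoute below are illustrative:

<Server port="8006" shutdown="SHUTDOWN">
  <Service name="Catalina">
    <Connector port="8081" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8444" />
    <Connector port="8010" protocol="AJP/1.3" redirectPort="8444" />
    <Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcatnode2">
      <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true" />
    </Engine>
  </Service>
</Server>

The first instance keeps the stock 8005/8080/8009 ports.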

Horizontal clustering in Apache Tomcat 7

For horizontal clustering, we have to configure at least two instances of Apache Tomcat on two different physical or virtual systems. These machines should ideally be on the same physical network, which also provides high-speed bandwidth between them.

If you want to configure clustering on different networks, then you have to open the firewall between the two networks for the AJP port and the clustering port.

There are prerequisites for configuring horizontal clustering. The following are the details:

Time sync between the two servers
Proper network connectivity between the two servers
Firewall ports between the two servers (if you are connecting from a different network)

In order to configure horizontal clustering, you have to perform the following steps:
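As a minimal sketch of the central configuration step, assuming the default replication settings: each node's conf/server.xml gets a unique jvmRoute on its Engine plus the SimpleTcpCluster element (the node name below is illustrative), and the application itself must be marked <distributable/> in its web.xml:

<Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcatnode1">
    <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" />
    <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true" />
</Engine>

On the second machine, the same block is used with jvmRoute="tomcatnode2".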

Testing of the clustered instance

To perform cluster testing, we are going to take you through a sequence of events. In the following event, we only plan to use two Tomcat instances—tomcatnode1 and tomcatnode2. We will cover the following sequence of events:

Start tomcatnode1.
Start tomcatnode2 (wait for node 1 to start completely).
Node 1 crashes.
Node 2 takes over the user sessions of node 1.
Start node 1 (wait for node 1 to start completely).
Node 2 and node 1 are in running state.

Now that we have a good scenario, let's walk through how the entire process works:

Start instance 1: tomcatnode1 starts up using the standard startup sequence. When the host object is created, a cluster object is associated with it. Tomcat asks the cluster class (in this case SimpleTcpCluster) to create a manager for the cluster and the cluster class will start up a membership service.

The membership service is the mechanism within the cluster through which member nodes join the cluster domain; in simple terms, it is the service through which members are able to join the cluster.

Monitoring of Tomcat clustering

Once the cluster is up and working, the next stage is to set up monitoring of the cluster. This can be done in the following ways:

Various monitoring tools
Scripts
Manual

Following are the steps to manually monitor the clusters:

Check the Tomcat process using the following command:

[root@localhost bin]# ps -ef | grep java

Check the logs to verify the connectivity of the cluster.
Verify the URL for both cluster members.

Tomcat Upgrade

Technological changes and innovations happen at a very rapid pace. In order to accommodate the current technology requirements and serve the user with the latest technology, an upgrade of the systems is needed. The new version of the systems comes with the latest features and bug fixes, making them more stable and reliable.

In this chapter, we will discuss:

The life cycle of the upgrade process
Best practices followed by the IT industry
How to upgrade Tomcat 6 to Tomcat 7

Every organization follows its own process to upgrade servers, based on the criticality of the system. Normally, evaluation of a product is done by the technical architect. Based on the application's criticality, the architect defines the architecture that needs to be followed to upgrade the application. Upgrades in production can be done only after a successful upgrade on the development server(s).

Different types of environment

Development environment

A development environment can be defined as the combination of software and hardware that a team requires to build and deploy the code; in simple words, it is the complete package needed for building and deploying.

The following points describe why we need a development environment and its advantages:

Consolidation: For example, by looking at the infrastructure needs of the development environment as a whole, you might find that you only need a single web/application server to deploy the application.

Life cycle of the upgrade

In this topic, we will discuss the various steps performed during the upgrade. The life cycle consists of the end-to-end processes involved in the upgrade. Normally, upgrades are initiated in the development environment, followed by the QA/stage/production environments. The following figure shows the basic sequence of steps followed in an upgrade for any system in the IT industry:

Analysis of Enhanced Features and Business Requirements: This step plays a very crucial role in the upgrade process. In this process, the standing committee (technical architect, business owner, and functional owner) decides which features are essential for the new version and how they are useful in supporting the business requirement.

Tomcat upgrade from 6 to 7

Until now, we have discussed various theoretical processes of the upgrade. Now it’s time to have some fun, which every administrator wants in his/her career.

In this topic, we will discuss the most awaited topic of the book, the Tomcat upgrade. It is always a web administrator's wish to perform an upgrade from the previous major version to the new one. It also shifts the administrator's perspective from day-to-day maintenance issues to architecture-level integration. If you are involved in the upgrade activity for a product, you are among the first people in the organization working on that product, which gives you better visibility. But before performing the upgrade, let's discuss what new features and updates Tomcat 7 offers as compared to Tomcat 6. Following are the features:

ITIL process implementation

Until now, we have discussed the technical processes of Tomcat and its configuration. Now it's time to understand the Information Technology Infrastructure Library (ITIL) processes followed during the upgrade and their use in different phases of the upgrade, based on the features and implementation methods.
Availability management

Availability management can be defined as the process that allows the organization to make sure its services are available to support the business at minimal cost. It consists of the following features:

Reliability: It’s a process through which IT components are measured, based on the Statement of Work (SOW).
Maintainability: It’s a process through which we manage the entire system without any unplanned downtime.

Advanced Configuration for Apache Tomcat 7

In the previous chapters, we discussed various topics for Tomcat 7 such as clustering, load balancing, and so on. In practice, however, some additional configuration, apart from Tomcat's internal configuration, needs to be performed on the system in order to manage it. In this chapter, we will discuss advanced topics for Tomcat 7 that are used in real-world industries to create the web infrastructure and support multiple web applications.

In this chapter, we will discuss the following topics:

Virtual hosting
Running multiple applications on a single Tomcat server
Multiple Tomcat environments such as Development, QA, Stage, and Production
Tuning cache
Optimization of Tomcat

Virtual hosting

Virtual hosting is a method through which you can host multiple domain names on the same web server, or on a single IP. The concept is also called shared hosting, where one server is used to host multiple websites. For example, if you host abc.com and xyz.com and later want to add one more website on the same web server, that can be achieved with virtual hosting. Basically, there are two types of virtual hosting:

Name-based virtual hosting
IP-based virtual hosting


Name-based virtual hosting

Name-based virtual hosting is a method through which you can host multiple domains on a single IP, using the concept of shared services. In practice, web hosting companies follow this approach to host multiple sites at low cost. For example, if we have multiple sites such as www.abc.com, www.xyz.com, and www.xzy.com and we want to configure them on a single web server using a single IP, then name-based virtual hosting is used. Following are the advantages of name-based virtual hosting:

Virtual hosting in Tomcat 7

Tomcat 7 supports name-based virtual hosting. This approach is very useful for hosting multiple web applications on a single instance of Tomcat 7. It also gives the administrator more control to separate the applications from each other and to apply access control restrictions. You cannot understand the real concept of virtual hosting unless you implement it, so why wait: let's do a real-life implementation of virtual hosting in Tomcat 7.

For example, if you want to host the previously mentioned sites on the web server, then DNS has to be configured accordingly. Let us assume the web server name is webserver1.yxz.com and it is hosted on the IP 192.168.0.1. To implement this scenario, the following steps need to be performed:
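As an illustration of the name-resolution part, the site names can all be pointed at the web server's IP, either through DNS A records or, for a quick test, /etc/hosts entries like the following (addresses taken from the example above):

192.168.0.1   webserver1.yxz.com
192.168.0.1   www.abc.com
192.168.0.1   www.xyz.com
192.168.0.1   www.xzy.com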

Hostname aliases

There is one more important feature that comes with Tomcat 7, called host name aliases. It is a very useful feature that gives the administrator the freedom to serve multiple site names from the same host.

For example, if you have a website that needs to be accessed through a subdomain by different users, then host aliases are created; they are also called subdomain aliases for the main domain. In previous versions of Tomcat, implementing aliases this way was not possible; to implement an alias for a website, we had to place Apache, IIS, or another web server in front of Tomcat as a front-end server.

The following code shows how to set an alias for a particular site; the <Alias> element is nested inside the corresponding <Host> entry in server.xml (the host attributes shown are illustrative):

<Host name="www.rmohan.com" appBase="webapps" unpackWARs="true" autoDeploy="true">
  <Alias>tomcatalias.com</Alias>
</Host>

Multiple applications hosting on a single Tomcat 7 instance

Once we are done with virtual hosting, a few potential problems may arise, such as hosting multiple applications, securing them, and deploying them on a single instance of Tomcat 7. Configuration of multiple domains on one single instance of Tomcat 7 is a bit tricky: if we give all the applications one document root, then every application can be accessed by all developers. The solution is to implement a separate document root for each domain; this way, we can apply separate security to each application hosted in the Tomcat instance. Let us implement the solution by creating multiple document roots in Tomcat 7. To do so, we have to edit server.xml to enable multiple document roots in the server, as shown in the following code snippet:
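A minimal sketch of such a configuration, assuming two illustrative domains and separate appBase directories, placed inside the Engine element of server.xml:

<Host name="www.abc.com" appBase="abcapps" unpackWARs="true" autoDeploy="true" />
<Host name="www.xyz.com" appBase="xyzapps" unpackWARs="true" autoDeploy="true" />

Each appBase directory holds only that domain's applications, so the developers of one site cannot touch the document root of the other.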

Multiple Tomcat environments—Development/QA/Stage/Production

Information technology organizations follow a set of environments to manage their applications. These environments are based on their functionality and usage, and the support available for an environment depends on that functionality. Based on functionality, the production environment has the highest priority and development the lowest, as shown in the following figure:

The following table compares the different environments and their functionalities with respect to different tasks performed during creation and management of the web infrastructure:

Tuning cache

When we run multiple applications on Tomcat 7, it is always recommended to utilize resources efficiently, and to do so we need to optimize the tuning parameters. Every request the server receives consumes CPU and memory on the system. To reduce this cost, we let the server serve cached responses after the first request. One of the best examples of caching used by major web hosting organizations is caching static content.

The following code shows the configuration for adding Expires and Cache-Control: max-age= headers to images, CSS, and JavaScript. This code is added to web.xml, which is present in TOMCAT_HOME/conf.


<filter>
    <filter-name>ExpiresFilter</filter-name>
    <filter-class>org.apache.catalina.filters.ExpiresFilter</filter-class>
    <init-param>
        <param-name>ExpiresByType image</param-name>
        <param-value>access plus 15 minutes</param-value>
    </init-param>
    <init-param>
        <param-name>ExpiresByType text/css</param-name>
        <param-value>access plus 15 minutes</param-value>
    </init-param>
    <init-param>
        <param-name>ExpiresByType text/javascript</param-name>
        <param-value>access plus 15 minutes</param-value>
    </init-param>
</filter>

<filter-mapping>
    <filter-name>ExpiresFilter</filter-name>
    <url-pattern>/*</url-pattern>
    <dispatcher>REQUEST</dispatcher>
</filter-mapping>

INSTALL SSL Wildcard Certificate MULTIPLE SUBDOMAINS


Several subdomains can share the master domain's SSL wildcard certificate; for example, a.rmohan.com and b.rmohan.com share the wildcard certificate for *.rmohan.com.

1. Put the certificate in the proper place.
2. Configure Apache:
a. enable name-based virtual hosts

NameVirtualHost *:443

b. make sure Apache listens on port 443

Listen 443

c. make sure mod_ssl.so is loaded

LoadModule ssl_module modules/mod_ssl.so

SSLRandomSeed startup /dev/urandom 256
SSLRandomSeed connect builtin

SSLPassPhraseDialog builtin
SSLSessionCache shmcb:/var/cache/mod_ssl/ssl_scache(512000)
SSLSessionCacheTimeout 300
SSLMutex default


d. configure the virtual host for the wildcard domain

<VirtualHost *:443>
    DocumentRoot /data/webapps/rmohan/web
    ServerName www.rmohan.com
    ServerAlias rmohan.com
    DirectoryIndex index.html index.php

    SSLEngine on
    SSLProtocol all -SSLv2
    #SSLStrictSNIVHostCheck off
    SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP

    SSLCertificateFile "/etc/mohan/rmohan.com.crt"
    SSLCertificateKeyFile "/etc/mohan/server.key"
    SSLCertificateChainFile "/etc/mohan/gd_bundle.crt"

    <Directory "/data/webapps/rmohan/web">
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>


e. configure the virtual host for the subdomain

<VirtualHost *:443>
    DocumentRoot /data/webapps/rmohan/web
    ServerName a.rmohan.com
    DirectoryIndex index.html index.php

    SSLEngine on
    SSLProtocol all -SSLv2
    #SSLStrictSNIVHostCheck off
    SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP

    SSLCertificateFile "/etc/mohan/rmohan.com.crt"
    SSLCertificateKeyFile "/etc/mohan/server.key"
    SSLCertificateChainFile "/etc/mohan/gd_bundle.crt"

    <Directory "/data/webapps/rmohan/web">
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>

LVM


First LVM Setup

1) fdisk /dev/sdb

WARNING: DOS-compatible mode is deprecated. It’s strongly recommended to
switch off the mode (command ‘c’) and change display units to
sectors (command ‘u’).

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-13054, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-13054, default 13054):
Using default value 13054

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 8e
Changed system type of partition 1 to 8e (Linux LVM)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

fdisk /dev/sdc

fdisk /dev/sdd

fdisk /dev/sde

2) Now we prepare our new partitions for LVM:

[root@localhost ~]# pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1
Writing physical volume data to disk “/dev/sdb1”
Physical volume “/dev/sdb1” successfully created
Writing physical volume data to disk “/dev/sdc1”
Physical volume “/dev/sdc1” successfully created
Writing physical volume data to disk “/dev/sdd1”
Physical volume “/dev/sdd1” successfully created

Revert this last action

[root@localhost ~]# pvremove /dev/sdb1 /dev/sdc1 /dev/sdd1
Labels on physical volume “/dev/sdb1” successfully wiped
Labels on physical volume “/dev/sdc1” successfully wiped
Labels on physical volume “/dev/sdd1” successfully wiped

To create again

pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1

[root@localhost ~]# pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1
Writing physical volume data to disk “/dev/sdb1”
Physical volume “/dev/sdb1” successfully created
Writing physical volume data to disk “/dev/sdc1”
Physical volume “/dev/sdc1” successfully created
Writing physical volume data to disk “/dev/sdd1”
Physical volume “/dev/sdd1” successfully created

Let's display the current physical volumes:

[root@localhost ~]# pvdisplay

3) Create our volume group datashare:

vgcreate datashare /dev/sdb1 /dev/sdc1 /dev/sdd1

[root@localhost ~]# vgcreate datashare /dev/sdb1 /dev/sdc1 /dev/sdd1
Volume group “datashare” successfully created

Display the volume groups:

[root@localhost ~]# vgdisplay

Scan for the volume groups:

[root@localhost ~]# vgscan
Reading all physical volumes. This may take a while…
Found volume group “datashare” using metadata type lvm2
Found volume group “VolGroup” using metadata type lvm2

Let's rename our volume group datashare to fileshare:

vgrename datashare fileshare

[root@localhost ~]# vgrename datashare fileshare
Volume group “datashare” successfully renamed to “fileshare”

[root@localhost ~]# vgscan
Reading all physical volumes. This may take a while…
Found volume group “fileshare” using metadata type lvm2
Found volume group “VolGroup” using metadata type lvm2

Let's delete our volume group fileshare:

vgremove fileshare

[root@localhost ~]# vgremove fileshare
Volume group “fileshare” successfully removed

[root@localhost ~]# vgcreate datashare /dev/sdb1 /dev/sdc1 /dev/sdd1
Volume group “datashare” successfully created

4) Next we create our logical volumes storage (30GB), backup (20GB), and data (10GB) in the volume group datashare:

lvcreate --name storage --size 30G datashare

lvcreate --name backup --size 20G datashare

lvcreate --name data --size 10G datashare

[root@localhost ~]# lvcreate --name storage --size 30G datashare
Logical volume "storage" created
[root@localhost ~]# lvcreate --name backup --size 20G datashare
Logical volume "backup" created
[root@localhost ~]# lvcreate --name data --size 10G datashare
Logical volume "data" created

Check using the command lvdisplay:

[root@localhost ~]# lvdisplay
/dev/datashare/storage
/dev/datashare/backup
/dev/datashare/data

[root@localhost ~]# lvscan
ACTIVE ‘/dev/datashare/storage’ [30.00 GiB] inherit
ACTIVE ‘/dev/datashare/backup’ [20.00 GiB] inherit
ACTIVE ‘/dev/datashare/data’ [10.00 GiB] inherit
ACTIVE ‘/dev/VolGroup/lv_root’ [48.62 GiB] inherit
ACTIVE ‘/dev/VolGroup/lv_home’ [50.63 GiB] inherit
ACTIVE ‘/dev/VolGroup/lv_swap’ [1.97 GiB] inherit

5) Next let’s delete the logical volume data

lvremove /dev/datashare/data

[root@localhost ~]# lvremove /dev/datashare/data
Do you really want to remove active logical volume data? [y/n]: y
Logical volume “data” successfully removed

Now let's create the logical volume servershare:

lvcreate --name servershare --size 5G datashare

[root@localhost ~]# lvcreate --name servershare --size 5G datashare
Logical volume "servershare" created

Now let's enlarge servershare to 10GB:

lvextend -L10G /dev/datashare/servershare

[root@localhost ~]# lvextend -L10G /dev/datashare/servershare
Extending logical volume servershare to 10.00 GiB
Logical volume servershare successfully resized

Let's shrink it to 5GB again:

lvreduce -L5G /dev/datashare/servershare

[root@localhost ~]# lvreduce -L5G /dev/datashare/servershare
WARNING: Reducing active logical volume to 5.00 GiB
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce servershare? [y/n]: Y
Reducing logical volume servershare to 5.00 GiB
Logical volume servershare successfully resized

6) Let's format and mount the partitions:

[root@localhost ~]# mkfs.ext4 /dev/datashare/storage
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
1966080 inodes, 7864320 blocks
393216 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
240 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 26 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.

Do the same for the other volumes:

mkfs.ext4 /dev/datashare/backup
mkfs.ext4 /dev/datashare/servershare

7) Create the mount points: mkdir /storage /backup /servershare

Mount the logical volumes:

mount /dev/datashare/backup /backup
mount /dev/datashare/servershare /servershare
mount /dev/datashare/storage /storage

[root@localhost /]# df -TH
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
ext4 52G 2.6G 47G 6% /
tmpfs tmpfs 528M 0 528M 0% /dev/shm
/dev/sda1 ext4 508M 32M 450M 7% /boot
/dev/mapper/VolGroup-lv_home
ext4 54G 189M 51G 1% /home
/dev/mapper/datashare-backup
ext4 22G 181M 20G 1% /backup
/dev/mapper/datashare-servershare
ext4 5.3G 145M 4.9G 3% /servershare
/dev/mapper/datashare-storage
ext4 32G 181M 30G 1% /storage

Add entries to /etc/fstab so the volumes are mounted automatically after a reboot.
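Assuming the mount points created above, entries along these lines keep the volumes mounted across reboots:

/dev/datashare/storage      /storage      ext4  defaults  1 2
/dev/datashare/backup       /backup       ext4  defaults  1 2
/dev/datashare/servershare  /servershare  ext4  defaults  1 2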

8) Resize Logical Volumes And Their Filesystems

Now let's enlarge storage from 30GB to 50GB:

[root@localhost /]# lvextend -L50G /dev/datashare/storage
Extending logical volume storage to 50.00 GiB
Logical volume storage successfully resized

We have enlarged only the logical volume storage, but not the ext4 filesystem on it.

We can grow the filesystem as follows (run a filesystem check first):

[root@localhost /]# e2fsck -f /dev/datashare/storage
e2fsck 1.41.12 (17-May-2010)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/datashare/storage: 11/1966080 files (0.0% non-contiguous), 167409/7864320 blocks

Make a note of the total number of blocks (7864320); we will need it when we shrink a filesystem later on.
[root@localhost /]# resize2fs /dev/datashare/storage
resize2fs 1.41.12 (17-May-2010)
Resizing the filesystem on /dev/datashare/storage to 13107200 (4k) blocks.
The filesystem on /dev/datashare/storage is now 13107200 blocks long.

Let's mount it; we will now have 50GB:

[root@localhost /]# mount /dev/datashare/storage /storage/

[root@localhost /]# df -TH
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
ext4 52G 2.6G 47G 6% /
tmpfs tmpfs 528M 0 528M 0% /dev/shm
/dev/sda1 ext4 508M 32M 450M 7% /boot
/dev/mapper/VolGroup-lv_home
ext4 54G 189M 51G 1% /home
/dev/mapper/datashare-backup
ext4 22G 181M 20G 1% /backup
/dev/mapper/datashare-servershare
ext4 5.3G 145M 4.9G 3% /servershare
/dev/mapper/datashare-storage
ext4 53G 189M 50G 1% /storage

Let's unmount the backup volume and reduce it to 14GB:

umount /backup/

/dev/mapper/datashare-backup

e2fsck -f /dev/datashare/backup

[root@localhost /]# e2fsck -f /dev/datashare/backup
e2fsck 1.41.12 (17-May-2010)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/datashare/backup: 11/1310720 files (0.0% non-contiguous), 126289/5242880 blocks

[root@localhost /]# resize2fs /dev/datashare/backup 3242880
resize2fs 1.41.12 (17-May-2010)
Resizing the filesystem on /dev/datashare/backup to 3242880 (4k) blocks.
The filesystem on /dev/datashare/backup is now 3242880 blocks long.

[root@localhost /]# lvreduce -L14G /dev/datashare/backup
WARNING: Reducing active logical volume to 14.00 GiB
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce backup? [y/n]: y
Reducing logical volume backup to 14.00 GiB
Logical volume backup successfully resized
[root@localhost /]# mount /dev/datashare/backup /backup/
[root@localhost /]# df -TH
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
ext4 52G 2.6G 47G 6% /
tmpfs tmpfs 528M 0 528M 0% /dev/shm
/dev/sda1 ext4 508M 32M 450M 7% /boot
/dev/mapper/VolGroup-lv_home
ext4 54G 189M 51G 1% /home
/dev/mapper/datashare-servershare
ext4 5.3G 145M 4.9G 3% /servershare
/dev/mapper/datashare-storage
ext4 53G 189M 50G 1% /storage
/dev/mapper/datashare-backup
ext4 14G 177M 13G 2% /backup

Create a Logical Partition

Creation and deletion of logical and extended partitions

1) List the Partition Table

[root@localhost ~]# fdisk -l

2) Create Extended Partition on the New disk /dev/sdb

fdisk /dev/sdb

[root@localhost ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xca3de5fc.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won’t be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It’s strongly recommended to
switch off the mode (command ‘c’) and change display units to
sectors (command ‘u’).

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-13054, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-13054, default 13054):
Using default value 13054

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

4) Delete the extended partition

[root@localhost ~]# fdisk /dev/sdb

WARNING: DOS-compatible mode is deprecated. It’s strongly recommended to
switch off the mode (command ‘c’) and change display units to
sectors (command ‘u’).

Command (m for help): d
Selected partition 1

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

5) Add Logical Partition on the Disk

[root@localhost ~]# fdisk /dev/sdb

WARNING: DOS-compatible mode is deprecated. It’s strongly recommended to
switch off the mode (command ‘c’) and change display units to
sectors (command ‘u’).

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-13054, default 1): 1
Last cylinder, +cylinders or +size{K,M,G} (1-13054, default 13054):
Using default value 13054

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

6) Format New Logical partition as ext4

[root@localhost ~]# mkfs.ext4 /dev/sdb1

mkdir /backup

[root@localhost ~]# mount /dev/sdb1 /backup/
[root@localhost ~]# df -TH
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
ext4 52G 2.6G 47G 6% /
tmpfs tmpfs 528M 0 528M 0% /dev/shm
/dev/sda1 ext4 508M 32M 450M 7% /boot
/dev/mapper/VolGroup-lv_home
ext4 54G 189M 51G 1% /home
/dev/sdb1 ext4 106G 197M 101G 1% /backup

Add an entry to /etc/fstab so the filesystem is mounted automatically after a reboot:

/dev/sdb1 /backup ext4 defaults 1 2

7) Remove Logical partition

umount /backup/

Remove the entry from /etc/fstab.

[root@localhost ~]# fdisk /dev/sdb

WARNING: DOS-compatible mode is deprecated. It’s strongly recommended to
switch off the mode (command ‘c’) and change display units to
sectors (command ‘u’).

Command (m for help): d
Selected partition 1

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
