sync # Flush filesystem buffers to disk

echo 3 > /proc/sys/vm/drop_caches # Drop the page cache, dentries and inodes to free memory

Tomcat 8 final configuration
1. Add the following under ${tomcat}/bin/ (typically in setenv.sh):
JAVA_OPTS="-Djava.awt.headless=true -Dfile.encoding=UTF-8 -server -Xms1G -Xmx1G -Xss256k -XX:NewSize=1G -XX:MaxNewSize=1G
-XX:PermSize=128m -XX:MaxPermSize=128m -XX:+DisableExplicitGC"

JAVA_OPTS="$JAVA_OPTS -server -Xms3G -Xmx3G -Xss256k -XX:PermSize=128m -XX:MaxPermSize=128m -XX:+UseParallelOldGC -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/usr/aaa/dump -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/usr/tomcat/dump/heap_trace.txt -XX:NewSize=1G -XX:MaxNewSize=1G"

Uncomment the Executor element:

<Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
maxThreads="300" minSpareThreads="50"/>

Add the following attributes to the Connector if they are not already present:

<Connector port="80" protocol="org.apache.coyote.http11.Http11NioProtocol"

Detailed explanation of each parameter:

-Xms: Set the JVM initial memory size (default is 1/64 of physical memory)

-Xmx: Set the maximum memory that the JVM can use (default is 1/4 of physical memory, recommended: 80% of physical memory)

-Xmn: Set the size of the young generation (note: this sets the young generation, not a minimum heap; it is generally left unset)

By default, when free heap memory falls below 40%, the JVM grows the heap up to the -Xmx limit; when free heap memory exceeds 70%, the JVM shrinks the heap down to the -Xms limit. Servers are therefore usually configured with -Xms equal to -Xmx, to avoid resizing the heap after every GC.
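
As a sketch of the guidance above, the heap can be pinned to a fixed size computed from physical memory (the 4096 MB figure and the 80% fraction are assumptions taken from the recommendation earlier in this section, not from the original configuration):

```shell
# Sketch: pin -Xms to -Xmx so the heap is never resized after a GC.
# PHYS_MB is an assumed physical-memory figure, not a measured value.
PHYS_MB=4096
HEAP_MB=$(( PHYS_MB * 80 / 100 ))   # ~80% of physical memory, per the guideline above
CATALINA_OPTS="-Xms${HEAP_MB}m -Xmx${HEAP_MB}m"
echo "$CATALINA_OPTS"
```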

In larger applications the default memory is not enough and may cause the system to fail. A common symptom is the Tomcat memory overflow error "java.lang.OutOfMemoryError: Java heap space", which causes clients to see 500 errors.

-XX:PermSize: Perm memory size when starting the JVM

-XX:MaxPermSize: the maximum available Perm memory size (default is 32M)

-XX:MaxNewSize: the maximum size of the young generation (default is 16M)

PermGen space (short for Permanent Generation space) is the permanent storage area of memory, used mainly by the JVM to store Class and Meta information. Classes are placed into PermGen space when they are loaded. Unlike the heap, which stores instances, PermGen space is not cleaned up by GC (Garbage Collection) while the main program is running, so if your application loads a very large number of classes, a "java.lang.OutOfMemoryError: PermGen space" error is likely.

For web projects, as the JVM loads classes, the number of objects in the permanent generation grows sharply, causing the JVM to continually resize the permanent generation. To avoid this resizing, set its size explicitly with the parameters above. If your web app uses a large number of third-party JARs whose combined size exceeds the JVM default, this error message will be produced.

Other parameters:

-XX:NewSize: The default is 2M. Setting this to a larger value enlarges the young generation, reducing the number of Full GCs.

-XX:NewRatio: Changes the ratio of the young and old generations; a value of 8 means the young generation is 1/8 the size of the old generation (default is 8).

-XX:SurvivorRatio: Changes the size ratio between the Eden space and a survivor space; a value of N means Eden is N times the size of one survivor space (the young generation consists of Eden plus two survivor spaces).
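
A quick worked example of the ratio arithmetic (the 10 MB young-generation size is an assumption for illustration):

```shell
# With -XX:SurvivorRatio=8, the young generation is split into
# Eden : survivor0 : survivor1 = 8 : 1 : 1.
YOUNG_KB=10240; RATIO=8
SURVIVOR_KB=$(( YOUNG_KB / (RATIO + 2) ))   # one survivor space
EDEN_KB=$(( SURVIVOR_KB * RATIO ))          # Eden is RATIO times a survivor space
echo "eden=${EDEN_KB}K survivor=${SURVIVOR_KB}K"
```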

-XX:+UseParNewGC enables parallel collection of the young generation [multi-CPU]

-XX:ParallelGCThreads can be used to set the degree of parallelism [multi-CPU]

-XX:+UseParallelGC selects the parallel scavenge collector [multi-CPU]
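
Combining the flags above, a hypothetical JAVA_OPTS line for a multi-CPU host might look like this (the thread count of 4 and the survivor ratio are assumptions, not recommendations from the original text):

```shell
# Hypothetical multi-CPU GC settings; flag names follow HotSpot conventions.
JAVA_OPTS="$JAVA_OPTS -XX:+UseParNewGC -XX:ParallelGCThreads=4 -XX:SurvivorRatio=8"
echo "$JAVA_OPTS"
```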

maxThreads: The maximum number of request processing threads to be created by this Connector, which therefore determines the maximum number of simultaneous requests that can be handled. If not specified, this attribute is set to 200. If an executor is associated with this connector, this attribute is ignored, as the connector will execute tasks using the executor rather than an internal thread pool.

minSpareThreads: The minimum number of threads always kept running. If not specified, the default of 10 is used.

connectionTimeout: The number of milliseconds this Connector will wait, after accepting a connection, for the request URI line to be presented. Use a value of -1 to indicate no (i.e. infinite) timeout. The default value is 60000 (i.e. 60 seconds), but note that the standard server.xml that ships with Tomcat sets this to 20000 (i.e. 20 seconds). Unless disableUploadTimeout is set to false, this timeout will also be used when reading the request body (if any).

tcpNoDelay: If set to true, the TCP_NO_DELAY option will be set on the server socket, which improves performance under most circumstances. This is set to true by default.

socketBuffer: The size (in bytes) of the buffer to be provided for socket output buffering. -1 can be specified to disable the use of a buffer. By default, a buffer of 9000 bytes will be used.

server: Overrides the Server header for the HTTP response. If set, the value for this attribute overrides the Tomcat default and any Server header set by a web application. If not set, any value specified by the application is used. If the application does not specify a value then Apache-Coyote/1.1 is used. Unless you are paranoid, you won't need this feature.

maxHttpHeaderSize: The maximum size of the request and response HTTP header, specified in bytes. If not specified, this attribute is set to 8192 (8 KB).

maxKeepAliveRequests: The maximum number of HTTP requests which can be pipelined until the connection is closed by the server. Setting this attribute to 1 will disable HTTP/1.0 keep-alive, as well as HTTP/1.1 keep-alive and pipelining. Setting this to -1 will allow an unlimited number of pipelined or keep-alive HTTP requests. If not specified, this attribute is set to 100.

maxConnections: For BIO the default is the value of maxThreads unless an Executor is used, in which case the default will be the value of maxThreads from the executor. For NIO the default is 10000. For APR/native, the default is 8192.

keepAliveTimeout: The number of milliseconds this Connector will wait for another HTTP request before closing the connection. The default value is to use the value that has been set for the connectionTimeout attribute. Use a value of -1 to indicate no (i.e. infinite) timeout.
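
Tying these attributes together, a hypothetical NIO Connector might look like the following sketch (the values shown are illustrative, not tuned recommendations):

```xml
<!-- Illustrative only: attribute values are assumptions, not tuned settings -->
<Connector port="80" protocol="org.apache.coyote.http11.Http11NioProtocol"
           executor="tomcatThreadPool"
           connectionTimeout="20000"
           maxHttpHeaderSize="8192"
           maxKeepAliveRequests="100"
           keepAliveTimeout="20000"
           tcpNoDelay="true" />
```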

Database Pool Configuration

<Resource name="jdbc/productdb" auth="Container" type="javax.sql.DataSource"
maxTotal="10" maxIdle="30" maxWaitMillis="10000" logAbandoned="true"
username="root" password="admin" driverClassName="com.mysql.jdbc.Driver"

JVM Settings
We have set both the minimum and maximum heap size to 1 GB, as below:

export CATALINA_OPTS="-Xms1024m -Xmx1024m"

-Xms – Specifies the initial heap memory
-Xmx – Specifies the maximum heap memory

AJP Connector configuration
The AJP connector configuration below allocates two threads to accept new connections.
This should be set to the number of processors on the machine; two should suffice here.
We have also allocated 400 threads to process requests (the default is 200).
The "acceptCount" is set to 200, which denotes the maximum queue length for incoming connections (the default is 100).
Lastly, we have set the minimum spare threads to 20 so that there are always 20 threads running in the pool to service requests:

<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" acceptorThreadCount="2" maxThreads="400" acceptCount="200" minSpareThreads="20"/>

Database Pool Configuration
We have modified the maximum number of pooled connections to 200 so that there are ample connections in the pool to service requests.

<Resource name="jdbc/productdb" auth="Container" type="javax.sql.DataSource"
maxTotal="200" maxIdle="30" maxWaitMillis="10000" logAbandoned="true"
username="xxxx" password="xxxx" driverClassName="com.mysql.jdbc.Driver"

JVM Settings
Since we have increased the maximum number of pooled connections and the AJP connector thread thresholds above,
we should increase the heap size accordingly. We have set both the minimum and maximum heap size to 2 GB, as below:

export CATALINA_OPTS="-Xms2048m -Xmx2048m"

JVM Heap Monitoring and Tuning

Specifying appropriate JVM heap parameters to service your deployed applications on Tomcat is paramount to application performance.
There are a number of ways to monitor JVM heap usage, including JDK HotSpot tools such as jstat and JConsole.
However, to gather detailed data on when and how garbage collection is performed, it is useful to turn on GC logging on the Tomcat instance.
We can turn on GC logging by adding the following to the Catalina startup script:

JAVA_OPTS="$JAVA_OPTS -verbose:gc -Xloggc:gc.log -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps"
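
Once gc.log is being written, pause times can be pulled out with a simple filter. The log line below is synthetic, for illustration only:

```shell
# Extract GC pause durations from a PrintGCDetails-style log.
# The sample line is synthetic; a real gc.log is produced by the flags above.
echo '2024-01-01T00:00:00.000+0000: 1.234: [GC (Allocation Failure) 512K->128K(1024K), 0.0042 secs]' > gc.log
grep -o '[0-9.]* secs' gc.log   # prints each pause duration
```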

We can set the minimum and maximum heap size,
the size of the young generation, and the maximum amount of memory allocated to the permanent generation (used to store application class metadata) by setting the CATALINA_OPTS parameter with this command:

export CATALINA_OPTS="-Xms1024m -Xmx2048m -XX:MaxNewSize=512m -XX:MaxPermSize=256m"

This configuration is optimized for REST/HTTP API calls. It does not use a reverse proxy such as Apache or Nginx; a simple L4 switch sits in front of the Tomcat group.

In addition, we do not use Tomcat clustering or session replication, so the clustering configuration is omitted.

Listener Setting
<Listener className="" checkedOsUsers="root" />

The checkedOsUsers setting means the Unix user "root" cannot start Tomcat. If Tomcat is started as root, the log files are created with root ownership, and the tomcat user then cannot delete them.

<Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />

This listener helps prevent memory leaks caused by JRE code pinning the web application's class loader.

Connector Setting

This makes Tomcat use BIO. Tomcat has several I/O implementations (BIO, NIO, APR). APR is the fastest: it uses the Apache web server's I/O module. But because it runs native C code (via JNI calls), a fault can kill the Tomcat instance (with a core dump). APR is about 10% faster than BIO, but BIO is more stable, so we use BIO (the default).


This specifies the server request queue length. If messages are queuing in the request queue, the server cannot keep up with incoming messages (it is overloaded), and requests wait for an idle thread. This setting reduces the total size of the request queue to 10. If the queue overflows, the client gets an error. This protects the server from heavy overload and lets the system manager know the server has been overloaded.


In Java servlet code, the application can look up the origin of a request (IP address or host name).

For example, when a user sends a request to the server, Tomcat tries to resolve the incoming request's IP address.
The "enableLookups" option makes Tomcat return a DNS name instead of an IP address; during this processing, Tomcat performs a DNS lookup,
which degrades performance. Disabling this option removes the DNS lookup step and increases performance.


We are using a REST protocol, not normal web content such as HTML and images.
This option compresses HTTP messages: it consumes computing power but reduces the network payload.
In our environment compression is not required; it is better to save the computing power, and in some particular telco networks compression is not supported.


This is the HTTP connection timeout (client to server), in milliseconds (10,000 = 10 sec).

If the server cannot complete the connection with the client within 10 seconds, it throws an HTTP timeout error.
In normal situations our API response time is under 5 seconds, so reaching 10 seconds means the server is overloaded.
The reason the timeout is increased to 10 seconds is that, depending on network conditions, connection setup can be delayed.


The maximum number of connections Tomcat can handle: Tomcat can hold at most 8192 socket connections at a time. This value is also restricted by the Unix open-file limit, "ulimit -n" (you can check it in a Unix console).
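
Since the socket cap is bounded by the per-process open-file-descriptor limit, it is worth checking that limit before raising maxConnections:

```shell
# Show the current open-file-descriptor limit; maxConnections cannot
# usefully exceed this value.
ulimit -n
```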


As mentioned above, this configuration is optimized for REST API requests, not a common web system. The client sends only REST API calls: it sends a request, gets a response, and does not send another request within a short time, so connections from the client cannot usefully be reused. This setting therefore turns off HTTP keep-alive (after responding to a request, Tomcat disconnects the connection immediately).


This defines the total number of threads in Tomcat, i.e., the maximum number of concurrently active users. Usually 50~500 performs well, and 100~200 is often best (it varies by use-case scenario).

Please test with values of 100 and 200 and find the best value for your workload. This parameter also interacts with the DB connection pool setting: even with many threads, if the total number of DB connections is not enough, threads will block waiting to acquire a connection.


This enables TCP_NO_DELAY at the TCP/IP layer, which sends small packets without delay. To reduce congestion from many small packets, TCP normally gathers small packets in its buffer until the buffer is filled and only then sends them; the TCP_NO_DELAY option sends small packets immediately even when the TCP buffer is not full.
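
The settings discussed in this section might be combined into a single Connector element like the sketch below (the port and the exact attribute set are assumptions; the values follow the discussion above):

```xml
<!-- Illustrative sketch; values taken from the discussion above -->
<Connector port="8080" protocol="HTTP/1.1"
           acceptCount="10"
           enableLookups="false"
           compression="off"
           connectionTimeout="10000"
           maxConnections="8192"
           maxKeepAliveRequests="1"
           maxThreads="100"
           tcpNoDelay="true" />
```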

JVM Tuning
Java Virtual Machine tuning is also a very important factor in running Tomcat well.

The focus of JVM tuning is reducing Full GC time.


This option makes the JVM optimize for a server application: among other things, it tunes the HotSpot compiler internally. This option is very important and effectively mandatory for server-side applications.

-Xmx1024m -Xms1024m -XX:MaxNewSize=384m -XX:MaxPermSize=128m

These are the memory tuning options. Our infrastructure uses Amazon c1.medium instances, so the available memory is about 1.7 GB in total. The heap is fixed at 1 GB: max 1 GB, min 1 GB. NewSize is 384 MB (about 1/3 of the total heap size); a young generation of roughly 1/3 of the heap usually gives the best performance. Perm size defines the area of memory used to load classes; 64 MB would be enough, but we use 128 MB at first and will tune it later based on GC log analysis.

Total memory consumption is 1 GB heap + 128 MB perm = 1.128 GB, and the JVM internally uses additional memory to run itself, about 350~500 MB. So the total estimated required memory is about 1.128 GB + 500 MB ≈ 1.6 GB.

As mentioned, a c1.medium instance has only 1.7 GB of physical memory. If consumed memory exceeds actual physical memory, disk swapping occurs, and if JVM memory is swapped out to disk, performance degrades significantly. Please take care that swapping does not occur.
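
The memory budget above can be sanity-checked with quick arithmetic (the ~500 MB JVM-overhead figure is the author's estimate, used here as an assumption):

```shell
# Estimate total JVM footprint: heap + perm + JVM-internal overhead.
HEAP_MB=1024; PERM_MB=128; OVERHEAD_MB=500
TOTAL_MB=$(( HEAP_MB + PERM_MB + OVERHEAD_MB ))
echo "${TOTAL_MB} MB"   # must stay well below the ~1.7 GB physical memory of c1.medium
```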

-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=./java_pid<pid>.hprof

These options are for troubleshooting OOM (Java OutOfMemoryError). If an out-of-memory error occurs, the memory layout is dumped to disk; the location of the dump file is specified by the "-XX:HeapDumpPath" option.

-XX:ParallelGCThreads=2 -XX:+UseConcMarkSweepGC

These options specify the GC strategy. Minor collections use a parallel collector with 2 threads, and the old generation is collected concurrently (CMS). This reduces Full GC time.

-XX:+PrintGC -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+TraceClassUnloading -XX:+TraceClassLoading

These options specify GC logging. They log GC details to stderr (console output), showing the usage trend of Java heap memory, timestamps, etc. (including old, new & perm area usage).

In particular, the class loading & unloading options show which classes are loaded into and unloaded from memory. This helps trace Perm out-of-memory errors.
