Some good advice in this thread already, but given the power of the server
there should be no problem serving even more requests (as long as they are
small and not bound by CPU or I/O).

I'd start looking at JVM GC properties.

Turn on GC logging with

  -Xloggc:/someplace/gclog_tomcat.txt -XX:+PrintGCDetails

(or some verbose / timestamped variant of this).
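
For example, a timestamped variant might look like this (these are standard
HotSpot flags of the JDK 6/7 era; the log path is just a placeholder):

  -Xloggc:/someplace/gclog_tomcat.txt
  -XX:+PrintGCDetails
  -XX:+PrintGCDateStamps
  -XX:+PrintGCApplicationStoppedTime

The last flag also logs the total time application threads were stopped per
pause, which is exactly the number you care about here.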

And see what your GC pauses look like, and how frequent young gen / full GCs
are. With hundreds of requests coming in every second, you get into a
situation where a garbage collection pause creates a burst of requests once
it's done (and there is a risk that the objects from that burst get promoted
to old gen!) and causes a stampede if the server is already near its limits.
Generally you want your request objects to never reach old gen and to get
collected by the parallel young gen collector (-XX:+UseParNewGC,
-XX:NewSize=???, -XX:MaxNewSize=??? and -XX:SurvivorRatio=??? are worth
checking, or even try G1 GC in JDK7 if you feel adventurous).
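
A minimal sketch of what that might look like in Tomcat's bin/setenv.sh (the
sizes below are made-up placeholders, not recommendations - derive real
numbers from your GC logs):

  # Placeholder sizes - tune against your own GC logs.
  CATALINA_OPTS="$CATALINA_OPTS \
    -XX:+UseParNewGC \
    -XX:NewSize=256m -XX:MaxNewSize=256m \
    -XX:SurvivorRatio=8"

Or, to try G1 on JDK7 instead, replace those flags with -XX:+UseG1GC.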

As was already mentioned, fine-tuning JVM params is a good idea, but only
after checking the GC logs. The only thing I recommend outright is pinning
the JVM heap with matching -Xmx and -Xms values - otherwise the JVM will
keep allocating/deallocating memory from the OS, potentially creating
fragmentation issues in the long term.
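
For example (the 1g figure is only an illustration - size the heap for your
own workload):

  CATALINA_OPTS="$CATALINA_OPTS -Xms1g -Xmx1g"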


And there is the question of 64-bit vs 32-bit JVM: unless you need a Java
heap above ~1.5G, a 32-bit JVM should do just fine; otherwise you are just
paying a huge tax in memory usage and CPU cache/TLB misses.
-XX:+UseCompressedOops can remove some of that tax, but in my opinion a
64-bit JVM with such a small heap only makes sense if performance testing
shows gains versus the 32-bit JVM.
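
If you do end up on a 64-bit JVM, a quick sanity check that compressed oops
are actually in effect (assuming a HotSpot JVM; -XX:+PrintFlagsFinal is
available in recent JDK 6 and JDK 7 builds):

  java -Xmx1g -XX:+UseCompressedOops -XX:+PrintFlagsFinal -version \
    | grep UseCompressedOops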
