Thanks for your time, Rainer,

I get what you mean regarding the application getting slow. This server
was also logging garbage collection activity, and it looks normal even
while the problem is occurring: there is no big variation in the time
taken by garbage collection operations.

I don't have a clear view of the server response time around the test I
made, so I can't tell whether the application gets "slow" before the file
descriptor peak, but as mentioned before this also happens during low
traffic periods (and during such periods there should be no reason to get
slow). It also feels unexpected that this version of Tomcat makes the
application slow down more often than a server running the other version.

Thomas


2015-04-20 16:32 GMT+02:00 Rainer Jung <rainer.j...@kippdata.de>:

> On 20.04.2015 at 15:41, Rainer Jung wrote:
>
>> On 20.04.2015 at 14:11, Thomas Boniface wrote:
>>
>>> Hi,
>>>
>>> I have tried to find help regarding an issue we experience with our
>>> platform, leading to random file descriptor peaks. This happens more often
>>> under heavy load but can also happen during low traffic periods.
>>>
>>> Our application uses Servlet 3.0 async features and an async connector.
>>> We noticed that a lot of issues regarding the asynchronous features were
>>> fixed between our production version and the latest stable build. We
>>> decided to give it a try, to see if it would improve things or at least
>>> give clues about what can cause the issue; unfortunately it did neither.
>>>
>>> The file descriptor peaks and application blocking happen frequently
>>> with this version, whereas they happened only rarely with the previous
>>> version (tomcat7 7.0.28-4).
>>>
>>> Tomcat sits behind an nginx server. We use an NIO connector, configured
>>> as follows:
>>> <Connector port="8080"
>>>        protocol="org.apache.coyote.http11.Http11NioProtocol"
>>>        selectorTimeout="1000"
>>>        maxThreads="200"
>>>        maxHttpHeaderSize="16384"
>>>        address="127.0.0.1"
>>>        redirectPort="8443"/>
>>>
>>> In the catalina log I can see some "Broken pipe" messages that were not
>>> happening with the previous version.
>>>
>>> I compared thread dumps from servers with both the new and "old" versions
>>> of Tomcat, and both look similar from my standpoint.
>>>
>>> My explanation may not be very clear, but I hope it gives an idea of what
>>> we are experiencing. Any pointers would be welcome.
>>>
>>
>> If the peaks last long enough and your platform has the tools available,
>> you can use lsof to look at what those FDs are - or on Linux look at
>> "ls -l /proc/PID/fd/*" (PID is the Tomcat process ID) - or on Solaris use
>> the pfiles command.
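>>
>> For example, a minimal sketch assuming Linux, with 12345 standing in for
>> the actual Tomcat PID:
>>
>>   # count the open file descriptors of the Tomcat process
>>   ls /proc/12345/fd | wc -l
>>
>>   # summarize what kind of FDs they are (TYPE column: IPv4, REG, ...)
>>   lsof -p 12345 | awk '{print $5}' | sort | uniq -c | sort -rn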
>>
>> If the result is what is expected, namely that by far the most FDs come
>> from network connections on port 8080, then you can check via "netstat"
>> which connection state those are in.
>>
>> If most are in ESTABLISHED state, then you/we need to further break down
>> the strategy.
>>
>
> One more thing: the connection peak might happen if for some reason your
> application or the JVM (GC) gets slow. The cause does not have to still be
> present at the time you take the thread dump.
>
> You might want to add "%D" to your Tomcat access log pattern and try to
> estimate whether the connection peaks are due to a (temporary) application
> slowdown.
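>
> For example, a sketch of an AccessLogValve in server.xml ("%D" logs the
> request processing time in milliseconds; directory, prefix and suffix are
> just the usual defaults):
>
>   <Valve className="org.apache.catalina.valves.AccessLogValve"
>          directory="logs" prefix="localhost_access_log" suffix=".txt"
>          pattern="%h %l %u %t &quot;%r&quot; %s %b %D"/>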
>
> The same holds for activating a GC log and checking it for long GC
> pauses, or for many pauses that add up.
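>
> For example, a sketch for a HotSpot JVM of that era (Java 7/8 flags),
> with the log path as a placeholder:
>
>   CATALINA_OPTS="$CATALINA_OPTS -verbose:gc -XX:+PrintGCDetails \
>     -XX:+PrintGCDateStamps -Xloggc:/path/to/gc.log"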
>
>
> Regards,
>
> Rainer
>
>
