Hi,

Both nginx and tomcat are hosted on the same server. When listing the
connections I see both the connection from nginx to tomcat (the one created
first) and the one from tomcat back to nginx used for the reply. I may have
presented things the wrong way though (I'm not very strong on system-level
matters).

I do agree the high number of CLOSE_WAIT connections seems strange. It really
looks to me like nginx closed the connection before tomcat did (which I think
leads to the broken pipe exceptions observed in catalina.out). In case someone
wants to have a look, I uploaded a netstat log here:
http://www.filedropper.com/netsat
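
For what it's worth, a quick way to count the CLOSE_WAIT connections held on
the tomcat port directly from such a log (assuming it is plain "netstat -tan"
output, with the local address in column 4 and the state in column 6; the
file name below is just a placeholder):

  awk '$6 == "CLOSE_WAIT" && $4 ~ /:8080$/' netstat.log | wc -l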

Thomas

2015-04-20 17:13 GMT+02:00 André Warnier <a...@ice-sa.com>:

> Thomas Boniface wrote:
>
>> I did some captures during a peak this morning, I have some lsof and
>> netstat data.
>>
>> It seems to me that most of the file descriptors used by tomcat are http
>> connections:
>>
>>  thomas@localhost  ~/ads3/tbo11h12  cat lsof| wc -l
>> 17772
>>  thomas@localhost  ~/ads3/tbo11h12  cat lsof | grep TCP | wc -l
>> 13966
>>
>> (Note that the application also sends requests to external servers via http)
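
A rough way to break that file down a bit further (assuming the saved "lsof"
file is standard lsof output, with the TYPE in column 5, and that lsof was run
with -P -n so ports show numerically in the NAME column):

  awk '{print $5}' lsof | sort | uniq -c | sort -rn   # fd count per TYPE (IPv4, REG, ...)
  grep TCP lsof | grep -c ':8080'                     # TCP fds on the tomcat port
  grep TCP lsof | grep -vc ':8080'                    # TCP fds to other hosts (external calls)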
>>
>>
>> Regarding netstat, I wrote a small script to aggregate the connections under
>> a human readable name. If my script is right, the connections between nginx
>> and tomcat are as follows:
>>
>> tomcat => nginx SYN_RECV 127
>> tomcat => nginx ESTABLISHED 1650
>> tomcat => nginx CLOSE_WAIT 8381
>> tomcat => nginx TIME_WAIT 65
>>
>> nginx => tomcat SYN_SENT 20119
>> nginx => tomcat ESTABLISHED 4692
>> nginx => tomcat TIME_WAIT 122
>> nginx => tomcat FIN_WAIT2 488
>> nginx => tomcat FIN_WAIT1 13
>>
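As a side note, the aggregation is essentially just grouping the netstat
output by state, keyed on which end of the connection is the tomcat port. A
minimal sketch of that kind of script (not the exact one I used; it assumes
plain "netstat -tan" output and tomcat listening on 127.0.0.1:8080):

  netstat -tan | awk '
      $4 ~ /127\.0\.0\.1:8080$/ { print "local :8080  (tomcat side)", $6 }
      $5 ~ /127\.0\.0\.1:8080$/ { print "remote :8080 (nginx side) ", $6 }
  ' | sort | uniq -c | sort -rn

Since both processes are on the same host, each nginx->tomcat connection
should show up twice in netstat, once from each endpoint's point of view,
which is presumably what the two directions above correspond to.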
>
> I don't understand the distinction here.  Tomcat should never initiate
> connections *to* nginx, should it?
>
> For personal historical reasons, the high number of connections in the
> CLOSE_WAIT state above triggered my interest. Search Google for: "tcp
> close_wait state meaning".
> Basically, it can mean that the client wants to go away and closes its end
> of the connection to the server, but the application on the server never
> properly closes its end of the connection. As long as it doesn't, the
> corresponding connection remains stuck in the CLOSE_WAIT state (and
> continues to use resources on the server, such as an fd and everything
> that goes with it).
> All that doesn't mean that this is your main issue here, but it's
> something to look into.
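
On Linux, one way to double-check which process is actually sitting on those
CLOSE_WAIT sockets (netstat -p needs enough privileges to show the owning
process):

  netstat -tanp 2>/dev/null | awk '$6 == "CLOSE_WAIT" {print $7}' | sort | uniq -c | sort -rn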
>
>
>
>
>> Concerning the other response and the system-wide maximum number of open
>> files, I am not sure this is where our issue lies. The peak itself seems to
>> be a symptom of an issue: tomcat's fd count is around 1000 almost all the
>> time, except when a peak occurs. In such cases it can sometimes go up to
>> 10000 or more.
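
Since the peaks are intermittent, it may help to log the fd count continuously
so the next one gets captured. A minimal sketch, assuming Linux and a single
tomcat instance that pgrep can find by its Bootstrap class:

  PID=$(pgrep -f org.apache.catalina.startup.Bootstrap)
  while true; do
      echo "$(date '+%H:%M:%S') $(ls /proc/$PID/fd | wc -l)"
      sleep 10
  done >> tomcat-fd.log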
>>
>> Thomas
>>
>>
>>
>> 2015-04-20 15:41 GMT+02:00 Rainer Jung <rainer.j...@kippdata.de>:
>>
>>  Am 20.04.2015 um 14:11 schrieb Thomas Boniface:
>>>
>>>> Hi,
>>>>
>>>> I have tried to find help regarding an issue we experience with our
>>>> platform, leading to random file descriptor peaks. This happens more often
>>>> under heavy load but can also happen during low-traffic periods.
>>>>
>>>> Our application uses the servlet 3.0 async features and an async connector.
>>>> We noticed that a lot of issues regarding the asynchronous features were
>>>> fixed between our production version and the last stable build. We decided
>>>> to give it a try, to see if it would improve things or at least give clues
>>>> on what can cause the issue; unfortunately it did neither.
>>>>
>>>> The file descriptor peaks and application blocking happen frequently with
>>>> this version, whereas they only happened rarely on the previous version
>>>> (tomcat7 7.0.28-4).
>>>>
>>>> Tomcat is behind an nginx server. We use an NIO connector, configured as
>>>> follows:
>>>>
>>>> <Connector port="8080"
>>>>            protocol="org.apache.coyote.http11.Http11NioProtocol"
>>>>            selectorTimeout="1000"
>>>>            maxThreads="200"
>>>>            maxHttpHeaderSize="16384"
>>>>            address="127.0.0.1"
>>>>            redirectPort="8443"/>
>>>>
>>>> In catalina.out I can see some broken pipe messages that were not happening
>>>> with the previous version.
>>>>
>>>> I compared thread dumps from servers running both the new and the "old"
>>>> version of tomcat, and they look similar from my standpoint.
>>>>
>>>> My explanation may not be very clear, but I hope this gives an idea of what
>>>> we are experiencing. Any pointer would be welcome.
>>>>
>>> If the peaks happen long enough and your platform has the tools available,
>>> you can use lsof to look at what those FDs are - or on Linux look at
>>> "ls -l /proc/PID/fd/*" (PID being the tomcat process id) - or on Solaris
>>> use the pfiles command.
>>>
>>> If the result is what is expected, namely that by far most of the FDs come
>>> from network connections on port 8080, then you can check via "netstat"
>>> which connection state those are in.
>>>
>>> If most are in ESTABLISHED state, then you/we need to further break down
>>> the strategy.
>>>
>>> Regards,
>>>
>>> Rainer
>>>
>>>
>>
>
