Re: Async servlet and request recycling

2016-11-02 Thread Thomas Boniface
I'm glad this was useful in the end.

Could you send the link to the corresponding issue? I'd be interested to
read the commit for the fix and to know the Tomcat versions it will be
pushed to.

Thanks

2016-11-01 11:08 GMT+01:00 Mark Thomas <ma...@apache.org>:

> On 25/10/2016 09:59, Mark Thomas wrote:
> > Thanks for this.
> >
> > While I haven't had a chance to look into this, there is enough
> > information here to justify opening a bug report. That has the added
> > advantage that it won't get forgotten.
>
> I've found the time to look at this.
>
> There was a bug. While Tomcat did clean up its internal resources
> correctly, it didn't call onError() so the app was never given a chance
> to clean up unless it did so as a result of the original IOException.
>
> The app should call onComplete() from onError() (as required by the
> spec). If it doesn't, that will trigger another error and Tomcat will
> call onComplete().
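As a minimal sketch of the pattern Mark describes (illustrative only, not
taken from the original code; it assumes the webapp registers its own
listener right after startAsync(), with the usual javax.servlet imports):

    AsyncContext ctx = req.startAsync();
    ctx.addListener(new AsyncListener() {
        @Override
        public void onError(AsyncEvent event) throws IOException {
            // Release any application resources tied to this request,
            // then complete the context as the spec requires.
            event.getAsyncContext().complete();
        }
        @Override public void onComplete(AsyncEvent event) {}
        @Override public void onTimeout(AsyncEvent event) {}
        @Override public void onStartAsync(AsyncEvent event) {}
    });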
>
> This should be fixed in the next set of releases.
>
> Mark
>
>
> >
> > Mark
> >
> >
> > On 18/10/2016 09:55, Thomas Boniface wrote:
> >> Just a small correction, the callback was a bit weird. The correct one
> is
> >> below (but that doesn't impact the behavior of the scenario)
> >>
> >> @Override
> >> public void completed(HttpResponse result) {
> >>     try {
> >>         try {
> >>             Thread.sleep(200);
> >>         } catch (InterruptedException e) {
> >>             e.printStackTrace();
> >>         }
> >>
> >>         ServletOutputStream out = res.getOutputStream();
> >>         out.print("Hello world");
> >>         out.flush();
> >>         out.close();
> >>     } catch (IOException e) {
> >>     } finally {
> >>         req.getAsyncContext().complete();
> >>     }
> >> }
> >>
> >>
> >> 2016-10-18 10:38 GMT+02:00 Thomas Boniface <tho...@stickyads.tv>:
> >>>
> >>> I think I managed to narrow things down a bit.
> >>>
> >>>
> >>> I managed to reproduce the issue locally (it's not occurring 100% of the
> >>> time, but it appears after a few attempts for me). I created a really
> >>> basic webapp:
> >>>
> >>> package stickyads.tv;
> >>>
> >>> import org.apache.http.HttpResponse;
> >>> import org.apache.http.client.methods.HttpGet;
> >>> import org.apache.http.concurrent.FutureCallback;
> >>> import org.apache.http.impl.nio.client.CloseableHttpAsyncClient;
> >>> import org.apache.http.impl.nio.client.HttpAsyncClients;
> >>>
> >>> import javax.servlet.*;
> >>> import javax.servlet.annotation.WebServlet;
> >>> import java.io.IOException;
> >>>
> >>> @WebServlet(value="/hello", name="helloServlet", asyncSupported = true)
> >>> public class HelloServlet extends GenericServlet {
> >>>
> >>> private static final CloseableHttpAsyncClient httpclient =
> >> HttpAsyncClients.createDefault();
> >>> static {
> >>> httpclient.start();
> >>> }
> >>>
> >>> @Override
> >>> public void service(final ServletRequest req, final ServletResponse
> >> res) throws ServletException, IOException {
> >>> req.startAsync();
> >>>
> >>> HttpGet request = new HttpGet("http://www.apache.org/");
> >>> httpclient.execute(request, new FutureCallback<HttpResponse>() {
> >>> @Override
> >>> public void completed(HttpResponse result) {
> >>> try {
> >>> try {
> >>> Thread.sleep(200);
> >>> } catch (InterruptedException e) {
> >>> e.printStackTrace();
> >>> }
> >>>
> >>> ServletOutputStream out = res.getOutputStream();
> >>> out.print("Hello world");
> >>> out.flush();
> >>> out.close();
> >>> } catch (IOException e) {
> >>> // this will generate an NPE
> >>> req.getAsyncContext().complete();
> >>> }
> >>> }
> >>>
> >>> 

Re: Async servlet and request recycling

2016-10-18 Thread Thomas Boniface
Just a small correction, the callback was a bit weird. The correct one is
below (but that doesn't impact the behavior of the scenario)

@Override
public void completed(HttpResponse result) {
    try {
        try {
            Thread.sleep(200);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }

        ServletOutputStream out = res.getOutputStream();
        out.print("Hello world");
        out.flush();
        out.close();
    } catch (IOException e) {
    } finally {
        req.getAsyncContext().complete();
    }
}


2016-10-18 10:38 GMT+02:00 Thomas Boniface <tho...@stickyads.tv>:
>
> I think I managed to narrow things down a bit.
>
>
> I managed to reproduce the issue locally (it's not occurring 100% of the
> time, but it appears after a few attempts for me). I created a really
> basic webapp:
>
> package stickyads.tv;
>
> import org.apache.http.HttpResponse;
> import org.apache.http.client.methods.HttpGet;
> import org.apache.http.concurrent.FutureCallback;
> import org.apache.http.impl.nio.client.CloseableHttpAsyncClient;
> import org.apache.http.impl.nio.client.HttpAsyncClients;
>
> import javax.servlet.*;
> import javax.servlet.annotation.WebServlet;
> import java.io.IOException;
>
> @WebServlet(value="/hello", name="helloServlet", asyncSupported = true)
> public class HelloServlet extends GenericServlet {
>
>     private static final CloseableHttpAsyncClient httpclient =
>             HttpAsyncClients.createDefault();
>     static {
>         httpclient.start();
>     }
>
>     @Override
>     public void service(final ServletRequest req, final ServletResponse res)
>             throws ServletException, IOException {
>         req.startAsync();
>
>         HttpGet request = new HttpGet("http://www.apache.org/");
>         httpclient.execute(request, new FutureCallback<HttpResponse>() {
>             @Override
>             public void completed(HttpResponse result) {
>                 try {
>                     try {
>                         Thread.sleep(200);
>                     } catch (InterruptedException e) {
>                         e.printStackTrace();
>                     }
>
>                     ServletOutputStream out = res.getOutputStream();
>                     out.print("Hello world");
>                     out.flush();
>                     out.close();
>                 } catch (IOException e) {
>                     // this will generate an NPE
>                     req.getAsyncContext().complete();
>                 }
>             }
>
>             @Override
>             public void failed(Exception ex) {
>             }
>
>             @Override
>             public void cancelled() {
>             }
>         });
>     }
> }
>
> I deploy the war on Tomcat 7.0.72 and then call it using a simple Python
> program that sends a request and closes the connection without waiting for
> a response:
>
> import socket
>
> # create an INET, STREAMing socket
> s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
> # now connect to Tomcat on port 8080
> s.connect(("127.0.0.1", 8080))
> s.send("GET /broken-pipe-1.0-SNAPSHOT/hello HTTP/1.1\nHost:127.0.0.1\n"
>        "X-Forwarded-For: 127.0.0.1\nX-Forwarded-Protocol: http\n\n")
>
> This results in a broken pipe exception and a recycling of the async
> context, but no complete:
>
> INFO: An error occurred in processing while on a non-container thread.
> The connection will be closed immediately
> java.net.SocketException: Broken pipe
>     at java.net.SocketOutputStream.$$YJP$$socketWrite0(Native Method)
>     at java.net.SocketOutputStream.socketWrite0(SocketOutputStream.java)
>     at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109)
>     at java.net.SocketOutputStream.write(SocketOutputStream.java:153)
>     at org.apache.coyote.http11.InternalOutputBuffer.realWriteBytes(InternalOutputBuffer.java:215)
>     at org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:480)
>     at org.apache.coyote.http11.InternalOutputBuffer.endRequest(InternalOutputBuffer.java:159)
>     at org.apache.coyote.http11.AbstractHttp11Processor.action(AbstractHttp11Processor.java:761)
>     at org.apache.coyote.Response.action(Response.java:174)
>     at org.apache.coyote.Response.finish(Response.java:274)
>     at org.apache.catalina.connector.OutputBuffer.close(OutputBuffer.java:322)
>     at org.apache.catalina.connector.CoyoteOutputStream.close(CoyoteOutputStream.java:108)
>     at stickyads.tv.HelloServlet$1.completed(HelloServlet.java:33)
>     at stickyads.tv.HelloServlet$1.completed(HelloServlet.java:26)
>     at org.apache.http.concurrent.BasicFuture.comp

Re: Async servlet and request recycling

2016-10-18 Thread Thomas Boniface
    at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:278)
    at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:106)
    at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:590)
    at java.lang.Thread.run(Thread.java:745)

Oct 18, 2016 9:47:24 AM org.apache.catalina.core.AsyncContextImpl logDebug
FINE: Req: 1f68c4e1  CReq: 3a716fa9  RP: 4c7aa6ad  Stage: 7  Thread:
http-bio-8080-exec-5  State:  N/A  Method: recycle
URI: /broken-pipe-1.0-SNAPSHOT/hello

Here is also a tcpdump corresponding to this scenario:

09:47:43.916628 IP (tos 0x0, ttl 64, id 63341, offset 0, flags [DF], proto
TCP (6), length 60)
127.0.0.1.44078 > 127.0.0.1.8080: Flags [S], cksum 0xfe30 (incorrect ->
0x0bcc), seq 3052296773, win 43690, options [mss 65495,sackOK,TS val
273198376 ecr 0,nop,wscale 7], length 0
E..<.m@.@.EL..ZE.0.
.H.(
09:47:43.916642 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP
(6), length 60)
127.0.0.1.8080 > 127.0.0.1.44078: Flags [S.], cksum 0xfe30 (incorrect
-> 0x44d5), seq 705486696, ack 3052296774, win 43690, options [mss
65495,sackOK,TS val 273198376 ecr 273198376,nop,wscale 7], length 0
E..<..@.@.<.*..h..ZF.0.
.H.(.H.(
09:47:43.916653 IP (tos 0x0, ttl 64, id 63342, offset 0, flags [DF], proto
TCP (6), length 52)
127.0.0.1.44078 > 127.0.0.1.8080: Flags [.], cksum 0xfe28 (incorrect ->
0x171a), seq 1, ack 1, win 342, options [nop,nop,TS val 273198376 ecr
273198376], length 0
E..4.n@.@.ES..ZF*..i...V.(.
.H.(.H.(
09:47:43.916917 IP (tos 0x0, ttl 64, id 63343, offset 0, flags [DF], proto
TCP (6), length 172)
127.0.0.1.44078 > 127.0.0.1.8080: Flags [P.], cksum 0xfea0 (incorrect
-> 0xbf79), seq 1:121, ack 1, win 342, options [nop,nop,TS val 273198376
ecr 273198376], length 120
Eo@.@.D...ZF*..i...V...
.H.(.H.(GET /broken-pipe-1.0-SNAPSHOT/hello HTTP/1.1
Host:127.0.0.1
X-Forwarded-For: 127.0.0.1
X-Forwarded-Protocol: http


09:47:43.916923 IP (tos 0x0, ttl 64, id 29249, offset 0, flags [DF], proto
TCP (6), length 52)
127.0.0.1.8080 > 127.0.0.1.44078: Flags [.], cksum 0xfe28 (incorrect ->
0x16a2), seq 1, ack 121, win 342, options [nop,nop,TS val 273198376 ecr
273198376], length 0
E..4rA@.@...*..i..ZV.(.
.H.(.H.(
09:47:43.923131 IP (tos 0x0, ttl 64, id 63344, offset 0, flags [DF], proto
TCP (6), length 52)
127.0.0.1.44078 > 127.0.0.1.8080: Flags [F.], cksum 0xfe28 (incorrect
-> 0x16a0), seq 121, ack 1, win 342, options [nop,nop,TS val 273198377 ecr
273198376], length 0
E..4.p@.@.EQ..Z.*..i...V.(.
.H.).H.(
09:47:43.960144 IP (tos 0x0, ttl 64, id 29250, offset 0, flags [DF], proto
TCP (6), length 52)
127.0.0.1.8080 > 127.0.0.1.44078: Flags [.], cksum 0xfe28 (incorrect ->
0x1695), seq 1, ack 122, win 342, options [nop,nop,TS val 273198387 ecr
273198377], length 0
E..4rB@.@...*..i..ZV.(.
.H.3.H.)
09:47:44.380384 IP (tos 0x0, ttl 64, id 29251, offset 0, flags [DF], proto
TCP (6), length 179)
127.0.0.1.8080 > 127.0.0.1.44078: Flags [P.], cksum 0xfea7 (incorrect
-> 0xe005), seq 1:128, ack 122, win 342, options [nop,nop,TS val 273198492
ecr 273198377], length 127
E...rC@.@...*..i..ZV...
.H...H.)HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Transfer-Encoding: chunked
Date: Tue, 18 Oct 2016 07:47:44 GMT

b
Hello world

09:47:44.380398 IP (tos 0x0, ttl 64, id 52626, offset 0, flags [DF], proto
TCP (6), length 40)
127.0.0.1.44078 > 127.0.0.1.8080: Flags [R], cksum 0xd571 (correct),
seq 3052296895, win 0, length 0
E..(..@.@.o;..Z.Pq..

This shows the Python "client" sends a FIN to Tomcat, Tomcat pushes the
response anyway, and then the "client" replies with a RST.

Thomas

2016-10-10 19:10 GMT+02:00 Mark Thomas <ma...@apache.org>:

> On 10/10/2016 15:47, Thomas Boniface wrote:
> > Hello,
> >
> > I have managed to test this use case on the server revealing the issue
> with
> > the latest tomcat release (7.0.72). I still can observe the issue on
> > catalina.out side: a broken pipe exception pops up and a recycle is shown
> > for the async context but no complete.
>
> Steps to reproduce?
>
> Better still, a test case that demonstrates the issue?
>
> Mark
>
>
> >
> > Oct 10, 2016 4:41:07 PM org.apache.coyote.AbstractProcessor
> setErrorState
> > INFO: An error occurred in processing while on a non-container thread.
> The
> > connection will be closed immediately
> > java.net.SocketException: Broken pipe
> > at java.net.SocketOutputStream.$$YJP$$socketWrite0(Native Method)
> > at java.net.SocketOutputStream.socketWrite0(SocketOutputStream.java)
> > 

Re: Async servlet and request recycling

2016-10-10 Thread Thomas Boniface
Hello,

I have managed to test this use case with the latest Tomcat release (7.0.72)
on the server revealing the issue. I can still observe the issue on the
catalina.out side: a broken pipe exception pops up and a recycle is shown
for the async context, but no complete.

Oct 10, 2016 4:41:07 PM org.apache.coyote.AbstractProcessor setErrorState
INFO: An error occurred in processing while on a non-container thread. The
connection will be closed immediately
java.net.SocketException: Broken pipe
    at java.net.SocketOutputStream.$$YJP$$socketWrite0(Native Method)
    at java.net.SocketOutputStream.socketWrite0(SocketOutputStream.java)
    at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109)
    at java.net.SocketOutputStream.write(SocketOutputStream.java:153)
    at org.apache.coyote.http11.InternalOutputBuffer.realWriteBytes(InternalOutputBuffer.java:215)
    at org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:480)
    at org.apache.coyote.http11.InternalOutputBuffer.endRequest(InternalOutputBuffer.java:159)
    at org.apache.coyote.http11.AbstractHttp11Processor.action(AbstractHttp11Processor.java:761)
    at org.apache.coyote.Response.action(Response.java:174)
    at org.apache.coyote.Response.finish(Response.java:274)
    at org.apache.catalina.connector.OutputBuffer.close(OutputBuffer.java:322)
    at org.apache.catalina.connector.CoyoteOutputStream.close(CoyoteOutputStream.java:108)
    ...
Oct 10, 2016 4:41:07 PM org.apache.catalina.core.AsyncContextImpl logDebug
FINE: Req: 2de85b25  CReq: 70e2c044  RP: 1dff11c5  Stage: 7  Thread:
http-bio-8080-exec-8  State:  N/A  Method: recycle
URI: /www/delivery/swfIndex.php?reqType=AdsSetup=2.0=5335

On the application log side, the behavior changed a bit as I now get an
IllegalStateException:

java.lang.IllegalStateException: It is illegal to call this method if the
current request is not in asynchronous mode (i.e. isAsyncStarted() returns
false)
    at org.apache.catalina.connector.Request.getAsyncContext(Request.java:1740) ~[catalina.jar:7.0.72]

In the end, the situation seems pretty similar. The container appears to
recycle the request without completing the async context, and I can't
complete it on the application side as the request I have has already been
recycled.
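As a purely illustrative sketch (not from the original application), one
defensive pattern on the webapp side is to check isAsyncStarted() before
trying to complete, while accepting that a race with the container's
recycling is still possible:

    // In the async callback, after the write has failed:
    if (req.isAsyncStarted()) {
        try {
            req.getAsyncContext().complete();
        } catch (IllegalStateException ise) {
            // The container recycled the request between the check and the
            // call; nothing more the webapp can do at this point.
        }
    }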

Thanks,
Thomas


2016-10-03 17:06 GMT+02:00 Thomas Boniface <tho...@stickyads.tv>:

> Hi,
>
> Thanks for your feedback. I noticed the issue I described does not occur in
> my local or integration environment. I am still trying to figure out how to
> reproduce it in an environment I can easily update to the latest Tomcat
> version.
>
> Although it may not occur in the latest release, could you give me some
> input on the consequences of the behaviour I described?
>
> Thanks,
> Thomas
>
> 2016-09-29 9:41 GMT+02:00 Violeta Georgieva <violet...@apache.org>:
>
>> Hi,
>>
>> 2016-09-29 10:14 GMT+03:00 Thomas Boniface <tho...@stickyads.tv>:
>> >
>> > The tomcat version is 7.0.64.
>>
>> I would recommend you to verify the behaviour against the latest Tomcat 7
>> (7.0.72).
>> We have changes in the async processing since 7.0.64.
>> http://tomcat.apache.org/tomcat-7.0-doc/changelog.html
>>
>> Regards,
>> Violeta
>>
>> > Thomas
>> >
>> > 2016-09-28 22:43 GMT+02:00 Christopher Schultz <
>> ch...@christopherschultz.net
>> > >:
>> >
>> > >
>> > > Thomas,
>> > >
>> > > On 9/28/16 11:55 AM, Thomas Boniface wrote:
>> > > > Hi,
>> > > >
>> > > > When a client calls an asynchronous servlet and closes the
>> > > > connection a java.io.IOException: Broken pipe is catched by Tomcat
>> > > > level when the webapp tries to write to the socket.
>> > > >
>> > > > This exception is not transmited to the webapp level but it seems
>> > > > the request has been recycled (all content is reinitialised), in
>> > > > such a case it impossible for the webapp to retrieve the
>> > > > AsyncContext from the HttpServletRequest making the AsyncContext
>> > > > complete call impossible.
>> > > >
>> > > > Activating the tomcat logging for AsyncContext
>> > > > (org.apache.catalina.core.AsyncContextImpl.level = FINE) shows the
>> > > > recycle method is called but not the complete method, what seems to
>> > > > confirm my assumption. In a use case were the client waits for the
>> > > > response, I can see both complete and recycle are called.
>> > > >
>> > > > My question is, what is the i

Re: Async servlet and request recycling

2016-10-03 Thread Thomas Boniface
Hi,

Thanks for your feedbacks. I noticed the issue I described do not occur on
my local or integration environment. I still try to figure out how to
reproduce this in an environment I can easely update to the latest tomcat
version.

Although it may not occur in latest release. Could you give me an input on
what are the consequences of the behaviour I described ?

Thanks,
Thomas

2016-09-29 9:41 GMT+02:00 Violeta Georgieva <violet...@apache.org>:

> Hi,
>
> 2016-09-29 10:14 GMT+03:00 Thomas Boniface <tho...@stickyads.tv>:
> >
> > The tomcat version is 7.0.64.
>
> I would recommend you to verify the behaviour against the latest Tomcat 7
> (7.0.72).
> We have changes in the async processing since 7.0.64.
> http://tomcat.apache.org/tomcat-7.0-doc/changelog.html
>
> Regards,
> Violeta
>
> > Thomas
> >
> > 2016-09-28 22:43 GMT+02:00 Christopher Schultz <
> ch...@christopherschultz.net
> > >:
> >
> > >
> > > Thomas,
> > >
> > > On 9/28/16 11:55 AM, Thomas Boniface wrote:
> > > > Hi,
> > > >
> > > > When a client calls an asynchronous servlet and closes the
> > > > connection a java.io.IOException: Broken pipe is catched by Tomcat
> > > > level when the webapp tries to write to the socket.
> > > >
> > > > This exception is not transmited to the webapp level but it seems
> > > > the request has been recycled (all content is reinitialised), in
> > > > such a case it impossible for the webapp to retrieve the
> > > > AsyncContext from the HttpServletRequest making the AsyncContext
> > > > complete call impossible.
> > > >
> > > > Activating the tomcat logging for AsyncContext
> > > > (org.apache.catalina.core.AsyncContextImpl.level = FINE) shows the
> > > > recycle method is called but not the complete method, what seems to
> > > > confirm my assumption. In a use case were the client waits for the
> > > > response, I can see both complete and recycle are called.
> > > >
> > > > My question is, what is the impact of the complete not being called
> > > > on the AsyncContext, is the socket cleaned up properly ?
> > >
> > > Tomcat version?
> > >
> > > -chris
> > >
> > >
> > >
>


Re: Async servlet and request recycling

2016-09-29 Thread Thomas Boniface
The tomcat version is 7.0.64.

Thomas

2016-09-28 22:43 GMT+02:00 Christopher Schultz <ch...@christopherschultz.net
>:

>
> Thomas,
>
> On 9/28/16 11:55 AM, Thomas Boniface wrote:
> > Hi,
> >
> > When a client calls an asynchronous servlet and closes the
> > connection a java.io.IOException: Broken pipe is catched by Tomcat
> > level when the webapp tries to write to the socket.
> >
> > This exception is not transmited to the webapp level but it seems
> > the request has been recycled (all content is reinitialised), in
> > such a case it impossible for the webapp to retrieve the
> > AsyncContext from the HttpServletRequest making the AsyncContext
> > complete call impossible.
> >
> > Activating the tomcat logging for AsyncContext
> > (org.apache.catalina.core.AsyncContextImpl.level = FINE) shows the
> > recycle method is called but not the complete method, what seems to
> > confirm my assumption. In a use case were the client waits for the
> > response, I can see both complete and recycle are called.
> >
> > My question is, what is the impact of the complete not being called
> > on the AsyncContext, is the socket cleaned up properly ?
>
> Tomcat version?
>
> -chris
>
>
>


Async servlet and request recycling

2016-09-28 Thread Thomas Boniface
Hi,

When a client calls an asynchronous servlet and closes the connection, a
java.io.IOException: Broken pipe is caught at Tomcat level when the webapp
tries to write to the socket.

This exception is not transmitted to the webapp level, but it seems the
request has been recycled (all content is reinitialised). In such a case it
is impossible for the webapp to retrieve the AsyncContext from the
HttpServletRequest, making the AsyncContext complete call impossible.

Activating the Tomcat logging for AsyncContext
(org.apache.catalina.core.AsyncContextImpl.level = FINE) shows the recycle
method is called but not the complete method, which seems to confirm my
assumption. In a use case where the client waits for the response, I can see
both complete and recycle are called.

My question is: what is the impact of complete not being called on the
AsyncContext? Is the socket cleaned up properly?
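For what it's worth, a small webapp-side instrumentation sketch (illustrative
only, not part of the original application) that complements the container's
FINE logging: register an AsyncListener right after startAsync() and log
which callbacks actually fire. It assumes the code runs inside the servlet's
service() method, so log() resolves to GenericServlet.log(). In the scenario
described above, onError() is the callback one would expect on a client
disconnect.

    final AsyncContext ctx = req.startAsync();
    ctx.addListener(new AsyncListener() {
        @Override public void onComplete(AsyncEvent e)   { log("async onComplete"); }
        @Override public void onTimeout(AsyncEvent e)    { log("async onTimeout"); }
        @Override public void onError(AsyncEvent e)      { log("async onError: " + e.getThrowable()); }
        @Override public void onStartAsync(AsyncEvent e) { log("async onStartAsync"); }
    });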

Thanks,
Thomas


Non recycled request

2016-05-04 Thread Thomas Boniface
Hi,

I can observe RecycleRequiredException (example below) in the catalina.out
of my application. This application relies on async servlets with some
async libraries; I guess this exception can lead to strange behavior, like
mixing user cookies.

Are there specific logs I could activate to isolate the use case leading to
this error? Or some known deprecated behaviour that would lead to this
state?

May 02, 2016 6:32:10 AM org.apache.catalina.connector.CoyoteAdapter
checkRecycled
INFO: Encountered a non-recycled request and recycled it forcedly.
org.apache.catalina.connector.CoyoteAdapter$RecycleRequiredException
    at org.apache.catalina.connector.CoyoteAdapter.checkRecycled(CoyoteAdapter.java:592)
    at org.apache.coyote.http11.AbstractHttp11Processor.recycle(AbstractHttp11Processor.java:1814)
    at org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.release(Http11NioProtocol.java:218)
    at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:708)
    at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1760)
    at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1719)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
    at java.lang.Thread.run(Thread.java:745)

Thanks,
Thomas


Async servlet timeout behaviour

2016-02-11 Thread Thomas Boniface
Hi,

I'm using an async servlet with a timeout configured to 300 ms, but in the
live environment I can observe some big differences with the actual applied
timeout, 700 ms or more for instance.

I was wondering how onTimeout events are triggered on AsyncListeners. Is
there a limited number of threads handling this? Could this be explained by
too many onTimeout events occurring concurrently on a busy server?

I also wonder what can and cannot be done while in the onTimeout thread. Can
I still do some asynchronous operation (retrieving content from an HTTP
request), and is writing to the response from this thread OK, as long as I
ensure it is done in a thread-safe way?
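For context, a minimal sketch of the setup being described (illustrative
only, with the usual javax.servlet imports assumed; not the original
application's code):

    AsyncContext ctx = req.startAsync();
    ctx.setTimeout(300); // milliseconds
    ctx.addListener(new AsyncListener() {
        @Override
        public void onTimeout(AsyncEvent event) throws IOException {
            // Runs on a container thread; per the spec the request must be
            // finished here, e.g. by completing or dispatching the context.
            event.getAsyncContext().complete();
        }
        @Override public void onComplete(AsyncEvent event) {}
        @Override public void onError(AsyncEvent event) {}
        @Override public void onStartAsync(AsyncEvent event) {}
    });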

Thanks,
Thomas


Re: Async servlet timeout behaviour

2016-02-11 Thread Thomas Boniface
2016-02-11 12:16 GMT+01:00 Mark Thomas <ma...@apache.org>:

> On 11/02/2016 11:07, Thomas Boniface wrote:
> > Hi,
> >
> > I'm using async servlet with a timeout configured to 300ms but I can
> > observe in live environnement some big differences with the actual
> applied
> > timeout  700ms or more for instance.
> >
> > I was wondering how are triggered onTimeout events to AsyncListeners, is
> > there a limited number of thread handling this ? Could this be explained
> by
> > too many onTimeout occuring concurrently on a busy server ?
>
> Timeouts are only checked once a second.
>
That explains this use case indeed.

>
> > I also wonder what can be done or not while in the onTimeout thread, can
> I
> > still do some asynchronous operation (retrieving content from http
> request)
> > and is writing to the reponse from this thread, as long as I ensured this
> > done in a thread safe way, ok ?
>
> This is all in the Servlet spec.
>
I'll study the spec. I spent a lot of time looking for documentation without
thinking of reading the spec itself, my bad.

>
> In summary:
> - You can perform any operation that is valid for a request in async mode.
>
Will this have no impact on the onTimeout triggering of other requests, no
matter what I do within onTimeout?

> - You MUST call complete() or dispatch() before exiting onTimeout().
>
If onTimeout occurs but another thread created by the same request is
already writing the response, I assume that thread will call complete() and
the onTimeout thread must not do anything?
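A minimal sketch of that "complete exactly once" idea (illustrative only;
the flag is a java.util.concurrent.atomic.AtomicBoolean created per request
next to startAsync(), not something from the original application):

    // Shared per-request state, created in service():
    final AtomicBoolean finished = new AtomicBoolean(false);

    // In the response-writing thread (e.g. the HTTP client callback):
    if (finished.compareAndSet(false, true)) {
        // ... write the body ...
        ctx.complete();
    }

    // In AsyncListener.onTimeout(), on the container thread:
    if (finished.compareAndSet(false, true)) {
        ctx.complete(); // nobody answered in time; finish the request here
    }
    // otherwise the writer won the race and has already completed the context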

>
> Mark
>
>
>
>


Re: Async servlet timeout behaviour

2016-02-11 Thread Thomas Boniface
2016-02-11 12:31 GMT+01:00 André Warnier (tomcat) <a...@ice-sa.com>:

> On 11.02.2016 12:07, Thomas Boniface wrote:
>
>> Hi,
>>
>> I'm using async servlet with a timeout configured to 300ms
>>
>
> naive question : is that not awfully short, if this relates to some
> over-the-Internet communication ?
>
That's a matter of context I guess; in a real-time bidding context for video
advertising we are expected to reply to a request with no noticeable delay.

>
> but I can
>
>> observe in live environnement some big differences with the actual applied
>> timeout  700ms or more for instance.
>>
>> I was wondering how are triggered onTimeout events to AsyncListeners, is
>> there a limited number of thread handling this ? Could this be explained
>> by
>> too many onTimeout occuring concurrently on a busy server ?
>>
>> I also wonder what can be done or not while in the onTimeout thread, can I
>> still do some asynchronous operation (retrieving content from http
>> request)
>> and is writing to the reponse from this thread, as long as I ensured this
>> done in a thread safe way, ok ?
>>
>> Thanks,
>> Thomas
>>
>>
>
>
>


Re: File descriptors peaks with latest stable build of Tomcat 7

2015-04-30 Thread Thomas Boniface
Thanks for your reply, we'll give your suggestions a try.

2015-04-29 23:15 GMT+02:00 Christopher Schultz ch...@christopherschultz.net
:


 Thomas,

 On 4/25/15 4:25 AM, Thomas Boniface wrote:
  When talking about the strategy for our next test on the release we
  checked at the tomcat connector configuration but we are unsure how
  to applies your advices:
 
  1. Check the nginx configuration. Specifically, the keep-alive and
  timeout associated with the proxy configuration.
 
  2. Make sure that Tomcat's timeouts are appropriate for those
  matching settings in nginx.
 
  It seems were have 100 connections max keept alive at nginx level
  ( keepalive), a timeout to connect to tomcat of 2s
  (proxy_connect_timeout) and a timeout to read from tomcat of 10s
  (proxy_read_timeout).
 
  On tomcat side we have a connector like follows:
 
  Connector port=8080
  protocol=org.apache.coyote.http11.Http11NioProtocol
  selectorTimeout=1000 maxThreads=200 maxHttpHeaderSize=16384
  address=127.0.0.1 redirectPort=8443/

 It sounds like you need to add this to your Connector configuration:

connectionTimeout=1

 This matches your value for proxy_read_timeout. You should probably
 also set keepAliveTimeout if you think it needs to be different from
 connectionTimeout (keepAliveTimeout defaults to connectionTimeout).

 I'm not sure if Nginx's proxy_read_timeout is the same timeout used to
 terminate a connection to Tomcat if Nginx hasn't tried to send a
 request over that connection for a while, but if so, the
 connectionTimeout/keepAliveTimeout is what you want to set.

 I'm not sure that setting selectorTimeout to something other than the
 default helps you at all (1000ms is the default).

 The goal is to get both Nginx and Tomcat to close their connections at
 the same time when they decide that the connection is no longer
 necessary. If Nginx times-out more quickly than Tomcat, then re-opens
 a new connection to Tomcat, it will make Tomcat artificially run out
 of connections (and file descriptors) even though Tomcat is largely idle
 .

 -chris





Re: File descriptors peaks with latest stable build of Tomcat 7

2015-04-25 Thread Thomas Boniface
Hi,

When talking about the strategy for our next test on the release, we checked
the Tomcat connector configuration, but we are unsure how to apply your
advice:

1. Check the nginx configuration. Specifically, the keep-alive and
timeout associated with the proxy configuration.

2. Make sure that Tomcat's timeouts are appropriate for those matching
settings in nginx.

It seems we have at most 100 connections kept alive at the nginx level
(keepalive), a timeout to connect to Tomcat of 2 s (proxy_connect_timeout)
and a timeout to read from Tomcat of 10 s (proxy_read_timeout).

On the Tomcat side we have a connector as follows:

<Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"
           selectorTimeout="1000"
           maxThreads="200"
           maxHttpHeaderSize="16384"
           address="127.0.0.1"
           redirectPort="8443"/>

Thomas

2015-04-23 14:50 GMT+02:00 Thomas Boniface tho...@stickyads.tv:

 I just want to keep you updated and tell you that all your replies are
 very helpful. It give me clues on what to look for and sometimes confirm
 some of our suspicion.

 I have transmitted some of the element collected in this thread to our
 platform team but we were not able to setup new test so far due to other
 task keeping the team busy. I'll post an update when some new tests will be
 possible.

 Thanks again,
 Thomas

 2015-04-22 17:19 GMT+02:00 Frederik Nosi frederik.n...@postecom.it:

 On 04/22/2015 05:15 PM, Christopher Schultz wrote:


 Frederik,

 On 4/22/15 10:53 AM, Frederik Nosi wrote:

 Hi, On 04/22/2015 04:35 PM, Christopher Schultz wrote: Neill,

 On 4/22/15 9:12 AM, Neill Lima wrote:

 If I am not wrong, if the application in question is
 monitored in VisualVM through JMX
 (https://visualvm.java.net/) you could trigger a Force GC
 from its monitoring console.

 You can do this, but it won't close any CLOSE_WAIT connections.
 Tomcat's timeout must be reached. I suspect that the timeout(s)
 are simply way too long.

 You can tune the network stack's timeout using sysctl, eg:
 net.ipv4.tcp_tw_reuse = 1 net.ipv4.tcp_tw_recycle = 1
 net.ipv4.tcp_fin_timeout = 3

 This won't do anything, either. As far as the OS is concerned, the
 application (Tomcat) is still using that connection. Therefore it
 can't be cleaned up.


 Indeed you are right, tuning the network stack help with TIME_WAIT, not
 CLOSE_WAIT, my bad.


 Tomcat has to actively hang up the connection, and the best way to do
 that is with synchronized timeouts between the reverse proxy and Tomcat.

 You can try all other kinds of tricks, but the fact of the matter is
 that the application is still trying to use the socket, so no other
 component can step-in and kill it.


 Probably the application is slow then, or the server overloaded.



 -chris









Re: File descriptors peaks with latest stable build of Tomcat 7

2015-04-23 Thread Thomas Boniface
I just want to keep you updated and tell you that all your replies are very
helpful. They give me clues on what to look for and sometimes confirm some
of our suspicions.

I have transmitted some of the elements collected in this thread to our
platform team, but we have not been able to set up new tests so far due to
other tasks keeping the team busy. I'll post an update when new tests are
possible.

2015-04-22 17:19 GMT+02:00 Frederik Nosi frederik.n...@postecom.it:

 On 04/22/2015 05:15 PM, Christopher Schultz wrote:


 Frederik,

 On 4/22/15 10:53 AM, Frederik Nosi wrote:

 Hi, On 04/22/2015 04:35 PM, Christopher Schultz wrote: Neill,

 On 4/22/15 9:12 AM, Neill Lima wrote:

 If I am not wrong, if the application in question is
 monitored in VisualVM through JMX
 (https://visualvm.java.net/) you could trigger a Force GC
 from its monitoring console.

 You can do this, but it won't close any CLOSE_WAIT connections.
 Tomcat's timeout must be reached. I suspect that the timeout(s)
 are simply way too long.

 You can tune the network stack's timeout using sysctl, eg:
 net.ipv4.tcp_tw_reuse = 1 net.ipv4.tcp_tw_recycle = 1
 net.ipv4.tcp_fin_timeout = 3

 This won't do anything, either. As far as the OS is concerned, the
 application (Tomcat) is still using that connection. Therefore it
 can't be cleaned up.


 Indeed you are right, tuning the network stack help with TIME_WAIT, not
 CLOSE_WAIT, my bad.


 Tomcat has to actively hang up the connection, and the best way to do
 that is with synchronized timeouts between the reverse proxy and Tomcat.

 You can try all other kinds of tricks, but the fact of the matter is
 that the application is still trying to use the socket, so no other
 component can step-in and kill it.


 Probably the application is slow then, or the server overloaded.



 -chris








Re: File descriptors peaks with latest stable build of Tomcat 7

2015-04-22 Thread Thomas Boniface
What concerns me the most is the CLOSE_WAIT on the Tomcat side, because when
an fd peak appears the web application appears to be stuck. It feels like
all its connections are consumed and none can be established from nginx
anymore. Shouldn't the CLOSE_WAIT connections be recycled to receive new
connections from nginx?

I am less concerned by the webapp-to-external-servers connections in
CLOSE_WAIT state; these connections are handled using HttpAsyncClient with a
pool that has a defined size and an eviction strategy (if too many
connections are open, new connection requests will only wait for, say,
100 ms before failing).
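For reference, a minimal sketch of that kind of pool configuration with
Apache HttpAsyncClient 4.x (the limits shown are illustrative values, not
the application's real settings):

    import org.apache.http.client.config.RequestConfig;
    import org.apache.http.impl.nio.client.CloseableHttpAsyncClient;
    import org.apache.http.impl.nio.client.HttpAsyncClients;

    RequestConfig rc = RequestConfig.custom()
            .setConnectionRequestTimeout(100) // ms to wait for a pooled connection
            .build();

    CloseableHttpAsyncClient client = HttpAsyncClients.custom()
            .setMaxConnTotal(200)     // overall pool size (assumed value)
            .setMaxConnPerRoute(50)   // per-host limit (assumed value)
            .setDefaultRequestConfig(rc)
            .build();
    client.start();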

We will look into the configuration you advise checking on the nginx and
Tomcat sides to see how it is set up.

Thanks
Thomas

2015-04-22 11:38 GMT+02:00 André Warnier a...@ice-sa.com:

 Rainer Jung wrote:

 Am 22.04.2015 um 00:08 schrieb André Warnier:
 ...

 The OP has a complex setup, where we are not even sure that the various
 connections in various states are even related directly to Tomcat or not.
 Graphically, we have this :

 client -- TCP -- nginx -- TCP -- Tomcat -- webapp -- TCP --
 external servers

 The output of netstat shows all the connections and their state, at the
 OS level.  Even assuming that nginx runs on a separate host, that still
 leaves the possibility that most of the connections in CLOSE_WAIT state
 for example, would be connections between the webapps and external
 servers, having not much to do with Tomcat per se.
 But of course they use fd's and resources, just like the others. And for
 lsof, they would appear as belonging to the Tomcat process.


 See my response from 1.5 days ago which contains the individual
 statistics for each of the above three TCP parts.


 Yes, sorry Rainer, I did not read that as carefully as I should have.

 And I do agree that the two main things which the OP should have a good
 look at, are
 - the nginx settings for keep-alive (client-nginx side)
 - the various webapp-to-external servers connections in CLOSE_WAIT state
 Collectively, these things must be using a lot of resources on the server,
 and probably slow it down significantly.







Re: File descriptors peaks with latest stable build of Tomcat 7

2015-04-21 Thread Thomas Boniface
)
    at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:105)
    at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:584)
at java.lang.Thread.run(Thread.java:745)

Also, I'm glad to get a lot of feedback that helps confirm my opinions or
gives me new paths to follow. Unfortunately, so far nothing seems to explain
why the peaks occur a lot with the latest release of Tomcat but only
occasionally with the previous one we used.

Thomas

2015-04-21 16:56 GMT+02:00 André Warnier a...@ice-sa.com:

 Thomas Boniface wrote:

 The file descriptor peaks show up in our monitoring application. We have
 some charts showing the number of file descriptors owned by the tomcat
 process (ls /proc/$(pgrep -u tomcat7)/fd/ | wc -l).

 The calatalina.out log shows errors, the most frequent being a
 java.io.IOException: Broken pipe.

  [..]

 A broken pipe, from the server perspective while sending a response to
 the client, is a rather usual thing.  It usually means that the (human)
 client got tired of waiting for a response, and clicked somewhere else in
 the browser (maybe a cancel button; maybe he closed the window; etc..).
 The browser would then immediately close the connection with the server,
 and when the server eventually tries to write anything else to that
 connection, the broken pipe exception would be the result.
 With the numbers you quoted previously regarding the number of
 simultaneous client sessions, it doesn't look extraordinary that this would
 happen regularly.
 Maybe the thing to investigate here is whether your server is really so
 slow in answering clients, that a significant portion of them do get tired
 of waiting and get an irresistible urge to click elsewhere..

 Apart from the human client, browsers themselves have a built-in timeout
 for waiting for a server response, and will themselves give up after a
 while.  That is on the order of 4-5 minutes after sending the request and
 not receiving anything from the server in response.
 Some applications are such that they can sometimes take more than that to
 be able to send a response.  In such cases, to avoid the browser timeout
 (and connection close), there are tricks to use, to send intermediate
 kind of wait message to the browser, so that it does not hang up.








Re: File descriptors peaks with latest stable build of Tomcat 7

2015-04-21 Thread Thomas Boniface
://www.filedropper.com/catalinatoday11


Thomas

2015-04-21 16:02 GMT+02:00 Christopher Schultz ch...@christopherschultz.net
:


 Thomas,

 On 4/20/15 8:11 AM, Thomas Boniface wrote:
  I have tried to find help regarding an issue we experience with
  our platform leading to random file descriptor peaks. This happens
  more often on heavy load but can also happen on low traffic
  periods.
 
  Our application is using servlet 3.0 async features and an async
  connector. We noticed that a lot of issues regarding asynchronous
  feature were fixed between our production version and the last
  stable build. We decided to give it a try to see if it improves
  things or at least give clues on what can cause the issue;
  Unfortunately it did neither.
 
  The file descriptor peaks and application blocking happens
  frequently with this version when it only happens rarely on
  previous version (tomcat7 7.0.28-4).
 
  Tomcat is behind an nginx server. The tomcat connector used is
  configured as follows:
 
  We use an Nio connector: Connector port=8080
  protocol=org.apache.coyote. http11.Http11NioProtocol
  selectorTimeout=1000 maxThreads=200 maxHttpHeaderSize=16384
  address=127.0.0.1 redirectPort=8443/

 The default maxConnections is 10000, so nginx can open that many
 connections before Tomcat starts to refuse them.

 How are you observing the file descriptor peak? Are you getting errors?

 -chris





File descriptors peaks with latest stable build of Tomcat 7

2015-04-20 Thread Thomas Boniface
Hi,

I have tried to find help regarding an issue we experience with our
platform leading to random file descriptor peaks. This happens more often
under heavy load but can also happen during low-traffic periods.

Our application is using Servlet 3.0 async features and an async connector.
We noticed that a lot of issues regarding asynchronous features were fixed
between our production version and the last stable build. We decided to
give it a try to see if it improves things or at least gives clues on what
can cause the issue; unfortunately it did neither.

The file descriptor peaks and application blocking happen frequently with
this version, while they only happened rarely with the previous version
(tomcat7 7.0.28-4).

Tomcat is behind an nginx server. The tomcat connector used is configured
as follows:

We use an NIO connector:

<Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"
           selectorTimeout="1000"
           maxThreads="200"
           maxHttpHeaderSize="16384"
           address="127.0.0.1"
           redirectPort="8443"/>

In catalina.out I can see some broken pipe messages that were not happening
with the previous version.

I compared thread dumps from servers with both the new and the old version
of Tomcat, and both look similar from my standpoint.

My explanation may not be very clear, but I hope this gives an idea of what
we are experiencing. Any pointer would be welcome.

Thomas


Re: File descriptors peaks with latest stable build of Tomcat 7

2015-04-20 Thread Thomas Boniface
I did some captures during a peak this morning, I have some lsof and
netstat data.

It seems to me that most file descriptors used by tomcat are some http
connections:

 thomas@localhost  ~/ads3/tbo11h12  cat lsof| wc -l
17772
 thomas@localhost  ~/ads3/tbo11h12  cat lsof | grep TCP | wc -l
13966

(Note that the application also sends requests to external servers via HTTP.)


Regarding netstat, I did a small script to try to aggregate connections with
human-readable names; if my script is right, the connections between nginx
and tomcat are as follows:

tomcat = nginx SYN_RECV 127
tomcat = nginx ESTABLISHED 1650
tomcat = nginx CLOSE_WAIT 8381
tomcat = nginx TIME_WAIT 65

nginx = tomcat SYN_SENT 20119
nginx = tomcat ESTABLISHED 4692
nginx = tomcat TIME_WAIT 122
nginx = tomcat FIN_WAIT2 488
nginx = tomcat FIN_WAIT1 13

Concerning the other response and the system's maximum number of files, I am
not sure this is where our issue lies. The peak itself seems to be a symptom
of an issue: tomcat fds are around 1000 almost all the time, except when a
peak occurs. In such cases it can go up to 10000 or more sometimes.

Thomas



2015-04-20 15:41 GMT+02:00 Rainer Jung rainer.j...@kippdata.de:

 Am 20.04.2015 um 14:11 schrieb Thomas Boniface:

 Hi,

 I have tried to find help regarding an issue we experience with our
 platform leading to random file descriptor peaks. This happens more often
 on heavy load but can also happen on low traffic periods.

 Our application is using servlet 3.0 async features and an async
 connector.
 We noticed that a lot of issues regarding asynchronous feature were fixed
 between our production version and the last stable build. We decided to
 give it a try to see if it improves things or at least give clues on what
 can cause the issue; Unfortunately it did neither.

 The file descriptor peaks and application blocking happens frequently with
 this version when it only happens rarely on previous version (tomcat7
 7.0.28-4).

 Tomcat is behind an nginx server. The tomcat connector used is configured
 as follows:

 We use an Nio connector:
 Connector port=8080 protocol=org.apache.coyote.
 http11.Http11NioProtocol
selectorTimeout=1000
maxThreads=200
maxHttpHeaderSize=16384
address=127.0.0.1
redirectPort=8443/

 In catalina I can see some Broken pipe message that were not happening
 with
 previous version.

 I compared thread dumps from server with both the new and old version of
 tomcat and both look similar from my stand point.

 My explanation may not be very clear, but I hope this gives an idea how
 what we are experiencing. Any pointer would be welcomed.


 If the peaks happen long enough and your platforms has the tools available
 you can use lsof to look for what those FDs are - or on Linux looking at
 ls -l /proc/PID/fd/* (PID is the process PID file) - or on Solaris use
 the pfiles command.

 If the result is what is expected, namely that by far the most FDs are
 coming from network connections for port 8080, then you can check via
 netstat in which connection state those are.

 If most are in ESTABLISHED state, then you/we need to further break down
 the strategy.

 Regards,

 Rainer






Re: File descriptors peaks with latest stable build of Tomcat 7

2015-04-20 Thread Thomas Boniface
Hi,

Both nginx and tomcat are hosted on the same server; when listing the
connections I see both the connections from nginx to tomcat (the ones
created first) and the ones from tomcat to nginx used to reply. I may have
presented things the wrong way though (I'm not too good at the system
level).

I do agree the high number of CLOSE_WAIT seems strange. I really feel like
nginx closed the connection before tomcat did (which I think leads to the
broken pipe exceptions observed in catalina.out). In case someone wants to
have a look, I uploaded a netstat log here:
http://www.filedropper.com/netsat

Thomas

2015-04-20 17:13 GMT+02:00 André Warnier a...@ice-sa.com:

 Thomas Boniface wrote:

 I did some captures during a peak this morning, I have some lsof and
 netstat data.

 It seems to me that most file descriptors used by tomcat are some http
 connections:

  thomas@localhost  ~/ads3/tbo11h12  cat lsof| wc -l
 17772
  thomas@localhost  ~/ads3/tbo11h12  cat lsof | grep TCP | wc -l
 13966

 (Note that the application also send request to external servers via http)


 Regarding netstat I did a small script to try to aggregate connections
 with
 a human readable name, if my script is right the connections between nginx
 and tomcat are as follows:

 tomcat = nginx SYN_RECV 127
 tomcat = nginx ESTABLISHED 1650
 tomcat = nginx CLOSE_WAIT 8381
 tomcat = nginx TIME_WAIT 65

 nginx = tomcat SYN_SENT 20119
 nginx = tomcat ESTABLISHED 4692
 nginx = tomcat TIME_WAIT 122
 nginx = tomcat FIN_WAIT2 488
 nginx = tomcat FIN_WAIT1 13


 I don't understand the distinction here.  Tomcat should never initiate
 connections *to* nginx, or ?

 For personal historical reasons, the high number of connections in
 CLOSE_WAIT state above triggered my interest.  Search Google for : tcp
 close_wait state meaning
 Basically, it can mean that the client wants to go away, and closes its
 end of the connection to the server, but the application on the server
 never properly closes the connection to the client. And as long as it
 doesn't, the corresponding connection will remain stuck in the CLOSE_WAIT
 state (and continue to use resources on the server, such as an fd and
 associated resources).
 All that doesn't mean that this is your main issue here, but it's
 something to look into.




 Concerning the other response and the system max number of file, I am not
 sure this is where our issue lies. The peak itself seems to be a sympton
 of
 an issue, tomcat fd are around 1000 almost all the time except when a peak
 occurs. In such cases it can go up to 1 or more sometimes.

 Thomas



 2015-04-20 15:41 GMT+02:00 Rainer Jung rainer.j...@kippdata.de:

  Am 20.04.2015 um 14:11 schrieb Thomas Boniface:

  Hi,

 I have tried to find help regarding an issue we experience with our
 platform leading to random file descriptor peaks. This happens more
 often
 on heavy load but can also happen on low traffic periods.

 Our application is using servlet 3.0 async features and an async
 connector.
 We noticed that a lot of issues regarding asynchronous feature were
 fixed
 between our production version and the last stable build. We decided to
 give it a try to see if it improves things or at least give clues on
 what
 can cause the issue; Unfortunately it did neither.

 The file descriptor peaks and application blocking happens frequently
 with
 this version when it only happens rarely on previous version (tomcat7
 7.0.28-4).

 Tomcat is behind an nginx server. The tomcat connector used is
 configured
 as follows:

 We use an Nio connector:
 Connector port=8080 protocol=org.apache.coyote.
 http11.Http11NioProtocol
selectorTimeout=1000
maxThreads=200
maxHttpHeaderSize=16384
address=127.0.0.1
redirectPort=8443/

 In catalina I can see some Broken pipe message that were not happening
 with
 previous version.

 I compared thread dumps from server with both the new and old version
 of
 tomcat and both look similar from my stand point.

 My explanation may not be very clear, but I hope this gives an idea how
 what we are experiencing. Any pointer would be welcomed.

  If the peaks happen long enough and your platforms has the tools
 available
 you can use lsof to look for what those FDs are - or on Linux looking at
 ls -l /proc/PID/fd/* (PID is the process PID file) - or on Solaris use
 the pfiles command.

 If the result is what is expected, namely that by far the most FDs are
 coming from network connections for port 8080, then you can check via
 netstat in which connection state those are.

 If most are in ESTABLISHED state, then you/we need to further break down
 the strategy.

 Regards,

 Rainer








Re: File descriptors peaks with latest stable build of Tomcat 7

2015-04-20 Thread Thomas Boniface
Thanks for your time Rainer,

I get what you mean regarding the application getting slow. This server was
also logging garbage collection activity, and it seems normal even when the
problem is occurring; there is no big variation in the time taken to do a
garbage collection operation.

I don't have a clear view of the server response time around the test I
made, so I can't tell if the application gets slow before the file
descriptor peak, but as mentioned before this also happens during
low-traffic periods (and in such a period there should be no reason to get
slow). Also, it feels unexpected that this version of Tomcat makes the
application get slow more often than a server with the other version of
Tomcat.


2015-04-20 16:32 GMT+02:00 Rainer Jung rainer.j...@kippdata.de:

 On 20.04.2015 15:41, Rainer Jung wrote:

 On 20.04.2015 14:11, Thomas Boniface wrote:

 Hi,

 I have tried to find help regarding an issue we experience with our
 platform leading to random file descriptor peaks. This happens more often
 under heavy load but can also happen during low-traffic periods.

 Our application uses servlet 3.0 async features and an async connector.
 We noticed that a lot of issues regarding the asynchronous features were
 fixed between our production version and the last stable build. We decided
 to give it a try to see if it would improve things or at least give clues
 about what causes the issue; unfortunately it did neither.

 The file descriptor peaks and application blocking happen frequently with
 this version, whereas they only happened rarely with the previous version
 (tomcat7 7.0.28-4).

 Tomcat is behind an nginx server. The tomcat connector used is configured
 as follows:

 We use an Nio connector:
 <Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"
            selectorTimeout="1000"
            maxThreads="200"
            maxHttpHeaderSize="16384"
            address="127.0.0.1"
            redirectPort="8443"/>

 In catalina.out I can see some Broken pipe messages that were not
 happening with the previous version.

 I compared thread dumps from servers running the new and the old version
 of tomcat, and both look similar from my standpoint.

 My explanation may not be very clear, but I hope this gives an idea of
 what we are experiencing. Any pointer would be welcome.


  If the peaks last long enough and your platform has the tools available,
  you can use lsof to see what those FDs are - or on Linux look at
  ls -l /proc/PID/fd/* (where PID is the process id) - or on Solaris use
  the pfiles command.

  If the result is what is expected, namely that by far most of the FDs
  come from network connections on port 8080, then you can check via
  netstat which connection state those are in.

 If most are in ESTABLISHED state, then you/we need to further break down
 the strategy.


  One more thing: the connection peak might happen if for some reason your
 application or the JVM (GC) gets slow. The cause doesn't have to still be
 present at the time you take the thread dump.

  You might want to add %D to your Tomcat access log and try to estimate
 whether the connection peaks are due to a (temporary) application slowdown.

  The same holds for activating a GC log and checking for long or
 cumulatively many GC pauses.
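
 To illustrate the %D suggestion, this is roughly what the access-log valve
 could look like in server.xml (directory/prefix/suffix here are just the
 usual defaults used as placeholders; %D logs the time taken to process the
 request in milliseconds):

 <Valve className="org.apache.catalina.valves.AccessLogValve"
        directory="logs" prefix="localhost_access_log." suffix=".txt"
        pattern="%h %l %u %t &quot;%r&quot; %s %b %D" />

 For the GC log, JVM flags along the lines of -verbose:gc,
 -XX:+PrintGCDetails and -Xloggc:<file> (on the Java 7/8 JVMs of that era),
 added to CATALINA_OPTS, would produce the pause information mentioned above.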


 Regards,

 Rainer





Re: Tomcat 7 upgrade from 7.0.28-4 to 7.0.56-1 causes application freeze

2015-03-30 Thread Thomas Boniface
Hi Christopher,

Here are responses to your questions:

No thread dump was made while the issue was occurring, unfortunately.

We use an Nio connector:
<Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"
           selectorTimeout="1000"
           maxThreads="200"
           maxHttpHeaderSize="16384"
           address="127.0.0.1"
           redirectPort="8443"/>

According to the catalina log, the Broken pipes happened when trying to write
the response:
java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:65)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:487)
at org.apache.tomcat.util.net.NioChannel.write(NioChannel.java:128)
at
org.apache.tomcat.util.net.NioBlockingSelector.write(NioBlockingSelector.java:101)
at
org.apache.tomcat.util.net.NioSelectorPool.write(NioSelectorPool.java:174)
at
org.apache.coyote.http11.InternalNioOutputBuffer.writeToSocket(InternalNioOutputBuffer.java:163)
at
org.apache.coyote.http11.InternalNioOutputBuffer.flushBuffer(InternalNioOutputBuffer.java:242)
at
org.apache.coyote.http11.InternalNioOutputBuffer.endRequest(InternalNioOutputBuffer.java:121)
at
org.apache.coyote.http11.AbstractHttp11Processor.action(AbstractHttp11Processor.java:761)
at org.apache.coyote.Response.action(Response.java:174)
at org.apache.coyote.Response.finish(Response.java:274)
at
org.apache.catalina.connector.OutputBuffer.close(OutputBuffer.java:319)
at
org.apache.catalina.connector.CoyoteWriter.close(CoyoteWriter.java:112)
at
networkComm.commands.HttpCommand.sendResponse(HttpCommand.java:224)


We do handle requests from mobile devices, but this is not the biggest part
of the traffic; I don't have information on how many requests that could
represent for the test we made.

Thanks,
Thomas






2015-03-27 14:08 GMT+01:00 Christopher Schultz ch...@christopherschultz.net
:


 Thomas,

 On 3/26/15 12:16 PM, Thomas Boniface wrote:
  Following the update of tomcat 7 from 7.0.28-4 to 7.0.56-1, made in
  the hope of improving our application's performance, we experienced a
  quick freeze of the application under a standard load (one that was
  handled by the previous tomcat version without particular issues).
 
  Our application is behind an nginx server, receives a high volume
  of HTTP requests from clients, and uses servlet 3.0 async
  features.
 
  While investigating we made the following observations: 1/ catalina.out
  shows a high number of IOException Broken pipe. 2/ a high number of
  CLOSE_WAIT connections appear between nginx and tomcat.
 
  I have uploaded a catalina log, an lsof and a netstat capture made
  while the issue was happening:
  http://s000.tinyupload.com/index.php?file_id=85064286189504330545
  (email size limit prevented me from attaching them to the
  message). Any pointer is welcome. Also, is there a way to get more
  details on the connection status of Tomcat?

 What Connector are you using?

 Can you take a thread dump to see if threads are stuck somewhere?
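
  For reference, besides jstack or "kill -3 <pid>", a dump can also be taken
  from inside the JVM with the standard ThreadMXBean API; a minimal,
  hypothetical sketch (not something from this thread, class name invented):

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Prints all live threads with held monitors/synchronizers. Note that
// ThreadInfo.toString() truncates very deep stacks, so jstack and
// "kill -3" remain the more complete options.
public class ThreadDumper {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        for (ThreadInfo info : threads.dumpAllThreads(true, true)) {
            System.out.print(info);
        }
    }
}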

 For the broken pipes... are they failing during reads or writes? Are
 you handling requests from a large number of limited- or
 unreliable-bandwidth clients -- like mobile devices?

 -chris





Tomcat 7 upgrade from 7.0.28-4 to 7.0.56-1 causes application freeze

2015-03-26 Thread Thomas Boniface
Following the update of tomcat 7 from 7.0.28-4 to 7.0.56-1, made in the hope
of improving our application's performance, we experienced a quick freeze of
the application under a standard load (one that was handled by the previous
tomcat version without particular issues).

Our application is behind an nginx server, receives a high volume of HTTP
requests from clients, and uses servlet 3.0 async features.

While investigating we made the following observations:
1/ catalina.out shows a high number of IOException Broken pipe.
2/ a high number of CLOSE_WAIT connections appear between nginx and tomcat.

I have uploaded a catalina log, an lsof and a netstat capture made while
the issue was happening:
http://s000.tinyupload.com/index.php?file_id=85064286189504330545 (email
size limit prevented me from attaching them to the message).
Any pointer is welcome. Also, is there a way to get more details on the
connection status of Tomcat?

Thanks,
Thomas


Connections stacking between tomcat and nginx

2015-03-04 Thread Thomas Boniface
Hi,

We have a setup where tomcat is behind nginx and in some cases the maximum
number of connections is reached between the two. These connections are in
CLOSE_WAIT state and a tomcat thread dump shows no blocking on its side.

I tried to check the connection status using jmxproxy. I was able to see the
number of active connections, but I could not find any details about what the
connections are doing or what their status is internally in tomcat. Is this
kind of detail available through JMX, or is there any other path I could
follow in order to get more details on the connection state at the tomcat
level?
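
For what it's worth, the per-request MBeans that back the manager status page
can be read over JMX. Below is a minimal, hypothetical sketch; the object-name
pattern, the attribute names (stage, currentUri, requestProcessingTime) and
the JMX port 9010 are assumptions to verify with jconsole against the actual
Catalina domain, not something taken from this thread:

import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Lists what each Tomcat request processor is currently doing:
// a numeric processing stage, the URI being served and the time
// spent on the current request so far.
public class ConnectorStatus {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9010/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            Set<ObjectName> processors = mbs.queryNames(
                    new ObjectName("Catalina:type=RequestProcessor,*"), null);
            for (ObjectName name : processors) {
                System.out.printf("%s stage=%s uri=%s elapsedMs=%s%n",
                        name.getKeyProperty("worker"),
                        mbs.getAttribute(name, "stage"),
                        mbs.getAttribute(name, "currentUri"),
                        mbs.getAttribute(name, "requestProcessingTime"));
            }
        }
    }
}

The jmxproxy servlet can query the same beans without a standalone client
(e.g. a qry parameter of Catalina:type=RequestProcessor,*), which may be
enough to see whether the stuck connections are parked in a particular stage.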

Thanks,
Thomas