Re: Serious non-native AJP connector issue

2007-06-14 Thread Bill Barker

"Jess Holle" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]
> Okay that's all starting to make some sense, but it is hard to see how
> someone would come to this understanding from the documentation.
>
> I also don't see how "connectionTimeout" will help here in that the
> documentation says:
>
>    The number of milliseconds this *Connector* will wait, after
>    accepting a connection, for the request URI line to be presented.
>    The default value is infinite (i.e. no timeout).
>

In practice, this is a timeout on the wait for the next request on a 
keep-alive connection (since request-body processing usually happens fast).  
If this particular connection in the pool isn't reused in the time allowed, 
then Tomcat will close it and put the thread back in the queue to process 
new connections.  Using CPing/CPong is recommended in this case so that 
httpd can tell that Tomcat has hung up the phone.  So, if httpd chooses not 
to reuse this connection for a long time after the first request, the thread 
can recycle itself.
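As a sketch of the Tomcat side (the numbers are purely illustrative, not 
recommendations), this just means putting a connectionTimeout on the AJP 
<Connector>; CPing/CPong itself is enabled on the httpd side rather than in 
server.xml:

    <!-- Illustrative only: close an AJP keep-alive connection that has been
         idle for 60 seconds, so its thread goes back into the pool. -->
    <Connector port="8010" protocol="AJP/1.3"
               maxThreads="200"
               connectionTimeout="60000" />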

> If a connection is formed between Apache and Tomcat for 24 requests and
> 17 are immediately processed due to a maxThreads limit of 18, how would
> setting a low "connectionTimeout" help the 17 threads process the other
> 7 requests?
>

It would cause the TC threads listening for a new request to believe that it 
will never come, and they will throw themselves back into the pool.

> I'm trying to understand if there is any useful throttling configuration
> in which the Java AJP connector has a maxThreads less than Apache's max
> AJP connections -- plus 1.  Put another way, I'm not quite understanding
> any useful "acceptCount" scenario from the "connectionTimeout"
> description above.
>

Using acceptCount is normally useless.  The one case where it does help is 
when you suddenly get a very large number of hits to your app, faster than 
Tomcat can handle them.

> I know the Java AJP connector uses a thread-per-connection model.  I had
> assumed that the "maxThreads" meant maximum /active request processing
> /threads (e.g. like some old Tomcat releases used to call this
> "maxProcessors") and that connections covered by "acceptCount" were
> still allocated threads and were still accepted (as the parameter name
> implies).  I'd further assumed that a fair blocking queue arrangement
> would allow "maxThreads" connection threads to run and keep the others
> waiting until their turn.  I don't see anything in the documentation to
> the contrary of this.  Further I'm trying to understand any sort of
> arrangement with "connectionTimeout" that would give this level of
> utility to "acceptCount".  As it stands it seems like users can easily
> produce devastatingly bad behavior by making assumptions about
> "acceptCount" that seem quite logical from the documentation.
>

The acceptCount is only useful to handle unexpected floods of requests. 
Using the connectionTimeout means that the Tomcat thread won't just sit and 
listen forever for that particular socket connection to send another request, 
and will be freed up to be re-assigned to another socket connection.  With 
any reasonable setting, the connectionTimeout is just how long Tomcat will 
maintain a keep-alive with httpd (which is very different from how long 
httpd will keep a keep-alive with the client).
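Put concretely (again just a sketch with made-up numbers), the two settings 
do different jobs: acceptCount only queues not-yet-accepted connections while 
every processing thread is busy, whereas connectionTimeout bounds how long an 
already-accepted, idle keep-alive connection may hold on to its thread:

    <!-- Sketch: maxThreads bounds concurrency, acceptCount is only a short
         backlog for sudden bursts, and connectionTimeout caps how long an
         idle keep-alive connection is kept open. -->
    <Connector port="8010" protocol="AJP/1.3"
               maxThreads="200" acceptCount="50"
               connectionTimeout="60000" />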

> Obviously the APR connector can do much better by /not/ allocating
> threads to connections beyond "maxThreads".  That's great, but juggling
> native builds for many different OS's can be a real issue (which is why
> I still hold out hope that the NIO connector will come through with
> something better than the non-NIO Java connector's performance even if
> it is not APR-level performance).
>

It seems that we had lost the NIO/AJP connector from the Tomcat build, but 
since I recognize you from [EMAIL PROTECTED], you will have seen that 
already :).  Last time I tested, the experimental NIO/AJP connector performed 
better than either the default or the APR connector on Solaris (surprise, 
surprise), and came in dead last on Windows.

> I don't mean to be argumentative -- I'm just really struggling to
> understand and hopefully preventing future misunderstandings through
> more clarity in the documentation.
>

With the documentation, as with the code, patches are always welcome :).


> --
> Jess Holle
>
> Bill Barker wrote:
>> "Jess Holle" <[EMAIL PROTECTED]> wrote in message
>> news:[EMAIL PROTECTED]
>>
>>> There's no intent to handle this case?  Really?
>>>
>>> Tomcat should throw an "IllegalArgumentException" whenever 'acceptCount'
>>> is set to anything other than 0 explaining this if this is the case!
>>>
>>> If so, this is very unfortunate.  We use "maxThreads" as a throttle to
>>> limit the concurrency at this level under the (silly?) assumption that
>>> "acceptCount" behaves as documented in the documentation.  [Yes, one
>>> could argue that a separate throttle should be used behind this layer,
>>> which we might have been inclined to do if

svn commit: r547503 - /tomcat/connectors/trunk/jk/build.xml

2007-06-14 Thread billbarker
Author: billbarker
Date: Thu Jun 14 20:01:36 2007
New Revision: 547503

URL: http://svn.apache.org/viewvc?view=rev&rev=547503
Log:
re-include the experimental NIO/AJP connector in the build

Modified:
tomcat/connectors/trunk/jk/build.xml

Modified: tomcat/connectors/trunk/jk/build.xml
URL: 
http://svn.apache.org/viewvc/tomcat/connectors/trunk/jk/build.xml?view=diff&rev=547503&r1=547502&r2=547503
==
--- tomcat/connectors/trunk/jk/build.xml (original)
+++ tomcat/connectors/trunk/jk/build.xml Thu Jun 14 20:01:36 2007
@@ -205,7 +205,7 @@
 
 
 
-   
+   









Re: Serious non-native AJP connector issue

2007-06-14 Thread Jess Holle
Okay that's all starting to make some sense, but it is hard to see how 
someone would come to this understanding from the documentation.


I also don't see how "connectionTimeout" will help here in that the 
documentation says:

   The number of milliseconds this *Connector* will wait, after
   accepting a connection, for the request URI line to be presented.
   The default value is infinite (i.e. no timeout).

If a connection is formed between Apache and Tomcat for 24 requests and 
17 are immediately processed due to a maxThreads limit of 18, how would 
setting a low "connectionTimeout" help the 17 threads process the other 
7 requests?


I'm trying to understand if there is any useful throttling configuration 
in which the Java AJP connector has a maxThreads less than Apache's max 
AJP connections -- plus 1.  Put another way, I'm not quite understanding 
any useful "acceptCount" scenario from the "connectionTimeout" 
description above.


I know the Java AJP connector uses a thread-per-connection model.  I had 
assumed that the "maxThreads" meant maximum /active request processing 
/threads (e.g. like some old Tomcat releases used to call this 
"maxProcessors") and that connections covered by "acceptCount" were 
still allocated threads and were still accepted (as the parameter name 
implies).  I'd further assumed that a fair blocking queue arrangement 
would allow "maxThreads" connection threads to run and keep the others 
waiting until their turn.  I don't see anything in the documentation to 
the contrary of this.  Further I'm trying to understand any sort of 
arrangement with "connectionTimeout" that would give this level of 
utility to "acceptCount".  As it stands it seems like users can easily 
produce devastatingly bad behavior by making assumptions about 
"acceptCount" that seem quite logical from the documentation.


Obviously the APR connector can do much better by /not/ allocating 
threads to connections beyond "maxThreads".  That's great, but juggling 
native builds for many different OS's can be a real issue (which is why 
I still hold out hope that the NIO connector will come through with 
something better than the non-NIO Java connector's performance even if 
it is not APR-level performance).


I don't mean to be argumentative -- I'm just really struggling to 
understand and hopefully preventing future misunderstandings through 
more clarity in the documentation.


--
Jess Holle

Bill Barker wrote:
"Jess Holle" <[EMAIL PROTECTED]> wrote in message 
news:[EMAIL PROTECTED]
  

There's no intent to handle this case?  Really?

Tomcat should throw an "IllegalArgumentException" whenever 'acceptCount'
is set to anything other than 0 explaining this if this is the case!

If so, this is very unfortunate.  We use "maxThreads" as a throttle to
limit the concurrency at this level under the (silly?) assumption that
"acceptCount" behaves as documented in the documentation.  [Yes, one
could argue that a separate throttle should be used behind this layer,
which we might have been inclined to do if the documentation said
"acceptCount does not work".]




You are misunderstanding how AJP works.  Since you don't have a 
connectionTimeout on the <Connector>, the connections to httpd stay alive 
waiting to get another request on the same socket.  As a result, there won't 
be any free Threads to handle a new connection so it doesn't matter what 
acceptCount is.  There isn't anyone there to accept them.
  




Re: Serious non-native AJP connector issue

2007-06-14 Thread Bill Barker

"Jess Holle" <[EMAIL PROTECTED]> wrote in message 
news:[EMAIL PROTECTED]
> There's no intent to handle this case?  Really?
>
> Tomcat should throw an "IllegalArgumentException" whenever 'acceptCount'
> is set to anything other than 0 explaining this if this is the case!
>
> If so, this is very unfortunate.  We use "maxThreads" as a throttle to
> limit the concurrency at this level under the (silly?) assumption that
> "acceptCount" behaves as documented in the documentation.  [Yes, one
> could argue that a separate throttle should be used behind this layer,
> which we might have been inclined to do if the documentation said
> "acceptCount does not work".]
>

You are misunderstanding how AJP works.  Since you don't have a 
connectionTimeout on the <Connector>, the connections to httpd stay alive 
waiting to get another request on the same socket.  As a result, there won't 
be any free Threads to handle a new connection so it doesn't matter what 
acceptCount is.  There isn't anyone there to accept them.

> Further, I've been able to reproduce such issues using a concurrency of
> exactly the "maxThreads".  At *best* there is an off-by-one error here,
> but I've still seen problems in some testing using a concurrency level 2
> or 3 less than "maxThreads" -- which leads me to believe there is
> something a bit more nefarious going on here.  Another of my colleagues
> claims that he's encountered issues in simple ab tests quite a ways
> below "maxThreads".  I've not managed to reproduce this case myself, but
> I could certainly focus on this case if the Tomcat contributors have no
> intent to fix "acceptCount" behavior for the Java AJP connector.
>

The thread waiting to accept new connections counts as a thread for the 
purposes of maxThreads, so yes it is one less.  And, I can save you the 
trouble of investigating, since as long as your maxThreads <= pool size in 
httpd you will see this sort of thing.
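To make that concrete with the numbers from this thread (BalancerMember ... 
max=300), the Java AJP connector would need something like the following, 
where the extra one is the thread sitting in accept(); a sketch, not a 
tested configuration:

    <!-- httpd in this thread allows up to 300 AJP connections, and the Java
         connector needs one thread per connection plus the acceptor thread. -->
    <Connector port="8010" protocol="AJP/1.3"
               maxThreads="301"
               connectionTimeout="60000" />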

> Is there a firm intent to handle this case in the _/native/_ AJP
> connector?  If so, then this distinction should be clearly stated (and
> an IllegalArgumentException thrown as per above when the native
> connector cannot be found/initialized).
>

The APR/AJP connector (as well as the experimental NIO/AJP connector) uses a 
many-to-one connection-to-thread setup, so you can have a large number of 
connections sitting and doing nothing while using relatively few threads.  With 
the default Java connector the mapping is one-to-one.
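For reference, and assuming the Tomcat 6 way of naming the protocol class 
explicitly (on 5.5 the APR variant is normally picked up automatically when 
the tcnative library is present and protocol="AJP/1.3" is used), selecting 
the many-to-one APR implementation is just a different protocol value; treat 
the class name below as an assumption to verify against your version:

    <!-- Assumes Tomcat 6's APR AJP protocol class; many idle connections can
         then be parked while relatively few threads do the actual work. -->
    <Connector port="8010"
               protocol="org.apache.coyote.ajp.AjpAprProtocol"
               maxThreads="50" />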

> Personally I see utility in having "acceptCount" working in the AJP
> case.  If one exceeds the mod_proxy_ajp hard maximum the client
> immediately gets a 503.  There are cases when just queuing a few
> requests makes more sense than either returning 503's or letting them
> rip -- which is where acceptCount comes in.  [Granted the acceptCount
> numbers below are excessive, but that's another matter...]
>

It's not that acceptCount doesn't work, it's that without a 
connectionTimeout each thread waits forever listening on its connection for 
the next request to come in.  With a connectionTimeout, once httpd hasn't 
sent another request on this connection after the timeout, the thread is 
recycled and can handle a new connection.

> --
> Jess Holle
> [EMAIL PROTECTED]
>
> Bill Barker wrote:
>> Yes, since you know in advance how many connections you are going to get 
>> to
>> the AJP connector, you can configure it so that it has enough threads to
>> handle all of those connections.  That is why it doesn't attempt to 
>> handle
>> the case when the concurrency goes above maxThreads
>>> -Original Message-
>>> From: Jess Holle [mailto:[EMAIL PROTECTED]
>>> Sent: Thursday, June 14, 2007 2:58 PM
>>> To: Tomcat Developers List
>>> Cc: Dobson, Simon; Wang, Andy; Fenlason, Josh
>>> Subject: Serious non-native AJP connector issue
>>>
>>> We're facing a /serious /issue with the non-native AJP connector.
>>>
>>> To put it most simply, some requests seem to "get lost" in Tomcat in
>>> various cases involving concurrent requests -- and not
>>> egregious numbers
>>> of concurrent requests, either.
>>>
>>> For instance,
>>>
>>>1. Use a Tomcat 5.5.23 with a configuration like:
>>>
>>> >>minSpareThreads="4" maxSpareThreads="12"
>>> maxThreads="18" acceptCount="282"
>>>tomcatAuthentication="false"
>>> useBodyEncodingForURI="true" URIEncoding="UTF-8"
>>>enableLookups="false" redirectPort="8443"
>>> protocol="AJP/1.3" />
>>>
>>> (which are intended solely for making it easier to test
>>> concurrency
>>> issues that to overrun a realistic 'maxThreads' setting) and as a
>>> control, similar thread pool settings on the direct HTTP
>>> connector:
>>>
>>> >>minSpareThreads="4" maxSpareThreads="12"
>>> maxThreads="18" acceptCount="282"
>>>enableLookups="false" redirectPort="8443"
>>>connectionTimeout="2"
>>> disableUploadTimeout="true" />
>>>
>

Re: Serious non-native AJP connector issue

2007-06-14 Thread Jess Holle

There's no intent to handle this case?  Really?

Tomcat should throw an "IllegalArgumentException" whenever 'acceptCount' 
is set to anything other than 0 explaining this if this is the case!


If so, this is very unfortunate.  We use "maxThreads" as a throttle to 
limit the concurrency at this level under the (silly?) assumption that 
"acceptCount" behaves as documented in the documentation.  [Yes, one 
could argue that a separate throttle should be used behind this layer, 
which we might have been inclined to do if the documentation said 
"acceptCount does not work".]


Further, I've been able to reproduce such issues using a concurrency of 
exactly the "maxThreads".  At *best* there is an off-by-one error here, 
but I've still seen problems in some testing using a concurrency level 2 
or 3 less than "maxThreads" -- which leads me to believe there is 
something a bit more nefarious going on here.  Another of my colleagues 
claims that he's encountered issues in simple ab tests quite a ways 
below "maxThreads".  I've not managed to reproduce this case myself, but 
I could certainly focus on this case if the Tomcat contributors have no 
intent to fix "acceptCount" behavior for the Java AJP connector.


Is there a firm intent to handle this case in the _/native/_ AJP 
connector?  If so, then this distinction should be clearly stated (and 
an IllegalArgumentException thrown as per above when the native 
connector cannot be found/initialized).


Personally I see utility in having "acceptCount" working in the AJP 
case.  If one exceeds the mod_proxy_ajp hard maximum the client 
immediately gets a 503.  There are cases when just queuing a few 
requests makes more sense than either returning 503's or letting them 
rip -- which is where acceptCount comes in.  [Granted the acceptCount 
numbers below are excessive, but that's another matter...]


--
Jess Holle
[EMAIL PROTECTED]

Bill Barker wrote:

Yes, since you know in advance how many connections you are going to get to
the AJP connector, you can configure it so that it has enough threads to
handle all of those connections.  That is why it doesn't attempt to handle
the case when the concurrency goes above maxThreads

-Original Message-
From: Jess Holle [mailto:[EMAIL PROTECTED] 
Sent: Thursday, June 14, 2007 2:58 PM

To: Tomcat Developers List
Cc: Dobson, Simon; Wang, Andy; Fenlason, Josh
Subject: Serious non-native AJP connector issue

We're facing a /serious /issue with the non-native AJP connector.

To put it most simply, some requests seem to "get lost" in Tomcat in 
various cases involving concurrent requests -- and not 
egregious numbers 
of concurrent requests, either.


For instance,

   1. Use a Tomcat 5.5.23 with a configuration like:



(which are intended solely for making it easier to test 
concurrency

issues that to overrun a realistic 'maxThreads' setting) and as a
control, similar thread pool settings on the direct HTTP 
connector:


   connectionTimeout="2" 
disableUploadTimeout="true" />


   2. Use an Apache 2.2.4 with mod_proxy_ajp with a 
configuration like:



BalancerMember ajp://localhost:8010 min=16 max=300 smax=40
ttl=900 keepalive=Off timeout=900


RewriteEngine on
RewriteRule ^(/TestApp/(.*\.jsp(.*)|servlet/.*|.*\.jar))$
balancer://ajpWorker$1 [P]

(on Windows in this case; similar results can be obtained on Linux
at least)

   3. Use a simple test JSP page (placed in a web app 
containing nothing

  else):

<[EMAIL PROTECTED] session="false"
%><[EMAIL PROTECTED] contentType="text/html" pageEncoding="UTF-8"
%><%!
  private static final String  titleString = "Sleepy 
Test JSP Page";

%>

<%
  String  sleepSeconds = request.getParameter( "secs" );
  if ( sleepSeconds == null )
sleepSeconds = "1";
  long  secsToSleep = Long.parseLong( sleepSeconds );
  Thread.sleep( 1000L * secsToSleep );
%>


<%=titleString%>



<%=titleString%>: SUCCESS!
[Slept <%= secsToSleep %> seconds.]




   4. Hit the page with ab
  * First, test direct Tomcat connections:
o ab -n 24 -c 24 -A wcadmin:wcadmin
  http://hostname:*8080*/TestApp/test.jsp?secs=3
  + Result: All requests complete sucessfully in
6118 ms.
  * Second, test connections via Apache:
o ab -n 24 -c 24 -A wcadmin:wcadmin
  http://hostname/TestApp/test.jsp?secs=3
  + Result: Only 17 requests complete before ab
times out.
  * Third, test connections via Apache again soon (under the
BalancerMember 'timeout' seconds) after the last test
o ab -n 24 -c 24 -A wcadmin:wcadmin
  h

DO NOT REPLY [Bug 41766] - apache-tomcat-5.5.20-src.tar.gz includes broken class files

2007-06-14 Thread bugzilla

http://issues.apache.org/bugzilla/show_bug.cgi?id=41766


[EMAIL PROTECTED] changed:

   What|Removed |Added

 Status|NEEDINFO|RESOLVED
 Resolution||FIXED




--- Additional Comments From [EMAIL PROTECTED]  2007-06-14 17:33 ---
I have amended the build script to prevent this. The fix will apply to 5.5.25
onwards.




Re: 5.5.24 candidate binaries

2007-06-14 Thread Mark Thomas
Filip Hanik - Dev Lists wrote:
> http://people.apache.org/~fhanik/tomcat/tomcat-5.5/v5.5.24/
> will let these sit to mid next week, and then we can take a vote.
> feedback between now and then is welcome at any time.

One minor issue, the source zip (and I suspect the tarball) contains a
number of directories that it should not. Specifically:
apache-tomcat-5.5.24-src\connectors\jk\jkstatus\build\
apache-tomcat-5.5.24-src\connectors\jk\jkstatus\dist\

I have updated the build script to exclude these in future builds.

Mark





svn commit: r547465 - /tomcat/build/tc5.5.x/build.xml

2007-06-14 Thread markt
Author: markt
Date: Thu Jun 14 17:33:05 2007
New Revision: 547465

URL: http://svn.apache.org/viewvc?view=rev&rev=547465
Log:
Fix bug 41766. Don't include non-source dirs in source distro.

Modified:
tomcat/build/tc5.5.x/build.xml

Modified: tomcat/build/tc5.5.x/build.xml
URL: 
http://svn.apache.org/viewvc/tomcat/build/tc5.5.x/build.xml?view=diff&rev=547465&r1=547464&r2=547465
==
--- tomcat/build/tc5.5.x/build.xml (original)
+++ tomcat/build/tc5.5.x/build.xml Thu Jun 14 17:33:05 2007
@@ -9,6 +9,7 @@
   
   
 
+
   
 
   
@@ -1441,6 +1442,8 @@
 
 
 
+
+
 
 
 






DO NOT REPLY [Bug 42593] - Win32 Apache/jk/tomcat configuration causes 100% cpu usage

2007-06-14 Thread bugzilla

http://issues.apache.org/bugzilla/show_bug.cgi?id=42593


[EMAIL PROTECTED] changed:

   What|Removed |Added

 Status|NEW |RESOLVED
 Resolution||WORKSFORME




--- Additional Comments From [EMAIL PROTECTED]  2007-06-14 16:48 ---
Unfortunately not. This works for me on WinXP SP2, httpd 2.2.4, mod_jk built a
few days ago from svn and the latest 5.5.x from svn.

Is there something in your set-up you didn't mention? Did you use the native
connector for Tomcat? If so, did you use the same version for TC5 and TC6?




DO NOT REPLY [Bug 42574] - clients blocked on a server

2007-06-14 Thread bugzilla

http://issues.apache.org/bugzilla/show_bug.cgi?id=42574


[EMAIL PROTECTED] changed:

   What|Removed |Added

 Status|NEW |RESOLVED
 Resolution||INVALID




--- Additional Comments From [EMAIL PROTECTED]  2007-06-14 16:41 ---
This sounds like an application problem. Please use the users list for advice.




RE: Serious non-native AJP connector issue

2007-06-14 Thread Bill Barker
Yes, since you know in advance how many connections you are going to get to
the AJP connector, you can configure it so that it has enough threads to
handle all of those connections.  That is why it doesn't attempt to handle
the case when the concurrency goes above maxThreads.

 

> -Original Message-
> From: Jess Holle [mailto:[EMAIL PROTECTED] 
> Sent: Thursday, June 14, 2007 2:58 PM
> To: Tomcat Developers List
> Cc: Dobson, Simon; Wang, Andy; Fenlason, Josh
> Subject: Serious non-native AJP connector issue
> 
> We're facing a /serious /issue with the non-native AJP connector.
> 
> To put it most simply, some requests seem to "get lost" in Tomcat in 
> various cases involving concurrent requests -- and not 
> egregious numbers 
> of concurrent requests, either.
> 
> For instance,
> 
>1. Use a Tomcat 5.5.23 with a configuration like:
> 
> minSpareThreads="4" maxSpareThreads="12"
> maxThreads="18" acceptCount="282"
>tomcatAuthentication="false"
> useBodyEncodingForURI="true" URIEncoding="UTF-8"
>enableLookups="false" redirectPort="8443"
> protocol="AJP/1.3" />
> 
> (which are intended solely for making it easier to test 
> concurrency
> issues that to overrun a realistic 'maxThreads' setting) and as a
> control, similar thread pool settings on the direct HTTP 
> connector:
> 
> minSpareThreads="4" maxSpareThreads="12"
> maxThreads="18" acceptCount="282"
>enableLookups="false" redirectPort="8443"
>connectionTimeout="2" 
> disableUploadTimeout="true" />
> 
>2. Use an Apache 2.2.4 with mod_proxy_ajp with a 
> configuration like:
> 
> 
> BalancerMember ajp://localhost:8010 min=16 max=300 smax=40
> ttl=900 keepalive=Off timeout=900
> 
> 
> RewriteEngine on
> RewriteRule ^(/TestApp/(.*\.jsp(.*)|servlet/.*|.*\.jar))$
> balancer://ajpWorker$1 [P]
> 
> (on Windows in this case; similar results can be obtained on Linux
> at least)
> 
>3. Use a simple test JSP page (placed in a web app 
> containing nothing
>   else):
> 
> <[EMAIL PROTECTED] session="false"
> %><[EMAIL PROTECTED] contentType="text/html" pageEncoding="UTF-8"
> %><%!
>   private static final String  titleString = "Sleepy 
> Test JSP Page";
> %>
> 
> <%
>   String  sleepSeconds = request.getParameter( "secs" );
>   if ( sleepSeconds == null )
> sleepSeconds = "1";
>   long  secsToSleep = Long.parseLong( sleepSeconds );
>   Thread.sleep( 1000L * secsToSleep );
> %>
> 
> <%=titleString%>
> 
> 
> 
> <%=titleString%>: SUCCESS!
> [Slept <%= secsToSleep %> seconds.]
> 
> 
> 
> 
>4. Hit the page with ab
>   * First, test direct Tomcat connections:
> o ab -n 24 -c 24 -A wcadmin:wcadmin
>   http://hostname:*8080*/TestApp/test.jsp?secs=3
>   + Result: All requests complete sucessfully in
> 6118 ms.
>   * Second, test connections via Apache:
> o ab -n 24 -c 24 -A wcadmin:wcadmin
>   http://hostname/TestApp/test.jsp?secs=3
>   + Result: Only 17 requests complete before ab
> times out.
>   * Third, test connections via Apache again soon (under the
> BalancerMember 'timeout' seconds) after the last test
> o ab -n 24 -c 24 -A wcadmin:wcadmin
>   http://hostname/TestApp/test.jsp?secs=3
>   + Result: Only 9 requests complete 
> before ab times
> out.
> 
> Something is clearly /horribly/ wrong with the handling of any 
> concurrency over 'maxThreads' in this case.  Even so, there 
> seems to be 
> some sort of "off-by-one" error in that only 17 requests 
> complete, not 
> 18.  Worse, this has a lingering effect that decreases 
> Tomcat's ability 
> to concurrent requests thereafter (with this impact seemingly 
> being much 
> worse the longer the BalancerMember timeout is set to -- and we have 
> some very long running requests and thus need this timeout to 
> be /very/ 
> large).
> 
> This is not the only ill effect we've seen when hitting Tomcat 5.5.24 
> with concurrent requests in this manner, but it is a good 
> place to start.
> 
> As for the native connector, it just works here.  So why don't I just 
> use it?  Well, first off, we have to support Tomcat on 
> Windows (32 and 
> 64-bit), Linux, Solaris, HPUX (PA-RISC and Itanium), and AIX.  We've 
> been unable to get the connector built on some of these 
> platforms and on 
> some we can't get the resulting binary to function (specifically on 
> AIX).  Further, we had some stability issues with the native 
> connec

Serious non-native AJP connector issue

2007-06-14 Thread Jess Holle

We're facing a /serious /issue with the non-native AJP connector.

To put it most simply, some requests seem to "get lost" in Tomcat in 
various cases involving concurrent requests -- and not egregious numbers 
of concurrent requests, either.


For instance,

  1. Use a Tomcat 5.5.23 with a configuration like:

   <Connector port="8010"
              minSpareThreads="4" maxSpareThreads="12"
              maxThreads="18" acceptCount="282"
              tomcatAuthentication="false"
              useBodyEncodingForURI="true" URIEncoding="UTF-8"
              enableLookups="false" redirectPort="8443"
              protocol="AJP/1.3" />

   (which are intended solely for making it easier to test concurrency
   issues than to overrun a realistic 'maxThreads' setting) and as a
   control, similar thread pool settings on the direct HTTP connector:

   <Connector port="8080"
              minSpareThreads="4" maxSpareThreads="12"
              maxThreads="18" acceptCount="282"
              enableLookups="false" redirectPort="8443"
              connectionTimeout="2" disableUploadTimeout="true" />

  2. Use an Apache 2.2.4 with mod_proxy_ajp with a configuration like:

   <Proxy balancer://ajpWorker>
   BalancerMember ajp://localhost:8010 min=16 max=300 smax=40
   ttl=900 keepalive=Off timeout=900
   </Proxy>

   RewriteEngine on
   RewriteRule ^(/TestApp/(.*\.jsp(.*)|servlet/.*|.*\.jar))$
   balancer://ajpWorker$1 [P]

   (on Windows in this case; similar results can be obtained on Linux
   at least)

  3. Use a simple test JSP page (placed in a web app containing nothing
 else):

   <%@page session="false"
   %><%@page contentType="text/html" pageEncoding="UTF-8"
   %><%!
     private static final String  titleString = "Sleepy Test JSP Page";
   %>
   <%
     String  sleepSeconds = request.getParameter( "secs" );
     if ( sleepSeconds == null )
       sleepSeconds = "1";
     long  secsToSleep = Long.parseLong( sleepSeconds );
     Thread.sleep( 1000L * secsToSleep );
   %>
   <html>
   <head>
   <title><%=titleString%></title>
   </head>
   <body>
   <h1><%=titleString%>: SUCCESS!</h1>
   [Slept <%= secsToSleep %> seconds.]
   </body>
   </html>

  4. Hit the page with ab
 * First, test direct Tomcat connections:
   o ab -n 24 -c 24 -A wcadmin:wcadmin
 http://hostname:*8080*/TestApp/test.jsp?secs=3
 + Result: All requests complete successfully in
   6118 ms.
 * Second, test connections via Apache:
   o ab -n 24 -c 24 -A wcadmin:wcadmin
 http://hostname/TestApp/test.jsp?secs=3
 + Result: Only 17 requests complete before ab
   times out.
 * Third, test connections via Apache again soon (under the
   BalancerMember 'timeout' seconds) after the last test
   o ab -n 24 -c 24 -A wcadmin:wcadmin
 http://hostname/TestApp/test.jsp?secs=3
 + Result: Only 9 requests complete before ab times
   out.

Something is clearly /horribly/ wrong with the handling of any 
concurrency over 'maxThreads' in this case.  Even so, there seems to be 
some sort of "off-by-one" error in that only 17 requests complete, not 
18.  Worse, this has a lingering effect that decreases Tomcat's ability 
to handle concurrent requests thereafter (with this impact seemingly being 
much worse the larger the BalancerMember timeout is set -- and we have 
some very long running requests and thus need this timeout to be /very/ 
large).


This is not the only ill effect we've seen when hitting Tomcat 5.5.24 
with concurrent requests in this manner, but it is a good place to start.


As for the native connector, it just works here.  So why don't I just 
use it?  Well, first off, we have to support Tomcat on Windows (32 and 
64-bit), Linux, Solaris, HPUX (PA-RISC and Itanium), and AIX.  We've 
been unable to get the connector built on some of these platforms and on 
some we can't get the resulting binary to function (specifically on 
AIX).  Further, we had some stability issues with the native connector 
in the past and had considered the Java connector the safest, if not 
fastest, option -- and to a degree given that everything is Java I still 
feel that's the case.  Finally, however, this connector should just 
plain work.  Tomcat shouldn't be a cripple unless/until you manage to 
build a native connector for your platform.


Any troubleshooting and/or debugging ideas (e.g. where exactly to place 
breakpoints, what logs to turn on, etc, etc) would be /greatly/ appreciated.


--
Jess Holle
[EMAIL PROTECTED]

P.S. If this should go to the user's mailing list instead that's fine, 
but this really seemed like a developer-oriented issue to me.




Re: Proposed simplification of CometEvent

2007-06-14 Thread Costin Manolache

On 6/14/07, Filip Hanik - Dev Lists <[EMAIL PROTECTED]> wrote:


Costin Manolache wrote:
>> >
>> >
>> > Sounds better - but as Remy explained you would first need to explain
>> > why blocking is needed in this context and how to deal with the
>> confusion
>> > of mixing blocking and non-blocking for users, and the implementation
>> > complexities it adds.
>> trunk doesn't mix them. a comet connection is either blocking or non
>> blocking, it doesn't shift between the two,
>> and it allows developers to choose what they want. Just like a
>> SocketChannel in java.nio.
>> there is nothing confusing about that, unless java.nio is confusing :)
>
>
>
> Well, nio is far from perfect - but that's not the point.
>
> Servlets have a very nice blocking mechanism already - it's the
> servlet API
> :-).
> The question is why would you need to have a Comet connection blocking.





because comet is not a servlet, and the ease of use for having blocking

write and read is huge,
especially when Tomcat can notify you when you can write or read, there
is no need to scratch your head and try to use non blocking.
non blocking is more complex, and not for everyone.




I'm not sure I understand - there is a perfectly fine blocking read/write
interface, it's the
plain servlet API.

I also agree that 'blocking' mode is sometimes easier to code than
event-based, and
I can see the benefit of doing some stuff in blocking mode and some in
non-blocking.

My concern was making the read and write blocking in a commet servlet based
on
a config in the CometEvent.

The alternative is to have the comet servlet always non-blocking for
read/write, but
provide a convenience method that will simulate the blocking read or write (
which is
easy, all you need is a blocking waitForEvent(time) ). Benefits:
- the read/write implementation is simpler ( no need to check the config
mode or do tricks ),
- comet is easier to understand
- you can do more advanced things, like using reads in non-blocking mode and
writes in blocking
mode.

The code using waitForEvent() is a bit more complicated than the code for
blocking read/write (
but if you want the simplest solution - use regular servlet ), and it is
simpler than a pure
callback based model.





> I think it's very reasonable to add a blocking waitForEvent() to allow
> servlets have a
> simpler ( but less efficient ) implementation.





Comet inherently is a "wait for event" system.  Tomcat acts as a
multiplexer/dispatcher, and fires events into the CometProcessor.



Comet seems to be an event-based system, that's why setting it to 'blocking
mode' is so
confusing and bad. I'm not sure I agree it has to be a 'blocking wait for
event' system - in Remy
examples at least it is not.



The "waitForEvent()" method is obsolete in the Comet implementation, as

you are either on a Tomcat thread,
which you can activate at any time or you are on a async thread, and not
sure why you would need to block an async background thread.
>
> Think about utilities that take the event as param - would they need to
> check first
> if it's blocking or not ? And what would blocking give you in addition
to
> waitForEvent() -
> which is actually better since it allows you to un-block on any event,
> not
> only a specific
> read/write.
I can see that there is a huge misunderstanding of how comet actually
does work and what it actually is.
It is true that is essentially just a TCP socket between server and
client, but in addition to that,
Tomcat provides a shell around it, and instead of you having to manage
when to read or when to write,
tomcat becomes an event API.



Sure, it hides some limitations of NIO ( the IO thread that has to do all
things )
and adds some higher-level HTTP stuff.





>> >> > - please don't call the method configure(), it's commonly used
>> with a
>> >> > different meaning ( i.e. setting the port or general
>> configuration).
>> >> > setConnectionMode, etc. And using the enum doesn't sound
consistent
>> >> with
>> >> > other APIs either.
>> >> we can call it whatever we want. But saying not using enum, its not
>> >> consistent with other APIs in Tomcat,
>> >> means would never take advantage of new language features ever, I
>> think
>> >> that would be a shame.
>> >
>> >
>> > Same as above - the question is not about using new features, but
>> if the
>> > features
>> > fit the use. I have no problem with using enums for the event types -
>> > just
>> > for
>> > configure, in the context of configure(enum) versus setBlocking(),
>> > setFoo().
>> this has been adjusted based on the feedback, the method is now
>> configureBlocking(boolean)
>> the state of it can be used by calling isBlocking()
>>
>> register is using enums, mainly cause Remy, while he was working with
>> this API insisted on it.
>> I had preferred using an int, just like the socket API, but since Remy
>> had initially agreed to register, and proposed enum and unregister
>> we went with that.
>
>
> Ok.
I still think an int is better :)

Re: Proposed simplification of CometEvent

2007-06-14 Thread Filip Hanik - Dev Lists

Remy Maucherat wrote:

Filip Hanik - Dev Lists wrote:

here we go, some examples

http://people.apache.org/~fhanik/tomcat/aio.html#Example%20code%20snippets 



and the entire document has been updated to reflect most changes
http://people.apache.org/~fhanik/tomcat/aio.html


Here is an alternative version of the examples using the sandbox API 
to give people an idea, along with some comments. No renaming of 
sleep/callback, or others at this time. Hopefully, I did not make too 
many mistakes.


First example, which can enter a busy loop if all events start 
returning false for isWriteable (which is not very likely, of course):


public class ExampleCometStockStreamer implements CometProcessor {
  ...
  public class StockUpdater extends Thread {
    public void run() {
      ...
      StockUpdates[] updates = fetchUpdates();
      Client[] clients = getClients(updates);
      for (int i=0; i<clients.length; i++) {
        ...

Yes, this difference here is one where I would vouch that the API should 
be explicit, instead of implicit.

compare

   if (event.isWriteable()) {
 byte[] data = getUpdateChunk(client.getNextUpdates());
 event.getHttpServletResponse().getOutputStream().write(data);
   } else {
 event.register(OP_WRITE);
   }

with
   if (event.isWriteable()) {
 byte[] data = getUpdateChunk(client.getNextUpdates());
 event.getHttpServletResponse().getOutputStream().write(data);
   }

the implicit registration for a WRITE event is not made clear by the API 
alone, it's something you would have to discover.

And one could look for a use case where a WRITE event wasn't desired.


  ...
  public void event(CometEvent event) throws IOException, 
ServletException {

...
if ( event.getEventType() == CometEvent.EventType.BEGIN ) {
} if ( event.getEventType() == CometEvent.EventType.READ ) {
  //read client Id and stock list from client
  //and add the event to our list
  String clientId = readClientInfo(event,stocks);
  clients.add(clientId, event, stocks);
} if ( event.getEventType() == CometEvent.EventType.WRITE ) {
  //we can now write
  byte[] data = getUpdateChunk(client.getNextUpdates());
  event.getHttpServletResponse().getOutputStream().write(data);
} else if (...) {
  ...
}
...
  }

}

What this example should be doing is remove the client from the list 
when isWriteable returns false, and add it back when it gets a write event.

I can translate the second example, but it could lead to an abusive 
poller use and number of events (all writes are also done 
synchronously with blocking IO, which never makes sense to me).


public class ExampleCometStockStreamer implements CometProcessor {
  ...
  public class StockUpdater extends Thread {
    public void run() {
      ...
      StockUpdates[] updates = fetchUpdates();
      Client[] clients = getClients(updates);
      for (int i=0; i<clients.length; i++) {
        ...

Now it's starting to look funny.  In the trunk version of the example, I'm 
interested in whether the socket buffer is ready to receive data,
but the sandbox version of it simply doesn't care, it just calls for a 
tomcat thread.

sandbox:
 client.getEvent().callback(); -> no guarantee for writeability
trunk:
 client.getEvent().register(OP_WRITE) -> event fires when network 
buffer is ready to receive data.



  ...
  public void event(CometEvent event) throws IOException, 
ServletException {

...
if ( event.getEventType() == CometEvent.EventType.BEGIN ) {
  //configure blocking
  event.configureBlocking(true);
} if ( event.getEventType() == CometEvent.EventType.READ ) {
  //read client Id and stock list from client
  //and add the event to our list
  String clientId = readClientInfo(event,stocks);
  clients.add(clientId, event, stocks);
} if ( event.getEventType() == CometEvent.EventType.CALLBACK ) {
  Client client = clients.get(event);
  //we can now write
  byte[] data = getUpdateChunk(client.getNextUpdates());
  event.getHttpServletResponse().getOutputStream().write(data);
} else if (...) {
  ...
}
...
  }

}




I think the third example is wrong: there's no reason for isWriteable 
or isReadable to change its result unless they trigger a large amount 
of logic and some IO operations. I thought you said it was wrong ;) 
Also, it will be very vulnerable to busy loops. I can translate it by 
wrapping the content of the for (int j=0; ...) loop in a try/catch, and 
removing the calls to isWriteable (which only introduce useless events, 
and may cause additional busy loops).


Straight translation (since isWriteable will trigger a write event 
which will flush, it will work, but busy loops are pretty much 
certain; it also assumes things about the data to read):


public class ExampleAllReadThenWriteComet implements CometProcessor {
  ...
  public class AllWriterThread extends Thread {
byte[] dataChunks = ...;
public void run() {
  ...
  for (int i=0; iif ( clients[j].getEvent().i

Re: Proposed simplification of CometEvent

2007-06-14 Thread Filip Hanik - Dev Lists

Costin Manolache wrote:

>
>
> Sounds better - but as Remy explained you would first need to explain
> why blocking is needed in this context and how to deal with the
confusion
> of mixing blocking and non-blocking for users, and the implementation
> complexities it adds.
trunk doesn't mix them. a comet connection is either blocking or non
blocking, it doesn't shift between the two,
and it allows developers to choose what they want. Just like a
SocketChannel in java.nio.
there is nothing confusing about that, unless java.nio is confusing :)




Well, nio is far from perfect - but that's not the point.

Servlets have a very nice blocking mechanism already - it's the 
servlet API

:-).
The question is why would you need to have a Comet connection blocking.
because comet is not a servlet, and the ease of use for having blocking 
write and read is huge,
especially when Tomcat can notify you when you can write or read, there 
is no need to scratch your head and try to use non blocking.

non blocking is more complex, and not for everyone.


I think it's very reasonable to add a blocking waitForEvent() to allow
servlets have a
simpler ( but less efficient ) implementation.
Comet inherently is a "wait for event" system. Tomcat acts as a 
multiplexer/dispatcher,

and fires events into the CometProcessor.
The "waitForEvent()" method is obsolete in the Comet implementation, as 
you are either on a Tomcat thread,
which you can activate at any time or you are on a async thread, and not 
sure why you would need to block an async background thread.


Think about utilities that take the event as param - would they need to
check first
if it's blocking or not ? And what would blocking give you in addition to
waitForEvent() -
which is actually better since it allows you to un-block on any event, 
not

only a specific
read/write.
I can see that there is a huge misunderstanding of how comet actually 
does work and what it actually is.
It is true that it is essentially just a TCP socket between server and 
client, but in addition to that,
Tomcat provides a shell around it, and instead of you having to manage 
when to read or when to write,

tomcat becomes an event API.




>
>>
>> > - please don't call the method configure(), it's commonly used 
with a
>> > different meaning ( i.e. setting the port or general 
configuration).

>> > setConnectionMode, etc. And using the enum doesn't sound consistent
>> with
>> > other APIs either.
>> we can call it whatever we want. But saying not using enum, its not
>> consistent with other APIs in Tomcat,
>> means would never take advantage of new language features ever, I 
think

>> that would be a shame.
>
>
> Same as above - the question is not about using new features, but 
if the

> features
> fit the use. I have no problem with using enums for the event types -
> just
> for
> configure, in the context of configure(enum) versus setBlocking(),
> setFoo().
this has been adjusted based on the feedback, the method is now
configureBlocking(boolean)
the state of it can be used by calling isBlocking()

register is using enums, mainly cause Remy, while he was working with
this API insisted on it.
I had preferred using an int, just like the socket API, but since Remy
had initially agreed to register, and proposed enum and unregister
we went with that.



Ok.

I still think an int is better :)





>
>
>
>
>> > - see bellow - I don't think I understand the benefits of mixing
>> blocking
>> > and non-blocking in this interface, it is quite confusing.
>> It would be mixing it, its a one time config, during the BEGIN event,
>> you say
>> configureBlocking(true) or configureBlocking(false).
>> Comet is very much connection centric, so you can't mix it.
>>
>> In the trunk API, its clear to what you are using, blocking or non
>> blocking, in the sandbox API, the swap
>> of it happens when invoking isWriteable or isReadable, making the 
state

>> of the comet connection confusing to the developer.
>
>
> I'm not sure it's true - my understanding is that sandbox is all
> non-blocking.
> Invoking isWriteable is not blocking.
>
> I think it would be ok to add a blocking waitForEvent() - combined 
with

> isReadable()/isWriteable()
this would be a dead lock, as the Comet API must guarantee that a
CometProcessor.event
is only invoked by one worker thread at any time. The blocking you are
talking about can be done
using an async thread, by registering for the event you wish
to be notified of and then
maybe awaiting a latch countdown, or doing a sync/wait() combo.



What would happen in the blocking case when a different event happens ?
Isn't it the same, if you want to guarantee single-threaded behaviour ?

Well - are there any docs on what the intended thread model of comet
servlets is?
A comet servlet is not single-threaded, but a comet event is, from 
Tomcat's standpoint.
Furthermore, a comet event (which represents a socket connection) can be 
used multithreaded,

but it will be the developer's responsibility

DO NOT REPLY [Bug 42662] - Classloader issue for replicated sessions and dynamic proxies

2007-06-14 Thread bugzilla

http://issues.apache.org/bugzilla/show_bug.cgi?id=42662





--- Additional Comments From [EMAIL PROTECTED]  2007-06-14 06:39 ---
Created an attachment (id=20347)
 --> (http://issues.apache.org/bugzilla/attachment.cgi?id=20347&action=view)
Webapp to reproduce the problem

Deploy the webapp on a cluster of at least 2 nodes (distributable context,
replicated session). Call one node's url and watch the log of the other.




DO NOT REPLY [Bug 42662] New: - Classloader issue for replicated sessions and dynamic proxies

2007-06-14 Thread bugzilla

http://issues.apache.org/bugzilla/show_bug.cgi?id=42662

   Summary: Classloader issue for replicated sessions and dynamic
proxies
   Product: Tomcat 6
   Version: unspecified
  Platform: Other
OS/Version: Linux
Status: NEW
  Severity: critical
  Priority: P2
 Component: Catalina
AssignedTo: [EMAIL PROTECTED]
ReportedBy: [EMAIL PROTECTED]


We use 2 Tomcats (Tomcat 6.0.13) as a cluster with default TCP session
replication. Some webapps contain complex session information including
dynamically generated proxies.

The problem is that the classloader used to deserialize the proxy cannot load
any of the proxy interfaces during initialisation. The result is a
"ClassNotFoundException", but the interface class is present.

I will attach a minimal webapp to reproduce the problem. The exception thrown is
below. The exception is thrown when both Tomcats are running and the servlet is
called on one of them (the other Tomcat throws the exception).

java.lang.ClassNotFoundException: proxytest.TestProxyInterface
at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at java.lang.ClassLoader.loadClass(ClassLoader.java:251)
at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:319)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:247)
at 
java.io.ObjectInputStream.resolveProxyClass(ObjectInputStream.java:676)
at java.io.ObjectInputStream.readProxyDesc(ObjectInputStream.java:1531)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1493)
at 
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1732)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1329)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:351)
at
org.apache.catalina.ha.session.DeltaRequest$AttributeInfo.readExternal(DeltaRequest.java:361)
at
org.apache.catalina.ha.session.DeltaRequest.readExternal(DeltaRequest.java:255)
at
org.apache.catalina.ha.session.DeltaManager.deserializeDeltaRequest(DeltaManager.java:619)
at
org.apache.catalina.ha.session.DeltaManager.handleSESSION_DELTA(DeltaManager.java:1363)
at
org.apache.catalina.ha.session.DeltaManager.messageReceived(DeltaManager.java:1320)
at
org.apache.catalina.ha.session.DeltaManager.messageDataReceived(DeltaManager.java:1083)
at
org.apache.catalina.ha.session.ClusterSessionListener.messageReceived(ClusterSessionListener.java:87)
at
org.apache.catalina.ha.tcp.SimpleTcpCluster.messageReceived(SimpleTcpCluster.java:916)
at
org.apache.catalina.ha.tcp.SimpleTcpCluster.messageReceived(SimpleTcpCluster.java:897)
at
org.apache.catalina.tribes.group.GroupChannel.messageReceived(GroupChannel.java:264)
at
org.apache.catalina.tribes.group.ChannelInterceptorBase.messageReceived(ChannelInterceptorBase.java:79)
at
org.apache.catalina.tribes.group.interceptors.TcpFailureDetector.messageReceived(TcpFailureDetector.java:110)
at
org.apache.catalina.tribes.group.ChannelInterceptorBase.messageReceived(ChannelInterceptorBase.java:79)
at
org.apache.catalina.tribes.group.ChannelInterceptorBase.messageReceived(ChannelInterceptorBase.java:79)
at
org.apache.catalina.tribes.group.ChannelCoordinator.messageReceived(ChannelCoordinator.java:241)
at
org.apache.catalina.tribes.transport.ReceiverBase.messageDataReceived(ReceiverBase.java:225)
at
org.apache.catalina.tribes.transport.nio.NioReplicationTask.drainChannel(NioReplicationTask.java:185)
at
org.apache.catalina.tribes.transport.nio.NioReplicationTask.run(NioReplicationTask.java:88)
at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
at java.lang.Thread.run(Thread.java:619)




Re: Proposed simplification of CometEvent

2007-06-14 Thread Remy Maucherat

Filip Hanik - Dev Lists wrote:

here we go, some examples

http://people.apache.org/~fhanik/tomcat/aio.html#Example%20code%20snippets

and the entire document has been updated to reflect most changes
http://people.apache.org/~fhanik/tomcat/aio.html


Here is an alternative version of the examples using the sandbox API to 
give people an idea, along with some comments. No renaming of 
sleep/callback, or others at this time. Hopefully, I did not make too 
many mistakes.


First example, which can enter a busy loop if all events start returning 
false for isWriteable (which is not very likely, of course):


public class ExampleCometStockStreamer implements CometProcessor {
  ...
  public class StockUpdater extends Thread {
    public void run() {
      ...
      StockUpdates[] updates = fetchUpdates();
      Client[] clients = getClients(updates);
      for (int i=0; i<clients.length; i++) {
        ...
      }
    }
  }
  ...
  public void event(CometEvent event) throws IOException, ServletException {

...
if ( event.getEventType() == CometEvent.EventType.BEGIN ) {
} if ( event.getEventType() == CometEvent.EventType.READ ) {
  //read client Id and stock list from client
  //and add the event to our list
  String clientId = readClientInfo(event,stocks);
  clients.add(clientId, event, stocks);
} if ( event.getEventType() == CometEvent.EventType.WRITE ) {
  //we can now write
  byte[] data = getUpdateChunk(client.getNextUpdates());
  event.getHttpServletResponse().getOutputStream().write(data);
} else if (...) {
  ...
}
...
  }

}

What this example should be doing is remove the client from the list 
when isWriteable returns false, and add it back when it gets a write event.


I can translate the second example, but it could lead to an abusive 
poller use and number of events (all writes are also done synchronously 
with blocking IO, which never makes sense to me).


public class ExampleCometStockStreamer implements CometProcessor {
  ...
  public class StockUpdater extends Thread {
    public void run() {
      ...
      StockUpdates[] updates = fetchUpdates();
      Client[] clients = getClients(updates);
      for (int i=0; i<clients.length; i++) {
        ...
      }
    }
  }
  ...
  public void event(CometEvent event) throws IOException, ServletException {

...
if ( event.getEventType() == CometEvent.EventType.BEGIN ) {
  //configure blocking
  event.configureBlocking(true);
} if ( event.getEventType() == CometEvent.EventType.READ ) {
  //read client Id and stock list from client
  //and add the event to our list
  String clientId = readClientInfo(event,stocks);
  clients.add(clientId, event, stocks);
} if ( event.getEventType() == CometEvent.EventType.CALLBACK ) {
  Client client = clients.get(event);
  //we can now write
  byte[] data = getUpdateChunk(client.getNextUpdates());
  event.getHttpServletResponse().getOutputStream().write(data);
} else if (...) {
  ...
}
...
  }

}

I think the third example is wrong: there's no reason for isWriteable or 
isReadable to change its result unless they trigger a large amount of 
logic and some IO operations. I thought you said it was wrong ;) Also, 
it will be very vulnerable to busy loops. I can translate it by wrapping 
the content of the for (int j=0; ...) loop in a try/catch, and removing the 
calls to isWriteable (which only introduce useless events, and may cause 
additional busy loops).


Straight translation (since isWriteable will trigger a write event which 
will flush, it will work, but busy loops are pretty much certain; it 
also assumes things about the data to read):


public class ExampleAllReadThenWriteComet implements CometProcessor {
  ...
  public class AllWriterThread extends Thread {
byte[] dataChunks = ...;
    public void run() {
      ...
      for (int i=0; i<dataChunks.length; i++) {
        for (int j=0; j<clients.length; j++) {
          boolean done = false;
          while (!done) {
            if ( clients[j].getEvent().isWriteable() && 
clients[j].getEvent().isReadable() ) {
  done = readClientData(clients[j]); //returns true if all 
data has been received for a request

}
  }
  done = false;
  while (!done) {
//write the response
if ( clients[j].getEvent().isWriteable() ) {

clients[j].getEvent().getHttpServletResponse().getOutputStream().write(dataChunks[i]);
   done = true;
}
  }
}
  }
  ...
}
  }
  ...
  public void event(CometEvent event) throws IOException, 
ServletException {

...
if ( event.getEventType() == CometEvent.EventType.BEGIN ) {
  //add the event to our client list
  clients.add(event);
  //start our writer if all clients have arrived
  if (clients.size()==5) {
AllWriterThread thread = new AllWriterThread();
thread.start();
  }
} if ( event.getEventType() == CometEvent.EventType.READ ) {
} if ( event.getEventType() == CometEvent.EventType.WRITE ) {
} else if (...) {
  ...
}
...
  }

}

The last example is quite funny, and I can't translate it (doh), since 
there's no opposite API to sl