----------------------------------------------------------------
BEFORE YOU POST, search the faq at <http://java.apache.org/faq/>
WHEN YOU POST, include all relevant version numbers, log files,
and configuration files.  Don't make us guess your problem!!!
----------------------------------------------------------------



Cott Lang wrote:
> 
> 
> >no effect on counterpart threads executing in the JServ Java VM process
> >space. Thus, in a possible scenario today, a spike in requests to some
> >intensive resource, like a JDBC database, can cause a 'freeze out'. That
> >is to say, even though Apache may have correctly executed its timeout
> >operation, JServ is in fact still trying to serve all the requests. In
> >my opinion, this is a scalability problem.
> 
> More effective to me would simply be a much lower timeout on the
> Apache side: when a JServ "freezes" on a resource, Apache
> continues to send requests to it without marking it as failed, even
> though it's not getting any responses back.

Uh, no. That doesn't stop any JServ threads from processing, or from
consuming a precious connection from the pool. In fact, I believe you
are increasing the chances that every JServ connection thread will
already be busy when any given request arrives.

Once Apache times out (or the user hits the stop button), Apache will
not be sending any data back to the client, so why should the JServ VM
continue to consume a resource? Or worse, work through a queue of
timed-out requests?
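
To make the point concrete, here is a minimal sketch (in plain servlet
terms, not the actual JServ internals) of not consuming a pooled
connection once the client is gone. The pool calls (getPooledConnection,
releasePooledConnection, runExpensiveQuery) are hypothetical stand-ins,
and the early flush is only a heuristic, since a dead socket may not
surface until later writes:

import java.io.IOException;
import java.io.PrintWriter;
import java.sql.Connection;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class GuardedServlet extends HttpServlet {

    public void doGet(HttpServletRequest req, HttpServletResponse res) {
        try {
            res.setContentType("text/html");
            PrintWriter out = res.getWriter();

            // Probe the connection back to Apache before doing real work.
            out.print(" ");
            out.flush();
            if (out.checkError()) {
                return;               // client is gone: never touch the pool
            }

            Connection con = getPooledConnection();    // hypothetical pool call
            try {
                runExpensiveQuery(con, out);           // hypothetical work
            } finally {
                releasePooledConnection(con);          // always hand it back
            }
        } catch (IOException e) {
            // Apache timed out or the user hit stop; just let the thread end.
        }
    }

    // Stubs for illustration only; a real pool would live elsewhere.
    private Connection getPooledConnection() { return null; }
    private void releasePooledConnection(Connection con) { }
    private void runExpensiveQuery(Connection con, PrintWriter out) { }
}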

> 
> >The thing I like about inserting a callback handler for timeouts is
> >Apache can more or less immediately halt operation in the JServ VM since
> >any subsequent read or write to the underlying socket will fail and the
> >thread will die. We could take it a step further, so that we actually
> >communicate a kill request, but I'm not sure how much more effective it
> >would be. In general though, communicating an Apache timeout condition
> >in some way, shape or form strikes me as much more scalable.
> 
> I guess, but if it's frozen on a resource like JDBC and especially
> if you dump your content at the end of jserv processing rather than
> streaming it out, you're not going to hit that socket write failure
> soon enough in any of my cases. :(

Two things: 

First, if the resource is frozen waiting on a JDBC response, at least the
thread is in an efficient wait state. (OK, that's a minor one.)

Second, if you decide to dump content at the end of JServ processing rather
than streaming it out, you have an architectural issue that limits
scalability. This is precisely the problem the Cocoon group was facing with
their first version: essentially, they would read in the whole dataset and
then send the result (there was a transformation in between, but that is the
basic sequence). They have since moved to a more streamed approach, with
great results for scalability. A rough sketch of the streaming idea follows.
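
This is just an illustration, not code from Cocoon or JServ; the query and
column names are made up. The point is that rows are written (and
periodically flushed) as they arrive, so a request Apache has already
abandoned gets noticed after a few rows instead of after the entire result
set has been buffered:

import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class StreamingReport {

    public void writeReport(Connection con, PrintWriter out) throws SQLException {
        Statement stmt = con.createStatement();
        // Hypothetical query; substitute whatever the real servlet selects.
        ResultSet rs = stmt.executeQuery("SELECT name, total FROM orders");
        try {
            int rows = 0;
            while (rs.next()) {
                out.print("<tr><td>");
                out.print(rs.getString(1));
                out.print("</td><td>");
                out.print(rs.getString(2));
                out.println("</td></tr>");

                if (++rows % 50 == 0) {
                    out.flush();              // push partial output to Apache now
                    if (out.checkError()) {   // socket is dead: stop the work early
                        break;
                    }
                }
            }
        } finally {
            rs.close();
            stmt.close();
        }
    }
}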

Also, I imagine you might be able to implement something like a
'heartbeat'. Maybe it's a ping, maybe a full status handler, but
basically something that implicitly monitors the health of your
connection. If you're doing particularly arduous content processing, then
it seems all the more important to introduce a construct like this.
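
One possible shape for that (purely illustrative, not an existing JServ
handler): a trivial status servlet that does no real work, which an
external monitor requests every so often. If even this cannot be answered
promptly, the servlet zone's threads are presumably all tied up:

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class HeartbeatServlet extends HttpServlet {

    // Does no real work; if even this request cannot be answered quickly,
    // the servlet zone's threads are presumably all tied up.
    public void doGet(HttpServletRequest req, HttpServletResponse res)
            throws IOException {
        res.setContentType("text/plain");
        PrintWriter out = res.getWriter();
        out.println("OK " + System.currentTimeMillis());
        out.flush();
    }
}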

John Milan


--
--------------------------------------------------------------
Please read the FAQ! <http://java.apache.org/faq/>
To subscribe:        [EMAIL PROTECTED]
To unsubscribe:      [EMAIL PROTECTED]
Archives and Other:  <http://java.apache.org/main/mail.html>
Problems?:           [EMAIL PROTECTED]
