Excellent reply Doug!  This is not the first time this "trigger happy http-client"
problem has been brought up on this forum, so this is definitely a common problem
faced by anyone who has designed a web app that's exposed to high traffic.

Like you hinted, synchronizing HttpRequests on the HttpSession is a quick fix,
but by no means ideal; doing so may lead to thread starvation for other
requests.  A scenario that demonstrates this:

1)  all servlet requests are synchronized on HttpSession
2)  container thread pool is set at 20
3)  some trigger happy user pushes the submit-form button of a page 20+ times in
succession
4)  all threads block on sync block, starving other requests

Sure, we can all write some form of thread "apartment", read/write mutex, or
"smart" queue, but since this is such a common problem (it also manifests under
high or bot traffic), I would like to see the J2EE/Servlet specs address it.  I
know many vendors provide http request throttle control, and maybe some even do
session synchronization, but wouldn't it be great to have this control in
web.xml in a vendor-neutral way?
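A vendor-neutral per-session throttle could look something like the following
rough sketch.  All the class and method names here are hypothetical (nothing
like this exists in web.xml or the Servlet spec today); the idea is simply to
admit at most N concurrent requests per session ID and reject the rest
immediately, instead of letting them tie up container threads:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;

// Hypothetical per-session throttle: at most N concurrent requests per
// session ID are admitted; extras are rejected immediately instead of
// tying up container threads.  Names are illustrative, not from any spec.
public class SessionThrottle {
    private final int permitsPerSession;
    private final Map<String, Semaphore> gates = new ConcurrentHashMap<>();

    public SessionThrottle(int permitsPerSession) {
        this.permitsPerSession = permitsPerSession;
    }

    /** Returns true if the request may proceed; caller must release() later. */
    public boolean tryEnter(String sessionId) {
        return gates
            .computeIfAbsent(sessionId, id -> new Semaphore(permitsPerSession))
            .tryAcquire();
    }

    public void release(String sessionId) {
        Semaphore gate = gates.get(sessionId);
        if (gate != null) gate.release();
    }
}
```

A servlet filter could call tryEnter() before passing the request down the
chain and send an error page (or silently drop the request) when it returns
false.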

Alex, even though web-tier request throttling may solve your issue, I would
still be concerned about why your app tier is deadlocking under high or
repeated invocations.  You will want to solve this problem because if you plan
to expand your J2EE application to serve non-web-based clients (XML-bots, for
example), you will need to fix the problem at its source.

Deadlocks occur due to badly sequenced locking order.  The easiest way I've
found to diagnose deadlocks is to take thread dumps when you sense your server
is locking up; these will show exactly what all your threads are doing, and
which locks they are failing to acquire.
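Besides reading a raw thread dump, the JVM can be asked programmatically which
threads are deadlocked, via the standard java.lang.management API.  A minimal
sketch:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Programmatic equivalent of scanning a thread dump for deadlocks: asks
// the JVM which threads are deadlocked on monitors and reports what each
// one is blocked on and who holds it.
public class DeadlockProbe {
    public static ThreadInfo[] findDeadlocks() {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        long[] ids = mx.findMonitorDeadlockedThreads(); // null if none
        if (ids == null) return new ThreadInfo[0];
        return mx.getThreadInfo(ids, Integer.MAX_VALUE);
    }

    public static void main(String[] args) {
        for (ThreadInfo info : findDeadlocks()) {
            System.out.println(info.getThreadName()
                + " blocked on " + info.getLockName()
                + " held by " + info.getLockOwnerName());
        }
    }
}
```

Calling this from a monitoring servlet (or a background timer) lets you catch
the deadlock while it's happening, rather than reconstructing it afterwards.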

Gene

--- Doug Bateman <[EMAIL PROTECTED]> wrote:
> I'll address the database deadlock issue in a separate
> follow-up e-mail.
>
> However, the specific scenario where a user clicks on a
> single link rapidly in succession will generally cause negative
> effects in most Servlet applications, not just ones using EJB.
> I'll present a short description of a solution to this problem
> at the end of this e-mail.  What is happening is that multiple
> threads all belonging to the same HTTP Session are attempting
> to access the same shared stateful resources at nearly the exact
> same time.  Any one of 5 very negative scenarios can result:
>
> (a) Deadlocks.  Two threads each require locks that the other
> thread has, and wait indefinitely (or until timeout) for the
> other thread to release the resource.  Deadlocks of this sort
> can either occur in the database or in the application server.
>
> (b) Optimistic Locking Rollbacks.  Here, the database doesn't
> prevent concurrent access to the data, but detects conditions
> where two transactions have overlapping edits to a record,
> and forces the 2nd transaction to rollback in order to protect
> the integrity of the database.  (This can most often occur
> when a SELECT rather than a SELECT FOR UPDATE is used to read
> the field.)  If the entire transaction occurs within a single
> SQL request, the database is able to hide the rollback and
> automatically retry.  But in cases where the transaction spans
> multiple database requests, the database lacks the necessary
> understanding of the relationship between the requests, and is
> unable to do the retry.  In this case, a java.sql.SQLException
> is thrown.  The client code responsible for starting the
> transaction has to catch the exception, inspect the SQL ERROR
> CODE FIELD of the exception to identify the exception was caused
> by a concurrency violation, and MANUALLY retry the transaction.
> While it has been suggested that JDBC provide a subclass of
> java.sql.SQLException for exceptions where a retry is warranted,
> the JDBC spec designers rejected this suggestion, feeling that
> a big CASE statement to determine the nature of the SQLException
> was the better design.  Also note that EJB provides no facilities
> to automate this kind of transaction retry, although it could
> very easily be done by the container.
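The manual retry described above can be factored into a small helper.  A rough
sketch, with the caveat that the error code signalling "deadlock victim /
serialization failure" is vendor-specific (the caller has to supply it), and
the class and method names here are made up for illustration:

```java
import java.sql.SQLException;
import java.util.concurrent.Callable;

// Sketch of the manual transaction retry described above.  The vendor's
// error code for a retryable concurrency violation varies by database,
// so the caller supplies it; the attempt limit is arbitrary.
public class TxRetry {
    public static <T> T runWithRetry(Callable<T> tx,
                                     int retryableErrorCode,
                                     int maxAttempts) throws Exception {
        SQLException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return tx.call();            // start + commit the transaction
            } catch (SQLException e) {
                if (e.getErrorCode() != retryableErrorCode) throw e;
                last = e;                    // concurrency violation: retry
            }
        }
        throw last;                          // gave up after maxAttempts
    }
}
```

The Callable would begin a fresh transaction on each attempt; any
non-retryable SQLException propagates immediately.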
>
> (c) Thread Pool Exhaustion.  In this scenario, the first
> thread to request a resource begins using it while all the
> other threads from the same thread pool are stuck waiting
> in line.  What this means is that not only is performance poor
> for the CURRENT USER who clicked on the link multiple times,
> queueing up the requests blocking on that user's resource,
> but performance for ALL USERS is negatively impacted because NO
> USER is able to have their request serviced until a thread is
> finally made available again in the thread pool.  In effect,
> ALL USERS are stuck waiting on that ONE GREEDY USER.
>
> (d) Rather than queue up requests for the shared resource, the
> 2nd and subsequent concurrent requests are summarily rejected,
> and exceptions are thrown.  This is often the case when STATEFUL
> SESSION BEANS are used, as the EJB SPEC states:
>
> "Clients are not allowed to make concurrent calls to a stateful
> session object. If a client-invoked business method is in
> progress on an instance when another client-invoked call, from
> the same or different client, arrives at the same instance of
> a stateful session bean class, the container may throw the
> java.rmi.RemoteException to the second client [4] , if the
> client is a remote client, or the javax.ejb.EJBException,
> if the client is a local client."
>
> Some application servers offer an option where concurrent
> requests to STATEFUL SESSION BEANS are queued rather than
> rejected.  However, now we're back to the problem where a
> single user can tie up all the threads in the thread pool,
> causing the one greedy user to degrade the performance of
> all users.
>
> (e) Data corruption & loss.  This last scenario occurs
> when concurrent requests to state are neither queued
> nor rejected/rolled back.  For example, two threads may
> attempt to read and write the contents of a
> java.util.HashMap or a java.util.LinkedList stored in
> the HTTP Session at the same time, which can result in
> corruption or loss of the underlying data structure or
> data.  In another example, two threads may concurrently
> be reading from and writing to the same database record,
> without the protection of an overall transaction, which
> breaks isolation and can cause corrupt or incorrect data
> to be stored in the database.
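One straightforward mitigation for the in-memory half of scenario (e) is to
hold session-scoped collections in a thread-safe container.  A small sketch
(class and method names are illustrative): four writer threads each put 1000
distinct keys into a ConcurrentHashMap, and no updates are lost, whereas a
plain HashMap under the same load can lose entries or corrupt its internal
structure:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustration for scenario (e): a plain HashMap is unsafe under
// concurrent writers, so session-scoped state is better held in a
// thread-safe collection.  Four threads each put 1000 distinct keys;
// with ConcurrentHashMap no updates are lost.
public class SafeSessionState {
    public static int concurrentPuts() throws InterruptedException {
        final Map<String, Integer> state = new ConcurrentHashMap<>();
        Thread[] writers = new Thread[4];
        for (int t = 0; t < writers.length; t++) {
            final int id = t;
            writers[t] = new Thread(() -> {
                for (int i = 0; i < 1000; i++) {
                    state.put(id + ":" + i, i);
                }
            });
            writers[t].start();
        }
        for (Thread w : writers) w.join();
        return state.size(); // 4000: nothing lost or corrupted
    }
}
```

Note this only protects the collection itself; it does not provide the
transaction-level isolation the database half of the scenario needs.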
>
>
> So what do we do?
> a) Allow concurrent manipulation of the data, corrupting it?
> b) Reject concurrent requests with rollbacks or
> RemoteExceptions, causing the end user to see a broken app?
> c) Queue requests, eating up the thread pool, degrading
> performance of all users, not just the one greedy user?
>
> Most of the time requests will be quick to process, and thread
> pool starvation from queuing won't be a big deal.  In this
> case, consider synchronizing all HTTP service requests on the
> HTTP Session.  For example:
>
> public class MyServlet extends javax.servlet.http.HttpServlet {
>    public void service(HttpServletRequest request,
>                        HttpServletResponse response)
>          throws ServletException, IOException {
>       HttpSession session = request.getSession();
>       synchronized (session) {
>          // service the request here
>       }
>    }
> }
>
> (Note: For distributed web applications where the same session
> may be accessed on multiple machines, it's a bit more complex
> than this.)
>
> And of course, in the case of READ-ONLY operations, corrupting
> data is a non-issue, and we need not be concerned with
> concurrent access.  So if your web request is of this type,
> consider using an isModified flag to avoid the ejbStore()
> operation in your entity beans if nothing has changed.
> (Most CMP engines these days can do this for you.)
>
>
> But if you really are concerned about queuing, what you may
> really want is a policy that:
>
> a) Properly protects our data against corruption
>
> b) Restricts the number of queued requests associated with
>    the same user (ie the same HTTP Session).
>
> and c) Allows the MOST RECENT request to proceed (as this is
>        the one the end user will see) while rejecting the EARLIER
>        requests still sitting in the queue.
>
> The way to do this is with a thread apartment in the servlet
> for each HTTP Session.  When a new request arrives, the thread
> apartment for that HTTP Session is obtained.  If it's the only
> current request for that apartment, the request is allowed
> to proceed.  If it is the 2nd request and there is already
> a request in processing, the first request can't be easily
> aborted and must be allowed to complete, so the 2nd request
> waits for the first to complete.  If a 3rd request arrives
> while the 2nd request is still waiting, the 2nd request is
> removed from the wait queue and aborted (as it hasn't begun
> its processing yet), and the 3rd request waits on the 1st
> request's completion instead.  So here the policy is that a
> maximum of 1 request can be waiting for a particular user
> at a time.  In applications with frames or multiple windows,
> you've got to allow queue sizes to be a little bit larger.
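The "one waiter per session, newest request wins" policy Doug describes can be
sketched with standard wait/notify.  This is only a rough illustration of the
idea, not his apartment pattern (class and method names are made up); one
instance would be stored per HttpSession:

```java
// Rough sketch of the "one waiter per session" policy: the active
// request runs to completion, at most one request waits, and a newer
// arrival displaces (aborts) the request already waiting.  Names are
// illustrative, not from any spec or published pattern.
public class SessionApartment {
    private boolean busy = false;
    private long waiterGeneration = 0; // bumped each time a new waiter arrives

    /** Returns true when the caller may proceed, false if it was displaced. */
    public synchronized boolean enter() throws InterruptedException {
        if (!busy) {
            busy = true;
            return true;
        }
        long myGeneration = ++waiterGeneration; // displace any earlier waiter
        notifyAll();                            // wake it so it can abort
        while (busy && myGeneration == waiterGeneration) {
            wait();
        }
        if (myGeneration != waiterGeneration) {
            return false;                       // a newer request displaced us
        }
        busy = true;
        return true;
    }

    public synchronized void exit() {
        busy = false;
        notifyAll();
    }
}
```

A servlet would call enter() before servicing, map a false return to an error
page (or a silent drop), and call exit() in a finally block.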
>
> Coding a thread apartment with a policy like this is rather
> tricky, and involves a lot of the subtleties of thread-safe
> programming.  So unless requests take a long time to process
> and thread pool starvation is a real issue, I recommend you
> go with a simpler approach.  However, on previous projects,
> I have experienced situations where thread pool exhaustion
> was occurring as a result of this form of queuing.  If there's
> interest, I can post the apartment pattern on Theserverside.com.
>
> Doug
> The Middleware Company
>
> On Sun, 5 May 2002 11:12:32 -0700, Alex Paransky <[EMAIL PROTECTED]> wrote:
>
> >I understand what you are saying in the first possible solution.  However,
> >in this case the user is clicking on the same link.  There are multiple
> >requests which are being fired off at the same time.  The access is the
> >same, since it's the same link that is being pressed.
> >
> >The second solution would work at the cost of having to maintain duplicate
> >code.  The navigation relationships and security management are quite
> >difficult, thus I would have to duplicate a lot of complicated code.
> >
> >I am wondering if this is a bug on the application server.
> >
> >-AP_
> >
> >
> >-----Original Message-----
> >From: A mailing list for Enterprise JavaBeans development
> >[mailto:[EMAIL PROTECTED]]On Behalf Of Mike Bresnahan
> >Sent: Sunday, May 05, 2002 10:52 AM
> >To: [EMAIL PROTECTED]
> >Subject: Re: The problem with deadlocking transactions...
> >
> >
> >> When we wrap a transaction around our read operations the
> >> container does not
> >> have to ejbLoad/ejbStore for every call into the entity bean, however, we
> >> noticed that users clicking on the same link in the browser in rapid
> >> succession create multiple requests which all promptly deadlock on each
> >> other.
> >
> >First possible solution: Analyze your data access paths and see if you can
> >ensure that users always access data in the same order.  For example, given
> >entities X and Y, make sure that users always lock X and then Y.  If you can
> >do this, you can eliminate the possibility of a deadlock.
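Mike's first suggestion can be enforced mechanically by always acquiring locks
in a canonical order, e.g. by entity ID.  A minimal sketch under that
assumption (the Entity class and method names here are made up for
illustration):

```java
// Sketch of lock ordering: always acquire the lock with the smaller ID
// first, so two requests touching entities X and Y can never hold the
// locks in opposite orders, which eliminates this class of deadlock.
// Entity and method names are illustrative.
public class OrderedLocking {
    public static void withBothLocks(Entity first, Entity second, Runnable work) {
        Entity lo = first.id <= second.id ? first : second;
        Entity hi = first.id <= second.id ? second : first;
        synchronized (lo.lock) {
            synchronized (hi.lock) {
                work.run(); // both locks held, in canonical order
            }
        }
    }

    public static class Entity {
        final long id;
        final Object lock = new Object();
        public Entity(long id) { this.id = id; }
    }
}
```

Two threads calling withBothLocks(x, y, ...) and withBothLocks(y, x, ...)
concurrently cannot deadlock, because both end up locking in the same order.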
> >
> >Second possible solution: Use a lightweight mechanism for read-only data.
> >I.e. don't use EJBs for data that you are only reading, use a Data Access
> >Object instead.
> >
> >Mike Bresnahan
> >
>
=== message truncated ===



===========================================================================
To unsubscribe, send email to [EMAIL PROTECTED] and include in the body
of the message "signoff EJB-INTEREST".  For general help, send email to
[EMAIL PROTECTED] and include in the body of the message "help".
