Okay, TomEE committers/users, I need your help/advice on this one. Please read what I wrote in the email below.
I am considering using @Asynchronous on one of the methods of a @Stateless EJB to complete this task, but I think I may still need to do something on the persistence end to avoid locking up the entire app, database, etc. I guess this is my first time experiencing a DEADLOCK, but I could be wrong. :)

On Tue, Dec 11, 2012 at 11:01 AM, Howard W. Smith, Jr. <smithh032...@gmail.com> wrote:

> Wow, I'm reading this now, because I just experienced an issue on my
> production server, which is TomEE 1.5.1 (Tomcat 7.0.34), and the whole
> server locked up because I had a @Stateless EJB inserting data into
> multiple tables in the database: a @Schedule timed event triggered the
> EJB to check the email server for incoming (customer) requests, and it
> literally took down the server. I was on it, as were a few other end
> users, and then one end user captured a LOCK error; the screen capture
> (photo/pic) showed an error message with a long SQL query full of
> generated table and column aliases (t0..., t0...).
>
> What I primarily saw was the word 'lock' at the top of it, and we
> definitely experienced a lockup. I'm about to check the server logs and
> read this article.
>
> The @Stateless EJB had one transaction (entity manager / persistence
> context) that made updates to multiple tables in the database. I am
> only using entityManager.persist(), flush(), and a few lookups/queries
> during that process.
>
> But other end users (including myself) could not run simple queries
> against the database at all. Most of my queries carry query hints
> (read-only, statement caching).
>
> Also, I have never seen this behavior before, but this is the first
> time I have added a @Stateless EJB along with a @Schedule method that
> does database updates during business hours. I thought this would be a
> no-brainer, but I guess it's not. Again, the server is TomEE 1.5.1
> (Tomcat 7.0.34).
>
> If you have any advice, please let me know. On to reading this post
> now. Thanks. :)
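To make the @Asynchronous idea from the top of this mail concrete: one way to apply it is to split the scheduled mail check from the per-message database writes, so each message commits in its own short transaction and releases its row locks quickly instead of holding one long transaction. This is only a sketch of what I have in mind, assuming TomEE's EJB 3.1 container; the bean and method names (MailImportBean, processMessage) and the schedule values are made up, and it is untested:

```java
import javax.ejb.Asynchronous;
import javax.ejb.EJB;
import javax.ejb.Schedule;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

// Hypothetical sketch, not the actual bean: one short transaction per
// incoming message instead of one long transaction for the whole batch.
@Stateless
public class MailImportBean {

    @PersistenceContext
    private EntityManager em;

    // Inject our own business proxy: @Asynchronous only applies when the
    // call goes through the container, not through `this`.
    @EJB
    private MailImportBean self;

    // Poll during business hours (example schedule: every 5 minutes, 8am-5pm).
    @Schedule(hour = "8-17", minute = "*/5", persistent = false)
    public void checkMail() {
        // for each message found on the mail server:
        //     self.processMessage(message);   // returns immediately
    }

    @Asynchronous
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void processMessage(Object message) {
        // Each message gets its own transaction; the commit happens when
        // this method returns, releasing row locks before the next message.
        // em.persist(...);
    }
}
```

Whether this actually removes the lockup also depends on the database's isolation level and lock escalation; the async split just keeps each write transaction short.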
> On Tue, Dec 11, 2012 at 10:49 AM, Julien Martin <bal...@gmail.com> wrote:
>
> > Thank you very much for this exhaustive reply, Christopher.
> >
> > 2012/12/11 Christopher Schultz <ch...@christopherschultz.net>
> >
> > > Julien,
> > >
> > > Warning: this is long. Like, André-or-Mark-Eggers long.
> > >
> > > On 12/11/12 7:30 AM, Julien Martin wrote:
> > > > I am referring to the following blog entry:
> > > > http://blog.springsource.org/2012/05/06/spring-mvc-3-2-preview-introducing-servlet-3-async-support
> > > > about Spring MVC 3.2 asynchronous support.
> > > >
> > > > I understand Tomcat uses a thread pool in order to serve http/web
> > > > requests. Furthermore, the above article seems to indicate that
> > > > Spring MVC asynchronous support relieves Tomcat's thread pool and
> > > > allows for better concurrency in the webapp by using background
> > > > threads for "heavy-lift" operations.
> > >
> > > I believe you are misinterpreting what that post has to say. It's not
> > > that a "background" thread itself is more efficient; it's that
> > > processing that does not need to communicate with the client can be
> > > de-coupled from the request-processing thread pool that exists for
> > > that purpose.
> > >
> > > An example -- right from the blog post -- will make much more sense
> > > than what I wrote above. Let's take the example of sending an email
> > > message. First, some assumptions:
> > >
> > > 1. Sending an email takes a long time (say, 5 seconds).
> > > 2. The client does not need confirmation that the email has been sent.
> > >
> > > If you were to write a "classic" servlet, it would look something
> > > like this:
> > >
> > >     protected void doPost(HttpServletRequest request,
> > >                           HttpServletResponse response)
> > >             throws ServletException, IOException {
> > >         validateOrder();
> > >         queueOrder();
> > >         sendOrderConfirmation(); // this is the email
> > >         response.sendRedirect("/order_complete.jsp");
> > >     }
> > >
> > > Let's say that validation takes 500ms, queuing takes 800ms, and
> > > emailing (as above) takes 5000ms. That means that the request, from
> > > the client's perspective, takes 6300ms (6.3 seconds). That's a
> > > noticeable delay.
> > >
> > > Also, during that whole time, a single request-processing thread
> > > (from Tomcat's thread pool) is tied up, meaning that no other
> > > requests can be processed by that same thread.
> > >
> > > If you have a thread pool of size=1 (foolish, yet instructive), it
> > > means you can only process a single transaction every 6.3 seconds.
> > >
> > > Let's re-write the servlet with a background thread -- no
> > > "asynchronous" stuff from the Servlet API, just a simple background
> > > thread:
> > >
> > >     protected void doPost(HttpServletRequest request,
> > >                           HttpServletResponse response)
> > >             throws ServletException, IOException {
> > >         validateOrder();
> > >         queueOrder();
> > >
> > >         (new Thread() {
> > >             public void run() {
> > >                 sendOrderConfirmation();
> > >             }
> > >         }).start();
> > >
> > >         response.sendRedirect("/order_complete.jsp");
> > >     }
> > >
> > > So, now the email is being sent by a background thread: the response
> > > returns to the client after 1.3 seconds, which is a significant
> > > improvement. Now we can handle a request once every 1.3 seconds with
> > > a request-processing thread pool of size=1.
> > >
> > > Note that a better implementation of the above would be to use a
> > > thread pool for this sort of thing instead of creating a new thread
> > > for every request. This is what Spring provides. It's not that
> > > Spring can do a better job of thread management; it's that Tomcat's
> > > thread pool is special: it's the only one that can actually dispatch
> > > client requests.
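The "thread pool instead of a new thread per request" suggestion just above can be sketched with plain java.util.concurrent. The pool size, the class name OrderWorker, and the sendOrderConfirmation() stub below are illustrative assumptions, not code from the thread:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class OrderWorker {
    // One shared pool, created once at startup, instead of a new Thread per
    // request; 10 workers is an arbitrary example size.
    private static final ExecutorService emailPool =
            Executors.newFixedThreadPool(10);

    // Stand-in for the slow (~5 s) email call.
    static void sendOrderConfirmation(int orderId) {
        System.out.println("confirmation sent for order " + orderId);
    }

    // What doPost() would do after validateOrder()/queueOrder():
    static void submitConfirmation(int orderId) {
        emailPool.submit(() -> sendOrderConfirmation(orderId)); // returns immediately
    }

    public static void main(String[] args) throws InterruptedException {
        submitConfirmation(42);
        emailPool.shutdown();                            // accept no new tasks
        emailPool.awaitTermination(5, TimeUnit.SECONDS); // let queued work finish
    }
}
```

The request thread only pays the cost of the submit() call; the slow email runs on one of the pool's worker threads.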
> > > Off-loading onto another thread pool for background processing means
> > > more client requests can be handled with a smaller (or same-sized)
> > > pool.
> > >
> > > Looking above, you might notice that the validateOrder() and
> > > queueOrder() processes still take some time (1.3 seconds) to
> > > complete, and there is no interaction with the client during that
> > > time -- the client is just sitting there waiting for a response.
> > > Work is still getting done on the server, of course, but there's no
> > > real reason that the request-processing thread has to be the one
> > > doing that work: we can delegate the entire thing to a background
> > > thread so the request-processing thread can get back to dispatching
> > > new requests.
> > >
> > > This is where Servlet 3.0 async comes into play.
> > >
> > > Let's re-write the servlet as an asynchronous one. I've never
> > > actually written one, so I'm sure the details are going to be wrong,
> > > but the idea is the same. This time, we'll do everything
> > > asynchronously:
> > >
> > >     protected void doPost(HttpServletRequest request,
> > >                           HttpServletResponse response) {
> > >         final AsyncContext ctx = request.startAsync();
> > >
> > >         (new Thread() {
> > >             public void run() {
> > >                 try {
> > >                     validateOrder();
> > >                     queueOrder();
> > >                     sendOrderConfirmation();
> > >                     ((HttpServletResponse) ctx.getResponse())
> > >                             .sendRedirect("/order_complete.jsp");
> > >                 } catch (IOException e) {
> > >                     // log it; the client gets no redirect
> > >                 } finally {
> > >                     ctx.complete();
> > >                 }
> > >             }
> > >         }).start();
> > >     }
> > >
> > > So, what happens? When startAsync() is called, an AsyncContext is
> > > created, and the request and response are essentially packaged up
> > > for later use. The doPost() method creates a new thread and starts
> > > it (or it may start a few ms later), then returns. At this point,
> > > the request-processing thread has only spent a few ms (let's say
> > > 5ms) setting up the request, and then it goes back into Tomcat's
> > > thread pool and can accept another request. Meanwhile, the
> > > "background" thread will process the actual transaction.
> > >
> > > Let's assume that nothing in the run() method above interacts in
> > > any way with the client.
> > > In the first example (no async), the client waits the whole time
> > > for a response from the server, and the request-processing thread
> > > does all the work. So the client waits 6.3 seconds, and the
> > > request-processing thread is "working" for 6.3 seconds.
> > >
> > > In the async example, the client will probably still wait 6.3
> > > seconds, but the request-processing thread is back and ready for
> > > more client requests after a tiny amount of time. Of course, the
> > > transaction is not complete yet.
> > >
> > > The background thread will run and process the transaction,
> > > including the 5-second email process. Once the email confirmation
> > > has been sent, the background thread "sends" a redirect and
> > > completes the async request. I'm not sure of the exact details
> > > here: either the background thread itself (via
> > > ctx.getResponse().sendRedirect()) pushes the response back to the
> > > client, or Tomcat fetches a request-processing thread from the pool
> > > and uses that to do the same thing. I can't see why the background
> > > thread wouldn't do it itself, but it's up to the container to
> > > determine who does what.
> > >
> > > The point is that, when using asynchronous requests, fewer
> > > request-processing threads can handle a whole lot of load. In the
> > > async example, still with a thread pool of size=1 and an async
> > > setup time of 5ms, you can handle one client transaction every 5ms.
> > > That's much better than one every 6.3 seconds, don't you think?
> > >
> > > (Note that this isn't magic: if your background threads are
> > > limited, or your system simply can't handle the number of
> > > transactions you are trying to process asynchronously, eventually
> > > you'll still have everyone waiting 6.3 seconds no matter what.)
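The "this isn't magic" caveat above can be made concrete with a bounded background pool that pushes back when it falls behind. The sizes and the choice of CallerRunsPolicy below are illustrative assumptions, not something prescribed in the thread:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedEmailPool {
    public static void main(String[] args) throws InterruptedException {
        // 4 workers and at most 100 queued emails; when the queue is full,
        // CallerRunsPolicy runs the task on the submitting thread, which
        // slows intake instead of growing an unbounded backlog -- in the
        // servlet case, the wait becomes visible to the caller again.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4, 4,
                0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(100),
                new ThreadPoolExecutor.CallerRunsPolicy());

        pool.submit(() -> System.out.println("sending confirmation..."));
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Bounding the queue is one way to surface the overload instead of hiding it until memory runs out.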
> > > So, a recap of throughput (requests/second) of the above three
> > > implementations:
> > >
> > >     Standard:     0.15873
> > >     Background:   0.76923
> > >     Async:      200.00000
> > >
> > > Using asynchronous dispatching can improve our throughput by a huge
> > > factor.
> > >
> > > It's worth repeating what I said earlier: if your server can't
> > > actually handle this much load (200 emails per second, let's say),
> > > then using async isn't going to change anything. Honestly, this
> > > trick only works when you have a lot of heterogeneous requests. For
> > > example, maybe 10% of your traffic is handling orders as
> > > implemented above, while the rest of your traffic consists of much
> > > smaller sub-500ms requests. There's probably no reason to convert
> > > those short-running requests into asynchronous operations. Only
> > > long-running processes with no client interaction make any sense
> > > for this. If only 10% of your requests are orders, that means you
> > > can process maybe 20 orders and 190 "small" requests per second.
> > > That's much better than, say, waiting 6.3 seconds for a single
> > > order, then processing a single short request, then another order,
> > > and so on.
> > >
> > > Just remember that once all request-processing threads are tied up
> > > doing something, everyone else waits in line. Asynchronous request
> > > dispatching aims to run through the line as quickly as possible. It
> > > does *not* improve the processing time of any one transaction.
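The recap numbers above are just the reciprocals of how long the request-processing thread is occupied per request (6.3 s, 1.3 s, and roughly 5 ms), which this small check confirms:

```java
public class Throughput {
    public static void main(String[] args) {
        // throughput = 1 / (time the request-processing thread is occupied)
        double standard   = 1.0 / 6.3;    // full 6.3 s on the request thread
        double background = 1.0 / 1.3;    // email moved to a background thread
        double async      = 1.0 / 0.005;  // ~5 ms of async setup only

        System.out.printf("Standard:   %9.5f req/s%n", standard);   // 0.15873
        System.out.printf("Background: %9.5f req/s%n", background); // 0.76923
        System.out.printf("Async:      %9.5f req/s%n", async);      // 200.00000
    }
}
```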
> > > -chris