A few follow-ups on this. It's only now that I have realised that the Tomcat APR handler in JBoss is subtly different from the one in Tomcat, i.e. that whole piece of Tomcat has been forked into JBossWeb, and the two are starting to diverge. Be that as it may, my comments cover the current design as it exists in both of them at trunk level.

I think with Windows, APR will have scalability problems sooner or
later, as poller performance is bad on that platform (the code has a
hack to use many pollers as performance degrades quickly with size).
There is a Vista+ solution to that somewhere in the future, but I'm not
sure this whole thing will still be relevant then.
Why is poller performance bad on Windows? Is that a consequence of the way APR interfaces to WinSock? I'm guessing that APR uses a Unix-style approach to polling the sockets. Or is it to do with the performance of the poll inside Windows itself?
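Just to check my understanding of that "many pollers" workaround: I assume the idea is simply to spread registrations across several small poll sets so that no single one grows large. The sketch below uses NIO selectors purely as an illustration of that idea (the APR poller is obviously not NIO, and all the names here are mine, not the real code); each selector would be driven by its own select() loop, which I have omitted.

import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

// Illustration only: spread socket registrations across several small
// selectors so no single poll set grows large.
public class SplitPollers {

    private final Selector[] selectors;
    private int next = 0;

    public SplitPollers(int count) throws IOException {
        selectors = new Selector[count];
        for (int i = 0; i < count; i++) {
            selectors[i] = Selector.open();
        }
    }

    // Round-robin the registrations so each poll set stays small, which is
    // how I read the workaround for poll degrading quickly with size.
    public synchronized void add(SocketChannel channel) throws IOException {
        channel.configureBlocking(false);
        channel.register(selectors[next], SelectionKey.OP_READ);
        next = (next + 1) % selectors.length;
    }
}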

In any case, at our end we still have to make Windows work as well as possible, so if there are simple tweaks we can make to let performance degrade more gracefully under peaky load, can we not discuss them?

Also, I couldn't see where it specifically sets a higher number of pollers for Windows. (And in which fork? :-)

And could you please elaborate on that last statement: "I'm not sure this whole thing will still be relevant then"?
DeferAccept on Unix makes accept return a socket only if it has data
available. Of course, this is much faster, but I'm not sure about its
support status on any OS. Setting the options is done in a regular
thread due to possible SSL processing (and just to be safe overall).
Maybe an option there to do that in the accept thread would be decent
(obviously, only useful if there's no SSL and no deferAccept; in theory,
although setSocketOptions is cheap, Poller.add does sync, which is a
problem since it's bad if the accept thread is blocked for any reason,
so I don't know if that would work better in the real world).

There seem to be two distinct aspects to this deferAccept thing. One is what happens with the socket options (and as I understand it that option is only supported on Linux 2.6 anyway). The other, which is in the AprEndpoint code, concerns the processing of the new connection. On that note, I have a question about this bit of code:

if (!deferAccept) {
    if (setSocketOptions(socket)) {
        getPoller().add(socket);
    } else {
        // Close socket and pool
        Socket.destroy(socket);
        socket = 0;
    }
} else {
    // Process the request from this socket
    if (!setSocketOptions(socket)
            || handler.process(socket) == Handler.SocketState.CLOSED) {
        // Close socket and pool
        Socket.destroy(socket);
        socket = 0;
    }
}

The default value of deferAccept is true, but on Windows this option is not supported by the TCP/IP stack, so there is code that forces the flag to false in that case. The socket is then added straight to the poller, and I'm happy with that approach as far as it goes. But the act of getting the socket across to the poller, which should be a relatively quick operation (?), still requires a worker thread from the common pool; as I read it, the current path looks roughly like the sketch below. This gets back to my original point. If the new connection could be pushed across to the poller as soon as possible, without handling the request and without having to rely on the worker threads, then surely this would degrade more gracefully than the current situation, where a busy server leaves new connections sitting in the backlog for quite some time. That is a real problem when the backlog is relatively small.
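Here is my reading of the current shape of that path, greatly simplified (the names are mine, not the AprEndpoint ones): the acceptor hands every new socket to the worker pool, and only a worker registers it with the poller.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Simplified sketch of the current path as I understand it.
public class CurrentPathSketch {

    interface Poller { void add(long socket); }

    private final ExecutorService workers = Executors.newFixedThreadPool(8);
    private final Poller poller;

    public CurrentPathSketch(Poller poller) {
        this.poller = poller;
    }

    // Called from the acceptor thread for each accepted socket.
    void accepted(final long socket) {
        // If every worker is busy with requests, this task queues behind them
        // and the new connection sits unregistered while the backlog fills.
        workers.execute(new Runnable() {
            public void run() {
                if (setSocketOptions(socket)) {
                    poller.add(socket);
                }
                // (error path / Socket.destroy omitted for brevity)
            }
        });
    }

    // Stand-in for the real option setting (TCP_NODELAY, SSL handshake, ...).
    boolean setSocketOptions(long socket) {
        return true;
    }
}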

In the Tomcat branch there is code to allow multiple acceptor threads, with a remark that it doesn't seem to work that well if you do. So, that being the case, why not push the socket straight across to the poller in the context of the acceptor thread?
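To make that concrete, still using the made-up names from the sketch above, accepted() is roughly what I would change: when deferAccept is off and SSL is not in play, register with the poller from the acceptor thread itself, and let everything else keep going through the workers. This is a sketch of the idea, not the actual AprEndpoint code.

// Replaces accepted() in the sketch above; deferAccept and useSSL would be
// booleans taken from the endpoint configuration.
void accepted(final long socket) {
    if (!deferAccept && !useSSL) {
        // setSocketOptions is cheap here (no SSL handshake), so the only
        // real cost is that the poller add synchronizes and can stall the
        // acceptor briefly on a busy poller.
        if (setSocketOptions(socket)) {
            poller.add(socket);
        }
        // (error path / Socket.destroy omitted for brevity)
    } else {
        // SSL handshake or deferred-accept request processing still needs
        // a worker thread.
        workers.execute(new Runnable() {
            public void run() {
                if (setSocketOptions(socket)) {
                    poller.add(socket);
                }
            }
        });
    }
}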

...MT
