On Fri, 2005-08-19 at 18:17 +0200, Oleg Kalnichevski wrote:
> On Fri, Aug 19, 2005 at 10:18:54AM -0400, Michael Becke wrote:
> > As your numbers and some tests I've been running show, it seems that
> > write performance is the key area of difficulty.  From some
> > "profiling" I've been doing it looks like writeToChannel() is called
> > an extremely large number of times, especially when you consider
> > that the entity is quite large and already buffered in memory.  For
> > example, I ran just the POST request 200 times and here are the
> > numbers I get:
> > 
> > Bytes written: 200,482,200
> > Calls to writeToChannel: 1,126,198
> > Bytes written per call: ~178
> > 
> > This seems pretty fishy to me considering that the buffer is 4096
> > bytes.  My knowledge of NIO is quite limited so I don't have much to
> > compare these numbers to.  Any ideas?
> > 
> > Mike
> > 
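For reference, numbers like these can be gathered by dropping a trivial
counting decorator between the entity writer and the socket channel. The
sketch below is purely illustrative (CountingChannel and its stats() method
are made-up names, not HttpCommon API):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.WritableByteChannel;

/**
 * Counts write() invocations and bytes written on the wrapped channel.
 */
class CountingChannel implements WritableByteChannel {

    private final WritableByteChannel delegate;
    private long calls;
    private long bytes;

    CountingChannel(final WritableByteChannel delegate) {
        this.delegate = delegate;
    }

    public int write(final ByteBuffer src) throws IOException {
        final int n = this.delegate.write(src);
        this.calls++;
        this.bytes += n;
        return n;
    }

    public boolean isOpen() {
        return this.delegate.isOpen();
    }

    public void close() throws IOException {
        this.delegate.close();
    }

    public String stats() {
        return "calls=" + this.calls + ", bytes=" + this.bytes
            + ", bytes/call=" + (this.calls == 0 ? 0 : this.bytes / this.calls);
    }
}

Something along these lines should also make it easy to confirm whether the
4096 byte buffer really is being flushed in ~178 byte slices.
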
> 
> Mike and Roland
> 
> I found the culprit. An open read selector on a non-blocking channel
> slows down write I/O by 100%. Go figure.
> 
> I'll be working on a fix.
> 

Mike et al,

I looked at the possibility of fixing the problem by registering and
unregistering the socket channel for each read operation that requires a
timeout. Unfortunately, this approach causes massive stability problems.
Eventually the application goes down with a NullPointerException
originating from one of the com.sun.io packages. This is a dead end.
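
For those who want to see what that looks like in practice, the approach
boils down to roughly the following. This is an illustrative sketch only,
not the code that was actually tried; PerReadTimeoutSupport and timedRead
are made-up names:

import java.io.IOException;
import java.io.InterruptedIOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

final class PerReadTimeoutSupport {

    private PerReadTimeoutSupport() {
    }

    /**
     * Registers the (already non-blocking) channel with its dedicated
     * selector for the duration of a single read, then cancels the key
     * again. The timeout is in milliseconds and must be greater than zero.
     */
    static int timedRead(
            final SocketChannel channel,
            final Selector selector,
            final ByteBuffer dst,
            final long timeout) throws IOException {
        final SelectionKey key = channel.register(selector, SelectionKey.OP_READ);
        try {
            if (selector.select(timeout) == 0) {
                throw new InterruptedIOException("Read timed out");
            }
            selector.selectedKeys().clear();
            return channel.read(dst);
        } finally {
            key.cancel();
            // Cancelled keys are only deregistered during the next select
            // operation, so flush the cancelled-key set here; otherwise the
            // next register() call on this channel fails.
            selector.selectNow();
        }
    }
}

Cancelled keys being deregistered only during a subsequent select operation
means the selector has to be flushed after every single read; this constant
register/cancel churn is presumably where the instability comes from.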

I think it is time we reevaluated the problem at the design level. I
will send another email shortly describing what I personally see as the
only way to salvage NIO for HttpCommon.

Oleg


> Oleg
> 
> 
> > On 8/18/05, Oleg Kalnichevski <[EMAIL PROTECTED]> wrote:
> > > Folks,
> > > 
> > > Well, it looks like things are slightly more complicated than I initially
> > > asserted. I just could not let this one rest and kept on experimenting.
> > > As soon as I stopped just stupidly pumping lots of data through the
> > > socket and started using real HTTP requests, things started looking quite
> > > different. NIO still sucks when it comes to sending and receiving
> > > large content bodies (~1MB), but it tends to perform much better for
> > > smaller messages (1KB - 100KB). It appears the HTTP data receiver based
> > > on NIO can indeed parse HTTP headers much faster, as I hoped.
> > > 
> > > Here's the test app I have been using
> > > http://svn.apache.org/repos/asf/jakarta/httpclient/trunk/coyote-httpconnector/src/tests/tests/performance/PerformanceTest.java
> > > 
> > > In order to run it, one needs the latest SVN snapshot of HttpCommon [1]
> > > and a reasonably recent version of Tomcat, preferably the 5.5 branch. This
> > > is the server.xml that I have been using [2]. (Please do not forget to
> > > comment out the other connector on port 8888.)
> > > 
> > > These are my numbers:
> > > 
> > > Windows XP, P4, 1GB
> > > 
> > > tests.performance.PerformanceTest 8080 200 NIO
> > > ==============================================
> > > Request: GET /tomcat-docs/changelog.html HTTP/1.1
> > > Average (nanosec): 17,832,699
> > > Request: GET /servlets-examples/servlet/RequestInfoExample HTTP/1.1
> > > Average (nanosec): 3,444,763
> > > Request: POST /servlets-examples/servlet/RequestInfoExample HTTP/1.1
> > > Average (nanosec): 49,411,834
> > > 
> > > tests.performance.PerformanceTest 8080 200 OldIO
> > > ==============================================
> > > Request: GET /tomcat-docs/changelog.html HTTP/1.1
> > > Average (nanosec): 24,436,939
> > > Request: GET /servlets-examples/servlet/RequestInfoExample HTTP/1.1
> > > Average (nanosec): 15,563,380
> > > Request failed: java.nio.channels.ClosedChannelException
> > > Request: POST /servlets-examples/servlet/RequestInfoExample HTTP/1.1
> > > Average (nanosec): 26,104,509
> > > 
> > > Oleg
> > > 
> > > [1]
> > > http://svn.apache.org/repos/asf/jakarta/httpclient/trunk/http-common/
> > > [2]
> > > http://svn.apache.org/repos/asf/jakarta/httpclient/trunk/coyote-httpconnector/src/tests/server.xml
> > > 
> > > On Wed, 2005-08-17 at 21:48 +0200, Oleg Kalnichevski wrote:
> > > > Folks,
> > > > I have spent the past several miserable nights analyzing the performance of
> > > > the new Coyote HTTP connector. I have discovered that the HttpCommon code
> > > > was horribly slow for larger request/response bodies, especially
> > > > chunk-encoded ones, on my Linux box [1], whereas it seemed almost fine on my
> > > > wife's much slower WinXP laptop [2]. To cut a long and sad story
> > > > short, after some investigation I found out that the culprit was NIO.
> > > > The way I see it, NIO, as presently implemented in Sun's JREs for Linux,
> > > > simply sucks. Blocking NIO actually appears more or less okay. The real
> > > > problem is the NIO channel selector, which proves horribly expensive in
> > > > terms of performance (and we DO have to use a selector on the socket
> > > > channel, because it is the only way I know of to implement a socket
> > > > timeout with NIO).
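
For anyone unfamiliar with the pattern, a selector-based read timeout amounts
to keeping the channel registered for OP_READ and calling select() with a
timeout before every read. A minimal sketch, assuming a dedicated selector
per connection (SelectorTimeoutReader is a made-up name, not an actual
HttpCommon class):

import java.io.IOException;
import java.io.InterruptedIOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

final class SelectorTimeoutReader {

    private final SocketChannel channel;
    private final Selector selector;

    SelectorTimeoutReader(final SocketChannel channel) throws IOException {
        this.channel = channel;
        this.channel.configureBlocking(false);
        this.selector = Selector.open();
        // The channel stays registered for OP_READ for as long as the
        // connection lives.
        this.channel.register(this.selector, SelectionKey.OP_READ);
    }

    /**
     * Blocks for at most the given number of milliseconds (must be greater
     * than zero) waiting for the channel to become readable, then reads
     * into the buffer.
     */
    int read(final ByteBuffer dst, final long timeout) throws IOException {
        if (this.selector.select(timeout) == 0) {
            throw new InterruptedIOException("Read timed out");
        }
        this.selector.selectedKeys().clear();
        return this.channel.read(dst);
    }

    void close() throws IOException {
        this.selector.close();
        this.channel.close();
    }
}

As noted further up in the thread, it is this permanently open selector on a
permanently non-blocking channel that appears to penalize writes on the same
channel.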
> > > >
> > > > I have written a small test app to demonstrate the problem:
> > > > http://svn.apache.org/repos/asf/jakarta/httpclient/trunk/http-common/src/test/tests/performance/NIOvsOldIO.java
> > > >
> > > > This is what I get on my Linux box
> > > > =========================================
> > > > Old IO average time (ms): 1274
> > > > Blocking NIO average time (ms): 1364
> > > > NIO with Select average time (ms): 4981
> > > > =========================================
> > > >
> > > > Bottom line: NIO may still be a better model for some special cases, such
> > > > as instant messaging, where one can have thousands of mostly idle
> > > > connections with fairly small and infrequent data packets. At the same
> > > > time, I have come to the conclusion that NIO makes no sense whatsoever
> > > > for synchronous HTTP (servlets, for instance), where large
> > > > request/response entities need to be consumed/produced through
> > > > InputStream/OutputStream interfaces, data tends to come in steady
> > > > streams of chunks, and connections are relatively short-lived.
> > > >
> > > > I intend to remove all the NIO-related classes from HttpCommon and put
> > > > them in the HttpAsynch module, where they may serve as a starting point
> > > > for the asynchronous HTTP implementation. Please take a look at the test
> > > > app and complain loudly if you think something is wrong. Otherwise I'll
> > > > go ahead and get rid of the NIO code in HttpCommon.
> > > >
> > > > Oleg
> > > > ===
> > > > [1] Dell Dimension 8300, Pentium 4 3.00GHz, 512MB, Fedora Core 4,
> > > > 2.6.11-1.1369_FC4smp
> > > > [2] A pile of old trash running Windows XP Home SP2 (rather badly)