Folks,

I think we (and especially I) have been looking at the problem from the
wrong angle. Fundamentally, blocking NIO _IS_ faster than the old IO (see
the numbers below). This is especially the case for small requests /
responses, where the message content is only a couple of times larger than
the message head. NIO _DOES_ significantly speed up the parsing of HTTP
message headers.

tests.performance.PerformanceTest 8080 200 OldIO
================================================
Request: GET /tomcat-docs/changelog.html HTTP/1.1
Average (nanosec): 10,109,390
Request: GET /servlets-examples/servlet/RequestInfoExample HTTP/1.1
Average (nanosec): 4,262,260
Request: POST /servlets-examples/servlet/RequestInfoExample HTTP/1.1
Average (nanosec): 7,813,805

tests.performance.PerformanceTest 8080 200 NIO
================================================
Request: GET /tomcat-docs/changelog.html HTTP/1.1
Average (nanosec): 8,681,050
Request: GET /servlets-examples/servlet/RequestInfoExample HTTP/1.1
Average (nanosec): 1,993,590
Request: POST /servlets-examples/servlet/RequestInfoExample HTTP/1.1
Average (nanosec): 6,062,200

The performance of NIO starts degrading dramatically only when the
socket channel is put into non-blocking mode and registered with a
selector. The sole reason we need selectors at all is to implement the
socket read timeout. To make matters worse, we are forced to use one
selector per channel just to simulate blocking I/O. This is extremely
wasteful. NIO is not meant to be used this way.
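
To illustrate, here is a minimal sketch (not actual code from our tree)
of the pattern I am describing: a throwaway selector created for a single
channel, used for nothing but emulating a blocking read with a timeout.
The class and method names are made up for the example.

import java.io.IOException;
import java.io.InterruptedIOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

public class TimeoutReader {

    // Emulates a blocking read with a timeout (millis) by flipping the
    // channel into non-blocking mode and registering it with a selector
    // dedicated to this one channel.
    public static int read(SocketChannel channel, ByteBuffer dst,
            long soTimeout) throws IOException {
        Selector selector = Selector.open();  // one selector per channel!
        try {
            channel.configureBlocking(false);
            channel.register(selector, SelectionKey.OP_READ);
            // Block until the channel becomes readable or the timeout elapses
            if (selector.select(soTimeout) == 0) {
                throw new InterruptedIOException("Socket read timed out");
            }
            return channel.read(dst);
        } finally {
            // Closing the selector deregisters the channel, so it can be
            // switched back to blocking mode afterwards
            selector.close();
            channel.configureBlocking(true);
        }
    }
}

Every read pays for a Selector.open() / Selector.close() pair, which is
exactly the overhead the numbers above are hinting at.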

Fundamentally, the whole issue is about the difficulty of timing out idle
NIO connections, not about NIO performance. What if we simply decided NOT
to support socket timeouts on NIO connections? Consider this. On the
client side we could easily work around the problem by choosing the type
of connection based on the value of the SO_TIMEOUT parameter. Besides,
there are plenty of client-side applications where the socket read
timeout matters less than the total request time, and which require a
monitor thread anyway. Such applications could benefit greatly from NIO
connections without losing any functionality. The server side is far more
problematic, because there the socket read timeout is a convenient way to
manage idle connections. However, an extra thread to monitor and drop
idle connections may well be worth the extra performance of NIO.
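
For the sake of discussion, here is a rough sketch of what such a
monitor thread might look like. The IdleConnection interface and
everything else here is hypothetical; the point is only that a single
daemon thread sweeping a set of connections could replace per-socket
read timeouts.

import java.util.Iterator;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class IdleConnectionMonitor extends Thread {

    // Minimal contract the monitored connections would have to implement
    public interface IdleConnection {
        long lastActivity();  // timestamp (millis) of the last I/O event
        void close();
    }

    private final Set<IdleConnection> connections =
            ConcurrentHashMap.newKeySet();
    private final long maxIdleMillis;

    public IdleConnectionMonitor(long maxIdleMillis) {
        this.maxIdleMillis = maxIdleMillis;
        setDaemon(true);
    }

    public void register(IdleConnection conn) {
        connections.add(conn);
    }

    @Override
    public void run() {
        while (!isInterrupted()) {
            long now = System.currentTimeMillis();
            for (Iterator<IdleConnection> it = connections.iterator();
                    it.hasNext();) {
                IdleConnection conn = it.next();
                if (now - conn.lastActivity() > maxIdleMillis) {
                    conn.close();  // drop the idle connection
                    it.remove();
                }
            }
            try {
                Thread.sleep(1000);  // sweep once a second
            } catch (InterruptedException ex) {
                break;
            }
        }
    }
}

One such thread per endpoint would replace the one-selector-per-channel
arrangement entirely, at the cost of a coarser timeout granularity.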

What do you think?

Oleg

