On Mon, 2007-03-26 at 13:16 +0530, Asankha C. Perera wrote:
> Hi Oleg
>
> Oleg Kalnichevski wrote:
> > The problem appears to be caused by Synapse opening an I/O pipe per
> > *every* incoming and outgoing HTTP message. On some platforms this can
> > be a very expensive operation both in terms of performance and system
> > resources. On Windows, opening an I/O pipe apparently requires a local
> > IP port to be allocated. No wonder Synapse chokes only after a few
> > thousand requests.
> >
> > I see absolutely no reason why Synapse should make use of I/O pipes.
> > Essentially pipes are being used to bridge event-driven NIO and
> > stream-based classic IO. There are other ways to get the job done. A
> > trivial shared buffer with synchronized access should perfectly
> > suffice. I'll happily lend you a helping hand if necessary.
> >
> Could you help me a bit here... The Pipe class seemed to let us do what
> we wanted - i.e. bridge streams to channels - without having to write
> our own code. Could you elaborate more on how you propose to get around
> using Pipes? Or would you have a pointer to any code?
>
Hi Asankha,

This does require writing some custom code, but I think it is well worth
the trouble. Basically all you need is an object with synchronized access
to its internal buffer, so that it can be used by the I/O dispatch thread
to produce data and by the worker thread to consume it, or the other way
around. HttpCore NIO provides two interfaces to that end, which you may
want to take as a starting point:

http://svn.apache.org/repos/asf/jakarta/httpcomponents/httpcore/trunk/module-nio/src/main/java/org/apache/http/nio/util/ContentInputBuffer.java
http://svn.apache.org/repos/asf/jakarta/httpcomponents/httpcore/trunk/module-nio/src/main/java/org/apache/http/nio/util/ContentOutputBuffer.java

There are also some concrete implementations of those interfaces provided
out of the box by HttpCore NIO:

http://svn.apache.org/repos/asf/jakarta/httpcomponents/httpcore/trunk/module-nio/src/main/java/org/apache/http/nio/util/SharedInputBuffer.java
http://svn.apache.org/repos/asf/jakarta/httpcomponents/httpcore/trunk/module-nio/src/main/java/org/apache/http/nio/util/SharedOutputBuffer.java

These shared buffer classes are fairly advanced: they are capable of
throttling the frequency of I/O events to make sure the internal buffer
does not overflow. That is, the worker thread can temporarily suspend
data input / output on the socket channel if it cannot keep up with the
I/O rate, take the time it needs to process the data, and free up more
space in the shared buffers. This helps ensure that the transport
operates with a nearly constant memory footprint, so once a connection
is established and fully initialized (content buffers allocated and all)
it will never go down due to an out-of-memory condition. There is hardly
anything worse for an HTTP transport than dropping a connection while
streaming out the response body after having already sent an HTTP 200 OK
back to the client.

You can take a look at the throttling version of the HTTP service handler
for an example of the shared buffers in action:

http://svn.apache.org/repos/asf/jakarta/httpcomponents/httpcore/trunk/module-nio/src/main/java/org/apache/http/nio/protocol/ThrottlingHttpServiceHandler.java

BUT the bad news is that there is nearly no test coverage for these
classes yet, as I wanted to spend more time working on them during ALPHA5.
This code can certainly benefit from more testing. So you might want to
start with a somewhat simpler custom implementation for Synapse 1.0 that
always expands its buffers whenever more input / output becomes available,
and then consider making it more sophisticated for 1.1. Just an idea.

Oleg

> thanks
> asankha
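P.S. In case it helps, here is a very rough, untested sketch of the kind
of simple expanding buffer I had in mind for the 1.0 time frame. It only
covers the input direction, and the class and method names are made up
purely for illustration. The I/O dispatch thread fills it from a
ByteBuffer, the worker thread drains it, and the buffer simply grows on
demand instead of throttling I/O events:

import java.nio.ByteBuffer;

/**
 * Sketch only: a byte buffer shared between the I/O dispatch thread
 * (producer) and a worker thread (consumer). It expands on demand
 * rather than suspending I/O events.
 */
public class SimpleSharedBuffer {

    private byte[] buffer = new byte[4096];
    private int length = 0;            // number of valid bytes in buffer
    private boolean endOfStream = false;

    // Called by the I/O dispatch thread when content arrives.
    public synchronized void produce(final ByteBuffer src) {
        int required = length + src.remaining();
        if (required > buffer.length) {
            // Expand rather than block the dispatch thread.
            byte[] newBuffer = new byte[Math.max(required, buffer.length * 2)];
            System.arraycopy(buffer, 0, newBuffer, 0, length);
            buffer = newBuffer;
        }
        int n = src.remaining();
        src.get(buffer, length, n);
        length += n;
        notifyAll();
    }

    // Called by the I/O dispatch thread when the message is complete.
    public synchronized void shutdown() {
        endOfStream = true;
        notifyAll();
    }

    // Called by the worker thread; blocks until data or end of stream.
    public synchronized int consume(final byte[] dst, final int off, final int len)
            throws InterruptedException {
        while (length == 0 && !endOfStream) {
            wait();
        }
        if (length == 0) {
            return -1;  // end of stream
        }
        int n = Math.min(len, length);
        System.arraycopy(buffer, 0, dst, off, n);
        // Compact the remaining content to the front of the buffer.
        System.arraycopy(buffer, n, buffer, 0, length - n);
        length -= n;
        return n;
    }
}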

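On the consumer side, the worker thread could then wrap the shared buffer
in a plain InputStream along these lines (again, just an illustration and
not tested):

import java.io.IOException;
import java.io.InputStream;

// Presents the shared buffer to the worker thread as classic stream IO.
class SharedBufferInputStream extends InputStream {

    private final SimpleSharedBuffer buffer;

    SharedBufferInputStream(final SimpleSharedBuffer buffer) {
        this.buffer = buffer;
    }

    public int read() throws IOException {
        byte[] tmp = new byte[1];
        int n = read(tmp, 0, 1);
        return n == -1 ? -1 : tmp[0] & 0xff;
    }

    public int read(final byte[] b, final int off, final int len)
            throws IOException {
        try {
            // Blocks until the dispatch thread produces more content
            // or signals the end of the stream.
            return buffer.consume(b, off, len);
        } catch (InterruptedException ex) {
            Thread.currentThread().interrupt();
            throw new IOException("Interrupted while waiting for content");
        }
    }
}

The output direction would be the mirror image of this: the worker thread
produces into a shared buffer through an OutputStream wrapper and the I/O
dispatch thread drains it into the channel.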