On Jul 11, 2005, at 5:59 AM, Kresten Krab Thorup wrote:
Hiram,
I browsed through the ActiveIO code, and it is pretty close to what
I am looking for for the basic IO stuff. Very nice!
Here are a couple of things that come to mind...
- the NIOAsyncChannel class should use the selector mechanism to
wait for the ability to write data, rather than Object.wait'ing.
Yep, but in that case the producing thread would still have to
Object.wait() until the selector calls back and notifies it. Right
now I'm taking a guess that the amount of time I have to wait() is
small, whereas a selector callback would let me know explicitly
when the wait is over.
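For what it's worth, a minimal sketch of the selector-driven approach, using a java.nio Pipe so it runs standalone (the class and method names here are made up for illustration, not ActiveIO's API):

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class WriteReadySketch {
    // Blocks on the selector until the channel can accept more bytes,
    // instead of Object.wait()ing for a guessed amount of time.
    static int writeWhenReady(Pipe.SinkChannel sink, ByteBuffer buf) throws Exception {
        try (Selector selector = Selector.open()) {
            sink.configureBlocking(false);
            sink.register(selector, SelectionKey.OP_WRITE);
            selector.select();       // wakes up exactly when the channel is writable
            return sink.write(buf);
        }
    }

    public static void main(String[] args) throws Exception {
        Pipe pipe = Pipe.open();
        int n = writeWhenReady(pipe.sink(), ByteBuffer.wrap("hello".getBytes()));
        System.out.println("wrote " + n + " bytes");
    }
}
```

The same OP_WRITE registration works for a non-blocking SocketChannel; the Pipe just keeps the sketch self-contained.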
- it would be beneficial to be able to buffer output, enabling
invocations in the main logic threads to finish fast, and then do
the I/O in a separate thread (for example directly inside the
selector thread). This is valuable for "slow clients" that may
access the server over a modem, for example. But this is just nice
to have.
Good idea, we could do this easily with a filter... in fact I think I
started on a filter that does just that. But typically, I would
think raw byte buffering is better done at the operating system
layer. Socket buffer sizes can be set to improve that.
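For example, the OS-level send buffer can be enlarged with the standard socket option (the 256 KB figure is arbitrary, and the OS is free to round or cap the request):

```java
import java.net.Socket;

public class BufferTuning {
    // Asks the OS to buffer up to 'bytes' of outgoing data so the writing
    // thread returns quickly while the kernel drains the buffer to the wire.
    // Returns the size the OS actually granted (it treats the request as a hint).
    static int requestSendBuffer(Socket socket, int bytes) throws Exception {
        socket.setSendBufferSize(bytes);
        return socket.getSendBufferSize();
    }

    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket()) {
            System.out.println("send buffer: " + requestSendBuffer(socket, 256 * 1024));
        }
    }
}
```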
Maybe the above two could be combined into an "OutputAsyncChannel"
interface? Perhaps this could work by means of a special
FlushPacket that invokes a callback handler when all previous
packets have been sent.
Our Channel interfaces have a flush() method that is meant to do just
that! (To do its best to flush all buffered data down the pipe.)
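A toy sketch of the flush-marker idea being discussed (all names here are hypothetical, not ActiveIO's; the queue stands in for the real write pipeline): a FlushPacket rides the queue behind the data, so its callback fires only once everything queued before it has been written.

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class FlushMarkerSketch {
    interface Packet {}
    record DataPacket(byte[] bytes) implements Packet {}
    record FlushPacket(Runnable onFlushed) implements Packet {}

    // Drains the queue in order; by the time a FlushPacket is reached,
    // every packet queued before it has been "sent", so its callback runs.
    static int drain(Queue<Packet> queue) {
        int written = 0;
        for (Packet p; (p = queue.poll()) != null; ) {
            if (p instanceof DataPacket d) written += d.bytes().length;
            else if (p instanceof FlushPacket f) f.onFlushed().run();
        }
        return written;
    }

    public static void main(String[] args) {
        Queue<Packet> q = new ArrayDeque<>();
        q.add(new DataPacket("abc".getBytes()));
        q.add(new FlushPacket(() -> System.out.println("flushed")));
        System.out.println("wrote " + drain(q) + " bytes");
    }
}
```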
- It looks like all writes are synchronous and blocking. We need
to have a timeout mechanism for both read and write operations --
this is critical for large-scale operations. When there are many
clients, several of them will typically be inactive for longer
periods of time, and so there is a lot to be saved by closing down
connections. Also, some of them will simply hang, become
disconnected from the network without TCP-level notification, or
otherwise be ineffective. Example: in one of our installations we
have hundreds of CORBA clients sitting at "work stations" around
the facility, and most of these are idle most of the time. CORBA/
IIOP has a mechanism to reestablish such connections as needed (at
least the Trifork ORB does that).
Sounds good to me. I'm willing to add a write operation with a
timeout. I would guess we also need a flush operation with a
timeout, too.
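On the read side, the standard way to bound a blocking read is SO_TIMEOUT (writes would need the selector approach, e.g. Selector.select(millis)). A self-contained sketch against a loopback peer that never sends anything:

```java
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class TimeoutSketch {
    // Returns true if the read gave up after 'millis' instead of hanging
    // forever on a silent peer.
    static boolean readTimesOut(int millis) throws Exception {
        try (ServerSocket server = new ServerSocket(0);
             Socket client = new Socket("127.0.0.1", server.getLocalPort());
             Socket accepted = server.accept()) {
            client.setSoTimeout(millis);   // bound every blocking read
            InputStream in = client.getInputStream();
            try {
                in.read();                 // the peer sends nothing
                return false;
            } catch (SocketTimeoutException expected) {
                return true;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("timed out: " + readTimesOut(200));
    }
}
```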
- It would also be good to have half-close functionality. In the
Trifork ORB, if the ORB receives a shutdown, all active connection
sockets are half-closed until active requests are done and their
responses sent.
One of the difficult problems with TCP sockets is that one does not
necessarily know if the other end closed the connection (or if it
is gone). This can result in some pretty annoying hangs higher up
the stack, and it's a pain to handle those cases all over the place
in I/O code. The only "generic solution" to this, as far as I can
see, is to use reasonable timeouts everywhere.
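In plain java.net terms, the half-close described above is Socket.shutdownOutput(): the shutting-down side stops sending (the peer sees EOF), but keeps its receive side open so in-flight responses can still arrive. A minimal loopback sketch:

```java
import java.net.ServerSocket;
import java.net.Socket;

public class HalfCloseSketch {
    // The client half-closes its sending side; the server observes EOF on
    // its read, while the client's receive side remains open for responses.
    static boolean peerSeesEof() throws Exception {
        try (ServerSocket server = new ServerSocket(0);
             Socket client = new Socket("127.0.0.1", server.getLocalPort());
             Socket accepted = server.accept()) {
            client.shutdownOutput();                        // half-close
            return accepted.getInputStream().read() == -1;  // peer sees EOF
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("peer saw EOF: " + peerSeesEof());
    }
}
```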
I'll come back later with some thoughts on SSL stuff...
Thanks for the input, Kresten. To spare the geronimo list from this
low-level IO talk, you might want to post further thoughts to the
activeio mailing lists:
http://activeio.codehaus.org/maven/mail-lists.html
Kresten Krab Thorup
[EMAIL PROTECTED]
"We do not inherit the Earth from our parents, we borrow it from
our children." Saint Exupery
On Jul 8, 2005, at 5:35 PM, Hiram Chirino wrote:
Hi David,
You are absolutely correct. The ActiveIO stuff was a
generalization and abstraction of the IO code in ActiveMQ. Thanks
for clarifying a potentially sensitive situation!
Regards,
Hiram
On Jul 7, 2005, at 10:51 PM, David Blevins wrote:
On Thu, Jul 07, 2005 at 12:16:19PM -0400, Davanum Srinivas wrote:
Hiram,
Could you please make sure that the project gets worked on here at
Apache? I'm a bit concerned about the code getting forked and then
Geronimo becoming dependent on an external project.
There's the "f" word again :)
Hiram, you can correct me if I'm wrong, but I thought the ActiveIO
code was created from the ActiveMQ transport system?
We had several conversations over a period of months on just using
the ActiveMQ transports in OpenEJB and Geronimo so there would be
something common to cut down on duplicated effort. We didn't use
any Geronimo IO code cause it, well... sucked :)
Even then, I think you pretty much rewrote everything.
Is my memory of history accurate or have I demented myself?
-David