Okay...

I love replying to my own posts :-)

I've modified my Axon port so that it schedules according to a rather
well-known message-passing kernel policy:

1. The sender transmits to the receiver via a FIFO buffer.
2. The sender only blocks if it tries to send on a full FIFO.
3. The receiver blocks if it reads from an empty FIFO.
4. Transmitter and receiver can both check the state of the FIFO
before attempting a send/receive.
5. The receiver can override the full-buffer behavior by:
     a) asserting the "ready" boolean property before yielding, which
allows the sender to continue sending even on a full FIFO;
     b) overriding the "ready" boolean with a "ready" boolean function
which can take mitigating action to recover from the full state.  Some
examples might be dynamically increasing its maximum FIFO size, or
recovering from a data overrun condition by clearing out the FIFO and
resetting the pipeline state, etc.;
     c) overriding "ready" with a function and setting the size to 0
or 1 so as to simulate an actual callback or interrupt function.
Though I haven't tried this yet, it was interesting that it seems to
fall naturally out of the code structure.  (One of the reasons I
ported Axon was to avoid having to deal with nasty callback
structures.)

I might be over-architecting with feature 5, but it looked like a
natural interrupt-like capability for out-of-band processing.  A rough
sketch of the receiver-side hook is below.
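
Here's roughly what I mean, in D.  (Sketch only: the names, sizes and
defaults are illustrative, not the actual classes in my port.)

// Sketch: a FIFO inbox whose "ready" policy can be overridden by the
// receiving component.  Illustrative names, not the real port's API.
class Inbox(T)
{
    private T[] fifo;        // backing store for the FIFO
    size_t maxSize = 16;     // nominal capacity

    // 5a/5b: the default policy refuses new messages when full; a
    // component can replace this delegate to grow maxSize, flush and
    // treat it as an overrun, etc., then return true so the sender
    // may continue.
    bool delegate() ready;

    this()
    {
        ready = delegate bool() { return false; };
    }

    bool canAccept()
    {
        return fifo.length < maxSize || ready();
    }

    void put(T msg)          // called from the sender's outbox after canAccept()
    {
        fifo ~= msg;
    }

    bool empty() { return fifo.length == 0; }

    T get()
    {
        T msg = fifo[0];
        fifo = fifo[1 .. $];
        return msg;
    }
}

Setting maxSize to 0 or 1 and supplying a ready() that does the work
directly is what gives the callback/interrupt-like behavior in 5c.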

I'm writing the unit tests right now and might do one refactoring pass
to clean up single-responsibility and encapsulation issues.
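
(D's built-in unittest blocks make this part pleasant; something like
the following, again with made-up names, sits right next to the class:)

unittest
{
    auto box = new Inbox!(int);
    assert(box.empty());
    box.put(42);
    assert(!box.empty());
    assert(box.get() == 42);
}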

As soon as I get something cleaned up a bit with unit tests I'll post.

JimB

On Sep 17, 4:57 pm, Jim Burnes <jvbur...@gmail.com> wrote:
> Michael,
>
> Thanks for the kudos, and I'm glad to hear someone is still out there.
> There are a few architectural issues I'd like to clarify.
>
> As I was going through the development process a couple of ideas occurred to
> me.
>
> 1. Should I just use a string inbox and dynamically convert the messages to
> their final values inside the components, or go for the gold and use D
> templates to let you create type-specific mailboxes?
>
> I decided to let you use D templates to create typed mailboxes.  My
> thinking was to let the compiler make sure inboxes and outboxes match
> types.  To make it possible for users to dynamically match up an
> inbox with compatible outboxes, I register the outbox with a generic message
> bus and allow introspection to take care of suggesting possible matches
> for dynamic programming.  (I can use Tango's Lisp-like filtering mechanisms
> to find conforming mailboxes.)
>
> One of my concerns was the extra complexity required in a generic postman
> process to dispatch a type-specific send.  I actually tried that and while
> it worked, it was really awkward because I had to first link the
> type-specific mailbox into the generic postman and then get the generic
> postman to dispatch the type-specific send in the outbox.
>
> Well, that was all resolved when I read a little further in your tutorial.
> I realized that by optimizing the postman out of the process and doing
> direct deliveries when outbox.send is called, the whole problem evaporated.
> (The only downside is that message sends are not synchronous.  An outbox
> can send 500 messages to a single inbox before a yield.  In that sense it's
> an asynchronous bus, which brings certain issues with it, but avoids a lot
> of others.)
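>
> Very roughly, the typed mailboxes and the direct delivery look like this
> (a sketch with simplified, made-up names, not the actual code in the port):
>
> // Sketch only: typed mailboxes via templates; Outbox.send delivers
> // directly to its linked Inbox, with no postman in the middle.
> class Inbox(T)
> {
>     T[] fifo;
>     void put(T msg) { fifo ~= msg; }
> }
>
> class Outbox(T)
> {
>     Inbox!(T) target;                  // wired up by the linking/introspection code
>     void send(T msg) { target.put(msg); }
> }
>
> // Only type-compatible boxes can be linked:
> //   auto o = new Outbox!(float);
> //   o.target = new Inbox!(float);   // fine
> //   o.target = new Inbox!(int);     // compile-time error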
>
> The advantages: much higher performance, because I don't have to
> serialize/deserialize the messages when I send them.  I can create a fractal
> image compression scheme and stream through an Outbox!(FractalDiff) or
> something.  Pretty fast, as long as the cost of the whole yield mechanism is
> relatively small compared to the size of the object being sent or the
> computation being performed.
>
> The downside: the rest of the system has to be tweaked slightly to keep
> track of Variant types.  For example, if I have a Component that has one
> inbox of type SIGNAL, one inbox of type float, one outbox of type int and
> one outbox of type string, I can't just create a LinkedList of those.
>
> You can see that in the component definition.  My current solution is to
> create a generic interface called "IntrFace" that declares minimal
> inbox/outbox behavior, and have all of the mailboxes implement that
> interface.  Then I can create a LinkedList!(IntrFace) and everyone is
> happy.  That probably seems obvious to you, but when you've been doing
> Python for a long time you make assumptions about the capabilities of the
> language itself.  Type-safe systems can be sticklers.  :-)  It's a little
> bit of pain now, for a (hopefully) big payoff later.
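>
> In code the workaround is roughly this (a sketch, not the real declarations):
>
> // Sketch: a minimal common interface so differently-typed mailboxes can
> // live in one container.  Illustrative names only.
> interface IntrFace
> {
>     bool   empty();
>     size_t length();
> }
>
> class Inbox(T) : IntrFace
> {
>     T[] fifo;
>     bool   empty()  { return fifo.length == 0; }
>     size_t length() { return fifo.length; }
> }
>
> class Component
> {
>     IntrFace[] boxes;        // or a LinkedList!(IntrFace)
>
>     this()
>     {
>         boxes ~= new Inbox!(float);   // different element types,
>         boxes ~= new Inbox!(int);     // one container
>     }
> }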
>
> 2. Another issue that I'm still undecided on is what to do when:
> + An outbox wants to send to an inbox, but the inbox is full.  (It's a
> linked list so it's only "sort of full")
> + You perform a read on an empty inbox (without checking for available
> content)
>
> a) In the first case, I provide a "ready" boolean on the inbox.  It answers
> the question "you're full, do you still want me to send?".  The default
> value is "false", so the sender just ignores the inbox/receiver.  It can be
> changed on the fly by the component when it thinks it might be busy or when
> it's doing something intense (rate adaptation, etc.).  The nice part is that
> the "ready" boolean can be overridden with a boolean function so that the
> receiver gets one last chance to adapt to the situation (extend its inbox
> size, change its high-water mark, flush the inbox and consider this an
> overrun condition, etc.).  If it's able to adapt, say by extending the
> inbox, it returns true.
>
> b) When reading from an empty inbox, I'd like to have inbox.read() check its
> own FIFO length.  If it's zero, then do a Fiber.yield() until something
> becomes available.  The problem is that you can't just do a Fiber.yield();
> you need to set some flag on the component to say it's I/O waiting so it
> won't be rescheduled.  Then of course you have to say what it's waiting on,
> so it won't be dispatched for the wrong inbox and have to sleep again.
> Levels of complexity, etc.  In the end I'd just like to remove the mandatory
> requirement that a component check its inbox size before reading.  It's
> a lot of redundant noise if it's not too difficult to program around.  Of
> course the component can still check the inbox's size whenever it wants to.
> If it works, inbox.read() behaves a lot like a blocking system call.  That
> simplicity demands complexity in the scheduler.
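>
> The shape I have in mind is something like this (a sketch only; the
> ioWaiting/waitSource names are invented here, and the scheduler side,
> which has to honor them, is the hard part):
>
> import core.thread : Fiber;   // Tango users: tango.core.Thread
>
> class Component
> {
>     bool   ioWaiting;         // scheduler skips us while this is true
>     Object waitSource;        // which inbox we're parked on
> }
>
> class Inbox(T)
> {
>     T[]       fifo;
>     Component owner;
>
>     T read()                  // behaves like a blocking call to the caller
>     {
>         while (fifo.length == 0)
>         {
>             owner.ioWaiting  = true;    // don't reschedule me...
>             owner.waitSource = this;    // ...until this inbox has data
>             Fiber.yield();
>         }
>         owner.ioWaiting = false;
>         T msg = fifo[0];
>         fifo = fifo[1 .. $];
>         return msg;
>     }
> }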
>
> (Sorry if this is running long, but these are the architectural issues that
> run through my mind.  Believe it or not, Axon/D or Dendrite or whatever it
> ends up being called is just a subsystem for another simulation I'm
> building.)
>
> 3. Another "feature" I considered and then decided not to implement at the
> mailbox layer was delivery to multiple inboxes from the same outbox.  At
> first it looked like a great idea: I would get publish/subscribe for free.
> Unfortunately it complicates the delivery mechanism as well as constraining
> the "publisher" to stick around until all of the content has been consumed.
> Eventually I decided that a publisher is a special kind of component that
> subscribers register themselves with.  The producer component only has to
> produce content, send it to the publisher, and can then simply quit
> executing.  The publisher transmits its inbox to all listening subscribers.
> That way I decouple the producer from the consumer.  This model also allows
> a special Pool component that does round-robin delivery to listeners.  Such
> a component enables pure tuple-space work pools, allowing worker components
> to pick up a work unit, process it and come back for more.  Combine that
> with network and multicore components and distributed parallel processing
> becomes possible.
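>
> As a sketch (invented names, not code from the port), the publisher is just
> another component that fans its inbox out to registered subscriber inboxes:
>
> // Sketch only.  Producers push into the publisher's inbox and may then
> // exit; when the scheduler runs the publisher, it copies each message to
> // every subscriber.  A Pool would be the same shape but deliver each
> // message to exactly one worker, round-robin.
> class Box(T) { T[] fifo; void put(T m) { fifo ~= m; } }
>
> class Publisher(T)
> {
>     Box!(T)   inbox;
>     Box!(T)[] subscribers;           // subscribers register their inboxes here
>
>     this() { inbox = new Box!(T); }
>
>     void step()                      // called on each scheduler pass
>     {
>         foreach (msg; inbox.fifo)
>             foreach (sub; subscribers)
>                 sub.put(msg);        // fan-out: every subscriber sees every message
>         inbox.fifo.length = 0;       // publisher has consumed its inbox
>     }
> }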
>
> 4. I've also considered pass-through delivery, but haven't had a use case
> for it yet (though I won't be surprised if one presents itself).
>
> I want to clean up the code I've got and then I'll shoot it to you.
>
> Jim Burnes
> Ft Collins, CO
> (erisian)
>
> On Thu, Sep 17, 2009 at 3:30 PM, Michael Sparks <spark...@gmail.com> wrote:
> > Hi Erisian,
>
> > Let me just say that I think this is awesome, and I'm very glad to hear
> > that
> > Kamaelia's been helpful to you here.