On Monday, 2008-11-03, at 11:11 -0500, Marc Feeley wrote:
> > So please remove those legal road blocks.
> In any case, the copyright information must be maintained, simply
> to indicate who has intellectual property of the code (i.e. an
> acknowledgement of who designed and wrote the code).

DEFINITELY!

I'm afraid my citation of the original source in the posting, where I
attached that crappy, expanded rbtree.scm, might not have been prominent
enough.  Sorry for that.  That's the kind of posting I make at three
o'clock in the morning: astonished by the gains of the past hours (even
though those are easily explained by a change from an exponential
algorithm to a logarithmic one), confused by the copyright statements in
the code, and about to post some code which is stylistically bad[*].

(((Remark to casual readers: The file was just the minimum the compiler
would need, for two reasons: (a) intentionally [since anything more
would surely raise questions - even from myself..] and (b) accidentally
[since it could get in the compiler's way].  No documentation, history
or copyright.  All of that was left to pointers in the surrounding
mail.)))

> Would that suit your purposes?

Since it is - AFAIK - compatible with both GPL and BSD, that's enough for me.

> Concerning the alternative priority queue implementations (wttree.scm,  
> llrbtree.scm and skip lists), you have to be careful because those  
> implementations allocate memory

!!!

One of the roadblocks I encountered yesterday.

Nevertheless, Marc, I've got the feeling that llrbtree might be an
interesting alternative.  So far I'm still learning about it, but my
gut tells me an implementation without allocation could be possible.
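To make that hunch a bit more concrete, this is the direction I have in
mind - an untested sketch of my own, not anything from llrbtree.scm;
the node record and its accessors are hypothetical:

  ;; A sketch only (SRFI-9 record with mutable slots):
  (define-record-type node
    (make-node key color left right)
    node?
    (key   node-key)
    (color node-color node-color-set!)
    (left  node-left  node-left-set!)
    (right node-right node-right-set!))

  ;; In-place left rotation: turns a right-leaning red link into a
  ;; left-leaning one, reusing both existing nodes instead of
  ;; allocating replacements.
  (define (rotate-left! h)
    (let ((x (node-right h)))
      (node-right-set! h (node-left x))
      (node-left-set! x h)
      (node-color-set! x (node-color h))
      (node-color-set! h 'red)
      x))

(The caller still has to store the returned node back into the parent's
slot, of course.)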

Furthermore, I'm interested in an implementation which emphasises
object sharing.  Within the Askemos system, I'm faced with a huge
difference between read and write operations to the store.  (Reads cost
what the underlying Scheme implementation delivers [plus a constant
overhead]; mutations take 150-600ms, hardly dependent on the merits of
the implementation in use.)
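For illustration of what I mean by sharing - a toy sketch of my own,
unrelated to any of the files above: a purely functional insert copies
only the path from the root to the insertion point, so the old and the
new version share every untouched subtree:

  ;; Path-copying insert into a plain (unbalanced) BST, where a tree
  ;; is either '() or a list (key left right).  Only the nodes on the
  ;; search path are copied; everything else is shared.
  (define (bst-insert tree key)
    (cond ((null? tree) (list key '() '()))
          ((< key (car tree))
           (list (car tree) (bst-insert (cadr tree) key) (caddr tree)))
          ((> key (car tree))
           (list (car tree) (cadr tree) (bst-insert (caddr tree) key)))
          (else tree)))    ; key already present: share the whole tree

That keeps reads as cheap as the implementation allows, while a write
copies no more than one root-to-leaf path.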

>  (in the case of llrbtree.scm it is  
> stack space for non-tail recursive calls).

Which also means that you cannot simply unqueue a node: removal of a
node is tied to a traversal of the tree.
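Schematically - again toy code of my own, not the actual llrbtree.scm:
even in the plain BST from above, deleting the minimum rebuilds each
node after the recursive call returns, so the call is not in tail
position and every level of the tree holds a stack frame:

  ;; Deleting the minimum of the (key left right) BST sketched above;
  ;; assumes a non-empty tree.  The (list ...) wrapped around the
  ;; recursive call is exactly what makes it non-tail and lets it
  ;; consume stack space.
  (define (delete-min tree)
    (if (null? (cadr tree))
        (caddr tree)       ; minimum found: splice in its right child
        (list (car tree)
              (delete-min (cadr tree))
              (caddr tree))))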

> What this means is that  
> the garbage collector might be called during a priority queue  
> operation, possibly inside the thread scheduler.

Which is a bad thing to have.

  In fact, the "this takes forever" version of my test case - in all
  reasonable probability - hung endlessly garbage collecting.

Nevertheless - and if my reading of the code is correct - Chicken
should withstand that (or die from it anyway).  At least the old
implementation used to allocate memory in queue-removal operations and
ran reliably (though slowly).

>   This is not a good  
> thing because it precludes the implementation of object finalization  
> by the garbage collector (i.e. the garbage collector cannot call  
> object finalizers, such as will executors, at the end of the GC  
> because a finalizer might request an operation on the priority queue  
> which is currently in an inconsistent state).  To put it another way,  
> it is hard to make the priority queue operations appear atomic if they  
> allocate memory.  My red-black tree implementation allocates no memory  
> and requires a shallow call stack.

Object finalisation in Chicken is yet another topic anyway.

I would call Chicken's finalisation support "cooperative".  That is: if
you are unsure whether your finalisation procedure might ever encounter
an exception, you had better wrap it in an exception handler.  Otherwise
be prepared for surprises.  And if your finalisers take their time
(let's say they block on i/o...), watch your eyebrows.
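In practice that wrapping looks roughly like this - a sketch;
safe-finalizer is a hypothetical helper of mine, while set-finalizer!
and handle-exceptions are the Chicken procedures I mean:

  (require-extension lolevel)   ; set-finalizer! lives in unit lolevel

  ;; Wrap a finalisation procedure so a raised exception can never
  ;; escape into the garbage collector.
  (define (safe-finalizer proc)
    (lambda (obj)
      (handle-exceptions exn
          (void)                ; swallow the condition quietly
        (proc obj))))

  ;; Hypothetical usage: run cleanup-proc when obj becomes garbage.
  ;; (set-finalizer! obj (safe-finalizer cleanup-proc))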

Best regards

/Jörg

Footnotes:

[*]: Bad style at least for Chicken, which is currently euphoric about
hygiene.  (Not that I'd call that bad; it's just my impression.  And I
welcome it.)


