On Tue, Nov 10, 2009 at 10:41 AM, pmf <phil.fr...@gmx.de> wrote:

> On Nov 10, 7:07 am, David Brown <cloj...@davidb.org> wrote:
> > Ok.  So it's the existence of this future-like entity, one that
> > blocks upon deref until filled, that is indeed somewhat missing.
> > It's not particularly difficult to implement.
> >
> > This thing could easily create a lazy sequence, in fact, the code
> > would look a lot like the code for seque, with just a separation of
> > the writer from the reader.  I'll have to think about it to make sure
> > that it can be used safely.
>
> You might want to look at the (recently added) fill-queue (in
> clojure.contrib.seq-utils), which provides a lazy seq that is filled
> by another thread and blocks if readers consume faster than the queue
> is filled; maybe your problem fits into this mechanism.
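
For anyone who hasn't seen it, a minimal fill-queue sketch (assuming the
clojure.contrib.seq-utils API, where the filler function runs on its own
thread and receives a fill callback; the returned lazy seq blocks if
consumers outrun the filler):

(use '[clojure.contrib.seq-utils :only [fill-queue]])

;; The filler function is called on a separate thread with a `fill`
;; callback; each call to fill pushes one element onto the queue.
(def filled
  (fill-queue (fn [fill]
                (doseq [i (range 5)]
                  (fill (* i i))))))

(take 5 filled)  ; (0 1 4 9 16)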


This suggests something else, in turn -- a tunable, somewhat lazy seq
produced using parallelism:

(defmacro p-lazy-seq [lookahead continue? & body]
  `(let [lzy# (fn ~'lzy []
                (lazy-seq
                  (if ~continue?
                    (cons (future ~@body) (~'lzy)))))
         s# (lzy#)]
     (map (fn [x# _#] (deref x#)) s# (drop ~lookahead s#))))

The result for

(p-lazy-seq n continue?
  body)

should be the same as for

(letfn [(f []
          (lazy-seq
            (if continue?
              (cons (do body) (f)))))]
  (f))

i.e. continue? is evaluated, and if true, body is evaluated to yield the
next element of the lazy sequence; then continue? is evaluated again, and
so on.

Except that the generation of elements is done on worker threads, possibly
several at a time on multicore hardware, transparently to the consumer of
the seq.

Consumption of the seq blocks if it reaches an element not yet produced.
The tandem map of s# and (drop ~lookahead s#) forces the creation of the
future wrapping the element lookahead places ahead of the current one,
while the current element's future is dereferenced (which is where the
blocking may occur) and its result returned. Note that the future
lookahead elements ahead is NOT dereferenced; it is merely generated (by
realizing that element of a lazy sequence of future objects), which
enqueues it for calculation on a thread pool without blocking for its
result.
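
The tandem-map trick can be seen in isolation with a small, finite,
unchunked seq of futures (a self-contained sketch; fut-seq is a
hypothetical helper, not part of the macro):

;; An unchunked lazy seq of six futures; each future starts computing
;; as soon as its element of the seq is realized.
(defn fut-seq [i]
  (lazy-seq
    (when (< i 6)
      (cons (future (Thread/sleep 50) (* i i))
            (fut-seq (inc i))))))

(def lookahead 2)
(def futs (fut-seq 0))

;; Walking futs in tandem with (drop lookahead futs) realizes (and so
;; starts) the future two places ahead while dereferencing the current
;; one.  Since map stops at the shorter collection, this simple version
;; returns only the first (- 6 lookahead) results.
(def results (map (fn [f _] (deref f)) futs (drop lookahead futs)))

(doall results)  ; (0 1 4 9)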

Thus, up to lookahead futures will be computing or already done at any
given time. A lookahead of zero would give no better than sequential
performance, as the consumer of the seq would block until one more element
was produced, process it, block until one more element was produced, and
so on; a lookahead of 1, however, allows the consumer to process one
element while the next is being produced on another thread, and higher
lookaheads can exploit more than two cores for still more parallelism.

As for a practical application:

(def *rngs* (atom {}))

(defn thread-local-rand [n]
  (if-let [rng (@*rngs* (Thread/currentThread))]
    (rng n)
    (let [rng-1 (java.util.Random.)
          rng #(.nextInt rng-1 %)]
      (swap! *rngs* assoc (Thread/currentThread) rng)
      (rng n))))

user=> (take 10 (p-lazy-seq 3 true (thread-local-rand 10)))
(1 2 6 1 5 1 7 8 4 3)

This should generate the random numbers on multiple threads, using multiple
RNGs. In the limit, on multicore hardware and with a slow enough RNG
implementation (e.g. SecureRandom), a consumer of that sequence might be
able to obtain and use random numbers faster than with direct calls to the
RNG implementation.

(The mechanism I used to make a function that maintains its own encapsulated
thread-local state, persistent across calls, might be worth a closer look,
too, for situations where binding just can't get the job done.)
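
For instance, the pattern generalizes to any per-thread state: close over
an atom keyed by thread (a minimal sketch; per-thread-counter is a
hypothetical name, not from the code above):

;; Returns a function that maintains an independent counter for each
;; calling thread, encapsulated in a closed-over atom instead of a
;; dynamic var plus binding.
(defn per-thread-counter []
  (let [counts (atom {})]
    (fn []
      (let [t (Thread/currentThread)
            m (swap! counts (fn [m] (assoc m t (inc (get m t 0)))))]
        (get m t)))))

(def tick! (per-thread-counter))
(tick!)            ; 1
(tick!)            ; 2
@(future (tick!))  ; 1 -- a different thread sees its own count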

-- 
You received this message because you are subscribed to the Google
Groups "Clojure" group.
To post to this group, send email to clojure@googlegroups.com
Note that posts from new members are moderated - please be patient with your 
first post.
To unsubscribe from this group, send email to
clojure+unsubscr...@googlegroups.com
For more options, visit this group at
http://groups.google.com/group/clojure?hl=en