Hi,
A distinctive feature of reducers is that "reducing is a one-shot thing".
The common understanding is that reducers are fast when you want to
process the whole collection at once, but that for infinite and lazy seqs
you have to fall back on good old seqs.
With core.async, it is now easy to create a transformation that produces
a lazy seq from any reducible collection.
(require '[clojure.core.async :refer [chan thread >!! <!! close!]])

(defn lazy-seq*
  [reducible]
  (let [c (chan)
        NIL (Object.)                        ; sentinel so nil values survive the channel
        encode-nil #(if (nil? %) NIL %)
        decode-nil #(if (identical? NIL %) nil %)]
    ;; Producer: reduce the collection on a separate thread, pushing each
    ;; value onto the channel, then close it to signal the end.
    (thread
      (reduce (fn [_ v] (>!! c (encode-nil v))) nil reducible)
      (close! c))
    ;; Consumer: <!! returns nil only when the channel is closed, since real
    ;; nils travel encoded as the NIL sentinel. Check for termination first,
    ;; then decode, so that nil values in the collection don't end the seq.
    (map decode-nil
         (take-while some? (repeatedly #(<!! c))))))
(def s (lazy-seq* (clojure.core.reducers/map inc (range))))
(first s)
(take 100 s)
This approach can also be extended to produce chunked seqs, and the chan's
buffer can be used to tune performance further.
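As a rough sketch of the buffering idea: giving the chan a fixed buffer lets the producing thread run ahead of the consumer instead of blocking on every value. The buffer size below (32) and the function name are illustrative choices, not from the post.

```clojure
(require '[clojure.core.async :refer [chan thread >!! <!! close!]])

(defn lazy-seq-buffered*
  [reducible buf-size]
  (let [c (chan buf-size)                    ; buffered channel: producer runs ahead
        NIL (Object.)                        ; sentinel standing in for nil values
        encode-nil #(if (nil? %) NIL %)
        decode-nil #(if (identical? NIL %) nil %)]
    (thread
      (reduce (fn [_ v] (>!! c (encode-nil v))) nil reducible)
      (close! c))
    ;; <!! yields nil only once the channel is closed, so terminate on nil
    ;; first and decode afterwards.
    (map decode-nil
         (take-while some? (repeatedly #(<!! c))))))
```

With a finite reducible, e.g. `(lazy-seq-buffered* (clojure.core.reducers/map inc (range 5)) 32)`, the producer finishes, closes the channel, and the consumer sees a seq of `(1 2 3 4 5)`.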
JW
--
You received this message because you are subscribed to the Google
Groups "Clojure" group.
To post to this group, send email to [email protected]
Note that posts from new members are moderated - please be patient with your
first post.
To unsubscribe from this group, send email to
[email protected]
For more options, visit this group at
http://groups.google.com/group/clojure?hl=en