Hi Meikel,
 Thanks for your response. This is related to something I am working
on, and for my problem:

1. Computation of f is very cheap.
2. The subsequences themselves can be so large that they may not fit
in memory.
3. I don't mind waiting a little extra to avoid writing un-composable
code.

Do you think my implementation will meet these requirements?

Thanks,
Sunil.

P.S. Basically, I am just trying to join very large CSV files that are
sorted on the key to be joined. There could be duplicate keys.
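For reference, here is a minimal sketch of the kind of lazy merge join I
have in mind for two key-sorted sequences with possible duplicate keys.
All of the names here (`merge-join`, `key-fn`) are my own, hypothetical
ones, not from any library:

```clojure
;; Lazily merge-join two sequences, each sorted by (key-fn row).
;; On a key match, emits the cross product of the two duplicate-key
;; groups, then recurses past both groups.
(defn merge-join
  [key-fn xs ys]
  (lazy-seq
    (when (and (seq xs) (seq ys))
      (let [kx (key-fn (first xs))
            ky (key-fn (first ys))
            c  (compare kx ky)]
        (cond
          (neg? c) (merge-join key-fn (rest xs) ys)   ; left key smaller, skip it
          (pos? c) (merge-join key-fn xs (rest ys))   ; right key smaller, skip it
          :else
          (let [[gx xs'] (split-with #(= kx (key-fn %)) xs)
                [gy ys'] (split-with #(= kx (key-fn %)) ys)]
            (concat (for [x gx, y gy] [x y])
                    (merge-join key-fn xs' ys'))))))))
```

One caveat: `split-with` realizes each duplicate-key group, so while the
overall sequences stay lazy, any single group of rows sharing one key
must fit in memory.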


On Mon, Feb 27, 2012 at 4:40 PM, Meikel Brandmeyer (kotarak) 
<m...@kotka.de> wrote:

> Hi Sunil,
>
> your version pays a price when f is expensive since it is applied twice.
> It is sufficient to wrap the drop into a lazy-seq.
>
> (shameless-self-promotion
> "http://kotka.de/blog/2011/04/Beauty_in_a_bug.html")
>
> Meikel
>
>  --
> You received this message because you are subscribed to the Google
> Groups "Clojure" group.
> To post to this group, send email to clojure@googlegroups.com
> Note that posts from new members are moderated - please be patient with
> your first post.
> To unsubscribe from this group, send email to
> clojure+unsubscr...@googlegroups.com
> For more options, visit this group at
> http://groups.google.com/group/clojure?hl=en
