Re: [core.spec] Stricter map validations?

2017-11-14 Thread Seth Verrinder
I took part of the goal to be that specs themselves would remain 
compatible, so an old set of specs wouldn't start failing on data that 
conforms to a new but compatible set of specs. That sort of compatibility 
isn't possible when you go from disallowing something to allowing it.
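
The scenario Eric describes can be made concrete with clojure.spec. A minimal sketch (the ::a/::b keys and the ::config spec are hypothetical, not from the discussion): s/keys validates *any* registered key it finds in a map, so maps are only "open" with respect to keys that have no spec yet.

```clojure
(require '[clojure.spec.alpha :as s])

(s/def ::a int?)
(s/def ::config (s/keys :req [::a]))

;; Old spec set: ::b is unregistered, so extra keys pass through.
(s/valid? ::config {::a 1 ::b "anything"})  ; => true

;; New spec set: registering ::b retroactively constrains it.
(s/def ::b int?)
(s/valid? ::config {::a 1 ::b "anything"})  ; => false - old data now fails
```

This is exactly the compatibility hazard above: the second spec set looks like pure growth (an optional key gained a spec), yet data that conformed to the old specs no longer conforms.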

On Tuesday, November 14, 2017 at 10:15:23 AM UTC-6, Eric Normand wrote:
>
> Hey everybody!
>
> I'm chiming in after seeing this linked to in The Repl (
> https://therepl.net/).
>
> On Alex's suggestion, I rewatched Spec-ulation last night. The parts about 
> negation and evolution are towards the end. I was struck (once again) by 
> how clearly he picked apart changes. Relaxing a requirement is growth. And 
> adding requirements is breakage. But it left me with a question:
>
> Isn't disallowing a key and then allowing it (as optional) growth (instead 
> of breakage)? All of the old clients are still fine, and new clients can 
> use the key if they choose. You're relaxing the requirements. Taking the 
> opposite approach, I require some keys plus allow anything else. Some 
> clients will inevitably send me something with extra keys, which is okay, 
> they pass my specs. Later, I add in an optional key with a defined spec. So 
> I'm now restricting what used to be completely open. Isn't that breakage? I 
> feel like I'm seeing it exactly opposite as Rich Hickey. He says if you 
> disallow things, it's forever, because if you need to allow it later, 
> that's breakage. But there's not enough explanation for me to understand. 
> It seems like relaxing requirements. I feel like I'm missing something. In 
> short: why is it forever?
>
> He does mention that logic engines don't have negation. Does this hint 
> that we will want to use logic engines to reason over our specs?
>
> Thanks
> Eric
>

-- 
You received this message because you are subscribed to the Google
Groups "Clojure" group.
To post to this group, send email to clojure@googlegroups.com
Note that posts from new members are moderated - please be patient with your 
first post.
To unsubscribe from this group, send email to
clojure+unsubscr...@googlegroups.com
For more options, visit this group at
http://groups.google.com/group/clojure?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"Clojure" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to clojure+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Using transducers in a new transducing context

2017-04-12 Thread Seth Verrinder
Reordering definitely matters:

StepA: write to x
StepB: read from x

StepB: read from x
StepA: write to x

On Wednesday, April 12, 2017 at 7:15:09 AM UTC-5, Léo Noel wrote:
>
> I could have one thread that invokes a transduce step on odd seconds and 
>> another that invokes on even seconds. Or some external api call that tells 
>> me to take the next step, which I do on a thread pulled from a pool.
>>
>
> Both strategies will fail to ensure no more than one thread at a time. You 
> need something to prevent overlapping, e.g. when a long step is running and 
> you get a request to start the next one.
>
>
> While this is a good reference, it's also 13 years old and the JMM has 
>> been updated since then. A much better reference explaining the semantics 
>> and constraints is:
>
> https://shipilev.net/blog/2014/jmm-pragmatics/ 
>
> In particular, even if there is a memory barrier, there are some 
>> reorderings allowed if the transducer state is not volatile that may be 
>> surprising. Making it volatile adds a critical edge in the total program 
>> order.
>
> I'm saying that the logical ordering of steps is irrelevant wrt how a 
>> multi-threaded program can be optimized/reordered under the JMM.
>
>
> Thank you for the reference. Very enlightening (esp. part III).
> I understand reordering is a thing. Does ordering really matter? What 
> matters to us is that each step is able to see the changes made by the 
> step before. That is, we need to ensure memory visibility across steps. 
> This is all we need to be sure that the JVM won't run the program in an 
> order that doesn't yield the same result as what we expect.
> In the degenerate case, we'd put volatile on every variable, ensuring that 
> the running program is totally ordered and totally unoptimizable. Is this 
> what we want?
>
>
> happens-before across threads requires a volatile or lock, but I don't see 
>> how the use of one is guaranteed by this logical ordering.
>>
>
> Volatiles and locks are means to an end. The end is memory visibility, and 
> the happens-before partial ordering is what is of interest to us, 
> application developers, to reason about this end. The happens-before rules 
> have not changed since JSR-133:
> * Each action in a thread happens-before every action in that thread that 
> comes later in the program's order.
> * An unlock (synchronized block or method exit) of a monitor 
> happens-before every subsequent lock (synchronized block or method entry) 
> of that same monitor. And because the happens-before relation is 
> transitive, all actions of a thread prior to unlocking happen-before all 
> actions subsequent to any thread locking that monitor.
> * A write to a volatile field happens-before every subsequent read of that 
> same field. Writes and reads of volatile fields have similar memory 
> consistency effects as entering and exiting monitors, but do not entail 
> mutual exclusion locking.
> * A call to start on a thread happens-before any action in the started 
> thread.
> * All actions in a thread happen-before any other thread successfully 
> returns from a join on that thread.
>
> Here is an example of a multithreaded transducing context that uses 
> neither locks nor volatiles (inspired by the official documentation for 
> agents):
>
> (def xf (comp (partition-all 64) cat))
>
> (defn ! [f & args] (apply f args) f)
>
> (defn run [m n]
>   (let [p (promise)
>         f (reduce (fn [f _] (partial send (agent f) (xf !)))
>                   #(when (zero? %) (deliver p nil)) (range m))]
>     (doseq [i (reverse (range n))] (f i))
>     (f)
>     @p))
>
> (run 1000 1000)
>
>
> The unsynchronized ArrayList in partition-all will be accessed by multiple 
> threads, and I can still be confident about visibility, because agents 
> ensure a happens-before ordering between each message. This behaviour is 
> actually delegated to the backing Executor, which may or may not use locks. 
> Locks are really an implementation detail, what is important is 
> happens-before guarantees.
>
>
> Seems risky to depend on that. eduction creates an iterable for example - 
>> it has no way of preventing somebody from creating the iterator on one 
>> thread and consuming it on another. 
>>
>
> Iterators are unsynchronized and mutable, like many classes in the Java 
> standard library. You know they're unsafe and need to treat them as such. 
> This leads to a more general debate on mutability. Objects generally fall 
> into 3 categories:
> 1. Immutable objects aka values. They're the default in Clojure and that's 
> great because they can be exposed safely and they're so easy to reason 
> about.
> 2. Thread-safe mutable objects. Includes core reference types, core.async 
> channels. They're useful to model 

Re: Using transducers in a new transducing context

2017-04-11 Thread Seth Verrinder
Seems risky to depend on that. eduction creates an iterable for
example - it has no way of preventing somebody from creating the
iterator on one thread and consuming it on another.
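
A minimal sketch of the pattern in question (illustrative only; the thread handoff via `future` is an assumption, not from the posts). `eduction` returns an Iterable whose iterator applies the transducer lazily, and nothing stops the iterator from crossing threads:

```clojure
;; Iterator created on the calling thread...
(def e (eduction (map inc) [1 2 3]))
(def it (.iterator ^Iterable e))

;; ...and consumed on another. Any visibility guarantee here comes from
;; the executor behind `future` (submission happens-before execution),
;; not from the iterator or the transducer state itself:
@(future (.next it))   ; => 2
```

If the consuming context provided no such happens-before edge, the unsynchronized transducer state inside the iterator would have no visibility guarantee.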

On Tue, Apr 11, 2017 at 7:32 AM, Léo Noel <leo.noel...@gmail.com> wrote:
>> volatile! is what ensures that there's a memory barrier.
>
>
> No. The memory barrier is set by the transducing context as a consequence of
> implementing the "single thread at a time" rule. Be it lock, thread
> isolation, agent isolation, or anything that ensures that the end of a step
> happens-before the beginning of the next. All these techniques ensure
> visibility of unsynchronized variables between two successive steps, even
> when multiple threads are involved.
>
>
> On Tuesday, April 11, 2017 at 1:36:30 PM UTC+2, Seth Verrinder wrote:
>>
>> The single thread at a time rule is implemented by the transducing
>> context (transduce, into, core.async, etc). Inside of a transducer's
>> implementation you just have to make the assumption that it's being
>> used properly. volatile! is what ensures that there's a memory
>> barrier.
>>
>> On Tue, Apr 11, 2017 at 2:46 AM, Léo Noel <leo.n...@gmail.com> wrote:
>> > Thank you Alex for these precisions.
>> >
>> >
>> >> The JVM is pretty good at minimizing this stuff - so while you are
>> >> stating
>> >> these barriers are redundant and are implying that's an issue, it would
>> >> not
>> >> surprise me if the JVM is able to reduce or eliminate the impacts of
>> >> that.
>> >> At the very least, it's too difficult to reason about without a real
>> >> perf
>> >> test and numbers.
>> >
>> >
>> >> Fast wrong results are still wrong. I do not think it's at all obvious
>> >> how
>> >> this affects performance without running some benchmarks. Volatiles do
>> >> not
>> >> require flushing values to all cores or anything like that. They just
>> >> define
>> >> constraints - the JVM is very good at optimizing these kinds of things.
>> >> It
>> >> would not surprise me if an uncontended thread-contained volatile could
>> >> be
>> >> very fast (for the single-threaded transducer case) or that a volatile
>> >> under
>> >> a lock would be no worse than the lock by itself.
>> >
>> >
>> > I agree that the perf argument is weak.
>> >
>> >
>> >> A transducer can assume it will be invoked by no more than one thread
>> >> at a
>> >> time
>> >
>> >
>> > Fine. Even simpler like this.
>> >
>> >
>> >> Transducers should ensure stateful changes guarantee visibility. That
>> >> is:
>> >> you should not make assumptions about external memory barriers.
>> >
>> >
>> > How do you enforce no more than one thread at a time without setting a
>> > memory barrier?
>> > For the JMM, no more than one thread at a time means exactly that return
>> > of
>> > step n will *happen-before* the call to step n+1.
>> > This implies that what was visible to the thread performing step n will
>> > be
>> > visible to the thread performing the step n+1, including all memory
>> > writes
>> > performed during step n inside stateful transducers.
>> > https://www.cs.umd.edu/~pugh/java/memoryModel/jsr-133-faq.html
>> > Still no need for extra synchronization.
>> >
>> >
>> >> You're conflating the stateful values inside the transducer with the
>> >> state
>> >> returned by and passed into a transducer. That's a linkage that does
>> >> not
>> >> necessarily exist.
>> >
>> >
>> > What do you mean? How could a function return a value without having
>> > executed its body?
>> >
>> >
>> > On Monday, April 10, 2017 at 9:51:30 PM UTC+2, Alexander Gunnarson
>> > wrote:
>> >>
>> >> Thanks for clearing all of that up Alex! Very helpful.
>> >>
>> >> On Monday, April 10, 2017 at 3:46:45 PM UTC-4, Alex Miller wrote:
>> >>>
>> >>>
>> >>>
>> >>> On Monday, April 10, 2017 at 2:25:48 PM UTC-5, Alexander Gunnarson
>> >>> wrote:
>> >>>>
>> >>>> I think you present a key question: what assumptions can a transducer
>> >>>> make? We know the standard ones, but what of memory barriers?

Re: Using transducers in a new transducing context

2017-04-11 Thread Seth Verrinder
The single thread at a time rule is implemented by the transducing
context (transduce, into, core.async, etc). Inside of a transducer's
implementation you just have to make the assumption that it's being
used properly. volatile! is what ensures that there's a memory
barrier.
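
As a sketch of what that memory barrier buys (the names and the ArrayList-plus-flag setup are illustrative, not from core): a volatile write after plain writes publishes them, because the volatile rule plus transitivity gives a happens-before edge to any reader that observes the flag.

```clojure
(def data (java.util.ArrayList.))   ; plain, unsynchronized mutable state
(def ready (volatile! false))

(defn writer []
  (.add data 42)          ; plain write
  (vreset! ready true))   ; volatile write publishes it

(defn reader []
  (when @ready            ; volatile read
    (.get data 0)))       ; a reader that saw true is guaranteed to see 42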

On Tue, Apr 11, 2017 at 2:46 AM, Léo Noel wrote:
> Thank you Alex for these precisions.
>
>
>> The JVM is pretty good at minimizing this stuff - so while you are stating
>> these barriers are redundant and are implying that's an issue, it would not
>> surprise me if the JVM is able to reduce or eliminate the impacts of that.
>> At the very least, it's too difficult to reason about without a real perf
>> test and numbers.
>
>
>> Fast wrong results are still wrong. I do not think it's at all obvious how
>> this affects performance without running some benchmarks. Volatiles do not
>> require flushing values to all cores or anything like that. They just define
>> constraints - the JVM is very good at optimizing these kinds of things. It
>> would not surprise me if an uncontended thread-contained volatile could be
>> very fast (for the single-threaded transducer case) or that a volatile under
>> a lock would be no worse than the lock by itself.
>
>
> I agree that the perf argument is weak.
>
>
>> A transducer can assume it will be invoked by no more than one thread at a
>> time
>
>
> Fine. Even simpler like this.
>
>
>> Transducers should ensure stateful changes guarantee visibility. That is:
>> you should not make assumptions about external memory barriers.
>
>
> How do you enforce no more than one thread at a time without setting a
> memory barrier?
> For the JMM, no more than one thread at a time means exactly that return of
> step n will *happen-before* the call to step n+1.
> This implies that what was visible to the thread performing step n will be
> visible to the thread performing the step n+1, including all memory writes
> performed during step n inside stateful transducers.
> https://www.cs.umd.edu/~pugh/java/memoryModel/jsr-133-faq.html
> Still no need for extra synchronization.
>
>
>> You're conflating the stateful values inside the transducer with the state
>> returned by and passed into a transducer. That's a linkage that does not
>> necessarily exist.
>
>
> What do you mean? How could a function return a value without having
> executed its body?
>
>
> On Monday, April 10, 2017 at 9:51:30 PM UTC+2, Alexander Gunnarson wrote:
>>
>> Thanks for clearing all of that up Alex! Very helpful.
>>
>> On Monday, April 10, 2017 at 3:46:45 PM UTC-4, Alex Miller wrote:
>>>
>>>
>>>
>>> On Monday, April 10, 2017 at 2:25:48 PM UTC-5, Alexander Gunnarson wrote:

 I think you present a key question: what assumptions can a transducer
 make? We know the standard ones, but what of memory barriers?
>>>
>>>
>>> Transducers should ensure stateful changes guarantee visibility. That is:
>>> you should not make assumptions about external memory barriers.
>>>

 Based on the current implementation, in terms of concurrency, it seems
 to make (inconsistent — see also `partition-by`) guarantees that sequential
 writes and reads will be consistent, no matter what thread does the reads 
 or
 writes. Concurrent writes are not supported. But should sequential
 multi-threaded reads/writes be supported?
>>>
>>>
>>> Yes. core.async channel transducers already do this.
>>>

 This is a question best left to Alex but I think I already know the
 answer based on his conversation with Rich: it's part of the contract.

 I think another key question is, is the channel lock memory barrier part
 of the contract of a core.async channel implementation?
>>>
>>>
>>> Yes, but other transducing processes may exist either in core in the
>>> future or in external libs.
>>>

 If not, volatiles will be necessary in that context if the memory
 barrier is ever taken away, and it would make sense that volatiles are used
 in transducers "just in case" specifically for that use case. But if the
 channel lock memory barrier is part of the contract and not just an
 implementation detail, then I'm not certain that it's very useful at all 
 for
 transducers to provide a guarantee of safe sequential multi-threaded
 reads/writes.
>

Re: Using transducers in a new transducing context

2017-04-10 Thread Seth Verrinder
The problem is at a lower level. The memory model of the JVM doesn't
guarantee that changes to an unsynchronized non-volatile reference are
visible to other threads. Transducers don't have to worry about
concurrency but they do have to worry about visibility of changes
across different threads.
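
The distinction can be made concrete with deftype's two flavors of mutable field (a sketch; PlainBox/VolatileBox and IBox are made-up names):

```clojure
(definterface IBox
  (getV [])
  (setV [x]))

;; Identical boxes except for the field annotation. Only the
;; :volatile-mutable field carries JMM visibility guarantees: absent
;; some other happens-before edge, a reader thread may never observe
;; a PlainBox update.
(deftype PlainBox [^:unsynchronized-mutable v]
  IBox
  (getV [_] v)
  (setV [_ x] (set! v x)))

(deftype VolatileBox [^:volatile-mutable v]
  IBox
  (getV [_] v)
  (setV [_ x] (set! v x)))
```

Within a single thread both behave identically; the difference only shows up when another thread reads the field, which is exactly the transducer-handoff case under discussion.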

On Mon, Apr 10, 2017 at 8:37 AM, Léo Noel wrote:
> This topic is of high interest to me as it is at the core of my current 
> work. I had similar questions a while ago and I have to say I'm even more 
> confused by this:
>
>> While transducing processes may provide locking to cover the visibility of
>> state updates in a stateful transducer, transducers should still use
>> stateful constructs that ensure visibility (by using volatile, atoms, etc).
>
>
> I actually tried pretty hard to find a use case that would make
> partition-all fail because of its unsynchronized local state, and did not
> manage to find one that did not break any contract. I arrived at the
> conclusion that it is always safe to use unsynchronized constructs in
> stateful transducers. The reason is that you need to ensure that the result
> of each step is given to the next, and doing so you will necessarily set a
> memory barrier of some sort between each step. Each step happens-before the
> next, and therefore mutations performed by the thread at step n are always
> visible to the thread performing step n+1. This is really brilliant :
> when designing a transducer, you can be confident that calls to your
> reducing function will be sequential and stop worrying about concurrency.
> You just have to ensure that mutable state stays local. True encapsulation,
> the broken promise of object-oriented programming.
>
> My point is that the transducer contract "always feed the result of step n
> as the first argument of step n+1" is strong enough to safely use local
> unsynchronized state. For this reason, switching partition-* transducers to
> volatile constructs really sounds like a step backwards to me. However,
> after re-reading the documentation on transducers, I found that this
> contract is not explicitly stated. It is just *natural* to think this way,
> because transducers are all about reducing processes. Is there a plan to
> reconsider this principle? I would be very interested to know what Rich has
> in mind that could lead him to advise to overprotect local state of
> transducers.
>
>
>
> On Monday, April 10, 2017 at 4:44:00 AM UTC+2, Alexander Gunnarson wrote:
>>
>> Thanks so much for your input Alex! It was a very helpful confirmation of
>> the key conclusions arrived at in this thread, and I appreciate the
>> additional elaborations you gave, especially the insight you passed on about
>> the stateful transducers using `ArrayList`. I'm glad that I wasn't the only
>> one wondering about the apparent lack of parity between its unsynchronized
>> mutability and the volatile boxes used for e.g. `map-indexed` and others.
>>
>> As an aside about the stateful `take` transducer, Tesser uses the
>> equivalent of one but skirts the issue by not guaranteeing that the first n
>> items of the collection will be returned, but rather, n items of the
>> collection in no particular order and starting at no particular index. This
>> is achievable without Tesser by simply replacing the `volatile` in the
>> `core/take` transducer with an `atom` and using it with `fold`. But yes,
>> `take`'s contract is broken with this and so still follows the rule of thumb
>> you established that `fold` can't use stateful transducers (at least, not
>> without weird things like reordering of the indices in `map-indexed` and so
>> on).
>>
>> That's interesting that `fold` can use transducers directly! I haven't
>> tried that yet — I've just been wrapping them in an `r/folder`.
>>
>> On Sunday, April 9, 2017 at 10:22:13 PM UTC-4, Alex Miller wrote:
>>>
>>> Hey all, just catching up on this thread after the weekend. Rich and I
>>> discussed the thread safety aspects of transducers last fall and the
>>> intention is that transducers are expected to only be used in a single
>>> thread at a time, but that thread can change throughout the life of the
>>> transducing process (for example when a go block is passed over threads in a
>>> pool in core.async). While transducing processes may provide locking to
>>> cover the visibility of state updates in a stateful transducer, transducers
>>> should still use stateful constructs that ensure visibility (by using
>>> volatile, atoms, etc).
>>>
>>> The major transducing processes provided in core are transduce, into,
>>> sequence, eduction, and core.async. All but core.async are single-threaded.
>>> core.async channel transducers may occur on many threads due to interaction
>>> with the go processing threads, but never happen on more than one thread at
>>> a time. These operations are covered by the channel lock which should
>>> guarantee visibility. Transducers used within a go block (via something like

Re: Using transducers in a new transducing context

2017-04-09 Thread Seth Verrinder
I'll defer to Timothy on the particulars of core.async, but it looks like 
[1] the transducer in a channel is protected by a lock. If that's the case, 
volatile isn't adding anything in terms of memory barriers.

1: 
https://github.com/clojure/core.async/blob/master/src/main/clojure/clojure/core/async/impl/channels.clj#L71

On Sunday, April 9, 2017 at 11:58:00 AM UTC-5, Alexander Gunnarson wrote:
>
> Thanks so much for your well-considered reply, Timothy! That makes sense 
> about volatiles being used in e.g. core.async or core.reducers contexts 
> where the reducing function that closes over the mutable value of the 
> stateful transducer is called in different threads. Why, then, are 
> unsynchronized ArrayLists used e.g. in 'partition-by'? It's also closed 
> over by the reducing function in just the same way as the volatile long 
> value internal to e.g. 'map-indexed'. I'm not yet clear on how one (the 
> ArrayList) is acceptable being non-volatile and the other (the volatile 
> long) is unacceptable. When .add is called, an unsynchronized mutable 
> counter is updated so the ArrayList can insert the next value at the 
> correct index. Do you have any insight into this? Meanwhile I'll go do some 
> digging myself on the Clojure JIRA etc. so I'm more informed on the 
> subject. 



Re: Using transducers in a new transducing context

2017-04-09 Thread Seth Verrinder
My guess is that partition-all and partition use non-volatile references 
because none of the built-in stuff returns control to the caller at a finer 
resolution than one output value (AFAIK). That's why take needs a volatile 
but partition-all doesn't (for take, the state persists between output 
values).
It does mean that new transducing contexts would need to synchronize though 
- which core.async does through mutexes.
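
For reference, that is visible in core's take transducer, whose counter lives in a volatile precisely because it must survive across output values. A simplified paraphrase (not the verbatim core source; `my-take` is a made-up name):

```clojure
(defn my-take [n]
  (fn [rf]
    (let [nv (volatile! n)]            ; counter outlives each output value
      (fn
        ([] (rf))
        ([result] (rf result))
        ([result input]
         (let [n      @nv
               nn     (vswap! nv dec)
               result (if (pos? n) (rf result input) result)]
           (if (pos? nn)
             result
             ;; out of budget: signal early termination to the context
             (ensure-reduced result))))))))

;; partition-all, by contrast, flushes its ArrayList every time it emits a
;; partition, so its state never has to survive past a completed step.
```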

On Sunday, April 9, 2017 at 8:47:25 AM UTC-5, tbc++ wrote:
>
> The volatile! is needed for the case where a transducer is only used by 
> one thread at a time, but the thread executing the transducer may change 
> from one call to the next. This happens fairly often with core.async. If 
> you used a non-atomic, non-volatile mutable field, the JVM would be free to 
> perform several optimizations (like keeping the local in a CPU register) 
> that would cause the value to not properly propagate to other threads in 
> the case of a context switch. Using volatile! tells the JVM to flush all 
> writes to this field by the time the next memory barrier rolls around. It 
> also tells the JVM to make sure it doesn't cache the reads to this field 
> across memory barriers. 
>
> It's a tricky subject, and one that's really hard to test, and frankly I 
> probably got some of the specifics wrong in that last paragraph, but that's 
> the general idea of why transducers use volatile!. 
>
> Timothy
>
> On Sun, Apr 9, 2017 at 12:49 AM, Alexander Gunnarson 
> <alexander...@gmail.com> wrote:
>
>> EDIT: Transducers are actually not safe in `fold` contexts as I thought:
>>
>> (let [f (fn [i x]
>>           (println (str "i " i " " (Thread/currentThread)))
>>           (flush)
>>           x)
>>       r-map-indexed #(r/folder %2 (map-indexed %1))]
>>   (->> [6 7 8 9 10]
>>        (r-map-indexed f)
>>        (r/fold 1 (fn ([] (vector)) ([x] x) ([a b] (into a b))) conj)))
>>
>> Produces:
>>
>> i 0 Thread[ForkJoinPool-1-worker-2,5,main]
>> i 2 Thread[ForkJoinPool-1-worker-1,5,main]
>> i 3 Thread[ForkJoinPool-1-worker-1,5,main]
>> i 4 Thread[ForkJoinPool-1-worker-1,5,main]
>> i 1 Thread[ForkJoinPool-1-worker-3,5,main]
>>
>> So you would have to be careful to e.g. create different `map-indexed` 
>> transducers for single-threaded (e.g. `unsynchronized-mutable` box) and 
>> multi-threaded (e.g. `atom` box) contexts.
>>
>> On Sunday, April 9, 2017 at 2:10:06 AM UTC-4, Alexander Gunnarson wrote:
>>>
>>> I was wondering the same thing, shintotomoe. This thread talks 
>>> about it as well. I think it's safe to assume that since `ArrayList` uses 
>>> unsynchronized mutability internally (a quick review of the GrepCode 
>>> entry for `ArrayList` confirms this), then we can rest assured that a 
>>> `volatile` box as opposed to a totally 
>>> unsynchronized mutable variable is unnecessary, even in the context of 
>>> `fold`. After all, `reduce` (and by extension, `transduce`) is only ever 
>>> going to be single-threaded unless the data structure in question 
>>> unexpectedly implements a multithreaded reduce, which should never happen 
>>> (and if it does, you likely have bigger problems). To be honest, I'm not 
>>> sure why `volatile` is used in transducers instead of e.g. an 
>>> `unsynchronized-mutable` box. There may be a good reason, but I'm not 
>>> seeing it immediately. I'd love to learn.
>>>
>>> On Thursday, January 1, 2015 at 10:36:13 PM UTC-5, shintotomoe wrote:

 Thank you for the superfast response. I take it implementing your own 
 transducing process is not something you would usually do unless you have 
 a 
 unique use case (my own use case being already implemented by chan taking 
 a 
 transducer).

 Still, I was wondering about the use of ArrayList in partition-all, and 
 the recommendation to use volatiles inside transducers, which seem at 
 odds. 
 It seems we don't need to implement transducers in a thread-safe way. Is 
 that correct?

 On Friday, January 2, 2015 12:58:51 PM UTC+11, tbc++ wrote:
>
> Core.async already has pipeline, pipeline-blocking and pipeline-async. 
> In addition you can use a transducer inside a channel. Use those instead. 
>
> Timothy
>
> On Thu, Jan 1, 2015 at 6:55 PM, shintotomoe wrote:
>
>> I was wondering how to apply a transducer inside a go process. What 
>> I've so far is the following
>>
>> (defn pipe-xform [in-ch out-ch xform]
>>   (let [tr
>> (let [tr (xform (fn
>>   ([result] result)
>>   ([result input] (conj! result input]
>>   (fn
>> ([] (locking tr (persistent! (tr (transient [])
>> ([input] (locking tr (persistent! (tr (transient []) 
>> 

Re: is this macro right?

2016-01-04 Thread Seth Verrinder
Hi,
I posted another answer to this. There's an additional problem with the 
macro that explains what you're seeing in your first example.

- Seth

On Monday, December 28, 2015 at 2:47:39 AM UTC-6, Mian Pao wrote:
>
>
> http://stackoverflow.com/questions/34448773/is-function-print-has-bug-in-clojure
>
