Re: Seeking critique of "pattern" in clojure.spec (LONG)

2017-04-10 Thread Didier
Here it is adapted to use a 
deftype: https://gist.github.com/didibus/2ccd608ed9d226039f944b02a10f9ad5

I gather from your solution that "orchestra" is not needed to spec :ret 
> types?
>

It is not. The :ret spec will be exercised during st/check. If you want the 
return spec to be validated outside of st/check (i.e., on ordinary instrumented 
calls), then you need Orchestra.
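
Roughly, the split looks like this (a sketch only: vt-lt and its specs here are 
illustrative, and Orchestra is an assumed extra dev dependency, not part of the 
gist):

(s/fdef vt-lt
  :args (s/cat :a ::virtual-time :b ::virtual-time)
  :ret boolean?)

(st/instrument `vt-lt) ; plain clojure.spec: only :args is checked on each call
(st/check `vt-lt)      ; generative run: :args, :ret and :fn are all exercised

;; Orchestra's drop-in instrument also enforces :ret (and :fn) on every call:
;; (require '[orchestra.spec.test :as ost])
;; (ost/instrument)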

I shall have to read up on deftype versus defrecord.
>

I recommend the clojure.org explanation: 
https://clojure.org/reference/datatypes#_deftype_and_defrecord
In a nutshell, deftype gives you a bare Java class with only the behaviour you 
define; defrecord gives you a class that also behaves like a persistent map 
(keyword lookup, assoc, value equality).
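
A quick way to see the practical difference (the names here are made up):

(defrecord RecVT [vt]) ; map-like: keyword lookup, assoc, value equality
(deftype  TypeVT [vt]) ; bare class: only the behaviour you implement yourself

(:vt (->RecVT 1)) ; => 1
(:vt (TypeVT. 1)) ; => nil, no map behaviour
(.vt (TypeVT. 1)) ; => 1, plain field access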

I am not sure why you need a VirtualTime, and what it will be doing, so I 
can't really comment on which solution is best. As James said, it appears 
you might be creating something that behaves the same way Double does? 
Double already has positive and negative infinity and NaN:

(/ 0.0 0.0) ; NaN
(* 1e300 1e300) ; Infinity
(* 1e300 -1e300) ; -Infinity
(< Double/NEGATIVE_INFINITY Double/POSITIVE_INFINITY) ; true
(= Double/NEGATIVE_INFINITY Double/POSITIVE_INFINITY) ; false

I'd suggest: if you need Double, use Double. If you need something close to 
Double and can build on top of it, it's simpler to go with the style of my 
first gist. If you can't build on top of Double, deftype is probably what 
you want.

On Monday, 10 April 2017 21:41:35 UTC-7, Brian Beckman wrote:
>
> Wow... that's a comprehensive solution, Didier :) Bravo! It's a good 
> lesson for s/fdef, which I haven't yet studied. I gather from your solution 
> that "orchestra" is not needed to spec :ret types?
>
> As to semantics, on the one hand, I can spec ::virtual-time as a light 
> overlay over Double and risk conflation of ordinary operators like < <= = 
> etc. On the other hand, I have several options for full protection. I shall 
> have to read up on deftype versus defrecord.
>



Re: Seeking critique of "pattern" in clojure.spec (LONG)

2017-04-10 Thread Brian Beckman
Wow... that's a comprehensive solution, Didier :) Bravo! It's a good lesson for 
s/fdef, which I haven't yet studied. I gather from your solution that 
"orchestra" is not needed to spec :ret types?

As to semantics, on the one hand, I can spec ::virtual-time as a light overlay 
over Double and risk conflation of ordinary operators like < <= = etc. On the 
other hand, I have several options for full protection. I shall have to read up 
on deftype versus defrecord.



[QUIL] select-input and select-output?

2017-04-10 Thread Jay Porcasi
Hello,

I'm looking for the Quil equivalents of Processing's selectInput() and 
selectOutput(), but I can't find them.

If they're not there, what would be the best way to directly access the 
corresponding Processing methods instead? Or is there a different, more 
convenient way to get the same functionality of those methods?

Thanks for any suggestions in the right direction,
Jay



Re: Seeking critique of "pattern" in clojure.spec (LONG)

2017-04-10 Thread James Reeves
Using the preset infinity constants is probably the best solution in this
case. :)

- James

On 11 April 2017 at 01:50, Brian Beckman  wrote:

> James -- just the kind of simplification I was looking for! In fact, I
> think the following will do everything I need --- generate numbers avoiding
> only NaN (which isn't equal to itself, nor less than anything)
>
> (s/def ::virtual-time
>   (s/with-gen
> (s/and
>  number? #(not (Double/isNaN %)))
> ;; We'd like most values generated in tests to be finite, with the
> ;; occasional infinity for spice. Adjust these frequencies to taste.
> #(gen/frequency [[98 (s/gen number?)]
>  [ 1 (gen/return Double/NEGATIVE_INFINITY)]
>  [ 1 (gen/return Double/POSITIVE_INFINITY)]])))
>
>
>



Re: Seeking critique of "pattern" in clojure.spec (LONG)

2017-04-10 Thread Didier
I agree with James, here's what I'd 
do: https://gist.github.com/didibus/d0228ffad9b920c201410806b157ff10

The only downside, and why you might still want to use types (probably with 
deftype), is to prevent people from using standard functions like <, >, =, 
etc. If you deftyped virtual-time, it could not accidentally be used like a 
normal number. Ideally, you'd implement the Java Comparable interface too.
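
For instance, a rough sketch of that deftype (illustrative only, reusing the 
:vt-negative-infinity / :vt-positive-infinity markers from your post):

(deftype VirtualTime [vt]
  Comparable
  (compareTo [_ other]
    (let [ov (.vt other)] ; reflective field access; add a type hint for production use
      (cond
        (= vt ov) 0
        (= vt :vt-negative-infinity) -1
        (= vt :vt-positive-infinity) 1
        (= ov :vt-negative-infinity) 1
        (= ov :vt-positive-infinity) -1
        :else (compare vt ov)))))

;; (sort [(VirtualTime. 3) (VirtualTime. :vt-positive-infinity) (VirtualTime. 1)])
;; sorts via compare/compareTo, while (< (VirtualTime. 1) 2) throws, which is the point.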

On Monday, 10 April 2017 17:50:58 UTC-7, Brian Beckman wrote:
>
> James -- just the kind of simplification I was looking for! In fact, I 
> think the following will do everything I need --- generate numbers avoiding 
> only NaN (which isn't equal to itself, nor less than anything)
>
> (s/def ::virtual-time
>   (s/with-gen
> (s/and
>  number? #(not (Double/isNaN %)))
> ;; We'd like most values generated in tests to be finite, with the
> ;; occasional infinity for spice. Adjust these frequencies to taste.
> #(gen/frequency [[98 (s/gen number?)]
>  [ 1 (gen/return Double/NEGATIVE_INFINITY)]
>  [ 1 (gen/return Double/POSITIVE_INFINITY)]])))
>
>
>
>



Re: Seeking critique of "pattern" in clojure.spec (LONG)

2017-04-10 Thread Brian Beckman
James -- just the kind of simplification I was looking for! In fact, I 
think the following will do everything I need --- generate numbers avoiding 
only NaN (which isn't equal to itself, nor less than anything)

(s/def ::virtual-time
  (s/with-gen
    (s/and
     number? #(not (Double/isNaN %)))
    ;; We'd like most values generated in tests to be finite, with the
    ;; occasional infinity for spice. Adjust these frequencies to taste.
    #(gen/frequency [[98 (s/gen number?)]
                     [ 1 (gen/return Double/NEGATIVE_INFINITY)]
                     [ 1 (gen/return Double/POSITIVE_INFINITY)]])))
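
A couple of quick checks of what that spec accepts and generates (same s and 
gen aliases as above):

(s/valid? ::virtual-time Double/POSITIVE_INFINITY) ; => true
(s/valid? ::virtual-time (/ 0.0 0.0))              ; => false, NaN is excluded
(gen/sample (s/gen ::virtual-time) 5)              ; mostly finite numbers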





Re: Seeking critique of "pattern" in clojure.spec (LONG)

2017-04-10 Thread James Reeves
I think what you have is overly complex for what you want to do.

Consider this alternative spec:

  (s/def ::virtual-time
    (s/or :number number?, :limit #{::infinity- ::infinity+}))

Then we write a comparator:

  (defn compare-times [a b]
    (cond
      (= a b) 0
      (= a ::infinity+) +1
      (= a ::infinity-) -1
      (= b ::infinity+) -1
      (= b ::infinity-) +1
      :else (compare a b)))

From there we can derive less-than and greater-than functions if we really
need them.
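
For example (names illustrative):

  (defn vt-lt [a b] (neg? (compare-times a b)))
  (defn vt-gt [a b] (pos? (compare-times a b)))
  (defn vt-le [a b] (not (pos? (compare-times a b))))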

I don't think you need protocols, records or custom generators.

- James

On 10 April 2017 at 17:30, Brian Beckman  wrote:

> "I apologize for the length of this post ..."  Blaise Pascal?
>
> I am seeking critique of a certain "programming pattern" that's arisen
> several times in a project. I want testable types satisfying a protocol,
> but the pattern I developed "feels" heavyweight, as the example will show,
> but I don't know a smaller way to get what I want. The amount of code I
> needed to formalize and test my specs "feels" like too much. In particular,
> the introduction of a defrecord just to support the protocol doesn't "feel"
> minimal. The defrecord provides a constructor with positional args — of
> dubious utility — especially for large records, but otherwise acts like a
> hashmap. Perhaps there is a way to bypass the defrecord and directly use a
> hashmap?
>
> Generally, I am suspicious of "programming patterns," because I believe
> that an apparent need for a programming pattern usually means one of two
> things:
>
>1.
>
>The programming language doesn't directly support some reasonable
>need, and that's not usually the case with Clojure
>2.
>
>Ignorance: I don't know an idiomatic way to do what I want.
>
> There is a remote, third possibility, that "what I want" is stupid,
> ignorant, or otherwise unreasonable.
> Here is what I settled on: quadruples of protocol, defrecord, specs and
> tests to fully describe and test types in my application:
>
>1.
>
>a protocol to declare functions that certain types must implement
>2.
>
>at least one defrecord to implement the protocol
>3.
>
>a spec to package checks and test generators
>4.
>
>tests to, well, test them
>
> For a small example (my application has some that are much bigger),
> consider a type that models "virtual times" as numbers-with-infinities.
> Informally, a "virtual time" is either a number or one of two distinguished
> values for plus and minus infinity. Minus infinity is less than any virtual
> time other than minus infinity. Plus infinity is greater than any virtual
> time other than plus infinity." I'll write a protocol, a defrecord, a spec,
> and a couple of tests for this type.
>
> In the actual code, the elements come in the order of protocol, defrecord,
> spec, and tests because of cascading dependencies. For human consumption,
> I'll "detangle" them and present the spec first:
>
> (s/def ::virtual-time
>   (s/with-gen
>     (s/and ; idiom for providing a "conformer" function below
>      (s/or :minus-infinity #(vt-eq % :vt-negative-infinity) ; see the protocol for "vt-eq"
>            :plus-infinity  #(vt-eq % :vt-positive-infinity)
>            :number         #(number? (:vt %)))
>      (s/conformer second)) ; strip off redundant conformer tag
>     #(gen/frequency [[98 vt-number-gen] ; generate mostly numbers ...
>                      [ 1 vt-negative-infinity-gen] ; ... with occasional infinities
>                      [ 1 vt-positive-infinity-gen]])))
>
> That should be self-explanatory given the following definitions:
>
> (def vt-number-gen
>   (gen/bind
>    (gen/large-integer)
>    (fn [vt] (gen/return
>              (virtual-time. vt))))) ; invoke constructor ... heavyweight?
>
> (def vt-negative-infinity-gen
>   (gen/return (virtual-time. :vt-negative-infinity)))
>
> (def vt-positive-infinity-gen
>   (gen/return (virtual-time. :vt-positive-infinity)))
>
> The tests use the generators and a couple of global variables:
>
> (def vt-negative-infinity (virtual-time. :vt-negative-infinity))
> (def vt-positive-infinity (virtual-time. :vt-positive-infinity))
>
> (defspec minus-infinity-less-than-all-but-minus-infinity 100
>   (prop/for-all
>    [vt (s/gen :pattern-mve.core/virtual-time)]
>    (if (not= (:vt vt) :vt-negative-infinity)
>      (vt-lt vt-negative-infinity vt) ; see the protocol for def of "vt-lt"
>      true)))
>
> (defspec plus-infinity-not-less-than-any 100
>   (prop/for-all
>    [vt (s/gen :pattern-mve.core/virtual-time)]
>    (not (vt-lt vt-positive-infinity vt))))
>
> The protocol specifies the comparison operators "vt-lt" and "vt-le." A
> defrecord to implement it should now be obvious, given understanding of how
> they're used above:
>
> (defprotocol VirtualTimeT
>   (vt-lt [this-vt that-vt])
>   (vt-le [this-vt that-vt])
>   (vt-eq [this-vt that-vt]))
>
> (defn -vt-compare-lt [this-vt that-vt]
>   (case (:vt this-vt)
>     :vt-negative-infinity
>     (case (:vt that-vt)

Re: Derefs broken after clojure.tools.namespace.repl/refresh

2017-04-10 Thread Timothy Baldridge
You're reloading your namespaces from a non-repl thread, concurrently while
editing code in the repl. I don't think this use case is supported by
tools.namespace.

On Mon, Apr 10, 2017 at 2:56 PM, Didier  wrote:

> Hum, not sure why you would do this, but I'm guessing refresh goes in an
> infinite async loop. You keep reloading a namespace which creates a thread
> to reload itself.
>
> I can't really explain why the symbol exists, but is not bound. I would
> have thought either the symbol would not exist, or the Var would contain
> the correct value.
>



-- 
“One of the main causes of the fall of the Roman Empire was that–lacking
zero–they had no way to indicate successful termination of their C
programs.”
(Robert Firth)



Re: Seeking critique of "pattern" in clojure.spec (LONG)

2017-04-10 Thread Brian Beckman
These are good comments that give me things to think about. I'm grateful.
* The pattern concerned me because (1) it was just the first thing I came 
up with, so I wasn't sure there wasn't a better way staring me in the face; (2) I 
didn't see any clearly better alternatives, so I wasn't sure whether I just 
don't know enough Clojure; and (3) it intuitively felt heavyweight, with at least 
four "complected" language features.
* Understood about polymorphism. In my real app, I have a lot of it, but I 
thought I would use protocols & records even when I don't have polymorphism 
just for uniformity of style: one way to express my "testable types." 
* You've pointed out an overlap between two different ways to specify 
structure: (A) requiring keys via records and (B) requiring keys using 
specs over ordinary maps. Starting with records, I sensed that some of my 
specs were actually vapid, and now you've told me why.  
* I think experimenting with mere (collections of) functions over mere maps 
with structure enforced by specs is a good idea. I'll try it and "weigh" it 
against this alternative.


On Monday, April 10, 2017 at 1:13:49 PM UTC-7, Didier wrote:
>
> I think this pattern is fine.
>
> What specifically about it annoys you?
>
> You could do it without records, but then you wouldn't be creating a type. 
> Do you really need a type?
>
> The advantage of types in Clojure is that they let you do polymorphic 
> dispatch on them. So they are useful if you have one function that you 
> want to reuse across many types.
>
> In your case, I'm not seeing other records implementing the protocol. So 
> it doesn't seem you need polymorphic dispatch on type. So maybe you can 
> drop the protocol.
>
> Records are useful if you need a map with guaranteed keys. Spec makes this 
> feature less useful, because you can now spec a map and test that it always 
> has the right keys when used. If you have a record, a fn that works over 
> that record just needs to check the argument has the type of record, and it 
> knows the keys exist. If you have a map instead, the fn would need to check 
> the keys exist.
>
> Records don't support unions; all keys must exist. In your case, you want a 
> union: a map with keys x, y, or z. So if you use a record, some keys will 
> always have a nil value. So, again, you might be better served by a map.
>
> Recap: protocols if you want a common interface across multiple types; 
> records if you want to create a map with guaranteed keys, which will 
> identify itself as a named Java type.
>
> Your specs can't really be made shorter, since they need custom gens.
>
> If I were you, I'd experiment with functions over maps. One thing to 
> consider is that specs are structural types, not nominal. So if you spec a 
> map, it describes its structure. A function says: I take a structure of that 
> shape; if you give me anything that conforms, I can successfully do my job. 
> The shape itself has no known runtime name.
>
>



Derefs broken after clojure.tools.namespace.repl/refresh

2017-04-10 Thread Didier
Hum, not sure why you would do this, but I'm guessing refresh goes in an 
infinite async loop. You keep reloading a namespace which creates a thread to 
reload itself.

I can't really explain why the symbol exists, but is not bound. I would have 
thought either the symbol would not exist, or the Var would contain the correct 
value.



Seeking critique of "pattern" in clojure.spec (LONG)

2017-04-10 Thread Didier
I think this pattern is fine.

What specifically about it annoys you?

You could do it without records, but then you wouldn't be creating a type. Do 
you really need a type?

The advantage of types in Clojure is that they let you do polymorphic dispatch 
on them. So they are useful if you have one function that you want to reuse 
across many types.

In your case, I'm not seeing other records implementing the protocol. So it 
doesn't seem you need polymorphic dispatch on type. So maybe you can drop the 
protocol.

Records are useful if you need a map with guaranteed keys. Spec makes this 
feature less useful, because you can now spec a map and test that it always has 
the right keys when used. If you have a record, a fn that works over that 
record just needs to check the argument has the type of record, and it knows 
the keys exist. If you have a map instead, the fn would need to check the keys 
exist.

Records don't support unions; all keys must exist. In your case, you want a 
union: a map with keys x, y, or z. So if you use a record, some keys will always 
have a nil value. So, again, you might be better served by a map.

Recap: protocols if you want a common interface across multiple types; records 
if you want to create a map with guaranteed keys, which will identify itself as 
a named Java type.

Your specs can't really be made shorter, since they need custom gens.

If I were you, I'd experiment with functions over maps. One thing to consider is 
that specs are structural types, not nominal. So if you spec a map, it describes 
its structure. A function says: I take a structure of that shape; if you give me 
anything that conforms, I can successfully do my job. The shape itself has no 
known runtime name.
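
As a sketch of that style (all names here are made up):

(s/def ::vt number?)
(s/def ::payload any?)
(s/def ::message (s/keys :req [::vt ::payload]))

;; the fn promises to work on anything conforming to ::message; no named type needed
(s/fdef earliest
  :args (s/cat :msgs (s/coll-of ::message :min-count 1))
  :ret ::message)

(defn earliest [msgs]
  (apply min-key ::vt msgs))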



Re: Using transducers in a new transducing context

2017-04-10 Thread Alexander Gunnarson
Thanks for clearing all of that up Alex! Very helpful.

On Monday, April 10, 2017 at 3:46:45 PM UTC-4, Alex Miller wrote:
>
>
>
> On Monday, April 10, 2017 at 2:25:48 PM UTC-5, Alexander Gunnarson wrote:
>>
>> I think you present a key question: what assumptions can a transducer 
>> make? We know the standard ones, but what of memory barriers? 
>>
>
> Transducers should ensure stateful changes guarantee visibility. That is: 
> you should not make assumptions about external memory barriers.
>  
>
>> Based on the current implementation, in terms of concurrency, it seems to 
>> make (inconsistent — see also `partition-by`) guarantees that sequential 
>> writes and reads will be consistent, no matter what thread does the reads 
>> or writes. Concurrent writes are not supported. But *should *sequential 
>> multi-threaded reads/writes be supported? 
>>
>
> Yes. core.async channel transducers already do this.
>  
>
>> This is a question best left to Alex but I think I already know the 
>> answer based on his conversation with Rich: it's part of the contract.
>>
>> I think another key question is, is the channel lock memory barrier part 
>> of the contract of a core.async channel implementation? 
>>
>
> Yes, but other transducing processes may exist either in core in the 
> future or in external libs.
>  
>
>> If not, volatiles will be necessary in that context if the memory barrier 
>> is ever taken away, and it would make sense that volatiles are used in 
>> transducers "just in case" specifically for that use case. But if the 
>> channel lock memory barrier *is* part of the contract and not just an 
>> implementation detail, then I'm not certain that it's very useful at all 
>> for transducers to provide a guarantee of safe sequential multi-threaded 
>> reads/writes.
>>
>



Re: Using transducers in a new transducing context

2017-04-10 Thread Alex Miller


On Monday, April 10, 2017 at 2:25:48 PM UTC-5, Alexander Gunnarson wrote:
>
> I think you present a key question: what assumptions can a transducer 
> make? We know the standard ones, but what of memory barriers? 
>

Transducers should ensure stateful changes guarantee visibility. That is: 
you should not make assumptions about external memory barriers.
 

> Based on the current implementation, in terms of concurrency, it seems to 
> make (inconsistent — see also `partition-by`) guarantees that sequential 
> writes and reads will be consistent, no matter what thread does the reads 
> or writes. Concurrent writes are not supported. But *should *sequential 
> multi-threaded reads/writes be supported? 
>

Yes. core.async channel transducers already do this.
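
For instance (a minimal sketch), the transducer's state may be touched by 
different pool threads over the channel's lifetime, but never by two at once:

(require '[clojure.core.async :as a])

(def c (a/chan 10 (map-indexed vector)))
(a/>!! c :a)
(a/<!! c) ; => [0 :a]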
 

> This is a question best left to Alex but I think I already know the answer 
> based on his conversation with Rich: it's part of the contract.
>
> I think another key question is, is the channel lock memory barrier part 
> of the contract of a core.async channel implementation? 
>

Yes, but other transducing processes may exist either in core in the future 
or in external libs.
 

> If not, volatiles will be necessary in that context if the memory barrier 
> is ever taken away, and it would make sense that volatiles are used in 
> transducers "just in case" specifically for that use case. But if the 
> channel lock memory barrier *is* part of the contract and not just an 
> implementation detail, then I'm not certain that it's very useful at all 
> for transducers to provide a guarantee of safe sequential multi-threaded 
> reads/writes.
>



Re: Using transducers in a new transducing context

2017-04-10 Thread Alex Miller

On Monday, April 10, 2017 at 1:57:10 PM UTC-5, Léo Noel wrote:
>
> What you said holds for reduction but not necessarily a parallel fold (see 
>> clojure.core.reducers/fold).
>>
>
> Exactly, and that's why stateful transducers are explicitly forbidden in 
> fold and in core.async pipeline functions.
> This is not related to memory visibility, this is due to the fact that 
> stateful transducers force the reducing process to be sequential.
> Does it make any sense to parallelize map-indexed ? partition-all ? dedupe 
> ?
>

Parallel transducers is something Rich has thought about some but that's a 
future effort.
 

> Léo, I definitely agree that you can use unsynchronized mutable stateful 
>> transducers *as long as you can guarantee they'll be used only in 
>> single-threaded contexts.* 
>>
>
> The problem is at a lower level. The memory model of the JVM 
>> doesn't guarantee that changes to an unsynchronized non-volatile reference 
>> are visible to other threads.
>>
>  
> The Java Memory Model allows using unsynchronized variables to share data 
> across threads as long as a memory barrier is set between the writer and 
> the reader. For example, in the case of core.async, the channel lock sets a 
> barrier, and there is also (redundant) barriers for each volatile inside 
> transducers.
>

The JVM is pretty good at minimizing this stuff - so while you are stating 
these barriers are redundant and are implying that's an issue, it would not 
surprise me if the JVM is able to reduce or eliminate the impacts of that. 
At the very least, it's too difficult to reason about without a real perf 
test and numbers.
 

> A transducing process could apply each step of the transduce using a 
>> thread from a pool and also not use a memory barrier
>
>  
>
>> Transducers included in core cannot make the assumption that they will 
>> only be used that way.
>>
>
> Yes, that makes sense that you can't make that assumption. 
>
>  
> This is the key point : what assumptions a transducer can make ?
>

A transducer can assume it will be invoked by no more than one thread at a 
time.
 

> In my opinion, it is reasonable for a stateful transducer to assume that 
> the transducing context will fulfill the contract of "always passing the 
> result of step n to the first argument of step n+1".
>
 

> This assumption is powerful because it guarantees that there will always 
> be a memory barrier between two successive steps.
> Proof (reductio ad absurdum) : without a memory barrier, the result of the 
> step n wouldn't be visible to the (potentially different) thread performing 
> the step n+1.
>

You're conflating the stateful values inside the transducer with the state 
returned by and passed into a transducer. That's a linkage that does not 
necessarily exist.
 

> So here is my question to the language designers : is it reasonable to 
> assume that ?
>

No.
 

> If yes, that means it's ok to use unsynchronized variables in stateful 
> transducers as long as they stay local.
>
 

> If no, that means we'll use synchronization in all stateful transducers, 
> with an obvious performance penalty and a benefit that remains unclear.
>

Fast wrong results are still wrong. I do not think it's at all obvious how 
this affects performance without running some benchmarks. Volatiles do not 
require flushing values to all cores or anything like that. They just 
define constraints - the JVM is very good at optimizing these kinds of 
things. It would not surprise me if an uncontended thread-contained 
volatile could be very fast (for the single-threaded transducer case) or 
that a volatile under a lock would be no worse than the lock by itself.
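
For reference, this is the shape in question: a stateful transducer keeping its 
one piece of state in a volatile! (a dedupe-style sketch, not core's exact 
source):

(defn my-dedupe []
  (fn [rf]
    (let [prev (volatile! ::none)] ; per-transducer state, touched by one thread at a time
      (fn
        ([] (rf))
        ([result] (rf result))
        ([result input]
         (let [p @prev]
           (vreset! prev input)
           (if (= p input)
             result
             (rf result input))))))))

(sequence (my-dedupe) [1 1 2 2 3 1]) ; => (1 2 3 1)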




Re: Using transducers in a new transducing context

2017-04-10 Thread Alexander Gunnarson
I think you present a key question: what assumptions can a transducer make? 
We know the standard ones, but what of memory barriers? Based on the 
current implementation, in terms of concurrency, it seems to make 
(inconsistent — see also `partition-by`) guarantees that sequential writes 
and reads will be consistent, no matter what thread does the reads or 
writes. Concurrent writes are not supported. But *should *sequential 
multi-threaded reads/writes be supported? This is a question best left to 
Alex but I think I already know the answer based on his conversation with 
Rich: it's part of the contract.

I think another key question is, is the channel lock memory barrier part of 
the contract of a core.async channel implementation? If not, volatiles will 
be necessary in that context if the memory barrier is ever taken away, and 
it would make sense that volatiles are used in transducers "just in case" 
specifically for that use case. But if the channel lock memory barrier *is* 
part of the contract and not just an implementation detail, then I'm not 
certain that it's very useful at all for transducers to provide a guarantee 
of safe sequential multi-threaded reads/writes.

On Monday, April 10, 2017 at 2:57:10 PM UTC-4, Léo Noel wrote:
>
> What you said holds for reduction but not necessarily a parallel fold (see 
>> clojure.core.reducers/fold).
>>
>
> Exactly, and that's why stateful transducers are explicitly forbidden in 
> fold and in core.async pipeline functions.
> This is not related to memory visibility, this is due to the fact that 
> stateful transducers force the reducing process to be sequential.
> Does it make any sense to parallelize map-indexed ? partition-all ? dedupe 
> ?
>
>
> These kinds of failures are inherently difficult to reproduce unless the 
>> code is in production and you're on vacation. ;)
>>
>
> Couldn't agree more. However, we're all clever people and the Java Memory 
> Model is not magic :) 
>
>
> Léo, I definitely agree that you can use unsynchronized mutable stateful 
>> transducers *as long as you can guarantee they'll be used only in 
>> single-threaded contexts.* 
>
>
> The problem is at a lower level. The memory model of the JVM 
>> doesn't guarantee that changes to an unsynchronized non-volatile reference 
>> are visible to other threads.
>>
>  
> The Java Memory Model allows using unsynchronized variables to share data 
> across threads as long as a memory barrier is set between the writer and 
> the reader. For example, in the case of core.async, the channel lock sets a 
> barrier, and there is also (redundant) barriers for each volatile inside 
> transducers.
>
>
> A transducing process could apply each step of the transduce using a 
>> thread from a pool and also not use a memory barrier
>
>  
>
>> Transducers included in core cannot make the assumption that they will 
>> only be used that way.
>>
>
> Yes, that makes sense that you can't make that assumption. 
>
>  
> This is the key point : what assumptions a transducer can make ?
> In my opinion, it is reasonable for a stateful transducer to assume that 
> the transducing context will fulfill the contract of "always passing the 
> result of step n to the first argument of step n+1".
> This assumption is powerful because it guarantees that there will always 
> be a memory barrier between two successive steps.
> Proof (reductio ad absurdum) : without a memory barrier, the result of the 
> step n wouldn't be visible to the (potentially different) thread performing 
> the step n+1.
> So here is my question to the language designers : is it reasonable to 
> assume that ?
> If yes, that means it's ok to use unsynchronized variables in stateful 
> transducers as long as they stay local.
> If no, that means we'll use synchronization in all stateful transducers, 
> with an obvious performance penalty and a benefit that remains unclear.
>
>
>
> On Monday, April 10, 2017 at 7:34:39 PM UTC+2, Alexander Gunnarson wrote:
>>
>> Yes, that makes sense that you can't make that assumption. You'd have to 
>> create something like what I was discussing above:
>>
>> (defn map-indexed-transducer-base [f box-mutable inc-mutable]
>>   (fn [rf]
>> (let [i (box-mutable -1)]
>>   (fn
>> ([] (rf))
>> ([result] (rf result))
>> ([result input]
>>   (rf result (f (inc-mutable i) input)))
>>
>> ;; this is the version that Léo would want
>> (defn map-indexed-transducer-single-threaded [f]
>>   (map-indexed-transducer-base f unsynchronized-mutable-long! 
>> #(unsynchronized-mutable-swap! 
>> % inc))
>>
>> ;; this is the version included in clojure.core
>> (defn map-indexed-transducer-sequentially-accessed-by-different-threads [
>> f]
>>   (map-indexed-transducer-base f volatile! #(vswap! % inc))
>>
>> ;; this works with `fold` and gives you all the indices at least, but in 
>> a nondeterministic order
>> (defn map-indexed-transducer-concurrently-accessed-by-different-threads [
>> f]
>>   

Re: Using transducers in a new transducing context

2017-04-10 Thread Léo Noel

>
> What you said holds for reduction but not necessarily a parallel fold (see 
> clojure.core.reducers/fold).
>

Exactly, and that's why stateful transducers are explicitly forbidden in 
fold and in core.async pipeline functions.
This is not related to memory visibility, this is due to the fact that 
stateful transducers force the reducing process to be sequential.
Does it make any sense to parallelize map-indexed ? partition-all ? dedupe ?


These kinds of failures are inherently difficult to reproduce unless the 
> code is in production and you're on vacation. ;)
>

Couldn't agree more. However, we're all clever people and the Java Memory 
Model is not magic :) 


Léo, I definitely agree that you can use unsynchronized mutable stateful 
> transducers *as long as you can guarantee they'll be used only in 
> single-threaded contexts.* 


The problem is at a lower level. The memory model of the JVM 
> doesn't guarantee that changes to an unsynchronized non-volatile reference 
> are visible to other threads.
>
 
The Java Memory Model allows using unsynchronized variables to share data 
across threads as long as a memory barrier is set between the writer and 
the reader. For example, in the case of core.async, the channel lock sets a 
barrier, and there is also (redundant) barriers for each volatile inside 
transducers.


A transducing process could apply each step of the transduce using a thread 
> from a pool and also not use a memory barrier

 

> Transducers included in core cannot make the assumption that they will 
> only be used that way.
>

Yes, that makes sense that you can't make that assumption. 

 
This is the key point : what assumptions a transducer can make ?
In my opinion, it is reasonable for a stateful transducer to assume that 
the transducing context will fulfill the contract of "always passing the 
result of step n to the first argument of step n+1".
This assumption is powerful because it guarantees that there will always be 
a memory barrier between two successive steps.
Proof (reductio ad absurdum) : without a memory barrier, the result of the 
step n wouldn't be visible to the (potentially different) thread performing 
the step n+1.
So here is my question to the language designers : is it reasonable to 
assume that ?
If yes, that means it's ok to use unsynchronized variables in stateful 
transducers as long as they stay local.
If no, that means we'll use synchronization in all stateful transducers, 
with an obvious performance penalty and a benefit that remains unclear.



On Monday, April 10, 2017 at 7:34:39 PM UTC+2, Alexander Gunnarson wrote:
>
> Yes, that makes sense that you can't make that assumption. You'd have to 
> create something like what I was discussing above:
>
> (defn map-indexed-transducer-base [f box-mutable inc-mutable]
>   (fn [rf]
> (let [i (box-mutable -1)]
>   (fn
> ([] (rf))
> ([result] (rf result))
> ([result input]
>   (rf result (f (inc-mutable i) input)))
>
> ;; this is the version that Léo would want
> (defn map-indexed-transducer-single-threaded [f]
>   (map-indexed-transducer-base f unsynchronized-mutable-long! 
> #(unsynchronized-mutable-swap! 
> % inc))
>
> ;; this is the version included in clojure.core
> (defn map-indexed-transducer-sequentially-accessed-by-different-threads [f
> ]
>   (map-indexed-transducer-base f volatile! #(vswap! % inc))
>
> ;; this works with `fold` and gives you all the indices at least, but in a 
> nondeterministic order
> (defn map-indexed-transducer-concurrently-accessed-by-different-threads [f
> ]
>   (map-indexed-transducer-base f atom #(swap! % inc)) ; or an AtomicLong 
> variant
>
> On Monday, April 10, 2017 at 1:06:14 PM UTC-4, Alex Miller wrote:
>>
>>
>> On Monday, April 10, 2017 at 11:48:41 AM UTC-5, Alexander Gunnarson wrote:
>>>
>>> Léo, I definitely agree that you can use unsynchronized mutable stateful 
>>> transducers *as long as you can guarantee they'll be used only in 
>>> single-threaded contexts. *
>>>
>>
>> Transducers included in core cannot make the assumption that they will 
>> only be used that way. (But you may be able to guarantee that with your 
>> own.)
>>
>>



Re: Using transducers in a new transducing context

2017-04-10 Thread Alexander Gunnarson
Yes, that makes sense that you can't make that assumption. You'd have to 
create something like what I was discussing above:

(defn map-indexed-transducer-base [f box-mutable inc-mutable]
  (fn [rf]
    (let [i (box-mutable -1)] ; per-transducer index state; container supplied by caller
      (fn
        ([] (rf))
        ([result] (rf result))
        ([result input]
         (rf result (f (inc-mutable i) input)))))))

;; this is the version that Léo would want
;; (unsynchronized-mutable-long! and unsynchronized-mutable-swap! are hypothetical helpers)
(defn map-indexed-transducer-single-threaded [f]
  (map-indexed-transducer-base f unsynchronized-mutable-long!
                               #(unsynchronized-mutable-swap! % inc)))

;; this is the version included in clojure.core
(defn map-indexed-transducer-sequentially-accessed-by-different-threads [f]
  (map-indexed-transducer-base f volatile! #(vswap! % inc)))

;; this works with `fold` and gives you all the indices at least, but in a
;; nondeterministic order
(defn map-indexed-transducer-concurrently-accessed-by-different-threads [f]
  (map-indexed-transducer-base f atom #(swap! % inc))) ; or an AtomicLong variant
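
For what it's worth, the volatile-based variant drops in like core's map-indexed 
transducer when used from a single thread:

(into [] (map-indexed-transducer-sequentially-accessed-by-different-threads vector)
      [:a :b :c])
;; => [[0 :a] [1 :b] [2 :c]]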

On Monday, April 10, 2017 at 1:06:14 PM UTC-4, Alex Miller wrote:
>
>
> On Monday, April 10, 2017 at 11:48:41 AM UTC-5, Alexander Gunnarson wrote:
>>
>> Léo, I definitely agree that you can use unsynchronized mutable stateful 
>> transducers *as long as you can guarantee they'll be used only in 
>> single-threaded contexts. *
>>
>
> Transducers included in core cannot make the assumption that they will 
> only be used that way. (But you may be able to guarantee that with your 
> own.)
>
>



Re: Using transducers in a new transducing context

2017-04-10 Thread Alex Miller

On Monday, April 10, 2017 at 11:48:41 AM UTC-5, Alexander Gunnarson wrote:
>
> Léo, I definitely agree that you can use unsynchronized mutable stateful 
> transducers *as long as you can guarantee they'll be used only in 
> single-threaded contexts. *
>

Transducers included in core cannot make the assumption that they will only 
be used that way. (But you may be able to guarantee that with your own.)



Re: Using transducers in a new transducing context

2017-04-10 Thread Seth Verrinder
The problem is at a lower level. The memory model of the JVM doesn't
guarantee that changes to an unsynchronized non-volatile reference are
visible to other threads. Transducers don't have to worry about
concurrency but they do have to worry about visibility of changes
across different threads.

On Mon, Apr 10, 2017 at 8:37 AM, Léo Noel  wrote:
> This topic is of high interest to me as it is at the core of my current
> works. I had a similar questioning a while ago and I have to say I'm even
> more confused with this :
>
>> While transducing processes may provide locking to cover the visibility of
>> state updates in a stateful transducer, transducers should still use
>> stateful constructs that ensure visibility (by using volatile, atoms, etc).
>
>
> I actually tried pretty hard to find a use case that would make
> partition-all fail because of its unsynchronized local state, and did not
> manage to find one that did not break any contract. I arrived at the
> conclusion that it is always safe to use unsynchronized constructs in
> stateful transducers. The reason is that you need to ensure that the result
> of each step is given to the next, and doing so you will necessarily set a
> memory barrier of some sort between each step. Each step happens-before the
> next, and therefore mutations performed by the thread at step n are always
> visible by the thread performing the step n+1. This is really brilliant :
> when designing a transducer, you can be confident that calls to your
> reducing function will be sequential and stop worrying about concurrency.
> You just have to ensure that mutable state stays local. True encapsulation,
> the broken promise of object-oriented programming.
>
> My point is that the transducer contract "always feed the result of step n
> as the first argument of step n+1" is strong enough to safely use local
> unsynchronized state. For this reason, switching partition-* transducers to
> volatile constructs really sounds like a step backwards to me. However,
> after re-reading the documentation on transducers, I found that this
> contract is not explicitly stated. It is just *natural* to think this way,
> because transducers are all about reducing processes. Is there a plan to
> reconsider this principle ? I would be very interested to know what Rich has
> in mind that could lead him to advise to overprotect local state of
> transducers.
>
>
>
> On Monday, April 10, 2017 at 4:44:00 AM UTC+2, Alexander Gunnarson wrote:
>>
>> Thanks so much for your input Alex! It was a very helpful confirmation of
>> the key conclusions arrived at in this thread, and I appreciate the
>> additional elaborations you gave, especially the insight you passed on about
>> the stateful transducers using `ArrayList`. I'm glad that I wasn't the only
>> one wondering about the apparent lack of parity between its unsynchronized
>> mutability and the volatile boxes used for e.g. `map-indexed` and others.
>>
>> As an aside about the stateful `take` transducer, Tesser uses the
>> equivalent of one but skirts the issue by not guaranteeing that the first n
>> items of the collection will be returned, but rather, n items of the
>> collection in no particular order and starting at no particular index. This
>> is achievable without Tesser by simply replacing the `volatile` in the
>> `core/take` transducer with an `atom` and using it with `fold`. But yes,
>> `take`'s contract is broken with this and so still follows the rule of thumb
>> you established that `fold` can't use stateful transducers (at least, not
>> without weird things like reordering of the indices in `map-indexed` and so
>> on).
>>
>> That's interesting that `fold` can use transducers directly! I haven't
>> tried that yet — I've just been wrapping them in an `r/folder`.
>>
>> On Sunday, April 9, 2017 at 10:22:13 PM UTC-4, Alex Miller wrote:
>>>
>>> Hey all, just catching up on this thread after the weekend. Rich and I
>>> discussed the thread safety aspects of transducers last fall and the
>>> intention is that transducers are expected to only be used in a single
>>> thread at a time, but that thread can change throughout the life of the
>>> transducing process (for example when a go block is passed over threads in a
>>> pool in core.async). While transducing processes may provide locking to
>>> cover the visibility of state updates in a stateful transducer, transducers
>>> should still use stateful constructs that ensure visibility (by using
>>> volatile, atoms, etc).
>>>
>>> The major transducing processes provided in core are transduce, into,
>>> sequence, eduction, and core.async. All but core.async are single-threaded.
>>> core.async channel transducers may occur on many threads due to interaction
>>> with the go processing threads, but never happen on more than one thread at
>>> a time. These operations are covered by the channel lock which should
>>> guarantee visibility. Transducers used within a go block (via something like

Re: Using transducers in a new transducing context

2017-04-10 Thread Alexander Gunnarson
Léo, I definitely agree that you can use unsynchronized mutable stateful 
transducers *as long as you can guarantee they'll be used only in 
single-threaded contexts. *We were talking up above on which version of 
synchronization is appropriate for which context. With core.async, if 
you're using a transducer on a `chan` or `pipeline` or the like, it is 
guaranteed that only one thread will use that at a time (thus `atom`s 
weren't needed), *but *a different thread might come in and reuse that same 
stateful transducer, in which case the result of that mutation will need to 
propagate to that thread via a `volatile`. With reducers `fold`, stateful 
transducers don't necessarily hold up their contract (e.g. with 
`map-indexed` as we discussed above) even if you use an `atom` or the like. 
But in truly single-threaded contexts, even within a `go` block or a 
`thread` or the like (as long as the transducer is not re-used e.g. on a 
`chan` etc. where the necessity for a `volatile` applies), it's certainly 
fine to use unsynchronized mutable stateful transducers.

On Monday, April 10, 2017 at 9:37:29 AM UTC-4, Léo Noel wrote:
>
> This topic is of high interest to me as it is at the core of my current 
> works. I had a similar questioning a while ago 
>  and I have 
> to say I'm even more confused with this :
>
> While transducing processes may provide locking to cover the visibility of 
>> state updates in a stateful transducer, transducers should still use 
>> stateful constructs that ensure visibility (by using volatile, atoms, etc).
>>
>
> I actually tried pretty hard to find a use case that would make 
> partition-all fail because of its unsynchronized local state, and did not 
> manage to find one that did not break any contract. I arrived at the 
> conclusion that it is always safe to use unsynchronized constructs in 
> stateful transducers. The reason is that you need to ensure that the result 
> of each step is given to the next, and doing so you will necessarily set a 
> memory barrier of some sort between each step. Each step happens-before the 
> next, and therefore mutations performed by the thread at step n are always 
> visible by the thread performing the step n+1. This is really brilliant : 
> when designing a transducer, you can be confident that calls to your 
> reducing function will be sequential and stop worrying about concurrency. 
> You just have to ensure that mutable state stays local. True encapsulation, 
> the broken promise of object-oriented programming.
>
> My point is that the transducer contract "always feed the result of step n 
> as the first argument of step n+1" is strong enough to safely use local 
> unsynchronized state. For this reason, switching partition-* transducers to 
> volatile constructs really sounds like a step backwards to me. However, 
> after re-reading the documentation on transducers, I found that this 
> contract is not explicitly stated. It is just *natural* to think this way, 
> because transducers are all about reducing processes. Is there a plan to 
> reconsider this principle ? I would be very interested to know what Rich 
> has in mind that could lead him to advise to overprotect local state of 
> transducers.
>
>
>
> On Monday, April 10, 2017 at 4:44:00 AM UTC+2, Alexander Gunnarson wrote:
>>
>> Thanks so much for your input Alex! It was a very helpful confirmation of 
>> the key conclusions arrived at in this thread, and I appreciate the 
>> additional elaborations you gave, especially the insight you passed on 
>> about the stateful transducers using `ArrayList`. I'm glad that I wasn't 
>> the only one wondering about the apparent lack of parity between its 
>> unsynchronized mutability and the volatile boxes used for e.g. 
>> `map-indexed` and others.
>>
>> As an aside about the stateful `take` transducer, Tesser uses the 
>> equivalent of one but skirts the issue by not guaranteeing that the first n 
>> items of the collection will be returned, but rather, n items of the 
>> collection in no particular order and starting at no particular index. This 
>> is achievable without Tesser by simply replacing the `volatile` in the 
>> `core/take` transducer with an `atom` and using it with `fold`. But yes, 
>> `take`'s contract is broken with this and so still follows the rule of 
>> thumb you established that `fold` can't use stateful transducers (at least, 
>> not without weird things like reordering of the indices in `map-indexed` 
>> and so on).
>>
>> That's interesting that `fold` can use transducers directly! I haven't 
>> tried that yet — I've just been wrapping them in an `r/folder`.
>>
>> On Sunday, April 9, 2017 at 10:22:13 PM UTC-4, Alex Miller wrote:
>>>
>>> Hey all, just catching up on this thread after the weekend. Rich and I 
>>> discussed the thread safety aspects of transducers last fall and the 
>>> intention is that transducers are expected to only be used in a single 
>>> 

Re: Using transducers in a new transducing context

2017-04-10 Thread Alexander Gunnarson

>
> On Monday, April 10, 2017 at 12:39:37 PM UTC-4, Alex Miller wrote: 

Oh, you still need r/folder, sorry! Something like:
>
> (r/fold + (r/folder v (map inc)))
>

Ah, okay, glad to know I wasn't going crazy :) Thanks!
 
On Monday, April 10, 2017 at 12:39:37 PM UTC-4, Alex Miller wrote:
>
>
>
> On Sunday, April 9, 2017 at 9:44:00 PM UTC-5, Alexander Gunnarson wrote:
>>
>>
>> As an aside about the stateful `take` transducer, Tesser uses the 
>> equivalent of one but skirts the issue by not guaranteeing that the first n 
>> items of the collection will be returned, but rather, n items of the 
>> collection in no particular order and starting at no particular index. This 
>> is achievable without Tesser by simply replacing the `volatile` in the 
>> `core/take` transducer with an `atom` and using it with `fold`. But yes, 
>> `take`'s contract is broken with this and so still follows the rule of 
>> thumb you established that `fold` can't use stateful transducers (at least, 
>> not without weird things like reordering of the indices in `map-indexed` 
>> and so on).
>>
>
> Right, we intentionally chose to require transducer takes to occur in 
> order to match the sequence take. Tesser's approach is perfectly fine too 
> (as long as you understand the difference).
>  
>
>> That's interesting that `fold` can use transducers directly! I haven't 
>> tried that yet — I've just been wrapping them in an `r/folder`.
>>
>
> Oh, you still need r/folder, sorry! Something like:
>
> (r/fold + (r/folder v (map inc)))
>
>
>
>  
>
>  
>



Re: Using transducers in a new transducing context

2017-04-10 Thread Alex Miller


On Sunday, April 9, 2017 at 9:44:00 PM UTC-5, Alexander Gunnarson wrote:
>
>
> As an aside about the stateful `take` transducer, Tesser uses the 
> equivalent of one but skirts the issue by not guaranteeing that the first n 
> items of the collection will be returned, but rather, n items of the 
> collection in no particular order and starting at no particular index. This 
> is achievable without Tesser by simply replacing the `volatile` in the 
> `core/take` transducer with an `atom` and using it with `fold`. But yes, 
> `take`'s contract is broken with this and so still follows the rule of 
> thumb you established that `fold` can't use stateful transducers (at least, 
> not without weird things like reordering of the indices in `map-indexed` 
> and so on).
>

Right, we intentionally chose to require transducer takes to occur in order 
to match the sequence take. Tesser's approach is perfectly fine too (as 
long as you understand the difference).
 

> That's interesting that `fold` can use transducers directly! I haven't 
> tried that yet — I've just been wrapping them in an `r/folder`.
>

Oh, you still need r/folder, sorry! Something like:

(r/fold + (r/folder v (map inc)))
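
For example (a hedged sketch; `v` here is just an illustrative vector of 
numbers):

(require '[clojure.core.reducers :as r])
(def v (vec (range 1 5)))
(r/fold + (r/folder v (map inc)))   ;=> 14, i.e. (+ 2 3 4 5)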



 

 



Seeking critique of "pattern" in clojure.spec (LONG)

2017-04-10 Thread Brian Beckman
"I apologize for the length of this post ..."  Blaise Pascal?

I am seeking critique of a certain "programming pattern" that has arisen 
several times in a project. I want testable types satisfying a protocol. The 
pattern I developed "feels" heavyweight, as the example will show, but I don't 
know a smaller way to get what I want. The amount of code I needed to 
formalize and test my specs "feels" like too much. In particular, the 
introduction of a defrecord just to support the protocol doesn't "feel" 
minimal. The defrecord provides a constructor with positional args (of dubious 
utility, especially for large records) but otherwise acts like a hashmap. 
Perhaps there is a way to bypass the defrecord and use a hashmap directly?

Generally, I am suspicious of "programming patterns," because I believe 
that an apparent need for a programming pattern usually means one of two 
things:

   1. The programming language doesn't directly support some reasonable need, 
      and that's not usually the case with Clojure.
   2. Ignorance: I don't know an idiomatic way to do what I want.

There is a remote, third possibility: that "what I want" is stupid, ignorant, 
or otherwise unreasonable.

Here is what I settled on: quadruples of protocol, defrecord, specs, and tests 
to fully describe and test the types in my application:

   1. a protocol to declare functions that certain types must implement
   2. at least one defrecord to implement the protocol
   3. a spec to package checks and test generators
   4. tests to, well, test them

For a small example (my application has some that are much bigger), 
consider a type that models "virtual times" as numbers-with-infinities. 
Informally, a "virtual time" is either a number or one of two distinguished 
values for plus and minus infinity. Minus infinity is less than any virtual 
time other than minus infinity. Plus infinity is greater than any virtual 
time other than plus infinity. I'll write a protocol, a defrecord, a spec, 
and a couple of tests for this type. 

In the actual code, the elements come in the order of protocol, defrecord, 
spec, and tests because of cascading dependencies. For human consumption, 
I'll "detangle" them and present the spec first:

(s/def ::virtual-time
  (s/with-gen
    (s/and ; idiom for providing a "conformer" function below
     (s/or :minus-infinity #(vt-eq % :vt-negative-infinity) ; see the protocol for "vt-eq"
           :plus-infinity  #(vt-eq % :vt-positive-infinity)
           :number         #(number? (:vt %)))
     (s/conformer second)) ; strip off redundant conformer tag
    #(gen/frequency [[98 vt-number-gen]            ; generate mostly numbers ...
                     [ 1 vt-negative-infinity-gen] ; ... with occasional infinities
                     [ 1 vt-positive-infinity-gen]])))

That should be self-explanatory given the following definitions:

(def vt-number-gen
  (gen/bind
   (gen/large-integer)
   (fn [vt] (gen/return
             (virtual-time. vt))))) ; invoke constructor ... heavyweight?

(def vt-negative-infinity-gen
  (gen/return (virtual-time. :vt-negative-infinity)))

(def vt-positive-infinity-gen
  (gen/return (virtual-time. :vt-positive-infinity)))
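
For a quick REPL sanity check of the generators (a hedged sketch; the exact 
sample values vary from run to run):

(gen/sample vt-number-gen 3)              ; => three virtual-time records with integer :vt values
(gen/sample vt-positive-infinity-gen 1)   ; => one record whose :vt is :vt-positive-infinity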

The tests use the generators and a couple of global variables:

(def vt-negative-infinity (virtual-time. :vt-negative-infinity))
(def vt-positive-infinity (virtual-time. :vt-positive-infinity))

(defspec minus-infinity-less-than-all-but-minus-infinity 100
  (prop/for-all
   [vt (s/gen :pattern-mve.core/virtual-time)]
   (if (not= (:vt vt) :vt-negative-infinity)
     (vt-lt vt-negative-infinity vt) ; see the protocol for def of "vt-lt"
     true)))

(defspec plus-infinity-not-less-than-any 100
  (prop/for-all
   [vt (s/gen :pattern-mve.core/virtual-time)]
   (not (vt-lt vt-positive-infinity vt))))

The protocol specifies the comparison operators "vt-lt," "vt-le," and "vt-eq." 
A defrecord to implement it should now be obvious, given understanding of how 
they're used above:

(defprotocol VirtualTimeT
  (vt-lt [this-vt that-vt])
  (vt-le [this-vt that-vt])
  (vt-eq [this-vt that-vt]))

(defn -vt-compare-lt [this-vt that-vt]
  (case (:vt this-vt)
    :vt-negative-infinity
    (case (:vt that-vt)
      :vt-negative-infinity false
      #_otherwise true)

    :vt-positive-infinity
    false

    ;; otherwise: this-vt is a number.
    (case (:vt that-vt)
      :vt-positive-infinity true
      :vt-negative-infinity false
      #_otherwise (< (:vt this-vt) (:vt that-vt)))))

(defrecord virtual-time [vt]
  VirtualTimeT
  (vt-lt [this that] (-vt-compare-lt this that))
  (vt-eq [this that] (= this that))
  (vt-le [this that] (or (vt-eq this that) (vt-lt this that))))
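
For a sense of the intended semantics, a few hedged REPL checks (assuming the 
definitions above are loaded):

(vt-lt vt-negative-infinity (virtual-time. 42))    ;=> true
(vt-lt (virtual-time. 42) vt-positive-infinity)    ;=> true
(vt-lt vt-positive-infinity vt-positive-infinity)  ;=> false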

Please see a runnable project here 
https://github.com/rebcabin/ClojureProjects/tree/working/pattern-mve 


Re: Using transducers in a new transducing context

2017-04-10 Thread Alex Miller
I don't agree with your conclusions. :) 

A transducing process could apply each step of the transduce using a thread 
from a pool and also not use a memory barrier - in that scenario visibility 
across threads would not be ensured. These kinds of failures are inherently 
difficult to reproduce unless the code is in production and you're on 
vacation. ;)
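
To illustrate the kind of construct being recommended, here is a hedged sketch 
of a map-indexed-style transducer that keeps its local state in a `volatile!`, 
so that even if each step runs on a different pooled thread (with the 
transducing process serializing the calls), the write from step n is visible 
at step n+1 (`indexed` is an illustrative name, not core's `map-indexed`):

(defn indexed []
  (fn [rf]
    (let [i (volatile! -1)]             ; visibility-safe local state
      (fn
        ([] (rf))
        ([result] (rf result))
        ([result input]
         (rf result [(vswap! i inc) input]))))))

;; e.g. (into [] (indexed) [:a :b :c]) ;=> [[0 :a] [1 :b] [2 :c]]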

On Monday, April 10, 2017 at 8:37:29 AM UTC-5, Léo Noel wrote:
>
> This topic is of high interest to me as it is at the core of my current 
> works. I had a similar questioning a while ago 
>  and I have 
> to say I'm even more confused with this :
>
> While transducing processes may provide locking to cover the visibility of 
>> state updates in a stateful transducer, transducers should still use 
>> stateful constructs that ensure visibility (by using volatile, atoms, etc).
>>
>
> I actually tried pretty hard to find a use case that would make 
> partition-all fail because of its unsynchronized local state, and did not 
> manage to find one that did not break any contract. I arrived at the 
> conclusion that it is always safe to use unsynchronized constructs in 
> stateful transducers. The reason is that you need to ensure that the result 
> of each step is given to the next, and doing so you will necessarily set a 
> memory barrier of some sort between each step. Each step happens-before the 
> next, and therefore mutations performed by the thread at step n are always 
> visible by the thread performing the step n+1. This is really brilliant : 
> when designing a transducer, you can be confident that calls to your 
> reducing function will be sequential and stop worrying about concurrency. 
> You just have to ensure that mutable state stays local. True encapsulation, 
> the broken promise of object-oriented programming.
>
> My point is that the transducer contract "always feed the result of step n 
> as the first argument of step n+1" is strong enough to safely use local 
> unsynchronized state. For this reason, switching partition-* transducers to 
> volatile constructs really sounds like a step backwards to me. However, 
> after re-reading the documentation on transducers, I found that this 
> contract is not explicitly stated. It is just *natural* to think this way, 
> because transducers are all about reducing processes. Is there a plan to 
> reconsider this principle ? I would be very interested to know what Rich 
> has in mind that could lead him to advise to overprotect local state of 
> transducers.
>



Derefs broken after clojure.tools.namespace.repl/refresh

2017-04-10 Thread Petr
What happens here?

> (send-off (agent {}) 
  (fn [_] 
  (clojure.tools.namespace.repl/refresh)))
> @(atom {})
=> {}
> (def s (atom {}))
=> #'user/s
> s
=> #object[clojure.lang.Atom 0x5f6a0fab {:status :ready, :val {}}]
> @s
=> ClassCastException clojure.lang.Var$Unbound cannot be cast to 
java.util.concurrent.Future  clojure.core/deref-future (core.clj:2206)

Same with agent 
> (def a (agent {}))
> @a
=> ClassCastException clojure.lang.Var$Unbound cannot be cast to 
java.util.concurrent.Future  clojure.core/deref-future (core.clj:2206)


PS: BTW, atomic code reload is the biggest missing feature in Clojure.



Re: Using transducers in a new transducing context

2017-04-10 Thread adrian . medina
What you said holds for reduction, but not necessarily for a parallel fold 
(see clojure.core.reducers/fold).
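
A sketch of why fold is different: it reduces partitions of the input on 
separate fork/join threads and then combines the partial results, so the 
"step n feeds step n+1" chain only holds within a partition, not across the 
whole collection (values below are illustrative):

(require '[clojure.core.reducers :as r])
(r/fold 512                    ; partition size
        +                      ; combinef; (+) also supplies each partition's init
        +                      ; reducef, applied sequentially within one partition
        (vec (range 100000)))  ;=> 4999950000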

On Monday, April 10, 2017 at 9:37:29 AM UTC-4, Léo Noel wrote:
>
> This topic is of high interest to me as it is at the core of my current 
> works. I had a similar questioning a while ago 
>  and I have 
> to say I'm even more confused with this :
>
> While transducing processes may provide locking to cover the visibility of 
>> state updates in a stateful transducer, transducers should still use 
>> stateful constructs that ensure visibility (by using volatile, atoms, etc).
>>
>
> I actually tried pretty hard to find a use case that would make 
> partition-all fail because of its unsynchronized local state, and did not 
> manage to find one that did not break any contract. I arrived at the 
> conclusion that it is always safe to use unsynchronized constructs in 
> stateful transducers. The reason is that you need to ensure that the result 
> of each step is given to the next, and doing so you will necessarily set a 
> memory barrier of some sort between each step. Each step happens-before the 
> next, and therefore mutations performed by the thread at step n are always 
> visible by the thread performing the step n+1. This is really brilliant : 
> when designing a transducer, you can be confident that calls to your 
> reducing function will be sequential and stop worrying about concurrency. 
> You just have to ensure that mutable state stays local. True encapsulation, 
> the broken promise of object-oriented programming.
>
> My point is that the transducer contract "always feed the result of step n 
> as the first argument of step n+1" is strong enough to safely use local 
> unsynchronized state. For this reason, switching partition-* transducers to 
> volatile constructs really sounds like a step backwards to me. However, 
> after re-reading the documentation on transducers, I found that this 
> contract is not explicitly stated. It is just *natural* to think this way, 
> because transducers are all about reducing processes. Is there a plan to 
> reconsider this principle ? I would be very interested to know what Rich 
> has in mind that could lead him to advise to overprotect local state of 
> transducers.
>
>
>
> On Monday, April 10, 2017 at 4:44:00 AM UTC+2, Alexander Gunnarson wrote:
>>
>> Thanks so much for your input Alex! It was a very helpful confirmation of 
>> the key conclusions arrived at in this thread, and I appreciate the 
>> additional elaborations you gave, especially the insight you passed on 
>> about the stateful transducers using `ArrayList`. I'm glad that I wasn't 
>> the only one wondering about the apparent lack of parity between its 
>> unsynchronized mutability and the volatile boxes used for e.g. 
>> `map-indexed` and others.
>>
>> As an aside about the stateful `take` transducer, Tesser uses the 
>> equivalent of one but skirts the issue by not guaranteeing that the first n 
>> items of the collection will be returned, but rather, n items of the 
>> collection in no particular order and starting at no particular index. This 
>> is achievable without Tesser by simply replacing the `volatile` in the 
>> `core/take` transducer with an `atom` and using it with `fold`. But yes, 
>> `take`'s contract is broken with this and so still follows the rule of 
>> thumb you established that `fold` can't use stateful transducers (at least, 
>> not without weird things like reordering of the indices in `map-indexed` 
>> and so on).
>>
>> That's interesting that `fold` can use transducers directly! I haven't 
>> tried that yet — I've just been wrapping them in an `r/folder`.
>>
>> On Sunday, April 9, 2017 at 10:22:13 PM UTC-4, Alex Miller wrote:
>>>
>>> Hey all, just catching up on this thread after the weekend. Rich and I 
>>> discussed the thread safety aspects of transducers last fall and the 
>>> intention is that transducers are expected to only be used in a single 
>>> thread at a time, but that thread can change throughout the life of the 
>>> transducing process (for example when a go block is passed over threads in 
>>> a pool in core.async). While transducing processes may provide locking to 
>>> cover the visibility of state updates in a stateful transducer, transducers 
>>> should still use stateful constructs that ensure visibility (by using 
>>> volatile, atoms, etc).
>>>
>>> The major transducing processes provided in core are transduce, into, 
>>> sequence, eduction, and core.async. All but core.async are single-threaded. 
>>> core.async channel transducers may occur on many threads due to interaction 
>>> with the go processing threads, but never happen on more than one thread at 
>>> a time. These operations are covered by the channel lock which should 
>>> guarantee visibility. Transducers used within a go block (via something 
>>> like transduce or into) occur eagerly and don't incur any switch in threads 

Re: Using transducers in a new transducing context

2017-04-10 Thread Léo Noel
This topic is of high interest to me, as it is at the core of my current work. 
I had a similar question a while ago and I have to say I'm even more confused 
by this:

While transducing processes may provide locking to cover the visibility of 
> state updates in a stateful transducer, transducers should still use 
> stateful constructs that ensure visibility (by using volatile, atoms, etc).
>

I actually tried pretty hard to find a use case that would make 
partition-all fail because of its unsynchronized local state, and did not 
manage to find one that did not break any contract. I arrived at the 
conclusion that it is always safe to use unsynchronized constructs in 
stateful transducers. The reason is that you need to ensure that the result 
of each step is given to the next, and doing so you will necessarily set a 
memory barrier of some sort between each step. Each step happens-before the 
next, and therefore mutations performed by the thread at step n are always 
visible to the thread performing step n+1. This is really brilliant: when 
designing a transducer, you can be confident that calls to your reducing 
function will be sequential, and stop worrying about concurrency. You just 
have to ensure that mutable state stays local. True encapsulation: the broken 
promise of object-oriented programming.

My point is that the transducer contract "always feed the result of step n 
as the first argument of step n+1" is strong enough to safely use local 
unsynchronized state. For this reason, switching partition-* transducers to 
volatile constructs really sounds like a step backwards to me. However, 
after re-reading the documentation on transducers, I found that this 
contract is not explicitly stated. It is just *natural* to think this way, 
because transducers are all about reducing processes. Is there a plan to 
reconsider this principle ? I would be very interested to know what Rich 
has in mind that could lead him to advise to overprotect local state of 
transducers.
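
To make the contract explicit, a minimal sketch of a transduce-like driver 
(`naive-transduce` is an illustrative name, not core's `transduce`, and it 
ignores `reduced?` handling): the accumulator returned by step n is passed as 
the first argument of step n+1, so calls to the reducing function form a 
strictly sequential chain.

(defn naive-transduce [xform f init coll]
  (let [rf (xform f)]
    (rf (reduce rf init coll))))   ; completion arity after the sequential steps

;; e.g. (naive-transduce (map inc) + 0 (range 5)) ;=> 15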



On Monday, April 10, 2017 at 4:44:00 AM UTC+2, Alexander Gunnarson wrote:
>
> Thanks so much for your input Alex! It was a very helpful confirmation of 
> the key conclusions arrived at in this thread, and I appreciate the 
> additional elaborations you gave, especially the insight you passed on 
> about the stateful transducers using `ArrayList`. I'm glad that I wasn't 
> the only one wondering about the apparent lack of parity between its 
> unsynchronized mutability and the volatile boxes used for e.g. 
> `map-indexed` and others.
>
> As an aside about the stateful `take` transducer, Tesser uses the 
> equivalent of one but skirts the issue by not guaranteeing that the first n 
> items of the collection will be returned, but rather, n items of the 
> collection in no particular order and starting at no particular index. This 
> is achievable without Tesser by simply replacing the `volatile` in the 
> `core/take` transducer with an `atom` and using it with `fold`. But yes, 
> `take`'s contract is broken with this and so still follows the rule of 
> thumb you established that `fold` can't use stateful transducers (at least, 
> not without weird things like reordering of the indices in `map-indexed` 
> and so on).
>
> That's interesting that `fold` can use transducers directly! I haven't 
> tried that yet — I've just been wrapping them in an `r/folder`.
>
> On Sunday, April 9, 2017 at 10:22:13 PM UTC-4, Alex Miller wrote:
>>
>> Hey all, just catching up on this thread after the weekend. Rich and I 
>> discussed the thread safety aspects of transducers last fall and the 
>> intention is that transducers are expected to only be used in a single 
>> thread at a time, but that thread can change throughout the life of the 
>> transducing process (for example when a go block is passed over threads in 
>> a pool in core.async). While transducing processes may provide locking to 
>> cover the visibility of state updates in a stateful transducer, transducers 
>> should still use stateful constructs that ensure visibility (by using 
>> volatile, atoms, etc).
>>
>> The major transducing processes provided in core are transduce, into, 
>> sequence, eduction, and core.async. All but core.async are single-threaded. 
>> core.async channel transducers may occur on many threads due to interaction 
>> with the go processing threads, but never happen on more than one thread at 
>> a time. These operations are covered by the channel lock which should 
>> guarantee visibility. Transducers used within a go block (via something 
>> like transduce or into) occur eagerly and don't incur any switch in threads 
>> so just fall back to the same old expectations of single-threaded use and 
>> visibility.
>>
>> Note that there are a couple of stateful transducers that use ArrayList 
>> (partition-by and partition-all). From my last conversation with Rich, he 
>> said those should really be