Re: core.async buffers alternative backend

2020-09-10 Thread Alex Miller
I don't actually remember now, but it's possible that when core.async was created we were still trying to accommodate an older version of Java before some of those existed. That's not an issue now as we only need to support Java 1.8+. So, I don't know of any reason these wouldn't be an option.

Re: core.async: Unbound channels

2019-07-11 Thread Ernesto Garcia
Thanks Alex! Correct, the channel implementation takes care that "transduced" channels always pass elements through the transducer and the buffer. Also, a FixedBuffer can exceed its limit in those cases; see this example of a FixedBuffer of size 1 making space for 4 elements: (def c
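To make the overflow concrete, here is a minimal sketch along the lines of Ernesto's truncated example (channel name illustrative, behavior as in current core.async): a size-1 fixed buffer accepts all four outputs of the expanding transducer.

```clojure
(require '[clojure.core.async :as a :refer [chan offer! poll!]])

;; size-1 buffer, but an expanding transducer
(def c (chan 1 (mapcat identity)))

;; the put is accepted because the buffer has room *before* the
;; transducer runs; all four expanded values land in the buffer
(offer! c [1 2 3 4])   ;=> true

(poll! c)              ;=> 1
(offer! c [5 6])       ;=> nil, the buffer is still over its limit
```

Note that `offer!` returns `nil` (rather than blocking) once the buffer is over capacity, which is what makes the overfill visible at the REPL.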

Re: core.async: Unbound channels

2019-07-08 Thread Alex Miller
Expanding transducers (like mapcat) can produce multiple output values per input value, and those have to have someplace to go. -- You received this message because you are subscribed to the Google Groups "Clojure" group. To post to this group, send email to clojure@googlegroups.com Note that po

Re: core.async: Unbound channels

2019-07-08 Thread Ernesto Garcia
I see. Bufferless channels are meant to be used within the core.async threading architecture, where there will be a limited number of blocked puts and takes. At the boundaries, channels with dropping or sliding windows can be used for limiting work. So, my original question actually turns into:

Re: core.async: Unbound channels

2019-07-06 Thread Matching Socks
"Effective" is in the eye of the beholder. The 1024 limit helps surface bugs wherein more than a thousand threads are blocked for lack of a certain channel's buffer space. But the 1024 limit does not pertain if 1 thread would like to do thousands of puts for which there is no buffer space. In

Re: core.async: Unbound channels

2019-07-05 Thread Ernesto Garcia
On Thursday, July 4, 2019 at 4:24:33 PM UTC+2, Matching Socks wrote: > > Ernesto, you may be interested in the informative response to this > enhancement request, https://clojure.atlassian.net/browse/ASYNC-23 >

Re: core.async: Unbound channels

2019-07-04 Thread Matching Socks
Ernesto, you may be interested in the informative response to this enhancement request, https://clojure.atlassian.net/browse/ASYNC-23, "Support channel buffers of unlimited size". Anyway, if you do not want to think very hard about buffer size, you can specify a size of 1. It does not limit t

Re: core.async: Unbound channels

2019-07-04 Thread Ernesto Garcia
Thanks for your response, it is important to know. (Sorry for my lexical typo: *unbounded*. I didn't realize it derives from the verb *bound*, not *bind*!) My question on channel boundaries still holds though. Why the enforcement of boundaries *always*? On Wednesday, July 3, 2019 at 5:16:31

Re: core.async: Unbound channels

2019-07-03 Thread Ghadi Shayban
(chan) is not a channel with an unbounded buffer. It is a channel with *no* buffer and needs to rendezvous putters and takers 1-to-1. (Additionally it will throw an exception if more than 1024 takers or putters are enqueued waiting) On Wednesday, July 3, 2019 at 7:14:46 AM UTC-4, Ernesto Garci
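Both of Ghadi's points can be sketched in a few lines (a non-authoritative illustration of current core.async behavior):

```clojure
(require '[clojure.core.async :as a :refer [chan put! <!!]])

(def c (chan))        ; unbuffered: a put must rendezvous with a take
(put! c :x)           ; asynchronous put, queued until a taker arrives
(<!! c)               ;=> :x, the pending put completes here

;; the 1024-pending-put limit
(def c2 (chan))
(dotimes [_ 1024] (put! c2 :ignored))
(try (put! c2 :one-too-many)
     (catch AssertionError e :rejected))   ;=> :rejected
```

The same `AssertionError` fires for more than 1024 pending takes.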

Re: core.async pipeline bug?

2018-09-04 Thread Paul Rutledge
Sure is. Thanks Alex! On Tue, Sep 4, 2018, 6:21 AM Alex Miller wrote: > Is the scenario this one? > > https://dev.clojure.org/jira/browse/ASYNC-217

Re: core.async buffered channel behavior

2018-06-27 Thread Timothy Baldridge
The best way to understand how/why this happens is to look at the source of >!!. In short, the thread making the call doesn't block on the channel. It starts an async put, then waits on a promise that is delivered by the async put. So it works something like this: 1) Calling thread creates a promi
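Timothy's steps can be sketched as follows; this is a simplified illustration, not the actual `>!!` source (which uses similar machinery internally), and `blocking-put` is a name invented here:

```clojure
(require '[clojure.core.async :as a :refer [chan put!]])

(defn blocking-put
  "A sketch of >!!: start an async put, then wait on a promise
  delivered by the put's completion callback."
  [ch v]
  (let [p (promise)]                            ; 1) create a promise
    ;; 2) register the value and callback with the channel
    (put! ch v (fn [accepted?] (deliver p accepted?)))
    @p))                                        ; 3) park on the promise

(def c (chan 1))
(blocking-put c :x)   ;=> true
```

The calling thread never blocks on the channel itself, only on the promise, which is why interrupting it can leave the underlying async put still pending.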

Re: core.async buffered channel behavior

2018-06-27 Thread Justin Smith
I should be more precise there, by "consumed" I meant buffered or consumed. On Wed, Jun 27, 2018 at 10:17 AM Justin Smith wrote: > I doubt core.async would ever make promises about the behavior of a > blocking put that gets forcibly cancelled. It promises that the blocking > put doesn't return u

Re: core.async buffered channel behavior

2018-06-27 Thread Justin Smith
I doubt core.async would ever make promises about the behavior of a blocking put that gets forcibly cancelled. It promises that the blocking put doesn't return until the message is consumed, but that's not the same as promising that the message isn't consumed if the blocking put is forcibly cancell

Re: core.async buffered channel behavior

2018-06-26 Thread craig worrall
I guess the interrupt doesn't really obliterate the fourth put attempt, and that put proceeds in background when you first take. On Wednesday, June 27, 2018 at 5:12:45 AM UTC+10, jonah wrote: > > Hi folks, > > It's been a while since I've used core.async. Documentation suggests that > > (chan n)

Re: core.async consumer + producer working by chunk?

2018-01-06 Thread Brian J. Rubinton
https://dev.clojure.org/jira/browse/ASYNC-210 > On Jan 6, 2018, at 12:11 PM, Brian J. Rubinton > wrote: > > Thanks! I will. Just signed the CA. > > > On Sat, Jan 6, 2018, 12:10 PM Alex Miller > wrote: > > > On Sat

Re: core.async consumer + producer working by chunk?

2018-01-06 Thread Brian J. Rubinton
Thanks! I will. Just signed the CA. On Sat, Jan 6, 2018, 12:10 PM Alex Miller wrote: > > > On Saturday, January 6, 2018 at 10:56:06 AM UTC-6, Brian J. Rubinton wrote: >> >> Alex - it makes sense to me that the buffer temporarily expands beyond >> its normal size with the content of the expanding

Re: core.async consumer + producer working by chunk?

2018-01-06 Thread Alex Miller
On Saturday, January 6, 2018 at 10:56:06 AM UTC-6, Brian J. Rubinton wrote: > > Alex - it makes sense to me that the buffer temporarily expands beyond its > normal size with the content of the expanding transducer. What does not > make sense to me is the buffer also accepts puts even though its

Re: core.async consumer + producer working by chunk?

2018-01-06 Thread Brian J. Rubinton
Typo — I meant to say the channel executes puts during a take! even though the buffer is full before executing the puts. This is clearer in code (please see the gist). > On Jan 6, 2018, at 11:55 AM, Brian J. Rubinton > wrote: > > Alex - it makes sense to me that the buffer temporarily expands

Re: core.async consumer + producer working by chunk?

2018-01-06 Thread Brian J. Rubinton
Alex - it makes sense to me that the buffer temporarily expands beyond its normal size with the content of the expanding transducer. What does not make sense to me is the buffer also accepts puts even though its buffer is full. Why would the take! process puts when the channel's buffer is full?

Re: core.async consumer + producer working by chunk?

2018-01-06 Thread Alex Miller
On Saturday, January 6, 2018 at 10:27:20 AM UTC-6, Rob Nikander wrote: > > > On Jan 5, 2018, at 8:01 PM, Gary Verhaegen wrote: > >> What about simply having the producer put items one by one on the channel? > > > I will do that. My current producer is doing too many other things, but if > I bre

Re: core.async consumer + producer working by chunk?

2018-01-06 Thread Rob Nikander
On Jan 5, 2018, at 8:01 PM, Gary Verhaegen wrote: > What about simply having the producer put items one by one on the channel? I will do that. My current producer is doing too many other things, but if I break it up into separate threads or go blocks for each work queue, then that should wor

Re: core.async consumer + producer working by chunk?

2018-01-06 Thread Brian J. Rubinton
Rob - I’d go with Gary's approach, which essentially moves the splitting up of the chunk of results from the core.async channel’s transducer to the producing function. You can do that using a channel with a fixed buffer of 50 and >!!. As long as the next db query is blocked until each of the res
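A sketch of that shape, with a hypothetical `fetch-chunk` standing in for the database query:

```clojure
(require '[clojure.core.async :as a :refer [chan thread >!! <!! close!]])

(defn fetch-chunk []           ; hypothetical stand-in for the db query
  (range 100))

(def work-queue (chan 50))     ; fixed buffer of 50

;; the producer blocks on >!! once 50 items are waiting, so the next
;; chunk isn't fetched until consumers have caught up
(thread
  (doseq [row (fetch-chunk)]
    (>!! work-queue row))
  (close! work-queue))

(<!! work-queue)   ;=> 0
```

Consumers simply take from `work-queue`; backpressure falls out of the fixed buffer plus the blocking put.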

Re: core.async consumer + producer working by chunk?

2018-01-05 Thread Gary Verhaegen
On 5 January 2018 at 19:44, Rob Nikander wrote: > Hi, > > I’m wondering if there is a core.async design idiom for this situation... > > - A buffered channel > - One producer feeding it > - A bunch of consumers pulling from it. > - Producer should wake up and fill the channel only when it’s empty.

Re: core.async consumer + producer working by chunk?

2018-01-05 Thread Brian J. Rubinton
I don’t know; I don’t fully understand the implementation differences of >!! and offer!. The behavior of offer! makes me think the buffer is not empty until all the outputs of the transducer are consumed, but the behavior of >!! makes me think otherwise. Moritz - is the buffer cleared if: - it’

Re: core.async consumer + producer working by chunk?

2018-01-05 Thread Rob Nikander
On Friday, January 5, 2018 at 4:00:25 PM UTC-5, Moritz Ulrich wrote: > > > You have a channel with a buffer-size of one. You clear the buffer by > taking one item from it, making room for another one. Therefore the put > succeeds. Try just `(async/chan nil xform)` to create a channel without >

Re: core.async consumer + producer working by chunk?

2018-01-05 Thread Moritz Ulrich
Rob Nikander writes: > Thanks for the explanation! This is very close to what I want. I see some > confusing behavior though. See below. > > On Friday, January 5, 2018 at 2:40:14 PM UTC-5, Brian J. Rubinton wrote: >> >> >> The work-queue channel has a fixed buffer size of 1. A collection (range

Re: core.async consumer + producer working by chunk?

2018-01-05 Thread Rob Nikander
Thanks for the explanation! This is very close to what I want. I see some confusing behavior though. See below. On Friday, January 5, 2018 at 2:40:14 PM UTC-5, Brian J. Rubinton wrote: > > > The work-queue channel has a fixed buffer size of 1. A collection (range > 50) is put on the channel. Whi

Re: core.async consumer + producer working by chunk?

2018-01-05 Thread Brian J. Rubinton
The `mapcat` transducer takes a collection as its input and outputs each of its items individually. This example might be helpful: user> (use '[clojure.core.async]) nil user> (def work-queue (chan 1 (mapcat identity))) #'user/work-queue user> (offer! work-queue (range 50)) true user> ( ( (offer!

Re: core.async consumer + producer working by chunk?

2018-01-05 Thread Rob Nikander
On Friday, January 5, 2018 at 2:03:00 PM UTC-5, Brian J. Rubinton wrote: > > > What is the buffered channel’s buffer used for? If that’s set to 1 and the > channel’s transducer is `(mapcat identity)` then the producer should be > able to continuously put chunks of work onto the channel with the

Re: core.async consumer + producer working by chunk?

2018-01-05 Thread Brian J. Rubinton
Hi Rob, What is the buffered channel’s buffer used for? If that’s set to 1 and the channel’s transducer is `(mapcat identity)` then the producer should be able to continuously put chunks of work onto the channel with the puts only completing when the previous chunk is completely consumed. That

Re: core.async got in a bad state?

2017-08-30 Thread Matching Socks
Special behavior built into only certain blocking operations (e.g., core.async's, but not java.io's) could instill a false sense of security. It could be complemented by a watchdog thread to poll the core.async pool threads and call a given fn if a thread was blocked when it shouldn't have be

Re: core.async got in a bad state?

2017-08-29 Thread Didier
> > No code called by a go block should ever call the blocking variants of > core.async functions (>!!, <!!, alts!!, etc.). So I'd start at the code > redacted in those lines and go from there. > Seems like a good use case for a static code analyser. Maybe a contribution to https://github.com/jonas

Re: core.async got in a bad state?

2017-08-29 Thread Alex Miller
We did actually discuss doing something like this a long time ago. The worry that comes to mind is whether it should actually be forbidden (an invariant) or merely strongly frowned upon. It is possible to arrange a situation where you know (based on other knowledge) that a put on a channel will s

Re: core.async got in a bad state?

2017-08-29 Thread Aaron Iba
Ahh that makes a lot of sense. Indeed, I'm guilty of doing a blocking >!! inside a go-block. I was so careful to avoid other kinds of blocking calls (like IO) that I forgot that blocking variants of core.async calls themselves were forbidden. Thank you for pointing this out! I will rewire th

Re: core.async got in a bad state?

2017-08-29 Thread Gary Trakhman
Hm, I came across a similar ordering invariant (No code called by a go block should ever call the blocking variants of core.async functions) while wrapping an imperative API, and I thought it might be useful to use vars/binding to enforce it. Has this or other approaches been considered in core.as

Re: core.async got in a bad state?

2017-08-29 Thread Timothy Baldridge
To add to what Alex said, look at this trace: https://gist.github.com/anonymous/65049ffdd37d43df8f23630928e8fed0#file-thread-dump-out-L1337-L1372 Here we see a go block calling mapcat, and inside the inner map something is calling >!!. As Alex mentioned this can be a source of deadlocks. No code c

Re: core.async got in a bad state?

2017-08-29 Thread Alex Miller
go blocks are multiplexed over a thread pool which has (by default) 8 threads. You should never perform any kind of blocking activity inside a go block, because if every go block in flight happens to end up blocked, you will prevent all go blocks from making any further progress. It sounds to me
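The usual remedy is to move blocking work onto `clojure.core.async/thread`, which runs on its own expandable pool and returns a channel carrying the result. A sketch (`slow-blocking-op` is illustrative):

```clojure
(require '[clojure.core.async :as a :refer [go thread <! <!!]])

(defn slow-blocking-op []      ; stands in for IO or a >!!-style call
  (Thread/sleep 50)
  :done)

;; wrong: (go (slow-blocking-op)) would tie up one of the 8 pool threads
;; right: run it on a real thread and park on the result in the go block
(def result
  (go (<! (thread (slow-blocking-op)))))

(<!! result)   ;=> :done
```

The go block parks (cheaply) on the `thread`'s result channel instead of blocking a pool thread.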

Re: core.async vs continuations

2017-07-10 Thread Timothy Baldridge
Yes, calls to >! and <! could be written with continuations, or call/cc, but they are not implemented that way. Instead the code inside the body of the `go` macro is rewritten into a statemachine. This sort of rewrite is a lot like the rewrites that C# does for yield, and Cython does much of the same sort

Re: core.async/close! locks on chans with pending transforms

2017-07-04 Thread Vitalie Spinu
>> On Mon, Jul 03 2017 21:18, Timothy Baldridge wrote: > This means, if you want to execute take correctly you must ensure that only > one thread executes the take instance at one time, since channels already > operate via a channel-level lock, it makes sense to run the transducer > inside the ch

Re: core.async/close! locks on chans with pending transforms

2017-07-03 Thread Timothy Baldridge
A big reason they have to be run inside the lock is that they have to operate in the context of the channel. For example: (chan (take 10)) Firstly we must recognize that transducer instances are *not* thread safe. They should only ever be executed by one thread at a time. But channels allow mult
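A sketch of why the lock matters for Timothy's `(chan (take 10))` example: `take` keeps a mutable counter, and when it is exhausted (returns a reduced value) the channel closes; the channel-level lock guarantees only one thread ever touches that counter (assuming current core.async semantics; buffer size and values are illustrative):

```clojure
(require '[clojure.core.async :as a :refer [chan offer! poll!]])

(def c (chan 5 (take 2)))   ; stateful transducer, guarded by the channel's lock
(offer! c 1)                ;=> true
(offer! c 2)                ;=> true, the transducer is now exhausted and the channel closes
(offer! c 3)                ;=> false, puts to a closed channel fail
(poll! c)                   ;=> 1, buffered values remain takeable after close
```

Running the transducer outside the lock would let two concurrent puts race on the counter and, e.g., let three values through a `(take 2)`.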

Re: core.async/close! locks on chans with pending transforms

2017-07-03 Thread Kevin Downey
On 07/03/2017 03:12 PM, Vitalie Spinu wrote: On Monday, 3 July 2017 22:48:40 UTC+2, red...@gmail.com wrote: Discussion of locks aside, doing blocking operations like io or >!! or basically anything that looks like it blocks and isn't >! or <! Is this the limitation in general or only whe

Re: core.async/close! locks on chans with pending transforms

2017-07-03 Thread Vitalie Spinu
On Monday, 3 July 2017 22:48:40 UTC+2, red...@gmail.com wrote: > > > Discussion of locks aside, doing blocking operations like io or >!! or basically anything that looks like it blocks and isn't >! or <! is a very bad idea in a transducer on a channel. You will (eventually) > block the threadpo

Re: core.async/close! locks on chans with pending transforms

2017-07-03 Thread Vitalie Spinu
> the side-effect of this means that no other operation (puts, takes or closes) Is there a deeper reason for this besides ease of implementation? If the chan is buffered, I still fail to see why close and take should block.

Re: core.async/close! locks on chans with pending transforms

2017-07-03 Thread Kevin Downey
On 07/03/2017 11:03 AM, Vitalie Spinu wrote: Hi, Async/close! causes deadlocks if its reducer is stalled (e.g. waits for an event from another chan). Consider: (let [d (chan) s (chan 1 (map (fn [v] (println "this:" v) (printl

Re: core.async/close! locks on chans with pending transforms

2017-07-03 Thread Timothy Baldridge
Transducers on channels lock the channel while they are running. This is by design. So yes, the side-effect of this means that no other operation (puts, takes or closes) can succeed while the transducer is running. So the short answer is: If you have code that can take awhile to run, don't put it

Re: core.async top use cases

2016-10-14 Thread Gordon Syme
I've used agents to wrap thread-unsafe mutable Java objects with a defined life cycle, so that they could be used from multiple threads whilst respecting the life cycle. My particular case was server-side gRPC StreamObservers for long lived client connections. These are either usable, closed, o

Re: core.async top use cases

2016-10-13 Thread Timothy Baldridge
Yeah, I used to do that, but once core.async came out I started to desire the back pressure aspects of channels. I don't think I've used agents for logging since. You always run the risk of something backing up the queue of the agent and causing your thread to crash when it runs out of memory. On

Re: core.async top use cases

2016-10-13 Thread Mark Engelberg
I always found it a bit ironic that my main use case for agents doesn't really at all make use of the "mutable ref" aspect of the agent, only the queue piece. I usually hold the name of the log file in the mutable ref to emphasize that the agent is "guarding" this particular log file, but I don't

Re: core.async top use cases

2016-10-13 Thread Mark Engelberg
My primary use case for agents has always been when I want to coordinate multiple threads writing to a log file. The agent effectively serializes all the write requests with a minimum of fuss.
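A minimal sketch of that pattern (the file name and `log!` helper are illustrative, not from the original post):

```clojure
;; The agent's state is the log file's path; send-off serializes every
;; write through the agent's queue, so concurrent callers never
;; interleave their lines.
(def log-file
  (.getAbsolutePath (java.io.File/createTempFile "app" ".log")))

(def log-agent (agent log-file))

(defn log! [msg]
  (send-off log-agent
            (fn [path]
              (spit path (str msg \newline) :append true)
              path)))

(log! "starting up")
(await log-agent)     ; block until queued writes have run
(slurp log-file)      ;=> "starting up\n"
```

`send-off` (rather than `send`) is used because file IO can block; the agent still processes actions one at a time.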

Re: core.async top use cases

2016-10-13 Thread Alex Miller
The other special feature of agents is that the STM knows about them, so it's a safe way to have a side effect occur in an STM transaction (all agent sends are delayed till the txn succeeds). I've found that to be pretty handy in advanced usage.

Re: core.async top use cases

2016-10-13 Thread Timothy Baldridge
Agents combine two things 1) a queue of functions, 2) mutable state. The key thing about agents is that they still respect Clojure's concept of "instant deref". That is to say, you can always deref an agent even if the queue is backlogged. This is one of the key differences between agents and actor

Re: core.async top use cases

2016-10-13 Thread Timothy Baldridge
>> When using clojurescript, adding async really increases the load time. That's one place where you might want to use agents when you can. But Clojurescript doesn't support agents. On Thu, Oct 13, 2016 at 7:16 PM, William la Forge wrote: > On Thursday, October 13, 2016 at 3:38:16 PM UTC-4, lar

Re: core.async top use cases

2016-10-13 Thread William la Forge
On Thursday, October 13, 2016 at 3:38:16 PM UTC-4, larry google groups wrote: > > So when to use agents? I've looked through Clojure repos on Github, > looking for uses of agents, and I found very few. (I was writing a blog > post about concurrency in Clojure, and I found that agents are among t

Re: core.async top use cases

2016-10-13 Thread larry google groups
> It is not just convenience. For example agents don't provide functionality like buffering, back pressure > and select aka alts. If you send an action to an agent you don't get to know when it's done or > to choose what to do if it is currently busy. So when to use agents? I've looked throu

Re: core.async top use cases

2016-10-02 Thread Derek Troy-West
Fine-grained control of parallelism is a superb aspect of core.async. Cassandra is a distributed database; often a query requires you to resolve-on-read denormalised data partitioned multiple ways (semantically, by time, etc). You can think of it like a grid, I guess. Let's say I have a query that I

Re: core.async top use cases

2016-09-21 Thread Beau Fabry
You're probably right, I was confusing actors with agents. On Tuesday, September 20, 2016 at 7:05:19 PM UTC-7, Matan Safriel wrote: > > Actually, I am not sure clojure implements the actor model, which I can > only construe as the Erlang actor model here. I am almost certain the core > language

Re: core.async top use cases

2016-09-20 Thread Matan Safriel
Actually, I am not sure clojure implements the actor model, which I can only construe as the Erlang actor model here. I am almost certain the core language explicitly does not: http://clojure.org/about/state It can be shoehorned somehow (see okku) but I would probably not venture saying clojure

Re: core.async top use cases

2016-09-20 Thread Beau Fabry
I'm no expert on this, but the Actor model and the CSP model seem to be two different ways to model a concurrent system. Clojure supports them both. Personally I find the CSP model a simpler and easier to understand one than Actors, and so pretty much default to it. You might find non-clojure r

Re: core.async top use cases

2016-09-20 Thread William la Forge
My bad. I was thinking of atomic. Swap! doesn't work with side effects, but send does. On Tuesday, September 20, 2016 at 2:50:53 AM UTC-4, Matan Safriel wrote: > > Thanks but I'm not entirely sure about this. I could use agents for side > effects too, or at least I thought so. Care to further cl

Re: core.async top use cases

2016-09-19 Thread Matan Safriel
Thanks but I'm not entirely sure about this. I could use agents for side effects too, or at least I thought so. Care to further clarify? Original Message From:William la Forge Sent:Tue, 20 Sep 2016 02:37:20 +0300 To:Clojure Subject:Re: core.async top use cases >The really ni

Re: core.async top use cases

2016-09-19 Thread Ken Restivo
On Sat, Sep 17, 2016 at 11:37:38PM -0700, Matan Safriel wrote: > Hi, > > It's very easy to see how core.async solves callback hell for front-end > development with clojurescript. > In what use cases would you use it for server-side? we already have > non-blocking IO from Java, and we have clojur

Re: core.async top use cases

2016-09-19 Thread William la Forge
The really nice thing to me is that async handles side-effects while agents do not.

Re: core.async top use cases

2016-09-19 Thread Matan Safriel
Right! Original Message From:Leon Grapenthin Sent:Mon, 19 Sep 2016 20:19:55 +0300 To:Clojure Subject:Re: core.async top use cases >It is not just convenience. For example agents don't provide functionality >like buffering, backpressure and select aka alts. If you send an acti

Re: core.async top use cases

2016-09-19 Thread Leon Grapenthin
It is not just convenience. For example agents don't provide functionality like buffering, backpressure and select aka alts. If you send an action to an agent you don't get to know when it's done or to choose what to do if it is currently busy. On Monday, September 19, 2016 at 11:49:13 AM UTC+

Re: core.async top use cases

2016-09-19 Thread Matan Safriel
Thanks, and I put the blog post on my reading list. Although I can't avoid thinking that we already have asynchronous idioms in the core language itself, like agents. I think the crux for server-side is more about the convenient piping, rather than the mere asynchronism itself, but I might be wrong

Re: core.async top use cases

2016-09-18 Thread Mond Ray
Pushing asynchrony further into the stack is useful for reliability and fault tolerance. We can also use it as a basis for Complex Event Processing using time series windows. I wrote up a few examples in my blog

Re: core.async top use cases

2016-09-18 Thread Rangel Spasov
http://aleph.io/aleph/literate.html "Alternately, we can use a core.async goroutine to create our response, and convert the channel it returns using manifold.deferred/->source, and then take the first message from it. This is entirely equivalent to the p

Re: core.async and channel flushing

2016-08-29 Thread Jeremy Vuillermet
I'm also wondering how this could be done. I have a similar use case On Tuesday, October 22, 2013 at 9:30:08 AM UTC+2, Alexander L. wrote: > > Sorry for bringing this back up, but I was wondering if anyone figured out > something better... > > On Saturday, September 14, 2013 10:49:08 PM UTC+3, Al

Re: Core.async performance with many channels

2016-04-06 Thread JvJ
It seems to do rather well. On Wednesday, 6 April 2016 08:29:07 UTC-7, Francis Avila wrote: > > On Monday, April 4, 2016 at 6:30:07 PM UTC-5, Howard M. Lewis Ship wrote: >> >> David Nolen had an early ClojureScript core.async demo with thousands of >> channels, controlling individual pixels. >> >

Re: Core.async performance with many channels

2016-04-06 Thread Francis Avila
On Monday, April 4, 2016 at 6:30:07 PM UTC-5, Howard M. Lewis Ship wrote: > > David Nolen had an early ClojureScript core.async demo with thousands of > channels, controlling individual pixels. > This is the demo you are referring to: http://swannodette.github.io/2013/08/02/10-processes/

Re: Core.async performance with many channels

2016-04-04 Thread Howard Lewis Ship
You should perhaps look at github.com/walmartlabs/active-status if you want a way to figure out what all those processes are doing. I'd say the things we've worked on have had dozens of channels, and often common core.async primitives (such as into, pipe, etc.) create additional channels and CSPs

Re: Core.async performance with many channels

2016-04-03 Thread JvJ
This thing is just an idea at this point. Basically, your typical game loop will consist of iterating over a collection of objects and calling some kind of update operation on each. I would like to replace this with asynchronous signaling. Signaling could include messages like "update this obje

Re: Core.async performance with many channels

2016-04-03 Thread Rangel Spasov
Without knowing too much about the internals, but having used core.async/channels a lot, I don't think "hundreds" of channels will be a problem ever. However, as always the devil is in the details. People might be able to give better feedback if you give more details about your use case. On S

Re: core.async mult "damaged" on reload

2016-01-30 Thread Terje Dahl
Forgive me for taking so long to reply to your very informative feedback. Only this evening have I had time to experiment and study. These are my findings so far: 'defprotocol' is a macro with side-effects, so any attempt at holding on to a copy and then re-inserting it into the var in the other

Re: core.async channel /w transducer which thread?

2015-12-17 Thread Leon Grapenthin
Pipeline must have a way of controlling on which thread the execution happens, otherwise it would parallelize very little. It does so by creating a new channel for each put with a buffer of 1 and an xf. It makes a blocking >!! and only after it unblocks it puts that new channel to the consumer th

Re: core.async channel /w transducer which thread?

2015-12-17 Thread Herwig Hochleitner
2015-12-16 16:22 GMT+01:00 Leon Grapenthin : > The blocking put is made on a separate thread (channel named res), then > later a blocking take from that same channel is made in the second go-loop. > > Or are you saying "if it takes too long, parallelize via pipeline"? In my > case I can't because

Re: core.async channel /w transducer which thread?

2015-12-16 Thread Leon Grapenthin
Thanks for your reply. I just studied the pipeline code and am now wondering how it can affect on which thread the transducer runs if the answer to the question is unspecified. The blocking put is made on a separate thread (channel named res), then later a blocking take from that same channel i

Re: core.async channel /w transducer which thread?

2015-12-16 Thread Timothy Baldridge
When the transducer code was originally added to core.async I asked Rich this very question. He declined to specify where the transducer is run. The reasoning is simple: since the transducer is executed inside the channel lock, your transducers should always be fast enough that you don't care where

Re: core.async mult "damaged" on reload

2015-11-14 Thread James Elliott
If I understand what you’re saying, then it sounds like you are reloading a protocol definition, which creates a new protocol with the same name as the old protocol, but they are different things. Old objects which implement the old protocol do not implement the new protocol, even though it look

Re: core.async multi-channel?

2015-10-27 Thread Alex Miller
You could use mult or pub (slightly different capabilities depending on your needs). http://clojure.github.io/core.async/#clojure.core.async/mult http://clojure.github.io/core.async/#clojure.core.async/pub On Tuesday, October 27, 2015 at 3:56:32 PM UTC-5, JvJ wrote: > > Is it possible to create a co
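For reference, a minimal `mult` sketch (`pub` is similar but routes values by a topic function instead of broadcasting):

```clojure
(require '[clojure.core.async :as a :refer [chan mult tap put! <!!]])

(def src (chan))
(def m   (mult src))      ; every tapped channel receives every value

(def out1 (chan 1))
(def out2 (chan 1))
(tap m out1)
(tap m out2)

(put! src :hello)
(<!! out1)   ;=> :hello
(<!! out2)   ;=> :hello
```

Note that a mult waits for all taps to accept a value before taking the next one, so a slow tap applies backpressure to the whole group unless its channel buffers or drops.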

Re: core.async: implementing pi.go

2015-08-03 Thread Divyansh Prakash
> > does maya automatically handle operator precedence? > maya always evals from left to right. (maya 1 + 5 :as six, six * 2 :as twelve, twelve / 3 * 2) ;=> 8 > > - Divyansh

Re: core.async: implementing pi.go

2015-08-02 Thread Mark Engelberg
I agree that there's value to making math expressions look like the way they are written in actual math -- it is much easier to tell at a glance that you've entered the correct formula. I think your maya library is a nice contribution. (I think Incanter has a similar macro, but it is nice to have

Re: core.async: implementing pi.go

2015-08-02 Thread Divyansh Prakash
Makes sense. By the way - I've refactored my code to not use a go block inside 'term', and made it much more readable (IMO) than before using my maya library. I'm telling you this because I'm

Re: core.async: implementing pi.go

2015-08-02 Thread Mark Engelberg
On Sun, Aug 2, 2015 at 4:38 AM, Divyansh Prakash < divyanshprakas...@gmail.com> wrote: > I have one more question, though: how does one work in ClojureScript > without > This use case is a little weird because the

Re: core.async: implementing pi.go

2015-08-02 Thread Divyansh Prakash
Hey, puzzler! Thanks for the detailed response. Just changing (chan) to (chan n) actually worked! I get your displeasure with how 'term' is implemented. That's not how I generally code. I'm very new to core.async and was aiming for a direct translation of the Go code. I do get a little carrie

Re: core.async: implementing pi.go

2015-08-02 Thread Mark Engelberg
Clojure's async is built around the opinion that you, the programmer, should be required to think about what sort of buffer you want to have on your channel, and think about what should happen if that buffer overflows. Your code spins off 5000 little go blocks that are each trying to write to a ch
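Concretely, the difference a buffer makes in that situation (a sketch, not a recommendation for any particular size):

```clojure
(require '[clojure.core.async :as a :refer [chan go >! <!!]])

;; with (chan), 5000 parked puts would blow the 1024-pending-put limit;
;; (chan 5000) gives every put somewhere to land immediately
(def results (chan 5000))

(dotimes [i 5000]
  (go (>! results i)))

;; drain: every value arrives, though not necessarily in order
(reduce + (repeatedly 5000 #(<!! results)))   ;=> 12497500
```

Picking the buffer size (or a dropping/sliding buffer) is exactly the "think about overflow" decision the design forces on you.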

Re: core.async status?

2015-07-05 Thread Michael Blume
Looking through the tickets at http://dev.clojure.org/jira/browse/ASYNC might give you a better idea of what's planned. On Sat, Jul 4, 2015 at 8:52 PM Martin Raison wrote: > thanks! > > On Saturday, July 4, 2015 at 8:38:22 PM UTC-7, Alex Miller wrote: >> >> Oh just busy. We will get to a new releas

Re: core.async status?

2015-07-04 Thread Martin Raison
thanks! On Saturday, July 4, 2015 at 8:38:22 PM UTC-7, Alex Miller wrote: > > Oh just busy. We will get to a new release at some point.

Re: core.async pub/sub closing source channel issue

2015-03-01 Thread Leon Grapenthin
The reason for the behavior you are observing is a race condition. The effects of close! on the pub aren't synchronous. Even though the channel is immediately closed before return, consumers need time to determine that it has been closed. At the point in time the pub determines that the source
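The race described above can be made concrete with a small sketch (hypothetical topic and message names, not from the thread): `close!` on the source returns immediately, while subscribers only learn of the close once the pub's internal machinery propagates it, so values already buffered are still delivered.

```clojure
(require '[clojure.core.async :as async :refer [chan pub sub close! >!! <!!]])

;; Pub dispatches messages by their :topic key.
(def source (chan 1))
(def p (pub source :topic))
(def out (chan 1))
(sub p :greeting out)

(>!! source {:topic :greeting :msg "hi"})
(close! source)        ; returns at once; subscribers see the close later
(println (<!! out))    ; the already-dispatched message is still delivered
```

Consumers should therefore treat a nil take from a subscription channel, not the timing of `close!`, as the signal that the stream has ended.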

Re: core.async: "Deprecated - this function will be removed. Use transducer instead"

2015-02-19 Thread Malcolm Sparks
You're right - thanks for that! I've updated the blog article to remove it. On 19 February 2015 at 17:37, Ben Smith-Mannschott wrote: > I'm unclear on one thing: what's the purpose of core.async/pipe? In your > blog article, you write: > > (-> source (pipe (chan)) payload-decoder payload-json-de

Re: core.async: "Deprecated - this function will be removed. Use transducer instead"

2015-02-19 Thread Ben Smith-Mannschott
I'm unclear on one thing: what's the purpose of core.async/pipe? In your blog article, you write: (-> source (pipe (chan)) payload-decoder payload-json-decoder) (pipe source destination) just copies elements from source to destination. How is that any different than just using source here directl
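The answer to the question above is that `pipe` earns its keep when the destination channel carries its own buffer or transducer. A minimal sketch (assuming a core.async recent enough to have `onto-chan!`; older versions spell it `onto-chan`):

```clojure
(require '[clojure.core.async :as async :refer [chan pipe onto-chan! <!!]])

;; pipe copies every value from source into dest and closes dest
;; when source closes; dest supplies the buffer and transducer.
(def source (chan))
(def dest   (chan 10 (map inc)))
(pipe source dest)

(onto-chan! source [1 2 3])   ; puts 1 2 3, then closes source
(println (<!! dest) (<!! dest) (<!! dest))   ; 2 3 4
```

Piping a bare source into a bare `(chan)` would indeed be a no-op, which is the point being made in the blog correction above.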

Re: core.async why 42 extra threads?

2015-02-19 Thread Robin Heggelund Hansen
Ahh ok, makes sense :)

Re: core.async why 42 extra threads?

2015-02-19 Thread Timothy Baldridge
The number of threads is meant to be semi-tolerant to usage of blocking IO in go blocks. You can't be certain that some go block won't call a function that calls a function in some library that blocks for just a few milliseconds. So we tried to make it high enough to be tolerant of misuse, but lo

Re: core.async: "Deprecated - this function will be removed. Use transducer instead"

2015-02-18 Thread Ben Smith-Mannschott
Thanks Malcolm, your blog post was a great help to me. On Thu, Feb 19, 2015 at 3:06 AM, Malcolm Sparks wrote: > I have recently written a blog article which explains how to use > transducers with core.async. > > You can find it here: http://malcolmsparks.com/posts/transducers.html > > > On Wed

Re: core.async: "Deprecated - this function will be removed. Use transducer instead"

2015-02-18 Thread Malcolm Sparks
I have recently written a blog article which explains how to use transducers with core.async. You can find it here: http://malcolmsparks.com/posts/transducers.html On Wednesday, 18 February 2015 21:48:05 UTC, bsmith.occs wrote: > > I'm probably just especially dense today, but perhaps someone c

Re: core.async: "Deprecated - this function will be removed. Use transducer instead"

2015-02-18 Thread Erik Price
(let [out (async/chan 0 (map inc))] (async/pipe in out) out) Earlier in your email you mention printing, however. If you have I/O to perform (like printing), I’m told that you don’t want to do it in a transducer. You can use pipeline-async for this instead: (defn f [v ch] (async/go (pri
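A fuller sketch of the `pipeline-async` approach mentioned above (the `f` shown here is a hypothetical stand-in): the async fn receives each input value plus a result channel, and must put its results there and close it.

```clojure
(require '[clojure.core.async
           :as async
           :refer [chan pipeline-async onto-chan! go >! close! <!!]])

;; Side effects (like printing) live in the async fn,
;; not in a transducer.
(defn f [v ch]
  (go (println "processing" v)
      (>! ch (inc v))
      (close! ch)))

(def in  (chan))
(def out (chan))
(pipeline-async 4 out f in)   ; up to 4 concurrent invocations of f

(onto-chan! in [1 2 3])
(println (<!! out) (<!! out) (<!! out))
```

`pipeline-async` preserves input order on `out` even though the invocations of `f` run concurrently, which makes it a reasonable home for I/O that a transducer should not perform.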

Re: core.async -- lazy evaluation on take

2015-02-16 Thread Fluid Dynamics
On Monday, February 16, 2015 at 9:42:31 PM UTC-5, Erik Price wrote: > > Yes, the producer’s put will block until the consumer takes, but doesn’t > this still involve an eager initial request (so that the producer will have > something to put in the first place, so that it can block)? > The produc

Re: core.async -- lazy evaluation on take

2015-02-16 Thread Erik Price
Yes, the producer’s put will block until the consumer takes, but doesn’t this still involve an eager initial request (so that the producer will have something to put in the first place, so that it can block)? e ​ On Mon, Feb 16, 2015 at 5:52 PM, wrote: > Make the channel unbuffered, that way it

Re: core.async -- lazy evaluation on take

2015-02-16 Thread janpaulbultmann
Make the channel unbuffered; that way it turns into a rendezvous à la Ada, and every producer will block until a consumer takes something from it. cheers Jan > On 16.02.2015, at 21:45, Huey Petersen wrote: > > Hello, > > I was playing around with having a lazy sequence abstracting over a paged
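A small sketch of the rendezvous behavior described above (page names are illustrative): on an unbuffered channel, each blocking put waits for a matching take, so the producer never runs ahead of demand.

```clojure
(require '[clojure.core.async :as async :refer [chan thread >!! <!!]])

;; No buffer: >!! blocks the producer thread until a consumer takes,
;; so pages are "fetched" lazily, one take at a time.
(def c (chan))

(thread
  (doseq [page [:page-1 :page-2 :page-3]]
    (>!! c page)))    ; blocks here until someone takes

(println (<!! c))     ; releases exactly one pending put
```

This gives pull-driven pacing without any explicit request/response protocol, at the cost of tying up a real thread in the producer.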

Re: core.async chan ex-handler

2015-01-23 Thread coltnz
On Friday, January 23, 2015 at 12:59:40 AM UTC+13, Derek Troy-West wrote: > > From the documentation: > > (chan buf-or-n xform ex-handler) > > "If a transducer is supplied a > buffer must be specified. ex-handler must be a fn of one argument - > if an exception occurs during transformation it wil
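The documented signature quoted above can be exercised with a short sketch: the ex-handler receives the exception thrown inside the transducer, and its return value (when non-nil) is put on the channel in place of the failed transformation.

```clojure
(require '[clojure.core.async :as async :refer [chan >!! <!!]])

;; Division by zero inside the transducer is routed to the ex-handler,
;; whose return value replaces the failed element on the channel.
(def c (chan 1
             (map #(/ 100 %))
             (fn [ex] :divide-error)))

(>!! c 0)
(println (<!! c))   ; :divide-error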
