Hi,
So things work well, in terms of memory usage anyway. I use 50 main
threads (in process-data-by-condition-set) and 20 for replications
(process-growths). However, I found that after good performance at the
beginning it slows down rapidly. VisualVM shows that the thread usage
is poor - main
-> is just a list transform performed after your code has been read into a
list data structure containing symbols, and before it is compiled to byte
code - it doesn't do anything directly.
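As a minimal illustration of that rewrite (plain clojure.core, nothing else assumed):

```clojure
;; -> is expanded away at macroexpansion time; it only rearranges forms.
;; (-> x (f a) (g b)) becomes (g (f x a) b) before compilation.
(macroexpand '(-> x (f a) (g b)))
;; => (g (f x a) b)
```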
On Fri, Feb 2, 2018 at 3:55 PM Jacek Grzebyta wrote:
OK, I found what causes the memory leak.
In the project I work with, I use a Java Model class which is a Java
Collection proxy/facade for a single transaction. Unfortunately it's not
thread safe. In a few places I passed a single instance of the model into
several threads. Also I requested the instance w
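One way to make a shared, non-thread-safe object survive concurrent writers is to serialize access with `locking`. A sketch, using `java.util.ArrayList` as a stand-in for the Model class (the names here are mine, not from the thread):

```clojure
;; Stand-in for the non-thread-safe object: a plain java.util.ArrayList.
(def model (java.util.ArrayList.))

(defn safe-add!
  "All writers synchronize on the same monitor before mutating m."
  [m x]
  (locking m
    (.add m x)))

;; 100 concurrent additions still yield exactly 100 elements.
(dorun (pmap #(safe-add! model %) (range 100)))
(count model)
;; => 100
```

Giving each thread its own instance avoids the lock contention entirely, when the surrounding transaction semantics allow it.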
On 2 February 2018 at 08:34, Niels van Klaveren wrote:
Thanks a lot. I modified the code to use Claypoole. I imagine with-shutdown
will close the pool properly after all tasks finish, so there is
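If I read the Claypoole API right, `with-shutdown!` does exactly that. A sketch, assuming the com.climate.claypoole dependency is on the classpath:

```clojure
(require '[com.climate.claypoole :as cp])

;; with-shutdown! shuts the pool down when the body exits, even on an
;; exception. The results must be realized inside the body (hence doall),
;; because the pool is gone once the body returns.
(cp/with-shutdown! [pool (cp/threadpool 4)]
  (doall (cp/upmap pool inc (range 10))))
```

`upmap` is the unordered variant; it keeps at most pool-size tasks in flight, which is the bounded behavior the thread is after.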
+1 for Claypoole, it removed the need for agents or futures in 95% of
the cases in my code.
On Thursday, February 1, 2018 at 9:54:36 PM UTC+1, Alan Thompson wrote:
You may find that using the Claypoole library is the easiest way to handle
threadpools: https://github.com/TheClimateCorporation/claypoole
Alan
On Thu, Feb 1, 2018 at 11:16 AM, Justin Smith wrote:
yes, that's the idea exactly
also, you might want more fine-grained control of how much parallelism
occurs (e.g. if every thread is writing to the same physical device, you can
often get better throughput by not parallelizing at all, or keeping the
parallelism quite limited - it's worth experimenting)
Thanks folks, I see now! It should be a list of agents, not a list of
futures within an agent. Also, any task sent to an agent is processed on a
thread anyway, so I do not need to add future...
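A toy sketch of that shape - one agent per unit of work. Note that `send` runs actions on a fixed pool sized to CPU count + 2, while `send-off` uses an unbounded cached pool better suited to blocking I/O:

```clojure
;; One agent per work item; each action runs on the agents' thread pool,
;; so no explicit future is needed.
(def workers (mapv agent (range 5)))

(doseq [a workers]
  (send a inc))          ; CPU-bound work: use send, not send-off

(apply await workers)    ; block until every dispatched action has run
(mapv deref workers)
;; => [1 2 3 4 5]
```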
On 1 February 2018 at 02:17, John Newman wrote:
Ah, he's using one agent, I see.
On Jan 31, 2018 9:15 PM, "John Newman" wrote:
Multiple send-offs to one agent will serialize its calls, but spawning
agents on each new task will spawn threads on a bounded thread pool, I
believe.
On Jan 31, 2018 8:32 PM, "Justin Smith" wrote:
Doing all the actions via one agent means that the actions are serialized
though - you end up with no performance improvement over doing them all in
a doseq in one future - the right way to do this tends to be trickier than
it looks at first glance, and depends on your requirements. agents, the
cla
Agents manage a pool of threads for you. Try doing it without the future
call and see if that works (unless you're trying to do something else).
John
On Wed, Jan 31, 2018 at 7:31 PM, Jacek Grzebyta wrote:
Thanks a lot. I will check it tomorrow.
J
On 1 Feb 2018 12:12 a.m., "Justin Smith" wrote:
this is exactly the kind of problem code I was describing - there's no
backpressure on existing future tasks to hold up the launching of more
futures - the work done by the agent calling conj is negligible. You need
to control the size of the pool of threads used, and you need to impose
back-pressure
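One way to get both a bounded pool and back-pressure with plain Java interop (a sketch, not the only option - Claypoole achieves the same effect with less ceremony):

```clojure
(import '(java.util.concurrent ThreadPoolExecutor
                               ThreadPoolExecutor$CallerRunsPolicy
                               ArrayBlockingQueue TimeUnit))

;; 4 worker threads and a 16-slot queue; when the queue is full,
;; CallerRunsPolicy makes the submitting thread run the task itself,
;; which throttles how fast new tasks can be produced.
(def pool
  (ThreadPoolExecutor. 4 4 0 TimeUnit/MILLISECONDS
                       (ArrayBlockingQueue. 16)
                       (ThreadPoolExecutor$CallerRunsPolicy.)))

(def done (atom 0))

(dotimes [_ 1000]
  (.execute pool (fn [] (swap! done inc))))

(.shutdown pool)
(.awaitTermination pool 10 TimeUnit/SECONDS)
@done
;; => 1000
```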
On 31 January 2018 at 18:08, James Reeves wrote:
As a shot in the dark, a common problem with memory usage and futures that
I have seen is the antipattern of launching a future for each piece of data
in a collection. The problem that occurs is that the code works for small
input collections and a small load of running tasks / requests, but for a
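A hypothetical sketch of that antipattern next to a bounded variant (the function names are mine, not from the thread):

```clojure
;; Antipattern: one future per element - nothing limits how many tasks
;; (and the data they retain) are alive at once.
(defn process-all-naive [task items]
  (mapv deref (mapv #(future (task %)) items)))

;; Bounded variant: launch at most n futures at a time and drain each
;; batch before starting the next, so memory use stays proportional to n.
(defn process-all-bounded [task n items]
  (->> items
       (partition-all n)
       (mapcat (fn [batch]
                 (mapv deref (mapv #(future (task %)) batch))))
       vec))

(process-all-bounded inc 3 (range 10))
;; => [1 2 3 4 5 6 7 8 9 10]
```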
On 31 January 2018 at 17:59, Jacek Grzebyta wrote:
Hi,
I have an application with quite intense triple store population: ~30-40k
records per chunk (139 portions). The data are wrapped within a future:
(conj agent (future (apply task args)))
and all of that together is sent via send-off into (agent []).
At the end of the main thread function I just use await-