[go-nuts] Simple worker pool in golang
signal handling is a catch all, could be reading from a high-priority socket, etc.)
-----Original Message-----
From: Robert Engels
Sent: Dec 30, 2019 3:21 PM
To: Robert Engels , Jesper Louis Andersen
Cc: Brian Candler , golang-nuts
Subject: Re: [go-nuts] Simple worker pool in golang
Also, if
On Mon, Dec 30, 2019 at 10:14 PM Robert Engels
wrote:
> Here is a simple test that demonstrates the dynamics
> https://play.golang.org/p/6SZcxCEAfFp (cannot run in playground)
>
> Notice that the call that uses an over-allocated number of routines takes
> 5x longer wall time than the properly sized
-----Original Message-----
From: Robert Engels
Sent: Dec 30, 2019 9:43 AM
To: Jesper Louis Andersen
Cc: Brian Candler , golang-nuts
Subject: Re: [go-nuts] Simple worker pool in golang
Right, but the overhead is not constant nor free. So if you parallelize the CPU
bound task into 100 segments and you only have 10 cores, the contention on the
internal locking structures (scheduler, locks in channels) will be significant
and the entire process will probably take far longer - wo
On Mon, Dec 30, 2019 at 10:46 AM Brian Candler wrote:
> Which switching cost are you referring to? The switching cost between
> goroutines? This is minimal, as it takes place within a single thread. Or
> are you referring to cache invalidation issues? Or something else?
>
>
It is the usual di
On Sunday, 29 December 2019 22:18:51 UTC, Robert Engels wrote:
>
> I agree. I meant that worker pools are especially useful when you can do
> cpu affinity - doesn’t apply to Go.
>
> I think Go probably needs some idea of “capping” for cpu based workloads.
> You can cap in the local N CPUs
By d
I agree. I meant that worker pools are especially useful when you can do cpu
affinity - doesn’t apply to Go.
I think Go probably needs some idea of “capping” for cpu based workloads. You
can cap in the local N CPUs, but in a larger app that has multiple parallel
processing points you can easily
Perhaps I wasn't clear, but I am not suggesting to use one goroutine per
table. You can limit the no. of goroutines running (say 25 at a time) by
just spawning them and waiting for the entire task to complete, without
creating a worker pool of 25 goroutines. It is basically a counting
semaphore with a
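A minimal sketch of that counting-semaphore idea (doTable is a hypothetical stand-in; 25 is just the limit used above): a buffered channel bounds how many goroutines run at once, and a WaitGroup waits for the whole task.

package main

import "sync"

func doTable(table string) { _ = table } // hypothetical per-table work

func processTables(tables []string) {
	sem := make(chan struct{}, 25) // counting semaphore: at most 25 goroutines run at once
	var wg sync.WaitGroup
	for _, t := range tables {
		wg.Add(1)
		go func(table string) {
			defer wg.Done()
			sem <- struct{}{}        // acquire a slot
			defer func() { <-sem }() // release it when done
			doTable(table)
		}(t)
	}
	wg.Wait() // wait for the entire task to complete
}

func main() { processTables([]string{"t1", "t2", "t3"}) }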
I needed a way to let the database complete a few tables (max 25) at a time
without the need for the database to handle other tables in between. A
worker pool which reads one table after the other does that perfectly. That
also means that if all workers wait for the database then my whole program
w
I often choose worker pools when there is working storage to be allocated
as part of the work to be done. This way, such data and
processing structures are naturally built and torn down right there before
and after the work-channel loop in the worker, with no repeated allocations
during r
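A minimal sketch of that pattern (the scratch size and the process function are made up for illustration): each worker builds its working storage once, before its work-channel loop, and reuses it for every item.

package main

import "sync"

// process stands in for the real per-item work; scratch is reused across items.
func process(item, scratch []byte) { _ = item; _ = scratch }

func run(work <-chan []byte, workers int) {
	var wg sync.WaitGroup
	wg.Add(workers)
	for i := 0; i < workers; i++ {
		go func() {
			defer wg.Done()
			scratch := make([]byte, 64*1024) // working storage built once, before the loop
			for item := range work {         // the work-channel loop
				process(item, scratch) // no per-item allocation
			}
			// scratch is torn down with the worker, after the loop
		}()
	}
	wg.Wait()
}

func main() {
	work := make(chan []byte)
	go func() {
		for i := 0; i < 10; i++ {
			work <- []byte("item")
		}
		close(work)
	}()
	run(work, 4)
}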
On Sat, Dec 28, 2019 at 6:11 AM Robert Engels wrote:
>
> Spinning up a Go routine for each piece of work may be idiomatic but it
> is far from the best practice for many situations - mainly because of cpu
> cache invalidation. Most HPC systems create worker pools by type and then
> isolate
Spinning up a Go routine for each piece of work may be idiomatic but it is
far from the best practice for many situations - mainly because of cpu cache
invalidation. Most HPC systems create worker pools by type and then isolate
them to cpus/cores - something you can’t do in Go.
I believe t
> (the task was to limit the whole thing to about 10% of cores)
I still don't think you needed a worker pool here. Like OP mentioned above,
you could just limit the number of goroutines executed to 10% of total
cores.
On Saturday, 28 December 2019 18:02:08 UTC+5:30, Chris Burkert wrote:
>
> Th
Certainly it's important to limit the concurrency, and in your case you
need to process the larger tasks first.
However, to me, the defining characteristics of a "worker pool" are (a minimal sketch follows this list):
1. workers are created before any tasks need to be done
2. the same worker handles multiple tasks sequentially
3. wor
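A minimal sketch of a pool with those characteristics (the task type and handle function are hypothetical): the workers exist before any task does, and each one handles many tasks sequentially from a shared channel.

package main

import "sync"

type task struct{ id int }

func handle(t task) { _ = t } // hypothetical per-task work

func main() {
	tasks := make(chan task)
	var wg sync.WaitGroup

	// 1. workers are created before any tasks need to be done
	const workers = 4
	wg.Add(workers)
	for i := 0; i < workers; i++ {
		go func() {
			defer wg.Done()
			// 2. the same worker handles multiple tasks sequentially
			for t := range tasks {
				handle(t)
			}
		}()
	}

	for i := 0; i < 100; i++ {
		tasks <- task{id: i}
	}
	close(tasks) // no more tasks; workers exit their range loops
	wg.Wait()
}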
There are pros and cons for everything in life. Some time ago I wrote a
database tool which does something per table, where the runtime largely
depends on the table size. I started with one goroutine per table because
it was easy. But that put a high load on the database (the task was to
limit the whole thing to about 10% of cores)
On Friday, 27 December 2019 16:30:48 UTC, Bruno Albuquerque wrote:
>
> This might be useful to you, in any case:
>
> https://git.bug-br.org.br/bga/workerpool
>
>
I think the point from Bryan Mills' video is, "worker pool" is something of
an anti-pattern in Go. Goroutines are so cheap that you mi
This might be useful to you, in any case:
https://git.bug-br.org.br/bga/workerpool
On Thu, Dec 26, 2019 at 8:12 PM Amarjeet Anand
wrote:
> Hi
>
> I have to produce some tasks to Kafka in parallel. So I want to implement a
> simple worker group pattern in Go.
>
> Is the below code decent enough
I think it is more complex - or simpler :) - than that. A lot depends on the
Kafka client - for example the sarama client recommends one client per
producer/consumer, other clients may multiplex on the same client so having
more than one consumer (sender) may not be beneficial if the IO is fully
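Purely for illustration, a minimal sketch of the shared-client arrangement (this assumes the Shopify sarama client, github.com/Shopify/sarama; the broker address and topic are placeholders): several sender goroutines share one SyncProducer rather than each creating their own client.

package main

import (
	"log"
	"sync"

	"github.com/Shopify/sarama"
)

func main() {
	cfg := sarama.NewConfig()
	cfg.Producer.Return.Successes = true // required by SyncProducer

	// One producer shared by all senders (placeholder broker address).
	producer, err := sarama.NewSyncProducer([]string{"localhost:9092"}, cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer producer.Close()

	tasks := make(chan string)
	var wg sync.WaitGroup
	const senders = 4
	wg.Add(senders)
	for i := 0; i < senders; i++ {
		go func() {
			defer wg.Done()
			for payload := range tasks {
				msg := &sarama.ProducerMessage{Topic: "tasks", Value: sarama.StringEncoder(payload)}
				if _, _, err := producer.SendMessage(msg); err != nil {
					log.Println("send failed:", err)
				}
			}
		}()
	}

	for _, p := range []string{"a", "b", "c"} {
		tasks <- p
	}
	close(tasks)
	wg.Wait()
}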
Thanks Ian for the resources.
Appreciate it a lot.
On Fri, Dec 27, 2019 at 10:57 AM Ian Lance Taylor wrote:
> On Thu, Dec 26, 2019 at 8:12 PM Amarjeet Anand
> wrote:
> >
> > I have to produce some tasks to Kafka in parallel. So I want to implement a
> > simple worker group pattern in Go.
>
> Bryan
Hi Robert
Actually the code above is simplified to make it easy to understand.
Thanks for the suggestion on variable naming... Will improve that.
The scenario is that the producer functions (produceTaskOfType1ToChan() and
produceTaskOfType2ToChan()) will produce a list of strings to the
channel.
On Thu, Dec 26, 2019 at 8:12 PM Amarjeet Anand
wrote:
>
> I have to produce some tasks to Kafka in parallel. So I want to implement a
> simple worker group pattern in Go.
Bryan has a good talk on this general topic:
https://www.youtube.com/watch?v=5zXAHh5tJqQ&list=PL2ntRZ1ySWBdatAqf-2_125H4sGzaWng
Yes, the code doesn’t work :) - it will only ever produce 2 items - unless that
was expected - even so, you want the N workers doing work, and probably a
constant number sending to Kafka - but a lot depends on your “serial needs”. In
your case you only have 2 workers producing work, and N senders
Hi
I have to produce some tasks to Kafka in parallel. So I want to implement a
simple worker group pattern in Go.
Is the below code decent enough to take to production?
var workerCount = runtime.NumCPU()*7 + 1
func WorkerPattern() {
taskWg := &sync.WaitGroup{}
taskWg.Add(2)
autoC
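The snippet above is cut off; purely as a hedged illustration of the shape being discussed (sendToKafka and the bodies of the two produce functions are stand-ins, not the original code), a self-contained version with two producers and a fixed worker pool might look like this:

package main

import (
	"fmt"
	"runtime"
	"sync"
)

var workerCount = runtime.NumCPU()*7 + 1

func produceTaskOfType1ToChan(tasks chan<- string) {
	for i := 0; i < 5; i++ {
		tasks <- fmt.Sprintf("type1-%d", i)
	}
}

func produceTaskOfType2ToChan(tasks chan<- string) {
	for i := 0; i < 5; i++ {
		tasks <- fmt.Sprintf("type2-%d", i)
	}
}

func sendToKafka(task string) { fmt.Println("sending", task) } // stand-in for the real producer call

func WorkerPattern() {
	tasks := make(chan string)

	// Two producers write tasks to the channel.
	taskWg := &sync.WaitGroup{}
	taskWg.Add(2)
	go func() { defer taskWg.Done(); produceTaskOfType1ToChan(tasks) }()
	go func() { defer taskWg.Done(); produceTaskOfType2ToChan(tasks) }()

	// A fixed pool of workers drains the channel and sends to Kafka.
	workerWg := &sync.WaitGroup{}
	workerWg.Add(workerCount)
	for i := 0; i < workerCount; i++ {
		go func() {
			defer workerWg.Done()
			for t := range tasks {
				sendToKafka(t)
			}
		}()
	}

	taskWg.Wait() // all producers finished
	close(tasks)  // lets workers exit their range loops
	workerWg.Wait()
}

func main() { WorkerPattern() }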