On Thu, 31 Jan 2019 at 16:22, Michael Jones <michael.jo...@gmail.com> wrote:

> Agree. I’ve never had that level of imbalance but yes.
>

When doing mathematical calculations, as I think your code was, it's usual
for each item to have a comparable cost. By contrast, I'm usually adding
this kind of code to make concurrent network requests, and in that
scenario it's much more likely that some requests will be orders of
magnitude slower than others.

FWIW I was quite pleased with the `chan chan R` idea - I might use it for
real some time :) Come to think of it, it's slightly reminiscent of another
pattern I wrote about some years ago:
https://rogpeppe.wordpress.com/2009/12/01/concurrent-idioms-1-broadcasting-values-in-go-with-linked-channels/
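
To make that concrete, the shape I have in mind is roughly the sketch
below. It's only an illustration, not the code from any playground link
in this thread: the result type R, the process function, the item list
and the buffer size are all placeholders.

package main

import (
	"fmt"
	"strconv"
)

// R is a placeholder result type.
type R string

// process stands in for the real per-item work.
func process(item int) R {
	return R("result " + strconv.Itoa(item))
}

func main() {
	items := []int{0, 1, 2, 3, 4, 5, 6, 7}
	const maxInFlight = 4

	// Each value sent on results is the channel on which one item's
	// result will eventually arrive. The buffer roughly bounds how many
	// items can be in flight (or finished but not yet consumed) at once.
	results := make(chan chan R, maxInFlight)

	go func() {
		for _, item := range items {
			c := make(chan R, 1) // buffered so the worker never blocks on send
			results <- c
			go func(item int, c chan R) {
				c <- process(item)
			}(item, c)
		}
		close(results)
	}()

	// Consume results in the original submission order.
	for c := range results {
		fmt.Println(<-c)
	}
}

Because each per-item channel is buffered, a slow item never blocks the
workers themselves, only the producer once it gets maxInFlight items
ahead of the item whose result is still pending.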

  cheers,
    rog.


> On Thu, Jan 31, 2019 at 4:06 AM roger peppe <rogpe...@gmail.com> wrote:
>
>> On Thu, 31 Jan 2019 at 00:06, Michael Jones <michael.jo...@gmail.com>
>> wrote:
>>
>>> note that my code sidesteps this problem generally
>>>
>>
>> I'm not sure that that's true. Although your code mitigates the problem
>> somewhat, it's still possible for one slow worker to block the others.
>> You've added 512*NumCPU/2 buffer slots, but in general it's not possible
>> to order the results and avoid unnecessary blocking without having N
>> buffer slots. In your code, assume 4 CPUs and that work items 0 and 2
>> take 1s while all the other work items take 1ms. If we've got 5000 items
>> in total, the total time taken will be 2.002498s instead of the ideal
>> time (~1s+2499µs).
>>
>> https://play.golang.org/p/5Ty6pgpmZ0w
>>
>> Here's a kind of hybrid approach. It still serializes, but it makes as
>> good a use of the buffer
>> space as it can - it won't block until a slow item is at least bufSize
>> items behind the
>> most recently processed item:
>>
>> https://play.golang.org/p/PP9NSJuLeEK
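
For the record, the hybrid idea is roughly along the lines of the sketch
below: a fixed pool of workers writing into a bounded ring of result
slots, with a counting semaphore so that item i can only be dispatched
once item i-bufSize has been emitted. It's only an illustration with
made-up item counts and a placeholder process function, not the actual
code behind that playground link.

package main

import (
	"fmt"
	"runtime"
	"sync"
)

// process stands in for the real per-item work.
func process(i int) string { return fmt.Sprintf("result %d", i) }

func main() {
	const (
		numItems = 20
		bufSize  = 8
	)
	nproc := runtime.NumCPU()

	// One result slot per in-flight item, reused round-robin.
	slots := make([]chan string, bufSize)
	for i := range slots {
		slots[i] = make(chan string, 1)
	}

	// Counting semaphore: a token is returned each time a result is
	// emitted, so item i can be dispatched only after item i-bufSize
	// has been emitted and its slot is free again.
	free := make(chan struct{}, bufSize)
	for i := 0; i < bufSize; i++ {
		free <- struct{}{}
	}

	work := make(chan int)
	var wg sync.WaitGroup
	for w := 0; w < nproc; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := range work {
				slots[i%bufSize] <- process(i)
			}
		}()
	}

	// Dispatch items in order, subject to the semaphore.
	go func() {
		for i := 0; i < numItems; i++ {
			<-free
			work <- i
		}
		close(work)
	}()

	// Emit results strictly in item order, freeing slots as we go.
	for i := 0; i < numItems; i++ {
		fmt.Println(<-slots[i%bufSize])
		free <- struct{}{}
	}
	wg.Wait()
}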
>>
>>
>>> On Wed, Jan 30, 2019 at 3:48 PM roger peppe <rogpe...@gmail.com> wrote:
>>>
>>>> On Wed, 30 Jan 2019 at 20:07, 'Bryan Mills' via golang-nuts <
>>>> golang-nuts@googlegroups.com> wrote:
>>>>
>>>>> The code to sequence the results using a channel is not much more
>>>>> verbose.
>>>>>
>>>>> That not only avoids the library dependency, but also makes the peak
>>>>> memory consumption for the results O(runtime.NumCPU()), instead of O(N)
>>>>> with the number of tasks, and allows the output to be streamed instead of
>>>>> buffered to a slice.
>>>>>
>>>>> https://play.golang.org/p/zkBjxlcvESe
>>>>>
>>>>
>>>> Nice! In practice, though, I've usually found that either I want to
>>>> keep the results around or I don't care about the order at all, so the
>>>> parallel package works OK.
>>>>
>>>> I'd point out one downside to the sequencing approach: one very slow
>>>> work item can block the others. For example, if NProc is 4, the first
>>>> item takes 200 milliseconds to process, and all the others take 1
>>>> millisecond, then after the first 4 workers have started, no more items
>>>> will be started until the first one has finished, so the overall time
>>>> will be quite a bit longer (249ms) than if they were all allowed to
>>>> proceed irrespective of order (200ms).
>>>>
>>>
>>>
>>> --
>>>
>>> *Michael T. Jones*
>>> michael.jo...@gmail.com <michael.jo...@gmail.com>
>>>
>> --
>
> *Michael T. Jones*
> michael.jo...@gmail.com <michael.jo...@gmail.com>
>
