the task at hand and dishing the work out to workers with a scheme a
> little more complex than odd, even etc. as required?
>
> With the benefit that an off by one etc. causes a panic and not something
> potentially worse?
> Bakul Shah : Jan 18 10:50AM -0800
>
Yes, that is possible.
The simulated cores are already generated functions in C.
It’s my experience that if you can leverage an existing concurrency framework
then life is better for everyone; Go’s is fairly robust, so this is an
experiment to see how close I can get. A real simulation system has
I’d observed that, experimentally, but I don’t think it’s guaranteed behaviour
:-(
> On Jan 17, 2021, at 10:07 AM, Robert Engels wrote:
>
> If you use GOMAXPROCS as a subset of physical cores and have the same number
> of routines you can busy wait in a similar fashion.
>
That’s exactly the plan.
The idea is to simulate perhaps the workload of a complete chiplet. That might
be (assuming no SIMD in the processors, to keep the example light) 2K cores.
Each worklet (the work done for one simulated core per clock) is perhaps 25-50
nsec.
The simplest mechan
The problem is that the N or so channel communications, twice per simulated
clock, seem to take much longer than the time spent doing useful work.
Go isn’t designed for this sort of work, so it’s not a complaint to note it’s
not as good as I’d like it to be. But the other problem is that processors
>> On Jan 16, 2021, at 7:35 PM, Pete Wilson wrote:
>>
>>
…the scheduler revived them. Scheduler
thing, not clock thing.
>
> You can restructure this to avoid the race. You should Add() to the stage
> 1 and 2 wait groups after the Wait() returns and before you Wait() on the
> stage 2.
>
>> On Jan 16, 2021, at 6:05 PM, Pete Wilson wrote:
Brian
Thanks for advice. I’ll take a look
— P
> On Jan 16, 2021, at 5:23 PM, golang-nuts@googlegroups.com wrote:
>
> Brian Candler <b.cand...@pobox.com>: Jan 16 01:02PM -0800
>
> On Saturday, 16 January 2021 at 16:28:59 UTC Pete Wilson wrote:
>
> Waitgroup problem
> <http://groups.google.com/group/golang-nuts/t/397c0fe3def54987?utm_source=digest&utm_medium=email>
>
> Pete Wilson : Jan 16 10:28AM -0600
>
Gentlepersons
I asked for advice on how to handle a problem a few days ago, and have
constructed a testbed of what I need to do, using WaitGroups in what seems to
be a standard manner.
But the code fails and I don’t understand why.
The (simple version of the) code is at https://play.golang.org
N in this case will be similar to the value of GOMAXPROCS, on the assumption
that scaling that far pays off.
I would love to have the problem of having a 128-core computer… (Though then
if scaling tops out at 32 cores one simply runs 4 experiments..)
— P
> On Jan 13, 2021, at 10:31 PM, Robert Engels wrote:
I have decided to believe that scheduling overhead is minimal, and to
customize only if this is untrue for my workload.
[I don’t like customizing. Stuff in the standard library has been built by folk
who have done this in anger, and the results have been widely used; plus the
principle of least surprise.]
Only because I had started out my explanation, in a prior thread, trying to use
’naked’ read and write atomic fences, and having exactly three goroutines
(main, t0, t1) exposed all that needed to be exposed of the problem.
Yes, if this scales, there will be more goroutines than 3.
> On Jan 13, 2021, at 9:58 PM,
Dave
Thanks for thoughts.
My intent is to use non-custom barriers and measure scalability, customising if
need be. Not busy-waiting the goroutines leaves processor time for other stuff
(like visualization, looking for patterns, etc.).
There’s also the load-balancing effect; code pretending to b
> Digest for golang-nuts@googlegroups.com - 18 updates in 9 topics
>
> Pete Wilson : May 13 10:28PM -0500
>
All this is true.
But I expect that one of these fine days, someone sueable is going to ship
software with a serious bug, and is going to get sued and lose, because
(i) there’s a lot of money
and
(ii) it’s well known in the art that doing X is just bloody stupid, and you did
X.
And then the qua