https://code.dlang.org/packages/rust_interop_d
wrapped:
DashMap is an implementation of a concurrent associative
array/hashmap in Rust.
I was playing with parallel programming, and experienced
"undefined behavior" when storing into an Associative Array in
parallel. Guarding the assignments with a synchronized barrier
fixed it, of course. And obviously loading down your raw AA with
thread barriers would
On Monday, 28 August 2023 at 22:43:56 UTC, Ali Çehreli wrote:
On 8/28/23 15:37, j...@bloow.edu wrote:
> Basically everything is hard coded to use totalCPU's
parallel() is a function that dispatches to a default TaskPool
object, which uses totalCPUs. It's convenient but as you say,
not
kPool. The default value is totalCPUs -
1. Calling the setter after the first call to taskPool does not change the
number of worker threads in the instance returned by taskPool. "
I guess I could try to see if I can change this but I don't know what
the "first call" is(and I'm usin
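For reference, the "first call" is whatever first touches `taskPool`, including the free function `parallel()`. A minimal sketch of setting the thread count early enough, assuming nothing else has used the pool yet:

```d
import std.parallelism : defaultPoolThreads, taskPool;
import std.stdio : writeln;

void main()
{
    // Must run before the first use of taskPool (the free function
    // parallel() uses it too), otherwise it has no effect.
    defaultPoolThreads = 2;

    foreach (i; taskPool.parallel([10, 20, 30]))
    {
        // ... per-element work ...
    }
    writeln(taskPool.size); // number of worker threads in the pool
}
```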
On 8/28/23 15:37, j...@bloow.edu wrote:
> Basically everything is hard coded to use totalCPU's
parallel() is a function that dispatches to a default TaskPool object,
which uses totalCPUs. It's convenient but as you say, not all problems
should use it.
In such cases, you would create y
instance returned by taskPool. "
I guess I could try to see if I can change this but I don't know
what the "first call" is(and I'm using parallel to create it).
Seems that the code should simply be made more robust. Probably
just a few lines of code to change/add at most. Mayb
On 26.08.23 05:39, j...@bloow.edu wrote:
On Friday, 25 August 2023 at 21:31:37 UTC, Ali Çehreli wrote:
On 8/25/23 14:27, j...@bloow.edu wrote:
> "A work unit is a set of consecutive elements of range to be
processed
> by a worker thread between communication with any other
thread. The
> number
On Friday, 25 August 2023 at 21:43:26 UTC, Adam D Ruppe wrote:
On Wednesday, 23 August 2023 at 13:03:36 UTC, Joe wrote:
to download files from the internet.
Are they particularly big files? You might consider using one
of the other libs that does it all in one thread. (i ask about
size cuz
On Friday, 25 August 2023 at 21:31:37 UTC, Ali Çehreli wrote:
On 8/25/23 14:27, j...@bloow.edu wrote:
> "A work unit is a set of consecutive elements of range to be
processed
> by a worker thread between communication with any other
thread. The
> number of elements processed per work unit is
On Wednesday, 23 August 2023 at 13:03:36 UTC, Joe wrote:
to download files from the internet.
Are they particularly big files? You might consider using one of
the other libs that does it all in one thread. (i ask about size
cuz mine ive never tested doing big files at once, i usually use
it
On 8/25/23 14:27, j...@bloow.edu wrote:
> "A work unit is a set of consecutive elements of range to be processed
> by a worker thread between communication with any other thread. The
> number of elements processed per work unit is controlled by the
> workUnitSize parameter. "
>
> So the question
On Wednesday, 23 August 2023 at 14:43:33 UTC, Sergey wrote:
On Wednesday, 23 August 2023 at 13:03:36 UTC, Joe wrote:
I use
foreach(s; taskPool.parallel(files, numParallel))
{ L(s); } // L(s) represents the work to be done.
If you make for example that L function return “ok” in case
file
On Wednesday, 23 August 2023 at 13:03:36 UTC, Joe wrote:
I use
foreach(s; taskPool.parallel(files, numParallel))
{ L(s); } // L(s) represents the work to be done.
If you make for example that L function return “ok” in case file
successfully downloaded, you can try to use TaskPool.amap.
The
of parallel tasks is to allow parallelism, but the way the code is
working, it starts the tasks in parallel but then essentially stalls
the parallelism a large portion of the time. E.g.,
If there are a bunch of small downloads but one large one, then
that one large download stalls
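Sergey's `amap` suggestion from the snippets above might look roughly like this; `download` and the status-string convention are assumptions for illustration, not code from the thread:

```d
import std.parallelism : taskPool;
import std.stdio : writeln;

// Hypothetical worker: report status instead of throwing, so one
// failed download cannot abort the whole parallel map.
string download(string file)
{
    // ... fetch `file` here ...
    return "ok";
}

void main()
{
    auto files = ["a.bin", "b.bin", "c.bin"];
    // amap applies download() to each element in parallel and
    // returns the results in input order.
    string[] results = taskPool.amap!download(files);
    foreach (i, status; results)
        writeln(files[i], ": ", status);
}
```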
On Saturday, 20 May 2023 at 18:27:47 UTC, Ali Çehreli wrote:
[...]
And I've just discovered something.
Me2! The serial version using array indexing
void vec_op_naive0 (double [] outp, const double [] inp,
double function (double) fp)
{
enforce (inp.length == outp.length);
On Saturday, 20 May 2023 at 18:27:47 UTC, Ali Çehreli wrote:
On 5/20/23 04:21, kdevel wrote:
And I've just discovered something. Which one of the following
is the expected documentation?
https://dlang.org/library/std/parallelism.html
https://dlang.org/phobos/std_parallelism.html
What
On 5/20/23 04:21, kdevel wrote:
> Thanks for your explications!
>
> On Friday, 19 May 2023 at 21:18:28 UTC, Ali Çehreli wrote:
>> [...]
>> - std.range.zip can be used instead but it does not provide 'ref'
>> access to its elements.
>
> How/why does sort [1] work with zipped arrays?
I don't know
Thanks for your explications!
On Friday, 19 May 2023 at 21:18:28 UTC, Ali Çehreli wrote:
[...]
- std.range.zip can be used instead but it does not provide
'ref' access to its elements.
How/why does sort [1] work with zipped arrays?
[...]
The following amap example there may be useful for
of elements,
you don't need 'ref' anyway.
- You seem to want to assign to elements in parallel; such functionality
already exists in std.parallelism.
- Phobos documentation is not very useful these days as it's not clear
from the following page that there are goodies like amap, asyncBuf, etc
```
import std.range;
import std.parallelism;
void vec_op (double [] outp, const double [] inp,
double function (double) f)
{
foreach (ref a, b; parallel (lockstep (outp, inp)))
a = f (b);
}
```
Should this compile? dmd says
```
[...]/src/phobos/std/parallelism.d(4094): Error
```
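The error is expected: `parallel` needs a random-access range, and `lockstep` is not a range at all (it only supports plain `foreach` via `opApply`). One working alternative that keeps element assignment is to parallelize over the indices instead; a sketch:

```d
import std.parallelism : parallel;
import std.range : iota;

void vec_op(double[] outp, const double[] inp,
            double function(double) f)
{
    assert(inp.length == outp.length);
    // Iterate over indices rather than lockstep(outp, inp): iota is
    // a random-access range, which parallel() can split into tasks.
    foreach (i; parallel(iota(outp.length)))
        outp[i] = f(inp[i]);
}

void main()
{
    auto a = new double[](4);
    vec_op(a, [1.0, 2.0, 3.0, 4.0], (double x) => x * 2);
    assert(a == [2.0, 4.0, 6.0, 8.0]);
}
```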
On Tuesday, 1 November 2022 at 19:49:47 UTC, mw wrote:
On Tuesday, 1 November 2022 at 18:18:45 UTC, Steven
Schveighoffer wrote:
[...]
Maybe the hunt library author doesn't know. (My code does not
directly use this library; it got pulled in by some other
dependencies.)
[...]
Please, if
On Tuesday, 1 November 2022 at 18:18:45 UTC, Steven Schveighoffer
wrote:
Oh yeah, isDaemon detaches the thread from the GC. Don't do
that unless you know what you are doing.
As discussed on discord, this isn't actually true. All it does is
prevent the thread from being joined before exiting
On Tuesday, 1 November 2022 at 18:18:45 UTC, Steven Schveighoffer
wrote:
And I just noticed, one of the thread trace points to here:
https://github.com/huntlabs/hunt/blob/master/source/hunt/util/DateTime.d#L430
```
class DateTime {
    shared static this() {
        ...
        dateThread.isDaemon
```
On 11/1/22 1:47 PM, mw wrote:
Can you show a code snippet that includes the parallel foreach?
(It's just a very straight forward foreach on an array; as I said it may
not be relevant.)
And I just noticed, one of the thread trace points to here:
https://github.com/huntlabs/hunt/blob/master
Can you show a code snippet that includes the parallel foreach?
(It's just a very straight forward foreach on an array; as I said
it may not be relevant.)
And I just noticed, one of the thread trace points to here:
https://github.com/huntlabs/hunt/blob/master/source/hunt/util/DateTime.d
On Tue, Nov 01, 2022 at 10:37:57AM -0700, Ali Çehreli via Digitalmars-d-learn
wrote:
> On 11/1/22 10:27, H. S. Teoh wrote:
>
> > Maybe try running Digger to reduce the code for you?
>
> Did you mean dustmite, which is accessible as 'dub dustmite
> ' but I haven't used it.
Oh yes, sorry, I
On 11/1/22 10:27, H. S. Teoh wrote:
> Maybe try running Digger to reduce the code for you?
Did you mean dustmite, which is accessible as 'dub dustmite
' but I haven't used it.
My guess for the segmentation fault is that the OP is executing
destructor code that assumes some members are
ted!
> }
>
> int main() {
> foo();
> return 0;
> }
>
> ```
Can you show a code snippet that includes the parallel foreach? Because
the above code snippet is over-simplified to the point it's impossible
to tell what the original problem might be, since obviously calling a
funct
ead (maybe due to foreach parallel) cleanup bug
somewhere, which is unrelated to my own code. This kind of bug is
hard to reproduce; not sure if I should file an issue.
I'm using: LDC - the LLVM D compiler (1.30.0) on x86_64.
Under gdb, here is the threads info (for the record):
Thre
Thank you, folks, for your hints and suggestions!
Indeed, I rewrote the code and got it substantially faster and
well parallelized.
Instead of making the inner loop parallel, I made both loops
parallel. For that I had to convert the 2-D index into 1-D, and then back
to 2-D. Essentially I had
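The index-flattening trick described above, assuming a `rows x cols` iteration space, can be sketched as:

```d
import std.parallelism : parallel;
import std.range : iota;

void main()
{
    enum rows = 100, cols = 200;
    auto result = new double[](rows * cols);

    // Flatten the two loop indices into one so a single parallel
    // foreach covers the whole i/j space at once.
    foreach (k; parallel(iota(rows * cols)))
    {
        immutable i = k / cols;  // recover the 2-D indices
        immutable j = k % cols;
        result[k] = 1.0 * i + j; // placeholder for the real work
    }
}
```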
On 10/18/22 06:24, Guillaume Piolat wrote:
> To win something with OS threads, you must think of tasks that take on
> the order of milliseconds rather than less than 0.1ms.
> Else you will just pay extra in synchronization costs.
In other words, the OP can adjust work unit size. It is on the
On Tuesday, 18 October 2022 at 11:56:30 UTC, Yura wrote:
```D
// Then for each Sphere, i.e. dot[i]
// I need to do some arithmetic with itself and other dots
// I have only parallelized the inner loop; i is fixed.
```
It's usually a much better idea to parallelize the outer loop.
Even OpenMP
On Tuesday, 18 October 2022 at 11:56:30 UTC, Yura wrote:
What I am doing wrong?
The size of your tasks is way too small.
To win something with OS threads, you must think of tasks that
take on the order of milliseconds rather than less than 0.1ms.
Else you will just pay extra in
Dear All,
I am trying to make a simple code run in parallel. The parallel
version works, and gives the same number as serial albeit slower.
First, the parallel features I am using:
import core.thread: Thread;
import std.range;
import std.parallelism:parallel;
import std.parallelism:taskPool
On 1/8/22 7:18 PM, Booster wrote:
>
https://dlang.org/library/std/parallelism/task_pool.parallel.html#workUnitSize
>
>
> Basically no different from the above, but when I change the
> workUnitSize to 1, 4, whatever, it is always running 8 parallel loops.
>
> Either
https://dlang.org/library/std/parallelism/task_pool.parallel.html#workUnitSize
Basically no different from the above, but when I change the
workUnitSize to 1, 4, whatever, it is always running 8 parallel
loops.
Either a bug or workUnitSize is not what I think it is. I simply
want
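For what it's worth, `workUnitSize` does not control the number of threads at all: it is how many consecutive elements one task grabs between communications with the pool. The thread count comes from the TaskPool itself (totalCPUs - 1 workers by default). A sketch with assumed sizes:

```d
import std.parallelism : parallel;
import std.range : iota;

void main()
{
    // workUnitSize = 4: each task handles 4 consecutive indices.
    // The number of threads stays at the pool's worker count plus
    // the calling thread, regardless of this value.
    foreach (i; parallel(iota(16), 4))
    {
        // ... work on index i ...
    }
}
```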
On Sunday, 26 December 2021 at 15:36:54 UTC, Bastiaan Veelo wrote:
On Sunday, 26 December 2021 at 15:20:09 UTC, Bastiaan Veelo
wrote:
So if you use `workerLocalStorage` ... you'll get your output
in order without sorting.
Scratch that, I misunderstood the example. It doesn't solve
ordering.
On Sunday, 26 December 2021 at 15:20:09 UTC, Bastiaan Veelo wrote:
So if you use `workerLocalStorage` to give each thread an
`appender!string` to write output to, and afterwards write
those to `stdout`, you'll get your output in order without
sorting.
Scratch that, I misunderstood the
On Sunday, 26 December 2021 at 11:24:54 UTC, rikki cattermole
wrote:
I would start by removing the use of stdout in your loop kernel
- I'm not familiar with what you are calculating, but if you
can basically have the (parallel) loop operate from (say) one
array directly into another then you
) if (isFloatingPoint!T)
in (y.length == x.length)
{
    import std.range : iota;
    import std.parallelism : parallel, taskPool;
    auto sums = taskPool.workerLocalStorage(0.0L);
    foreach (i; parallel(iota(x.length)))
        sums.get += x[i] * y[i];
    T result = 0.0;
    foreach (threadResult
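A self-contained version of the `workerLocalStorage` pattern in the fragment above might look like this (the function name and non-generic signature are assumptions; each worker thread accumulates into its own slot, and the partial sums are combined serially at the end):

```d
import std.parallelism : parallel, taskPool;
import std.range : iota;

double dotProduct(const double[] x, const double[] y)
in (y.length == x.length)
{
    // One accumulator per worker thread; no sharing, no races.
    auto sums = taskPool.workerLocalStorage(0.0);
    foreach (i; parallel(iota(x.length)))
        sums.get += x[i] * y[i];

    // Combine the per-thread partial sums serially.
    double result = 0.0;
    foreach (threadResult; sums.toRange)
        result += threadResult;
    return result;
}

void main()
{
    auto x = [1.0, 2.0, 3.0];
    assert(dotProduct(x, x) == 14.0);
}
```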
On 27/12/2021 12:10 AM, max haughton wrote:
I would start by removing the use of stdout in your loop kernel - I'm
not familiar with what you are calculating, but if you can basically
have the (parallel) loop operate from (say) one array directly into
another then you can get extremely good
, is there anything I need to know? About shared
resources or how to wait until all threads are done?
Parallel programming is one of the deepest rabbit holes you can
actually get to use in practice. Your question at the moment
doesn't really have much context to it so it's difficult to
suggest where you should
This is curious. I was up for trying to parallelize my code,
specifically having a block of code calculate some polynomials
(*Related to Reed Solomon stuff*). So I cracked open std.parallel
and looked over how I would manage this all.
To my surprise I found ParallelForEach, which gives the
On Thursday, 22 July 2021 at 16:39:45 UTC, Steven Schveighoffer
wrote:
On 7/22/21 1:46 AM, seany wrote:
[...]
Correct. You must synchronize on ii.
[...]
This isn't valid code, because you can't append to an integer.
Though I think I know what you meant. Is it thread-safe
(assuming the
On 7/22/21 1:46 AM, seany wrote:
Consider :
int [] ii;
foreach(i,dummy; parallel(somearray)) {
ii ~= somefunc(dummy);
}
This is not safe, because all threads are accessing the same array and
trying to add values, leading to collisions.
Correct. You must synchronize
where each value contains some invalid number, and the
AA's keys are never changed during the parallel code? Yeah,
that should work.
Yes, the keys are never changed during the parallel code
execution. keys are pre-generated.
keys are never changed during the parallel code? Yeah, that
should work.
On Thursday, 22 July 2021 at 06:47:52 UTC, Ali Çehreli wrote:
But even if it did, we wouldn't want synchronized blocks in
parallelization because a synchronized block would run a single
thread at a time and nothing would be running in parallel
anymore.
But it only affects the block
contains
some invalid number, say -1 ?
Then in process, the parallel code can grab the specific key
locations. Will that also create the same problem ?
On Thursday, 22 July 2021 at 07:23:36 UTC, seany wrote:
On Thursday, 22 July 2021 at 05:53:01 UTC, jfondren wrote:
No. Consider
https://programming.guide/hash-tables-open-vs-closed-addressing.html
The page says :
A key is always stored in the bucket it's hashed to.
What if my keys are
On Thursday, 22 July 2021 at 05:53:01 UTC, jfondren wrote:
No. Consider
https://programming.guide/hash-tables-open-vs-closed-addressing.html
The page says :
A key is always stored in the bucket it's hashed to.
What if my keys are always unique?
On 7/21/21 11:01 PM, frame wrote:
> This is another parallel foreach body conversion question.
> Isn't the compiler clever enough to put a synchronized block here?
parallel is a *function* (not a D feature). So, the compiler might have
to analyze the entire code to suspect race cond
On Thursday, 22 July 2021 at 05:46:25 UTC, seany wrote:
But what about this :
int [ string ] ii;
ii.length = somearray.length;
foreach(i,dummy; parallel(somearray)) {
string j = generateUniqueString(i);
ii[j] ~= somefunc(dummy);
}
Is this also guaranteed thread
On Thursday, 22 July 2021 at 05:53:01 UTC, jfondren wrote:
On Thursday, 22 July 2021 at 05:46:25 UTC, seany wrote:
But what about this :
int [ string ] ii;
ii.length = somearray.length;
foreach(i,dummy; parallel(somearray)) {
string j = generateUniqueString(i);
ii[j
Consider :
int [] ii;
foreach(i,dummy; parallel(somearray)) {
ii ~= somefunc(dummy);
}
This is not safe, because all threads are accessing the same
array and trying to add values, leading to collisions.
But :
int [] ii;
ii.length = somearray.length;
foreach
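The pattern the fragment is heading toward, preallocate and have each iteration write only its own slot, can be sketched as (with a stand-in `somefunc`):

```d
import std.parallelism : parallel;

int somefunc(int x) { return x * 2; } // stand-in for the real work

void main()
{
    auto somearray = [1, 2, 3, 4];
    auto ii = new int[](somearray.length);

    // Each iteration writes only ii[i]; no two threads touch the
    // same element, so no synchronization is needed.
    foreach (i, dummy; parallel(somearray))
        ii[i] = somefunc(dummy);

    assert(ii == [2, 4, 6, 8]);
}
```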
On 7/19/21 10:58 PM, H. S. Teoh wrote:
I didn't check the implementation to verify this, but I'm pretty sure
`break`, `continue`, etc., in the parallel foreach body does not change
which iteration gets run or not.
`break` should be undefined behavior (it is impossible to know which
loops
d your question disappeared. :)
> I understand, that the parallel iterator will pick lazily values of `j`
> (up to `my_workunitsize`), and execute the for loop for those values in
> its own thread.
Yes.
> Say, values of `j` from `10`to `20` is filled where `my_workunitsize` =
> 11. S
ail
that should not affect the semantics of the overall
computation. In order to maintain consistency, loop iterations
should not affect each other (unless they deliberately do so,
e.g., read/write from a shared variable -- but parallel foreach
itself should not introduce such a depende
r later ?
> >
> > No, it will.
> >
> > Since each iteration is running in parallel, the fact that one of
> > them terminated early should not affect the others.
[...]
> Even tho, the workunit specified 11 values to a single thread?
Logically speaking, the size of th
e I had `10`... `20` as values of `j`,
will only execute for `j = 10, 11, 12 ` and will not reach
`14`or later ?
No, it will.
Since each iteration is running in parallel, the fact that one
of them terminated early should not affect the others.
T
Even tho, the workunit specified 11 val
t; I didn't test this, but I'm pretty sure `continue` inside a parallel
> > foreach loop simply terminates that iteration early; I don't think
> > it will skip to the next iteration.
> >
> > [...]
>
> Ok, therefore it means that, if at `j = 13 `i use a continue
On Tuesday, 20 July 2021 at 00:37:56 UTC, H. S. Teoh wrote:
On Tue, Jul 20, 2021 at 12:07:10AM +, seany via
Digitalmars-d-learn wrote:
[...]
[...]
I didn't test this, but I'm pretty sure `continue` inside a
parallel foreach loop simply terminates that iteration early; I
don't think
function(i,j) ) continue;
> double d = expensiveFunction(i,j);
> // ... stuff ...
> }
> }
>
> I understand, that the parallel iterator will pick lazily values of
> `j` (up to `my_workunitsize`), and execute the for loop for those
> values in
uff ...
}
}
I understand, that the parallel iterator will pick lazily values
of `j` (up to `my_workunitsize`), and execute the for loop for
those values in its own thread.
Say, values of `j` from `10`to `20` is filled where
`my_workunitsize` = 11. Say, at `j = 13` the `boolean_function`
returns t
On Friday, 25 June 2021 at 19:52:23 UTC, seany wrote:
On Friday, 25 June 2021 at 19:30:16 UTC, jfondren wrote:
On Friday, 25 June 2021 at 19:17:38 UTC, seany wrote:
If i use `parallel(...)`it runs.
If i use `prTaskPool.parallel(...`, then in the line : `auto
prTaskPool = new TaskPool
On Friday, 25 June 2021 at 19:30:16 UTC, jfondren wrote:
On Friday, 25 June 2021 at 19:17:38 UTC, seany wrote:
If i use `parallel(...)`it runs.
If i use `prTaskPool.parallel(...`, then in the line : `auto
prTaskPool = new TaskPool(threadCount);` it hits the error.
Please help.
parallel
On Friday, 25 June 2021 at 19:17:38 UTC, seany wrote:
If i use `parallel(...)`it runs.
If i use `prTaskPool.parallel(...`, then in the line : `auto
prTaskPool = new TaskPool(threadCount);` it hits the error.
Please help.
parallel() reuses a single taskPool that's only established once
any more. But I get "error
creating thread" from time to time, not always.
But, even with the taskpool, it is not spreading to multiple
cores.
PS: this is the error message :
"core.thread.threadbase.ThreadError@src/core/thread/threadbase.d(1219): Error creating thread"
On Friday, 25 June 2021 at 15:50:37 UTC, seany wrote:
On Friday, 25 June 2021 at 15:16:30 UTC, jfondren wrote:
On Friday, 25 June 2021 at 14:44:13 UTC, seany wrote:
This particular location does not cause segfault.
It is segfaulting down the line in a completely unrelated
location... Wait I
On Friday, 25 June 2021 at 16:37:06 UTC, seany wrote:
On Friday, 25 June 2021 at 15:50:37 UTC, seany wrote:
On Friday, 25 June 2021 at 15:16:30 UTC, jfondren wrote:
[...]
Try [this version](https://github.com/naturalmechanics/mwp/tree/nested-loops).
The goal is to parallelize :
On Friday, 25 June 2021 at 15:16:30 UTC, jfondren wrote:
On Friday, 25 June 2021 at 14:44:13 UTC, seany wrote:
This particular location does not cause segfault.
It is segfaulting down the line in a completely unrelated
location... Wait I will try to make a MWP.
[Here is
On Friday, 25 June 2021 at 15:16:30 UTC, jfondren wrote:
I reckon that there's some other memory error and that the
parallelism is unrelated.
@safe:
```
source/AI.d(83,23): Error: cannot take address of local `rData`
in `@safe` function `main`
source/analysisEngine.d(560,20): Error: cannot
```
On Friday, 25 June 2021 at 14:44:13 UTC, seany wrote:
This particular location does not cause segfault.
It is segfaulting down the line in a completely unrelated
location... Wait I will try to make a MWP.
[Here is MWP](https://github.com/naturalmechanics/mwp).
Please compile with `dub build
On Friday, 25 June 2021 at 15:08:38 UTC, Ali Çehreli wrote:
On 6/25/21 7:21 AM, seany wrote:
> The code without the parallel foreach works fine. No segfault.
That's very common.
What I meant is, is the code written in a way to work safely in
a parallel foreach loop? (i.e. Is the c
On 6/25/21 7:21 AM, seany wrote:
> The code without the parallel foreach works fine. No segfault.
That's very common.
What I meant is, is the code written in a way to work safely in a
parallel foreach loop? (i.e. Is the code "independent"?) (But I assume
it is because it's be
On Friday, 25 June 2021 at 14:22:25 UTC, seany wrote:
On Friday, 25 June 2021 at 14:13:14 UTC, jfondren wrote:
On Friday, 25 June 2021 at 13:53:17 UTC, seany wrote:
[...]
A self-contained and complete example would help a lot, but
the likely
problem with this code is that you're accessing
On Friday, 25 June 2021 at 14:13:14 UTC, jfondren wrote:
On Friday, 25 June 2021 at 13:53:17 UTC, seany wrote:
[...]
A self-contained and complete example would help a lot, but the
likely
problem with this code is that you're accessing pnts[y][x] in
the
loop, which makes the loop bodies no
Do you still have two parallel loops? Are both with explicit
TaskPool objects? If not, I wonder whether multiple threads are
using the convenient 'parallel' function, stepping over each
others' toes. (I am not sure about this because perhaps it's
safe to do this; never tested.)
It is possi
On Friday, 25 June 2021 at 13:53:17 UTC, seany wrote:
I tried this .
int[][] pnts ;
pnts.length = fld.length;
enum threadCount = 2;
auto prTaskPool = new TaskPool(threadCount);
scope (exit) {
s.)
Another reason: 1 can be a horrible value for workUnitSize. Try 100,
1000, etc. and see whether it helps with performance.
> Even much deeper down in program, much further down the line...
> And the location of segfault is random.
Do you still have two parallel loops? Are both with
On Friday, 25 June 2021 at 13:53:17 UTC, seany wrote:
On Thursday, 24 June 2021 at 21:19:19 UTC, Ali Çehreli wrote:
[...]
I tried this .
int[][] pnts ;
pnts.length = fld.length;
enum threadCount = 2;
auto prTaskPool = new
On Thursday, 24 June 2021 at 21:19:19 UTC, Ali Çehreli wrote:
On 6/24/21 1:41 PM, seany wrote:
> Is there any way to control the number of CPU cores used in
> parallelization ?
Yes. You have to create a task pool explicitly:
import std.parallelism;
void main() {
enum threadCount = 2;
On 6/24/21 1:41 PM, seany wrote:
> Is there any way to control the number of CPU cores used in
> parallelization ?
Yes. You have to create a task pool explicitly:
import std.parallelism;
void main() {
enum threadCount = 2;
auto myTaskPool = new TaskPool(threadCount);
scope (exit) {
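Ali's example is cut off above; the usual shape of the pattern, with cleanup via `finish()`, is roughly this (a sketch; the loop body is an assumption):

```d
import std.parallelism : TaskPool;

void main()
{
    enum threadCount = 2;
    auto myTaskPool = new TaskPool(threadCount);
    scope (exit)
        myTaskPool.finish(); // let the workers terminate when done

    foreach (e; myTaskPool.parallel([1, 2, 3, 4]))
    {
        // ... per-element work, on at most threadCount workers ...
    }
}
```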
On Thursday, 24 June 2021 at 20:56:26 UTC, Ali Çehreli wrote:
On 6/24/21 1:33 PM, Bastiaan Veelo wrote:
> distributes the load across all cores (but one).
Last time I checked, the current thread would run tasks as well.
Ali
Indeed, thanks.
— Bastiaan.
On Thursday, 24 June 2021 at 21:05:28 UTC, Bastiaan Veelo wrote:
On Thursday, 24 June 2021 at 20:41:40 UTC, seany wrote:
Is there any way to control the number of CPU cores used in
parallelization ?
E.g : take 3 cores for the first parallel foreach - and then
for the second one, take 3 cores
On Thursday, 24 June 2021 at 20:41:40 UTC, seany wrote:
On Thursday, 24 June 2021 at 20:33:00 UTC, Bastiaan Veelo wrote:
By the way, nesting parallel `foreach` does not make much
sense, as one level already distributes the load across all
cores (but one). Additional parallelisation
On 6/24/21 1:33 PM, Bastiaan Veelo wrote:
> distributes the load across all cores (but one).
Last time I checked, the current thread would run tasks as well.
Ali
On Thursday, 24 June 2021 at 20:33:00 UTC, Bastiaan Veelo wrote:
By the way, nesting parallel `foreach` does not make much
sense, as one level already distributes the load across all
cores (but one). Additional parallelisation will likely just
add overhead, and have a net negative effect
On Thursday, 24 June 2021 at 18:23:01 UTC, seany wrote:
I have seen
[this](https://forum.dlang.org/thread/akhbvvjgeaspmjntz...@forum.dlang.org).
I can't call break from a parallel foreach.
Okay, is there a way to easily call .stop() in such a case?
Yes there is, but it won’t break
On Thursday, 24 June 2021 at 20:08:06 UTC, seany wrote:
On Thursday, 24 June 2021 at 19:46:52 UTC, Jerry wrote:
On Thursday, 24 June 2021 at 18:23:01 UTC, seany wrote:
[...]
Maybe I'm wrong here, but I don't think there is any way to do
that with parallel.
What I would do is negate
On Thursday, 24 June 2021 at 19:46:52 UTC, Jerry wrote:
On Thursday, 24 June 2021 at 18:23:01 UTC, seany wrote:
[...]
Maybe I'm wrong here, but I don't think there is any way to do
that with parallel.
What I would do is negate someConditionCheck and instead only
do work when there is work
On Thursday, 24 June 2021 at 18:23:01 UTC, seany wrote:
I have seen
[this](https://forum.dlang.org/thread/akhbvvjgeaspmjntz...@forum.dlang.org).
I can't call break from a parallel foreach.
Okay, is there a way to easily call .stop() in such a case?
Here is a case to consider:
outer
I have seen
[this](https://forum.dlang.org/thread/akhbvvjgeaspmjntz...@forum.dlang.org).
I can't call break from a parallel foreach.
Okay, is there a way to easily call .stop() in such a case?
Here is a case to consider:
outer: foreach(i, a; parallel(array_of_a)) {
foreach(j, b
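Since `break` is not allowed in a parallel foreach, a common workaround is a shared flag that later iterations check and skip on; a sketch (the stopping condition is a stand-in):

```d
import std.parallelism : parallel;
import core.atomic : atomicLoad, atomicStore;

void main()
{
    shared bool done = false;
    auto array_of_a = [1, 2, 3, 4, 5, 6, 7, 8];

    foreach (a; parallel(array_of_a))
    {
        // Iterations already running cannot be interrupted, but
        // later ones can bail out cheaply once the flag is set.
        if (atomicLoad(done))
            continue;
        if (a == 5) // stand-in for the real stopping condition
            atomicStore(done, true);
    }
}
```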
On Wednesday, 16 June 2021 at 06:29:21 UTC, z wrote:
On Tuesday, 15 June 2021 at 06:39:24 UTC, seany wrote:
...
This is the best I could do: https://run.dlang.io/is/dm8LBP
For some reason, LDC refuses to vectorize or even just unroll
the nonparallel version, and more than one `parallel
On Tuesday, 15 June 2021 at 06:39:24 UTC, seany wrote:
...
This is the best I could do: https://run.dlang.io/is/dm8LBP
For some reason, LDC refuses to vectorize or even just unroll the
nonparallel version, and more than one `parallel` corrupts the
results.
But judging by the results you
On Tuesday, 15 June 2021 at 09:09:29 UTC, Ali Çehreli wrote:
On 6/14/21 11:39 PM, seany wrote:
> [...]
I gave an example of it in my DConf Online 2020 presentation as
well:
https://www.youtube.com/watch?v=dRORNQIB2wA&t=1324s
> [...]
That is violating a
On 6/14/21 11:39 PM, seany wrote:
> I know that D has parallel foreach [like
> this](http://ddili.org/ders/d.en/parallelism.html).
I gave an example of it in my DConf Online 2020 presentation as well:
https://www.youtube.com/watch?v=dRORNQIB2wA&t=1324s
>
On Tuesday, 15 June 2021 at 07:41:06 UTC, jfondren wrote:
On Tuesday, 15 June 2021 at 06:39:24 UTC, seany wrote:
[...]
add a `writeln(c.length);` in your inner loop and consider
the output. If you were always pushing to the end of c, then
only unique numbers should be output. But I see e.g.