On 05/10/2018 19:23, Matthew Flatt wrote:
>
> We should certainly update the documentation with information about the
> limits of parallelism via places.
>
Added PR:
https://github.com/racket/racket/pull/2304
--
Paulo Matos
On 08/10/2018 22:12, Philip McGrath wrote:
> This is much closer to the metal than where I usually spend my time,
> but, if it turns out that multiple OS processes are better than OS
> threads in this case, Distributed Places might provide an easier path to
> move to multiple processes than
Hi all,
Apologies for the delay in sending this email, but I have been trying to
implement and test an alternative and wanted to be sure it works before
sending it off.
So, as Matthew suggested, this problem has to do with memory allocation.
The --no-alloc option in Matthew's suggested snippet
I just confirmed that this is due to memory allocation locking in the
kernel. If your places do no allocation then all is fine.
Paulo Matos
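For anyone following along, here is a minimal sketch (hypothetical code, not
Matthew's actual snippet) of the difference being discussed: a place body
that only does fixnum arithmetic allocates nothing, while one that builds
fresh pairs on every iteration goes through the shared allocator, which is
where the contention shows up as the number of places grows.

```racket
#lang racket
(require racket/place)

;; Hypothetical sketch: a worker place that is either allocation-free
;; (a pure fixnum loop) or allocation-heavy (a fresh list per step).
;; The flag and iteration count are sent over the channel, since a
;; `place` body cannot capture enclosing lexical variables.
(define (start-worker)
  (place ch
    (define allocating? (place-channel-get ch))
    (define n (place-channel-get ch))
    (place-channel-put
     ch
     (if allocating?
         (for/fold ([acc 0]) ([i (in-range n)])  ; allocates a list per step
           (apply + (list i acc)))
         (for/fold ([acc 0]) ([i (in-range n)])  ; no heap allocation
           (+ i acc))))))

(module+ main
  (define p (start-worker))
  (place-channel-put p #f)        ; #f = the non-allocating variant
  (place-channel-put p 10000000)
  (time (place-channel-get p)))
```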
On 08/10/2018 21:39, James Platt wrote:
> I wonder if this has anything to do with mitigation for Spectre, Meltdown or
> the other speculative execution
This is much closer to the metal than where I usually spend my time, but,
if it turns out that multiple OS processes are better than OS threads in
this case, Distributed Places might provide an easier path to move to
multiple processes than using `subprocess` directly:
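As context for that suggestion, here is a minimal sketch of the ordinary
places API whose shape Distributed Places generalizes across processes and
machines (the file name and exported function are hypothetical):

```racket
#lang racket
(require racket/place)

;; Hypothetical sketch: start a worker defined in another module.
;; "worker.rkt" would provide a one-argument function `main` that
;; receives the place channel and services requests on it.
(define p (dynamic-place "worker.rkt" 'main))
(place-channel-put p 42)
(displayln (place-channel-get p))
```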
I wonder if this has anything to do with mitigation for Spectre, Meltdown or
the other speculative execution vulnerabilities that have been identified
recently. I understand that some or all of the patches affect the performance
of multi-CPU processing in general.
James
On 10/5/2018 10:32 AM, Matthew Flatt wrote:
At Fri, 5 Oct 2018 15:36:04 +0200, Paulo Matos wrote:
> Again, I am really surprised that you mention that places are not
> separate processes. Documentation does say they are separate racket
> virtual machines, how is this accomplished if not by
if not I will have to redesign my system to use 'subprocess'
Expanding on this, for students on the list... Having many worker host
processes is not necessarily a bad thing. It can be more programmer
work, but it simplifies the parallelism in a way (e.g., "let the Linux
kernel worry
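A hedged sketch of that direction (the worker script and its stdin/stdout
protocol here are made up, not from this thread): each worker is a separate
`racket` process with its own address space and page table, so allocation in
one worker cannot contend with allocation in another.

```racket
#lang racket
;; Hypothetical sketch of one-OS-process-per-worker via `subprocess`.
;; "worker.rkt" is an assumed script that reads one number from stdin
;; and writes a result to stdout.
(define-values (proc out in err)
  (subprocess #f #f #f
              (find-executable-path "racket")
              "worker.rkt"))

(writeln 1000000 in)        ; send the work item
(flush-output in)
(close-output-port in)
(define result (read out))  ; read the worker's answer
(subprocess-wait proc)
```

The trade-off is exactly the one Matthew describes: more programmer work
(explicit serialization over pipes) in exchange for isolation the kernel
already knows how to schedule.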
At Fri, 5 Oct 2018 17:55:47 +0200, Paulo Matos wrote:
> Matthew, Sam, do you understand why this is happening?
I still think it's probably allocation, and probably specifically
contention on the process's page table. Do you see different behavior with
a non-allocating variant (via `--no-alloc`
I was trying to create a much more elaborate example when Matthew sent
his tiny one which is enough to show the problem.
I started a 64-core machine on AWS to show the issue.
I see a massive degradation as the number of places increases.
I use this slightly modified code:
#lang racket
(define
I tried this same program on my desktop, which also has 4 cores (i7-4770)
with hyperthreading. Here's what I see:
[samth@huor:~/work/grant_parallel_compilers/nsf_submissions (master)
plt] time r ~/Downloads/p.rkt 1
N: 1, cpu: 5808/5808.0, real: 5804
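The program being timed is truncated in this digest; as a hypothetical
reconstruction of the *kind* of benchmark under discussion (not the actual
p.rkt), the shape is: start N places running the same CPU-bound loop and
time until all of them finish. If the loop allocates, wall-clock time grows
with N instead of staying flat.

```racket
#lang racket
(require racket/place)

;; Hypothetical benchmark sketch: each place runs the same loop and
;; reports its result; the parent times how long all N take together.
(define (work-place)
  (place ch
    (place-channel-put
     ch
     (for/fold ([acc 0]) ([i (in-range 100000000)])
       (+ acc i)))))

(module+ main
  (define n (string->number
             (vector-ref (current-command-line-arguments) 0)))
  (define ps (for/list ([_ (in-range n)]) (work-place)))
  (time (for ([p ps]) (place-channel-get p))))
```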
At Fri, 5 Oct 2018 15:36:04 +0200, Paulo Matos wrote:
> Again, I am really surprised that you mention that places are not
> separate processes. Documentation does say they are separate racket
> virtual machines, how is this accomplished if not by using separate
> processes?
Each place is an OS
On 05/10/2018 14:15, Matthew Flatt wrote:
> It's difficult to be sure from your description, but it sounds like the
> problem may just be the usual one of scaling parallelism when
> communication is involved.
>
Matthew, thanks for the reply.
The interesting thing here is that there is no
It's difficult to be sure from your description, but it sounds like the
problem may just be the usual one of scaling parallelism when
communication is involved.
Red is probably synchronization. It might be synchronization due to the
communication you have between places, it might be
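To make the communication point concrete, a small sketch (hypothetical, not
anyone's actual code): every `place-channel-get` blocks until the matching
put, so a chatty request/response protocol serializes the two sides, and
batching messages is the usual fix when profiles show time spent in
synchronization.

```racket
#lang racket
(require racket/place)

;; Sketch: an echo worker that sends one reply per request. The parent
;; below does a full round trip per item, so it is synchronization-bound.
(define p
  (place ch
    (let loop ()
      (define v (place-channel-get ch))
      (unless (eof-object? v)
        (place-channel-put ch (add1 v))
        (loop)))))

(for ([i (in-range 1000)])
  (place-channel-put p i)
  (place-channel-get p))   ; blocks on every reply
(place-channel-put p eof)  ; shut the worker down
```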
All,
A quick update on this problem which is in my critical path.
I just noticed, in an attempt to reproduce it, that the same thing happens
during the package-setup part of the Racket compilation procedure.
I am running `make CPUS=24 in-place` on a 36-CPU machine and I see that
not only sometimes the
I attach yet another example where this behaviour is much more
noticeable. This is on a 64-core dedicated machine on Amazon AWS.
--
You received this message because you are subscribed to the Google Groups
"Racket Users" group.
To unsubscribe from this group and stop receiving emails from it,
Hi,
I am not sure whether this is an issue with places or something else, but my
devops-fu is poor and I am not even sure how to debug something like
this, so maybe someone with more knowledge than me might chime in
with a possible debugging method.
I was running some benchmarks and noticed