On 9/18/19 4:01 AM, Richard Biener wrote:
On Tue, 17 Sep 2019, Nicholas Krause wrote:
On 9/17/19 2:37 AM, Richard Biener wrote:
On Mon, 16 Sep 2019, Nicholas Krause wrote:
Greetings Richard,
I don't know if it's currently possible, but what's the best way to either go
about it or use a tool to expose shared state at both the GIMPLE and RTL level.
This would allow us to figure out much better what algorithms or data
structures to choose to allow this to scale
Geert Bosch wrote:
Given that CPU usage is at 100% now for most jobs, such as
bootstrapping GCC, there is not much room for any improvement
through threading.
Geert, I find this a bit incomprehensible, the whole point
of threading is to increase CPU availability by using
multiple cores.
On Nov 10, 2006, at 9:08 PM, Geert Bosch wrote:
Most people aren't waiting for compilation of single files.
If they do, it is because a single compilation unit requires
parsing/compilation of too many unchanging files, in which case
the primary concern is avoiding redoing useless compilation.
. No amount of threading in the
compiler will remove either bottleneck.
-Geert
Dave Korn [EMAIL PROTECTED] writes:
The main place where threading may make sense, especially
with LTO, is the linker. This is a longer lived task, and
is the last step of compilation, where no other parallel
processes are active. Moreover, linking tends to be I/O
intensive, so a number
On 14 November 2006 18:30, Ian Lance Taylor wrote:
Dave Korn [EMAIL PROTECTED] writes:
The main place where threading may make sense, especially
with LTO, is the linker. This is a longer lived task, and
is the last step of compilation, where no other parallel
processes are active.
circumstances to get 99% of the benefit there to be had.
I suspect that the balance will tip when the number of cores starts to get
huge, and that at some time in the future we would benefit of the extra
parallelizability we could get from reducing the effective scheduling
granularity by threading
Dave Korn [EMAIL PROTECTED] writes:
It's irrelevant to the main discussion here, but in fact there is a
fair amount of possible threading in the linker proper, quite apart
from LTO. The linker spends a lot of time reading large files, and
the I/O wait can be parallelized.
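The overlapping-I/O idea can be sketched as below. This is purely illustrative and not how GNU ld is structured; `slurp` and `load_inputs` are hypothetical names, and the byte count stands in for the real merge step (symbol resolution), which remains sequential.

```cpp
#include <cstddef>
#include <fstream>
#include <future>
#include <sstream>
#include <string>
#include <vector>

// Read one file into memory. Each call blocks on disk I/O, so running
// several of these on separate threads lets the waits overlap.
std::string slurp(const std::string& path) {
    std::ifstream in(path, std::ios::binary);
    std::ostringstream buf;
    buf << in.rdbuf();
    return buf.str();
}

// Kick off one read per input file in parallel, then merge the results
// sequentially in input order (here the "merge" is just a byte count).
std::size_t load_inputs(const std::vector<std::string>& inputs) {
    std::vector<std::future<std::string>> jobs;
    for (const auto& path : inputs)
        jobs.push_back(std::async(std::launch::async, slurp, path));
    std::size_t total = 0;
    for (auto& job : jobs)
        total += job.get().size();  // join in input order
    return total;
}
```

The point is only that the reads proceed concurrently while the combining step stays deterministic and ordered.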
That's not
On Tue, Nov 14, 2006 at 07:15:19PM -, Dave Korn wrote:
Geert's followup explained this seeming anomaly: he means that the crude
high-level granularity of make -j is enough to keep all cpus busy at 100%,
and I'm fairly persuaded by the arguments that, at the moment, that's
sufficient in
Paul Brook wrote:
For other optimisations I'm not convinced there's an easy win compared with
make -j. You have to make sure those passes don't have any global state, and
as other people have pointed out garbage collection gets messy. The compile
server project did something similar, and
[EMAIL PROTECTED] wrote:
Each of the functions in a C/C++ program is dependent on
the global environment, but each is independent of each other.
Separate threads could process the tree/RTL for each function
independently, with the results merged on completion. This
may interact adversely with
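The fan-out/merge scheme described above can be sketched as follows. This is a toy model under loose assumptions, not GCC code: `FunctionBody`, `process_one`, and `process_all` are hypothetical names, and an `int` result stands in for the tree/RTL each thread would really produce.

```cpp
#include <future>
#include <string>
#include <vector>

// Toy stand-in for one function's IR; real GCC would hold tree/RTL here.
struct FunctionBody {
    std::string name;
    int size;  // pretend "work" metric
};

// Per-function processing touches no shared mutable state, so each
// body can safely run on its own thread.
int process_one(const FunctionBody& fn) {
    return fn.size * 2;  // placeholder for optimize + codegen
}

// Fan out one task per function, then merge the results in order on
// the main thread, where the shared global environment is touched.
std::vector<int> process_all(const std::vector<FunctionBody>& fns) {
    std::vector<std::future<int>> jobs;
    for (const auto& fn : fns)
        jobs.push_back(std::async(std::launch::async, process_one, fn));
    std::vector<int> results;
    for (auto& job : jobs)
        results.push_back(job.get());  // merge on completion
    return results;
}
```

The caveat in the message stands: this only works if per-function passes really are independent, which interacts badly with global state such as the garbage collector.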
On Nov 11, 2006, at 03:21, Mike Stump wrote:
The cost of my assembler is around 1.0% (ppc) to 1.4% (x86)
overhead as measured with -pipe -O2 on expr.c. If it was
converted, what type of speedup would you expect?
Given that CPU usage is at 100% now for most jobs, such as
bootstrapping GCC,
On Nov 13, 2006, at 21:27, Dave Korn wrote:
To be fair, Mike was talking about multi-core SMP, not threading
on a single
cpu, so given that CPU usage is at 100% now for most jobs, there is
an Nx100%
speedup to gain from using 1 thread on each of N cores.
I'm mostly building GCC on
On Nov 10, 2006, at 9:08 PM, Geert Bosch wrote:
I'd guess we win more by writing object files directly to disk like
virtually every other compiler on the planet.
The cost of my assembler is around 1.0% (ppc) to 1.4% (x86) overhead
as measured with -pipe -O2 on expr.c. If it was converted,
From: Mike Stump mrs at apple dot com
To: GCC Development gcc at gcc dot gnu dot org
Date: Fri, 10 Nov 2006 12:38:07 -0800
Subject: Threading the compiler
We're going to have to think
Let's just say, the CPU is doomed.
So you're building consensus for something that is doomed?
Seriously thought I don't really understand what sort of response
you're expecting.
Just consensus building.
To build a consensus you have to have something for people to agree or
disagree
On Sat, Nov 11, 2006 at 04:16:19PM +, Paul Brook wrote:
I don't know how much of the memory allocated is global readonly data (i.e.
suitable for sharing between threads). I wouldn't be surprised if it's a
relatively small fraction.
I don't have numbers on global readonly, but in typical
whole-program optimisation and SMP machines have been around for a
fair while now, so I'm guessing not.
I don't know of anything that is particularly hard about it, but, if
you know of bits that are hard, or have pointer to such, I'd be
interested in it.
You imply you're considering
Mike Stump wrote:
Thoughts?
Parallelizing GCC is an interesting problem for a couple
reasons: First, the problem is inherently sequential.
Second, GCC expects that each step in the process happens
in order, one after the other.
Most invocations of GCC are part of a cluster of similar
Geert Bosch wrote:
Most of my compilations (on Linux, at least) use close
to 100% of CPU. Adding more overhead for threading and
communication/synchronization can only hurt.
On a single-processor system, adding overhead for multi-
threading does reduce performance. On a multi-processor
Ross Ridge wrote:
Umm... those 80 processors that Intel is talking about are more like the
8 coprocessors in the Cell CPU.
Michael Eager wrote:
No, the Cell is an asymmetrical (vintage 2000) architecture.
The Cell CPU as a whole is asymmetrical, but I'm only comparing the
design to the 8 identical
We're going to have to think seriously about threading the compiler.
Intel predicts 80 cores in the near future (5 years).
http://hardware.slashdot.org/article.pl?sid=06/09/26/1937237&from=rss
To use this many cores for a single compile, we have to find ways to
split the work. The best
On Fri, Nov 10, 2006 at 12:38:07PM -0800, Mike Stump wrote:
How many hunks do we need, well, today I want 8 for 4.2 and 16 for
mainline, each release, just 2x more. I'm assuming nice, equal sized
hunks. For larger variations in hunk size, I'd need even more hunks.
Or, so that is just
On Nov 10, 2006, at 12:46 PM, H. J. Lu wrote:
Will use C++ help or hurt compiler parallelism? Does it really matter?
I'm not an expert, but, in the simple world I want, I want it to not
matter in the least. For the people writing most code in the
compiler, I want clear simple rules for
in LTO) could be to store persistently some
internal representation for each function (within a compilation unit) and to
recall it if the compiler notices that a given function didn't change.
However, for multi-threading the compiler, a significant issue might be the
internal GCC garbage collector
Mike Stump wrote:
...
Thoughts?
Raw thoughts:
1. Threading isn't going to help for I/O bound portions.
2. The OS should already be doing some of the work of threading.
Some 'parts' of the compiler should already be using CPUs: 'make',
the front-end (gcc) command, the language compiler,
On Fri, 2006-11-10 at 22:49 +0100, Marcin Dalecki wrote:
I don't think it can possibly hurt as long as people follow normal C++
coding rules.
Contrary to C there is no single general coding style for C++. In
fact for a project
of such a scale this may be indeed the most significant
On Nov 10, 2006, at 2:19 PM, Kevin Handy wrote:
What will the multi-core compiler design do to the old processors
(extreme slowness?)
Roughly speaking, I want it to add around 1000 extra instructions per
function compiled, in other words, nothing. The compile speed will
be what the
Mike Stump writes:
We're going to have to think seriously about threading the compiler. Intel
predicts 80 cores in the near future (5 years). [...] To use this many
cores for a single compile, we have to find ways to split the work. The
best way, of course is to have make -j80 do that for us
The competition is already starting to make progress in this area.
We don't want to spend time in locks or spinning and we don't want to
litter our code with such things, so, if we form areas that are fairly
well isolated and independent and then have a manager, manage the
compilation process
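One lock-free shape for that manager/worker split can be sketched as below. This is a minimal illustration, not GCC's design: `run_hunks` is a hypothetical name, and multiplying by 10 stands in for compiling one hunk. Workers claim hunks through a single atomic counter, so there are no locks to take or spin on in the hot path, and the "manager" only partitions the work and waits.

```cpp
#include <atomic>
#include <cstddef>
#include <thread>
#include <vector>

// Workers claim work items by bumping one shared atomic index; each
// hunk is processed in isolation, with no locks anywhere.
void run_hunks(std::vector<int>& hunks, unsigned nthreads) {
    std::atomic<std::size_t> next{0};
    auto worker = [&] {
        for (;;) {
            std::size_t i = next.fetch_add(1);
            if (i >= hunks.size()) return;  // no work left
            hunks[i] *= 10;  // placeholder for compiling one hunk
        }
    };
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < nthreads; ++t)
        pool.emplace_back(worker);
    for (auto& th : pool)
        th.join();  // the "manager" just waits for completion
}
```

Because every hunk index is handed out exactly once by `fetch_add`, no two workers ever touch the same element, which is the isolation the message asks for.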
On Nov 10, 2006, at 9:08 PM, Geert Bosch wrote:
The common case is that people just don't use the -j feature
of make because
1) they don't know about it
2) their IDE doesn't know about it
3) they got burned by bad Makefiles
4) it's just too much typing
Don't forget:
5) running 4 GCC
On 2006-11-11, at 06:08, Geert Bosch wrote:
Just compiling
int main() { puts ("Hello, world!"); return 0; }
takes 342 system calls on my Linux box, most of them
related to creating processes, repeated dynamic linking,
and other initialization stuff, and reading and writing
temporary files for
On Sat, 2006-11-11 at 00:08 -0500, Geert Bosch wrote:
Most of my compilations (on Linux, at least) use close
to 100% of CPU. Adding more overhead for threading and
communication/synchronization can only hurt.
In my daily work, I take processes that run 100% and make them use 100%
in less time.
On Nov 10, 2006, at 5:43 PM, Paul Brook wrote:
Can you make it run on my graphics card too?
:-) You know all the power on a bleeding edge system is in the GPU
now. People are already starting to migrate data processing for
their applications to it. Don't bet against it. In fact, we