Joseph S. Myers wrote:
All in all, perhaps not the most efficient representation for memory
footprint, and the pointer chasing probably doesn't help (cache!).
But changing it is a lot more difficult than the GIMPLE tuples
project. I don't think it can be done.
I don't see any reason
On Mon, May 24, 2010 at 6:20 PM, Mark Mitchell m...@codesourcery.com wrote:
Steven Bosscher wrote:
The GIMPLE tuples work took man-years (note: plural). There was less
code to convert and the process of conversion was easier, relatively,
than the conversion of RTL would be. So your one person-year seems
grossly underestimated.
I dunno. To get good project
On Mon, 24 May 2010, Mark Mitchell wrote:
As to whether this is a better choice than working on GIMPLE back-ends,
I think that's unclear. There's no question that a GIMPLE back-end
would be prettier. I think it's a question of what your goals are. If
I don't think of the two as being
On Thu, 20 May 2010, Steven Bosscher wrote:
think, the tree-like representation. If you have an instruction like
(set (a) (b+c)) you could have, at the simplest, three integers (insn
uid, basic block, instruction code) and three pointers for operands.
In total, on a 64-bit host: 3*4 + 3*8 = 36 bytes
On Thu, May 20, 2010 at 11:21 PM, Xinliang David Li davi...@google.com wrote:
On Thu, May 20, 2010 at 2:18 PM, Steven Bosscher stevenb@gmail.com
wrote:
On Thu, May 20, 2010 at 11:14 PM, Xinliang David Li davi...@google.com
wrote:
stack variable overlay and stack slot assignments is here
On Fri, May 21, 2010 at 6:13 PM, Xinliang David Li davi...@google.com wrote:
Interesting. Thanks for gathering this.
I did a similar study internally on our C++ codebase. The results are
fairly different. In our case, the front end takes a LARGE chunk of
the compile time. The numbers below are taken from a full build of
one of our applications, consisting of ~4,500
Hello,
For some time now, I've wanted to see where compile time goes in a
typical GCC build, because nobody really seems to know what the
compiler spends its time on. The impressions that get published about
gcc usually indicate that there is at least a feeling that GCC is not
getting faster, and
Steven Bosscher wrote:
On 05/20/2010 09:17 PM, Vladimir Makarov wrote:
* Adding and subtracting the above numbers, the rest of the compiler,
which is mostly the RTL parts, still accounts for 100 - 17 - 16 - 8 = 59%
of the total compile time. This was the most surprising result for me.
That figure is a little skewed though, the rest is not entirely RTL.
Front-end (3):
Hi Vlad,
On Thu, May 20, 2010 at 9:17 PM, Vladimir Makarov wrote:
That figure is a little skewed though, the rest is not entirely RTL.
Now without some annoying typo in a formula...
Front-end (3):
  lexical_analysis    6.65
  preprocessing      27.59
  parser             31.53
Hi,
I don't know is it big or not to have such time spend in RTL parts. But I
think that this RTL part could be decreased if RTL (magically :) would have
smaller footprint and contain less details.
checks pockets...
Bah, no wand... :-)
I noticed while working on the dragonegg plugin that
Steven Bosscher stevenb@gmail.com writes:
And finally: expand. This should be just a change of IR format, from
GIMPLE to RTL. I have no idea why this pass always shows up in the top
10 of slowest parts of GCC. Lowering passes on e.g. WHIRL, or GENERIC
lowering to GIMPLE, never show up in
On Thu, May 20, 2010 at 2:09 PM, Ian Lance Taylor i...@google.com wrote:
On Thu, May 20, 2010 at 10:54 PM, Duncan Sands baldr...@free.fr wrote:
I noticed while working on the dragonegg plugin that replacing gimple -> RTL
with gimple -> LLVM IR significantly reduced the amount of memory used by
the compiler at -O0. I didn't investigate where the memory was going, but
On Thu, May 20, 2010 at 11:14 PM, Xinliang David Li davi...@google.com wrote:
stack variable overlay and stack slot assignments is here too.
Yes, and for these I would like to add a separate timevar. Agree?
Ciao!
Steven
On Thu, May 20, 2010 at 2:18 PM, Steven Bosscher stevenb@gmail.com wrote:
Yes, and for these I would like to add a separate timevar. Agree?
Yes.
On my codes, pre-RA instruction scheduling on X86-64 (a) improves run
times by roughly 10%, and (b) costs a lot of compile time.
The -fschedule-insns option didn't seem to be on in your time tests (I think
it's not on by default on that architecture at -O2).
Brad