Daniel Jacobowitz wrote:
On Thu, Nov 17, 2005 at 03:42:29PM -0800, Ian Lance Taylor wrote:
I just tried a simple unoptimized compile. -ftime-report said that
final took 5% of the time (obviously final does more than formatting),
and the assembler took 4% of the total user time, and system
Bernd Schmidt wrote:
So, maybe a simpler strategy could be to make minor modifications to gas
and gcc so that the former is linked in and the latter can pass strings
to it? Maybe that could get us a performance improvement without the
need for a massive overhaul of all backends, and the need
On Fri, 2005-11-18 at 09:29, Bernd Schmidt wrote:
Also, please keep in mind that generating and then assembling debug
info takes a huge amount of I/O relative to code size. I'd expect much
more than 1% saving the write-out and write-in on -g.
So, maybe a simpler strategy could be to
Richard Earnshaw writes:
- Then, incrementally, we can bypass the parse layer to call routines
directly in the assembler. This can be done both for directives and for
assembly of instructions. *but it can all be done incrementally*.
The main difficulty is preserving -S output in a
Hi,
On Thu, 17 Nov 2005, Kenneth Zadeck wrote:
A stack machine representation was chosen for the same reason. Tree
gimple is a series of statements, each statement being a tree.
IMHO we should follow that path of thinking. The representation of GIMPLE
where we do most optimizations on (i.e.
On Friday 18 November 2005 17:31, Michael Matz wrote:
Perhaps even a merger of both
approaches is sensible, three address form for most simple gimple
statements with falling back to stack encoding for deeply nested operands.
That would be a bad violation of the KISS principle.
Gr.
Steven
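The tradeoff being debated here (three-address form for simple statements vs. stack encoding for deeply nested operands) can be sketched with a toy serializer; this is not GCC code, and the syntax of both encodings is invented purely for illustration:

```python
# Toy illustration of the two encodings discussed above: serializing
# the statement  a = b + c * d  either as three-address statements
# with explicit temporaries, or as a postfix stream for a stack machine.

def three_address(tree):
    """Flatten an expression tree into three-address statements."""
    stmts = []
    counter = [0]
    def emit(node):
        if isinstance(node, str):          # leaf: a variable name
            return node
        op, lhs, rhs = node
        l, r = emit(lhs), emit(rhs)
        counter[0] += 1
        tmp = f"t{counter[0]}"             # invent a fresh temporary
        stmts.append(f"{tmp} = {l} {op} {r}")
        return tmp
    return stmts, emit(tree)

def stack_code(tree):
    """Flatten the same tree into a postfix operand stream."""
    ops = []
    def walk(node):
        if isinstance(node, str):
            ops.append(f"load {node}")
        else:
            op, lhs, rhs = node
            walk(lhs)
            walk(rhs)
            ops.append(op)
    walk(tree)
    return ops

expr = ("+", "b", ("*", "c", "d"))         # b + c * d
stmts, result = three_address(expr)
print(stmts)              # ['t1 = c * d', 't2 = b + t1']
print(stack_code(expr))   # ['load b', 'load c', 'load d', '*', '+']
```

The stack form needs no temporary names at all, which is what makes it compact for nested operands; the three-address form maps more directly onto GIMPLE statements.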
Kenneth Zadeck wrote:
The stack machine that we have in mind will be as stripped down as
possible. The idea is just to get the trees in and get them back out.
When I first read the proposal, I too wondered if a register machine would be
better here. I've come to the conclusion that it
Hi,
On Fri, 18 Nov 2005, Steven Bosscher wrote:
On Friday 18 November 2005 17:31, Michael Matz wrote:
Perhaps even a merger of both
approaches is sensible, three address form for most simple gimple
statements with falling back to stack encoding for deeply nested operands.
That would be
On 11/18/05, Laurent GUERBY [EMAIL PROTECTED] wrote:
On Fri, 2005-11-18 at 11:40 +, Andrew Haley wrote:
A nightmare scenario is debugging the compiler when its behaviour
changes due to using -S. Assembly source is something that we
maintainers use more than anyone else.
If we go the
On Nov 17, 2005, at 3:09 PM, Robert Dewar wrote:
I never like arguments which have loaded words like "lot" without
quantification. Just how long *is* spent in this step, is it really
significant?
"as" is 2-3%, as I recall (Finder_FE C++), of total build time.
On Nov 17, 2005, at 6:13 PM, Daniel Jacobowitz wrote:
Also, please keep in mind that generating and then assembling debug
info takes a huge amount of I/O relative to code size. I'd expect much
more than 1% saving the write-out and write-in on -g.
I'd hope that we can contribute code to
On Nov 17, 2005, at 6:33 PM, Dale Johannesen wrote:
When I arrived at Apple around 5 years ago, I was told of some recent
measurements that showed the assembler took around 5% of the time.
Yeah, it's been sped up actually.
Daniel Berlin [EMAIL PROTECTED] wrote:
Thanks for working on this. Any specific reason why using the LLVM
bytecode wasn't taken into account?
It was.
A large number of alternatives were explored, including CIL, the JVM,
LLVM, etc.
It is proven to be stable, high-level enough to perform
On Thu, 2005-11-17 at 01:27, Mark Mitchell wrote:
Richard Henderson wrote:
In Requirement 4, you say that the function F from input files a.o and
b.o should still be named F in the output file. Why is this requirement
more than simply having the debug information reflect that both names
Ian Lance Taylor wrote:
In section 3.4 (Linker) I have the same comment: for non-GNU targets,
the native linker is sometimes required, so modifying the linker
should not be a requirement. And the exact handling of .a files is
surprisingly target dependent, so while it would be easy to code
On Wed, 2005-11-16 at 20:33 -0700, Jeffrey A Law wrote:
Our understanding was that the debugger actually uses the symbol table,
in addition to the debugging information, in some cases. (This must be
true when not running with -g, but I thought it was true in other cases
as well.) It
hi,
Daniel Berlin wrote:
I discovered this when deep hacking into the symbol code of GDB a while
ago. Apparently, some people enjoy breakpointing symbols by using the
fully mangled name, which appears (nowadays) mainly in the minsym table.
This sort of hack is often used to work around
On Wed, Nov 16, 2005 at 02:26:28PM -0800, Mark Mitchell wrote:
http://gcc.gnu.org/projects/lto/lto.pdf
Section 4.2
What is the rationale for using a stack-based representation rather
than a register-based representation? An infinite register-based
solution would seem to map
Mark Mitchell [EMAIL PROTECTED] writes:
http://gcc.gnu.org/projects/lto/lto.pdf
Section 4.2 (Executable Representation) describes the GVM as a stack
machine, and mentions load, store, duplicate, and swap operations.
But it also discusses having registers which correspond to GIMPLE local
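A minimal sketch of a stack machine supporting the operations the proposal names (load, store, duplicate, swap); the tuple encoding and the extra "add" opcode are assumptions made for illustration, not the actual GVM design:

```python
# Minimal sketch of a stack machine with the four operations mentioned
# above (load, store, dup, swap) plus an add opcode so it can do work.
# The opcode set and encoding are illustrative assumptions, not the GVM.

def run(program, env):
    stack = []
    for insn in program:
        op = insn[0]
        if op == "load":       # push the value of a local "register"
            stack.append(env[insn[1]])
        elif op == "store":    # pop the top of stack into a local
            env[insn[1]] = stack.pop()
        elif op == "dup":      # duplicate the top of stack
            stack.append(stack[-1])
        elif op == "swap":     # exchange the top two entries
            stack[-1], stack[-2] = stack[-2], stack[-1]
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        else:
            raise ValueError(f"unknown opcode {op}")
    return env

# x = y + y, computed with dup instead of a second load:
env = run([("load", "y"), ("dup",), ("add",), ("store", "x")], {"y": 21})
print(env["x"])   # 42
```

The registers here are just named slots in `env`; the stack carries only the intermediate operands of a single statement, which matches the "get the trees in and get them back out" goal.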
Thanks for working on this. Any specific reason why using the LLVM
bytecode wasn't taken into account?
It was.
A large number of alternatives were explored, including CIL, the JVM,
LLVM, etc.
It is proven to be stable, high-level enough to perform any kind of needed optimization,
On Wed, Nov 16, 2005 at 02:26:28PM -0800, Mark Mitchell wrote:
http://gcc.gnu.org/projects/lto/lto.pdf
Section 4.2
What is the rationale for using a stack-based representation rather
than a register-based representation? An infinite register-based
solution would seem to
Richard Earnshaw [EMAIL PROTECTED] writes:
We spend a lot of time printing out the results of compilation as
assembly language, only to have to parse it all again in the assembler.
Given some of the problems this proposal throws up I think we should
seriously look at bypassing as much of
Ulrich Weigand [EMAIL PROTECTED] writes:
Conversely, I don't know how much we are going to care about speed here,
but I assume that we are going to care a bit. For the linker to
determine which files to pull in from an archive, it is going to have
to read the symbol tables of all the input
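The cost being discussed (the linker scanning every archive's symbol index to decide which members to pull in) starts with walking the archive file itself. A hedged sketch of that walk over a Unix `ar` archive follows; a real linker would go on to decode the symbol-index member (named "/" in GNU archives, "__.SYMDEF" in BSD ones), which this sketch does not attempt:

```python
# Sketch of walking the members of a Unix 'ar' archive: the 8-byte
# "!<arch>\n" magic, then 60-byte member headers (name 16, mtime 12,
# uid 6, gid 6, mode 8, size 10, "`\n"), each payload padded to an
# even offset. The symbol index is just the member named "/".

MAGIC = b"!<arch>\n"

def ar_members(data):
    """Yield (name, payload) for each member of an ar archive."""
    if not data.startswith(MAGIC):
        raise ValueError("not an ar archive")
    off = len(MAGIC)
    while off + 60 <= len(data):
        hdr = data[off:off + 60]
        name = hdr[0:16].decode().rstrip()       # space-padded name
        size = int(hdr[48:58])                   # decimal, space-padded
        yield name, data[off + 60:off + 60 + size]
        off += 60 + size + (size & 1)            # members are 2-aligned

# A two-member toy archive: the "/" symbol index plus one object file.
def _hdr(name, size):
    return (name.ljust(16) + "0".ljust(12) + "0".ljust(6)
            + "0".ljust(6) + "644".ljust(8)
            + str(size).ljust(10)).encode() + b"`\n"

toy = MAGIC + _hdr("/", 4) + b"\x00" * 4 + _hdr("a.o/", 3) + b"abc" + b"\n"
print([name for name, _ in ar_members(toy)])   # ['/', 'a.o/']
```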
Ian Lance Taylor wrote:
We spend a lot of time printing out the results of compilation as
assembly language, only to have to parse it all again in the assembler.
I never like arguments which have loaded words like "lot" without
quantification. Just how long *is* spent in this step, is it
Robert Dewar [EMAIL PROTECTED] writes:
Ian Lance Taylor wrote:
We spend a lot of time printing out the results of compilation as
assembly language, only to have to parse it all again in the assembler.
I never like arguments which have loaded words like "lot" without
quantification. Just
On Thu, Nov 17, 2005 at 03:42:29PM -0800, Ian Lance Taylor wrote:
I just tried a simple unoptimized compile. -ftime-report said that
final took 5% of the time (obviously final does more than formatting),
and the assembler took 4% of the total user time, and system time took
16% of wall clock
On Nov 17, 2005, at 3:09 PM, Robert Dewar wrote:
Richard Earnshaw wrote:
We spend a lot of time printing out the results of compilation as
assembly language, only to have to parse it all again in the assembler.
I never like arguments which have loaded words like "lot" without
quantification.
On Nov 17, 2005, at 21:33, Dale Johannesen wrote:
When I arrived at Apple around 5 years ago, I was told of some recent
measurements that showed the assembler took around 5% of the time.
Don't know if that's still accurate. Of course the speed of the assembler
is also relevant, and our stubs
The GCC community has talked about link-time optimization for some time.
In addition to results with other compilers, Geoff Keating's work on
inter-module optimization has demonstrated the potential for improved
code-generation from applying optimizations across translation units.
Some of us (Dan
The GCC community has talked about link-time optimization for some time.
In addition to results with other compilers, Geoff Keating's work on
inter-module optimization has demonstrated the potential for improved
code-generation from applying optimizations across translation units.
Our
Some of us (Dan Berlin, David Edelsohn, Steve Ellcey, Shin-Ming Liu,
Tony Linthicum, Mike Meissner, Kenny Zadeck, and myself) have developed
a high-level proposal for doing link-time optimization in GCC. At this
point, this is just a design sketch. We look forward to jointly
developing this
The GCC community has talked about link-time optimization for some time.
In addition to results with other compilers, Geoff Keating's work on
inter-module optimization has demonstrated the potential for improved
code-generation from applying optimizations across translation units.
I don't
The GCC community has talked about link-time optimization for some time.
In addition to results with other compilers, Geoff Keating's work on
inter-module optimization has demonstrated the potential for improved
code-generation from applying optimizations across translation units.
One thing
Mark Mitchell [EMAIL PROTECTED] wrote:
Thoughts?
Thanks for working on this. Any specific reason why using the LLVM bytecode
wasn't taken into account? It is proven to be stable, high-level enough to
perform any kind of needed optimization, and already features interpreters,
JITters and
On Thu, 2005-11-17 at 01:26 +0100, Giovanni Bajo wrote:
Mark Mitchell [EMAIL PROTECTED] wrote:
Thoughts?
Thanks for working on this. Any specific reason why using the LLVM bytecode
wasn't taken into account?
It was.
A large number of alternatives were explored, including CIL, the JVM,
>>>>> "Andrew" == Andrew Pinski [EMAIL PROTECTED] writes:
Andrew> One thing not mentioned here is how are you going to represent
Andrew> different eh personality functions between languages, because
Andrew> currently we cannot even do different ones in the same
Andrew> compiling at all.
I think that is
Daniel Berlin Wrote:
It [LLVM] is proven to be stable, high-level enough to
perform any kind of needed optimization,
This is not true, unfortunately. That's why it is called low
level virtual machine.
It doesn't have things we'd like to do high level optimizations
on, like
On Wed, Nov 16, 2005 at 02:26:28PM -0800, Mark Mitchell wrote:
http://gcc.gnu.org/projects/lto/lto.pdf
In Requirement 4, you say that the function F from input files a.o and
b.o should still be named F in the output file. Why is this requirement
more than simply having the debug information
Richard Henderson wrote:
In general, I'm going to just collect comments in a folder for a while,
and then try to reply once the dust has settled a bit. I'm interested
in seeing where things go, and my primary interest is in getting *some*
consensus, independent of a particular one.
But, I'll
On Wed, Nov 16, 2005 at 05:27:58PM -0800, Mark Mitchell wrote:
In Requirement 4, you say that the function F from input files a.o and
b.o should still be named F in the output file. Why is this requirement
more than simply having the debug information reflect that both names
were
Mark Mitchell [EMAIL PROTECTED] writes:
| The GCC community has talked about link-time optimization for some time.
| In addition to results with other compilers, Geoff Keating's work on
| inter-module optimization has demonstrated the potential for improved
| code-generation from applying
Some more comments (this time section by section and a little more thought out):
2.1:
Requirement 1: a good question is how does ICC or even XLC
do this without doing anything special? Or do they keep around
an on-the-side database?
(Requirements 2-4 assume Requirement 1)
Requirement 5: is
The document is on the web here:
http://gcc.gnu.org/projects/lto/lto.pdf
The LaTeX sources are in htdocs/projects/lto/*.tex.
Thoughts?
It may be worth mentioning that this type of optimization
applies mainly to one given type of output: a non-symbolic
a.out. When the output is a shared
Our understanding was that the debugger actually uses the symbol table,
in addition to the debugging information, in some cases. (This must be
true when not running with -g, but I thought it was true in other cases
as well.) It might be true for other tools, too.
I can't offhand recall if
Mark Mitchell [EMAIL PROTECTED] writes:
http://gcc.gnu.org/projects/lto/lto.pdf
Section 2.2.1 (Variables and Functions) mentions C++ inline functions.
It should also mention gcc's C language extern inline functions.
The same section should consider common symbols. These appear as