Re: Building GCC using C++
Basile Starynkevitch wrote, On Tuesday 15 January 2013 11:34 AM:
> On Tue, Jan 15, 2013 at 11:16:54AM +0530, Uday P. Khedker wrote:
> > I was trying to understand the exact meaning of a loose statement
> > floating around ("gcc has moved to C++ from version 4.7 onwards").
> > I reckon from http://gcc.gnu.org/wiki/gcc-in-cxx that now gcc is
> > compiled using C++. However, the very first line of the description
> > confused me. It says:
> >
> > > GCC has been building stages 2 and 3 in C++ mode for a while.
> >
> > My understanding was that stage 2 is built using the compiler created
> > in stage 1 and stage 3 is built using the compiler created in stage 2.
> > (Please see slide 17/53 in
> > http://www.cse.iitb.ac.in/grc/gcc-workshop-12/downloads/slides/gccw12-config-build.pdf).
> >
> > Can someone tell me what is the meaning of building stage 2 in C++
> > mode? If I restrict my languages (using --enable-languages) to C,
>
> My belief is that it is no longer possible to configure a recent GCC
> straight (non-cross) compiler without --enable-languages=c++ (that is,
> if you ask only for --enable-languages=c, either configure should bark,
> or it should also implicitly add C++).

I was able to build gcc-4.7.2 with --enable-languages=c only :-)

> I can't explain in detail how ...
>
> Cheers, & best wishes for 2013.

--
Dr. Uday Khedker
Professor
Department of Computer Science & Engg.
IIT Bombay, Powai, Mumbai 400 076, India.
Email: u...@cse.iitb.ac.in
Homepage: http://www.cse.iitb.ac.in/~uday
Phone: Office - 91 (22) 2572 2545 x 7717, 91 (22) 2576 7717 (Direct)
       Res. - 91 (22) 2572 2545 x 8717, 91 (22) 2576 8717 (Direct)
Re: Building GCC using C++
On Tue, Jan 15, 2013 at 11:16:54AM +0530, Uday P. Khedker wrote:
> I was trying to understand the exact meaning of a loose statement
> floating around ("gcc has moved to C++ from version 4.7 onwards").
> I reckon from http://gcc.gnu.org/wiki/gcc-in-cxx that now gcc is
> compiled using C++. However, the very first line of the description
> confused me. It says:
>
> > GCC has been building stages 2 and 3 in C++ mode for a while.
>
> My understanding was that stage 2 is built using the compiler created
> in stage 1 and stage 3 is built using the compiler created in stage 2.
> (Please see slide 17/53 in
> http://www.cse.iitb.ac.in/grc/gcc-workshop-12/downloads/slides/gccw12-config-build.pdf).
>
> Can someone tell me what is the meaning of building stage 2 in C++
> mode? If I restrict my languages (using --enable-languages) to C,

My belief is that it is no longer possible to configure a recent GCC straight (non-cross) compiler without --enable-languages=c++ (that is, if you ask only for --enable-languages=c, either configure should bark, or it should also implicitly add C++). I can't explain in detail how ...

Cheers, & best wishes for 2013.

--
Basile STARYNKEVITCH http://starynkevitch.net/Basile/
email: basilestarynkevitchnet  mobile: +33 6 8501 2359
8, rue de la Faiencerie, 92340 Bourg La Reine, France
*** opinions {are only mines, sont seulement les miennes} ***
Building GCC using C++
I was trying to understand the exact meaning of a loose statement floating around ("gcc has moved to C++ from version 4.7 onwards").

I reckon from http://gcc.gnu.org/wiki/gcc-in-cxx that now gcc is compiled using C++. However, the very first line of the description confused me. It says:

GCC has been building stages 2 and 3 in C++ mode for a while.

My understanding was that stage 2 is built using the compiler created in stage 1 and stage 3 is built using the compiler created in stage 2. (Please see slide 17/53 in http://www.cse.iitb.ac.in/grc/gcc-workshop-12/downloads/slides/gccw12-config-build.pdf).

Can someone tell me what is the meaning of building stage 2 in C++ mode? If I restrict my languages (using --enable-languages) to C, how can stage 2 be built in C++ mode? Or is it that, regardless of the choice given with the --enable-languages option, C and C++ compilers are created internally anyway? The build logs do not seem to bear out the above statement.

Thanks and regards,
Uday.

--
Dr. Uday Khedker
Professor
Department of Computer Science & Engg.
IIT Bombay, Powai, Mumbai 400 076, India.
Email: u...@cse.iitb.ac.in
Homepage: http://www.cse.iitb.ac.in/~uday
Phone: Office - 91 (22) 2572 2545 x 7717, 91 (22) 2576 7717 (Direct)
       Res. - 91 (22) 2572 2545 x 8717, 91 (22) 2576 8717 (Direct)
Re: mips16 and nomips16
On 01/14/2013 04:50 PM, David Daney wrote:
> On 01/14/2013 04:32 PM, reed kotler wrote:
> > I'm not understanding why mips16 and nomips16 are not simple
> > inheritable attributes.
>
> The mips16ness of a function must be known by the caller so that the
> appropriate version of the JAL/JALX instruction can be emitted.
>
> > i.e. you should be able to say:
> >
> > void foo();
> > void __attribute((nomips16)) foo();
> >
> > or
> >
> > void goo();
>
> Any call here would assume nomips16.
>
> > void __attribute((mips16)) goo();
>
> A call here would assume mips16. Which is it? If you allow it to
> change, one case will always be incorrect.
>
> Or perhaps I misunderstand the question.
>
> David Daney

I would assume that foo would be nomips16 and goo would be mips16. The definition of plain foo() or goo() says that nothing is specified. What is not clear then? This is how all such other attributes in gcc are handled.
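For reference, here is a small self-contained sketch of the usage being debated, written for a MIPS target. Whether the MIPS back end actually accepts the attributes on bare declarations like this is exactly the open question in this thread, and foo, goo and caller are just placeholder names:

/* Sketch only: the attribute on the prototype is what would tell the
   caller which call sequence (JAL vs. JALX, per David's note) to emit. */
void __attribute__((nomips16)) foo (void);   /* callers would assume a nomips16 callee */
void __attribute__((mips16))   goo (void);   /* callers would assume a mips16 callee   */

void
caller (void)
{
  foo ();   /* call to a function declared nomips16 */
  goo ();   /* call to a function declared mips16   */
}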
Re: mips16 and nomips16
On 01/14/2013 04:32 PM, reed kotler wrote:
> I'm not understanding why mips16 and nomips16 are not simple
> inheritable attributes.

The mips16ness of a function must be known by the caller so that the appropriate version of the JAL/JALX instruction can be emitted.

> i.e. you should be able to say:
>
> void foo();
> void __attribute((nomips16)) foo();
>
> or
>
> void goo();

Any call here would assume nomips16.

> void __attribute((mips16)) goo();

A call here would assume mips16. Which is it? If you allow it to change, one case will always be incorrect.

Or perhaps I misunderstand the question.

David Daney
mips16 and nomips16
I'm not understanding why mips16 and nomips16 are not simple inheritable attributes. I.e., you should be able to say:

void foo();
void __attribute((nomips16)) foo();

or

void goo();
void __attribute((mips16)) goo();

There do not seem to be any other cases in gcc where this would not be allowed.

Tia.

Reed
Re: stabs support in binutils, gcc, and gdb
> Then it is expected that dwarf debug is much bigger than stabs debug, > since the latter does not include any of the value tracking capabilities > of dwarf. Without that it is almost impossible for a debugger to > display the true value of local variables. Indeed. And it would be interesting to have figures with (1) -fno-var-tracking-assignments and (2) -fno-var-tracking then. -- Eric Botcazou
Re: stabs support in binutils, gcc, and gdb
David Taylor writes: > Optimized, -O2. Then it is expected that dwarf debug is much bigger than stabs debug, since the latter does not include any of the value tracking capabilities of dwarf. Without that it is almost impossible for a debugger to display the true value of local variables. Andreas. -- Andreas Schwab, sch...@linux-m68k.org GPG Key fingerprint = 58CA 54C7 6D53 942B 1756 01D3 44D5 214B 8276 4ED5 "And now for something completely different."
Re: register indirect addressing for global variables on powerpc
On Mon, Jan 14, 2013 at 2:00 AM, Thomas Baier wrote: > Dear list, > > I've just subscribed to the list and I hope this is the right place for > the following question. > > The operating system I'd like to use gcc for (OS-9, for the curious) > requires an ABI, where global variables are only accessed through > register indirect addressing. On the powerpc platform, r2 is used for > indirect addressing. There is already a feature in gcc which can use > register indirect addressing for the powerpc target for global variables > using a special small data area, but unfortunately this is not enough. > > Currently I'm a bit lost in where to start reading to get an idea how I > could add this new ABI to gcc. Can you please point me to some reading > or maybe even share some ideas how this could be accomplished? What do you mean by register indirect addressing? Not register plus displacement or register plus register? PowerPC only supports loading from an immediate value for a 16KB range around 0. Otherwise, all addresses are constructed at least partially in a register. If you are trying to say that you are encountering a problem because you are running out of space due to the limited size of the TOC -- with or without section anchors -- then, as Peter mentioned, you should look at the support for cmodel=large, which allows a two instruction sequence for 32 bit offsets into the TOC instead of the original 16 bit displacement. cmodel=medium sometimes directly accesses the data area from the r2 base pointer, which may violate the OS-9 ABI requirements. Thanks, David
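To make the addressing model being discussed concrete, here is a minimal sketch. It is illustrative only: 'counter' and 'get_counter' are made-up names, and the comments merely paraphrase what David's and Peter's replies say about TOC/small-data addressing, not a specification of the generated code:

/* A global variable reached through the base register (r2 in this thread). */
extern long counter;

long
get_counter (void)
{
  /* With the small data area / small TOC model, the address of 'counter'
     is a 16-bit displacement from the base register.  With -mcmodel=large
     (see Peter's reply), a two-instruction sequence forms a 32-bit offset
     from r2 instead, so far more data can be reached while every access
     still goes through the base register. */
  return counter;
}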
Re: stabs support in binutils, gcc, and gdb
On Mon, 14 Jan 2013 17:12:25 +0100, Doug Evans wrote: > Not that I think it's a priori worth the effort to dig deeper, but for > another datapoint, Redhat added an lzma-compressed mini-dwarf-debug > section. I'm not sure what it supports (if anything beyond making > backtraces better). It can contain anything that a separate debug info file may contain, including .debug_* sections. Access to those sections may be performance sub-optimal (blocks are decompressed per seek request); I am not aware whether it is usably or unusably slow. Jan
Re: stabs support in binutils, gcc, and gdb
>> Next, I compiled a 5000-line C++ source file at both -O0 and -O2. > > I have to assume that David is working with C code, as stabs debugging > for C++ is nearly unusable. I assumed that too, but I figured C++ would be worse than C as far as DWARF vs. stabs size. I'd still be interested to figure out what's causing that 11.5x expansion. -cary
Re: stabs support in binutils, gcc, and gdb
Andreas Schwab wrote: > David Taylor writes: > > > [As to what d90f.elf is -- that's unimportant; but, it's the kernel for > > one of the boards in one of our hardware products.] > > Is it an optimized or an unoptimized build? Optimized, -O2. According to find piped to wc, there are 2587 C files, 11 assembly files, 5 C++ files, and 2839 header files. Across the files, there are 4.9M lines in C files, 870K lines in header files, 9.7K lines in assembly, and 5.9K lines in C++.
Re: register indirect addressing for global variables on powerpc
On Mon, 2013-01-14 at 08:00 +0100, Thomas Baier wrote: > The operating system I'd like to use gcc for (OS-9, for the curious) > requires an ABI, where global variables are only accessed through > register indirect addressing. On the powerpc platform, r2 is used for > indirect addressing. There is already a feature in gcc which can use > register indirect addressing for the powerpc target for global variables > using a special small data area, but unfortunately this is not enough. If you look at the -mcmodel={small,medium,large} support we (IBM) added to powerpc64-linux, you will see how one can generate larger offsets to r2 (16-bit, 32-bit and 64-bit respectively). Maybe you can borrow some of that code? Peter
Re: register indirect addressing for global variables on powerpc
On Mon, Jan 14, 2013 at 10:03 AM, Eric Botcazou wrote: >> The Mac OS 9 ABI is very similar to the AIX ABI. So you should be >> able to start with the AIX ABI and go from there. > > Are you sure that you're talking about the same OS-9 as Thomas here? Oh, OS-9. It does sound more like the PowerOpen ABI (which the AIX ABI and the Mac OS 9 ABI are based on) anyway; meaning the TOC-based ABIs. Thanks, Andrew
Re: register indirect addressing for global variables on powerpc
> The Mac OS 9 ABI is very similar to the AIX ABI. So you should be > able to start with the AIX ABI and go from there. Are you sure that you're talking about the same OS-9 as Thomas here? -- Eric Botcazou
Re: register indirect addressing for global variables on powerpc
On Sun, Jan 13, 2013 at 11:00 PM, Thomas Baier wrote: > Dear list, > > I've just subscribed to the list and I hope this is the right place for > the following question. > > The operating system I'd like to use gcc for (OS-9, for the curious) > requires an ABI, where global variables are only accessed through > register indirect addressing. On the powerpc platform, r2 is used for > indirect addressing. There is already a feature in gcc which can use > register indirect addressing for the powerpc target for global variables > using a special small data area, but unfortunately this is not enough. > > Currently I'm a bit lost in where to start reading to get an idea how I > could add this new ABI to gcc. Can you please point me to some reading > or maybe even share some ideas how this could be accomplished? The Mac OS 9 ABI is very similar to the AIX ABI. So you should be able to start with the AIX ABI and go from there. Thanks, Andrew
Re: stabs support in binutils, gcc, and gdb
On Fri, Jan 11, 2013 at 6:55 PM, Cary Coutant wrote: >>> If I use objcopy --compress-debug-sections to compress the DWARF debug >>> info (but don't use it on the STABS debug info), then the file size >>> ratio is 3.4. >>> >>> While 3.4 is certainly better than 11.5, unless I can come up with a >>> solution where the ratio is less than 2, I'm not currently planning on >>> trying to convince them to switch to DWARF. >> >> The 3.4 number is the number I was interested in. >> Thanks for computing it. > > It's not really fair to compare compressed DWARF with uncompressed stabs, is > it? Data is data. Plus I doubt anyone is going to go to the trouble of compressing stabs. Not that I think it's a priori worth the effort to dig deeper, but for another datapoint, Redhat added an lzma-compressed mini-dwarf-debug section. I'm not sure what it supports (if anything beyond making backtraces better).
Re: Graphite TODO tasks
On 01/01/2013 10:53 AM, Shakthi Kannan wrote:
> Greetings!
>
> I would like to know if there are any TODO tasks that I can work on to
> get started with Graphite/GCC. I came across Tobias Grosser's post
> regarding Graphite development at:
>
> http://gcc.gnu.org/wiki/Graphite-4.8
>
> If you have any suggestions, please do let me know.

Hi Shakthi,

sorry to jump in late. I think there are several interesting open tasks.

1) Use isl code generation

isl 0.18 provides a new code generation. Enabling graphite to use it would be great. Main benefits:

- Remove one library dependence (gcc)

- Better code: the new code generator can often prove that a division is actually a plain C division or a plain modulo. This often replaces the otherwise costly floord() operations.

- Fine-grained parametrization: the code generator can be parametrized on a per-iteration level. This will allow us to fine-tune the code generation by setting unrolling, code-size parameters or full/partial tile separation for each iteration and loop depth.

2) Performance evaluation / improvements

Richard had some examples where the dependence calculation of isl took a very long time. It may be interesting to investigate where the problem is and what can be fixed. Richard also once suggested that it may be interesting to test whether isl can be sped up if it uses native 64-bit integers most of the time and only falls back to gmp when unavoidable. As I have also seen gmp showing up in many profiling runs, I would be very interested in work here.

3) Make graphite usable on polybench

We know that with source-to-source techniques, polyhedral optimizers can give large speedups on the polybench benchmark kernels. With the recently added isl scheduling optimizer, graphite has all the infrastructure to obtain the very same speedups. However, to my knowledge this was never tested and there may be a couple of bugs that are still in the way. It would be great to investigate what we can already achieve today and which bugs still need to be solved. If you could look into solving some of the bugs on your way, this would be a big step forward for graphite.

That's so far from my side. If you have further questions, feel free to ask. (Also ping me; I must admit that I sometimes miss mails that I really wanted to reply to.)

All the best,
Tobi
Re: stabs support in binutils, gcc, and gdb
On Fri, Jan 11, 2013 at 7:17 PM, Ian Lance Taylor wrote: > On Fri, Jan 11, 2013 at 5:55 PM, Cary Coutant wrote: >> >> Next, I compiled a 5000-line C++ source file at both -O0 and -O2. > > I have to assume that David is working with C code, as stabs debugging > for C++ is nearly unusable. That was my assumption, fwiw.
Re: bug report: not-a-number not recognized when compiling for x86_64
On Mon, 14 Jan 2013, Mischa Baars wrote: When running the example attached, you can see the compiler fails to recognize not-a-number's properly. Bug reports go to bugzilla. A NaN doesn't compare equal to anything, not even to itself; comparing x with itself (x != x) is actually the usual way to test if x is NaN. -- Marc Glisse
Re: bug report: not-a-number not recognized when compiling for x86_64
On 01/14/2013 08:34 AM, Mischa Baars wrote:
> When running the example attached, you can see the compiler fails to
> recognize not-a-number's properly.
>
> Anyone who would like to have a look?

Comparing a NaN for equality with anything always returns false, even when comparing it with another NaN. You want:

if (x != x)
  {
    printf("found a not-a-number\n");
  }

Andrew.
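Putting the replies in this thread together, a minimal self-contained version of the test that behaves as intended could look like the following. This is a sketch, assuming a C99 compiler and the usual <math.h> definitions of NAN and isnan():

#include <math.h>
#include <stdio.h>

int
main (void)
{
  double x = NAN;

  if (isnan (x))     /* the portable check: isnan() from <math.h> */
    printf ("isnan: found a not-a-number\n");

  if (x != x)        /* equivalent: a NaN is never equal to itself */
    printf ("x != x: found a not-a-number\n");

  return 0;
}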
Re: bug report: not-a-number not recognized when compiling for x86_64
On Mon, Jan 14, 2013 at 9:34 AM, Mischa Baars wrote: > Hi, > > When running the example attached, you can see the compiler fails to > recognize not-a-number's properly. > > Anyone who would like to have a look? That's how FP works. Use isnan(). Richard. > Regards, > Mischa.
register indirect addressing for global variables on powerpc
Dear list, I've just subscribed to the list and I hope this is the right place for the following question. The operating system I'd like to use gcc for (OS-9, for the curious) requires an ABI, where global variables are only accessed through register indirect addressing. On the powerpc platform, r2 is used for indirect addressing. There is already a feature in gcc which can use register indirect addressing for the powerpc target for global variables using a special small data area, but unfortunately this is not enough. Currently I'm a bit lost in where to start reading to get an idea how I could add this new ABI to gcc. Can you please point me to some reading or maybe even share some ideas how this could be accomplished? Best wishes from Austria, Thomas -- Thomas Baier MicroSys Electronics GmbH Mühlweg 1 D-82054 Sauerlach Tel.: +49 8104 801-132 Fax: +49 8104 801-110 Sitz der Gesellschaft: Sauerlach Geschäftsführer: Dipl.-Ing. Richard Loeffl, Dipl.-Ing. Dieter Pfeiffer HRB München 48340 Ust.ID No: DE129296566 mailto:ba...@microsys.de http://www.microsys.de/ ***
bug report: not-a-number not recognized when compiling for x86_64
Hi,

When running the example attached, you can see the compiler fails to recognize not-a-number's properly.

Anyone who would like to have a look?

Regards,
Mischa.

#include <stdio.h>
#include <math.h>

int main()
{
  double x = NAN;

  if (x == NAN)
    {
      printf("found a not-a-number\n");
    }

  return 0;
}
Re: Adding Rounding Mode to Operations Opcodes in Gimple and RTL
On Fri, Jan 11, 2013 at 5:41 PM, Joseph S. Myers wrote: > On Fri, 11 Jan 2013, Michael Zolotukhin wrote: > >> > Personally I'd think a natural starting point on the compiler side would >> > be to write a reasonably thorough and systematic testsuite for such >> > issues. That would cover all operations, for all floating-point types >> > (including ones such as __float128 and __float80), and conversions between >> > all pairs of floating-point types and either way between each >> > floating-point type and each integer type (including __int128 / unsigned >> > __int128), with operands being any of (constants, non-volatile variables >> > initialized with constants, volatile variables, vectors) and results being >> > (discarded, stored in non-volatile variables, stored in volatile >> > variables), in all the rounding modes, testing both results and exceptions >> > and confirming proper results when an operation is repeated after changes >> > of rounding mode or clearing exceptions. >> >> We mostly have problems when there is an 'interaction' between >> different rounding modes - so a ton of tests that checking correctness >> of a single operation in a specific rounding mode won't catch it. We >> could place all such tests in one file/function so that the compiler >> would transform it as it does now, so we'll catch the fail - but in >> this case we don't need many tests. > > Tests should generally be small to make it easier for people to track down > the failures. As you note, interactions are relevant - but that means > tests would do an operation in one rounding mode, check results, repeat in > another rounding mode, check results (which would catch the compiler > wrongly reusing the first results), repeat again for each mode. Tests for > each separate operation and type can still be separate. > >> So, generally I like the idea of having tests covering all the cases >> and then fixing them one-by-one, but I didn't catch what these tests >> would be except the ones from the trackers - it seems useless to have >> a bunch of tests, each of which contains a single operation and >> compares the result, even if we have a version of such test for all >> datatypes and rounding modes. > > I'm thinking in terms of full FENV_ACCESS test coverage, for both > exceptions and rounding modes, where there are many more things that can > go wrong for single operations (such as the operation being wrongly > discarded because the result isn't used, even though the exceptions are > tested, or a libgcc implementation of a function raising excess > exceptions). But even just for rounding modes, there are still various > uses for systematically covering different permutations. > > * Tests should test both building -frounding-math, without the FENV_ACCESS > pragma, and with the pragma but without that option, when the pragma is > implemented. > > * There's clearly some risk that implementations of __float128 using > soft-fp have bugs in how they interact with hardware exceptions and > rounding modes. These are part of libgcc; there should be test coverage > for such issues to provide confidence that GCC is handling exceptions and > rounding modes correctly. This also helps detect soft-fp bugs generally. > > * Some architectures may well have rounding mode bugs in operations > defined in their .md files. E.g., conversion of integer 0 to > floating-point in round-downwards mode on older 32-bit powerpc wrongly > produces -0.0 instead of +0.0. 
One purpose of tests for an issue with > significant machine dependencies is to allow people testing on an > architecture other than that originally used to develop the feature to > tell whether there are architecture-specific bugs. There are reasonably > thorough tests of conversions between floating-point and integers > (gcc.dg/torture/fp-int-convert-*) in the testsuite, which caught several > bugs when added (especially as regards conversions to/from TImode), and > sometimes continue to do so - but only cover round-to-nearest. > > * Maybe a .md file wrongly enables vector operations without -ffast-math > even though they do not handle all floating-point cases correctly. Since > this is a case where a risk of problems is reasonably predictable (it's > common for processors to define vector instructions in ways that do not > have the full IEEE semantics with rounding modes, exceptions, subnormals > etc., which means they shouldn't be used for vectorization on such > processors without appropriate -ffast-math options), verifying that vector > operations (GNU C generic vectors) handle floating-point correctly is also > desirable. > > > Thus, while adding testcases from specific bugs would ensure that those > very specific tests remained fixed, I don't think it would provide much > confidence that the overall FENV_ACCESS implementation is at all reliable, > only that a limited subset of bugs that people had actually reported had > been fixed (especially, areas such as conversions from TImod
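As a rough illustration of the "perform an operation, change the rounding mode, repeat and check" pattern described above, a minimal sketch of such a test might look like the following. Assumptions: it is compiled with -frounding-math (since the FENV_ACCESS pragma is not implemented), linked with -lm where fesetround() lives in libm, and volatile operands are used so the division is not folded at compile time:

#include <fenv.h>
#include <stdio.h>

int
main (void)
{
  volatile double num = 1.0, den = 3.0;

  fesetround (FE_DOWNWARD);
  double lo = num / den;        /* 1/3 rounded towards -inf */

  fesetround (FE_UPWARD);
  double hi = num / den;        /* must be recomputed, not reuse 'lo' */

  fesetround (FE_TONEAREST);
  printf ("downward: %.20g\nupward:   %.20g\n", lo, hi);

  /* 1/3 is inexact, so the two results must differ by one ulp; if they
     compare equal, the rounding mode was mishandled somewhere. */
  return (lo < hi) ? 0 : 1;
}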