Re: why no shortcut operation for comparison on _Complex operands

2012-03-26 Thread Bin.Cheng
On Mon, Mar 26, 2012 at 3:27 PM, Richard Guenther
 wrote:
> On Sun, Mar 25, 2012 at 2:42 PM, Bin.Cheng  wrote:
>> Hi,
>> In tree-complex.c's function expand_complex_comparison, GCC just
>> expands a comparison of complex
>> operands into comparisons on the inner type, like:
>>
>>  D.5375_17 = REALPART_EXPR <g2>;
>>  D.5376_18 = IMAGPART_EXPR <g2>;
>>  g2.1_5 = COMPLEX_EXPR <D.5375_17, D.5376_18>;
>>  D.5377_19 = REALPART_EXPR <g3>;
>>  D.5378_20 = IMAGPART_EXPR <g3>;
>>  g3.2_6 = COMPLEX_EXPR <D.5377_19, D.5378_20>;
>>  D.5379_21 = D.5375_17 == D.5377_19;
>>  D.5380_22 = D.5376_18 == D.5378_20;
>>  D.5381_23 = D.5379_21 & D.5380_22;
>>  if (D.5381_23 == 1)
>>    goto ;
>>  else
>>    goto ;
>>
>> So is it possible to do a short-circuit operation for the "&" on the
>> real/imag parts of the complex data?
>
> Sure.  Does the RTL expander not do that for your target?

Yes, expand_gimple_cond decides how to expand such code.
Whether the code is expanded into short-circuit operations depends on
BRANCH_COST: true for x86, false for ARM, at least.
I am wondering whether we should take a more general strategy rather
than just the branch-cost heuristic.
The new strategy should compare the cost of the operation against the
cost of a branch; e.g., on targets with no hardware floating-point
instructions, floating-point comparisons should be short-circuited,
since the helper function is very expensive.

Any idea? Thanks
-- 
Best Regards.


Re: Configure-time testing for GCC plugins to determine C vs C++? (Was Re: status of GCC & C++)

2012-03-26 Thread Basile Starynkevitch
On Mon, 26 Mar 2012 22:34:22 +0200
Romain Geissler  wrote:

> Hi,
> You'll find something like this :
> 
> /* Define if building with C++. */
> #ifndef USED_FOR_TARGET
> #define ENABLE_BUILD_WITH_CXX 1
> #endif
> 
> So that's it, you already have all you need, for all versions.
> 

I did mention ENABLE_BUILD_WITH_CXX in a previous email, but I am not
very satisfied by this solution.

I really think that GCC 4.7 should explicitly tell, e.g. through gcc -v,
how it was compiled (i.e. in C or in C++ mode).

This would help plugin package makers (not the same as plugin developers) a lot.

Cheers.


-- 
Basile STARYNKEVITCH http://starynkevitch.net/Basile/
email: basilestarynkevitchnet mobile: +33 6 8501 2359
8, rue de la Faiencerie, 92340 Bourg La Reine, France
*** opinions {are only mine, sont seulement les miennes} ***


RE: Question about Tree_function_versioning

2012-03-26 Thread Oleg Endo
On Mon, 2012-03-26 at 22:51 +, Iyer, Balaji V wrote:
> I have another question along the same lines. Is it possible to tell
> gcc to never delete a certain function even if it is never called in
> the executable?
> 

"__attribute__ ((used))" maybe?

Cheers,
Oleg



RE: Question about Tree_function_versioning

2012-03-26 Thread Iyer, Balaji V
I have another question along the same lines. Is it possible to tell gcc to 
never delete a certain function even if it is never called in the executable?

Any help is greatly appreciated!

Thanks,

Balaji V. Iyer.

-Original Message-
From: Martin Jambor [mailto:mjam...@suse.cz] 
Sent: Monday, March 26, 2012 8:52 AM
To: Iyer, Balaji V
Cc: 'gcc@gcc.gnu.org'
Subject: Re: Question about Tree_function_versioning

Hi,

On Mon, Mar 26, 2012 at 01:34:55AM +, Iyer, Balaji V wrote:
> Hello Everyone,
> I am currently trying to take certain functions (marked by certain
> attributes) and create vector version along with the scalar versions 
> of the function. For example, let's say I have a function my_add that 
> is marked with a certain attribute, I am trying to clone it into 
> my_add_vector and still keep the original my_add. For this, I am 
> trying to use tree_function_versioning in cgraphunit.c to clone the 
> cgraph_node into a new function. Does this function actually create a 
> 2nd function (called my_add_vector) and copy the body from my_add 
> function to the my_add_vector function or does it just create a node 
> called my_add_vector and then create a pointer to the body of the 
> my_add?
> 
> Is there a better approach for doing this?
> 

tree_function_versioning indeed does copy the body of a function into a new 
one, but that's the only thing it does.  You might be better served by its 
callers such as cgraph_function_versioning.  But I believe all cloning 
functions currently also make the new clone private to the current compilation 
unit (and thus subject to unreachable node removal if they have no callers) 
which is something you might not want.  If it is a problem, you'd either need 
to re-set the relevant decl and node attributes subsequently or change the 
cloning functions themselves.

I assume you're not operating within an IPA pass, in that case you'd need 
cgraph_create_virtual_clone and a transformation hook.

Martin


Re: Freescale 68HC11/68HC12 port (gcc newbie help request)

2012-03-26 Thread James Murray
On Wed, 2012-03-21 at 15:12 -0700, Ian Lance Taylor wrote:
> I can understand why you are doing this.  However, you should be aware
> that the compiler internals changed significantly in version 4.0.  Time
> spent working on detailed optimizations of gcc 3.4 is almost certainly
> time wasted.  Walking forward version by version makes some sense, I
> guess, but you shouldn't even look at the optimizations in the generated
> code until you get to at least 4.3.

Thanks for the advice. After a number of days fumbling around, I think
I'm going to concede defeat for the time being (!)

I have brought the code forward a little, but at each step there are
hurdles to overcome: using a current build environment highlights
ancient hidden bugs in the gcc code. The m68hc11 code, too, is frequently
struck by ICEs due to insns not matching constraints and the like.

The commentary in the 2003 GCC conference proceedings regarding the
difficulty of bringing stagnant targets up to date is spot on. Seeing
that it took two developers from MIPS six months to revive their target,
I realise that I do not have the time (or skill) to undertake this task
at present. Also, the m68hc11 is (was) a minority target.

However, I do think I'm able to make some small improvements on the
3.3.6 code for m68hc11 and will use that internally.

regards

James


[patch][RFC] bail out after front-end errors

2012-03-26 Thread Steven Bosscher
Hello,

This patch is one way to address PR44982. I see no good reason to call
cgraph_finalize_compilation_unit if there were parse errors. As Richi
already pointed out, GCC traditionally has proceeded after parse
errors to preserve warnings and errors we generate from the middle-end
and during semantic analysis. But it seems to me that those warnings
are not very meaningful after parse errors (-Wuninitialized after a
parse error??), and errors from the middle end are mostly for exotic
code (involving asm()s and the like). Bailing out after parse errors
is therefore IMHO the right thing to do for the common case.

Thoughts? Comments?

If the consensus is that this patch goes in, I'll also have to do some
work on the test suite, because some warnings and errors disappear.
List attached below. A lot of errors and warnings from g++ disappear.
I suspect this is because they are only issued during gimplification.
That is something I'll have to address before this patch could go in.
Before I spend the effort, I'd like to know if there is consensus on
the general direction proposed here... ;-)

Ciao!
Steven



Index: toplev.c
===================================================================
--- toplev.c	(revision 185813)
+++ toplev.c	(working copy)
@@ -561,9 +561,14 @@ compile_file (void)
   /* Compilation is now finished except for writing
  what's left of the symbol table output.  */

-  if (flag_syntax_only || flag_wpa)
+  /* If all we have to do is syntax checking, or if there were parse
+ errors, stop here.  */
+  if (flag_syntax_only || seen_error ())
 return;

+  if (flag_wpa)
+return;
+
   timevar_start (TV_PHASE_GENERATE);

   ggc_protect_identifiers = false;
@@ -571,12 +576,6 @@ compile_file (void)
   /* This must also call cgraph_finalize_compilation_unit.  */
   lang_hooks.decls.final_write_globals ();

-  if (seen_error ())
-{
-  timevar_stop (TV_PHASE_GENERATE);
-  return;
-}
-
   /* Compilation unit is finalized.  When producing non-fat LTO object, we are
  basically finished.  */
   if (in_lto_p || !flag_lto || flag_fat_lto_objects)


New failing tests:
> FAIL: gcc.dg/asm-7.c  (test for errors, line 15)
> FAIL: gcc.dg/asm-7.c  (test for errors, line 16)
> FAIL: gcc.dg/declspec-10.c  (test for warnings, line 19)
> FAIL: gcc.dg/declspec-11.c  (test for warnings, line 19)
> FAIL: gcc.dg/declspec-9.c  (test for errors, line 20)
> FAIL: gcc.dg/gnu99-static-1.c  (test for errors, line 21)
> FAIL: gcc.dg/gnu99-static-1.c  (test for errors, line 25)
> FAIL: gcc.dg/pr48552-1.c  (test for errors, line 16)
> FAIL: gcc.dg/pr48552-1.c  (test for errors, line 40)
> FAIL: gcc.dg/pr48552-1.c  (test for errors, line 52)
> FAIL: gcc.dg/pr48552-2.c  (test for errors, line 16)
> FAIL: gcc.dg/pr48552-2.c  (test for errors, line 40)
> FAIL: gcc.dg/pr48552-2.c  (test for errors, line 52)
> FAIL: gcc.dg/redecl-10.c  (test for warnings, line 15)
> FAIL: gcc.dg/redecl-10.c  (test for warnings, line 29)
> FAIL: gcc.dg/gomp/block-2.c  (test for errors, line 14)
> FAIL: gcc.dg/gomp/block-2.c  (test for errors, line 16)
> FAIL: gcc.dg/gomp/block-7.c  (test for errors, line 9)
> FAIL: gcc.dg/gomp/block-7.c  (test for errors, line 10)
> FAIL: gcc.dg/gomp/block-7.c  (test for errors, line 11)
> FAIL: gcc.dg/gomp/block-7.c  (test for errors, line 15)
> FAIL: gcc.dg/gomp/block-7.c  (test for errors, line 16)
> FAIL: gcc.dg/gomp/block-7.c  (test for errors, line 17)
> FAIL: gcc.dg/gomp/pr27415.c  (test for errors, line 9)
> FAIL: gcc.dg/gomp/pr27415.c  (test for errors, line 28)
> FAIL: gcc.dg/gomp/pr27415.c  (test for errors, line 37)
> FAIL: c-c++-common/tm/safe-3.c (internal compiler error)
> FAIL: c-c++-common/tm/safe-3.c (test for excess errors)
> FAIL: gcc.dg/tm/pr52141.c (internal compiler error)
> FAIL: gcc.dg/tm/pr52141.c (test for excess errors)
> FAIL: g++.dg/cpp0x/constexpr-ex1.C  (test for warnings, line 17)
> FAIL: g++.dg/cpp0x/constexpr-function2.C  (test for warnings, line 46)
> FAIL: g++.dg/cpp0x/constexpr-neg1.C  (test for warnings, line 5)
> FAIL: g++.dg/cpp0x/lambda/lambda-ctor-neg.C  (test for warnings, line 15)
> FAIL: g++.dg/cpp0x/lambda/lambda-ctor-neg.C not an aggregate (test for errors, line 16)
> FAIL: g++.dg/cpp0x/lambda/lambda-ctor-neg.C deleted default ctor (test for errors, line 17)
> FAIL: g++.dg/cpp0x/lambda/lambda-ctor-neg.C deleted assignment op (test for errors, line 18)
> FAIL: g++.dg/cpp0x/lambda/lambda-field-names.C no member named i (test for errors, line 11)
> FAIL: g++.dg/cpp0x/noexcept15.C  (test for errors, line 16)
> FAIL: g++.dg/cpp0x/pr47416.C  (test for errors, line 187)
> FAIL: g++.dg/cpp0x/pr47416.C  (test for warnings, line 213)
> FAIL: g++.dg/cpp0x/pr47416.C  (test for warnings, line 223)
> FAIL: g++.dg/cpp0x/static_assert2.C  (test for errors, line 14)
> FAIL: g++.dg/cpp0x/union1.C  (test for errors, line 17)
> FAIL: g++.dg/cpp0x/union1.C  (test for errors, line 18)
> FAIL: g++.dg/cpp0x/union1.C  (test for errors, line 28)
> FAIL:

Re: Configure-time testing for GCC plugins to determine C vs C++? (Was Re: status of GCC & C++)

2012-03-26 Thread Romain Geissler
Hi,

Le 26 mars 2012 à 20:33, Basile Starynkevitch a écrit :

> 
> And I still think that GCC 4.7.1 should be able to tell by itself if it was 
> compiled by C
> or by C++.
> 

Actually you can already find it for every GCC version you are interested in 
(4.6.x and 4.7.x), with very little logic, as was pointed out to you 
yesterday here http://gcc.gnu.org/ml/gcc/2012-03/msg00381.html and here 
http://gcc.gnu.org/ml/gcc/2012-03/msg00382.html (use the gcc -v solution, 
as using nm is not portable).

This will work in most cases: the target GCC you want to build a plugin for 
can be run on the build machine.

If you need to be able to cross-build a plugin on arch A for a targeted GCC 
running on host arch B, then you won't be able to invoke gcc -v.

Anyway, there is a solution that works all the time; all you need is to be 
able to grep a file for a given pattern. Just take a look at the following 
file:  $(gcc -print-file-name=plugin)/include/auto-host.h

You'll find something like this :

/* Define if building with C++. */
#ifndef USED_FOR_TARGET
#define ENABLE_BUILD_WITH_CXX 1
#endif

So that's it, you already have all you need, for all versions.
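A configure-time check along those lines might look like this. It is only a sketch: `decide_plugin_compiler` is a made-up helper name, and in a real script the header path would come from `$(gcc -print-file-name=plugin)/include/auto-host.h`.

```shell
#!/bin/sh
# Sketch: pick the compiler for building a plugin by grepping the target
# GCC's auto-host.h for ENABLE_BUILD_WITH_CXX, as described above.
decide_plugin_compiler() {
    # $1 = path to auto-host.h of the GCC the plugin targets
    if grep -q '^#define ENABLE_BUILD_WITH_CXX 1' "$1"; then
        echo 'g++'
    else
        echo 'gcc'
    fi
}

# Real usage would be something like:
#   PLUGIN_CC=$(decide_plugin_compiler \
#       "$(gcc -print-file-name=plugin)/include/auto-host.h")
```

Because it only greps a file, this works even when cross-building a plugin for a GCC that cannot run on the build machine.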

Cheers

Romain Geissler



Re: Animation showing 23 years of GCC development

2012-03-26 Thread Toon Moene

On 03/25/2012 11:31 PM, Diego Novillo wrote:


I just stumbled into this video animation showing a graphical
representation of GCC's source tree over the years.

It is a bit long, but it's amusing to recognize big events in GCC
(addition of Java, Ada, tree-ssa, etc) over time.

http://www.youtube.com/watch?v=ZEAlhVOZ8qQ

It lasts around 30 minutes.


Absolutely awesome.  The graphics are just stunning.

This is the description of the code generating the graphics:

> Software projects are displayed by Gource as an animated tree with
> the root directory of the project at its centre. Directories appear
> as branches with files as leaves. Developers can be seen working on
> the tree at the times they contributed to the project.

> Currently Gource includes built-in log generation support for Git,
> Mercurial and Bazaar and SVN (as of 0.29). Gource can also parse logs
> produced by several third party tools for CVS repositories.

Of course, due to its dependence on the code revision system in use by 
the project, it tends to blow up large code additions / changes ... it 
is not artificial intelligence - the change from g77 to gfortran was 
very important to the Fortran community, but unless you know exactly at 
which time the gfortran sources were added to the GCC repository, you'll 
miss it ...


Nevertheless, recommended - I'm going to watch it for the fourth time - 
to see if I can catch up with things I missed!


--
Toon Moene - e-mail: t...@moene.org - phone: +31 346 214290
Saturnushof 14, 3738 XG  Maartensdijk, The Netherlands
At home: http://moene.org/~toon/; weather: http://moene.org/~hirlam/
Progress of GNU Fortran: http://gcc.gnu.org/wiki/GFortran#news


Re: Configure-time testing for GCC plugins to determine C vs C++? (Was Re: status of GCC & C++)

2012-03-26 Thread Basile Starynkevitch
On Mon, 26 Mar 2012 13:13:22 -0400
David Malcolm  wrote:

> On Mon, 2012-03-26 at 17:07 +, Joseph S. Myers wrote:
> > On Mon, 26 Mar 2012, David Malcolm wrote:
> > 
> 
> I suppose now is a bad time to mention that my python plugin *doesn't*
> use autoconf for its configure script - I didn't want to use m4 given
> that python is available.  I'm sure I'll figure something out though.

For what it is worth, the MELT plugin doesn't use autoconf either.

And I still think that GCC 4.7.1 should be able to tell by itself if it was 
compiled by C
or by C++.

Cheers.


-- 
Basile STARYNKEVITCH http://starynkevitch.net/Basile/
email: basilestarynkevitchnet mobile: +33 6 8501 2359
8, rue de la Faiencerie, 92340 Bourg La Reine, France
*** opinions {are only mine, sont seulement les miennes} ***


Re: Configure-time testing for GCC plugins to determine C vs C++? (Was Re: status of GCC & C++)

2012-03-26 Thread David Malcolm
On Mon, 2012-03-26 at 17:07 +, Joseph S. Myers wrote:
> On Mon, 26 Mar 2012, David Malcolm wrote:
> 
> > Presumably a fix would be for the plugin's configuration phase to have a
> > test that tries to build a test plugin and run it, first building with a
> > C compiler, then a C++ compiler, and decides what compiler the real
> > plugin should be built with accordingly.
> 
> We've previously discussed providing some generic configure / build 
> support for plugins (standard autoconf macros intended for use in a 
> plugin's build system, for example).  If in future plugins are supported 
> on Windows hosts (or any other hosts that lack a -rdynamic equivalent) 
> then such generic support will be increasingly useful because of the need 
> to link plugins on such hosts against a shared library or libraries that 
> contain most of GCC.

I suppose now is a bad time to mention that my python plugin *doesn't*
use autoconf for its configure script - I didn't want to use m4 given
that python is available.  I'm sure I'll figure something out though.



Re: Backends with no exception handling on GCC47

2012-03-26 Thread Ian Lance Taylor
"Paulo J. Matos"  writes:

> I am porting my backend to GCC47 and during libgcc configuration I get:
> configure:4511: checking whether to use setjmp/longjmp exceptions
> configure:: /home/pm18/p4ws/pm18_binutils/bc/main/result/linux/intermediate/FirmwareGcc47Package/./gcc/xgcc
> -B/home/pm18/p4ws/pm18_binutils/bc/main/result/linux/intermediate/FirmwareGcc47Package/./gcc/
> -B/home/pm18/p4ws/pm18_binutils/bc/main/result/linux/image/gcc_470_1/xap-local-xap/bin/
> -B/home/pm18/p4ws/pm18_binutils/bc/main/result/linux/image/gcc_470_1/xap-local-xap/lib/
> -isystem /home/pm18/p4ws/pm18_binutils/bc/main/result/linux/image/gcc_470_1/xap-local-xap/include
> -isystem /home/pm18/p4ws/pm18_binutils/bc/main/result/linux/image/gcc_470_1/xap-local-xap/sys-include
> -c --save-temps -fexceptions conftest.c >&5
> conftest.c: In function 'foo':
> conftest.c:19:1: internal compiler error: in emit_move_insn, at expr.c:3435
> Please submit a full bug report,
> with preprocessed source if appropriate.
> See  for instructions.
> configure:: $? = 1

Errors and compilation failures during configuration are not necessarily
important.  You shouldn't spend your time looking at them.  You should
look at the results.

However, an internal compiler error is always considered a bug that
should be fixed, even if it is only replaced by an error message.  That
is, the fact that the compilation failed during libgcc configuration is
unimportant.  The fact that you got an internal compiler error is worth
looking into.


> Also, I get loads of errors like the following while testing for the 
> existence of system headers. Am I expected to have a working libc for my 
> port at this point in the GCC compilation?

It is not absolutely required, but if you have one you should use it.


> *** Configuration xap-local-xap not supported

You will have to find out where that last error message is coming from.
It's not happening because of errors in configure tests.  It's most
likely coming from libgcc/config.host.  You probably need to add an
entry for xap-*-* in that switch.
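A hypothetical libgcc/config.host entry for such a target could look like the following sketch; the tmake_file fragment is illustrative only (t-fdpbit, the software floating-point fragment, is just an example), not taken from a real xap port:

```shell
# Illustrative libgcc/config.host case arm for the xap target:
xap-*-*)
	tmake_file="$tmake_file t-fdpbit"
	;;
```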

Since you are porting to GCC 4.7, you probably need to move some support
from gcc/config/xap over to libgcc/config/xap.  In GCC 4.7 there has
been a lot of work to move target library code from gcc to libgcc.

Ian


Re: regrename creates invalid insn

2012-03-26 Thread Eric Botcazou
> Here, I think the problem is that we have an in-out operand whose chain
> is closed prematurely due to a bogus REG_DEAD note which shouldn't be
> there for a register set in the instruction.

IIRC I didn't see a REG_DEAD note, but I might be misremembering.

-- 
Eric Botcazou


Re: regrename creates invalid insn

2012-03-26 Thread Bernd Schmidt
On 03/26/2012 07:37 PM, Eric Botcazou wrote:
>> Does 4.7 still have the failure at all? I've checked with the 4.6
>> branch, and regrename gets confused because there's a REG_DEAD note for
>> the register, and another REG_UNUSED for the same reg. As far as I
>> remember, it used to be the case that there should not be a REG_DEAD
>> note for a register that gets set in the insn, but maybe df changed the
>> rules? Or maybe it was a df bug in 4.6?
> 
> My understanding is that the REG_UNUSED note causes the chain opened for a 
> dest 
> register operand to be immediately closed but, when you have multiple such 
> dest register operands, one would need to have the chain live "during the 
> instruction" or right after, so that you have a conflict with the other dest 
> register operands for the instruction.  This looks awkward though.

You're right on the behaviour of REG_UNUSED, but it only takes effect
after all destinations have been opened. So they still conflict with
each other, usually.

Here, I think the problem is that we have an in-out operand whose chain
is closed prematurely due to a bogus REG_DEAD note which shouldn't be
there for a register set in the instruction.


Bernd


Re: regrename creates invalid insn

2012-03-26 Thread Eric Botcazou
> Does 4.7 still have the failure at all? I've checked with the 4.6
> branch, and regrename gets confused because there's a REG_DEAD note for
> the register, and another REG_UNUSED for the same reg. As far as I
> remember, it used to be the case that there should not be a REG_DEAD
> note for a register that gets set in the insn, but maybe df changed the
> rules? Or maybe it was a df bug in 4.6?

My understanding is that the REG_UNUSED note causes the chain opened for a dest 
register operand to be immediately closed but, when you have multiple such 
dest register operands, one would need to have the chain live "during the 
instruction" or right after, so that you have a conflict with the other dest 
register operands for the instruction.  This looks awkward though.

-- 
Eric Botcazou


Re: Setting precision for a PSImode type

2012-03-26 Thread Peter Bigot
On Mon, Mar 5, 2012 at 10:38 AM, Bernd Schmidt  wrote:
> On 03/05/2012 05:24 PM, Peter Bigot wrote:
>> And is there any reason (other than it doesn't seem to have been done
>> before) to believe PSImode is the wrong way to support a
>> general-purpose 20-bit integral type in gcc?
>
> If you're using 4.7.0, it should be possible to use FRACTIONAL_INT_MODE
> and get reasonable results. However, it hasn't been tested much, since
> the final bits of the patch series which would have added 40 bit int
> support to the C frontend didn't make it in. See the discussion following
>  http://gcc.gnu.org/ml/gcc-patches/2011-07/msg00079.html

Thanks; I've updated to 4.7.0.  In this thread I found:

>On 07/01/11 23:18, Bernd Schmidt wrote:
>>> What is the function of having both PARTIAL_INT_MODE and
>>> FRACTIONAL_INT_MODE?
>>
>> Not having to change all the targets using PARTIAL_INT_MODE immediately
>> to use the better mechanism.
>
> Also, come to think of it, preventing the rest of the compiler from
> trying to use such a mode in case the target only supports some very
> specific operations on it. A port could choose to use PImode, defined in
> machmode.def (and get __int40_t support), or it could add its own
> private PDImode to use in specific situations only.

Another major difference seems to be that PARTIAL_INT_MODE places the
type in MODE_PARTIAL_INT, while FRACTIONAL_INT_MODE places it in
MODE_INT, and allows it to participate in things like
GET_MODE_WIDER_MODE regardless of whether the fractional mode is
actually available on the target.
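For concreteness, the two mechanisms are selected by the entry in the target's modes.def file. A hedged sketch of the two alternatives for a 20-bit mode (the argument values are illustrative, not from any shipped port):

```c
/* Alternative 1: a partial mode carved out of SImode; the resulting
   PSImode lives in MODE_PARTIAL_INT and is left alone by most of the
   middle end.  */
PARTIAL_INT_MODE (SI);

/* Alternative 2: a fractional mode in MODE_INT with 20 bits of
   precision stored in 4 bytes; it participates in things like
   GET_MODE_WIDER_MODE.  (Use one alternative or the other, not both.)  */
FRACTIONAL_INT_MODE (PSI, 20, 4);
```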

I'd really wanted to be able to make general-use 20-bit integers
available to the programmer, but it's looking ugly.  Does it still
make sense to use FRACTIONAL_INT_MODE on a target where there's
essentially an (incomplete) superset of instructions that operate on
the standard registers with 20 bits instead of their original 16 bits?
Those instructions are only available on a subset of the target
chips, and are enabled by a target-specific user option.  genmodes
doesn't seem to support that constraint.

Peter


Re: Configure-time testing for GCC plugins to determine C vs C++? (Was Re: status of GCC & C++)

2012-03-26 Thread Joseph S. Myers
On Mon, 26 Mar 2012, David Malcolm wrote:

> Presumably a fix would be for the plugin's configuration phase to have a
> test that tries to build a test plugin and run it, first building with a
> C compiler, then a C++ compiler, and decides what compiler the real
> plugin should be built with accordingly.

We've previously discussed providing some generic configure / build 
support for plugins (standard autoconf macros intended for use in a 
plugin's build system, for example).  If in future plugins are supported 
on Windows hosts (or any other hosts that lack a -rdynamic equivalent) 
then such generic support will be increasingly useful because of the need 
to link plugins on such hosts against a shared library or libraries that 
contain most of GCC.

-- 
Joseph S. Myers
jos...@codesourcery.com


Re: The state of glibc libm

2012-03-26 Thread Steven Munroe
On Mon, 2012-03-26 at 12:26 +0200, Vincent Lefevre wrote:
> On 2012-03-22 16:29:00 +, Joseph S. Myers wrote:
> > On Thu, 22 Mar 2012, Vincent Lefevre wrote:
> > > For the same reason, if the user chose long double instead of
> > > double, this may be because he wanted more precision than double.
> > 
> > You mean range?  IBM long double provides more precision, but not more 
> > range.
> 
> Well, precision and/or range. If double precision format is sufficient
> for his application, the user can just choose the "double" type. So,
> I don't think that it is useful to have long double = double.
> 
> Then concerning double-double vs quad (binary128) for the "long double"
> type, I think that quad would be more useful, in particular because
> it has been standardized and it is a true FP format. If need be (for
> efficiency reasons), double-double could still be implemented using
> the "double" type, via a library or ad-hoc code (that does something
> more clever, taking the context into account). And the same code (with
> just a change of the base type) could be reused to get a double-quad
> (i.e. quad + quad) arithmetic, that can be useful to implement the
> "long double" versions of the math functions (expl, and so on).
> 
This is much easier said than done. In practice it is a major ABI change
and would have to be staged over multiple (7-10) years.

> > > So, in the long term, the ABI should probably be changed to have
> > > long double = quadruple precision (binary128).
> > 
> > The ABI for Power Architecture changed away from quad precision to using 
> > IBM long double (the original SysV ABI for PowerPC used quad precision, 
> > the current ABI uses IBM long double)
> 
> Perhaps they could change back to quad precision.
> 
That is not the feedback we get from our customers. No one will use
software IEEE binary128 and we don't have hardware binary128. So far
there is abstract interest but no strong demand for this. So there is no
incentive to change.



Re: regrename creates invalid insn

2012-03-26 Thread Andreas Schwab
Bernd Schmidt  writes:

> Does 4.7 still have the failure at all?

Yes, see PR52573.

Andreas.

-- 
Andreas Schwab, sch...@linux-m68k.org
GPG Key fingerprint = 58CA 54C7 6D53 942B 1756  01D3 44D5 214B 8276 4ED5
"And now for something completely different."


Configure-time testing for GCC plugins to determine C vs C++? (Was Re: status of GCC & C++)

2012-03-26 Thread David Malcolm
On Sun, 2012-03-25 at 22:10 +0200, Basile Starynkevitch wrote:
> On Sun, 25 Mar 2012 20:30:31 +0200
> Basile Starynkevitch  wrote:
> > 
> > How can a plugin know that cc1 was compiled with C++ or just with
> > plain C? I don't really know (we do have GCCPLUGIN_VERSION, but should a 
> > plugin use
> > ENABLE_BUILD_WITH_CXX)?
> 
[...snip discussion of various ways to try to "ask" GCC  whether it was
built with a C or a C++ compiler...]

> Since 4.7 is probably the only release of GCC which can be compiled by C and 
> by C++ I
> believe it is a bug specific to that release.
> 
> I'm sure this bug affect several GCC plugins. It does affect GCC MELT for 
> instance.
> https://groups.google.com/forum/?fromgroups#!topic/gcc-melt/PRWr28sQExk

As noted in another thread, it also affects the Python plugin:
https://fedorahosted.org/pipermail/gcc-python-plugin/2012-March/000202.html
https://fedorahosted.org/pipermail/gcc-python-plugin/2012-March/000204.html

> My feeling is that it is not only the plugins' fault (so it is not only a 
> MELT plugin
> bug, but a GCC one)
> 
> 
> What do you think? Should 4.7.1 provide a fix to correct that? How? Testing 
> simply
> ENABLE_BUILD_WITH_GCC makes me feel unhappy; that name is really confusing, 
> if we
> understand and use it as GCC_IS_BUILT_WITH_CXX

Presumably a fix would be for the plugin's configuration phase to have a
test that tries to build a test plugin and run it, first building with a
C compiler, then a C++ compiler, and decides what compiler the real
plugin should be built with accordingly.

Presumably the only APIs that are guaranteed for use by plugins so far
are the ones directly relating to plugins themselves:
  * plugin_default_version_check
  * register_callback

I'd hoped that a suitable feature test might be a minimal test plugin
that simply tries to call plugin_default_version_check() in its init
function: that way the plugin_init can only run if the dynamic link
succeeds, and you've got the correct compiler.

However, annoyingly, in gcc-plugin.h
http://gcc.gnu.org/viewcvs/trunk/gcc/gcc-plugin.h?revision=184997&view=markup
'extern "C"' guards are present around the relevant declarations, making
these symbols unsuitable for use in a configure-time test.

So are there any plugin-visible symbols that aren't yet wrapped in an
extern "C"?

(FWIW I've been working on a possible plugin API for GCC, similar to the
one I posted in my earlier mail, trying to get my python plugin to
compile against it, but I don't have anything working yet)

> My wish would be to add, perhaps in gcc/configure.ac of GCC 4.7.1, something 
> which
> defines GCCPLUGIN_IN_CXX in e.g.  $(gcc-4.7 
> -print-file-name=plugin)/plugin-version.h
> when gcc-4.7 have been built with a C++ compiler so its plugins need to be 
> compiled with
> a C++ compiler. That information should also be accessible (e.g. for plugin 
> makers) thru
> some invocation of GCC (perhaps gcc -v).




Re: regrename creates invalid insn

2012-03-26 Thread Bernd Schmidt
On 03/13/2012 12:41 AM, Andreas Schwab wrote:
> Andreas Schwab  writes:
> 
>> Ian Lance Taylor  writes:
>>
>>> Andreas Schwab  writes:
>>>
 Ian Lance Taylor  writes:

> But it also looks like the pattern should use a match_scratch.

 It is also used as input in operand 2.
>>>
>>> Sorry, I missed that.
>>
>> That appears not to be an issue actually, there is already one use of
>> match_scratch together with a matching constraint in *cmpdi_internal.
>> But then, using match_scratch instead of match_operand doesn't really
>> fix the bug either (it only helps a simplified test case, but not the
>> original one).
> 
> It doesn't actually change anything (I was confused because 4.7/4.8 no
> longer generates the overlapping output for the simplified testcase).

Does 4.7 still have the failure at all? I've checked with the 4.6
branch, and regrename gets confused because there's a REG_DEAD note for
the register, and another REG_UNUSED for the same reg. As far as I
remember, it used to be the case that there should not be a REG_DEAD
note for a register that gets set in the insn, but maybe df changed the
rules? Or maybe it was a df bug in 4.6?


Bernd


Re: Question about Tree_function_versioning

2012-03-26 Thread Martin Jambor
Hi,

On Mon, Mar 26, 2012 at 01:34:55AM +, Iyer, Balaji V wrote:
> Hello Everyone,
> I am currently trying to take certain functions (marked by certain
> attributes) and create vector version along with the scalar versions
> of the function. For example, let's say I have a function my_add
> that is marked with a certain attribute, I am trying to clone it
> into my_add_vector and still keep the original my_add. For this, I
> am trying to use tree_function_versioning in cgraphunit.c to clone
> the cgraph_node into a new function. Does this function actually
> create a 2nd function (called my_add_vector) and copy the body from
> my_add function to the my_add_vector function or does it just create
> a node called my_add_vector and then create a pointer to the body of
> the my_add?
> 
> Is there a better approach for doing this?
> 

tree_function_versioning indeed does copy the body of a function into
a new one, but that's the only thing it does.  You might be better
served by its callers such as cgraph_function_versioning.  But I
believe all cloning functions currently also make the new clone
private to the current compilation unit (and thus subject to
unreachable node removal if they have no callers) which is something
you might not want.  If it is a problem, you'd either need to re-set
the relevant decl and node attributes subsequently or change the
cloning functions themselves.

I assume you're not operating within an IPA pass; if you were, you'd
need cgraph_create_virtual_clone and a transformation hook instead.

Martin


Re: GSoC :Project Idea(Before final Submission) for review and feedback

2012-03-26 Thread Subrata Biswas
Thank You David for your suggestion and feedback. I'll let you know my
final proposal (after the modification based on your feedback and my
latest study) as early as possible.

I would like to request everyone in this community to kindly enrich me
with more suggestions and guidance to prepare my final project
proposal for GSoC2012.

Thanks a lot to everyone for your active and helpful feedback!!!

On 26 March 2012 17:04, David Brown  wrote:
> On 25/03/2012 11:55, Oleg Endo wrote:
>>
>> Please reply in CC to the GCC mailing list, so others can follow the
>> discussion.
>>
>> On Sun, 2012-03-25 at 09:21 +0530, Subrata Biswas wrote:
>>>
>>> On 25 March 2012 03:59, Oleg Endo  wrote:


 I might be misunderstanding the idea...
 Let's assume you've got a program that doesn't compile, and you leave
 out those erroneous blocks to enforce successful compilation of the
 broken program.  How are you going to figure out for which blocks it is
 actually safe to be removed and for which it isn't?
>>>
>>>
>>> I can do it by tracing the code blocks that depend on the
>>> erroneous block, i.e., any block that is data/control dependent on
>>> this erroneous block or line of code (its output or written value
>>> is read) will be eliminated.
>>>
 Effectively, you'll
 be changing the original semantics of a program, and those semantic
 changes might be completely not what the programmer originally had in
 mind.  In the worst case, something might end up with an (un)formatted
 harddisk...*

 Cheers,
 Oleg

>>> Thank you sir for your great feedback. You have understood it
>>> correctly. The programmer will be informed about the change in
>>> code and semantics. (Notice that this plug-in is not going to
>>> modify the original code! It just copies the original code and performs
>>> all the operations on the temporary file.) Even from the partial
>>> execution of the code the programmer will get an overview of his
>>> actual progress.
>>>
>>> suppose the program written by the programmer be:
>>>
>>> 1 int main(void)
>>> 2 {
>>> 3    int arr[]={3,4,-10,22,33,37,11};
>>> 4    sort(arr);
>>> 5    int a = arr[3] // Now suppose the programmer missed the semicolon
>>> here. Which generates a compilation error at line 5;
>>> 6    printf("%d\n",a);
>>> 7    for(i=0;i<7;i++)
>>> 8    {
>>> 9        printf("%d\n",arr[i]);
>>> 10    }
>>> 11  }
>>>
>>>
>>> Now if we just analyze the data (i.e. variables), we can easily find
>>> that the only data dependency is between line 5 and line 6.
>>> The rest of the program is not affected by eliminating or
>>> commenting out line 5.
>>>
>
> There is also a dependency between lines 6 and 9 - the second printf cannot
> be done until the first printf is complete.  Anything that involves calls to
> external code, or reading or writing volatile data, must be done in order
> and as specified, and cannot be omitted or re-arranged.  So if you omit the
> first printf, then you must omit all later external calls or volatile
> accesses.  And everything else is then subject to dead-code elimination
> because it has no effect, and can be omitted.
>
> In other words, the best you can do is spot the error, replace it with a
> call to abort(), and generate a partial execution up to the failing point.
>  There is no way to continue while still being the same program.  And if it
> is a different program, how does it help anyone to continue?
>
> I'd suggest then that you are better off thinking of a C interpreter, that
> will interpret and execute C code as it goes along.
>
>



-- 
Thanking You,

Regards
Subrata Biswas
MTech (pursuing)
Computer Science and Engineering
Indian Institute of Technology, Roorkee
Mob: +91 7417474559


Re: GSoC :Project Idea(Before final Submission) for review and feedback

2012-03-26 Thread David Brown

On 25/03/2012 11:55, Oleg Endo wrote:

Please reply in CC to the GCC mailing list, so others can follow the
discussion.

On Sun, 2012-03-25 at 09:21 +0530, Subrata Biswas wrote:

On 25 March 2012 03:59, Oleg Endo  wrote:


I might be misunderstanding the idea...
Let's assume you've got a program that doesn't compile, and you leave
out those erroneous blocks to enforce successful compilation of the
broken program.  How are you going to figure out for which blocks it is
actually safe to be removed and for which it isn't?


I can do it by tracing the code blocks that depend on the
erroneous block, i.e., any block that is data/control dependent on
this erroneous block or line of code (its output or written value
is read) will be eliminated.


Effectively, you'll
be changing the original semantics of a program, and those semantic
changes might be completely not what the programmer originally had in
mind.  In the worst case, something might end up with an (un)formatted
harddisk...*

Cheers,
Oleg


Thank you sir for your great feedback. You have understood it
correctly. The programmer will be informed about the change in
code and semantics. (Notice that this plug-in is not going to
modify the original code! It just copies the original code and performs
all the operations on the temporary file.) Even from the partial
execution of the code the programmer will get an overview of his
actual progress.

suppose the program written by the programmer be:

1 int main(void)
2 {
3    int arr[]={3,4,-10,22,33,37,11};
4    sort(arr);
5    int a = arr[3] // Now suppose the programmer missed the semicolon
here. Which generates a compilation error at line 5;
6    printf("%d\n",a);
7    for(i=0;i<7;i++)
8    {
9        printf("%d\n",arr[i]);
10    }
11  }


Now if we just analyze the data (i.e. variables), we can easily find
that the only data dependency is between line 5 and line 6.
The rest of the program is not affected by eliminating or
commenting out line 5.



There is also a dependency between lines 6 and 9 - the second printf 
cannot be done until the first printf is complete.  Anything that 
involves calls to external code, or reading or writing volatile data, 
must be done in order and as specified, and cannot be omitted or 
re-arranged.  So if you omit the first printf, then you must omit all 
later external calls or volatile accesses.  And everything else is then 
subject to dead-code elimination because it has no effect, and can be 
omitted.


In other words, the best you can do is spot the error, replace it with a 
call to abort(), and generate a partial execution up to the failing 
point.  There is no way to continue while still being the same program. 
 And if it is a different program, how does it help anyone to continue?


I'd suggest then that you are better off thinking of a C interpreter, 
that will interpret and execute C code as it goes along.
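
The dependency-elimination scheme being debated can be sketched with a toy
model. The Stmt representation and the rules below are simplifying
assumptions for illustration only, not GCC's actual dependence machinery:

```python
# Toy model of the proposed "comment out the erroneous line and its
# dependents" idea.  Each statement records what it reads and writes and
# whether it has an observable side effect (e.g. a printf call).

from dataclasses import dataclass, field

@dataclass
class Stmt:
    line: int
    reads: set = field(default_factory=set)
    writes: set = field(default_factory=set)
    side_effect: bool = False

def eliminate(stmts, bad_line):
    """Return the set of lines that must be removed along with bad_line."""
    removed = {bad_line}
    dead_vars = set()            # values produced by removed statements
    side_effect_removed = False  # has an observable effect been dropped?
    for s in stmts:
        if s.line in removed:
            dead_vars |= s.writes
            side_effect_removed |= s.side_effect
            continue
        # Data dependence: reads a value produced by a removed statement.
        if s.reads & dead_vars:
            removed.add(s.line)
            dead_vars |= s.writes
            side_effect_removed |= s.side_effect
        # Ordering constraint (David Brown's point): once an observable
        # side effect is dropped, later side effects can't be kept as-is.
        elif s.side_effect and side_effect_removed:
            removed.add(s.line)
    return removed

# The example program from the thread:
program = [
    Stmt(3, writes={"arr"}),
    Stmt(4, reads={"arr"}, writes={"arr"}),   # sort(arr)
    Stmt(5, reads={"arr"}, writes={"a"}),     # erroneous line
    Stmt(6, reads={"a"}, side_effect=True),   # printf(a)
    Stmt(9, reads={"arr"}, side_effect=True), # printf(arr[i]) loop body
]

print(sorted(eliminate(program, 5)))  # [5, 6, 9]
```

Running it on the thread's example removes lines 5, 6 and 9: once line 6's
printf is dropped, line 9's printf cannot be kept either, which is exactly
the objection raised above.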





Re: The state of glibc libm

2012-03-26 Thread Vincent Lefevre
On 2012-03-22 16:29:00 +, Joseph S. Myers wrote:
> On Thu, 22 Mar 2012, Vincent Lefevre wrote:
> > For the same reason, if the user chose long double instead of
> > double, this may be because he wanted more precision than double.
> 
> You mean range?  IBM long double provides more precision, but not more 
> range.

Well, precision and/or range. If the double-precision format is sufficient
for his application, the user can just choose the "double" type. So
I don't think it is useful to have long double = double.

Then concerning double-double vs quad (binary128) for the "long double"
type, I think that quad would be more useful, in particular because
it has been standardized and it is a true FP format. If need be (for
efficiency reasons), double-double could still be implemented using
the "double" type, via a library or ad-hoc code (that does something
more clever, taking the context into account). And the same code (with
just a change of the base type) could be reused to get a double-quad
(i.e. quad + quad) arithmetic, that can be useful to implement the
"long double" versions of the math functions (expl, and so on).

> > So, in the long term, the ABI should probably be changed to have
> > long double = quadruple precision (binary128).
> 
> The ABI for Power Architecture changed away from quad precision to using 
> IBM long double (the original SysV ABI for PowerPC used quad precision, 
> the current ABI uses IBM long double).

Perhaps they could change back to quad precision.

-- 
Vincent Lefèvre  - Web: 
100% accessible validated (X)HTML - Blog: 
Work: CR INRIA - computer arithmetic / AriC project (LIP, ENS-Lyon)


Backends with no exception handling on GCC47

2012-03-26 Thread Paulo J. Matos
Hello,

I am porting my backend to GCC47 and during libgcc configuration I get:
configure:4511: checking whether to use setjmp/longjmp exceptions
configure:: /home/pm18/p4ws/pm18_binutils/bc/main/result/linux/
intermediate/FirmwareGcc47Package/./gcc/xgcc -B/home/pm18/p4ws/
pm18_binutils/bc/main/result/linux/intermediate/Firmware
Gcc47Package/./gcc/ -B/home/pm18/p4ws/pm18_binutils/bc/main/result/linux/
image/gcc_470_1/xap-local-xap/bin/ -B/home/pm18/p4ws/pm18_binutils/bc/
main/result/linux/image/gcc_470_1/xap-l
ocal-xap/lib/ -isystem /home/pm18/p4ws/pm18_binutils/bc/main/result/linux/
image/gcc_470_1/xap-local-xap/include -isystem /home/pm18/p4ws/
pm18_binutils/bc/main/result/linux/image/gcc_
470_1/xap-local-xap/sys-include -c --save-temps -fexceptions  
conftest.c >&5
conftest.c: In function 'foo':
conftest.c:19:1: internal compiler error: in emit_move_insn, at 
expr.c:3435
Please submit a full bug report,
with preprocessed source if appropriate.
See  for instructions.
configure:: $? = 1

My guess is that this is due to the use of -fexceptions. The failing 
program is:
| /* confdefs.h */
| #define PACKAGE_NAME "GNU C Runtime Library"
| #define PACKAGE_TARNAME "libgcc"
| #define PACKAGE_VERSION "1.0"
| #define PACKAGE_STRING "GNU C Runtime Library 1.0"
| #define PACKAGE_BUGREPORT ""
| #define PACKAGE_URL "http://www.gnu.org/software/libgcc/";
| #define SIZEOF_DOUBLE 0
| #define SIZEOF_LONG_DOUBLE 0
| #define HAVE_GETIPINFO 1
| /* end confdefs.h.  */
|
| void bar ();
| void clean (int *);
| void foo ()
| {
|   int i __attribute__ ((cleanup (clean)));
|   bar();
| }
|

Is there any new requirement for backends related to exceptions on GCC47? 
We have never supported exceptions and we haven't had any trouble until 
now.

Also, I get loads of errors like the following while configure tests for 
the existence of system headers. Am I expected to have a working libc for 
my port at this point in the GCC build?
conftest.c:9:19: fatal error: stdio.h: No such file or directory

libgcc configuration terminates with:
checking size of long double... 0
checking whether decimal floating point is supported... no
configure: WARNING: decimal float is not supported for this target, 
ignored
checking whether fixed-point is supported... no
checking whether to use setjmp/longjmp exceptions... unknown
checking if the linker (/home/pm18/p4ws/pm18_binutils/bc/main/result/
linux/intermediate/FirmwareGcc47Package/./gcc/collect-ld) is GNU ld... no
checking for thread model used by GCC... single
checking whether assembler supports CFI directives... yes
*** Configuration xap-local-xap not supported
make[2]: *** [configure-target-libgcc] Error 1

Cheers,

-- 
PMatos



Re: GSoC :Project Idea(Before final Submission) for review and feedback

2012-03-26 Thread Subrata Biswas
Thank you sir for your important feedback and suggestion. I'll modify
my proposal and inform you about it very soon.

On 26 March 2012 09:41, Iyer, Balaji V  wrote:
>
>
> -Original Message-
> From: Subrata Biswas [mailto:subrata.i...@gmail.com]
> Sent: Sunday, March 25, 2012 12:22 PM
> To: Oleg Endo
> Cc: gcc
> Subject: Re: GSoC :Project Idea(Before final Submission) for review and 
> feedback
>
> Thank you sir for your excellent example.
>
> On 25 March 2012 15:25, Oleg Endo  wrote:
>> Please reply in CC to the GCC mailing list, so others can follow the
>> discussion.
>>
>> On Sun, 2012-03-25 at 09:21 +0530, Subrata Biswas wrote:
>>> On 25 March 2012 03:59, Oleg Endo  wrote:
>>> >
>>> > I might be misunderstanding the idea...
>>> > Let's assume you've got a program that doesn't compile, and you
>>> > leave out those erroneous blocks to enforce successful compilation
>>> > of the broken program.  How are you going to figure out for which
>>> > blocks it is actually safe to be removed and for which it isn't?
>>>
>>> I can do it by tracing the code blocks that depend on the
>>> erroneous block, i.e., any block that is data/control dependent
>>> on this erroneous block or line of code (its output or written
>>> value is read) will be eliminated.
>>>
>>> > Effectively, you'll
>>> > be changing the original semantics of a program, and those semantic
>>> > changes might be completely not what the programmer originally had
>>> > in mind.  In the worst case, something might end up with an
>>> > (un)formatted
>>> > harddisk...*
>>> >
>>> > Cheers,
>>> > Oleg
>>> >
>>> Thank you sir for your great feedback. You have understood it
>>> correctly. The programmer will be informed about the change in
>>> code and semantics. (Notice that this plug-in is not going to
>>> modify the original code! It just copies the original code and performs
>>> all the operations on the temporary file.) Even from the partial
>>> execution of the code the programmer will get an overview of his
>>> actual progress.
>>>
>>> suppose the program written by the programmer be:
>>>
>>> 1 int main(void)
>>> 2 {
>>> 3    int arr[]={3,4,-10,22,33,37,11};
>>> 4    sort(arr);
>>> 5    int a = arr[3] // Now suppose the programmer missed the
>>> semicolon here. Which generates a compilation error at line 5;
>>> 6    printf("%d\n",a);
>>> 7    for(i=0;i<7;i++)
>>> 8    {
>>> 9        printf("%d\n",arr[i]);
>>> 10    }
>>> 11  }
>>>
>>>
>>> Now if we just analyze the data (i.e. variables), we can easily find
>>> that the only data dependency is between line 5 and line 6.
>>> The rest of the program is not affected by eliminating or
>>> commenting out line 5.
>>>
>>> Hence the temporary source file after commenting out the erroneous
>>> part of the code and the code segment that is dependent on this
>>> erroneous  part would be:
>>>
>>> 1 int main(void)
>>> 2 {
>>> 3    int arr[]={3,4,-10,22,33,37,11};
>>> 4    sort(arr);
>>> 5    //int a = arr[3] // Now suppose the programmer missed the
>>> semicolon here. Which generates a compilation error at line 5;
>>> 6   // printf("%d\n",a);
>>> 7    for(i=0;i<7;i++)
>>> 8    {
>>> 9        printf("%d\n",arr[i]);
>>> 10    }
>>> 11  }
>>>
>>> Now this part of the program(broken program) is error free. Now we
>>> can compile this part using GCC and get the partial executable.
>>>
>>> Now the possible output after compilation using this plug in(if
>>> programmer use it) with GCC would be:
>>>
>>> "You have syntax error at Line no. 5. and to generate the partial
>>> executable Line 5 and Line 6 have removed in the temporary executable
>>> execute the partial executable excute p.out"
>>>
>>> Advantages to the Programmer:
>>> 1. If programmer can see the result of the partial executable he can
>>> actually quantify his/her progress in code.
>>> 2. The debug become easier as this plug-in would suggest about
>>> possible correction in the code etc.
>>
>> I don't think it will make the actual debugging task easier.  It might
>> make writing code easier (that's what IDEs are doing these days while
>> you're typing code...).  In order to debug a program, the actual bugs
>> need to be _in_ the program, otherwise there is nothing to debug.
>> Removing arbitrary parts of the program could potentially introduce
>> new artificial bugs, just because of a missing semicolon.
>>
>>> * I did not understand the  worst case that you have mentioned as
>>> (un)formatted hard disk. Can you kindly explain it?
>>>
>>
>> Let's say I'm writing a kind of disk utility that reads and writes
>> sectors...
>>
>> -
>> source1.c:
>>
>> bool
>> copy_sector (void* outbuf, const void* inbuf, int bytecount) {
>>  if (bytecount < 4)
>>    return false;
>>
>>  if ((bytecount & 3) != 0)
>>    return false;
>>
>>  int* out_ptr = (int*)outbuf;
>>  const int* in_ptr = (const int*)inbuf;
>>  int count = bytecount / 4;
>>
>>  do
>>  {
>>    int i = *in_ptr++;
>>    if (i & 1)
>>      i = do

Re: why no shortcut operation for comparion on _Complex operands

2012-03-26 Thread Richard Guenther
On Sun, Mar 25, 2012 at 2:42 PM, Bin.Cheng  wrote:
> Hi,
> In tree-complex.c's function expand_complex_comparison, gcc just
> expand comparison on complex
> operands into comparisons on inner type, like:
>
>  D.5375_17 = REALPART_EXPR ;
>  D.5376_18 = IMAGPART_EXPR ;
>  g2.1_5 = COMPLEX_EXPR ;
>  D.5377_19 = REALPART_EXPR ;
>  D.5378_20 = IMAGPART_EXPR ;
>  g3.2_6 = COMPLEX_EXPR ;
>  D.5379_21 = D.5375_17 == D.5377_19;
>  D.5380_22 = D.5376_18 == D.5378_20;
>  D.5381_23 = D.5379_21 & D.5380_22;
>  if (D.5381_23 == 1)
>    goto ;
>  else
>    goto ;
>
> So is it possible to do shortcut operation for the "&" on the
> real/imag part of complex data?

Sure.  Does the RTL expander not do that for your target?

Richard.

> Thanks
>
> --
> Best Regards.