Re: Lots of C++ failures

2014-09-13 Thread FX
> In fact, after looking at the latest gcc-patches messages, I think it may be 
> due to this commit: https://gcc.gnu.org/ml/gcc-patches/2014-09/msg01107.html
> (based purely on the fact that the unrecognized insn is a call, and the patch 
> deals with CALL_EXPR).

Duh. No, it’s not, because it’s apparently not committed yet (at least, not
in rev. 215234, which is the one failing for me).

But now, I’ve found it is PR 61387 
(https://gcc.gnu.org/bugzilla/show_bug.cgi?id=61387), already known.

FX

Re: Lots of C++ failures

2014-09-13 Thread FX
In fact, after looking at the latest gcc-patches messages, I think it may be 
due to this commit: https://gcc.gnu.org/ml/gcc-patches/2014-09/msg01107.html
(based purely on the fact that the unrecognized insn is a call, and the patch 
deals with CALL_EXPR).

FX



> I am testing trunk on darwin14 (Mac OS X Yosemite), and I am seeing a lot of 
> (800 so far) C++ failures, in the form of internal compiler errors, like this:
> 
>> vbase1.C:17:1: error: unrecognizable insn:
>>  }
>>  ^
>> (call_insn/j 4 3 5 (call (mem/u/c:DI (const:DI (unspec:DI [
>>             (symbol_ref/i:DI ("_ZN8VDerivedD1Ev") [flags 0x1] )
>>         ] UNSPEC_GOTPCREL)) [0  S8 A8])
>>     (const_int 0 [0])) vbase1.C:9 -1
>>  (nil)
>>     (nil))
>> vbase1.C:17:1: internal compiler error: in insn_default_length, at
>> config/i386/i386.md:2071
> 
> 
> Google and bugzilla search don’t find anything particularly recent like that, 
> but the scale of it is weird. I have isolated a very small reproducer, 
> attached (gives the above ICE with no compile options).
> 
> Has anyone seen this on another platform? Is it known? Otherwise, I’ll report 
> it.
> 
> FX
> 
> 
> 
> 



Lots of C++ failures

2014-09-13 Thread FX
Hi,

I am testing trunk on darwin14 (Mac OS X Yosemite), and I am seeing a lot of 
(800 so far) C++ failures, in the form of internal compiler errors, like this:

> vbase1.C:17:1: error: unrecognizable insn:
>  }
>  ^
> (call_insn/j 4 3 5 (call (mem/u/c:DI (const:DI (unspec:DI [
>             (symbol_ref/i:DI ("_ZN8VDerivedD1Ev") [flags 0x1] )
>         ] UNSPEC_GOTPCREL)) [0  S8 A8])
>     (const_int 0 [0])) vbase1.C:9 -1
>  (nil)
>     (nil))
> vbase1.C:17:1: internal compiler error: in insn_default_length, at
> config/i386/i386.md:2071


Google and bugzilla search don’t find anything particularly recent like that, 
but the scale of it is weird. I have isolated a very small reproducer, attached 
(gives the above ICE with no compile options).

Has anyone seen this on another platform? Is it known? Otherwise, I’ll report 
it.

FX





vbase1.C
Description: Binary data


Re: [PATCH] gcc parallel make check

2014-09-13 Thread Bernhard Reutner-Fischer

On 13 September 2014 02:04:51 Jakub Jelinek wrote:

> On Fri, Sep 12, 2014 at 04:42:25PM -0700, Mike Stump wrote:
>> curious, when I run atomic.exp=stdatom\*.c:
>>
>>   gcc.dg/atomic/atomic.exp completed in 30 seconds.
>>
>> atomic.exp=c\*.c takes 522 seconds with 3, 2, 5 and 4 being the worst
>> offenders.
>
> That's the
>
>     @if [ -z "$(filter-out --target_board=%,$(filter-out --extra_opts%,$(RUNTESTFLAGS)))" ] \
>        && [ "$(filter -j, $(MFLAGS))" = "-j" ]; then \
>
> i.e. if you specify anything in RUNTESTFLAGS other than --target_board= or
> --extra_opts, it is not parallelized.  This was done previously because
> parallelization required setting the flags to something different (a
> manually created *.exp list).  The first [] could perhaps be removed now;
> if one uses e.g. RUNTESTFLAGS=atomic.exp with sufficiently many tests,
> parallelization will still be worth it.  I've been worried about the quick
> cases where parallelization is not beneficial, like make check-gcc
> RUNTESTFLAGS=dg.exp=pr60123.c or similar, but one doesn't usually pass -jN
> in that case.  So yes, the
>
>     [ -z "$(filter-out --target_board=%,$(filter-out --extra_opts%,$(RUNTESTFLAGS)))" ]
>
> can be dropped (not in libstdc++, though: abi.exp and prettyprinters.exp
> are still run serially there, though even that could be handled the
> struct-layout-1.exp way, of being run by the first instance to encounter
> them, with small changes in those *.exp files).
>
> Jakub

Yes, this is very inconvenient, especially in the light of -v in the
runtestflags, which should certainly not prohibit parallel execution.

See https://gcc.gnu.org/ml/gcc-patches/2013-11/msg00997.html
for how I would fix that (findstring instead of filter-out).

TIA,




Sent with AquaMail for Android
http://www.aqua-mail.com
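For readers unfamiliar with the guard discussed in the message above, here is a stand-alone sketch of how the $(filter-out ...) expression behaves; the file name and RUNTESTFLAGS values are invented for illustration. The expression comes out empty (so the guard allows parallelization) only when RUNTESTFLAGS contains nothing but --target_board=... and --extra_opts... arguments.

```shell
# Write a tiny Makefile that just prints the leftover of the two nested
# filter-out calls from gcc's check target ($(info ...) runs at parse time,
# so no recipe is needed).
cat > /tmp/guard-demo.mk <<'EOF'
$(info leftover=[$(filter-out --target_board=%,$(filter-out --extra_opts%,$(RUNTESTFLAGS)))])
all: ;
EOF

# Only --target_board=/--extra_opts arguments: everything is filtered out,
# the -z test succeeds, and parallelization is enabled.
make -s -f /tmp/guard-demo.mk RUNTESTFLAGS='--target_board=unix --extra_opts=-O2'
# prints: leftover=[]

# Any other argument (a *.exp selector, -v, ...) survives the filters,
# the -z test fails, and the run stays serial.
make -s -f /tmp/guard-demo.mk RUNTESTFLAGS='--target_board=unix atomic.exp'
# prints: leftover=[atomic.exp]
```

This is why a stray -v in RUNTESTFLAGS disables parallel testing under the original guard, and why the proposed findstring-based check (or dropping the emptiness test entirely, as Jakub suggests) avoids the problem.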




Re: [PATCH] gcc parallel make check

2014-09-13 Thread Bernhard Reutner-Fischer

On 12 September 2014 19:46:33 Mike Stump wrote:

> On Sep 12, 2014, at 9:32 AM, Jakub Jelinek wrote:
>> Here is my latest version of the patch.
>>
>> With this patch I get identical test_summary output on make -k check
>> (completely serial testing) and make -j48 -k check from the toplevel
>> directory.
>>
>> Major changes since last version:
>> 1) I've changed the granularity, now it does the O_EXCL|O_CREAT attempt
>>    only every 10th runtest_file_p invocation
>
> So, I’d love to see the numbers for 5 and 20 to double-check that 10 is the
> right number to pick.  This sort of refinement is trivial post-checkin.
>
>> 3) various other *.exp files didn't use runtest_file_p, especially the
>>    gcc.misc-tests/ ones; tweaked those like struct-layout-1.exp or
>>    plugin.exp so that only the first runtest instance to encounter those
>>    runs all of the *.exp file serially
>>
>> Regtested on x86_64-linux, ok for trunk?
>
> Ok.  Please be around after you apply it to try and sort out any major
> fallout.

Usage of $(or) and $(and) will bump the GNU make prerequisite version from
our current 3.80 to at least 3.82 (IIRC).

PS: for the numbers I had used addsuffix rather than patsubst, in the hope
that it avoids lots of regexp calls.  A very minor nit, though.

Cheers,

> If someone can check their target post-checkin (or help out pre-checkin)
> and report back, that would be nice.  Times before and after checkin, with
> a core-count -j setting, would be nice.
>
> I wonder if the libstdc++ problems can be sorted out merely by finding a
> way to sort them so the expensive ones come early (regexp -> 0regexp for
> example).  Or, instead of sorting them by name, sort them by some other key
> (md5 per line).  The idea then would be that the chance of all regexp tests
> being in one group is 0.
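The hash-ordering idea above can be sketched in a few lines of shell; the *.exp file names are invented examples, and md5sum merely stands in for whatever key would actually be used.

```shell
# Order test files by a hash of their names rather than alphabetically, so
# that similarly named expensive tests (regexp*) scatter across the parallel
# buckets instead of clustering in one.  File names are invented examples.
for f in regexp1.exp regexp2.exp regexp3.exp abi.exp prettyprinters.exp util.exp; do
  # Prefix each name with the first 8 hex digits of its md5, then sort on that.
  printf '%s %s\n' "$(printf '%s' "$f" | md5sum | cut -c1-8)" "$f"
done | sort | awk '{print $2}'
```

Splitting the resulting list round-robin across N runtest instances would make it unlikely that all regexp tests land in the same group, which is the point of the suggestion.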








Re: [GSoC] Status - 20140901 FINAL

2014-09-13 Thread Tobias Burnus

On 12.09.2014 18:10, Manuel López-Ibáñez wrote:

Hi Maxim, Many thanks for your leadership and hard work administering this.


Also thanks from my side.


I would be interested in reading about the results of the projects and
evaluations. Please student (and mentors), could you provide some
details?


Let me write a few words regarding Alessandro's coarray project.

Coarrays are a way to parallelize programs; conceptually, they are about 
distributed memory: each process (called an image) has its own memory, 
except for certain variables (called coarrays) which are available on 
all images/processes. They act as local (but remotely accessible) 
variables, unless one explicitly accesses remote memory using the image 
index of a remote process. The scheme is known as Partitioned Global 
Address Space (PGAS), and other languages use it as well, such as 
Unified Parallel C (UPC) and, I think, Chapel, Fortress and X10.


Coarrays are a new feature in the Fortran 2008 standard and are being 
extended (collectives, teams, events etc.) in a Technical Specification 
(TS18508), which will be part of Fortran 2015.


On the GCC/gfortran side, some initial support was added in 4.6 and 4.7, 
but multi-image support never materialized. And a parallel feature which 
only works on a single process is kind of boring.



Enter Alessandro: Starting a bit before GSoC, but mostly during and as 
GSoC work, he worked on the library side of coarrays, ensuring that the 
communication actually happens. (At the same time, his progress forced 
me to add some missing bits to the compiler itself.)


While there are still some issues, both on the library and on the 
compiler side, coarrays now mostly work! The basic features are all 
there, and the performance is also competitive. That was also part of 
Alessandro's work: to measure, and to decide which communication 
library should be used. Currently, MPI and GASNet are the supported 
backends.


For the comparison with other implementations and the scaling, see:
http://opencoarrays.org/yahoo_site_admin/assets/docs/Coarrays_GFortran.217135934.pdf

For the current status and usage, see https://gcc.gnu.org/wiki/Coarray 
and the links therein.



Coarrays were supported for a long time in the Cray compiler (albeit 
with a slightly different syntax); only recently did it gain the Fortran 
2008 syntax. Additionally, the Intel compiler has an implementation 
which is known to be slow. (I assume that will now change.) Hence, there 
aren't that many coarray programs yet. However, there is definitely 
interest among developers, so the feature will likely become more popular.



Outlook on the gfortran side: locking is not yet implemented on the 
compiler side (not crucial, as one has synchronization and atomics, but 
still), nor are allocatable/pointer derived-type components ('struct 
elements') *of* coarrays, which will then also require more work on the 
library side. And some of the nifty features of the upcoming Technical 
Specification are completely missing.


One of the next items is in any case the new array descriptor, which is 
a different project but would help with some of the issues we have with 
array access to components of array coarrays.



Tobias