GCC Buildbot Update

2017-12-14 Thread Paulo Matos
Hello,

Apologies for the delay on the update. It was my plan to do an update on
a monthly basis but it slipped by a couple of weeks.

The current status is:

*Workers:*

- x86_64

2 workers from CF (gcc16 and gcc20) up and running;
1 worker from my farm (jupiter-F26) up and running;

2 broken CF workers (gcc75 and gcc76) - the reason for the brokenness is
that the machines themselves work well, but all outgoing ports except the
git port (9418, if I'm not mistaken) are closed. This means that not only
can we not svn co gcc, but we also can't connect a worker to the master
through port 9918. I have contacted the CF admin, but the reply was that
nothing can be done as they don't really own the machines. They seem to
have relayed the request to the machine owners.

- aarch64

I got an email suggesting I add some aarch64 workers so I did:
4 workers from CF (gcc113, gcc114, gcc115 and gcc116);

*Builds:*

As before, we have the full build and the incremental build, both enabled
for x86_64 and aarch64, except that they are currently failing for aarch64
(more on that later).

The full build is triggered on the Daily bump commit and the incremental
build is triggered for each commit.
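The trigger logic can be sketched as a small classification helper (hypothetical code, not the actual master.cfg; the only assumption taken from the setup above is that the nightly DATESTAMP commit's message starts with "Daily bump"):

```python
def builders_for_commit(commit_message):
    """Decide which builders a commit should trigger (sketch).

    Every commit triggers the incremental builder; the nightly
    "Daily bump" commit additionally triggers the full builder.
    """
    builders = ["incremental"]
    if commit_message.startswith("Daily bump"):
        builders.append("full")
    return builders
```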

The problem with this setup is that the incremental builder takes too
long to run the tests: around 1h30m on the CF machines for x86_64.

Segher Boessenkool sent me a patch to disable guality and prettyprinters,
which, coupled with --disable-gomp at configure time, was supposed to make
things much faster. I have added this as the Fast builder, except it is
failing during the test runs:
unable to alloc 389376 bytes
/bin/bash: line 21: 32472 Aborted `if [ -f
${srcdir}/../dejagnu/runtest ] ; then echo ${srcdir}/../dejagnu/runtest
; else echo runtest; fi` --tool gcc
/bin/bash: fork: Cannot allocate memory
make[3]: [check-parallel-gcc] Error 254 (ignored)
make[3]: execvp: /bin/bash: Cannot allocate memory
make[3]: [check-parallel-gcc_1] Error 127 (ignored)
make[3]: execvp: /bin/bash: Cannot allocate memory
make[3]: [check-parallel-gcc_1] Error 127 (ignored)
make[3]: execvp: /bin/bash: Cannot allocate memory
make[3]: *** [check-parallel-gcc_1] Error 127


However, something interesting is happening here, since the munin
interface for gcc16 doesn't show the machine running out of memory:
https://cfarm.tetaneutral.net/munin/gccfarm/gcc16/memory.html
(something confirmed by the CF admins)
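Whatever the root cause, the fork failures look like over-parallel test runs rather than a machine-wide shortage, so one defensive tweak (a sketch, not something the bot does today; the 2 GiB per-job figure is a guess to tune per target) is to cap test parallelism by available memory as well as CPU count:

```python
def safe_test_jobs(avail_mem_bytes, cpus, per_job_bytes=2 << 30):
    """Pick a -j value for 'make check' bounded by both memory and CPUs.

    Assumes each runtest instance may need ~per_job_bytes of memory
    (default 2 GiB, a guess); never exceeds the CPU count, never
    returns less than one job.
    """
    by_mem = avail_mem_bytes // per_job_bytes
    return max(1, min(cpus, by_mem))
```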

The aarch64 build is failing as mentioned earlier. If you check the logs:
https://gcc-buildbot.linki.tools/#/builders/5/builds/10
the problem seems to be the assembler issuing:
Assembler messages:
Error: unknown architecture `armv8.1-a'
Error: unrecognized option -march=armv8.1-a


If I go to the machines and check the versions I get:
pmatos@gcc115:~/gcc-8-20171203_BUILD$ as --version
GNU assembler (GNU Binutils for Ubuntu) 2.24
Copyright 2013 Free Software Foundation, Inc.
This program is free software; you may redistribute it under the terms of
the GNU General Public License version 3 or later.
This program has absolutely no warranty.
This assembler was configured for a target of `aarch64-linux-gnu'.

pmatos@gcc115:~/gcc-8-20171203_BUILD$ gcc --version
gcc (Ubuntu/Linaro 4.8.4-2ubuntu1~14.04.3) 4.8.4
Copyright (C) 2013 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

pmatos@gcc115:~/gcc-8-20171203_BUILD$ as -march=armv8.1-a
Assembler messages:
Error: unknown architecture `armv8.1-a'

Error: unrecognized option -march=armv8.1-a

However, if I run a compiler build manually with just:

$ configure --disable-multilib
$ nice -n 19 make -j4 all

This compiles just fine. So at the moment I am attempting to investigate
what might cause the difference between what buildbot does and what I do
through ssh.
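One thing worth ruling out is which `as` each environment picks up: -march=armv8.1-a needs a newer binutils than the 2.24 installed there (my assumption is that aarch64 armv8.1-a support arrived around binutils 2.26; treat the threshold as a guess). A quick hypothetical probe of the version string:

```python
import re

def as_supports_armv8_1(version_output, minimum=(2, 26)):
    """Guess from `as --version` output whether -march=armv8.1-a will work.

    Assumption: aarch64 armv8.1-a support landed around binutils 2.26.
    """
    m = re.search(r"GNU assembler.*?(\d+)\.(\d+)", version_output)
    if not m:
        return False  # not GNU as, or unparseable version string
    return (int(m.group(1)), int(m.group(2))) >= minimum
```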

*Reporters:*

There is a single reporter, which is an IRC bot, currently silent.

*Regression analysis:*

This is one of the most important issues to tackle, and I have a solution
in a branch, regression-testing:
https://github.com/LinkiTools/gcc-buildbot/tree/regression-testing

using jamais-vu from David Malcolm to analyze the regressions.
It needs some more testing, and I should be able to get it working before
the end of the year.

*LNT:*

I had mentioned I wanted to set up an interface which would allow for
easy visibility of test failures, time taken to build/test, etc.
Initially I thought a stack of influx+grafana would be a good idea, but
it was pointed out that LNT, as presented by James Greenhalgh at the GNU
Cauldron, would be a better fit. I have set up LNT (soon to be available
under https://gcc-lnt.linki.tools) and contacted James to learn more about
the setup. As it turns out, James is just using it for benchmarking
results, and out of the box it only seems to support the LLVM testing
infrastructure, so getting GCC results in there might take a bit more
scripting and plumbing.

I will probably take the same route and set it up first for the
benchmarking results and then try to get the gcc te

Re: GCC Buildbot Update

2017-12-14 Thread David Malcolm
On Thu, 2017-12-14 at 09:56 +0100, Paulo Matos wrote:
> Hello,
> 
> Apologies for the delay on the update. It was my plan to do an update
> on
> a monthly basis but it slipped by a couple of weeks.

Thanks for working on this.

> The current status is:
> 
> *Workers:*

[...snip...]

> *Builds:*

[...snip...]

Looking at some of the red blobs in e.g. the grid view there seem to be
a few failures in the initial "update gcc trunk repo" step of the form:

svn: Working copy '.' locked
svn: run 'svn cleanup' to remove locks (type 'svn help cleanup' for
details)

https://gcc-lnt.linki.tools/#/builders/3/builds/388/steps/0/logs/stdio
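Since that state is mechanically recoverable, the update step could try to recover automatically; a sketch of the decision logic (hypothetical helper, not buildbot's API - the error strings matched are the ones svn actually prints above, the "corrupt" case is an assumption):

```python
def svn_recovery_command(stderr_text):
    """Map an svn update failure to a recovery action (sketch)."""
    if "Working copy" in stderr_text and "locked" in stderr_text:
        return "svn cleanup"      # stale lock: cleanup, then retry the update
    if "Checksum mismatch" in stderr_text or "corrupt" in stderr_text:
        return "fresh checkout"   # damaged working copy: start over
    return None                   # unknown failure: surface it to the log
```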

Is there a bug-tracking location for the buildbot?
Presumably:
  https://github.com/LinkiTools/gcc-buildbot/issues
?

> *Reporters:*
> 
> There is a single reporter which is a irc bot currently silent.
> 
> *Regression analysis:*
> 
> This is one of the most important issues to tackle and I have a
> solution
> in a branch regression-testing :
> https://github.com/LinkiTools/gcc-buildbot/tree/regression-testing
> 
> using jamais-vu from David Malcolm to analyze the regressions.
> It needs some more testing and I should be able to get it working
> still
> this year.

I actually found a serious bug in jamais-vu yesterday - it got confused
by multiple .sum lines for the same source line (e.g. from multiple
"dg-" directives that all specify a particular line).  For example,
when testing one of my patches, of the 3 tests reporting as
  "c-c++-common/pr83059.c  -std=c++11  (test for warnings, line 7)"
one of the 3 PASS results became a FAIL.  jv correctly reported that
new FAILs had occurred, but wouldn't identify them, and mistakenly
reported that new PASSes had occurred also.

I've fixed that now; to do so I've done some refactoring and added a
testsuite.
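The underlying idea can be sketched as comparing the .sum entries as multisets rather than sets, so duplicated test names keep their counts (a minimal illustration of the approach, not jamais-vu's actual code):

```python
from collections import Counter

def diff_sums(before, after):
    """Compare two lists of DejaGnu result lines as multisets (sketch)."""
    gained = Counter(after) - Counter(before)   # result lines that appeared
    lost = Counter(before) - Counter(after)     # result lines that disappeared
    return gained, lost

# Three identical PASS lines, one of which becomes a FAIL:
name = "c-c++-common/pr83059.c  -std=c++11  (test for warnings, line 7)"
before = [f"PASS: {name}"] * 3
after = [f"PASS: {name}"] * 2 + [f"FAIL: {name}"]
gained, lost = diff_sums(before, after)
```

A set-based comparison would miss this change entirely, since both sides contain "the same" PASS line.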

It looks like you're capturing the textual output from "jv compare" and
using the exit code.  Would you prefer to import "jv" as a python
module and use some kind of API?  Or a different output format?

If you file pull request(s) for the changes you've made in your copy of
jamais-vu, I can take a look at merging them.

[...]

> I hope to send another update in about a months time.
> 
> Kind regards,

Thanks again for your work on this
Dave


Conversion progress report

2017-12-14 Thread Eric S. Raymond
It took a couple of days of struggling, but I have succeeded in
getting the GCC repo to load into reposurgeon on a 64GB machine.
In only 6 hours. :-)

In the process I found a couple of optimizations to reposurgeon that
dramatically increased its read speed and somewhat reduced maximum
working set. But the real win was switching to PyPy rather than
CPython as the Python interpreter - it turns out this is exactly the
kind of job load for which their JIT compilation shines.

You have a crapload of cvs2svn artifacts in the early history -
redundant D/M pairs generated while making Subversion tag commits.
That in itself is quite usual. But fully 50% of the load time (three
hours!) is spent optimizing these out, which is a degree of severity
I've never seen before.  That's a solved problem now.

There's a fair amount of surgery to be done still.  You have 151
mid-branch deletealls.  This usually indicates that a Subversion tag or
branch was created by mistake, and someone later tried to undo the
error by deleting the tag/branch directory before recreating it with a
copy operation.  *Usually* the right thing is to reroot the portion of
the branch forward of the delete and discard the commits before it,
but these cases will need to be checked by hand.

But now that the initial load has succeeded, the rest is just hard
work, as opposed to can-it-be-done-at-all? territory. And, as
previously noted, I am now authorized to concentrate on it until it's
done.

Actually my project manager and the senior devs on the NTPsec team are
following this work with lively interest and making constructive
suggestions. Moving that history out of Bitkeeper was an epic, too, and
as a result most of them are at least somewhat familiar with this
class of problem and find it interesting.

Now synced to r255661.
--
Eric S. Raymond <http://www.catb.org/~esr/>

My work is funded by the Internet Civil Engineering Institute: https://icei.org
Please visit their site and donate: the civilization you save might be your own.




Request for data

2017-12-14 Thread Eric S. Raymond
For a slightly higher-quality conversion, the attribution entries in
the map file should have a third field: timezone.  IANA zones are
acceptable.

This wouldn't change how commit times are stored internally, but the
Git tools use it for display in local time.
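For illustration, entries with the extra field might look like this (the names, addresses, and zones here are made up; check reposurgeon's documentation for the exact authormap syntax):

```
# <svn-username> = <Full Name> <email> [IANA timezone]
jdoe = J. Random Hacker <jdoe@example.com> Europe/Berlin
rroe = Rebecca Roe <rroe@example.org> America/New_York
```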
--
Eric S. Raymond <http://www.catb.org/~esr/>

My work is funded by the Internet Civil Engineering Institute: https://icei.org
Please visit their site and donate: the civilization you save might be your own.




Re: Request for data

2017-12-14 Thread Paul.Koning
The TZ project, which maintains the timezone database, would be a good place to 
find pointers.  They don't actually manage that information, but pointers to 
"shape files" that translate map coordinates into the timezone identifier are 
available.

paul

> On Dec 14, 2017, at 2:44 PM, Eric S. Raymond  wrote:
> 
> For a slightly higher-quality conversion, the attribution entries in
> the map file should have a third field: timezone.  IANA zones are
> acceptable.
> 
> This wouldn't change how commit times are stored internally, but the
> Git tools use it for display in local time.
> --
> Eric S. Raymond <http://www.catb.org/~esr/>
> 
> My work is funded by the Internet Civil Engineering Institute: 
> https://icei.org
> Please visit their site and donate: the civilization you save might be your 
> own.
> 
> 


Re: GCC Buildbot Update

2017-12-14 Thread Christophe Lyon
On 14 December 2017 at 09:56, Paulo Matos  wrote:
> Hello,
>
> Apologies for the delay on the update. It was my plan to do an update on
> a monthly basis but it slipped by a couple of weeks.
>
Hi,

Thanks for the update!


> The current status is:
>
> *Workers:*
>
> - x86_64
>
> 2 workers from CF (gcc16 and gcc20) up and running;
> 1 worker from my farm (jupiter-F26) up and running;
>
> 2 broken CF (gcc75 and gcc76) - the reason for the brokenness is that
> the machines work well but all outgoing ports except the git port is
> open (9418 if not mistaken). This means that not only we cannot svn co
> gcc but we can't connect a worker to the master through port 9918. I
> have contacted the cf admin but the reply was that nothing can be done
> as they don't really own the machine. They seemed to have relayed the
> request to the machine owners.
>
> - aarch64
>
> I got an email suggesting I add some aarch64 workers so I did:
> 4 workers from CF (gcc113, gcc114, gcc115 and gcc116);
>
Great, I thought the CF machines were reserved for developers.
Good news that you could add builders on them.

> *Builds:*
>
> As before we have the full build and the incremental build. Both enabled
> for x86_64 and aarch64, except they are currently failing for aarch64
> (more on that later).
>
> The full build is triggered on Daily bump commit and the incremental
> build is triggered for each commit.
>
> The problem with this setup is that the incremental builder takes too
> long to run the tests. Around 1h30m on CF machines for x86_64.
>
> Segher Boessenkool sent me a patch to disable guality and prettyprinters
> which coupled with --disable-gomp at configure time was supposed to make
> things much faster. I have added this as the Fast builder, except this
> is failing during the test runs:
> unable to alloc 389376 bytes
> /bin/bash: line 21: 32472 Aborted `if [ -f
> ${srcdir}/../dejagnu/runtest ] ; then echo ${srcdir}/../dejagnu/runtest
> ; else echo runtest; fi` --tool gcc
> /bin/bash: fork: Cannot allocate memory
> make[3]: [check-parallel-gcc] Error 254 (ignored)
> make[3]: execvp: /bin/bash: Cannot allocate memory
> make[3]: [check-parallel-gcc_1] Error 127 (ignored)
> make[3]: execvp: /bin/bash: Cannot allocate memory
> make[3]: [check-parallel-gcc_1] Error 127 (ignored)
> make[3]: execvp: /bin/bash: Cannot allocate memory
> make[3]: *** [check-parallel-gcc_1] Error 127
>
>
> However, something interesting is happening here since the munin
> interface for gcc16 doesn't show the machine running out of memory:
> https://cfarm.tetaneutral.net/munin/gccfarm/gcc16/memory.html
> (something confirmed by the cf admins)
>
> The aarch64 build is failing as mentioned earlier. If you check the logs:
> https://gcc-buildbot.linki.tools/#/builders/5/builds/10
> the problem seems to be the assembler issuing:
> Assembler messages:
> Error: unknown architecture `armv8.1-a'
> Error: unrecognized option -march=armv8.1-a
>
>
> If I go to the machines and check the versions I get:
> pmatos@gcc115:~/gcc-8-20171203_BUILD$ as --version
> GNU assembler (GNU Binutils for Ubuntu) 2.24
> Copyright 2013 Free Software Foundation, Inc.
> This program is free software; you may redistribute it under the terms of
> the GNU General Public License version 3 or later.
> This program has absolutely no warranty.
> This assembler was configured for a target of `aarch64-linux-gnu'.
>
> pmatos@gcc115:~/gcc-8-20171203_BUILD$ gcc --version
> gcc (Ubuntu/Linaro 4.8.4-2ubuntu1~14.04.3) 4.8.4
> Copyright (C) 2013 Free Software Foundation, Inc.
> This is free software; see the source for copying conditions.  There is NO
> warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
>
> pmatos@gcc115:~/gcc-8-20171203_BUILD$ as -march=armv8.1-a
> Assembler messages:
> Error: unknown architecture `armv8.1-a'
>
> Error: unrecognized option -march=armv8.1-a
>
> However, if I run the a compiler build manually with just:
>
> $ configure --disable-multilib
> $ nice -n 19 make -j4 all
>
> This compiles just fine. So I am at the moment attempting to investigate
> what might cause the difference between what buildbot does and what I do
> through ssh.
>
I suspect you are hitting a bug introduced recently, and fixed by:
https://gcc.gnu.org/ml/gcc-patches/2017-12/msg00434.html

> *Reporters:*
>
> There is a single reporter which is a irc bot currently silent.
>
> *Regression analysis:*
>
> This is one of the most important issues to tackle and I have a solution
> in a branch regression-testing :
> https://github.com/LinkiTools/gcc-buildbot/tree/regression-testing
>
> using jamais-vu from David Malcolm to analyze the regressions.
> It needs some more testing and I should be able to get it working still
> this year.
>
Great

> *LNT:*
>
> I had mentioned I wanted to setup an interface which would allow for
> easy visibility of test failures, time taken to build/test, etc.
> Initially I thought a stack of influx+grafana would be a good idea, but
> was pointed ou

Re: Conversion progress report

2017-12-14 Thread Joseph Myers
On Thu, 14 Dec 2017, Eric S. Raymond wrote:

> There's a fair amount of surgery to be done still.  You have 151
> mid-branch deletealls.  This usually indicates that a Subversion tag or
> branch was created by mistake, and someone later tried to undo the

Some may be mistakes - I think some are deliberate rebases.

There's a policy question we'll need to figure out for after the 
conversion: whether we want to have a branch namespace where people can 
push branches that can be deleted and rebased (while branches outside that 
namespace can't be deleted / rebased).  (A git rebase of course is 
different from an SVN rebase in that the history becomes liable to be 
garbage-collected.)
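A sketch of how the protected-by-default side could be enforced on the server (hypothetical config; the namespace carve-out itself would need a custom update hook, e.g. exempting something like refs/heads/users/):

```
# Server-side git config: forbid deleting or rewriting any ref by default.
[receive]
	denyDeletes = true
	denyNonFastForwards = true
```

An update hook could then whitelist the dedicated rebase-friendly namespace while leaving everything else immutable.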

-- 
Joseph S. Myers
jos...@codesourcery.com


gcc-7-20171214 is now available

2017-12-14 Thread gccadmin
Snapshot gcc-7-20171214 is now available on
  ftp://gcc.gnu.org/pub/gcc/snapshots/7-20171214/
and on various mirrors, see http://gcc.gnu.org/mirrors.html for details.

This snapshot has been generated from the GCC 7 SVN branch
with the following options: svn://gcc.gnu.org/svn/gcc/branches/gcc-7-branch 
revision 255666

You'll find:

 gcc-7-20171214.tar.xz    Complete GCC

  SHA256=2709a9dcd086ea8b3e5c6598c02738a032135e7e87c5bc019a29669801758193
  SHA1=70a2e9486f457d5fdd70a16b6aa656255e82a266

Diffs from 7-20171207 are available in the diffs/ subdirectory.

When a particular snapshot is ready for public consumption the LATEST-7
link is updated and a message is sent to the gcc list.  Please do not use
a snapshot before it has been announced that way.


Archaic date formats

2017-12-14 Thread Eric S. Raymond
Joseph Myers :
> * There's an older ChangeLog style that the code doesn't handle but which 
> can be found in older GCC commits, with header lines such as:
> 
> Tue Dec  9 01:16:06 1997  Jeffrey A Law  (l...@cygnus.com)
> 
> This format sometimes has the email address surrounded by (), sometimes by 
> <>.  There are a few log entries with a hybrid of old and new forms, where 
> there is a -mm-dd date but an email address surrounded by ().

All these are now handled properly, and there's a unit test.
--
Eric S. Raymond <http://www.catb.org/~esr/>

My work is funded by the Internet Civil Engineering Institute: https://icei.org
Please visit their site and donate: the civilization you save might be your own.




Register Allocation Graph Coloring algorithm and Others

2017-12-14 Thread Leslie Zhai

Hi GCC and LLVM developers,

I am learning register allocation algorithms, and so far I understand that:

* Unlimited VirtReg (pseudo) -> limited or fixed or alias[1] PhysReg (hard)

* Memory (20 - 100 cycles) is more expensive than a register (1 cycle), 
but spill code has to be generated when no PhysReg is available


* Folding spill code into instructions, handling register coalescing, 
splitting live ranges, doing rematerialization, doing shrink wrapping 
are harder than RegAlloc


* LRA and IRA are the default RA passes in GCC:

$ /opt/gcc-git/bin/gcc hello.c
DEBUG: ../../gcc/lra.c, lra_init_once, line 2441
DEBUG: ../../gcc/ira-build.c, ira_build, line 3409

* Greedy is the default pass for LLVM

But I have some questions, please give me some hint, thanks a lot!

* IRA is a regional register allocator performing graph coloring on a 
top-down traversal of nested regions; is it global, as compared with the 
local LRA?


* Do the papers by Briggs and Chaitin contradict[2] themselves when one 
examines the text of the paper vs. the pseudocode provided?


* Why is the interference graph expensive to build[3]?

And I am practicing[4] using HEA, developed by Dr. Rhydian Lewis, for 
LLVM first.



[1] https://reviews.llvm.org/D39712

[2] http://lists.llvm.org/pipermail/llvm-dev/2008-March/012940.html

[3] https://github.com/joaotavio/llvm-register-allocator

[4] https://github.com/xiangzhai/llvm/tree/avr/include/llvm/CodeGen/GCol

--
Regards,
Leslie Zhai - https://reviews.llvm.org/p/xiangzhai/





Re: Register Allocation Graph Coloring algorithm and Others

2017-12-14 Thread Vladimir Makarov



On 12/14/2017 10:18 PM, Leslie Zhai wrote:

> Hi GCC and LLVM developers,
>
> I am learning Register Allocation algorithms and I am clear that:
>
> * Unlimited VirtReg (pseudo) -> limited or fixed or alias[1] PhysReg
> (hard)
>
> * Memory (20 - 100 cycles) is expensive than Register (1 cycle), but
> it has to spill code when PhysReg is unavailable



It might be much less if the memory value is in the L1 cache.
> * Folding spill code into instructions, handling register coallescing,
> splitting live ranges, doing rematerialization, doing shrink wrapping
> are harder than RegAlloc


RegAlloc in a wide sense includes all these tasks and more.  For some 
architectures, other tasks, like the right live range splitting, might be 
even more important for generated code quality than just better graph 
coloring.

> * LRA and IRA is default Passes in RA for GCC:
>
> $ /opt/gcc-git/bin/gcc hello.c
> DEBUG: ../../gcc/lra.c, lra_init_once, line 2441
> DEBUG: ../../gcc/ira-build.c, ira_build, line 3409
>
> * Greedy is default Pass for LLVM
>
> But I have some questions, please give me some hint, thanks a lot!
>
> * IRA is regional register allocator performing graph coloring on a
> top-down traversal of nested regions, is it Global? compares with
> Local LRA

IRA is a global RA.  The description of its initial version can be found at

https://vmakarov.fedorapeople.org/vmakarov-submission-cgo2008.pdf

LRA in some ways is also a global RA, but it is a very simplified version 
of one (e.g. LRA does not use a conflict graph, and its coloring 
algorithm is closer to priority coloring).  LRA does a lot of other very 
complicated things besides RA, for example instruction selection, which 
is quite specific to the GCC machine description.  Usually the code 
selection task is a separate pass in other compilers.  Generally speaking, 
LRA is more complicated, more machine dependent and more buggy than IRA.  
But fortunately LRA is less complicated than its predecessor, the 
so-called reload pass.


The names IRA and LRA have a long history and do not correctly reflect 
the current situation.


It would be possible to incorporate the LRA tasks into IRA, but the final 
RA would be much slower, even more complicated and harder to maintain, and 
the generated code would not be much better.  So to improve RA 
maintainability, the RA is divided into two parts solving somewhat 
different tasks.  This is a typical engineering approach.


> * The papers by Briggs and Chaiten contradict[2] themselves when
> examine the text of the paper vs. the pseudocode provided?
I haven't examined Preston Briggs' work that thoroughly, so I cannot say 
whether that is true.  Even so, it is natural for there to be 
discrepancies between pseudocode and its description, especially for a 
description of that size.


For me Preston Briggs is famous for his introduction of optimistic coloring.


> * Why  interference graph is expensive to build[3]?

That is because it might be an N^2 algorithm.  There are a lot of 
publications investigating the building of conflict graphs and their cost 
in RAs.
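To make the quadratic cost concrete, here is a minimal sketch of the naive pairwise construction (hypothetical code; live ranges are reduced to single half-open intervals, whereas real allocators track liveness per program point):

```python
from itertools import combinations

def build_interference(live_ranges):
    """Naive conflict-graph construction: O(n^2) pairwise overlap tests.

    live_ranges maps a virtual register to a half-open [start, end)
    interval; two registers conflict if their intervals overlap.
    """
    graph = {reg: set() for reg in live_ranges}
    for (a, (sa, ea)), (b, (sb, eb)) in combinations(live_ranges.items(), 2):
        if sa < eb and sb < ea:       # intervals overlap -> registers conflict
            graph[a].add(b)
            graph[b].add(a)
    return graph
```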
> And I am practicing[4] to use HEA, developed by Dr. Rhydian Lewis, for
> LLVM firstly.


When I just started to work on RAs very long ago, I used about the same 
approach: a lot of tiny transformations directed by a cost function and 
using metaheuristics (I also used tabu search as a HEA).  Nothing good 
came out of it.


If you are interested in RA algorithms and architectures, I'd recommend 
Michael Matz's article


ftp://gcc.gnu.org/pub/gcc/summit/2003/Graph%20Coloring%20Register%20Allocation.pdf

as a start point.
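Since optimistic coloring was mentioned, here is a compact sketch of a Briggs-style simplify/select loop (illustrative code only, not GCC's implementation; k is the number of available hard registers):

```python
def color_graph(adj, k):
    """Briggs-style optimistic graph coloring (sketch).

    adj maps each node to the set of its neighbours; k is the number
    of colors (hard registers).  Returns (coloring, spilled_nodes).
    """
    work = {n: set(adj[n]) for n in adj}
    stack = []
    while work:
        # Simplify: any node with degree < k is trivially colorable.
        cand = next((n for n in work if len(work[n]) < k), None)
        if cand is None:
            # Optimistic step (Briggs): push a spill candidate anyway;
            # it may still receive a color during select.
            cand = max(work, key=lambda n: len(work[n]))
        stack.append(cand)
        for m in work[cand]:
            work[m].discard(cand)
        del work[cand]
    colors, spilled = {}, []
    while stack:                       # Select: color in reverse removal order
        n = stack.pop()
        used = {colors[m] for m in adj[n] if m in colors}
        free = [c for c in range(k) if c not in used]
        if free:
            colors[n] = free[0]
        else:
            spilled.append(n)          # truly uncolorable: spill to memory
    return colors, spilled
```

On a triangle with k=2 this spills exactly one node; Chaitin's original scheme would have given up at simplify time, which is precisely what the optimistic variant improves.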


[1] https://reviews.llvm.org/D39712

[2] http://lists.llvm.org/pipermail/llvm-dev/2008-March/012940.html

[3] https://github.com/joaotavio/llvm-register-allocator

[4] https://github.com/xiangzhai/llvm/tree/avr/include/llvm/CodeGen/GCol





Re: Register Allocation Graph Coloring algorithm and Others

2017-12-14 Thread Leslie Zhai

Hi Vladimir,

Thanks for your kind and very detailed response!


On 2017-12-15 12:40, Vladimir Makarov wrote:



On 12/14/2017 10:18 PM, Leslie Zhai wrote:

Hi GCC and LLVM developers,

I am learning Register Allocation algorithms and I am clear that:

* Unlimited VirtReg (pseudo) -> limited or fixed or alias[1] PhysReg 
(hard)


* Memory (20 - 100 cycles) is expensive than Register (1 cycle), but 
it has to spill code when PhysReg is unavailable



It might be much less if memory value is in L1 cache.
* Folding spill code into instructions, handling register 
coallescing, splitting live ranges, doing rematerialization, doing 
shrink wrapping are harder than RegAlloc


RegAlloc is in a wide sense includes all this tasks and more.  For 
some architectures, other tasks like a right live range splitting 
might be even more important for generated code quality than just 
better graph coloring.

* LRA and IRA is default Passes in RA for GCC:

$ /opt/gcc-git/bin/gcc hello.c
DEBUG: ../../gcc/lra.c, lra_init_once, line 2441
DEBUG: ../../gcc/ira-build.c, ira_build, line 3409

* Greedy is default Pass for LLVM

But I have some questions, please give me some hint, thanks a lot!

* IRA is regional register allocator performing graph coloring on a 
top-down traversal of nested regions, is it Global? compares with 
Local LRA

IRA is a global RA.  The description of its initial version can be found

https://vmakarov.fedorapeople.org/vmakarov-submission-cgo2008.pdf

I am reading this paper at present :)





LRA in some way is also global RA but it is a very simplified version 
of global RA (e.g. LRA does not use conflict graph and its coloring 
algoritm is closer to priority coloring).  LRA does a lot of other 
very complicated things besides RA, for example instruction selection 
which is quite specific to GCC machine description. Usually code 
selection task is a separate pass in other compilers. Generally 
speaking LRA is more complicated, machine dependent and more buggy 
than IRA.  But fortunately LRA is less complicated than its 
predecessor so called reload pass.


IRA and LRA names have a long history and they do not reflect 
correctly the current situation.


It would be possible to incorporate LRA tasks into IRA, but the final 
RA would be much slower, even more complicated and hard to maintain 
and the generated code would be not much better.  So to improve RA 
maintainability, RA is divided on two parts solving a bit different 
tasks.  This is a typical engineering approach.

I am debugging by printf to be familiar with LRA and IRA.






* The papers by Briggs and Chaiten contradict[2] themselves when 
examine the text of the paper vs. the pseudocode provided?
I don't examine Preston Briggs work so thoroughly.  So I can not say 
that is true.  Even so it is natural that there are discrepancy in 
pseudocode and its description especially for such size description.


For me Preston Briggs is famous for his introduction of optimistic 
coloring.


* Why  interference graph is expensive to build[3]?

That is because it might be N^2 algorithm.  There are a lot of 
publications investigating building conflict graphs and its cost in RAs.
And I am practicing[4] to use HEA, developed by Dr. Rhydian Lewis, 
for LLVM firstly.


When I just started to work on RAs very long ago I used about the same 
approach: a lot of tiny transformations directed by a cost function 
and using metaheuristics (I also used tabu search as HEA). Nothing 
good came out of this.
Thanks for the lesson! But are there any benchmarks from when you used 
tabu search as a HEA, AntCol, etc., such as 
https://pbs.twimg.com/media/DRD-kxcUMAAxZec.jpg ?






If you are interesting in RA algorithms and architectures, I'd 
recommend Michael Matz article


ftp://gcc.gnu.org/pub/gcc/summit/2003/Graph%20Coloring%20Register%20Allocation.pdf 



as a start point.

Thanks! I am reading it.




[1] https://reviews.llvm.org/D39712

[2] http://lists.llvm.org/pipermail/llvm-dev/2008-March/012940.html

[3] https://github.com/joaotavio/llvm-register-allocator

[4] https://github.com/xiangzhai/llvm/tree/avr/include/llvm/CodeGen/GCol





--
Regards,
Leslie Zhai - https://reviews.llvm.org/p/xiangzhai/





Re: GCC Buildbot Update

2017-12-14 Thread Markus Trippelsdorf
On 2017.12.14 at 21:32 +0100, Christophe Lyon wrote:
> On 14 December 2017 at 09:56, Paulo Matos  wrote:
> > I got an email suggesting I add some aarch64 workers so I did:
> > 4 workers from CF (gcc113, gcc114, gcc115 and gcc116);
> >
> Great, I thought the CF machines were reserved for developpers.
> Good news you could add builders on them.

I don't think this is good news at all. 

Once a buildbot runs on a CF machine it immediately becomes impossible
to do any meaningful measurement on that machine. That is mainly because
of the random I/O (untar, rm -fr, etc.) of the bot. As a result, variance
goes through the roof and all measurements drown in noise.

So it would be good if there was a strict separation of machines used
for bots and machines used by humans. In other words bots should only
run on dedicated machines.

-- 
Markus