persistent threads in a nested parallel region

2017-05-30 Thread Jakub Kurzak
I have a parallel for inside a parallel task.
Nesting works fine, but the nested threads are relaunched every time the
inner region is entered, and the overhead is problematic.
Is there a way to keep the nested threads persistent?
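
For context, a minimal sketch of the structure being described; do_work,
run, and the loop bounds are placeholders, not the poster's actual code:

  #include <omp.h>

  void do_work (int i, int j);   /* placeholder for the real work */

  void run (int n, int m)
  {
    omp_set_nested (1);          /* enable nested parallelism */
    #pragma omp parallel
    #pragma omp single
    for (int i = 0; i < n; i++)
      {
        #pragma omp task firstprivate (i)
        {
          /* Each execution of this task opens a fresh inner region,
             and with it a fresh thread team; that repeated team
             launch is the overhead in question.  */
          #pragma omp parallel for
          for (int j = 0; j < m; j++)
            do_work (i, j);
        }
      }
  }

Whether the inner team's threads persist between entries is
implementation-defined; OMP_WAIT_POLICY=active can keep idle threads
spinning, but it does not by itself guarantee that nested teams are
reused.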


Re: backporting fixes for xtensa to stable branches

2017-05-30 Thread Max Filippov
On Tue, May 30, 2017 at 3:34 PM, augustine.sterl...@gmail.com wrote:
> On Tue, May 30, 2017 at 3:26 PM, Max Filippov wrote:
>> Hi Sterling,
>>
>> For xtensa we have a number of bugfixes in mainline that were never
>> backported to the stable branches. It would be great to have them in the
>> stable GCC releases instead of carrying the fixes in various toolchain
>> builders. Would it be OK to do the following backports?
>
> Yes. All backports are absolutely fine.

Thanks. I've applied all of the mentioned patches to the corresponding branches.

-- Max


gcc-5-20170530 is now available

2017-05-30 Thread gccadmin
Snapshot gcc-5-20170530 is now available on
  ftp://gcc.gnu.org/pub/gcc/snapshots/5-20170530/
and on various mirrors, see http://gcc.gnu.org/mirrors.html for details.

This snapshot has been generated from the GCC 5 SVN branch
with the following options: svn://gcc.gnu.org/svn/gcc/branches/gcc-5-branch 
revision 248701

You'll find:

 gcc-5-20170530.tar.xz   Complete GCC

  SHA256=fdbd8b683b1b6f0875e084aa041e87819b9b4385a8d7899e7c1e4b86b49831b0
  SHA1=245aacf6e0ddfc1330e7a5147ae0ba825d474eac

Diffs from 5-20170523 are available in the diffs/ subdirectory.

When a particular snapshot is ready for public consumption the LATEST-5
link is updated and a message is sent to the gcc list.  Please do not use
a snapshot before it has been announced that way.


Re: backporting fixes for xtensa to stable branches

2017-05-30 Thread augustine.sterl...@gmail.com
On Tue, May 30, 2017 at 3:26 PM, Max Filippov wrote:
> Hi Sterling,
>
> For xtensa we have a number of bugfixes in mainline that were never
> backported to the stable branches. It would be great to have them in the
> stable GCC releases instead of carrying the fixes in various toolchain
> builders. Would it be OK to do the following backports?

Yes. All backports are absolutely fine.


backporting fixes for xtensa to stable branches

2017-05-30 Thread Max Filippov
Hi Sterling,

For xtensa we have a number of bugfixes in mainline that were never
backported to the stable branches. It would be great to have them in the
stable GCC releases instead of carrying the fixes in various toolchain
builders. Would it be OK to do the following backports?

to the gcc-5-branch:

r226963 ("xtensa: use unwind-dw2-fde-dip instead of unwind-dw2-fde"):
  https://gcc.gnu.org/viewcvs/gcc?view=revision&revision=226963
r226964 ("xtensa: fix _Unwind_GetCFA"):
  https://gcc.gnu.org/viewcvs/gcc?view=revision&revision=226964
r227809 ("xtensa: fix xtensa_fallback_frame_state for call0 ABI"):
  https://gcc.gnu.org/viewcvs/gcc?view=revision&revision=227809
r233505 ("xtensa: fix libgcc build with --text-section-literals"):
  https://gcc.gnu.org/viewcvs/gcc?view=revision&revision=233505
r241313 ("xtensa: don't use unwind-dw2-fde-dip with elf targets"):
  https://gcc.gnu.org/viewcvs/gcc?view=revision&revision=241313
r242979 ("xtensa: Fix PR target/78603"):
  https://gcc.gnu.org/viewcvs/gcc?view=revision&revision=242979
r248586 ("gcc: xtensa: fix fprintf format specifiers"):
  https://gcc.gnu.org/viewcvs/gcc?view=revision&revision=248586

to the gcc-6-branch:

r241313 ("xtensa: don't use unwind-dw2-fde-dip with elf targets"):
  https://gcc.gnu.org/viewcvs/gcc?view=revision&revision=241313
r241748 ("xtensa: Fix PR target/78118"):
  https://gcc.gnu.org/viewcvs/gcc?view=revision&revision=241748
r242979 ("xtensa: Fix PR target/78603"):
  https://gcc.gnu.org/viewcvs/gcc?view=revision&revision=242979
r248586 ("gcc: xtensa: fix fprintf format specifiers"):
  https://gcc.gnu.org/viewcvs/gcc?view=revision&revision=248586

to the gcc-7-branch:

r248586 ("gcc: xtensa: fix fprintf format specifiers"):
  https://gcc.gnu.org/viewcvs/gcc?view=revision&revision=248586

Thanks.

-- Max


Re: Basic Block Statistics

2017-05-30 Thread Will Hawkins
I just wanted to send a quick follow up.

Thanks to the incredible support on this list from Mr. Law, and support
on IRC from segher, djgpp and dmalcolm, I was able to put together a
serviceable little plugin that does some very basic statistics
generation on basic blocks.

Here is a link to the source with information about how to build/run:
https://github.com/whh8b/bb_stats

If you are interested in more information, just send me an email.

Thanks again for everyone's help!
Will
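
For readers of the archive: below is a minimal, untested sketch of the
approach this thread converges on, namely registering an RTL pass and
walking each function's basic blocks while its RTL is still in memory.
This is not the linked bb_stats code; the pass name, its placement
before *free_cfg, and the exact header set are assumptions and vary
across GCC releases.

  /* Minimal sketch of a basic-block statistics plugin (hypothetical;
     not the linked bb_stats code).  */
  #include "gcc-plugin.h"
  #include "plugin-version.h"
  #include "context.h"
  #include "function.h"
  #include "basic-block.h"
  #include "rtl.h"
  #include "tree-pass.h"

  int plugin_is_GPL_compatible;

  static const pass_data bb_stats_pass_data = {
    RTL_PASS,      /* type */
    "bb_stats",    /* name (hypothetical) */
    OPTGROUP_NONE, /* optinfo_flags */
    TV_NONE,       /* tv_id */
    0, 0, 0, 0, 0  /* properties and todo flags */
  };

  struct bb_stats_pass : rtl_opt_pass
  {
    bb_stats_pass (gcc::context *ctxt)
      : rtl_opt_pass (bb_stats_pass_data, ctxt) {}

    virtual unsigned int execute (function *fun)
    {
      basic_block bb;
      /* Unlike at PLUGIN_FINISH, fun/cfun is valid here.  */
      FOR_EACH_BB_FN (bb, fun)
        {
          int ninsns = 0;
          rtx_insn *insn;
          FOR_BB_INSNS (bb, insn)
            if (INSN_P (insn))
              ninsns++;
          fprintf (stderr, "bb %d: %d insns\n", bb->index, ninsns);
        }
      return 0;
    }
  };

  int
  plugin_init (struct plugin_name_args *plugin_info,
               struct plugin_gcc_version *version)
  {
    if (!plugin_default_version_check (version, &gcc_version))
      return 1;

    struct register_pass_info pass_info;
    pass_info.pass = new bb_stats_pass (g);
    pass_info.reference_pass_name = "*free_cfg";  /* before the CFG is released */
    pass_info.ref_pass_instance_number = 1;
    pass_info.pos_op = PASS_POS_INSERT_BEFORE;
    register_callback (plugin_info->base_name, PLUGIN_PASS_MANAGER_SETUP,
                       NULL, &pass_info);
    return 0;
  }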

On Sat, May 20, 2017 at 11:29 PM, Will Hawkins wrote:
> On Fri, May 19, 2017 at 4:40 PM, Jeff Law wrote:
>> On 05/17/2017 08:22 PM, Will Hawkins wrote:
>>> On Wed, May 17, 2017 at 2:59 PM, Will Hawkins wrote:
 On Wed, May 17, 2017 at 2:41 PM, Will Hawkins wrote:
> On Wed, May 17, 2017 at 1:04 PM, Will Hawkins wrote:
>> On Wed, May 17, 2017 at 1:02 PM, Jeff Law wrote:
>>> On 05/17/2017 10:36 AM, Will Hawkins wrote:
 As I started looking into this, it seems like PLUGIN_FINISH is where
 my plugin will go. Everything is great so far. However, when plugins
 at that event are invoked, they get no data. That means I will have to
 look into global structures for information regarding the compilation.
 Are there pointers to the documentation that describe the relevant
 global data structures that are accessible at this point?

 I am looking through the source code and documentation and can't find
 what I am looking for. I am happy to continue working, but thought I'd
 ask just in case I was missing something silly.

 Thanks again for all your help getting me started on this!
>>> FOR_EACH_BB (bb) is what you're looking for.  That will iterate over the
>>> basic blocks.
>>
>> Thank you so much for your response!
>>
>> I just found this as soon as you sent it. Sorry for wasting your time!
>>
>>
>>>
>>> Assuming you're running late, you'll then want to walk each insn within
>>> the bb.  So, something like this:
>>>
>>> basic_block bb;
>>> FOR_EACH_BB (bb)
>>>   {
>>>     rtx_insn *insn;
>>>     FOR_BB_INSNS (bb, insn)
>>>       {
>>>         /* Do something with INSN.  */
>>>       }
>>>   }
>>>
>>>
>>> Note that if you're running too late the CFG may have been released, in
>>> which case this code wouldn't do anything.
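
(A hedged illustration of that caveat; the cfg field of struct function
is assumed per the internals of that era:)

  /* Only walk the blocks if there is a current function and its
     CFG has not been released yet.  */
  if (cfun && cfun->cfg)
    {
      basic_block bb;
      FOR_EACH_BB (bb)
        {
          /* ...  */
        }
    }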
>
> This macro seems to require that there be a valid cfun, which implies
> that the macro will work only where the plugin callback is invoked
> before/after a pass that does some optimization for a particular
> function. In particular, at PLUGIN_FINISH, cfun is NULL. This makes
> perfect sense.
>
> Since PLUGIN_FINISH is the place where diagnostics are supposed to be
> printed, I was wondering if there was an equivalent iterator for all
> translation units (from which I could derive functions, from which I
> could derive basic blocks) that just "FINISH"ed compiling?


 Answering my own question for historical purposes and anyone else who
 might need this:

   FOR_EACH_VEC_ELT(*all_translation_units, i, t)

 is exactly what I was looking for!

 Sorry for the earlier spam and thank you for your patience!
 Will
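
(For completeness, a sketch of how that macro is used; the declarations
assume all_translation_units is the GC'd vec<tree> declared in tree.h
of that era:)

  /* Walk every translation-unit decl recorded so far.  */
  unsigned int i;
  tree t;
  FOR_EACH_VEC_ELT (*all_translation_units, i, t)
    {
      /* T is a TRANSLATION_UNIT_DECL.  */
    }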
>>>
>>>
>>> Well, I thought that this was what I wanted, but it turns out perhaps
>>> I was wrong. So I am turning back for some help. Again, I apologize
>>> for the incessant emails.
>>>
>>> I would have thought that a translation unit tree node's chain would
>>> point to all the nested tree nodes. This does not seem to be the case,
>>> however. Am I missing something? Or is this the intended behavior?
>> I think there's a fundamental misunderstanding.
>
> You are right, Mr. Law. I'm really sorry for the confusion. I got
> things straightened out in my head and now I am making great progress.
>>
>> We don't hold the RTL IR for all the functions in a translation unit in
>> memory at the same time.  You have to look at the RTL IR for each as it's
>> generated.
>
> Thank you, as ever, for your continued input. I am going to continue
> to work and I will keep everyone on the list posted and let you know
> when it is complete.
>
> Thanks again and have a great rest of the weekend!
>
> Will
>>
>> jeff