On Tue, Dec 31, 2019 at 09:25:01PM -0800, Andi Kleen wrote:
> Would be useful to figure out in more details where the memory
> consumption goes in your test case.
>
> Unfortunately gcc doesn't have a good general heap profiler, but here is
> what I usually do (if you're on Linux). Whoever causes the most page
> faults…
Qing Zhao writes:
> (gdb) where
> #0 0x00ddbcb3 in df_chain_create (src=0x631006480f08,
> dst=0x63100f306288) at ../../gcc-8.2.1-20180905/gcc/df-problems.c:2267
> #1 0x001a in df_chain_create_bb_process_use (
> local_rd=0x7ffc109bfaf0, use=0x63100f306288, top_fla…
Yes, much more. When you traverse a CFG, the analysis develops into a tree
(for example, a tree of uses). That is, every basic block could
*recursively* become the root of an individual linear walk over up to all
other basic blocks. Sum these up, and you get a polynomial expression. I
don't insist that…
On Fri, Dec 20, 2019 at 02:57:57AM +0100, Dmitry Mikushin wrote:
> Trying to plan memory consumption ahead of the work contradicts the nature
> of graph traversal. Estimation may work very well for something simple
> like linear or log-linear behavior.
Almost everything we do is (almost) linear…
On Fri, 20 Dec 2019 at 16:05, Qing Zhao wrote:
>
> Thanks a lot for all the help.
>
> So, currently, if GCC compilation aborts for this reason, what’s the best
> way for the user to resolve it?
> I added #pragma GCC optimize ("O1") to the large routine in order to
> work around this issue.
>
Thanks a lot for all the help.
So, currently, if GCC compilation aborts for this reason, what’s the best
way for the user to resolve it?
I added #pragma GCC optimize ("O1") to the large routine in order to
work around this issue.
Is there a better way to do it?
Is GCC planning to re…
On December 20, 2019 1:41:19 AM GMT+01:00, Jeff Law wrote:
>On Thu, 2019-12-19 at 17:06 -0600, Qing Zhao wrote:
>> Hi, Dmitry,
>>
>> Thanks for the response.
>>
>> Yes, routine size alone cannot determine the complexity of the
>> routine. Different compiler analyses might have different formulas wi…
Trying to plan memory consumption ahead of the work contradicts the nature
of graph traversal. Estimation may work very well for something simple
like linear or log-linear behavior. But many compiler algorithms are known
to be polynomial or exponential (or even worse in case of bugs). So,
estimation…
On Thu, Dec 19, 2019 at 7:41 PM Jeff Law wrote:
>
> On Thu, 2019-12-19 at 17:06 -0600, Qing Zhao wrote:
> > Hi, Dmitry,
> >
> > Thanks for the response.
> >
> > Yes, routine size alone cannot determine the complexity of the routine.
> > Different compiler analyses might have different formulas with…
On Thu, 2019-12-19 at 17:06 -0600, Qing Zhao wrote:
> Hi, Dmitry,
>
> Thanks for the response.
>
> Yes, routine size alone cannot determine the complexity of the routine.
> Different compiler analyses might have different formulas with multiple
> parameters to compute its complexity.
>
> Howev…
Hi, Dmitry,
Thanks for the response.
Yes, routine size alone cannot determine the complexity of the routine.
Different compiler analyses might have different formulas with multiple
parameters to compute its complexity.
However, the common issue is: when the complexity of a specific routine for…
This issue is well known in research/scientific software. The problem of
the compiler hanging or overconsuming RAM is actually not about the routine
size, but about overly complicated control flow. When optimizing, the
compiler traverses the control flow graph, which may have the misfortune to
explode in t…
Hi,
When using GCC to compile a very large routine with -O2, the compilation
failed with an out-of-memory error. (-O1 is okay.)
When I checked within gdb, as “cc1” was consuming around 95% of the memory,
it was at:
(gdb) where
#0 0x00ddbcb3 in df_chain_create (src=0x631006480f08,
dst=…