On 08/08/11 21:35, Jan Hubicka wrote:
>> On Fri, Aug 5, 2011 at 3:24 PM, Jan Hubicka <hubi...@ucw.cz> wrote:
>>>>>
>>>>> In a way I like the current scheme since it is simple and extending it
>>>>> should IMO have some good reason. We could refine -Os behaviour without
>>>>> changing current predicates to optimize for speed in
>>>>> a) functions declared as "hot" by user and BBs in them that are not proved
>>>>> cold.
>>>>> b) based on profile feedback - i.e. we could have two thresholds, BBs with
>>>>> very large counts will be probably hot, BBs in between will be maybe
>>>>> hot/normal and BBs with low counts will be cold.
>>>>> This would probably motivate introduction of a probably_hot predicate that
>>>>> summarizes the above.
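[The two-threshold, three-way split described above could be sketched roughly as follows. The names and threshold fractions are purely illustrative, not GCC's actual predicates or parameters:]

```python
# Illustrative sketch of a two-threshold hot/normal/cold split based on
# profile counts.  HOT_FRACTION and COLD_FRACTION are hypothetical values,
# standing in for tunable --param-style knobs.
HOT_FRACTION = 0.01     # BBs at or above 1% of the max count: probably hot
COLD_FRACTION = 0.0001  # BBs at or below this fraction: cold

def classify_bb(count, max_count):
    """Classify a basic block by its profile count relative to the
    hottest block in the program."""
    if count >= max_count * HOT_FRACTION:
        return "probably_hot"   # very large count: optimize for speed
    if count <= max_count * COLD_FRACTION:
        return "cold"           # low count: optimize for size
    return "maybe_hot"          # in between: normal optimization

print(classify_bb(50_000, 100_000))  # -> probably_hot
print(classify_bb(5, 100_000))       # -> cold
print(classify_bb(500, 100_000))     # -> maybe_hot
```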
>>>>
>>>> Introducing a new 'probably_hot' will be very confusing -- unless you
>>>> also rename 'maybe_hot', but this leads to finer grained control:
>>>> very_hot, hot, normal, cold, unlikely which can be hard to use.  The
>>>> three state partition (not counting exec_once) seems ok, but
>>>
>>> OK, I also prefer to have fewer stages than more ;)
>>>>
>>>> 1) the unlikely state does not have controllable parameter
>>>
>>> Well, it is defined as something that is not likely to be executed, so the
>>> requirement on count to be less than 1/(number_of_test_runs*2) is very
>>> natural and doesn't seem to need tuning.
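[One reading of that threshold is "executed less than once per two training runs", which could be sketched as below. This is a rough illustration of the rule as described, not GCC's exact code:]

```python
def probably_never_executed(count, runs):
    """A BB is 'unlikely' if it ran less than once per two training runs,
    i.e. count < runs / 2 (equivalently count * 2 < runs).
    Sketch of the threshold described above; names are illustrative."""
    return count * 2 < runs

print(probably_never_executed(0, 100))   # -> True (never seen in training)
print(probably_never_executed(49, 100))  # -> True
print(probably_never_executed(50, 100))  # -> False
```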
>>
>> Ok, so it is defined to be different from the 'rarely-executed' case.
>> However, rarely-executed seems more general and can perhaps be used in
>> place of the unlikely case. If there are situations that apply only to
>> 'unlikely', they can be split apart.
> 
> So you think of having hot (as in optimize for speed), cold (as in optimize
> for size) and rarely executed (as in optimize very heavily for size)?
> (as a replacement for the current hot=speed/cold=size scheme)
> 
> It may not be completely crazy - i.e. at least kernel people tend to ask
> for something that is like -Os but does not make extreme tradeoffs (like not
> expanding simple division-by-constant sequences, or similar things that
> hurt performance a lot and usually save just a small amount of code).
> 
> I however wonder how large a portion of a program can safely be characterized
> as rarely executed without being unlikely. I.e. my intuition would be that it
> is a relatively small portion of a program, since code tends to be either dead
> or used reasonably often.
> 
> BTW the original motivation for "unlikely" was the function splitting pass, so
> the functions put into the unlikely section have a good chance of never being
> touched during program execution and thus never being mapped in.
> 
> It is in fact the only place it seems to be used to this day...
> 

Slightly on a tangent, but I think there would even be a case for -O1s,
-O2s and -O3s, with -Os == -O2s.  On this scale -O1s would be similar to
-O1 in optimizations, but would avoid some code-expanding transformations
(examples might include loop head duplication); -O2s would largely be
the same as -Os today, except that very expensive code-removal options
would not be applied; -O3s would apply aggressive size-based optimizations,
even at the expense of significant performance.

Once such a division is well defined, making LTO use the specified
categories should be easier.

R.
