2008/10/4 Richard Freeman <[EMAIL PROTECTED]>:
> Ben de Groot wrote:
>>
>> -Os optimizes for size, while -O2 optimizes for speed. There is no need
>> at all to use -Os on a modern desktop machine, and it will generally run
>> slower than -O2-optimized code, which is probably not what you want.
>>
>
> There are a couple of schools of thought on that, and I think performance
> can depend a great deal on what program you're talking about.
>
> On any machine, memory is a limited resource.  Oh sure, you could just
> "spend a little more on decent RAM", but you could also "spend a little more
> on a decent CPU" or whatever.  For a given amount of money you can only buy
> so much hardware, so any dollar spent on RAM is a dollar not spent on
> something else.
>
> So, if you reduce the memory footprint of processes, then you increase the
> amount of memory available for buffers/cache.  That cache is many orders of
> magnitude faster than even the fastest hard drive.  You also reduce
> swapping, which obviously slows things down a great deal.
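> A quick way to watch this in practice (just an illustration; any recent
> Linux will do):
>
>     $ free -m    # the "cached" and "buffers" columns show RAM the
>                  # kernel is using for page cache and block I/O buffers
>
> Every megabyte your binaries don't occupy is a megabyte the kernel can
> put toward that cache.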
>
> On the other hand, if you have a smaller program that does highly
> CPU-intensive tasks like compression/transcoding/etc, then speed
> optimization makes sense just about all the time (generally -O2 only;
> -O3 sometimes does worse due to L2 cache misses).
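> As a rough sketch of how I'd measure that (the bzip2 source tree is just
> a stand-in here; any CPU-bound program you can rebuild works, and
> testfile.bin is a hypothetical input, big enough to dominate the timing):
>
>     # build the same tool at each optimization level and time it on
>     # identical input
>     for opt in -O2 -O3 -Os; do
>         make clean >/dev/null
>         make CFLAGS="$opt" >/dev/null
>         echo "== $opt =="
>         time ./bzip2 -9 -c testfile.bin >/dev/null
>     done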
>
> So, there are trade-offs.  To make things even more complicated, the
> practical results can be impacted by your CPU model: different CPUs have
> different costs for looping/jumping/branching vs cache misses.  And the
> compiler makes a big difference; many of these observations date back to
> the gcc 3.4 days.  I've heard that newer GCC versions are more aggressive
> with size optimization at the expense of speed, which could tip the balance.
>
> A while back there were some posts from folks who had done some actual
> benchmarking.  I don't think it has been repeated.  Note that I wouldn't
> expect -Os to have much benefit unless you compile EVERYTHING on the system
> that way - since the freeing up of memory is cumulative.  The rather high
> level of pain associated with rebuilding your system with -Os vs -O2 for
> some benchmarking and subjective evaluation is probably the reason why
> everybody has an opinion but there isn't much hard data.
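> (The commands themselves are trivial, for what it's worth; the pain is
> the day or two of compiling.  Assuming a stock Gentoo setup:
>
>     # /etc/make.conf
>     CFLAGS="-Os -pipe"
>     CXXFLAGS="${CFLAGS}"
>
>     # then rebuild the entire system with the new flags
>     emerge --emptytree world
>
> and repeat with -O2 for the comparison run.)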
>
> Right now I'm using -Os across the board, and -O2 on an exception basis.
>  Maybe I should give that some more thought...
>
>

Well, -Os not only does what -O2 does but also enables some size-safe -O3
flags on top of the size optimizations. If you compare the -Os, -O2 and
-O3 flag sets, you'll see that most of the -O3 flags you would want over
-O2 are inside -Os. I've actually tried out -O2, -O3 and -Os and found
that -O3 is sometimes faster (not every time, and with system packages it
is often even slower than -O2), while -Os is almost always faster than
-O2. This leads me to think that if something is both faster and smaller,
it is really worth trying out. I made the comparison on gcc 4.1.0 with
all of system and world built with a single option (of course, some
packages have hardcoded optimizations in their ebuilds and won't be
affected). When I have some spare time I'll dig up the reports and post
them.
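
If you want to check the flag lists yourself, newer GCC releases (4.3 and
later, I believe; 4.1.0 doesn't have this option) can dump the
optimizations enabled at each level:

    # dump the flags each level enables, then diff the lists
    gcc -Q -Os --help=optimizers > /tmp/Os.txt
    gcc -Q -O2 --help=optimizers > /tmp/O2.txt
    gcc -Q -O3 --help=optimizers > /tmp/O3.txt
    diff /tmp/O2.txt /tmp/Os.txt    # what -Os changes relative to -O2
    diff /tmp/O2.txt /tmp/O3.txt    # what -O3 adds over -O2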

-- 
dott. ing. beso
