I haven't done any hard benchmarking yet.  Here's what I have now.  I'm
running RH 8.0 right now, and playing an MP3 in XMMS took 0.2% of my CPU
with the precompiled i386 installation and the MP3 plugin RPM from Guru
Labs.  After simply rebuilding everything with --target=athlon (nothing
else), I play MP3s with 0% CPU utilization.  I'd have to do some
long-term statistical logging to give you firmer numbers.
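
(In case anyone wants to reproduce this, "rebuilding everything" just
means something along these lines for each source package; the package
name and version are only placeholders, and the output directory is
rpm's default on RH 8.0:)

    # rebuild a source RPM for the Athlon instruction set
    rpmbuild --rebuild --target=athlon xmms-1.2.7-*.src.rpm
    # the new binary package lands under /usr/src/redhat/RPMS/athlon/
    rpm -Uvh --replacepkgs /usr/src/redhat/RPMS/athlon/xmms-*.athlon.rpm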

On my brother's K6-2 500, I rebuilt Phoebe for i586.  He has an nVidia
GeForce2 MX200 32MB AGP, so I had to drop the kernel from RH 8.0 back
in.  His system was doing about 18 FPS at its fastest in the full
version of Quake 3 (newest point release), following a computer player
(no humans, three bots) in The Longest Yard.  Now it does about 22 FPS
at its fastest.  (Of course, I'd have to gather more statistics to be
sure of this, too.)
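
(If I do get around to real numbers, the built-in timedemo is probably
the way to go; the demo name varies between point releases, so treat
"four" as an example that happens to ship with the newer ones:)

    # run the stock timedemo and read the average FPS off the console
    quake3 +set timedemo 1 +demo four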

CPU optimizations can make a huge difference.  Consider how MMX and
3DNow! can greatly reduce program size (and therefore memory
consumption) and processor usage.  I remember when I had an Intel P200
(BTW, I'm not an Intel guy) playing Quake at reasonable speeds.  I
thought it was the coolest thing.  Then I got an IBM (Cyrix) 6x86 P120,
and Quake was smooth as silk.  Why?  Because software emulation of a
hardware optimization takes a LOT of processing, just like software 3D
versus hardware 3D acceleration.
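
(Roughly speaking, with gcc 3.2 as RH 8.0 ships it, the whole difference
is which -march the compiler is allowed to assume; foo.c is just a
stand-in file name:)

    # generic build: must run on any 386-class CPU
    gcc -O2 -march=i386 -c foo.c
    # Athlon build: gcc schedules for the K7 and may assume MMX/3DNow!
    gcc -O2 -march=athlon -c foo.c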

For those wanting numbers, though, I would have to agree...try it out.
Linux is such a smorgasbord of already well-written code.  Depending
upon what you're doing, "your mileage may vary".  I doubt that Samba,
for instance, would run any faster; it's usually limited by RAM, hard
drive speed, and network card speed.  Most basic applications, such as
file managers, won't see any improvement.  (Mozilla, sadly, won't ever
see any major improvement from recompiling.  Much of Mozilla is
effectively interpreted, since its interface is built from XUL and
JavaScript.  It's like browsing the internet in QBASIC.)

In the field of stability, it's funny, but rebuilding with optimizations
should actually improve stability.  When a program has to carry all of
that complex code itself, it's much more likely to have bugs.  If the
CPU implements the same functionality in silicon, that's far more
reliable than software which isn't as heavily tested.  Fortunately,
Linux software is very high-level.  By this I mean it's well organized,
whereas in Winblows each program runs its own code for everything, which
allows it to screw up anything it very well pleases and requires it to
carry complex code instead of relying on shared libraries.  True,
Windows does have some libraries, but they're not nearly as well
organized (nor as full-range) as Linux's.  Since Linux is largely
source-based (and open source), all anyone needs to do to optimize code
is recompile it.  Closed-source publishers have to ship code for every
circumstance and have the program choose the right path at run time
(originally with risky hardware tests at program start).
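
(For software you build straight from a source tarball rather than an
RPM, that usually just means handing the flags to configure; the flags
here are only an example, and most autoconf-based packages honor
CFLAGS:)

    # build from source with Athlon-tuned flags
    CFLAGS="-O2 -march=athlon" ./configure
    make
    make install    # as root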

>It was poo-poo'ed by Havoc earlier, too.  No offense, but until I see
>real numbers from a benchmark app, I'm inclined to believe two
>knowledgeable hackers over easily influenced geeks, who desperately want
>to believe they can squeeze out those last few drops of performance, and
>actually notice a difference.  Yeah, compiling EVERYTHING with a certain
>set of flags might yield a difference in performance, but I'm guessing
>that for most things, it's psychological - one *believes* it is faster.
>I'd love to see double-blind testing on this someday...

If optimization didn't make much difference, then why aren't we using
1700MHz 386 chips?  (Of course, newer chips take fewer clock cycles to
run the i386 instruction set, but that's not due to the instruction set
itself; it's due to the chip design.)

>Also, I refuse to believe that Red Hat aren't already looking to
>optimize their distribution as greatly as they feel is safe.  If Red
>Hat's not performing some optimization on their RPMS, I'm guessing it's
>for a *reason*.  What reason, outside of real technological issues,
>would they have for making their distro *slower* than they are capable
>of making it?  They could only lose for doing so!

I can understand using precompiled i386 code for a cookie-cutter
distro.  Also, sticking to the i386 instruction set avoids odd-ball
chips, like older IDT, Cyrix, and TI chips that get detected incorrectly
(like the infamous 5x86 that's read as an i686).  Plus, as I said
before, I don't think there's any noticeable drop in performance for
most servers from using unoptimized code.
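
(If you're curious what your own chip reports before trusting an
optimized build on it, the kernel's CPU detection is sitting right in
/proc:)

    # vendor, model, and the instruction-set flags the kernel detected
    grep -E 'vendor_id|model name|flags' /proc/cpuinfo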

When I had Sorcerer working, it wasn't just a little snappier, it was
downright fast at everything (that worked).  Where Red Hat takes around
three seconds to fully load AfterStep, Sorcerer loaded it before my
screen had even clicked over to the new refresh rate.

Also, as for whether they would make their distro *slower* than they are
capable of making it, note that they don't offer ReiserFS by default.
Ext3 is only a little more reliable, but ReiserFS is substantially
faster.  (In fact, I'm getting ready to post a message about how my ext3
filesystem died again.)  "It just works."  That doesn't mean it works
fast.
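
(Trying ReiserFS on a spare partition only takes a couple of commands,
assuming your kernel has the reiserfs module; /dev/hdb1 is just a
placeholder for a partition you can afford to wipe:)

    # WARNING: this destroys everything on the partition
    mkfs.reiserfs /dev/hdb1
    mkdir -p /mnt/test
    mount -t reiserfs /dev/hdb1 /mnt/test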

>Wow, 3 days on a nice system like that!

This brings up another thing about AMD chips.  I've always been
skeptical of dual-processor motherboards because the caches aren't
unified, and from the benchmarks I've looked up, I've only seen about a
7% increase in performance, and that was in MP-optimized software.
Everything else ran slower on dual-chip boards.  Unless I've been
terribly misinformed, why do they even sell MP setups (though my company
has yet to sell even one MP board)?

>Even if you could figure out how to run benchmarks..I still don't think
>you'd have any idea how to figure out what exactly gentoo is doing
>differently.

Try this.  Install RH and compile the kernel.  See how long it takes. 
Then, install all the .athlon.rpm stuff and do it again.  I'll try it in
a little bit, too.
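
(Something like this is what I have in mind, using the 2.4-era build
steps; the source path is where Red Hat's kernel-source package puts
things, and I'm assuming a .config is already in place; copy one out of
the configs/ directory first if there isn't:)

    cd /usr/src/linux-2.4
    make oldconfig          # reuse the existing .config
    make dep                # required for 2.4 kernels
    time make bzImage       # this is the number to compare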

--Benjamin Vander Jagt



-- 
Phoebe-list mailing list
[EMAIL PROTECTED]
https://listman.redhat.com/mailman/listinfo/phoebe-list
