Rod,

  I agree with you that the best solution to the issue of an actual
application doing this type of programming is to correct the
program.  While throwing hardware at it may seem "easier", I'm always
amazed by programmers' reactions when I point out that by making
"minor" changes to a program, they can achieve big performance
improvements (in both CPU and elapsed time).



Peter I. Vander Woude

Sr. Mainframe Engineer
Harris Teeter, Inc.



>>> [EMAIL PROTECTED] 08/11/2003 4:06:03 PM >>>
>What you are seeing is the result of a badly configured
>paging subsystem

>What happens: Linux touches all its pages when it boots.
>These pages then, over time, get paged out. Then you run your
>program - and all those pages get paged back in.

>Applications with more consistent working sets would not
>see this.

Erm... far be it from me to argue with the master on the
performance of the actual physical hardware, but isn't the
point here that they've constructed a worst-possible-case
scenario (is this called a degenerate case these days?),
and that if everything needs to be paged back in, then the
problem is that their program has inherently produced a
paging scenario? (Assuming that the pages have been paged
out, of course.)

I know that the paging and pre-fetch algorithms on VM are
pretty good but isn't the best solution to this case to
change the program? Standard program performance and tuning?
Or even system tuning for any pre-fetch algorithms?

(Aside: I was surprised how much of my VM perf/tun work
carried over to Solaris when I went on their perf/tun course,
and there's 15+ years between the two courses. Of course, I
was left wondering how people manage when one of the other
sysadmins didn't want to know that it really still matters
where you put your swap, system utils etc. on a disk,
because in the 21st century you shouldn't have to
be bothered with that. Sigh. Oink, oink, flap, flap.)

Rod
