Jim Lux wrote:

You're right. But what might be interesting is whether the kernel fraction goes up for some folks but not for others, or whether there are radical (as in factor-of-two) differences in memory/IO/paging.
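
A quick way to get those numbers for comparison across machines is to wrap the workload and read getrusage(). This is just a sketch in Python (the real application may well be in another language), and the dummy workload at the bottom is only a stand-in:

import resource

def report_usage(workload):
    # Unix-only: compare user vs. system CPU time and page-fault counts
    # before and after running the workload.
    before = resource.getrusage(resource.RUSAGE_SELF)
    workload()
    after = resource.getrusage(resource.RUSAGE_SELF)

    user = after.ru_utime - before.ru_utime
    system = after.ru_stime - before.ru_stime
    total = user + system
    kernel_fraction = system / total if total else 0.0

    print(f"user {user:.2f}s  system {system:.2f}s  "
          f"kernel fraction {kernel_fraction:.1%}")
    print(f"major faults {after.ru_majflt - before.ru_majflt}  "
          f"minor faults {after.ru_minflt - before.ru_minflt}")

# Stand-in workload; substitute the real compute/decode pass.
report_usage(lambda: sum(i * i for i in range(10**6)))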


Practically and philosophically, one has to decide how much it's worth to track something like this down. At some point, it's cheaper just to buy a faster computer (or tolerate the 30% utilization, which is still low) than to try and find the problem.

Yes and no. If this were the last release of the software, ever, certainly. However, if we drop (say) 10 per cent in performance with every preview, the losses compound, and even Moore's Law won't be able to keep up with them.
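
Back-of-the-envelope, assuming (my numbers, not anyone's commitment) a preview every 3 months and hardware doubling every 24 months:

RELEASE_INTERVAL_MONTHS = 3      # assumed release cadence
DOUBLING_PERIOD_MONTHS = 24      # assumed Moore's Law doubling time
LOSS_PER_RELEASE = 0.10          # 10% slower per preview

software = 1.0   # relative software efficiency
hardware = 1.0   # relative hardware speed

for release in range(1, 13):     # three years of previews
    software *= 1.0 - LOSS_PER_RELEASE
    hardware *= 2 ** (RELEASE_INTERVAL_MONTHS / DOUBLING_PERIOD_MONTHS)
    net = software * hardware    # what the user actually experiences
    print(f"release {release:2d}: software x{software:.2f}  "
          f"hardware x{hardware:.2f}  net x{net:.2f}")

With those assumptions the hardware gain per release (about 9 per cent) never quite covers the 10 per cent loss, so the net experienced speed drifts downward with every preview.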

It's a trade-off.

The real question ought to be:  Do we know why it _should_ be happening?

It's one thing to say "this new feature is worth raising our minimum computation requirements."

It's quite another to say "oh, heck, let's give away 10 per cent this time." You can do that once in a while, or even let it dangle and hope to figure it out later on, but in the long run, it won't do.



Larry   WO0Z




