On 2/12/2007 11:43 AM, Tom Lane wrote:
> Greg Smith <[EMAIL PROTECTED]> writes:
>> Right now when you run pgbench, the results vary considerably from run to
>> run even if you completely rebuild the database every time. I've found
>> that a lot of that variation comes from two things:
>
> This is a real issue, but I think your proposed patch does not fix it.
> A pgbench run will still be penalized according to the number of
> checkpoints or autovacuums that happen while it occurs. Guaranteeing
> that there's at least one is maybe a bit more fair than allowing the
> possibility of having none, but it's hardly a complete fix. Also,
> this approach means that short test runs will have artificially lower
> TPS results than longer ones, because the fixed part of the maintenance
> overhead is amortized over fewer transactions.
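The amortization point above can be illustrated with a back-of-the-envelope sketch. This is a toy model, not anything pgbench does: `observed_tps` is a hypothetical helper and every number is invented for illustration.

```python
# Toy model: a fixed amount of maintenance work (checkpoint + autovacuum)
# costs roughly the same wall-clock time regardless of run length, so a
# short run loses a larger fraction of its time budget to it.
# All figures below are made-up assumptions, not measurements.

def observed_tps(raw_tps: float, run_seconds: float,
                 fixed_overhead_seconds: float) -> float:
    """Reported TPS when a fixed maintenance cost eats into the run."""
    useful_seconds = run_seconds - fixed_overhead_seconds
    transactions = raw_tps * useful_seconds
    return transactions / run_seconds

# Same hardware, same fixed overhead, different run lengths:
short_run = observed_tps(raw_tps=500, run_seconds=60, fixed_overhead_seconds=10)
long_run = observed_tps(raw_tps=500, run_seconds=600, fixed_overhead_seconds=10)
# The short run is penalized far more by the identical fixed cost.
```

With these invented numbers the 60-second run loses a sixth of its time budget to maintenance while the 600-second run loses under 2%, which is exactly the "artificially lower TPS for short runs" effect described above.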
Any benchmark that doesn't run exclusively on the server, and that isn't
given a large enough data set and enough time to populate the buffer
cache the same way on each run, WILL report more or less random TPS
results. Real benchmarks on considerably sized hardware have ramp-up
times measured in hours if not days, with the sole purpose of populating
the cache and thus smoothing out the transaction response profile. I
think this change is an entirely misleading approach to tackling the
problem at hand.
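The ramp-up argument can be sketched as a toy model: until the buffer cache is populated, the cache hit ratio (and with it the apparent TPS) keeps climbing, so a run started cold is measuring a moving target. The linear fill model, `cache_fill_txns`, and all the constants here are invented assumptions for illustration, not anything measured on real hardware.

```python
# Toy model of benchmark ramp-up: throughput depends on the cache hit
# ratio, which rises as transactions touch pages until the cache fills.
# All parameters are hypothetical.

def hit_ratio(txns_so_far: int, cache_fill_txns: int = 100_000) -> float:
    """Fraction of reads served from cache; saturates once the cache fills."""
    return min(1.0, txns_so_far / cache_fill_txns)

def tps_at(txns_so_far: int, hit_tps: float = 1000.0,
           miss_tps: float = 100.0) -> float:
    """Instantaneous TPS: weighted harmonic mean of hit/miss service rates."""
    h = hit_ratio(txns_so_far)
    return 1.0 / (h / hit_tps + (1.0 - h) / miss_tps)

cold = tps_at(0)         # run started with a cold cache
mid = tps_at(50_000)     # partway through cache population
warm = tps_at(200_000)   # after a ramp-up long enough to fill the cache
```

Under these assumptions a run that starts cold reports a tenth of the steady-state throughput, and any run shorter than the ramp-up averages over a curve that is still climbing, so where the timing window happens to fall dominates the reported number.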
Jan
--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#================================================== [EMAIL PROTECTED] #