"Bruce Momjian" <[EMAIL PROTECTED]> writes:

> Uh, am I supposed to be running more TOAST tests?  Would someone explain
> what they want tested?

If you want my opinion, I would say we need two tests:

1) For TOAST_TUPLE_TARGET:

We need to rerun the test scripts you already have at sizes that cause actual
disk I/O. The real cost of TOAST lies in the random-access seeks, and your
tests all fit in memory, so they're missing that cost entirely.
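Something along these lines, off the top of my head (table and column names
are mine, untested; scale the row count until the heap plus its toast table
comfortably exceed physical memory):

-- Force the payload out of line uncompressed, so every row fetch
-- really touches the toast table.
CREATE TABLE toast_io_test (id int, payload text);
ALTER TABLE toast_io_test ALTER COLUMN payload SET STORAGE EXTERNAL;

-- ~8kB per row; bump the generate_series bound until the table
-- is larger than RAM.
INSERT INTO toast_io_test
SELECT i, repeat(md5(i::text), 256)
  FROM generate_series(1, 2000000) AS i;

-- Restart the server (or otherwise flush the OS cache) first, so the
-- timing reflects real disk seeks rather than buffer cache hits.
\timing
SELECT sum(length(payload)) FROM toast_io_test;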

2) And for TOAST_MAX_CHUNK_SIZE:

Set TOAST_MAX_CHUNK_SIZE to 8k and TOAST_TUPLE_TARGET to 4097, and store a
large table (larger than RAM) of 4069-byte rows (verifying that this creates
two chunks for each tuple). Test how long a sequential scan with hashtext()
takes. Then compare that against a build with TOAST_MAX_CHUNK_SIZE set to 4k
(verifying that the toast table is much smaller in that configuration).
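To make that concrete, roughly this (again untested, names are mine; it
assumes a server rebuilt with those #defines changed in
src/include/access/tuptoaster.h):

CREATE TABLE chunk_test (payload text);
-- EXTERNAL storage keeps the datum uncompressed, so it really is
-- stored as 4069 bytes in the toast table.
ALTER TABLE chunk_test ALTER COLUMN payload SET STORAGE EXTERNAL;

-- Exactly 4069 bytes per row; scale the row count past RAM.
INSERT INTO chunk_test
SELECT substr(repeat(md5(i::text), 128), 1, 4069)
  FROM generate_series(1, 5000000) AS i;

-- Find the toast table, then count chunks per datum:
SELECT reltoastrelid::regclass FROM pg_class
 WHERE relname = 'chunk_test';
-- (substitute the toast table name the previous query returns)
SELECT chunk_id, count(*)
  FROM pg_toast.pg_toast_NNNNN
 GROUP BY chunk_id LIMIT 5;

-- Toast table size, for comparing the two builds:
SELECT pg_relation_size(reltoastrelid) FROM pg_class
 WHERE relname = 'chunk_test';

-- The actual timing test:
\timing
SELECT sum(hashtext(payload)) FROM chunk_test;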

Actually, I think we need to do the second test first, because if it shows
that bloating the toast table is faster than chopping the data into finer
chunks, then we'll want to set TOAST_MAX_CHUNK_SIZE to 8k, and your tests
above will have to be rerun.

-- 
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com

