While trying to figure out an appropriate cost expression function for
Thick indexes, I learned that we are using the Mackert and Lohman formula
(described in their paper "Index Scans Using a Finite LRU Buffer: A
Validated I/O Model", ACM Transactions on Database Systems).
The paper's result is as follows:
# Heap Pages fetched from disk for x index probes =
 min(2TDx/(2T+Dx), T)            when T <= b
 2TDx/(2T+Dx)                    when T > b and Dx <= 2Tb/(2T-b)
 b + (Dx - 2Tb/(2T-b))*(T-b)/T   when T > b and Dx > 2Tb/(2T-b)

where,
T = # pages in the table
N = # tuples in the table
D = average number of times an index value is repeated in the table
    (duplication factor)
x = # index probes, and
b = buffer/cache size (in pages)

Please note that the above result only computes _heap_ page reads.
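
For illustration, here is a minimal sketch of how the above formula can
be computed (my own translation of the formula, not the actual
index_pages_fetched() code; variable names follow the definitions above):

  #include <math.h>

  /*
   * Estimate heap pages fetched from disk for x index probes, using the
   * Mackert-Lohman approximation quoted above.
   *
   * T - number of pages in the table
   * D - duplication factor (avg. tuples per index value)
   * x - number of index probes
   * b - buffer/cache size in pages
   */
  static double
  heap_pages_fetched(double T, double D, double x, double b)
  {
      double tuples = D * x;      /* Dx: total tuples fetched */
      double pages;

      if (T <= b)
      {
          /* table fits in cache; result can never exceed T */
          pages = (2.0 * T * tuples) / (2.0 * T + tuples);
          if (pages > T)
              pages = T;
      }
      else
      {
          /* crossover point between the two T > b cases */
          double lim = (2.0 * T * b) / (2.0 * T - b);

          if (tuples <= lim)
              pages = (2.0 * T * tuples) / (2.0 * T + tuples);
          else
              pages = b + (tuples - lim) * (T - b) / T;
      }
      return pages;
  }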


The above expression is used by the index_pages_fetched() function to
compute index scan cost. The function, however, doesn't account for the
cost of reading index pages. On average, an index probe will require
(h - 1) page reads from disk, where h is the height of the B-tree
(assuming # index probes << # index key values).  I can post the details
of the derivation of this result, if required; a rough sketch of how the
two components could be combined follows below.
I am planning to use a similar expression for the Thick index cost
functions.
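
As an illustration only (this is the approach I have in mind, not
existing costing code), the total number of disk page reads for x
probes could be estimated by adding the index page component to the
heap page estimate sketched above, with h the B-tree height:

  /*
   * Rough total pages read from disk for x index probes:
   * (h - 1) non-root index pages per probe, plus the Mackert-Lohman
   * heap page estimate.  Valid only when # probes << # key values.
   */
  static double
  index_scan_pages(double T, double D, double x, double b, int h)
  {
      double index_pages = x * (h - 1);
      double heap_pages  = heap_pages_fetched(T, D, x, b);

      return index_pages + heap_pages;
  }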


Upon taking a cursory look at the cost functions of other operators, I
realized that available memory (effective_cache_size) is not taken into
account when estimating the costs of hash/sort/nested-loop join/etc.
Why is that the case?

Regards,
Amit
Persistent Systems
