On 08/17/2009 03:24 AM, Craig Ringer wrote:
On 16/08/2009 9:06 PM, NTPT wrote:
So I suggest we should have random_page_cost and
seq_page_cost configurable on a per-tablespace basis.
That strikes me as a REALLY good idea, personally, though I don't know
enough about the planner to factor in implementation practicalities.
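For concreteness, the proposed knob could look roughly like this. The syntax below is an illustration only (it did not exist at the time of this thread; PostgreSQL 9.0 later adopted essentially this form as tablespace parameters):

```sql
-- Hypothetical illustration of the proposal: a tablespace on a
-- rapid-seek device gets its own page-cost settings, so the planner
-- stops assuming random I/O there is ~4x as expensive as sequential I/O.
CREATE TABLESPACE fast_ssd LOCATION '/mnt/ssd/pgdata';
ALTER TABLESPACE fast_ssd
    SET (random_page_cost = 1.1, seq_page_cost = 1.0);
```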
AFAIK PostgreSQL measures characteristics of the data distribution in the
tables and indexes (that is what VACUUM ANALYZE does), but the results of
those measurements are **weighted by** random_page_cost and
seq_page_cost. So the measurements are correct, but the costs (weights)
should reflect the real characteristics of the underlying storage.
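To illustrate the weighting being described, here is a deliberately simplified sketch of how page-cost settings tilt the planner's choice between a sequential scan and an index scan. This is not the actual PostgreSQL cost model; the function names and formulas are assumptions for illustration only:

```python
# Simplified sketch: page-cost GUCs act as weights on the planner's
# estimates. (Not the real PostgreSQL cost functions.)

def seq_scan_cost(pages, tuples, seq_page_cost=1.0, cpu_tuple_cost=0.01):
    """Read every page sequentially, plus per-tuple CPU work."""
    return seq_page_cost * pages + cpu_tuple_cost * tuples

def index_scan_cost(pages_fetched, tuples,
                    random_page_cost=4.0, cpu_tuple_cost=0.01):
    """Assume each fetched heap page needs a random seek."""
    return random_page_cost * pages_fetched + cpu_tuple_cost * tuples

pages, tuples = 1000, 100_000          # whole table
fetched_pages, matching = 600, 1_000   # a moderately selective query

seq  = seq_scan_cost(pages, tuples)                            # 2000.0
hdd  = index_scan_cost(fetched_pages, matching)                # 2410.0
ssd  = index_scan_cost(fetched_pages, matching,
                       random_page_cost=1.1)                   # 670.0
```

With the default random_page_cost of 4.0 the index scan looks more expensive than the sequential scan; with a per-tablespace value of 1.1 for a rapid-seek device, the same correct statistics lead the planner to the index scan instead.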
2009/8/17 Jeremy Harris j...@wizmail.org:
Could not pgsql *measure* these costs (on a sampling basis, and with long
time-constants)?
In theory, sure. In practice, well, there are some engineering
challenges to solve.
1) The cost model isn't perfect, so it's not clear exactly what to
measure.
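Setting that difficulty aside, the "sampling basis with long time-constants" idea could be sketched as an exponential moving average over observed per-page fetch costs. Everything below is hypothetical, not PostgreSQL code; the class and parameter names are invented for illustration:

```python
# Sketch of sampling-based cost estimation with a long time-constant:
# each observed per-page cost nudges the running estimate only slightly,
# so a single slow or fast I/O cannot swing the planner.

class SampledPageCost:
    def __init__(self, initial_cost=4.0, alpha=0.001):
        # alpha near 0 => long time-constant (slow adaptation).
        self.cost = initial_cost
        self.alpha = alpha

    def observe(self, measured_cost):
        # Exponential moving average of observed per-page costs.
        self.cost += self.alpha * (measured_cost - self.cost)
        return self.cost

est = SampledPageCost(initial_cost=4.0, alpha=0.001)
for _ in range(10_000):      # many samples from an SSD-like device
    est.observe(1.1)
# After enough samples the estimate converges toward the device's
# real cost, however the table's tablespace happens to be backed.
```

The long time-constant is the interesting engineering choice: it trades responsiveness (a tablespace moved to new hardware adapts slowly) for plan stability.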
Hi all,
I have an idea/feature request.
There are now several devices available that could be called rapid seek
devices (RSD in the following text): SSD disks, devices like the Gigabyte
i-RAM, and other (semi)professional RAM-disk solutions, for
example the Acard ANS-9010. Rapid seek