On Wed, Apr 10, 2013 at 8:56 AM, Steve Singer <ssin...@ca.afilias.info> wrote:

> On 13-04-10 09:56 AM, k...@rice.edu wrote:
>
>> On Wed, Apr 10, 2013 at 09:49:55AM -0400, Steve Singer wrote:
>>
>
>
>> Hi Steve,
>>
>> The one thing that stands out to me is that you are working with 200GB of
>> data on a machine with 4-8GB of ram and you have the random_page_cost set
>> to 2.0. That is almost completely uncached and I would expect a value of
>> 10 or more to be closer to reality.
>>
>
> Setting random_page_cost to 15 makes the planner choose the nested-loop
> plan (at least for the date range I tried).
>
> I thought that the point of effective_cache_size was to tell the planner
> how likely it is for a random page to be in cache.



effective_cache_size (e_c_s) tells the planner how likely a page is to
still be in cache the second (and subsequent) time it is visited during the
*same query*.  It doesn't tell it how likely the page is to be in cache the
first time it is needed in a given query.  (Also, e_c_s is irrelevant for
bitmap scans, as they inherently hit every block only once.)
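
If you want to see how much of a particular run is actually coming out of
shared_buffers rather than being read in, EXPLAIN with the BUFFERS option
reports shared hits versus reads.  Just a sketch, with made-up table and
column names rather than Steve's actual schema:

-- "shared hit" = found in shared_buffers, "read" = fetched from the OS
-- (which may or may not have had it in its own cache).
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*)
FROM big_table                      -- placeholder name
WHERE event_date >= '2013-03-01'
  AND event_date <  '2013-04-01';   -- placeholder column and range

A large number of reads relative to hits, even on repeated runs, would back
up the idea that the working set really doesn't fit in cache, though those
reads can still be satisfied by the OS page cache, so treat it as a rough
indicator only.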


Also, e_c_s doesn't tell the planner how expensive it is to bring a page
into cache when it is needed.  That is what random_page_cost is for.  If
you tell the planner that those fetches are going to be cheap, then it
doesn't matter so much how many of them it is going to have to do.
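
You can experiment with that without touching postgresql.conf by overriding
the setting for a single session and comparing the plans.  Something along
these lines (the query here is only a stand-in for the one in the original
post):

-- Under the current setting the planner thinks random I/O is fairly cheap.
SET random_page_cost = 2.0;
EXPLAIN
SELECT *
FROM big_table b                    -- placeholder names
JOIN other_table o ON o.id = b.other_id
WHERE b.event_date >= '2013-03-01'
  AND b.event_date <  '2013-04-01';

-- Closer to reality for a mostly uncached 200GB data set; in Steve's test
-- this is where the planner switched to the nested-loop plan.
SET random_page_cost = 15;
EXPLAIN
SELECT *
FROM big_table b
JOIN other_table o ON o.id = b.other_id
WHERE b.event_date >= '2013-03-01'
  AND b.event_date <  '2013-04-01';

RESET random_page_cost;             -- back to the configured value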

Cheers,

Jeff
