On Dec 12, 2012, at 8:57 AM, Evgeny Shishkin <itparan...@gmail.com> wrote:

> 
> On Dec 12, 2012, at 8:44 AM, Huan Ruan <huan.ruan...@gmail.com> wrote:
> 
>> 
>> On 12 December 2012 15:33, Evgeny Shishkin <itparan...@gmail.com> wrote:
>> The optimiser thinks the nested loop is more expensive because of the 
>> point PK lookups, which are random I/O.
>> Can you set random_page_cost to 2 or 3 and try again?
>> 
>> Hi Evgeny
>> 
>> Thanks for the quick reply. Setting random_page_cost to 3 doesn't make a 
>> difference, but 2 makes the optimiser choose the nested loop. However, 
>> with such a small penalty for random I/O, I'm worried this setting will 
>> make other small queries incorrectly use an index where a sequential 
>> scan would be faster. I understand random I/O is expensive, but in this 
>> case the optimiser already knows the big table is really big; shouldn't 
>> it conclude that a sequential scan will be slower than an index lookup? 
>> Scanning 170 million records vs an index lookup of 50,000 records. Any 
>> thoughts?
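To make that trade-off concrete, you can compare both plans in a single session with EXPLAIN ANALYZE; the table and column names below are made up for illustration, only the SET LOCAL / EXPLAIN pattern matters:

```sql
-- Compare the planner's choice under both cost settings (names are hypothetical).
BEGIN;
SET LOCAL random_page_cost = 4;      -- default value: planner avoids the index
EXPLAIN ANALYZE
SELECT b.*
FROM   small_table s
JOIN   big_table  b ON b.id = s.big_id;

SET LOCAL random_page_cost = 2;      -- cheaper random I/O: nested loop may win
EXPLAIN ANALYZE
SELECT b.*
FROM   small_table s
JOIN   big_table  b ON b.id = s.big_id;
ROLLBACK;                            -- SET LOCAL reverts with the transaction
```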
>> 
> 
> Yes, this is the most common issue for me. 
> Usually you just have to find the right combination of random and seq scan 
> costs, shared_buffers and effective_cache_size.
> If some of the queries work well with another value of, say, 
> random_page_cost, then, since it is a per-session parameter, you can SET 
> it in your session before the query. But over time your table may change 
> in size and distribution and everything breaks. Not to mention the 
> general ugliness from the application standpoint.
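A minimal sketch of the per-session approach (the value 2 is just an example):

```sql
-- Lower the random I/O penalty only for this session.
SET random_page_cost = 2;            -- lasts until RESET or disconnect

-- Or scope it to a single transaction:
BEGIN;
SET LOCAL random_page_cost = 2;      -- reverts automatically at COMMIT/ROLLBACK
-- ... run the query that needs the index plan here ...
COMMIT;

RESET random_page_cost;              -- back to the postgresql.conf value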
> 
> Maybe somebody more experienced can help.
> 
> Also you can set different costs per tablespace.
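Per-tablespace costs are set with ALTER TABLESPACE; for example, for an SSD-backed tablespace (the tablespace name here is hypothetical):

```sql
-- Tell the planner that random reads are cheap on this SSD-backed tablespace.
ALTER TABLESPACE fast_ssd
    SET (random_page_cost = 1.5, seq_page_cost = 1.0);
```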
> 
>> Thanks
>> Huan
> 

Added CC.
