On Wed, Mar 8, 2017 at 1:49 AM, Dilip Kumar <dilipbal...@gmail.com> wrote:
> On Tue, Mar 7, 2017 at 10:10 PM, Dilip Kumar <dilipbal...@gmail.com> wrote:
>>> It's not about speed.  It's about not forgetting to prefetch.  Suppose
>>> that worker 1 becomes the prefetch worker but then doesn't return to
>>> the Bitmap Heap Scan node for a long time because it's busy in some
>>> other part of the plan tree.  Now you just stop prefetching; that's
>>> bad.  You want prefetching to continue regardless of which workers are
>>> busy doing what; as long as SOME worker is executing the parallel
>>> bitmap heap scan, prefetching should continue as needed.
>>
>> Right, I missed this part. I will fix this.
> I have fixed this part.  While doing that I realised that if multiple
> processes are prefetching, then in boundary cases (e.g. suppose
> prefetch_target is 3 and prefetch_pages is at 2) there may be some
> extra prefetches, but those prefetched blocks will eventually be used.
> Another solution to this problem is to increase prefetch_pages in
> advance and then call tbm_shared_iterate; this will avoid the extra
> prefetches.  But I am not sure which will be best here.

I don't think I understand exactly why this system might be prone to a
little bit of extra prefetching - can you explain further? I don't
think it will hurt anything as long as we are talking about a small
number of extra prefetches here and there.
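
To make sure we're talking about the same thing, here's a toy model of
the ordering issue as I understand it.  This is plain pthreads, not the
patch; prefetch_pages and prefetch_target just borrow the names used
above, and the atomic counter stands in for PrefetchBuffer():

/*
 * Two "workers" share a prefetch counter with a target of 3.  If the
 * counter were incremented only after the lock is released (variant A),
 * both workers could pass the test while prefetch_pages is 2, so a
 * couple of extra prefetches could be issued near the boundary.
 * Incrementing it before releasing the lock (variant B, shown here)
 * keeps the count bounded by the target.
 */
#include <pthread.h>
#include <stdio.h>

#define PREFETCH_TARGET 3

static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
static int prefetch_pages = 0;      /* shared across workers */
static int prefetches_issued = 0;   /* stand-in for PrefetchBuffer() calls */

static void *
worker(void *arg)
{
    (void) arg;

    for (;;)
    {
        pthread_mutex_lock(&mutex);
        if (prefetch_pages >= PREFETCH_TARGET)
        {
            pthread_mutex_unlock(&mutex);
            break;
        }

        /* Variant B: bump the counter while still holding the lock. */
        prefetch_pages++;
        pthread_mutex_unlock(&mutex);

        /* The actual prefetch would happen here, outside the lock. */
        __sync_fetch_and_add(&prefetches_issued, 1);
    }
    return NULL;
}

int
main(void)
{
    pthread_t w1, w2;

    pthread_create(&w1, NULL, worker, NULL);
    pthread_create(&w2, NULL, worker, NULL);
    pthread_join(w1, NULL);
    pthread_join(w2, NULL);

    printf("prefetches issued: %d (target %d)\n",
           prefetches_issued, PREFETCH_TARGET);
    return 0;
}

If that model is accurate, the overshoot in variant A is bounded by the
number of workers that can pass the test at the same moment, which is
consistent with my feeling that a few extra prefetches here and there
are harmless.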

> Fixed

Committed 0001.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


