On Tue, Mar 7, 2017 at 10:10 PM, Dilip Kumar <dilipbal...@gmail.com> wrote:
>> It's not about speed.  It's about not forgetting to prefetch.  Suppose
>> that worker 1 becomes the prefetch worker but then doesn't return to
>> the Bitmap Heap Scan node for a long time because it's busy in some
>> other part of the plan tree.  Now you just stop prefetching; that's
>> bad.  You want prefetching to continue regardless of which workers are
>> busy doing what; as long as SOME worker is executing the parallel
>> bitmap heap scan, prefetching should continue as needed.
>
> Right, I missed this part. I will fix this.
I have fixed this part.  While doing so I realised that when multiple
processes prefetch concurrently, boundary cases may produce a few extra
prefetches (e.g. if prefetch_target is 3 and prefetch_pages is at 2, two
workers may both see room for one more page and each prefetch one), but
those extra prefetched blocks will still be consumed eventually, so the
work is not wasted.  An alternative is to increment prefetch_pages in
advance and then call tbm_shared_iterate; that avoids the extra
prefetch.  I am not sure which approach is better here.
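To make the alternative concrete, here is a rough sketch of the
"increment in advance" ordering, assuming the shared counters sit in a
parallel-scan state struct protected by a spinlock (pstate, mutex and
the local variables here are illustrative names, not the literal patch):

    bool                do_prefetch = false;
    TBMIterateResult   *tbmpre;

    SpinLockAcquire(&pstate->mutex);
    if (pstate->prefetch_pages < pstate->prefetch_target)
    {
        /* claim the slot before iterating, so no worker over-prefetches */
        pstate->prefetch_pages++;
        do_prefetch = true;
    }
    SpinLockRelease(&pstate->mutex);

    if (do_prefetch)
    {
        tbmpre = tbm_shared_iterate(prefetch_iterator);
        if (tbmpre != NULL)
            PrefetchBuffer(scan->rs_rd, MAIN_FORKNUM, tbmpre->blockno);
    }

Because the counter is bumped before tbm_shared_iterate is called, two
workers can never both think the same slot is still free.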


On Tue, Mar 7, 2017 at 9:41 PM, Robert Haas <robertmh...@gmail.com> wrote:
> +    if (--ptbase->refcount == 0)
> +        dsa_free(dsa, istate->pagetable);
> +
> +    if (istate->spages)
> +    {
> +        ptpages = dsa_get_address(dsa, istate->spages);
> +        if (--ptpages->refcount == 0)
> +            dsa_free(dsa, istate->spages);
> +    }
> +    if (istate->schunks)
> +    {
> +        ptchunks = dsa_get_address(dsa, istate->schunks);
> +        if (--ptchunks->refcount == 0)
> +            dsa_free(dsa, istate->schunks);
> +    }
>
> This doesn't involve any locking, which I think will happen to work
> with the current usage pattern but doesn't seem very robust in
> general.  I think you either need the refcounts to be protected by a
> spinlock, or maybe better, use pg_atomic_uint32 for them.  You want
> something like if (pg_atomic_sub_fetch_u32(&refcount, 1) == 0) {
> dsa_free(...) }
>
> Otherwise, there's no guarantee it will get freed exactly once.

Fixed
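
The refcount fields are now pg_atomic_uint32 (initialized with
pg_atomic_init_u32), and each release site follows the
decrement-and-test pattern you showed; roughly (field names
illustrative):

    ptbase = dsa_get_address(dsa, istate->pagetable);

    /* atomic decrement-and-test: exactly one backend sees zero and frees */
    if (pg_atomic_sub_fetch_u32(&ptbase->refcount, 1) == 0)
        dsa_free(dsa, istate->pagetable);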


On Tue, Mar 7, 2017 at 9:34 PM, Robert Haas <robertmh...@gmail.com> wrote:
> You've still got this:
>
> +               if (DsaPointerIsValid(node->pstate->tbmiterator))
> +                       tbm_free_shared_area(dsa, node->pstate->tbmiterator);
> +
> +               if (DsaPointerIsValid(node->pstate->prefetch_iterator))
> +                       dsa_free(dsa, node->pstate->prefetch_iterator);
>
> I'm trying to get to a point where both calls use
> tbm_free_shared_area() - i.e. no peeking behind the abstraction layer.

Fixed
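
Both cleanup calls now go through the tidbitmap API; the code reads
roughly:

    if (DsaPointerIsValid(node->pstate->tbmiterator))
        tbm_free_shared_area(dsa, node->pstate->tbmiterator);

    if (DsaPointerIsValid(node->pstate->prefetch_iterator))
        tbm_free_shared_area(dsa, node->pstate->prefetch_iterator);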

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com

Attachment: 0001-tidbitmap-support-shared-v8.patch
Description: Binary data

Attachment: 0003-parallel-bitmap-heapscan-v8.patch
Description: Binary data
