On Wed, 1 Apr 2020 at 11:46, Amit Kapila <amit.kapil...@gmail.com> wrote:
>
> On Tue, Mar 31, 2020 at 7:32 PM Dilip Kumar <dilipbal...@gmail.com> wrote:
> >
> > While testing I found one issue.  Basically, during a parallel
> > vacuum, the reported shared_blks_hit + shared_blks_read was too
> > high.  After some investigation, I found that during the cleanup
> > phase nworkers is -1, so we don't try to launch workers, but
> > "lps->pcxt->nworkers_launched" still held the old launched-worker
> > count, and shared memory still held the old buffer-usage data,
> > which was never updated since no workers were launched.
> >
> > diff --git a/src/backend/access/heap/vacuumlazy.c
> > b/src/backend/access/heap/vacuumlazy.c
> > index b97b678..5dfaf4d 100644
> > --- a/src/backend/access/heap/vacuumlazy.c
> > +++ b/src/backend/access/heap/vacuumlazy.c
> > @@ -2150,7 +2150,8 @@ lazy_parallel_vacuum_indexes(Relation *Irel,
> > IndexBulkDeleteResult **stats,
> >          * Next, accumulate buffer usage.  (This must wait for the workers to
> >          * finish, or we might get incomplete data.)
> >          */
> > -       for (i = 0; i < lps->pcxt->nworkers_launched; i++)
> > +       nworkers = Min(nworkers, lps->pcxt->nworkers_launched);
> > +       for (i = 0; i < nworkers; i++)
> >                 InstrAccumParallelQuery(&lps->buffer_usage[i]);
> >
> > It worked after the above fix.
> >
>
> Good catch.  I think we should not even call
> WaitForParallelWorkersToFinish for such a case.  So, I guess the fix
> could be:
>
> if (nworkers > 0)
> {
>     WaitForParallelWorkersToFinish(lps->pcxt);
>     for (i = 0; i < lps->pcxt->nworkers_launched; i++)
>         InstrAccumParallelQuery(&lps->buffer_usage[i]);
> }
>

Agreed. I've attached the updated patch.

Thank you for testing, Dilip!

Regards,

-- 
Masahiko Sawada            http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Attachment: bufferusage_vacuum_v4.patch
Description: Binary data
