On Tue, 17 Dec 2019 at 18:07, Masahiko Sawada <masahiko.saw...@2ndquadrant.com> wrote:
>
> On Fri, 13 Dec 2019 at 15:50, Amit Kapila <amit.kapil...@gmail.com> wrote:
> >
> > On Fri, Dec 13, 2019 at 11:08 AM Masahiko Sawada
> > <masahiko.saw...@2ndquadrant.com> wrote:
> > >
> > > On Fri, 13 Dec 2019 at 14:19, Amit Kapila <amit.kapil...@gmail.com> wrote:
> > > >
> > > > > > >
> > > > > > > How about adding an additional argument to ReinitializeParallelDSM()
> > > > > > > that allows the number of workers to be reduced? That seems like it
> > > > > > > would be less confusing than what you have now, and would involve
> > > > > > > modifying code in a lot fewer places.
> > > > > > >
> > > > > >
> > > > > > Yeah, we can do that.  We can maintain some information in
> > > > > > LVParallelState which indicates whether we need to reinitialize the
> > > > > > DSM before launching workers.  Sawada-San, do you see any problem with
> > > > > > this idea?
> > > > >
> > > > > I think the number of workers could be increased in the cleanup
> > > > > phase. For example, if we have 1 brin index and 2 gin indexes then in
> > > > > the bulkdelete phase we need only 1 worker but in cleanup we need 2
> > > > > workers.
> > > > >
> > > >
> > > > I think it shouldn't be more than the number with which we have
> > > > created a parallel context, no?  If that is the case, then I think it
> > > > should be fine.
> > >
> > > Right. I thought that ReinitializeParallelDSM() with an additional
> > > argument would shrink the DSM, but I understand now that it doesn't
> > > actually shrink the DSM; it just sets a variable for the number of
> > > workers to launch. Is that right?
> > >
> >
> > Yeah, probably, we need to change the nworkers stored in the context,
> > and it should be no more than the value already stored there.
> >
> > > And we also would need to call
> > > ReinitializeParallelDSM() at the beginning of index vacuum or index
> > > cleanup, since at the end of an index vacuum we don't know whether we
> > > will do index vacuum or index cleanup next.
> > >
> >
> > Right.
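
Just to make sure I follow the idea discussed above, here is what I imagine
the changed signature could look like (rough pseudocode only; the
nworkers_to_launch field name is my assumption, not something taken from
the patch):

void
ReinitializeParallelDSM(ParallelContext *pcxt, int nworkers_to_launch)
{
    /*
     * The DSM segment keeps the size it was created with; we only
     * remember how many of the originally registered workers to launch
     * on the next pass.  The count can be reduced, but never raised
     * above the pcxt->nworkers fixed by CreateParallelContext().
     */
    Assert(nworkers_to_launch <= pcxt->nworkers);
    pcxt->nworkers_to_launch = nworkers_to_launch;

    /* ... rest of the existing reinitialization stays unchanged ... */
}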
>
> I've attached the latest version patch set. These patches require the
> gist vacuum patch[1]. The patches incorporate the review comments. In
> the current version, only indexes that support parallel vacuum and
> whose size is larger than min_parallel_index_scan_size can participate
> in parallel vacuum. It's still unclear to me whether using
> min_parallel_index_scan_size is the best approach, but I agreed to set
> a lower bound on relation size. I separated the patch for
> PARALLEL_VACUUM_DISABLE_LEADER_PARTICIPATION from the main patch and
> I'm working on that patch.
>
> Please review it.
>
> [1] https://www.postgresql.org/message-id/CAA4eK1J1RxmXFAHC34S4_BznT76cfbrvqORSk23iBgRAOj1azw%40mail.gmail.com

Thanks for the updated patches.  I verified all of my reported issues, and
all of them are fixed in the v36 patch set.
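
One note on the min_parallel_index_scan_size behaviour quoted above: as I
understand it, the participation test amounts to something like the below
(my sketch, not the patch's exact code; index_supports_parallel_vacuum is a
hypothetical helper standing in for whatever check the patch actually uses):

    for (i = 0; i < nindexes; i++)
    {
        Relation    indrel = Irel[i];

        /* skip indexes whose access method can't do parallel vacuum */
        if (!index_supports_parallel_vacuum(indrel))
            continue;

        /* min_parallel_index_scan_size is measured in 8kB blocks */
        if (RelationGetNumberOfBlocks(indrel) < min_parallel_index_scan_size)
            continue;   /* too small; the leader vacuums it alone */

        parallel_workers++;
    }

Please correct me if I've misread the patch.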

Below are some review comments:
1.
+   /* cap by *max_parallel_maintenace_workers* */
+   parallel_workers = Min(parallel_workers, max_parallel_maintenance_workers);

Here, the spelling of max_parallel_*maintenace*_workers in the comment is
wrong (correct: max_parallel_maintenance_workers).

2.
+ * size of stats for each index.  Also, this function   Since currently we don't support parallel vacuum
+ * for autovacuum we don't need to care about autovacuum_work_mem

Here, I think the first line should be reworded; "Also, this function" is an
unfinished fragment, so the sentence is not grammatically correct.
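
Maybe something like this would read better (just a suggestion; I'm assuming
the intent is to explain why autovacuum_work_mem can be ignored):

+ * size of stats for each index.  Since we currently don't support
+ * parallel vacuum for autovacuum, we don't need to care about
+ * autovacuum_work_mem.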

Thanks and Regards
Mahendra Thalor
EnterpriseDB: http://www.enterprisedb.com
