Re: releasing ParallelApplyTxnHash when pa_launch_parallel_worker returns NULL

2023-01-13 Thread Amit Kapila
On Thu, Jan 12, 2023 at 8:25 AM Ted Yu  wrote:

> On Wed, Jan 11, 2023 at 6:54 PM Amit Kapila  wrote:
>>
>> On Wed, Jan 11, 2023 at 9:31 AM Ted Yu  wrote:
>> >
>> > On Tue, Jan 10, 2023 at 7:55 PM houzj.f...@fujitsu.com  wrote:
>> >>
>> >> On Wednesday, January 11, 2023 10:21 AM Ted Yu  wrote:
>> >> > /* First time through, initialize parallel apply worker state
>> >> > hashtable. */
>> >> > if (!ParallelApplyTxnHash)
>> >> >
>> >> > I think it would be better if `ParallelApplyTxnHash` is created by the
>> >> > first successful parallel apply worker.
>> >>
>> >> Thanks for the suggestion, but I am not sure it's worth changing the
>> >> order here, because it will only optimize the case where the user enables
>> >> parallel apply but never gets an available worker, which should be rare.
>> >> In such a case, it'd be better to increase the number of workers or
>> >> disable parallel mode.
>> >>
>> >
>> >
>> > I think even though the chance is rare, we shouldn't leak resources.
>> >
>>
>> But that is true only if we are never able to start the worker. Anyway, I
>> think it is probably fine either way, but we can change it as per your
>> suggestion to make it more robust and for the sake of code clarity.
>> I'll push this tomorrow unless someone thinks otherwise.
>>

Pushed.

-- 
With Regards,
Amit Kapila.




Re: releasing ParallelApplyTxnHash when pa_launch_parallel_worker returns NULL

2023-01-11 Thread Ted Yu
On Wed, Jan 11, 2023 at 6:54 PM Amit Kapila  wrote:

> On Wed, Jan 11, 2023 at 9:31 AM Ted Yu  wrote:
> >
> > On Tue, Jan 10, 2023 at 7:55 PM houzj.f...@fujitsu.com <houzj.f...@fujitsu.com> wrote:
> >>
> >> On Wednesday, January 11, 2023 10:21 AM Ted Yu  wrote:
> >> > /* First time through, initialize parallel apply worker state
> >> > hashtable. */
> >> > if (!ParallelApplyTxnHash)
> >> >
> >> > I think it would be better if `ParallelApplyTxnHash` is created by the
> >> > first successful parallel apply worker.
> >>
> >> Thanks for the suggestion, but I am not sure it's worth changing the
> >> order here, because it will only optimize the case where the user enables
> >> parallel apply but never gets an available worker, which should be rare.
> >> In such a case, it'd be better to increase the number of workers or
> >> disable parallel mode.
> >>
> >
> >
> > I think even though the chance is rare, we shouldn't leak resources.
> >
>
> But that is true only if we are never able to start the worker. Anyway, I
> think it is probably fine either way, but we can change it as per your
> suggestion to make it more robust and for the sake of code clarity.
> I'll push this tomorrow unless someone thinks otherwise.
>
> --
> With Regards,
> Amit Kapila.
>

Thanks Amit for the confirmation.

Cheers


Re: releasing ParallelApplyTxnHash when pa_launch_parallel_worker returns NULL

2023-01-11 Thread Amit Kapila
On Wed, Jan 11, 2023 at 9:31 AM Ted Yu  wrote:
>
> On Tue, Jan 10, 2023 at 7:55 PM houzj.f...@fujitsu.com  wrote:
>>
>> On Wednesday, January 11, 2023 10:21 AM Ted Yu  wrote:
>> > /* First time through, initialize parallel apply worker state
>> > hashtable. */
>> > if (!ParallelApplyTxnHash)
>> >
>> > I think it would be better if `ParallelApplyTxnHash` is created by the
>> > first successful parallel apply worker.
>>
>> Thanks for the suggestion, but I am not sure it's worth changing the
>> order here, because it will only optimize the case where the user enables
>> parallel apply but never gets an available worker, which should be rare.
>> In such a case, it'd be better to increase the number of workers or
>> disable parallel mode.
>>
>
>
> I think even though the chance is rare, we shouldn't leak resources.
>

But that is true only if we are never able to start the worker. Anyway, I
think it is probably fine either way, but we can change it as per your
suggestion to make it more robust and for the sake of code clarity.
I'll push this tomorrow unless someone thinks otherwise.

-- 
With Regards,
Amit Kapila.




Re: releasing ParallelApplyTxnHash when pa_launch_parallel_worker returns NULL

2023-01-10 Thread Ted Yu
On Tue, Jan 10, 2023 at 7:55 PM houzj.f...@fujitsu.com <houzj.f...@fujitsu.com> wrote:

> On Wednesday, January 11, 2023 10:21 AM Ted Yu  wrote:
> > /* First time through, initialize parallel apply worker state
> > hashtable. */
> > if (!ParallelApplyTxnHash)
> >
> > I think it would be better if `ParallelApplyTxnHash` is created by the
> > first successful parallel apply worker.
>
> Thanks for the suggestion, but I am not sure it's worth changing the
> order here, because it will only optimize the case where the user enables
> parallel apply but never gets an available worker, which should be rare.
> In such a case, it'd be better to increase the number of workers or
> disable parallel mode.
>
> Best Regards,
> Hou zj
>

I think even though the chance is rare, we shouldn't leak resources.

The `ParallelApplyTxnHash` shouldn't be created if not a single parallel
apply worker has been launched.


RE: releasing ParallelApplyTxnHash when pa_launch_parallel_worker returns NULL

2023-01-10 Thread houzj.f...@fujitsu.com
On Wednesday, January 11, 2023 10:21 AM Ted Yu  wrote:
> /* First time through, initialize parallel apply worker state 
> hashtable. */
> if (!ParallelApplyTxnHash)
> 
> I think it would be better if `ParallelApplyTxnHash` is created by the first
> successful parallel apply worker.

Thanks for the suggestion, but I am not sure it's worth changing the order
here, because it will only optimize the case where the user enables parallel
apply but never gets an available worker, which should be rare. In such a
case, it'd be better to increase the number of workers or disable parallel
mode.

Best Regards,
Hou zj


Re: releasing ParallelApplyTxnHash when pa_launch_parallel_worker returns NULL

2023-01-10 Thread Ted Yu
On Tue, Jan 10, 2023 at 6:13 PM houzj.f...@fujitsu.com <houzj.f...@fujitsu.com> wrote:

> On Wednesday, January 11, 2023 1:25 AM Ted Yu  wrote:
>
> > I was reading src/backend/replication/logical/applyparallelworker.c .
> > In `pa_allocate_worker`, when pa_launch_parallel_worker returns NULL, I
> > think the `ParallelApplyTxnHash` should be released.
>
> Thanks for reporting.
>
> ParallelApplyTxnHash is used to cache the state of streaming transactions
> being applied. There could be multiple streaming transactions being applied
> in parallel, and their information was already saved in ParallelApplyTxnHash,
> so we should not release them just because we don't have a worker available
> to handle a new transaction here.
>
> Best Regards,
> Hou zj
>
Hi,

/* First time through, initialize parallel apply worker state
hashtable. */
if (!ParallelApplyTxnHash)

I think it would be better if `ParallelApplyTxnHash` is created by the
first successful parallel apply worker.

Please take a look at the new patch and see if it makes sense.

Cheers


create-parallel-apply-txn-hash-after-the-first-worker.patch
Description: Binary data


RE: releasing ParallelApplyTxnHash when pa_launch_parallel_worker returns NULL

2023-01-10 Thread houzj.f...@fujitsu.com
On Wednesday, January 11, 2023 1:25 AM Ted Yu  wrote:

> I was reading src/backend/replication/logical/applyparallelworker.c .
> In `pa_allocate_worker`, when pa_launch_parallel_worker returns NULL, I think 
> the `ParallelApplyTxnHash` should be released.

Thanks for reporting.

ParallelApplyTxnHash is used to cache the state of streaming transactions
being applied. There could be multiple streaming transactions being applied
in parallel, and their information was already saved in ParallelApplyTxnHash,
so we should not release them just because we don't have a worker available
to handle a new transaction here.

Best Regards,
Hou zj


Re: releasing ParallelApplyTxnHash when pa_launch_parallel_worker returns NULL

2023-01-10 Thread Ted Yu
On Tue, Jan 10, 2023 at 9:43 AM Ted Yu  wrote:

>
>
> On Tue, Jan 10, 2023 at 9:26 AM Ted Yu  wrote:
>
>>
>>
>> On Tue, Jan 10, 2023 at 9:25 AM Ted Yu  wrote:
>>
>>> Hi,
>>> I was reading src/backend/replication/logical/applyparallelworker.c .
>>> In `pa_allocate_worker`, when pa_launch_parallel_worker returns NULL, I
>>> think the `ParallelApplyTxnHash` should be released.
>>>
>>> Please see the patch.
>>>
>>> Thanks
>>>
>> Here is the patch :-)
>>
>
> In `pa_process_spooled_messages_if_required`, the `pa_unlock_stream` call
> immediately follows `pa_lock_stream`.
> I assume the following is the intended sequence of calls. If this is the
> case, I can add it to the patch.
>
> Cheers
>
> diff --git a/src/backend/replication/logical/applyparallelworker.c
> b/src/backend/replication/logical/applyparallelworker.c
> index 2e5914d5d9..9879b3fff2 100644
> --- a/src/backend/replication/logical/applyparallelworker.c
> +++ b/src/backend/replication/logical/applyparallelworker.c
> @@ -684,9 +684,9 @@ pa_process_spooled_messages_if_required(void)
>  if (fileset_state == FS_SERIALIZE_IN_PROGRESS)
>  {
>  pa_lock_stream(MyParallelShared->xid, AccessShareLock);
> -pa_unlock_stream(MyParallelShared->xid, AccessShareLock);
>
>  fileset_state = pa_get_fileset_state();
> +pa_unlock_stream(MyParallelShared->xid, AccessShareLock);
>  }
>
>  /*
>
Looking closer at the comment above this code and other parts of the file,
it seems the order is intentional.

Please disregard my email about `pa_process_spooled_messages_if_required`.


Re: releasing ParallelApplyTxnHash when pa_launch_parallel_worker returns NULL

2023-01-10 Thread Ted Yu
On Tue, Jan 10, 2023 at 9:26 AM Ted Yu  wrote:

>
>
> On Tue, Jan 10, 2023 at 9:25 AM Ted Yu  wrote:
>
>> Hi,
>> I was reading src/backend/replication/logical/applyparallelworker.c .
>> In `pa_allocate_worker`, when pa_launch_parallel_worker returns NULL, I
>> think the `ParallelApplyTxnHash` should be released.
>>
>> Please see the patch.
>>
>> Thanks
>>
> Here is the patch :-)
>

In `pa_process_spooled_messages_if_required`, the `pa_unlock_stream` call
immediately follows `pa_lock_stream`.
I assume the following is the intended sequence of calls. If this is the
case, I can add it to the patch.

Cheers

diff --git a/src/backend/replication/logical/applyparallelworker.c
b/src/backend/replication/logical/applyparallelworker.c
index 2e5914d5d9..9879b3fff2 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -684,9 +684,9 @@ pa_process_spooled_messages_if_required(void)
 if (fileset_state == FS_SERIALIZE_IN_PROGRESS)
 {
 pa_lock_stream(MyParallelShared->xid, AccessShareLock);
-pa_unlock_stream(MyParallelShared->xid, AccessShareLock);

 fileset_state = pa_get_fileset_state();
+pa_unlock_stream(MyParallelShared->xid, AccessShareLock);
 }

 /*


Re: releasing ParallelApplyTxnHash when pa_launch_parallel_worker returns NULL

2023-01-10 Thread Ted Yu
On Tue, Jan 10, 2023 at 9:25 AM Ted Yu  wrote:

> Hi,
> I was reading src/backend/replication/logical/applyparallelworker.c .
> In `pa_allocate_worker`, when pa_launch_parallel_worker returns NULL, I
> think the `ParallelApplyTxnHash` should be released.
>
> Please see the patch.
>
> Thanks
>
Here is the patch :-)


destroy-parallel-apply-txn-hash-when-worker-not-launched.patch
Description: Binary data