On Mon, Dec 18, 2023 at 10:00:29PM -0600, Nathan Bossart wrote:
> I found this code when searching for callers that use atomic exchanges as
> atomic writes with barriers (for a separate thread [0]). Can't we use
> pg_atomic_write_u64() here since the locking functions that follow should
> serve as
On Tue, Jul 25, 2023 at 04:43:16PM +0900, Michael Paquier wrote:
> 0001 has been now applied. I have done more tests while looking at
> this patch since yesterday and was surprised to see higher TPS numbers
> on HEAD with the same tests as previously, and the patch was still
> shining with more th
On Wed, Jul 26, 2023 at 1:27 AM Andres Freund wrote:
>
> > 0001 has been now applied. I have done more tests while looking at
> > this patch since yesterday and was surprised to see higher TPS numbers
> > on HEAD with the same tests as previously, and the patch was still
> > shining with more tha
On Tue, Jul 25, 2023 at 04:43:16PM +0900, Michael Paquier wrote:
> We really need to do something in terms of documentation with
> something like 0002, so I'll try to look at that next.
I have applied a slightly-tweaked version of 0002 as of 66d86d4 to
improve the documentation of this area a bit,
Hi,
On 2023-07-26 07:40:31 +0900, Michael Paquier wrote:
> On Tue, Jul 25, 2023 at 09:49:01AM -0700, Andres Freund wrote:
> > FWIW, I'm working on a patch that replaces WAL insert locks as a whole,
> > because they don't scale all that well.
>
> What were you looking at here? Just wondering.
Her
On Tue, Jul 25, 2023 at 09:49:01AM -0700, Andres Freund wrote:
> FWIW, I'm working on a patch that replaces WAL insert locks as a whole,
> because they don't scale all that well.
What were you looking at here? Just wondering.
--
Michael
On Tue, Jul 25, 2023 at 12:57:37PM -0700, Andres Freund wrote:
> I just rebased my aio tree over the commit and promptly, on the first run, saw
> a hang. I did some debugging on that. Unfortunately repeated runs haven't
> repeated that hang, despite quite a bit of trying.
>
> The symptom I was see
Hi,
On 2023-07-25 16:43:16 +0900, Michael Paquier wrote:
> On Sat, Jul 22, 2023 at 01:08:49PM +0530, Bharath Rupireddy wrote:
> > Yes, it looks safe to me too.
>
> 0001 has been now applied. I have done more tests while looking at
> this patch since yesterday and was surprised to see higher TPS
On Sat, Jul 22, 2023 at 01:08:49PM +0530, Bharath Rupireddy wrote:
> Yes, it looks safe to me too.
0001 has now been applied. I have done more tests while looking at
this patch since yesterday and was surprised to see higher TPS numbers
on HEAD with the same tests as previously, and the patch was
On Fri, Jul 21, 2023 at 11:29 AM Michael Paquier wrote:
>
> + /* Reading atomically avoids getting a torn value */
> + value = pg_atomic_read_u64(valptr);
>
> Should this specify that this is specifically important for platforms
> where reading a uint64 could lead to a torn value read,
On Thu, Jul 20, 2023 at 02:38:29PM +0530, Bharath Rupireddy wrote:
> On Fri, Jul 14, 2023 at 4:17 AM Andres Freund wrote:
>> I think this commit does too many things at once.
>
> I've split the patch into three - 1) Make insertingAt 64-bit atomic.
> 2) Have better commenting on why there's no mem
On Fri, Jul 14, 2023 at 4:17 AM Andres Freund wrote:
>
> Hi,
>
> On 2023-07-13 14:04:31 -0700, Andres Freund wrote:
> > From b74b6e953cb5a7e7ea1a89719893f6ce9e231bba Mon Sep 17 00:00:00 2001
> > From: Bharath Rupireddy
> > Date: Fri, 19 May 2023 15:00:21 +
> > Subject: [PATCH v8] Optimize WAL
Hi,
On 2023-07-13 14:04:31 -0700, Andres Freund wrote:
> From b74b6e953cb5a7e7ea1a89719893f6ce9e231bba Mon Sep 17 00:00:00 2001
> From: Bharath Rupireddy
> Date: Fri, 19 May 2023 15:00:21 +
> Subject: [PATCH v8] Optimize WAL insertion lock acquisition and release
>
> This commit optimizes WAL
On 2023-07-11 09:20:45 +0900, Michael Paquier wrote:
> On Mon, Jun 05, 2023 at 08:00:00AM +0530, Bharath Rupireddy wrote:
> > On Wed, May 31, 2023 at 5:05 PM Michael Paquier wrote:
> >> @Andres: Were there any extra tests you wanted to be run for more
> >> input?
> >
> > @Andres Freund please let
On Mon, Jun 05, 2023 at 08:00:00AM +0530, Bharath Rupireddy wrote:
> On Wed, May 31, 2023 at 5:05 PM Michael Paquier wrote:
>> @Andres: Were there any extra tests you wanted to be run for more
>> input?
>
> @Andres Freund please let us know your thoughts.
Err, ping. It seems like this thread is
On Wed, May 31, 2023 at 5:05 PM Michael Paquier wrote:
>
> On Mon, May 22, 2023 at 09:26:25AM +0900, Michael Paquier wrote:
> > Simpler and consistent, nice. I don't have much more to add, so I
> > have switched the patch as RfC.
>
> While at PGCon, Andres asked me how many sockets are in the
On Mon, May 22, 2023 at 09:26:25AM +0900, Michael Paquier wrote:
> Simpler and consistent, nice. I don't have much more to add, so I
> have switched the patch as RfC.
While at PGCon, Andres asked me how many sockets are in the
environment I used for the tests, and lscpu tells me the following
On Fri, May 19, 2023 at 08:34:16PM +0530, Bharath Rupireddy wrote:
> I get it. How about the following similar to what
> ProcessProcSignalBarrier() has?
>
> + * Note that pg_atomic_exchange_u64 is a full barrier, so we're
> guaranteed
> + * that the variable is updated before waking up wa
On Fri, May 19, 2023 at 12:24 PM Michael Paquier wrote:
>
> On Thu, May 18, 2023 at 11:18:25AM +0530, Bharath Rupireddy wrote:
> > I think what I have so far seems more verbose explaining what a
> > barrier does and all that. I honestly think we don't need to be that
> > verbose, thanks to README.
On Thu, May 18, 2023 at 11:18:25AM +0530, Bharath Rupireddy wrote:
> I think what I have so far seems more verbose explaining what a
> barrier does and all that. I honestly think we don't need to be that
> verbose, thanks to README.barrier.
Agreed. This file is a mine of information.
> I simplif
On Wed, May 10, 2023 at 5:34 PM Michael Paquier wrote:
>
> +* NB: LWLockConflictsWithVar (which is called from
> +* LWLockWaitForVar) relies on the spinlock used above in this
> +* function and doesn't use a memory barrier.
>
> This patch adds the following comment in WaitXLogInsertion
On Fri, May 12, 2023 at 07:35:20AM +0530, Bharath Rupireddy wrote:
> --enable-atomics=no, -T60:
> --enable-spinlocks=no, -T60:
> --enable-atomics=no --enable-spinlocks=no, -T60:
Thanks for these extra tests. I have not done these specific cases, but
the profiles look similar to what I've seen myself
On Thu, May 11, 2023 at 11:56 AM Michael Paquier wrote:
>
> On Wed, May 10, 2023 at 09:04:47PM +0900, Michael Paquier wrote:
> > It took me some time, but I have been able to deploy a big box to see
> > the effect of this patch at a rather large scale (64 vCPU, 512G of
> > memory), with the follow
On Wed, May 10, 2023 at 09:04:47PM +0900, Michael Paquier wrote:
> It took me some time, but I have been able to deploy a big box to see
> the effect of this patch at a rather large scale (64 vCPU, 512G of
> memory), with the following test characteristics for HEAD and v6:
> - TPS comparison with p
On Wed, May 10, 2023 at 10:40:20PM +0530, Bharath Rupireddy wrote:
> test-case 2: -T900, WAL ~256 bytes - ran for about 3.5 hours and the
> more than 3X improvement in TPS is seen - 3.11X @ 512 3.79 @ 768, 3.47
> @ 1024, 2.27 @ 2048, 2.77 @ 4096
>
> [...]
>
> test-case 2: -t100, WAL ~256 bytes
On Tue, May 9, 2023 at 9:24 AM Bharath Rupireddy wrote:
>
> On Tue, May 9, 2023 at 9:02 AM Michael Paquier wrote:
> >
> > On Mon, May 08, 2023 at 04:04:10PM -0700, Nathan Bossart wrote:
> > > On Mon, May 08, 2023 at 05:57:09PM +0530, Bharath Rupireddy wrote:
> > >> test-case 1: -T5, WAL ~16 bytes
On Mon, May 08, 2023 at 05:57:09PM +0530, Bharath Rupireddy wrote:
> Note that I've used pg_logical_emit_message() for ease of
> understanding about the txns generating various amounts of WAL, but
> the pattern is the same if txns are generating various amounts of WAL
> say with inserts.
Sounds good
On Tue, May 09, 2023 at 02:10:20PM +0900, Michael Paquier wrote:
> Should we split this patch into two parts, as they aim at tackling two
> different cases then? One for LWLockConflictsWithVar() and
> LWLockReleaseClearVar() which are the straight-forward pieces
> (using one pg_atomic_write_u64()
On Mon, May 08, 2023 at 08:18:04PM +0530, Bharath Rupireddy wrote:
> On Mon, May 8, 2023 at 5:57 PM Bharath Rupireddy wrote:
>> On Mon, Apr 10, 2023 at 9:38 AM Michael Paquier wrote:
>>> The sensitive change is in LWLockUpdateVar(). I am not completely
>>> sure to understand this removal, thou
On Tue, May 09, 2023 at 09:34:56AM +0530, Bharath Rupireddy wrote:
> Below is the configuration I've been using. I have been keeping the
> checkpoints away so far to get expected numbers. Probably, something
> that I should modify for this long run? Change checkpoint_timeout to
> 15 min or so?
>
>
On Tue, May 9, 2023 at 9:27 AM Michael Paquier wrote:
>
> On Tue, May 09, 2023 at 09:24:14AM +0530, Bharath Rupireddy wrote:
> > I'll pick a test case that generates a reasonable amount of WAL 256
> > bytes. What do you think of the following?
> >
> > test-case 2: -T900, WAL ~256 bytes (for c in 1
On Tue, May 09, 2023 at 09:24:14AM +0530, Bharath Rupireddy wrote:
> I'll pick a test case that generates a reasonable amount of WAL 256
> bytes. What do you think of the following?
>
> test-case 2: -T900, WAL ~256 bytes (for c in 1 2 4 8 16 32 64 128 256
> 512 768 1024 2048 4096 - takes 3.5hrs)
>
On Tue, May 9, 2023 at 9:02 AM Michael Paquier wrote:
>
> On Mon, May 08, 2023 at 04:04:10PM -0700, Nathan Bossart wrote:
> > On Mon, May 08, 2023 at 05:57:09PM +0530, Bharath Rupireddy wrote:
> >> test-case 1: -T5, WAL ~16 bytes
> >> test-case 1: -t1000, WAL ~16 bytes
> >
> > I wonder if it's wor
On Mon, May 08, 2023 at 04:04:10PM -0700, Nathan Bossart wrote:
> On Mon, May 08, 2023 at 05:57:09PM +0530, Bharath Rupireddy wrote:
>> test-case 1: -T5, WAL ~16 bytes
>> test-case 1: -t1000, WAL ~16 bytes
>
> I wonder if it's worth doing a couple of long-running tests, too.
Yes, 5s or 1000 trans
On Mon, May 08, 2023 at 05:57:09PM +0530, Bharath Rupireddy wrote:
> I ran performance tests on the patch with different use-cases. Clearly
> the patch reduces burden on LWLock's waitlist lock (evident from perf
> reports [1]). However, to see visible impact in the output, the txns
> must be genera
On Mon, May 8, 2023 at 5:57 PM Bharath Rupireddy wrote:
>
> On Mon, Apr 10, 2023 at 9:38 AM Michael Paquier wrote:
>
> > -LWLockUpdateVar(LWLock *lock, uint64 *valptr, uint64 val)
> > +LWLockUpdateVar(LWLock *lock, pg_atomic_uint64 *valptr, uint64 val)
> > [...]
> > Assert(pg_atomic_read_u32(
On Mon, Feb 20, 2023 at 09:49:48PM -0800, Nathan Bossart wrote:
> I'm marking this as ready-for-committer. I think a couple of the comments
> could use some small adjustments, but that probably doesn't need to hold up
> this patch.
Apologies. I was planning to have a thorough look at this patch
Subject: Re: WAL Insertion Lock Improvements
On Thu, Feb 09, 2023 at 11:51:28AM +0530, Bharath Rupireddy wrote:
> On Thu, Feb 9, 2023 at 3:36 AM Nathan Bossart wrote:
>> Overall, I think this patch is in reasonable shape.
>
> Thanks for reviewi
On Thu, Feb 09, 2023 at 11:51:28AM +0530, Bharath Rupireddy wrote:
> On Thu, Feb 9, 2023 at 3:36 AM Nathan Bossart wrote:
>> Overall, I think this patch is in reasonable shape.
>
> Thanks for reviewing. Please see the attached v5 patch.
I'm marking this as ready-for-committer. I think a coup
On Thu, Feb 9, 2023 at 3:36 AM Nathan Bossart wrote:
>
> + pg_atomic_exchange_u64(valptr, val);
>
> nitpick: I'd add a (void) at the beginning of these calls to
> pg_atomic_exchange_u64() so that it's clear that we are discarding the
> return value.
I did that in the attached v5 patch altho
+ pg_atomic_exchange_u64(valptr, val);
nitpick: I'd add a (void) at the beginning of these calls to
pg_atomic_exchange_u64() so that it's clear that we are discarding the
return value.
+ /*
+* Update the lock variable atomically first without having to acquire
wait
+*
On Tue, Jan 24, 2023 at 7:00 PM Bharath Rupireddy wrote:
>
> I'm attaching the v3 patch with the above review comments addressed.
> Hopefully, no memory ordering issues now. FWIW, I've added it to CF
> https://commitfest.postgresql.org/42/4141/.
>
> Test results with the v3 patch and insert worklo
On Tue, Dec 6, 2022 at 12:00 AM Andres Freund wrote:
>
> Hi
Thanks for reviewing.
> FWIW, I don't see an advantage in 0003. If it allows us to make something else
> simpler / faster, cool, but on its own it doesn't seem worthwhile.
I've discarded this change.
> On 2022-12-02 16:31:58 -0800, Na
Hi,
On 2022-12-08 12:29:54 +0530, Bharath Rupireddy wrote:
> On Tue, Dec 6, 2022 at 12:00 AM Andres Freund wrote:
> > I think it'd be safe to optimize LWLockConflictsWithVar(), due to some
> > pre-existing, quite crufty, code. LWLockConflictsWithVar() says:
> >
> > * Test first to see if
On Tue, Dec 6, 2022 at 12:00 AM Andres Freund wrote:
>
> FWIW, I don't see an advantage in 0003. If it allows us to make something else
> simpler / faster, cool, but on its own it doesn't seem worthwhile.
Thanks. I will discard it.
> I think it'd be safe to optimize LWLockConflictsWithVar(), due
Hi,
FWIW, I don't see an advantage in 0003. If it allows us to make something else
simpler / faster, cool, but on its own it doesn't seem worthwhile.
On 2022-12-02 16:31:58 -0800, Nathan Bossart wrote:
> On Fri, Dec 02, 2022 at 04:32:38PM +0530, Bharath Rupireddy wrote:
> > On Fri, Dec 2, 2022
On Fri, Dec 02, 2022 at 04:32:38PM +0530, Bharath Rupireddy wrote:
> On Fri, Dec 2, 2022 at 6:10 AM Andres Freund wrote:
>> I'm not sure this is quite right - don't we need a memory barrier. But I
>> don't
>> see a reason to not just leave this code as-is. I think this should be
>> optimized enti
3111157
> > clients    HEAD      patched
> > 32         22020     22074
> > 64         41742     42213
> > 128        71300     76638
> > 256        103652    118944
> > 512        111250    161582
> > 768        99544     161987
> > 1024       96743     164161
> > 2048       72711     156686
> > 4096541581