Re: [HACKERS] RFC: replace pg_stat_activity.waiting with something more descriptive

2016-01-26 Thread and...@anarazel.de
On 2016-01-26 13:22:09 +0530, Amit Kapila wrote:
> @@ -633,9 +633,11 @@ postgres   27093  0.0  0.0  30096  2752 ?Ss   
> 11:34   0:00 postgres: ser
>   Time when the state was last changed
>  
>  
> - waiting
> - boolean
> - True if this backend is currently waiting on a lock
> + wait_event
> + text
> + Wait event name if backend is currently waiting, otherwise
> +  process not waiting
> + 
>  
>  

I still think this is a considerable regression in pg_stat_activity
usability. There are lots of people out there with queries that
automatically monitor pg_stat_activity.waiting and, if a backend is
waiting, automatically go to pg_locks to understand what's going on. With
the above definition that got much harder: not only do I have to write
WHERE wait_event <> 'process not waiting', I then also have to parse the
wait event name to know whether the process is waiting on a heavyweight
lock or something else!

I do think there's a considerable benefit in improving the
instrumentation here, but this strikes me as making life more complex for
more users than it makes it easier. At the very least this should be
split into two fields (type & what we're actually waiting on). I also
strongly suspect we shouldn't use in-band signaling ("process not
waiting"), but rather make the field NULL if we're not waiting on
anything.
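
To make the suggested split concrete, here is a rough sketch (all names
invented for illustration, not the patch under review) of a backend
publishing a wait class plus a wait event separately, with "not waiting"
represented by a zero class that the view would surface as NULL instead of
an in-band sentinel string:

#include <stdint.h>

typedef enum WaitClass
{
    WAIT_CLASS_NONE = 0,            /* not waiting: shown as NULL in the view */
    WAIT_CLASS_HEAVYWEIGHT_LOCK,    /* the pg_locks case monitoring cares about */
    WAIT_CLASS_LWLOCK,
    WAIT_CLASS_BUFFER_PIN
} WaitClass;

typedef struct BackendWaitInfo
{
    WaitClass   wclass;             /* -> hypothetical wait_event_type column */
    uint16_t    event;              /* -> wait_event column, via a name lookup */
} BackendWaitInfo;

/* A monitoring query would only need the class to filter, no string parsing. */
static inline int
backend_is_waiting_on_lock(const BackendWaitInfo *w)
{
    return w->wclass == WAIT_CLASS_HEAVYWEIGHT_LOCK;
}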

Greetings,

Andres Freund




Re: [HACKERS] [PATCH] Refactoring of LWLock tranches

2016-01-05 Thread and...@anarazel.de
On 2016-01-05 10:48:43 -0500, Bruce Momjian wrote:
> On Tue, Jan  5, 2016 at 04:42:24PM +0100, Andres Freund wrote:
> > On 2016-01-05 10:40:13 -0500, Bruce Momjian wrote:
> > > On Tue, Jan  5, 2016 at 04:31:15PM +0100, Andres Freund wrote:
> > > > On 2016-01-05 10:28:25 -0500, Bruce Momjian wrote:
> > > > Yes? But it's ok sizewise on the common platforms?
> > > 
> > > What is the uncommon part?  I guess I missed that.
> > 
> > http://archives.postgresql.org/message-id/20151212181702.GH17938%40alap3.anarazel.de
> 
> Yes, I saw that, and the URL in the email, but what is the uncommon
> case?

Are you asking on which platforms s_lock is larger than a char? If so, grep
s_lock.h for the typedefs. If not, I'm not following what you're asking for.
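
For reference, a paraphrased (not verbatim) sample of what that grep turns
up; on the common platforms slock_t is a byte or an int, while e.g. PA-RISC
uses a 16-byte struct, which is the uncommon case that can push a BufferDesc
past 64 bytes:

/* paraphrased from s_lock.h, only to illustrate the size differences */
#if defined(__i386__) || defined(__x86_64__)
typedef unsigned char slock_t;              /* 1 byte on the common platforms */
#elif defined(__ppc__) || defined(__powerpc__)
typedef unsigned int slock_t;               /* 4 bytes */
#elif defined(__hppa) || defined(__hppa__)
typedef struct { int sema[4]; } slock_t;    /* 16 bytes: the uncommon case */
#endif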




Re: [HACKERS] [PATCH] Refactoring of LWLock tranches

2016-01-05 Thread and...@anarazel.de
On 2016-01-05 10:40:13 -0500, Bruce Momjian wrote:
> On Tue, Jan  5, 2016 at 04:31:15PM +0100, Andres Freund wrote:
> > On 2016-01-05 10:28:25 -0500, Bruce Momjian wrote:
> > Yes? But it's ok sizewise on the common platforms?
> 
> What is the uncommon part?  I guess I missed that.

http://archives.postgresql.org/message-id/20151212181702.GH17938%40alap3.anarazel.de




Re: [HACKERS] [PATCH] Refactoring of LWLock tranches

2016-01-05 Thread and...@anarazel.de
On 2016-01-05 10:28:25 -0500, Bruce Momjian wrote:
> On Sun, Dec 13, 2015 at 12:35:34PM +0100, Andres Freund wrote:
> > > > One thing to call out is that an "oversized" s_lock can now make
> > > > BufferDesc exceed 64 bytes, right now that's just the case when it's
> > > > larger than 4 bytes.  I'm not sure if that's cause for real concern,
> > > > given that it's not very concurrent or ancient platforms where that's
> > > > the case.
> > > > http://archives.postgresql.org/message-id/20150915020625.GI9666%40alap3.anarazel.de
> > > > would alleviate that concern again, as it collapses flags, usage_count,
> > > > buf_hdr_lock and refcount into one 32 bit int...
> > > 
> > > I don't think that would be worth worrying about even if we didn't
> > > have a plan in mind that would make it go away again, and even less so
> > > given that we do have such a plan.
> > 
> > Ok cool. I'm not particularly concerned either, just didn't want to slip
> > that in without having it called out.
> 
> Uh, didn't you and I work in 9.5 to make sure the BufferDesc was 64-byte
> aligned to avoid double-CPU cache invalidation that was causing
> performance problems on a server you were testing?

Yes? But it's ok sizewise on the common platforms?

Andres




Re: [HACKERS] [PATCH] Refactoring of LWLock tranches

2015-12-13 Thread and...@anarazel.de
On 2015-12-12 21:15:52 -0500, Robert Haas wrote:
> On Sat, Dec 12, 2015 at 1:17 PM, and...@anarazel.de wrote:
> > Here's two patches doing that. The first is an adaption of your
> > constants patch, using an enum and also converting xlog.c's locks. The
> > second is the separation into distinct tranches.
> 
> Personally, I prefer the #define approach to the enum, but I can live
> with doing it this way.

I think not needing to adjust the 'last defined' var is worth it...

> Other than that, I think these patches look
> good, although if it's OK with you I would like to make a pass over
> the comments and the commit messages which seem to me that they could
> benefit from a bit of editing (but not much substantive change).

Sounds good to me. You'll then commit that?


> > One thing to call out is that an "oversized" s_lock can now make
> > BufferDesc exceed 64 bytes, right now that's just the case when it's
> > larger than 4 bytes.  I'm not sure if that's cause for real concern,
> > given that it's not very concurrent or ancient platforms where that's
> > the case.
> > http://archives.postgresql.org/message-id/20150915020625.GI9666%40alap3.anarazel.de
> > would alleviate that concern again, as it collapses flags, usage_count,
> > buf_hdr_lock and refcount into one 32 bit int...
> 
> I don't think that would be worth worrying about even if we didn't
> have a plan in mind that would make it go away again, and even less so
> given that we do have such a plan.

Ok cool. I'm not particularly concerned either, just didn't want to slip
that in without having it called out.


Andres




Re: [HACKERS] [PATCH] Refactoring of LWLock tranches

2015-12-12 Thread and...@anarazel.de
On 2015-11-15 16:24:24 -0500, Robert Haas wrote:
> I think what we should do about the buffer locks is polish up this
> patch and get it applied:
> 
> http://www.postgresql.org/message-id/20150907175909.gd5...@alap3.anarazel.de
> 
> I think it needs to be adapted to use predefined constants for the
> tranche IDs instead of hardcoded values, maybe based on the attached
> tranche-constants.patch.

Here's two patches doing that. The first is an adaption of your
constants patch, using an enum and also converting xlog.c's locks. The
second is the separation into distinct tranches.

One thing to call out is that an "oversized" s_lock can now make
BufferDesc exceed 64 bytes, right now that's just the case when it's
larger than 4 bytes.  I'm not sure if that's cause for real concern,
given that it's not very concurrent or ancient platforms where that's
the case.
http://archives.postgresql.org/message-id/20150915020625.GI9666%40alap3.anarazel.de
would alleviate that concern again, as it collapses flags, usage_count,
buf_hdr_lock and refcount into one 32 bit int...
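
For context, the idea referenced there, in rough sketch form (illustrative
bit layout, not the linked patch itself): flags, usage_count and refcount
live in a single 32-bit value manipulated with atomic operations, and one of
the flag bits takes over the role of buf_hdr_lock, so the header size no
longer depends on slock_t at all.

#include <stdint.h>

/* illustrative layout: bits 0-17 refcount, 18-21 usage count, 22-31 flags */
#define BUF_REFCOUNT_MASK    ((1U << 18) - 1)
#define BUF_USAGECOUNT_MASK  (0xFU << 18)
#define BM_LOCKED            (1U << 22)     /* replaces the separate spinlock */
#define BM_DIRTY             (1U << 23)

static inline uint32_t
buf_state_get_refcount(uint32_t state)
{
    return state & BUF_REFCOUNT_MASK;
}

static inline uint32_t
buf_state_get_usagecount(uint32_t state)
{
    return (state & BUF_USAGECOUNT_MASK) >> 18;
}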

Greetings,

Andres Freund
>From 5f10d977f285109df16bfef6b79456564251aa54 Mon Sep 17 00:00:00 2001
From: Andres Freund 
Date: Sat, 12 Dec 2015 18:41:59 +0100
Subject: [PATCH 1/2] Define enum of builtin LWLock tranches.

It's a bit cumbersome having to allocate tranche IDs with
LWLockNewTrancheId() because the returned value needs to be shared
between backends, which all need to call LWLockRegisterTranche(),
somehow.  For builtin tranches we can easily do better, and simply
pre-define tranche IDs for those.

This is motivated by an upcoming patch adding further builtin tranches.

Author: Robert Haas and Andres Freund
Discussion:
CA+TgmoYciHS4FuU2rYNt8bX0+7ZcNRBGeO-L20apcXTeo7=4...@mail.gmail.com
---
 src/backend/access/transam/xlog.c | 10 +++---
 src/backend/storage/lmgr/lwlock.c |  7 ---
 src/include/storage/lwlock.h  | 11 +++
 3 files changed, 18 insertions(+), 10 deletions(-)

diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index 71fc8ff..147fd53 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -512,7 +512,6 @@ typedef struct XLogCtlInsert
 	 */
 	WALInsertLockPadded *WALInsertLocks;
 	LWLockTranche WALInsertLockTranche;
-	int			WALInsertLockTrancheId;
 } XLogCtlInsert;
 
 /*
@@ -4653,7 +4652,7 @@ XLOGShmemInit(void)
 
 		/* Initialize local copy of WALInsertLocks and register the tranche */
 		WALInsertLocks = XLogCtl->Insert.WALInsertLocks;
-		LWLockRegisterTranche(XLogCtl->Insert.WALInsertLockTrancheId,
+		LWLockRegisterTranche(LWTRANCHE_WAL_INSERT,
 			  &XLogCtl->Insert.WALInsertLockTranche);
 		return;
 	}
@@ -4677,17 +4676,14 @@ XLOGShmemInit(void)
 		(WALInsertLockPadded *) allocptr;
 	allocptr += sizeof(WALInsertLockPadded) * NUM_XLOGINSERT_LOCKS;
 
-	XLogCtl->Insert.WALInsertLockTrancheId = LWLockNewTrancheId();
-
 	XLogCtl->Insert.WALInsertLockTranche.name = "WALInsertLocks";
 	XLogCtl->Insert.WALInsertLockTranche.array_base = WALInsertLocks;
 	XLogCtl->Insert.WALInsertLockTranche.array_stride = sizeof(WALInsertLockPadded);
 
-	LWLockRegisterTranche(XLogCtl->Insert.WALInsertLockTrancheId, &XLogCtl->Insert.WALInsertLockTranche);
+	LWLockRegisterTranche(LWTRANCHE_WAL_INSERT, &XLogCtl->Insert.WALInsertLockTranche);
 	for (i = 0; i < NUM_XLOGINSERT_LOCKS; i++)
 	{
-		LWLockInitialize(&WALInsertLocks[i].l.lock,
-		 XLogCtl->Insert.WALInsertLockTrancheId);
+		LWLockInitialize(&WALInsertLocks[i].l.lock, LWTRANCHE_WAL_INSERT);
 		WALInsertLocks[i].l.insertingAt = InvalidXLogRecPtr;
 	}
 
diff --git a/src/backend/storage/lmgr/lwlock.c b/src/backend/storage/lmgr/lwlock.c
index b13ebc6..84691df 100644
--- a/src/backend/storage/lmgr/lwlock.c
+++ b/src/backend/storage/lmgr/lwlock.c
@@ -445,7 +445,7 @@ CreateLWLocks(void)
 
 		/* Initialize all LWLocks in main array */
 		for (id = 0, lock = MainLWLockArray; id < numLocks; id++, lock++)
-			LWLockInitialize(&lock->lock, 0);
+			LWLockInitialize(&lock->lock, LWTRANCHE_MAIN);
 
 		/*
 		 * Initialize the dynamic-allocation counters, which are stored just
@@ -457,7 +457,7 @@ CreateLWLocks(void)
 		LWLockCounter = (int *) ((char *) MainLWLockArray - 3 * sizeof(int));
 		LWLockCounter[0] = NUM_FIXED_LWLOCKS;
 		LWLockCounter[1] = numLocks;
-		LWLockCounter[2] = 1;	/* 0 is the main array */
+		LWLockCounter[2] = LWTRANCHE_FIRST_USER_DEFINED;
 	}
 
 	if (LWLockTrancheArray == NULL)
@@ -466,12 +466,13 @@ CreateLWLocks(void)
 		LWLockTrancheArray = (LWLockTranche **)
 			MemoryContextAlloc(TopMemoryContext,
 		  LWLockTranchesAllocated * sizeof(LWLockTranche *));
+		Assert(LWLockTranchesAllocated >= LWTRANCHE_FIRST_USER_DEFINED);
 	}
 
 	MainLWLockTranche.name = "main";
 	MainLWLockTranche.array_base = MainLWLockArray;
 	MainLWLockTranche.array_stride = sizeof(LWLockPadded);
-	LWLockRegisterTranche(0, &MainLWLockTranche);
+	LWLockRegisterTranche(LWTRANCHE_MAIN, &MainLWLockTranche);
 }
 
 /*
diff --git a/s

Re: [HACKERS] [PATCH] Refactoring of LWLock tranches

2015-11-17 Thread and...@anarazel.de
On 2015-11-17 14:14:50 +0300, Ildus Kurbangaliev wrote:
> 1) We can avoid constants, and use a standard steps for tranches
> creation.

The constants are actually a bit useful, to easily determine
builtin/non-builtin stuff.
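
For instance, assuming a sentinel like LWTRANCHE_FIRST_USER_DEFINED marking
the end of the builtin range (as in the other subthread), a check like this
becomes trivial (backend-style sketch only):

static inline bool
LWLockTrancheIsBuiltin(int tranche_id)
{
    return tranche_id < LWTRANCHE_FIRST_USER_DEFINED;
}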

> 2) We have only one global variable (BufferCtl)

Don't see the advantage. This adds another, albeit predictable,
indirection in frequent callsites. There's no architectural advantage in
avoiding these.

> 3) Tranches moved to shared memory, so we won't need to do
> an additional work here.

I can see some benefit, but it also doesn't seem like a huge benefit.


> 4) Also we can kept buffer locks from MainLWLockArray there (that was done
> in attached patch).

That's the 'blockLocks' thing? That's a pretty bad name. These are
buffer mapping locks. And this seems like a separate patch, separately
debated.

> + if (!foundCtl)
>   {
> - int i;
> + /* Align descriptors to a cacheline boundary. */
> + ctl->base.bufferDescriptors = (void *) 
> CACHELINEALIGN(ShmemAlloc(
> + NBuffers * sizeof(BufferDescPadded) + 
> PG_CACHE_LINE_SIZE));
> +
> + ctl->base.bufferBlocks = (char *) ShmemAlloc(NBuffers * (Size) 
> BLCKSZ);

I'm going to entirely veto this.

> + strncpy(ctl->IOLWLockTrancheName, "buffer_io",
> + BUFMGR_MAX_NAME_LENGTH);
> + strncpy(ctl->contentLWLockTrancheName, "buffer_content",
> + BUFMGR_MAX_NAME_LENGTH);
> + strncpy(ctl->blockLWLockTrancheName, "buffer_blocks",
> + BUFMGR_MAX_NAME_LENGTH);
> +
> + ctl->IOLocks = (LWLockMinimallyPadded *) ShmemAlloc(
> + sizeof(LWLockMinimallyPadded) * NBuffers);

This should be cacheline aligned.

> - buf->io_in_progress_lock = LWLockAssign();
> - buf->content_lock = LWLockAssign();
> + LWLockInitialize(BufferDescriptorGetContentLock(buf),
> + ctl->contentLWLockTrancheId);
> + LWLockInitialize(&ctl->IOLocks[i].lock,
> + ctl->IOLWLockTrancheId);

Seems weird to use the macro accessing content locks, but not IO locks.

>  /* Note: these two macros only work on shared buffers, not local ones! */
> -#define BufHdrGetBlock(bufHdr)   ((Block) (BufferBlocks + ((Size) 
> (bufHdr)->buf_id) * BLCKSZ))
> +#define BufHdrGetBlock(bufHdr)   ((Block) (BufferCtl->bufferBlocks + 
> ((Size) (bufHdr)->buf_id) * BLCKSZ))

That's the additional indirection I'm talking about.

> @@ -353,9 +353,6 @@ NumLWLocks(void)
>   /* Predefined LWLocks */
>   numLocks = NUM_FIXED_LWLOCKS;
>  
> - /* bufmgr.c needs two for each shared buffer */
> - numLocks += 2 * NBuffers;
> -
>   /* proc.c needs one for each backend or auxiliary process */
>   numLocks += MaxBackends + NUM_AUXILIARY_PROCS;

Didn't you also move the buffer mapping locks... ?

> diff --git a/src/include/storage/buf_internals.h 
> b/src/include/storage/buf_internals.h
> index 521ee1c..e1795dc 100644
> --- a/src/include/storage/buf_internals.h
> +++ b/src/include/storage/buf_internals.h
> @@ -95,6 +95,7 @@ typedef struct buftag
>   (a).forkNum == (b).forkNum \
>  )
>  
> +
>  /*

unrelated change.

>   * The shared buffer mapping table is partitioned to reduce contention.
>   * To determine which partition lock a given tag requires, compute the tag's
> @@ -104,10 +105,9 @@ typedef struct buftag
>  #define BufTableHashPartition(hashcode) \
>   ((hashcode) % NUM_BUFFER_PARTITIONS)
>  #define BufMappingPartitionLock(hashcode) \
> - (&MainLWLockArray[BUFFER_MAPPING_LWLOCK_OFFSET + \
> - BufTableHashPartition(hashcode)].lock)
> + (&((BufferCtlData 
> *)BufferCtl)->blockLocks[BufTableHashPartition(hashcode)].lock)
>  #define BufMappingPartitionLockByIndex(i) \
> - (&MainLWLockArray[BUFFER_MAPPING_LWLOCK_OFFSET + (i)].lock)
> + (&((BufferCtlData *)BufferCtl)->blockLocks[i].lock)

Again, unnecessary indirections.

> +/* Maximum length of a bufmgr lock name */
> +#define BUFMGR_MAX_NAME_LENGTH   32

If we were to do this I'd just use NAMEDATALEN.

> +/*
> + * Base control struct for the buffer manager data
> + * Located in shared memory.
> + *
> + * BufferCtl variable points to BufferCtlBase because of only
> + * the backend code knows about BufferDescPadded, LWLock and
> + * others structures and the same time it must be usable in
> + * the frontend code.
> + */
> +typedef struct BufferCtlData
> +{
> + BufferCtlBase base;

Eeek. What's the point here?


Andres




Re: [HACKERS] [PATCH] Refactoring of LWLock tranches

2015-11-15 Thread and...@anarazel.de
On 2015-11-15 16:24:24 -0500, Robert Haas wrote:
> I think it needs to be adapted to use predefined constants for the
> tranche IDs instead of hardcoded values, maybe based on the attached
> tranche-constants.patch.

Yea, that part is clearly not ok. Let me look at the patch.

> Also, I think we should rip all the volatile qualifiers out of
> bufmgr.c, using the second attached patch, devolatileize-bufmgr.patch.
> The comment claiming that we need this because spinlocks aren't
> compiler barriers is obsolete.

Same here.

> @@ -457,7 +457,7 @@ CreateLWLocks(void)
>   LWLockCounter = (int *) ((char *) MainLWLockArray - 3 * 
> sizeof(int));
>   LWLockCounter[0] = NUM_FIXED_LWLOCKS;
>   LWLockCounter[1] = numLocks;
> - LWLockCounter[2] = 1;   /* 0 is the main array */
> + LWLockCounter[2] = LWTRANCHE_LAST_BUILTIN_ID + 1;
>   }

Man this LWLockCounter[0] stuff is just awful. Absolutely nothing that
needs to be fixed here, but it really should be fixed someday.

>  /*
> + * We reserve a few predefined tranche IDs.  These values will never be
> + * returned by LWLockNewTrancheId.
> + */
> +#define LWTRANCHE_MAIN   0
> +#define LWTRANCHE_BUFFER_CONTENT 1
> +#define LWTRANCHE_BUFFER_IO_IN_PROGRESS  2
> +#define LWTRANCHE_LAST_BUILTIN_ID	LWTRANCHE_BUFFER_IO_IN_PROGRESS

Nitpick: I'm inclined to use an enum to avoid having to adjust the last
builtin id when adding a new builtin tranche.
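
Roughly, something like this (member names from the #defines quoted above;
the trailing sentinel is illustrative and would replace
LWTRANCHE_LAST_BUILTIN_ID):

typedef enum BuiltinTrancheIds
{
    LWTRANCHE_MAIN,
    LWTRANCHE_BUFFER_CONTENT,
    LWTRANCHE_BUFFER_IO_IN_PROGRESS,
    /* new builtin tranches go above this line */
    LWTRANCHE_FIRST_USER_DEFINED
} BuiltinTrancheIds;

That way nothing needs to be renumbered when a builtin tranche is added.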


Besides that minor thing I think this works for me. We might separately
want something making it easier to add non-builtin tranches, but that's
really an independent matter.

> From 9e7f9219b5e752da46be0e26a0be074191ae8f62 Mon Sep 17 00:00:00 2001
> From: Robert Haas 
> Date: Sun, 15 Nov 2015 10:24:03 -0500
> Subject: [PATCH 1/3] De-volatile-ize.

I very strongly think this should be done. It's painful having to cast
away volatile, and it de-optimizes a fair bit of performance-relevant
code.

>  /* local state for StartBufferIO and related functions */
> -static volatile BufferDesc *InProgressBuf = NULL;
> +static BufferDesc *InProgressBuf = NULL;
>  static bool IsForInput;
>  
>  /* local state for LockBufferForCleanup */
> -static volatile BufferDesc *PinCountWaitBuf = NULL;
> +static BufferDesc *PinCountWaitBuf = NULL;

Hm. These could also be relevant for sigsetjmp et al. Haven't found
relevant code tho.

> - volatile BufferDesc *bufHdr;
> + BufferDesc *bufHdr;
>   Block   bufBlock;

Looks mis-indented now, similarly in a bunch of other places. Maybe run
pgindent afterwards?


>   /*
>* Header spinlock is enough to examine BM_DIRTY, see comment in
> @@ -1707,7 +1707,7 @@ BufferSync(int flags)
>   num_written = 0;
>   while (num_to_scan-- > 0)
>   {
> - volatile BufferDesc *bufHdr = GetBufferDescriptor(buf_id);
> + BufferDesc *bufHdr = GetBufferDescriptor(buf_id);
>

This case has some theoretical behavioural implications: as
bufHdr->flags is accessed without a spinlock, the compiler might re-use
an older value. But that's ok: a) there's no old value it really could
use; b) the whole race condition exists anyway, and the comment in the
body explains why that's ok.

>  BlockNumber
>  BufferGetBlockNumber(Buffer buffer)
>  {
> - volatile BufferDesc *bufHdr;
> + BufferDesc *bufHdr;
>  
>   Assert(BufferIsPinned(buffer));
>  
> @@ -2346,7 +2346,7 @@ void
>  BufferGetTag(Buffer buffer, RelFileNode *rnode, ForkNumber *forknum,
>BlockNumber *blknum)
>  {
> - volatile BufferDesc *bufHdr;
> + BufferDesc *bufHdr;

>   /* Do the same checks as BufferGetBlockNumber. */
>   Assert(BufferIsPinned(buffer));
> @@ -2382,7 +2382,7 @@ BufferGetTag(Buffer buffer, RelFileNode *rnode, 
> ForkNumber *forknum,
>   * as the second parameter.  If not, pass NULL.
>   */
>  static void
> -FlushBuffer(volatile BufferDesc *buf, SMgrRelation reln)
> +FlushBuffer(BufferDesc *buf, SMgrRelation reln)
>  {

>   XLogRecPtr  recptr;
>   ErrorContextCallback errcallback;
> @@ -2520,7 +2520,7 @@ RelationGetNumberOfBlocksInFork(Relation relation, 
> ForkNumber forkNum)
>  bool
>  BufferIsPermanent(Buffer buffer)
>  {
> - volatile BufferDesc *bufHdr;
> + BufferDesc *bufHdr;


These require that the buffer is pinned (a barrier-implying
operation). The pin prevents the contents from changing in a relevant
manner, and the barrier implied by the pin forces the core's view to be
non-stale.


> @@ -2613,7 +2613,7 @@ DropRelFileNodeBuffers(RelFileNodeBackend rnode, 
> ForkNumber forkNum,
>  
>   for (i = 0; i < NBuffers; i++)
>   {
> - volatile BufferDesc *bufHdr = GetBufferDescriptor(i);
> + BufferDesc *bufHdr = GetBufferDescriptor(i);

>   /*
>* We can

Re: [HACKERS] [PATCH] Refactoring of LWLock tranches

2015-09-15 Thread and...@anarazel.de
On 2015-09-15 14:39:51 -0400, Robert Haas wrote:
> We could, but since that would be strictly more annoying and less
> flexible than what we've already got, why would we?

I don't find the current approach of having to define tranches in every
backend all that convenient. It also completely breaks down if you want
to display locks from tranches that are only visible within a subset of
the backends - not that unlikely given that shared_preload_libraries is
a PITA.
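
To illustrate, the pattern an extension has to go through today looks
roughly like this (imaginary "my_module" extension; shmem request/size hooks
omitted). The registration call at the bottom has to run in every single
backend before the tranche is known there:

#include "postgres.h"
#include "storage/lwlock.h"
#include "storage/shmem.h"

typedef struct MyModuleSharedState
{
    int          tranche_id;
    LWLockPadded locks[8];
} MyModuleSharedState;

static LWLockTranche my_tranche;

void
my_module_shmem_init(void)
{
    bool        found;
    MyModuleSharedState *state;

    state = ShmemInitStruct("my_module", sizeof(MyModuleSharedState), &found);
    if (!found)
    {
        int         i;

        state->tranche_id = LWLockNewTrancheId();   /* once, in one backend */
        for (i = 0; i < 8; i++)
            LWLockInitialize(&state->locks[i].lock, state->tranche_id);
    }

    my_tranche.name = "my_module";
    my_tranche.array_base = state->locks;
    my_tranche.array_stride = sizeof(LWLockPadded);

    /* ...yet this has to happen in *every* backend, hence the complaint. */
    LWLockRegisterTranche(state->tranche_id, &my_tranche);
}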

Greetings,

Andres Freund




Re: [HACKERS] [PATCH] Refactoring of LWLock tranches

2015-09-06 Thread and...@anarazel.de
On 2015-09-06 22:57:04 +0300, Ildus Kurbangaliev wrote:
> Ok, I've kept only one tranche for individual LWLocks

But you've added the lock names as a statically sized array to all
tranches? Why not just a pointer to an array that's either NULL ('not
individually named locks') or appropriately sized?

> > I don't really like the tranche model as in the patch right now. I'd
> > rather have it in a way that we have one tranche for all the individual
> > lwlocks, where the tranche points to an array of names alongside the
> > tranche's name. And then for the others we just supply the tranche name,
> > but leave the name array empty, in which case a name can be generated.
> 
> Maybe I don't understand something here, but why add extra field to all 
> tranches
> if we need only one array (especially the array for individual LWLocks).

It's cheap to add an optional pointer field to an individual
tranche. It'll be far less cheap to extend the max number of
tranches. But it's also just ugly to use a tranche per lock - they're
intended to describe 'runs' of locks.
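
Roughly, something like this (the first three fields mirror the existing
LWLockTranche; lock_names is the optional addition, NULL for tranches whose
locks aren't individually named):

#include <stddef.h>

typedef struct LWLockTrancheSketch
{
    const char *name;                 /* tranche name */
    void       *array_base;           /* first lock covered by the tranche */
    size_t      array_stride;         /* the in-tree field uses Size */
    const char *const *lock_names;    /* per-lock names, or NULL */
} LWLockTrancheSketch;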

Greetings,

Andres Freund




Re: [HACKERS] [PATCH] Refactoring of LWLock tranches

2015-09-06 Thread and...@anarazel.de
Hi,

On 2015-09-05 12:48:12 +0300, Ildus Kurbangaliev wrote:
> Another parts require a some discussion so I didn't touch them yet.

At this point I don't see any point in further review until these are
addressed.

> The idea to create an individual tranches for individual LWLocks have
> come from Heikki Linnakangas and I also think that tranche is a good place to 
> keep
> LWLock name.

I think it's rather ugly.

> Base of these tranches points to MainLWLockArray

And that's just plain wrong. The base of a tranche ought to point to the
first lwlock in it.

Greetings,

Andres Freund




Re: [HACKERS] RFC: replace pg_stat_activity.waiting with something more descriptive

2015-09-01 Thread and...@anarazel.de
On 2015-08-04 23:37:08 +0300, Ildus Kurbangaliev wrote:
> diff --git a/src/backend/access/transam/clog.c 
> b/src/backend/access/transam/clog.c
> index 3a58f1e..10c25cf 100644
> --- a/src/backend/access/transam/clog.c
> +++ b/src/backend/access/transam/clog.c
> @@ -457,7 +457,8 @@ CLOGShmemInit(void)
>  {
>   ClogCtl->PagePrecedes = CLOGPagePrecedes;
>   SimpleLruInit(ClogCtl, "CLOG Ctl", CLOGShmemBuffers(), 
> CLOG_LSNS_PER_PAGE,
> -   CLogControlLock, "pg_clog");
> +   CLogControlLock, "pg_clog",
> +   "CLogBufferLocks");
>  }

I'd rather just add the name "clog" (etc) as a string once to
SimpleLruInit instead of now 4 times.

> +/* Lock names. For monitoring purposes */
> +const char *LOCK_NAMES[] =
> +{
> + "Relation",
> + "RelationExtend",
> + "Page",
> + "Tuple",
> + "Transaction",
> + "VirtualTransaction",
> + "SpeculativeToken",
> + "Object",
> + "Userlock",
> + "Advisory"
> +};

Why do we need this rather than DescribeLockTag?

> + /* Create tranches for individual LWLocks */
> + for (i = 0; i < NUM_INDIVIDUAL_LWLOCKS; i++, tranche++)
> + {
> + int id = LWLockNewTrancheId();
> +
> + /*
> +  * We need to be sure that generated id is equal to 
> index
> +  * for individual LWLocks
> +  */
> + Assert(id == i);
> +
> + tranche->array_base = MainLWLockArray;
> + tranche->array_stride = sizeof(LWLockPadded);
> + MemSet(tranche->name, 0, LWLOCK_MAX_TRANCHE_NAME);
> +
> + /* Initialize individual LWLock */
> + LWLockInitialize(&MainLWLockArray[i].lock, id);
> +
> + /* Register new tranche in tranches array */
> + LWLockRegisterTranche(id, tranche);
> + }
> +
> + /* Fill individual LWLock names */
> + InitLWLockNames();

Why a new tranche for each of these? And it can't be correct that each
has the same base?



I don't really like the tranche model as in the patch right now. I'd
rather have it in a way that we have one tranche for all the individual
lwlocks, where the tranche points to an array of names alongside the
tranche's name. And then for the others we just supply the tranche name,
but leave the name array empty, in which case a name can be generated.

Greetings,

Andres Freund

