Re: Lock contention high

2021-12-20 Thread Yura Sokolov
On Thu, 28/10/2021 at 03:14 +0530, Ashkil Dighin wrote:
> Hi,
> Yes, lock contention reduced with postgresqlv14.
> Lock acquire reduced 18% to 10%
> 10.49 %postgres  postgres[.] LWLockAcquire
> 5.09%  postgres  postgres[.] _bt_compare
> 
> Is lock contention can be reduced to 0-3%?
> On pg-stat-activity shown LwLock as “BufferCounter” and “WalInsert”
> 
> 
> On Tuesday, October 26, 2021, Andres Freund  wrote:
> > Hi,
> > 
> > On 2021-10-12 13:05:12 +0530, Ashkil Dighin wrote:
> > > PostgreSQL version: 13.3
> > 
> > You could try postgres 14 - that did improve scalability in some areas.
> > 
> > 
> > 
> > > Perf data for 24vu(TPC-C)
> > > 
> > > 
> > >   18.99%  postgres  postgres[.] LWLockAcquire
> > >  7.09%  postgres  postgres[.] _bt_compare
> > >  8.66%  postgres  postgres[.] LWLockRelease
> > >  2.28%  postgres  postgres[.] GetSnapshotData
> > >  2.25%  postgres  postgres[.] hash_search_with_hash_value
> > >  2.11%  postgres  postgres[.] XLogInsertRecord
> > >  1.98%  postgres  postgres[.] PinBuffer
> > 
> > To be more useful you'd need to create a profile with 'caller' information
> > using 'perf record --call-graph dwarf', and then check what the important
> > callers are.
> > 
> > 
> > > Postgres.conf used  in Baremetal
> > > 
> > > shared_buffers = 128GB(1/4 th RAM size)
> > > effective_cachesize=392 GB(1/3 or 75% of RAM size)
> > 
> > If your hot data set is actually larger than s_b, I'd recommend trying a
> > larger s_b. It's plausible that a good chunk of lock contention is from 
> > that.
> > 

Could you try the attached patch?
It reduces lock contention in the buffer manager by not acquiring
two partition locks simultaneously during buffer eviction.

v1-0001-* - the file for PostgreSQL 14 and the master branch
vpg13v1-0001-* - the file for PostgreSQL 13

Corresponding (not so loud) discussion:
https://postgr.es/m/flat/1edbb61981fe1d99c3f20e3d56d6c88999f4227c.camel%40postgrespro.ru



regards,

Yura Sokolov
y.soko...@postgrespro.ru
funny.fal...@gmail.com
From efa1d36f0e6a83f4329d259f310b8da25b04bc24 Mon Sep 17 00:00:00 2001
From: Yura Sokolov 
Date: Wed, 22 Sep 2021 13:10:37 +0300
Subject: [PATCH vpg13v1] bufmgr: do not acquire two partition locks.

Acquiring two partition locks leads to a complex dependency chain that hurts
at high concurrency levels.

There is no need to hold both locks simultaneously. The buffer is pinned, so
other processes cannot select it for eviction. Once the tag is cleared and the
buffer is removed from the old partition, other processes will not find it.
Therefore it is safe to release the old partition lock before acquiring
the new partition lock.

This change requires manually returning the BufferDesc to the free list.

Insertion into and deletion from dynahash are also optimized by avoiding
unnecessary free list manipulations in the common case (when a buffer is
reused).

Also fixes a small, never-triggered bug in hash_update_hash_key.
---
 src/backend/storage/buffer/buf_table.c |  54 +++--
 src/backend/storage/buffer/bufmgr.c| 190 
 src/backend/utils/hash/dynahash.c  | 289 +++--
 src/include/storage/buf_internals.h|   6 +-
 src/include/utils/hsearch.h|  17 ++
 5 files changed, 410 insertions(+), 146 deletions(-)

diff --git a/src/backend/storage/buffer/buf_table.c b/src/backend/storage/buffer/buf_table.c
index 4953ae9f824..feab7f62f5b 100644
--- a/src/backend/storage/buffer/buf_table.c
+++ b/src/backend/storage/buffer/buf_table.c
@@ -107,36 +107,29 @@ BufTableLookup(BufferTag *tagPtr, uint32 hashcode)
 
 /*
  * BufTableInsert
- *		Insert a hashtable entry for given tag and buffer ID,
- *		unless an entry already exists for that tag
- *
- * Returns -1 on successful insertion.  If a conflicting entry exists
- * already, returns the buffer ID in that entry.
+ *		Insert a hashtable entry for given tag and buffer ID.
+ *		Caller should be sure there is no conflicting entry.
  *
  * Caller must hold exclusive lock on BufMappingLock for tag's partition
+ * and call BufTableLookup to check for conflicting entry.
+ *
+ * If oldelem is passed it is reused.
  */
-int
-BufTableInsert(BufferTag *tagPtr, uint32 hashcode, int buf_id)
+void
+BufTableInsert(BufferTag *tagPtr, uint32 hashcode, int buf_id, void *oldelem)
 {
 	BufferLookupEnt *result;
-	bool		found;
 
 	Assert(buf_id >= 0);		/* -1 is reserved for not-in-table */
 	Assert(tagPtr->blockNum != P_NEW);	/* invalid tag */
 
 	result = (BufferLookupEnt *)
-		hash_search_with_hash_value(SharedBufHash,
-	(void *) tagPtr,
-	hashcode,
-	HASH_ENTER,
-	&found);
-
-	if (found)	/* found something already in the table */
-		return result->id;
+		hash_insert_with_hash_nocheck(SharedBufHash,
+	  (void *) tagPtr,
+	  hashcode,
+	  oldelem);
 
 	result->id = buf_id;
-
-	return -1;
 }
 
 /*
@@ -144,19 

Re: Lock contention high

2021-11-29 Thread arjun shetty
1. How can I check which NUMA node a PostgreSQL process is fetching memory
from?

2. Is this NUMA configuration better for PostgreSQL?
   vm.zone_reclaim_mode = 0
   numactl --interleave=all /init.d/PostgreSQL start
   kernel.numa_balancing = 0





On Wednesday, November 17, 2021, arjun shetty 
wrote:

> Hi Askhil
>
> PostgreSQL utilizes  lightweight locks(LWLocks) to synchronize and
> control access to the buffer content. A process acquires an LWLock in a
> shared mode to read from the buffer and an exclusive mode  to write to
> the buffer. Therefore, while holding an exclusive lock, a process prevents
> other processes from acquiring a shared or exclusive lock. Also, a shared
> lock can be acquired concurrently by other processes. The issue starts when
> many processes acquire an exclusive lock on buffer content. As a result,
> LwlockAcquire seen as top hot function in profilng.
> Here  need to understand LwlockAcquire is lock contention or cpu time
> spent inside the method/ function(top function in profiling)
>
> It can analysed log  “LwStatus” with parameters like
> ex-acquire-count(exclusive mode) , sh-acquire-count , block-count and
> spin-delay-count
>
> Total lock acquisition request = ex-acquire-count+sh-acquire-count)
> Time lock contention %= block count)/ Total lock acquisition request.
>
> Time lock contention may provide as most of cpu time inside the function
> rather than spinning/ waiting for lock.
>
> On Friday, November 12, 2021, Ashkil Dighin 
> wrote:
>
>> Hi
>> I suspect lock contention and performance issues with __int128. And I
>> would like to check the performance by forcibly disabling
>> int128(Maxalign16bytes) and enable like long long(maxlign 8bytes).
>>  Is it possible to disable int128 in PostgreSQL?
>>
>> On Thursday, October 28, 2021, Andres Freund  wrote:
>>
>>> Hi,
>>>
>>> On October 27, 2021 2:44:56 PM PDT, Ashkil Dighin <
>>> ashkildighi...@gmail.com> wrote:
>>> >Hi,
>>> >Yes, lock contention reduced with postgresqlv14.
>>> >Lock acquire reduced 18% to 10%
>>> >10.49 %postgres  postgres[.] LWLockAcquire
>>> >5.09%  postgres  postgres[.] _bt_compare
>>> >
>>> >Is lock contention can be reduced to 0-3%?
>>>
>>> Probably not, or at least not easily. Because of the atomic instructions
>>> the locking also includes  some other costs (e.g. cache misses, serializing
>>> store buffers,...).
>>>
>>> There's a good bit we can do to increase the cache efficiency around
>>> buffer headers, but it won't get us quite that low I'd guess.
>>>
>>>
>>> >On pg-stat-activity shown LwLock as “BufferCounter” and “WalInsert”
>>>
>>> Without knowing what proportion they have to each and to non-waiting
>>> backends that unfortunately doesn't help that much..
>>>
>>> Andres
>>>
>>> --
>>> Sent from my Android device with K-9 Mail. Please excuse my brevity.
>>>
>>


Re: Lock contention high

2021-11-16 Thread arjun shetty
Hi Ashkil

PostgreSQL uses lightweight locks (LWLocks) to synchronize and control
access to buffer content. A process acquires an LWLock in shared mode to
read from a buffer and in exclusive mode to write to it. While holding an
exclusive lock, a process prevents other processes from acquiring a shared
or exclusive lock, whereas a shared lock can be held concurrently by other
processes. The trouble starts when many processes want an exclusive lock on
the same buffer content; as a result, LWLockAcquire shows up as the top hot
function in profiling.
The question is whether the time in LWLockAcquire is lock contention or CPU
time spent inside the function itself (it being the top function in the
profile).

This can be analysed from the LWLock statistics log output (e.g. a build
with LWLOCK_STATS), which reports parameters like ex-acquire-count
(exclusive mode), sh-acquire-count, block-count and spin-delay-count:

Total lock acquisition requests = ex-acquire-count + sh-acquire-count
Lock contention % = block-count / total lock acquisition requests
(for example, 50,000 blocks out of 1,000,000 acquisitions is 5% contention)

A low lock contention percentage suggests most of the CPU time is spent
inside the function rather than spinning or waiting for the lock.

On Friday, November 12, 2021, Ashkil Dighin 
wrote:

> Hi
> I suspect lock contention and performance issues with __int128. And I
> would like to check the performance by forcibly disabling
> int128(Maxalign16bytes) and enable like long long(maxlign 8bytes).
>  Is it possible to disable int128 in PostgreSQL?
>
> On Thursday, October 28, 2021, Andres Freund  wrote:
>
>> Hi,
>>
>> On October 27, 2021 2:44:56 PM PDT, Ashkil Dighin <
>> ashkildighi...@gmail.com> wrote:
>> >Hi,
>> >Yes, lock contention reduced with postgresqlv14.
>> >Lock acquire reduced 18% to 10%
>> >10.49 %postgres  postgres[.] LWLockAcquire
>> >5.09%  postgres  postgres[.] _bt_compare
>> >
>> >Is lock contention can be reduced to 0-3%?
>>
>> Probably not, or at least not easily. Because of the atomic instructions
>> the locking also includes  some other costs (e.g. cache misses, serializing
>> store buffers,...).
>>
>> There's a good bit we can do to increase the cache efficiency around
>> buffer headers, but it won't get us quite that low I'd guess.
>>
>>
>> >On pg-stat-activity shown LwLock as “BufferCounter” and “WalInsert”
>>
>> Without knowing what proportion they have to each and to non-waiting
>> backends that unfortunately doesn't help that much..
>>
>> Andres
>>
>> --
>> Sent from my Android device with K-9 Mail. Please excuse my brevity.
>>
>


Re: Lock contention high

2021-11-15 Thread Tom Lane
Ashkil Dighin  writes:
> I suspect lock contention and performance issues with __int128. And I would
> like to check the performance by forcibly disabling int128(Maxalign16bytes)
> and enable like long long(maxlign 8bytes).
>  Is it possible to disable int128 in PostgreSQL?

Sure, you can build without it --- easiest way would be to modify
pg_config.h after the configure step.  But the idea that it has
something to do with lock contention seems like nonsense.

regards, tom lane




Re: Lock contention high

2021-11-15 Thread Ashkil Dighin
Hi
I suspect lock contention and performance issues with __int128, and I would
like to check performance by forcibly disabling int128 (MAXALIGN 16 bytes)
and using long long instead (MAXALIGN 8 bytes).
Is it possible to disable int128 in PostgreSQL?

On Thursday, October 28, 2021, Andres Freund  wrote:

> Hi,
>
> On October 27, 2021 2:44:56 PM PDT, Ashkil Dighin <
> ashkildighi...@gmail.com> wrote:
> >Hi,
> >Yes, lock contention reduced with postgresqlv14.
> >Lock acquire reduced 18% to 10%
> >10.49 %postgres  postgres[.] LWLockAcquire
> >5.09%  postgres  postgres[.] _bt_compare
> >
> >Is lock contention can be reduced to 0-3%?
>
> Probably not, or at least not easily. Because of the atomic instructions
> the locking also includes  some other costs (e.g. cache misses, serializing
> store buffers,...).
>
> There's a good bit we can do to increase the cache efficiency around
> buffer headers, but it won't get us quite that low I'd guess.
>
>
> >On pg-stat-activity shown LwLock as “BufferCounter” and “WalInsert”
>
> Without knowing what proportion they have to each and to non-waiting
> backends that unfortunately doesn't help that much..
>
> Andres
>
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.
>


Re: Lock contention high

2021-10-28 Thread Ashkil Dighin
Hi,
Yes, lock contention is reduced with PostgreSQL v14.
LWLockAcquire dropped from 18% to 10%:
10.49%  postgres  postgres  [.] LWLockAcquire
 5.09%  postgres  postgres  [.] _bt_compare

Can lock contention be reduced to 0-3%?
pg_stat_activity shows the LWLock wait events “BufferCounter” and “WalInsert”.


On Tuesday, October 26, 2021, Andres Freund  wrote:

> Hi,
>
> On 2021-10-12 13:05:12 +0530, Ashkil Dighin wrote:
> > PostgreSQL version: 13.3
>
> You could try postgres 14 - that did improve scalability in some areas.
>
>
>
> > Perf data for 24vu(TPC-C)
> > 
> >
> >   18.99%  postgres  postgres[.] LWLockAcquire
> >  7.09%  postgres  postgres[.] _bt_compare
> >  8.66%  postgres  postgres[.] LWLockRelease
> >  2.28%  postgres  postgres[.] GetSnapshotData
> >  2.25%  postgres  postgres[.] hash_search_with_hash_value
> >  2.11%  postgres  postgres[.] XLogInsertRecord
> >  1.98%  postgres  postgres[.] PinBuffer
>
> To be more useful you'd need to create a profile with 'caller' information
> using 'perf record --call-graph dwarf', and then check what the important
> callers are.
>
>
> > Postgres.conf used  in Baremetal
> > 
> > shared_buffers = 128GB(1/4 th RAM size)
> > effective_cachesize=392 GB(1/3 or 75% of RAM size)
>
> If your hot data set is actually larger than s_b, I'd recommend trying a
> larger s_b. It's plausible that a good chunk of lock contention is from
> that.
>
> Greetings,
>
> Andres Freund
>


Re: Lock contention high

2021-10-27 Thread Andres Freund
Hi, 

On October 27, 2021 2:44:56 PM PDT, Ashkil Dighin  
wrote:
>Hi,
>Yes, lock contention reduced with postgresqlv14.
>Lock acquire reduced 18% to 10%
>10.49 %postgres  postgres[.] LWLockAcquire
>5.09%  postgres  postgres[.] _bt_compare
>
>Is lock contention can be reduced to 0-3%?

Probably not, or at least not easily. Because of the atomic instructions the 
locking also includes  some other costs (e.g. cache misses, serializing store 
buffers,...).

There's a good bit we can do to increase the cache efficiency around buffer 
headers, but it won't get us quite that low I'd guess.


>On pg-stat-activity shown LwLock as “BufferCounter” and “WalInsert”

Without knowing what proportion they have to each and to non-waiting backends 
that unfortunately doesn't help that much..

Andres

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.




Re: Lock contention high

2021-10-25 Thread Andres Freund
Hi,

On 2021-10-25 18:38:40 -0600, Michael Lewis wrote:
> On Mon, Oct 25, 2021, 5:36 PM Andres Freund  wrote:
> If your hot data set is actually larger than s_b, I'd recommend trying a
> larger s_b. It's plausible that a good chunk of lock contention is from
> that.

> How much larger might you go?

I've seen s_b in the ~700GB range being a considerable speedup over lower
values quite a few years ago. I don't see a clear cut upper boundary. The one
thing this can regress measurably is the speed of dropping / truncating
tables.


> Any write ups on lock contention as it relates to shared buffers?

I don't have a concrete thing to point you to, but searching for
NUM_BUFFER_PARTITIONS might point you to some discussions.


> How impactful might huge pages (off, transparent or on) be to the use of
> shared buffers and the related locking mechanism?

Using huge pages can *hugely* help performance-wise. Not directly by relieving
postgres-side contention however (it does reduce cache usage somewhat, but
it's mainly really just the frequency of TLB misses that makes the
difference).

Greetings,

Andres Freund




Re: Lock contention high

2021-10-25 Thread Michael Lewis
On Mon, Oct 25, 2021, 5:36 PM Andres Freund  wrote:
If your hot data set is actually larger than s_b, I'd recommend trying a
larger s_b. It's plausible that a good chunk of lock contention is from
that.


How much larger might you go? Any write ups on lock contention as it
relates to shared buffers? How impactful might huge pages (off, transparent
or on) be to the use of shared buffers and the related locking mechanism?


Re: Lock contention high

2021-10-25 Thread Andres Freund
Hi,

On 2021-10-12 13:05:12 +0530, Ashkil Dighin wrote:
> PostgreSQL version: 13.3

You could try postgres 14 - that did improve scalability in some areas.



> Perf data for 24vu(TPC-C)
> 
> 
>   18.99%  postgres  postgres[.] LWLockAcquire
>  7.09%  postgres  postgres[.] _bt_compare
>  8.66%  postgres  postgres[.] LWLockRelease
>  2.28%  postgres  postgres[.] GetSnapshotData
>  2.25%  postgres  postgres[.] hash_search_with_hash_value
>  2.11%  postgres  postgres[.] XLogInsertRecord
>  1.98%  postgres  postgres[.] PinBuffer

To be more useful you'd need to create a profile with 'caller' information
using 'perf record --call-graph dwarf', and then check what the important
callers are.


> Postgres.conf used  in Baremetal
> 
> shared_buffers = 128GB(1/4 th RAM size)
> effective_cachesize=392 GB(1/3 or 75% of RAM size)

If your hot data set is actually larger than s_b, I'd recommend trying a
larger s_b. It's plausible that a good chunk of lock contention is from that.

Greetings,

Andres Freund




Re: Lock contention high

2021-10-20 Thread Ashkil Dighin
Hi
A B-tree index is used in the Postgres environment.
I checked different warehouse counts (100, 800, 1600, 2400 and 3200) with
64 virtual users.
Across these warehouse counts the lock contention stays the same, i.e. approx.
17%, and iostat usage is 30-40%.



pg_Count_ware=100
-
17.76%  postgres  postgres[.] LWLockAcquire
4.88%  postgres  postgres[.] _bt_compare
3.10%  postgres  postgres[.] LWLockRelease




pg_Count_ware=800(previously I used Warehouse 800)

17.91%  postgres  postgres[.] LWLockAcquire
5.76%  postgres  postgres[.] _bt_compare
3.06%  postgres  postgres[.] LWLockRelease





pg_Count_ware_1600
-
17.80%  postgres  postgres[.] LWLockAcquire
5.88%  postgres  postgres[.] _bt_compare
2.70%  postgres  postgres[.] LWLockRelease




pg_Count_ware_2400
--
17.77%  postgres  postgres[.] LWLockAcquire
6.01%  postgres  postgres[.] _bt_compare
2.71%  postgres  postgres[.] LWLockRelease




pg_Count_ware_3200
--
17.46%  postgres  postgres[.] LWLockAcquire
6.32%  postgres  postgres[.] _bt_compare
2.86%  postgres  postgres[.] hash_search_with_hash_value



1. Tried different values for the lock management settings in postgresql.conf,
but it did not help to reduce lock contention:
deadlock_timeout = 5s
max_locks_per_transaction = 64
max_pred_locks_per_transaction = 64
max_pred_locks_per_relation = -2
max_pred_locks_per_page = 2
2. The intention is to check PostgreSQL scalability and performance or
throughput (TPC-C/TPC-H) with HammerDB and pgbench, with the server
configuration tuned (postgresql.conf) to reduce lock contention:
CPUs:              256
Threads per core:  2
Cores per socket:  64
Sockets:           2
NUMA node0 :   0-63,128-191
NUMA node1 :   64-127,192-255
RAM size :512GB
SSD :1TB

Ref link:
https://www.hammerdb.com/blog/uncategorized/hammerdb-best-practice-for-postgresql-performance-and-scalability/

On Thursday, October 14, 2021, Peter Geoghegan  wrote:

> On Tue, Oct 12, 2021 at 12:45 AM Ashkil Dighin 
> wrote:
> > Lock contention observed high in PostgreSQLv13.3
> > The source code compiled with GNC(GCCv11.x)
> > PostgreSQL version: 13.3
> > Operating system:   RHEL8.3
> > Kernel name:4.18.0-305.10.2.el8_4.x86_64
> > RAM Size:512GB
> > SSD: 1TB
> > The environment used IBM metal and test benchmark environment
> HammerDbv4.2
> > Test case :TPC-C
>
> You didn't say how many TPC-C warehouses you used. In my experience,
> people sometimes run TPC-C with relatively few, which will tend to
> result in extreme contention on certain B-Tree leaf pages. (My
> experiences are with BenchmarkSQL, but I can't imagine HammerDB is too
> much different.)
>
> Assuming that's the case here, for you, then it's not clear that you
> have a real problem. You're really not supposed to run the benchmark
> in that way, per the TPC-C spec, which strictly limits the number of
> transactions per minute per warehouse -- for better or worse, valid
> results generally require that you use lots of warehouses to get a
> very large database (think terabytes). If you run the benchmark with
> 100 warehouses or less, on a big server, then the contention you'll
> see will be out of all proportion to what you're ever likely to see in
> the real world.
>
> --
> Peter Geoghegan
>


Re: Lock contention high

2021-10-14 Thread Ashkil Dighin
Not using PostGIS

On Thursday, October 14, 2021, Paul Friedman <
paul.fried...@streetlightdata.com> wrote:

> Are you using PostGIS?
>
> If so, there is an issue with TOAST table locking having these symptoms.
>
>
> ---Paul
>
>
> On Wed, Oct 13, 2021 at 11:15 AM MichaelDBA 
> wrote:
>
>> 1.Is there a way to tune the lock contention ?
>> 2.Is any recommendations to tune/reduce the lock contention via postgres.conf
>>
>> I think you'd want to find *which* LW locks are being waited on, to see if 
>> it's
>> something that can be easily tuned.
>>
>> You can check pg_stat_activity, or maybe create a cronjob to record its 
>> content
>> for later analysis.
>>
>>
>> Hello,
>>
>> Also turn on log_lock_waits so you can evaluate the actual SQL causing
>> the problems in the PG log files.  Thinking ahead, you may want to consider
>> if using advisory locks from the application side of things might be
>> helpful to manage locks in a more pessimistic way.  Also, join with
>> pg_locks table to find out the specific resources that are in contention.
>>
>> Regards,
>> Michael Vitale
>>
>>
>>


Re: Lock contention high

2021-10-14 Thread Ashkil Dighin
NUMA node0 CPU(s):   0-63,128-191
NUMA node1 CPU(s):   64-127,192-255
Thread(s) per core:  2
Core(s) per socket:  64
Socket(s):           2
NUMA node(s):        2
Core pinning (taskset) perf lock contention results for 24 and 32 vu:
0-63
  24: 18.03%  postgres  postgres[.] LWLockAcquire
  32: 7.02%  postgres  postgres [.] LWLockAcquire
64-127
  24: 17.96%  postgres  postgres[.] LWLockAcquire
  32: 7.04%  postgres  postgres [.] LWLockAcquire
0-63,128-191(Node0)
  24: 18.4%  postgres  postgres[.] LWLockAcquire
  32: 7.07%  postgres  postgres[.] LWLockAcquire
64-127,192-255(Node1)
  24: 18.3%  postgres  postgres[.] LWLockAcquire
  32: 7.06%  postgres  postgres[.] LWLockAcquire
I do not know the interconnect type, and I have restrictions on running lscpu.

On Tuesday, October 12, 2021, Mikhail Zhilin  wrote:

> Hi,
>
> How many sockets are on motherboard?
> What is CPU model and interconnect type (UPI?)?
> Can you share output of "lscpu"?
>
> If you have more than 1 NUMA node it may be worth to run PostgreSQL in
> single NUMA node via taskset. It will eliminate access to remote memory and
> speed up processing.
>
> Thanks,
>  Michael.
>
> On 10/12/21 10:35 AM, Ashkil Dighin wrote:
>
>
> Hi,
> Lock contention observed high in PostgreSQLv13.3
> The source code compiled with GNC(GCCv11.x)
> PostgreSQL version: 13.3
> Operating system:   RHEL8.3
> Kernel name:4.18.0-305.10.2.el8_4.x86_64
> RAM Size:512GB
> SSD: 1TB
> The environment used IBM metal and test benchmark environment HammerDbv4.2
> Test case :TPC-C
>
> Perf data for 24vu(TPC-C)
> 
>
>   18.99%  postgres  postgres[.] LWLockAcquire
>  7.09%  postgres  postgres[.] _bt_compare
>  8.66%  postgres  postgres[.] LWLockRelease
>  2.28%  postgres  postgres[.] GetSnapshotData
>  2.25%  postgres  postgres[.] hash_search_with_hash_value
>  2.11%  postgres  postgres[.] XLogInsertRecord
>  1.98%  postgres  postgres[.] PinBuffer
>
> 1.Is there a way to tune the lock contention ?
> 2.Is any recommendations to tune/reduce the lock contention via
> postgres.conf
>
> Postgres.conf used  in Baremetal
> 
> shared_buffers = 128GB(1/4 th RAM size)
> effective_cachesize=392 GB(1/3 or 75% of RAM size)
> huge_pages = on
> temp_buffers = 4000MB
> work_mem = 4000MB
> maintenance_work_mem = 512MB
> autovacuum_work_mem = -1
> max_stack_depth = 7MB
> dynamic_shared_memory_type = posix
> max_files_per_process = 4000
> effective_io_concurrency = 32
> wal_level = minimal
> synchronous_commit = off
> wal_buffers = 512MB
> checkpoint_timeout = 1h
> checkpoint_completion_target = 1
> checkpoint_warning = 0
> log_min_messages = error
> log_min_error_statement = error
> log_timezone = 'GB'
> autovacuum = off
> datestyle = 'iso, dmy'
> timezone = 'GB'
> lc_messages = 'en_GB.UTF-8'
> lc_monetary = 'en_GB.UTF-8'
> lc_numeric = 'en_GB.UTF-8'
> lc_time = 'en_GB.UTF-8'
> default_text_search_config = 'pg_catalog.english'
> max_locks_per_transaction = 64
> max_pred_locks_per_transaction = 64
>
> Best Regards
> Anil
>
>
>


Re: Lock contention high

2021-10-14 Thread Ashkil Dighin
Hi
I captured the concurrent sessions with netstat and pg_stat_activity. Is this
procedure the right way to capture concurrent sessions in PostgreSQL?

netstat -a | grep postgres tcp 0 0 0.0.0.0:postgres 0.0.0.0:* LISTEN tcp 0
0 :postgres :53984 ESTABLISHED tcp 0 0 :postgres :54012 ESTABLISHED tcp 0
74 :postgres :53998 ESTABLISHED tcp 0 73 :53986 :postgres ESTABLISHED tcp 0
0 :54004 :postgres ESTABLISHED tcp 0 75 :53990 :postgres ESTABLISHED tcp 0
0 :postgres :53994 ESTABLISHED tcp 0 0 :postgres :54004 ESTABLISHED tcp 0
106 :53978 :postgres ESTABLISHED tcp 0 0 :postgres :53972 ESTABLISHED tcp 0
90 :54000 :postgres ESTABLISHED tcp 0 0 :postgres :54018 ESTABLISHED tcp 0
0 :54016 :postgres ESTABLISHED tcp 0 0 :postgres :53986 ESTABLISHED tcp 0
59 :54006 :postgres ESTABLISHED tcp 0 74 :postgres :53982 ESTABLISHED tcp 0
75 :53994 :postgres ESTABLISHED tcp 0 0 :53970 :postgres ESTABLISHED tcp 0
0 :postgres :53974 ESTABLISHED tcp 0 76 :53988 :postgres ESTABLISHED tcp 0
0 :postgres :54008 ESTABLISHED tcp 0 93 :54014 :postgres ESTABLISHED tcp 0
74 :54012 :postgres ESTABLISHED tcp 0 75 :53972 :postgres ESTABLISHED tcp 0
76 :54002 :postgres ESTABLISHED tcp 0 68 :postgres :54006 ESTABLISHED tcp 0
0 :postgres :53978 ESTABLISHED tcp 0 73 :54008 :postgres ESTABLISHED tcp 0
0 :postgres :53976 ESTABLISHED tcp 0 93 :53974 :postgres ESTABLISHED tcp 0
59 :53998 :postgres ESTABLISHED tcp 74 0 :53984 :postgres ESTABLISHED tcp 0
0 :postgres :54014 ESTABLISHED tcp 0 76 :53982 :postgres ESTABLISHED tcp 0
0 :postgres :54002 ESTABLISHED tcp 0 76 :53996 :postgres ESTABLISHED tcp 0
0 :postgres :53990 ESTABLISHED tcp 0 59 :53976 :postgres ESTABLISHED tcp 0
74 :postgres :53996 ESTABLISHED tcp 0 76 :53992 :postgres ESTABLISHED tcp 0
0 :postgres :54016 ESTABLISHED tcp 0 0 :postgres :54000 ESTABLISHED tcp 0 0
:postgres :53980 ESTABLISHED tcp 0 77 :53980 :postgres ESTABLISHED tcp 0 74
:54018 :postgres ESTABLISHED tcp 0 0 :postgres :53970 ESTABLISHED tcp 0 0
:postgres :53988 ESTABLISHED tcp 0 104 :54010 :postgres ESTABLISHED tcp 0 0
:postgres :54010 ESTABLISHED tcp 0 0 :postgres :53992 ESTABLISHED tcp6 0 0
[::]:postgres

select * from pg_stat_activity;


datid | datname | pid | leader_pid | usesysid | usename | application_name
| client_addr | client_hostname | client_port | backend_start | xact_start
| query_start | state_change | wait_event_type | wait_event | state |
backend_xid | backend_xmin | query | backend_type
---+--+-++--+--+--+-+-+-+---+---+---+---+-+-++-+--+-+--
| | 2092230 | | 10 | postgres | | | | | 2021-10-13 02:41:12.083391-04 | | |
| Activity | LogicalLauncherMain | | | | | logical replication launcher 16385
| tpcc | 2092540 | | 16384 | tpcc | | 127.0.0.1 | | 53970 | 2021-10-13
02:41:57.336031-04 | | 2021-10-13 02:43:58.97025-04 | 2021-10-13
02:43:58.971538-04 | Client | ClientRead | idle | | | select
sum(d_next_o_id) from district | client backend 16385 | tpcc | 2092541 | |
16384 | tpcc | | 127.0.0.1 | | 53972 | 2021-10-13 02:41:57.836054-04 |
2021-10-13 02:44:04.649045-04 | 2021-10-13 02:44:04.649054-04 | 2021-10-13
02:44:04.649055-04 | | | active | 11301598 | 11301493 | prepare delivery
(INTEGER, INTEGER) AS select delivery($1,$2) | client backend 16385 | tpcc
| 2092548 | | 16384 | tpcc | | 127.0.0.1 | | 53974 | 2021-10-13
02:41:58.336566-04 | 2021-10-13 02:44:04.649153-04 | 2021-10-13
02:44:04.649163-04 | 2021-10-13 02:44:04.649163-04 | | | active | 11301611
| 11301493 | prepare neword (INTEGER, INTEGER, INTEGER, INTEGER, INTEGER)
as select neword($1,$2,$3,$4,$5,0) | client backend 16385 | tpcc | 2092549
| | 16384 | tpcc | | 127.0.0.1 | | 53976 | 2021-10-13 02:41:58.836269-04 |
2021-10-13 02:44:04.649443-04 | 2021-10-13 02:44:04.649454-04 | 2021-10-13
02:44:04.649454-04 | | | active | | 11301528 | prepare neword (INTEGER,
INTEGER, INTEGER, INTEGER, INTEGER) as select neword($1,$2,$3,$4,$5,0) |
client backend 16385 | tpcc | 2092556 | | 16384 | tpcc | | 127.0.0.1 | |
53978 | 2021-10-13 02:41:59.336172-04 | 2021-10-13 02:44:04.648817-04 |
2021-10-13 02:44:04.648827-04 | 2021-10-13 02:44:04.648828-04 | | | active
| | 11301493 | prepare slev (INTEGER, INTEGER, INTEGER) AS select
slev($1,$2,$3) | client backend 16385 | tpcc | 2092557 | | 16384 | tpcc | |
127.0.0.1 | | 53980 | 2021-10-13 02:41:59.83835-04 | 2021-10-13
02:44:04.649027-04 | 2021-10-13 02:44:04.649036-04 | 2021-10-13
02:44:04.649036-04 | | | active | | 11301493 | prepare slev (INTEGER,
INTEGER, INTEGER) AS select slev($1,$2,$3) | client backend 16385 | tpcc |
2092564 | | 16384 | tpcc | | 127.0.0.1 | | 53982 | 2021-10-13
02:42:00.336974-04 | 2021-10-13 02:44:04.649194-04 | 
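
A more direct way to count the concurrent sessions, as a sketch that relies
only on pg_stat_activity (no netstat needed):

-- Client connections per state, as seen by the server itself.
SELECT state, count(*) AS sessions
FROM pg_stat_activity
WHERE backend_type = 'client backend'
GROUP BY state
ORDER BY sessions DESC;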

Re: Lock contention high

2021-10-14 Thread Mikhail Zhilin

Ashkil,

Can you bind postgres to a single NUMA node, for instance:
 $ taskset -pc 0-63 

Then run your benchmark and compare results in terms of benchmark metrics and
the presence of LWLock(Acquire|Release) in perf top.


BR,
 Michael.

On 10/14/21 9:45 AM, Ashkil Dighin wrote:


NUMA node0 CPU(s):   0-63,128-191NUMA node1 CPU(s):   64-127,192-255
Thread(s) per core:  2
Core(s) per socket:  64
Socket(s):   2
NUMA node(s):    2
corepinning(ta perf lock contention results for 24,32 vu
0-63
  24: 18.03%  postgres  postgres    [.] LWLockAcquire
  32: 7.02%  postgres  postgres [.] LWLockAcquire
64-127
  24: 17.96%  postgres  postgres    [.] LWLockAcquire
  32: 7.04%  postgres  postgres [.] LWLockAcquire
0-63,128-191(Node0)
  24: 18.4%  postgres  postgres    [.] LWLockAcquire
  32: 7.07%  postgres  postgres    [.] LWLockAcquire
64-127,192-255(Node1)
  24: 18.3%  postgres  postgres    [.] LWLockAcquire
  32: 7.06%  postgres  postgres    [.] LWLockAcquire

I do not understand on interconnect type and has restrictions on lscpu .

On Tuesday, October 12, 2021, Mikhail Zhilin > wrote:


Hi,

How many sockets are on motherboard?
What is CPU model and interconnect type (UPI?)?
Can you share output of "lscpu"?

If you have more than 1 NUMA node it may be worth to run
PostgreSQL in single NUMA node via taskset. It will eliminate
access to remote memory and speed up processing.

Thanks,
 Michael.

On 10/12/21 10:35 AM, Ashkil Dighin wrote:


Hi,
Lock contention observed high in PostgreSQLv13.3
The source code compiled with GNC(GCCv11.x)
PostgreSQL version: 13.3
Operating system:   RHEL8.3
Kernel name:4.18.0-305.10.2.el8_4.x86_64
RAM Size:512GB
SSD: 1TB
The environment used IBM metal and test benchmark
environment HammerDbv4.2
Test case :TPC-C

Perf data for 24vu(TPC-C)


      18.99%  postgres  postgres            [.] LWLockAcquire
     7.09%  postgres  postgres            [.] _bt_compare
     8.66%  postgres  postgres            [.] LWLockRelease
     2.28%  postgres  postgres            [.] GetSnapshotData
     2.25%  postgres  postgres            [.]
hash_search_with_hash_value
     2.11%  postgres  postgres            [.] XLogInsertRecord
     1.98%  postgres  postgres            [.] PinBuffer

1.Is there a way to tune the lock contention ?
2.Is any recommendations to tune/reduce the lock contention via
postgres.conf

Postgres.conf used  in Baremetal

shared_buffers = 128GB(1/4 th RAM size)
effective_cachesize=392 GB(1/3 or 75% of RAM size)
huge_pages = on
temp_buffers = 4000MB
work_mem = 4000MB
maintenance_work_mem = 512MB
autovacuum_work_mem = -1
max_stack_depth = 7MB
dynamic_shared_memory_type = posix
max_files_per_process = 4000
effective_io_concurrency = 32
wal_level = minimal
synchronous_commit = off
wal_buffers = 512MB
checkpoint_timeout = 1h
checkpoint_completion_target = 1
checkpoint_warning = 0
log_min_messages = error
log_min_error_statement = error
log_timezone = 'GB'
autovacuum = off
datestyle = 'iso, dmy'
timezone = 'GB'
lc_messages = 'en_GB.UTF-8'
lc_monetary = 'en_GB.UTF-8'
lc_numeric = 'en_GB.UTF-8'
lc_time = 'en_GB.UTF-8'
default_text_search_config = 'pg_catalog.english'
max_locks_per_transaction = 64
max_pred_locks_per_transaction = 64

Best Regards
Anil







Re: Lock contention high

2021-10-14 Thread Laurenz Albe
On Thu, 2021-10-14 at 11:33 +0530, Ashkil Dighin wrote:
> Captured the concurrent session with Netsat and pg-stat-actvity. Is the 
> procedure the right way to capture concurrent sesssions in postgresql?
> 
> Select pg_stat_activity 

[some two dozen sessions]

That doesn't look like you would get into trouble just from the
sheer number of sessions, so it must be something else.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: Lock contention high

2021-10-13 Thread Peter Geoghegan
On Wed, Oct 13, 2021 at 6:54 PM Jeremy Schneider
 wrote:
> only a half GB memory for autovac? (it will have a mandatory run as soon
> as you hit 200 mil XIDs, seems like you'd want the full max 1GB for it)

While anti-wraparound vacuums will become a problem for TPC-C (unless
you tune for it), it's not too sensitive to mwm. You just don't end up
accumulating too many TIDs to delete from indexes in practice, even
though the overhead from VACUUM is a concern. The new autovacuum
instrumentation in Postgres 14 makes this far clearer.

-- 
Peter Geoghegan




Re: Lock contention high

2021-10-13 Thread Jeremy Schneider
Your settings are interesting, I'm curious what the goal is for this
particular hammerdb exercise.

A few comments inline


On 10/12/21 00:35, Ashkil Dighin wrote:
> 
> Postgres.conf used  in Baremetal
> 
> maintenance_work_mem = 512MB           

only a half GB memory for autovac? (it will have a mandatory run as soon
as you hit 200 mil XIDs, seems like you'd want the full max 1GB for it)

> synchronous_commit = off            
> checkpoint_timeout = 1h         
> checkpoint_completion_target = 1       
> checkpoint_warning = 0         

curious about this, seems you're just looking to understand how much
throughput you can get with a config that would not be used on a real system

> autovacuum = off                       

i assume you understand that autovacuum will still run when you hit 200
mil XIDs. this setting seems incongruent with the previous settings,
because it seemed like you were going for throughput, which generally
requires autovacuum to be more aggressive rather than less aggressive.
assuming the benchmark runs for a properly sufficient length of time,
this setting will slow things down because of accumulating bloat.

Just a few opinions, I might be wrong, hope the feedback is helpful.  :)

-Jeremy


-- 
http://about.me/jeremy_schneider




Re: Lock contention high

2021-10-13 Thread Peter Geoghegan
On Tue, Oct 12, 2021 at 12:45 AM Ashkil Dighin  wrote:
> Lock contention observed high in PostgreSQLv13.3
> The source code compiled with GNC(GCCv11.x)
> PostgreSQL version: 13.3
> Operating system:   RHEL8.3
> Kernel name:4.18.0-305.10.2.el8_4.x86_64
> RAM Size:512GB
> SSD: 1TB
> The environment used IBM metal and test benchmark environment HammerDbv4.2
> Test case :TPC-C

You didn't say how many TPC-C warehouses you used. In my experience,
people sometimes run TPC-C with relatively few, which will tend to
result in extreme contention on certain B-Tree leaf pages. (My
experiences are with BenchmarkSQL, but I can't imagine HammerDB is too
much different.)

Assuming that's the case here, for you, then it's not clear that you
have a real problem. You're really not supposed to run the benchmark
in that way, per the TPC-C spec, which strictly limits the number of
transactions per minute per warehouse -- for better or worse, valid
results generally require that you use lots of warehouses to get a
very large database (think terabytes). If you run the benchmark with
100 warehouses or less, on a big server, then the contention you'll
see will be out of all proportion to what you're ever likely to see in
the real world.
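
One quick way to relate the warehouse count to actual data volume is to check
whether the hot set can even fit in shared_buffers. A hedged sketch, assuming
the TPC-C database is the "tpcc" database visible in the pg_stat_activity
output earlier in the thread:

-- Database size and the largest relations, to compare against
-- shared_buffers (128GB in the reported configuration).
SELECT pg_size_pretty(pg_database_size('tpcc')) AS tpcc_size;

SELECT relname,
       pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relkind IN ('r', 'i')
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;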

-- 
Peter Geoghegan




Re: Lock contention high

2021-10-13 Thread Paul Friedman
Are you using PostGIS?

If so, there is an issue with TOAST table locking having these symptoms.


---Paul


On Wed, Oct 13, 2021 at 11:15 AM MichaelDBA  wrote:

> 1.Is there a way to tune the lock contention ?
> 2.Is any recommendations to tune/reduce the lock contention via postgres.conf
>
> I think you'd want to find *which* LW locks are being waited on, to see if 
> it's
> something that can be easily tuned.
>
> You can check pg_stat_activity, or maybe create a cronjob to record its 
> content
> for later analysis.
>
>
> Hello,
>
> Also turn on log_lock_waits so you can evaluate the actual SQL causing
> the problems in the PG log files.  Thinking ahead, you may want to consider
> if using advisory locks from the application side of things might be
> helpful to manage locks in a more pessimistic way.  Also, join with
> pg_locks table to find out the specific resources that are in contention.
>
> Regards,
> Michael Vitale
>
>
>


Re: Lock contention high

2021-10-13 Thread MichaelDBA

1.Is there a way to tune the lock contention ?
2.Is any recommendations to tune/reduce the lock contention via postgres.conf

I think you'd want to find *which* LW locks are being waited on, to see if it's
something that can be easily tuned.

You can check pg_stat_activity, or maybe create a cronjob to record its content
for later analysis.


Hello,

Also turn on log_lock_waits so you can evaluate the actual SQL causing 
the problems in the PG log files.  Thinking ahead, you may want to 
consider if using advisory locks from the application side of things 
might be helpful to manage locks in a more pessimistic way.  Also, join 
with pg_locks table to find out the specific resources that are in 
contention.
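
For the heavyweight-lock side, a hedged sketch of that join (built-in views
only; pg_blocking_pids() exists since 9.6, and LWLock waits do not appear in
pg_locks):

-- Sessions waiting on a regular (heavyweight) lock, the resource they
-- want, and the PIDs currently blocking them.
SELECT a.pid,
       a.query,
       l.locktype,
       l.relation::regclass AS relation,
       l.mode,
       pg_blocking_pids(a.pid) AS blocked_by
FROM pg_stat_activity a
JOIN pg_locks l ON l.pid = a.pid AND NOT l.granted
WHERE a.wait_event_type = 'Lock';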


Regards,
Michael Vitale




Re: Lock contention high

2021-10-13 Thread Justin Pryzby
On Tue, Oct 12, 2021 at 01:05:12PM +0530, Ashkil Dighin wrote:
> Hi,
> Lock contention observed high in PostgreSQLv13.3
> The source code compiled with GNC(GCCv11.x)
> PostgreSQL version: 13.3
> Operating system:   RHEL8.3
> Kernel name:4.18.0-305.10.2.el8_4.x86_64
> RAM Size:512GB
> SSD: 1TB
> The environment used IBM metal and test benchmark environment HammerDbv4.2
> Test case :TPC-C
> 
> Perf data for 24vu(TPC-C)
> 
> 
>   18.99%  postgres  postgres[.] LWLockAcquire
>  7.09%  postgres  postgres[.] _bt_compare
>  8.66%  postgres  postgres[.] LWLockRelease
...
> 1.Is there a way to tune the lock contention ?
> 2.Is any recommendations to tune/reduce the lock contention via postgres.conf

I think you'd want to find *which* LW locks are being waited on, to see if it's
something that can be easily tuned.

You can check pg_stat_activity, or maybe create a cronjob to record its content
for later analysis.
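
A minimal sketch of such a sampling query (plain pg_stat_activity, nothing
assumed beyond the wait_event columns available since 9.6):

-- One snapshot of what active backends are waiting on; run it
-- periodically (e.g. from a cron job) and store the rows for later analysis.
SELECT now()            AS sample_time,
       wait_event_type,
       wait_event,
       count(*)         AS backends
FROM pg_stat_activity
WHERE state = 'active'
  AND wait_event IS NOT NULL
GROUP BY 1, 2, 3
ORDER BY backends DESC;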

-- 
Justin




Re: Lock contention high

2021-10-12 Thread Mladen Gogala


On 10/12/21 03:35, Ashkil Dighin wrote:

1.Is there a way to tune the lock contention ?


Lock contention is usually an application issue. Application processes 
are stepping on each other's toes. I have never seen a situation where 
the database would be slow with managing locks. Postgres uses an 
in-memory queue manager which is, generally speaking, very fast. 
Applications usually do stupid things. I've seen GUI doing "SELECT FOR 
UPDATE". And then the operator decided to have lunch. I'll leave the 
rest to your imagination.



--
Mladen Gogala
Database Consultant
Tel: (347) 321-1217
https://dbwhisperer.wordpress.com


Re: Lock contention high

2021-10-12 Thread Laurenz Albe
On Tue, 2021-10-12 at 13:05 +0530, Ashkil Dighin wrote:
> Perf data for 24vu(TPC-C)
> 
> 
>       18.99%  postgres  postgres            [.] LWLockAcquire
>      7.09%  postgres  postgres            [.] _bt_compare
>      8.66%  postgres  postgres            [.] LWLockRelease
>      2.28%  postgres  postgres            [.] GetSnapshotData
>      2.25%  postgres  postgres            [.] hash_search_with_hash_value
>      2.11%  postgres  postgres            [.] XLogInsertRecord
>      1.98%  postgres  postgres            [.] PinBuffer
> 
> 1.Is there a way to tune the lock contention ?

How many concurrent sessions are you running?

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: Lock contention high

2021-10-12 Thread Mikhail Zhilin

Hi,

How many sockets are on motherboard?
What is CPU model and interconnect type (UPI?)?
Can you share output of "lscpu"?

If you have more than 1 NUMA node it may be worth to run PostgreSQL in 
single NUMA node via taskset. It will eliminate access to remote memory 
and speed up processing.


Thanks,
 Michael.

On 10/12/21 10:35 AM, Ashkil Dighin wrote:


Hi,
Lock contention observed high in PostgreSQLv13.3
The source code compiled with GNC(GCCv11.x)
PostgreSQL version: 13.3
Operating system:   RHEL8.3
Kernel name:4.18.0-305.10.2.el8_4.x86_64
RAM Size:512GB
SSD: 1TB
The environment used IBM metal and test benchmark environment HammerDbv4.2
Test case :TPC-C

Perf data for 24vu(TPC-C)


      18.99%  postgres  postgres            [.] LWLockAcquire
     7.09%  postgres  postgres            [.] _bt_compare
     8.66%  postgres  postgres            [.] LWLockRelease
     2.28%  postgres  postgres            [.] GetSnapshotData
     2.25%  postgres  postgres            [.] hash_search_with_hash_value
     2.11%  postgres  postgres            [.] XLogInsertRecord
     1.98%  postgres  postgres            [.] PinBuffer

1.Is there a way to tune the lock contention ?
2.Is any recommendations to tune/reduce the lock contention via 
postgres.conf


Postgres.conf used  in Baremetal

shared_buffers = 128GB(1/4 th RAM size)
effective_cachesize=392 GB(1/3 or 75% of RAM size)
huge_pages = on
temp_buffers = 4000MB
work_mem = 4000MB
maintenance_work_mem = 512MB
autovacuum_work_mem = -1
max_stack_depth = 7MB
dynamic_shared_memory_type = posix
max_files_per_process = 4000
effective_io_concurrency = 32
wal_level = minimal
synchronous_commit = off
wal_buffers = 512MB
checkpoint_timeout = 1h
checkpoint_completion_target = 1
checkpoint_warning = 0
log_min_messages = error
log_min_error_statement = error
log_timezone = 'GB'
autovacuum = off
datestyle = 'iso, dmy'
timezone = 'GB'
lc_messages = 'en_GB.UTF-8'
lc_monetary = 'en_GB.UTF-8'
lc_numeric = 'en_GB.UTF-8'
lc_time = 'en_GB.UTF-8'
default_text_search_config = 'pg_catalog.english'
max_locks_per_transaction = 64
max_pred_locks_per_transaction = 64

Best Regards
Anil





Lock contention high

2021-10-12 Thread Ashkil Dighin
Hi,
High lock contention observed in PostgreSQL v13.3.
The source code was compiled with GCC v11.x.
PostgreSQL version: 13.3
Operating system:   RHEL 8.3
Kernel name:        4.18.0-305.10.2.el8_4.x86_64
RAM size:           512GB
SSD:                1TB
The environment is IBM bare metal; the benchmark environment is HammerDB v4.2.
Test case:          TPC-C

Perf data for 24vu(TPC-C)


  18.99%  postgres  postgres[.] LWLockAcquire
 7.09%  postgres  postgres[.] _bt_compare
 8.66%  postgres  postgres[.] LWLockRelease
 2.28%  postgres  postgres[.] GetSnapshotData
 2.25%  postgres  postgres[.] hash_search_with_hash_value
 2.11%  postgres  postgres[.] XLogInsertRecord
 1.98%  postgres  postgres[.] PinBuffer

1. Is there a way to tune the lock contention?
2. Are there any recommendations to tune/reduce the lock contention via
postgresql.conf?

postgresql.conf used on bare metal

shared_buffers = 128GB (1/4 of RAM)
effective_cache_size = 392GB (~75% of RAM)
huge_pages = on
temp_buffers = 4000MB
work_mem = 4000MB
maintenance_work_mem = 512MB
autovacuum_work_mem = -1
max_stack_depth = 7MB
dynamic_shared_memory_type = posix
max_files_per_process = 4000
effective_io_concurrency = 32
wal_level = minimal
synchronous_commit = off
wal_buffers = 512MB
checkpoint_timeout = 1h
checkpoint_completion_target = 1
checkpoint_warning = 0
log_min_messages = error
log_min_error_statement = error
log_timezone = 'GB'
autovacuum = off
datestyle = 'iso, dmy'
timezone = 'GB'
lc_messages = 'en_GB.UTF-8'
lc_monetary = 'en_GB.UTF-8'
lc_numeric = 'en_GB.UTF-8'
lc_time = 'en_GB.UTF-8'
default_text_search_config = 'pg_catalog.english'
max_locks_per_transaction = 64
max_pred_locks_per_transaction = 64

Best Regards
Anil