Re: [HACKERS] Patches I'm thinking of pushing shortly

2017-08-12 Thread Thomas Munro
On Sat, Aug 12, 2017 at 3:24 AM, Tom Lane  wrote:
> I have some patches sitting around in my workspace that I think are
> non-controversial, and so I was considering just pushing them once
> the tree opens for v11 development.  If anyone thinks they need
> further review, I'll put them into the September commitfest, but
> otherwise we might as well skip the overhead.  These are:
>
> 1. check-hash-bucket-size-against-work_mem-2.patch from
> https://www.postgresql.org/message-id/13698.1487283...@sss.pgh.pa.us
>
> That discussion sort of trailed off, but there wasn't really anyone
> saying not to commit it, and no new ideas have surfaced.

+1

I'd vote for including this in v10.  There doesn't seem to be any
downside to this: it's a no-brainer to avoid our exploding hash table
case when we can see it coming.

-- 
Thomas Munro
http://www.enterprisedb.com




Re: [HACKERS] POC: Sharing record typmods between backends

2017-08-12 Thread Tom Lane
Robert Haas  writes:
> On Sat, Aug 12, 2017 at 11:30 PM, Andres Freund  wrote:
>> That seems to involve a lot more than this, though, given that
>> currently the stats collector data doesn't all have to be in memory.
>> I've seen sites with a lot of databases carrying quite a lot of
>> per-database stats data. I don't think we can just require all of that
>> to be in memory :(

> Hmm.  I'm not sure it wouldn't end up being *less* memory.  Don't we
> end up caching one copy of it per backend, at least for the database to
> which that backend is connected?  Accessing a shared copy would avoid
> that sort of thing.

Yeah ... the collector itself has got all that in memory anyway.
We do need to think about synchronization issues if we make that
memory globally available, but I find it hard to see how that would
lead to more memory consumption overall than what happens now.

regards, tom lane




Re: [HACKERS] POC: Sharing record typmods between backends

2017-08-12 Thread Robert Haas
On Sat, Aug 12, 2017 at 11:30 PM, Andres Freund  wrote:
> That seems to involve a lot more than this, though, given that
> currently the stats collector data doesn't all have to be in memory.
> I've seen sites with a lot of databases carrying quite a lot of
> per-database stats data. I don't think we can just require all of that
> to be in memory :(

Hmm.  I'm not sure it wouldn't end up being *less* memory.  Don't we
end up caching one copy of it per backend, at least for the database to
which that backend is connected?  Accessing a shared copy would avoid
that sort of thing.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] POC: Sharing record typmods between backends

2017-08-12 Thread Andres Freund
On 2017-08-12 22:52:57 -0400, Robert Haas wrote:
> On Fri, Aug 11, 2017 at 9:55 PM, Andres Freund  wrote:
> > Well, most of the potential use cases for dsmhash I've heard about so
> > far don't actually benefit much from incremental growth. In nearly all
> > the implementations I've seen, incremental moving ends up requiring
> > more total cycles than doing it all at once, and for parallelism-type
> > use cases the stall isn't really an issue.  So yes, I think this is
> > something worth considering.  If we were to actually use DHT for
> > shared caches or such, this would be different, but that seems darned
> > far off.
> 
> I think it'd be pretty interesting to look at replacing parts of the
> stats collector machinery with something DHT-based.

That seems to involve a lot more than this, though, given that
currently the stats collector data doesn't all have to be in memory.
I've seen sites with a lot of databases carrying quite a lot of
per-database stats data. I don't think we can just require all of that
to be in memory :(

- Andres




Re: [HACKERS] Comment in snapbuild.c file

2017-08-12 Thread Alvaro Herrera
Alvaro Herrera wrote:
> Masahiko Sawada wrote:
> > Hi all,
> > 
> > In snapbuild.c file, there is a comment as follows.
> > 
> >* NB: Because of that xmax can be lower than xmin, because we only
> >* increase xmax when a catalog modifying transaction commits. While odd
> >* looking, it's correct and actually more efficient this way since we hit
> >* fast paths in tqual.c.
> >*/
> 
> I think the whole para needs to be rethought.

Pushed, thanks.

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] POC: Sharing record typmods between backends

2017-08-12 Thread Robert Haas
On Fri, Aug 11, 2017 at 9:55 PM, Andres Freund  wrote:
> Well, most of the potential use cases for dsmhash I've heard about so
> far don't actually benefit much from incremental growth. In nearly all
> the implementations I've seen, incremental moving ends up requiring
> more total cycles than doing it all at once, and for parallelism-type
> use cases the stall isn't really an issue.  So yes, I think this is
> something worth considering.  If we were to actually use DHT for
> shared caches or such, this would be different, but that seems darned
> far off.

I think it'd be pretty interesting to look at replacing parts of the
stats collector machinery with something DHT-based.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] More race conditions in logical replication

2017-08-12 Thread Michael Paquier
On Sun, Aug 13, 2017 at 10:25 AM, Noah Misch  wrote:
> Committed.  These counts broke three times in the v10 release cycle.  It's too
> bad this defect doesn't cause an error when building the docs.

That's another argument for generating the table dynamically. Thanks
for the commit.
-- 
Michael




Re: [HACKERS] Timing-sensitive case in src/test/recovery TAP tests

2017-08-12 Thread Michael Paquier
On Sun, Aug 13, 2017 at 1:09 AM, Tom Lane  wrote:
> Michael Paquier  writes:
>> On Wed, Aug 9, 2017 at 3:39 PM, Tom Lane  wrote:
>>> Michael Paquier  writes:
>>>> Let's do that please. Merging both was my first feeling when
>>>> refactoring this test upthread. Should I send a patch?
>
>>> Sure, have at it.
>
>> And here you go.
>
> Pushed with a bit of work on the comments.

Thanks.
-- 
Michael




Re: [HACKERS] More race conditions in logical replication

2017-08-12 Thread Noah Misch
On Sat, Aug 12, 2017 at 05:38:29PM +0900, Michael Paquier wrote:
> On Thu, Aug 10, 2017 at 4:22 PM, Michael Paquier
>  wrote:
> > On Tue, Aug 8, 2017 at 8:11 PM, Alvaro Herrera  
> > wrote:
> >> Here's a patch.  It turned out to be a bit larger than I initially expected.
> >
> > Álvaro, 030273b7 did not get things completely right. A couple of wait
> > events were added in the docs to the Client, IPC and Activity
> > sections, but the commit forgot to update the corresponding morerows
> > counters. Attached is a patch to fix all that.
> 
As the rendered documentation looks weird because of this commit, I am
adding an open item dedicated to that. Look for example at
WalSenderWaitForWAL in
https://www.postgresql.org/docs/devel/static/monitoring-stats.html.
While looking again, I noticed that one line in the LWLock section
also looks weird, so attached is an updated patch.

Committed.  These counts broke three times in the v10 release cycle.  It's too
bad this defect doesn't cause an error when building the docs.




Re: [HACKERS] POC: Sharing record typmods between backends

2017-08-12 Thread Thomas Munro
Thanks for your feedback.  Here are two parts that jumped out at me.
I'll address the other parts in a separate email.

On Sat, Aug 12, 2017 at 1:55 PM, Andres Freund  wrote:
>> This is complicated, and in the category that I would normally want a
>> stack of heavy unit tests for.  If you don't feel like making
>> decisions about this now, perhaps iteration (and incremental resize?)
>> could be removed, leaving only the most primitive get/put hash table
>> facilities -- just enough for this purpose?  Then a later patch could
>> add them back, with a set of really convincing unit tests...
>
> I'm inclined to go for that, yes.

I will make it so.

>> > +/*
>> > + * An entry in SharedRecordTypmodRegistry's attribute index.  The key
>> > + * is the first REC_HASH_KEYS attribute OIDs.  That means that
>> > + * collisions are possible, but that's OK because SerializedTupleDesc
>> > + * objects are arranged into a list.
>> > + */
>> >
>> > +/* Parameters for SharedRecordTypmodRegistry's attributes hash table. */
>> > +const static dht_parameters srtr_atts_index_params = {
>> > +   sizeof(Oid) * REC_HASH_KEYS,
>> > +   sizeof(SRTRAttsIndexEntry),
>> > +   memcmp,
>> > +   tag_hash,
>> > +   LWTRANCHE_SHARED_RECORD_ATTS_INDEX
>> > +};
>> > +
>> > +/* Parameters for SharedRecordTypmodRegistry's typmod hash table. */
>> > +const static dht_parameters srtr_typmod_index_params = {
>> > +   sizeof(uint32),
>> > +   sizeof(SRTRTypmodIndexEntry),
>> > +   memcmp,
>> > +   tag_hash,
>> > +   LWTRANCHE_SHARED_RECORD_TYPMOD_INDEX
>> > +};
>> > +
>> >
>> > I'm very much not a fan of this representation. I know you copied the
>> > logic, but I think it's a bad idea. I think the key should just be a
>> > dsa_pointer, and then we can have a proper tag_hash that hashes the
>> > whole thing, and a proper comparator too.  Just have
>> >
>> > /*
>> >  * Combine two hash values, resulting in another hash value, with
>> >  * decent bit mixing.
>> >  *
>> >  * Similar to boost's hash_combine().
>> >  */
>> > static inline uint32
>> > hash_combine(uint32 a, uint32 b)
>> > {
>> > a ^= b + 0x9e3779b9 + (a << 6) + (a >> 2);
>> > return a;
>> > }
>> >
>> > and then hash everything.
>>
>> Hmm.  I'm not sure I understand.  I know what hash_combine is for but
>> what do you mean when you say the key should just be a dsa_pointer?
>
>> What's wrong with providing the key size, whole entry size, compare
>> function and hash function like this?
>
> Well, right now the key is "sizeof(Oid) * REC_HASH_KEYS", which imo is
> fairly ugly, both because it wastes space for narrow cases and because
> it leads to conflicts for wide ones. By having a dsa_pointer as a key
> and custom hash/compare functions there's no need for that: you can
> just compute the hash based on all keys, and compare based on all keys.

Ah, that.  Yeah, it is ugly, both in the pre-existing code and in my
patch.  Stepping back from this a bit more, the true key here is not an
array of Oids at all (whether fixed-size or variable).  It's actually
a whole TupleDesc, because this is really a TupleDesc intern pool:
given a TupleDesc, please give me the canonical TupleDesc equal to
this one.  You might call it a hash set rather than a hash table
(key->value associative).

Ideally, we'd get rid of the ugly REC_HASH_KEYS-sized key and the ugly
extra conflict chain, and tupdesc.c would have a hashTupleDesc()
function that is compatible with equalTupleDescs().  Then the hash
table entry would simply be a TupleDesc (that is, a pointer).
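
To make that concrete, here's a rough sketch of what such a
hashTupleDesc() could look like, reusing the hash_combine() quoted
above.  The choice of fields to mix in is illustrative only; whatever
equalTupleDescs() compares would need to be covered:

/*
 * Rough sketch only: mix the fields that equalTupleDescs() inspects,
 * using hash_combine().  Field selection here is illustrative, not
 * final code.
 */
static uint32
hashTupleDesc(TupleDesc desc)
{
	uint32		h = 0;
	int			i;

	h = hash_combine(h, (uint32) desc->natts);
	h = hash_combine(h, (uint32) desc->tdtypeid);

	/* Mix in the type OID of every attribute. */
	for (i = 0; i < desc->natts; i++)
		h = hash_combine(h, (uint32) desc->attrs[i]->atttypid);

	return h;
}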

There is an extra complication when we use DSA memory though: if you
have a hash table (set) full of dsa_pointers to struct tupleDesc but
want to be able to search it given a TupleDesc (= a backend-local
pointer), then you have to do some extra work.  I think that work is:
the hash table entries should be a small struct that has a union {
dsa_pointer, TupleDesc } and a discriminator field to say which it is,
and the hash and equality functions should be wrappers that follow the
dsa_pointer if needed and then forward to hashTupleDesc() (a function
that does hash_combine() over the Oids) and equalTupleDescs().

(That complication might not exist if tupleDesc were fixed-size and
could be stored directly in the hash table entry, but in the process
of flattening it (= holding the attributes in it) I made it variable
size, so we have to use a pointer to it in the hash table, since both
DynaHash and DHT work with fixed-size entries.)
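
In code form, that might look something like this.  A rough sketch
only: the names are invented, and the actual callback signatures DHT
wants would differ:

/* Hash table entry: either a DSA pointer or a backend-local pointer. */
typedef struct TupleDescEntry
{
	bool		shared;			/* which union member is valid? */
	union
	{
		dsa_pointer ptr;		/* canonical TupleDesc, in DSA memory */
		TupleDesc	local;		/* backend-local TupleDesc, for probing */
	}			u;
} TupleDescEntry;

/* Resolve an entry to a pointer we can actually dereference. */
static TupleDesc
entry_tupledesc(dsa_area *area, const TupleDescEntry *entry)
{
	if (entry->shared)
		return (TupleDesc) dsa_get_address(area, entry->u.ptr);
	return entry->u.local;
}

/* Hash wrapper: follow the dsa_pointer if needed, then hash the keys. */
static uint32
tupledesc_entry_hash(dsa_area *area, const TupleDescEntry *entry)
{
	return hashTupleDesc(entry_tupledesc(area, entry));
}

/* Equality wrapper: likewise forwards to equalTupleDescs(). */
static bool
tupledesc_entry_equal(dsa_area *area,
					  const TupleDescEntry *a,
					  const TupleDescEntry *b)
{
	return equalTupleDescs(entry_tupledesc(area, a),
						   entry_tupledesc(area, b));
}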

Thoughts?

-- 
Thomas Munro
http://www.enterprisedb.com




Re: [HACKERS] Timing-sensitive case in src/test/recovery TAP tests

2017-08-12 Thread Tom Lane
Michael Paquier  writes:
> On Wed, Aug 9, 2017 at 3:39 PM, Tom Lane  wrote:
>> Michael Paquier  writes:
>>> Let's do that please. Merging both was my first feeling when
>>> refactoring this test upthread. Should I send a patch?

>> Sure, have at it.

> And here you go.

Pushed with a bit of work on the comments.

regards, tom lane




Re: [HACKERS] pg_stat_statements query normalization, and the 'in' operator

2017-08-12 Thread Tom Lane
unixway.dr...@gmail.com writes:
> Given the following list of queries:

>    create table foo (id serial, bar integer);
>    select * from foo where id in (1);
>    select * from foo where id in (2,3);
>    select * from foo where id in (1,3,5);
>    select * from foo where id in (select id from foo);

> would it be possible to have the first three select queries normalized
> into a single one, so that 'select query from pg_stat_statements'
> returns something like:

>    select * from foo where id in (...);
>    select * from foo where id in (select id from foo);

Wouldn't recommend holding your breath for that.  But you could do the
same conversion on the client side that the parser would do anyway:

select * from foo where id = any ('{1,3,5}'::integer[]);
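
For instance, a libpq client could keep the statement text constant no
matter how long the list is.  A minimal sketch, with error handling
omitted and the connection string assumed:

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
	PGconn	   *conn = PQconnectdb("dbname=test");
	const char *ids = "{1,3,5}";	/* array literal built by the client */
	const char *values[1] = {ids};
	PGresult   *res;

	/* One parameter of array type; the query text never changes. */
	res = PQexecParams(conn,
					   "select * from foo where id = any($1::integer[])",
					   1,		/* number of parameters */
					   NULL,	/* let the server infer parameter types */
					   values,
					   NULL,	/* text-format params need no lengths */
					   NULL,	/* all parameters in text format */
					   0);		/* text-format result */

	printf("%d rows\n", PQntuples(res));
	PQclear(res);
	PQfinish(conn);
	return 0;
}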

regards, tom lane




Re: [HACKERS] why not parallel seq scan for slow functions

2017-08-12 Thread Amit Kapila
On Thu, Aug 10, 2017 at 1:07 AM, Robert Haas  wrote:
> On Tue, Aug 8, 2017 at 3:50 AM, Amit Kapila  wrote:
>> Right.
>>
>> I see two ways to include the cost of the target list for parallel
>> paths before rejecting them (a) Don't reject parallel paths
>> (Gather/GatherMerge) during add_path.  This has the danger of path
>> explosion. (b)  In the case of parallel paths, somehow try to identify
>> that path has a costly target list (maybe just check if the target
>> list has anything other than vars) and use it as a heuristic to decide
>> that whether a parallel path can be retained.
>
> I think the right approach to this problem is to get the cost of the
> GatherPath correct when it's initially created.  The proposed patch
> changes the cost after-the-fact, but that (1) doesn't prevent a
> promising path from being rejected before we reach this point and (2)
> is probably unsafe, because it might confuse code that reaches the
> modified-in-place path through some other pointer (e.g. code which
> expects the RelOptInfo's paths to still be sorted by cost).  Perhaps
> the way to do that is to skip generate_gather_paths() for the toplevel
> scan/join node and do something similar later, after we know what
> target list we want.
>

I think skipping the generation of gather paths for a scan node or a
top-level join node generated via standard_join_search seems
straightforward, but skipping it for paths generated via geqo seems
tricky (see the use of generate_gather_paths in merge_clump).
Assuming we find some way to skip it for the top-level scan/join node,
I don't think that will be sufficient: we have a special way to push
the target list below the Gather node in apply_projection_to_path, and
we would need to move that part into generate_gather_paths as well.


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




[HACKERS] Regressions failures with libxml2 on ArchLinux

2017-08-12 Thread Michael Paquier
Hi all,

It's been a month since I last did any serious development on
ArchLinux (I was abroad and away from the laptop dedicated to that),
and, surprise, I can now see failures in the PG regression tests, like
the following short extract (result compared to expected/xml.out):
  SELECT xmlparse(document '   ');
  ERROR:  invalid XML document
! DETAIL:  line 1: Start tag expected, '<' not found
 
  SELECT xmlparse(document '   ');
  ERROR:  invalid XML document
! DETAIL:  line 1: switching encoding : no input
!
! ^
! line 1: Start tag expected, '<' not found

Attached is the full diff of the thing.

On ArchLinux the following package is being used, which is rather up
to date with upstream:
https://www.archlinux.org/packages/extra/x86_64/libxml2/
The last update of this package is dated the 9th of July, which
corresponds more or less to the moment I stopped being able to touch
my Linux laptop. So I would tend to think that something got broken
upstream with this package update, since the last PostgreSQL commit
close enough to this code has been 0de791ed, and I have run the
regression tests on my Linux laptop as well to confirm.

I notice as well that no buildfarm machines are running ArchLinux at
the moment, which is one reason why this went undetected until now. My
own machines running ArchLinux ARM have been unplugged for a full
month, and I can see the failure there too. They are not able to
report back to the buildfarm, but that's a separate issue, apparently
an access permission problem.

I have not investigated this problem much yet, but has somebody else
seen these diffs?

Thanks,
-- 
Michael


regression.diffs
Description: Binary data



[HACKERS] Re: [COMMITTERS] pgsql: Don't force-assign transaction id when exporting a snapshot.

2017-08-12 Thread Petr Jelinek
On 14/06/17 20:57, Andres Freund wrote:
> Don't force-assign transaction id when exporting a snapshot.
> 
> Previously we required every exported transaction to have an xid
> assigned. That was used to check that the exporting transaction is
> still running, which in turn is needed to guarantee that the
> necessary rows haven't been removed between exporting and importing
> the snapshot.
> 
> The exported xid caused unnecessary problems with logical decoding,
> because slot creation has to wait for all concurrent xids to finish,
> which in turn serializes concurrent slot creation.  It also
> prevented snapshots from being exported on hot-standby replicas.
> 
> Instead export the virtual transaction id, which avoids the unnecessary
> serialization and the inability to export snapshots on standbys. This
> changes the file name of the exported snapshot, but since we never
> documented what that name means, that seems OK.
> 

This commit has the side effect of making it possible to export
snapshots on standbys, which in turn makes it possible to run
pg_dump -j on a standby with a consistent snapshot. Here is a one-line
patch (plus a doc update) which allows doing that when pg_dumping from
PG10+.

I also wonder if this should be mentioned in the release notes. If the
attached patch makes it into PG10, it would be a no-brainer to mention
it as a feature under the pg_dump section, but I am not sure about
mentioning exported snapshots on standbys alone.

-- 
  Petr Jelinek  http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services
From e2bca9b5271dbf29e5ab68d99c409c4f7dd752db Mon Sep 17 00:00:00 2001
From: Petr Jelinek 
Date: Sat, 12 Aug 2017 13:16:48 +0200
Subject: [PATCH] Allow pg_dump -j on standby when dumping from PG10+

PG10 adds support for exporting snapshots on standbys; allow pg_dump to
use this feature when a consistent parallel dump has been requested.
---
 doc/src/sgml/ref/pg_dump.sgml | 18 ++
 src/bin/pg_dump/pg_dump.c |  2 +-
 2 files changed, 11 insertions(+), 9 deletions(-)

diff --git a/doc/src/sgml/ref/pg_dump.sgml b/doc/src/sgml/ref/pg_dump.sgml
index bafa031..3c088f3 100644
--- a/doc/src/sgml/ref/pg_dump.sgml
+++ b/doc/src/sgml/ref/pg_dump.sgml
@@ -337,14 +337,16 @@ PostgreSQL documentation
 but to abort the dump.


-For a consistent backup, the database server needs to support synchronized snapshots,
-a feature that was introduced in PostgreSQL 9.2. With this
-feature, database clients can ensure they see the same data set even though they use
-different connections. pg_dump -j uses multiple database
-connections; it connects to the database once with the master process and
-once again for each worker job. Without the synchronized snapshot feature, the
-different worker jobs wouldn't be guaranteed to see the same data in each connection,
-which could lead to an inconsistent backup.
+For a consistent backup, the database server needs to support
+synchronized snapshots, a feature that was introduced in
+PostgreSQL 9.2 for primary servers and 10
+for standbys. With this feature, database clients can ensure they see
+the same data set even though they use different connections.
+pg_dump -j uses multiple database connections; it
+connects to the database once with the master process and once again
+for each worker job. Without the synchronized snapshot feature, the
+different worker jobs wouldn't be guaranteed to see the same data in
+each connection, which could lead to an inconsistent backup.


 If you want to run a parallel dump of a pre-9.2 server, you need to make sure that the
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 37cb7cd..8ad9773 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1131,7 +1131,7 @@ setup_connection(Archive *AH, const char *dumpencoding,
 			 AH->remoteVersion >= 90200 &&
 			 !dopt->no_synchronized_snapshots)
 	{
-		if (AH->isStandby)
+		if (AH->isStandby && AH->remoteVersion < 100000)
 			exit_horribly(NULL,
 		  "Synchronized snapshots are not supported on standby servers.\n"
 		  "Run with --no-synchronized-snapshots instead if you do not need\n"
-- 
2.7.4




Re: [HACKERS] More race conditions in logical replication

2017-08-12 Thread Michael Paquier
On Thu, Aug 10, 2017 at 4:22 PM, Michael Paquier
 wrote:
> On Tue, Aug 8, 2017 at 8:11 PM, Alvaro Herrera  
> wrote:
>> Here's a patch.  It turned out to be a bit larger than I initially expected.
>
> Álvaro, 030273b7 did not get things completely right. A couple of wait
> events were added in the docs to the Client, IPC and Activity
> sections, but the commit forgot to update the corresponding morerows
> counters. Attached is a patch to fix all that.

As the rendered documentation looks weird because of this commit, I am
adding an open item dedicated to that. Look for example at
WalSenderWaitForWAL in
https://www.postgresql.org/docs/devel/static/monitoring-stats.html.
While looking again, I noticed that one line in the LWLock section
also looks weird, so attached is an updated patch.

Thanks,
-- 
Michael
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 12d5628266..5575c2c837 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -845,7 +845,7 @@ postgres   27093  0.0  0.0  30096  2752 ?Ss   11:34   0:00 postgres: ser
 
   

-LWLock
+LWLock
 ShmemIndexLock
 Waiting to find or allocate space in shared memory.

@@ -1155,7 +1155,7 @@ postgres   27093  0.0  0.0  30096  2752 ?Ss   11:34   0:00 postgres: ser
  Waiting to acquire a pin on a buffer.
 
 
- Activity
+ Activity
  ArchiverMain
  Waiting in main loop of the archiver process.
 
@@ -1212,7 +1212,7 @@ postgres   27093  0.0  0.0  30096  2752 ?Ss   11:34   0:00 postgres: ser
  Waiting in main loop of WAL writer process.
 
 
- Client
+ Client
  ClientRead
  Waiting to read data from the client.
 
@@ -1250,7 +1250,7 @@ postgres   27093  0.0  0.0  30096  2752 ?Ss   11:34   0:00 postgres: ser
  Waiting in an extension.
 
 
- IPC
+ IPC
  BgWorkerShutdown
  Waiting for background worker to shut down.
 
