Re: Connection hangs on newly created schema

2018-03-27 Thread Laurenz Albe
Juan Manuel Cuello wrote:
> I have a postgresql database with around 4000 schemas. Each schema has around
> the same 65 tables. There are 8 processes connected to the database (it is a 
> web app).
> Each process has only one direct connection to the database (no connection 
> pool).
> 
> Everything works ok until a new schema (with all its tables, indices, etc)
> is added to the database. When this happens, the next query made to the 
> database
takes too long, sometimes more than a minute or two. Also I see a high DB CPU 
> usage
> during that lapse. Then everything continues working ok as usual and CPU 
> drops to normal levels.
> 
> I'm mostly sure this has to do with the amount of schemas, maybe related to 
> relcache,
> but not sure why creating new schemas causes all this and where to look to 
> solve this problem.

If you know which query is slow, run EXPLAIN (ANALYZE, BUFFERS) on it and see
where the time is spent.
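
For example, a minimal sketch (the schema, table and WHERE clause are
placeholders, not from the original report):

EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM some_schema.some_table WHERE id = 42;

The "Buffers:" lines in the output show how many blocks each plan node found in
cache versus read from disk, which helps tell I/O apart from CPU-bound time.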

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com



Re: Issue with PostgreSQL replication and PITR

2018-03-27 Thread Laurenz Albe
Amee Sankhesara - Quipment India wrote:
> We have set up replication and PITR on this PostgreSQL server. The WAL files
> for replication and PITR are stored on another server that is shared with the
> primary and secondary server.
>  
> For an OS migration of the PostgreSQL database servers we created two new
> replication servers and promoted one to be the new master.
>  
> During that activity a .partial file was created on the master server, and
> after that WAL files are no longer copied to the shared folder from which the
> secondary server takes and restores them. All WAL files after the .partial
> file are stored in the pg_xlog folder of the master's data directory, so the
> size of the master's data directory keeps increasing.
>  
> WAL files are not being copied to the shared location of the master and
> secondary server, but replication is still running and the data on the
> secondary server is up to date.
>  
> In the logs we get only the warning below:
> Warning : The failed archive command was:
>   copy "D:\PostgreSQL\9.6\data\pg_xlog\00010A8800F8.partial"
>   "\\10.0.0.35\Archive_Replication\00010A8800F8.partial"
>   | copy "D:\PostgreSQL\9.6\data\pg_xlog\00010A8800F8.partial"
>   "\\10.0.0.35\Archive_PITR\00010A8800F8.partial"
>  
> We can’t figure out why WAL files are not copied to the shared folder while
> the secondary server still runs in sync with the master server.
>  
> What could be reason behind this?

If archiving is not working and the standby can still replicate, I would assume
that streaming replication is configured.

Is "primary_conninfo" defined in "recovery.conf"?
Are there WAL sender and WAL receiver processes?
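
For example, on the primary this should show one row per connected standby
(column names as of 9.6):

SELECT pid, state, client_addr, sent_location, replay_location
FROM pg_stat_replication;

A row with state 'streaming' means a WAL sender is feeding the standby
directly, which would explain why the standby stays current even though
archiving fails.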
 
> And how can we remove this .partial file from pg_xlog safely, and how can we
> start copying WAL files from the master's pg_xlog to the shared location of
> the master and secondary server?

You should try to figure out why your "archive_command" fails; perhaps
try it by hand.  Once you have got that fixed, archiving will continue normally.
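
For example, you could run each stage of the failed command from the log by
hand in cmd.exe on the master, as the same user the PostgreSQL service runs as:

copy "D:\PostgreSQL\9.6\data\pg_xlog\00010A8800F8.partial" "\\10.0.0.35\Archive_Replication\00010A8800F8.partial"
copy "D:\PostgreSQL\9.6\data\pg_xlog\00010A8800F8.partial" "\\10.0.0.35\Archive_PITR\00010A8800F8.partial"

Whatever error the copy prints (permissions, unreachable share, etc.) is what
"archive_command" has been running into.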

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com



Re: Hello all, I wanted to install any postgresql database for my development

2018-03-27 Thread Thomas Kellerer
sanjeev kumar wrote on 28.03.2018 at 08:01:
> Please suggest a version of PostgreSQL for my development 
> testing and for improving my skills.

If you are starting with Postgres, the best choice is to use the current 
version, which is 10.

If you are using Linux, please follow the instructions here: 
https://www.postgresql.org/download/ to add the Postgres repository to your 
Linux system, so that you can use your package manager to install it. 

If you are using Windows, you can download an installer from here: 
https://www.enterprisedb.com/downloads/postgres-postgresql-downloads

Thomas



Hello all, I wanted to install any postgresql database for my development

2018-03-27 Thread sanjeev kumar
Hello all,
Please suggest a version of PostgreSQL for my development
testing and for improving my skills.
-- 
Regards,
-Sanjeeva


Re: pg_dump -Fd -j2 on standby in 9.6.6

2018-03-27 Thread Andreas Kretschmer
On 28 March 2018 07:02:30 CEST, Stefan Petrea  wrote:
>Hi,
>
>
>I wonder why synchronized snapshots are not supported on standby
>servers. 
>
>
>
If you want to take backups using -Fd on a standby you should pause the 
replay process. In that case you don't need synchronized snapshots.
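
A minimal sketch on a 9.6 standby (these functions were renamed to pg_wal_* in
version 10, and pausing requires a superuser):

-- before starting: pg_dump -Fd -j2 --no-synchronized-snapshots ...
SELECT pg_xlog_replay_pause();
SELECT pg_is_xlog_replay_paused();   -- should now return true
-- ... run the dump, then:
SELECT pg_xlog_replay_resume();

With replay paused the standby's visible state stops changing, so all pg_dump
worker connections see the same data even without a synchronized snapshot.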

Regards, Andreas.


-- 
2ndQuadrant - The PostgreSQL Support Company



RE: [EXTERNAL]pg_dump -Fd -j2 on standby in 9.6.6

2018-03-27 Thread Stefan Petrea
Just wanted to clarify what my question is. I tried this with both pg_dump 
9.6.6 and 9.6.8.
What I'd like is to use pg_dump -Fd -j2 to generate dumps faster. But I get the 
error

pg_dump: Synchronized snapshots are not supported on standby servers.
Run with --no-synchronized-snapshots instead if you do not need
synchronized snapshots.

I could take these dumps on the master instead of the standby but that would 
degrade the performance.
That was part of the reasoning behind having standbys, so we could offload the 
backup cronjobs we have to them.

How would you approach this aspect of generating dumps faster?
Or, can pg_dump -Fd -j be made to work on standbys?

Also, does using --no-synchronized-snapshots mean that the dumps will be 
inconsistent?

Thank you,
Stefan


Stefan Petrea
System Engineer/DBA, Network Engineering


stefan.pet...@tangoe.com 


tangoe.com



This e-mail message, including any attachments, is for the sole use of the 
intended recipient of this message, and may contain information that is 
confidential or legally protected. If you are not the intended recipient or 
have received this message in error, you are not authorized to copy, 
distribute, or otherwise use this message or its attachments. Please notify the 
sender immediately by return e-mail and permanently delete this message and any 
attachments. Tangoe makes no warranty that this e-mail or its attachments are 
error or virus free.

-Original Message-
From: Stefan Petrea [mailto:stefan.pet...@tangoe.com] 
Sent: Wednesday, March 28, 2018 8:03 AM
To: pgsql-gene...@postgresql.org
Subject: [EXTERNAL]pg_dump -Fd -j2 on standby in 9.6.6

Hi,

I downloaded the code from this github tag 
https://github.com/postgres/postgres/archive/REL9_6_8.tar.gz
I unpacked the code on disk where gdb was looking for it 
/build/postgresql-9.6-9VrjLn/postgresql-9.6-9.6.8

Then I ran this one-liner, which is a summary of my debug process. So apparently 
pg_dump -Fd -j2 does not work for pre-9.2 servers, and it also doesn't work 
for standby servers.

gdb /usr/lib/postgresql/9.6/bin/pg_dump -ex 'set confirm off' -ex 'set listsize 
30' -ex 'set pagination off' -ex 'b 1131' -ex 'r -Fd -j2 -d TIMS_SOMEDB -h 
/var/run/postgresql/pg-prod-7 -f /tmp/TIMS_SOMEDB_fd.dmp' -ex 'l' -ex 'p 
AH->numWorkers'  -ex 'p AH->remoteVersion' -ex 'p 
dopt->no_synchronized_snapshots'  -ex 'q'


1131        else if (AH->numWorkers > 1 &&
1116         * worker, we'll be passed dumpsnapshot == NULL, but AH->sync_snapshot_id
1117         * is already set (if the server can handle it) and we should use that.
1118         */
1119        if (dumpsnapshot)
1120            AH->sync_snapshot_id = pg_strdup(dumpsnapshot);
1121
1122        if (AH->sync_snapshot_id)
1123        {
1124            PQExpBuffer query = createPQExpBuffer();
1125
1126            appendPQExpBuffer(query, "SET TRANSACTION SNAPSHOT ");
1127            appendStringLiteralConn(query, AH->sync_snapshot_id, conn);
1128            ExecuteSqlStatement(AH, query->data);
1129            destroyPQExpBuffer(query);
1130        }
1131        else if (AH->numWorkers > 1 &&
1132                 AH->remoteVersion >= 90200 &&
1133                 !dopt->no_synchronized_snapshots)
1134        {
1135            if (AH->isStandby)
1136                exit_horribly(NULL,
1137                              "Synchronized snapshots are not supported on standby servers.\n"
1138                              "Run with --no-synchronized-snapshots instead if you do not need\n"
1139                              "synchronized snapshots.\n");
1140
1141
1142            AH->sync_snapshot_id = get_synchronized_snapshot(AH);
1143        }
1144    }
1145
$1 = 2
$2 = 90606
$3 = 0

I was expecting the code to be the same as the one in [1], but it seems to be 
different (I don't know why). I wonder why synchronized snapshots are not 
supported on standby servers. The code in [1] says they should work for standby 
servers.

My master is 9.6.6 and my slave is 9.6.6 as well.
Any ideas would be appreciated.

Thank you,
Stefan

[1] 
https://github.com/postgres/postgres/blob/REL9_6_STABLE/src/bin/pg_dump/pg_dump.c#L689




Stefan Petrea
System Engineer/DBA, Network Engineering


stefan.pet...@tangoe.com 


pg_dump -Fd -j2 on standby in 9.6.6

2018-03-27 Thread Stefan Petrea
Hi,

I downloaded the code from this github tag 
https://github.com/postgres/postgres/archive/REL9_6_8.tar.gz
I unpacked the code on disk where gdb was looking for it 
/build/postgresql-9.6-9VrjLn/postgresql-9.6-9.6.8

Then I ran this one-liner, which is a summary of my debug process. So apparently 
pg_dump -Fd -j2 does not work for pre-9.2 servers, and it also doesn't work for 
standby servers.

gdb /usr/lib/postgresql/9.6/bin/pg_dump -ex 'set confirm off' -ex 'set listsize 
30' -ex 'set pagination off' -ex 'b 1131' -ex 'r -Fd -j2 -d TIMS_SOMEDB -h 
/var/run/postgresql/pg-prod-7 -f /tmp/TIMS_SOMEDB_fd.dmp' -ex 'l' -ex 'p 
AH->numWorkers'  -ex 'p AH->remoteVersion' -ex 'p 
dopt->no_synchronized_snapshots'  -ex 'q'


1131        else if (AH->numWorkers > 1 &&
1116         * worker, we'll be passed dumpsnapshot == NULL, but AH->sync_snapshot_id
1117         * is already set (if the server can handle it) and we should use that.
1118         */
1119        if (dumpsnapshot)
1120            AH->sync_snapshot_id = pg_strdup(dumpsnapshot);
1121
1122        if (AH->sync_snapshot_id)
1123        {
1124            PQExpBuffer query = createPQExpBuffer();
1125
1126            appendPQExpBuffer(query, "SET TRANSACTION SNAPSHOT ");
1127            appendStringLiteralConn(query, AH->sync_snapshot_id, conn);
1128            ExecuteSqlStatement(AH, query->data);
1129            destroyPQExpBuffer(query);
1130        }
1131        else if (AH->numWorkers > 1 &&
1132                 AH->remoteVersion >= 90200 &&
1133                 !dopt->no_synchronized_snapshots)
1134        {
1135            if (AH->isStandby)
1136                exit_horribly(NULL,
1137                              "Synchronized snapshots are not supported on standby servers.\n"
1138                              "Run with --no-synchronized-snapshots instead if you do not need\n"
1139                              "synchronized snapshots.\n");
1140
1141
1142            AH->sync_snapshot_id = get_synchronized_snapshot(AH);
1143        }
1144    }
1145
$1 = 2
$2 = 90606
$3 = 0

I was expecting the code to be the same as the one in [1], but it seems to be 
different (I don't know why).
I wonder why synchronized snapshots are not supported on standby servers. The 
code in [1] says they should work for standby servers.

My master is 9.6.6 and my slave is 9.6.6 as well.
Any ideas would be appreciated.

Thank you,
Stefan

[1] 
https://github.com/postgres/postgres/blob/REL9_6_STABLE/src/bin/pg_dump/pg_dump.c#L689




Stefan Petrea
System Engineer/DBA, Network Engineering


stefan.pet...@tangoe.com 


tangoe.com



This e-mail message, including any attachments, is for the sole use of the 
intended recipient of this message, and may contain information that is 
confidential or legally protected. If you are not the intended recipient or 
have received this message in error, you are not authorized to copy, 
distribute, or otherwise use this message or its attachments. Please notify the 
sender immediately by return e-mail and permanently delete this message and any 
attachments. Tangoe makes no warranty that this e-mail or its attachments are 
error or virus free.




Re: connection dropped from the backend server

2018-03-27 Thread Adrian Klaver

On 03/27/2018 07:21 PM, armand pirvu wrote:


On Mar 27, 2018, at 6:49 PM, Tom Lane wrote:


Adrian Klaver writes:

On 03/27/2018 04:07 PM, armand pirvu wrote:

As long as the connection stays up yes data gets fine across
In pg_stat_activity I see the node ip address where tail -f piped 
into psql happens



So what does the rest of that record show? In particular for:


I wonder how often data gets put into the pipe.  If it's "not very often",
maybe the connection from psql to the server is timing out due to
inactivity?  This would be the fault of a firewall or something in
between.  You could probably fix it by enabling (more aggressive) TCP
keepalive settings.

regards, tom lane


Well, there is no flow pattern. The flow can be nonexistent for days, 
even weeks, and then it can get super busy


The data flows as expected until the connection gets dropped. 
Bolded from pg_stat_activity (a test I just did)


birstdb=# select datname, pid, client_addr, client_port, backend_start, 
query_start, state from pg_stat_Activity;
  datname |  pid  |  client_addr  | client_port |         backend_start 
         |          query_start          | state

-+---+---+-+---+---+
  birstdb | 10046 |               |          -1 | 2018-03-27 
20:40:11.721804-05 | 2018-03-27 20:47:27.118434-05 | active
* birstdb | 10082 | 192.168.1.187 |       55859 | 2018-03-27 
20:43:55.301216-05 | 2018-03-27 20:46:07.190064-05 | idle*



ps -fu armandp
      UID   PID  PPID   C    STIME TTY         TIME CMD
  armandp  3264  3263   0 20:39:13 pts/2       0:00 tail -f 
/u1/sys_admin/dba/ingres2birst.fifo
  armandp  3265  3263   0 20:39:13 pts/2       0:00 psql -U csidba -d 
birstdb -h 172.16.10.93




Give it about 10 min at most and  bam out it goes

birstdb=# select datname, pid, client_addr, client_port, backend_start, 
query_start, state from pg_stat_Activity;
  datname |  pid  | client_addr | client_port |         backend_start   
       |          query_start          | state

-+---+-+-+---+---+
  birstdb | 10208 |             |          -1 | 2018-03-27 
20:51:25.835382-05 | 2018-03-27 21:08:47.164249-05 | active


Although the above two processes are still out

I think the TCP keepalives might help, but I am also thinking of 
something like a check every minute: if things got into the pipe, dump 
them to Postgres. Something along these lines


Any ideas/suggestions you might have to improve this? I am not saying 


Yeah, dump the named pipe idea and just create the connection for the 
duration of the DML event.


it is perfect, far from it, but I kinda took the model/idea from the 
Nagios named pipe, only that one runs at a defined interval of x 
seconds/minutes




Thank you both
— Armand



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: connection dropped from the backend server

2018-03-27 Thread armand pirvu


Sent from my iPhone

> On Mar 27, 2018, at 9:21 PM, armand pirvu  wrote:
> 
> 
>>> On Mar 27, 2018, at 6:49 PM, Tom Lane  wrote:
>>> 
>>> Adrian Klaver  writes:
 On 03/27/2018 04:07 PM, armand pirvu wrote:
 As long as the connection stays up yes data gets fine across
 In pg_stat_activity I see the node ip address where tail -f piped into 
 psql happens
>> 
>>> So what does the rest of that record show? In particular for:
>> 
>> I wonder how often data gets put into the pipe.  If it's "not very often",
>> maybe the connection from psql to the server is timing out due to
>> inactivity?  This would be the fault of a firewall or something in
>> between.  You could probably fix it by enabling (more aggressive) TCP
>> keepalive settings.
>> 
>>  regards, tom lane
> 
> Well, there is no flow pattern. The flow can be nonexistent for days, even 
> weeks, and then it can get super busy
> 
> The data flows as expected until the connection gets dropped. Bolded 
> from pg_stat_activity (a test I just did)
> 
> birstdb=# select datname, pid, client_addr, client_port, backend_start, 
> query_start, state from pg_stat_Activity;
>  datname |  pid  |  client_addr  | client_port | backend_start
>  |  query_start  | state  
> -+---+---+-+---+---+
>  birstdb | 10046 |   |  -1 | 2018-03-27 
> 20:40:11.721804-05 | 2018-03-27 20:47:27.118434-05 | active
>  birstdb | 10082 | 192.168.1.187 |   55859 | 2018-03-27 
> 20:43:55.301216-05 | 2018-03-27 20:46:07.190064-05 | idle
> 
> 
> ps -fu armandp
>  UID   PID  PPID   CSTIME TTY TIME CMD
>  armandp  3264  3263   0 20:39:13 pts/2   0:00 tail -f 
> /u1/sys_admin/dba/ingres2birst.fifo
>  armandp  3265  3263   0 20:39:13 pts/2   0:00 psql -U csidba -d birstdb 
> -h 172.16.10.93
> 
> 
> 
> Give it about 10 min at most and  bam out it goes
> 
> birstdb=# select datname, pid, client_addr, client_port, backend_start, 
> query_start, state from pg_stat_Activity;
>  datname |  pid  | client_addr | client_port | backend_start 
> |  query_start  | state  
> -+---+-+-+---+---+
>  birstdb | 10208 | |  -1 | 2018-03-27 20:51:25.835382-05 
> | 2018-03-27 21:08:47.164249-05 | active
> 
> Although the above two processes are still out 
> 
> I think the TCP keepalives might help, but I am also thinking of something 
> like a check every minute: if things got into the pipe, dump them to Postgres. 
> Something along these lines
> 
> Any ideas/suggestions you might have to improve this? I am not saying it is 
> perfect, far from it, but I kinda took the model/idea from the Nagios named 
> pipe, only that one runs at a defined interval of x seconds/minutes
> 
> 
> 
> Thank you both
> — Armand


Sorry for the double post, but as a possible solution: why not move the named 
pipe to the Postgres host, and instead of writing into the pipe locally, do it 
over ssh?

What do you think ?


Thank you
-- Armand

Re: connection dropped from the backend server

2018-03-27 Thread armand pirvu

> On Mar 27, 2018, at 6:49 PM, Tom Lane  wrote:
> 
> Adrian Klaver  writes:
>> On 03/27/2018 04:07 PM, armand pirvu wrote:
>>> As long as the connection stays up yes data gets fine across
>>> In pg_stat_activity I see the node ip address where tail -f piped into psql 
>>> happens
> 
>> So what does the rest of that record show? In particular for:
> 
> I wonder how often data gets put into the pipe.  If it's "not very often",
> maybe the connection from psql to the server is timing out due to
> inactivity?  This would be the fault of a firewall or something in
> between.  You could probably fix it by enabling (more aggressive) TCP
> keepalive settings.
> 
>   regards, tom lane

Well, there is no flow pattern. The flow can be nonexistent for days, even weeks, 
and then it can get super busy

The data flows as expected until the connection gets dropped. Bolded from 
pg_stat_activity (a test I just did)

birstdb=# select datname, pid, client_addr, client_port, backend_start, 
query_start, state from pg_stat_Activity;
 datname |  pid  |  client_addr  | client_port | backend_start 
|  query_start  | state  
-+---+---+-+---+---+
 birstdb | 10046 |   |  -1 | 2018-03-27 20:40:11.721804-05 
| 2018-03-27 20:47:27.118434-05 | active
 birstdb | 10082 | 192.168.1.187 |   55859 | 2018-03-27 20:43:55.301216-05 
| 2018-03-27 20:46:07.190064-05 | idle


ps -fu armandp
 UID   PID  PPID   CSTIME TTY TIME CMD
 armandp  3264  3263   0 20:39:13 pts/2   0:00 tail -f 
/u1/sys_admin/dba/ingres2birst.fifo
 armandp  3265  3263   0 20:39:13 pts/2   0:00 psql -U csidba -d birstdb -h 
172.16.10.93



Give it about 10 min at most and  bam out it goes

birstdb=# select datname, pid, client_addr, client_port, backend_start, 
query_start, state from pg_stat_Activity;
 datname |  pid  | client_addr | client_port | backend_start |  
query_start  | state  
-+---+-+-+---+---+
 birstdb | 10208 | |  -1 | 2018-03-27 20:51:25.835382-05 | 
2018-03-27 21:08:47.164249-05 | active

Although the above two processes are still out 

I think the TCP keepalives might help, but I am also thinking of something like 
a check every minute: if things got into the pipe, dump them to Postgres. Something 
along these lines

Any ideas/suggestions you might have to improve this? I am not saying it is 
perfect, far from it, but I kinda took the model/idea from the Nagios named pipe, 
only that one runs at a defined interval of x seconds/minutes



Thank you both
— Armand

Re: connection dropped from the backend server

2018-03-27 Thread Tom Lane
Adrian Klaver  writes:
> On 03/27/2018 04:07 PM, armand pirvu wrote:
>> As long as the connection stays up yes data gets fine across
>> In pg_stat_activity I see the node ip address where tail -f piped into psql 
>> happens

> So what does the rest of that record show? In particular for:

I wonder how often data gets put into the pipe.  If it's "not very often",
maybe the connection from psql to the server is timing out due to
inactivity?  This would be the fault of a firewall or something in
between.  You could probably fix it by enabling (more aggressive) TCP
keepalive settings.
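
For the psql connection in this thread, a minimal sketch would be to pass the
libpq keepalive parameters in the connection string (the values below are
illustrative, not tuned recommendations):

psql "host=172.16.10.93 dbname=birstdb user=csidba keepalives=1 keepalives_idle=60 keepalives_interval=10 keepalives_count=3"

The server-side counterparts are the tcp_keepalives_idle,
tcp_keepalives_interval and tcp_keepalives_count parameters in postgresql.conf.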

regards, tom lane



Re: connection dropped from the backend server

2018-03-27 Thread Adrian Klaver

On 03/27/2018 04:07 PM, armand pirvu wrote:

As long as the connection stays up yes data gets fine across
In pg_stat_activity I see the node ip address where tail -f piped into psql 
happens


So what does the rest of that record show? In particular for:

state
query
backend_start

and any others you might think are important from here:

https://www.postgresql.org/docs/10/static/monitoring-stats.html#PG-STAT-ACTIVITY-VIEW





Sent from my iPhone


On Mar 27, 2018, at 6:03 PM, Adrian Klaver  wrote:


On 03/27/2018 03:36 PM, armand pirvu wrote:
Hi all
I have a process of some data manipulation and ultimate transfer to a postgres 
database
A DML statement gets concocted with the transformed data and pushed into a 
named pipe
The named pipe is tailed -f in the background like this
nohup $SHELL <<EOF
tail -f /u1/sys_admin/dba/mypipe.fifo | psql -U csidba -d repdb -h rephost
EOF

Could it be that pg_stat_activity shows nothing because the DML has completed 
when you look?

Does the data find its way into the database?


Any suggestions how to approach this/make it better/monitor ?
Thanks
-- Armand



--
Adrian Klaver
adrian.kla...@aklaver.com





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: connection dropped from the backend server

2018-03-27 Thread armand pirvu
As long as the connection stays up yes data gets fine across 
In pg_stat_activity I see the node ip address where tail -f piped into psql 
happens



Sent from my iPhone

> On Mar 27, 2018, at 6:03 PM, Adrian Klaver  wrote:
> 
>> On 03/27/2018 03:36 PM, armand pirvu wrote:
>> Hi all
>> I have a process of some data manipulation and ultimate transfer to a 
>> postgres database
>> A DML statement gets concocted with the transformed data and pushed into a 
>> named pipe
>> The named pipe is tailed -f in the background like this
>> nohup $SHELL <<EOF
>> tail -f /u1/sys_admin/dba/mypipe.fifo | psql -U csidba -d repdb -h rephost
>> EOF
>> All good, BUT I do notice that every 10 min or so, although I see the tail and 
>> psql processes in the ps output, looking in pg_stat_activity there is really 
>> nothing from the host where I run the nohuped tail
> 
> Could it be that pg_stat_activity shows nothing because the DML has completed 
> when you look?
> 
> Does the data find its way into the database?
> 
>> Any suggestions how to approach this/make it better/monitor ?
>> Thanks
>> -- Armand
> 
> 
> -- 
> Adrian Klaver
> adrian.kla...@aklaver.com



Re: connection dropped from the backend server

2018-03-27 Thread Adrian Klaver

On 03/27/2018 03:36 PM, armand pirvu wrote:

Hi all


I have a process of some data manipulation and ultimate transfer to a postgres 
database
A DML statement gets concocted with the transformed data and pushed into a 
named pipe
The named pipe is tailed -f in the background like this

nohup $SHELL <<EOF
tail -f /u1/sys_admin/dba/mypipe.fifo | psql -U csidba -d repdb -h rephost
EOF

Could it be that pg_stat_activity shows nothing because the DML has 
completed when you look?


Does the data find its way into the database?




Any suggestions how to approach this/make it better/monitor ?

Thanks
-- Armand






--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Postgres Foreign Data Wrapper and DB2 LUW

2018-03-27 Thread legrand legrand
Hello,
I am not sure that there is a dedicated FDW for DB2 LUW,
but you may try one of
ODBC_FDW (or maybe JDBC_FDW),
as described here:
https://wiki.postgresql.org/wiki/Foreign_data_wrappers
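
A rough sketch of what an ODBC_FDW setup could look like; the CREATE SERVER /
USER MAPPING / FOREIGN TABLE statements are core PostgreSQL, but every OPTIONS
key and the DSN below are illustrative only, so check the odbc_fdw README for
the exact option names your version supports:

CREATE EXTENSION odbc_fdw;

CREATE SERVER db2_luw FOREIGN DATA WRAPPER odbc_fdw
  OPTIONS (dsn 'DB2LUW');   -- hypothetical DSN defined in odbc.ini

CREATE USER MAPPING FOR CURRENT_USER SERVER db2_luw
  OPTIONS ("username" 'db2user', "password" 'secret');   -- illustrative keys

CREATE FOREIGN TABLE db2_customers (id int, name text)
  SERVER db2_luw
  OPTIONS ("schema" 'MYSCHEMA', "table" 'CUSTOMERS');    -- illustrative keys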

Regards
PAscal



--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html



Re: Troubleshooting a segfault and instance crash

2018-03-27 Thread Blair Boadway
As a follow up, we’ve been able to get the same back trace implicating 
pg_hint_plan from 2 separate crashes.  We were using pg_hint_plan 1.2.2--we 
reported the issue to pg_hint_plan github.  We’ve removed pg_hint_plan and it 
looks like the system will no longer segfault under the same conditions.  This 
strongly suggests pg_hint_plan was the root cause of our issue but we can’t yet 
be 100% certain as the issue was always transient.

-Blair


From: Pavel Stehule 
Date: Saturday, March 24, 2018 at 9:18 PM
To: Blair Boadway 
Cc: Peter Geoghegan , "pgsql-gene...@postgresql.org" 

Subject: Re: Troubleshooting a segfault and instance crash



2018-03-25 0:41 GMT+01:00 Blair Boadway <bboad...@abebooks.com>:
Thanks for the tip.  We are using RHEL 6.9 and definitely up to date on glibc 
(2.12-1.209.el6_9.2).  We also have the same versions on a very similar system 
with no segfault.

My colleague got a better backtrace that shows another extension

Core was generated by `postgres: batch_user_account''.
Program terminated with signal 11, Segmentation fault.
#0 0x00386712868a in __strcmp_sse42 () from /lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install 
postgresql96-server-9.6.5-1PGDG.rhel6.x86_64
(gdb) bt
#0 0x00386712868a in __strcmp_sse42 () from /lib64/libc.so.6
#1 0x7fa3f0c7074c in get_query_string (pstate=, 
query=, jumblequery=) at 
pg_hint_plan.c:1882
#2 0x7fa3f0c70a5d in pg_hint_plan_post_parse_analyze (pstate=0x25324b8, 
query=0x25325e8) at pg_hint_plan.c:2875
#3 0x005203bc in parse_analyze ()
#4 0x006df933 in pg_analyze_and_rewrite ()
#5 0x007c6f6b in ?? ()
#6 0x007c6ff0 in CachedPlanGetTargetList ()
#7 0x006e173a in PostgresMain ()
#8 0x006812f5 in PostmasterMain ()
#9 0x00609278 in main ().


We aren’t sure if this indicates that pg_hint_plan is causing the segfault or 
if it happened to be doing something when the segfault occurred.  We aren’t 
actually using pg_hint_plan hints in this system, so we’re not sure how all this 
relates to a segfault when another process does a ‘grant usage on schema abc to 
user xyz;’ unrelated to the account that is segfaulting.

Although you don't use pg_hint_plan explicitly, pg_hint_plan is active: it is 
active via planner callbacks.
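
For example, you can check how it is being loaded; if it shows up in either
list, its hooks run in every backend whether or not any hints are used:

SHOW shared_preload_libraries;
SHOW session_preload_libraries;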


Short of better ideas, we will pull the pg_hint_plan extension and see if that 
removes the problem.

please, try to report this back trace to pg_hint_plan authors.
Regards
Pavel


-Blair





From: Peter Geoghegan <p...@bowt.ie>
Date: Saturday, March 24, 2018 at 4:18 PM
To: Blair Boadway <bboad...@abebooks.com>
Cc: "pgsql-gene...@postgresql.org" <pgsql-gene...@postgresql.org>
Subject: Re: Troubleshooting a segfault and instance crash

On Thu, Mar 8, 2018 at 9:40 AM, Blair Boadway <bboad...@abebooks.com> wrote:
Mar  7 14:46:35 pgprod2 kernel:postgres[29351]: segfault at 0 ip
00302f32868a sp 7ffcf1547498 error 4 in
libc-2.12.so[302f20+18a000]

Mar  7 14:46:35 pgprod2 POSTGRES[21262]: [5] user=,db=,app=client= LOG:
server process (PID 29351) was terminated by signal 11: Segmentation fault

It crashes the database, though it starts again on its own without any
apparent issues.  This has happened 3 times in 2 months and each time the
segfault error and memory address is the same.

We had a recent report of a segfault on a Redhat compatible system,
that seemed like it might originate from within its glibc [1].
Although all the versions there didn't match what you have, it's worth
considering as a possibility.

Maybe you can't install debuginfo packages because you don't yet have
the necessary debuginfo repos set up. Just a guess. That is sometimes
a required extra step.

[1] https://postgr.es/m/7369.1520528...@sss.pgh.pa.us
--
Peter Geoghegan




connection dropped from the backend server

2018-03-27 Thread armand pirvu
Hi all


I have a process of some data manipulation and ultimate transfer to a postgres 
database.
A DML statement gets concocted with the transformed data and pushed into a 
named pipe.
The named pipe is tailed -f in the background like this:

nohup $SHELL <<EOF
tail -f /u1/sys_admin/dba/mypipe.fifo | psql -U csidba -d repdb -h rephost
EOF

Re: ERROR: invalid byte sequence for encoding "UTF8": 0xfc

2018-03-27 Thread armand pirvu

> On Mar 27, 2018, at 3:47 PM, armand pirvu  wrote:
> 
> Folks
> 
> 
> From another system (non postgres) as data gets changed we collect in the 
> form like below into a named pipe which is tailed into psql -U csidba -d 
> repdb -hrephost
> 
> insert into foo (mydata) values ('{"mail_stop": "" , "city_id": "Garching bei 
> M?nchen"}');
> 
> And got a failure 
> ERROR:  invalid byte sequence for encoding "UTF8": 0xfc
> 
> 
> insert into foo (mydata) values ('{"mail_stop": "" , "city_id": "Garching bei 
> M?nchen"}');
> 
> The problem is that converting said system to UTF8 is, nicely put, a huge 
> amount of effort, and we don't have the resources anyway at this point
> 
> 
> This is not a copy, just a tailed named pipe into psql
> 
> Is there any trick/way I can shove this data into Postgres? I really don't 
> care whether it reads "Garching bei M?nchen" or "Garching bei Mfoonchen"
> 
> 
> 
> Thanks
> — Armand
> 
> 

Sorry for double posting but I found that if I change the client_encoding to 
WIN1252 I am able to move forward

"city_id": "Garching bei München”





Congrats to PostgreSQL sponsor OpenSCG

2018-03-27 Thread Joshua D. Drake

Community,

Major PostgreSQL Sponsor OpenSCG has just been acquired by PostgreSQL 
sponsor AWS.


Congrats to both companies!

JD

--
Command Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc
***  A fault and talent of mine is to tell it exactly how it is.  ***
PostgreSQL centered full stack support, consulting and development.
Advocate: @amplifypostgres || Learn: https://postgresconf.org
* Unless otherwise stated, opinions are my own.   *




Re: Postgres Foreign Data Wrapper and DB2 LUW

2018-03-27 Thread ajmcello
I require a $7500 advance.

On Tue, Mar 27, 2018, 11:15 AM Rakesh Kumar  wrote:

> Is there a web page describing how to set up a PG FDW to talk to DB2 LUW
> ver 10.5 ?
>
>
> Thanks.
>
>


Re: Proposition for better performance

2018-03-27 Thread armand pirvu

> On Mar 27, 2018, at 4:25 PM, Paul Jungwirth  
> wrote:
> 
> On 03/27/2018 11:42 AM, hmidi slim wrote:
>> This is the message that I got:
>> Successfully run. Total query runtime: 2 min.
>> 1500 rows affected.
> 
> Sorry, I don't think this is enough information to suggest anything.
> 
> -- 
> Paul  ~{:-)
> p...@illuminatedcomputing.com
> 

I took the case presented and ran it in a sandbox, and 

UPDATE 1500

explain analyze SELECT product_id, start_date, end_date, during
FROM product_availabilities
WHERE during @> daterange('2018-02-01', '2018-04-01')

 Bitmap Heap Scan on product_availabilities  (cost=1156.67..46856.67 rows=75000 width=44) (actual time=5413.792..11367.379 rows=1500 loops=1)
   Recheck Cond: (during @> '[2018-02-01,2018-04-01)'::daterange)
   Heap Blocks: exact=110295
   ->  Bitmap Index Scan on idx_time  (cost=0.00..1137.92 rows=75000 width=0) (actual time=5325.844..5325.844 rows=1500 loops=1)
         Index Cond: (during @> '[2018-02-01,2018-04-01)'::daterange)
 Planning time: 0.145 ms
 Execution time: 14055.666 ms
(7 rows)

But

considering the update (I did not check bloating or anything but still)

vacuum full product_availabilities;
analyze product_availabilities;

The plan changes to

 Seq Scan on product_availabilities  (cost=0.00..242647.91 rows=1533 width=26) (actual time=0.034..7207.697 rows=1500 loops=1)
   Filter: (during @> '[2018-02-01,2018-04-01)'::daterange)
 Planning time: 6.701 ms
 Execution time: 9238.285 ms

And the runtime does get into the two minutes:

time psql -U csidba -d armandp  /dev/null

real    2m39.767s
user    1m45.576s
sys     0m12.324s


Not sure if that confirms the OP’s findings, but to me a first question would be 
whether it is expected that the execution time reported by explain analyze is 
not even close to the actual run time.

BTW, was the postgres version reported? I ran the presented case on 9.5.8


— Armand









Connection hangs on newly created schema

2018-03-27 Thread Juan Manuel Cuello
I have a postgresql database with around 4000 schemas. Each schema has
around the same 65 tables. There are 8 processes connected to the database
(it is a web app). Each process has only one direct connection to the
database (no connection pool).

Everything works ok until a new schema (with all its tables, indices, etc)
is added to the database. When this happens, the next query made to the
database takes too long, sometimes more than a minute or two. Also I see a
high DB CPU usage during that lapse. Then everything continues working ok
as usual and CPU drops to normal levels.

I'm mostly sure this has to do with the amount of schemas, maybe related to
relcache, but not sure why creating new schemas causes all this and where
to look to solve this problem.

I can also see that the server has a lot of free memory and is not swapping at
all.

OS: Linux
Server version: 9.3.20

Any insight will be highly appreciated.


Re: Proposition for better performance

2018-03-27 Thread Paul Jungwirth

On 03/27/2018 11:42 AM, hmidi slim wrote:

This is the message that I got:

Successfully run. Total query runtime: 2 min.
1500 rows affected.


Sorry, I don't think this is enough information to suggest anything.

--
Paul  ~{:-)
p...@illuminatedcomputing.com



Postgres Foreign Data Wrapper and DB2 LUW

2018-03-27 Thread Rakesh Kumar
Is there a web page describing how to set up a PG FDW to talk to DB2 LUW ver 
10.5 ?


Thanks.



Re: Postgresql 9.3 Server will not start after Ubuntu Upgrade

2018-03-27 Thread Adrian Klaver

On 03/27/2018 01:46 PM, Ken Beck wrote:

I am working on two systems, one running in Oracle VirtualBox on my
laptop, the other in a DigitalOcean droplet. I know on one of them I
tried to remove the postgres-9.6, and it must have been my laptop, here
is the output from pg_lsclusters:

Ver Cluster Port Status Owner    Data directory   Log file
9.3 main    5432 down   postgres /var/lib/postgresql/9.3/main
/var/log/postgresql/postgresql-9.3-main.log


What happens if you do?:

sudo pg_ctlcluster 9.3 main start



Is it possible my effort to remove 9.6 was not complete or otherwise
screwed things up? (I don't remember the commands I used to do that, sorry)




Ken Beck
Liberty, Utah, USA





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Postgresql 9.3 Server will not start after Ubuntu Upgrade

2018-03-27 Thread Tim Cross

Ken Beck  writes:

> I recently upgraded my OS from Ubuntu 14.04 LTS to 16.04 LTS and since,
> postgresql refuses to re-start. I am concerned that if I poke around too
> much, I may lose my old data. I have consulted various web postings here
> and in ubuntu forums, and have not found an obvious solution. But please
> excuse me, I am not a Linux expert, and I had a friend help me with
> setup issues several years ago. They are no longer available, and I am
> having trouble getting things to run.
>

This is likely a Ubuntu/Debian related issue. I vaguely remember that
following an upgrade of a Ubuntu system, the upgrade issued a warning
about additional steps needed to be taken in order to upgrade
postgres. I'm pretty sure that the default postgres for 16.04 was 9.6,
so it is possible that you need to upgrade your postgres installation to
9.6 or take other action to continue using 9.3. It might be worthwhile
looking in /usr/share/doc/postgresql-common to see if there is some
relevant documentation about upgrading etc.
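
On Debian/Ubuntu the usual tool for that is pg_upgradecluster from the
postgresql-common package; a sketch (run after installing the postgresql-9.6
packages, and only with a verified backup of the old cluster):

pg_lsclusters
sudo pg_upgradecluster 9.3 main

That creates a new cluster on the newer version, migrates the data into it, and
leaves the old 9.3 cluster stopped so it can later be removed with
pg_dropcluster.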

I also suspect there may be some startup changes you will need to
verify. I think Ubuntu 16.04 uses systemd while 14.04 used upstart?

After a Ubuntu upgrade, there is usually an upgrade log you can also
check to see if anything failed or provided warnings regarding
additional actions required.

HTH

Tim

-- 
Tim Cross



ERROR: invalid byte sequence for encoding "UTF8": 0xfc

2018-03-27 Thread armand pirvu
Folks


From another system (non postgres) as data gets changed we collect in the form 
like below into a named pipe which is tailed into psql -U csidba -d repdb 
-hrephost

insert into foo (mydata) values ('{"mail_stop": "" , "city_id": "Garching bei 
M?nchen"}');

And got a failure 
ERROR:  invalid byte sequence for encoding "UTF8": 0xfc


insert into foo (mydata) values ('{"mail_stop": "" , "city_id": "Garching bei 
M?nchen"}');

The problem is that converting said system to UTF8 is, nicely put, a huge amount 
of effort, and we don't have the resources anyway at this point


This is not a copy, just a tailed named pipe into psql

Is there any trick/way I can shove this data into Postgres? I really don't 
care whether it reads "Garching bei M?nchen" or "Garching bei Mfoonchen"



Thanks
— Armand





Re: Postgresql 9.3 Server will not start after Ubuntu Upgrade

2018-03-27 Thread Ken Beck
I am working on two systems, one running in Oracle VirtualBox on my
laptop, the other in a DigitalOcean droplet. I know on one of them I
tried to remove the postgres-9.6, and it must have been my laptop, here
is the output from pg_lsclusters:

Ver Cluster Port Status Owner    Data directory   Log file
9.3 main    5432 down   postgres /var/lib/postgresql/9.3/main
/var/log/postgresql/postgresql-9.3-main.log

Is it possible my effort to remove 9.6 was not complete or otherwise
screwed things up? (I don't remember the commands I used to do that, sorry)

Ken Beck
Liberty, Utah, USA

On 03/27/2018 12:17 PM, Adrian Klaver wrote:
> On 03/27/2018 11:00 AM, Ken Beck wrote:
>> I recently upgraded my OS from Ubuntu 14.04 LTS to 16.04 LTS and since,
>> postgresql refuses to re-start. I am concerned that if I poke around too
>> much, I may lose my old data. I have consulted various web postings here
>> and in ubuntu forums, and have not found an obvious solution. But please
>> excuse me, I am not a Linux expert, and I had a friend help me with
>> setup issues several years ago. They are no longer available, and I am
>> having trouble getting things to run.
>
> At a guess when you did the dist-upgrade(14.04 --> 16.04) you got the
> default version of Postgres for 16.04(9.6)
>
> Try, at the command line:
>
> pg_lsclusters
>
>
>>
>> Again, this is after upgrading from ubuntu 14.04 LTS to 16.04 LTS.
>> Consulting the book "PostgreSQL: Up and Running", I have looked at my
>> .conf files in /etc/postgresql/9.3/main, things seem to be in order.
>>
>> Here are the un-commented lines of postgresql.conf: (I have NOT changed
>> these, they are the settings that used to work in ubuntu 14.04)
>>
>> NOTE: my comments have // at beginning of the line, these lines are NOT
>> in the .conf file
>>
>> data_directory = '/var/lib/postgresql/9.3/main'        # use data in
>> another directory
>> hba_file = '/etc/postgresql/9.3/main/pg_hba.conf'    # host-based
>> authentication file
>> ident_file = '/etc/postgresql/9.3/main/pg_ident.conf'    # ident
>> configuration file
>>
>> external_pid_file = '/var/run/postgresql/9.3-main.pid'            #
>> write an extra PID file
>>
>> listen_addresses = 'localhost'        # what IP address(es) to listen
>> on;
>> port = 5432                # (change requires restart)
>> max_connections = 100            # (change requires restart)
>> unix_socket_directories = '/var/run/postgresql'    # comma-separated
>> list of directories
>> ssl = false                # (change requires restart)
>> ssl_cert_file = '/etc/ssl/certs/ssl-cert-snakeoil.pem'        # (change
>> requires restart)
>> ssl_key_file = '/etc/ssl/private/ssl-cert-snakeoil.key'        # (change
>> requires restart)
>>
>> // In the logging section, these are the only entries (is this why I
>> cant see any log files when I try to start the server??)
>> log_line_prefix = '%t '            # special values:
>> log_timezone = 'Navajo'
>>
>> // In Client Connections section, these are the only items active:
>> datestyle = 'iso, mdy'
>> timezone = 'Navajo'
>> lc_messages = 'en_US.UTF-8'            # locale for system error message
>>                  # strings
>> lc_monetary = 'en_US.UTF-8'            # locale for monetary formatting
>> lc_numeric = 'en_US.UTF-8'            # locale for number formatting
>> lc_time = 'en_US.UTF-8'                # locale for time formatting
>> default_text_search_config = 'pg_catalog.english'
>>
>> The pg_ctl.conf file is basically empty
>>
>> The pg_hba.conf file has these active lines:
>> local   all postgres    trust
>> local   all all peer
>> host    all all 127.0.0.1/32    trust
>> host    all all ::1/128 md5
>>
>> The start.conf file has one active line, 'auto'
>>
>> The pg_ident.conf file has no entries.
>>
>> When I try to start postgresql service using this line:
>> service postgresql reload
>>
>> then check running services using this:
>> systemctl list-units --type service --all
>> I see the lines related to postgres as follows:
>> postgresql.service loaded    active   exited  PostgreSQL
>> RDBMS
>> ● postgresql@9.3-main.service    loaded    failed   failed
>> PostgreSQL Cluster 9.3-main
>>
>> And, looking for log files, I find none.
>>
>> If I try to start pgAdmin, I see two servers on localhost, when I tried
>> to connect, I get an error first that says it can not open the log file,
>> on clicking that I get an error can't open file /home/XXX/.pgpass,
>> permission denied. Then I get a prompt for the password for postgres,
>> and when I enter the known password for the user, I then get a big 'Guru
>> Error' box with a message:
>> could not connect to server: Connection refused Is the server running on
>> host "127.0.0.1" and accepting TCP/IP connections on port 5432?
>> Note that the postgresql.conf file does list 54

Re: Postgresql 9.3 Server will not start after Ubuntu Upgrade

2018-03-27 Thread Ken Beck
OK, there are some logs there, most have old date/time stamps. But there
is one with today's time on it, but all it has is week-old data, likely
from near the time I did the Ubuntu upgrade. The following entry makes
me worry (it is about 2/3 of the way down...)

2018-03-22 10:14:28 MDT LOG:  database system was not properly shut
down; automatic recovery in progress
2018-03-22 10:14:28 MDT LOG:  record with zero length at 0/16178450

2018-03-18 16:41:12 MDT LOG:  received fast shutdown request
2018-03-18 16:41:12 MDT LOG:  aborting any active transactions
2018-03-18 16:41:12 MDT LOG:  autovacuum launcher shutting down
2018-03-18 16:41:12 MDT LOG:  shutting down
2018-03-18 16:41:12 MDT LOG:  database system is shut down
2018-03-19 17:00:41 MDT LOG:  database system was shut down at
2018-03-18 16:41:12 MDT
2018-03-19 17:00:41 MDT FATAL:  the database system is starting up
2018-03-19 17:00:41 MDT LOG:  incomplete startup packet
2018-03-19 17:00:41 MDT FATAL:  the database system is starting up
2018-03-19 17:00:42 MDT FATAL:  the database system is starting up
2018-03-19 17:00:42 MDT LOG:  MultiXact member wraparound protections
are now enabled
2018-03-19 17:00:42 MDT LOG:  database system is ready to accept connections
2018-03-19 17:00:42 MDT LOG:  autovacuum launcher started
2018-03-19 18:10:24 MDT LOG:  received fast shutdown request
2018-03-19 18:10:24 MDT LOG:  aborting any active transactions
2018-03-19 18:10:24 MDT LOG:  autovacuum launcher shutting down
2018-03-19 18:10:24 MDT LOG:  shutting down
2018-03-19 18:10:24 MDT LOG:  database system is shut down
2018-03-21 08:08:50 MDT LOG:  incomplete startup packet
2018-03-21 08:08:50 MDT LOG:  database system was shut down at
2018-03-19 18:10:24 MDT
2018-03-21 08:08:50 MDT FATAL:  the database system is starting up
2018-03-21 08:08:51 MDT FATAL:  the database system is starting up
2018-03-21 08:08:51 MDT FATAL:  the database system is starting up
2018-03-21 08:08:52 MDT LOG:  MultiXact member wraparound protections
are now enabled
2018-03-21 08:08:52 MDT LOG:  database system is ready to accept connections
2018-03-21 08:08:52 MDT LOG:  autovacuum launcher started
2018-03-22 10:14:22 MDT LOG:  incomplete startup packet
2018-03-22 10:14:22 MDT FATAL:  the database system is starting up
2018-03-22 10:14:22 MDT LOG:  database system was interrupted; last
known up at 2018-03-21 08:08:51 MDT
2018-03-22 10:14:22 MDT FATAL:  the database system is starting up
2018-03-22 10:14:23 MDT FATAL:  the database system is starting up
2018-03-22 10:14:23 MDT FATAL:  the database system is starting up
2018-03-22 10:14:24 MDT FATAL:  the database system is starting up
2018-03-22 10:14:24 MDT FATAL:  the database system is starting up
2018-03-22 10:14:25 MDT FATAL:  the database system is starting up
2018-03-22 10:14:25 MDT FATAL:  the database system is starting up
2018-03-22 10:14:26 MDT FATAL:  the database system is starting up
2018-03-22 10:14:26 MDT FATAL:  the database system is starting up
2018-03-22 10:14:27 MDT FATAL:  the database system is starting up
2018-03-22 10:14:27 MDT LOG:  incomplete startup packet
2018-03-22 10:14:28 MDT LOG:  database system was not properly shut
down; automatic recovery in progress
2018-03-22 10:14:28 MDT LOG:  record with zero length at 0/16178450
2018-03-22 10:14:28 MDT LOG:  redo is not required
2018-03-22 10:14:28 MDT LOG:  MultiXact member wraparound protections
are now enabled
2018-03-22 10:14:28 MDT LOG:  database system is ready to accept connections
2018-03-22 10:14:28 MDT LOG:  autovacuum launcher started
2018-03-22 11:15:02 MDT LOG:  received fast shutdown request
2018-03-22 11:15:02 MDT LOG:  aborting any active transactions
2018-03-22 11:15:02 MDT LOG:  autovacuum launcher shutting down
2018-03-22 11:15:02 MDT LOG:  shutting down
2018-03-22 11:15:02 MDT LOG:  database system is shut down
2018-03-22 11:15:03 MDT LOG:  database system was shut down at
2018-03-22 11:15:02 MDT
2018-03-22 11:15:03 MDT LOG:  MultiXact member wraparound protections
are now enabled
2018-03-22 11:15:03 MDT LOG:  database system is ready to accept connections
2018-03-22 11:15:03 MDT LOG:  autovacuum launcher started
2018-03-22 11:15:03 MDT LOG:  incomplete startup packet


Ken Beck
Liberty, Utah, USA

On 03/27/2018 12:04 PM, Joshua D. Drake wrote:
> On 03/27/2018 11:00 AM, Ken Beck wrote:
>> And, looking for log files, I find none.
>
> Nothing in /var/log/postgresql or /var/lib/postgresql/9.3/main/pg_log?
>
> JD
> -- 
>
> Command Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc
> ***  A fault and talent of mine is to tell it exactly how it is.  ***
> PostgreSQL centered full stack support, consulting and development.
> Advocate: @amplifypostgres || Learn: https://postgresconf.org
> * Unless otherwise stated, opinions are my own.   *
>




Re: Check constraints.

2018-03-27 Thread Steve Rogerson
On 27/03/18 15:44, Paul Jungwirth wrote:
>   SELECT  c.conname,
>  ...
This just does a variation on select * from
information_schema.check_constraints, and has the same issue: the
returned value for the constraint is not what I gave when I created it, but
some 'normalised' version of it.
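
For what it's worth, PostgreSQL stores only the parsed expression tree of a
check constraint (pg_constraint.conbin), so anything you read back, whether
from information_schema or from the catalogs directly, is a deparsed,
normalised form; a minimal sketch (mytable is a placeholder):

SELECT conname, pg_get_constraintdef(oid)
FROM pg_constraint
WHERE contype = 'c' AND conrelid = 'mytable'::regclass;

If you need the exact text you originally wrote, you will probably have to keep
it yourself, e.g. in your schema scripts.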

Steve



Re: Problem with postgreSQL

2018-03-27 Thread Adrian Klaver

On 03/27/2018 11:39 AM, Gian mario Caggiu wrote:


I downloaded two versions, 9.6.8 and 10.3, but neither worked. I 
followed the instructions on the website, and on the page where I had to 
choose the version and the operating system I chose the two versions 
that I have already told you. Then the installation of the program did 
not finish, displaying a message with 'the database cluster 
initialization failed postgresql mac os'


Which of the downloads did you select from below?:

https://www.postgresql.org/download/macosx/

Without that information it is not possible to provide an answer.



On 27 Mar 2018, at 18:01, Adrian Klaver <adrian.kla...@aklaver.com> wrote:


On 03/27/2018 07:18 AM, Gian mario Caggiu wrote:
Please reply to list also.
Ccing list.


I downloaded the 9.6 version first, and when I saw that it 
didn’t work I downloaded another version, 10.3, but this also 
didn’t work. All downloads are from http://www.postgresql.org.

And I installed it as the website advised me.


There are six choices for Mac:

https://www.postgresql.org/download/macosx/

You will need to be more specific on which option you chose.

Also what Web site are you referring to when talking about install 
instructions?


When you say did not work, can you be more specific, with error 
messages if possible?




Gian Mario Caggiu
On 27 Mar 2018, at 06:03, Adrian Klaver <adrian.kla...@aklaver.com> wrote:



On 03/26/2018 04:41 AM, Gian mario Caggiu wrote:
Hi, I'm a student and I downloaded postgreSQL to my Mac but I found 
a problem. The program does not start, I do not understand why, but 
that's


Where did you download it from?

What version of Postgres?

How did you install it?

not what scares me. The fact is that a virtual user has been 
created called 'postgreSQL' which has appropriated all the 
functions of the administrator user. Can I kindly ask you why and 
how did this happen?

Gian Mario Caggiu



--
Adrian Klaver
adrian.kla...@aklaver.com 



--
Adrian Klaver
adrian.kla...@aklaver.com 





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Proposition for better performance

2018-03-27 Thread hmidi slim
This is the message that I got:

Successfully run. Total query runtime: 2 min.
1500 rows affected.


Re: Proposition for better performance

2018-03-27 Thread Paul Jungwirth

On 03/27/2018 11:14 AM, hmidi slim wrote:

Query plan:

Bitmap Heap Scan on product_availabilities  (cost=33728.79..236086.04 rows=878500 width=26) (actual time=2775.058..5792.842 rows=1500 loops=1)
  Recheck Cond: (during @> '[2018-02-01,2018-04-01)'::daterange)
  Heap Blocks: exact=31040 lossy=79255
  ->  Bitmap Index Scan on idx_time  (cost=0.00..33509.17 rows=878500 width=0) (actual time=2767.262..2767.262 rows=1500 loops=1)
        Index Cond: (during @> '[2018-02-01,2018-04-01)'::daterange)
Planning time: 0.063 ms
Execution time: 6408.456 ms

SELECT product_id, start_date, end_date, during
FROM product_availabilities
WHERE during @> daterange('2018-02-01', '2018-04-01')

returns query runtime: 2min


Is it 6 seconds or 2 minutes? Where is the 2 minutes number coming from? 
Are you sure that's all Postgres? With 878k rows even sending them over 
the network is going to take a while, and then more time to JSONify them 
or whatever else you need to do.


--
Paul  ~{:-)
p...@illuminatedcomputing.com



Re: Proposition for better performance

2018-03-27 Thread Paul Jungwirth

(Including the list)

On 03/27/2018 10:49 AM, hmidi slim wrote:

Sorry I didn't copy it very well:
create index idx on availability using gist(during);

and during = daterange(start_date,end_date)

And the query plan used was seq scan not index scan.


It sounds like there must be some important information missing. Here is 
my attempt to mock up some fake data:


insert into availability
select p.id,
       concat('prod ', p.id::text),
       daterange((now() + concat(r.v, ' days')::interval)::date,
                 (now() + concat((r.v + 1 + random() * 21)::int, ' days')::interval)::date)
from (select * from generate_series(1, 100)) p(id)
cross join (select * from generate_series(1, 15)) n(i)
join lateral (
  select p2.id, n2.i, (random() * 600 - 300)::int AS v
  from generate_series(1, 100) p2(id),
       generate_series(1, 15) n2(i)
) r
on r.id = p.id and r.i = n.i
;

Then

explain select * from availability where during @> daterange('2018-03-27', '2018-03-31');
                                 QUERY PLAN
------------------------------------------------------------------------------
 Index Scan using idx on availability  (cost=0.39..1644.41 rows=1 width=552)
   Index Cond: (during @> '[2018-03-27,2018-03-31)'::daterange)
(2 rows)

Running that query for real I get:

...
(215044 rows)

Time: 1450.099 ms (00:01.450)

So if the index isn't being used you'll have to do some digging to find 
out why.	
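
One quick check, for example, is whether the planner will use the index at all
once sequential scans are discouraged (a diagnostic setting, not something to
leave on):

SET enable_seqscan TO off;
explain analyze select * from availability where during @> daterange('2018-03-27', '2018-03-31');
RESET enable_seqscan;

If the index scan only appears with that setting and is slower than the seq
scan, the planner was right to avoid it; if it is much faster, the statistics
are probably stale and an ANALYZE of the table is the first thing to try.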


--
Paul  ~{:-)
p...@illuminatedcomputing.com



Re: Postgresql 9.3 Server will not start after Ubuntu Upgrade

2018-03-27 Thread Adrian Klaver

On 03/27/2018 11:00 AM, Ken Beck wrote:

I recently upgraded my OS from Ubuntu 14.04 LTS to 16.04 LTS and since,
postgresql refuses to re-start. I am concerned that if I poke around too
much, I may lose my old data. I have consulted various web postings here
and in ubuntu forums, and have not found an obvious solution. But please
excuse me, I am not a Linux expert, and I had a friend help me with
setup issues several years ago. They are no longer available, and I am
having trouble getting things to run.


At a guess when you did the dist-upgrade(14.04 --> 16.04) you got the 
default version of Postgres for 16.04(9.6)


Try, at the command line:

pg_lsclusters




Again, this is after upgrading from ubuntu 14.04 LTS to 16.04 LTS.
Consulting the book "PostgreSQL: Up and Running", I have looked at my
.conf files in /etc/postgresql/9.3/main, things seem to be in order.

Here are the un-commented lines of postgresql.conf: (I have NOT changed
these, they are the settings that used to work in ubuntu 14.04)

NOTE: my comments have // at beginning of the line, these lines are NOT
in the .conf file

data_directory = '/var/lib/postgresql/9.3/main'        # use data in
another directory
hba_file = '/etc/postgresql/9.3/main/pg_hba.conf'    # host-based
authentication file
ident_file = '/etc/postgresql/9.3/main/pg_ident.conf'    # ident
configuration file

external_pid_file = '/var/run/postgresql/9.3-main.pid'            #
write an extra PID file

listen_addresses = 'localhost'        # what IP address(es) to listen on;
port = 5432                # (change requires restart)
max_connections = 100            # (change requires restart)
unix_socket_directories = '/var/run/postgresql'    # comma-separated
list of directories
ssl = false                # (change requires restart)
ssl_cert_file = '/etc/ssl/certs/ssl-cert-snakeoil.pem'        # (change
requires restart)
ssl_key_file = '/etc/ssl/private/ssl-cert-snakeoil.key'        # (change
requires restart)

// In the logging section, these are the only entries (is this why I
cant see any log files when I try to start the server??)
log_line_prefix = '%t '            # special values:
log_timezone = 'Navajo'

// In Client Connections section, these are the only items active:
datestyle = 'iso, mdy'
timezone = 'Navajo'
lc_messages = 'en_US.UTF-8'            # locale for system error message
                     # strings
lc_monetary = 'en_US.UTF-8'            # locale for monetary formatting
lc_numeric = 'en_US.UTF-8'            # locale for number formatting
lc_time = 'en_US.UTF-8'                # locale for time formatting
default_text_search_config = 'pg_catalog.english'

The pg_ctl.conf file is basically empty

The pg_hba.conf file has these active lines:
local   all postgres    trust
local   all all peer
host    all all 127.0.0.1/32    trust
host    all all ::1/128 md5

The start.conf file has one active line, 'auto'

The pg_ident.conf file has no entries.

When I try to start postgresql service using this line:
service postgresql reload

then check running services using this:
systemctl list-units --type service --all
I see the lines related to postgres as follows:
postgresql.service loaded    active   exited  PostgreSQL
RDBMS
● postgresql@9.3-main.service    loaded    failed   failed
PostgreSQL Cluster 9.3-main

And, looking for log files, I find none.

If I try to start pgAdmin, I see two servers on localhost, when I tried
to connect, I get an error first that says it can not open the log file,
on clicking that I get an error can't open file /home/XXX/.pgpass,
permission denied. Then I get a prompt for the password for postgres,
and when I enter the known password for the user, I then get a big 'Guru
Error' box with a message:
could not connect to server: Connection refused Is the server running on
host "127.0.0.1" and accepting TCP/IP connections on port 5432?
Note that the postgresql.conf file does list 5432 as the port number.

I hope someone has the patience to look this over and suggest remedies. --

Ken Beck
Liberty, Utah, USA





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Proposition for better performance

2018-03-27 Thread hmidi slim
I update the example:

create table product_availabilities(product_id integer, start_date date,
end_date date);

insert into product_availabilities(product_id, start_date, end_date)
select a, '2018-01-01', '2018-05-03' from generate_series(1,1500) as a;

alter table product_availabilities add column during daterange;

update product_availabilities set during = daterange(start_date, end_date);

CREATE INDEX idx_time ON product_availabilities USING gist(during);

explain analyze
SELECT product_id, start_date, end_date, during
FROM product_availabilities
WHERE during @> daterange('2018-02-01', '2018-04-01');

Query plan:

Bitmap Heap Scan on product_availabilities  (cost=33728.79..236086.04 rows=878500 width=26) (actual time=2775.058..5792.842 rows=1500 loops=1)
  Recheck Cond: (during @> '[2018-02-01,2018-04-01)'::daterange)
  Heap Blocks: exact=31040 lossy=79255
  ->  Bitmap Index Scan on idx_time  (cost=0.00..33509.17 rows=878500 width=0) (actual time=2767.262..2767.262 rows=1500 loops=1)
        Index Cond: (during @> '[2018-02-01,2018-04-01)'::daterange)
Planning time: 0.063 ms
Execution time: 6408.456 ms

Running the query itself:

SELECT product_id, start_date, end_date, during
FROM product_availabilities
WHERE during @> daterange('2018-02-01', '2018-04-01')

returns with a query runtime of about 2 min.
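
A side note on the plan above: the lossy heap blocks (lossy=79255) mean the
bitmap no longer fit in work_mem, so those pages have to be rechecked row by
row. A minimal experiment, assuming the session still has the small default
work_mem, would be to raise it and re-run:

   SET work_mem = '256MB';
   EXPLAIN (ANALYZE, BUFFERS)
   SELECT product_id, start_date, end_date, during
   FROM product_availabilities
   WHERE during @> daterange('2018-02-01', '2018-04-01');

If the lossy count drops to zero, the recheck overhead largely disappears.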


Re: Postgresql 9.3 Server will not start after Ubuntu Upgrade

2018-03-27 Thread Moreno Andreo

On 27/03/2018 20:00, Ken Beck wrote:

And, looking for log files, I find none.

Have you looked in /var/log/postgresql/ ?





Re: Postgresql 9.3 Server will not start after Ubuntu Upgrade

2018-03-27 Thread Joshua D. Drake

On 03/27/2018 11:00 AM, Ken Beck wrote:

And, looking for log files, I find none.


Nothing in /var/log/postgresql or /var/lib/postgresql/9.3/main/pg_log?

JD
--

Command Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc
***  A fault and talent of mine is to tell it exactly how it is.  ***
PostgreSQL centered full stack support, consulting and development.
Advocate: @amplifypostgres || Learn: https://postgresconf.org
* Unless otherwise stated, opinions are my own.   *




Postgresql 9.3 Server will not start after Ubuntu Upgrade

2018-03-27 Thread Ken Beck
I recently upgraded my OS from Ubuntu 14.04 LTS to 16.04 LTS and since,
postgresql refuses to re-start. I am concerned that if I poke around too
much, I may lose my old data. I have consulted various web postings here
and in ubuntu forums, and have not found an obvious solution. But please
excuse me, I am not a Linux expert, and I had a friend help me with
setup issues several years ago. They are no longer available, and I am
having trouble getting things to run.

Again, this is after upgrading from ubuntu 14.04 LTS to 16.04 LTS.
Consulting the book "PostgreSQL: Up and Running", I have looked at my
.conf files in /etc/postgresql/9.3/main, things seem to be in order.

Here are the un-commented lines of postgresql.conf: (I have NOT changed
these, they are the settings that used to work in ubuntu 14.04)

NOTE: my comments have // at beginning of the line, these lines are NOT
in the .conf file

data_directory = '/var/lib/postgresql/9.3/main'        # use data in
another directory
hba_file = '/etc/postgresql/9.3/main/pg_hba.conf'    # host-based
authentication file
ident_file = '/etc/postgresql/9.3/main/pg_ident.conf'    # ident
configuration file

external_pid_file = '/var/run/postgresql/9.3-main.pid'            #
write an extra PID file

listen_addresses = 'localhost'        # what IP address(es) to listen on;
port = 5432                # (change requires restart)
max_connections = 100            # (change requires restart)
unix_socket_directories = '/var/run/postgresql'    # comma-separated
list of directories
ssl = false                # (change requires restart)
ssl_cert_file = '/etc/ssl/certs/ssl-cert-snakeoil.pem'        # (change
requires restart)
ssl_key_file = '/etc/ssl/private/ssl-cert-snakeoil.key'        # (change
requires restart)

// In the logging section, these are the only entries (is this why I
can't see any log files when I try to start the server?)
log_line_prefix = '%t '            # special values:
log_timezone = 'Navajo'

// In Client Connections section, these are the only items active:
datestyle = 'iso, mdy'
timezone = 'Navajo'
lc_messages = 'en_US.UTF-8'            # locale for system error message
                    # strings
lc_monetary = 'en_US.UTF-8'            # locale for monetary formatting
lc_numeric = 'en_US.UTF-8'            # locale for number formatting
lc_time = 'en_US.UTF-8'                # locale for time formatting
default_text_search_config = 'pg_catalog.english'

The pg_ctl.conf file is basically empty

The pg_hba.conf file has these active lines:
local   all postgres    trust
local   all all peer
host    all all 127.0.0.1/32    trust
host    all all ::1/128 md5

The start.conf file has one active line, 'auto'

The pg_ident.conf file has no entries.

When I try to start postgresql service using this line:
service postgresql reload

then check running services using this:
systemctl list-units --type service --all
I see the lines related to postgres as follows:
postgresql.service loaded    active   exited  PostgreSQL
RDBMS
● postgresql@9.3-main.service    loaded    failed   failed 
PostgreSQL Cluster 9.3-main

And, looking for log files, I find none.

If I try to start pgAdmin, I see two servers on localhost. When I tried
to connect, I first got an error saying it cannot open the log file;
clicking through that, I got the error "can't open file /home/XXX/.pgpass,
permission denied". Then I got a prompt for the password for postgres,
and when I entered the known password for the user, I got a big 'Guru
Error' box with the message:
could not connect to server: Connection refused Is the server running on
host "127.0.0.1" and accepting TCP/IP connections on port 5432?
Note that the postgresql.conf file does list 5432 as the port number.

I hope someone has the patience to look this over and suggest remedies.

--

Ken Beck
Liberty, Utah, USA




Re: Proposition for better performance

2018-03-27 Thread Paul Jungwirth

On 03/27/2018 10:04 AM, hmidi slim wrote:

the query that I used to fetch products was:
select * from availability
where ('27-03-2018' between start_date and end_date)
and ('31-03-2018' between start_date and end_date);

I added another column named during of type daterange and I created a 
gist index :

create index idx on availability(during);


That creates a b-tree index.
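
For a daterange containment search the index has to be GiST; a minimal
sketch of what was presumably intended:

   create index idx on availability using gist (during);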

Also, what did you set `during` to?


select * from availability
where during @> daterange('27-03-2018', '31-03-2018');

With a table of 15M rows I got an execution time of 1 minute.


What was the query plan? Did you confirm that it used the index?

--
Paul  ~{:-)
p...@illuminatedcomputing.com



Re: Proposition for better performance

2018-03-27 Thread hmidi slim
the query that I used to fetch products was:
select * from availability
where ('27-03-2018' between start_date and end_date)
and ('31-03-2018' between start_date and end_date);

I added another column named during of type daterange and I created a gist
index :
create index idx on availability(during);

select * from availability
where during @> daterange('27-03-2018', '31-03-2018');

With a table of 15M rows I got an execution time of 1 minute. That's why I
asked if it is a good solution to divide the availability of a product into
intervals.
If I assume that a product has 5 availability intervals in every month, in
12 months we will have 60 intervals. For 1M products that will be 60M
rows. This will affect the performance.


Re: Problem with postgreSQL

2018-03-27 Thread Adrian Klaver

On 03/27/2018 07:18 AM, Gian mario Caggiu wrote:
Please reply to list also.
Ccing list.



I downloaded the 9.6 version the first time, and when I saw that it didn't work
I downloaded another version, 10.3, but this doesn't work either. All downloads
were from http://www.postgresql.org .
And I installed it as the website advised me.


There are six choices for Mac:

https://www.postgresql.org/download/macosx/

You will need to be more specific on which option you chose.

Also what Web site are you referring to when talking about install 
instructions?


When you say did not work, can you be more specific, with error messages 
if possible?





Gian Mario Caggiu


On 27 Mar 2018, at 06:03, Adrian Klaver  wrote:


On 03/26/2018 04:41 AM, Gian mario Caggiu wrote:
Hi, I'm a student and I downloaded postgreSQL to my Mac but I found a problem. 
The program does not start, I do not understand why, but that's


Where did you download it from?

What version of Postgres?

How did you install it?


not what scares me. The fact is that a virtual user called 'postgreSQL'
has been created, which has appropriated all the functions of the
administrator user. Can I kindly ask why and how this happened?
Gian Mario Caggiu



--
Adrian Klaver
adrian.kla...@aklaver.com



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Proposition for better performance

2018-03-27 Thread Paul Jungwirth

On 03/27/2018 07:42 AM, hmidi slim wrote:

Hi,
I'm trying to create an availability table for existing products. I'm 
looking for a suitable schema design in order to get good performance 
when I fetch products from a table containing millions of rows.

I am thinking of a schema like this:
create table availability (product_id integer, product_name 
varchar(255), start_date date, end_date date).


I would use a tstzrange (or daterange) instead of separate start_date 
and end_date columns. Then you can create an exclusion constraint that 
has `EXCLUDE USING gist (id WITH =, available_during WITH &&)`. That 
will automatically add a GiST index on those columns that should help 
with faster lookups. (It will also prevent contradictions where a 
product has two overlapping rows.)
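
A minimal sketch of that layout, assuming the column names from the original
post (note that mixing a plain integer with range operators in a GiST
exclusion constraint requires the btree_gist extension):

   CREATE EXTENSION IF NOT EXISTS btree_gist;

   CREATE TABLE availability (
       product_id       integer NOT NULL,
       product_name     varchar(255),
       available_during daterange NOT NULL,
       -- rejects two overlapping availability rows for the same product
       -- and creates the GiST index used by @> and && lookups
       EXCLUDE USING gist (product_id WITH =, available_during WITH &&)
   );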


You didn't mention the queries you want to be fast, but that index 
should cover the reasonable ones I think.


> Is there any solution, such as range types, that records the
> unavailable dates?

For any product, there should be the same number of unavailable periods 
as available, right---or often one more? So I don't see any performance 
benefit in doing it that way, and it certainly seems less intuitive to 
store when something is *not* available.


--
Paul  ~{:-)
p...@illuminatedcomputing.com



Re: Using Lateral

2018-03-27 Thread Paul Jungwirth

On 03/27/2018 03:22 AM, Johann Spies wrote:

In the past I could use this in a query:

 SELECT
 DISTINCT ut,
 CASE
 WHEN xpath_exists ('//t:address_spec/t:country/text()',
 q.address_spec,
 p.ns)
 THEN unnest (xpath ('//t:address_spec/t:country/text()',
 q.address_spec,
 p.ns))::citext
 ELSE NULL
 END country,

No longer.  The error message suggests I should use a lateral query.
But I could not figure out in the documentation how to get the same
result using a "lateral" construct.

Just selecting "unnest(...)" gives the wrong result because if the
xpath does not exist all the other parts of the query (like 'ut' in
this case) also are empty.


It is hard to suggest something without seeing your whole query (e.g. 
how are you joining q & p?). But it sounds like you basically want a 
left join to the unnested xpath result. It could be a lateral join or not.


It is common to use UNNEST with an implicit lateral join, like this:

SELECT  ...
FROM    q, UNNEST(xpath('...', q.address_spec))

But that gives you an inner join. To get an outer join you need to be 
more explicit. Something like this:


SELECT  ...
FROM    q, p
LEFT OUTER JOIN LATERAL (
  SELECT  *
  FROM    unnest(xpath('//t:address_spec/t:country/text()',
                       q.address_spec, p.ns))::citext
) x(country)
ON true

(Presumably you would do something more restrictive to connect q & p 
though.)


Yours,

--
Paul  ~{:-)
p...@illuminatedcomputing.com



Re: Autonomous transaction, background worker

2018-03-27 Thread Paul Jungwirth
I am trying to set up a logging facility and want it to do its work 
asynchronously in autonomous transactions. I have read 
http://blog.dalibo.com/2016/08/19/Autonoumous_transactions_support_in_PostgreSQL.html 
and chapter 47 of the documentation, believing those articles are about 
the same thing. However, pg_background_launch mentioned in the blog is not 
available in my vanilla installation of 10.3-2 from the PostgreSQL 
repositories for Debian.
Chapter 47 makes me believe I need to code some C functions to use 
background workers.


I've experimented with background workers a bit. Yes, normally you'd 
write some C, but you could also install this (mentioned in the blog 
post and the source of pg_background_launch):


https://github.com/vibhorkum/pg_background

Note that it is not part of contrib and is not on PGXN, so to install it 
you need to clone the repo and run `make && sudo make install`. Also I'm 
not sure it is really production-level. In particular this is a serious 
issue:


https://github.com/vibhorkum/pg_background/issues/1

(If the launching process exits, the child process won't be able to map 
the shared memory segment.)


This is another example of background workers that might interest you, 
just to see how they're doing it:


https://github.com/citusdata/pg_cron

In that case they start the worker upon database startup, so it doesn't 
have problems with shared memory going away.


I took a stab recently at using background workers to run ad hoc async 
SQL, but eventually I wound up doing something simpler.


I think background workers is a pretty interesting feature though, so if 
you get something working I'm sure others would be happy to hear about 
it. Good luck!


Yours,

--
Paul  ~{:-)
p...@illuminatedcomputing.com



Re: Check constraints.

2018-03-27 Thread Paul Jungwirth

On 03/27/2018 04:23 AM, Steve Rogerson wrote:

I am trying to add/change a constraint programmatically, but not if it
already exists, and is the same as before.
...
Is there a way of getting the "normalised" version of a constraint, so I can
decide whether I need to update the constraint if one already exists?


Hi Steve,

I wrote a Ruby gem to do this some years ago. Here is the SQL I used:

  SELECT  c.conname,
  t.relname,
  pg_get_expr(c.conbin, c.conrelid)
  FROMpg_catalog.pg_constraint c,
  pg_catalog.pg_class t,
  pg_catalog.pg_namespace n
  WHERE   c.contype = 'c'
  AND c.conrelid = t.oid
  AND t.relkind = 'r'
  AND n.oid = t.relnamespace
  AND n.nspname NOT IN ('pg_catalog', 'pg_toast')
  AND pg_catalog.pg_table_is_visible(t.oid)

https://github.com/pjungwir/db_leftovers/blob/master/lib/db_leftovers/postgres_database_interface.rb#L117-L137

I haven't used it against the last few Postgres versions, but it 
probably still works or needs only minor adjustments.
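
If you only need a single constraint, the same pg_get_expr() call can be 
filtered by name; a minimal sketch, assuming the constraint is named 
my_table_my_type_check:

   SELECT pg_get_expr(conbin, conrelid) AS normalized_check
   FROM   pg_catalog.pg_constraint
   WHERE  contype = 'c'
   AND    conname = 'my_table_my_type_check';

This returns the server's normalized form, so it is the text to compare 
against rather than the hand-written expression.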


--
Paul  ~{:-)
p...@illuminatedcomputing.com



Proposition for better performance

2018-03-27 Thread hmidi slim
Hi,
I'm trying to create an availability table for existing products. I'm
looking for a suitable schema design in order to get good performance
when I fetch products from a table containing millions of rows.
I am thinking of a schema like this:
create table availability (product_id integer, product_name varchar(255),
start_date date, end_date date).

E.g.: given an example 'product-1' which has product_id = 1 and is available
from 27-03-2018 to 31-03-2018:

product_id   product_name   start_date    end_date
         1   product-1      27-03-2018    31-03-2018

However, if the product is not available on 29-03-2018 I have to divide the
period into 2 intervals:

product_id   product_name   start_date    end_date
         1   product-1      27-03-2018    28-03-2018
         1   product-1      30-03-2018    31-03-2018

With a table containing millions of products, if I divide the availability
into intervals I will get a huge number of rows, and fetching data will
degrade performance. Is there any solution, such as range types, that
records the unavailable dates? Based on the example above, can I use a
data type like a range type that takes the start_date and end_date and
excludes the unavailable dates?


logical decoder lsn order between transactions

2018-03-27 Thread Tom Dearman
Hi,

We have written a decoder plugin to use streaming replication in an aggregation 
framework, and we write the changes in chunks using OutputPluginWrite from 
pg_decode_change. In order to ensure we don't process a message twice, we add 
the lsn of each message to our aggregated value and check, as new messages come 
in, that they are not less than the last lsn saved. The problem we have found 
is that interleaved inserts in 2 separate sessions have interleaved lsn values; 
an example of this can be reproduced using test_decoding:

# SELECT * FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL);
    lsn     |  xid   |                           data
------------+--------+-----------------------------------------------------------
 0/3AEB0DC0 | 182312 | BEGIN 182312
 0/3AEB0DC0 | 182312 | table public.data: INSERT: id[integer]:11 data[text]:'1'
 0/3AEB0F20 | 182312 | COMMIT 182312
 0/3AEA4770 | 182265 | BEGIN 182265
 0/3AEA47D8 | 182265 | table public.data: INSERT: id[integer]:10 data[text]:'1'
 0/3AEB1790 | 182265 | COMMIT 182265
(6 rows)
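
For anyone reproducing this, the slot above would have been created with the
test_decoding example plugin along these lines:

   SELECT * FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');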


The lsn of the commit is correctly ordered, but the inserts/updates/BEGINs seem 
to be in the order in which they happened. This means that if we save the value 
'0/3AEB0DC0', when we reach '0/3AEA47D8' we think we have already processed it, 
as it has a value less than the last one processed.  I have found that if I 
change to a batched approach, writing to OutputPluginWrite in 
pg_decode_commit_txn, then this works, as the java client used for the 
aggregation returns the value of the commit lsn as getLastReceiveLSN, or I 
can just put the commit_lsn value into our streamed results.

If we would like to use a non-batched approach, what are the suggestions for 
how to ensure we only process a given record once, whilst only saving one value 
against our aggregation value, i.e. something like an lsn value?

Thanks.



Re: Must re-connect to see tables

2018-03-27 Thread Daniel Verite
Blake McBride wrote:

> data.sql is an untouched dump of a database from PostgreSQL 9.5.12.
> 
> The import seems to have worked.  I just need another \c after the \i.

An effect of reconnecting is to reset the search_path to what
it was before including your dump file.
You're most certainly affected by the search_path-related
changes in pg_dump done in the latest versions following
CVE-2018-1058
See https://www.postgresql.org/support/security/
or
https://www.postgresql.org/about/news/1834/

Manually issuing
  RESET search_path;
after \i data.sql should get "public" back in the search_path
without reconnecting and after that \dt would work as usual.
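
Applied to the load script from this thread, that would be:

   \c test
   \i data.sql
   RESET search_path;
   \dt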


Best regards,
-- 
Daniel Vérité
PostgreSQL-powered mailer: http://www.manitou-mail.org
Twitter: @DanielVerite



Re: Must re-connect to see tables

2018-03-27 Thread Blake McBride
On Tue, Mar 27, 2018 at 6:48 AM, rob stone  wrote:

> Hello Blake,
>
> On Tue, 2018-03-27 at 06:23 -0500, Blake McBride wrote:
> > I should also mention that I am using PostgreSQL 9.3.22 on a 64 bit
> > Linux box.
> >
> > On Tue, Mar 27, 2018 at 6:22 AM, Blake McBride 
> > wrote:
> > > Hi,
> > >
> > > I have been using PostgreSQL for many years but all of a sudden a
> > > db load script I've been using no longer works.  What id does
> > > (essentially) is:
> > >
> > > psql postgres postgres
> > > drop database test;
> > > create database test;
> > > \c test
> > > \i data.sql
> > > \dt
> > > \q
> > >
> > >
> > > I tested this with a small data.sql and it worked fine.  However,
> > > when I use a real (much larger) data.sql then \dt replies "No
> > > relations"  !  I found this if I stick an extra '\c test' after the
> > > '\i data.sql' it works.
> > >
> > > The data.sql is a PostgreSQL dump from another database.  When the
> > > '\i data.sql' is done, the prompt still shows 'test#' so it's not
> > > like I've changed databases.
> > >
> > > I only have one schema, 'public'.
> > >
> > > Can't figure out why a simple example works but a large file
> > > requires an extra \c.
> > >
> > > Sure appreciate any help.
> > >
> > > Blake McBride
> > >
> >
> >
>
>
>
>
> 1) How does your e-mail subject line "Must re-connect to see tables"
> relate to the import?
>


I do the import and then \dt returns no relations.  I must *re-connect* via
\c in order to see the tables.  (According to the prompt, I was still
connected to the database.)



>
> 2) Does the larger data.sql file have a "create database" line or any
> other construct at the beginning of the file that would cause it to NOT
> insert rows into database test?
>

data.sql is an untouched dump of a database from PostgreSQL 9.5.12.

The import seems to have worked.  I just need another \c after the \i.

Thanks!

Blake




>
> Cheers,
> Rob
>


Re: Must re-connect to see tables

2018-03-27 Thread rob stone
Hello Blake,

On Tue, 2018-03-27 at 06:23 -0500, Blake McBride wrote:
> I should also mention that I am using PostgreSQL 9.3.22 on a 64 bit
> Linux box.
> 
> On Tue, Mar 27, 2018 at 6:22 AM, Blake McBride 
> wrote:
> > Hi,
> > 
> > I have been using PostgreSQL for many years but all of a sudden a
> > db load script I've been using no longer works.  What it does
> > (essentially) is:
> > 
> > psql postgres postgres
> > drop database test;
> > create database test;
> > \c test
> > \i data.sql
> > \dt
> > \q
> > 
> > 
> > I tested this with a small data.sql and it worked fine.  However,
> > when I use a real (much larger) data.sql then \dt replies "No
> > relations"  !  I found this if I stick an extra '\c test' after the
> > '\i data.sql' it works.
> > 
> > The data.sql is a PostgreSQL dump from another database.  When the
> > '\i data.sql' is done, the prompt still shows 'test#' so it's not
> > like I've changed databases.
> > 
> > I only have one schema, 'public'.
> > 
> > Can't figure out why a simple example works but a large file
> > requires an extra \c.
> > 
> > Sure appreciate any help.
> > 
> > Blake McBride
> > 
> 
> 




1) How does your e-mail subject line "Must re-connect to see tables"
relate to the import?

2) Does the larger data.sql file have a "create database" line or any
other construct at the beginning of the file that would cause it to NOT
insert rows into database test?

Cheers,
Rob



Re: Must re-connect to see tables

2018-03-27 Thread Ron
If it worked in 9.3.22 and now it's failed in 9.3.22, then  I'd look to see 
if something has changed in data.sql.


On 03/27/2018 06:22 AM, Blake McBride wrote:

Hi,

I have been using PostgreSQL for many years but all of a sudden a db load 
script I've been using no longer works.  What it does (essentially) is:


psql postgres postgres
drop database test;
create database test;
\c test
\i data.sql
\dt
\q


I tested this with a small data.sql and it worked fine. However, when I 
use a real (much larger) data.sql then \dt replies "No relations"!  I found 
that if I stick an extra '\c test' after the '\i data.sql' it works.


The data.sql is a PostgreSQL dump from another database. When the '\i 
data.sql' is done, the prompt still shows 'test#' so it's not like I've 
changed databases.


I only have one schema, 'public'.

Can't figure out why a simple example works but a large file requires an 
extra \c.


Sure appreciate any help.

Blake McBride



--
Angular momentum makes the world go 'round.



Check constraints.

2018-03-27 Thread Steve Rogerson
I am trying to add/change a constraint programmatically, but not if it
already exists and is the same as before.


I can do something like (may not be precise ...)

select check_clause from information_schema.check_constraints where
constraint_name = 'my-name'

but this returns that clause in a normalised format that is not compatible
with the text I am trying to compare, so I'm adding something like ...

alter table my_table  add check (my_type = any (array['GOOD' , 'BAD']))

but the check_clause from above looks like ...

(((my_type)::text = ANY (ARRAY['GOOD'::text, 'BAD'::text])))

Is there a way of getting the "normalised" version of a constraint, so I can
decide whether I need to update the constraint if one already exists?

Steve





Re: Must re-connect to see tables

2018-03-27 Thread Blake McBride
I should also mention that I am using PostgreSQL 9.3.22 on a 64 bit Linux
box.

On Tue, Mar 27, 2018 at 6:22 AM, Blake McBride  wrote:

> Hi,
>
> I have been using PostgreSQL for many years but all of a sudden a db load
> script I've been using no longer works.  What it does (essentially) is:
>
> psql postgres postgres
> drop database test;
> create database test;
> \c test
> \i data.sql
> \dt
> \q
>
>
> I tested this with a small data.sql and it worked fine.  However, when I
> use a real (much larger) data.sql then \dt replies "No relations"!  I found
> that if I stick an extra '\c test' after the '\i data.sql' it works.
>
> The data.sql is a PostgreSQL dump from another database.  When the '\i
> data.sql' is done, the prompt still shows 'test#' so it's not like I've
> changed databases.
>
> I only have one schema, 'public'.
>
> Can't figure out why a simple example works but a large file requires an
> extra \c.
>
> Sure appreciate any help.
>
> Blake McBride
>
>


Must re-connect to see tables

2018-03-27 Thread Blake McBride
Hi,

I have been using PostgreSQL for many years but all of a sudden a db load
script I've been using no longer works.  What it does (essentially) is:

psql postgres postgres
drop database test;
create database test;
\c test
\i data.sql
\dt
\q


I tested this with a small data.sql and it worked fine.  However, when I
use a real (much larger) data.sql then \dt replies "No relations"!  I found
that if I stick an extra '\c test' after the '\i data.sql' it works.

The data.sql is a PostgreSQL dump from another database.  When the '\i
data.sql' is done, the prompt still shows 'test#' so it's not like I've
changed databases.

I only have one schema, 'public'.

Can't figure out why a simple example works but a large file requires an
extra \c.

Sure appreciate any help.

Blake McBride


Using Lateral

2018-03-27 Thread Johann Spies
In the past I could use this in a query:

SELECT
DISTINCT ut,
CASE
WHEN xpath_exists ('//t:address_spec/t:country/text()',
q.address_spec,
p.ns)
THEN unnest (xpath ('//t:address_spec/t:country/text()',
q.address_spec,
p.ns))::citext
ELSE NULL
END country,

No longer.  The error message suggests I should use a lateral query.
But I could not figure out in the documentation how to get the same
result using a "lateral" construct.

Just selecting "unnest(...)" gives the wrong result because if the
xpath does not exist all the other parts of the query (like 'ut' in
this case) also are empty.

Regards
Johann

-- 
Because experiencing your loyal love is better than life itself,
my lips will praise you.  (Psalm 63:3)



Issue with PostgreSQL replication and PITR

2018-03-27 Thread Amee Sankhesara - Quipment India

Hello,

We have a Windows (2016) setup for PostgreSQL (version 9.6.5).

We have set up replication and PITR on this PostgreSQL server, and the WAL files 
for replication and PITR are stored on another server which is shared between 
the primary and secondary servers.

As part of an OS migration activity for the PostgreSQL database servers,
we created two new replication servers and promoted one to be the new master 
server.

During that activity one .partial file was created on the master server, and 
after that WAL files are not copied to the shared folder from which the 
secondary server takes and restores them. All WAL files after the .partial file 
are stored in the pg_xlog folder of the master data directory, so the size of 
the master's data directory keeps increasing.

WAL files are not being copied to the shared location of the master and 
secondary servers, but replication is still running and the data on the 
secondary server is up to date.

In the logs we get only the warning below:
Warning : The failed archive command was: copy 
"D:\PostgreSQL\9.6\data\pg_xlog\00010A8800F8.partial" 
"\\10.0.0.35\Archive_Replication\00010A8800F8.partial"
 | copy "D:\PostgreSQL\9.6\data\pg_xlog\00010A8800F8.partial" 
"\\10.0.0.35\Archive_PITR\00010A8800F8.partial"

We can't figure out why WAL files are not copied to the shared folder while the 
secondary server still runs in sync with the master server.

What could be the reason behind this?

And how can we safely remove this .partial file from pg_xlog, and how can we 
start copying WAL files from the master's pg_xlog to the shared location of the 
master and secondary servers?

Thanks,
Amee Sankhesara
Database Developer

R&D .Net Department

T / +91 (0) 94295 45266
amee.sankhesara@ / 
www.quipment.in



QTwee Group BV
KvK Arnhem 60858001
BTW NL854090721B01
mail@ / www.quipment.nl

Hoofdkantoor Nijmegen
Kerkenbos 1236 C
6546 BE Nijmegen
Tel. +31 (0)24 372 47 00
Fax. +31 (0)24 372 47 07


Quipment India
903-904 Landmark,
Beside Titanium City Centre,
Anandnagar Road, 100ft Ring Road,
Ahmedabad
Gujarat - 380015 - India
Tel. +91 (0) 79 4009 7847




Disclaimer

This e-mail message (including any attachment) is intended only for the 
personal use of the recipient(s) named above. This message is confidential and 
may be legally privileged. If you are not an intended recipient, you may not 
review, copy or distribute this message. If you have received this 
communication in error, please notify us immediately by e-mail and delete the 
original message.




Re: Can't get policy to work correctly

2018-03-27 Thread Ivo Limmen
Well, it seems I got it working. Still not sure what I did wrong.

I finally got it working by moving the sub-select into a view and only using
the view in the policy:

   (u): ((hs_group_id IS NULL) OR (hs_group_id IN ( SELECT allowed_hs_groups.hs_group_id
         FROM allowed_hs_groups)))

The allowed_hs_groups:

CREATE OR REPLACE VIEW public.allowed_hs_groups AS
 SELECT gsg.hs_group_id
   FROM hs_group_sub_group gsg
   JOIN hs_system_user_sub_group sh ON sh.hs_sub_group_id = gsg.hs_sub_group_id
   JOIN system_user su ON su.id = sh.system_user_id
  WHERE su.login_name = "current_user"()::text;

And now it works...
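
For completeness, the corresponding policy definition would look something
like this (a sketch; the actual policy name and FOR clause on the system may
differ):

   -- assumes ALTER TABLE person ENABLE ROW LEVEL SECURITY has already been run
   CREATE POLICY person_visibility ON person
       FOR ALL
       USING (hs_group_id IS NULL
              OR hs_group_id IN (SELECT hs_group_id FROM allowed_hs_groups));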

Thanks for reading!

Best regards,
Ivo Limmen


On Tue, Mar 27, 2018 at 9:20 AM, Ivo Limmen  wrote:

> Hi list!
>
> I am a long-time postgres user but have only recently started using the more
> advanced stuff. Now that I use row level security I have run into a problem.
>
> I use postgres 9.5.12. I have multiple users; postgres, root and ivo. I
> have a table called person. It contains multiple rows that should be
> filtered using RLS. The table structure is a bit weird (not my design) so
> the policy on the table is: (from \z)
>
>    (u): ((hs_group_id IS NULL) OR (hs_group_id IN ( SELECT gsg.hs_group_id
>          FROM ((hs_group_sub_group gsg
>            JOIN hs_system_user_sub_group sh ON ((sh.hs_sub_group_id = gsg.hs_sub_group_id)))
>            JOIN system_user su ON ((su.id = sh.system_user_id)))
>          WHERE (su.login_name = ("current_user"())::text
>
> The tables that are used in the policy do not have a policy.
> All users have all privileges on all tables. postgres user is the owner of
> all tables (and has RLS bypass)
>
> When I execute:
> set role ivo;
> select * from person;
>
> I expect 2 rows but I only get 1 (left part of the policy; hs_group_id =
> null).
>
> Now the weird part:
>
> When doing a select * from any of the tables as the user ivo I see all the
> relevant data (nothing is filtered).
> Executing a select current_role also works.
>
> When I run:
>
> set role postgres;
> select * from person where
> ((hs_group_id IS NULL) OR (hs_group_id IN ( SELECT gsg.hs_group_id
>     FROM ((hs_group_sub_group gsg
>       JOIN hs_system_user_sub_group sh ON ((sh.hs_sub_group_id = gsg.hs_sub_group_id)))
>       JOIN system_user su ON ((su.id = sh.system_user_id)))
>     WHERE (su.login_name = 'ivo';
>
> I get the two rows I expected. This query is the same as the policy but I
> changed the current_user to a fixed argument as I am postgres in this case.
>
> I cannot figure out what I am doing wrong. I hope someone has a clue.
>
> Best regards,
> Ivo Limmen
>
>
> --
> Met vriendelijke groet,
> Ivo Limmen
>



-- 
Met vriendelijke groet,
Ivo Limmen


Can't get policy to work correctly

2018-03-27 Thread Ivo Limmen
Hi list!

I am a long-time postgres user but have only recently started using the more
advanced stuff. Now that I use row level security I have run into a problem.

I use postgres 9.5.12. I have multiple users; postgres, root and ivo. I
have a table called person. It contains multiple rows that should be
filtered using RLS. The table structure is a bit weird (not my design) so
the policy on the table is: (from \z)

   (u): ((hs_group_id IS NULL) OR (hs_group_id IN ( SELECT gsg.hs_group_id
         FROM ((hs_group_sub_group gsg
           JOIN hs_system_user_sub_group sh ON ((sh.hs_sub_group_id = gsg.hs_sub_group_id)))
           JOIN system_user su ON ((su.id = sh.system_user_id)))
         WHERE (su.login_name = ("current_user"())::text

The tables that are used in the policy do not have a policy.
All users have all privileges on all tables. postgres user is the owner of
all tables (and has RLS bypass)

When I execute:
set role ivo;
select * from person;

I expect 2 rows but I only get 1 (left part of the policy; hs_group_id =
null).

Now the weird part:

When doing a select * from any of the tables as the user ivo I see all the
relevant data (nothing is filtered).
Executing a select current_role also works.

When I run:

set role postgres;
select * from person where
((hs_group_id IS NULL) OR (hs_group_id IN ( SELECT gsg.hs_group_id
    FROM ((hs_group_sub_group gsg
      JOIN hs_system_user_sub_group sh ON ((sh.hs_sub_group_id = gsg.hs_sub_group_id)))
      JOIN system_user su ON ((su.id = sh.system_user_id)))
    WHERE (su.login_name = 'ivo';

I get the two rows I expected. This query is the same as the policy but I
changed the current_user to a fixed argument as I am postgres in this case.

I cannot figure out what I am doing wrong. I hope someone has a clue.

Best regards,
Ivo Limmen


-- 
Met vriendelijke groet,
Ivo Limmen