Re: Commands linked to pg_wrapper not working with non-root users

2018-04-11 Thread Adrian Klaver

On 04/11/2018 05:36 PM, Jonathan Leroy - Inikup wrote:

2018-04-12 1:49 GMT+02:00 Jonathan Leroy - Inikup :

Maybe I will try to "dissect" pg_wrapper to find where the error
occurs, but the script is quite complex...


OK, I've found the issue: pg_wrapper is trying to get the value of
the "data_directory" setting by reading the postgresql.conf file.
However, my user can't access the postgresql.conf file.

So I've bootstrapped a fresh Debian 8 VM with just postgresql-9.6, and
compared file permissions between that VM and my production servers.
All my production postgresql.conf files are chmod 640, versus 644 on my new VM.

After digging, I've found that the SaltStack recipe which configures all
my servers forces the file mode to 640...
I've fixed it: everything works now. :)


Great. Thanks for the follow up.



Thank you all for your help!




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Commands linked to pg_wrapper not working with non-root users

2018-04-11 Thread Jonathan Leroy - Inikup
2018-04-12 1:49 GMT+02:00 Jonathan Leroy - Inikup :
> Maybe I will try to "dissect" pg_wrapper to find where the error
> occurs, but the script is quite complex...

OK, I've found the issue: pg_wrapper is trying to get the value of
the "data_directory" setting by reading the postgresql.conf file.
However, my user can't access the postgresql.conf file.

So I've bootstrapped a fresh Debian 8 VM with just postgresql-9.6, and
compared file permissions between that VM and my production servers.
All my production postgresql.conf files are chmod 640, versus 644 on my new VM.

After digging, I've found that the SaltStack recipe which configures all
my servers forces the file mode to 640...
I've fixed it: everything works now. :)
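For the record, a sketch of the check and the fix; the paths assume the stock
Debian layout for a 9.6 "main" cluster, and the Salt snippet is only a
hypothetical illustration:

# can the unprivileged user read the file pg_wrapper parses?
sudo -u user1 test -r /etc/postgresql/9.6/main/postgresql.conf && echo readable

# restore the packaged 0644 mode
sudo chmod 0644 /etc/postgresql/9.6/main/postgresql.conf

# and in the SaltStack state, stop forcing 0640, e.g.
#   file.managed:
#     - mode: '0644'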

Thank you all for your help!

-- 
Jonathan Leroy



Re: Commands linked to pg_wrapper not working with non-root users

2018-04-11 Thread Adrian Klaver

On 04/11/2018 04:49 PM, Jonathan Leroy - Inikup wrote:

2018-04-12 1:38 GMT+02:00 Adrian Klaver :

Another thought: do you have the PGDATA environment variable set for your
user1?


Nope.

Maybe I will try to "dissect" pg_wrapper to find where the error
occurs, but the script is quite complex...



Sorry that should have been:

psql --cluster 10/main --version

--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Commands linked to pg_wrapper not working with non-root users

2018-04-11 Thread Adrian Klaver

On 04/11/2018 04:49 PM, Jonathan Leroy - Inikup wrote:

2018-04-12 1:38 GMT+02:00 Adrian Klaver :

Another thought: do you have the PGDATA environment variable set for your
user1?


Nope.

Maybe I will try to "dissect" pg_wrapper to find where the error
occurs, but the script is quite complex...



Does this happen for the other commands also? e.g:

pg_dump -V

Also what happens if you do:

psql 9.4 main -V

Substituting your cluster name for main if required.

--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Commands linked to pg_wrapper not working with non-root users

2018-04-11 Thread Jonathan Leroy - Inikup
2018-04-12 1:38 GMT+02:00 Adrian Klaver :
> Another thought: do you have the PGDATA environment variable set for your
> user1?

Nope.

Maybe I will try to "dissect" pg_wrapper to find where the error
occurs, but the script is quite complex...

-- 
Jonathan Leroy



Re: Commands linked to pg_wrapper not working with non-root users

2018-04-11 Thread Jonathan Leroy - Inikup
2018-04-12 1:32 GMT+02:00 Adrian Klaver :
> What are the permissions on /usr/bin/psql?
>
> On my Ubuntu 16.04 instance I have:
>
> lrwxrwxrwx 1 root root 37 Feb  8 04:55 /usr/bin/psql ->
> ../share/postgresql-common/pg_wrapper*

I have exactly the same permissions on my side.
I don't think it's a permissions issue as I have the same exact error
on all my servers, which use more or less the deb packages' default
configuration.

-- 
Jonathan Leroy



Re: Commands linked to pg_wrapper not working with non-root users

2018-04-11 Thread Adrian Klaver

On 04/11/2018 04:20 PM, Jonathan Leroy - Inikup wrote:

2018-04-12 0:26 GMT+02:00 Tim Cross :

Check your settings in /etc/postgresql-common/user_clusters. The wrapper
script uses that file to determine what databases to connect to or what
is the user default database cluster. It can also be overridden with a
local ~/.postgresqlrc, so check there are no old settings there as well.


Thank you Tim.

The file /etc/postgresql-common/user_clusters is empty on all servers
(it only contains comments). None of my users have a ~/.postgresqlrc
file.



Another thought: do you have the PGDATA environment variable set for your 
user1?


--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Commands linked to pg_wrapper not working with non-root users

2018-04-11 Thread Adrian Klaver

On 04/11/2018 04:20 PM, Jonathan Leroy - Inikup wrote:

2018-04-12 0:26 GMT+02:00 Tim Cross :

Check your settings in /etc/postgresql-common/user_clusters. The wrapper
script uses that file to determine what databases to connect to or what
is the user default database cluster. It can also be overridden with a
local ~/.postgresqlrc, so check there are no old settings there as well.


Thank you Tim.

The file /etc/postgresql-common/user_clusters is empty on all servers
(it only contains comments). None of my users have a ~/.postgresqlrc
file.



What are the permissions on /usr/bin/psql?

On my Ubuntu 16.04 instance I have:

lrwxrwxrwx 1 root root 37 Feb  8 04:55 /usr/bin/psql -> 
../share/postgresql-common/pg_wrapper*



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Fwd: Carto ODBC_FDW issue "ERROR: Connecting to driver" from PG to DB2

2018-04-11 Thread Adrian Klaver

On 04/11/2018 01:20 PM, karthik kumar wrote:


Db2 server on a docker container on my laptop: IP address 172.17.0.4
Db2 client where postgres server is installed is on a different 
container.  IP address: 172.17.0.3


Output of db2 list node directory

Node 2 entry:

  Node name                      = MYDB3
  Comment                        =
  Directory entry type           = LOCAL
  Protocol                       = TCPIP
  Hostname                       = 172.17.0.4
  Service name                   = 5

output of db2 list db directory

Database alias                       = SAMPLE
  Database name                        = SAMPLE
  Node name                            = MYDB3
  Database release level               = 14.00
  Comment                              =
  Directory entry type                 = Remote
  Catalog database partition number    = -1
  Alternate server hostname            =
  Alternate server port number         =


Are you using this DB2 ODBC driver?:

https://www.ibm.com/support/knowledgecenter/SSEPGG_11.1.0/com.ibm.db2.luw.apdv.cli.doc/doc/t0024166.html

If so check out the links below for conflict between specifying an 
Instance and Protocol =TCPIP:


https://www.ibm.com/support/knowledgecenter/SSEPGG_11.1.0/com.ibm.db2.luw.apdv.cli.doc/doc/r0024132.html

https://www.ibm.com/support/knowledgecenter/SSEPGG_11.1.0/com.ibm.db2.luw.apdv.cli.doc/doc/r0008807.html




$ cat /etc/odbc.ini
[SAMPLE]
Description = DB2 remote SAMPLE Database
Driver = SAMPLE
servername = 172.17.0.4
system = MYDB3
username = db2inst1
password = db2inst1
port = 5
PROTOCOL=TCPIP
Debug = 1
Commlog = 1

$ cat /etc/odbcinst.ini
[SAMPLE]
Instance       =  MYDB3
Description     = DB2 ODBC Driver
Driver          = /home/db2inst1/sqllib/lib64/libdb2.so
CommLog         = 1
FileUsage       = 1
#DontDLClose    = 1




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Commands linked to pg_wrapper not working with non-root users

2018-04-11 Thread Jonathan Leroy - Inikup
2018-04-12 0:26 GMT+02:00 Tim Cross :
> Check your settings in /etc/postgresql-common/user_clusters. The wrapper
> script uses that file to determine what databases to connect to or what
> is the user default database cluster. It can also be overridden with a
> local ~/.postgresqlrc, so check there are no old settings there as well.

Thank you Tim.

The file /etc/postgresql-common/user_clusters is empty on all servers
(it only contains comments). None of my users have a ~/.postgresqlrc
file.

-- 
Jonathan Leroy



Re: Fwd: Carto ODBC_FDW issue "ERROR: Connecting to driver" from PG to DB2

2018-04-11 Thread Adrian Klaver

On 04/11/2018 01:20 PM, karthik kumar wrote:

This is the error we see in the ODBC trace log

[ODBC][586][1523477070.240690][__handles.c][450]
         Exit:[SQL_SUCCESS]
             Environment = 0x556f874716a0
[ODBC][586][1523477070.240741][SQLSetEnvAttr.c][182]
         Entry:
             Environment = 0x556f874716a0
             Attribute = SQL_ATTR_ODBC_VERSION
             Value = 0x3
             StrLen = 0
[ODBC][586][1523477070.240762][SQLSetEnvAttr.c][349]
         Exit:[SQL_SUCCESS]
[ODBC][586][1523477070.240779][SQLAllocHandle.c][364]
         Entry:
             Handle Type = 2
             Input Handle = 0x556f874716a0
[ODBC][586][1523477070.240804][SQLAllocHandle.c][482]
         Exit:[SQL_SUCCESS]
             Output Handle = 0x556f874f2000
[ODBC][586][1523477070.240837][SQLDriverConnect.c][687]
         Entry:
             Connection = 0x556f874f2000
             Window Hdl = (nil)
             Str In = 
[database=SAMPLE;DRIVER=SAMPLE;UID=db2inst1;PWD=][length = 55 
(SQL_NTS)]

             Str Out = 0x7ffd31cf8b60
             Str Out Max = 1024
             Str Out Ptr = 0x7ffd31cf8b3c
             Completion = 1
         UNICODE Using encoding ASCII 'ISO8859-1' and UNICODE 'UCS-2LE'

         DIAG [08001] [IBM][CLI Driver] SQL30081N  A communication error 
has been detected. Communication protocol being used: "TCP/IP".  
Communication API being used: "SOCKETS".  Location where the error was 
detected: "::1".  Communication function detecting the error: 
"connect".  Protocol specific error code(s): "99", "*", "*".  SQLSTATE=08001


Protocol specific error code(s): "99":

https://www.ibm.com/support/knowledgecenter/en/SSEPGG_9.5.0/com.ibm.db2.luw.messages.doc/doc/r0052008.html#r0052008.dita__tcpcec


EADDRNOTAVAIL 	99 	The specified hostname or IP address is not available 
from the local machine.


Looks to me like it is not finding your server IP and is trying IPv6 
localhost with port 5000 and not finding it.


More comments below.




[ODBC][586][1523477070.331477][SQLDriverConnect.c][1273]
         Exit:[SQL_ERROR]

The trouble is, we are not convinced of the root cause, as using isql we 
can connect to db2 using the same account.


Env details

Db2 server on a docker container on my laptop: IP address 172.17.0.4
Db2 client where postgres server is installed is on a different 
container.  IP address: 172.17.0.3


Output of db2 list node directory

Node 2 entry:

  Node name                      = MYDB3
  Comment                        =
  Directory entry type           = LOCAL
  Protocol                       = TCPIP
  Hostname                       = 172.17.0.4
  Service name                   = 5

output of db2 list db directory

Database alias                       = SAMPLE
  Database name                        = SAMPLE
  Node name                            = MYDB3
  Database release level               = 14.00
  Comment                              =
  Directory entry type                 = Remote
  Catalog database partition number    = -1
  Alternate server hostname            =
  Alternate server port number         =



So you changed settings below.

What happened to UID AND PWD?

Also I would simplify and get rid of in odbc.ini:

system
PROTOCOL

and in odbcinst.ini:

Instance

Also, in odbcinst.ini I would rename the section header from SAMPLE to 
DB2, and change Driver in odbc.ini to DB2. That helps with identifying what you are 
actually using.
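For illustration, a trimmed-down pair of files along those lines (values copied 
from the config quoted below, with the section and Driver renamed to DB2 as 
suggested; treat this as a sketch, not a tested setup):

$ cat /etc/odbcinst.ini
[DB2]
Description = DB2 ODBC Driver
Driver      = /home/db2inst1/sqllib/lib64/libdb2.so
CommLog     = 1
FileUsage   = 1

$ cat /etc/odbc.ini
[SAMPLE]
Description = DB2 remote SAMPLE Database
Driver      = DB2
servername  = 172.17.0.4
username    = db2inst1
password    = db2inst1
port        = 5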


Did you also change the FOREIGN SERVER/USER MAPPING settings and if so 
to what?





$ cat /etc/odbc.ini
[SAMPLE]
Description = DB2 remote SAMPLE Database
Driver = SAMPLE
servername = 172.17.0.4
system = MYDB3
username = db2inst1
password = db2inst1
port = 5
PROTOCOL=TCPIP
Debug = 1
Commlog = 1

$ cat /etc/odbcinst.ini
[SAMPLE]
Instance       =  MYDB3
Description     = DB2 ODBC Driver
Driver          = /home/db2inst1/sqllib/lib64/libdb2.so
CommLog         = 1
FileUsage       = 1
#DontDLClose    = 1




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Commands linked to pg_wrapper not working with non-root users

2018-04-11 Thread Tim Cross

Jonathan Leroy - Inikup  writes:

> Hi,
>
> I'm using multiple versions of PostgreSQL from the postgresql.org
> repository (http://apt.postgresql.org/) on Debian Jessie, on multiple
> servers.
> Each postgresql-client-XX package depends on postgresql-client-common,
> which provides the pg_wrapper script
> (/usr/share/postgresql-common/pg_wrapper).
>
> A lot of PostgreSQL commands are linked to pg_wrapper. E.g.:
> /usr/bin/psql -> ../share/postgresql-common/pg_wrapper
>
> Here's my issue: when I'm logged in as a user which is not root or
> postgres, I can't use any of the commands linked to pg_wrapper:
>
> user1@server1:~ $ /usr/bin/psql --version
> Error: Invalid data directory
>
>
> However, everything works with postgres or root user:
>
> root@server1:~ # /usr/bin/psql --version
> psql (PostgreSQL) 9.4.16
>
>
> Also, everything works fine if I bypass pg_wrapper:
>
> user1@server1:~ $ /usr/lib/postgresql/9.4/bin/psql --version
> psql (PostgreSQL) 9.4.16
>
> Am I missing something?
>
> Thanks,

Check your settings in /etc/postgresql-common/user_clusters. The wrapper
script uses that file to determine what databases to connect to or what
is the user default database cluster. It can also be overridden with a
local ~/.postgresqlrc, so check there are no old settings there as well.
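A quick way to see what the wrapper has to work with (both commands ship with
postgresql-common; output obviously varies per host):

pg_lsclusters                               # known clusters, versions, ports, data directories
cat /etc/postgresql-common/user_clusters    # per-user/group default cluster mapping
ls -l ~/.postgresqlrc                       # per-user override, if present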

Tim

-- 
Tim Cross



Re: Commands linked to pg_wrapper not working with non-root users

2018-04-11 Thread Tom Lane
Jonathan Leroy - Inikup  writes:
> Here's my issue: when I'm logged in as a user which is not root or
> postgres, I can't use any of the commands linked to pg_wrapper:

> user1@server1:~ $ /usr/bin/psql --version
> Error: Invalid data directory

There's no error message with exactly that spelling in PG proper,
so it must be coming out of the wrapper script.  At a guess, the
wrapper script thinks it should be able to examine the data directory,
but it can't because of restrictive file permissions.  Why your
setup is triggering that when it works for other people, I can't say.
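If it isn't obvious which file the wrapper is choking on, tracing it shows the
failing open() directly; a sketch assuming strace is installed (the offending
path will be whatever it actually tries to read):

strace -f -e trace=open,openat -o /tmp/pg_wrapper.trace psql --version
grep -E 'EACCES|postgresql.conf' /tmp/pg_wrapper.trace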

regards, tom lane



Commands linked to pg_wrapper not working with non-root users

2018-04-11 Thread Jonathan Leroy - Inikup
Hi,

I'm using multiple versions of PostgreSQL from the postgresql.org
repository (http://apt.postgresql.org/) on Debian Jessie, on multiple
servers.
Each postgresql-client-XX package depends on postgresql-client-common,
which provides the pg_wrapper script
(/usr/share/postgresql-common/pg_wrapper).

A lot of PostgreSQL commands are linked to pg_wrapper. E.g.:
/usr/bin/psql -> ../share/postgresql-common/pg_wrapper

Here's my issue: when I'm logged in as a user which is not root or
postgres, I can't use any of the commands linked to pg_wrapper:

user1@server1:~ $ /usr/bin/psql --version
Error: Invalid data directory


However, everything works with postgres or root user:

root@server1:~ # /usr/bin/psql --version
psql (PostgreSQL) 9.4.16


Also, everything works fine if I bypass pg_wrapper:

user1@server1:~ $ /usr/lib/postgresql/9.4/bin/psql --version
psql (PostgreSQL) 9.4.16

Am I missing something?

Thanks,

-- 
Jonathan Leroy



Re: Fwd: Carto ODBC_FDW issue "ERROR: Connecting to driver" from PG to DB2

2018-04-11 Thread karthik kumar
>>
> I'm not a networking guru, but it sure looks like something is trying to
> connect with an IPv6 loopback address.  To me, this is the smoking gun.
>
>
>
>
Yes, that much we figured out here.  The question is, why is isql able to
connect, but not the FDW via psql?  In the end, both isql and the FDW should be
using the same underlying mechanism to talk to DB2.
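One difference worth ruling out: isql is normally run as your own shell user with
the DB2 instance environment loaded, while the FDW connects from inside the
postgres server process. A hedged sanity check, reusing the DSN and credentials
already quoted in the thread:

sudo -u postgres env | grep -iE 'db2|odbc'          # does the server's OS user see the DB2/ODBC environment?
sudo -u postgres isql -v SAMPLE db2inst1 db2inst1   # same DSN, but as the postgres OS user

If this fails the same way, the difference is the environment the server runs
under rather than odbc_fdw itself.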


Re: Fwd: Carto ODBC_FDW issue "ERROR: Connecting to driver" from PG to DB2

2018-04-11 Thread Steven Hirsch

On Wed, 11 Apr 2018, karthik kumar wrote:


This is the error we see in the ODBC trace log

        DIAG [08001] [IBM][CLI Driver] SQL30081N  A communication error has 
been detected. Communication
protocol being used: "TCP/IP".  Communication API being used: "SOCKETS".  
Location where the error was
detected: "::1".  Communication function detecting the error: "connect".  
Protocol specific error code(s):

^ Bzzzt!!


"99", "*", "*".  SQLSTATE=08001


I'm not a networking guru, but it sure looks like something is trying to 
connect with an IPv6 loopback address.  To me, this is the smoking gun.



--

Re: Fwd: Carto ODBC_FDW issue "ERROR: Connecting to driver" from PG to DB2

2018-04-11 Thread karthik kumar
 This is the error we see in the ODBC trace log

[ODBC][586][1523477070.240690][__handles.c][450]
Exit:[SQL_SUCCESS]
Environment = 0x556f874716a0
[ODBC][586][1523477070.240741][SQLSetEnvAttr.c][182]
Entry:
Environment = 0x556f874716a0
Attribute = SQL_ATTR_ODBC_VERSION
Value = 0x3
StrLen = 0
[ODBC][586][1523477070.240762][SQLSetEnvAttr.c][349]
Exit:[SQL_SUCCESS]
[ODBC][586][1523477070.240779][SQLAllocHandle.c][364]
Entry:
Handle Type = 2
Input Handle = 0x556f874716a0
[ODBC][586][1523477070.240804][SQLAllocHandle.c][482]
Exit:[SQL_SUCCESS]
Output Handle = 0x556f874f2000
[ODBC][586][1523477070.240837][SQLDriverConnect.c][687]
Entry:
Connection = 0x556f874f2000
Window Hdl = (nil)
Str In =
[database=SAMPLE;DRIVER=SAMPLE;UID=db2inst1;PWD=][length
= 55 (SQL_NTS)]
Str Out = 0x7ffd31cf8b60
Str Out Max = 1024
Str Out Ptr = 0x7ffd31cf8b3c
Completion = 1
UNICODE Using encoding ASCII 'ISO8859-1' and UNICODE 'UCS-2LE'

DIAG [08001] [IBM][CLI Driver] SQL30081N  A communication error has
been detected. Communication protocol being used: "TCP/IP".  Communication
API being used: "SOCKETS".  Location where the error was detected: "::1".
Communication function detecting the error: "connect".  Protocol specific
error code(s): "99", "*", "*".  SQLSTATE=08001


[ODBC][586][1523477070.331477][SQLDriverConnect.c][1273]
Exit:[SQL_ERROR]

The trouble is, we are not convinced of the root cause, as using isql we
can connect to db2 using the same account.

Env details

Db2 server on a docker container on my laptop: IP address 172.17.0.4
Db2 client where postgres server is installed is on a different container.
IP address: 172.17.0.3

Output of db2 list node directory

Node 2 entry:

 Node name  = MYDB3
 Comment=
 Directory entry type   = LOCAL
 Protocol   = TCPIP
 Hostname   = 172.17.0.4
 Service name   = 5

output of db2 list db directory

Database alias   = SAMPLE
 Database name= SAMPLE
 Node name= MYDB3
 Database release level   = 14.00
 Comment  =
 Directory entry type = Remote
 Catalog database partition number= -1
 Alternate server hostname=
 Alternate server port number =

$ cat /etc/odbc.ini
[SAMPLE]
Description = DB2 remote SAMPLE Database
Driver = SAMPLE
servername = 172.17.0.4
system = MYDB3
username = db2inst1
password = db2inst1
port = 5
PROTOCOL=TCPIP
Debug = 1
Commlog = 1

$ cat /etc/odbcinst.ini
[SAMPLE]
Instance   =  MYDB3
Description = DB2 ODBC Driver
Driver  = /home/db2inst1/sqllib/lib64/libdb2.so
CommLog = 1
FileUsage   = 1
#DontDLClose= 1


Re: Archiving Data to Another DB?

2018-04-11 Thread Adrian Klaver

On 04/11/2018 11:59 AM, Kumar, Virendra wrote:

Does this apply to SELECT calls as well, or only to DML?


Easy enough to test:

test=# \d projection

Foreign table "public.projection"
  ...

Server: fdw_test_server
FDW options: (schema_name 'public', table_name 'projection')

test=# select count(*) from projection ;
 count
---
 28430
(1 row)

With log_statement = 'all':

select * from projection ;

postgres-2018-04-11 13:04:33.871 PDT-0LOG:  statement: select * from 
projection ;
postgres-2018-04-11 13:04:33.872 PDT-0LOG:  statement: START TRANSACTION 
ISOLATION LEVEL REPEATABLE READ
postgres-2018-04-11 13:04:33.872 PDT-0LOG:  execute : DECLARE 
c1 CURSOR FOR
SELECT line_id, p_item_no, c_id, method, year, qty, sub_method, 
proj_note, item_key, pot_ct, trial, ts_insert, ts_update, user_insert, 
user_update, link_key, v_number FROM public.projection

postgres-2018-04-11 13:04:33.872 PDT-0LOG:  statement: FETCH 100 FROM c1
postgres-2018-04-11 13:04:33.873 PDT-0LOG:  statement: FETCH 100 FROM c1
postgres-2018-04-11 13:04:33.874 PDT-0LOG:  statement: FETCH 100 FROM c1
postgres-2018-04-11 13:04:33.875 PDT-0LOG:  statement: FETCH 100 FROM c1
postgres-2018-04-11 13:04:33.876 PDT-0LOG:  statement: FETCH 100 FROM c1
postgres-2018-04-11 13:04:33.877 PDT-0LOG:  statement: FETCH 100 FROM c1
postgres-2018-04-11 13:04:33.878 PDT-0LOG:  statement: FETCH 100 FROM c1
postgres-2018-04-11 13:04:33.879 PDT-0LOG:  statement: FETCH 100 FROM c1
postgres-2018-04-11 13:04:33.880 PDT-0LOG:  statement: FETCH 100 FROM c1
postgres-2018-04-11 13:04:33.881 PDT-0LOG:  statement: FETCH 100 FROM c1
postgres-2018-04-11 13:04:33.882 PDT-0LOG:  statement: FETCH 100 FROM c1

...

test=# explain analyse select * from projection ;
                                               QUERY PLAN
---------------------------------------------------------------------------------------------------------
 Foreign Scan on projection  (cost=100.00..115.34 rows=178 width=435) (actual time=0.844..163.493 rows=28430 loops=1)
 Planning time: 0.077 ms
 Execution time: 164.735 ms
(3 rows)
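If the 100-row FETCH batches above become a concern for large remote SELECTs,
postgres_fdw's fetch_size option raises the batch size per round trip; for
example, against the server used above:

ALTER SERVER fdw_test_server OPTIONS (ADD fetch_size '1000');

It only changes how many rows each fetch pulls, not how DML statements are sent.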




I am planning to use postgres_fdw, but if it is going by one row at a 
time there will be a lot of round trips and it will defeat the purpose.


Regards,

Virendra.

*From:*Don Seiler [mailto:d...@seiler.us]
*Sent:* Wednesday, April 11, 2018 2:53 PM
*To:* Adrian Klaver
*Cc:* pgsql-gene...@postgresql.org
*Subject:* Re: Archiving Data to Another DB?

On Wed, Apr 11, 2018 at 1:38 PM, Don Seiler wrote:


Yeah, I saw the same with a 132 row insert. Now imagine that with a 
monthly 50 million row insert or delete. :p Thanks for the confirmation!


I went back to look at the postgres logs on my dev server. These logs 
are rotated once they hit 2G in size. One typical log from the middle of 
my test last night hit that in 13 minutes and had over 5.2 million 
DELETE calls. There are quite a few logs like this. That would not be fun 
for the disk space on the log volume, either.


Don.

--

Don Seiler
www.seiler.us 







--
Adrian Klaver
adrian.kla...@aklaver.com



RE: Archiving Data to Another DB?

2018-04-11 Thread Kumar, Virendra
Does this apply to SELECT calls as well, or only to DML?
I am planning to use postgres_fdw, but if it is going by one row at a time there 
will be a lot of round trips and it will defeat the purpose.

Regards,
Virendra.

From: Don Seiler [mailto:d...@seiler.us]
Sent: Wednesday, April 11, 2018 2:53 PM
To: Adrian Klaver
Cc: pgsql-gene...@postgresql.org
Subject: Re: Archiving Data to Another DB?

On Wed, Apr 11, 2018 at 1:38 PM, Don Seiler <d...@seiler.us> wrote:

Yeah, I saw the same with a 132 row insert. Now imagine that with a monthly 50 
million row insert or delete. :p Thanks for the confirmation!

I went back to look at the postgres logs on my dev server. These logs are 
rotated once they hit 2G in size. One typical log from the middle of my test 
last night hit that in 13 minutes and had over 5.2 million DELETE calls. There 
are quite a few logs like this. That would not be fun for the disk space on the 
log volume, either.

Don.
--
Don Seiler
www.seiler.us





Re: Archiving Data to Another DB?

2018-04-11 Thread Don Seiler
On Wed, Apr 11, 2018 at 1:38 PM, Don Seiler  wrote:

>
> Yeah, I saw the same with a 132 row insert. Now imagine that with a
> monthly 50 million row insert or delete. :p Thanks for the confirmation!
>

I went back to look at the postgres logs on my dev server. These logs are
rotated once they hit 2G in size. One typical log from the middle of my
test last night hit that in 13 minutes and had over 5.2 million DELETE
calls. There are quite a few logs like this. That would not be fun for the
disk space on the log volume, either.

Don.
-- 
Don Seiler
www.seiler.us


Re: Fwd: Carto ODBC_FDW issue "ERROR: Connecting to driver" from PG to DB2

2018-04-11 Thread Adrian Klaver

On 04/11/2018 11:22 AM, karthik kumar wrote:

Hello Adrian

I built fdw with debug option and ran it with debug option.  Here is the 
output


postgres=# select * from odbc_testt;
DEBUG:  StartTransactionCommand
DEBUG:  StartTransaction
DEBUG:  name: unnamed; blockState:       DEFAULT; state: INPROGR, 
xid/subid/cid: 0/1/0, nestlvl: 1, children:

DEBUG:  odbcGetForeignRelSize
DEBUG:  odbcGetTableOptions
DEBUG:  odbcGetOptions
DEBUG:  extract_odbcFdwOptions
DEBUG:  CONN STR: database=SAMPLE;DRIVER=SAMPLE;UID=db2inst1;PWD=db2inst1
DEBUG:  Error result (-1): Connecting to driver
ERROR:  Connecting to driver

Looking at the code this is the place where the error is coming from:

/* Allocate a connection handle */
     SQLAllocHandle(SQL_HANDLE_DBC, *env, dbc);
     /* Connect to the DSN */
     ret = SQLDriverConnect(*dbc, NULL, (SQLCHAR *) conn_str.data, SQL_NTS,
                            OutConnStr, 1024, &OutConnStrLen, 
SQL_DRIVER_COMPLETE);

     check_return(ret, "Connecting to driver", dbc, SQL_HANDLE_DBC);
}

Trouble is, I can't get any unixodbc manual to indicate the meaning of -1.



Per Steven's suggestion:

https://docs.microsoft.com/en-us/sql/connect/odbc/linux-mac/data-access-tracing-with-the-odbc-driver-on-linux

--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Archiving Data to Another DB?

2018-04-11 Thread Don Seiler
On Wed, Apr 11, 2018 at 1:33 PM, Adrian Klaver 
wrote:
>
> A test case here confirms it sends individual INSERTS:
>
> test_(postgres)# insert into fdw_test_table select * from fdw_test;
> INSERT 0 3
>
> Where fdw_test_table is the remote table and fdw_test is the local one.
>
> postgres-2018-04-11 11:29:23.812 PDT-0LOG:  statement: insert into
> fdw_test_table select * from fdw_test;
> postgres-2018-04-11 11:29:23.812 PDT-0LOG:  execute pgsql_fdw_prep_2:
> INSERT INTO public.fdw_test_rcv(id, fld_1, fld_2) VALUES ($1, $2, $3)
> postgres-2018-04-11 11:29:23.812 PDT-0DETAIL:  parameters: $1 = '1', $2 =
> 'one', $3 = 't'
> postgres-2018-04-11 11:29:23.813 PDT-10140LOG:  execute pgsql_fdw_prep_2:
> INSERT INTO public.fdw_test_rcv(id, fld_1, fld_2) VALUES ($1, $2, $3)
> postgres-2018-04-11 11:29:23.813 PDT-10140DETAIL:  parameters: $1 = '2',
> $2 = 'two', $3 = 'f'
> postgres-2018-04-11 11:29:23.813 PDT-10140LOG:  execute pgsql_fdw_prep_2:
> INSERT INTO public.fdw_test_rcv(id, fld_1, fld_2) VALUES ($1, $2, $3)
> postgres-2018-04-11 11:29:23.813 PDT-10140DETAIL:  parameters: $1 = '3',
> $2 = 'three', $3 = 'f'
>
> So much for that idea(:=


Yeah, I saw the same with a 132 row insert. Now imagine that with a monthly
50 million row insert or delete. :p Thanks for the confirmation!

I'm definitely leaning towards the copy/load/delete method.

Don.

-- 
Don Seiler
www.seiler.us


Re: Archiving Data to Another DB?

2018-04-11 Thread Don Seiler
On Wed, Apr 11, 2018 at 1:13 PM, Don Seiler  wrote:

>
> I had considered this as well, as this would allow me to rollback the
> delete (assuming my intel on postgres_fdw transactions was correct, which
> it may not be after all). I wondered if a remote insert would be broken up
> into individual inserts like the remote delete was, as that would be
> equally unappealing for the same reasons. But obviously worth confirming.
>

I have now confirmed that a remote INSERT does get broken up into a single
INSERT for each row.

-- 
Don Seiler
www.seiler.us


Re: Archiving Data to Another DB?

2018-04-11 Thread Adrian Klaver

On 04/11/2018 11:13 AM, Don Seiler wrote:
On Wed, Apr 11, 2018 at 12:58 PM, Adrian Klaver <adrian.kla...@aklaver.com> wrote:


"F.33.3. Transaction Management

During a query that references any remote tables on a foreign
server, postgres_fdw opens a transaction on the remote server if one
is not already open corresponding to the current local transaction.
The remote transaction is committed or aborted when the local
transaction commits or aborts. Savepoints are similarly managed by
creating corresponding remote savepoints.

..."


Interesting, I'll work on a test case later!

I may be missing something, but why not reverse your original set up?
Assuming transactional behavior works as expected something like:

1) Setup postgres_fdw in main database.

2) Create FOREIGN TABLE pointing to table in archive database.

3) INSERT INTO/SELECT from main table to archive table.

4) DELETE FROM main table.


I had considered this as well, as this would allow me to rollback the 
delete (assuming my intel on postgres_fdw transactions was correct, 
which it may not be after all). I wondered if a remote insert would be 
broken up into individual inserts like the remote delete was, as that 
would be equally unappealing for the same reasons. But obviously worth 
confirming.


A test case here confirms it sends individual INSERTS:

test_(postgres)# insert into fdw_test_table select * from fdw_test;
INSERT 0 3

Where fdw_test_table is the remote table and fdw_test is the local one.

postgres-2018-04-11 11:29:23.812 PDT-0LOG:  statement: insert into 
fdw_test_table select * from fdw_test;
postgres-2018-04-11 11:29:23.812 PDT-0LOG:  execute pgsql_fdw_prep_2: 
INSERT INTO public.fdw_test_rcv(id, fld_1, fld_2) VALUES ($1, $2, $3)
postgres-2018-04-11 11:29:23.812 PDT-0DETAIL:  parameters: $1 = '1', $2 
= 'one', $3 = 't'
postgres-2018-04-11 11:29:23.813 PDT-10140LOG:  execute 
pgsql_fdw_prep_2: INSERT INTO public.fdw_test_rcv(id, fld_1, fld_2) 
VALUES ($1, $2, $3)
postgres-2018-04-11 11:29:23.813 PDT-10140DETAIL:  parameters: $1 = '2', 
$2 = 'two', $3 = 'f'
postgres-2018-04-11 11:29:23.813 PDT-10140LOG:  execute 
pgsql_fdw_prep_2: INSERT INTO public.fdw_test_rcv(id, fld_1, fld_2) 
VALUES ($1, $2, $3)
postgres-2018-04-11 11:29:23.813 PDT-10140DETAIL:  parameters: $1 = '3', 
$2 = 'three', $3 = 'f'


So much for that idea(:



Don.

--
Don Seiler
www.seiler.us 



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Fwd: Carto ODBC_FDW issue "ERROR: Connecting to driver" from PG to DB2

2018-04-11 Thread Steven Hirsch

On Wed, 11 Apr 2018, karthik kumar wrote:


Hello Adrian

I built fdw with debug option and ran it with debug option.  Here is the output

postgres=# select * from odbc_testt;
DEBUG:  StartTransactionCommand
DEBUG:  StartTransaction
DEBUG:  name: unnamed; blockState:       DEFAULT; state: INPROGR, 
xid/subid/cid: 0/1/0, nestlvl: 1, children:
DEBUG:  odbcGetForeignRelSize
DEBUG:  odbcGetTableOptions
DEBUG:  odbcGetOptions
DEBUG:  extract_odbcFdwOptions
DEBUG:  CONN STR: database=SAMPLE;DRIVER=SAMPLE;UID=db2inst1;PWD=db2inst1
DEBUG:  Error result (-1): Connecting to driver
ERROR:  Connecting to driver

Looking at the code this is the place where the error is coming from:

/* Allocate a connection handle */
    SQLAllocHandle(SQL_HANDLE_DBC, *env, dbc);
    /* Connect to the DSN */
    ret = SQLDriverConnect(*dbc, NULL, (SQLCHAR *) conn_str.data, SQL_NTS,
                           OutConnStr, 1024, &OutConnStrLen, 
SQL_DRIVER_COMPLETE);
    check_return(ret, "Connecting to driver", dbc, SQL_HANDLE_DBC);
}

Trouble is, I can't get any unixodbc manual to indicate the meaning of -1.


Doesn't the DB2 driver have a debug/trace option you can turn on with an 
environment variable?  That may tell you more.


I'd also suggest you try connecting to the database from the same host 
using the db2clp.  That gets the FDW out of the picture and may give 
better diagnostics.
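For example (DB2 CLP commands, reusing the alias and credentials quoted earlier
in the thread):

su - db2inst1
db2 connect to SAMPLE user db2inst1 using db2inst1
db2 "select 1 from sysibm.sysdummy1"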



--

Re: Fwd: Carto ODBC_FDW issue "ERROR: Connecting to driver" from PG to DB2

2018-04-11 Thread karthik kumar
Hello Adrian

I built fdw with debug option and ran it with debug option.  Here is the
output

postgres=# select * from odbc_testt;
DEBUG:  StartTransactionCommand
DEBUG:  StartTransaction
DEBUG:  name: unnamed; blockState:   DEFAULT; state: INPROGR,
xid/subid/cid: 0/1/0, nestlvl: 1, children:
DEBUG:  odbcGetForeignRelSize
DEBUG:  odbcGetTableOptions
DEBUG:  odbcGetOptions
DEBUG:  extract_odbcFdwOptions
DEBUG:  CONN STR: database=SAMPLE;DRIVER=SAMPLE;UID=db2inst1;PWD=db2inst1
DEBUG:  Error result (-1): Connecting to driver
ERROR:  Connecting to driver

Looking at the code this is the place where the error is coming from:

/* Allocate a connection handle */
SQLAllocHandle(SQL_HANDLE_DBC, *env, dbc);
/* Connect to the DSN */
ret = SQLDriverConnect(*dbc, NULL, (SQLCHAR *) conn_str.data, SQL_NTS,
   OutConnStr, 1024, &OutConnStrLen,
SQL_DRIVER_COMPLETE);
check_return(ret, "Connecting to driver", dbc, SQL_HANDLE_DBC);
}

Trouble is, I can't get any unixodbc manual to indicate the meaning of -1.


Re: [GENERAL] missing public on schema public

2018-04-11 Thread Tom Lane
Stephen Frost  writes:
> * Tom Lane (t...@sss.pgh.pa.us) wrote:
>> Yeah, I can reproduce this.  I suspect it got broken by Stephen's hacking
>> around with default ACLs.  A simple example is

> Yes, it's related to the work I did with pg_dump's ACL handling, because
> we're no longer just always including the whole revoke/grant set of ACLs
> for everything in the output.

I see that this is listed on the open items for v11, but it's hard to
justify it being there, because the bug exists only in 9.6 and 10.
(We fixed it in HEAD as part of the pg_dump-vs-pg_dumpall refactoring.)

I gather, from the lack of any progress since November, that you're
probably not going to fix it in the back branches.  I'm not excited
about working on it either, but I dislike leaving such a bug unfixed.

In any case, I think it should be removed from the v11 list.  The "older
bugs" section is meant to capture pre-existing bugs that we might possibly
fix as part of v11 stabilization, and this isn't that.

regards, tom lane



Re: Archiving Data to Another DB?

2018-04-11 Thread Don Seiler
On Wed, Apr 11, 2018 at 12:58 PM, Adrian Klaver 
wrote:
>
> "F.33.3. Transaction Management
>
> During a query that references any remote tables on a foreign server,
> postgres_fdw opens a transaction on the remote server if one is not already
> open corresponding to the current local transaction. The remote transaction
> is committed or aborted when the local transaction commits or aborts.
> Savepoints are similarly managed by creating corresponding remote
> savepoints.
>
> ..."


Interesting, I'll work on a test case later!


> I may be missing something, but why not reverse your original set up?
> Assuming transactional behavior works as expected something like:
>
> 1) Setup postgres_fdw in main database.
>
> 2) Create FOREIGN TABLE pointing to table in archive database.
>
> 3) INSERT INTO/SELECT from main table to archive table.
>
> 4) DELETE FROM main table.


I had considered this as well, as this would allow me to rollback the
delete (assuming my intel on postgres_fdw transactions was correct, which
it may not be after all). I wondered if a remote insert would be broken up
into individual inserts like the remote delete was, as that would be
equally unappealing for the same reasons. But obviously worth confirming.

Don.

-- 
Don Seiler
www.seiler.us


Re: Archiving Data to Another DB?

2018-04-11 Thread Adrian Klaver

On 04/11/2018 09:15 AM, Don Seiler wrote:

Let's say I have two DBs: main (9.6.6) and archive (10.2).

I have a table in main where I want to archive data older than 60 days. 
For various reasons, the table is not partitioned, so for now we must 
use DELETE. The destination table in the archive DB is partitioned with 
the new Pg10 partitioning.


My initial plan was to have a stored procedure on the archive DB use 
postgres_fdw to do an INSERT INTO / SELECT to select the data remotely 
and insert into the local archive table. It would then issue a single 
DELETE command to remotely remove the data from the main DB. However I 
found that doing this resulted in the main DB calling thousands (perhaps 
millions if it's one-per-row) of individual DELETE statements based on a 
ctid column. Aside from WAL behavior concerns, it is flooding my 
postgresql server logs since I log any DML.


On top of that, I'm told that a remote DELETE wouldn't be transactional, 
so if I were to compare inserted rows vs deleted rows and found a 
mismatch, I couldn't just rollback the DELETE. I plan to verify this 
with a small test case later but for now I'll assume this to be true.


That would be worth testing:

https://www.postgresql.org/docs/9.6/static/postgres-fdw.html

"F.33.3. Transaction Management

During a query that references any remote tables on a foreign server, 
postgres_fdw opens a transaction on the remote server if one is not 
already open corresponding to the current local transaction. The remote 
transaction is committed or aborted when the local transaction commits 
or aborts. Savepoints are similarly managed by creating corresponding 
remote savepoints.


..."



Right now I'm thinking of falling back to the far-less-elegant method of 
dumping the data to a flat file via COPY, running psql to connect to the 
archive DB remotely and running a COPY to load the data (or maybe 
transferring the flat file to the archive DB to load it there, 
offloading that part of the workload), then deleting the data from the 
main DB. I could capture the rows dumped in a control table and compare 
the rows deleted against that and then rollback the delete if necessary.


I may be missing something, but why not reverse your original set up?
Assuming transactional behavior works as expected something like:

1) Setup postgres_fdw in main database.

2) Create FOREIGN TABLE pointing to table in archive database.

3) INSERT INTO/SELECT from main table to archive table.

4) DELETE FROM main table.
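A rough sketch of those four steps, with hypothetical object and column names
(the whole point of the test is whether the remote INSERT and the local DELETE
really commit or roll back together):

-- on the main (9.6) database
CREATE EXTENSION postgres_fdw;
CREATE SERVER archive_srv FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'archive-host', dbname 'archive');
CREATE USER MAPPING FOR CURRENT_USER SERVER archive_srv
    OPTIONS (user 'archiver', password 'secret');
CREATE FOREIGN TABLE events_archive (
    id      bigint,
    created timestamptz,
    payload text
) SERVER archive_srv OPTIONS (schema_name 'public', table_name 'events');

-- then, in a single local transaction:
BEGIN;
INSERT INTO events_archive
    SELECT * FROM events WHERE created < now() - interval '60 days';
DELETE FROM events WHERE created < now() - interval '60 days';
COMMIT;  -- or ROLLBACK if the copied/deleted row counts disagree

(As it turns out elsewhere in the thread, the INSERT is still shipped one row at
a time, so this answers the transactional question but not the chattiness one.)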



Like I said, not elegant, but I don't want to risk losing data that 
wasn't successfully archived to the archive DB. I'm very interested to 
hear what others might be doing for tasks like this.


Don.

--
Don Seiler
www.seiler.us 



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Archiving Data to Another DB?

2018-04-11 Thread Rob Sargent



On 04/11/2018 10:24 AM, Ron wrote:



On 04/11/2018 11:15 AM, Don Seiler wrote:

Let's say I have two DBs: main (9.6.6) and archive (10.2).

I have a table in main where I want to archive data older than 60 
days. For various reasons, the table is not partitioned, so for now 
we must use DELETE. The destination table in the archive DB is 
partitioned with the new Pg10 partitioning.


My initial plan was to have a stored procedure on the archive DB use 
postgres_fdw to do an INSERT INTO / SELECT to select the data 
remotely and insert into the local archive table. It would then issue 
a single DELETE command to remotely remove the data from the main DB. 
However I found that doing this resulted in the main DB calling 
thousands (perhaps millions if it's one-per-row) of individual DELETE 
statements based on a ctid column. Aside from WAL behavior concerns, 
it is flooding my postgresql server logs since I log any DML.


On top of that, I'm told that a remote DELETE wouldn't be 
transactional, so if I were to compare inserted rows vs deleted rows 
and found a mismatch, I couldn't just rollback the DELETE. I plan to 
verify this with a small test case later but for now I'll assume this 
to be true.


Right now I'm thinking of falling back to the far-less-elegant method 
of dumping the data to a flat file via COPY, running psql to connect 
to the archive DB remotely and running a COPY to load the data (or 
maybe transferring the flat file to the archive DB to load it there, 
offloading that part of the workload), then deleting the data from 
the main DB. I could capture the rows dumped in a control table and 
compare the rows deleted against that and then rollback the delete if 
necessary.


Like I said, not elegant, but I don't want to risk losing data that 
wasn't successfully archived to the archive DB. I'm very interested 
to hear what others might be doing for tasks like this.


It might not be elegant, but a COPY / DELETE / LOAD is granular, so 
you can restart at any point.




I might be inclined to COPY/LOAD/check/DELETE




Re: Archiving Data to Another DB?

2018-04-11 Thread David G. Johnston
On Wed, Apr 11, 2018 at 9:15 AM, Don Seiler  wrote:

> Right now I'm thinking of falling back to the far-less-elegant method of
> dumping the data to a flat file via COPY, running psql to connect to the
> archive DB remotely and running a COPY to load the data (or maybe
> transferring the flat file to the archive DB to load it there, offloading
> that part of the workload), then deleting the data from the main DB. I
> could capture the rows dumped in a control table and compare the rows
> deleted against that and then rollback the delete if necessary.
>

​This is what I would lean toward - with a data compression/decompression
step surrounding the network transfer.

Not sure exactly how it might fit in but don't forget about "DELETE FROM
... RETURNING *" and being able to place that into a CTE/WITH clause (same
goes for insert/update)
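Something along these lines, with hypothetical table names; because both
statements live in one command, they succeed or fail together:

WITH moved AS (
    DELETE FROM events
    WHERE created < now() - interval '60 days'
    RETURNING *
)
INSERT INTO events_archive
SELECT * FROM moved;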

David J.
​


Re: Archiving Data to Another DB?

2018-04-11 Thread Ron



On 04/11/2018 11:15 AM, Don Seiler wrote:

Let's say I have two DBs: main (9.6.6) and archive (10.2).

I have a table in main where I want to archive data older than 60 days. 
For various reasons, the table is not partitioned, so for now we must use 
DELETE. The destination table in the archive DB is partitioned with the 
new Pg10 partitioning.


My initial plan was to have a stored procedure on the archive DB use 
postgres_fdw to do an INSERT INTO / SELECT to select the data remotely and 
insert into the local archive table. It would then issue a single DELETE 
command to remotely remove the data from the main DB. However I found that 
doing this resulted in the main DB calling thousands (perhaps millions if 
it's one-per-row) of individual DELETE statements based on a ctid column. 
Aside from WAL behavior concerns, it is flooding my postgresql server logs 
since I log any DML.


On top of that, I'm told that a remote DELETE wouldn't be transactional, 
so if I were to compare inserted rows vs deleted rows and found a 
mismatch, I couldn't just rollback the DELETE. I plan to verify this with 
a small test case later but for now I'll assume this to be true.


Right now I'm thinking of falling back to the far-less-elegant method of 
dumping the data to a flat file via COPY, running psql to connect to the 
archive DB remotely and running a COPY to load the data (or maybe 
transferring the flat file to the archive DB to load it there, offloading 
that part of the workload), then deleting the data from the main DB. I 
could capture the rows dumped in a control table and compare the rows 
deleted against that and then rollback the delete if necessary.


Like I said, not elegant, but I don't want to risk losing data that wasn't 
successfully archived to the archive DB. I'm very interested to hear what 
others might be doing for tasks like this.


It might not be elegant, but a COPY / DELETE / LOAD is granular, so you can 
restart at any point.



--
Angular momentum makes the world go 'round.



Archiving Data to Another DB?

2018-04-11 Thread Don Seiler
Let's say I have two DBs: main (9.6.6) and archive (10.2).

I have a table in main where I want to archive data older than 60 days. For
various reasons, the table is not partitioned, so for now we must use
DELETE. The destination table in the archive DB is partitioned with the new
Pg10 partitioning.

My initial plan was to have a stored procedure on the archive DB use
postgres_fdw to do an INSERT INTO / SELECT to select the data remotely and
insert into the local archive table. It would then issue a single DELETE
command to remotely remove the data from the main DB. However I found that
doing this resulted in the main DB calling thousands (perhaps millions if
it's one-per-row) of individual DELETE statements based on a ctid column.
Aside from WAL behavior concerns, it is flooding my postgresql server logs
since I log any DML.

On top of that, I'm told that a remote DELETE wouldn't be transactional, so
if I were to compare inserted rows vs deleted rows and found a mismatch, I
couldn't just rollback the DELETE. I plan to verify this with a small test
case later but for now I'll assume this to be true.

Right now I'm thinking of falling back to the far-less-elegant method of
dumping the data to a flat file via COPY, running psql to connect to the
archive DB remotely and running a COPY to load the data (or maybe
transferring the flat file to the archive DB to load it there, offloading
that part of the workload), then deleting the data from the main DB. I
could capture the rows dumped in a control table and compare the rows
deleted against that and then rollback the delete if necessary.
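A minimal sketch of that flow (table, column and host names are hypothetical;
\copy runs client-side, so the file lands wherever psql runs). In practice you
would pin a single cutoff timestamp first so the COPY and the DELETE see exactly
the same rows:

psql -d main -c "\copy (SELECT * FROM events WHERE created < DATE '2018-02-10') TO '/tmp/events_archive.csv' CSV"
psql -h archive-host -d archive -c "\copy events_archive FROM '/tmp/events_archive.csv' CSV"
psql -d main -c "DELETE FROM events WHERE created < DATE '2018-02-10'"

with the row-count check against the control table happening before the final
DELETE is issued.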

Like I said, not elegant, but I don't want to risk losing data that wasn't
successfully archived to the archive DB. I'm very interested to hear what
others might be doing for tasks like this.

Don.

-- 
Don Seiler
www.seiler.us


Re: pg_basebackup restore a single table

2018-04-11 Thread David G. Johnston
On Wed, Apr 11, 2018 at 8:22 AM, Ron  wrote:

> On 04/11/2018 10:21 AM, Andreas Kretschmer wrote:
>>
>> On 11.04.2018 at 15:53, camarillo wrote:
>>
>>> Can I do a restore of a single table or single base using the archive
>>> generated for the basebackup without having to delete the filesystem
>>> (/var/lib/pgsql/9.5/*)?.
>>>
>>
>> No, but you can use a spare machine to restore the whole database
>> (point-in-time-recovery) and restore the table from that using pg_dump or
>> copy.
>>
>
> Not everyone has a spare machine and a few TB lying around...
>

​The machine one tests their backup restoration process on should suffice
for this purpose...​

David J.


Re: pg_basebackup restore a single table

2018-04-11 Thread Melvin Davidson
On Wed, Apr 11, 2018 at 11:22 AM, Ron  wrote:

>
>
> On 04/11/2018 10:21 AM, Andreas Kretschmer wrote:
>
>>
>>
>> On 11.04.2018 at 15:53, camarillo wrote:
>>
>>> Can I do a restore of a single table or single base using the archive
>>> generated for the basebackup without having to delete the filesystem
>>> (/var/lib/pgsql/9.5/*)?.
>>>
>>
>> No, but you can use a spare machine to restore the whole database
>> (point-in-time-recovery) and restore the table from that using pg_dump or
>> copy.
>>
>
> Not everyone has a spare machine and a few TB lying around...
>
> --
> Angular momentum makes the world go 'round.
>
>



> Can I do a restore of a single table or single base using the archive
> generated for the basebackup ...

As previously stated, this is not possible, but it begs the question:
why not simply do a pg_dump of one table and restore that?
-- 
Melvin Davidson
Maj. Database & Exploration Specialist
Universe Exploration Command – UXC
Employment by invitation only!


Re: pg_basebackup restore a single table

2018-04-11 Thread Ron



On 04/11/2018 10:21 AM, Andreas Kretschmer wrote:



On 11.04.2018 at 15:53, camarillo wrote:

Can I do a restore of a single table or single base using the archive
generated for the basebackup without having to delete the filesystem
(/var/lib/pgsql/9.5/*)?.


No, but you can use a spare machine to restore the whole database 
(point-in-time-recovery) and restore the table from that using pg_dump or 
copy.


Not everyone has a spare machine and a few TB lying around...

--
Angular momentum makes the world go 'round.



Re: pg_basebackup restore a single table

2018-04-11 Thread Andreas Kretschmer



On 11.04.2018 at 15:53, camarillo wrote:

Can I do a restore of a single table or single base using the archive
generated for the basebackup without having to delete the filesystem
(/var/lib/pgsql/9.5/*)?.


No, but you can use a spare machine to restore the whole database 
(point-in-time-recovery) and restore the table from that using pg_dump 
or copy.
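Once the spare-machine copy is up, pulling one table back is a single pipeline,
e.g. (host, database and table names are placeholders):

pg_dump -h restored-host -d mydb -t public.mytable -Fc | pg_restore -h production-host -d mydb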



Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: dblink: give search_path

2018-04-11 Thread Adrian Klaver

On 04/10/2018 11:44 PM, Thiemo Kellner wrote:

Hi all

I am trying to execute a function that is not in the schema I connect to with dblink. 
Is there a way to tell dblink to set search_path in a specific way? I have 
not found a solution in the documentation. I tried the SET 
search_path definition in the function declarations, to no avail.


In addition to my previous suggestions:

test=# SELECT public.dblink_connect('dbname=production ');
 dblink_connect
----------------
 OK
(1 row)

test=# select * from  public.dblink('show search_path') as 
t1(search_path text);
 search_path
-------------
 main
(1 row)

test=# select public.dblink_exec('set search_path=main,utility');
 dblink_exec
-------------
 SET
(1 row)

test=# select * from  public.dblink('show search_path') as 
t1(search_path text);
  search_path
---------------
 main, utility
(1 row)



Function Schema: logger
Database: act
User: act
User Default Schema: act

Kind regards

Thiemo




--
Adrian Klaver
adrian.kla...@aklaver.com



pg_basebackup restore a single table

2018-04-11 Thread camarillo
Hello how are you?

Currently I'm doing a basebackup of a Postgres 9.5 instance; due to its
growth, doing a pg_dump took a long time.
Can I do a restore of a single table or a single database using the archive
generated by the basebackup, without having to delete the filesystem
(/var/lib/pgsql/9.5/*)?

Thanks,
Carlos 



--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html



Re: dblink: give search_path

2018-04-11 Thread Adrian Klaver

On 04/10/2018 11:44 PM, Thiemo Kellner wrote:

Hi all

I am trying to execute a function that is not in the schema I connect to with dblink. 
Is there a way to tell dblink to set search_path in a specific way? I have 
not found a solution in the documentation. I tried the SET 
search_path definition in the function declarations, to no avail.


https://www.postgresql.org/docs/10/static/contrib-dblink-connect.html

"
Notes

If untrusted users have access to a database that has not adopted a 
secure schema usage pattern, begin each session by removing 
publicly-writable schemas from search_path. One could, for example, add 
options=-csearch_path= to connstr. This consideration is not specific to 
dblink; it applies to every interface for executing arbitrary SQL commands.

"

or schema qualify the function:

select logger.some_func();
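For instance (connection details are placeholders), the connect string itself
can pin the remote search_path:

SELECT public.dblink_connect('logconn',
       'dbname=act user=act options=-csearch_path=logger,act');

after which unqualified names sent through that connection resolve against
logger first.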



Function Schema: logger
Database: act
User: act
User Default Schema: act

Kind regards

Thiemo




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: ERROR: found multixact from before relminmxid

2018-04-11 Thread Alexandre Arruda
2018-04-10 19:53 GMT-03:00 Andres Freund :
> On 2018-04-10 08:31:20 -0300, Alexandre Arruda wrote:
>> 2018-04-09 23:51 GMT-03:00 Peter Geoghegan :
>> > On Mon, Apr 9, 2018 at 6:56 PM, Alexandre Arruda  
>> > wrote:
>> >> (... and all other indexes returns null too)
>> >>
>> >> I tried with bt_index_check too. Same results.
>> >
>> > That's interesting, because it tells me that you have a table that
>> > appears to not be corrupt, despite the CLUSTER error. Also, the error
>> > itself comes from sanity checking added to MultiXact freezing fairly
>> > recently, in commit 699bf7d0.
>> >
>> > You didn't say anything about regular VACUUM being broken. Do you find
>> > that it works without any apparent issue?
>> >
>> > I have a suspicion that this could be a subtle bug in
>> > CLUSTER/freezing. The only heap_freeze_tuple() caller is code used by
>> > CLUSTER, so it's not that hard to imagine a MultiXact freezing bug
>> > that is peculiar to CLUSTER. Though I haven't thought about it in much
>> > detail.
>> >
>> > --
>> > Peter Geoghegan
>>
>> Hi Peter,
>>
>> Actualy, I first notice the problem in logs by autovacuum:
>>
>> 2018-04-10 08:22:15.385 -03 [55477] CONTEXT:  automatic vacuum of
>> table "production.public.fn06t"
>> 2018-04-10 08:22:16.815 -03 [55477] ERROR:  found multixact 68834765
>> from before relminmxid 73262006
>>
>> production=# vacuum analyze verbose fn06t;
>> INFO:  vacuuming "public.fn06t"
>> ERROR:  found multixact 76440919 from before relminmxid 122128619
>
> What does the function in
> https://www.postgresql.org/message-id/20180319181723.ugaf7hfkluqyo...@alap3.anarazel.de
> say about your table?
>
> Could you post pg_controldata output and
> SELECT * FROM pg_class WHERE oid = 'public.fn06t'::regclass;
> ?
>
> Greetings,
>
> Andres Freund

Hi Andres,

I installed pageinspect and created the function, but the result is empty:


select * from check_rel('fn06t') limit 100;
 blockno | lp | xmin
---------+----+------
(0 rows)

Other affected table:

vacuum verbose fn06t4;
INFO:  vacuuming "public.fn06t4"
ERROR:  found multixact 68834765 from before relminmxid 73262006

production=# select * from check_rel('fn06t4') limit 100;
 blockno | lp | xmin
---------+----+------
(0 rows)

Best regards,

Alexandre



Re: psql variable to plpgsql?

2018-04-11 Thread Pavel Stehule
2018-04-11 11:36 GMT+02:00 Thiemo Kellner, NHC Barhufpflege <
thiemo.kell...@gelassene-pferde.biz>:

> Quoting Pavel Stehule:
>
> No, it is not possible to read/write client-side variables from the server
> side.
>>
>
> I did not mean that the variable would be replaced by its value at execution
> time but at installation time.


No, the :xxx variable is not evaluated inside a string. The workaround is using GUC
variables and related functions; they can be used from psql and from PL/pgSQL too.

https://stackoverflow.com/questions/13172524/passing-user-id-to-postgresql-triggers/13172964#13172964
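A small sketch of that workaround; my_app.user_name is an arbitrary placeholder GUC:

-- psql: copy the client-side variable into a GUC at run time
SELECT set_config('my_app.user_name', :'user_name', false);

-- server side, e.g. inside a PL/pgSQL function or trigger:
SELECT current_setting('my_app.user_name');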

Regards

Pavel



>
>
> --
> Public PGP key: http://pgp.mit.edu/pks/lookup?op=get&search=0x8F70EFD2D972CBEF
>
> 
> This message was sent using IMP, the Internet Messaging Program.
>
>


Re: Planning to change autovacuum_vacuum_scale_factor value to zero. Please suggest me if any negative impact.

2018-04-11 Thread Laurenz Albe
Raghavendra Rao J S V wrote:
> Are you suggesting I set "autovacuum_vacuum_cost_limit" to zero, or
> "autovacuum_vacuum_scale_factor" to zero, or both? Please clarify.

autovacuum_vacuum_cost_limit is effectively 200 by default (it falls back to vacuum_cost_limit).
You could raise it to 2000 or more.

If that does not take care of your problems, set autovacuum_vacuum_cost_delay 
to 0.

That will make autovacuum even more aggressive.

Do *not* set autovacuum_vacuum_scale_factor to 0.  That will just cause 
autovacuum
to run all the time without getting anywhere.  Sorry for trying to use irony in
e-mail; I am aware I shouldn't do that.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com



Re: Multiple records returned by a JOIN

2018-04-11 Thread Alexander Farber
Last night I inexplicably missed 2 conditions /facepalm

Now my JOIN works ok, without multiple records -

CREATE OR REPLACE FUNCTION words_stat_games(
        in_social integer,
        in_sid    text
) RETURNS TABLE (
        out_gid    integer,
        out_reason text,
        out_state1 text,
        out_score1 integer,
        out_score2 integer
) AS
$func$
        SELECT
                g.gid,
                g.reason,
                CASE WHEN g.player1 = s.uid THEN g.state1 ELSE g.state2 END,
                CASE WHEN g.player1 = s.uid THEN g.score1 ELSE g.score2 END,
                CASE WHEN g.player1 = s.uid THEN g.score2 ELSE g.score1 END
        FROM    words_games g
        JOIN    words_social s ON (g.player1 = s.uid OR g.player2 = s.uid)
        WHERE   g.finished IS NOT NULL
        AND     s.social = in_social   -- MISSED CONDITION
        AND     s.sid = in_sid         -- MISSED CONDITION
        ORDER BY g.finished DESC
        LIMIT   10;

$func$ LANGUAGE sql STABLE;


Re: Planning to change autovacuum_vacuum_scale_factor value to zero. Please suggest me if any negative impact.

2018-04-11 Thread Raghavendra Rao J S V
Thank you very much for your prompt response.

As I mentioned in my previous mail, I am planning to set
'autovacuum_vacuum_scale_factor' to zero and 'autovacuum_vacuum_threshold'
to 150 in postgresql.conf.

Are you suggesting I set "autovacuum_vacuum_cost_limit" to zero, or
"autovacuum_vacuum_scale_factor" to zero, or both? Please clarify.

Regards,
Raghavendra Rao

On Wed, Apr 11, 2018 at 12:59 PM, Laurenz Albe 
wrote:

> Raghavendra Rao J S V wrote:
> > We are using postgres 9.2  version on Centos operating system.  We have
> around 1300+ tables.
> > We have the following autovacuum settings enabled. Still, a few of the
> tables which are always busy are not vacuumed. Due to that, tables are
> bloating and we observed performance degradation in a few areas.
> >
> > autovacuum_max_workers = 6
> > autovacuum_naptime = 15s
> > autovacuum_vacuum_threshold = 25
> > autovacuum_analyze_threshold = 10
> > autovacuum_vacuum_scale_factor = 0.1
> > autovacuum_analyze_scale_factor = 0.05
> > autovacuum_vacuum_cost_delay = 10ms
> > autovacuum_vacuum_cost_limit = 1000
> >
> > To avoid the above problem, I am planning to set
> > 'autovacuum_vacuum_scale_factor' to zero and
> > autovacuum_vacuum_threshold to 150. Please let me know whether this has
> > any negative impact.
>
> That's an excellent way to keep your database from functioning well.
>
> Rather, raise autovacuum_vacuum_cost_limit, or, more aggressively,
> set autovacuum_vacuum_cost_delay to 0.
>
> It is better to change the settings on individual busy tables than
> changing them globally.
>
> Yours,
> Laurenz Albe
> --
> Cybertec | https://www.cybertec-postgresql.com
>



-- 
Regards,
Raghavendra Rao J S V
Mobile- 8861161425


Re: dblink: give search_path

2018-04-11 Thread Thiemo Kellner, NHC Barhufpflege

Quoting Rene Romero Benavides :


What about setting the search path at the user level?
ALTER ROLE act SET search_path = act,logger;

Best.


Thanks for the inspiration. Maybe it is best to create a dedicated  
user for logging anyway...
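
A minimal sketch of that idea (role name, password and grant are made up):

    CREATE ROLE logger_user LOGIN PASSWORD 'change_me';
    ALTER ROLE logger_user SET search_path = logger;
    GRANT USAGE ON SCHEMA logger TO logger_user;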


--
Public PGP key:
http://pgp.mit.edu/pks/lookup?op=get&search=0x8F70EFD2D972CBEF



This message was sent using IMP, the Internet Messaging Program.



Re: psql variable to plpgsql?

2018-04-11 Thread Thiemo Kellner, NHC Barhufpflege

Quoting Pavel Stehule :


No, it is not possible to read/write client-side variables from the server
side.


I did not mean that the variable would be replaced by its value at execution
time but at installation time.


--
Public PGP key:
http://pgp.mit.edu/pks/lookup?op=get&search=0x8F70EFD2D972CBEF



This message was sent using IMP, the Internet Messaging Program.



Re: Planning to change autovacuum_vacuum_scale_factor value to zero. Please suggest me if any negative impact.

2018-04-11 Thread Laurenz Albe
Raghavendra Rao J S V wrote:
> We are using postgres 9.2 version on the CentOS operating system.  We have
> around 1300+ tables.
> We have the following autovacuum settings enabled. Still, a few of the tables
> which are always busy are not vacuumed. Because of that, the tables are
> bloating and we have observed performance degradation in a few areas.
> 
> autovacuum_max_workers = 6 
> autovacuum_naptime = 15s   
> autovacuum_vacuum_threshold = 25   
> autovacuum_analyze_threshold = 10  
> autovacuum_vacuum_scale_factor = 0.1   
> autovacuum_analyze_scale_factor = 0.05 
> autovacuum_vacuum_cost_delay = 10ms
> autovacuum_vacuum_cost_limit = 1000
> 
> To avoid the above problem, I am planning to set
> 'autovacuum_vacuum_scale_factor' to zero and
> autovacuum_vacuum_threshold to 150. Please let me know whether this has any
> negative impact.

That's an excellent way to keep your database from functioning well.

Rather, raise autovacuum_vacuum_cost_limit, or, more aggressively,
set autovacuum_vacuum_cost_delay to 0.

It is better to change the settings on individual busy tables than
changing them globally.
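
For example (a sketch only; the table name is hypothetical):

    ALTER TABLE busy_table SET (
        autovacuum_vacuum_cost_limit = 2000,
        autovacuum_vacuum_cost_delay = 0
    );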

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com



Re: Suggest the best values for the postgres configuration parameters

2018-04-11 Thread Laurenz Albe
Raghavendra Rao J S V wrote:
> We are using postgres 9.2 version on the CentOS operating system.

That's no good, 9.2 is out of support.
Use v10 or at least 9.6.

> Total RAM available is 80GB. At present we don't have any connection pooling
> mechanism.  The max number of connections allowed is 1000.

That's not good either. Use a connection pool.

> Could you please suggest the best values for the below configuration 
> parameters?
> 
> shared_buffers 

4 or 8 GB

> effective_cache_size 

80 GB minus what the OS and other applications need.

> work_mem 

Depends on your queries. With max_connections set absurdly high at 1000, no
more than 50MB.

> maintenance_work_mem 

1GB

> checkpoint_segments 

Depends on your workload.

> wal_keep_segments 

0  (unless you need replication and have nothing better)

> checkpoint_completion_target 

0.8

> Max_prepared_transactions =0  

0  (unless you need distributed transactions)
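
Collected into a postgresql.conf sketch (effective_cache_size here assumes
roughly 20 GB for the OS and other applications; checkpoint_segments is
omitted because it depends on the workload):

    shared_buffers = 8GB
    effective_cache_size = 60GB
    work_mem = 50MB
    maintenance_work_mem = 1GB
    wal_keep_segments = 0
    checkpoint_completion_target = 0.8
    max_prepared_transactions = 0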

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com



Re: psql variable to plpgsql?

2018-04-11 Thread Pavel Stehule
Hi

2018-04-11 8:51 GMT+02:00 Thiemo Kellner :

> Hi all
>
> Is there a way to pass the value of a psql variable into function code? I
> create a schema with the help of a psql variable
>
>    \set SCHEMA_NAME LOGGER
>    create schema :SCHEMA_NAME;
>
> I would like to create a function that has the schema hard-coded, like
>    declare
>       V_SCHEMA_NAME text := :SCHEMA_NAME;
>    begin
> but as the plpgsql code is within quotes, it fails.
>

No, it is not possible to read/write client-side variables from the server
side.

Regards

Pavel

>
> Kind regards
>
> Thiemo
>
> --
> Public PGP key: http://pgp.mit.edu/pks/lookup?op=get&search=0x8F70EFD2D972CBEF
>
> 
> This message was sent using IMP, the Internet Messaging Program.
>
>


Re: Barman versus pgBackRest

2018-04-11 Thread Thomas Poty
Hello David,

Sorry for answering only now, but I have only just seen your answer.

> To be clear, I'm the original author and primary maintainer of
pgBackRest.

I am very happy to see people like you take the time to answer me. Thank you

> This is a good feature, and one that has been requested for pgBackRest. You
> can do this fairly trivially with ssh, however, so it generally hasn't
> been a big deal for people.  Is there a particular reason you need this
> feature?

The reason is probably psychological, but I like the idea of a single
connection point from which to restore DBs in different locations.

I am eagerly looking forward to seeing "replication slot" support and the
"remote restore" feature added.

Thank you for your time,

Regards,

Thomas



2018-03-09 15:56 GMT+01:00 David Steele :

> Hi Thomas,
>
> On 3/6/18 2:53 PM, Thomas Poty wrote:
> > Hello Community,
> >
> > I hesitate to use barman or pgBackRest.  I have found a lot of common
> > points between them and a few differences:
>
> To be clear, I'm the original author and primary maintainer of
> pgBackRest.  I'll let the Barman folks speak to their strengths, but I'm
> happy to address your points below.
>
> > About pgBarman, I like :
> > - be able restore on a remote server from the backup server
>
> This is a good feature, and one that has been requested for pgBackRest. You
> can do this fairly trivially with ssh, however, so it generally hasn't
> been a big deal for people.  Is there a particular reason you need this
> feature?
>
> > - use replication slots for backingup wal on the backup server.
>
> Another good feature.  We have not added it yet because pgBackRest was
> originally written for very high-volume clusters (100K+ WAL per day) and
> our parallel async feature answers that need much better.  We recommend
> a replicated standby for more up-to-date data.
>
> Even so, we are looking at adding support for replication slots to
> pgBackRest.  We are considering a hybrid scheme that will use
> replication to keep the WAL archive as up to date as possible, while
> doing bulk transfer with archive_command.
>
> > About pgBackRest, I like :
> >
> > - real differential backup.
> > - lots of options
> > - option for backingup if PostgreSQL is already in backup mode
> >
> >
> > I would like to have :
> > - advices or feedbach about using pgBackrest or barman.
> > - pros and cons of these solutions
>
> I'll stick with some of the major pgBackRest pros:
>
> - Parallel backup including compression and checksums
> - Encryption
> - S3 support
> - Parallel archive
> - Delta restore
> - Page checksum validation
> - Backup resume
>
> More about features here: https://pgbackrest.org
>
> > - differences that I would not have seen.
>
> pgBackRest is used in some very demanding environments and we are
> constantly answering the needs of our users with features and
> performance improvements, e.g. the enormous improvements to archive-push
> speed in the 2.0 release.
>
> I'd be happy to answer any specific questions you have about pgBackRest.
>
> Regards,
> --
> -David
> da...@pgmasters.net
>


Re: dblink: give search_path

2018-04-11 Thread Rene Romero Benavides
What about setting the search path at the user level?
ALTER ROLE act SET search_path = act,logger;
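
Alternatively, a sketch of pinning search_path only for the dblink connection
(connection parameters and the function name are made up):

    SELECT dblink_connect('logconn',
           'dbname=act user=act options=-csearch_path=logger,act');
    SELECT *
    FROM   dblink('logconn', 'SELECT log_message(''hello'')')
           AS t(result text);
    SELECT dblink_disconnect('logconn');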

Best.

2018-04-11 1:44 GMT-05:00 Thiemo Kellner :

> Hi all
>
> I am trying to execute a function, via dblink, that is not in the schema I
> connect to. Is there a way to tell dblink to set search_path in a specific
> way? I have not found a solution in the documentation. I tried the SET
> search_path clause in the function declarations, to no avail.
>
> Function Schema: logger
> Database: act
> User: act
> User Default Schema: act
>
> Kind regards
>
> Thiemo
>
> --
> Public PGP key: http://pgp.mit.edu/pks/lookup?op=get&search=0x8F70EFD2D972CBEF
>
> 
> This message was sent using IMP, the Internet Messaging Program.
>
>


-- 
Genius is 1% inspiration and 99% perspiration.
Thomas Alva Edison
http://pglearn.blogspot.mx/