Re: [GENERAL] Linux equivalent library for "postgres.lib" from Windows

2016-11-07 Thread Michael Paquier
On Tue, Nov 8, 2016 at 1:29 PM, John R Pierce  wrote:
> I am not sure what this postgres.lib is; what functions are you using?

It contains references to all the exposed functions of the backend on
Windows. Using something like dumpbin /exports postgres.lib would show
exactly that if I recall correctly. But, anyway, what Kiran is looking
for here is to build his stuff via PGXS.
-- 
Michael




Re: [GENERAL] Exclude pg_largeobject form pg_dump

2016-11-07 Thread amul sul
On Tue, Nov 8, 2016 at 5:36 AM, Andreas Joseph Krogh  wrote:
>
>
> I don't see what you mean. It forces dump of Blobs if we didn't use -B and
> if we include everything in the dump, which seems good to me. What did you
> try that didn't work as expected?
>
>
> I guess what he means is that if -B is given, the following code sets
> dopt.outputBlobs = false
>
> +    case 'B':        /* Don't dump blobs */
> +        dopt.outputBlobs = false;
> +        break;
>
>
> Then this IF sets it back to TRUE:
>
> +    if (dopt.include_everything && !dopt.schemaOnly && !dopt.outputBlobs)
>          dopt.outputBlobs = true;
>
>
> ...making it impossible to turn off dumping of blobs.
>

Yes, that's the reason the v4 patch did not work as expected.

Regards,
Amul Sul




Re: [GENERAL] Linux equivalent library for "postgres.lib" from Windows

2016-11-07 Thread John R Pierce

On 11/7/2016 7:51 PM, Michael Paquier wrote:

Perhaps there is no need to go down to this level of detail? You
could just use the PGXS infrastructure to work it out for you.


+1. I forgot to mention PGXS, which provides a portable method of
building server-side extensions.


See https://www.postgresql.org/docs/current/static/extend-pgxs.html


--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Linux equivalent library for "postgres.lib" from Windows

2016-11-07 Thread John R Pierce

On 11/6/2016 9:28 PM, Gadamsetty, Kiran wrote:


I am new to the product, and on Windows “postgres.lib” provides certain 
functions which we are using for creating extensions.


Now I am porting the project to Linux, and there is no library with this 
name in the Linux binary packages.


Can someone please advise the equivalent library for postgres.lib in 
Linux?




I am not sure what this postgres.lib is; what functions are you 
using?


libpq.so is the standard postgres client library.  it provides the 
PQxx functions documented here, 
https://www.postgresql.org/docs/current/static/libpq.html


or are you talking about the SPI_x functions used by C functions 
being called within postgresql ?






--
john r pierce, recycling bits in santa cruz



Re: [GENERAL] Linux equivalent library for "postgres.lib" from Windows

2016-11-07 Thread Michael Paquier
On Mon, Nov 7, 2016 at 2:28 PM, Gadamsetty, Kiran wrote:
> I am new to the product, and on Windows “postgres.lib” provides certain
> functions which we are using for creating extensions.
>
> Now I am porting the project to Linux, and there is no library with this
> name in the Linux binary packages.
>
> Can someone please advise the equivalent library for postgres.lib in Linux?

Perhaps there is no need to go down to this level of detail? You
could just use the PGXS infrastructure to work it out for you.
-- 
Michael




[GENERAL] Linux equivalent library for "postgres.lib" from Windows

2016-11-07 Thread Gadamsetty, Kiran
Hi,

I am new to the product, and on Windows "postgres.lib" provides certain 
functions which we are using for creating extensions.
Now I am porting the project to Linux, and there is no library with this 
name in the Linux binary packages.

Can someone please advise the equivalent library for postgres.lib in Linux?

Thanks,
Kiran G


[GENERAL] Fwd: Creating multiple instances of postgresql on Windows environment

2016-11-07 Thread kaustubh kelkar
Hi ,

I am a PostgreSQL user who wants to create multiple instances of the
PostgreSQL database server. I am using PostgreSQL 9.4 and above.

I tried to create more than two instances on Linux and was successful.
On Windows, however, I tried both with pgAdmin 4 and with commands
(initdb and a few others), and in both cases there is an issue related
to the ports. Please find the attached screenshot for reference.



Please help me to resolve the issue.



Re: [GENERAL] High load average every 105 minutes

2016-11-07 Thread Nhan Nguyen
I forgot to mention, my DB is running on a c4.xlarge instance.

Nhan Nguyen
System Engineer

MB: (+84) 934 008 031
Skype: live:ducnhan813

> On Nov 7, 2016, at 5:03 PM, Chris Mair  wrote:
> 
> 
>>> with AWS, your system is sharing the vendors virtual machine environment 
>>> with other customers, and performance is pretty much out of your control.
> 
>> I found no strange processes or queries while load average was at peak. IO 
>> also didn't change. Some more slow queries were logged, but not many.
>> I think sharing the VM with other customers doesn’t have much to do with 
>> this. I checked my other servers too, and only those that have postgresql 
>> have the load average issue. Generally it doesn’t impact my system much, but 
>> when there are slow queries, this issue just makes everything worse.
> 
> Hi,
> 
> generally speaking AWS is pretty good at isolating users (and you can request 
> single tenancy machines or
> dedicated machines as well if you're concerned about this).
> 
> However, if you're running t1 or t2 instances, you get the concept of CPU 
> credits. When those run out, your
> system is slowed down until the credits recover. I could imagine that this 
> way some cyclic load patterns
> emerge, if there is constant load on the machines.
> 
> Nhan, what instance types are you running?
> 
> Bye,
> Chris.
> 
> 
> 



Re: [GENERAL] checkpoint_timeout with no WAL activity

2016-11-07 Thread Michael Paquier
On Mon, Nov 7, 2016 at 9:32 PM, Tom DalPozzo  wrote:
> I'm using 9.5.3 . I had read about that bug but I didn't know that
> wal_level=archive is equivalent to hot_standby from this point of view! I
> guess it's equivalent in 9.5.3 too.

No, this only applies to 9.6 and onward as a result of the
introduction of wal_level = replica. archive and hot_standby are kept
as aliases for backward-compatibility.

For the rest, Amul is right. A segment switch will happen as long as the
current segment is not empty, and the switch itself produces new WAL, which
makes checkpoints happen again. Each behavior taken individually is not
harmful; it's the combination of all those activities that the patch I
mentioned upthread aims to improve. If it gets integrated it won't be
back-patched though, as it is pretty invasive, and it's not a bug if you
think about it.
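
A quick way to observe this on an otherwise idle server (a sketch for
9.5/9.6, where the function is still called pg_current_xlog_location) is to
sample the WAL position and the timed-checkpoint counter, wait a few
checkpoint_timeout periods, and compare:

-- Run this twice, a few minutes apart, with no clients connected;
-- if wal_position keeps advancing, the new WAL is coming from the
-- checkpoint records and forced segment switches described above.
SELECT now() AS sampled_at,
       pg_current_xlog_location() AS wal_position,
       checkpoints_timed
FROM   pg_stat_bgwriter;
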
-- 
Michael




Re: [GENERAL] Exclude pg_largeobject form pg_dump

2016-11-07 Thread Andreas Joseph Krogh
On Monday 07 November 2016 at 22:01:41, Guillaume Lelarge 
guilla...@lelarge.info wrote:
2016-11-07 7:06 GMT+01:00 amul sul:
> On Mon, Nov 7, 2016 at 2:03 AM, Guillaume Lelarge wrote:
 >>
 >> Agreed. I was afraid of that, but for some reason, didn't find that. I'll
 >> fix this.
 >
 >
 > Fixed in v4.
 >

 This fix is broken.

 -   if (dopt.include_everything && !dopt.schemaOnly)
 +   if (dopt.include_everything && !dopt.schemaOnly && !dopt.outputBlobs)
         dopt.outputBlobs = true;

 dopt.outputBlobs is set to FALSE when option -B is specified, and this IF
 condition reverts it to TRUE, which forces blobs to be dumped.
I don't see what you mean. It forces dump of Blobs if we didn't use -B and if 
we include everything in the dump, which seems good to me. What did you try 
that didn't work as expected?



 
I guess what he means is that if -B is given, the following code
sets dopt.outputBlobs = false

+            case 'B':            /* Don't dump blobs */
+                dopt.outputBlobs = false;
+                break;

Then this IF sets it back to TRUE:

+    if (dopt.include_everything && !dopt.schemaOnly && !dopt.outputBlobs)
          dopt.outputBlobs = true;

...making it impossible to turn off dumping of blobs.

-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com 
www.visena.com 




Re: [GENERAL] Exclude pg_largeobject form pg_dump

2016-11-07 Thread Guillaume Lelarge
2016-11-07 7:06 GMT+01:00 amul sul :

> On Mon, Nov 7, 2016 at 2:03 AM, Guillaume Lelarge
>  wrote:
> >>
> >> Agreed. I was afraid of that, but for some reason, didn't find that.
> I'll
> >> fix this.
> >
> >
> > Fixed in v4.
> >
>
> This fix is broken.
>
>  -   if (dopt.include_everything && !dopt.schemaOnly)
>  +   if (dopt.include_everything && !dopt.schemaOnly && !dopt.outputBlobs)
>          dopt.outputBlobs = true;
>
> dopt.outputBlobs is set to FALSE when option -B is specified, and this IF
> condition reverts it to TRUE, which forces blobs to be dumped.
>
>
I don't see what you mean. It forces dump of Blobs if we didn't use -B and
if we include everything in the dump, which seems good to me. What did you
try that didn't work as expected?

>>
> >>
> >>>
> >>> #2 :
> >>> We should add note for default behaviour if --no-blobs & --blobs both
> >>> are specified.
> >>>
> >>
> >> Right. I don't know how I will handle this, but you're right that the
> >> behaviour should be specified. I'll also fix this.
> >>
> >
> > I checked other options, such as --format, and there's nothing noted as a
> > default behaviour. Last one wins, which is what this patch does.
> >
>
> Such note exists for --table & --exclude-table option, see following
> lines in pg_dump.sgml
>
>  When both -t and -T are given, the behavior
>  is to dump just the tables that match at least one -t
>  switch but no -T switches.  If -T appears
>  without -t, then tables matching -T are
>  excluded from what is otherwise a normal dump.
>
>
You're right on this. v5 fixes this.


-- 
Guillaume.
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com
diff --git a/doc/src/sgml/ref/pg_dump.sgml b/doc/src/sgml/ref/pg_dump.sgml
index 371a614..fb69d6d 100644
--- a/doc/src/sgml/ref/pg_dump.sgml
+++ b/doc/src/sgml/ref/pg_dump.sgml
@@ -145,6 +145,21 @@ PostgreSQL documentation
  
 
  
+  -B
+  --no-blobs
+  
+   
+Exclude large objects in the dump.
+   
+
+   
+When both -b and -B are given, the behavior
+is dependent on which option is last on the command line.
+   
+  
+ 
+
+ 
   -c
   --clean
   
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 0e20985..f90c074 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -137,6 +137,7 @@ InitDumpOptions(DumpOptions *opts)
 {
 	memset(opts, 0, sizeof(DumpOptions));
 	/* set any fields that shouldn't default to zeroes */
+	opts->outputBlobs = true;
 	opts->include_everything = true;
 	opts->dumpSections = DUMP_UNSECTIONED;
 }
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 4da297f..4d3e245 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -291,6 +291,7 @@ main(int argc, char **argv)
 	static struct option long_options[] = {
 		{"data-only", no_argument, NULL, 'a'},
 		{"blobs", no_argument, NULL, 'b'},
+		{"no-blobs", no_argument, NULL, 'B'},
 		{"clean", no_argument, NULL, 'c'},
 		{"create", no_argument, NULL, 'C'},
 		{"dbname", required_argument, NULL, 'd'},
@@ -379,7 +380,7 @@ main(int argc, char **argv)
 
 	InitDumpOptions(&dopt);
 
-	while ((c = getopt_long(argc, argv, "abcCd:E:f:F:h:j:n:N:oOp:RsS:t:T:U:vwWxZ:",
+	while ((c = getopt_long(argc, argv, "abBcCd:E:f:F:h:j:n:N:oOp:RsS:t:T:U:vwWxZ:",
 							long_options, &optindex)) != -1)
 	{
 		switch (c)
@@ -392,6 +393,10 @@
 				dopt.outputBlobs = true;
 				break;
 
+			case 'B':			/* Don't dump blobs */
+				dopt.outputBlobs = false;
+				break;
+
 			case 'c':			/* clean (i.e., drop) schema prior to create */
 				dopt.outputClean = 1;
 				break;
@@ -708,7 +713,7 @@ main(int argc, char **argv)
 	 * Dumping blobs is now default unless we saw an inclusion switch or -s
 	 * ... but even if we did see one of these, -b turns it back on.
 	 */
-	if (dopt.include_everything && !dopt.schemaOnly)
+	if (dopt.include_everything && !dopt.schemaOnly && !dopt.outputBlobs)
 		dopt.outputBlobs = true;
 
 	/*
@@ -864,6 +869,7 @@ help(const char *progname)
 	printf(_("\nOptions controlling the output content:\n"));
 	printf(_("  -a, --data-only  dump only the data, not the schema\n"));
 	printf(_("  -b, --blobs  include large objects in dump\n"));
+	printf(_("  -B, --no-blobs   exclude large objects in dump\n"));
 	printf(_("  -c, --clean  clean (drop) database objects before recreating\n"));
 	printf(_("  -C, --create include commands to create database in dump\n"));
 	printf(_("  -E, --encoding=ENCODING  dump the data in encoding ENCODING\n"));



Re: [GENERAL] Absence of synonym

2016-11-07 Thread Andy Colson

On 11/7/2016 1:39 PM, Rakesh Kumar wrote:


I need some ideas here.

Let us say we have a bunch of tables, called a,b,c,d. We will get a batch of 
data in files which need to be processed. At the end of processing, the tables 
will get a fresh set of data.  The problem we would like to solve is to allow 
access to the tables while they are getting processed by the new batch. 
Obviously, during the processing the data will be in an inconsistent state, 
which the app should not be exposed to. At the end of processing, the app can 
see the new data.

In RDBMS where synonyms are allowed we would do this:

keep two sets of tables, a_1 and a_2
A synonym a will point either to a_1 or to a_2.
applications will refer to synonym a only, never to a_1 or a_2.
While the synonym is referring to a_1, we can process a_2.
At the end of processing, change the synonym to point to a_2.
Next time, repeat it, just flipping the tables from a_2 to a_1.

How can this be done in PG 9.5.3?

Thanks.



I do it with schemas: a live and a work schema.  When batch processing 
is finished, the live tables get dropped and the work tables get moved to 
the live schema, in a single transaction:


begin transaction;

drop table live.junk1;
alter table work.junk1 set schema live;

drop table live.junk2;
alter table work.junk2 set schema live;

.. about 200 more tables ...

commit;


works like a champ.
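
Before the next batch, the work copies can be recreated from the live
definitions, loaded, and then swapped in with the transaction above (a
sketch using the junk1 example; adjust to your own tables):

create schema if not exists work;
create table work.junk1 (like live.junk1 including all);
-- ...COPY / INSERT the new batch into work.junk1, then run the swap...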

-Andy




[GENERAL] Absence of synonym

2016-11-07 Thread Rakesh Kumar

I need some ideas here.

Let us say we have a bunch of tables, called a,b,c,d. We will get a batch of 
data in files which need to be processed. At the end of processing, the tables 
will get a fresh set of data.  The problem we would like to solve is to allow 
access to the tables while they are getting processed by the new batch. 
Obviously, during the processing the data will be in an inconsistent state, 
which the app should not be exposed to. At the end of processing, the app can 
see the new data.

In RDBMS where synonyms are allowed we would do this:

keep two sets of tables, a_1 and a_2
A synonym a will point either to a_1 or to a_2.
applications will refer to synonym a only, never to a_1 or a_2.
While the synonym is referring to a_1, we can process a_2.
At the end of processing, change the synonym to point to a_2.
Next time, repeat it, just flipping the tables from a_2 to a_1.

How can this be done in PG 9.5.3?

Thanks.



Re: [GENERAL] Questions on Post Setup MASTER and STANDBY replication - Postgres9.1

2016-11-07 Thread Joanna Xu




Jim Nasby [mailto:jim.na...@bluetreble.com] wrote:

>On 11/2/16 2:49 PM, Joanna Xu wrote:
>> The replication is verified and works.  My questions are what's the
>> reason causing "cp: cannot stat
>> `/opt/postgres/9.1/archive/00010003': No such file or
>> directory" on STANDBY and how to fix it?
>
>What instructions/tools did you use to setup replication?

The following steps were involved in setting up the replication:

* Stop the STANDBY database.
* Start the hot backup on the MASTER database.
* Push data to the STANDBY database.
* Stop the hot backup on the MASTER database.
* Reload with the new configuration on MASTER.
* Set up the new configuration on the STANDBY database.
* Start the STANDBY database.

Also, we see the message "LOG:  terminating walsender process due to
replication timeout" in the logs on MASTER. Is this expected when the
system is not very busy, or does it indicate a problem?



Thanks,

Joanna



-Original Message-
From: Jim Nasby [mailto:jim.na...@bluetreble.com]
Sent: Wednesday, November 02, 2016 7:15 PM
To: Joanna Xu ; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Questions on Post Setup MASTER and STANDBY replication - 
Postgres9.1



On 11/2/16 2:49 PM, Joanna Xu wrote:

> The replication is verified and works.  My questions are what's the

> reason causing "cp: cannot stat

> `/opt/postgres/9.1/archive/00010003': No such file or

> directory" on STANDBY and how to fix it?



What instructions/tools did you use to setup replication?



> Also, it seems the startup

> process stucks on "recovering 00010004", how to

> resolve it?



As far as I know that's normal while in streaming mode.

--

Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX Experts in 
Analytics, Data Architecture and PostgreSQL Data in Trouble? Get it in Treble! 
http://BlueTreble.com

855-TREBLE2 (855-873-2532)   mobile: 512-569-9461



Re: [GENERAL] Changing foreign key referential actions in big databases

2016-11-07 Thread Arthur Silva
On Nov 7, 2016 3:34 PM, "Tom Lane"  wrote:
>
> Arthur Silva  writes:
> > We recently started looking into a long standing ticket to change some
> > foreign keys referential actions from CASCADE to RESTRICT for our own
> > safety. Everything else in the FK stays the same.
> > The problem is that running a query like the one below takes an exclusive
> > lock for too long (order of minutes in some tables when testing against a
> > backup db).
> > ...
> > Is it safe(ish) to just update pg_constraint.confupdtype and
> > pg_constraint.confdeltype for those?
>
> Well, it's probably safe, but it wouldn't have the results you want.
> What actually drives that behavior is the choice of trigger functions
> applied to the relations, so you'd have to also update the related
> pg_trigger rows appropriately.
>
> Also, I'm not too sure about the cacheing situation for pg_trigger,
> but it's moderately likely that a manual UPDATE on pg_trigger wouldn't
> force a cache flush, so that you'd have to do something extra to get
> running backends to notice the pg_trigger changes.  Since you're living
> dangerously already, a dummy UPDATE on the pg_class row for the affected
> relation would be good enough.
>
> You could probably get away with all that as long as your application
> isn't doing anything that makes it matter critically which semantics
> get applied while the changeover is being made.
>
> But test on a scratch database ...
>
> regards, tom lane

I see. Unfortunately I think all that would cross our "living too
dangerously" line.


Re: [GENERAL] Changing foreign key referential actions in big databases

2016-11-07 Thread Arthur Silva
On Nov 7, 2016 3:29 PM, "Adrian Klaver"  wrote:
>
> On 11/07/2016 02:09 AM, Arthur Silva wrote:
>>
>> Hi all, we're running a few Pg databases in production.
>>
>> Ubuntu 14.04 x64
>> 32 x64 cores
>> 64GB to 256GB memory, depending on cluster
>> PostgreSQL 9.3.9 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu
>> 4.8.2-19ubuntu1) 4.8.2, 64-bit
>> FusionIO storage
>>
>> We recently started looking into a long standing ticket to change some
>> foreign keys referential actions from CASCADE to RESTRICT for our own
>> safety. Everything else in the FK stays the same.
>>
>> The problem is that running a query like the one below takes an
>> exclusive lock for too long (order of minutes in some tables when
>> testing against a backup db).
>>
>> ALTER TABLE "partneracl"
>> DROP CONSTRAINT "partneracl_partner_fkey",
>> ADD CONSTRAINT "partneracl_partner_fkey"
>> FOREIGN KEY ("partner")
>> REFERENCES "partner"("name");
>>
>> Is there any way to change the foreign key referential actions quickly
>> and/or without an exclusive lock?
>
>
> Are there indexes on the child columns?
>

Yes, they're all backed by indexes.

>
>> Is it safe(ish) to just update pg_constraint.confupdtype and
>> pg_constraint.confdeltype for those?
>>
>> Regards
>>
>> --
>> Arthur Silva
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com


Re: [GENERAL] Surviving connections after internet problem

2016-11-07 Thread Albe Laurenz
Durumdara wrote:
> Linux server, 9.4 PG, Windows clients far-far away.
> 
> They called us to say that they had an "internet reset" at 13:00, but many
> client locks are still alive now (14:00).
> I checked the server status and saw 16 connections.
> 
> For the PG server on Windows I read about keepalive parameters which control
> and redefine the default TCP keepalive values.
> 
> As I read, it could be two hours on Windows, and it is a system default, so
> we can change it for all applications.
> 
> I don't know what happens between the Linux server and the Windows clients.
> 
> Does the Linux version of PGSQL also use the 2 hour keepalive default, or is
> this caused by the Windows clients, or by an active network device?
> 
> Or how else could they survive this internet connection reset? :-o
> 
> Should we limit these parameters on the clients after the connection is
> established?

Don't bother about the clients, just see that the backends go away on the 
server.

You can use pg_terminate_backend to kill a database session.
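
For example, from SQL (a rough sketch for 9.4; the one-hour cutoff is only
an illustration), you can list and terminate the sessions that have been
idle since before the outage:

SELECT pid, usename, client_addr, state_change,
       pg_terminate_backend(pid) AS terminated
FROM   pg_stat_activity
WHERE  state = 'idle'
  AND  state_change < now() - interval '1 hour';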

Setting the keepalive options in postgresql.conf can make PostgreSQL
discover dead connections more quickly.

Yours,
Laurenz Albe



Re: [GENERAL] Changing foreign key referential actions in big databases

2016-11-07 Thread Tom Lane
Arthur Silva  writes:
> We recently started looking into a long standing ticket to change some
> foreign keys referential actions from CASCADE to RESTRICT for our own
> safety. Everything else in the FK stays the same.
> The problem is that running a query like the one below takes an exclusive
> lock for too long (order of minutes in some tables when testing against a
> backup db).
> ...
> Is it safe(ish) to just update pg_constraint.confupdtype and
> pg_constraint.confdeltype for those?

Well, it's probably safe, but it wouldn't have the results you want.
What actually drives that behavior is the choice of trigger functions
applied to the relations, so you'd have to also update the related
pg_trigger rows appropriately.

Also, I'm not too sure about the caching situation for pg_trigger,
but it's moderately likely that a manual UPDATE on pg_trigger wouldn't
force a cache flush, so that you'd have to do something extra to get
running backends to notice the pg_trigger changes.  Since you're living
dangerously already, a dummy UPDATE on the pg_class row for the affected
relation would be good enough.

You could probably get away with all that as long as your application
isn't doing anything that makes it matter critically which semantics
get applied while the changeover is being made.

But test on a scratch database ...
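
For reference, a read-only sketch of what is involved (the constraint name
is the one from the original mail): this lists the recorded referential
actions in pg_constraint together with the internal trigger functions that
implement them, i.e. the rows that such manual catalog surgery would have
to keep consistent.

SELECT con.conname,
       con.confupdtype,              -- 'c' = CASCADE, 'r' = RESTRICT, ...
       con.confdeltype,
       tg.tgrelid::regclass AS table_name,
       tg.tgname,
       pr.proname AS trigger_function
FROM   pg_constraint con
JOIN   pg_trigger tg ON tg.tgconstraint = con.oid
JOIN   pg_proc pr    ON pr.oid = tg.tgfoid
WHERE  con.conname = 'partneracl_partner_fkey';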

regards, tom lane




Re: [GENERAL] Changing foreign key referential actions in big databases

2016-11-07 Thread Adrian Klaver

On 11/07/2016 02:09 AM, Arthur Silva wrote:

Hi all, we're running a few Pg databases in production.

Ubuntu 14.04 x64
32 x64 cores
64GB to 256GB memory, depending on cluster
PostgreSQL 9.3.9 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu
4.8.2-19ubuntu1) 4.8.2, 64-bit
FusionIO storage

We recently started looking into a long standing ticket to change some
foreign keys referential actions from CASCADE to RESTRICT for our own
safety. Everything else in the FK stays the same.

The problem is that running a query like the one below takes an
exclusive lock for too long (order of minutes in some tables when
testing against a backup db).

ALTER TABLE "partneracl"
DROP CONSTRAINT "partneracl_partner_fkey",
ADD CONSTRAINT "partneracl_partner_fkey"
FOREIGN KEY ("partner")
REFERENCES "partner"("name");

Is there any way to change the foreign key referential actions quickly
and/or without an exclusive lock?


Are there indexes on the child columns?


Is it safe(ish) to just update pg_constraint.confupdtype and
pg_constraint.confdeltype for those?

Regards

--
Arthur Silva




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] How to hint 2 coulms IS NOT DISTINCT FROM each other

2016-11-07 Thread Merlin Moncure
On Fri, Nov 4, 2016 at 9:38 AM, Alban Hertroys  wrote:
> On 4 November 2016 at 14:41, Merlin Moncure  wrote:
>> On Fri, Nov 4, 2016 at 8:08 AM, Kim Rose Carlsen  wrote:
>>> The nulls are generated by something like this
>>> SELECT c.circuit_id,
>>>cc.customer_id
>>>FROM circuit AS c
>>> LEFT JOIN circuit_customer AS cc
>>>  ON c.circuit_id = cc.circuit_id
>>>
>>> To make a magic '0' customer we would be required to use
>>>   COALESCE(cc.customer_id, '0')
>>> I dont think the optimizer will do anything clever with the '0' we have
>>> computed from null.
>>
>> It would if you explicitly indexed it as such;
>> CREATE INDEX ON circuit_customer((COALESCE(customer_id, '0')));
>
> Merlin, it's a LEFT JOIN. There probably are no NULLs in the
> circuit_customer.customer_id column, so that COALESCE isn't going to
> achieve anything at all.

Hang on -- upthread the context was inner join, and the gripe was join
fast with '=', slow with INDF.  When he said the nulls were
'generated', I didn't follow that they were part of the original
query.  If the nulls are generated along with the query, sure, an
index won't help.

I maintain my earlier point; with respect to the original query, to
get from the performance of INDF to that of =, you have three options:
a) expression-index the nulls (assuming they are physically stored)
b) convert to ((a = b) or (a is null and b is null)), which can help with
a bitmap OR plan
c) convert to the union all equivalent of "b"
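
A sketch of (a) and (b) against the circuit_customer column mentioned
earlier in the thread ((a) only helps if NULL customer_id values are
physically stored; t1/t2 in (b) are placeholder tables):

-- (a) expression index over the sentinel value:
CREATE INDEX ON circuit_customer ((COALESCE(customer_id, '0')));

-- (b) spell out IS NOT DISTINCT FROM as equality plus an IS NULL arm,
--     which the planner can turn into a bitmap OR of two index scans:
SELECT *
FROM   t1
JOIN   t2 ON (t1.customer_id = t2.customer_id)
          OR (t1.customer_id IS NULL AND t2.customer_id IS NULL);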

merlin




Re: [GENERAL] Database Recovery from Corrupted Dump or Raw database table file.

2016-11-07 Thread Howard News


On 07/11/2016 13:44, Vick Khera wrote:

On Mon, Nov 7, 2016 at 8:23 AM, Howard News  wrote:

pg_restore: executing SEQUENCE SET example_seq
pg_restore: processing data for table example_table
pg_restore: [compress_io]

** crash **

What crashes? the pg_restore process or the backend server?



The pg_restore crashes.

For example if I run the "pg_restore -f example.sql backup" then it will 
fall over when it tries to restore one of the corrupted tables. 
Presumably the backup itself is corrupted.


The "pg_restore --list" works fine, so I can run a selective restore by 
commenting out the dodgy tables using the "pg_restore -L" option.


Regarding the postgres IDs, these appear to be listed in the pg_restore 
list file, so I can find out which data file the tables refer to, but I 
have not found out how to restore a single one of these files to a usable 
or readable state yet.






Re: [GENERAL] Database Recovery from Corrupted Dump or Raw database table file.

2016-11-07 Thread Vick Khera
On Mon, Nov 7, 2016 at 8:23 AM, Howard News  wrote:
> pg_restore: executing SEQUENCE SET example_seq
> pg_restore: processing data for table example_table
> pg_restore: [compress_io]
>
> ** crash **

What crashes? the pg_restore process or the backend server?




Re: [GENERAL] Database Recovery from Corrupted Dump or Raw database table file.

2016-11-07 Thread Howard News



On 07/11/2016 13:12, Albe Laurenz wrote:

Howard News wrote:

I have a raid catastrophe which has effectively blitzed a cluster data
directory.  I have several pg_dump backups but these will not restore
cleanly. I assume the disk has been failing for some time and the
backups are of the corrupted database.

Using a selective pg_restore on the dumps, I have restored 2/3rds of the
data but some tables I cannot recover directly, so I would like to see
if it is possible to examine the dumps (they are in compressed format)
to see if there are any rows which are recoverable. I do not know how or
if it is even possible to pick out rows from an individual table, when
in all likelihood the file itself is corrupted.

I also have some parts of the data directory, so the tables may be
accessible if I know the ID of the corrupt datatables. Is the ID listed
in the pg_dump --list file?  And can the data be extracted from the raw
data files without running a cluster. I am unsure if there is enough
data to actually start a cluster.

A backup created with pg_dump consists of SQL statements to recreate the
objects.

You can extract the SQL statements as text with

   pg_restore -f sqlfile backupfilename

That should help you with restoring the data.

What exactly do you mean by "do not restore cleanly"?
Do you get error messages or is the content not ok?

Yours,
Laurenz Albe

Hi Laurenz,

Yes I get errors when trying to restore. When I try to restore to an 
uncompressed file as suggested, I get the same errors - or the 
pg_restore crashes.


e.g. in the console I get the following...

...
pg_restore: executing SEQUENCE SET example_seq
pg_restore: processing data for table example_table
pg_restore: [compress_io]

** crash **

If I selectively restore I have about 10 tables that do not recover, and 
all of the Blobs.


Howard.




[GENERAL] Documentation archive links broken for 6.3 up to 7.1

2016-11-07 Thread Daniel Westermann
Hi, 

just noticed that the links from 6.3 to 7.1 are broken here: 
https://www.postgresql.org/docs/manuals/archive/ 

Regards 
Daniel 


Re: [GENERAL] Database Recovery from Corrupted Dump or Raw database table file.

2016-11-07 Thread Albe Laurenz
Howard News wrote:
> I have a raid catastrophe which has effectively blitzed a cluster data
> directory.  I have several pg_dump backups but these will not restore
> cleanly. I assume the disk has been failing for some time and the
> backups are of the corrupted database.
> 
> Using a selective pg_restore on the dumps, I have restored 2/3rds of the
> data but some tables I cannot recover directly, so I would like to see
> if it is possible to examine the dumps (they are in compressed format)
> to see if there are any rows which are recoverable. I do not know how or
> if it is even possible to pick out rows from an individual table, when
> in all likelihood the file itself is corrupted.
> 
> I also have some parts of the data directory, so the tables may be
> accessible if I know the ID of the corrupt datatables. Is the ID listed
> in the pg_dump --list file?  And can the data be extracted from the raw
> data files without running a cluster. I am unsure if there is enough
> data to actually start a cluster.

A backup created with pg_dump consists of SQL statements to recreate the
objects.

You can extract the SQL statements as text with

  pg_restore -f sqlfile backupfilename

That should help you with restoring the data.

What exactly do you mean by "do not restore cleanly"?
Do you get error messages or is the content not ok?

Yours,
Laurenz Albe



[GENERAL] Surviving connections after internet problem

2016-11-07 Thread Durumdara
Hello!


Linux server, 9.4 PG, Windows clients far-far away.

They called us to say that they had an "internet reset" at 13:00, but many
client locks are still alive now (14:00).
I checked the server status and saw 16 connections.

For the PG server on Windows I read about keepalive parameters which control
and redefine the default TCP keepalive values.

As I read, it could be two hours on Windows, and it is a system default, so
we can change it for all applications.

I don't know what happens between the Linux server and the Windows clients.

Does the Linux version of PGSQL also use the 2 hour keepalive default, or is
this caused by the Windows clients, or by an active network device?

Or how else could they survive this internet connection reset? :-o

Should we limit these parameters on the clients after the connection is
established?

Thank you for your help!

dd


[GENERAL] Database Recovery from Corrupted Dump or Raw database table file.

2016-11-07 Thread Howard News

Hi all,

I have a raid catastrophe which has effectively blitzed a cluster data 
directory.  I have several pg_dump backups but these will not restore 
cleanly. I assume the disk has been failing for some time and the 
backups are of the corrupted database.


Using a selective pg_restore on the dumps, I have restored 2/3rds of the 
data but some tables I cannot recover directly, so I would like to see 
if it is possible to examine the dumps (they are in compressed format) 
to see if there are any rows which are recoverable. I do not know how or 
if it is even possible to pick out rows from an individual table, when 
in all likelihood the file itself is corrupted.


I also have some parts of the data directory, so the tables may be 
accessible if I know the ID of the corrupt datatables. Is the ID listed 
in the pg_dump --list file?  And can the data be extracted from the raw 
data files without running a cluster. I am unsure if there is enough 
data to actually start a cluster.


Thanks,

Howard.





Re: [GENERAL] checkpoint_timeout with no WAL activity

2016-11-07 Thread Tom DalPozzo
I'm using 9.5.3. I had read about that bug, but I didn't know that
wal_level=archive is equivalent to hot_standby from this point of view! I
guess it's equivalent in 9.5.3 too.
Regards
Pupillo




2016-11-07 13:26 GMT+01:00 Michael Paquier :

> On Mon, Nov 7, 2016 at 9:21 PM, Tom DalPozzo  wrote:
> > I know that, but with neither database activity nor a checkpoint, it doesn't
> > force anything. The fact is that there are checkpoints being executed every
> > checkpoint_timeout, and I don't understand why, as if no WAL has been written
> > we should not care about passing the timeout.
>
> You may want to look at that:
> https://www.postgresql.org/message-id/20151016203031.3019.72930@wrigleys.postgresql.org
> And the patches on this thread to fix the checkpoint skip logic:
> https://www.postgresql.org/message-id/flat/CAB7nPqQcPqxEM3S735Bd2RzApNqSNJVietAC=6kfkyv_45d...@mail.gmail.com#CAB7nPqQcPqxEM3S735Bd2RzApNqSNJVietAC=6kfkyv_45d...@mail.gmail.com
>
> My guess is that you are using 9.6 because wal_level = archive is
> equivalent to hot_standby, and the checkpoint skip logic is broken
> because of standby snapshots happening in the bgwriter...
> --
> Michael
>


Re: [GENERAL] checkpoint_timeout with no WAL activity

2016-11-07 Thread Michael Paquier
On Mon, Nov 7, 2016 at 9:21 PM, Tom DalPozzo  wrote:
> I know that, but with neither database activity nor a checkpoint, it doesn't
> force anything. The fact is that there are checkpoints being executed every
> checkpoint_timeout, and I don't understand why, as if no WAL has been written
> we should not care about passing the timeout.

You may want to look at that:
https://www.postgresql.org/message-id/20151016203031.3019.72...@wrigleys.postgresql.org
And the patches on this thread to fix the checkpoint skip logic:
https://www.postgresql.org/message-id/flat/CAB7nPqQcPqxEM3S735Bd2RzApNqSNJVietAC=6kfkyv_45d...@mail.gmail.com#CAB7nPqQcPqxEM3S735Bd2RzApNqSNJVietAC=6kfkyv_45d...@mail.gmail.com

My guess is that you are using 9.6 because wal_level = archive is
equivalent to hot_standby, and the checkpoint skip logic is broken
because of standby snapshots happening in the bgwriter...
-- 
Michael




Re: [GENERAL] checkpoint_timeout with no WAL activity

2016-11-07 Thread Michael Paquier
On Mon, Nov 7, 2016 at 9:14 PM, amul sul  wrote:
> On Mon, Nov 7, 2016 at 4:20 PM, Tom DalPozzo  wrote:
>> I have:
>> checkpoint_timeout = 2min
>> wal_level = archive
>> archive_mode=on
>> archive_timeout = 30
>>
>> With NO dbase activity, I see the WAL being modified every 2min (and,
>> consequently, one WAL file archived every 2min too ).
>>
>> Is it right? I read: "If no WAL has been written since the previous
>> checkpoint, new checkpoints will be skipped even if checkpoint_timeout has
>> passed."
>
> I guess archive_timeout forcing to switch new WAL file, see this :
> https://www.postgresql.org/docs/devel/static/runtime-config-wal.html#GUC-ARCHIVE-TIMEOUT

Out of curiosity, which version are you using? That's 9.6, no?
-- 
Michael




Re: [GENERAL] checkpoint_timeout with no WAL activity

2016-11-07 Thread Tom DalPozzo
I know that, but with neither database activity nor a checkpoint, it doesn't
force anything. The fact is that there are checkpoints being executed every
checkpoint_timeout, and I don't understand why, as if no WAL has been written
we should not care about passing the timeout.
Regards
Pupillo





2016-11-07 13:14 GMT+01:00 amul sul :

> On Mon, Nov 7, 2016 at 4:20 PM, Tom DalPozzo  wrote:
> > Hi,
> > I have:
> > checkpoint_timeout = 2min
> > wal_level = archive
> > archive_mode=on
> > archive_timeout = 30
> >
> > With NO dbase activity, I see the WAL being modified every 2min (and,
> > consequently, one WAL file archived every 2min too ).
> >
> > Is it right? I read: "If no WAL has been written since the previous
> > checkpoint, new checkpoints will be skipped even if checkpoint_timeout
> has
> > passed."
>
> I guess archive_timeout is forcing a switch to a new WAL file; see this:
> https://www.postgresql.org/docs/devel/static/runtime-config-wal.html#GUC-ARCHIVE-TIMEOUT
>
> Regards,
> Amul
>


Re: [GENERAL] checkpoint_timeout with no WAL activity

2016-11-07 Thread amul sul
On Mon, Nov 7, 2016 at 4:20 PM, Tom DalPozzo  wrote:
> Hi,
> I have:
> checkpoint_timeout = 2min
> wal_level = archive
> archive_mode=on
> archive_timeout = 30
>
> With NO dbase activity, I see the WAL being modified every 2min (and,
> consequently, one WAL file archived every 2min too ).
>
> Is it right? I read: "If no WAL has been written since the previous
> checkpoint, new checkpoints will be skipped even if checkpoint_timeout has
> passed."

I guess archive_timeout is forcing a switch to a new WAL file; see this:
https://www.postgresql.org/docs/devel/static/runtime-config-wal.html#GUC-ARCHIVE-TIMEOUT

Regards,
Amul




[GENERAL] Re: What is the best thing to do with PUBLIC schema in Postgresql database

2016-11-07 Thread Albe Laurenz
Patricia Hu wrote:
> Since it could potentially be a security loophole, the actions taken so far
> to address it fall into these two categories:
> 
> * drop the PUBLIC schema altogether. One concern is that some of the system
>   objects previously exposed through the PUBLIC schema, e.g.
>   pg_stat_statements, will now need other explicit grants to be accessible
>   to users.
> * keep the PUBLIC schema but revoke all privileges on it from the public
>   role, then grant them back as the need comes up.
> 
> Any feedback and lessons from those who have implemented this?

I'd prefer the second approach as it is less invasive and prevents
undesirable objects in schema "public" just as well.
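
A minimal sketch of that second approach (the role names here are only
examples):

REVOKE ALL ON SCHEMA public FROM PUBLIC;
GRANT USAGE ON SCHEMA public TO app_role;
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA public TO monitoring_role;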

> Confidentiality Notice::  This email, including attachments, may include 
> non-public, proprietary,
> confidential or legally privileged information.  If you are not an intended 
> recipient or an authorized
> agent of an intended recipient, you are hereby notified that any 
> dissemination, distribution or
> copying of the information contained in or transmitted with this e-mail is 
> unauthorized and strictly
> prohibited.

You are hereby notified that any dissemination, distribution or copying of the 
information
contained in or transmitted with your e-mail is hunky-dory.

Yours,
Laurenz Albe



[GENERAL] checkpoint_timeout with no WAL activity

2016-11-07 Thread Tom DalPozzo
Hi,
I have:
checkpoint_timeout = 2min
wal_level = archive
archive_mode=on
archive_timeout = 30

With NO dbase activity, I see the WAL being modified every 2min (and,
consequently, one WAL file archived every 2min too ).

Is it right? I read: "If no WAL has been written since the previous
checkpoint, new checkpoints will be skipped even if checkpoint_timeout has
passed."




Regards
Pupillo


[GENERAL] Changing foreign key referential actions in big databases

2016-11-07 Thread Arthur Silva
Hi all, we're running a few Pg databases in production.

Ubuntu 14.04 x64
32 x64 cores
64GB to 256GB memory, depending on cluster
PostgreSQL 9.3.9 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu
4.8.2-19ubuntu1) 4.8.2, 64-bit
FusionIO storage

We recently started looking into a long standing ticket to change some
foreign keys referential actions from CASCADE to RESTRICT for our own
safety. Everything else in the FK stays the same.

The problem is that running a query like the one below takes an exclusive
lock for too long (order of minutes in some tables when testing against a
backup db).

ALTER TABLE "partneracl"
DROP CONSTRAINT "partneracl_partner_fkey",
ADD CONSTRAINT "partneracl_partner_fkey"
FOREIGN KEY ("partner")
REFERENCES "partner"("name");

Is there any way to change the foreign key referential actions quickly
and/or without an exclusive lock?
Is it safe(ish) to just update pg_constraint.confupdtype and
pg_constraint.confdeltype for those?

Regards

--
Arthur Silva


Re: [GENERAL] Dynamic execution returning large result sets

2016-11-07 Thread Emrul
Read up on refcursors - exactly what I wanted, thank you Adrian!
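
For other readers of the archive, a minimal refcursor sketch (the function,
cursor and table names are made up): the function opens a cursor over a
dynamically built query and returns it, so the caller can fetch the large
result set in batches inside the same transaction.

CREATE OR REPLACE FUNCTION open_big_result(tbl text)
RETURNS refcursor
LANGUAGE plpgsql AS $$
DECLARE
    c refcursor := 'big_result';  -- name the portal so the caller can FETCH from it
BEGIN
    OPEN c FOR EXECUTE format('SELECT * FROM %I', tbl);
    RETURN c;
END;
$$;

BEGIN;
SELECT open_big_result('my_table');
FETCH 1000 FROM big_result;       -- repeat until no rows are returned
COMMIT;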



--
View this message in context: 
http://postgresql.nabble.com/Dynamic-execution-returning-large-result-sets-tp5929177p5929211.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] High load average every 105 minutes

2016-11-07 Thread Chris Mair



with AWS, your system is sharing the vendors virtual machine environment with 
other customers, and performance is pretty much out of your control.



I found no strange processes or queries while load average was at peak. IO also 
didn't change. Some more slow queries were logged, but not many.
I think sharing the VM with other customers doesn’t have much to do with this. 
I checked my other servers too, and only those that have postgresql have the 
load average issue. Generally it doesn’t impact my system much, but when there 
are slow queries, this issue just makes everything worse.


Hi,

generally speaking AWS is pretty good at isolating users (and you can request 
single tenancy machines or
dedicated machines as well if you're concerned about this).

However, if you're running t1 or t2 instances, you get the concept of CPU 
credits. When those run out, your
system is slowed down until the credits recover. I could imagine that this way 
some cyclic load patterns
emerge, if there is constant load on the machines.

Nhan, what instance types are you running?

Bye,
Chris.








