Re: [GENERAL] Need for re-index after pg_upgrade

2015-06-29 Thread Andreas Joseph Krogh
On Monday, 29 June 2015 at 15:42:22, Tom Lane t...@sss.pgh.pa.us wrote:
Andreas Joseph Krogh andr...@visena.com writes:
  In the man-page for pg_upgrade we see this:
  pg_upgrade will require a reindex if:
   * an index is of type hash or GIN

 I think that probably refers to some version-specific upgrade
 situations; I can't see a reason why it would be true in general.
 Bruce, doesn't that doc need improvement?

 regards, tom lane
 
I'm specifically thinking about this (from 9.4 release-notes):
E.5.3.1.1. Indexes 


 * Reduce GIN (http://www.postgresql.org/docs/9.4/static/gin.html) index size
(Alexander Korotkov, Heikki Linnakangas)

Indexes upgraded via pg_upgrade
(http://www.postgresql.org/docs/9.4/static/pgupgrade.html) will work fine but
will still be in the old, larger GIN format. Use REINDEX
(http://www.postgresql.org/docs/9.4/static/sql-reindex.html) to recreate old
GIN indexes in the new format.


 
It'd be great if pg_upgrade generated re-index scripts to make those indexes 
use the new format.
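
In the meantime, a catalog query can generate them. A minimal sketch (it
assumes every user GIN index should be rebuilt after the upgrade to 9.4; run
the statements it emits afterwards):

-- List all GIN indexes and emit a REINDEX statement for each one.
SELECT 'REINDEX INDEX ' || quote_ident(n.nspname) || '.' ||
       quote_ident(c.relname) || ';'
FROM pg_class c
JOIN pg_am am ON am.oid = c.relam
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'i'              -- indexes only
  AND am.amname = 'gin'
  AND n.nspname <> 'pg_catalog';   -- skip system objects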
 
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com


 


[GENERAL] Which replication is the best for our case ?

2015-06-29 Thread ben.play
Hi guys, 

We have a PG database with more than 400 GB of data.
At this moment, a cron job runs every ten minutes and updates about 10,000
rows using complex algorithms written in PHP.

Each time the cron job runs, the website is almost down because some queries
have to update the FULL table...

Therefore, I'm asking if it's possible to duplicate my main database on a
slave server in order to run these cron jobs on this second server... and
then replicate these changes back to the main database (master).

Which replication is the best in this case?

http://www.postgresql.org/docs/9.3/static/warm-standby.html ?

Do you have any links or tutorials which explain this kind of operation?

Thanks a lot !







Re: [GENERAL] Which replication is the best for our case ?

2015-06-29 Thread Adrian Klaver

On 06/29/2015 06:02 AM, ben.play wrote:

Hi guys,

We have a PG database with more than 400 GB of data.
At this moment, a cron job runs every ten minutes and updates about 10,000
rows using complex algorithms written in PHP.

Each time the cron job runs, the website is almost down because some queries
have to update the FULL table...

Therefore, I'm asking if it's possible to duplicate my main database on a
slave server in order to run these cron jobs on this second server... and
then replicate these changes back to the main database (master).


So you want the replication to go from the standby back to the master?
If that is the case, it is not possible.



Which replication is the best in this case?

http://www.postgresql.org/docs/9.3/static/warm-standby.html ?

Do you have any links or tutorials which explain this kind of operation?

Thanks a lot !








--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Need for re-index after pg_upgrade

2015-06-29 Thread Tom Lane
Andreas Joseph Krogh andr...@visena.com writes:
 In the man-page for pg_upgrade we see this:
 pg_upgrade will require a reindex if:
  * an index is of type hash or GIN

I think that probably refers to some version-specific upgrade
situations; I can't see a reason why it would be true in general.
Bruce, doesn't that doc need improvement?

regards, tom lane




Re: [GENERAL] Need for re-index after pg_upgrade

2015-06-29 Thread Adrian Klaver

On 06/29/2015 06:42 AM, Tom Lane wrote:

Andreas Joseph Krogh andr...@visena.com writes:

In the man-page for pg_upgrade we see this:
pg_upgrade will require a reindex if:
  * an index is of type hash or GIN


I think that probably refers to some version-specific upgrade
situations; I can't see a reason why it would be true in general.
Bruce, doesn't that doc need improvement?



Yes, for upgrades coming from 8.3:

http://www.postgresql.org/docs/9.4/static/pgupgrade.html

Limitations in Upgrading from PostgreSQL 8.3


pg_upgrade will require a reindex if:

an index is of type hash or GIN

an index uses bpchar_pattern_ops

This is how it shows up in my 9.4 man file also.



regards, tom lane





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Which replication is the best for our case ?

2015-06-29 Thread Adrian Klaver

On 06/29/2015 06:02 AM, ben.play wrote:

Hi guys,

We have a PG database with more than 400 GB of data.
At this moment, a cron job runs every ten minutes and updates about 10,000
rows using complex algorithms written in PHP.

Each time the cron job runs, the website is almost down because some queries
have to update the FULL table...

Therefore, I'm asking if it's possible to duplicate my main database on a
slave server in order to run these cron jobs on this second server... and
then replicate these changes back to the main database (master).

Which replication is the best in this case?

http://www.postgresql.org/docs/9.3/static/warm-standby.html ?

Do you have any links or tutorials which explain this kind of operation?


I should have been clearer in my original post. What you want is not 
possible using the procedures found at the link you posted. What you are 
looking for is Master to Master replication. This is not something I 
have done, so I am not the person to offer detailed advice on that 
particular style of replication. I do know that this list sees a lot of 
activity with regard to BDR (http://bdr-project.org/docs/stable/index.html),
which provides Master to Master replication.




Thanks a lot !






--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] Need for re-index after pg_upgrade

2015-06-29 Thread Andreas Joseph Krogh
Hi all.
 
In the man-page for pg_upgrade we see this:
pg_upgrade will require a reindex if:


 * an index is of type hash or GIN


 
A bit further up we see:
All failure, rebuild, and reindex cases will be reported by pg_upgrade if they 
affect your installation; post-upgrade scripts to rebuild tables and indexes 
will be generated automatically.
 
I have several GIN indexes and upgraded from 9.3 to 9.4, but no re-index
scripts were generated. Does this mean there's no need to rebuild the indexes,
or just that I have to do it manually?
 
Thanks.
 
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com




Re: [GENERAL] pg_xlog on a hot_standby slave

2015-06-29 Thread Stéphane Schildknecht
On 16/06/2015 10:55, Xavier 12 wrote:
 Hi everyone,
 
 Questions about pg_xlogs again...
 I have two Postgresql 9.1 servers in a master/slave stream replication
 (hot_standby).
 
 Psql01 (master) is backed up with Barman and pg_xlog is correctly
 purged (archive_command is used).
 
 However, Psql02 (slave) has a huge pg_xlog (951 files, 15G for only 7
 days, and it keeps growing until disk space is full). I have found
 documentation, tutorials and mailing-list threads, but I don't know what
 is suitable for a slave. Leads I've found:
 
 - checkpoints
 - archive_command
 - archive_cleanup
 
 Master postgresql.conf:
 
 [...]
 wal_level = 'hot_standby'
 archive_mode = on
 archive_command = 'rsync -az /var/lib/postgresql/9.1/main/pg_xlog/%f
 bar...@nas.lan:/data/pgbarman/psql01/incoming/%f'
 max_wal_senders = 5
 wal_keep_segments = 64
 autovacuum = on
 
 Slave postgresql.conf:
 
 [...]
 wal_level = minimal
 wal_keep_segments = 32
 hot_standby = on
 
 Slave recovery.conf:
 
 standby_mode = 'on'
 primary_conninfo = 'host=10.0.0.1 port=5400 user=postgres'
 trigger_file = '/var/lib/postgresql/9.1/triggersql'
 restore_command='cp /var/lib/postgresql/9.1/wal_archive/%f %p'
 archive_cleanup_command =
 '/usr/lib/postgresql/9.1/bin/pg_archivecleanup
 /var/lib/postgresql/9.1/wal_archive/ %r'
 
 
 
 
 
 How can I reduce the number of WAL files on the hot_standby slave?
 
 Thanks
 
 Regards.
 
 Xavier C.
 
 


I wonder why you are doing a cp in your recovery.conf on the slave.
A restore_command is quite correct for when streaming can't get WAL from the
master, but cp is probably not the right tool.

You also cp from the master archive directory, and are cleaning on that
directory as well.

You don't clean up the standby xlog directory. And cp may copy incomplete WAL
files.

The streaming replication can take care of your xlog cleanup, unless you
introduce WAL files by other means (a manual cp, for instance).

S.

-- 
Stéphane Schildknecht
Regional PostgreSQL contact for French-speaking Europe
Loxodata - Consulting, expertise and training
06.17.11.37.42




Re: [GENERAL] pg_xlog on a hot_standby slave

2015-06-29 Thread Xavier 12



On 29/06/2015 11:38, Stéphane Schildknecht wrote:

On 16/06/2015 10:55, Xavier 12 wrote:

Hi everyone,

Questions about pg_xlogs again...
I have two Postgresql 9.1 servers in a master/slave stream replication
(hot_standby).

Psql01 (master) is backed up with Barman and pg_xlog is correctly
purged (archive_command is used).

However, Psql02 (slave) has a huge pg_xlog (951 files, 15G for only 7
days, and it keeps growing until disk space is full). I have found
documentation, tutorials and mailing-list threads, but I don't know what
is suitable for a slave. Leads I've found:

- checkpoints
- archive_command
- archive_cleanup

Master postgresql.conf:

[...]
wal_level = 'hot_standby'
archive_mode = on
archive_command = 'rsync -az /var/lib/postgresql/9.1/main/pg_xlog/%f
bar...@nas.lan:/data/pgbarman/psql01/incoming/%f'
max_wal_senders = 5
wal_keep_segments = 64
autovacuum = on

Slave postgresql.conf:

[...]
wal_level = minimal
wal_keep_segments = 32
hot_standby = on

Slave recovery.conf:

standby_mode = 'on'
primary_conninfo = 'host=10.0.0.1 port=5400 user=postgres'
trigger_file = '/var/lib/postgresql/9.1/triggersql'
restore_command='cp /var/lib/postgresql/9.1/wal_archive/%f %p'
archive_cleanup_command =
'/usr/lib/postgresql/9.1/bin/pg_archivecleanup
/var/lib/postgresql/9.1/wal_archive/ %r'





How can I reduce the number of WAL files on the hot_standby slave?

Thanks

Regards.

Xavier C.




I wonder why you are doing a cp in your recovery.conf on the slave.
A restore_command is quite correct for when streaming can't get WAL from the
master, but cp is probably not the right tool.

You also cp from the master archive directory, and are cleaning on that
directory as well.

You don't clean up the standby xlog directory. And cp may copy incomplete WAL
files.

The streaming replication can take care of your xlog cleanup, unless you
introduce WAL files by other means (a manual cp, for instance).

S.


I use cp because /var/lib/postgresql/9.1/wal_archive/ is a temporary
directory. I use it to build the replication (copying WALs with rsync from
the master). The slave uses it to start; subsequent WALs are then written to
/var/lib/postgresql/9.1/main/pg_xlog.


Xavier C.




Re: [GENERAL] Correct place for feature requests

2015-06-29 Thread Merlin Moncure
On Thu, Jun 25, 2015 at 1:59 PM, Алексей Бережняк c...@irc.lg.ua wrote:
 I think that PostgreSQL is a great RDBMS, but one important (for me)
 feature that it is missing is case-insensitive identifier quoting
 ([table].[column]) like in Microsoft SQL Server.

 I know that there are double quotes, but they are case-sensitive.

 Or maybe some option that will make double quotes case-insensitive.

This behavior is not going to change because of having to support
decades of code written around the current set of rules.  SQL Server
is very much in the same boat, having inherited its somewhat quixotic
casing rules.  SQL Server's semantics are not SQL standard anyway, so
even if postgres did change it would not be in that direction.  SQL
Server is basically fully case insensitive, while postgres is case
sensitive (but works around this to some extent via case folding).

It's really somewhat of a shame, but to write portable SQL you want to
name_identifiers_like_this (particularly with postgres due to case
folding) and completely avoid syntax that requires identifier quoting.
Also stay away from keywords or anything that might become one.
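
A minimal sketch of the folding behavior (table and column names are just for
illustration): unquoted identifiers fold to lower case, while quoted ones keep
their exact spelling, so the two forms name different things.

CREATE TABLE t (my_col int);
SELECT MY_COL, My_Col, my_col FROM t;   -- all fold to my_col: works
SELECT "MY_COL" FROM t;                 -- ERROR: column "MY_COL" does not exist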

merlin




Re: [GENERAL] Which replication is the best for our case ?

2015-06-29 Thread Holger.Friedrich-Fa-Trivadis
Arthur Silva wrote on Monday, June 29, 2015 5:23 PM:
 Therefore, I'm asking if it's possible to duplicate my main database on a
 slave server in order to run these cron jobs on this second server... and
 then replicate these changes back to the main database (master).

 http://www.postgresql.org/docs/9.3/static/warm-standby.html ?

If you do it the other way around, a hot standby (see:  
http://www.postgresql.org/docs/9.3/static/hot-standby.html) might work for you.

That is, your cron scripts would UPDATE the master server, and the Web site 
would do read-only queries against the slave server.

That’s for the built-in replication of PostgreSQL (sorry, I have no experience 
with the add-on replication solutions).
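
A minimal sketch of that setup for 9.3-era built-in streaming replication
(host name and replication user are hypothetical):

# master postgresql.conf
wal_level = hot_standby
max_wal_senders = 3

# standby postgresql.conf
hot_standby = on          # allow read-only queries during recovery

# standby recovery.conf
standby_mode = 'on'
primary_conninfo = 'host=master.example.com port=5432 user=replicator'

The website's SELECTs then go to the standby, while the cron jobs keep
writing to the master and the changes stream down automatically.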




Re: [GENERAL] Which replication is the best for our case ?

2015-06-29 Thread Arthur Silva
On Mon, Jun 29, 2015 at 10:02 AM, ben.play benjamin.co...@playrion.com
wrote:

 Hi guys,

 We have a PG database with more than 400 GB of data.
 At this moment, a cron job runs every ten minutes and updates about 10,000
 rows using complex algorithms written in PHP.

 Each time the cron job runs, the website is almost down because some queries
 have to update the FULL table...

 Therefore, I'm asking if it's possible to duplicate my main database on a
 slave server in order to run these cron jobs on this second server... and
 then replicate these changes back to the main database (master).

 Which replication is the best in this case?

 http://www.postgresql.org/docs/9.3/static/warm-standby.html ?

 Do you have any links or tutorials which explain this kind of operation?

 Thanks a lot !








Hello Adrian, can you give us an example of such a FULL table update query?

By "website down" do you mean slowed to a halt, or read-only mode (due to the
update locks)?

Either way it doesn't look like replication is going to solve your problem.


Re: [GENERAL] Which replication is the best for our case ?

2015-06-29 Thread Adrian Klaver

On 06/29/2015 08:23 AM, Arthur Silva wrote:

On Mon, Jun 29, 2015 at 10:02 AM, ben.play benjamin.co...@playrion.com
mailto:benjamin.co...@playrion.com wrote:

Hi guys,

We have a PG database with more than 400 GB of data.
At this moment, a cron job runs every ten minutes and updates about 10,000
rows using complex algorithms written in PHP.

Each time the cron job runs, the website is almost down because some queries
have to update the FULL table...

Therefore, I'm asking if it's possible to duplicate my main database on a
slave server in order to run these cron jobs on this second server... and
then replicate these changes back to the main database (master).

Which replication is the best in this case?

http://www.postgresql.org/docs/9.3/static/warm-standby.html ?

Do you have any links or tutorials which explain this kind of operation?

Thanks a lot !








Hello Adrian, can you give us an example of such a FULL table update query?


Actually it is the OP (Ben) that is going to have to supply that.



By "website down" do you mean slowed to a halt, or read-only mode (due to the
update locks)?

Either way it doesn't look like replication is going to solve your problem.



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Re: Get the difference between two timestamp cells but in a special format in PostgreSQL

2015-06-29 Thread Colin Lieberman
Check the formatting functions documentation:
http://www.postgresql.org/docs/9.4/static/functions-formatting.html

# select to_char( now() - '2015-06-27 14:33:24',
                  'Yyears MMmonths DDdays HH:MI:SS.MS' );

               to_char
--------------------------------------
 0years 00months 01days 06:24:59.381
(1 row)


On Fri, Jun 26, 2015 at 10:52 AM, Adrian Klaver adrian.kla...@aklaver.com
wrote:

 On 06/25/2015 08:23 PM, litu16 wrote:

 Hi Adrian,

 but I would like to get the time diff in this format:

 0years 0months 0days 00:00:00.000

 not only hours, minutes, seconds.

 Is this possible?


 Well, age():

 http://www.postgresql.org/docs/9.4/interactive/functions-datetime.html

 does that, sort of:

 production=# SELECT age(timestamp '2015-06-26 09:15:15', timestamp
 '2015-06-26 02:16:00');
    age
 ----------
  06:59:15
 (1 row)

 production=# SELECT age(timestamp '2015-06-26 09:15:15', timestamp
 '2015-06-20 02:16:00');
        age
 -----------------
  6 days 06:59:15
 (1 row)

 production=# SELECT age(timestamp '2015-06-26 09:15:15', timestamp
 '2014-06-20 02:16:00');
           age
 ------------------------
  1 year 6 days 06:59:15
 (1 row)






 Thanks in advance.







 --
 Adrian Klaver
 adrian.kla...@aklaver.com






Re: [GENERAL] Which replication is the best for our case ?

2015-06-29 Thread Jeff Janes
On Mon, Jun 29, 2015 at 6:02 AM, ben.play benjamin.co...@playrion.com
wrote:

 Hi guys,

 We have a PG database with more than 400 GB of data.
 At this moment, a cron job runs every ten minutes and updates about 10,000
 rows using complex algorithms written in PHP.

 Each time the cron job runs, the website is almost down because some queries
 have to update the FULL table...


Why is it updating the full table of 400GB if it only changes 10,000 lines?

If most of the rows are being updated degenerately (they update the column
to have the same value it already has), then just add a WHERE clause to
filter out those degenerate updates, unless the degenerate update is needed
for locking purposes, which is rare.
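
A minimal sketch with hypothetical table, column, and function names: rows
whose value would not actually change are neither rewritten nor locked.

UPDATE scores
   SET rank = compute_rank(points)
 WHERE rank IS DISTINCT FROM compute_rank(points);

IS DISTINCT FROM (rather than <>) also skips the row correctly when either
side is NULL.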


Therefore, I'm asking if it's possible to duplicate my main database on a
 slave server in order to run these cron jobs on this second server... and
 then replicate these changes back to the main database (master).


Sounds like you are trying to use a bulldozer to change a lightbulb.

Improving queries (including the effect running some queries has on the
entire system) starts with EXPLAIN (ANALYZE, BUFFERS), not with
multimaster replication.
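
A minimal sketch (the UPDATE is hypothetical): wrapping it in a transaction
that is rolled back lets you time the statement and see its buffer I/O
without actually applying the changes, since EXPLAIN ANALYZE executes the
statement.

BEGIN;
EXPLAIN (ANALYZE, BUFFERS)
UPDATE scores SET rank = rank + 1
 WHERE updated_at > now() - interval '10 minutes';
ROLLBACK;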

Cheers,

Jeff


Re: [GENERAL] Which replication is the best for our case ?

2015-06-29 Thread John R Pierce

On 6/29/2015 10:41 AM, Melvin Davidson wrote:
I think it would help immensely if you provided details such as
table structure, indexes, the actual UPDATE query and the reason all
rows of the table must be updated.




indeed, the whole model of massaging the entire database every 10 
minutes is highly suspect.



--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Which replication is the best for our case ?

2015-06-29 Thread Melvin Davidson
I think it would help immensely if you provided details such as
table structure, indexes, the actual UPDATE query and the reason all rows of
the table must be updated.

On Mon, Jun 29, 2015 at 1:15 PM, Jeff Janes jeff.ja...@gmail.com wrote:

 On Mon, Jun 29, 2015 at 6:02 AM, ben.play benjamin.co...@playrion.com
 wrote:

 Hi guys,

 We have a PG database with more than 400 GB of data.
  At this moment, a cron job runs every ten minutes and updates about 10,000
  rows using complex algorithms written in PHP.

  Each time the cron job runs, the website is almost down because some queries
  have to update the FULL table...


 Why is it updating the full table of 400GB if it only changes 10,000 lines?

  If most of the rows are being updated degenerately (they update the column
  to have the same value it already has), then just add a WHERE clause to
  filter out those degenerate updates, unless the degenerate update is needed
  for locking purposes, which is rare.


  Therefore, I'm asking if it's possible to duplicate my main database on a
  slave server in order to run these cron jobs on this second server... and
  then replicate these changes back to the main database (master).


 Sounds like you are trying to use a bulldozer to change a lightbulb.

 Improving queries (including the effect running some queries has on the
 entire system)  starts with EXPLAIN (ANALYZE, BUFFERS), not with
 multimaster replication.

 Cheers,

 Jeff




-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Inserting from multiple processes?

2015-06-29 Thread Francisco Olarte
Hi Dave:

On Mon, Jun 29, 2015 at 6:32 AM, Dave Johansen davejohan...@gmail.com wrote:
 The issue is that the following uses 5 XIDs when I would only expect it to
 use 1:
 BEGIN;
 SELECT insert_test_no_dup('2015-01-01', 1, 1);

 END;

I see.

 It appears that the unique violation that is caught and ignored increments
 the XID even though I didn't expect that to happen. I agree that our
 software was burning XIDs needlessly and Postgres handled this situation as
 best as it could. It also sounds like Postgres 9.5 adds features to support
 this sort of use more efficiently, but the XID incrementing on the unique
 violation seems like it could/should be fixed, if it hasn't been already.

IIRC you were using BEGIN/EXCEPTION, which I think uses a savepoint
internally, which may be what is burning the XID on every execution
(it probably needs one to implement rollback to savepoint properly).
I've done a simple test which burns one every time the exception is
raised (using a division by zero).

If this is your case you may be able to work around it using a
conditional insert instead of an exception, and as you are using a
function the potential ugliness will remain encapsulated (it may even
be faster, as the docs explicitly say exception blocks are expensive,
but as usual YMMV depending on the exact query and the collision
ratio).
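
A minimal sketch of the conditional form (table and column names are
hypothetical, mirroring the insert_test_no_dup example): the row is inserted
only when no matching row exists, so no unique_violation is raised and no
internal savepoint (hence no extra XID) is consumed in the common case.

INSERT INTO test_no_dup (ts, a, b)
SELECT '2015-01-01', 1, 1
WHERE NOT EXISTS (
    SELECT 1 FROM test_no_dup WHERE ts = '2015-01-01' AND a = 1
);

Under concurrent inserters two sessions can still race past the NOT EXISTS
check, so this narrows the exception path rather than eliminating it.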

Francisco Olarte.




Re: [GENERAL] archive_timeout and WAL size

2015-06-29 Thread Adrian Klaver

On 06/29/2015 11:49 AM, Edson Richter wrote:

Dear community,

I'm using PostgreSQL 9.3.6 on Linux x64.

This may sound like a stupid question, and sorry if it was already asked
before: if I set archive_timeout so that the files are sent every
minute (for example), are the files still 16MB in size, or are they
truncated and sent in smaller sizes as well?
I've not found this information in the docs.


See here:

http://www.postgresql.org/docs/9.4/interactive/runtime-config-wal.html#RUNTIME-CONFIG-WAL-ARCHIVING

archive_timeout (integer)

The archive_command is only invoked for completed WAL segments. 
Hence, if your server generates little WAL traffic (or has slack periods 
where it does so), there could be a long delay between the completion of 
a transaction and its safe recording in archive storage. To limit how 
old unarchived data can be, you can set archive_timeout to force the 
server to switch to a new WAL segment file periodically. When this 
parameter is greater than zero, the server will switch to a new segment 
file whenever this many seconds have elapsed since the last segment file 
switch, and there has been any database activity, including a single 
checkpoint. (Increasing checkpoint_timeout will reduce unnecessary 
checkpoints on an idle system.) Note that archived

   ^^
files that are closed early due to a forced switch are still the same
^
length as completely full files. Therefore, it is unwise to use a very
^^^
short archive_timeout — it will bloat your archive storage. 
archive_timeout settings of a minute or so are usually reasonable. You 
should consider using streaming replication, instead of archiving, if 
you want data to be copied off the master server more quickly than that. 
This parameter can only be set in the postgresql.conf file or on the 
server command line.
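
A minimal sketch of the relevant postgresql.conf settings (the archive
destination is hypothetical; a real archive_command should also fail if the
target file already exists):

archive_mode = on
archive_timeout = 60                              # switch at least once a minute
archive_command = 'cp %p /mnt/server/archive/%f'  # each file is still a full 16MB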




Thanks,




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] archive_timeout and WAL size

2015-06-29 Thread Edson Richter

Thanks, Adrian.

That's the reference I was looking for.


Best regards,

Edson Carlos Ericksson Richter

On 29/06/2015 15:55, Adrian Klaver wrote:

On 06/29/2015 11:49 AM, Edson Richter wrote:

Dear community,

I'm using PostgreSQL 9.3.6 on Linux x64.

This may sound like a stupid question, and sorry if it was already asked
before: if I set archive_timeout so that the files are sent every
minute (for example), are the files still 16MB in size, or are they
truncated and sent in smaller sizes as well?
I've not found this information in the docs.


See here:

http://www.postgresql.org/docs/9.4/interactive/runtime-config-wal.html#RUNTIME-CONFIG-WAL-ARCHIVING 



archive_timeout (integer)

The archive_command is only invoked for completed WAL segments. 
Hence, if your server generates little WAL traffic (or has slack 
periods where it does so), there could be a long delay between the 
completion of a transaction and its safe recording in archive storage. 
To limit how old unarchived data can be, you can set archive_timeout 
to force the server to switch to a new WAL segment file periodically. 
When this parameter is greater than zero, the server will switch to a 
new segment file whenever this many seconds have elapsed since the 
last segment file switch, and there has been any database activity, 
including a single checkpoint. (Increasing checkpoint_timeout will 
reduce unnecessary checkpoints on an idle system.) Note that archived

^^
files that are closed early due to a forced switch are still the same
^
length as completely full files. Therefore, it is unwise to use a very
^^^
short archive_timeout — it will bloat your archive storage. 
archive_timeout settings of a minute or so are usually reasonable. You 
should consider using streaming replication, instead of archiving, if 
you want data to be copied off the master server more quickly than 
that. This parameter can only be set in the postgresql.conf file or on 
the server command line.




Thanks,










[GENERAL] plpgsql question: select into multiple variables ?

2015-06-29 Thread Day, David
Hi,



Postgres version 9.3.9


What is wrong with my usage of the plpgsql SELECT INTO concept?
I have a function that looks into a calendar table to find the first and
last weekend dates of a month.

In this simplified concept function I end up with a NULL for the first or
last weekend variable.


create or replace function sys.time_test ()
returns date as
$$
DECLARE
   first_weekend date;
   last_weekend date;
BEGIN

  SELECT MIN(CAL_DATE),MAX(CAL_DATE) INTO first_weekend::date, last_weekend::date
  FROM sys.calendar
  WHERE month_of_year = (extract(MONTH FROM current_date))::int AND
        year_of_date = (extract(YEAR FROM current_date))::int AND
        day_of_week IN ( 'Sat','Sun');

  RETURN( COALESCE(last_weekend,'01-jun-2014'));

END
$$
LANGUAGE plpgsql volatile;


If I execute the same select logic from a psql shell I get the correct result:

ace_db=# SELECT MIN(CAL_DATE),MAX(CAL_DATE) FROM sys.calendar cal
         WHERE cal.month_of_year = (extract(MONTH FROM current_date))::int AND
         cal.year_of_date = (extract(YEAR FROM current_date))::int AND
         cal.day_of_week IN ( 'Sat','Sun');
    min     |    max
------------+------------
 2015-06-06 | 2015-06-28
(1 row)


If I simplify to a single variable it works, i.e.:


create or replace function sys.time_test ()
returns date as
$$
DECLARE
   first_weekend date;
   last_weekend date;
BEGIN

  SELECT MIN(CAL_DATE) INTO first_weekend::date
  FROM sys.calendar
  WHERE month_of_year = (extract(MONTH FROM current_date))::int AND
        year_of_date = (extract(YEAR FROM current_date))::int AND
        day_of_week IN ( 'Sat','Sun');

  RETURN( COALESCE(first_weekend,'01-jun-2014'));

END
$$
LANGUAGE plpgsql volatile;



I suppose I can adjust my actual function to have two selects, one for
each variable. However, I thought that according to the documentation the
targets could/must match the result columns for SELECT INTO?


Thoughts


Thanks


Dave Day






[GENERAL] archive_timeout and WAL size

2015-06-29 Thread Edson Richter

Dear community,

I'm using PostgreSQL 9.3.6 on Linux x64.

This may sound like a stupid question, and sorry if it was already asked
before: if I set archive_timeout so that the files are sent every
minute (for example), are the files still 16MB in size, or are they
truncated and sent in smaller sizes as well?

I've not found this information in docs.

Thanks,

--
Best regards,

Edson Carlos Ericksson Richter





Re: [GENERAL] plpgsql question: select into multiple variables ?

2015-06-29 Thread Day, David

-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@aklaver.com] 
Sent: Monday, June 29, 2015 4:03 PM
To: Day, David; pgsql-general@postgresql.org
Subject: Re: [GENERAL] plpgsql question: select into multiple variables ?

On 06/29/2015 12:07 PM, Day, David wrote:
 Hi,



 Postgres version 9.3.9


 What is wrong with my usage of the plpgsql SELECT INTO concept? I
 have a function that looks into a calendar table to find the first and
 last weekend dates of a month.

 In this simplified concept function I end up with a NULL for the first or
 last weekend variable.


 create or replace function sys.time_test () returns date as $$ DECLARE
 first_weekend date;
 last_weekend date;
 BEGIN

SELECT MIN(CAL_DATE),MAX(CAL_DATE) INTO first_weekend::date, 
 last_weekend::date FROM sys.calendar
WHERE month_of_year = (extract(MONTH FROM current_date))::int AND
year_of_date = (extract(YEAR FROM current_date))::int AND
   day_of_week IN ( 'Sat','Sun');

   RETURN( COALESCE(last_weekend,'01-jun-2014'));

 END
 $$
 LANGUAGE plpgsql volatile;

The ::date casts seem to be the problem. When I tried a version of the function
here with them I got the same output. Eliminating them got the correct output.
They are redundant, as you already DECLAREd first_weekend and last_weekend to be
DATE type. So:

  SELECT MIN(CAL_DATE),MAX(CAL_DATE) INTO first_weekend, last_weekend ...


 If I execute the same select logic from a psql shell I get the correct result:

 ace_db=# SELECT MIN(CAL_DATE),MAX(CAL_DATE) FROM sys.calendar cal
          WHERE cal.month_of_year = (extract(MONTH FROM current_date))::int AND
          cal.year_of_date = (extract(YEAR FROM current_date))::int AND
          cal.day_of_week IN ( 'Sat','Sun');
     min     |    max
 ------------+------------
  2015-06-06 | 2015-06-28
 (1 row)


 If I simplify to a single variable it works. i.e


 create or replace function sys.time_test () returns date as $$ DECLARE
 first_weekend date;
 last_weekend date;
 BEGIN

SELECT MIN(CAL_DATE) INTO first_weekend::date FROM sys.calendar
WHERE month_of_year = (extract(MONTH FROM current_date))::int AND
year_of_date = (extract(YEAR FROM current_date))::int AND
   day_of_week IN ( 'Sat','Sun');

   RETURN( COALESCE(first_weekend,'01-jun-2014'));

 END
 $$
 LANGUAGE plpgsql volatile;



 I suppose I can adjust my actual function to have two selects, one for
 each variable. However, I thought that according to the documentation the
 targets could/must match the result columns for SELECT INTO?


 Thoughts


 Thanks


 Dave Day






--
Adrian Klaver
adrian.kla...@aklaver.com



I agree with your evaluation.
I originally had that, but in playing around with the function I had added the
casts, with no benefit and seemingly no harm either.
I later noted that I was comparing to 'SAT','SUN' rather than 'Sat','Sun' in my
calendar table.
After repairing that I forgot to back out the date cast.

Although the cast was redundant, as you pointed out, I am not quite sure why it
made it not work.
Nonetheless, I am happy to move on to other issues.


Thanks very much for your assistance.



Dave Day





Re: [GENERAL] plpgsql question: select into multiple variables ?

2015-06-29 Thread David G. Johnston
On Mon, Jun 29, 2015 at 4:27 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Adrian Klaver adrian.kla...@aklaver.com writes:
  On 06/29/2015 12:07 PM, Day, David wrote:
  What is wrong with my usage of the plpgsql  select into concept
  I have a function to look into a calendar table to find the first and
  Last weekend date of a month.
 
  create or replace function sys.time_test ()
  returns date as
  $$
  DECLARE
  first_weekend date;
  last_weekend date;
  BEGIN
 
  SELECT MIN(CAL_DATE),MAX(CAL_DATE) INTO first_weekend::date,
 last_weekend::date FROM sys.calendar ...

  The ::date cast seem to be the problem.

 Indeed.  Here's what's happening: the argument of INTO can basically only
 be a list of variable names.  (Well, they can be qualified field names,
 but certainly not cast expressions.)  And there's this messy legacy syntax
 rule that says the INTO clause can be anywhere inside the SELECT list.
  So what happens is the plpgsql parser reads INTO first_weekend, notes
 the next token is :: which can't be part of INTO, and drops back to
 handling the rest of the input as SELECT text.  So what you wrote here is
 equivalent to

 SELECT MIN(CAL_DATE),MAX(CAL_DATE)  ::date, last_weekend::date INTO
 first_weekend FROM sys.calendar ...

 which accidentally looks like perfectly valid SELECT syntax.  And I think
 it doesn't complain about too many output columns either.  So you end up
 with no reported error and very confusing results.


Does it help to recognize the fact that first_weekend::date is not a
valid identifier name (because it is lacking double-quotes)?  It knows that
:: cannot be part of INTO, but it is in the middle of reading the
characters of an identifier, and without quoting :: cannot be part of one of
those either.  Can that be made to take precedence and at least cause this
specific case to fail?


 To make this noticeably better, we'd probably have to insist that
 INTO come at the end of the SELECT list,


Are you missing a "not" here?  "...insist that INTO not come at the end of
the SELECT list"?  It does seem that any other location results in a syntax
error - including in between the two select-list columns (i.e., MAX(...)
INTO var1::date, var2::date MIN(...)).


 which would break lots and
 lots of existing client code ... so I'm not holding my breath.

 Moral of the story: being user-friendly by accepting sloppy syntax
 is not an unalloyed win.


From the documentation:

SELECT select_expressions INTO [STRICT] target FROM ...;

If a row or a variable list is used as target, the query's result columns
must exactly match the structure of the target as to number and data types,
or else a run-time error occurs. When a record variable is the target, it
automatically configures itself to the row type of the query result columns.

The INTO clause can appear almost anywhere in the SQL command. Customarily
it is written either just before or just after the list of
select_expressions in a SELECT command, or at the end of the command for
other command types. It is recommended that you follow this convention in
case the PL/pgSQL parser becomes stricter in future versions.

I've never really liked the above "customarily" advice, and do so even less
given this example.  For all other statement types the INTO is the last clause
written, and while that may not be what experienced people default to doing,
it seems reasonable, safe, and consistent to suggest the same location for
SELECT queries, while noting that indeed its positions just before or after
the select list are common in the wild.  The comment about becoming
stricter should probably just be removed because, as noted, it ain't gonna
happen.

I'll admit that this all is not likely worth a great deal of effort given
the lack of complaints and the obviousness of the problem's manifestation.
But it is the case that the lack of an error occurs in the recommended
syntax form.

David J.


Re: [GENERAL] plpgsql question: select into multiple variables ?

2015-06-29 Thread Tom Lane
Adrian Klaver adrian.kla...@aklaver.com writes:
 On 06/29/2015 12:07 PM, Day, David wrote:
 What is wrong with my usage of the plpgsql  select into concept
 I have a function to look into a calendar table to find the first and
 Last weekend date of a month.
 
 create or replace function sys.time_test ()
 returns date as
 $$
 DECLARE
 first_weekend date;
 last_weekend date;
 BEGIN
 
 SELECT MIN(CAL_DATE),MAX(CAL_DATE) INTO first_weekend::date, 
 last_weekend::date FROM sys.calendar ...

 The ::date cast seem to be the problem.

Indeed.  Here's what's happening: the argument of INTO can basically only
be a list of variable names.  (Well, they can be qualified field names,
but certainly not cast expressions.)  And there's this messy legacy syntax
rule that says the INTO clause can be anywhere inside the SELECT list.
 So what happens is the plpgsql parser reads INTO first_weekend, notes
the next token is :: which can't be part of INTO, and drops back to
handling the rest of the input as SELECT text.  So what you wrote here is
equivalent to

SELECT MIN(CAL_DATE),MAX(CAL_DATE)  ::date, last_weekend::date INTO 
first_weekend FROM sys.calendar ...

which accidentally looks like perfectly valid SELECT syntax.  And I think
it doesn't complain about too many output columns either.  So you end up
with no reported error and very confusing results.

To make this noticeably better, we'd probably have to insist that
INTO come at the end of the SELECT list, which would break lots and
lots of existing client code ... so I'm not holding my breath.

Moral of the story: being user-friendly by accepting sloppy syntax
is not an unalloyed win.

regards, tom lane




Re: [GENERAL] plpgsql question: select into multiple variables ?

2015-06-29 Thread Tom Lane
David G. Johnston david.g.johns...@gmail.com writes:
 On Mon, Jun 29, 2015 at 4:27 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 ... So what you wrote here is equivalent to
 
 SELECT MIN(CAL_DATE),MAX(CAL_DATE)  ::date, last_weekend::date INTO
 first_weekend FROM sys.calendar ...

 ​Does it help to recognize the fact that first_weekend::​date is not a
 valid identifier name (because it is lacking double-quotes)?

No.  You're supposing that we *should* reject this case, which is not
true given the current rules.  As a counterexample consider

 SELECT INTO x - y FROM foo

which per current rules means

 SELECT - y INTO x FROM foo

The only real difference between this and the :: case is that :: doesn't
come in a prefix-operator form, but that's an awfully weak reed to hang
a cross-language syntax rule on.

 To make this noticeably better, we'd probably have to insist that
 INTO come at the end of the SELECT list,

 Are you missing a not here?

No, I'm not.  See previous example.  To detect errors more completely,
we'd need a rule that what follows the INTO clause be FROM and nothing
else (well, maybe GROUP BY and some other cases, but in any case a fully
reserved word).  As things stand, to support INTO-someplace-else we have
to suppose that anything other than identifiers and commas is not part of
INTO but belongs to the SELECT expression list.  It's precisely the lack
of any clear delimiter between INTO's arguments and the main SELECT syntax
that is biting us, and as long as we allow INTO somewhere other than after
the SELECT expression list, we can't have a delimiter because of the
historical choice not to.

regards, tom lane




Re: [GENERAL] serialization failure why?

2015-06-29 Thread Simon Riggs
On 29 June 2015 at 21:13, Kevin Grittner kgri...@ymail.com wrote:

 Simon Riggs si...@2ndquadrant.com wrote:
  On 17 June 2015 at 13:52, Kevin Grittner kgri...@ymail.com wrote:
  Filipe Pina filipe.p...@impactzero.pt wrote:

  if drop the foreign key constraint on stuff_ext table there are
  no failures at all…
 
  It is my recollection that we were excluding the queries used to
  enforce referential integrity constraints from the conflict
  tracking, so I am surprised you are seeing this.  What is the exact
  version you are using (as reported by the version() function)?
 
  I don't see any mechanism for excluding anything from
  serializable checks, so I can't see how that would work.

 It is a matter of where calls to PredicateLockXxx and
 CheckForSerializableConflictXxx calls were inserted into, for
 example, heap and index AM code.  At least I think we omitted
 placing some at locations which were known to be used for RI
 enforcement; but apparently some more generic code is exercised by
 the RI trigger execution which can still trigger serialization
 failures based on FKs.

  I can't find any mention of serializability concerns in the RI
  code itself.

 It is mentioned in the README-SSI file.

  AFAIK it would be strange to exclude FK checks from
  serializability checks, since they represent a valid observation
  of an intermediate state.

 The idea that this is OK is based on the observations in the paper
 Automating the Detection of Snapshot Isolation Anomalies by
 Sudhir Jorwekar, Alan Fekete, Krithi Ramamritham, and S.
 Sudarshan[1].  To quote a key sentence from that paper:


So we are saying we can exclude FK checks from serialization, but we do
not, yet.

Since the FK checks run with a special snapshot it should be simple to
exclude them.


  Mat Views are excluded but I don't understand why that should be
  the case.  There is no documented explanation.

 Good point; it should be documented.  Basically, since the matview
 is a materialized copy of data from other relations from some prior
 point in time, the race conditions caught by SSI would be trivial
 compared to those likely to exist based on the elapsed time since
 the last REFRESH; so it would be kind of silly to try to enforce
 the more subtle interactions while ignoring the big, glaring,
 obvious one.  It would be a bit like treating a laceration of
 someone's hand when they were not breathing -- it's not the thing
 to worry about.  As we enhance matviews to have associated
 freshness information and especially once we use them like indexes
 to optimize queries this will deserve a close look, as there is
 likely to be something meaningful we can do at that time.


We should add that as a code comment.

Thanks for the complete answers to those questions.

-- 
Simon Riggs                     http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: [GENERAL] plpgsql question: select into multiple variables ?

2015-06-29 Thread Adrian Klaver

On 06/29/2015 12:07 PM, Day, David wrote:

Hi,



Postgres version 9.3.9


What is wrong with my usage of the plpgsql SELECT INTO concept?
I have a function that looks into a calendar table to find the first and
last weekend dates of a month.

In this simplified concept function I end up with a NULL for the first or
last weekend variable.


create or replace function sys.time_test ()
returns date as
$$
DECLARE
   first_weekend date;
   last_weekend date;
BEGIN

  SELECT MIN(CAL_DATE),MAX(CAL_DATE) INTO first_weekend::date, last_weekend::date
  FROM sys.calendar
  WHERE month_of_year = (extract(MONTH FROM current_date))::int AND
        year_of_date = (extract(YEAR FROM current_date))::int AND
        day_of_week IN ( 'Sat','Sun');

  RETURN( COALESCE(last_weekend,'01-jun-2014'));

END
$$
LANGUAGE plpgsql volatile;


The ::date casts seem to be the problem. When I tried a version of the
function here with them I got the same output. Eliminating them got the
correct output. They are redundant, as you already DECLAREd first_weekend
and last_weekend to be DATE type. So:

  SELECT MIN(CAL_DATE),MAX(CAL_DATE) INTO first_weekend, last_weekend ...
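
For reference, a minimal sketch of the whole function with the casts removed
(otherwise unchanged from the original):

create or replace function sys.time_test ()
returns date as
$$
DECLARE
   first_weekend date;
   last_weekend date;
BEGIN
  SELECT MIN(cal_date), MAX(cal_date) INTO first_weekend, last_weekend
  FROM sys.calendar
  WHERE month_of_year = (extract(MONTH FROM current_date))::int AND
        year_of_date = (extract(YEAR FROM current_date))::int AND
        day_of_week IN ('Sat','Sun');
  RETURN COALESCE(last_weekend, '01-jun-2014');
END
$$
LANGUAGE plpgsql volatile;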



If I execute the same select logic from a psql shell I get the correct result:

ace_db=# SELECT MIN(CAL_DATE),MAX(CAL_DATE) FROM sys.calendar cal
         WHERE cal.month_of_year = (extract(MONTH FROM current_date))::int AND
         cal.year_of_date = (extract(YEAR FROM current_date))::int AND
         cal.day_of_week IN ( 'Sat','Sun');
    min     |    max
------------+------------
 2015-06-06 | 2015-06-28
(1 row)


If I simplify to a single variable it works, i.e.:


create or replace function sys.time_test ()
returns date as
$$
DECLARE
   first_weekend date;
   last_weekend date;
BEGIN

  SELECT MIN(CAL_DATE) INTO first_weekend::date
  FROM sys.calendar
  WHERE month_of_year = (extract(MONTH FROM current_date))::int AND
        year_of_date = (extract(YEAR FROM current_date))::int AND
        day_of_week IN ( 'Sat','Sun');

  RETURN( COALESCE(first_weekend,'01-jun-2014'));

END
$$
LANGUAGE plpgsql volatile;



I suppose I can adjust my actual function to have two selects, one for
each variable. However, I thought that according to the documentation the
targets could/must match the result columns for SELECT INTO?


Thoughts


Thanks


Dave Day







--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] WAL archive resend policy

2015-06-29 Thread Edson Richter

Dear all,

Another question about WAL archiving: what is the resend policy if the
remote storage runs out of space?
Will the failed archives be resent automatically in the future, or is there
a need for manual intervention?


Thanks,

--
Best regards,

Edson Carlos Ericksson Richter





Re: [GENERAL] WAL archive resend policy

2015-06-29 Thread Joshua D. Drake


On 06/29/2015 12:49 PM, Edson Richter wrote:


Dear all,

Another question about WAL archiving: what is the resend policy if the
remote storage runs out of space?
Will the failed archives be resent automatically in the future, or is there
a need for manual intervention?


They will be resent.



Thanks,




--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Announcing I'm offended is basically telling the world you can't
control your own emotions, so everyone else should do it for you.




Re: [GENERAL] Feature request: fsync and commit_delay options per database

2015-06-29 Thread Tom Lane
Bráulio Bhavamitra brauli...@gmail.com writes:
 Besides increasing it, it would be great to have these two options
 (fsync and commit_delay) per database, that is, valid only for
 databases configured with them. That would greatly speed up test
 running and still make the cluster available for other real
 databases.

 Is this feature or something similar planned?

No.  Neither of them make any sense per-database.

regards, tom lane




Re: [GENERAL] serialization failure why?

2015-06-29 Thread Kevin Grittner
Simon Riggs si...@2ndquadrant.com wrote:
 On 17 June 2015 at 13:52, Kevin Grittner kgri...@ymail.com wrote:
 Filipe Pina filipe.p...@impactzero.pt wrote:

 if drop the foreign key constraint on stuff_ext table there are
 no failures at all…

 It is my recollection that we were excluding the queries used to
 enforce referential integrity constraints from the conflict
 tracking, so I am surprised you are seeing this.  What is the exact
 version you are using (as reported by the version() function)?

 I don't see any mechanism for excluding anything from
 serializable checks, so I can't see how that would work.

It is a matter of where calls to PredicateLockXxx and
CheckForSerializableConflictXxx calls were inserted into, for
example, heap and index AM code.  At least I think we omitted
placing some at locations which were known to be used for RI
enforcement; but apparently some more generic code is exercised by
the RI trigger execution which can still trigger serialization
failures based on FKs.

 I can't find any mention of serializability concerns in the RI
 code itself.

It is mentioned in the README-SSI file.

 AFAIK it would be strange to exclude FK checks from
 serializability checks, since they represent a valid observation
 of an intermediate state.

The idea that this is OK is based on the observations in the paper
Automating the Detection of Snapshot Isolation Anomalies by
Sudhir Jorwekar, Alan Fekete, Krithi Ramamritham, and S.
Sudarshan[1].  To quote a key sentence from that paper:

| The database system ensures the preservation of some integrity
| constraints which are explicitly declared to the system in the
| schema definition, such as uniqueness of primary key and
| referential integrity.  Some of the SI anomalies are avoided due
| to the dbms enforcement of these constraints.

For all the gory details, please reference that paper.  It was
discussed during PostgreSQL SSI development, although I'm not sure
whether that made it to the lists or was off-list discussion amongst
those working on it at the time.  Basically, the anomalies are
avoided due to the enforcement of the constraints, and it is next
to impossible to generate a serialization failure instead of the
constraint failure due to the timings involved.  Since the
constraints are more narrowly targeted (always at the tuple level),
using SSI techniques would be redundant effort (hurting
performance) that could only generate false positives.  If you see
some exception to that which we missed, let's discuss.

Perhaps that paper should be cited in the source code and/or
README.

 Mat Views are excluded but I don't understand why that should be
 the case.  There is no documented explanation.

Good point; it should be documented.  Basically, since the matview
is a materialized copy of data from other relations from some prior
point in time, the race conditions caught by SSI would be trivial
compared to those likely to exist based on the elapsed time since
the last REFRESH; so it would be kind of silly to try to enforce
the more subtle interactions while ignoring the big, glaring,
obvious one.  It would be a bit like treating a laceration of
someone's hand when they were not breathing -- it's not the thing
to worry about.  As we enhance matviews to have associated
freshness information and especially once we use them like indexes
to optimize queries this will deserve a close look, as there is
likely to be something meaningful we can do at that time.

Anyway, it appears that there is more that could be done to avoid
generating serialization failures based on the actions of RI
enforcement triggers.  I don't think we can properly call it a bug,
since it doesn't allow an incorrect state to be created; but it is
clearly an opportunity for performance enhancement of the feature.
One of many, unfortunately.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


[1] http://www.vldb.org/conf/2007/papers/industrial/p1263-jorwekar.pdf




[GENERAL] Feature request: fsync and commit_delay options per database

2015-06-29 Thread Bráulio Bhavamitra
Hello all,

After reading
http://stackoverflow.com/questions/9407442/optimise-postgresql-for-fast-testing
I've tried to use commit_delay to make commits really slow in a test
environment. Unfortunately, the maximum value is 100ms (100_000
microseconds).

Besides increasing it, it would be great to have these two options
(fsync and commit_delay) per database, that is, valid only for
databases configured with them. That would greatly speed up test
running and still make the cluster available for other real
databases.

Is this feature or something similar planned?

cheers,
bráulio

-- 
"Fight for your ideology. Be one with your ideology. Live for your
ideology. Die for your ideology." P.R. Sarkar

EITA - Educação, Informação e Tecnologias para Autogestão
http://cirandas.net/brauliobo
http://eita.org.br

Paramapurusha is my father and Parama Prakriti is my mother. The universe is
my home and all of us are citizens of this cosmos. This universe is the
imagination of the Macrocosmic Mind, and all entities are being created,
preserved and destroyed in the phases of extroversion and introversion of the
cosmic imaginative flow. On the personal level, when a person imagines
something in their mind, at that moment that person is the sole owner of what
they imagine, and no one else. When a mentally created human being walks
through an equally imagined cornfield, the imagined person is not the property
of that cornfield, for it belongs to the individual who is imagining it. This
universe was created in the imagination of Brahma, the Supreme Entity, so the
ownership of this universe belongs to Brahma, and not to the microcosms that
were also created by Brahma's imagination. No property of this world, mutable
or immutable, belongs to any particular individual; everything is the common
patrimony of all.
Rest of the text at
http://cirandas.net/brauliobo/blog/a-problematica-de-hoje-em-dia




Re: [GENERAL] Feature request: fsync and commit_delay options per database

2015-06-29 Thread Bráulio Bhavamitra
On Mon, Jun 29, 2015 at 7:43 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 =?UTF-8?Q?Br=C3=A1ulio_Bhavamitra?= brauli...@gmail.com writes:
 Besides increasing it, it would be great to have these two options
 (fsync and commit_delay) per database, that is, valid only for
 databases configured with them. That would greatly speed up test
 running and still make the cluster available for other real
 databases.

 Is this feature or something similar planned?

 No.  Neither of them make any sense per-database.
Is there something else planned for in-memory databases?


 regards, tom lane





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Turn off streaming replication - leaving Master running

2015-06-29 Thread Andy Erskine
No snapshot available .. I don't mind running basebackup once I've finished
my test.

So if someone could help with the steps to turn off replication and bring
the secondary up as a standalone db, that would be great.
Thanks.

On 29 June 2015 at 15:58, John R Pierce pie...@hogranch.com wrote:

 On 6/28/2015 10:52 PM, Michael Paquier wrote:

 Well, yes. It depends on what you want to do then. If this testing
 really requires promoting the standby then you will need to take a
 fresh new base backup, knowing that you are using 9.2.


 if the standby is running on a file system with snapshotting (like zfs),
 and the master is doing WAL archiving, you could, in theory, pause the
 replication and snapshot the slave, do read/write tests on the slave, then
 restore that snapshot and resume replication, pulling from the WAL archive
 til it catches up.



 --
 john r pierce, recycling bits in santa cruz



 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general




-- 

Andy Erskine

JDS Australia
P 1300 780 432 | M 0431919301 | F 03 90124578
E: andy.ersk...@jds.net.au
W: www.jds.net.au
Level 8, 2 Russell Street, Melbourne, VIC 3000
GPO Box 4777, Melbourne VIC 3001


Re: [GENERAL] Turn off streaming replication - leaving Master running

2015-06-29 Thread Andy Erskine
Thanks Jeff,

I don't want a clone - I want to temporarily turn off replication (and
therefore failover) and load a different db into the secondary, which is now
writable, and run some tests. Then I will remove this db and run a
basebackup to reinstate a copy of the master and turn on replication again.

So:

shutdown secondary
change hba_conf so primary cannot connect
rename recovery.conf
start db
load new db
run tests

shutdown db
basebackup db from master
revert streaming settings
startdb
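
Roughly, as shell commands (a sketch only -- $PGDATA, the host name and
the replication user are made up, and the pg_hba.conf edit is left
implicit):

pg_ctl -D $PGDATA stop -m fast
cp $PGDATA/recovery.conf /tmp/recovery.conf.saved
rm $PGDATA/recovery.conf
pg_ctl -D $PGDATA start            # now a standalone, writable db
# ... load the test db, run the tests ...
pg_ctl -D $PGDATA stop -m fast
rm -rf $PGDATA
pg_basebackup -h master.example.com -U repuser -D $PGDATA -x
cp /tmp/recovery.conf.saved $PGDATA/recovery.conf
pg_ctl -D $PGDATA start            # streaming standby again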

thanks.

On 29 June 2015 at 17:12, John R Pierce pie...@hogranch.com wrote:

 On 6/29/2015 12:06 AM, Jeff Janes wrote:

 But since you want a clone, what is the point of first setting up
 streaming, and then breaking it?  Just use pg_basebackup to set up a clone
 directly, without ever having started streaming.  It seems like you are
 just going to confuse yourself about what is a standby meant for fail over,
 and what is a clone meant for testing.  With possibly disastrous
 consequences.



 VERY good point!




 --
 john r pierce, recycling bits in santa cruz



 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general




-- 
Andy Erskine, JDS Australia


Re: [GENERAL] create index on a field of udt

2015-06-29 Thread John R Pierce

On 6/28/2015 11:24 PM, Shujie Shang wrote:

insert into test values (generate_series(1, 300), (1, 'hi')::info);
explain select * from test where i.id = 1;
the result is: seqscan


doesn't every row of that match i.id = 1?

try ...

insert into test values (generate_series(1, 300), 
(generate_series(1, 300), 'hi')::info);

analyze test;
explain select * from test where getID(i) = 1;



--
john r pierce, recycling bits in santa cruz



Re: [GENERAL] create index on a field of udt

2015-06-29 Thread Charles Clavadetscher
+1

 

create index on test (((i).id));
ANALYZE
explain select * from test where (i).id = 8909;
                              QUERY PLAN
--------------------------------------------------------------------------
 Index Scan using test_id_idx on test  (cost=0.43..8.45 rows=1 width=34)
   Index Cond: ((i).id = 8909)
(2 rows)

 

 

From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Jeff Janes
Sent: Monday, 29 June 2015 08:42
To: Shujie Shang
Cc: John R Pierce; PostgreSQL mailing lists
Subject: Re: [GENERAL] create index on a field of udt

 

On Sun, Jun 28, 2015 at 10:31 PM, Shujie Shang ssh...@pivotal.io wrote:

Oh, I didn't explain my question well, actually I want to create an index on a
udt in a table.

e.g.
create type info as (id int, name text);
create table test (i info);
I want to run:
create index myindex on test (i.id)

It is a matter of finding the correct level of magic parentheses.

create index on test (((i).id));

The outer layer is always needed for creating indexes.  The middle layer is
needed because you are indexing an expression, not a column.  And the inner
layer is needed because, well, that is just how udt works.

Cheers,

Jeff



Re: [GENERAL] Turn off streaming replication - leaving Master running

2015-06-29 Thread Andy Erskine
Cheers Michael,

So I don't want to touch the file, as this will promote it to a master - I
would just like it as a standalone db for this instance - I need to load a
db and do some testing.

After which I will delete the data directory, run the basebackup cmd to
pull the db back from the master, and set up streaming again.

Just need to be certain of the right steps so I don't affect the Master in
any way.



On 29 June 2015 at 15:52, Michael Paquier michael.paqu...@gmail.com wrote:

 On Mon, Jun 29, 2015 at 2:42 PM, Andy Erskine andy.ersk...@jds.net.au
 wrote:
  I'd like to turn off the streaming on my postgres db (9.2). It is
  currently set up to stream from master to a single secondary.

  I want to shut down the secondary and turn it into another master and
  connect it to another application for testing - after which I want to
  revert it back to a streaming secondary.

 What kind of tests? If this is read-only activity you could simply cut
 the network connection between the master and the slave, or restart
 the slave after updating recovery.conf so that it is still a standby
 but has no primary_conninfo, so that it performs archive recovery,
 or at least a fake one.

  Is this possible while not touching the Master DB ? If so what are the
  correct steps pls ?

 Well, yes. It depends on what you want to do then. If this testing
 really requires promoting the standby then you will need to take a
 fresh new base backup, knowing that you are using 9.2.
 Regards,
 --
 Michael




-- 
Andy Erskine, JDS Australia


Re: [GENERAL] create index on a field of udt

2015-06-29 Thread Jeff Janes
On Sun, Jun 28, 2015 at 10:31 PM, Shujie Shang ssh...@pivotal.io wrote:

 Oh, I didn't explain my question well, actually I want to create an index
 on a udt in a table.

 e.g.
 create type info as (id int, name text);
 create table test (i info);
 I want to run:
 create index myindex on test (i.id)


It is a matter of finding the correct level of magic parentheses.

create index on test (((i).id));

The outer layer is always needed for creating indexes.  The middle layer
is needed because you are indexing an expression, not a column.  And the
inner layer is needed because, well, that is just how udt works.
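
(And for the planner to actually consider that index, the query has to
spell out the same expression, e.g.:)

explain select * from test where (i).id = 5;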

Cheers,

Jeff


Re: [GENERAL] Turn off streaming replication - leaving Master running

2015-06-29 Thread John R Pierce

On 6/29/2015 12:06 AM, Jeff Janes wrote:
But since you want a clone, what is the point of first setting up 
streaming, and then breaking it?  Just use pg_basebackup to set up a 
clone directly, without ever having started streaming.  It seems like 
you are just going to confuse yourself about what is a standby meant 
for fail over, and what is a clone meant for testing.  With possibly 
disastrous consequences.



VERY good point!



--
john r pierce, recycling bits in santa cruz



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Turn off streaming replication - leaving Master running

2015-06-29 Thread John R Pierce

On 6/28/2015 11:58 PM, Andy Erskine wrote:
If I touch my trigger file and promote my secondary to a master - what 
effect will that have on the Master - will I need to make any changes 
on that side? Will it still try to stream data across to the 
promoted secondary and just fill up the log files with error messages?


it's not 'push'; the slave pulls the data from the master.   As long as 
the master has WAL file archiving enabled (which is a separate but 
related thing to streaming), the slave will catch up... if the slave 
understands where the WAL archive is, then when it's woken back up after 
being restored to how it was before your testing, it will query the 
master, find out that it is way behind the master's timeline, and consult 
the WAL archive, fetching as many segments as are needed to catch up to 
the server's current timeline, then resume streaming ...
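
for concreteness, a minimal recovery.conf sketch for such a standby -- 
the conninfo and archive path are made up:

standby_mode = 'on'
primary_conninfo = 'host=master.example.com user=repuser'
# lets the standby fetch archived segments it missed while detached
restore_command = 'cp /mnt/wal_archive/%f %p'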




--
john r pierce, recycling bits in santa cruz



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] create index on a field of udt

2015-06-29 Thread Shujie Shang
Hi,
I found a way to create the index: I create a function that returns the 'id'
field of the udt info, then I create an index based on this function.
e.g.

create type info as (id int, name text);

create table test (id int, i info);

create or replace function getID(i info) returns int as
$$ select $1.id $$
language sql immutable;

create index infoindex on test (getID(i));

I want to use this index, but after I insert lots of data into the table
'test' and run 'select * from test where i.id=5', it still uses a 'seqscan',
not an 'index scan'. How can I verify the index is built correctly?

e.g.
insert into test values (generate_series(1, 300), (1, 'hi')::info);
explain select * from test where i.id=1;
the result is: seqscan
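
(One way to check -- a sketch: the WHERE clause has to use the indexed
expression rather than i.id, the table needs an ANALYZE, and the data
has to be selective enough for an index scan to win:)

analyze test;
explain select * from test where getID(i) = 1;
\d test    -- in psql, the Indexes: section should list infoindex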

On Mon, Jun 29, 2015 at 1:57 PM, Charles Clavadetscher 
clavadetsc...@swisspug.org wrote:

 Hello

 I am not sure it is that simple. Probably you need to create operator
 classes to be used for indexing.

 http://www.postgresql.org/docs/9.4/static/xtypes.html

 You are probably better off using the basic data type in your table and
 using a composite index.

 Bye
 Charles





 From: pgsql-general-ow...@postgresql.org
 [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of John R Pierce
 Sent: Monday, 29 June 2015 07:51
 To: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] create index on a field of udt



 On 6/28/2015 10:31 PM, Shujie Shang wrote:

 Oh, I didn't explain my question well, actually I want to create an index
 on a udt in a table.

 e.g.
 create type info as (id int, name text);
 create table test (i info);
 I want to run:
 create index myindex on test (i.id)


 create table test of info (primary key (id));

 or, if you want to use your type plus other stuff in the table, I believe
 it's something like...

 create table test (i info, stuff...) primary key (i.id)
 or
 create index test(i.id);


 watch out for ambiguity if the type names match the table or field name.
 see http://www.postgresql.org/docs/current/static/rowtypes.html#AEN7836


 --

 john r pierce, recycling bits in santa cruz




Re: [GENERAL] Turn off streaming replication - leaving Master running

2015-06-29 Thread Andy Erskine
If I touch my trigger file and promote my secondary to a master - what
effect will that have on the Master - will I need to make any changes on
that side? Will it still try to stream data across to the promoted
secondary and just fill up the log files with error messages?

On 29 June 2015 at 16:50, Michael Paquier michael.paqu...@gmail.com wrote:



 On Mon, Jun 29, 2015 at 3:34 PM, Andy Erskine andy.ersk...@jds.net.au
 wrote:

 no snapshot available .. I don't mind running basebackup once I've
 finished my test.

 So if someone could help with the steps to turn off replication and bring
 the secondary up as a standalone db, that would be great.


 How to promote a node: pg_ctl promote or use a trigger_file:

 http://www.postgresql.org/docs/devel/static/warm-standby.html#STANDBY-SERVER-OPERATION
 Temporarily turning off replication has little meaning if you intend to
 bring back a new standby afterwards.
 My 2c.
 --
 Michael




-- 
Andy Erskine, JDS Australia


Re: [GENERAL] Turn off streaming replication - leaving Master running

2015-06-29 Thread Jeff Janes
On Sun, Jun 28, 2015 at 11:34 PM, Andy Erskine andy.ersk...@jds.net.au
wrote:

 no snapshot available .. i don't mind running basebackup once i've
 finished my test.

 So if someone could help with the steps to turn off replication and bring
 secondary up as a standalone db that would be great
 thanks.


If people are used to connecting to the standby in order to run production
queries (to take load off the master), then make sure they know they
can't do that anymore.  Configure the network to block them, change
pg_hba.conf, etc.

Make sure that it is not in archive mode, or at least that it isn't
archiving to the same directory as the master is.

Shut it down, remove (or rename) the recovery.conf file, and start it back
up again.  Instead of doing this you could use the trigger file (configured
in recovery.conf) or pg_ctl promote, but I wouldn't do that.  You don't
want to make it look like you have promoted it to master, when that is not
what you have done.

But since you want a clone, what is the point of first setting up
streaming, and then breaking it?  Just use pg_basebackup to set up a clone
directly, without ever having started streaming.  It seems like you are
just going to confuse yourself about what is a standby meant for fail over,
and what is a clone meant for testing.  With possibly disastrous
consequences.

Cheers,

Jeff


Re: [GENERAL] Turn off streaming replication - leaving Master running

2015-06-29 Thread Michael Paquier
On Mon, Jun 29, 2015 at 3:34 PM, Andy Erskine andy.ersk...@jds.net.au
wrote:

 no snapshot available .. i don't mind running basebackup once i've
 finished my test.

 So if someone could help with the steps to turn off replication and bring
 secondary up as a standalone db that would be great.


How to promote a node: pg_ctl promote or use a trigger_file:
http://www.postgresql.org/docs/devel/static/warm-standby.html#STANDBY-SERVER-OPERATION
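
A sketch, with a made-up path. In recovery.conf on the standby:

trigger_file = '/tmp/promote_me'

then, to promote:

touch /tmp/promote_me
# or, equivalently:
pg_ctl -D /var/lib/postgresql/9.2/main promote
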
Temporarily turning off replication has little meaning if you intend to
bring back a new standby afterwards.
My 2c.
-- 
Michael


Re: [GENERAL] pg_xlog on a hot_stanby slave

2015-06-29 Thread Xavier 12



On 19/06/2015 03:31, Sameer Kumar wrote:


On Thu, 18 Jun 2015 15:17 Xavier 12 mania...@gmail.com wrote:


On 18/06/2015 04:00, Sameer Kumar wrote:



On Wed, 17 Jun 2015 15:24 Xavier 12 mania...@gmail.com wrote:


On 17/06/2015 03:17, Sameer Kumar wrote:


On Tue, 16 Jun 2015 16:55 Xavier 12 mania...@gmail.com wrote:

Hi everyone,

Questions about pg_xlogs again...
I have two Postgresql 9.1 servers in a
master/slave stream replication
(hot_standby).

Psql01 (master) is backed up with Barman, and
its pg_xlog is correctly
purged (archive_command is used).

However, Psql02 (slave) has a huge pg_xlog (951
files, 15G for 7 days only; it keeps growing
until disk space is full). I have found
documentation and tutorials, and mailing
lists, but I don't know what is suitable
for a slave. Leads I've found:

- checkpoints
- archive_command
- archive_cleanup

Master postgresq.conf :

[...]
wal_level = 'hot_standby'
archive_mode = on
archive_command = 'rsync -az
/var/lib/postgresql/9.1/main/pg_xlog/%f
bar...@nas.lan:/data/pgbarman/psql01/incoming/%f'
max_wal_senders = 5
wal_keep_segments = 64

What's this parameter's value on Slave?

Hm... You have a point.
That autovacuum parameter seems to be useless on a slave.
I'll try to remove it and check pg_xlog.



That was not my point. I was actually asking about
wal_keep_segments. Never mind, I found that I had missed
the info (found it below. Please see my response).
Besides, I try to keep my master and standby config as
similar as possible (so my advice is to not switch off
autovacuum). The parameters which are ineffective on the
slave won't have an effect anyway. Same goes for
parameters on the master. This helps me when I swap roles
or do a failover: I have fewer parameters to worry about.


Okay


Can you check pg_log for log files? They may have some info.
I am sorry if you have already provided that info (after I
finish I will try to look at your previous emails on this thread)


Nothing...
/var/log/postgresql/postgresql-2015-06-17_31.log is empty
(except old messages at the beginning related to a configuration
issue - which is now solved - after rebuilding the cluster yesterday).
/var/log/syslog has nothing but these:

Jun 18 09:10:11 Bdd02 postgres[28400]: [2-1] 2015-06-18 09:10:11
CEST LOG:  paquet de démarrage incomplet
Jun 18 09:10:41 Bdd02 postgres[28523]: [2-1] 2015-06-18 09:10:41
CEST LOG:  paquet de démarrage incomplet
Jun 18 09:11:11 Bdd02 postgres[28557]: [2-1] 2015-06-18 09:11:11
CEST LOG:  paquet de démarrage incomplet
Jun 18 09:11:41 Bdd02 postgres[28652]: [2-1] 2015-06-18 09:11:41
CEST LOG:  paquet de démarrage incomplet
Jun 18 09:12:11 Bdd02 postgres[28752]: [2-1] 2015-06-18 09:12:11
CEST LOG:  paquet de démarrage incomplet
Jun 18 09:12:41 Bdd02 postgres[28862]: [2-1] 2015-06-18 09:12:41
CEST LOG:  paquet de démarrage incomplet
Jun 18 09:13:11 Bdd02 postgres[28891]: [2-1] 2015-06-18 09:13:11
CEST LOG:  paquet de démarrage incomplet
Jun 18 09:13:40 Bdd02 postgres[28987]: [2-1] 2015-06-18 09:13:40
CEST LOG:  paquet de démarrage incomplet

These messages ("incomplete startup packet") are related to Zabbix
(psql port check).


You sure these are the only messages you have in the log files?


Also can you share the vacuum cost parameters in your environment?



I don't understand that part... is this in postgresql.conf ?

There are vacuum cost parameters in postgresql.conf

http://www.postgresql.org/docs/9.4/static/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-VACUUM-COST



autovacuum = on

Slave postgresql.conf :

[...]
wal_level = minimal
wal_keep_segments = 32



Sorry I missed this somehow earlier. Any reason why you think
you need to retain 32 wal files on slave?


No but I get the feeling that the parameter is ignored by my
slave... should I try another value ?




AFAIK you don't need this parameter set to > 0 unless you have a
cascaded replica pulling WAL

Re: [GENERAL] Turn off streaming replication - leaving Master running

2015-06-29 Thread Tomas Vondra

Hi,

On 06/29/2015 09:27 AM, Andy Erskine wrote:

Thanks Jeff,

I don't want a clone - I want to temporarily turn off replication
(and therefore failover) and load a different db into the secondary,
which is now writable, and run some tests. Then I will remove this db
and run a basebackup to reinstate a copy of the master and turn on
replication again.


So you want a replica and you don't want a replica at the same time?

Is there any reason why you simply don't want to either create a new 
snapshot using pg_basebackup, or just shut down the standby, 
create a copy of the data directory, remove the recovery.conf and start 
it again as a standalone database?


regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training  Services


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Turn off streaming replication - leaving Master running

2015-06-29 Thread Andy Erskine
Agreed, there is an element of risk.

However, a backup of the master will be carried out prior (OK, there could
potentially be a gap in the data during downtime).

Unfortunately I have nothing else big enough (disk-wise) to run my tests on.

On 30 June 2015 at 15:47, Michael Paquier michael.paqu...@gmail.com wrote:



 On Tue, Jun 30, 2015 at 2:29 PM, Andy Erskine andy.ersk...@jds.net.au
 wrote:

 Ok, so a question I should have asked at the very beginning:

 If I touch my trigger file - promoting the secondary to a master - will
 that in any way affect the master that's already running?
 I.e., no files on the master will change?


 Streaming replication is designed to let the master run properly when a
 standby disconnects.


 Then all I'll have to do is shut down the secondary when I've finished and
 run the basebackup process again to restore replication (and rename
 recovery.done -> recovery.conf again).


 Yes.

 Now something that has been mentioned by the others: isn't your standby
 here for a reason? Like, in case of failure, don't you have a process to
 fail over automatically? Perhaps you are shooting yourself in the foot by
 unplugging this standby, hence you should, as others recommend as well,
 simply leave the existing standby alone and create a new instance by
 taking a new base backup from either the master or the standby and use it
 for your tests. Then eliminate the node you created. In short: avoid doing
 stupid things...
 --
 Michael




-- 
Andy Erskine, JDS Australia


Re: [GENERAL] Turn off streaming replication - leaving Master running

2015-06-29 Thread Andy Erskine
No, I don't want a replica.

I would like to reconfigure my streaming scenario into two standalone DBs.
I don't want the Master to be affected in any way, and I want it running
consistently. The secondary I want to reconfigure as a standalone, to load
a different db and do some testing.

When I've finished testing, I want to reconfigure for streaming again.

On 30 June 2015 at 12:37, Tomas Vondra tomas.von...@2ndquadrant.com wrote:

 Hi,

 On 06/29/2015 09:27 AM, Andy Erskine wrote:

 Thanks Jeff,

 I don't want a clone - I want to temporarily turn off replication
 (and therefore failover) and load a different db into the secondary,
 which is now writable, and run some tests. Then I will remove this db
 and run a basebackup to reinstate a copy of the master and turn on
 replication again.


 So you want a replica and you don't want a replica at the same time?

 Is there any reason why you simply don't want to either create a new
 snapshot using pg_basebackup, or just shut down the standby, create
 a copy of the data directory, remove the recovery.conf and start it again
 as a standalone database?

 regards

 --
 Tomas Vondra  http://www.2ndQuadrant.com
 PostgreSQL Development, 24x7 Support, Remote DBA, Training  Services



 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general




-- 
Andy Erskine, JDS Australia


Re: [GENERAL] Turn off streaming replication - leaving Master running

2015-06-29 Thread Michael Paquier
On Tue, Jun 30, 2015 at 1:39 PM, Andy Erskine andy.ersk...@jds.net.au
wrote:

 No, I don't want a replica.

 I would like to reconfigure my streaming scenario into two standalone DBs.
 I don't want the Master to be affected in any way, and I want it running
 consistently. The secondary I want to reconfigure as a standalone, to load
 a different db and do some testing.

When i've finished testing - i want to reconfigure for streaming again.


Well, in this case:
1) promote your standby
2) Run your tests on it.
3) Recreate a new standby
The documentation online, as well as the PostgreSQL wiki, have all the
information to help you achieve those steps.
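
As a rough command sketch (data directory, host and user are made up):

pg_ctl -D $PGDATA promote                    # 1) standby becomes writable
# 2) run the tests against it
pg_ctl -D $PGDATA stop -m fast               # 3) then rebuild as a standby
rm -rf $PGDATA
pg_basebackup -h master.example.com -U repuser -D $PGDATA -x
# restore a recovery.conf with primary_conninfo, then start it again
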
Regards,
-- 
Michael


Re: [GENERAL] Turn off streaming replication - leaving Master running

2015-06-29 Thread Andy Erskine
Ok, so a question I should have asked at the very beginning:

If I touch my trigger file - promoting the secondary to a master - will
that in any way affect the master that's already running?

I.e., no files on the master will change?

Then all I'll have to do is shut down the secondary when I've finished and
run the basebackup process again to restore replication (and rename
recovery.done -> recovery.conf again).

thanks.

On 30 June 2015 at 15:22, Michael Paquier michael.paqu...@gmail.com wrote:



 On Tue, Jun 30, 2015 at 1:39 PM, Andy Erskine andy.ersk...@jds.net.au
 wrote:

 No, I don't want a replica.

 I would like to reconfigure my streaming scenario into two standalone
 DBs. I don't want the Master to be affected in any way, and I want it
 running consistently. The secondary I want to reconfigure as a standalone,
 to load a different db and do some testing.

 When i've finished testing - i want to reconfigure for streaming again.


 Well, in this case:
 1) promote your standby
 2) Run your tests on it.
 3) Recreate a new standby
 The documentation online, as well as the PostgreSQL wiki have all the
 documentation to help you achieve those steps.
 Regards,
 --
 Michael




-- 
Andy Erskine, JDS Australia


Re: [GENERAL] Turn off streaming replication - leaving Master running

2015-06-29 Thread Michael Paquier
On Tue, Jun 30, 2015 at 2:29 PM, Andy Erskine andy.ersk...@jds.net.au
wrote:

 Ok, so a question I should have asked at the very beginning:

 If I touch my trigger file - promoting the secondary to a master - will
 that in any way affect the master that's already running?
 I.e., no files on the master will change?


Streaming replication is designed to let the master run properly when a
standby disconnects.


 Then all I'll have to do is shut down the secondary when I've finished and
 run the basebackup process again to restore replication (and rename
 recovery.done -> recovery.conf again).


Yes.

Now something that has been mentioned by the others: isn't your standby
here for a reason? Like, in case of failure, don't you have a process to
fail over automatically? Perhaps you are shooting yourself in the foot by
unplugging this standby, hence you should, as others recommend as well,
simply leave the existing standby alone and create a new instance by taking
a new base backup from either the master or the standby and use it for your
tests. Then eliminate the node you created. In short: avoid doing stupid
things...
-- 
Michael