Re: [GENERAL] Failure loading materialized view with pg_restore

2015-02-18 Thread BladeOfLight16
On Wed, Feb 18, 2015 at 5:48 AM, Brian Sutherland br...@vanguardistas.net
wrote:

 # dump and reload
 pg_dump --username super --format c -f dump.dump orig
 createdb copied


It might be helpful to dump in the plain SQL format and look at what it's
doing.


[GENERAL] Failure loading materialized view with pg_restore

2015-02-18 Thread Brian Sutherland
Hi,

If I run this set of commands against PostgreSQL 9.4.1, pg_restore
throws an error about a permission problem. Why it does so is a mystery
to me, given that the user performing the restore is a superuser:

# superuser creates database and materialized view
createuser -s super
createdb --username super orig
psql --username super -c "select 'USING:' || version();" orig
psql --username super -c 'CREATE TABLE x (y int);' orig
psql --username super -c 'CREATE MATERIALIZED VIEW myview AS select * from x' orig

# change the owner of the view to nobody
createuser -S nobody
psql --username super -c 'ALTER TABLE myview OWNER TO nobody;' orig

# dump and reload
pg_dump --username super --format c -f dump.dump orig
createdb copied

# pg_restore errors
pg_restore --username super -d copied dump.dump

The error I get is:

pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 2260; 0 16569 MATERIALIZED VIEW DATA myview nobody
pg_restore: [archiver (db)] could not execute query: ERROR:  permission denied for relation x
Command was: REFRESH MATERIALIZED VIEW myview;

In pg_hba I am using the trust method for everything (this is a test
cluster).

Is this expected behaviour or a bug?

-- 
Brian Sutherland


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] postgresql93-9.3.5: deadlock when updating parent table expected?

2015-02-18 Thread Bill Moran
On Wed, 18 Feb 2015 18:30:09 +
Dmitry O Litvintsev litvi...@fnal.gov wrote:
 
 Yes, the deadlock occurs when there are multiple processes inserting
 concurrently into the file table with the same volume id field.
 I used "sometimes" as opposed to "all the time".

I responded that way since I frequently hear people complaining,
"we're seeing all kinds of deadlocks", as if the whole world is
collapsing under the deadlocks, but when pressed for more information
it turns out they're only seeing a few deadlocks per hour when the
system is under the heaviest load -- that scenario is hardly
unexpected. As a result, having more detailed information than
just "sometimes" helps to understand what's really going on.

 I think you advise to retry the transaction or add a select for update prior
 to the insert. I will pursue this (together with the upgrade to 9.3.6 suggested by
 Alvaro).

The nice thing about a retry strategy is that it always works.
The problem with a retry strategy is that it's easy to do wrong
(i.e. it may be more than just the transaction that needs to
restart ... depending on what data has changed, calculations may
need to be redone, the user requeried for certain information,
etc).
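Bill's point about retrying the whole unit of work (not just the last statement) can be sketched as follows. This is a hypothetical Python wrapper, with a made-up `Deadlock` exception standing in for whatever error the actual database driver raises:

```python
class Deadlock(Exception):
    """Stand-in for a driver's deadlock/serialization error."""

def run_with_retry(unit_of_work, max_attempts=3):
    """Re-run the ENTIRE unit of work on deadlock, not just the last statement."""
    for attempt in range(1, max_attempts + 1):
        try:
            return unit_of_work()
        except Deadlock:
            if attempt == max_attempts:
                raise
            # On retry, unit_of_work() recomputes everything from scratch,
            # so calculations based on stale reads are redone too.

# Example: a unit of work that deadlocks once, then succeeds.
attempts = []
def work():
    attempts.append(1)
    if len(attempts) < 2:
        raise Deadlock()
    return "committed"

result = run_with_retry(work)  # -> "committed", after 2 attempts
```

The key design point, as Bill notes, is that the retried callable must encapsulate everything that depends on data read inside the transaction, otherwise the retry silently reuses stale results.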

The problem with the SELECT ... FOR UPDATE is that it's a bit
slower, and can be significantly slower under some circumstances,
but it's easier to implement correctly.

The good news from Alvaro is that this is probably happening more
frequently than necessary because of the bug he mentioned ... so
upgrading may cause the problem to happen infrequently enough that
you don't really care about it. The solutions I suggest are still
relevant, they just might not be as immediately important.

-- 
Bill Moran




Re: [GENERAL] Starting new cluster from base backup

2015-02-18 Thread Adrian Klaver

On 02/18/2015 10:24 AM, Guillaume Drolet wrote:



2015-02-18 11:06 GMT-05:00 Adrian Klaver adrian.kla...@aklaver.com:





So is E:\ a network drive shared by both machines?


No, E:\ is a local drive on which I created a tablespace, in order to
have enough space for my database. In my current setup on the source
machine, PGDATA is in the default PGSQL installation on the OS disk so
space is limited. On the destination machine, PGDATA will be on a
different, larger disk than the OS disk.


So is there an E:\ drive available on the destination machine?





Anyway, in the end I want to move the database that's in that
tablespace
back to pg_default. I see two possibilities:

1) Moving it now, before taking the base backup, using ALTER
DATABASE
mydb SET TABLESPACE pg_default; Then I assume I should be able
to use -X
stream and plain format with pg_basebackup.

Or

2) Delete the symbolic link in data/pg_tblspc, use pg_basebackup
with -X
stream and plain format, copy the tablespace from the source to the
destination machine. Create a new symbolic link in
data/pg_tblspc on the
new machine and point it to the copied tablespace.

Are these two approaches feasible?


I would say 1 would be more feasible than 2. If you use 2, delete
the symlink and do the backup, what happens with any dependencies
between objects in the default tablespace and the one you cut out?
Also the pg_basebackup will be taking a backup of one part of the
cluster at one point in time and the copy of the remote tablespace
will possibly be at another point in time. I do not see that ending well.


You're probably right about that. My understanding was that, since this
is a single-user database (at least for now) on my machine, if I wasn't
performing any query or task during the backup, then the problem you
mentioned would in fact not be a problem.


Except Postgres performs tasks behind the scenes, so changes are 
happening. There is also still the dependency issue.


 Thanks.




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Starting new cluster from base backup

2015-02-18 Thread Guillaume Drolet
2015-02-18 13:40 GMT-05:00 Adrian Klaver adrian.kla...@aklaver.com:

 On 02/18/2015 10:24 AM, Guillaume Drolet wrote:



 2015-02-18 11:06 GMT-05:00 Adrian Klaver adrian.kla...@aklaver.com:



 So is E:\ a network drive shared by both machines?


 No, E:\ is a local drive on which I created a tablespace, in order to
 have enough space for my database. In my current setup on the source
 machine, PGDATA is in the default PGSQL installation on the OS disk so
 space is limited. On the destination machine, PGDATA will be on a
 different, larger disk than the OS disk.


 So is there an E:\ drive available on the destination machine?


Yes there is an E:\ drive available on the destination machine. But for
now, these two machines don't communicate. I take the backup on a hot swap
disk (F:\) and then swap it into the destination machine. Ultimately when
my database will be running on the destination machine, I'll connect to it
from other machines in my local network.






 Anyway, in the end I want to move the database that's in that
 tablespace
 back to pg_default. I see two possibilities:

 1) Moving it now, before taking the base backup, using ALTER
 DATABASE
 mydb SET TABLESPACE pg_default; Then I assume I should be able
 to use -X
 stream and plain format with pg_basebackup.

 Or

 2) Delete the symbolic link in data/pg_tblspc, use pg_basebackup
 with -X
 stream and plain format, copy the tablespace from the source to
 the
 destination machine. Create a new symbolic link in
 data/pg_tblspc on the
 new machine and point it to the copied tablespace.

 Are these two approaches feasible?


 I would say 1 would be more feasible than 2. If you use 2, delete
 the symlink and do the backup, what happens with any dependencies
 between objects in the default tablespace and the one you cut out?
 Also the pg_basebackup will be taking a backup of one part of the
 cluster at one point in time and the copy of the remote tablespace
 will possibly be at another point in time. I do not see that ending
 well.


 You're probably right about that. My understanding was that, since this
 is a single-user database (at least for now) on my machine, if I wasn't
 performing any query or task during the backup, then the problem you
 mentioned would in fact not be a problem.


 Except Postgres performs tasks behind the scenes, so changes are
 happening. There is also still the dependency issue.


Can't the dependency issue be fixed by creating a new junction in
data/pg_tblspc that would point to the relocated tablespace?


  Thanks.



 --
 Adrian Klaver
 adrian.kla...@aklaver.com



Re: [GENERAL] Starting new cluster from base backup

2015-02-18 Thread Guillaume Drolet
2015-02-18 11:06 GMT-05:00 Adrian Klaver adrian.kla...@aklaver.com:

 On 02/18/2015 04:26 AM, Guillaume Drolet wrote:



 2015-02-17 17:14 GMT-05:00 Adrian Klaver adrian.kla...@aklaver.com:


 On 02/17/2015 06:54 AM, Guillaume Drolet wrote:

 Adrian: thanks for this information.

 I tried running pg_basebackup in plain format with option -X
 stream
 (pg_basebackup -D F:\208376PT\db -X stream -l
 208376PT17022015 -U
 postgres -P) but I got the message:

 pg_basebackup: directory E:\Data\Database exists but is not
 empty

 I created a tablespace using CREATE TABLESPACE at the location
 mentioned
 in the message. According to what I read online about this, this
 message
 is issued when a tablespace was created under PGDATA. In my
 case, only
 the directory junction pointing to my tablespace (on a different
 drive
 than PGDATA) exists under PGDATA, not the tablespace itself.

 The only way I can run pg_basebackup with WAL files is with
 option -Ft
 and -X fetch. I'd much prefer using plain mode since my 670 GB
 tablespace takes a lot of time to extract when tarred. Is there
 another
 way to approach this?


 All I can come up with at the moment

 So what is the path on the original machine and can it be replicated
 on the new machine, at least temporarily?


 The path on the original (i.e. source) machine is:
 E:\Data\Database\PG_9.3_201306121\..


 I'm thinking if the path can be replicated, let pg_basebackup write
 to it and then create the tablespace you want and do ALTER TABLE SET
 TABLESPACE to move the tables. You would also need to do this for
 indexes.


 Not sure I understand when you say "let pg_basebackup write to it". This
 tablespace already exists on the source machine so cannot be written
 over. It needs to be written in the backup so that I can then recreate
 it on the destination machine.


 So is E:\ a network drive shared by both machines?


No, E:\ is a local drive on which I created a tablespace, in order to have
enough space for my database. In my current setup on the source machine,
PGDATA is in the default PGSQL installation on the OS disk so space is
limited. On the destination machine, PGDATA will be on a different, larger
disk than the OS disk.



 Anyway, in the end I want to move the database that's in that tablespace
 back to pg_default. I see two possibilities:

 1) Moving it now, before taking the base backup, using ALTER DATABASE
 mydb SET TABLESPACE pg_default; Then I assume I should be able to use -X
 stream and plain format with pg_basebackup.

 Or

 2) Delete the symbolic link in data/pg_tblspc, use pg_basebackup with -X
 stream and plain format, copy the tablespace from the source to the
 destination machine. Create a new symbolic link in data/pg_tblspc on the
 new machine and point it to the copied tablespace.

 Are these two approaches feasible?


 I would say 1 would be more feasible than 2. If you use 2, delete the
 symlink and do the backup, what happens with any dependencies between
 objects in the default tablespace and the one you cut out? Also the
 pg_basebackup will be taking a backup of one part of the cluster at one
 point in time and the copy of the remote tablespace will possibly be at
 another point in time. I do not see that ending well.


You're probably right about that. My understanding was that, since this is
a single-user database (at least for now) on my machine, if I wasn't
performing any query or task during the backup, then the problem you
mentioned would in fact not be a problem.







 Thanks.




 --
 Adrian Klaver
 adrian.kla...@aklaver.com




 --
 Adrian Klaver
 adrian.kla...@aklaver.com



Re: [GENERAL] postgresql93-9.3.5: deadlock when updating parent table expected?

2015-02-18 Thread Dmitry O Litvintsev
Thank you, Bill, 

Yes, the deadlock occurs when there are multiple processes inserting
concurrently into the file table with the same volume id field.
I used "sometimes" as opposed to "all the time".

I think you advise to retry the transaction or add a select for update prior
to the insert. I will pursue this (together with the upgrade to 9.3.6 suggested by
Alvaro).

Thanks,
Dmitry




Re: [GENERAL] postgresql93-9.3.5: deadlock when updating parent table expected?

2015-02-18 Thread dinesh kumar
Hi,

If you feel FOR UPDATE is taking too much time, then I believe we can solve
this kind of issue using advisory locks:
http://www.postgresql.org/docs/9.1/static/explicit-locking.html
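To make the advisory-lock idea concrete, here is a rough in-process analogue in Python (no database involved; the per-key lock plays the role of `pg_advisory_xact_lock(volume_id)`, serializing all writers for a given volume id so their parent-row updates can never interleave into a deadlock). The names here are illustrative, not from the original thread:

```python
import threading

# Hypothetical analogue of pg_advisory_xact_lock(key): one shared lock
# per volume id, so all writers for that volume id run one at a time.
_locks = {}

def with_volume_lock(volume_id, fn):
    # dict.setdefault is atomic in CPython, so every thread that asks for
    # the same volume_id gets the very same Lock object.
    lock = _locks.setdefault(volume_id, threading.Lock())
    with lock:
        return fn()

counter = {"n": 0}

def insert_file_row():
    # Stands in for: INSERT INTO file (...) for one volume id, plus the
    # parent-table update that was deadlocking in the original report.
    counter["n"] += 1

threads = [threading.Thread(target=with_volume_lock, args=(42, insert_file_row))
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

In PostgreSQL itself the equivalent would be taking `pg_advisory_xact_lock(volume_id)` at the start of each inserting transaction; the lock is released automatically at commit or rollback.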

Regards,
Dinesh
manojadinesh.blogspot.com


On Wed, Feb 18, 2015 at 10:45 AM, Bill Moran wmo...@potentialtech.com
wrote:

 On Wed, 18 Feb 2015 18:30:09 +
 Dmitry O Litvintsev litvi...@fnal.gov wrote:
 
  Yes, the deadlock occurs when there are multiple processes inserting
  concurrently into the file table with the same volume id field.
  I used "sometimes" as opposed to "all the time".

 I responded that way since I frequently hear people complaining,
 "we're seeing all kinds of deadlocks", as if the whole world is
 collapsing under the deadlocks, but when pressed for more information
 it turns out they're only seeing a few deadlocks per hour when the
 system is under the heaviest load -- that scenario is hardly
 unexpected. As a result, having more detailed information than
 just "sometimes" helps to understand what's really going on.

  I think you advise to retry the transaction or add a select for update prior
  to the insert. I will pursue this (together with the upgrade to 9.3.6 suggested
  by Alvaro).

 The nice thing about a retry strategy is that it always works.
 The problem with a retry strategy is that it's easy to do wrong
 (i.e. it may be more than just the transaction that needs to
 restart ... depending on what data has changed, calculations may
 need to be redone, the user requeried for certain information,
 etc).

 The problem with the SELECT ... FOR UPDATE is that it's a bit
 slower, and can be significantly slower under some circumstances,
 but it's easier to implement correctly.

 The good news from Alvaro is that this is probably happening more
 frequently than necessary because of the bug he mentioned ... so
 upgrading may cause the problem to happen infrequently enough that
 you don't really care about it. The solutions I suggest are still
 relevant, they just might not be as immediately important.

 --
 Bill Moran





Re: [GENERAL] Starting new cluster from base backup

2015-02-18 Thread Guillaume Drolet
2015-02-18 16:11 GMT-05:00 Adrian Klaver adrian.kla...@aklaver.com:

 On 02/18/2015 11:51 AM, Guillaume Drolet wrote:



 2015-02-18 13:40 GMT-05:00 Adrian Klaver adrian.kla...@aklaver.com:

 On 02/18/2015 10:24 AM, Guillaume Drolet wrote:



 2015-02-18 11:06 GMT-05:00 Adrian Klaver adrian.kla...@aklaver.com:



  So is E:\ a network drive shared by both machines?


 No, E:\ is a local drive on which I created a tablespace, in
 order to
 have enough space for my database. In my current setup on the
 source
 machine, PGDATA is in the default PGSQL installation on the OS
 disk so
 space is limited. On the destination machine, PGDATA will be on a
 different, larger disk than the OS disk.


 So is there an E:\ drive available on the destination machine?


 Yes there is an E:\ drive available on the destination machine. But for
 now, these two machines don't communicate. I take the backup on a hot
 swap disk (F:\) and then swap it into the destination machine.
 Ultimately when my database will be running on the destination machine,
 I'll connect to it from other machines in my local network.


 So if I understand correctly you have:

 1) On source machine a directory E:\Data\Database.
 2) On the source machine in Postgres you have a created a tablespace that
 points at E:\Data\Database.
 3) On destination machine you have an E:\ drive also.

 You're correct


 Then have you tried:

 1) Create \Data\Database directory under E:\ on the destination machine.

2) Do the pg_basebackup.


I'm not sure I understand why, at this moment in the sequence of operations,
I would create \Data\Database under E:\ on the destination machine.
pg_basebackup, when run on the source DB on the source machine, has no idea
about the destination machine. Maybe you're confused with the F:\ drive,
which is the drive on which I tried to save my base backup with the command:

pg_basebackup -D F:\208376PT\db -X stream -l 208376PT17022015 -U
postgres -P

This drive (F:\) is not the destination machine, it's a swappable disk I
use to move my base backup from one machine (the source) to another (the
destination).




 Can't the dependency issue be fixed by creating a new junction in
 data/pg_tblspc that would point to the relocated tablespace?


 The docs say you can:

 http://www.postgresql.org/docs/9.3/static/manage-ag-tablespaces.html

 The directory $PGDATA/pg_tblspc contains symbolic links that point to
 each of the non-built-in tablespaces defined in the cluster. Although not
 recommended, it is possible to adjust the tablespace layout by hand by
 redefining these links. Under no circumstances perform this operation while
 the server is running. Note that in PostgreSQL 9.1 and earlier you will
 also need to update the pg_tablespace catalog with the new locations. (If
 you do not, pg_dump will continue to output the old tablespace locations.)


 I have not done it and I see the "Although not recommended..." part above,
 so I would say that is a last-resort solution.


I confirm this method works. I've done it in the past using the steps in
this blog and its comments:

http://www.databasesoup.com/2013/11/moving-tablespaces.html



   Thanks.



 --
 Adrian Klaver
 adrian.kla...@aklaver.com



Re: [GENERAL] Fwd: Data corruption after restarting replica

2015-02-18 Thread Adrian Klaver

On 02/16/2015 02:44 AM, Novák, Petr wrote:

Hello,

sorry for posting to a second list, but as I've received no reply
there, I'm trying my luck here.

Thanks
Petr


-- Forwarded message --
From: Novák, Petr nov...@avast.com
Date: Tue, Feb 10, 2015 at 12:49 PM
Subject: Data corruption after restarting replica
To: pgsql-b...@postgresql.org


Hi all,

we're experiencing data corruption after switching streamed replica to primary.
This is not the first time I've encountered this issue, so I'll try to
describe it in more detail.

For this particular cluster we have 6 servers in two datacenters (3 in
each). There are two instances running on each server, each with its
own port and datadir. On the first two servers in each datacenter one
instance is primary and the other is replica for the primary from the
other server. Third server holds two offsite replicas from the other
datacenter (for DR purposes)

Each replica was set up by taking pg_basebackup from primary
(pg_basebackup -h hostname -p 5430 -D /data2/basebackup -P -v -U
user -x -c fast). Then directories from initdb were replaced with
the ones from basebackup (only the configuration files remained) and
the replica started and was successfully connected to primary. It was
running with no problem keeping up with the primary. We were
experiencing some connection problem between the two datacenters, but
replication didn't break.

Then we needed to take one datacenter offline due to hardware
maintenance. So I've switched the applications down, verified that no
more clients were connected to primary, then shut the primary down and
restarted replica without recovery.conf and the application were
started using the new db with no problem. Other replica even
successfully reconnected to this new primary.


What other replica?



Few hours from the switch lines appeared in the server log (which
didn't appear before), indicating a corruption:

ERROR:  index account_username_key contains unexpected zero page at
block 1112135
ERROR:  right sibling's left-link doesn't match: block 476354 links to
1062443 instead of expected 250322 in index account_pkey

..and many more reporting corruption in several other indexes.


What happened to the primary you shut down?



The issue was resolved by creating new indexes and dropping the
affected ones, although there were already some duplicities in the
data, that has to be resolved, as some of the indexes were unique.

This particular case uses Postgres 9.1.14 on both primary and replica.
But I've experienced similar behavior on 9.2.9. OS Centos 6.6 in all
cases. This may mean, that there can be something wrong with our
configuration or the replication setup steps, but I've set up another
instance using the same steps with no problem.

Fsync related setting are at their defaults. Data directories are on
RAID10 arrays, with BBUs. Filesystem is ext4 mounted with nobarrier
option.

Database is fairly large ~120GB with several 50mil+ tables, lots of
indexes and FK constraints. It is mostly queried,
updates/inserts/deletes are only several rows/s.

Any help will be appreciated.

Petr Novak

System Engineer
Avast s.r.o.





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Starting new cluster from base backup

2015-02-18 Thread Adrian Klaver

On 02/18/2015 11:51 AM, Guillaume Drolet wrote:



2015-02-18 13:40 GMT-05:00 Adrian Klaver adrian.kla...@aklaver.com:

On 02/18/2015 10:24 AM, Guillaume Drolet wrote:



2015-02-18 11:06 GMT-05:00 Adrian Klaver adrian.kla...@aklaver.com:



 So is E:\ a network drive shared by both machines?


No, E:\ is a local drive on which I created a tablespace, in
order to
have enough space for my database. In my current setup on the source
machine, PGDATA is in the default PGSQL installation on the OS
disk so
space is limited. On the destination machine, PGDATA will be on a
different, larger disk than the OS disk.


So is there an E:\ drive available on the destination machine?


Yes there is an E:\ drive available on the destination machine. But for
now, these two machines don't communicate. I take the backup on a hot
swap disk (F:\) and then swap it into the destination machine.
Ultimately when my database will be running on the destination machine,
I'll connect to it from other machines in my local network.


So if I understand correctly you have:

1) On source machine a directory E:\Data\Database.
2) On the source machine in Postgres you have a created a tablespace 
that points at E:\Data\Database.

3) On destination machine you have an E:\ drive also.

Then have you tried:

1) Create \Data\Database directory under E:\ on the destination machine.
2) Do the pg_basebackup.





Can't the dependency issue be fixed by creating a new junction in
data/pg_tblspc that would point to the relocated tablespace?


The docs say you can:

http://www.postgresql.org/docs/9.3/static/manage-ag-tablespaces.html

The directory $PGDATA/pg_tblspc contains symbolic links that point to 
each of the non-built-in tablespaces defined in the cluster. Although 
not recommended, it is possible to adjust the tablespace layout by hand 
by redefining these links. Under no circumstances perform this operation 
while the server is running. Note that in PostgreSQL 9.1 and earlier you 
will also need to update the pg_tablespace catalog with the new 
locations. (If you do not, pg_dump will continue to output the old 
tablespace locations.)



I have not done it and I see the "Although not recommended..." part
above, so I would say that is a last-resort solution.



  Thanks.




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Fwd: Data corruption after restarting replica

2015-02-18 Thread dinesh kumar
Hi,

On Mon, Feb 16, 2015 at 2:44 AM, Novák, Petr nov...@avast.com wrote:

 Hello,

 sorry for posting to a second list, but as I've received no reply
 there, I'm trying my luck here.

 Thanks
 Petr


 -- Forwarded message --
 From: Novák, Petr nov...@avast.com
 Date: Tue, Feb 10, 2015 at 12:49 PM
 Subject: Data corruption after restarting replica
 To: pgsql-b...@postgresql.org


 Hi all,

 we're experiencing data corruption after switching streamed replica to
 primary.
 This is not the first time I've encountered this issue, so I'll try to
 describe it in more detail.

 For this particular cluster we have 6 servers in two datacenters (3 in
 each). There are two instances running on each server, each with its
 own port and datadir. On the first two servers in each datacenter one
 instance is primary and the other is replica for the primary from the
 other server. Third server holds two offsite replicas from the other
 datacenter (for DR purposes)

 Each replica was set up by taking pg_basebackup from primary
 (pg_basebackup -h hostname -p 5430 -D /data2/basebackup -P -v -U
 user -x -c fast). Then directories from initdb were replaced with
 the ones from basebackup (only the configuration files remained) and
 the replica started and was successfully connected to primary. It was
 running with no problem keeping up with the primary. We were
 experiencing some connection problem between the two datacenters, but
 replication didn't break.

 Then we needed to take one datacenter offline due to hardware
 maintenance. So I've switched the applications down, verified that no
 more clients were connected to primary, then shut the primary down and
 restarted replica without recovery.conf and the application were
 started using the new db with no problem. Other replica even
 successfully reconnected to this new primary.


Before restarting the replica, did you make sure that all master
transactions had been applied to the replica node?
And may we know why you restarted the replica without recovery.conf: did
you want to maintain the same timeline for the xlogs, or was there some
other specific reason?

Regards,
Dinesh
manojadinesh.blogspot.com


 Few hours from the switch lines appeared in the server log (which
 didn't appear before), indicating a corruption:

 ERROR:  index account_username_key contains unexpected zero page at
 block 1112135
 ERROR:  right sibling's left-link doesn't match: block 476354 links to
 1062443 instead of expected 250322 in index account_pkey

 ..and many more reporting corruption in several other indexes.

 The issue was resolved by creating new indexes and dropping the
 affected ones, although there were already some duplicities in the
 data, that has to be resolved, as some of the indexes were unique.

 This particular case uses Postgres 9.1.14 on both primary and replica.
 But I've experienced similar behavior on 9.2.9. OS Centos 6.6 in all
 cases. This may mean, that there can be something wrong with our
 configuration or the replication setup steps, but I've set up another
 instance using the same steps with no problem.

 Fsync related setting are at their defaults. Data directories are on
 RAID10 arrays, with BBUs. Filesystem is ext4 mounted with nobarrier
 option.

 Database is fairly large ~120GB with several 50mil+ tables, lots of
 indexes and FK constraints. It is mostly queried,
 updates/inserts/deletes are only several rows/s.

 Any help will be appreciated.

 Petr Novak

 System Engineer
 Avast s.r.o.





Re: [GENERAL] #Personal#: Reg: Multiple queries in a transaction

2015-02-18 Thread Medhavi Mahansaria
Hi Kevin,

Thanks!

But the savepoint concept will not work for me as desired.

Is there any other way, apart from SAVEPOINT, that can be incorporated?

I am not using a script. I am writing a C++ program.

My problem is that I have 2 cases:

Case 1: When Q2 fails (we discard the error), I want to continue to Q3 and
commit the changes made by Q1 and Q3 once Q3 has executed successfully.

Case 2: When Q2 fails, I want it to throw an error and roll back the changes
made by Q1, and not proceed to Q3 at all.

Note: This is just a small example. I need a solution for an entire application 
which follows the same concept across multiple queries.

How can I incorporate this?

Thanks & Regards
Medhavi Mahansaria
Tata Consultancy Services Limited
Unit-VI, No.78, 79 83,
L-Centre, EPIP Industrial Estate,
Whitefield
Bangalore - 560066,Karnataka
India
Ph:- +91 80 67253769
Cell:- +91 9620053040
Mailto: medhavi.mahansa...@tcs.com
Website: http://www.tcs.com

Experience certainty. IT Services
Business Solutions
Consulting



-Kevin Grittner kgri...@ymail.com wrote: - 
To: Medhavi Mahansaria medhavi.mahansa...@tcs.com, 
pgsql-general@postgresql.org pgsql-general@postgresql.org
From: Kevin Grittner kgri...@ymail.com
Date: 02/18/2015 09:40PM
Subject: Re: [GENERAL] #Personal#: Reg: Multiple queries in a transaction


Medhavi Mahansaria medhavi.mahansa...@tcs.com wrote:

 I need to execute a series of queries
 in a transaction, say Q1, Q2, Q3.

 Q1 - success
 Q2 - Failed
 Q3 - Success

 My issue is that after Q2 fails all
 the queries that follow give the error "ERROR: current transaction
 is aborted, commands ignored until end of transaction block"

 I want to move ahead in the transaction
 and execute Q3 also even though Q2 was a failure.

 Can you please suggest a way to do so
 in PostgreSQL 9.3.

 Is there a way to turn autocommit off?

The server does not support that, but if you're running a script
through psql you should look at using:

\set ON_ERROR_ROLLBACK on

Docs at:

http://www.postgresql.org/docs/9.3/interactive/app-psql.html

If you are not using psql you can use savepoints:

http://www.postgresql.org/docs/8.4/interactive/sql-savepoint.html

ROLLBACK TO SAVEPOINT if the statement fails.
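The savepoint-per-statement pattern Kevin describes can be sketched with a toy Python stand-in (no real database; the `Txn` class here is hypothetical, not a driver API): a savepoint is taken before each query, and a failing query is rolled back to its savepoint while the rest of the transaction survives.

```python
class QueryError(Exception):
    pass

class Txn:
    """Toy transaction: a list of applied statements, with savepoints."""
    def __init__(self):
        self.applied = []
    def savepoint(self):
        return len(self.applied)        # remember the current position
    def rollback_to(self, sp):
        del self.applied[sp:]           # discard work done since the savepoint
    def run(self, name, ok=True):
        self.applied.append(name)
        if not ok:
            raise QueryError(name)

txn = Txn()
for name, ok in [("Q1", True), ("Q2", False), ("Q3", True)]:
    sp = txn.savepoint()                # SAVEPOINT before each query
    try:
        txn.run(name, ok)
    except QueryError:
        txn.rollback_to(sp)             # ROLLBACK TO SAVEPOINT on failure
# txn.applied is now ["Q1", "Q3"]: Q2's work was undone, Q1 and Q3 survive.
```

In real SQL the loop body corresponds to `SAVEPOINT s; <query>; RELEASE s` on success and `ROLLBACK TO SAVEPOINT s` on failure, inside one enclosing transaction.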

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [GENERAL] #Personal#: Reg: Multiple queries in a transaction

2015-02-18 Thread Medhavi Mahansaria
Hi Bill,
 
Thanks!
 
But the savepoint concept will not work for me as desired.

Is there any other way, apart from SAVEPOINT, that can be incorporated?
 
I am not using a script. I am writing a c++ program.
 
My problem is that I have 2 cases:
 
Case 1: When Q2 fails (we discard the error), I want to continue to Q3 and 
commit the changes made by Q1 and Q3 once Q3 has executed successfully.
 
Case 2: When Q2 fails, I want it to throw an error and roll back the changes 
made by Q1, and not proceed to Q3 at all.
 
Note: This is just a small example. I need a solution for an entire application 
which follows the same concept across multiple queries.
 
How can I incorporate this?

Thanks & Regards
Medhavi Mahansaria
Tata Consultancy Services Limited
Unit-VI, No.78, 79 83,
L-Centre, EPIP Industrial Estate,
Whitefield
Bangalore - 560066,Karnataka
India
Ph:- +91 80 67253769
Cell:- +91 9620053040
Mailto: medhavi.mahansa...@tcs.com
Website: http://www.tcs.com

Experience certainty. IT Services
Business Solutions
Consulting



-Bill Moran wmo...@potentialtech.com wrote: - 
To: Medhavi Mahansaria medhavi.mahansa...@tcs.com
From: Bill Moran wmo...@potentialtech.com
Date: 02/18/2015 09:23PM
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] #Personal#: Reg: Multiple queries in a transaction


On Wed, 18 Feb 2015 20:36:45 +0530
Medhavi Mahansaria medhavi.mahansa...@tcs.com wrote:

 I need to execute a series of queries in a transaction, say Q1, Q2, Q3.
 
 Q1 - success
 Q2 - Failed
 Q3 - Success
 
 My issue is that after Q2 fails, all the queries that follow give the error
 ERROR: current transaction is aborted, commands ignored until end of
 transaction block
 
 I want to move ahead in the transaction and execute Q3 also even though Q2 
 was a failure.
 
 Can you please suggest a way to do so in PostgreSQL 9.3.

I believe savepoints are what you want:
http://www.postgresql.org/docs/9.3/static/sql-savepoint.html

Create a savepoint prior to each query, then decide how to proceed
based on the success status of that query. For example, in the scenario
you describe above:

BEGIN
SAVEPOINT q1
Q1 - success
RELEASE SAVEPOINT q1
SAVEPOINT q2
Q2 - failure
ROLLBACK TO SAVEPOINT q2
SAVEPOINT q3
Q3 - success
RELEASE SAVEPOINT q3
COMMIT

In which case Q1 and Q3 would successfully be committed.

-- 
Bill Moran




Re: [GENERAL] #Personal#: Reg: Multiple queries in a transaction

2015-02-18 Thread David G Johnston
Medhavi Mahansaria wrote
 Hi Bill,
  
 Thanks!
  
 But savepoint concept will not work for me as desired.
  
 Is there any other way apart from SAVEPOINT that can be incorporated.
  
 I am not using a script. I am writing a c++ program.
  
 My problem is that I have 2 cases:
  
 Case 1: When Q2 fails (we delete the error), i want to continue to Q3 and
 commit changes done by Q1 and Q3 once Q3 has executed successfully.
  
 Case 2: When Q2 fails, I want it to throw an error. and rollback the
 changes made by Q1 and not proceed to Q3 at all.
  
 Note: This is just a small example. I need a solution for an entire
 application which follows the same concept across multiple queries.
  
 How can I incorporate this?

Forgo transactions or use savepoints.  Those are your tools.  If you cannot
find a way to solve your problem with those tools you either need to choose,
or build, a different toolbox or explain your actual problem in greater
detail so that others can see if there are solutions you are overlooking.

Or redefine your problem.

David J.





--
View this message in context: 
http://postgresql.nabble.com/Personal-Reg-Multiple-queries-in-a-transaction-tp5838427p5838539.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] #Personal#: Reg: Multiple queries in a transaction

2015-02-18 Thread David G Johnston
Medhavi Mahansaria wrote
 I need to execute a series of queries in a transaction, say Q1, Q2, Q3.
 [...]
 Is there a way to turn autocommit off?

I assume you meant turn Autocommit on?

So, do you want to execute the queries in a transaction or do you want to
autocommit each one?

Autocommit is a behavior of your client library, not the server.  Check its
documentation, whatever it is, for details.

As mentioned save points will probably work too.

If you explain why you think you need this then you may also get alternative
suggestions.

David J.




--
View this message in context: 
http://postgresql.nabble.com/Personal-Reg-Multiple-queries-in-a-transaction-tp5838427p5838444.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] postgresql93-9.3.5: deadlock when updating parent table expected?

2015-02-18 Thread Bill Moran
On Wed, 18 Feb 2015 04:55:47 +
Dmitry O Litvintsev litvi...@fnal.gov wrote:

 Hi, 
 
 I recently updated to postgresql93-9.3.5 (from 9.2.9). I see frequent 
 deadlocks 
 when updating parent table in insert into child table. There is foreign key 
 constraint between 
 child table and parent table. Parent table is updated on by trigger in insert 
 into child table. So 
 pretty much standard thing. Is it expected to deadlock?
 
 A simplified version:
 
  create table volume (
id serial primary key, 
name varchar, 
counter integer default(0));
 
  create table file (
id serial primary key, 
name varchar, 
volume bigint, foreign key (volume) references volume(id));
   
  create or replace function update_volume_file_counter() 
  returns trigger as $$
  begin
  if (tg_op='INSERT') then
   update volume set counter=counter+1 where volume.id=new.volume;
   return new;
  elseif (tg_op='DELETE') then
   update volume set counter=counter-1 where volume.id=old.volume;
   return old;
  end if;
  end;
  $$
  language plpgsql;
 
  create trigger update_volume_counter
 after insert or delete on file
 for each row 
 execute procedure update_volume_file_counter();
 
 So record is inserted into file table and counter gets updated in volume 
 table. Nothing 
 fancy. 
 
 insert into volume (name) values ('foo');
 insert into file(name,volume) values ('f1',(select id from volume where 
 name='foo'));
 insert into file(name,volume) values ('f2',(select id from volume where 
 name='foo'));
 
 select * from volume;
  id | name | counter 
 +--+-
   2 | foo  |   2
 (1 row)
 
 delete from file where name='f2';
 DELETE 1
 billing=# select * from volume;
  id | name | counter 
 +--+-
   2 | foo  |   1
 (1 row)
 
 So, counter increments/decrements as it should.
 Works fine. 
 But in real life application where multiple threads are inserting into file 
 table I see sometimes:
 
 CSTERROR:  deadlock detected
  Process 24611 waits for ExclusiveLock on tuple (1749,58) of relation 
 138328329 of database 138328263; blocked by process 25082.
  Process 25082 waits for ShareLock on transaction 14829630; blocked by 
 process 24611.
  Process 24611: update volume set counter=counter+1 where 
 id=new.volume;
  Process 25082: insert into file(name,volume) 
 values('f1',(select id from volume where name='foo'));
   CSTHINT:  See server log for query details.
 
 (not a real log file excerpt). 
 
 This does not happen all the time, happens sometimes when multiple threads 
 add file to the same volume;. 
 
 Question - am I doing something wrong or this deadlock is expected? ( I read 
 somewhere 
 that when inserting into  child table the corresponding record of parent 
 table is locked). 
 I did not seem to encounter this issue in postgresql 9.2 and 8.4 which I had 
 before.

Operations on the file table will take out a sharelock on the
corresponding row in the volume table, to ensure the foreign
key isn't made invalid by another process while this transaction
is in progress.

In order for this to deadlock, I believe you would have to have
2 processes operating on the same volume id at the same time.
You're using ambiguous terms like "sometimes" to describe the
frequency. The chance of it happening is a factor of how much
INSERT/DELETE traffic you have on the file table, and how often
those INSERT/DELETEs center around a single volume id.

 Should I drop foreign key constraint ?

If you don't feel that the relational guarantees provided by the
constraint are necessary, then you should delete it. You should
NOT delete the foreign key in an attempt to reduce deadlocks, as
you'd simply be avoiding one relational problem by allowing another
to happen.

Deadlocks are a perfectly normal consequence of high write activity.
It's possible to design schemas and access patterns that can't
deadlock, but it's time-consuming, complex, and generally performs
poorly.

In this case, however, I think you can avoid the deadlock with the
following:

BEGIN;
SELECT id FROM volume WHERE id = $? FOR UPDATE;
-- INSERT or DELETE here
COMMIT;

I don't believe this will create a significant performance
degradation, but you'll have to test it against your workload
to be sure.
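Applied to the schema from the original post, that would look something
like this (a sketch only; 'foo' stands in for whatever volume the
inserting thread is working on):

```sql
BEGIN;
-- Lock the parent row up front: concurrent inserters for the same
-- volume now queue on this row lock instead of deadlocking later
-- inside the trigger's UPDATE.
SELECT id FROM volume WHERE name = 'foo' FOR UPDATE;
INSERT INTO file (name, volume)
VALUES ('f3', (SELECT id FROM volume WHERE name = 'foo'));
COMMIT;
```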

A more general solution is to have your application code catch
deadlocks and replay the applicable transaction when they happen.
A deadlock can generally be considered "I can't do that right now,
please try again later" and, unless the server is under a
tremendous load, the second attempt usually succeeds (of course,
there is a chance that it will deadlock again, so you have to
take into account that it might take an arbitrary number of
attempts before it succeeds).

I've seen this all too many times: many application developers
assume that a deadlock is an error that should never happen, and
this seems to result from the fact that most application developers
have only worked on applications that are 99% read 

Re: [GENERAL] BDR Monitoring, missing pg_stat_logical_decoding view

2015-02-18 Thread Andres Freund
Hi,

On 2015-02-17 22:37:43 +, Steve Boyle wrote:
 I'm trying to setup replication monitoring for BDR, following the doc here:
 https://wiki.postgresql.org/wiki/BDR_Monitoring
 
 My BDR installs seem to be missing the pg_stat_logical_decoding view.  Is 
 there something specific I need to do to install/create that view?

It has been renamed since - it's part of postgresql 9.4 and named
pg_replication_slots. It seems most of the page refers to it by the
correct name, just a subsection doesn't... Sorry for that.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training  Services




Re: [GENERAL] Failure loading materialized view with pg_restore

2015-02-18 Thread Rémi Cura
Hey,
pg_hba is to manage who has *access* to the database.
Your problem seems to be who has *SELECT permission* on table x.
Cheers,
Rémi-C

2015-02-18 12:03 GMT+01:00 BladeOfLight16 bladeofligh...@gmail.com:

 On Wed, Feb 18, 2015 at 5:48 AM, Brian Sutherland br...@vanguardistas.net
  wrote:

 # dump and reload
 pg_dump --username super --format c -f dump.dump orig
 createdb copied


 It might be helpful to dump in the plain SQL format and look at what it's
 doing.



Re: [GENERAL] Starting new cluster from base backup

2015-02-18 Thread Guillaume Drolet
2015-02-17 17:14 GMT-05:00 Adrian Klaver adrian.kla...@aklaver.com:

 On 02/17/2015 06:54 AM, Guillaume Drolet wrote:

 Adrian: thanks for this information.

 I tried running pg_basebackup in plain format with option -X stream
 (pg_basebackup -D F:\208376PT\db -X stream -l 208376PT17022015 -U
 postgres -P) but I got the message:

 pg_basebackup: directory E:\Data\Database exists but is not empty

 I created a tablespace using CREATE TABLESPACE at the location mentioned
 in the message. According to what I read online about this, this message
 is issued when a tablespace was created under PGDATA. In my case, only
 the directory junction pointing to my tablespace (on a different drive
 than PGDATA) exists under PGDATA, not the tablespace itself.

 The only way I can run pg_basebackup with WAL files is with option -Ft
 and -X fetch. I'd much prefer using plain mode since my 670 GB
 tablespace takes a lot of time to extract when tarred. Is there another
 way to approach this?


 All I can come up with at the moment

 So what is the path on the original machine and can it be replicated on
 the new machine, at least temporarily?


The path on the original (i.e. source) machine is: E:\Data\Database\
PG_9.3_201306121\..


 I'm thinking if the path can be replicated, let pg_basebackup write to it
 and then create the tablespace you want and do ALTER TABLE SET TABLESPACE
 to move the tables. You would also need to do this for indexes.


Not sure I understand when you say let pg_basebackup write to it. This
tablespace already exists on the source machine, so it cannot be written over.
It needs to be included in the backup so that I can then recreate it on the
destination machine.

Anyway, in the end I want to move the database that's in that tablespace
back to pg_default. I see two possibilities:

1) Moving it now, before taking the base backup, using ALTER DATABASE mydb
SET TABLESPACE pg_default; Then I assume I should be able to use -X stream
and plain format with pg_basebackup.

Or

2) Delete the symbolic link in data/pg_tblspc, use pg_basebackup with -X
stream and plain format, copy the tablespace from the source to the
destination machine. Create a new symbolic link in data/pg_tblspc on the
new machine and point it to the copied tablespace.

Are these two approaches feasible?





 Thanks.




 --
 Adrian Klaver
 adrian.kla...@aklaver.com



Re: [GENERAL] segmentation fault postgres 9.3.5 core dump perlu related ?

2015-02-18 Thread Day, David
Update/Information sharing: ( FreeBSD 10.0 (amd64) – Postgres 9.3.5 – Perl 5.18 
)

I have converted our Postgres plperlu functions to plpython2u to see if the 
postgres segmentation faults disappear.
Lacking a known way to reproduce the error on demand, I will have to wait a few 
weeks for the absence of the symptom before I might conclude that this bug, 
reported to me by Guy Helmer, was the issue. Migration/upgrade to FreeBSD 
10.1 was not an immediate option.


Regards

Dave



Guy,

No I had not seen that bug report before.  ( 
https://rt.perl.org/Public/Bug/Display.html?id=122199 )

We did migrate from FreeBSD 9.x (2?) and I think it is true
that we were not experiencing the problem at the time.
So it might be a good fit/explanation for our current experience

There were a couple of suggestions to follow up on.
I’ll keep the thread updated.

Thanks, a good start to my Friday the 13th.


Regards


Dave Day






From: Guy Helmer [mailto:ghel...@palisadesystems.com]
Sent: Thursday, February 12, 2015 6:19 PM
To: Day, David
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] segmentation fault postgres 9.3.5 core dump perlu 
related ?


On Feb 12, 2015, at 3:21 PM, Day, David 
d...@redcom.commailto:d...@redcom.com wrote:

Update/Information sharing on my pursuit of  segmentation faults

FreeBSD 10.0-RELEASE-p12 amd64
Postgres version 9.3.5

Below are three postgres core files generated from two different machine ( 
Georgia and Alabama ) on Feb 11.
These cores would not be caused  from an  environment update issue that I last 
suspected might be causing the segfaults
So I am kind of back to square one in terms of thinking what is occurring.

?  I am not sure that I understand the associated time events in the postgres 
log file output. Is this whatever happened to be running in the other postgres 
forked processes when the cored process was detected?
If that is the case, then I have probably been reading too much into the content 
of the postgres log file at the time of the core.
It probably just represents collateral damage from routine transactions that 
were in other forked processes at the time one of the processes cored?

Therefore I would now just assert that postgres has a sporadic segmentation 
problem, with no known way to reliably cause it, 
and I am uncertain how to proceed to resolve it.

. . .

 Georgia-Core 8:38 -  Feb 11
[New process 101032]
[New Thread 802c06400 (LWP 101032)]
Core was generated by `postgres'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x00080c4b6d51 in Perl_hfree_next_entry () from 
/usr/local/lib/perl5/5.18/mach/CORE/libperl.so.5.18
(gdb) bt
#0  0x00080c4b6d51 in Perl_hfree_next_entry () from 
/usr/local/lib/perl5/5.18/mach/CORE/libperl.so.5.18
#1  0x00080c4cab49 in Perl_sv_clear () from 
/usr/local/lib/perl5/5.18/mach/CORE/libperl.so.5.18
#2  0x00080c4cb13a in Perl_sv_free2 () from 
/usr/local/lib/perl5/5.18/mach/CORE/libperl.so.5.18
#3  0x00080c4e5102 in Perl_free_tmps () from 
/usr/local/lib/perl5/5.18/mach/CORE/libperl.so.5.18
#4  0x00080bcfedea in plperl_destroy_interp () from 
/usr/local/lib/postgresql/plperl.so
#5  0x00080bcfec05 in plperl_fini () from 
/usr/local/lib/postgresql/plperl.so
#6  0x006292c6 in ?? ()
#7  0x0062918d in proc_exit ()
#8  0x006443f3 in PostgresMain ()
#9  0x005ff267 in PostmasterMain ()
#10 0x005a31ba in main ()
(gdb) info threads
  Id   Target Id Frame
* 2Thread 802c06400 (LWP 101032) 0x00080c4b6d51 in 
Perl_hfree_next_entry () from 
/usr/local/lib/perl5/5.18/mach/CORE/libperl.so.5.18
* 1Thread 802c06400 (LWP 101032) 0x00080c4b6d51 in 
Perl_hfree_next_entry () from 
/usr/local/lib/perl5/5.18/mach/CORE/libperl.so.5.18


Given two of the coredumps are in down in libperl and this is FreeBSD 10.0 
amd64, have you seen this?

https://rt.perl.org/Public/Bug/Display.html?id=122199

Michael Moll suggested trying setting vm.pmap.pcid_enabled to 0 but I don’t 
recall seeing if that helped.

Guy




Re: [GENERAL] #Personal#: Reg: Multiple queries in a transaction

2015-02-18 Thread Bill Moran
On Wed, 18 Feb 2015 20:36:45 +0530
Medhavi Mahansaria medhavi.mahansa...@tcs.com wrote:

 I need to execute a series of queries in a transaction, say Q1, Q2, Q3.
 
 Q1 - success
 Q2 - Failed
 Q3 - Success
 
 My issue is that after Q2 fails all the queries that  follow give error E
 RROR: current transaction is aborted, commands ignored until end of 
 transaction block
 
 I want to move ahead in the transaction and execute Q3 also even though Q2 
 was a failure.
 
 Can you please suggest a way to do so in PostgreSQL 9.3.

I believe savepoints are what you want:
http://www.postgresql.org/docs/9.3/static/sql-savepoint.html

Create a savepoint prior to each query, then decide how to proceed
based on the success status of that query. For example, in the scenario
you describe above:

BEGIN
SAVEPOINT q1
Q1 - success
RELEASE SAVEPOINT q1
SAVEPOINT q2
Q2 - failure
ROLLBACK TO SAVEPOINT q2
SAVEPOINT q3
Q3 - success
RELEASE SAVEPOINT q3
COMMIT

In which case Q1 and Q3 would successfully be committed.
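As a concrete, runnable sketch of the pattern above (the table and the
failing statement are invented for illustration):

```sql
BEGIN;
SAVEPOINT q1;
CREATE TABLE t (y int);               -- Q1 succeeds
RELEASE SAVEPOINT q1;
SAVEPOINT q2;
INSERT INTO t VALUES ('not an int');  -- Q2 fails with a type error
ROLLBACK TO SAVEPOINT q2;             -- undoes only Q2; the transaction stays usable
SAVEPOINT q3;
INSERT INTO t VALUES (1);             -- Q3 succeeds
RELEASE SAVEPOINT q3;
COMMIT;                               -- Q1 and Q3 are committed
```

If instead the whole transaction should fail when Q2 fails, issue a
plain ROLLBACK after the error instead of continuing.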

-- 
Bill Moran




Re: [GENERAL] Starting new cluster from base backup

2015-02-18 Thread Adrian Klaver

On 02/18/2015 04:26 AM, Guillaume Drolet wrote:



2015-02-17 17:14 GMT-05:00 Adrian Klaver adrian.kla...@aklaver.com
mailto:adrian.kla...@aklaver.com:

On 02/17/2015 06:54 AM, Guillaume Drolet wrote:

Adrian: thanks for this information.

I tried running pg_basebackup in plain format with option -X stream
(pg_basebackup -D F:\208376PT\db -X stream -l
208376PT17022015 -U
postgres -P) but I got the message:

pg_basebackup: directory E:\Data\Database exists but is not empty

I creatde a tablespace using CREATE TABLESPACE at the location
mentioned
in the message. According to what I read online about this, this
message
is issued when a tablespace was created under PGDATA. In my
case, only
the directory junction pointing to my tablespace (on a different
drive
than PGDATA) exists under PGDATA, not the tablespace itself.

The only way I can run pg_basebackup with WAL files is with
option -Ft
and -X fetch. I'd much prefer using plain mode since my 670 GB
tablespace takes a lot of time to extract when tarred. Is there
another
way to approach this?


All I can come up with at the moment

So what is the path on the original machine and can it be replicated
on the new machine, at least temporarily?


The path on the original (i.e. source) machine is:
E:\Data\Database\PG_9.3_201306121\..


I'm thinking if the path can be replicated, let pg_basebackup write
to it and then create the tablespace you want and do ALTER TABLE SET
TABLESPACE to move the tables. You would also need to do this for
indexes.


Not sure I understand when you say let pg_basebackup write to it. This
tablespace already exists on the source machine so cannot be written
over. It needs to be written in the backup so that I can than recreate
it on the destination machine.


So is E:\ a network drive shared by both machines?



Anyway, in the end I want to move the database that's in that tablespace
back to pg_default. I see two possibilities:

1) Moving it now, before taking the base backup, using ALTER DATABASE
mydb SET TABLESPACE pg_default; Then I assume I should be able to use -X
stream and plain format with pg_basebackup.

Or

2) Delete the symbolic link in data/pg_tblspc, use pg_basebackup with -X
stream and plain format, copy the tablespace from the source to the
destination machine. Create a new symbolic link in data/pg_tblspc on the
new machine and point it to the copied tablespace.

Are these two approaches feasible?


I would say 1 would be more feasible than 2. If you use 2, delete the 
symlink and do the backup, what happens with any dependencies between 
objects in the default tablespace and the one you cut out? Also the 
pg_basebackup will be taking a backup of one part of the cluster at one 
point in time and the copy of the remote tablespace will possibly be at 
another point in time. I do not see that ending well.
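A sketch of option 1 (mytblspc is a placeholder for the actual
tablespace name; the ALTER must be run while connected to another
database, e.g. postgres, and mydb must have no active connections):

```sql
ALTER DATABASE mydb SET TABLESPACE pg_default;
-- once nothing is left in the custom tablespace it can be dropped:
DROP TABLESPACE mytblspc;
```

After that, pg_basebackup with -X stream and plain format should no
longer trip over the tablespace directory.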








Thanks.




--
Adrian Klaver
adrian.kla...@aklaver.com mailto:adrian.kla...@aklaver.com





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] #Personal#: Reg: Multiple queries in a transaction

2015-02-18 Thread Kevin Grittner
Medhavi Mahansaria medhavi.mahansa...@tcs.com wrote:

 I need to execute a series of queries
 in a transaction, say Q1, Q2, Q3.

 Q1 - success
 Q2 - Failed
 Q3 - Success

 My issue is that after Q2 fails, all
 the queries that follow give the error ERROR: current transaction
 is aborted, commands ignored until end of transaction block

 I want to move ahead in the transaction
 and execute Q3 also even though Q2 was a failure.

 Can you please suggest a way to do so
 in PostgreSQL 9.3.

 Is there a way to turn autocommit off?

The server does not support that, but if you're running a script
through psql you should look at using:

\set ON_ERROR_ROLLBACK on

Docs at:

http://www.postgresql.org/docs/9.3/interactive/app-psql.html

If you are not using psql you can use savepoints:

http://www.postgresql.org/docs/8.4/interactive/sql-savepoint.html

ROLLBACK TO SAVEPOINT if the statement fails.
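For illustration, a minimal psql session showing the effect of
ON_ERROR_ROLLBACK (the table name t is made up; psql wraps each
statement in an implicit savepoint, so this works only through psql,
not through a driver):

```sql
\set ON_ERROR_ROLLBACK on
BEGIN;
CREATE TABLE t (y int);          -- Q1: succeeds
INSERT INTO t VALUES ('oops');   -- Q2: fails, rolled back to the implicit savepoint
INSERT INTO t VALUES (3);        -- Q3: still executes
COMMIT;                          -- t ends up with the single row 3
```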

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [GENERAL] Failure loading materialized view with pg_restore

2015-02-18 Thread Tom Lane
Brian Sutherland br...@vanguardistas.net writes:
 If I run this set of commands against PostgreSQL 9.4.1 I pg_restore
 throws an error with a permission problem. Why it does so is a mystery
 to me, given that the user performing the restore is a superuser:

The same thing would happen without any dump and restore:

regression=# create user nobody;
CREATE ROLE
regression=# CREATE TABLE x (y int); 
CREATE TABLE
regression=# CREATE MATERIALIZED VIEW myview AS select * from x;
SELECT 0
regression=# ALTER TABLE myview OWNER TO nobody;
ALTER TABLE
regression=# REFRESH MATERIALIZED VIEW myview;
ERROR:  permission denied for relation x

User nobody does not have permission to read table x, so the REFRESH
fails, because the view's query executes as the view's owner.
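So, assuming nobody really is meant to own (and therefore refresh) the
view, one fix is to grant it read access on the underlying table before
dumping, e.g.:

```sql
GRANT SELECT ON x TO nobody;
REFRESH MATERIALIZED VIEW myview;  -- now runs as nobody without error
```

Alternatively, keep the view owned by a role that can already read x.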

regards, tom lane




[GENERAL] Advise on unit testing framework enhancement

2015-02-18 Thread Sid
There are many Postgres unit testing libraries, like pgTAP
(http://pgtap.org/), PGUnit (http://en.dklab.ru/lib/dklab_pgunit/) and Epic
(http://www.epictest.org/).

But I think there are some important missing pieces that would make these
tools easier to use, so I wrote the pypgTAP project
(https://github.com/itissid/pypgTAP) to make them easier to use, at least
for me.

*Here are the current issues that it solves for me:*
1. Creates a throwaway postgres server that loads up pgTAP library ready
for writing tests against and then spins it down.
2. Shares the python virtualenv of the client with the plpythonu stored
procedures on the throwaway. This allows easier python code reuse.
3. Allows a user to discover resources within the test directory, for
example for loading test data.
4. Multiple tests can be written in pure SQL with pgTAP and executed
transparently very easily (see
https://github.com/itissid/pypgTAP#how-to-use-in-your-project).
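For context, a pgTAP test file run through such a harness typically
looks something like this (a sketch; my_table is hypothetical and the
pgtap extension must be installed):

```sql
BEGIN;
SELECT plan(2);

SELECT has_table('my_table', 'my_table should exist');
SELECT results_eq(
    'SELECT count(*)::int FROM my_table',
    ARRAY[0],
    'my_table starts empty'
);

SELECT * FROM finish();
ROLLBACK;  -- leave no trace in the test database
```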

*How might I make it more useful for you?*
1. What missing features prevent you from testing your SQL/psql in
production, and could this tool be enhanced with them?
2. How do you typically maintain production code for Postgres? For example,
does the data definition live in projects in some VCS like git/svn/hg, or is
it managed by tools like Alembic? The latter can be a deal breaker for using
this tool, as it assumes the data definition, along with DML code and its
tests, exists as physical files in a project directory.
3. Perhaps you want to plug in your own SQL libraries instead of pgTAP to
write tests against?


-Regards
Sid


Re: [GENERAL] postgresql93-9.3.5: deadlock when updating parent table expected?

2015-02-18 Thread Alvaro Herrera
Dmitry O Litvintsev wrote:
 Hi, 
 
 I recently updated to postgresql93-9.3.5 (from 9.2.9). I see frequent 
 deadlocks 
 when updating parent table in insert into child table. There is foreign key 
 constraint between 
 child table and parent table. Parent table is updated on by trigger in insert 
 into child table. So 
 pretty much standard thing. Is it expected to deadlock?

This is probably caused by a bug that was fixed in 9.3.6:

Author: Alvaro Herrera alvhe...@alvh.no-ip.org
Branch: master [0e5680f47] 2014-12-26 13:52:27 -0300
Branch: REL9_4_STABLE Release: REL9_4_1 [0e3a1f71d] 2014-12-26 13:52:27 -0300
Branch: REL9_3_STABLE Release: REL9_3_6 [048912386] 2014-12-26 13:52:27 -0300

Grab heavyweight tuple lock only before sleeping

We were trying to acquire the lock even when we were subsequently
not sleeping in some other transaction, which opens us up unnecessarily
to deadlocks.  In particular, this is troublesome if an update tries to
lock an updated version of a tuple and finds itself doing EvalPlanQual
update chain walking; more than two sessions doing this concurrently
will find themselves sleeping on each other because the HW tuple lock
acquisition in heap_lock_tuple called from EvalPlanQualFetch races with
the same tuple lock being acquired in heap_update -- one of these
sessions sleeps on the other one to finish while holding the tuple lock,
and the other one sleeps on the tuple lock.

Per trouble report from Andrew Sackville-West in

http://www.postgresql.org/message-id/20140731233051.GN17765@andrew-ThinkPad-X230

His scenario can be simplified down to a relatively simple
isolationtester spec file which I don't include in this commit; the
reason is that the current isolationtester is not able to deal with more
than one blocked session concurrently and it blocks instead of raising
the expected deadlock.  In the future, if we improve isolationtester, it
would be good to include the spec file in the isolation schedule.  I
posted it in
http://www.postgresql.org/message-id/20141212205254.gc1...@alvh.no-ip.org

Hat tip to Mark Kirkwood, who helped diagnose the trouble.

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training  Services




Re: [GENERAL] Missing table from in INSERT RETURNING

2015-02-18 Thread Rémi Cura
Hey folks,
thanks for the answers.
As you guessed it is just a synthetic example,
(so of course RETURNING some_value / 10  is possible in this case, but not
in general when the function is much more complicated than /10).
Likewise, adding a column is just not a serious option.

It corresponds to a real need: you have rows associated with an *id*.
Now you want to insert part of each row into a table with a serial field
(*gid*).
Upon insertion the serial field is filled automatically, and you get it back
with a RETURNING statement.
The problem is that you have no way to know which value of *gid* is
associated with which *id*.

The other workaround I found is to call nextval() before the insert, to know
beforehand what the *(gid, id)* association will be.
It is suboptimal and ugly, so I would prefer another solution.
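For what it's worth, that nextval() workaround can at least be done
atomically in a single statement with a data-modifying CTE (available
since 9.1); the table and column names below follow the earlier example,
and pg_get_serial_sequence looks up the serial column's sequence:

```sql
WITH mapping AS (
    SELECT s AS id,   -- the caller's identifier
           nextval(pg_get_serial_sequence('test_insert_returning', 'gid')) AS gid,
           s * 10 AS computing
    FROM generate_series(1, 10) AS s
), ins AS (
    INSERT INTO test_insert_returning (gid, some_value)
    SELECT gid, computing FROM mapping
)
SELECT id, gid FROM mapping;  -- the (id, gid) association RETURNING can't give
```

The CTE is evaluated once, so each row's gid is generated a single time
and used both for the insert and for the mapping returned to the client.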

Cheers,
Rémi-C


2015-02-17 21:33 GMT+01:00 John McKown john.archie.mck...@gmail.com:

 On Tue, Feb 17, 2015 at 2:15 PM, David G Johnston 
 david.g.johns...@gmail.com wrote:

 On Tue, Feb 17, 2015 at 1:08 PM, John McKown [via PostgreSQL] [hidden
 email] wrote:

 I haven't seen any one else reply. I don't know if you've gotten a
 solution. But the following seemed to work for me:


 ​mine apparently got bounced...​



 WITH serie AS (
 select s, s*10 as computing
 from generate_series(1,10) as s
 )
 INSERT INTO test_insert_returning (some_value)
 SELECT computing
 FROM serie
 RETURNING gid, some_value;


 ​or, RETURNING some_value / 10​


 From my reading on the RETURNING phrase, you can only return values from
 the table into which you are doing the INSERT. Not any other table or view
 which might be referenced.


 ​This is correct; and I am curious on the use case that requires
 otherwise.​


 ​A weird one might be where in data available (s) in the CTE is in
 English measure (feet, miles, etc) and the OP wants to insert the
 equivalent Metric value (computing) into the table, but needs to return
 the English value to the caller (why?). He does not want to put the English
 measure into the table itself, just to be able to return it. And not need
 to do a reverse conversion. As I said, just a weird thought. From an
 effervescent fount of weird thoughts - me. Or perhaps what he is storing in
 the table is a one-way hash of a password, and wants to return the
 clear-text password to the caller? Hum, that is almost reasonable. I'll
 need to be more on guard.




 ​David J.​




 --
 He's about as useful as a wax frying pan.

 10 to the 12th power microphones = 1 Megaphone

 Maranatha! 
 John McKown



[GENERAL] #Personal#: Reg: Multiple queries in a transaction

2015-02-18 Thread Medhavi Mahansaria
Hi,

I need to execute a series of queries in a transaction, say Q1, Q2, Q3.

Q1 - success
Q2 - Failed
Q3 - Success

My issue is that after Q2 fails, all the queries that follow give the error
"ERROR: current transaction is aborted, commands ignored until end of
transaction block".

I want to move ahead in the transaction and execute Q3 also even though Q2 
was a failure.

Can you please suggest a way to do so in PostgreSQL 9.3.

Is there a way to turn autocommit off?
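For what it's worth, the usual server-side way to do this is a savepoint before each statement that may fail; a sketch with illustrative statements (table t and the values are made up):

```sql
BEGIN;
INSERT INTO t VALUES (1);     -- Q1 succeeds
SAVEPOINT q2;
INSERT INTO t VALUES (1/0);   -- Q2 fails with division by zero
ROLLBACK TO SAVEPOINT q2;     -- undoes only Q2; the transaction stays open
INSERT INTO t VALUES (3);     -- Q3 runs normally
COMMIT;                       -- Q1 and Q3 are committed
```

In psql specifically, `\set ON_ERROR_ROLLBACK on` wraps every statement in an implicit savepoint and does this automatically.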

Thanks & Regards
Medhavi Mahansaria
Tata Consultancy Services Limited
Unit-VI, No.78, 79 83,
L-Centre, EPIP Industrial Estate,
Whitefield
Bangalore - 560066,Karnataka
India
Ph:- +91 80 67253769
Cell:- +91 9620053040
Mailto: medhavi.mahansa...@tcs.com
Website: http://www.tcs.com

Experience certainty.   IT Services
Business Solutions
Consulting

=-=-=
Notice: The information contained in this e-mail
message and/or attachments to it may contain 
confidential or privileged information. If you are 
not the intended recipient, any dissemination, use, 
review, distribution, printing or copying of the 
information contained in this e-mail message 
and/or attachments to it are strictly prohibited. If 
you have received this communication in error, 
please notify us by reply e-mail or telephone and 
immediately and permanently delete the message 
and any attachments. Thank you




Re: [GENERAL] Fwd: Data corruption after restarting replica

2015-02-18 Thread Tomas Vondra
On 16.2.2015 11:44, Novák, Petr wrote:
 Hello,
 
 sorry for posting to a second list, but as I've received no reply
 there, I'm trying my luck here.
 
 Thanks
 Petr
 
 
 -- Forwarded message --
 From: Novák, Petr nov...@avast.com
 Date: Tue, Feb 10, 2015 at 12:49 PM
 Subject: Data corruption after restarting replica
 To: pgsql-b...@postgresql.org
 
 
 Hi all,
 
 we're experiencing data corruption after switching streamed replica 
 to primary. This is not the first time I've encountered this issue,
 so I'll try to describe it in more detail.
 
 For this particular cluster we have 6 servers in two datacenters (3
 in each). There are two instances running on each server, each with
 its own port and datadir. On the first two servers in each datacenter
 one instance is primary and the other is replica for the primary from
 the other server. Third server holds two offsite replicas from the
 other datacenter (for DR purposes)

So essentially you have three servers in each datacenter, configured
like this:

server A

instance A1 (primary)
instance A2 (standby for B1)

server B

instance B1 (primary)
instance B2 (standby for A1)

server C (in the other datacenter)
--
instance C1 (standby for A1)
instance C2 (standby for B1)

So if A or B fails, you can switch to the replica in the same datacenter,
and if the whole datacenter fails then you have C in the other one.

Correct?

 Each replica was set up by taking pg_basebackup from primary 
 (pg_basebackup -h hostname -p 5430 -D /data2/basebackup -P -v -U 
 user -x -c fast). Then directories from initdb were replaced with 
 the ones from basebackup (only the configuration files remained) and 
 the replica started and was successfully connected to primary. It
 was running with no problem keeping up with the primary. We were 
 experiencing some connection problem between the two datacenters,
 but replication didn't break.

OK, that sounds about right. How long was the replication running before
the failover? Have you tried running some queries on the standby?

BTW are you running async or sync replication (I guess async)?

 Then we needed to take one datacenter offline due to hardware 
 maintenance. So I've switched the applications down, verified that
 no more clients were connected to primary, then shut the primary down
 and restarted replica without recovery.conf and the application were 
 started using the new db with no problem. Other replica even 
 successfully reconnected to this new primary.

Other replica? If I understand your deployment right, and you've
switched to the other datacenter, all the related replicas (original
master + first standby) were in the other datacenter (now offline). So
what replica reconnected to the newly promoted primary?

Restart after removing recovery.conf is a bit cumbersome because of the
restart (pg_ctl promote does not need that IIRC), but it's a legal way
to do the promote.
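For comparison, the promote path avoids the restart entirely (the data directory path here is illustrative):

```sh
# Promote a running standby in place; no recovery.conf edit or
# server restart needed.
pg_ctl promote -D /data2/instance1
```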

 A few hours after the switch, lines appeared in the server log (which
 didn't appear before), indicating corruption:
 
 ERROR: index account_username_key contains unexpected zero page at 
 block 1112135 ERROR: right sibling's left-link doesn't match: block
 476354 links to 1062443 instead of expected 250322 in index
 account_pkey
 
 ..and many more reporting corruption in several other indexes.
 
 The issue was resolved by creating new indexes and dropping the 
 affected ones, although there were already some duplicities in the 
 data, that has to be resolved, as some of the indexes were unique.
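For the archives, the rebuild-then-drop fix described above can be done without blocking readers; names here are illustrative, and note that if the bad index backs a UNIQUE or PRIMARY KEY constraint you have to drop/re-add the constraint rather than the bare index:

```sql
-- Build a clean replacement without taking locks that block writes;
-- any duplicate rows must be resolved first or this will fail.
CREATE UNIQUE INDEX CONCURRENTLY account_username_key_new
    ON account (username);
-- Swap it in for the corrupted index.
DROP INDEX account_username_key;
ALTER INDEX account_username_key_new RENAME TO account_username_key;
```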

So the machine was running for a few hours just fine, and then something
happened and these errors started occurring?

Can you check whether the table was vacuumed recently (before the errors
started to appear) or what else interesting happened?

 This particular case uses Postgres 9.1.14 on both primary and 
 replica. But I've experienced similar behavior on 9.2.9. OS Centos 
 6.6 in all cases. This may mean, that there can be something wrong 
 with our configuration or the replication setup steps, but I've set 
 up another instance using the same steps with no problem.

How different are the systems? I see you're running 6.6 on both ends,
but maybe there's some tiny difference? One common issue we see from
time to time is minor locale differences, affecting the indexes.
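A quick way to compare is to run this on both the primary and the standby; the values must match exactly for the indexes to be portable between them:

```sql
-- Per-database collation and ctype settings.
SELECT datname, datcollate, datctype FROM pg_database;
-- Cluster-wide default collation.
SHOW lc_collate;
```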

 Fsync related setting are at their defaults. Data directories are on 
 RAID10 arrays, with BBUs. Filesystem is ext4 mounted with nobarrier 
 option.

Good ;-)

 Database is fairly large ~120GB with several 50mil+ tables, lots of 
 indexes and FK constraints. It is mostly queried, 
 updates/inserts/deletes are only several rows/s.

So it was queried for a few hours? Any idea if it was using the indexes,
or if it was choosing different execution plans?

-- 
Tomas Vondra   http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-general mailing 

Re: [GENERAL] Starting new cluster from base backup

2015-02-18 Thread Adrian Klaver

On 02/18/2015 01:48 PM, Guillaume Drolet wrote:








So if I understand correctly you have:

1) On source machine a directory E:\Data\Database.
2) On the source machine in Postgres you have a created a tablespace
that points at E:\Data\Database.
3) On destination machine you have an E:\ drive also.

You're correct

Then have you tried:

1) Create \Data\Database directory under E:\ on the destination machine.

2) Do the pg_basebackup.


I'm not sure I understand why, at this moment in the sequence of
operations, I would create \Data\Database under E:\ on the destination
machine.
pg_basebackup, when run on the source DB on the source machine, has no
idea about the destination machine. Maybe you're confused with the F:\
drive, which is the drive on which I tried to save my base backup with
the command:


I am confused, but not about the F:\ drive :). My confusion was over where 
the error "directory E:\Data\Database exists but is not empty" occurred. I 
just ran a test. The issue is that in plain mode pg_basebackup does the 
binary copy to F:\208376PT\db, which is fine. The problem is that it can 
still see E:\Data\Database on the source machine, so when it tries to 
set up the copy of the tablespace it sees that the directory is not 
empty and stops. So the only way this is going to work in 9.3 with plain 
mode is to copy not to F:\ but to the destination machine directly. I am 
guessing that is not possible? It works in the tar case because the 
tablespace directory gets renamed.
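Side note: 9.4's pg_basebackup adds a --tablespace-mapping option that relocates a tablespace during a plain-format backup, which would avoid exactly this collision. A sketch using the paths from this thread (the F:\208376PT\tblspc target is made up):

```sh
pg_basebackup -D F:\208376PT\db -X stream -U postgres -P --tablespace-mapping=E:\Data\Database=F:\208376PT\tblspc
```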




pg_basebackup -D F:\208376PT\db -X stream -l 208376PT17022015 -U
postgres -P

This drive (F:\) is not the destination machine, it's a swappable disk I
use to move my base backup from one machine (the source) to another (the
destination).





I have not done it and I see the "Although not recommended..." part
above, so I would say that is a last resort solution.


I confirm this method works. I've done it in the past using the steps in
this blog and its comments:

http://www.databasesoup.com/2013/11/moving-tablespaces.html


Interesting post, I missed it the first time around. Seems worth a try.








--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general