[GENERAL] Audit database to recreate former states using JSON functions

2014-01-14 Thread Felix Kunde
Hello,

last December I developed a few PL/pgSQL functions to log all changes in my
database and recreate former database states. I used row_to_json to log and
json_populate_recordset to reproduce my tables. I think it's a cool feature and
I'd like to share and discuss it with you. I would be very happy to receive
some feedback. I'm also interested in other versioning approaches, if there are any.

You can find the repository here: https://github.com/FxKu/audit
I've written all the relevant stuff into the README.

At last, I really like to thank Hans-Jürgen Schönig (http://www.cybertec.at/)
for giving me the advice to use JSON functions. This made the whole
development much easier.

Have a great day.

Felix
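
For readers who want a feel for the approach, here is a minimal sketch of
row_to_json-based change logging. It is an illustration only, not the code from
the linked repository; the table, function, and trigger names are assumptions.

CREATE TABLE audit_log (
    event_time  timestamptz NOT NULL DEFAULT now(),
    table_name  text        NOT NULL,
    operation   text        NOT NULL,
    row_data    json
);

CREATE OR REPLACE FUNCTION log_row_change() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'DELETE' THEN
        -- keep the old row image for deletes
        INSERT INTO audit_log (table_name, operation, row_data)
        VALUES (TG_TABLE_NAME, TG_OP, row_to_json(OLD));
        RETURN OLD;
    ELSE
        -- keep the new row image for inserts and updates
        INSERT INTO audit_log (table_name, operation, row_data)
        VALUES (TG_TABLE_NAME, TG_OP, row_to_json(NEW));
        RETURN NEW;
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER my_table_audit
    AFTER INSERT OR UPDATE OR DELETE ON my_table
    FOR EACH ROW EXECUTE PROCEDURE log_row_change();

-- logged rows can later be turned back into typed rows, e.g.:
-- SELECT * FROM json_populate_recordset(null::my_table,
--     (SELECT json_agg(row_data) FROM audit_log WHERE table_name = 'my_table'));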







[GENERAL] Postgres usage of session identifiers for security

2014-01-14 Thread Keith Minsel
Can anyone describe how Postgres generates session identifiers and how
Postgres uses a session identifier for community action protection?

More specifically, will Postgres allow a session identifier to be
non-system generated?  How does Postgres respond when presented with a
non-system generated session identifier?

My security team is asking and I can't find any documentation on this.

Thanks,
Keith


[GENERAL] GSSAPI auth issue with windows 7 client, postgresql 9.2.3 linux server

2014-01-14 Thread jaime soler
Hi list,

I am able to successfully authenticate a Windows Server AD user against
PostgreSQL 9.2.3 from Linux clients, but it doesn't work from a
Windows client.
First I log on to Windows 7 using an Active Directory user, then I try
to log in to the PostgreSQL 9.2.3 server with psql:

psql -h hostname -p 5444

psql: SSPI continuation error: The encryption type requested is not
supported by the KDC. (80090342)

postgresql.log :

LOG:  connection received: host=172.XXX.XXX.XXX port=61877
FATAL:  GSSAPI authentication failed for user 'userXX'


userXX exists in our PostgreSQL server, and the pg_hba.conf entry applied to
the 172.XXX.XXX.XXX subnet is:

host    all    all    172.0.0.0/8    gss

Our postgresql.conf uses a keytab:
krb_server_keyfile = 'postgres.keytab'


And I have generated keytab with this ktpass command:

ktpass -out postgres.keytab -princ
postgres/hostname@domain -mapUser postgres -pass
XXX -crypto DES-CBC-MD5

The postgres user exists in the Active Directory and it has an SPN
defined:

C:\Users\Administrator> setspn -S postgres/hostname
 domain\postgres

Checking domain DC=domain
Registering ServicePrincipalNames for CN=postgres,CN=Users,DC=domain 
postgres/hostname

Updated object


If I log in from a Linux client to the Linux server, there is no problem:

[root@hostnane datos]# su - userXX
[userXX@hostname ~]$ klist
Ticket cache: FILE:/tmp/krb5cc_503
Default principal: userXX@domain
 
Valid starting     Expires            Service principal
12/11/13 08:42:04  12/11/13 18:42:09  krbtgt/domain@domain
renew until 12/18/13 08:42:04
12/11/13 08:42:43  12/11/13 18:42:09  postgres/hostname@domain
renew until 12/18/13 08:42:04
[userXX@u2vbddpg ~]$ psql -h 172.XX.XX.XX
psql (9.2.1.3)
Type "help" for help.
 
edb=
 
 
Thanks 
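
For comparison, ktpass can also generate the keytab with an AES encryption type
instead of DES (a hedged sketch only; whether this resolves the SSPI 80090342
error in this particular setup is not confirmed in this thread, and the
principal, account, and password values are placeholders):

rem same mapping as above, but requesting AES256 instead of DES-CBC-MD5
ktpass -out postgres.keytab -princ postgres/hostname@domain -mapUser postgres ^
    -pass XXX -ptype KRB5_NT_PRINCIPAL -crypto AES256-SHA1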






Re: [GENERAL] PG 924, Windows 2012, error code 487

2014-01-14 Thread Abraham, Danny
Hi,

Does anyone know whether this link has an open bug? Maybe a fix?

http://www.postgresql.org/message-id/5046caeb.4010...@grammatech.com

Thanks

Danny





Re: [GENERAL] pg_restore - table restoration

2014-01-14 Thread Day, David
Adrian,

Thanks for confirming my observations.

My hope was that I would be able to create one archive file with pg_dump -Fc,
which at a future time could be used to do either a total restoration
or partial restorations via options of pg_restore; i.e. not to have to create
additional specialized pg_dump files for each recovery case.

I had, as you suggested, observed the stdout of my test cases.

a.) pg_restore -c -t tbl1 -t tbl2  archive_file
There are no SQL CONSTRAINT or TRIGGER statements related to these
2 tables.
When I add -d my_db it confirms that the tables are restored,
but with no constraints and no triggers.

b.) pg_restore -c -v -a -t tbl2 -t tbl2 archive_file
As previously noted, I get verbose indication that the table data is being
dropped.
However, there are no SQL commands that would cause that ( DELETE or TRUNCATE ).
The attempt ends up failing as the table ends up with duplicated data.
This ( -a -c ) would be a nice combination for pg_restore, as pg_dump, as I recall,
does not allow for that combination.



Rgds


Dave


-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@gmail.com] 
Sent: Monday, January 13, 2014 7:36 PM
To: Day, David; pgsql-general@postgresql.org
Subject: Re: [GENERAL] pg_restore - table restoration

On 01/13/2014 01:43 PM, Day, David wrote:
 Hi,

 This is sort of a continuation of problems I was working on last 
 week

 with selective restorations of an archive file at the schema or table 
 level.  ( V9.3)

 Given that I dumped the entire database ( pg_dump -Fc  my_db -f 
 archive_file )

 When I pg_restore an entire schema ( -n ) everything is wonderful.

 If I try to attempt two tables in one of the schemas I encounter problems.

 I get a success of sorts with these option variations:

 pg_restore -c  -t tbl1 -t tbl2 -U username -d my_db  archive_file

 In this case the tables are recreated with data but all the original
   constraints for these tables are missing

 As are triggers that are associated with the tables.   I guess I can
 understand this.


 Is this a bug or a mis-understanding on my part?


Oops, turns out I did not exactly replicate what you were doing and my 
previous answer is wrong.

What I found.

When I do this:

/usr/local/pgsql93/bin/pg_dump -Fc  -U hplc_admin -p 5452 -f hplc.out hplc

and then this:

/usr/local/pgsql93/bin/pg_restore -c -t student_sessions -t student_attendance 
-f hplc_table.sql hplc.out

I see what you see, no constraints or triggers in the SQL.

When I do what I originally posted about:

/usr/local/pgsql93/bin/pg_dump -Fc -c -t student_sessions -t student_attendance 
-U hplc_admin -p 5452 -f hplc_tables_2.out hplc

and then this:

/usr/local/pgsql93/bin/pg_restore -c -f hplc_table_2.sql hplc_tables_2.out

I do see the constraints and triggers in the output.

I was under the impression that the result should be the same for both. 
So I would consider this puzzling at the least and a bug at the most. 
Someone else will have to chime in on what is really happening because I do not 
know and I see nothing in the docs to suggest the behavior should be different.



 Regards

 Dave Day



--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] Postgres usage of session identifiers for security

2014-01-14 Thread Marti Raudsepp
On Tue, Jan 14, 2014 at 2:36 PM, Keith Minsel mins...@gmail.com wrote:
 Can anyone describe how Postgres generates session identifiers and how
 Postgres uses a session identifier for community action protection?

PostgreSQL is a database system; it does not generate session identifiers.

You probably have an application that uses PostgreSQL; you should look
into that application instead.

Regards,
Marti




Re: [GENERAL] PG 924, Windows 2012, error code 487

2014-01-14 Thread Adrian Klaver

On 01/14/2014 04:54 AM, Abraham, Danny wrote:

Hi,

Does anyone know whether this link has an open bug? Maybe a fix?

http://www.postgresql.org/message-id/5046caeb.4010...@grammatech.com


If it were me I would file a bug report here:


http://www.postgresql.org/support/submitbug/


In the report reference Bug #5578, the link above and this thread.




Thanks

Danny




--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] pg_restore - table restoration

2014-01-14 Thread Adrian Klaver

On 01/14/2014 06:06 AM, Day, David wrote:

Adrian,

Thanks for confirming my observations.

My hope was that I would be able to create one archive file with pg_dump -Fc,
which at a future time could be used to do either a total restoration
or partial restorations via options of pg_restore; i.e. not to have to create
additional specialized pg_dump files for each recovery case.

I had as you suggested observed stdout of my test cases.


Actually my suggestion was to use -f, which captures the restore into a 
file. This creates something you can look at at leisure. :)




a.) pg_restore -c -t tbl1 -t tbl2  archive_file
There are no SQL CONSTRAINT or TRIGGER statements related to these
2 tables.
When I add the -d my_db it confirms that table is restored,
But with no constraints and no triggers.

b.) pg_restore -c -v -a -t tbl2 -t tbl2 archive_file
As previously noted I get verbose indication that the table data is being 
dropped.
However there are no SQL commands that would cause that ( DELETE or TRUNCATE )


Yes, it is outputting dropping TABLE DATA, where TABLE DATA is a command 
I am not familiar with and which does not show up in the dump file.




The attempt ends up failing as the table ends up with duplicated data.
This ( -a -c ) would be a nice combination for pg_restore, as pg_dump, as I recall,
does not allow for that combination.


From what I see it does not actually 'drop' the table data, so you are 
just doing a COPY over existing data.






Rgds


Dave





--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] PG 924, Windows 2012, error code 487

2014-01-14 Thread Adrian Klaver

On 01/14/2014 04:54 AM, Abraham, Danny wrote:

Hi,

Does anyone know whether this link has an open bug? Maybe a fix?

http://www.postgresql.org/message-id/5046caeb.4010...@grammatech.com



In addition to my suggestion to file a bug report, I came up with another idea.

In a previous post you said:

 Memory problems within Windows (Error 487) used to be common in 8.3.7 
... but got solved in 8.3.15.


Was that something you just noticed or was the move to 8.3.15 a 
deliberate act based on some relevant information?


If so, what was fixed in 8.3.15 that helped with the error?

If that is known, it may help narrow the list of suspects.



Thanks

Danny




--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] question about checksum in 9.3

2014-01-14 Thread Mike Broers
Thanks for the responses, that clarifies the checksum feature for me.

FWIW, my pgbench tests between a 9.2 server and a 9.3 server with checksums
showed very similar performance characteristics and system resource
utilization.  I'm going to set up another load test with our actual
application to see if that reveals any noticeable performance difference.

Thanks again
Mike


On Mon, Jan 13, 2014 at 7:11 PM, Michael Paquier
michael.paqu...@gmail.comwrote:

 On Tue, Jan 14, 2014 at 1:50 AM, Mike Broers mbro...@gmail.com wrote:
  Hello, I am in the process of planning a 9.3 migration of postgres and I am
  curious about the checksum features available.  In my test 9.3 instance it
  seemed like this feature provides a log entry of the exact database/oid of
  the corrupt object when it is accessed, but not much else.  I can't find
  much documentation on anything else this feature provides.
 A few things:
 - The only way to know if a server is using data checksums is to use
 pg_controldata.
 - Be aware as well of the potential performance impact on your CPU,
 checksums are checked each time a page is read, and recalculated each
 time a page is updated.
 - ignore_checksum_failure can be used to ignore failures. Don't use
 that on a production system.

  Is there a built-in method of scanning the server to check for corruption or
  will I have to wait for a corrupt object to be accessed to see the log
  entry?
 You can also manually access tables with, for example, a
 sequential scan to check if blocks are broken or not.

  Is there a relation that stores last checksum status or anyway of
  reporting on what objects are identified by postgres as corrupt or not
  corrupt?
 No, but you could build one with a background worker that scans
 relation pages and registers the failing blocks.

  Are there any other features of the checksum I am missing besides the log
  entry?
 9.4 has a new GUC parameter called data_checksums that allows you to
 check from a psql client whether checksums are used on a server.

 Regards,
 --
 Michael
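
A minimal sketch of the manual check Michael describes, i.e. forcing full reads
of a table so that any page with a bad checksum is reported (the table name is
a placeholder, and note that this reads the heap only, not its indexes):

-- discourage index-based plans so every heap page of the table is read
SET enable_indexscan = off;
SET enable_indexonlyscan = off;
SET enable_bitmapscan = off;
SELECT count(*) FROM my_table;  -- a checksum failure on any page is reported here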



Re: [GENERAL] question about checksum in 9.3

2014-01-14 Thread Torsten Förtsch
On 13/01/14 17:50, Mike Broers wrote:
 Is there a built-in method of scanning the server to check for
 corruption or will I have to wait for a corrupt object to be accessed to
 see the log entry?

This presentation:

  http://www.youtube.com/watch?v=TovZ0lb16-Q

suggests pg_filedump.

Torsten




[GENERAL] Correct query to check streaming replication lag

2014-01-14 Thread Granthana Biswas
Hello All,

Can anyone please tell me which of the following is the correct replication
lag query to find streaming replication lag in seconds?

1. SELECT extract(seconds from (now() - pg_last_xact_replay_timestamp()))
AS time_lag;

2. SELECT CASE WHEN pg_last_xlog_receive_location() =
pg_last_xlog_replay_location() THEN 0 ELSE EXTRACT (EPOCH FROM now() -
pg_last_xact_replay_timestamp()) END AS log_delay;


Regards,
Granthana
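
For comparison, lag is also often measured in bytes on the primary rather than
in seconds on the standby (a sketch using 9.2+ functions; which metric is
appropriate depends on what is being monitored):

-- run on the primary: how far each standby's replay position is behind, in bytes
SELECT client_addr,
       pg_xlog_location_diff(pg_current_xlog_location(), replay_location) AS replay_lag_bytes
FROM pg_stat_replication;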


[GENERAL] non-zero xmax yet visible

2014-01-14 Thread Ming Li
I'm a little bit confused by the meaning of xmax.

The documentation at
http://www.postgresql.org/docs/current/static/ddl-system-columns.html
says
xmax

The identity (transaction ID) of the deleting transaction, or zero for
an undeleted row version. It is possible for this column to be nonzero
in a visible row version. That usually indicates that the deleting
transaction hasn't committed yet, or that an attempted deletion was
rolled back.

According to this, it seems a committed change should result in an
xmax value of zero. But a huge number of rows in our database have
non-zero xmax values and are still visible.

I did the following experiment with 2 sessions.

Session 1

=> create table test_data (id int, value int);
=> insert into test_data(id) values(1);
=> commit;
=> update test_data set value = 1 where id = 1;
=> select txid_current();
 txid_current
--------------
    362938838

Session 2

=> select xmin, xmax, id, value from test_data;
   xmin    |   xmax    | id | value
-----------+-----------+----+-------
 362938803 | 362938838 |  1 |

=> update test_data set value = 2 where id = 1;

Session 1

=> commit;

Session 2

=> select txid_current();
 txid_current
--------------
    362938861

=> commit;
=> select xmin, xmax, id, value from test_data;
   xmin    |   xmax    | id | value
-----------+-----------+----+-------
 362938861 | 362938861 |  1 | 2


So in this case, xmax is equal to xmin. I've also seen cases where
xmax is larger than xmin and the row is visible.

Is this an expected behavior? How shall we interpret xmax in these cases?




[GENERAL] HeadlineWordEntry bit fields

2014-01-14 Thread Zev Benjamin

Hi,

Could anyone explain the meaning of the bit fields in struct 
HeadlineWordEntry?  Specifically, I'm not completely sure about 
selected, in, replace, repeated, and skip.



Thanks,
Zev




[GENERAL] PGSYSCONFDIR?

2014-01-14 Thread John Sutton
Hi there

Having spent about 2 hours trying to solve a simple problem, I think it might 
be worthwhile to record my efforts.  Perhaps someone can point out how 
extremely silly I have been… or is the documentation lacking?

My original question was: where is the system-wide psqlrc file located?

Some material on the web suggests that this is ~postgres/.psqlrc but this not 
true, this is just the postgres user’s user-specific config file.

I tried putting it alongside pg_hba.conf etc but that didn’t work.

The psqlrc.sample file contains the wording “Copy this to your sysconf 
directory (typically /usr/local/pgsql/etc) …”  but that directory doesn’t exist 
on either of my target systems!  (I’m using postgres 9.1 on Ubuntu and Mac OS 
X.)

As a last resort (which surely shouldn’t be necessary) on the Ubuntu system I 
did:

strings /usr/bin/psql | grep -i sysconf

$ENV{'PGSYSCONFDIR'} = '/etc/postgresql-common' if !$ENV{'PGSYSCONFDIR'};

So that’s where it needs to be: /etc/postgresql-common/psqlrc

I’ve still no clue for Mac OS X however, since the same trick only finds a 
placeholder :( :

strings /Applications/Postgres.app/Contents/MacOS/bin/psql | grep -i sysconf

PGSYSCONFDIR
PGSYSCONFDIR=%s

Hope this saves somebody some time.

--
John Sutton






Re: [GENERAL] PGSYSCONFDIR?

2014-01-14 Thread Alvaro Herrera
John Sutton escribió:

 As a last resort (which surely shouldn’t be necessary) on the Ubuntu system I 
 did:
 
 strings /usr/bin/psql | grep -i sysconf
 
 $ENV{'PGSYSCONFDIR'} = '/etc/postgresql-common' if !$ENV{'PGSYSCONFDIR'};
 
 So that’s where it needs to be: /etc/postgresql-common/psqlrc

Meh.  /usr/bin/psql in Debian/Ubuntu is a shell script provided by the
packaging.

 I’ve still no clue for Mac OS X however, since the same trick only finds a 
 placeholder :( :
 
 strings /Applications/Postgres.app/Contents/MacOS/bin/psql | grep -i sysconf
 
 PGSYSCONFDIR
 PGSYSCONFDIR=%s

This is probably what you would get if you had stringied the binary in
Debian/Ubuntu, too, instead of the wrapper script.

I think the way to get the PGSYSCONFDIR would be to use
  pg_config --sysconfdir

If you don't have pg_config, ...  Tough.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [GENERAL] PGSYSCONFDIR?

2014-01-14 Thread Tom Lane
John Sutton johnericsut...@gmail.com writes:
 Hi there
 Having spent about 2 hours trying to solve a simple problem, I think it might 
 be worthwhile to record my efforts.  Perhaps someone can point out how 
 extremely silly I have been… or is the documentation lacking?

 My original question was: where is the system-wide psqlrc file located?

The easy way to find that out is pg_config --sysconfdir.  I agree that
the psql man page ought to mention that, and fails to.  Will see about
fixing that...

regards, tom lane




Re: [GENERAL] PGSYSCONFDIR?

2014-01-14 Thread Steve Atkins

On Jan 14, 2014, at 10:58 AM, John Sutton johnericsut...@gmail.com wrote:

 Hi there
 
 Having spent about 2 hours trying to solve a simple problem, I think it might 
 be worthwhile to record my efforts.  Perhaps someone can point out how 
 extremely silly I have been… or is the documentation lacking?
 
 My original question was: where is the system-wide psqlrc file located?

The default is a compile-time configuration option.

You can get that for your installation using “pg_config --sysconfdir”.

The PGSYSCONFDIR environment variable can override it if it’s set. Like a lot 
of client configuration settings it’s not really handled by the client, but by 
libpq. That’s good; makes for a nice consistent UI. What’s less good is that it 
means that they’re documented in the libpq docs - 
http://www.postgresql.org/docs/current/static/libpq-envars.html

 
 Some material on the web suggests that this is ~postgres/.psqlrc but this not 
 true, this is just the postgres user’s user-specific config file.
 
 I tried putting it alongside pg_hba.conf etc but that didn’t work.
 
 The psqlrc.sample file contains the wording “Copy this to your sysconf 
 directory (typically /usr/local/pqsql/etc) …”  but that directory doesn’t 
 exist on either of my target systems!  (I’m using postgres 9.1 on Ubuntu and 
 Mac OS X.)
 
 As a last resort (which surely shouldn’t be necessary) on the Ubuntu system I 
 did:
 
 strings /usr/bin/psql | grep -i sysconf
 
 $ENV{'PGSYSCONFDIR'} = '/etc/postgresql-common' if !$ENV{'PGSYSCONFDIR'};

On Ubuntu that’s not really psql, it’s a shell script wrapper that runs the 
real psql - and it looks like they’re overriding whatever the built-in default 
is in their wrapper.

 
 So that’s where it needs to be: /etc/postgresql-common/psqlrc
 
 I’ve still no clue for Mac OS X however, since the same trick only finds a 
 placeholder :( :
 
 strings /Applications/Postgres.app/Contents/MacOS/bin/psql | grep -i sysconf

satsuke:shared (develop)$ pg_config --sysconfdir
/Applications/Postgres.app/Contents/MacOS/etc

:)

Cheers,
  Steve
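
Pulling the suggestions in this thread together, a minimal sketch for installing
a system-wide psqlrc (paths vary by installation and the copy source is just an
example):

# ask the installation where its sysconf directory is
sysconfdir=$(pg_config --sysconfdir)
mkdir -p "$sysconfdir"
cp psqlrc.sample "$sysconfdir/psqlrc"

# alternatively, point libpq-based clients at a directory of your choosing
export PGSYSCONFDIR=/etc/postgresql-common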





[GENERAL] pg_basebackup failing

2014-01-14 Thread Alan Nilsson
Could someone give me some insight to the following error message:

[mqsql06:/Volumes/SQL_Set] _postgres% pg_basebackup --user=replicate 
--host=mqsql03 -xP -Fp --pgdata=pgsql
19439890/65873894 kB (29%), 1/1 tablespace
pg_basebackup: could not get transaction log end position from server: ERROR:  
archive member base/17040/25491 too large for tar format

I find the error message confusing since I have the format explicitly set to 
plain.  Is there a solution/workaround?  Can I break up the referenced file 
somehow?  

I'd rather use pg_basebackup than the start-backup-mode / rsync / stop-backup-mode
dance.

thanks
alan




Re: [GENERAL] pg_restore - table restoration

2014-01-14 Thread Day, David
I note, for the observed failure of pg_restore -c -a -t, that
I have worked around it by performing a
TRUNCATE operation on the tables prior to invoking
pg_restore with a simpler -a -t option combination.
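
In concrete terms the workaround amounts to something like this (a sketch; the
table, database, and archive names are placeholders):

psql -d my_db -c "TRUNCATE tbl1, tbl2;"                # clear the tables first
pg_restore -a -t tbl1 -t tbl2 -d my_db archive_file    # then restore the data only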

If this matter needs to be reposted as bug or needs
further action/information from me,  please advise.

Thanks

Dave

-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@gmail.com] 
Sent: Tuesday, January 14, 2014 10:33 AM
To: Day, David; pgsql-general@postgresql.org
Subject: Re: [GENERAL] pg_restore - table restoration

On 01/14/2014 06:06 AM, Day, David wrote:
 Adrian,

 Thanks for confirming my observations.

 My hope was that I would be able to create one archive file with 
 pg_dump -Fc, which at a future time could be used to do either a total 
 restoration or partial restorations via options of pg_restore; i.e. not 
 to have to create additional specialized pg_dump files for each recovery case.

 I had as you suggested observed stdout of my test cases.

Actually my suggestion was to use -f which captures the restore into a file. 
This creates something you can look at leisure:)


 a.) pg_restore -c -t tbl1 -t tbl2  archive_file There are no SQL 
 CONSTRAINT or TRIGGER statements related to these
 2 tables.
 When I add the -d my_db it confirms that table is restored, But with 
 no constraints and no triggers.

 b.) pg_restore -c -v -a -t tbl2 -t tbl2 archive_file As previously 
 noted I get verbose indication that the table data is being dropped.
 However there are no SQL commands that would cause that ( DELETE or 
 TRUNCATE )

Yes, it is outputting dropping TABLE DATA, where TABLE DATA is a command I am 
not familiar with and which does not show up in the dump file.


 The attempt ends up failing as the table ends up with duplicated data.
 This ( -a -c ) would be a nice combination for pg_restore, as pg_dump, as I 
 recall, does not allow for that combination.

 From what I see it does not actually 'drop' the table data, so you are 
just doing a COPY over existing data.




 Rgds


 Dave




-- 
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] pg_restore - table restoration

2014-01-14 Thread Adrian Klaver

On 01/14/2014 02:29 PM, Day, David wrote:

I note for the observed failure of pg_restore -c -a -t,
I have worked around this by performing a
Truncate operation on the tables prior to invoking
pg_restore with a simpler -a -t option combination.

If this matter needs to be reposted as bug or needs
further action/information from me,  please advise.


Well, the bug, if there is one, would to me be the dropping TABLE DATA below:

aklaver@panda:~ /usr/local/pgsql93/bin/pg_restore -c -v -a -t 
student_sessions -t student_attendance -d hplc -U hplc_admin -p 5452 
hplc.out

pg_restore: connecting to database for restore
pg_restore: dropping TABLE DATA student_sessions
pg_restore: dropping TABLE DATA student_attendance
pg_restore: processing data for table student_attendance
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 3145; 0 21204 TABLE 
DATA student_attendance hplc_admin
pg_restore: [archiver (db)] COPY failed for table student_attendance: 
ERROR:  duplicate key value violates unique constraint 
student_attendance_pkey

DETAIL:  Key (attendance_id)=(1) already exists.

Seems to me pg_restore should either do what it reports, i.e. drop the data, or it 
should not report dropping TABLE DATA, because that is not actually happening.


At this point this has reached the limits of my understanding. I would say 
file a bug report, if for no other reason than to get a clarification on 
what the behavior in this situation is meant to be.





Thanks

Dave





--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] non-zero xmax yet visible

2014-01-14 Thread Michael Paquier
On Wed, Jan 15, 2014 at 1:26 AM, Ming Li mli89...@gmail.com wrote:
 I'm a little bit confused by the meaning of xmax.

 The documentation at
 http://www.postgresql.org/docs/current/static/ddl-system-columns.html
 says
 xmax

 The identity (transaction ID) of the deleting transaction, or zero for
 an undeleted row version. It is possible for this column to be nonzero
 in a visible row version. That usually indicates that the deleting
 transaction hasn't committed yet, or that an attempted deletion was
 rolled back.

 According to this, it seems a committed change should result in an
 xmax value of zero. But a huge number of rows in our database have
 non-zero xmax values and are still visible.
Not exactly; this is only the case for a tuple that has only been
inserted in a transaction. To put it in simple words, an inserted row
will have its xmin set to the current transaction ID with xmax set to
0, and a deleted row will have its xmax updated to the transaction ID
of the transaction that removed it.
An updated row is the combination of a deletion and an insertion.

The data visible from other sessions depends as well on the isolation level:
http://www.postgresql.org/docs/current/static/transaction-iso.html
The default, read committed, means that the query will see data
committed by other sessions before the *query* began.

 I did the following experiment with 2 sessions.

 Session 1

 = create table test_data (id int, value int);
 = insert into test_data(id) values(1);
 = commit;
 = update test_data set value = 1 where id = 1;
 = select txid_current();
  txid_current
 --
 362938838

 Session 2

 = select xmin, xmax, id, value from test_data;
xmin|   xmax| id | value
 ---+---++---
  362938803 | 362938838 |  1 |
This session is using a transaction ID between 362938803 and
362938838, explaining why this row version is the one visible. You are also not
giving all the information about session 2; a transaction began there as
well.

 = update test_data set value = 2 where id = 1;

 Session 1

 = commit;

 Session 2

 = select txid_current();
  txid_current
 --
 362938861

 = commit;
 = select xmin, xmax, id, value from test_data;
xmin|   xmax| id | value
 ---+---++---
  362938861 | 362938861 |  1 | 2
In this case what this session sees is the row version created by its own
update, hence xmin = xmax.

 So in this case, xmax is equal to xmin. I've also seen cases where
 xmax is larger than xmin and the row is visible.
With the isolation level read committed, changes committed by other
sessions during a transaction are visible.

 Is this an expected behavior? How shall we interpret xmax in these cases?
This is part of how MVCC works in Postgres; xmax marks the transaction
up to which this tuple version remains visible to other sessions.
Regards,
-- 
Michael
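
The documented case of a nonzero xmax on a visible row is easy to reproduce with
a rolled-back delete (a sketch against the test_data table from the original
post):

BEGIN;
DELETE FROM test_data WHERE id = 1;  -- stamps the row's xmax with this transaction's ID
ROLLBACK;                            -- the delete is aborted; the row stays visible
SELECT xmin, xmax, id, value FROM test_data WHERE id = 1;
-- xmax still holds the aborted transaction's ID, yet the row version is visible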




Re: [GENERAL] pg_basebackup failing

2014-01-14 Thread Michael Paquier
On Wed, Jan 15, 2014 at 6:53 AM, Alan Nilsson anils...@apple.com wrote:
 Could someone give me some insight to the following error message:

 [mqsql06:/Volumes/SQL_Set] _postgres% pg_basebackup --user=replicate
 --host=mqsql03 -xP -Fp --pgdata=pgsql
 19439890/65873894 kB (29%), 1/1 tablespace
 pg_basebackup: could not get transaction log end position from server:
 ERROR:  archive member base/17040/25491 too large for tar format

 I find the error message confusing since I have the format explicitly set to
 plain.
pg_basebackup always fetches a full backup from the server using tar,
for performance purposes, to reduce the amount of data sent through the
stream. pg_basebackup contains some code to decode this tarball and
generate a plain format from it.
The error you are seeing is triggered because this relation file
exceeds MAX_TAR_MEMBER_FILELEN, i.e. 8GB for a single tar member, which
is also the limit of the tar format itself.

 Is there a solution/workaround?
Don't use pg_basebackup but an external backup solution or a custom
script, or break this relation into several pieces by
partitioning it.
 Can I break up the referenced file somehow?
Partition the parent table.
Regards,
-- 
Michael
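
To see which relation the offending file belongs to, the filenode from the error
message can be looked up in pg_class (a sketch; run it while connected to the
database whose OID is 17040, and on newer releases pg_filenode_relation() can do
the reverse mapping directly):

-- 25491 is the filenode reported by pg_basebackup
SELECT oid::regclass AS relation,
       pg_size_pretty(pg_relation_size(oid::regclass)) AS main_fork_size
FROM pg_class
WHERE relfilenode = 25491;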

