[GENERAL] postgresql93-9.3.5: deadlock when updating parent table expected?

2015-02-17 Thread Dmitry O Litvintsev
Hi, 

I recently updated to postgresql93-9.3.5 (from 9.2.9). I see frequent deadlocks 
when updating the parent table on insert into the child table. There is a foreign 
key constraint between the child table and the parent table. The parent table is 
updated by a trigger on insert into the child table. So pretty much a standard 
thing. Is it expected to deadlock?

A simplified version:

 create table volume (
   id serial primary key, 
   name varchar, 
   counter integer default(0));

 create table file (
   id serial primary key, 
   name varchar, 
   volume bigint, foreign key (volume) references volume(id));
  
 create or replace function update_volume_file_counter() 
 returns "trigger" as $$
 begin
 if (tg_op='INSERT') then
  update volume set counter=counter+1 where volume.id=new.volume;
  return new;
 elseif (tg_op='DELETE') then
  update volume set counter=counter-1 where volume.id=old.volume;
  return old;
 end if;
 end;
 $$
 language plpgsql;

 create trigger update_volume_counter
after insert or delete on file
for each row 
execute procedure update_volume_file_counter();

So a record is inserted into the file table and the counter gets updated in the 
volume table. Nothing fancy.

insert into volume (name) values ('foo');
insert into file(name,volume) values ('f1',(select id from volume where name='foo'));
insert into file(name,volume) values ('f2',(select id from volume where name='foo'));

select * from volume;
 id | name | counter 
----+------+---------
  2 | foo  |       2
(1 row)

delete from file where name='f2';
DELETE 1
billing=# select * from volume;
 id | name | counter 
----+------+---------
  2 | foo  |       1
(1 row)

So, counter increments/decrements as it should.
Works fine. 
But in real life application where multiple threads are inserting into file 
table I see sometimes:

ERROR:  deadlock detected
DETAIL:  Process 24611 waits for ExclusiveLock on tuple (1749,58) of relation 138328329 of database 138328263; blocked by process 25082.
         Process 25082 waits for ShareLock on transaction 14829630; blocked by process 24611.
         Process 24611: update volume set counter=counter+1 where id=new.volume;
         Process 25082: insert into file(name,volume) values('f1',(select id from volume where name='foo'));
HINT:  See server log for query details.

(not a "real" log file excerpt). 

This does not happen all the time; it happens sometimes when multiple threads 
"add" a file to the same volume.

Question - am I doing something wrong, or is this deadlock expected? (I read 
somewhere that when inserting into the child table the corresponding record of 
the parent table is locked.) I did not seem to encounter this issue in 
postgresql 9.2 and 8.4, which I had before.

Should I drop the foreign key constraint?
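
One alternative I am considering, instead of dropping the constraint: have the 
trigger append counter deltas to a side table rather than updating the hot 
volume row (so concurrent inserters never update the same tuple), and fold the 
deltas in separately. A rough sketch (table and function names are mine, untested):

 create table volume_counter_delta (
   volume bigint references volume(id),
   delta integer not null);

 create or replace function record_volume_file_delta()
 returns "trigger" as $$
 begin
 if (tg_op='INSERT') then
  insert into volume_counter_delta values (new.volume, +1);
  return new;
 elsif (tg_op='DELETE') then
  insert into volume_counter_delta values (old.volume, -1);
  return old;
 end if;
 end;
 $$
 language plpgsql;

 -- periodically fold the deltas back into volume.counter
 -- (and delete the rows just summed, in the same transaction):
 -- update volume v
 --    set counter = counter + d.s
 --   from (select volume, sum(delta) as s
 --           from volume_counter_delta
 --          group by volume) d
 --  where v.id = d.volume;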

Thanks,
Dmitry

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] which is better- storing data as array or json?

2015-02-17 Thread David G. Johnston
On Tue, Feb 17, 2015 at 9:00 PM, zach cruise  wrote:

> i can't keep creating tables

Where did you get this idea?

> or adding columns

Of course not...

> every time i need to add a nickname- this happens a lot.

OK...

> so i want to put everything in an array or json.

Those are not the only two solutions...

> remember rows can have different number of nicknames.

Which is why you set up a "one-to-many (optional)" relationship between two
tables; the first table's row (i.e., person) can have many related rows
(i.e., nicknames) in the second table.

Each nickname a person has is represented by a single row in the
"person-nickname" table with whatever attributes you wish to keep track
of.  A new nickname means you add a new row - just like you would add a new
array item to your JSON model.

If you need a truly dynamic representation (i.e., you do not know what
nickname attributes you want to keep track of - like visibility and order
as already shown - or want to allow users to add their own) you'd have to
go with JSON (or add an hstore column to the nickname table); the array
will not get you what you need, because multi-dimensional arrays are not a
solution.

The solution proposed solves the "variable number of nicknames per person"
need that you describe.  If there are other constraints you are dealing
with, you need to list them.

David J.


Re: [GENERAL] which is better- storing data as array or json?

2015-02-17 Thread John R Pierce

On 2/17/2015 8:00 PM, zach cruise wrote:

i can't keep creating tables or adding columns every time i need to
add a nickname- this happens a lot.

so i want to put everything in an array or json.

remember rows can have different number of nicknames.


David was suggesting a join table.

one table with names (id, first name, last name, other attributes...)  
with one row per person


and another table with (id, names_id, nickname, nickname attribute) with 
as many rows as there are nicks and users...


you'd join these to get all the nicks for a given name, or to get all 
the names for a given nickname, etc.
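
a minimal sketch of that layout and the join (names are illustrative):

create table names (
  id serial primary key,
  first_name text,
  last_name text);

create table nicknames (
  id serial primary key,
  names_id integer not null references names(id),
  nickname text not null);

-- all the nicks for a given name:
select n.nickname
  from names p
  join nicknames n on n.names_id = p.id
 where p.first_name = 'tom' and p.last_name = 'jerry';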




--
john r pierce  37N 122W
somewhere on the middle of the left coast



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] which is better- storing data as array or json?

2015-02-17 Thread zach cruise
i can't keep creating tables or adding columns every time i need to
add a nickname- this happens a lot.

so i want to put everything in an array or json.

remember rows can have different number of nicknames.


On 2/17/15, David G Johnston  wrote:
> zach cruise wrote
>> for indexing, accessing, filtering and searching?
>>
>> as simple array-
>> first name   | last name | nicknames
>> tom | jerry | {cat}, {mouse}
>>
>> as multi-dimensional array-
>> first name   | last name | nicknames
>> tom | jerry | {cat, kat}, {mouse, mice}
>>
>> as simple json-
>> first name   | last name | nicknames
>> tom | jerry | {"public": "cat", "private": "mouse"}
>>
>> as multi-nested json-
>> first name   | last name | nicknames
>> tom | jerry  | {"public": {"first": "cat", "second": "kat"},
>> "private": {"first": "mouse", "second": "mice"}}
>
> The choice of proper model depends on how you intend to make use of it.
>
> That said, I'd go with "none of the above" by default.
>
> My first reaction in this scenario would be to create a nicknames table:
>
> [nick_person_id, person_id, nick_name, nick_scope, nick_scope_order]
>
> You could maybe normalize further by having a nickname table with integer
> keys that then end up as FKs on this many-to-many relation.
>
> An array is too complicated given the fact you need to track attributes on
> the nicknames.  You could possibly do an array over a composite type but
> I'm not sure how indexing and searching would fare in that setup.
>
> Why are you even considering storing the information in JSON?  The answer
> to that question would make it more obvious whether that solution is
> viable.  But do you really want any application that makes use of this
> data to have to speak JSON to do so, when the time-tested relational model
> can likely give you everything you need - and probably more?  Even if you
> had to serialize the data to and from JSON, I would say that storing the
> data in that format to avoid the serializing is an instance of premature
> optimization.
>
> David J.
>
>
>
> --
> View this message in context:
> http://postgresql.nabble.com/which-is-better-storing-data-as-array-or-json-tp5838358p5838362.html
> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Revoking access for pg_catalog schema objects

2015-02-17 Thread David G Johnston
Tom Lane wrote:
> Saimon <aimon.slim@...> writes:
>> I want to restrict access for some user to tables and views in the
>> pg_catalog schema.
> 
> The system is not designed to support this, and you should not expect to
> succeed at hiding things this way.

I would expect a note at:

http://www.postgresql.org/docs/9.4/interactive/catalogs.html

indicating what you've noted above.

"Furthermore, the contents of each table, unless noted in the table's
description, is viewable by all users and cannot be revoked.  In particular,
the contents of functions (pg_proc) are visible even if the user has not
been given permissions sufficient to EXECUTE the function."

A similar note should be added to both the pg_proc page and the "CREATE
FUNCTION" SQL command page.  The fact that the contents of a function are
visible even to users unable to execute said function is not something that
would be readily assumed or considered by a novice.

Have I generalized to the point of being incorrect, and/or missed where this
is discussed elsewhere in the documentation?  While not frequent, this seems
to come up often enough to warrant documenting the system's design choices
in this area.

David J.





--
View this message in context: 
http://postgresql.nabble.com/Revoking-access-for-pg-catalog-schema-objects-tp5838337p5838367.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] window function ordering not working as expected

2015-02-17 Thread Lonni J Friedman
On Tue, Feb 17, 2015 at 4:18 PM, Tom Lane  wrote:
> Lonni J Friedman  writes:
>> I'm interested in seeing:
>> * the date for the most recent result
>> * test name (identifier)
>> * most recent result (decimal value)
>> * the worst (lowest decimal value) test result from the past 21 days
>> * the date which corresponds with the worst test result from the past 21 days
>> * the 2nd worst (2nd lowest decimal value) test result
>> ...
>> The problem that I'm seeing is in the prv_score column. It should show
>> a value of 0.6, which corresponds with 2015-02-13, however instead its
>> returning 0.7. I thought by ordering by metrics->>'PT TWBR' I'd always
>> be sorting by the scores, and as a result, the lead(metrics->>'PT
>> TWBR', 1) would give me the next greatest value of the score. Thus my
>> confusion as to why ORDER BY metrics->>'PT TWBR' isn't working as
>> expected.
>
> lead() and lag() retrieve values from rows that are N away from the
> current row in the specified ordering.  That isn't what you want here
> AFAICS.
>
> I think the worst test result would be obtained with
> nth_value(metrics->>'PT TWBR', 1)
> which is equivalent to what you used,
> first_value(metrics->>'PT TWBR')
> while the 2nd worst result would be obtained with
> nth_value(metrics->>'PT TWBR', 2)
>
> However, "worst" and "2nd worst" with this implementation would mean
> "worst and 2nd worst within the partition", which isn't the stated
> goal either, at least not with the partition definition you're using.
>
> What you really want for the "worst in last 21 days" is something like
>
> min(metrics->>'PT TWBR') OVER (
>PARTITION BY ... that same mess you used ...
>ORDER BY tstamp
>RANGE BETWEEN '21 days'::interval PRECEDING AND CURRENT ROW)
>
> However Postgres doesn't implement RANGE x PRECEDING yet.  You could
> get "worst in last 21 observations" easily:
>
> min(metrics->>'PT TWBR') OVER (
>PARTITION BY ... that mess ...
>ORDER BY tstamp
>ROWS BETWEEN 20 PRECEDING AND CURRENT ROW)
>
> and maybe that's close enough.
>
> I do not know an easy way to get "second worst" :-(.  You could build a
> user-defined aggregate to produce "second smallest value among the inputs"
> and then apply it in the same way as I used min() here.

Thanks Tom, much appreciate the fast reply.  I'll chew this over and
see if I have any other questions.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] which is better- storing data as array or json?

2015-02-17 Thread David G Johnston
zach cruise wrote
> for indexing, accessing, filtering and searching?
> 
> as simple array-
> first name| last name | nicknames
> tom | jerry | {cat}, {mouse}
> 
> as multi-dimensional array-
> first name| last name | nicknames
> tom | jerry | {cat, kat}, {mouse, mice}
> 
> as simple json-
> first name| last name | nicknames
> tom | jerry | {"public": "cat", "private": "mouse"}
> 
> as multi-nested json-
> first name| last name | nicknames
> tom | jerry   | {"public": {"first": "cat", "second": "kat"},
> "private": {"first": "mouse", "second": "mice"}}

The choice of proper model depends on how you intend to make use of it.

That said, I'd go with "none of the above" by default.

My first reaction in this scenario would be to create a nicknames table:

[nick_person_id, person_id, nick_name, nick_scope, nick_scope_order]

You could maybe normalize further by having a nickname table with integer
keys that then end up as FKs on this many-to-many relation.
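
Roughly, and with illustrative names (a sketch of the layout above, not a
finished design):

create table person (
  person_id serial primary key,
  first_name text,
  last_name text
);

create table person_nickname (
  nick_person_id serial primary key,
  person_id integer not null references person(person_id),
  nick_name text not null,
  nick_scope text,             -- e.g. 'public' / 'private'
  nick_scope_order integer     -- e.g. 'first' = 1, 'second' = 2
);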

An array is too complicated given the fact you need to track attributes on
the nicknames.  You could possibly do an array over a composite type but I'm
not sure how indexing and searching would fare in that setup.

Why are you even considering storing the information in JSON?  The answer to
that question would make it more obvious whether that solution is viable.
But do you really want any application that makes use of this data to have
to speak JSON to do so, when the time-tested relational model can likely
give you everything you need - and probably more?  Even if you had to
serialize the data to and from JSON, I would say that storing the data in
that format to avoid the serializing is an instance of premature
optimization.

David J.



--
View this message in context: 
http://postgresql.nabble.com/which-is-better-storing-data-as-array-or-json-tp5838358p5838362.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] which is better- storing data as array or json?

2015-02-17 Thread zach cruise
for indexing, accessing, filtering and searching?

as simple array-
first name  | last name | nicknames
tom | jerry | {cat}, {mouse}

as multi-dimensional array-
first name  | last name | nicknames
tom | jerry | {cat, kat}, {mouse, mice}

as simple json-
first name  | last name | nicknames
tom | jerry | {"public": "cat", "private": "mouse"}

as multi-nested json-
first name  | last name | nicknames
tom | jerry | {"public": {"first": "cat", "second": "kat"},
"private": {"first": "mouse", "second": "mice"}}


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Issue dumping schema using readonly user

2015-02-17 Thread Tom Lane
Stephen Frost  writes:
> * Tom Lane (t...@sss.pgh.pa.us) wrote:
>> This is the standard mistake about pg_dump, which is to imagine that it
>> depends only on userspace operations while inspecting schema info.  It
>> doesn't; it makes use of things like ruleutils.c which operate on "latest
>> available data" rules.

> There's two different points here- the first is the whole discussion
> around why pg_dump is depending on the backend for bits and pieces but
> not everything, but the second is- aren't the accesses from ruleutils.c
> now using an MVCC snapshot?

Yeah, they're using *an* MVCC snapshot.  But it's not the transaction
snapshot, it's one that postdates all sinval traffic the backend has
received.  Robert's changes to get rid of SnapshotNow didn't really
affect this issue at all.  (To clarify: I'm worried about all the stuff
that involves syscache consultations; those queries executed via SPI
are not the issue.)

It now strikes me that it might be possible to use Andreas' logical
decoding infrastructure to allow pg_dump's backend to operate with a
"historical catalog snapshot", which perhaps could resolve this problem.
But there's no such logic there today, and I have no idea what the
disadvantages might be.

> Certainly there's a comment about that
> happening for pg_get_constraintdef_worker(), and other parts appear to
> go through SPI, but not everything does.

Yeah, Jan originally had a plan of making ruleutils operate exclusively
through SPI, but that lasted probably about a month and a half before
people started using syscache-accessing shortcuts.  I think we really
would be best off to eliminate the SPI usage there altogether; it has
little effect except to waste cycles and mislead the credulous into
thinking ruleutils operates in userspace.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] window function ordering not working as expected

2015-02-17 Thread Tom Lane
Lonni J Friedman  writes:
> I'm interested in seeing:
> * the date for the most recent result
> * test name (identifier)
> * most recent result (decimal value)
> * the worst (lowest decimal value) test result from the past 21 days
> * the date which corresponds with the worst test result from the past 21 days
> * the 2nd worst (2nd lowest decimal value) test result
> ...
> The problem that I'm seeing is in the prv_score column. It should show
> a value of 0.6, which corresponds with 2015-02-13, however instead its
> returning 0.7. I thought by ordering by metrics->>'PT TWBR' I'd always
> be sorting by the scores, and as a result, the lead(metrics->>'PT
> TWBR', 1) would give me the next greatest value of the score. Thus my
> confusion as to why ORDER BY metrics->>'PT TWBR' isn't working as
> expected.

lead() and lag() retrieve values from rows that are N away from the
current row in the specified ordering.  That isn't what you want here
AFAICS.

I think the worst test result would be obtained with
nth_value(metrics->>'PT TWBR', 1)
which is equivalent to what you used,
first_value(metrics->>'PT TWBR')
while the 2nd worst result would be obtained with
nth_value(metrics->>'PT TWBR', 2)

However, "worst" and "2nd worst" with this implementation would mean
"worst and 2nd worst within the partition", which isn't the stated
goal either, at least not with the partition definition you're using.

What you really want for the "worst in last 21 days" is something like

min(metrics->>'PT TWBR') OVER (
   PARTITION BY ... that same mess you used ...
   ORDER BY tstamp
   RANGE BETWEEN '21 days'::interval PRECEDING AND CURRENT ROW)

However Postgres doesn't implement RANGE x PRECEDING yet.  You could
get "worst in last 21 observations" easily:

min(metrics->>'PT TWBR') OVER (
   PARTITION BY ... that mess ...
   ORDER BY tstamp
   ROWS BETWEEN 20 PRECEDING AND CURRENT ROW)

and maybe that's close enough.

I do not know an easy way to get "second worst" :-(.  You could build a
user-defined aggregate to produce "second smallest value among the inputs"
and then apply it in the same way as I used min() here.
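
A sketch of what that aggregate might look like (illustrative and untested;
assumes the values cast cleanly to numeric):

create function second_min_acc(state numeric[], val numeric)
returns numeric[] language sql immutable as $$
  select case
    when val is null then state
    when state[1] is null or val < state[1] then array[val, state[1]]
    when state[2] is null or val < state[2] then array[state[1], val]
    else state
  end;
$$;

create function second_min_final(state numeric[])
returns numeric language sql immutable as $$
  select state[2];
$$;

create aggregate second_min(numeric) (
  sfunc = second_min_acc,
  stype = numeric[],
  finalfunc = second_min_final,
  initcond = '{NULL,NULL}'
);

-- then, analogous to the min() example above:
-- second_min((metrics->>'PT TWBR')::numeric) OVER (
--    PARTITION BY ... that mess ...
--    ORDER BY tstamp
--    ROWS BETWEEN 20 PRECEDING AND CURRENT ROW)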

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Issue dumping schema using readonly user

2015-02-17 Thread Stephen Frost
* Tom Lane (t...@sss.pgh.pa.us) wrote:
> Stephen Frost  writes:
> > The issue is that pg_dump wants to lock the table against changes, which
> > is really to prevent the table to change between "we got the definition
> > of the table" and "pulling the records out of the table."  It's not
> > immediately obvious, to me at least, that there's really any need to
> > lock the tables when doing a schema-only dump.  Accesses to the catalogs
> > should be consistent across the lifetime of the transaction which
> > pg_dump is operating in and a schema-only dump isn't doing anything
> > else.
> 
> This is the standard mistake about pg_dump, which is to imagine that it
> depends only on userspace operations while inspecting schema info.  It
> doesn't; it makes use of things like ruleutils.c which operate on "latest
> available data" rules.  Accordingly, no we're not going to skip taking
> the table locks.  At least not without a ground-up rewrite of that whole
> mess, which as you know has been discussed multiple times without anything
> useful happening.

There are two different points here- the first is the whole discussion
around why pg_dump is depending on the backend for bits and pieces but
not everything, but the second is- aren't the accesses from ruleutils.c
now using an MVCC snapshot?  Certainly there's a comment about that
happening for pg_get_constraintdef_worker(), and other parts appear to
go through SPI, but not everything does.

Of particular relevance to this appears to be trigger and index
handling; considering that the only thing pg_dump locks is relations
anyway, much of the rest isn't relevant.

Thanks,

Stephen




[GENERAL] window function ordering not working as expected

2015-02-17 Thread Lonni J Friedman
Greetings,
I have a postgresql-9.3.x database with a table with a variety of
date-stamped test results, some of which are stored in json format
(natively in the database). I'm attempting to use some window
functions to pull out specific data from the test results over a
time window, but part of the results are not making sense. Some tests
run every day, others less frequently. For each unique test's results,
I'm interested in seeing:

* the date for the most recent result
* test name (identifier)
* most recent result (decimal value)
* the worst (lowest decimal value) test result from the past 21 days
* the date which corresponds with the worst test result from the past 21 days
* the 2nd worst (2nd lowest decimal value) test result

Here's a sample of the data and resulting score for one test (tname)
from the past few weeks:

   tstamp   |      tname      | score
------------+-----------------+-------
2015-02-17 | dfw001.ix-cr-02 | 0.7
2015-02-15 | dfw001.ix-cr-02 | 0.6
2015-02-14 | dfw001.ix-cr-02 | 0.6
2015-02-14 | dfw001.ix-cr-02 | 0.7
2015-02-13 | dfw001.ix-cr-02 | 0.6
2015-02-12 | dfw001.ix-cr-02 | 0.7
2015-02-11 | dfw001.ix-cr-02 | 0.7
2015-02-10 | dfw001.ix-cr-02 | 0.7
2015-02-09 | dfw001.ix-cr-02 | 0.7
2015-02-08 | dfw001.ix-cr-02 | 0.7
2015-02-08 | dfw001.ix-cr-02 | 0.5
2015-02-07 | dfw001.ix-cr-02 | 0.7
2015-02-07 | dfw001.ix-cr-02 | 0.5
2015-02-06 | dfw001.ix-cr-02 | 0.7
2015-02-05 | dfw001.ix-cr-02 | 0.7
2015-02-04 | dfw001.ix-cr-02 | 0.7
2015-01-30 | dfw001.ix-cr-02 | 0.7

Here's the SQL query that I'm running:

SELECT * FROM
(SELECT tstamp,
concat_ws('/',attrs->>'RCluster ID',
regexp_replace(replace(replace(attrs->>'ASN HTML','',''),'http://ncapp100.prod.com/Cluster3.php?asn=',''),'\d+(&d=5d''
target=''_blank''>)','')) AS tname ,
metrics->>'PT TWBR' AS score,
first_value(metrics->>'PT TWBR') OVER
(PARTITION BY concat_ws('/',attrs->>'Route Cluster
ID', regexp_replace(replace(replace(attrs->>'ASN HTML','',''),'http://ncapp100.prod.com/Cluster3.php?asn=',''),'\d+(&d=5d''
target=''_blank''>)',''))
ORDER BY metrics->>'PT TWBR') AS worst_score,
first_value(tstamp) OVER
(PARTITION BY concat_ws('/',attrs->>'Route Cluster
ID', regexp_replace(replace(replace(attrs->>'ASN HTML','',''),'http://ncapp100.prod.com/Cluster3.php?asn=',''),'\d+(&d=5d''
target=''_blank''>)',''))
ORDER BY metrics->>'PT TWBR') AS worst_date,
lead(metrics->>'PT TWBR', 1) OVER
(PARTITION BY concat_ws('/',attrs->>'Route Cluster
ID', regexp_replace(replace(replace(attrs->>'ASN HTML','',''),'http://ncapp100.prod.com/Cluster3.php?asn=',''),'\d+(&d=5d''
target=''_blank''>)',''))
ORDER BY metrics->>'PT TWBR') AS prv_score
FROM btworks
WHERE
age(now(),tstamp) < '21 days'
ORDER BY tstamp DESC, rank
) AS stuff
WHERE
tstamp = '2015-02-17';

Here's the data from the above query as it pertains to the data
(tname='dfw001.ix-cr-02') set that I posted above:

   tstamp   |      tname      | score | worst_score | worst_date | prv_score
------------+-----------------+-------+-------------+------------+-----------
 2015-02-17 | dfw001.ix-cr-02 | 0.7   | 0.5         | 2015-02-08 | 0.7

The problem that I'm seeing is in the prv_score column. It should show
a value of 0.6, which corresponds with 2015-02-13, however instead its
returning 0.7. I thought by ordering by metrics->>'PT TWBR' I'd always
be sorting by the scores, and as a result, the lead(metrics->>'PT
TWBR', 1) would give me the next greatest value of the score. Thus my
confusion as to why ORDER BY metrics->>'PT TWBR' isn't working as
expected.


thanks in advance for any pointers.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Revoking access for pg_catalog schema objects

2015-02-17 Thread Tom Lane
Saimon  writes:
> I want to restrict access for some user to tables and views in the
> pg_catalog schema.

The system is not designed to support this, and you should not expect to
succeed at hiding things this way.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Issue dumping schema using readonly user

2015-02-17 Thread Tom Lane
Stephen Frost  writes:
> * Daniel LaMotte (lamott...@gmail.com) wrote:
>> I understand this.  This is the behavior I want.  What I don't understand
>> is why the readonly user can inspect the schema of the table interactively
>> when pg_dump refuses to do the same via the command line (assumably it asks
>> for too much permission when simply trying to dump the schema [NOT the
>> table data]).

> The issue is that pg_dump wants to lock the table against changes, which
> is really to prevent the table from changing between "we got the definition
> of the table" and "pulling the records out of the table."  It's not
> immediately obvious, to me at least, that there's really any need to
> lock the tables when doing a schema-only dump.  Accesses to the catalogs
> should be consistent across the lifetime of the transaction which
> pg_dump is operating in and a schema-only dump isn't doing anything
> else.

This is the standard mistake about pg_dump, which is to imagine that it
depends only on userspace operations while inspecting schema info.  It
doesn't; it makes use of things like ruleutils.c which operate on "latest
available data" rules.  Accordingly, no we're not going to skip taking
the table locks.  At least not without a ground-up rewrite of that whole
mess, which as you know has been discussed multiple times without anything
useful happening.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Issue dumping schema using readonly user

2015-02-17 Thread Stephen Frost
Melvin,

* Melvin Davidson (melvin6...@gmail.com) wrote:
> Simply put, giving access to a schema DOES NOT automatically give access to
> any table in the schema. So if you want a specific user ( or role) to be
> able to read (or pg_dump) all tables in the schema, then you must GRANT
> SELECT of all tables in that schema to the user (or role).

That's not the question at hand, however.  He's not asking about
dumping out the records of the table but rather about pulling out the
schema, which any user can do, regardless of their permissions on the
tables.  There's no need to 'grant select' access on a table, or even
to 'grant usage' on the schema, simply issue queries against the
catalog.
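
For instance, even with no grants at all, a user can pull a table's column
list straight out of the catalogs (table name matches the example from
earlier in the thread; this is just an illustration):

select a.attname,
       pg_catalog.format_type(a.atttypid, a.atttypmod) as data_type,
       a.attnotnull as not_null
  from pg_catalog.pg_attribute a
  join pg_catalog.pg_class c on c.oid = a.attrelid
 where c.relname = 'mytable_is_not_readonly'
   and a.attnum > 0
   and not a.attisdropped
 order by a.attnum;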

pg_dump is going beyond what's strictly necessary for a schema-only dump,
though: it's trying to lock all of the tables too, which really isn't
necessary in this case.

Thanks,

Stephen




Re: [GENERAL] Issue dumping schema using readonly user

2015-02-17 Thread Melvin Davidson
Simply put, giving access to a schema DOES NOT automatically give access to
any table in the schema. So if you want a specific user (or role) to be
able to read (or pg_dump) all tables in the schema, then you must GRANT
SELECT on all tables in that schema to the user (or role).
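
For example (schema and role names are illustrative):

GRANT USAGE ON SCHEMA myschema TO readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA myschema TO readonly;
-- and, for tables the granting role creates there later:
ALTER DEFAULT PRIVILEGES IN SCHEMA myschema
  GRANT SELECT ON TABLES TO readonly;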

On Tue, Feb 17, 2015 at 5:41 PM, Stephen Frost  wrote:

> Daniel,
>
> * Daniel LaMotte (lamott...@gmail.com) wrote:
> > I understand this.  This is the behavior I want.  What I don't understand
> > is why the readonly user can inspect the schema of the table
> interactively
> > when pg_dump refuses to do the same via the command line (assumably it
> asks
> > for too much permission when simply trying to dump the schema [NOT the
> > table data]).  I do not care about the data.  I only care that the
> pg_dump
> > would emit "CREATE TABLE ..." statements for the table.
> >
> > The --schema-only option makes me think that it would emit only these
> > CREATE TABLE ... statements and not the COPY statements (which consist of
> > table data).
>
> The issue is that pg_dump wants to lock the table against changes, which
> is really to prevent the table from changing between "we got the definition
> of the table" and "pulling the records out of the table."  It's not
> immediately obvious, to me at least, that there's really any need to
> lock the tables when doing a schema-only dump.  Accesses to the catalogs
> should be consistent across the lifetime of the transaction which
> pg_dump is operating in and a schema-only dump isn't doing anything
> else.
>
> So, for my 2c, it seems like we should be able to avoid issuing the LOCK
> TABLE statements when we're doing a schema-only dump and then this would
> work.
>
> Thanks!
>
> Stephen
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Determine all listeners subscribed to notifcations and what channels

2015-02-17 Thread Merlin Moncure
On Tue, Feb 17, 2015 at 4:01 PM, Tom Lane  wrote:
> Cory Tucker  writes:
>> I'm interested in trying to figure out which channels have been subscribed
>> to (using LISTEN).  From what I could tell via a little Googling, there
>> used to be a table named pg_catalog.pg_listener that contained all this
>> information, but that seems to have disappeared somewhere in the 9.x
>> release (I'm using 9.3.x).
>
>> Is there a way to find out which channels have listeners?
>
> No, not any more --- that capability was intentionally given up in the
> 9.0 LISTEN/NOTIFY rewrite.  Not that it wouldn't be nice to have, but
> the cost/benefit ratio was pretty awful.

A userland wrapper could probably approximate this:
*) create a global id for each channel you want to listen on
(basically a table with channel names and a sequence)
*) create a function that looks up the id by channel and sharelocks
the id with an advisory lock, then listens on it
*) the advisory locks will clean themselves up when session ends
*) you can scan pg_locks table for type=advisory lock, pid, and the
ids of interest to get the data you want
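
a sketch of that wrapper (illustrative and untested; 42 is just an arbitrary
keyspace for the two-key advisory form):

create table listen_channel (
  id serial primary key,
  name text unique not null);

create or replace function listen_tracked(chan text)
returns void language plpgsql as $$
declare
  chan_id int;
begin
  select id into chan_id from listen_channel where name = chan;
  if chan_id is null then
    insert into listen_channel(name) values (chan)
      returning id into chan_id;
  end if;
  -- session-level share lock: released automatically when the session ends
  perform pg_advisory_lock_shared(42, chan_id);
  execute format('LISTEN %I', chan);
end;
$$;

-- approximate "who is listening on what":
select c.name, l.pid
  from pg_locks l
  join listen_channel c on c.id = l.objid
 where l.locktype = 'advisory'
   and l.classid = 42
   and l.mode = 'ShareLock';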

merlin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Revoking access for pg_catalog schema objects

2015-02-17 Thread Saimon
Hi
I want to restrict access for some user to tables and views in the
pg_catalog schema.

After the following command in psql:
REVOKE ALL ON SCHEMA pg_catalog FROM PUBLIC;

Access, for example, to table pg_proc was restricted:
SELECT * from pg_catalog.pg_proc;
> ERROR:  permission denied for schema pg_catalog

So, it seems that the goal is reached.

But if I run command:
SELECT * from pg_proc;

I receive data from table pg_catalog.pg_proc. I don't know how to explain
this result.

And if I also explicitly revoke access for this table using
REVOKE ALL ON pg_catalog.pg_proc FROM PUBLIC;

Both SELECT queries, with and without schema qualification, will fail.

So, why are tables and views still available after revoking all privileges
on the containing system schema?
Is this OK according to the documentation?



--
View this message in context: 
http://postgresql.nabble.com/Revoking-access-for-pg-catalog-schema-objects-tp5838337.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Issue dumping schema using readonly user

2015-02-17 Thread Stephen Frost
Daniel,

* Daniel LaMotte (lamott...@gmail.com) wrote:
> I understand this.  This is the behavior I want.  What I don't understand
> is why the readonly user can inspect the schema of the table interactively
> when pg_dump refuses to do the same via the command line (presumably it asks
> for too much permission when simply trying to dump the schema [NOT the
> table data]).  I do not care about the data.  I only care that the pg_dump
> would emit "CREATE TABLE ..." statements for the table.
> 
> The --schema-only option makes me think that it would emit only these
> CREATE TABLE ... statements and not the COPY statements (which consist of
> table data).

The issue is that pg_dump wants to lock the table against changes, which
is really to prevent the table from changing between "we got the definition
of the table" and "pulling the records out of the table."  It's not
immediately obvious, to me at least, that there's really any need to
lock the tables when doing a schema-only dump.  Accesses to the catalogs
should be consistent across the lifetime of the transaction which
pg_dump is operating in and a schema-only dump isn't doing anything
else.

So, for my 2c, it seems like we should be able to avoid issuing the LOCK
TABLE statements when we're doing a schema-only dump and then this would
work.

Thanks!

Stephen




[GENERAL] BDR Monitoring, missing pg_stat_logical_decoding view

2015-02-17 Thread Steve Boyle
I'm trying to setup replication monitoring for BDR, following the doc here:
https://wiki.postgresql.org/wiki/BDR_Monitoring

My BDR installs seem to be missing the pg_stat_logical_decoding view.  Is there 
something specific I need to do to install/create that view?

Thanks,
Steve Boyle


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Issue dumping schema using readonly user

2015-02-17 Thread Daniel LaMotte
I understand this.  This is the behavior I want.  What I don't understand
is why the readonly user can inspect the schema of the table interactively
when pg_dump refuses to do the same via the command line (presumably it asks
for too much permission when simply trying to dump the schema [NOT the
table data]).  I do not care about the data.  I only care that the pg_dump
would emit "CREATE TABLE ..." statements for the table.

The --schema-only option makes me think that it would emit only these
CREATE TABLE ... statements and not the COPY statements (which consist of
table data).

I want the pg_dump to dump the "schema" of the table but without the data.

My example is that the readonly user is able to inspect the schema of the
table (it can see [even though I have not given permission to the table]
that the table has int and text columns).  I would expect that since I can
inspect the schema of the table, that pg_dump using the --schema-only
option would emit a CREATE TABLE ... for the table _even though_ it cannot
dump the data of the table.

Have I made myself clear?

I have no interest in this user being allowed to COPY or SELECT the tables
data.  My only interest is in the user's ability to see what columns and
column types exist for the table so it can emit CREATE TABLE ... commands
in the pg_dump output.

In my mind, the following is true:

% pg_dump --table=mytable
...
CREATE TABLE xyz (...);
COPY xyz ... FROM stdin; -- table data
...

% pg_dump --schema-only --table=mytable
...
CREATE TABLE xyz (...);
...

The second example uses --schema-only and does not dump table data.
Therefore, if the user can inspect the schema, that is all it needs
permissions to do in order to write a CREATE TABLE statement for the
table.  But it seems that pg_dump still asks for permission to read the
table data in order to simply dump the schema (which is what I'm trying to
confirm if its intentional or a bug) which results in permission denied
error.

- Dan

On Tue, Feb 17, 2015 at 8:55 PM, Adrian Klaver 
wrote:

> On 02/17/2015 03:11 AM, Daniel LaMotte wrote:
> > The point is that the user seems to have permissions to view the schema
> > but not the table data.  If I can interactively inspect the table schema
> > but pg_dump is unable to dump the table schema, that seems like a bug.
> >
> > The account explicitly is not allowed access to the table's data but
> > seems to be able to access the schema (at least interactively).
> >
> > Does that make more sense?
>
> Whenever I deal with permissions I am reminded of the old Abbott and
> Costello skit; Who's on first?
> To put it another way, it can be confusing. So let me walk through it
> below:
>
> aklaver@panda:~> /usr/local/pgsql93/bin/psql  -d test -h localhost -U
> aklaver -p 5452
>
> # Changed to int, because I do not have uuid on this instance.
> test=# create table mytable_is_readonly (id int primary key, text text not
> null);
> CREATE TABLE
> test=# create table mytable_is_not_readonly (id int primary key, text text
> not null);
> CREATE TABLE
>
>
> # Need to change because aklaver does not have role creation privileges.
> test=# \c - postgres
> You are now connected to database "test" as user "postgres".
> test=# create user readonly with password 'readonly';
> CREATE ROLE
>
> # Change back and create data
> test=# \c - aklaver
> You are now connected to database "test" as user "aklaver".
>
> test=# insert into mytable_is_readonly values (1, 'test1'), (2, 'test2'),
> (3, 'test3');
> INSERT 0 3
> test=# insert into mytable_is_not_readonly values (1, 'test1'), (2,
> 'test2'), (3, 'test3');
> INSERT 0 3
>
> # Show who owns the tables
> test=> \dt+ mytable_is_readonly
>                         List of relations
>  Schema |        Name         | Type  |  Owner  | Size  | Description
> --------+---------------------+-------+---------+-------+-------------
>  public | mytable_is_readonly | table | aklaver | 16 kB |
> (1 row)
>
> test=> \dt+ mytable_is_not_readonly
>                           List of relations
>  Schema |          Name           | Type  |  Owner  | Size  | Description
> --------+-------------------------+-------+---------+-------+-------------
>  public | mytable_is_not_readonly | table | aklaver | 16 kB |
> (1 row)
>
>
> # As aklaver GRANT SELECT to readonly.
> test=> grant select on mytable_is_readonly to readonly;
> GRANT
>
> # Show permissions for tables.
> test=> \dp mytable_is_readonly
>                                  Access privileges
>  Schema |        Name         | Type  |    Access privileges    | Column access privileges
> --------+---------------------+-------+-------------------------+--------------------------
>  public | mytable_is_readonly | table | aklaver=arwdDxt/aklaver+|
>         |                     |       | readonly=r/aklaver      |
> (1 row)
>
> test=> \dp mytable_is_not_readonly
>                                    Access privileges
>  Schema |          Name           | Type  | Access privileges | Column access privileges

Re: [GENERAL] Issue dumping schema using readonly user

2015-02-17 Thread Daniel LaMotte
The point is that the user seems to have permissions to view the schema but
not the table data.  If I can interactively inspect the table schema but
pg_dump is unable to dump the table schema, that seems like a bug.

The account explicitly is not allowed access to the table's data but seems
to be able to access the schema (at least interactively).

Does that make more sense?

- Dan

On Fri, Feb 13, 2015 at 4:48 AM, Adrian Klaver 
wrote:

> On 02/11/2015 01:47 PM, Daniel LaMotte wrote:
>
>> Here’s the situation:
>>
>> | % psql --version
>>  psql (PostgreSQL) 9.3.5
>>  % postgres --version
>>  postgres (PostgreSQL) 9.3.5
>>  % psql mydatabase
>>  create table mytable_is_readonly (id uuid primary key, text text not
>> null);
>>  create table mytable_is_not_readonly (id uuid primary key, text text
>> not null);
>>  create user readonly with password 'readonly';
>>  grant select on mytable_is_readonly to readonly;
>>
>>  % psql mydatabase readonly
>>  \d mytable_is_readonly
>>             Table "public.mytable_is_readonly"
>>   Column │  Type   │                            Modifiers
>>  ────────┼─────────┼───────────────────────────────────────────────────────────────────
>>   id     │ integer │ not null default nextval('mytable_is_readonly_id_seq'::regclass)
>>   text   │ text    │ not null
>>  Indexes:
>>      "mytable_is_readonly_pkey" PRIMARY KEY, btree (id)
>>
>>  \d mytable_is_not_readonly
>>           Table "public.mytable_is_not_readonly"
>>   Column │  Type   │                              Modifiers
>>  ────────┼─────────┼───────────────────────────────────────────────────────────────────────
>>   id     │ integer │ not null default nextval('mytable_is_not_readonly_id_seq'::regclass)
>>   text   │ text    │ not null
>>  Indexes:
>>      "mytable_is_not_readonly_pkey" PRIMARY KEY, btree (id)
>>
>>  % pg_dump -U readonly mydatabase --schema-only --table=mytable_is_readonly
>>  ... this outputs and works
>>
>>  % pg_dump -U readonly mydatabase --schema-only --table=mytable_is_not_readonly
>>  pg_dump: [archiver (db)] query failed: ERROR:  permission denied for relation mytable_is_not_readonly
>>  pg_dump: [archiver (db)] query was: LOCK TABLE public.mytable_is_not_readonly IN ACCESS SHARE MODE
>>
>> Is this a bug? Or defined behavior that is expected? My use case is that
>> I have some tables that I don’t want to allow the readonly account to
>> access data in but want to allow it to see the schema of that table.
>>
>
> To me at least SELECT is accessing the data, so I am not sure that the
> above meets your criteria in any case. I would do \dt+
> mytable_is_not_readonly to see who has permissions on the table.
>
>
>
>  My
>
>> guess was that since it could see the schema interactively in psql, that
>> it should be allowed to pg_dump the table with schema only no problem.
>>
>> Thanks for the help!
>>
>> - Dan
>>
>> ​
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: [GENERAL] Starting new cluster from base backup

2015-02-17 Thread Adrian Klaver

On 02/17/2015 06:54 AM, Guillaume Drolet wrote:

Adrian: thanks for this information.

I tried running pg_basebackup in plain format with option -X stream
(pg_basebackup -D "F:\208376PT\db" -X stream -l "208376PT17022015" -U
postgres -P) but I got the message:

pg_basebackup: directory "E:\Data\Database" exists but is not empty"

I created a tablespace using CREATE TABLESPACE at the location mentioned
in the message. According to what I read online about this, this message
is issued when a tablespace was created under PGDATA. In my case, only
the directory junction pointing to my tablespace (on a different drive
than PGDATA) exists under PGDATA, not the tablespace itself.

The only way I can run pg_basebackup with WAL files is with option -Ft
and -X fetch. I'd much prefer using plain mode since my 670 GB
tablespace takes a lot of time to extract when tarred. Is there another
way to approach this?


All I can come up with at the moment:

So what is the path on the original machine, and can it be replicated on
the new machine, at least temporarily?


I'm thinking that if the path can be replicated, let pg_basebackup write to
it, and then create the tablespace you want and do ALTER TABLE ... SET
TABLESPACE to move the tables. You would also need to do this for indexes.
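
Something along these lines (tablespace name, table names and path are
illustrative):

CREATE TABLESPACE newspace LOCATION 'E:/Data/NewTablespace';
ALTER TABLE my_big_table SET TABLESPACE newspace;
ALTER INDEX my_big_table_pkey SET TABLESPACE newspace;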




Thanks.





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Starting new cluster from base backup

2015-02-17 Thread Adrian Klaver

On 02/17/2015 06:54 AM, Guillaume Drolet wrote:

Adrian: thanks for this information.

I tried running pg_basebackup in plain format with option -X stream
(pg_basebackup -D "F:\208376PT\db" -X stream -l "208376PT17022015" -U
postgres -P) but I got the message:

pg_basebackup: directory "E:\Data\Database" exists but is not empty"

I created a tablespace using CREATE TABLESPACE at the location mentioned
in the message. According to what I read online about this, this message
is issued when a tablespace was created under PGDATA. In my case, only
the directory junction pointing to my tablespace (on a different drive
than PGDATA) exists under PGDATA, not the tablespace itself.


I believe all pg_basebackup cares about is whether the directory is empty 
or not. It does not do any sort of further investigation to determine what 
is in the directory.




The only way I can run pg_basebackup with WAL files is with option -Ft
and -X fetch. I'd much prefer using plain mode since my 670 GB
tablespace takes a lot of time to extract when tarred. Is there another
way to approach this?

Thanks.






--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Determine all listeners subscribed to notifcations and what channels

2015-02-17 Thread Tom Lane
Cory Tucker  writes:
> I'm interested in trying to figure out which channels have been subscribed
> to (using LISTEN).  From what I could tell via a little Googling, there
> used to be a table named pg_catalog.pg_listener that contained all this
> information, but that seems to have disappeared somewhere in the 9.x
> release (I'm using 9.3.x).

> Is there a way to find out which channels have listeners?

No, not any more --- that capability was intentionally given up in the
9.0 LISTEN/NOTIFY rewrite.  Not that it wouldn't be nice to have, but
the cost/benefit ratio was pretty awful.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Issue dumping schema using readonly user

2015-02-17 Thread Igor Neyman


-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Adrian Klaver
Sent: Tuesday, February 17, 2015 4:12 PM
To: Daniel LaMotte
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Issue dumping schema using readonly user

On 02/17/2015 08:43 AM, Daniel LaMotte wrote:
> I understand this.  This is the behavior I want.  What I don't 
> understand is why the readonly user can inspect the schema of the 
> table interactively when pg_dump refuses to do the same via the 
> command line (presumably it asks for too much permission when simply 
> trying to dump the schema [NOT the table data]).  I do not care about 
> the data.  I only care that the pg_dump would emit "CREATE TABLE ..." 
> statements for the table.

Just to be clear, you want the readonly user to be able to dump the schema 
definition for mytable_is_not_readonly?


>
> The --schema-only option makes me think that it would emit only these
> CREATE TABLE ... statements and not the COPY statements (which consist
> of table data).
>
> I want the pg_dump to dump the "schema" of the table but without the data.
>
> My example is that the readonly user is able to inspect the schema of
> the table (it can see [even though I have not given permission to the
> table] that the table has int and text columns).  I would expect that
> since I can inspect the schema of the table, that pg_dump using the
> --schema-only option would emit a CREATE TABLE ... for the table _even
> though_ it cannot dump the data of the table.
>
> Have I made myself clear?

Yes, I understand now.

>
> I have no interest in this user being allowed to COPY or SELECT the
> tables data.  My only interest is in the user's ability to see what
> columns and column types exist for the table so it can emit CREATE TABLE
> ... commands in the pg_dump output.
>
> In my mind, the following is true:
>
> % pg_dump --table=mytable
> ...
> CREATE TABLE xyz (...);
> COPY xyz ... FROM stdin; -- table data
> ...
>
> % pg_dump --schema-only --table=mytable
> ...
> CREATE TABLE xyz (...);
> ...
>
> The second example uses --schema-only and does not dump table data.
> Therefore, if the user can inspect the schema, that is all it needs
> permissions to do in order to write a CREATE TABLE statement for the
> table.  But it seems that pg_dump still asks for permission to read the
> table data in order to simply dump the schema (which is what I'm trying
> to confirm if its intentional or a bug) which results in permission
> denied error.

The issue such as it is, arises because the information you are getting 
is coming from two different paths.

1) psql
Using the -E switch to psql shows what happens in the background

aklaver@killi:~> /usr/local/pgsql93/bin/psql -d test -U aklaver -p 5452 -E

aklaver@test=> \dt mytable_is_not_readonly
* QUERY **
SELECT n.nspname as "Schema",
   c.relname as "Name",
   CASE c.relkind WHEN 'r' THEN 'table' WHEN 'v' THEN 'view' WHEN 'm' 
THEN 'materialized view' WHEN 'i' THEN 'index' WHEN 'S' THEN 'sequence' 
WHEN 's' THEN 'special' WHEN 'f' THEN 'foreign table' END as "Type",
   pg_catalog.pg_get_userbyid(c.relowner) as "Owner"
FROM pg_catalog.pg_class c
  LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r','s','')
   AND n.nspname !~ '^pg_toast'
   AND c.relname ~ '^(mytable_is_not_readonly)$'
   AND pg_catalog.pg_table_is_visible(c.oid)
ORDER BY 1,2;
**

          List of relations
  Schema |          Name           | Type  |  Owner
 --------+-------------------------+-------+---------
  public | mytable_is_not_readonly | table | aklaver
 (1 row)

As you can see in psql Postgres uses the system catalogs to fetch the 
schema definition. By default the information in there is available to 
all users. There is another thread going on that addresses this in 
relation to function definitions. If you want some gory details:

http://www.postgresql.org/message-id/caokvcq66muzw7qyeyro0n8v4r4fjzcabysk9u3h+fmrfzw1...@mail.gmail.com


2) pg_dump

aklaver@killi:~> /usr/local/pgsql93/bin/pg_dump -U readonly  test 
--schema-only --table=mytable_is_not_readonly -p 5452
pg_dump: [archiver (db)] query failed: ERROR:  permission denied for 
relation mytable_is_not_readonly
pg_dump: [archiver (db)] query was: LOCK TABLE 
public.mytable_is_not_readonly IN ACCESS SHARE MODE


pg_dump uses the -U to determine permissions and from that what can or 
cannot be dumped based on what permissions are actually set on the 
objects. As is mentioned below, a user has to have at a minimum SELECT 
permissions on an object.

http://www.postgresql.org/docs/9.3/interactive/app-pgdump.html
"
Diagnostics

pg_dump internally executes SELECT statements. If you have problems 
running pg_dump, make sure you are able to select information from the 
database using, for example, psql."

To answer your question, this is intentional. At the moment a work 
around does not come to mind, but the gears are grinding:)

Re: [GENERAL] Determine all listeners subscribed to notifcations and what channels

2015-02-17 Thread Igor Neyman


From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Cory Tucker
Sent: Tuesday, February 17, 2015 4:21 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Determine all listeners subscribed to notifcations and what 
channels

I'm interested in trying to figure out which channels have been subscribed to 
(using LISTEN).  From what I could tell via a little Googling, there used to be 
a table named pg_catalog.pg_listener that contained all this information, but 
that seems to have disappeared somewhere in the 9.x release (I'm using 9.3.x).

Is there a way to find out which channels have listeners?

thanks
--Cory


Take a look at pg_listening_channels() in PG docs.
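
Note it reports only the channels the *current* session is listening on,
e.g.:

LISTEN my_channel;            -- channel name is illustrative
SELECT pg_listening_channels();
--  pg_listening_channels
-- ------------------------
--  my_channel
-- (1 row)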

Regards,
Igor Neyman


[GENERAL] Determine all listeners subscribed to notifcations and what channels

2015-02-17 Thread Cory Tucker
I'm interested in trying to figure out which channels have been subscribed
to (using LISTEN).  From what I could tell via a little Googling, there
used to be a table named pg_catalog.pg_listener that contained all this
information, but that seems to have disappeared somewhere in the 9.x
release (I'm using 9.3.x).

Is there a way to find out which channels have listeners?

thanks
--Cory


Re: [GENERAL] Issue dumping schema using readonly user

2015-02-17 Thread Adrian Klaver

On 02/17/2015 08:43 AM, Daniel LaMotte wrote:

I understand this.  This is the behavior I want.  What I don't
understand is why the readonly user can inspect the schema of the table
interactively when pg_dump refuses to do the same via the command line
(presumably it asks for too much permission when simply trying to dump
the schema [NOT the table data]).  I do not care about the data.  I only
care that the pg_dump would emit "CREATE TABLE ..." statements for the
table.


Just to be clear, you want the readonly user to be able to dump the 
schema definition for mytable_is_not_readonly?





The --schema-only option makes me think that it would emit only these
CREATE TABLE ... statements and not the COPY statements (which consist
of table data).

I want the pg_dump to dump the "schema" of the table but without the data.

My example is that the readonly user is able to inspect the schema of
the table (it can see [even though I have not given permission to the
table] that the table has int and text columns).  I would expect that
since I can inspect the schema of the table, that pg_dump using the
--schema-only option would emit a CREATE TABLE ... for the table _even
though_ it cannot dump the data of the table.

Have I made myself clear?


Yes, I understand now.



I have no interest in this user being allowed to COPY or SELECT the
tables data.  My only interest is in the user's ability to see what
columns and column types exist for the table so it can emit CREATE TABLE
... commands in the pg_dump output.

In my mind, the following is true:

% pg_dump --table=mytable
...
CREATE TABLE xyz (...);
COPY xyz ... FROM stdin; -- table data
...

% pg_dump --schema-only --table=mytable
...
CREATE TABLE xyz (...);
...

The second example uses --schema-only and does not dump table data.
Therefore, if the user can inspect the schema, that is all it needs
permissions to do in order to write a CREATE TABLE statement for the
table.  But it seems that pg_dump still asks for permission to read the
table data in order to simply dump the schema (which is what I'm trying
to confirm if its intentional or a bug) which results in permission
denied error.


The issue such as it is, arises because the information you are getting 
is coming from two different paths.


1) psql
Using the -E switch to psql shows what happens in the background

aklaver@killi:~> /usr/local/pgsql93/bin/psql -d test -U aklaver -p 5452 -E

aklaver@test=> \dt mytable_is_not_readonly
* QUERY **
SELECT n.nspname as "Schema",
  c.relname as "Name",
  CASE c.relkind WHEN 'r' THEN 'table' WHEN 'v' THEN 'view' WHEN 'm' 
THEN 'materialized view' WHEN 'i' THEN 'index' WHEN 'S' THEN 'sequence' 
WHEN 's' THEN 'special' WHEN 'f' THEN 'foreign table' END as "Type",

  pg_catalog.pg_get_userbyid(c.relowner) as "Owner"
FROM pg_catalog.pg_class c
 LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r','s','')
  AND n.nspname !~ '^pg_toast'
  AND c.relname ~ '^(mytable_is_not_readonly)$'
  AND pg_catalog.pg_table_is_visible(c.oid)
ORDER BY 1,2;
**

          List of relations
 Schema |          Name           | Type  |  Owner
--------+-------------------------+-------+---------
 public | mytable_is_not_readonly | table | aklaver
(1 row)

As you can see in psql Postgres uses the system catalogs to fetch the 
schema definition. By default the information in there is available to 
all users. There is another thread going on that addresses this in 
relation to function definitions. If you want some gory details:


http://www.postgresql.org/message-id/caokvcq66muzw7qyeyro0n8v4r4fjzcabysk9u3h+fmrfzw1...@mail.gmail.com


2) pg_dump

aklaver@killi:~> /usr/local/pgsql93/bin/pg_dump -U readonly  test 
--schema-only --table=mytable_is_not_readonly -p 5452
pg_dump: [archiver (db)] query failed: ERROR:  permission denied for 
relation mytable_is_not_readonly
pg_dump: [archiver (db)] query was: LOCK TABLE 
public.mytable_is_not_readonly IN ACCESS SHARE MODE



pg_dump uses the -U to determine permissions and from that what can or 
cannot be dumped based on what permissions are actually set on the 
objects. As is mentioned below, a user has to have at a minimum SELECT 
permissions on an object.


http://www.postgresql.org/docs/9.3/interactive/app-pgdump.html
"
Diagnostics

pg_dump internally executes SELECT statements. If you have problems 
running pg_dump, make sure you are able to select information from the 
database using, for example, psql."


To answer your question, this is intentional. At the moment a workaround
does not come to mind, but the gears are grinding :)
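
One partial sketch, building on the catalog point above: since pg_catalog
is readable by every role, a user without SELECT on the table can still
pull the column list straight from the catalogs and assemble an
approximate CREATE TABLE by hand. This is not a pg_dump substitute, just
an illustration against the table name used in this thread:

-- Run as the readonly role; this reads only the system catalogs,
-- never the table data itself.
SELECT a.attname,
       pg_catalog.format_type(a.atttypid, a.atttypmod) AS data_type,
       a.attnotnull
FROM pg_catalog.pg_attribute a
WHERE a.attrelid = 'public.mytable_is_not_readonly'::regclass
  AND a.attnum > 0
  AND NOT a.attisdropped
ORDER BY a.attnum;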






- Dan





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Missing table from in INSERT RETURNING

2015-02-17 Thread John McKown
On Tue, Feb 17, 2015 at 2:15 PM, David G Johnston <
david.g.johns...@gmail.com> wrote:

> On Tue, Feb 17, 2015 at 1:08 PM, John McKown [via PostgreSQL]
> <[hidden email]> wrote:
>
>> I haven't seen any one else reply. I don't know if you've gotten a
>> solution. But the following seemed to work for me:
>>
>>
> mine apparently got bounced...
>
>
>
>> WITH serie AS (
>> select s, s*10 as computing
>> from generate_series(1,10) as s
>> )
>> INSERT INTO test_insert_returning (some_value)
>> SELECT computing
>> FROM serie
>> RETURNING gid, some_value;
>>
>
> or, "RETURNING some_value / 10"
>
>
>> From my reading on the RETURNING phrase, you can only return values from
>> the table into which you are doing the INSERT. Not any other table or view
>> which might be referenced.
>>
>>
> This is correct; and I am curious on the use case that requires
> otherwise.
>

A weird one might be where the data available ("s") in the CTE is in
English measure (feet, miles, etc.) and the OP wants to insert the
equivalent metric value ("computing") into the table, but needs to return
the English value to the caller (why?). He does not want to put the English
measure into the table itself, just to be able to return it, and does not
want to do a reverse conversion. As I said, just a weird thought, from an
effervescent fount of weird thoughts - me. Or perhaps what he is storing in
the table is a one-way hash of a password, and he wants to return the
clear-text password to the caller? Hmm, that is almost reasonable. I'll
need to be more on guard.



>
> David J.
>
>


-- 
He's about as useful as a wax frying pan.

10 to the 12th power microphones = 1 Megaphone

Maranatha! <><
John McKown


Re: [GENERAL] Missing table from in INSERT RETURNING

2015-02-17 Thread David G Johnston
On Tue, Feb 17, 2015 at 1:08 PM, John McKown [via PostgreSQL] <
ml-node+s1045698n5838306...@n5.nabble.com> wrote:

> I haven't seen any one else reply. I don't know if you've gotten a
> solution. But the following seemed to work for me:
>
>
mine apparently got bounced...



> WITH serie AS (
> select s, s*10 as computing
> from generate_series(1,10) as s
> )
> INSERT INTO test_insert_returning (some_value)
> SELECT computing
> FROM serie
> RETURNING gid, some_value;
>

or, "RETURNING some_value / 10"
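
Spelled out against the OP's table, that suggestion would read as follows
(a sketch; it leans on the fact that computing is exactly s*10, so the
division recovers s):

WITH serie AS (
    SELECT s, s*10 AS computing
    FROM generate_series(1,10) AS s
)
INSERT INTO test_insert_returning (some_value)
SELECT computing
FROM serie
RETURNING gid, some_value / 10 AS s;

RETURNING accepts arbitrary expressions over the newly inserted row, which
is why this works even though serie itself is not visible there.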


> From my reading on the RETURNING phrase, you can only return values from
> the table into which you are doing the INSERT. Not any other table or view
> which might be referenced.
>
>
This is correct; and I am curious on the use case that requires otherwise.

David J.





Re: [GENERAL] Missing table from in INSERT RETURNING

2015-02-17 Thread John McKown
On Tue, Feb 17, 2015 at 2:07 PM, John McKown 
wrote:

> I haven't seen any one else reply. I don't know if you've gotten a
> solution. But the following seemed to work for me:
>
> WITH serie AS (
> select s, s*10 as computing
> from generate_series(1,10) as s
> )
> INSERT INTO test_insert_returning (some_value)
> SELECT computing
> FROM serie
> RETURNING gid, some_value;
>
> From my reading on the RETURNING phrase, you can only return values from
> the table into which you are doing the INSERT. Not any other table or view
> which might be referenced.
>

OOPS, I see what I did wrong. You wanted the "s" value from serie and my
example showed the other value from serie. My apologies. Why not insert the
"s" value into a third column in "test_insert_returning"? That is:

CREATE TABLE test_insert_returning(
 gid SERIAL,
 s_temp integer,
 some_value int
);

WITH serie AS (
  SELECT s, s*10 as computing
  FROM generate_series(1,10) as s
)
INSERT INTO test_insert_returning(some_value, s_temp)
SELECT computing, s
FROM serie
RETURNING gid, s_temp
;

You end up getting what is desired, at the cost of a "junk" column in your
table.
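
An alternative sketch that avoids the junk column, assuming the computed
values are unique (here they are, since computing = s*10): run the INSERT
inside a data-modifying CTE and join its RETURNING output back to serie:

WITH serie AS (
    SELECT s, s*10 AS computing
    FROM generate_series(1,10) AS s
), ins AS (
    INSERT INTO test_insert_returning (some_value)
    SELECT computing
    FROM serie
    RETURNING gid, some_value
)
SELECT ins.gid, serie.s
FROM ins
JOIN serie ON serie.computing = ins.some_value;

Data-modifying CTEs exist since 9.1, so this should work on the OP's 9.3.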



>
> On Tue, Feb 17, 2015 at 10:18 AM, Rémi Cura  wrote:
>
>> Hello dear list,
>> I would appreciate some help on a small matter that has been bothering me
>> for a long time:
>>
>>
>> CREATE TABLE test_insert_returning(
>> gid SERIAL
>> ,some_value int
>> );
>> WITH serie AS (
>> select s, s*10 as computing
>> from generate_series(1,10) as s
>> )
>> INSERT INTO test_insert_returning (some_value)
>> SELECT computing
>> FROM serie
>> RETURNING gid, serie.s
>>
>> doesn't work.
>>
>> The only workaround I found was to create a plpgsql function that does an
>> individual insert
>> so that :
>> WITH serie AS (
>> select s, s*10 as computing
>> from generate_series(1,10) as s
>> )
>> SELECT serie.s, my_inserting_function(computing)
>> FROM serie ;
>>
>> But it is very annoying and potentially bad for performance because many
>> inserts may be fired.
>>
>> Any solution?
>> (postgres 9.3)
>> Cheers,
>> Rémi-C
>>
>>
>
>
> --
> He's about as useful as a wax frying pan.
>
> 10 to the 12th power microphones = 1 Megaphone
>
> Maranatha! <><
> John McKown
>



-- 
He's about as useful as a wax frying pan.

10 to the 12th power microphones = 1 Megaphone

Maranatha! <><
John McKown


Re: [GENERAL] Missing table from in INSERT RETURNING

2015-02-17 Thread John McKown
I haven't seen any one else reply. I don't know if you've gotten a
solution. But the following seemed to work for me:

WITH serie AS (
select s, s*10 as computing
from generate_series(1,10) as s
)
INSERT INTO test_insert_returning (some_value)
SELECT computing
FROM serie
RETURNING gid, some_value;

From my reading on the RETURNING phrase, you can only return values from
the table into which you are doing the INSERT. Not any other table or view
which might be referenced.

On Tue, Feb 17, 2015 at 10:18 AM, Rémi Cura  wrote:

> Hello dear list,
> I would appreciate some help on a small matter that has been bothering me
> for a long time:
>
>
> CREATE TABLE test_insert_returning(
> gid SERIAL
> ,some_value int
> );
> WITH serie AS (
> select s, s*10 as computing
> from generate_series(1,10) as s
> )
> INSERT INTO test_insert_returning (some_value)
> SELECT computing
> FROM serie
> RETURNING gid, serie.s
>
> doesn't work.
>
> The only workaround I found was to create a plpgsql function that does an
> individual insert
> so that :
> WITH serie AS (
> select s, s*10 as computing
> from generate_series(1,10) as s
> )
> SELECT serie.s, my_inserting_function(computing)
> FROM serie ;
>
> But it is very annoying and potentially bad for performance because many
> inserts may be fired.
>
> Any solution?
> (postgres 9.3)
> Cheers,
> Rémi-C
>
>


-- 
He's about as useful as a wax frying pan.

10 to the 12th power microphones = 1 Megaphone

Maranatha! <><
John McKown


Re: [GENERAL] Starting new cluster from base backup

2015-02-17 Thread Guillaume Drolet
This provides part of the answer to my previous post, from the 9.4 docs
(although I'm running 9.3, I guess the second sentence in the paragraph
applies to my case):

Tablespaces will in plain format by default be backed up to the same path
they have on the server, unless the option --tablespace-mapping is used.
Without this option, running a plain format base backup on the same host as
the server will not work if tablespaces are in use, because the backup
would have to be written to the same directory locations as the original
tablespaces.

I know the -T option is not available in 9.3. Is there another way to
circumvent the problem and still be able to back up using -X stream and
plain format when tablespaces have been created elsewhere?

Thanks!
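
For reference, the 9.4 invocation would look something like the following
(the new tablespace location on F: is hypothetical; -T simply relocates
each tablespace directory in the plain-format copy):

pg_basebackup -D "F:\208376PT\db" -X stream -l "208376PT17022015" -U postgres -P -T "E:\Data\Database=F:\208376PT\tblspc"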

2015-02-17 9:54 GMT-05:00 Guillaume Drolet :

> Adrian: thanks for this information.
>
> I tried running pg_basebackup in plain format with option -X stream
> (pg_basebackup -D "F:\208376PT\db" -X stream -l "208376PT17022015" -U
> postgres -P) but I got the message:
>
> pg_basebackup: directory "E:\Data\Database" exists but is not empty
>
> I created a tablespace using CREATE TABLESPACE at the location mentioned
> in the message. According to what I read online about this, this message is
> issued when a tablespace was created under PGDATA. In my case, only the
> directory junction pointing to my tablespace (on a different drive than
> PGDATA) exists under PGDATA, not the tablespace itself.
>
> The only way I can run pg_basebackup with WAL files is with option -Ft and
> -X fetch. I'd much prefer using plain mode since my 670 GB tablespace takes
> a lot of time to extract when tarred. Is there another way to approach
> this?
>
> Thanks.
>
>
>
> 2015-02-16 15:21 GMT-05:00 Adrian Klaver :
>
> On 02/16/2015 11:31 AM, Guillaume Drolet wrote:
>>
>>> Dear listers,
>>>
>>> I want to move a cluster from one machine to another. I used
>>> pg_basebackup to create an archive and copied/extracted it over the old
>>> PGDATA location on the new machine (the server was stopped). If I start
>>> pgsql I get these messages in my log file:
>>>
>>> 2015-02-16 14:29:12 EST LOG:  database system was interrupted; last
>>> known up at 2015-02-07 06:31:41 EST
>>> 2015-02-16 14:29:12 EST LOG:  invalid checkpoint record
>>> 2015-02-16 14:29:12 EST FATAL:  could not locate required checkpoint
>>> record
>>> 2015-02-16 14:29:12 EST HINT:  If you are not restoring from a backup,
>>> try removing the file "E:/data/backup_label".
>>> 2015-02-16 14:29:12 EST LOG:  startup process (PID 3148) exited with
>>> exit code 1
>>> 2015-02-16 14:29:12 EST LOG:  aborting startup due to startup process
>>> failure
>>>
>>> I assume this is due to the fact the pg_xlog folder is empty (this is
>>> how pg_basebackup makes it in the archive) and that I haven't supplied a
>>> recovery.conf file with the restore restore_command = 'copy
>>> "E:\\archivedir\\%f" "%p"', and the archived WAL files.
>>>
>>> Now my question is: is this a correct way of moving a cluster between
>>> machines?
>>>
>>> If yes, what WAL files will I put in pg_xlog? Would I have needed to
>>> copy those that were in the old machine right after the base backup?
>>>
>>> If this is not the right way to do it, what is the best way?
>>>
>>
>> http://www.postgresql.org/docs/9.3/interactive/app-pgbasebackup.html
>>
>> "
>> -X method
>> --xlog-method=method
>>
>> Includes the required transaction log files (WAL files) in the
>> backup. This will include all transaction logs generated during the backup.
>> If this option is specified, it is possible to start a postmaster directly
>> in the extracted directory without the need to consult the log archive,
>> thus making this a completely standalone backup.
>> "
>>
>> There is more under -X, so I would read the whole section.
>>
>>
>>> Thanks a lot for your help,
>>>
>>> Guillaume
>>>
>>>
>>>
>>
>> --
>> Adrian Klaver
>> adrian.kla...@aklaver.com
>>
>
>


[GENERAL] Missing table from in INSERT RETURNING

2015-02-17 Thread Rémi Cura
Hello dear list,
I would appreciate some help on a small matter that has been bothering me
for a long time:


CREATE TABLE test_insert_returning(
gid SERIAL
,some_value int
);
WITH serie AS (
select s, s*10 as computing
from generate_series(1,10) as s
)
INSERT INTO test_insert_returning (some_value)
SELECT computing
FROM serie
RETURNING gid, serie.s

doesn't work.

The only workaround I found was to create a plpgsql function that does an
individual insert
so that :
WITH serie AS (
select s, s*10 as computing
from generate_series(1,10) as s
)
SELECT serie.s, my_inserting_function(computing)
FROM serie ;

But it is very annoying and potentially bad for performance because many
inserts may be fired.

Any solution?
(postgres 9.3)
Cheers,
Rémi-C


Re: [GENERAL] Issue dumping schema using readonly user

2015-02-17 Thread Adrian Klaver
On 02/17/2015 03:11 AM, Daniel LaMotte wrote:
> The point is that the user seems to have permissions to view the schema 
> but not the table data.  If I can interactively inspect the table schema 
> but pg_dump is unable to dump the table schema, that seems like a bug.
> 
> The account explicitly is not allowed access to the table's data but 
> seems to be able to access the schema (at least interactively).
> 
> Does that make more sense?

Whenever I deal with permissions I am reminded of the old Abbott and Costello
skit: Who's on first?
To put it another way, it can be confusing. So let me walk through it below:

aklaver@panda:~> /usr/local/pgsql93/bin/psql  -d test -h localhost -U aklaver 
-p 5452 

# Changed to int, because I do not have uuid on this instance.
test=# create table mytable_is_readonly (id int primary key, text text not 
null);
CREATE TABLE
test=# create table mytable_is_not_readonly (id int primary key, text text not 
null);
CREATE TABLE


# Need to change because aklaver does not have role creation privileges.
test=# \c - postgres
You are now connected to database "test" as user "postgres".
test=# create user readonly with password 'readonly';
CREATE ROLE

# Change back and create data
test=# \c - aklaver
You are now connected to database "test" as user "aklaver".

test=# insert into mytable_is_readonly values (1, 'test1'), (2, 'test2'), (3, 
'test3');
INSERT 0 3
test=# insert into mytable_is_not_readonly values (1, 'test1'), (2, 'test2'), 
(3, 'test3');
INSERT 0 3

# Show who owns the tables
test=> \dt+ mytable_is_readonly
                          List of relations
 Schema |        Name         | Type  |  Owner  | Size  | Description
--------+---------------------+-------+---------+-------+-------------
 public | mytable_is_readonly | table | aklaver | 16 kB |
(1 row)

test=> \dt+ mytable_is_not_readonly
                            List of relations
 Schema |          Name           | Type  |  Owner  | Size  | Description
--------+-------------------------+-------+---------+-------+-------------
 public | mytable_is_not_readonly | table | aklaver | 16 kB |
(1 row)


# As aklaver GRANT SELECT to readonly.
test=> grant select on mytable_is_readonly to readonly;
GRANT

# Show permissions for tables.
test=> \dp mytable_is_readonly
                                  Access privileges
 Schema |        Name         | Type  |    Access privileges     | Column access privileges
--------+---------------------+-------+--------------------------+--------------------------
 public | mytable_is_readonly | table | aklaver=arwdDxt/aklaver +|
        |                     |       | readonly=r/aklaver       |
(1 row)

test=> \dp mytable_is_not_readonly
                                 Access privileges
 Schema |          Name           | Type  | Access privileges | Column access privileges
--------+-------------------------+-------+-------------------+--------------------------
 public | mytable_is_not_readonly | table |                   |
(1 row)

# As aklaver select data.
test=> select * from mytable_is_readonly ;
 id | text
----+-------
  1 | test1
  2 | test2
  3 | test3
(3 rows)

test=> select * from mytable_is_not_readonly ;
 id | text
----+-------
  1 | test1
  2 | test2
  3 | test3
(3 rows)


# As readonly role select data.
test=> \c - readonly
Password for user readonly: 
You are now connected to database "test" as user "readonly".
test=> select * from mytable_is_readonly ;
 id | text
----+-------
  1 | test1
  2 | test2
  3 | test3
(3 rows)

test=> select * from mytable_is_not_readonly ;
ERROR:  permission denied for relation mytable_is_not_readonly

# Dump with readonly role
aklaver@panda:~> /usr/local/pgsql93/bin/pg_dump -U readonly test --schema-only 
--table=mytable_is_readonly -p 5452 > readonly.sql
aklaver@panda:~> /usr/local/pgsql93/bin/pg_dump -U readonly test --schema-only 
--table=mytable_is_not_readonly -p 5452 > not_readonly.sql
pg_dump: [archiver (db)] query failed: ERROR:  permission denied for relation 
mytable_is_not_readonly
pg_dump: [archiver (db)] query was: LOCK TABLE public.mytable_is_not_readonly 
IN ACCESS SHARE MODE

# Dump with table owner
aklaver@panda:~> /usr/local/pgsql93/bin/pg_dump -U aklaver test --schema-only 
--table=mytable_is_readonly -p 5452 > readonly.sql
aklaver@panda:~> /usr/local/pgsql93/bin/pg_dump -U aklaver test --schema-only 
--table=mytable_is_not_readonly -p 5452 > not_readonly.sql


So the error you are seeing is correct. You gave the readonly role a specific 
permission,
select, for mytable_is_readonly. You did not do the same for 
mytable_is_not_readonly so only the 
table owner has access to it, in this case aklaver. Per the docs:

http://www.postgresql.org/docs/9.3/interactive/sql-grant.html

SELECT

Allows SELECT from any column, or the specific columns listed, of the 
specified table, view, or sequence. Also allows the use of COPY TO. This 
privilege is also needed to reference existing column values in UPDATE or 
DELETE.
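
Under the setup above, the minimal fix for the failing dump is simply to
grant the missing privilege as the table owner; a quick sketch:

test=> \c - aklaver
You are now connected to database "test" as user "aklaver".
test=> grant select on mytable_is_not_readonly to readonly;
GRANT

After that, the second pg_dump invocation shown earlier should succeed
for the readonly role as well.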

Re: [GENERAL] Starting new cluster from base backup

2015-02-17 Thread Guillaume Drolet
Adrian: thanks for this information.

I tried running pg_basebackup in plain format with option -X stream
(pg_basebackup -D "F:\208376PT\db" -X stream -l "208376PT17022015" -U
postgres -P) but I got the message:

pg_basebackup: directory "E:\Data\Database" exists but is not empty

I created a tablespace using CREATE TABLESPACE at the location mentioned in
the message. According to what I read online about this, this message is
issued when a tablespace was created under PGDATA. In my case, only the
directory junction pointing to my tablespace (on a different drive than
PGDATA) exists under PGDATA, not the tablespace itself.

The only way I can run pg_basebackup with WAL files is with option -Ft and
-X fetch. I'd much prefer using plain mode since my 670 GB tablespace takes
a lot of time to extract when tarred. Is there another way to approach
this?

Thanks.



2015-02-16 15:21 GMT-05:00 Adrian Klaver :

> On 02/16/2015 11:31 AM, Guillaume Drolet wrote:
>
>> Dear listers,
>>
>> I want to move a cluster from one machine to another. I used
>> pg_basebackup to create an archive and copied/extracted it over the old
>> PGDATA location on the new machine (the server was stopped). If I start
>> pgsql I get these messages in my log file:
>>
>> 2015-02-16 14:29:12 EST LOG:  database system was interrupted; last
>> known up at 2015-02-07 06:31:41 EST
>> 2015-02-16 14:29:12 EST LOG:  invalid checkpoint record
>> 2015-02-16 14:29:12 EST FATAL:  could not locate required checkpoint
>> record
>> 2015-02-16 14:29:12 EST HINT:  If you are not restoring from a backup,
>> try removing the file "E:/data/backup_label".
>> 2015-02-16 14:29:12 EST LOG:  startup process (PID 3148) exited with
>> exit code 1
>> 2015-02-16 14:29:12 EST LOG:  aborting startup due to startup process
>> failure
>>
>> I assume this is due to the fact the pg_xlog folder is empty (this is
>> how pg_basebackup makes it in the archive) and that I haven't supplied a
>> recovery.conf file with the restore restore_command = 'copy
>> "E:\\archivedir\\%f" "%p"', and the archived WAL files.
>>
>> Now my question is: is this a correct way of moving a cluster between
>> machines?
>>
>> If yes, what WAL files will I put in pg_xlog? Would I have needed to
>> copy those that were in the old machine right after the base backup?
>>
>> If this is not the right way to do it, what is the best way?
>>
>
> http://www.postgresql.org/docs/9.3/interactive/app-pgbasebackup.html
>
> "
> -X method
> --xlog-method=method
>
> Includes the required transaction log files (WAL files) in the backup.
> This will include all transaction logs generated during the backup. If this
> option is specified, it is possible to start a postmaster directly in the
> extracted directory without the need to consult the log archive, thus
> making this a completely standalone backup.
> "
>
> There is more under -X, so I would read the whole section.
>
>
>> Thanks a lot for your help,
>>
>> Guillaume
>>
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>