Will there ever be support for Row Level Security on Materialized Views?

2018-08-14 Thread Ken Tanzer
Hi.  My question is similar to one that was asked but apparently never
answered a couple of years ago on this list. (
https://www.postgresql.org/message-id/20160112023419.GA30965%40moraine.isi.edu
)

Basically, I'm wondering whether materialized views are likely to ever
support row-level security.

I've been toying with using MVs to address some performance issues, until
coming across this issue.

As explanation of the use case, I've got a DB that is shared across
multiple sites ("segments").  Each site is a different DB user, and access
to data is controlled by a function (has_segment_access).

So for a table, and a view, we use:

CREATE POLICY tbl_client_sites ON tbl_client FOR ALL USING
(has_segment_access(agency_segment_code));

CREATE VIEW client AS SELECT * FROM tbl_client WHERE
has_segment_access(agency_segment_code);

I'd been thinking I could just materialize some of these views, but then
they are frozen data, like a table.  But unlike a table, you can't apply
RLS.  Hence the email.  Any insight or development crystal-balling
appreciated.  Thanks!
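The closest workaround I can think of (a rough, untested sketch; the MV and
view names here are made up) would be to materialize the unfiltered data,
revoke direct access to it, and put an ordinary filtering view on top, since
views run with their owner's privileges:

CREATE MATERIALIZED VIEW tbl_client_mv AS SELECT * FROM tbl_client;
REVOKE ALL ON tbl_client_mv FROM PUBLIC;
CREATE VIEW client_cached AS
    SELECT * FROM tbl_client_mv WHERE has_segment_access(agency_segment_code);
GRANT SELECT ON client_cached TO PUBLIC;  -- or to the individual site roles

But native RLS on the MV itself would obviously be nicer.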

Ken


-- 
AGENCY Software
A Free Software data system
By and for non-profits
http://agency-software.org/
https://demo.agency-software.org/client
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list to learn more about AGENCY or follow the
discussion.


Re: Immutable function WAY slower than Stable function?

2018-08-14 Thread Michal

On 2018.08.07 18:10, Tom Lane wrote:

I've had a to-do item to rewrite and improve the SQL function cache
mechanism for a long time, but I hadn't thought it was high priority.
Maybe it should be.

In the meantime, I think you could dodge the issue by converting either
level of function into plpgsql.  Or just be careful about inline-ability
of convenience wrappers.


I now remember I stumbled across this same problem last year when I 
needed an immutable unaccent() function for the purpose of creating a 
functional index. I simply wrapped unaccent(), which is stable, in an 
immutable custom function (accepting the need to recreate the index 
in case the unaccent rules change). I can't remember the specifics but 
I was also surprised to learn that query execution plans were not 
optimized at all when the function was immutable and in some cases they 
were even slower than when the function was stable, quite contrary to 
all logic and to what the docs say. So I think it would be great if this 
issue were resolved in Postgres.
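For reference, the wrapper I mean was essentially this (a sketch from memory;
the function and table names are made up, and it assumes the unaccent
extension is installed in schema public):

CREATE FUNCTION immutable_unaccent(text) RETURNS text
LANGUAGE sql IMMUTABLE STRICT AS
$$ SELECT public.unaccent('public.unaccent', $1) $$;

CREATE INDEX ON some_table (immutable_unaccent(some_column));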




Re: How to revoke privileged from PostgreSQL's superuser

2018-08-14 Thread Bruce Momjian
On Fri, Aug 10, 2018 at 04:06:40PM -0400, Benedict Holland wrote:
> I also would take Bruce's comment with a massive grain of salt. Everything
> that everyone does on a database is logged somewhere, assuming proper
> logging. Now do you have the person-power to go through gigs of plain text
> logs to find out if someone is doing something shady... that is a question
> for your management team. Also, if you suspect someone of doing something
> shady, you should probably revoke their admin rights.

Agreed, the best way to limit the risk of undetected DBA removal of data
is secure auditing --- I should have mentioned that.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +



Re: Duplicating data folder without tablespace, for read access

2018-08-14 Thread Jack Cushman
Thanks for such quick and helpful answers! It sounds like my plan is probably
better avoided, but if it turns out to be necessary, you all gave me some helpful
avenues and things to look out for.

Best,
Jack

On Tue, Aug 14, 2018 at 1:06 PM, Stephen Frost  wrote:

> Greetings,
>
> * Jack Cushman (jcush...@gmail.com) wrote:
> > I have a large database of text, with a 600GB table and a 100GB table
> > connected by a join table. They both see occasional updates throughout
> the
> > week. Once a week I want to "cut a release," meaning I will clone just
> the
> > 100GB table and copy it to a "release" server for read-only web access.
>
> My general recommendation to people who are thinking about something
> like this is to use their restore-tests as a way to stage things (you
> are testing your backups by doing a restore, right?) and then copy over
> the results.
>
> > My procedure would be:
> >
> > - keep the 600GB table on a separate tablespace
> > - cleanly stop postgres on both servers
> > - copy the data folder to the release server
> > - delete pg_tblspc/* on the release server
> > - start postgres on both servers
>
> So, instead of that procedure, it'd be:
>
> - Back up the database as per usual
> - Restore the database somewhere
> - Run some sanity checks on the restored database
> - go in and drop the table and sanitize anything else necessary
> - Shut down the database and copy it into place
> - OR take a new backup of the sanitized database and then restore it
>   into place
>
> Much cleaner, and tests your backup/restore process.
>
> Alternatively, you could just track changes to the "main" database using
> triggers into an audit log and then replay the changes made to the 100GB
> table into the other database.
>
> > In local testing this seems to work -- the release server works fine,
> and I
> > only get an error message if I try to access the missing tables, which is
> > expected. But are there reasons this is going to bite me if I try it in
> > production? I'm hoping it helps that (a) I'm only doing read access, (b)
> I
> > can cleanly stop both servers before cutting a release, and (c) I'm not
> > worried about losing data, since it's just an access copy.
>
> Still, it's a hacked up and not entirely proper PG database which will
> likely lead to confusion- maybe you won't be confused, but I strongly
> suspect others looking at it will be, and you might run into other
> issues too along the lines of what Tom mentioned (background jobs
> failing and such).
>
> Thanks!
>
> Stephen
>


Re: How to revoke privileged from PostgreSQL's superuser

2018-08-14 Thread Bruce Momjian
On Fri, Aug 10, 2018 at 10:34:26PM -0400, Rui DeSousa wrote:
> With that logic then you should use flat files for encrypted data and
> unencrypted data.  It’s what was done many moons ago; and its unstructured
> haphazard approach gave rise to RDBMS systems.
> 
> You cannot say that encrypted data does not belong in a RDBMS system… that is
> just false.  Hell, I’ve stored blobs in a RDMBS system which could have easily
> been stored in a different system if need be.  It’s a design choice and what
> fits the application and budget needs.
> 
> Encrypting sensitive information and storing in the database is a valid use
> case.  It may be only a few columns that are encrypted or a complete document
> (blob); there is no need to increase complexity just to move those columns out
> of the database.

I think the point is that it makes sense to store data encrypted in a
database only if it is a payload on another piece of non-encrypted data.
You can't easily index, restrict, or join encrypted data, so it doesn't
have a huge value alone in a database.
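
In other words, keep the lookup/join columns in the clear and treat the
encrypted part as an opaque payload, e.g. (a sketch using pgcrypto; table and
column names are made up):

CREATE EXTENSION IF NOT EXISTS pgcrypto;

CREATE TABLE secure_note (
    customer_id bigint NOT NULL,                    -- plain: indexable, joinable, restrictable
    created_at  timestamptz NOT NULL DEFAULT now(), -- plain
    note_enc    bytea NOT NULL                      -- encrypted payload, opaque to the database
);

INSERT INTO secure_note (customer_id, note_enc)
VALUES (42, pgp_sym_encrypt('sensitive text', 'secret-key'));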

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +



Re: Best Practices for Extensions, limitations and recommended use for monitoring

2018-08-14 Thread Alvar Freude
Hi Tom,

> On 14.08.2018 at 17:07, Tom Lane  wrote:
> 
> I'd say that's generally deprecated.  Per the documentation, you can write
> CREATE USER commands in an extension script if you like, but the roles
> won't be considered to "belong" to the extension, and won't be dropped
> when it is.  This is mainly because roles are cluster-wide but extensions
> are only local to one database.  Consider for example what will happen
> when somebody tries to load your extension into more than one database in
> the same cluster.

Yes, that’s one of the problems ;-)

There may be two solutions:
a) check whether the roles exist and skip CREATE ROLE if they already exist,
otherwise create them.
b) check whether the roles exist and throw an exception if they don't.

For b), the administrator has to create the roles and give them the 
correct rights (e.g. posemo_admin has NOLOGIN, …). With a), the extension can 
set the rights itself, but it may be surprising that it creates users.

I tend toward variant a) (create the roles), because it needs less 
administrative effort.
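
A sketch of variant a), roughly what I have in mind (it assumes my two role
names; posemo would of course also need LOGIN or a password set by the admin):

DO $$
BEGIN
   IF NOT EXISTS (SELECT 1 FROM pg_roles WHERE rolname = 'posemo_admin') THEN
      CREATE ROLE posemo_admin NOLOGIN;
   END IF;
   IF NOT EXISTS (SELECT 1 FROM pg_roles WHERE rolname = 'posemo') THEN
      CREATE ROLE posemo LOGIN;
   END IF;
END
$$;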


And there is variant c): don’t build an extension at all … but the more I think about 
it, the more reasonable it looks to use the extension mechanism.


> For the particular use-case you're describing here, maybe it'd make
> sense to grant privileges to the predefined role pg_monitor, which
> exists in v10 and up.  v11 has some additional predefined roles that
> perhaps would fit in, too.

The posemo_admin user (owner of all functions etc.) is a member of pg_monitor (in 
v10+; and superuser below v10). But only the posemo user (an unprivileged user 
with no rights other than EXECUTE on the functions) should have execute 
rights.


Ciao
  Alvar

--
Alvar C.H. Freude | https://alvar.a-blast.org | a...@alvar-freude.de
https://blog.alvar-freude.de/
https://www.wen-waehlen.de/









Re: upgrading from pg 9.3 to 10

2018-08-14 Thread Martín Marqués
El 14/08/18 a las 14:44, Edmundo Robles escribió:
> Is safe  to upgrade from pg 9.3 to pg 10 directly using pg_upgrade or
> is better upgrade, with pg_upgrade,  from  9.3 -> 9.4 ->9.5 -> 9.6 -> 10.

That is quite a jump, but not impossible with pg_upgrade.

I'd recommend testing with a clone of the server to verify that it works
properly (not only pg_upgrade, but your application with the new version
of postgres). Also, time the maintenance window you'll need and check for
things to be aware of, like extensions which need to be upgraded and might
break the upgrade.

Now, if you are going to jump to 9.4 first, I'd recommend using pglogical
after getting to 9.4 and upgrading straight from 9.4 to 10 (always after
testing your application against 10).

Regards,

-- 
Martín Marqués  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services



Re: pg_basebackup failed to read a file

2018-08-14 Thread Joshua D. Drake

On 08/14/2018 09:14 AM, Tom Lane wrote:

> It'd be nice to have a more coherent theory about what needs to be copied
> or not, and not fail on files that could simply be ignored.  Up to now
> we've resisted having any centrally defined knowledge of what can be
> inside a PG data directory, but maybe that bullet needs to be bitten.


This is not the first time, nor the second, that this issue has arisen. I 
would think that a coherent theory, or at least a semi-coherent one, would 
be pretty easy to settle on. Granted, we can't reasonably know what is going 
on under base/, but directly under the root of PGDATA we know *exactly* 
which files should and should not be in there.


JD

--
Command Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc
***  A fault and talent of mine is to tell it exactly how it is.  ***
PostgreSQL centered full stack support, consulting and development.
Advocate: @amplifypostgres || Learn: https://postgresconf.org
* Unless otherwise stated, opinions are my own.   *




Re: Duplicating data folder without tablespace, for read access

2018-08-14 Thread Peter J. Holzer
On 2018-08-14 11:57:38 -0400, Jack Cushman wrote:
> I have a large database of text, with a 600GB table and a 100GB table 
> connected
> by a join table. They both see occasional updates throughout the week. Once a
> week I want to "cut a release," meaning I will clone just the 100GB table and
> copy it to a "release" server for read-only web access.
> 
> My procedure would be:
> 
> - keep the 600GB table on a separate tablespace
> - cleanly stop postgres on both servers
> - copy the data folder to the release server
> - delete pg_tblspc/* on the release server
> - start postgres on both servers

If you copy the whole database anyway before deleting the tablespace:

Why don't you just drop the 600 GB table on the release server?

hp

-- 
   _  | Peter J. Holzer    | we build much bigger, better disasters now
|_|_) |                    | because we have much more sophisticated
| |   | h...@hjp.at        | management tools.
__/   | http://www.hjp.at/ | -- Ross Anderson




Re: pg_basebackup failed to read a file

2018-08-14 Thread Stephen Frost
Greetings,

* Ron (ronljohnso...@gmail.com) wrote:
> On 08/14/2018 11:14 AM, Tom Lane wrote:
> >Mike Cardwell  writes:
> >>pg_basebackup: could not get write-ahead log end position from server:
> >>ERROR:  could not open file "./postgresql.conf~": Permission denied
> >>Now, I know what this error means. There was a root owned file at
> >>"/var/lib/pgsql/10/data/postgresql.conf~" which contained an old
> >>version of our postgres config and was not readable by the postgres
> >>user. I'll delete this file and try again. However, in the mean time: I
> >>feel like it would be useful for pg_basebackup to check that it has
> >>read access to all of the existing files in the source directory at the
> >>start, before it begins it's copy.
> >That seems like a pretty expensive thing to do, if there are lots of
> >files ... and you'd still end up failing, so it's not moving the ball
> >very far.
> 
> Why is checking a bunch of file permissions anywhere close to being as
> expensive as transferring 1.5TB over a WAN link?

One could argue that the cost would be borne by everyone who is using
pg_basebackup and not just those users who are transferring 1.5TB over a
WAN link.

That said, pgbackrest always builds a full manifest by scanning all of
the directories, tablespaces, files, etc, and I can't recall anyone ever
complaining about it.  Certainly, failing fast would be better than
failing after a lot of time has been spent.

Thanks!

Stephen




Re: pg_upgrade with large pg_largeobject table

2018-08-14 Thread Tom Lane
Mate Varga  writes:
>> Using the large-object API for things that tend to not actually be very
>> large (which they aren't, if you've got hundreds of millions of 'em) is an
>> antipattern, I'm afraid.

> I know :( So maybe I need to do some refactoring in the application and
> inline the lobs. The data is binary data with very high entropy (encrypted
> stuff). Would you recommend bytea for that?

Yeah, it'd likely be less of a pain-in-the-neck than text.  You would need
some sort of encoding anyway to deal with zero bytes and sequences that
aren't valid per your encoding, so you might as well go with bytea's
solution.
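
A rough sketch of such a migration, assuming the LO reference lives in an oid
column (hypothetical table/column names, needs PG 9.4+ for lo_get, and you'd
want to batch the UPDATE for 250M rows):

ALTER TABLE docs ADD COLUMN payload bytea;
UPDATE docs SET payload = lo_get(lob_oid);   -- pull each large object into the bytea column
SELECT lo_unlink(lob_oid) FROM docs;         -- then remove the large objects themselves
ALTER TABLE docs DROP COLUMN lob_oid;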

regards, tom lane



Re: pg_upgrade with large pg_largeobject table

2018-08-14 Thread Mate Varga
Thanks.

> You mean 250M rows in pg_largeobject itself, or 250M large objects
(that is, 250M rows in pg_largeobject_metadata)?

250M large objects.

> Are you sure you're using a 64-bit build of pg_dump?

 file /usr/lib/postgresql/10/bin/pg_dump
/usr/lib/postgresql/10/bin/pg_dump: ELF 64-bit LSB  shared object, x86-64,
version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux
2.6.24, BuildID[sha1]=4ff6bf0dd628d151481256723994dd50785e54e9, stripped
Also it really eats all the memory.

> Using the large-object API for things that tend to not actually be very
large (which they aren't, if you've got hundreds of millions of 'em)  is an
antipattern, I'm afraid.

I know :( So maybe I need to do some refactoring in the application and
inline the lobs. The data is binary data with very high entropy (encrypted
stuff). Would you recommend bytea for that?

Thanks again.
Mate



On Tue, Aug 14, 2018 at 7:58 PM Tom Lane  wrote:

> Mate Varga  writes:
> > We have a PSQL 9.5 DB with 16G physical RAM and ~ 1 TB data mostly stored
> > in the pg_largeobject system table. This table has 250M rows at the
> moment.
>
> You mean 250M rows in pg_largeobject itself, or 250M large objects
> (that is, 250M rows in pg_largeobject_metadata)?
>
> > This command
> > fails because of an OOM. Logs say:
> > pg_dump: [archiver (db)] query failed: out of memory for query result
> > pg_dump: [archiver (db)] query was: SELECT oid, (SELECT rolname FROM
> > pg_catalog.pg_roles WHERE oid = lomowner) AS rolname, lomacl, NULL AS
> > rlomacl, NULL AS initlomacl, NULL AS initrlomacl  FROM
> > pg_largeobject_metadata
>
> The selected rows shouldn't be very wide here, so I'm a bit surprised
> you are hitting OOM, even if there are 250M rows to read.  Are you sure
> you're using a 64-bit build of pg_dump?
>
> > Would it help if we'd inline these largeobjects as e.g. text cols
> (instead
> > of storing them as lobs)?
>
> Yes, because then they'd not have their own ownership and permissions
> for pg_dump to keep track of.  (You might want bytea instead of text,
> depending on what the data is.)
>
> Using the large-object API for things that tend to not actually be very
> large (which they aren't, if you've got hundreds of millions of 'em)
> is an antipattern, I'm afraid.  You could get away with it before we
> added per-largeobject permissions, but now it's a problem for pg_dump.
>
> regards, tom lane
>


Re: Duplicating data folder without tablespace, for read access

2018-08-14 Thread Jerry Sievers
Tom Lane  writes:

> Jack Cushman  writes:
>
>> Now here's the situation where I want to do what Christophe said not to do:
>> :)
>
>> I have a large database of text, with a 600GB table and a 100GB table
>> connected by a join table. They both see occasional updates throughout the
>> week. Once a week I want to "cut a release," meaning I will clone just the
>> 100GB table and copy it to a "release" server for read-only web access.
>
>> My procedure would be:
>
>> - keep the 600GB table on a separate tablespace
>> - cleanly stop postgres on both servers
>> - copy the data folder to the release server
>> - delete pg_tblspc/* on the release server
>> - start postgres on both servers
>
>> In local testing this seems to work -- the release server works fine, and I
>> only get an error message if I try to access the missing tables, which is
>> expected. But are there reasons this is going to bite me if I try it in
>> production?
>
> Sooner or later, autovacuum is going to try to touch the missing table.
> Maybe you can limp along with autovac failing in the background,
> especially if this is a throwaway copy of the DB with little or no
> write activity.  But I'm not sure how well that will work.
>
> Probably, the worst-case scenario would be for the database to shut
> down because it thinks it's in XID wraparound trouble.  But it's hard
> to see how you get to that without lots of write traffic, so maybe
> you can get away with this.
>
>   regards, tom lane
>

Pg is pretty good about letting you drop objects whose backend storage bits
are missing, such as when we snapshot large DBs on the SAN, which does *not*
capture our NVMe temp tablespace...

I'd say find the objects that are supposed to be in the missing tablespace by
scraping the catalogs, and drop them.  You'll see warnings or whatever in the
logs that file $foo was not found, but the objects are then gone nonetheless.

Presumably this avoids the risk of eventual wraparound conditions etc. due to
chronic autovac failure on tables left in the catalogs.
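
A rough sketch of the catalog scrape (the tablespace name is a placeholder):

SELECT c.relname, c.relkind
  FROM pg_class c
  JOIN pg_tablespace t ON t.oid = c.reltablespace
 WHERE t.spcname = 'missing_tablespace';
-- then DROP TABLE / DROP INDEX the relations it lists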

FWIW


-- 
Jerry Sievers
Postgres DBA/Development Consulting
e: postgres.consult...@comcast.net
p: 312.241.7800



Re: pg_upgrade with large pg_largeobject table

2018-08-14 Thread Tom Lane
Mate Varga  writes:
> We have a PSQL 9.5 DB with 16G physical RAM and ~ 1 TB data mostly stored
> in the pg_largeobject system table. This table has 250M rows at the moment.

You mean 250M rows in pg_largeobject itself, or 250M large objects
(that is, 250M rows in pg_largeobject_metadata)?

> This command
> fails because of an OOM. Logs say:
> pg_dump: [archiver (db)] query failed: out of memory for query result
> pg_dump: [archiver (db)] query was: SELECT oid, (SELECT rolname FROM
> pg_catalog.pg_roles WHERE oid = lomowner) AS rolname, lomacl, NULL AS
> rlomacl, NULL AS initlomacl, NULL AS initrlomacl  FROM
> pg_largeobject_metadata

The selected rows shouldn't be very wide here, so I'm a bit surprised
you are hitting OOM, even if there are 250M rows to read.  Are you sure
you're using a 64-bit build of pg_dump?

> Would it help if we'd inline these largeobjects as e.g. text cols (instead
> of storing them as lobs)?

Yes, because then they'd not have their own ownership and permissions
for pg_dump to keep track of.  (You might want bytea instead of text,
depending on what the data is.)

Using the large-object API for things that tend to not actually be very
large (which they aren't, if you've got hundreds of millions of 'em)
is an antipattern, I'm afraid.  You could get away with it before we
added per-largeobject permissions, but now it's a problem for pg_dump.

regards, tom lane



upgrading from pg 9.3 to 10

2018-08-14 Thread Edmundo Robles
Is it safe to upgrade from pg 9.3 to pg 10 directly using pg_upgrade, or
is it better to upgrade step by step with pg_upgrade, from 9.3 -> 9.4 -> 9.5 -> 9.6 -> 10?

--


pg_upgrade with large pg_largeobject table

2018-08-14 Thread Mate Varga
Hi.

Thanks in advance for any advice.

We have a PSQL 9.5 DB with 16G physical RAM and ~ 1 TB data mostly stored
in the pg_largeobject system table. This table has 250M rows at the moment.
We're trying to upgrade this to 10.x with an in-place upgrade. The command
I'm using is:

sudo -u postgres /usr/lib/postgresql/10/bin/pg_upgrade -b
/usr/lib/postgresql/9.5/bin -B /usr/lib/postgresql/10/bin -p 5433 -P 5434
-d /etc/postgresql/9.5/test -D /etc/postgresql/10/test --link

This command fails because of an OOM. Logs say:

command: "/usr/lib/postgresql/10/bin/pg_dump" --host
/var/log/postgresql/pg_upgradecluster-9.5-10-test.wjNi --port 5433
--username postgres --schema-only --quote-all-identifiers --binary-upgrade
--format=custom  --file="pg_upgrade_dump_31803.custom" 'dbname=tolven' >>
"pg_upgrade_dump_31803.log" 2>&1
pg_dump: [archiver (db)] query failed: out of memory for query result
pg_dump: [archiver (db)] query was: SELECT oid, (SELECT rolname FROM
pg_catalog.pg_roles WHERE oid = lomowner) AS rolname, lomacl, NULL AS
rlomacl, NULL AS initlomacl, NULL AS initrlomacl  FROM
pg_largeobject_metadata

Any ideas about how we could make this work on a server with only 16G RAM
(possibly a few dozen gigabytes of swap)? Thanks.
Would it help if we'd inline these largeobjects as e.g. text cols (instead
of storing them as lobs)?
Thanks,
Mate


Re: pg_basebackup failed to read a file

2018-08-14 Thread Dimitri Maziuk
On 08/14/2018 12:14 PM, Ron wrote:

> Why is checking a bunch of file permissions anywhere close to being as
> expensive as transferring 1.5TB over a WAN link?

Normally it shouldn't be but I recently had postgres create ~13M .snap
files and just opendir() took longer than anyone would care to wait...
so it can be just as expensive.

One could just as easily ask why create mode 600 files in places where
they don't belong.

-- 
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu





Re: pg_basebackup failed to read a file

2018-08-14 Thread Ron




On 08/14/2018 11:14 AM, Tom Lane wrote:

> Mike Cardwell  writes:
>> pg_basebackup: could not get write-ahead log end position from server:
>> ERROR:  could not open file "./postgresql.conf~": Permission denied
>> Now, I know what this error means. There was a root owned file at
>> "/var/lib/pgsql/10/data/postgresql.conf~" which contained an old
>> version of our postgres config and was not readable by the postgres
>> user. I'll delete this file and try again. However, in the mean time: I
>> feel like it would be useful for pg_basebackup to check that it has
>> read access to all of the existing files in the source directory at the
>> start, before it begins it's copy.
>
> That seems like a pretty expensive thing to do, if there are lots of
> files ... and you'd still end up failing, so it's not moving the ball
> very far.


Why is checking a bunch of file permissions anywhere close to being as 
expensive as transferring 1.5TB over a WAN link?


--
Angular momentum makes the world go 'round.



Re: Duplicating data folder without tablespace, for read access

2018-08-14 Thread Stephen Frost
Greetings,

* Jack Cushman (jcush...@gmail.com) wrote:
> I have a large database of text, with a 600GB table and a 100GB table
> connected by a join table. They both see occasional updates throughout the
> week. Once a week I want to "cut a release," meaning I will clone just the
> 100GB table and copy it to a "release" server for read-only web access.

My general recommendation to people who are thinking about something
like this is to use their restore-tests as a way to stage things (you
are testing your backups by doing a restore, right?) and then copy over
the results.

> My procedure would be:
> 
> - keep the 600GB table on a separate tablespace
> - cleanly stop postgres on both servers
> - copy the data folder to the release server
> - delete pg_tblspc/* on the release server
> - start postgres on both servers

So, instead of that procedure, it'd be:

- Back up the database as per usual
- Restore the database somewhere
- Run some sanity checks on the restored database
- go in and drop the table and sanitize anything else necessary
- Shut down the database and copy it into place
- OR take a new backup of the sanitized database and then restore it
  into place

Much cleaner, and tests your backup/restore process.

Alternatively, you could just track changes to the "main" database using
triggers into an audit log and then replay the changes made to the 100GB
table into the other database.
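
Very roughly, the trigger idea looks like this (a sketch only; all names are
hypothetical, and you'd also want to capture enough key data to replay updates
and deletes):

CREATE TABLE small_table_audit (
    changed_at timestamptz NOT NULL DEFAULT now(),
    operation  text        NOT NULL,
    row_data   jsonb       NOT NULL
);

CREATE FUNCTION small_table_audit_fn() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    IF TG_OP = 'DELETE' THEN
        INSERT INTO small_table_audit (operation, row_data) VALUES (TG_OP, to_jsonb(OLD));
    ELSE
        INSERT INTO small_table_audit (operation, row_data) VALUES (TG_OP, to_jsonb(NEW));
    END IF;
    RETURN NULL;  -- AFTER trigger, return value is ignored
END;
$$;

CREATE TRIGGER small_table_audit_trg
    AFTER INSERT OR UPDATE OR DELETE ON small_table
    FOR EACH ROW EXECUTE PROCEDURE small_table_audit_fn();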

> In local testing this seems to work -- the release server works fine, and I
> only get an error message if I try to access the missing tables, which is
> expected. But are there reasons this is going to bite me if I try it in
> production? I'm hoping it helps that (a) I'm only doing read access, (b) I
> can cleanly stop both servers before cutting a release, and (c) I'm not
> worried about losing data, since it's just an access copy.

Still, it's a hacked up and not entirely proper PG database which will
likely lead to confusion- maybe you won't be confused, but I strongly
suspect others looking at it will be, and you might run into other
issues too along the lines of what Tom mentioned (background jobs
failing and such).

Thanks!

Stephen




Re: Duplicating data folder without tablespace, for read access

2018-08-14 Thread Tom Lane
Jack Cushman  writes:
> Now here's the situation where I want to do what Christophe said not to do:
> :)

> I have a large database of text, with a 600GB table and a 100GB table
> connected by a join table. They both see occasional updates throughout the
> week. Once a week I want to "cut a release," meaning I will clone just the
> 100GB table and copy it to a "release" server for read-only web access.

> My procedure would be:

> - keep the 600GB table on a separate tablespace
> - cleanly stop postgres on both servers
> - copy the data folder to the release server
> - delete pg_tblspc/* on the release server
> - start postgres on both servers

> In local testing this seems to work -- the release server works fine, and I
> only get an error message if I try to access the missing tables, which is
> expected. But are there reasons this is going to bite me if I try it in
> production?

Sooner or later, autovacuum is going to try to touch the missing table.
Maybe you can limp along with autovac failing in the background,
especially if this is a throwaway copy of the DB with little or no
write activity.  But I'm not sure how well that will work.

Probably, the worst-case scenario would be for the database to shut
down because it thinks it's in XID wraparound trouble.  But it's hard
to see how you get to that without lots of write traffic, so maybe
you can get away with this.

regards, tom lane



Re: pg_basebackup failed to read a file

2018-08-14 Thread Tom Lane
Mike Cardwell  writes:
> pg_basebackup: could not get write-ahead log end position from server:
> ERROR:  could not open file "./postgresql.conf~": Permission denied

> Now, I know what this error means. There was a root owned file at
> "/var/lib/pgsql/10/data/postgresql.conf~" which contained an old
> version of our postgres config and was not readable by the postgres
> user. I'll delete this file and try again. However, in the mean time: I
> feel like it would be useful for pg_basebackup to check that it has
> read access to all of the existing files in the source directory at the
> start, before it begins it's copy.

That seems like a pretty expensive thing to do, if there are lots of
files ... and you'd still end up failing, so it's not moving the ball
very far.

More generally, this seems closely related to bug #14999 [1]
which concerned pg_rewind's behavior in the face of unexpected file
permissions within the data directory.  We ended up not doing anything
about that except documenting it, which I wasn't very satisfied with,
but the costs of doing better seemed to exceed the benefits.

It'd be nice to have a more coherent theory about what needs to be copied
or not, and not fail on files that could simply be ignored.  Up to now
we've resisted having any centrally defined knowledge of what can be
inside a PG data directory, but maybe that bullet needs to be bitten.

regards, tom lane

[1] 
https://www.postgresql.org/message-id/flat/20180104200633.17004.16377%40wrigleys.postgresql.org



Duplicating data folder without tablespace, for read access

2018-08-14 Thread Jack Cushman
Hi --

I'm wondering whether, in my specific situation, it would be safe to copy a
database cluster's data folder, and bring up the copy for read access,
without copying a tablespace linked from it. My situation (described below)
involves a database with a 100GB table and a 600GB table where I want to
routinely clone just the 100GB table for web access.

---

For context, the last discussion I've found is from 2013, in this blog post
from Christophe Pettus and response from Tom Lane:

https://thebuild.com/blog/2013/03/10/you-cannot-recover-from-the-loss-of-a-tablespace/
https://www.postgresql.org/message-id/19786.1367378...@sss.pgh.pa.us

In that discussion, Christophe summarized the situation this way:

> I would not count on it.  And if it works 100% reliably now, it might not
on a future version of PostgreSQL.

> As Josh Berkus pointed out to me off-list, there are two competing
definitions of the term "recover" in use here:

> 1. In my blog post, the definition of "recover" was "bring up the
database without having unusually extensive knowledge of PostgreSQL's
internals."
> 2. For Tom, the definition of "recover" is "bring up the database if you
have appropriate knowledge of PostgreSQL's internals."

> You can't recover from the loss of a tablespace per definition #1.  You
can per definition #2.

> I'd strongly suggest that relying on definition #2, while absolutely
correct, is a poor operational decision for most users.

https://www.postgresql.org/message-id/FABAC7F1-3172-4B5D-8E56-0B3C579980EC%40thebuild.com

---

Now here's the situation where I want to do what Christophe said not to do:
:)

I have a large database of text, with a 600GB table and a 100GB table
connected by a join table. They both see occasional updates throughout the
week. Once a week I want to "cut a release," meaning I will clone just the
100GB table and copy it to a "release" server for read-only web access.

My procedure would be:

- keep the 600GB table on a separate tablespace
- cleanly stop postgres on both servers
- copy the data folder to the release server
- delete pg_tblspc/* on the release server
- start postgres on both servers

In local testing this seems to work -- the release server works fine, and I
only get an error message if I try to access the missing tables, which is
expected. But are there reasons this is going to bite me if I try it in
production? I'm hoping it helps that (a) I'm only doing read access, (b) I
can cleanly stop both servers before cutting a release, and (c) I'm not
worried about losing data, since it's just an access copy.

Alternatives I've considered:

- I could pg_dump and restore, but the 100GB table has lots of indexes and
I'd rather not have to reindex on the release server each week.
- I could replicate with pglogical and use some sort of blue-green setup on
the release server to cut a release, but this adds a lot of moving parts,
especially to deal with schema migrations.

Thanks for any advice you might have!

-Jack


Re: Best Practices for Extensions, limitations and recommended use for monitoring

2018-08-14 Thread Tom Lane
Alvar Freude  writes:
> I have a question about best practices writing PostgreSQL extensions. Is it 
> OK to write extensions which create users and grant/revoke rights on the 
> created functions to this users?

I'd say that's generally deprecated.  Per the documentation, you can write
CREATE USER commands in an extension script if you like, but the roles
won't be considered to "belong" to the extension, and won't be dropped
when it is.  This is mainly because roles are cluster-wide but extensions
are only local to one database.  Consider for example what will happen
when somebody tries to load your extension into more than one database in
the same cluster.

> Is it possible to add options to CREATE EXTENSION by the extension itself 
> e.g. to make user names configurable?

No.

For the particular use-case you're describing here, maybe it'd make
sense to grant privileges to the predefined role pg_monitor, which
exists in v10 and up.  v11 has some additional predefined roles that
perhaps would fit in, too.

regards, tom lane



Re: What is the use case for UNLOGGED tables

2018-08-14 Thread Ravi Krishna
> 
> I use them for "ELT" oriented processing where the final results get stored 
> on permanently logged tables but I want to manipulate tables while 
> transforming from the original input.

Yeah, I see the use case.  Basically raw data -> rolled-up data -> final 
results in normal tables.
However, flipping a normal table between LOGGED and UNLOGGED does not seem to be 
a use case.




Re: What is the use case for UNLOGGED tables

2018-08-14 Thread David G. Johnston
On Tuesday, August 14, 2018, Ravi Krishna  wrote:

> Then what exactly is the benefit of UNLOGGED tables, unless we have
> permanent unlogged tables for disposable
> data.
>

I use them for "ELT" oriented processing where the final results get stored
on permanently logged tables but I want to manipulate tables while
transforming from the original input.
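
Roughly like this (just a sketch; all table and column names are made up):

CREATE UNLOGGED TABLE staging AS
    SELECT id, raw_value FROM raw_input;            -- disposable intermediate data
UPDATE staging SET raw_value = trim(raw_value);     -- arbitrary transformation step(s)
INSERT INTO final_results (id, clean_value)         -- final results land in a normal, logged table
    SELECT id, raw_value FROM staging;
DROP TABLE staging;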

David J.


What is the use case for UNLOGGED tables

2018-08-14 Thread Ravi Krishna



I am trying to understand the use case for UNLOGGED tables in PG.  I am 
specifically talking about normal tables which get switched to UNLOGGED 
for a specific purpose like bulk loading, because generating WAL during the 
load makes no sense, even taking into consideration that this generally 
breaks replication. 
Typically the work flow is:

1. Turn off logging on the table.
2. Load a large set of data using some bulk load tool.
3. Turn logging back on.

Depending on the RDBMS, some make (3) less painful and some make it painful by 
rebuilding all indexes.  DB2 is very good here, as the only
penalty you pay is in step (2), when the table is locked exclusively.

With PG, step (3) basically rewrites the entire table.  Imagine a 500 million row 
table which got an additional 30 million rows via a batch load. Does PG
rewrite all 530 million rows?  Then what exactly is the benefit of UNLOGGED 
tables, unless we keep permanent unlogged tables for disposable
data?
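
(For reference, the PostgreSQL commands I have in mind for steps 1-3 are
roughly these; the table name and file path are made up:)

ALTER TABLE big_table SET UNLOGGED;                           -- step 1
COPY big_table FROM '/path/to/batch.csv' WITH (FORMAT csv);   -- step 2: bulk load
ALTER TABLE big_table SET LOGGED;                             -- step 3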

thanks.






Re: Copying data from a CSV file into a table dynamically

2018-08-14 Thread David G. Johnston
On Tuesday, August 14, 2018, Ruiqiang Chen  wrote:

> CSV file has no limitation of # of rows, excel max row is 2^20. Am I
> correct?
>

It's considered bad form to hijack threads in this manner.

Text files have no inherent limits.  You can check the Excel documentation
for the version you care about to learn its limits.

David J.


pg_basebackup failed to read a file

2018-08-14 Thread Mike Cardwell
Hi,

I was just setting up streaming replication for the first time. I ran
pg_basebackup on the slave. It copied 1.5TB of data. Then it errored
out with:

```
1498215035/1498215035 kB (100%), 1/1 tablespace
pg_basebackup: could not get write-ahead log end position from server:
ERROR:  could not open file "./postgresql.conf~": Permission denied
pg_basebackup: removing data directory "/var/lib/pgsql/10/data"
bash-4.2$
```

Now, I know what this error means. There was a root owned file at
"/var/lib/pgsql/10/data/postgresql.conf~" which contained an old
version of our postgres config and was not readable by the postgres
user. I'll delete this file and try again. However, in the meantime, I
feel like it would be useful for pg_basebackup to check that it has
read access to all of the existing files in the source directory at the
start, before it begins its copy. I'd like to submit this as a feature
request, but I'm struggling to find how to do that. So here I am... Can
anyone point me in the right direction?

Regards,

Mike



Re: Vacuum process waiting on BufferPin

2018-08-14 Thread Don Seiler
On Tue, Aug 14, 2018 at 8:58 AM, Vik Fearing 
wrote:

>
> There is an idle_in_transaction_session_timeout parameter to kill
> connections that are idle (in transaction) for too long.  It was
> implemented specifically for cases like this.


Thanks for reminding me of this. I'll probably look to set it, but I'll make
sure to point out that it is just a safety net to let DB maintenance run, and
that they should make sure their work is committed cleanly if they want to
keep it.

Don.

-- 
Don Seiler
www.seiler.us


Re: Copying data from a CSV file into a table dynamically

2018-08-14 Thread Adrian Klaver

On 08/14/2018 06:38 AM, pavan95 wrote:

Hi Adrian,

I tried to use
*"COPY postgres_log1 FROM '/tmp/abc/xyz/postgresql-`date --date="0 days ago"
+%Y-%m-%d`_*.csv' WITH csv;"*

But it resulted in an error. How to issue such that it is understandable by
psql?

And I am completely unaware of python & psycopg2. Anything which suits my
requirement is enough!!


Well, I was just using that as an example. If you are not familiar with 
Python then you would probably be better off using a language, and its 
associated Postgres adapter, that you are comfortable with.




Thanks in Advance

Regards,
Pavan



--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Copying data from a CSV file into a table dynamically

2018-08-14 Thread Ruiqiang Chen
A CSV file has no limitation on the number of rows; Excel's max row count is 2^20. Am I
correct?

On Tue, Aug 14, 2018 at 9:46 AM, Ron  wrote:

> On 08/14/2018 08:38 AM, pavan95 wrote:
>
>> Hi Adrian,
>>
>> I tried to use
>> *"COPY postgres_log1 FROM '/tmp/abc/xyz/postgresql-`date --date="0 days
>> ago"
>> +%Y-%m-%d`_*.csv' WITH csv;"*
>>
>> But it resulted in an error. How to issue such that it is understandable
>> by
>> psql?
>>
>> And I am completely unaware of python & psycopg2. Anything which suits my
>> requirement is enough!!
>>
>
> Why not:
> cat /tmp/abc/xyz/postgresql-`date --date="0 days ago"+%Y-%m-%d`_*.csv' | \
>psql YOURDB -c "COPY postgres_log1 FROM STDIN WITH csv;"
>
> --
> Angular momentum makes the world go 'round.
>
>


Re: Vacuum process waiting on BufferPin

2018-08-14 Thread Vik Fearing
On 14/08/18 15:26, Don Seiler wrote:
> On Tue, Aug 14, 2018 at 8:23 AM, Vick Khera  > wrote:
> 
> 
> The general fix is to never sit idle in transaction, but this
> specific case closing the cursor seems like it will also do it.
> 
> 
> Of course. I've let development know that they need to sort out why it's
> left that way.

There is an idle_in_transaction_session_timeout parameter to kill
connections that are idle (in transaction) for too long.  It was
implemented specifically for cases like this.
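For example (a sketch; pick a timeout that fits your maintenance windows):

ALTER SYSTEM SET idle_in_transaction_session_timeout = '15min';
SELECT pg_reload_conf();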
-- 
Vik Fearing  +33 6 46 75 15 36
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support



Re: Copying data from a CSV file into a table dynamically

2018-08-14 Thread Ron

On 08/14/2018 08:38 AM, pavan95 wrote:

Hi Adrian,

I tried to use
*"COPY postgres_log1 FROM '/tmp/abc/xyz/postgresql-`date --date="0 days ago"
+%Y-%m-%d`_*.csv' WITH csv;"*

But it resulted in an error. How to issue such that it is understandable by
psql?

And I am completely unaware of python & psycopg2. Anything which suits my
requirement is enough!!


Why not:
cat /tmp/abc/xyz/postgresql-`date --date="0 days ago" +%Y-%m-%d`_*.csv | \
   psql YOURDB -c "COPY postgres_log1 FROM STDIN WITH csv;"

--
Angular momentum makes the world go 'round.



Re: Copying data from a CSV file into a table dynamically

2018-08-14 Thread pavan95
Hi Adrian,

I tried to use 
*"COPY postgres_log1 FROM '/tmp/abc/xyz/postgresql-`date --date="0 days ago"
+%Y-%m-%d`_*.csv' WITH csv;"*

But it resulted in an error. How can I issue this so that it is understandable by
psql?

And I am completely unaware of python & psycopg2. Anything which suits my
requirement is enough!!

Thanks in Advance

Regards,
Pavan



--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html



Re: Vacuum process waiting on BufferPin

2018-08-14 Thread Don Seiler
On Tue, Aug 14, 2018 at 8:23 AM, Vick Khera  wrote:
>
>
> The general fix is to never sit idle in transaction, but this specific
> case closing the cursor seems like it will also do it.
>

Of course. I've let development know that they need to sort out why it's
left that way. But if they're also NOT closing cursors, that seems like
another bad practice to correct.

Would commit/rollback automatically close cursors opened in that
transaction?

Don.

-- 
Don Seiler
www.seiler.us


Re: Vacuum process waiting on BufferPin

2018-08-14 Thread Vick Khera
On Tue, Aug 14, 2018 at 9:21 AM, Don Seiler  wrote:

> On Mon, Aug 13, 2018 at 5:55 PM, Vick Khera  wrote:
>
>> On Mon, Aug 13, 2018 at 5:19 PM, Don Seiler  wrote:
>>
>>>
>>> I don't quite follow this. What circumstances would lead to this
>>> situation?
>>>
>>
>> BEGIN WORK;
>> DECLARE CURSOR ... ;
>> FETCH ...;  -- for some number of fetches, which does not reach the end
>> of the cursor.
>>
>> then just sit there idle, without having closed the cursor or fetching
>> anything more.
>>
>
> So the fix in that case would be to ensure that they CLOSE the cursors
> when done with them?
>
>
The general fix is to never sit idle in transaction, but this specific case
closing the cursor seems like it will also do it.
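
That is, finishing the earlier example (with a hypothetical cursor name):

CLOSE my_cursor;   -- releases the pin held by the open cursor
COMMIT;            -- or ROLLBACK; either way, don't sit idle in transaction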


Re: Vacuum process waiting on BufferPin

2018-08-14 Thread Don Seiler
On Mon, Aug 13, 2018 at 5:55 PM, Vick Khera  wrote:

> On Mon, Aug 13, 2018 at 5:19 PM, Don Seiler  wrote:
>
>>
>> I don't quite follow this. What circumstances would lead to this
>> situation?
>>
>
> BEGIN WORK;
> DECLARE CURSOR ... ;
> FETCH ...;  -- for some number of fetches, which does not reach the end of
> the cursor.
>
> then just sit there idle, without having closed the cursor or fetching
> anything more.
>

So the fix in that case would be to ensure that they CLOSE the cursors when
done with them?

Don.

-- 
Don Seiler
www.seiler.us


Re: Copying data from a CSV file into a table dynamically

2018-08-14 Thread Adrian Klaver

On 08/14/2018 06:10 AM, pavan95 wrote:

Hi all,

I am well versed with the COPY command for copying the contents of a csv
file into a table. I am used to the below mentioned command:

* COPY postgres_log1 FROM '/tmp/abc/xyz/postgresql-2018-08-14_00.csv'
WITH csv;*

But in the location "/tmp/abc/xyz/" daily a file(with header
postgresql-2018-08-14_00.csv) will be generated.

So how can I automate this task instead of manually specifying the file
name, it has to look for the filename with current_date at the time of
copying into table.  How can I achieve this?


COPY FROM can do so from a program:

https://www.postgresql.org/docs/10/static/sql-copy.html

"PROGRAM

A command to execute. In COPY FROM, the input is read from standard 
output of the command, and in COPY TO, the output is written to the 
standard input of the command.


Note that the command is invoked by the shell, so if you need to 
pass any arguments to shell command that come from an untrusted source, 
you must be careful to strip or escape any special characters that might 
have a special meaning for the shell. For security reasons, it is best 
to use a fixed command string, or at least avoid passing any user input 
in it.

"

So maybe create a program that does the file look up and then sends the 
data to stdout for consumption by COPY.
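
For example, something along these lines might work for the date-based file
name (a sketch; it assumes exactly one matching file per day, and note that
PROGRAM runs on the server and requires superuser privileges):

COPY postgres_log1
FROM PROGRAM 'cat /tmp/abc/xyz/postgresql-$(date +%Y-%m-%d)_*.csv'
WITH (FORMAT csv);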


OR

Just create a program/script that does it all, find the file and invoke 
COPY on the file. I do similar things using Python and psycopg2.




Any suggestions or modifications are most welcome.

Regards,
Pavan



--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Copying data from a CSV file into a table dynamically

2018-08-14 Thread pavan95
Hi all,

I am well versed with the COPY command for copying the contents of a csv
file into a table. I am used to the below mentioned command:

* COPY postgres_log1 FROM '/tmp/abc/xyz/postgresql-2018-08-14_00.csv'
WITH csv;*

But in the location "/tmp/abc/xyz/" a file with a name like
postgresql-2018-08-14_00.csv is generated daily. 

So how can I automate this task instead of manually specifying the file
name, it has to look for the filename with current_date at the time of
copying into table.  How can I achieve this?

Any suggestions or modifications are most welcome.

Regards,
Pavan



--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html



Re: Uncaught PHP Exception Doctrine\DBAL\Exception\UniqueConstraintViolationException: "An exception occurred while executing 'UPDATE

2018-08-14 Thread rob stone
Hello,

On Tue, 2018-08-14 at 07:48 +, Jarosław Torbicki wrote:
> Hello,   
> I used PostgreSQL 9.3 but I executed upgrade few days ago.
> Now, I am using 10.4 PostgreSQL and:
> doctrine/annotations v1.2.7
> doctrine/cache   v1.4.2  
> doctrine/collections v1.3.0
> doctrine/common  v2.7.3
> doctrine/dbalv2.5.13
> doctrine/doctrine-bundle v1.5.2 
> doctrine/doctrine-cache-bundle   v1.0.1
> doctrine/inflector   v1.0.1
> doctrine/instantiator1.0.5 
> doctrine/lexer   v1.0.1 
> doctrine/orm v2.5.14
>  
>  
> I have a problem with ManyToOne relation.
> For example, I have main object with three child and when I execute
> on main object
> $em = $this->getDoctrine()->getManager();
> $em->merge($data);
> $em->flush();
> I sometimes get ERROR message like:
> Uncaught PHP Exception
> Doctrine\DBAL\Exception\UniqueConstraintViolationException: "An
> exception occurred while executing 'UPDATE
>  
> I get this ERRROR message not for all main object and not for all
> child. For example, first update child object is ok but in second I
> get error.
>  
> SQL prepared by doctrine:
> UPDATE child_table SET id = ?, name = ?, object_name = ?, object_size
> = ? WHERE id = ?' with params ["2", "test Name object 2", "test name
> object 2", "1234", 3]
>  
> In this sql the doctrine tries update object with id=3 using data
> from object with id = 2.
>  
> This problem didn’t occur before executing upgrade to 10.4 version.
>  
> Can you help me and give some tips?
>  
>  
> Pozdrawiam,
> __
> Jarosław Torbicki
> Analityk
>  


I haven't a clue what "doctrine" does but your update statement makes
no sense. It effectively is doing this based on your list of params:-

UPDATE child_table SET id = 2, name = 'test Name object 2', object_name
= 'test Name object 2', object_size = 1234 WHERE id = 3;

Why would you change the value of the id column?
If the column "id" is a primary key, you'll no doubt pull a constraint
violation.

Maybe by altering your postgresql.conf file to log all statements you might
be able to follow the sequence of statements until the constraint
violation occurs, and thus make the necessary changes to the "doctrine"
code.
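
Something along these lines, for example (one way to do it; requires superuser
and a config reload):

ALTER SYSTEM SET log_statement = 'all';
SELECT pg_reload_conf();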

HTH,
Rob 




Uncaught PHP Exception Doctrine\DBAL\Exception\UniqueConstraintViolationException: "An exception occurred while executing 'UPDATE

2018-08-14 Thread Jarosław Torbicki
Hello,
I was using PostgreSQL 9.3 but I performed an upgrade a few days ago.
Now, I am using 10.4 PostgreSQL and:
doctrine/annotations v1.2.7
doctrine/cache   v1.4.2
doctrine/collections v1.3.0
doctrine/common  v2.7.3
doctrine/dbalv2.5.13
doctrine/doctrine-bundle v1.5.2
doctrine/doctrine-cache-bundle   v1.0.1
doctrine/inflector   v1.0.1
doctrine/instantiator1.0.5
doctrine/lexer   v1.0.1
doctrine/orm v2.5.14


I have a problem with a ManyToOne relation.
For example, I have a main object with three children, and when I execute, on the main 
object,
$em = $this->getDoctrine()->getManager();
$em->merge($data);
$em->flush();
I sometimes get an ERROR message like:
Uncaught PHP Exception 
Doctrine\DBAL\Exception\UniqueConstraintViolationException: "An exception 
occurred while executing 'UPDATE

I get this ERROR message not for all main objects and not for all children. For 
example, the first update of a child object is OK, but on the second I get the error.

SQL prepared by doctrine:
UPDATE child_table SET id = ?, name = ?, object_name = ?, object_size = ? WHERE 
id = ?' with params ["2", "test Name object 2", "test name object 2", "1234", 3]

In this SQL, Doctrine tries to update the object with id=3 using data from the object 
with id=2.

This problem did not occur before the upgrade to version 10.4.

Can you help me and give me some tips?


Regards,
__
Jarosław Torbicki
Analyst



Best Practices for Extensions, limitations and recommended use for monitoring

2018-08-14 Thread Alvar Freude
Hi all,

I have a question about best practices for writing PostgreSQL extensions. Is it OK 
to write extensions which create users and grant/revoke rights on the created 
functions to these users? Is it possible for the extension itself to add options to 
CREATE EXTENSION, e.g. to make user names configurable?

I’m the author of Posemo (PostgreSQL Secure monitoring), a new PostgreSQL 
monitoring framework for monitoring everything in PostgreSQL in a secure way 
with an unprivileged user and a simple way to add new checks. (In development 
and available here under PostgreSQL license: 
https://github.com/alvar-freude/Posemo)

For each check, the Posemo framework creates a PostgreSQL function with the 
SECURITY DEFINER option, which can (only) be called by an unprivileged user 
who can only call the functions and doesn't need access to any data, system 
tables etc.

A simple alive check looks like this (the author of the check only has to write 
some metadata and the SQL, “SELECT true” in this example):

   CREATE FUNCTION posemo.alive() RETURNS boolean
   LANGUAGE sql STABLE SECURITY DEFINER
   SET search_path TO posemo, pg_temp
   AS $$
 SELECT true
   $$;

   ALTER FUNCTION posemo.alive() OWNER TO posemo_admin;
   REVOKE ALL ON FUNCTION posemo.alive() FROM PUBLIC;
   GRANT  ALL ON FUNCTION posemo.alive() TO   posemo;



For more complex checks I create types; some checks can write something into 
tables.

   CREATE TYPE posemo.transactions_type AS (
database  character varying(64),
xact_commit   bigint,
xact_rollback bigint
   );
   ALTER TYPE posemo.transactions_type OWNER TO posemo_admin;


At the moment I have an installation program which creates everything in a 
database/schema that can be chosen by the admin. Of course, it’s also 
possible to load a dump. Checks may have some tables to store some data (e.g. 
the writeable check inserts a row and deletes old rows).

The Posemo application calls the functions (depending on a local config file) and 
builds the result.


Whether it’s easier to use an extension or to install Posemo via the installation 
script depends on the environment.

At the moment I think that it would be a good idea to have (at least) the 
option to use Posemo as an extension, but the need for two users is a possible 
drawback: can and should the extension create the necessary users, and can an 
extension change the owner of its own objects?
Or is this a bad idea because it breaks everything?

Changing grants later is probably not a good idea, because that is not part of a 
dump.


What about local checks of the user? It may use the “requires” parameter and 
should work without any problems.


Thank you for all comments! :-)



Ciao
  Alvar



--
Alvar C.H. Freude | https://alvar.a-blast.org | a...@alvar-freude.de
https://blog.alvar-freude.de/
https://www.wen-waehlen.de/








Re: 'Identifier' columns

2018-08-14 Thread Laurenz Albe
David Favro wrote:
> A couple of questions about auto-assigned identifier columns, 
> forgive my ignorance, I'm used to other methods to create IDs...
> 
> 1. If creating a new application [i.e. no "legacy" reasons to do 
> anything] using PostgreSQL 10, when creating an "auto-assigned 
> integer ID" column, what are the advantages/disadvantages of using 
> the 'SERIAL' datatype [or equivalent explicitly created SEQUENCE w/ 
> nextval() used as default for column] versus the SQL-standard 
> 'integer GENERATED AS IDENTITY'?  All other things being equal, it 
> would seem a no-brainer to follow the standard.

Absolutely.
Use GENERATED ALWAYS AS IDENTITY.

> 2. When using the SQL-standard 'integer GENERATED AS IDENTITY' 
> column, after inserting a row, what is the recommended method to 
> find the ID of the just-inserted row?  Is there no SQL-standard way?  
> The docs seem to imply (without explicitly stating) that a SEQUENCE 
> is used behind the scenes hence 'currval()' could be used, but I 
> didn't see in the docs any mention of what the underlying sequence's 
> name is, or how to specify a name.  Perhaps 'lastval()' would work, 
> but not in all cases and in any event it has a sloppy feel to me.

The best way is to use the (non-standard) RETURNING clause:

INSERT ... RETURNING id;
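
For example (table name made up):

CREATE TABLE item (
    id   integer GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    name text NOT NULL
);

INSERT INTO item (name) VALUES ('widget') RETURNING id;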

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com