Re: [GENERAL] BDR: cannot drop database even after parting the node

2015-09-18 Thread Craig Ringer
On 17 September 2015 at 06:15, Florin Andrei  wrote:

> Then, from node1, I've parted node2 like this:
>
> SELECT bdr.bdr_part_by_node_names('{node2}');
>
> And then also on node1 I've parted node1 like this:
>
> SELECT bdr.bdr_part_by_node_names('{node1}');

The second step is not necessary. In fact, we should detect that case
and raise an ERROR, since it doesn't make sense to issue
bdr_part_by_node_names from a node that has already left; it can have no
effect.

> Now I want to start over with a clean slate, so I want to drop the bdrdemo
> database on node1.

That's because it's still a live BDR instance, just with one node.

> But I can't:
>
> postgres=# DROP DATABASE bdrdemo;
> ERROR:  database "bdrdemo" is being accessed by other users
> DETAIL:  There is 1 other session using the database.
> postgres=# SELECT pid FROM pg_stat_activity where pid <> pg_backend_pid();
>   pid
> ---
>  10259
>  10260
> (2 rows)
>
> # ps ax | grep -e 10259 -e 10260 | grep -v grep
> 10259 ?  Ss  0:00 postgres: bgworker: bdr supervisor
> 10260 ?  Ss  0:00 postgres: bgworker: bdr db: bdrdemo
>
> If I kill those workers and then drop the database, the workers get
> respawned, and then the logs fill up with complaints from the workers that
> they can't find the bdrdemo database.
>
> Is there a way to stop BDR completely, so that those workers are laid to
> rest and never respawn?

You've made a good point. We cover shutting down and removing some
nodes, but not the case where you want to shut down and remove BDR from
a single remaining node.

I've been meaning to write a helper function for this for some time,
but other priorities keep intervening.

Here's the manual process:

BEGIN;
-- permit local-only changes without taking the global DDL lock or
-- replicating them to other nodes
SET LOCAL bdr.skip_ddl_locking = on;
SET LOCAL bdr.permit_unsafe_ddl_commands = on;
SET LOCAL bdr.skip_ddl_replication = on;
-- drop the label that marks the database as BDR-enabled
SECURITY LABEL FOR bdr ON DATABASE mydb IS NULL;
-- forget all nodes and their connections
DELETE FROM bdr.bdr_connections;
DELETE FROM bdr.bdr_nodes;
-- tell the workers the connection configuration changed
SELECT bdr.bdr_connections_changed();
COMMIT;

-- terminate BDR's per-database worker so it lets go of the database
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = current_database() AND application_name LIKE '%): perdb';

DROP EXTENSION bdr;

... then remove 'bdr' from shared_preload_libraries and restart PostgreSQL.
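
As a sanity check after the restart, re-running the query from your
original post should show that the BDR workers are gone:

SELECT pid FROM pg_stat_activity WHERE pid <> pg_backend_pid();
-- the supervisor and per-db bgworkers should no longer be listed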

> Basically, how do I reset BDR completely? It seems to retain the memory of
> the bdrdemo database somewhere.

Sort of. What happens in your example is that when you part the nodes,
they're separated and stop communicating, so your second part command
never reaches the remaining node. That's expected and normal, but we
should raise an error when it's attempted, to make it clearer to the
user what's going on.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Broken primary key after backup restore.

2015-09-18 Thread Guillaume Lelarge
On 18 Sept 2015 at 5:23 AM, "Adrian Klaver"  wrote:
>
> On 09/17/2015 05:37 PM, Michael Chau wrote:
>>
>> 1)
>>
>> In Production, I have a DB2 which is replicated partially using
Londiste
>> from DB1.
>>
>>
>> Well I think the above needs more explanation to help understand how the
>> DB2 backup got into this state and possibly prevent it in the future.
>>
>> A: So, the DB1 has several schemas in the database. We use Londiste to
>> replicate just one of the schemas to DB2.  The table in question is in
>> that schema. Backup is done on both DB1 and DB2.
>>
>> 2)
>> I make file-system backups nightly on both DBs.
>>
>> How is that done exactly?
>>
>> A: To backup:
>>
>> 1) pg_start_backup()
>> 2) tar up the files under the data directory
>> 3) pg_stop_backup()
>>
>> To restore on test server:
>>
>> 1) Just untar the tar ball, then start up Postgres. Of course the data
>> directory is empty beforehand.
>>
>> This has been working for almost 2 years without any problem until last
>> Monday. I remember that I just ran vacuum analyze that table on both DB1
>> and DB2  that morning. But, I don't think that it harms anything.
>
>
> Well it looks fairly straight forward, to me at least.
>

Am I missing something obvious? Because this looks to me like the wrong
way to do the restore. You need to apply the WAL files archived between
pg_start_backup and pg_stop_backup to get consistent data files.


[GENERAL] Delete trigger

2015-09-18 Thread Leif Jensen
   Hi,

   I am running PostgreSQL 9.3.4. I have a simple table with 3 fields:

CREATE TABLE devicegroup (
groupid integer NOT NULL,
ctrlid integer NOT NULL,
userid integer NOT NULL
);
ALTER TABLE ONLY devicegroup
ADD CONSTRAINT pk_devicegroup PRIMARY KEY (groupid, ctrlid, userid);

   I want to make sure that records are only deleted when all 3 fields are 
specified, so I tried to make a trigger:

CREATE TRIGGER userDev_trig INSTEAD OF DELETE ON admdevgrid FOR EACH STATEMENT
  EXECUTE PROCEDURE deleteUserDev();

which could check for NOT NULL on the 3 fields before actually doing the 
delete. Unfortunately this is not possible to do FOR EACH STATEMENT, and FOR 
EACH ROW would not give me the chance to check for NOT NULL.

   Any ideas ?

 Leif


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Delete trigger

2015-09-18 Thread Albe Laurenz
Leif Jensen wrote:
>If I do "DELETE FROM devicegroup WHERE groupid=1" I do not want to delete 
> anything. I only want to
> delete if I do "DELETE FROM devicegroup WHERE groupid=x AND ctrlid=y AND 
> userid=z". I don't want to let
> anyone delete more than 1 row at a time.

I can't think of a way to do that with a trigger.

I'd write a
  FUNCTION delete_devicegroup(groupid integer, ctrlid integer, userid integer)
RETURNS void CALLED ON NULL INPUT VOLATILE SECURITY DEFINER
that enables the user to delete a row and checks that all arguments
are NOT NULL.  The user doesn't get privileges to DELETE from the table 
directly.
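
A minimal sketch of what that could look like (untested):

CREATE FUNCTION delete_devicegroup(p_groupid integer, p_ctrlid integer,
                                   p_userid integer)
RETURNS void
CALLED ON NULL INPUT VOLATILE SECURITY DEFINER
LANGUAGE plpgsql AS $$
BEGIN
    -- refuse anything less than a fully qualified key
    IF p_groupid IS NULL OR p_ctrlid IS NULL OR p_userid IS NULL THEN
        RAISE EXCEPTION 'groupid, ctrlid and userid must all be given';
    END IF;
    DELETE FROM devicegroup
    WHERE groupid = p_groupid AND ctrlid = p_ctrlid AND userid = p_userid;
END;
$$;

-- users keep EXECUTE on the function (the default) but get no DELETE
-- privilege on the table itself
REVOKE ALL ON devicegroup FROM PUBLIC;

Since (groupid, ctrlid, userid) is the primary key, the function can never
delete more than one row.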

Yours,
Laurenz Albe


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Delete trigger

2015-09-18 Thread Leif Jensen
   Hello Laurenz,

   Thank you for your suggestion. I really want to avoid someone 
'accidentally' deleting too much by typing (programming) a not fully qualified 
DELETE ... statement. With your approach one would always have to use the 
delete function, but there would be no restriction on using the DELETE 
statement directly.

 Leif


- Original Message -
> Leif Jensen wrote:
> >If I do "DELETE FROM devicegroup WHERE groupid=1" I do not want to delete
> >anything. I only want to
> > delete if I do "DELETE FROM devicegroup WHERE groupid=x AND ctrlid=y AND
> > userid=z". I don't want to let
> > anyone delete more than 1 row at a time.
> 
> I can't think of a way to do that with a trigger.
> 
> I'd write a
>   FUNCTION delete_devicegroup(groupid integer, ctrlid integer, userid
>   integer)
> RETURNS void CALLED ON NULL INPUT VOLATILE SECURITY DEFINER
> that enables the user to delete a row and checks that all arguments
> are NOT NULL.  The user doesn't get privileges to DELETE from the table
> directly.
> 
> Yours,
> Laurenz Albe
> 
> 


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Delete trigger

2015-09-18 Thread Leif Jensen
   Hi Charles,

   If I do "DELETE FROM devicegroup WHERE group=1" I do not want to delete 
anything. I only want to delete if I do "DELETE FROM devicegroup WHERE 
groupid=x AND ctrlid=y AND userid=z". I don't wanna let anyone delete more than 
1 row at a time.

 Leif


- Original Message -
> Hello
> 
> Not sure I get it right, but all three fields are not nullable. So they will
> always have a value, which is what I understand by "are specified".
> What do you need the trigger for in that case?
> 
> Bye
> Charles
> 
> > -Original Message-
> > From: pgsql-general-ow...@postgresql.org
> > [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Leif Jensen
> > Sent: Friday, 18 September 2015 10:23
> > To: pgsql-general 
> > Subject: [GENERAL] Delete trigger
> > 
> >Hi,
> > 
> >I am running PostgreSQL 9.3.4. I have a simple table with 3 fields:
> > 
> > CREATE TABLE devicegroup (
> > groupid integer NOT NULL,
> > ctrlid integer NOT NULL,
> > userid integer NOT NULL
> > );
> > ALTER TABLE ONLY devicegroup
> > ADD CONSTRAINT pk_devicegroup PRIMARY KEY (groupid, ctrlid, userid);
> > 
> >I want to make sure that records are only deleted when all 3 fields are
> >specified, so I tried to make a trigger:
> > 
> > CREATE TRIGGER userDev_trig INSTEAD OF DELETE ON admdevgrid FOR EACH
> > STATEMENT
> >   EXECUTE PROCEDURE deleteUserDev();
> > 
> > which could check for NOT NULL on the 3 fields before actually doing the
> > delete. Unfortunately this is not possible to
> > do FOR EACH STATEMENT, and FOR EACH ROW would not give me the chance to
> > check for NOT NULL.
> > 
> >Any ideas ?
> > 
> >  Leif
> > 
> > 
> > --
> > Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> > To make changes to your subscription:
> > http://www.postgresql.org/mailpref/pgsql-general
> 
> 


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] clone_schema function

2015-09-18 Thread Melvin Davidson
Revised to add rules after all tables are created, to avoid an error where a
table referenced in a rule was not created yet.

Added copying of column statistics, with thanks to Marc Mamin for pointing
that out.


On Thu, Sep 17, 2015 at 12:06 PM, Marc Mamin  wrote:

>
> --
> *From:* Melvin Davidson [melvin6...@gmail.com]
> *Sent:* Thursday, 17 September 2015 17:11
> *To:* Marc Mamin
> *Cc:* pgsql-general@postgresql.org
> *Subject:* Re: [GENERAL] clone_schema function
>
> Thanks,
>
> >I'm not sure why you had trouble with the REPLACE(), as I did extensive
> > testing and it was working as coded.
>
> might be that my modification is required when (and only when?) the
> source_schema is not part of the current search_path.
> This is just a guess; I only gave your code a quick try ...
>
> >As for SET STATISTICS, I'm guessing that is a glitch in the CREATE TABLE
> > LIKE option.
> Yes, we can see it as an incomplete feature.
>
> regards,
>
> Marc Mamin
>
> On Thu, Sep 17, 2015 at 11:05 AM, Marc Mamin  wrote:
>
>> Hello,
>>
>> I had to make 2 changes to get it running:
>>
>>
>>
>> line 193:
>>
>> - REPLACE(column_default::text, quote_ident(source_schema) || '.', quote_ident(dest_schema || '.') )
>> + REPLACE(column_default::text, quote_ident(source_schema) || '.', quote_ident(dest_schema) || '.' )
>>
>> line 319:
>>
>> - SELECT replace(v_def, ' ON ', ' ON ' || quote_ident(dest_schema) || '.') INTO dest_qry;
>> + SELECT replace(v_def, ' ON ' || quote_ident(source_schema), ' ON ' || quote_ident(dest_schema) ) INTO dest_qry;
>>
>>
>>
>>
>>
>> moreover, you don't take care of the column statistics targets
>>
>> (i.e. ALTER TABLE t ALTER COLUMN c SET STATISTICS n;)
>>
>>
>>
>>
>>
>> regards,
>>
>>
>>
>> Marc Mamin
>>
>>
>>
>>
>>
>> *From:* pgsql-general-ow...@postgresql.org [mailto:
>> pgsql-general-ow...@postgresql.org] *On Behalf Of *Melvin Davidson
>> *Sent:* Thursday, 17 September 2015 15:48
>> *To:* David G. Johnston
>> *Cc:* Igor Neyman; Jim Nasby; Daniel Verite; pgsql-general@postgresql.org
>> *Subject:* Re: [GENERAL] clone_schema function
>>
>>
>>
>> Attached is hopefully the final version of
>>
>>  FUNCTION clone_schema(text, text, boolean)
>>
>>  This function now does the following:
>>  1. Checks that the source schema exists and the destination does not.
>>  2. Creates the destination schema
>>  3. Copies all sequences, tables, indexes, rules, triggers,
>> data(optional),
>>  views & functions from the source schema to the destination schema
>>  4. Optionally copies records from source schema tables to destination
>> tables (boolean)
>>  5. Copies comments for source schema and all sequences, tables,
>> functions, rules and triggers.
>>
>>  If you discover a problem with this function, then kindly advise me what
>> it is
>>  and attach a script (SQL dump) to duplicate it. If you also have a fix,
>> that is
>>  even better.
>>
>>  However, if you "think" there is a problem that occurs when
>>  A. The moon is full
>>  B. You have blood type A/B negative
>>  C. You have a table with the same name as your database and schema
>>  D. All your tables have column "id" in them
>>  E. You've had 16 beers and 4 oxycodones
>>  F. Your meth supplier raised the price
>>
>>  then do not contact me. Instead, run, do not walk,  immediately to your
>>  psychologist, as you have serious issues in addition to database design
>> problems
>>  and you should not use this function under any circumstance.
>>
>>  CAVEAT EMPTOR!
>>  The only known problem with this script is that if functions in the
>> source schema have a SELECT using the form tablename.column, and
>> tablename is the same as the source schema, then tablename will be
>> changed to the destination schema name.
>>  However, since good developers and DBAs use the form alias.column,
>> this should rarely be a problem.
>>
>>
>>
>> On Tue, Sep 15, 2015 at 12:37 PM, Melvin Davidson 
>> wrote:
>>
>> David,
>>
>> Yes, it would be nice, but
>>
>> 1. I am still working also on bringing over the comments for various
>> objects
>>
>> 2. What you request is currently beyond my capability. Not to mention
>> that there already
>>
>> are existing tools that do that, albeit they are not free.
>>
>>
>>
>> On Tue, Sep 15, 2015 at 12:27 PM, David G. Johnston <
>> david.g.johns...@gmail.com> wrote:
>>
>> To make the casual user's life easier, in the face of this reality, it
>> would be nice if the routine would generate a reasonably attempted "diff"
>> between the two so that all changes can be reviewed in a structured manner
>> aided by correctly configured tools and advice.
>>
>>
>>
>> On Tue, Sep 15, 2015 at 12:20 PM, Melvin Davidson 
>> wrote:
>>
>> Igor,
>> I understand your point. However, I have spent over a week turning a
>> function that previously did very little into one that does a lot.
>>
>> Naming a table the same as a schema is a very 

Re: [GENERAL] Broken primary key after backup restore.

2015-09-18 Thread Jeff Janes
On Fri, Sep 18, 2015 at 6:16 AM, Adrian Klaver 
wrote:

> On 09/17/2015 11:15 PM, Guillaume Lelarge wrote:
>
>> On 18 Sept 2015 at 5:23 AM, "Adrian Klaver" wrote:
>>  >
>>  > On 09/17/2015 05:37 PM, Michael Chau wrote:
>>  >>
>>  >> To restore on test server:
>>  >>
>>  >> 1) Just untar the tar ball, then start up Postgres. Of course the data
>>  >> directory is empty beforehand.
>>  >>
>>  >> This has been working for almost 2 years without any problem until
>> last
>>  >> Monday. I remember that I just ran vacuum analyze that table on both
>> DB1
>>  >> and DB2  that morning. But, I don't think that it harms anything.
>>  >
>>  >
>>  > Well it looks fairly straight forward, to me at least.
>>  >
>>
>> Am I missing something obvious? Because this looks to me like the wrong
>> way to do the restore. You need to apply the WAL files archived between
>> pg_start_backup and pg_stop_backup to get consistent data files.
>>
>
> Would that not be taken care of by tarring the data directory and
> untarring it?
>

Only if you are very lucky.  If your tar command tars up the pg_xlog
directory as the last thing it does, then you are probably going to be OK.
Otherwise, it is a crap shoot.



> I would think if it was a WAL issue, the OP could never get the server to
> start and get to the point where the query failed on a single table and
> column.


With pg_basebackup, that is probably the case, as it either doesn't copy
xlog at all, or if it does it makes sure it is complete.  But with tar, you
have no such protection.



> All that being said, I think the OP would be better served by
> pg_basebackup:
>
> http://www.postgresql.org/docs/9.4/static/app-pgbasebackup.html



Yes, indeed.

Cheers,

Jeff


Re: [GENERAL] Broken primary key after backup restore.

2015-09-18 Thread Michael Chau
Hi Jeff,

>Only if you are very lucky.  If your tar command tars up the pg_xlog
directory as the last thing it does, then you are probably going to be OK.
Otherwise, it is a crap shoot.

Maybe that's it. I have another similar setup, but there pg_xlog is a soft
link to another directory, and I use 'tar -chvzf'. It tars up pg_xlog at
the very end, and the restore is fine.

For this one, DB1 and DB2, pg_xlog is a real directory, and I use
'tar -cvzf', which tars up pg_xlog at the beginning. I always had doubts
about it, but I thought pg_start_backup() and pg_stop_backup() would act
like a freeze and prevent the inconsistency.

Indeed, I will look into pg_basebackup.

Thanks,



On Fri, Sep 18, 2015 at 11:20 AM, Jeff Janes  wrote:

> On Fri, Sep 18, 2015 at 6:16 AM, Adrian Klaver 
> wrote:
>
>> On 09/17/2015 11:15 PM, Guillaume Lelarge wrote:
>>
>>> On 18 Sept 2015 at 5:23 AM, "Adrian Klaver" wrote:
>>>  >
>>>  > On 09/17/2015 05:37 PM, Michael Chau wrote:
>>>  >>
>>>  >> To restore on test server:
>>>  >>
>>>  >> 1) Just untar the tar ball, then start up Postgres. Of course the
>>> data
>>>  >> directory is empty beforehand.
>>>  >>
>>>  >> This has been working for almost 2 years without any problem until
>>> last
>>>  >> Monday. I remember that I just ran vacuum analyze that table on both
>>> DB1
>>>  >> and DB2  that morning. But, I don't think that it harms anything.
>>>  >
>>>  >
>>>  > Well it looks fairly straight forward, to me at least.
>>>  >
>>>
>>> Am I missing something obvious? Because this looks to me like the wrong
>>> way to do the restore. You need to apply the WAL files archived between
>>> pg_start_backup and pg_stop_backup to get consistent data files.
>>>
>>
>> Would that not be taken care of by tarring the data directory and
>> untarring it?
>>
>
> Only if you are very lucky.  If your tar command tars up the pg_xlog
> directory as the last thing it does, then you are probably going to be OK.
> Otherwise, it is a crap shoot.
>
>
>
>> I would think if it was a WAL issue, the OP could never get the server to
>> start and get to the point where the query failed on a single table and
>> column.
>
>
> With pg_basebackup, that is probably the case, as it either doesn't copy
> xlog at all, or if it does it makes sure it is complete.  But with tar, you
> have no such protection.
>
>
>
>> All that being said, I think the OP would be better served by
>> pg_basebackup:
>>
>> http://www.postgresql.org/docs/9.4/static/app-pgbasebackup.html
>
>
>
> Yes, indeed.
>
> Cheers,
>
> Jeff
>
>


-- 
Michael Chau
Database Administrator
GAME GOLF
77 Geary St, 5th floor
San Francisco, CA 94108
c) 510-366-3800
e) michael.c...@gameyourgame.com
f) www.facebook.com/gamegolf
t) @GAMEGOLF
w) www.gamegolf.com


Re: [GENERAL] Broken primary key after backup restore.

2015-09-18 Thread David Steele

On 9/18/15 3:44 PM, Michael Chau wrote:

Hi Jeff,


Only if you are very lucky.  If your tar command tars up the pg_xlog directory 
as the last thing it does, then you are probably going to be OK.  Otherwise, it 
is a crap shoot.


Maybe that's it. I have another similar setup, but there pg_xlog is a
soft link to another directory, and I use 'tar -chvzf'. It tars up
pg_xlog at the very end, and the restore is fine.


This is still not always safe.  It depends on your wal_keep_segments 
settings and some luck.  WAL segments can be recycled during the backup.



For this one, DB1 and DB2, pg_xlog is a real directory, and I use
'tar -cvzf', which tars up pg_xlog at the beginning. I always had doubts
about it, but I thought pg_start_backup() and pg_stop_backup() would act
like a freeze and prevent the inconsistency.


This is definitely not a good idea.


Indeed, I will look into pg_basebackup.


pg_basebackup is good for creating replicas, but for real backups you 
might want to consider purpose-built backup software like pgBackRest or 
barman.


--
-David
da...@pgmasters.net


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Delete trigger

2015-09-18 Thread Ioana Danes
It depends on the size of the table and the frequency of updates and
deletes, but you could consider an audit table with triggers for update,
delete and truncate. At least you then have a way to recover deleted
records.
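
A rough sketch of the update/delete part (untested; TRUNCATE would need a
separate statement-level trigger):

CREATE TABLE devicegroup_audit (
    op         text        NOT NULL,
    old_row    devicegroup NOT NULL,
    changed_at timestamptz NOT NULL DEFAULT now()
);

CREATE FUNCTION devicegroup_audit_fn() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    -- keep the operation and the pre-image of the row
    INSERT INTO devicegroup_audit (op, old_row) VALUES (TG_OP, OLD);
    RETURN OLD;
END;
$$;

CREATE TRIGGER devicegroup_audit_trig
    AFTER UPDATE OR DELETE ON devicegroup
    FOR EACH ROW EXECUTE PROCEDURE devicegroup_audit_fn();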

Ioana


On Fri, Sep 18, 2015 at 5:52 AM, Leif Jensen  wrote:

>Hello Laurenz,
>
>Thank you for your suggestion. I really want to avoid someone
> 'accidentally' deleting too much by typing (programming) a not fully
> qualified DELETE ... statement. With your approach one would always have to
> use the delete function, but there would be no restriction on using the
> DELETE statement directly.
>
>  Leif
>
>
> - Original Message -
> > Leif Jensen wrote:
> > >If I do "DELETE FROM devicegroup WHERE groupid=1" I do not want to
> > > delete anything. I only want to delete if I do "DELETE FROM devicegroup
> > > WHERE groupid=x AND ctrlid=y AND userid=z". I don't want to let anyone
> > > delete more than 1 row at a time.
> >
> > I can't think of a way to do that with a trigger.
> >
> > I'd write a
> >   FUNCTION delete_devicegroup(groupid integer, ctrlid integer, userid
> >   integer)
> > RETURNS void CALLED ON NULL INPUT VOLATILE SECURITY DEFINER
> > that enables the user to delete a row and checks that all arguments
> > are NOT NULL.  The user doesn't get privileges to DELETE from the table
> > directly.
> >
> > Yours,
> > Laurenz Albe
> >
> >
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Delete trigger

2015-09-18 Thread David G. Johnston
On Friday, September 18, 2015, Leif Jensen  wrote:

>Hello Laurenz,
>
>Thank you for your suggestion. I really want to avoid someone
> 'accidentally' deleting too much by typing (programming) a not fully
> qualified DELETE ... statement. With your approach one would always have to
> use the delete function, but there would be no restriction on using the
> DELETE statement directly.
>
>
There is no way you can prevent a superuser from shooting themselves in the
foot.  For anyone else you can enforce use of the function to perform the
delete.

You could add a field called ok-to-delete with a partial unique index on
it, so that only a single record can be marked ok to delete at a time, and
then have your trigger abort if it tries to delete a row whose
ok-to-delete field is not set to true.
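
A rough sketch of that idea (untested, with made-up names):

ALTER TABLE devicegroup
    ADD COLUMN ok_to_delete boolean NOT NULL DEFAULT false;

-- the partial unique index lets at most one row be flagged at a time
CREATE UNIQUE INDEX devicegroup_one_deletable
    ON devicegroup (ok_to_delete) WHERE ok_to_delete;

CREATE FUNCTION devicegroup_guard_fn() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    IF NOT OLD.ok_to_delete THEN
        RAISE EXCEPTION 'row is not flagged ok_to_delete';
    END IF;
    RETURN OLD;  -- allow the delete to proceed
END;
$$;

CREATE TRIGGER devicegroup_guard
    BEFORE DELETE ON devicegroup
    FOR EACH ROW EXECUTE PROCEDURE devicegroup_guard_fn();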

David J.


Re: [GENERAL] Online backup of PostgreSQL data.

2015-09-18 Thread Sathiyan Subramanian
You can use pg_dump to create a backup.

pg_dump -U db_username db_name -f file_name.dump

On Thu, Sep 17, 2015 at 1:45 PM, Albe Laurenz 
wrote:

> John R Pierce wrote:
> > to copy the data directory and have it be useful you need to bracket the
> copy with calls to
> > pg_start_backup() and pg_stop_backup()  this ensures the data files
> are coherent.   this is in
> > fact what pg_basebackup does for you
>
> I apologize for my fussiness, but this is a misconception I encounter so
> frequently
> that I have to speak up.
>
> The file system copy does not become consistent (isn't that what you
> meant?) if it
> is surrounded by pg_start_backup() and pg_stop_backup().  What happens is
> that
> a) a backup_label file is created that tells recovery where to start
> b) more WAL is generated so that all changes can be replayed safely.
>
> Yours,
> Laurenz Albe
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Delete trigger

2015-09-18 Thread Charles Clavadetscher
Hello

Not sure I get it right, but all three fields are not nullable. So they will 
always have a value, which is what I understand by "are specified".
What do you need the trigger for in that case?

Bye
Charles

> -Original Message-
> From: pgsql-general-ow...@postgresql.org 
> [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Leif Jensen
> Sent: Friday, 18 September 2015 10:23
> To: pgsql-general 
> Subject: [GENERAL] Delete trigger
> 
>Hi,
> 
>I am running PostgreSQL 9.3.4. I have a simple table with 3 fields:
> 
> CREATE TABLE devicegroup (
> groupid integer NOT NULL,
> ctrlid integer NOT NULL,
> userid integer NOT NULL
> );
> ALTER TABLE ONLY devicegroup
> ADD CONSTRAINT pk_devicegroup PRIMARY KEY (groupid, ctrlid, userid);
> 
>I want to make sure that records are only deleted when all 3 fields are 
> specified, so I tried to make a trigger:
> 
> CREATE TRIGGER userDev_trig INSTEAD OF DELETE ON admdevgrid FOR EACH STATEMENT
> EXECUTE PROCEDURE deleteUserDev();
> 
> which could check for NOT NULL on the 3 fields before actually doing the 
> delete. Unfortunately this is not possible to
> do FOR EACH STATEMENT, and FOR EACH ROW would not give me the chance to check 
> for NOT NULL.
> 
>Any ideas ?
> 
>  Leif
> 
> 
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Delete trigger

2015-09-18 Thread Leif Jensen
   Hi Ioana and David.

   Thank you. Yes, I can see the problem. I will look into your suggestions.

 Leif


- Original Message -
> On Friday, September 18, 2015, Leif Jensen  wrote:
> 
> >Hello Laurenz,
> >
> >Thank you for your suggestion. I really want to avoid someone
> > 'accidentally' deleting too much by typing (programming) a not fully
> > qualified DELETE ... statement. With your approach one would always have
> > to use the delete function, but there would be no restriction on using
> > the DELETE statement directly.
> >
> >
> There is no way you can prevent a superuser from shooting themselves in the
> foot.  For anyone else you can enforce use of the function to perform the
> delete.
> 
> You could add a field called ok-to-delete with a partial unique index on
> it, so that only a single record can be marked ok to delete at a time, and
> then have your trigger abort if it tries to delete a row whose
> ok-to-delete field is not set to true.
> 
> David J.
> 


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Broken primary key after backup restore.

2015-09-18 Thread Adrian Klaver

On 09/17/2015 11:15 PM, Guillaume Lelarge wrote:

On 18 Sept 2015 at 5:23 AM, "Adrian Klaver" wrote:
 >
 > On 09/17/2015 05:37 PM, Michael Chau wrote:
 >>
 >> 1)
 >>
 >> In Production, I have a DB2 which is replicated partially using
Londiste
 >> from DB1.
 >>
 >>
 >> Well I think the above needs more explanation to help understand how the
 >> DB2 backup got into this state and possibly prevent it in the future.
 >>
 >> A: So, the DB1 has several schemas in the database. We use Londiste to
 >> replicate just one of the schemas to DB2.  The table in question is in
 >> that schema. Backup is done on both DB1 and DB2.
 >>
 >> 2)
 >> I make file-system backups nightly on both DBs.
 >>
 >> How is that done exactly?
 >>
 >> A: To backup:
 >>
 >> 1) pg_start_backup()
 >> 2) tar up the files under the data directory
 >> 3) pg_stop_backup()
 >>
 >> To restore on test server:
 >>
 >> 1) Just untar the tar ball, then start up Postgres. Of course the data
 >> directory is empty beforehand.
 >>
 >> This has been working for almost 2 years without any problem until last
 >> Monday. I remember that I just ran vacuum analyze that table on both DB1
 >> and DB2  that morning. But, I don't think that it harms anything.
 >
 >
 > Well it looks fairly straight forward, to me at least.
 >

Am I missing something obvious? Because this looks to me like the wrong
way to do the restore. You need to apply the WAL files archived between
pg_start_backup and pg_stop_backup to get consistent data files.


Would that not be taken care of by tarring the data directory and
untarring it?


I would think if it was a WAL issue, the OP could never get the server to 
start and get to the point where the query failed on a single table and 
column. All that being said, I think the OP would be better served by 
pg_basebackup:


http://www.postgresql.org/docs/9.4/static/app-pgbasebackup.html







--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] search_path not reloaded via unix socket connections

2015-09-18 Thread Adrian Klaver

On 09/18/2015 08:19 AM, Kong Man wrote:

Tom,
I have made sure that the per-user and per-database settings are reset.
  The old search_path setting still persists whenever I reconnect via
local connections no matter what user I log in as.


So exactly how are you connecting and from what?

In other words, what connection string and which client are you using?



 > From: t...@sss.pgh.pa.us
 > To: kong_mansatian...@hotmail.com
 > CC: pgsql-general@postgresql.org
 > Subject: Re: [GENERAL] search_path not reloaded via unix socket
connections
 > Date: Thu, 17 Sep 2015 22:29:04 -0400
 >
 > Kong Man  writes:
 > > Can anybody explain why the search_path setting, with several
config reloads, would not change via local connections? We struggled
with our production settings on Postgres 9.3 today, only to realize,
after a while, that the search_path change actually took effect via
TCP/IP, but not unix socket, connections ever since the first reload.
 >
 > That's, um, pretty hard to believe. Less magical interpretations would
 > involve something like a per-user or per-database setting overriding
 > what's in the config file in some sessions but not others. But I really
 > really doubt that TCP vs unix socket is the determining factor.
 >
 > regards, tom lane



--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] search_path not reloaded via unix socket connections

2015-09-18 Thread Kong Man
We just found out that the problem was that our own ~/.psqlrc contained the SET 
statement, overriding the global setting.  
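
For anyone else chasing something like this: starting psql with -X skips
~/.psqlrc, which makes the client-side cause easy to rule out, and any
per-database or per-role overrides can be listed with:

SELECT setdatabase, setrole, setconfig FROM pg_db_role_setting;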

From: kong_mansatian...@hotmail.com
To: t...@sss.pgh.pa.us
CC: pgsql-general@postgresql.org
Subject: RE: [GENERAL] search_path not reloaded via unix socket connections
Date: Fri, 18 Sep 2015 08:19:30 -0700




Tom, I have made sure that the per-user and per-database settings are reset.
The old search_path setting still persists whenever I reconnect via local
connections, no matter what user I log in as.

> From: t...@sss.pgh.pa.us
> To: kong_mansatian...@hotmail.com
> CC: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] search_path not reloaded via unix socket connections
> Date: Thu, 17 Sep 2015 22:29:04 -0400
> 
> Kong Man  writes:
> > Can anybody explain why the search_path setting, with several config 
> > reloads, would not change via local connections?  We struggled with our 
> > production settings on Postgres 9.3 today, only to realize, after a while, 
> > that the search_path change actually took effect via TCP/IP, but not unix 
> > socket, connections ever since the first reload.
> 
> That's, um, pretty hard to believe.  Less magical interpretations would
> involve something like a per-user or per-database setting overriding
> what's in the config file in some sessions but not others.  But I really
> really doubt that TCP vs unix socket is the determining factor.
> 
>   regards, tom lane

  

Re: [GENERAL] search_path not reloaded via unix socket connections

2015-09-18 Thread Kong Man
Tom, I have made sure that the per-user and per-database settings are reset.
The old search_path setting still persists whenever I reconnect via local
connections, no matter what user I log in as.

> From: t...@sss.pgh.pa.us
> To: kong_mansatian...@hotmail.com
> CC: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] search_path not reloaded via unix socket connections
> Date: Thu, 17 Sep 2015 22:29:04 -0400
> 
> Kong Man  writes:
> > Can anybody explain why the search_path setting, with several config 
> > reloads, would not change via local connections?  We struggled with our 
> > production settings on Postgres 9.3 today, only to realize, after a while, 
> > that the search_path change actually took effect via TCP/IP, but not unix 
> > socket, connections ever since the first reload.
> 
> That's, um, pretty hard to believe.  Less magical interpretations would
> involve something like a per-user or per-database setting overriding
> what's in the config file in some sessions but not others.  But I really
> really doubt that TCP vs unix socket is the determining factor.
> 
>   regards, tom lane