Re: [GENERAL] pg_dumpall: could not connect to database "template1": FATAL:

2016-11-13 Thread aws backup
thank you.
it listens to the local network, which is just my two other MacBooks. I reported 
the bug to DaVinci. Hopefully they have a fix soon, then I can put it back on md5.

thank you for your script.
will try it.

best
Robert


> On 13 Nov 2016, at 23:27, John R Pierce  wrote:
> 
> On 11/13/2016 1:01 PM, aws backup wrote:
>> thank you so much.
>> With your help I could solve all my problems.
>> DaVinci has a bug somewhere with the database configuration.
>> I installed everything new and set all auth method to trust instead of md5 
>> which was default from DaVinci.
> 
> trust is not very secure as it means any process on your system has full 
> access to any database user including the postgres database administrator.  
> hopefully this database server only has listen_addresses='localhost'   and 
> isn't listening on external network interfaces, otherwise trust means any 
> machine on your network can access it.
> 
> 
> 
>> Now everything is working as expected. No errors anymore with the backup and 
>> restore buttons in DaVinci which are just linked to the pg_dump and 
>> pg_restore scripts.
>> The pg_dumpall approach was from the example scripts which are posted in the 
>> DaVinci Forum.
>> But this approach doesn't make sense … ?
> 
> on a Linux system, my pg backup script looks like...
> 
> 
> #!/bin/bash
> ## postgres backup script
> DD=$(date +%a)
> PP=/backups/pgsql
> 
> pg_dumpall -g -f $PP/pg_dumpall.globals-$DD.sql
> for db in $(psql -tc "select datname from pg_database where not datistemplate"); do
>   pg_dump -Fc -f $PP/pgdump.$db.$DD.dump -d $db
> done
> 
> 
> this generates a globals backup file, and a compressed backup from each 
> non-system database.   these compressed backups are restorable with the 
> pg_restore command which has a lot of useful options (restore schema only, or 
> restore data only, etc etc).   It also puts the day-of-the-week in the 
> filename of each of these backups (that's what the DD value is for), so if you 
> run this daily, you end up with 7 sets of files, one for each day of the 
> week.   if you change %a to %F, then the filenames will instead contain a date 
> like 2016-11-13.
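The two filename schemes described above can be sketched on their own; the database name "mydb" is illustrative.

```shell
# Sketch of the two filename schemes mentioned above; "mydb" is illustrative.
DD=$(date +%a)   # abbreviated weekday, e.g. "Sun" -> 7 rotating daily sets
FF=$(date +%F)   # full ISO date, e.g. "2016-11-13" -> one file per day
echo "pgdump.mydb.$DD.dump"
echo "pgdump.mydb.$FF.dump"
```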
> 
> 
> -- 
> john r pierce, recycling bits in santa cruz
> 
> 
> 
> -- 
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general





Re: [GENERAL] pg_dumpall: could not connect to database "template1": FATAL:

2016-11-13 Thread aws backup
Thank you, I understand. Nobody else can access the database.
As the database backup button in DaVinci started to work, I could see that 
DaVinci is only making a pg_dump of the database. The restore works fine 
from any DaVinci installation to another one; I tested this. I can still make an 
automatic daily shell script with pg_dumpall. Will see.

Thank you.
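A nightly pg_dumpall can be scheduled from cron; a minimal sketch of an /etc/crontab entry follows, where the time, output path, and use of the postgres system user are all assumptions:

```
# Run at 02:30 every night as the postgres user; % must be escaped in crontab.
30 2 * * * postgres pg_dumpall -f /backups/cluster-$(date +\%F).sql
```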


> On 13 Nov 2016, at 23:12, Adrian Klaver  wrote:
> 
> On 11/13/2016 01:01 PM, aws backup wrote:
>> Hi,
>> 
>> thank you so much.
>> With your help I could solve all my problems.
>> DaVinci has a bug somewhere with the database configuration.
>> I installed everything new and set all auth method to trust instead of md5 
>> which was default from DaVinci.
> 
> Just remember that trust is just that, trust. If someone knows a valid 
> username they have access, subject to other conditions set in pg_hba.conf.
> 
>> Now everything is working as expected. No errors anymore with the backup and 
>> restore buttons in DaVinci which are just linked to the pg_dump and 
>> pg_restore scripts.
>> The pg_dumpall approach was from the example scripts which are posted in the 
>> DaVinci Forum.
>> But this approach doesn't make sense … ?
> 
> It does if you want to back up the state of the entire cluster. Could be that 
> DaVinci is including things in the template1 and postgres databases that are 
> needed. The other thing that pg_dumpall gets you is data global to the 
> cluster:
> 
> https://www.postgresql.org/docs/9.5/static/app-pg-dumpall.html 
> 
> 
> " ...  pg_dumpall also dumps global objects that are common to all databases. 
> (pg_dump does not save these objects.) ..."
> 
> Though you can have your cake and eat it too by using pg_dump for individual 
> databases and then
> 
> pg_dumpall -g
> 
> where -g is:
> 
> "-g
> --globals-only
> 
>Dump only global objects (roles and tablespaces), no databases.
> "
> 
>> 
>> Best Regards,
>> Robert
>> 
>> 
>> 
>>> On 13 Nov 2016, at 19:13, Adrian Klaver  wrote:
>>> 
>>> On 11/13/2016 09:04 AM, aws backup wrote:
 Hi Adrian,
 
 thank you for the explanation.
 I will look into your links.
 
 I am doing this because I want to make backups from the
 database. Ideally automatic backups every night.
 The Blackmagic support says they can't help me with this. The Backup and
 Restore button in DaVinci does not work.
 Every database related question I ask the Blackmagic support stays
 unanswered.
>>> 
>>> Guessing they are treating the Postgres database as an embedded one that 
>>> should not be touched by the end user.
>>> 
 For example: How can I restart the SQL server? Oh sorry we can't help
 you with this … ?
>>> 
>>> https://www.postgresql.org/docs/9.5/static/app-pg-ctl.html
>>> 
 
 Thank you.
 Robert
 
 
 
 
>>> --
>>> Adrian Klaver
>>> adrian.kla...@aklaver.com
>> 
>> 
> 
> 
> -- 
> Adrian Klaver
> adrian.kla...@aklaver.com 


Re: [GENERAL] pg_dumpall: could not connect to database "template1": FATAL:

2016-11-13 Thread Adrian Klaver

On 11/13/2016 01:01 PM, aws backup wrote:

Hi,

thank you so much.
With your help I could solve all my problems.
DaVinci has a bug somewhere with the database configuration.
I installed everything new and set all auth method to trust instead of md5 
which was default from DaVinci.


Just remember that trust is just that, trust. If someone knows a valid 
username they have access, subject to other conditions set in pg_hba.conf.



Now everything is working as expected. No errors anymore with the backup and 
restore buttons in DaVinci which are just linked to the pg_dump and pg_restore 
scripts.
The pg_dumpall approach was from the example scripts which are posted in the 
DaVinci Forum.
But this approach doesn't make sense … ?


It does if you want to back up the state of the entire cluster. Could be 
that DaVinci is including things in the template1 and postgres databases 
that are needed. The other thing that pg_dumpall gets you is data global 
to the cluster:


https://www.postgresql.org/docs/9.5/static/app-pg-dumpall.html

" ...  pg_dumpall also dumps global objects that are common to all 
databases. (pg_dump does not save these objects.) ..."


Though you can have your cake and eat it too by using pg_dump for 
individual databases and then


pg_dumpall -g

where -g is:

"-g
--globals-only

Dump only global objects (roles and tablespaces), no databases.
"
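The combined approach described above can be sketched as follows. This is a hedged dry-run sketch (each command is echoed rather than executed, since running it needs a live server); the directory and database names are illustrative.

```shell
# Dry-run sketch: globals via pg_dumpall -g plus one custom-format dump per
# database. Remove the leading "echo" to run against a real server.
BACKUPDIR=/backups/pgsql
echo pg_dumpall -g -f "$BACKUPDIR/globals.sql"
for db in mydb1 mydb2; do   # in practice, query pg_database for the names
  echo pg_dump -Fc -f "$BACKUPDIR/$db.dump" -d "$db"
done
```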



Best Regards,
Robert




On 13 Nov 2016, at 19:13, Adrian Klaver  wrote:

On 11/13/2016 09:04 AM, aws backup wrote:

Hi Adrian,

thank you for the explanation.
I will look into your links.

I am doing this because I want to make backups from the
database. Ideally automatic backups every night.
The Blackmagic support says they can't help me with this. The Backup and
Restore button in DaVinci does not work.
Every database related question I ask the Blackmagic support stays
unanswered.


Guessing they are treating the Postgres database as an embedded one that should 
not be touched by the end user.


For example: How can I restart the SQL server? Oh sorry we can't help
you with this … ?


https://www.postgresql.org/docs/9.5/static/app-pg-ctl.html



Thank you.
Robert





--
Adrian Klaver
adrian.kla...@aklaver.com






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] pg_dumpall: could not connect to database "template1": FATAL:

2016-11-13 Thread John R Pierce

On 11/13/2016 8:51 AM, aws backup wrote:
I assume that the postgres database password is the one which is shown 
in the DaVinci connect database window.


user: postgres
password: DaVinci


there are no database passwords, that's the password for the postgres 
database role/user.




But if I work in the Terminal and change the user "sudo su - postgres" 
I have to use my admin password.


sudo expects the password of the user running sudo.  this has nothing to 
do with postgres.



That is optional and I sort of doubt the application is using one.


You think DaVinci is not using any password … ?


he meant, he doubts DaVinci is using a .pgpass file. The .pgpass 
file is another way to pass your passwords to postgres, but it's NOT 
where postgres stores its passwords, those are stored as hashes in the 
pg_catalog.
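For reference, a .pgpass file lives in the connecting user's home directory and holds one colon-separated entry per line; the values below are illustrative, reusing the postgres/DaVinci credentials mentioned earlier in the thread:

```
# ~/.pgpass format: hostname:port:database:username:password
# The file must be mode 0600 or libpq will ignore it.
localhost:5432:*:postgres:DaVinci
```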






--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] pg_dumpall: could not connect to database "template1": FATAL:

2016-11-13 Thread aws backup
Hi Adrian,

thank you for the explanation.
I will look into your links.

I am doing this because I want to make backups from the database. Ideally 
automatic backups every night.
The Blackmagic support says they can't help me with this. The Backup and 
Restore button in DaVinci does not work.
Every database related question I ask the Blackmagic support stays unanswered. 
For example: How can I restart the SQL server? Oh sorry we can't help you with 
this … ?

Thank you.
Robert



> On 13 Nov 2016, at 18:09, Adrian Klaver  wrote:
> 
> On 11/13/2016 05:51 AM, aws backup wrote:
>> Hi,
>> 
>> now I have another problem. Sorry I am an absolute beginner.
>> When I restore the dumpall backup with
>> 
>> "psql -f infile postgres"
>> 
>> I get a lot of "already exists" errors and the database is not restored
>> to the point of the backup.
> 
> Yes, because it seems you are restoring back over existing databases in the 
> cluster. An explanation of terms is in order. When Postgres is installed it 
> init(ialize)s a cluster of databases, template0, template1(the one you had 
> issues with) and postgres. The user then can create additional databases to 
> serve their needs. When you do pg_dumpall you are doing a plain text dump of 
> the entire cluster, which may or may not be what you want. When you do the 
> psql -f above you are trying to restore all the databases in the cluster to 
> the cluster, again something you may or may not want. For finer grained 
> control over the dump/restore cycle you probably want pg_dump.
> 
>> I mean after I made the backup I changed something and I expected that
>> this change is undone after I restore the backup which did not happen.
>> Looks for me like nothing was restored … ?
> 
> You probably should spend some time here:
> 
> https://www.postgresql.org/docs/9.5/static/app-pg-dumpall.html 
> 
> 
> https://www.postgresql.org/docs/9.5/static/app-pgdump.html 
> 
> 
> In the meantime do you really want to start over with the database(s)?
> 
> In other words what is the goal of your dump/restore process?
> 
>> 
>> Thank you for your help.
>> 
>> Regards,
>> Robert
>> 
>> 
>>> On 13 Nov 2016, at 00:37, Scott Marlowe wrote:
>>> 
>>> On Sat, Nov 12, 2016 at 2:31 PM, Adrian Klaver wrote:
 On 11/12/2016 01:20 PM, aws backup wrote:
> 
> Hi,
> 
> I try to make pg_dumpall backups from a PostgreSQL 9.5 server which is
> part of the DaVinci Resolve 12.5.3 App on a Mac OS X 10.11.6 system.
> 
> Unfortunately I get the following error message:
> 
> pg_dumpall: could not connect to database "template1": FATAL: password
> authentication failed for user "postgres"
> 
> Maybe you can help me to solve this problem.
 
 
 Two choices:
 
 1) Determine what the password is for the postgres user and provide
 it when
 you connect.
 
 2) If you have access to the pg_hba.conf file create an access line
 that uses
 trust as the auth method for user postgres and connect that way.
 
 https://www.postgresql.org/docs/9.5/static/auth-methods.html#AUTH-TRUST
>>> 
>>> OR you could use the -l switch and specify another db.
>>> 
>>> pg_dumpall -l mydbnamehere
>> 
> 
> 
> -- 
> Adrian Klaver
> adrian.kla...@aklaver.com 


Re: [GENERAL] pg_dumpall: could not connect to database "template1": FATAL:

2016-11-13 Thread Adrian Klaver

On 11/13/2016 05:51 AM, aws backup wrote:

Hi,

now I have another problem. Sorry I am an absolute beginner.
When I restore the dumpall backup with

"psql -f infile postgres"

I get a lot of "already exists" errors and the database is not restored
to the point of the backup.


Yes, because it seems you are restoring back over existing databases in 
the cluster. An explanation of terms is in order. When Postgres is 
installed it init(ialize)s a cluster of databases, template0, 
template1(the one you had issues with) and postgres. The user then can 
create additional databases to serve their needs. When you do pg_dumpall 
you are doing a plain text dump of the entire cluster, which may or may 
not be what you want. When you do the psql -f above you are trying to 
restore all the databases in the cluster to the cluster, again something 
you may or may not want. For finer grained control over the dump/restore 
cycle you probably want pg_dump.
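The two restore paths implied above can be sketched side by side. This is a hedged dry-run sketch (commands are echoed, not executed); the file and database names are illustrative.

```shell
# Dry-run sketch: a plain-text pg_dumpall file is replayed with psql, while a
# custom-format pg_dump file goes through pg_restore. Remove "echo" to run.
echo psql -f cluster.sql -d postgres        # whole-cluster plain-text dump
echo pg_restore -d mydb mydb.dump           # single-db custom-format dump
```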



I mean after I made the backup I changed something and I expected that
this change is undone after I restore the backup which did not happen.
Looks for me like nothing was restored … ?


You probably should spend some time here:

https://www.postgresql.org/docs/9.5/static/app-pg-dumpall.html

https://www.postgresql.org/docs/9.5/static/app-pgdump.html

In the meantime do you really want to start over with the database(s)?

In other words what is the goal of your dump/restore process?



Thank you for your help.

Regards,
Robert



On 13 Nov 2016, at 00:37, Scott Marlowe wrote:

On Sat, Nov 12, 2016 at 2:31 PM, Adrian Klaver wrote:

On 11/12/2016 01:20 PM, aws backup wrote:


Hi,

I try to make pg_dumpall backups from a PostgreSQL 9.5 server which is
part of the DaVinci Resolve 12.5.3 App on a Mac OS X 10.11.6 system.

Unfortunately I get the following error message:

pg_dumpall: could not connect to database "template1": FATAL: password
authentication failed for user "postgres"

Maybe you can help me to solve this problem.



Two choices:

1) Determine what the password is for the postgres user and provide
it when
you connect.

2) If you have access to the pg_hba.conf file create an access line
that uses
trust as the auth method for user postgres and connect that way.

https://www.postgresql.org/docs/9.5/static/auth-methods.html#AUTH-TRUST


OR you could use the -l switch and specify another db.

pg_dumpall -l mydbnamehere





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] pg_dumpall: could not connect to database "template1": FATAL:

2016-11-13 Thread aws backup
Hi,

now I have another problem. Sorry I am an absolute beginner.
When I restore the dumpall backup with 

"psql -f infile postgres"

I get a lot of "already exists" errors and the database is not restored to the 
point of the backup.
I mean after I made the backup I changed something and I expected that this 
change is undone after I restore the backup which did not happen. Looks for me 
like nothing was restored … ?

Thank you for your help.

Regards,
Robert


> On 13 Nov 2016, at 00:37, Scott Marlowe wrote:
> 
> On Sat, Nov 12, 2016 at 2:31 PM, Adrian Klaver wrote:
>> On 11/12/2016 01:20 PM, aws backup wrote:
>>> 
>>> Hi,
>>> 
>>> I try to make pg_dumpall backups from a PostgreSQL 9.5 server which is
>>> part of the DaVinci Resolve 12.5.3 App on a Mac OS X 10.11.6 system.
>>> 
>>> Unfortunately I get the following error message:
>>> 
>>> pg_dumpall: could not connect to database "template1": FATAL: password
>>> authentication failed for user "postgres"
>>> 
>>> Maybe you can help me to solve this problem.
>> 
>> 
>> Two choices:
>> 
>> 1) Determine what the password is for the postgres user and provide it when
>> you connect.
>> 
>> 2) If you have access to the pg_hba.conf file create an access line that uses
>> trust as the auth method for user postgres and connect that way.
>> 
>> https://www.postgresql.org/docs/9.5/static/auth-methods.html#AUTH-TRUST 
>> 
> 
> OR you could use the -l switch and specify another db.
> 
> pg_dumpall -l mydbnamehere



Re: [GENERAL] pg_dumpall: could not connect to database "template1": FATAL:

2016-11-13 Thread aws backup
Hi Adrian,

thank you for the answer.
There is one password for the postgres database and one for the postgres user.
Neither is working, somehow. Is there a way to look up the passwords? I saw in 
the documentation that there is a .pgpass file. But I can't find it.

I changed the auth method to trust for all users. This worked for now.
Thank you.

Best Regards,
Robert


> On 12 Nov 2016, at 23:31, Adrian Klaver  wrote:
> 
> On 11/12/2016 01:20 PM, aws backup wrote:
>> Hi,
>> 
>> I try to make pg_dumpall backups from a PostgreSQL 9.5 server which is part 
>> of the DaVinci Resolve 12.5.3 App on a Mac OS X 10.11.6 system.
>> 
>> Unfortunately I get the following error message:
>> 
>> pg_dumpall: could not connect to database "template1": FATAL: password 
>> authentication failed for user "postgres"
>> 
>> Maybe you can help me to solve this problem.
> 
> Two choices:
> 
> 1) Determine what the password is for the postgres user and provide it when 
> you connect.
> 
> 2) If you have access to the pg_hba.conf file create an access line that uses 
> trust as the auth method for user postgres and connect that way.
> 
> https://www.postgresql.org/docs/9.5/static/auth-methods.html#AUTH-TRUST
> 
>> 
>> Thank you.
>> 
>> Best Regards,
>> Robert
>> 
>> 
>> 
>> 
> 
> 
> -- 
> Adrian Klaver
> adrian.kla...@aklaver.com
> 
> 





Re: [GENERAL] pg_dumpall: could not connect to database "template1": FATAL:

2016-11-12 Thread Scott Marlowe
On Sat, Nov 12, 2016 at 2:31 PM, Adrian Klaver
 wrote:
> On 11/12/2016 01:20 PM, aws backup wrote:
>>
>> Hi,
>>
>> I try to make pg_dumpall backups from a PostgreSQL 9.5 server which is
>> part of the DaVinci Resolve 12.5.3 App on a Mac OS X 10.11.6 system.
>>
>> Unfortunately I get the following error message:
>>
>> pg_dumpall: could not connect to database "template1": FATAL: password
>> authentication failed for user "postgres"
>>
>> Maybe you can help me to solve this problem.
>
>
> Two choices:
>
> 1) Determine what the password is for the postgres user and provide it when
> you connect.
>
> 2) If you have access to the pg_hba.conf file create an access line that uses
> trust as the auth method for user postgres and connect that way.
>
> https://www.postgresql.org/docs/9.5/static/auth-methods.html#AUTH-TRUST

OR you could use the -l switch and specify another db.

pg_dumpall -l mydbnamehere




Re: [GENERAL] pg_dumpall: could not connect to database "template1": FATAL:

2016-11-12 Thread Adrian Klaver

On 11/12/2016 01:20 PM, aws backup wrote:

Hi,

I try to make pg_dumpall backups from a PostgreSQL 9.5 server which is part of 
the DaVinci Resolve 12.5.3 App on a Mac OS X 10.11.6 system.

Unfortunately I get the following error message:

pg_dumpall: could not connect to database "template1": FATAL: password authentication 
failed for user "postgres"

Maybe you can help me to solve this problem.


Two choices:

1) Determine what the password is for the postgres user and provide it 
when you connect.


2) If you have access to the pg_hba.conf file create an access line that 
uses trust as the auth method for user postgres and connect that way.


https://www.postgresql.org/docs/9.5/static/auth-methods.html#AUTH-TRUST



Thank you.

Best Regards,
Robert







--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] pg_dumpall: could not connect to database "template1": FATAL:

2016-11-12 Thread aws backup
Hi,

I try to make pg_dumpall backups from a PostgreSQL 9.5 server which is part of 
the DaVinci Resolve 12.5.3 App on a Mac OS X 10.11.6 system.

Unfortunately I get the following error message:

pg_dumpall: could not connect to database "template1": FATAL: password 
authentication failed for user "postgres"

Maybe you can help me to solve this problem.

Thank you.

Best Regards,
Robert






Re: [GENERAL] pg_dumpall from a script

2013-10-23 Thread Michael Nolan
You could write a plperlu function that runs a shell script to back up
your database, you can even pass it parameters and put a call to that in a
trigger.

BUT, this could result in multiple backups running at the same time and
become a performance drag.
--
Mike Nolan


On Tue, Oct 22, 2013 at 9:19 PM, James Sewell james.sew...@lisasoft.com wrote:

 Oh I missed that, I skimmed and thought it was the same as \set

 Turns out it's not and it's exactly what I want!

 Thanks!


 James Sewell,
 PostgreSQL Team Lead / Solutions Architect
 __


  Level 2, 50 Queen St, Melbourne VIC 3000

 *P *(+61) 3 8370 8000 * **W* www.lisasoft.com  *F *(+61) 3 8370 8099



 On Wed, Oct 23, 2013 at 11:48 AM, Adrian Klaver 
 adrian.kla...@gmail.com wrote:

 On 10/22/2013 03:41 PM, James Sewell wrote:

 Hello All,

 Thanks for the replies. Sorry I must have been a bit unclear, I realise I
 *could* do this from the shell level, but can I do it from a PSQL
 session somehow?


 Lucas' \setenv method won't work for you?



  Cheers,


 James Sewell,
 PostgreSQL Team Lead / Solutions Architect



 --
 Adrian Klaver
 adrian.kla...@gmail.com



 --
 The contents of this email are confidential and may be subject to legal or
 professional privilege and copyright. No representation is made that this
 email is free of viruses or other defects. If you have received this
 communication in error, you may not copy or distribute any part of it or
 otherwise disclose its contents to anyone. Please advise the sender of your
 incorrect receipt of this correspondence.




Re: [GENERAL] pg_dumpall from a script

2013-10-23 Thread Oscar Calderon
Hi James, I want to share with you a script that I use; I scheduled it with
crontab. It generates a backup for each database. After that, there is
another script scheduled that copies the backups to another server.

#! /bin/bash

TIEMPO=$(date '+%d%m%Y_%H%M%S')
UBICACION=/var/lib/pgsql/9.3/backups_auth/bkp_
BASES=$(psql -h localhost -t -U postgres -c "SELECT datname FROM pg_database WHERE datistemplate = false AND datname <> 'postgres'")
IFS=$'\n' read -a array <<< "$BASES"
for CURRDB in $BASES
do
pg_dump -h localhost -p 5432 -U postgres -Fc -Z0 -C -d $CURRDB > $UBICACION$CURRDB-$TIEMPO.backup
# cp $UBICACION$CURRDB-$TIEMPO.backup /some/where/else
done

As you can see, I query the list of databases, but you can choose
another method to get the list. If you are backing up a single database,
then just use this line:

pg_dump -h localhost -p 5432 -U postgres -Fc -Z0 -C -d your_database > 
/path/to/backup/file.backup

Then schedule it:

nano /etc/crontab

30 23 * * * postgres /path/to/my/script.sh

***
Oscar Calderon
Analista de Sistemas
Soluciones Aplicativas S.A. de C.V.
www.solucionesaplicativas.com
Cel. (503) 7741 7850 Tel. (503) 2522-2834


2013/10/23 Michael Nolan htf...@gmail.com

 You could write a plperlu function that runs a shell script to back up
 your database, you can even pass it parameters and put a call to that in a
 trigger.

 BUT, this could result in multiple backups running at the same time and
 become a performance drag.
 --
 Mike Nolan


 On Tue, Oct 22, 2013 at 9:19 PM, James Sewell 
 james.sew...@lisasoft.com wrote:

 Oh I missed that, I skimmed and thought it was the same as \set

 Turns out it's not and it's exactly what I want!

 Thanks!


 James Sewell,
 PostgreSQL Team Lead / Solutions Architect
 __


  Level 2, 50 Queen St, Melbourne VIC 3000

 *P *(+61) 3 8370 8000 * **W* www.lisasoft.com  *F *(+61) 3 8370 8099



 On Wed, Oct 23, 2013 at 11:48 AM, Adrian Klaver 
 adrian.kla...@gmail.com wrote:

 On 10/22/2013 03:41 PM, James Sewell wrote:

 Hello All,

 Thanks for the replies. Sorry I must have been a bit unclear, I realise I
 *could* do this from the shell level, but can I do it from a PSQL
 session somehow?


 Lucas' \setenv method won't work for you?



  Cheers,


 James Sewell,
 PostgreSQL Team Lead / Solutions Architect



 --
 Adrian Klaver
 adrian.kla...@gmail.com








Re: [GENERAL] pg_dumpall from a script

2013-10-23 Thread Raghu Ram
On Tue, Oct 22, 2013 at 10:50 AM, James Sewell james.sew...@lisasoft.com wrote:

 That looks great, but it doesn't really help with my problem unless I'm
 missing something (very possible!)

 I need a way to backup either from SQL in PSQL (possibly \!) or from a
 PG/PLSQL function to a file with a name set from a :variable.

 This would be triggered by a certain action in the database (applying a
 patch).

 Hope that's a bit clearer!



You can use a stored procedure with plsh
(http://plsh.projects.postgresql.org/), like this:



CREATE FUNCTION dump_db(text, text) RETURNS text AS '

#!/bin/sh

pg_dump $1 > $2

' LANGUAGE plsh;



Note that you must CREATE LANGUAGE first; $1 is the database name, $2 is the
file name, and you should check write permissions on $2.


Thanks & Regards

Raghu Ram


Re: [GENERAL] pg_dumpall from a script

2013-10-22 Thread BladeOfLight16
On Tue, Oct 22, 2013 at 1:20 AM, James Sewell james.sew...@lisasoft.com wrote:

 That looks great, but it doesn't really help with my problem unless I'm
 missing something (very possible!)

 I need a way to backup either from SQL in PSQL (possibly \!) or from a
 PG/PLSQL function to a file with a name set from a :variable.

 This would be triggered by a certain action in the database (applying a
 patch).


I don't understand. To use psql, you would have to launch it with a command
line, as far as I know. If psql is a viable option for you, then you should
be able to execute other commands as well. That would make a bash script a
viable option. How do you intend to launch psql?


Re: [GENERAL] pg_dumpall from a script

2013-10-22 Thread Luca Ferrari
On Tue, Oct 22, 2013 at 7:20 AM, James Sewell james.sew...@lisasoft.com wrote:

 I need a way to backup either from SQL in PSQL (possibly \!) or from a 
 PG/PLSQL function to a file with a name set from a :variable.

Could it be something like this:

# \setenv myFile 'filename'
# \! pg_dump > $myFile

Unfortunately there seems to be no unsetenv command to remove the
variable within the script.


Luca




Re: [GENERAL] pg_dumpall from a script

2013-10-22 Thread Adrian Klaver

On 10/21/2013 11:41 PM, Luca Ferrari wrote:

On Tue, Oct 22, 2013 at 7:20 AM, James Sewell james.sew...@lisasoft.com wrote:


I need a way to backup either from SQL in PSQL (possibly \!) or from a PG/PLSQL 
function to a file with a name set from a :variable.


Could it be something like this:

# \setenv myFile 'filename'
# \! pg_dump > $myFile

Unfortunately there seems to be no unsetenv command to remove the
variable within the script.


Call it again without the variable value:

http://www.postgresql.org/docs/9.3/interactive/app-psql.html

\setenv [ name [ value ] ]
Sets the environment variable name to value, or if the value is not 
supplied, unsets the environment variable


\setenv myFile




Luca





--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] pg_dumpall from a script

2013-10-22 Thread Steve Crawford

On 10/21/2013 10:20 PM, James Sewell wrote:
That looks great, but it doesn't really help with my problem unless 
I'm missing something (very possible!)


I need a way to backup either from SQL in PSQL (possibly \!) or from a 
PG/PLSQL function to a file with a name set from a :variable.


This would be triggered by a certain action in the database (applying 
a patch).


Hope that's a bit clearer!

...

Perhaps a better description of the problem you want to solve would 
help. Are backups running on the same machine as the server? How soon 
after the triggering event must the backup be run? If there is a 
subsequent triggering event that occurs before the backup starts on the 
first event will there be ill effects? Is there a method in place to 
prevent subsequent events prior to the backup completing on the first 
event? Does the test have to be done in psql for some reason or are 
other clients acceptable?


If you are running the backup from within psql then something started 
psql. I assume you aren't trying to have a constantly connected psql 
script running a loop or using listen/notify. If you can test for your 
triggering event via a query then you can use a simple bash script to 
create a temporary file based on your query then test that file to 
determine if a backup is required. Run via cron every minute and you're 
done.


If you really need relatively instant response and especially if you can 
use a different client-side program then I suppose you might be able to 
cook up something with listen/notify or advisory locks.


Alternately, you might be able to use a foreign-data-wrapper that would 
allow you to create/modify a file (external file-based table) when your 
event occurs. By watching that external file you can determine that a 
backup is required. A daemonized bash-script utilizing inotifytools can 
reliably sit and wait for a change to a file then immediately launch your 
backup script. Be careful that you have a mechanism to prevent too many 
overlapping backups.


Cheers,
Steve




Re: [GENERAL] pg_dumpall from a script

2013-10-22 Thread James Sewell
Hello All,

Thanks for the replies. Sorry I must have been a bit unclear, I realise I
*could* do this from the shell level, but can I do it from a PSQL session
somehow?

I think the answer is no (I assume unless I write my own C function or
similar).

It seems there is no way of passing a :variable to the \! command, and
PL/PGSQL is a trusted language.

I might investigate PL/PythonU; would it being untrusted mean I would be able to
do this sort of thing (run arbitrary commands)?

If not, I'll just make a wrapper and do the backup there.

Cheers,


James Sewell,
PostgreSQL Team Lead / Solutions Architect
__


 Level 2, 50 Queen St, Melbourne VIC 3000

P (+61) 3 8370 8000   W www.lisasoft.com   F (+61) 3 8370 8099



On Wed, Oct 23, 2013 at 3:24 AM, Steve Crawford 
scrawf...@pinpointresearch.com wrote:

 On 10/21/2013 10:20 PM, James Sewell wrote:

 That looks great, but it doesn't really help with my problem unless I'm
 missing something (very possible!)

 I need a way to backup either from SQL in PSQL (possibly \!) or from a
 PG/PLSQL function to a file with a name set from a :variable.

 This would be triggered by a certain action in the database (applying a
 patch).

 Hope that's a bit clearer!

 ...

 Perhaps a better description of the problem you want to solve would help.
 Are backups running on the same machine as the server? How soon after the
 triggering event must the backup be run? If there is a subsequent
 triggering event that occurs before the backup starts on the first event
 will there be ill effects? Is there a method in place to prevent subsequent
 events prior to the backup completing on the first event? Does the test
 have to be done in psql for some reason or are other clients acceptable?

 If you are running the backup from within psql then something started
 psql. I assume you aren't trying to have a constantly connected psql script
 running a loop or using listen/notify. If you can test for your triggering
 event via a query then you can use a simple bash script to create a
 temporary file based on your query then test that file to determine if a
 backup is required. Run via cron every minute and you're done.

 If you really need relatively instant response and especially if you can
 use a different client-side program then I suppose you might be able to
 cook up something with listen/notify or advisory locks.

 Alternately, you might be able to use a foreign-data-wrapper that would
 allow you to create/modify a file (external file-based table) when your
 event occurs. By watching that external file you can determine that a
 backup is required. A daemonized bash script utilizing inotify-tools can
 reliably sit and wait for a change to a file, then immediately launch your
 backup script. Be careful that you have a mechanism to prevent too many
 overlapping backups.

 Cheers,
 Steve


-- 


--
The contents of this email are confidential and may be subject to legal or 
professional privilege and copyright. No representation is made that this 
email is free of viruses or other defects. If you have received this 
communication in error, you may not copy or distribute any part of it or 
otherwise disclose its contents to anyone. Please advise the sender of your 
incorrect receipt of this correspondence.


Re: [GENERAL] pg_dumpall from a script

2013-10-22 Thread Adrian Klaver

On 10/22/2013 03:41 PM, James Sewell wrote:

Hello All,

Thanks for the replies. Sorry I must have been a bit unclear, I realise I
*could* do this from the shell level, but can I do it from a PSQL
session somehow?


Lucas' \setenv method won't work for you?



Cheers,


James Sewell,
PostgreSQL Team Lead / Solutions Architect



--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] pg_dumpall from a script

2013-10-22 Thread James Sewell
Oh I missed that, I skimmed and thought it was the same as \set

Turns out it's not and it's exactly what I want!

Thanks!
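For the archives, the \setenv route (psql 9.2+) looks roughly like this. This is an untested sketch with example names; the trick is that :variables are interpolated in \setenv's argument, while \! passes its line to the shell untouched, so the value has to travel via the environment:

```
-- inside a psql session; 'patch_42.sql' and /mydir are example values
\set myfile 'patch_42.sql'
\setenv MYFILE :myfile
\! pg_dumpall > /mydir/"$MYFILE"
```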


James Sewell,
PostgreSQL Team Lead / Solutions Architect
__





On Wed, Oct 23, 2013 at 11:48 AM, Adrian Klaver adrian.kla...@gmail.com wrote:

 On 10/22/2013 03:41 PM, James Sewell wrote:

 Hello All,

  Thanks for the replies. Sorry I must have been a bit unclear, I realise I
 *could* do this from the shell level, but can I do it from a PSQL
 session somehow?


 Lucas' \setenv method won't work for you?



  Cheers,


 James Sewell,
 PostgreSQL Team Lead / Solutions Architect



 --
 Adrian Klaver
 adrian.kla...@gmail.com




[GENERAL] pg_dumpall from a script

2013-10-21 Thread James Sewell
Hello,

I need to trigger a database dump from a SQL script (or function, but I
think that is even less likely).

I know I can do:

\! pg_dumpall > /mydir/myfile

Which is fine, but I need to use a variable to set the name.

\set myfile 'filename'
\! pg_dumpall > /mydir/:myfile

Doesn't seem to work.

Any ideas?

James Sewell,
PostgreSQL Team Lead / Solutions Architect
__



-- 


--
The contents of this email are confidential and may be subject to legal or 
professional privilege and copyright. No representation is made that this 
email is free of viruses or other defects. If you have received this 
communication in error, you may not copy or distribute any part of it or 
otherwise disclose its contents to anyone. Please advise the sender of your 
incorrect receipt of this correspondence.


Re: [GENERAL] pg_dumpall from a script

2013-10-21 Thread Raghu Ram
On Tue, Oct 22, 2013 at 8:07 AM, James Sewell james.sew...@lisasoft.com wrote:

 Hello,

 I need to trigger a database dump from a SQL script (or function, but I
 think that is even less likely).

 I know I can do:

 \! pg_dumpall > /mydir/myfile

 Which is fine, but I need to use a variable to set the name.

 \set myfile 'filename'
 \! pg_dumpall > /mydir/:myfile

 Doesn't seem to work.

 Any ideas?


The URL below provides more information on automated database backups on Linux:

http://wiki.postgresql.org/wiki/Automated_Backup_on_Linux


Thanks & Regards
Raghu Ram


Re: [GENERAL] pg_dumpall from a script

2013-10-21 Thread James Sewell
That looks great, but it doesn't really help with my problem unless I'm
missing something (very possible!)

I need a way to backup either from SQL in PSQL (possibly \!) or from a
PG/PLSQL function to a file with a name set from a :variable.

This would be triggered by a certain action in the database (applying a
patch).

Hope that's a bit clearer!

Cheers,


James Sewell,
PostgreSQL Team Lead / Solutions Architect
__





On Tue, Oct 22, 2013 at 4:05 PM, Raghu Ram raghuchenn...@gmail.com wrote:


 On Tue, Oct 22, 2013 at 8:07 AM, James Sewell 
  james.sew...@lisasoft.com wrote:

 Hello,

 I need to trigger a database dump from a SQL script (or function, but I
 think that is even less likely).

 I know I can do:

  \! pg_dumpall > /mydir/myfile

 Which is fine, but I need to use a variable to set the name.

 \set myfile 'filename'
  \! pg_dumpall > /mydir/:myfile

 Doesn't seem to work.

 Any ideas?


  The URL below provides more information on automated database backups on
  Linux:

 http://wiki.postgresql.org/wiki/Automated_Backup_on_Linux


  Thanks & Regards
 Raghu Ram




Re: [GENERAL] pg_dumpall custom format?

2011-06-24 Thread Guillaume Lelarge
On Wed, 2011-06-22 at 16:17 +0100, Thom Brown wrote:
 On 13 May 2010 18:28, Guillaume Lelarge guilla...@lelarge.info wrote:
  Le 13/05/2010 19:24, Thom Brown a écrit :
  On 13 May 2010 17:49, Guillaume Lelarge guilla...@lelarge.info wrote:
 
  Le 13/05/2010 14:12, Thom Brown a écrit :
  On 5 March 2009 12:08, Thom Brown thombr...@gmail.com wrote:
  [...]
  I'm bringing this thread back to life to see if there are any further
  thoughts on this.  It would be nice to have a complete backup of a
  database
  cluster in custom format, and use pg_restore to restore an individual
  database in the same way that it can restore individual schemas and
  tables
  etc.
 
 
  Nope. But this is something I'd like to have too. I didn't actually look
  into it much more (I mean, in the code), but I intend to work on this
  for 9.1 (and other pg_dump stuff).
 
 
 
  You're now my favourite PostgreSQL developer!  Thanks Guillaume!
 
 
  Don't get your expectations too high. I just said I'll try :)
 
 Have you had any opportunity to look at this yet? :)
 

Nope, still in my TODO list.


-- 
Guillaume
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com




Re: [GENERAL] pg_dumpall custom format?

2011-06-22 Thread Thom Brown
On 13 May 2010 18:28, Guillaume Lelarge guilla...@lelarge.info wrote:
 Le 13/05/2010 19:24, Thom Brown a écrit :
 On 13 May 2010 17:49, Guillaume Lelarge guilla...@lelarge.info wrote:

 Le 13/05/2010 14:12, Thom Brown a écrit :
 On 5 March 2009 12:08, Thom Brown thombr...@gmail.com wrote:
 [...]
 I'm bringing this thread back to life to see if there are any further
 thoughts on this.  It would be nice to have a complete backup of a
 database
 cluster in custom format, and use pg_restore to restore an individual
 database in the same way that it can restore individual schemas and
 tables
 etc.


 Nope. But this is something I'd like to have too. I didn't actually look
 into it much more (I mean, in the code), but I intend to work on this
 for 9.1 (and other pg_dump stuff).



 You're now my favourite PostgreSQL developer!  Thanks Guillaume!


 Don't get your expectations too high. I just said I'll try :)

Have you had any opportunity to look at this yet? :)

Thanks

Thom



Re: [GENERAL] pg_dumpall behavior in 9.1beta1

2011-05-14 Thread Rajesh Kumar Mallah
Oops! Sorry for the noise.

It was an innocuous NOTICE, not an ERROR as claimed.
Thanks for your time.

regds
mallah.

On Fri, May 13, 2011 at 7:59 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Rajesh Kumar Mallah mallah.raj...@gmail.com writes:
 We migrated to 9.1beta1 from 9.0.x in our development environment.
 we shall report if any significant problems or issue arises out of this.

 one small thing that we observed right now regarding pg_dumpall output is that

 the ALTER USER is referring to some db objects which are not created yet
 and this raises an ERROR; this may cause some inconvenience where restoration
 is being carried out in a txn.

 example of relevant sql lines is given below

 CREATE ROLE pdns;
 ALTER ROLE pdns WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB LOGIN
 NOREPLICATION PASSWORD 'md56dx' ;
 ALTER ROLE pdns SET search_path TO pdns;

 in the above line the 'pdns' namespace does not exist yet.

 When I do that, I get a harmless NOTICE, not an ERROR.  Are you sure you
 are getting an error?

 regression=# CREATE ROLE pdns;
 CREATE ROLE
 regression=# ALTER ROLE pdns WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB 
 LOGIN
 regression-# NOREPLICATION PASSWORD 'md56dx' ;
 ALTER ROLE
 regression=# ALTER ROLE pdns SET search_path TO pdns;
 NOTICE:  schema "pdns" does not exist
 ALTER ROLE
 regression=#

                        regards, tom lane




[GENERAL] pg_dumpall behavior in 9.1beta1

2011-05-13 Thread Rajesh Kumar Mallah
Dear List ,

We migrated to 9.1beta1 from 9.0.x in our development environment.
we shall report if any significant problems or issue arises out of this.

one small thing that we observed right now regarding pg_dumpall output is that

the ALTER USER is referring to some db objects which are not created yet
and this raises an ERROR; this may cause some inconvenience where restoration
is being carried out in a txn.

example of relevant sql lines is given below

CREATE ROLE pdns;
ALTER ROLE pdns WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB LOGIN
NOREPLICATION PASSWORD 'md56dx' ;
ALTER ROLE pdns SET search_path TO pdns;

in the above line the 'pdns' namespace does not exist yet.

regds
Rajesh Kumar Mallah.



Re: [GENERAL] pg_dumpall behavior in 9.1beta1

2011-05-13 Thread Tom Lane
Rajesh Kumar Mallah mallah.raj...@gmail.com writes:
 We migrated to 9.1beta1 from 9.0.x in our development environment.
 we shall report if any significant problems or issue arises out of this.

 one small thing that we observed right now regarding pg_dumpall output is that

 the ALTER USER is referring to some db objects which are not created yet
 and this raises an ERROR; this may cause some inconvenience where restoration
 is being carried out in a txn.

 example of relevant sql lines is given below

 CREATE ROLE pdns;
 ALTER ROLE pdns WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB LOGIN
 NOREPLICATION PASSWORD 'md56dx' ;
 ALTER ROLE pdns SET search_path TO pdns;

 in the above line the 'pdns' namespace does not exist yet.

When I do that, I get a harmless NOTICE, not an ERROR.  Are you sure you
are getting an error?

regression=# CREATE ROLE pdns;
CREATE ROLE
regression=# ALTER ROLE pdns WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB 
LOGIN
regression-# NOREPLICATION PASSWORD 'md56dx' ;
ALTER ROLE
regression=# ALTER ROLE pdns SET search_path TO pdns;
NOTICE:  schema "pdns" does not exist
ALTER ROLE
regression=# 

regards, tom lane



[GENERAL] pg_dumpall backup script w. ftp ; pgpass file ; after upgrade to Ubuntu 10.4 (lucid) and Postgresql 9.0.2

2011-01-21 Thread MargaretGillon
After a year I have restarted work on a project using Postgresql. The database 
I created was in Postgresql 8.2.5. I updated my database server from Ubuntu 
7.10 to 10.4 (lucid) and installed Postgresql 9.0.2. The Postgresql install 
went fine and the restore from my previous pg_dumpall went fine, but I have an 
issue with the backup script.

I have made the pgpass file as root but I cannot get the backup script to run 
as root; I must change to a postgresql/ubuntu user "engineering" and then the 
script runs. Previously the backup script ran from cron.daily. I need to know 
what I am not doing correctly for this to work. I do not have much scripting 
experience.

The error message is:

root@claLinus:/# source /etc/cron.daily/postgresql-backup.sh
pg_dumpall: could not connect to database "template1": FATAL: Ident 
authentication failed for user "engineering"

I have confirmed that .pgpass exists:

root@claLinus:/# locate .pgpass
/home/engineering/.pgpass
/home/mmg/.pgpass
/root/.pgpass

Below is my backup script and my instructions for creating the pgpass file, 
which came from this list several years ago. Thank you for any help.

Margaret

#!/bin/bash
# file is /etc/cron.daily/postgresql-backup.sh
# store the file name into a variable
numdate=$(date +%F)-$(date +%k%M)owl.sql
# remove spaces from the file name
numdate=${numdate/ /}
# backup the database
pg_dumpall -U engineering --column-inserts --attribute-inserts > /tmp/$numdate
# record in the log it was done
echo $(date) PID $$ backup completed >> /var/log/messages
# ftp the file
ftp -inv 192.168.100.29 <<ENDFTP
user backupftp password
cd ./backupLinus/
lcd /tmp
put $numdate
bye
ENDFTP

# instructions for making the .pgpass file
touch ~/.pgpass
chmod 0600 ~/.pgpass
echo '*:*:*:username:password' >> ~/.pgpass
Note that this presumes you are logged in as the user who will be dumping the 
database(s).

This e-mail message and any attachment(s) are for the sole use of the intended 
recipient(s) and may contain company proprietary, privileged or confidential 
information. If you are not the intended recipient(s), please contact the 
sender by reply e-mail, advise them of the error and destroy this message and 
its attachments as well as any copies. The review, use or distribution of this 
message or its content by anyone other than the intended recipient or senior 
management of the company is strictly prohibited.
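A side note on the numdate handling in that script: the space it strips comes from date +%k, which blank-pads single-digit hours. A small sketch (example file naming, not Margaret's exact scheme) shows that zero-padded %H avoids the substitution entirely:

```shell
#!/usr/bin/env bash
# date +%k blank-pads the hour (producing "2011-01-21- 905owl.sql" before
# 10:00), which is why the original script needs ${numdate/ /}.
# date +%H zero-pads instead, so no space ever appears in the name.
numdate="$(date +%F)-$(date +%H%M)owl.sql"

# Defensive check: fail loudly if a space slipped in anyway.
case "$numdate" in
    *" "*) echo "unexpected space in file name: $numdate" >&2; exit 1 ;;
esac
echo "$numdate"
```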


Re: [GENERAL] pg_dumpall backup script w. ftp ; pgpass file ; after upgrade to Ubuntu 10.4 (lucid) and Postgresql 9.0.2

2011-01-21 Thread John R Pierce

On 01/21/11 11:07 AM, margaretgil...@chromalloy.com wrote:
After a year I have restarted work on a project using Postgresql. The 
database I created was in Postgresql 8.2.5. I updated my database 
server from Ubuntu 7.10 to 10.4 (lucid)  and installed Postgresql 
9.0.2 The Postgresql install went fine and the restore from my 
previous pg_dumpall went fine but I have an issue with the backup script.


I have made the pgpass file as root but I cannot get the backup script 
to run as root I must change to a postgresql/ubuntu user engineering 
and then the script runs. Previously the backup script ran from 
cron.daily . I need to know what I am not doing correctly for this to 
work. I do not have much scripting experience.

...

FATAL:  Ident authentication failed for user "engineering"


That says it right there.  Your LOCAL authentication type is set to 
IDENT in your pg_hba.conf file.


see the documentation section on Client Authentication, 
http://www.postgresql.org/docs/current/static/client-authentication.html







Re: [GENERAL] pg_dumpall backup script w. ftp ; pgpass file ; after upgrade to Ubuntu 10.4 (lucid) and Postgresql 9.0.2

2011-01-21 Thread MargaretGillon
 FATAL:  Ident authentication failed for user "engineering"

 that says it right there.  your LOCAL authentication type is set to
 IDENT in your pg_hba.conf file.

That was it. I have updated my install documentation.

Thank you, Margaret


Re: [GENERAL] pg_dumpall custom format?

2010-05-17 Thread Thom Brown
On 13 May 2010 18:28, Guillaume Lelarge guilla...@lelarge.info wrote:

 Le 13/05/2010 19:24, Thom Brown a écrit :
  On 13 May 2010 17:49, Guillaume Lelarge guilla...@lelarge.info wrote:
 
  Le 13/05/2010 14:12, Thom Brown a écrit :
  On 5 March 2009 12:08, Thom Brown thombr...@gmail.com wrote:
  [...]
  I'm bringing this thread back to life to see if there are any further
  thoughts on this.  It would be nice to have a complete backup of a
  database
  cluster in custom format, and use pg_restore to restore an individual
  database in the same way that it can restore individual schemas and
  tables
  etc.
 
 
  Nope. But this is something I'd like to have too. I didn't actually look
  into it much more (I mean, in the code), but I intend to work on this
  for 9.1 (and other pg_dump stuff).
 
 
 
  You're now my favourite PostgreSQL developer!  Thanks Guillaume!
 

 Don't get your expectations too high. I just said I'll try :)


I guess this will also require changes to pg_restore to support:

-g globals only
-r roles only
-t tablespaces (conflicts with pg_restore's table specification of -t)

And of course being able to restore individual databases and specifying
restoration of a table/function/trigger within specific databases.  I guess
that could be done by specifying a database to restore, then asking for a
single function.

e.g.:

pg_restore --database=my_source_database -P my_function(argtype) -d
my_destination_database /database/backup/file

And adding database entry support to the listing file.

Thom


Re: [GENERAL] pg_dumpall custom format?

2010-05-17 Thread Guillaume Lelarge
Le 17/05/2010 14:28, Thom Brown a écrit :
 On 13 May 2010 18:28, Guillaume Lelarge guilla...@lelarge.info wrote:
 
 Le 13/05/2010 19:24, Thom Brown a écrit :
 On 13 May 2010 17:49, Guillaume Lelarge guilla...@lelarge.info wrote:

 Le 13/05/2010 14:12, Thom Brown a écrit :
 On 5 March 2009 12:08, Thom Brown thombr...@gmail.com wrote:
 [...]
 I'm bringing this thread back to life to see if there are any further
 thoughts on this.  It would be nice to have a complete backup of a
 database
 cluster in custom format, and use pg_restore to restore an individual
 database in the same way that it can restore individual schemas and
 tables
 etc.


 Nope. But this is something I'd like to have too. I didn't actually look
 into it much more (I mean, in the code), but I intend to work on this
 for 9.1 (and other pg_dump stuff).



 You're now my favourite PostgreSQL developer!  Thanks Guillaume!


 Don't get your expectations too high. I just said I'll try :)


 I guess this will also require changes to pg_restore to support:
 
 -g globals only
 -r roles only
 -t tablespaces (conflicts with pg_restore's table specification of -t)
 

Yes.

 And of course being able to restore individual databases and specifying
 restoration of a table/function/trigger within specific databases.  I guess
 that could be done by specifying a database to restore, then asking for a
 single function.
 
 e.g.:
 
 pg_restore --database=my_source_database -P my_function(argtype) -d
 my_destination_database /database/backup/file
 

Yes.

 And adding database entry support to the listing file.
 

And adding role and tablespace entries. And prefixing every object
with the database it belongs to.

It won't be easy, but it's really something lacking right now. Or at
least, I think so :)


-- 
Guillaume.
 http://www.postgresqlfr.org
 http://dalibo.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_dumpall for Postgres Database Daily Backup

2010-05-16 Thread Adrian von Bidder
On Thursday 13 May 2010 17.50:31 Wang, Mary Y wrote:
 http://www.postgresql.org/docs/8.3/static/backup-dump.html#BACKUP-DUMP-A
 LL (I only found the documentation for 8.3.10), and it looks like
 pg_dumpall  outfile is the best choice.

pg_dumpall or per-database pg_dump has the advantage that it is uncomplicated 
and restore is possible to newer pg versions without problem.

Another strategy is taking snapshots of the data/ directory and archiving 
WAL files as described in 
http://www.postgresql.org/docs/8.4/interactive/continuous-archiving.html

This has various trade-offs against the pg_dump approach:
 * pint in time recovery
 * you can get away with taking a full backup only occasionally, as long
   as you keep the WAL files
 * you need much more space (data/ dir plus WAL)
 * it's a bit more complex to set up
 * recovery needs to be to a database with the same version (and 
architecture?  Not sure.  Only same minor version, or patchlevel too?)
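For reference, the archiving side of that setup is just a couple of postgresql.conf settings. This is a minimal sketch with an assumed rsync destination; the base backup is then a file-system copy of data/ bracketed by pg_start_backup()/pg_stop_backup():

```
# postgresql.conf -- minimal WAL archiving sketch (8.3+)
archive_mode = on
archive_command = 'rsync -a %p backup-host:/wal_archive/%f'
# %p expands to the path of the WAL segment, %f to its file name
```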

We decided to use the continuous archiving way because (since we ship the WAL 
files off site) we lose almost no data if a crash happens (hasn't, so far 
:-) and transferring the WAL files over the network (under only light db load) 
is quite light.  (transferring pg_dump files with rsync should be quite 
light as well, though; I guess the ordering of the data is more or less 
constant for parts of tables that are not changed.  I've not tested this 
though.)

cheers
-- vbi

-- 
this email is protected by a digital signature: http://fortytwo.ch/gpg




Re: [GENERAL] pg_dumpall for Postgres Database Daily Backup

2010-05-16 Thread Raymond O'Donnell
On 16/05/2010 16:59, Adrian von Bidder wrote:
  * pint in time recovery

This is what happens *after* work - this side of the pond, anyway :-)

Ray.


-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie



Re: [GENERAL] pg_dumpall for Postgres Database Daily Backup?

2010-05-14 Thread Devrim GÜNDÜZ
On Thu, 2010-05-13 at 09:15 -0700, Wang, Mary Y wrote:
 By reading the documentation over here:
 http://www.postgresql.org/docs/8.3/static/backup-dump.html#BACKUP-DUMP-ALL (I 
 only found the documentation for 8.3.10),

If you replace 8.3 with current, you will get the current docs.

  and it looks like pg_dumpall > outfile is the best choice. 

Depends. Many people tend to dump each database individually, especially
using -Fc parameters in order to restore them easily. If you choose this
route, please note that you will also need to dump globals using
pg_dumpall -g.

Regards,
-- 
Devrim GÜNDÜZ
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
PostgreSQL RPM Repository: http://yum.pgrpms.org
Community: devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.gunduz.org  Twitter: http://twitter.com/devrimgunduz




Re: [GENERAL] pg_dumpall for Postgres Database Daily Backup

2010-05-14 Thread Scott Mead
On Thu, May 13, 2010 at 8:16 PM, Scott Marlowe scott.marl...@gmail.com wrote:

 On Thu, May 13, 2010 at 9:50 AM, Wang, Mary Y mary.y.w...@boeing.com
 wrote:
  Hi,
 
  I'm running on Postgres 8.3.8.  My system admin is ready to set up a cron
 job for a daily database backup.
  By reading the documentation over here:
  http://www.postgresql.org/docs/8.3/static/backup-dump.html#BACKUP-DUMP-ALL (I 
  only found the documentation for 8.3.10), and it looks like pg_dumpall > 
  outfile is the best choice.  I'd like to ask the community to reconfirm.

  That will work.  So will pg_dumpall | gzip > filename.gz if you want
  compression.



My preference is to use

pg_dumpall -g > globals.bak
pg_dump -Fc database > database.bak
pg_dump -Fc database1 > database1.bak

 etc..

The major reason for this is that it allows selective restores.  If
something goes wrong with my 'users' table in 'database1':

Using the pg_dumpall strategy:
- uncompress the large text file
- manually cut the 'users' table from the large text file
- restore

With the pg_dump -Fc strategy:
- pg_restore -t users -d database1 database1.bak

The pg_dump -Fc strategy also allows you to leverage the new, parallel
restore feature (-j option to pg_restore) in 8.4.

--Scott



 What's more important is that you first prove your backups are working
 by restoring them elsewhere, then check every month or so to make sure
 they're still happening.




Re: [GENERAL] pg_dumpall custom format?

2010-05-13 Thread Thom Brown
On 5 March 2009 12:08, Thom Brown thombr...@gmail.com wrote:

 Thanks for the suggestion Marc.  It would still be nice to have the custom
 format included as an option though with pg_restore supporting it, just for
 consistency.

 I will, however, follow your recommendation.

 Thom

 2009/3/5 Marc Mamin m.ma...@intershop.de

  Hello,

 you can first pipe the output of pg_dump to a compression tool:

  pg_dump. | gzip > file.

  instead of gzip, I'm using pigz, which is faster thanks to multithreading.

 HTH,

 Marc Mamin



   --
  From: pgsql-general-ow...@postgresql.org
  [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Thom Brown
  Sent: Thursday, March 05, 2009 11:53 AM
  To: pgsql general
  Subject: [GENERAL] pg_dumpall custom format?

 Hi,

  I noticed that while pg_dump can output a custom format, effectively
  compressing the output, pg_dumpall doesn't.  Is there a reason for this?
  And is there a way to get pg_dumpall to compress its output?

 Thanks

 Thom



I'm bringing this thread back to life to see if there are any further
thoughts on this.  It would be nice to have a complete backup of a database
cluster in custom format, and use pg_restore to restore an individual
database in the same way that it can restore individual schemas and tables
etc.

Thanks

Thom


[GENERAL] pg_dumpall for Postgres Database Daily Backup

2010-05-13 Thread Wang, Mary Y
Hi,

I'm running on Postgres 8.3.8.  My system admin is ready to set up a cron job 
for a daily database backup.
By reading the documentation over here: 
http://www.postgresql.org/docs/8.3/static/backup-dump.html#BACKUP-DUMP-ALL (I 
only found the documentation for 8.3.10), and it looks like pg_dumpall > 
outfile is the best choice.  I'd like to ask the community to reconfirm.

Thanks in advance.
Mary






[GENERAL] pg_dumpall for Postgres Database Daily Backup?

2010-05-13 Thread Wang, Mary Y
Hi,

I'm running on Postgres 8.3.8.  My system admin is ready to set up a cron job 
for a daily database backup.
By reading the documentation over here: 
http://www.postgresql.org/docs/8.3/static/backup-dump.html#BACKUP-DUMP-ALL (I 
only found the documentation for 8.3.10), and it looks like pg_dumpall > 
outfile is the best choice.  I'd like to ask the community to reconfirm.

Thanks in advance.
Mary







Re: [GENERAL] pg_dumpall custom format?

2010-05-13 Thread Guillaume Lelarge
Le 13/05/2010 14:12, Thom Brown a écrit :
 On 5 March 2009 12:08, Thom Brown thombr...@gmail.com wrote:
 [...]
 I'm bringing this thread back to life to see if there are any further
 thoughts on this.  It would be nice to have a complete backup of a database
 cluster in custom format, and use pg_restore to restore an individual
 database in the same way that it can restore individual schemas and tables
 etc.
 

Nope. But this is something I'd like to have too. I didn't actually look
into it much more (I mean, in the code), but I intend to work on this
for 9.1 (and other pg_dump stuff).


-- 
Guillaume.
 http://www.postgresqlfr.org
 http://dalibo.com



Re: [GENERAL] pg_dumpall for Postgres Database Daily Backup?

2010-05-13 Thread Joshua D. Drake
On Thu, 2010-05-13 at 09:15 -0700, Wang, Mary Y wrote:
 Hi,
 
 I'm running on Postgres 8.3.8.  My system admin is ready to set up a cron job 
 for a daily database backup.
 By reading the documentation over here: 
 http://www.postgresql.org/docs/8.3/static/backup-dump.html#BACKUP-DUMP-ALL (I 
  only found the documentation for 8.3.10), and it looks like pg_dumpall > 
  outfile is the best choice.  I'd like to ask the community to reconfirm.

It entirely depends on your needs. I generally use pg_dump -Fc in a loop over
each database. The plain text version of the dump is particularly difficult
to work with for partial restores. 

Joshua D. Drake


 
 Thanks in advance.
 Mary
 
 
 
 
 


-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564
Consulting, Training, Support, Custom Development, Engineering





Re: [GENERAL] pg_dumpall custom format?

2010-05-13 Thread Thom Brown
On 13 May 2010 17:49, Guillaume Lelarge guilla...@lelarge.info wrote:

 On 13/05/2010 14:12, Thom Brown wrote:
  On 5 March 2009 12:08, Thom Brown thombr...@gmail.com wrote:
  [...]
  I'm bringing this thread back to life to see if there are any further
  thoughts on this.  It would be nice to have a complete backup of a
 database
  cluster in custom format, and use pg_restore to restore an individual
  database in the same way that it can restore individual schemas and
 tables
  etc.
 

 Nope. But this is something I'd like to have too. I didn't actually look
 into it much more (I mean, in the code), but I intend to work on this
 for 9.1 (and other pg_dump stuff).



You're now my favourite PostgreSQL developer!  Thanks Guillaume!

Thom


Re: [GENERAL] pg_dumpall custom format?

2010-05-13 Thread Guillaume Lelarge
On 13/05/2010 19:24, Thom Brown wrote:
 On 13 May 2010 17:49, Guillaume Lelarge guilla...@lelarge.info wrote:
 
 On 13/05/2010 14:12, Thom Brown wrote:
 On 5 March 2009 12:08, Thom Brown thombr...@gmail.com wrote:
 [...]
 I'm bringing this thread back to life to see if there are any further
 thoughts on this.  It would be nice to have a complete backup of a
 database
 cluster in custom format, and use pg_restore to restore an individual
 database in the same way that it can restore individual schemas and
 tables
 etc.


 Nope. But this is something I'd like to have too. I didn't actually look
 into it much more (I mean, in the code), but I intend to work on this
 for 9.1 (and other pg_dump stuff).



 You're now my favourite PostgreSQL developer!  Thanks Guillaume!
 

Don't get your expectations too high. I just said I'll try :)


-- 
Guillaume.
 http://www.postgresqlfr.org
 http://dalibo.com



Re: [GENERAL] pg_dumpall for Postgres Database Daily Backup

2010-05-13 Thread Vick Khera
On Thu, May 13, 2010 at 11:50 AM, Wang, Mary Y mary.y.w...@boeing.com wrote:
 it looks like pg_dumpall > outfile is the best choice.  I'd like to ask the 
 community to reconfirm.

We do an individual pg_dump on each DB separately, using the '-Fc'
format.  The only thing we miss from pg_dumpall we get via pg_dumpall
--globals-only > globals.sql.  The custom format is much more flexible
for restoring bits of the database with judicious use of pg_restore and
awk or grep.
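The approach described here can be sketched as a short shell script. This is a hedged sketch, not the poster's actual script: the backup directory, filename scheme, and connection defaults are assumptions.

```shell
#!/bin/sh
# Sketch of the per-database backup described above, under assumed
# paths and connection defaults.  Guarded so it is a no-op on hosts
# without the PostgreSQL client tools installed.
DD=$(date +%a)                       # day-of-week suffix, e.g. Mon
BACKUPDIR=${BACKUPDIR:-/backups/pgsql}

if command -v pg_dumpall >/dev/null 2>&1; then
    # Roles and tablespaces are cluster-wide: only pg_dumpall sees them.
    pg_dumpall --globals-only > "$BACKUPDIR/globals-$DD.sql"

    # One custom-format (-Fc) archive per non-template database.
    for db in $(psql -At -c "SELECT datname FROM pg_database WHERE NOT datistemplate"); do
        pg_dump -Fc -f "$BACKUPDIR/$db-$DD.dump" "$db"
    done
fi
```

The day-of-week suffix gives a rolling seven-day retention without any cleanup step, since each week overwrites the files from the week before.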



Re: [GENERAL] pg_dumpall for Postgres Database Daily Backup

2010-05-13 Thread Scott Marlowe
On Thu, May 13, 2010 at 9:50 AM, Wang, Mary Y mary.y.w...@boeing.com wrote:
 Hi,

 I'm running on Postgres 8.3.8.  My system admin is ready to set up a cron job 
 for a daily database backup.
 By reading the documentation over here: 
 http://www.postgresql.org/docs/8.3/static/backup-dump.html#BACKUP-DUMP-ALL (I 
 only found the documentation for 8.3.10), and it looks like pg_dumpall > 
 outfile is the best choice.  I'd like to ask the community to reconfirm.

That will work.  So will pg_dumpall | gzip > filename.gz if you want
compression.

What's more important is that you first prove your backups are working
by restoring them elsewhere, then check every month or so to make sure
they're still happening.
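A cheap first-pass check on such dumps (a sketch, with a hypothetical archive path): pg_restore --list prints a custom-format archive's table of contents without touching any database, so a truncated or corrupt file fails immediately. It complements, rather than replaces, the full test restore recommended above.

```shell
# Sketch only: $DUMP is a hypothetical path; the guard makes this a
# no-op on hosts without the client tools or the file.
DUMP=${DUMP:-/backups/pgsql/mydb.dump}

if command -v pg_restore >/dev/null 2>&1 && [ -f "$DUMP" ]; then
    # --list reads the archive's table of contents; an unreadable
    # dump makes pg_restore exit non-zero here.
    pg_restore --list "$DUMP" > /dev/null || echo "unreadable backup: $DUMP" >&2
fi
```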



Re: [GENERAL] pg_dumpall asking for password for each database

2009-10-07 Thread Krzysztof Barlik
 What version are you running?  IIRC it should remember the password
 between databases.

8.4.0 on Linux/x86_64. It does not, and the man page clearly says:

    pg_dumpall needs to connect several times to the PostgreSQL
    server (once per database). If you use password authentication
    it will ask for a password each time.

Is there any specific reason for this behaviour ?

Thanks to Scott and Joshua - I read the documentation about
.pgpass and PGPASSWORD; I just thought it would be a bit
more convenient and possibly safer to do it without
storing the password in files or environment variables.

Thanks,
Krzysztof




[GENERAL] pg_dumpall asking for password for each database

2009-10-06 Thread Krzysztof Barlik
Hi,

 is it possible to force pg_dumpall to ask for
password only once (connecting as superuser 'postgres') ?

Entering it for every database is a bit annoying.

Thanks,
Krzysztof






Re: [GENERAL] pg_dumpall asking for password for each database

2009-10-06 Thread Scott Mead
2009/10/6 Krzysztof Barlik kbar...@wp.pl

 Hi,

  is it possible to force pg_dumpall to ask for
 password only once (connecting as superuser 'postgres') ?

 Entering it for every database is a bit annoying.


Kind of, you'll either use an environment variable:

http://www.postgresql.org/docs/8.4/interactive/libpq-envars.html
(specifically PGPASSWORD)

   or the pgpass file:

http://www.postgresql.org/docs/8.4/interactive/libpq-pgpass.html

--Scott
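For reference, a ~/.pgpass entry has five colon-separated fields, hostname:port:database:username:password, where '*' matches any value in a field. The entry below is purely illustrative, with a placeholder password:

```
localhost:5432:*:postgres:placeholder-password
```

Note that libpq ignores the file unless its permissions are 0600 (on Unix), so run chmod 0600 ~/.pgpass after creating it.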


Re: [GENERAL] pg_dumpall asking for password for each database

2009-10-06 Thread Joshua D. Drake
On Tue, 2009-10-06 at 22:59 +0200, Krzysztof Barlik wrote:
 Hi,
 
  is it possible to force pg_dumpall to ask for
 password only once (connecting as superuser 'postgres') ?
 
 Entering it for every database is a bit annoying.

take a look at .pgpass documentation

 
 Thanks,
 Krzysztof
 
 
 
 
-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564
Consulting, Training, Support, Custom Development, Engineering
If the world pushes look it in the eye and GRR. Then push back harder. - 
Salamander




Re: [GENERAL] pg_dumpall asking for password for each database

2009-10-06 Thread Alvaro Herrera
Krzysztof Barlik wrote:
 Hi,
 
  is it possible to force pg_dumpall to ask for
 password only once (connecting as superuser 'postgres') ?
 
 Entering it for every database is a bit annoying.

What version are you running?  IIRC it should remember the password
between databases.

-- 
Alvaro Herrera                                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



[GENERAL] pg_dumpall custom format?

2009-03-05 Thread Thom Brown
Hi,

I noticed that while pg_dump can output a custom format, effectively
compressing the output, pg_dumpall doesn't.  Is there a reason for this?
And is there a way to get pg_dumpall to compress its output?

Thanks

Thom


Re: [GENERAL] pg_dumpall custom format?

2009-03-05 Thread Thom Brown
Thanks for the suggestion, Marc.  It would still be nice to have the custom
format included as an option though with pg_restore supporting it, just for
consistency.

I will, however, follow your recommendation.

Thom

2009/3/5 Marc Mamin m.ma...@intershop.de

  Hello,

 you can first pipe the output of pg_dump to a compression tool:

 pg_dump ... | gzip > file.gz

 instead of gzip, I'm using pigz, which is faster thanks to multithreading.

 HTH,

 Marc Mamin



  --
 *From:* pgsql-general-ow...@postgresql.org [mailto:
 pgsql-general-ow...@postgresql.org] *On Behalf Of *Thom Brown
 *Sent:* Thursday, March 05, 2009 11:53 AM
 *To:* pgsql general
 *Subject:* [GENERAL] pg_dumpall custom format?

 Hi,

 I noticed that while pg_dump can output a custom format, effectively
 compressing the output, pg_dumpall doesn't.  Is there a reason for this?
 And is there a way to get pg_dumpall to compress its output?

 Thanks

 Thom



Re: [GENERAL] pg_dumpall custom format?

2009-03-05 Thread Marc Mamin
Hello,
 
you can first pipe the output of pg_dump to a compression tool:
 
pg_dump ... | gzip > file.gz
 
instead of gzip, I'm using pigz, which is faster thanks to multithreading.
 
HTH,
 
Marc Mamin
 
 



From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Thom Brown
Sent: Thursday, March 05, 2009 11:53 AM
To: pgsql general
Subject: [GENERAL] pg_dumpall custom format?


Hi,

I noticed that while pg_dump can output a custom format, effectively
compressing the output, pg_dumpall doesn't.  Is there a reason for this?
And is there a way to get pg_dumpall to compress its output?

Thanks

Thom



Re: [GENERAL] pg_dumpall problem when roles have default schemas

2008-09-17 Thread Bruce Momjian
[EMAIL PROTECTED] wrote:
 I've run into a problem while migrating an existing 8.2.7 data base to a
 new server running 8.3.3 (although I think the version numbers may not
 matter -- I think I've seen this problem in the past and just lived with
 it since so much of Postgresql is so great!).
 
 The problem stems from the fact that for certain roles, I have defined
 default search paths, and because the output of pg_dumpall includes role
 definitions first, then data base definitions, then schema definitions.
 
 Thus, when piping the output (from legacy host 192.168.2.2) to populate
 the newly initialized cluster, by way of running (on the new host
 192.168.2.3)
 
 
   pg_dumpall -h 192.168.2.2|psql
 
 
 an error occurs in that first section when the script attempts to set a
 role-specific search path ... because the schema named in the search
 path hasn't been created yet.
 
 Not sure if there is some different way I should be using these tools to
 accomplish this, or if there is a work-around, or if this feature needs
 improvement.

I tested this on 8.3.3:

test=> CREATE USER test;
CREATE ROLE
test=> ALTER USER test SET search_path = 'asdf';
NOTICE:  schema asdf does not exist
ALTER ROLE
test=> SELECT * FROM pg_user;
 usename  | usesysid | usecreatedb | usesuper | usecatupd | passwd | valuntil |     useconfig
----------+----------+-------------+----------+-----------+--------+----------+--------------------
 postgres |       10 | t           | t        | t         |        |          |
 test     |    16385 | f           | f        | f         |        |          | {search_path=asdf}
(2 rows)

The ALTER SET command is performed;  it only generates a NOTICE.  Is
that the problem?

(FYI, you emailed the Postgres general _owner_; that is not the right
address for posting questions.)

-- 
  Bruce Momjian  [EMAIL PROTECTED]        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



[GENERAL] pg_dumpall problem when roles have default schemas

2008-09-16 Thread btober

I've run into a problem while migrating an existing 8.2.7 data base to a
new server running 8.3.3 (although I think the version numbers may not
matter -- I think I've seen this problem in the past and just lived with
it since so much of Postgresql is so great!).

The problem stems from the fact that for certain roles, I have defined
default search paths, and because the output of pg_dumpall includes role
definitions first, then data base definitions, then schema definitions.

Thus, when piping the output (from legacy host 192.168.2.2) to populate
the newly initialized cluster, by way of running (on the new host
192.168.2.3)


pg_dumpall -h 192.168.2.2|psql


an error occurs in that first section when the script attempts to set a
role-specific search path ... because the schema named in the search
path hasn't been created yet.

Not sure if there is some different way I should be using these tools to
accomplish this, or if there is a work-around, or if this feature needs
improvement.






Re: [GENERAL] pg_dumpall problem when roles have default schemas

2008-08-31 Thread btober

Tom Lane wrote:
 [EMAIL PROTECTED] writes:
  Thus, when piping the output (from legacy host 192.168.2.2) to
 populate  the newly initialized cluster, by way of running (on the new
 host  192.168.2.3)
  pg_dumpall -h 192.168.2.2|psql
  an error occurs in that first section when the script attempts to
 set a  role-specific search path ... because the schema named in the
 search  path hasn't been created yet.

 Could we see a complete example?

 Recent versions of the backend are not supposed to throw hard errors in
 this situation, because of exactly that hazard.  For instance:

 regression=# create role joe;
 CREATE ROLE
 regression=# alter role joe set search_path to foo, bar;
 NOTICE:  schema foo does not exist
 NOTICE:  schema bar does not exist
 ALTER ROLE
 regression=#

 which AFAICS is exactly what will happen while restoring a pg_dumpall
 dump.

Ah, I understand. I think.

What I saw was not literally an error, but rather a notice: the
alter role ... set search_path statement doesn't actually fail.


The real problem is my preconceived notion. I was thinking in terms of,
say, a database insert to a table with a foreign key, i.e., that since
the schema to be set doesn't exist, the statement should actually fail.


The implication of your illustration above is that the alter role ... 
set search_path statement completes successfully even though it is 
setting an invalid search path (invalid because the schema referenced in 
the search path to be set does not exist at the time the statement is 
executed).


So this behavior, which seems a little counter intuitive, actually makes 
for a robust end result.












Re: [GENERAL] pg_dumpall problem when roles have default schemas

2008-08-28 Thread Tom Lane
[EMAIL PROTECTED] writes:
  Thus, when piping the output (from legacy host 192.168.2.2) to
 populate  the newly initialized cluster, by way of running (on the new
 host  192.168.2.3) 
  pg_dumpall -h 192.168.2.2|psql 
  an error occurs in that first section when the script attempts to
 set a  role-specific search path ... because the schema named in the
 search  path hasn't been created yet. 

Could we see a complete example?

Recent versions of the backend are not supposed to throw hard errors in
this situation, because of exactly that hazard.  For instance:

regression=# create role joe;
CREATE ROLE
regression=# alter role joe set search_path to foo, bar;
NOTICE:  schema foo does not exist
NOTICE:  schema bar does not exist
ALTER ROLE
regression=# 

which AFAICS is exactly what will happen while restoring a pg_dumpall
dump.

regards, tom lane



Re: [GENERAL] pg_dumpall: pg_conversion table not saved

2008-05-08 Thread Michael Enke
Tom Lane wrote:
 Michael Enke [EMAIL PROTECTED] writes:
 I updated pg_conversion to set condefault=false where I need it.
 
 Why are you needing to replace an existing default conversion?
 Do we have a bug in it?

No, not a bug. But I need CP852 encoding conversion. Maybe you can take a look 
at
http://archives.postgresql.org/pgsql-general/2008-05/msg00185.php
and give your comment.

Is the CREATE CONVERSION only for the case that there is a bug in existing 
default conversions? ;-)

 pg_dumpall does NOT write that table out, and therefore it is not possible to 
 restore; this update gets lost.
 
 pg_dump does not (and shouldn't) dump system-defined objects.  It has
 no way to know that you modified the definition of such an object.
 
 Why can not the CREATE DEFAULT CONVERSION ...
   a) return with an error if there is another default conversion 
 existing or
 
 AFAICS it does.

You can try with attached library:

   CREATE FUNCTION utf82cp852(integer,integer,cstring,internal,integer)
   RETURNS void AS '/tmp/libencodings' LANGUAGE 'C';
   CREATE FUNCTION cp8522utf8(integer,integer,cstring,internal,integer)
   RETURNS void AS '/tmp/libencodings' LANGUAGE 'C';

   CREATE DEFAULT CONVERSION utf82cp852 FOR 'UNICODE' TO 'LATIN2' FROM 
utf82cp852;
   CREATE DEFAULT CONVERSION cp8522utf8 FOR 'LATIN2' TO 'UNICODE' FROM 
cp8522utf8;

It doesn't complain, but it also is not the conversion used,
since utf8_to_iso_8859_2 / iso_8859_2_utf8 still has condefault=true.
If I now set condefault to false, my conversion function is used.

Regards,
Michael

-- 
Wincor Nixdorf International GmbH
Sitz der Gesellschaft: Paderborn
Registergericht Paderborn HRB 3507
Geschäftsführer: Eckard Heidloff (Vorsitzender), Stefan Auerbach, Dr. Jürgen 
Wunram
Vorsitzender des Aufsichtsrats: Karl-Heinz Stiller
Steuernummer: 339/5884/0020 - Ust-ID Nr.: DE812927716 - WEEE-Reg.-Nr. DE44477193







[GENERAL] pg_dumpall: pg_conversion table not saved

2008-05-07 Thread Michael Enke

Hi list,
I updated pg_conversion to set condefault=false where I need it.
pg_dumpall does NOT write that table out, and therefore it is not possible to
restore; this update gets lost.
Does anybody know if this is an error, or have an explanation for why it has
to work this way?

Background:
I have created my own conversion with CREATE DEFAULT CONVERSION ...
http://www.postgresql.org/docs/8.3/interactive/sql-createconversion.html
 There should be only one default encoding in a schema for the encoding pair

Why can't CREATE DEFAULT CONVERSION ...
 a) return an error if another default conversion already exists, or
 b) silently set any other default to false
for the confor- and conto-encodings involved?

Regards,
Michael





Re: [GENERAL] pg_dumpall: pg_conversion table not saved

2008-05-07 Thread Tom Lane
Michael Enke [EMAIL PROTECTED] writes:
 I updated pg_conversion to set condefault=false where I need it.

Why are you needing to replace an existing default conversion?
Do we have a bug in it?

 pg_dumpall does NOT write that table out, and therefore it is not possible to 
 restore; this update gets lost.

pg_dump does not (and shouldn't) dump system-defined objects.  It has
no way to know that you modified the definition of such an object.

 Why can not the CREATE DEFAULT CONVERSION ...
   a) return with an error if there is another default conversion existing 
 or

AFAICS it does.

regards, tom lane



Re: [GENERAL] pg_dumpall: pg_conversion table not saved

2008-05-07 Thread Martijn van Oosterhout
On Wed, May 07, 2008 at 05:38:12PM -0400, Tom Lane wrote:
 Michael Enke [EMAIL PROTECTED] writes:
  I updated pg_conversion to set condefault=false where I need it.
 
 Why are you needing to replace an existing default conversion?
 Do we have a bug in it?

He doesn't, really.  See his message two days ago about wanting to add
another charset.  Since PostgreSQL doesn't support that, he hijacked an
existing one and replaced the conversion functions.  And those are not
saved/restored.

Have a nice day,
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 Please line up in a tree and maintain the heap invariant while 
 boarding. Thank you for flying nlogn airlines.




Re: [GENERAL] pg_dumpall

2008-01-20 Thread Aarni Ruuhimäki
On Friday 18 January 2008 14:38, Steve Clark wrote:
 Thanks for everyone that replied to my query about pg_dumpall.


 Now another question/issue - anytime I use createdb, the resulting db
 ends up with UTF-8 encoding unless I use the -E switch.  Is there a way
 to make the default be sql_ascii?  The postgres version is 8.2.5.

 Thanks again
 Steve

Hi Steve,

http://www.postgresql.org/docs/8.2/static/app-initdb.html

Best regards,

-- 
Aarni

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] pg_dumpall

2008-01-18 Thread Steve Clark

Erik Jones wrote:

On Jan 17, 2008, at 1:08 PM, Greg Smith wrote:



On Thu, 17 Jan 2008, Tom Lane wrote:


There isn't any good way to guarantee time coherence of dumps  
across two databases.


Whether there's a good way depends on what you're already doing.   
If you're going to the trouble of making a backup using PITR  
anyway, it's not hard to stop applying new logs to that replica and  
dump from it to get a point in time backup across all the  
databases.  That's kind of painful now because you have to start  
the server to run pg_dumpall, so resuming recovery is difficult,  
but you can play filesystem tricks to make that easier.



Actually, this exact scenario brings up a question I was thinking of
last night.  If you stop a PITR standby server and bring it up to dump
from, will all of the database files have something written to them at
some point during the dump?  Transactional information is what I'd
assume would be written, if so, but I'm not really sure of the low-level
details there.


Erik Jones

DBA | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com





Thanks for everyone that replied to my query about pg_dumpall.


Now another question/issue - anytime I use createdb, the resulting db
ends up with UTF-8 encoding unless I use the -E switch.  Is there a way
to make the default be sql_ascii?  The postgres version is 8.2.5.

Thanks again
Steve




[GENERAL] pg_dumpall

2008-01-17 Thread Steve Clark

Hello List,

the man page for pg_dump says:

    pg_dump is a utility for backing up a PostgreSQL database.  It makes
    consistent backups even if the database is being used concurrently.


does pg_dumpall make consistent backups if the database is being used 
concurrently?

Even though the man page doesn't say it does.

Thanks,
Steve



Re: [GENERAL] pg_dumpall

2008-01-17 Thread Martijn van Oosterhout
On Thu, Jan 17, 2008 at 11:14:22AM -0800, Glyn Astill wrote:
  begin;
  set transaction isolation level serializable;
  
  --- begin dumping stuff;
  
 
 Wouldn't that just lock everything so nothing could be updated? Or
 just the table it is outputting?

PostgreSQL uses MVCC, which means the whole thing is lock free.  It
just requires more disk space, to keep the older versions around.

Have a nice day,
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 Those who make peaceful revolution impossible will make violent revolution 
 inevitable.
  -- John F Kennedy




Re: [GENERAL] pg_dumpall

2008-01-17 Thread Glyn Astill

Alvaro Herrera [EMAIL PROTECTED] wrote:
 Glyn Astill wrote:
  Out of interest, how does pg_dump manage to do a snapshot of a
  database at an instant in time?
 
  My mental picture of pg_dump was just a series of queries dumping
 out
  the tables...
 
 begin;
 set transaction isolation level serializable;
 
 --- begin dumping stuff;
 

Wouldn't that just lock everything so nothing could be updated? Or
just the table it is outputting?

I'm guessing I need to go off and school myself on different isolation
levels etc. to understand, but say I have 2 tables, sales and sold, and
users are selling items with inserts into the sales table and a count
updated manually in sold.  Wouldn't these end up inconsistent in the
dump?








Re: [GENERAL] pg_dumpall

2008-01-17 Thread Greg Smith

On Thu, 17 Jan 2008, Tom Lane wrote:

There isn't any good way to guarantee time coherence of dumps across two 
databases.


Whether there's a good way depends on what you're already doing.  If 
you're going to the trouble of making a backup using PITR anyway, it's not 
hard to stop applying new logs to that replica and dump from it to get a 
point in time backup across all the databases.  That's kind of painful now 
because you have to start the server to run pg_dumpall, so resuming 
recovery is difficult, but you can play filesystem tricks to make that 
easier.


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD



Re: [GENERAL] pg_dumpall

2008-01-17 Thread Glyn Astill
Out of interest, how does pg_dump manage to do a snapshot of a
database at an instant in time?

My mental picture of pg_dump was just a series of queries dumping out
the tables...

--- Tom Lane [EMAIL PROTECTED] wrote:

 Steve Clark [EMAIL PROTECTED] writes:
  does pg_dumpall make consistent backups if the database is being
 used 
  concurrently?
  Even though the man page doesn't say it does.
 
 That's intentional, because it doesn't.  What you get is a pg_dump
 snapshot of each database in sequence; those snapshots don't all
 correspond to the same time instant.  There isn't any good way to
 guarantee time coherence of dumps across two databases.
 
   regards, tom lane
 
 







Re: [GENERAL] pg_dumpall

2008-01-17 Thread Alvaro Herrera
Tom Lane wrote:
 Steve Clark [EMAIL PROTECTED] writes:
  does pg_dumpall make consistent backups if the database is being used 
  concurrently?
  Even though the man page doesn't say it does.
 
 That's intentional, because it doesn't.  What you get is a pg_dump
 snapshot of each database in sequence; those snapshots don't all
 correspond to the same time instant.  There isn't any good way to
 guarantee time coherence of dumps across two databases.

The fine point possibly being missed is that each database's dump
produced by pg_dumpall is, of course, self-consistent.

-- 
Alvaro Herrera        http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [GENERAL] pg_dumpall

2008-01-17 Thread Tom Lane
Steve Clark [EMAIL PROTECTED] writes:
 does pg_dumpall make consistent backups if the database is being used 
 concurrently?
 Even though the man page doesn't say it does.

That's intentional, because it doesn't.  What you get is a pg_dump
snapshot of each database in sequence; those snapshots don't all
correspond to the same time instant.  There isn't any good way to
guarantee time coherence of dumps across two databases.

regards, tom lane



Re: [GENERAL] pg_dumpall

2008-01-17 Thread Erik Jones


On Jan 17, 2008, at 1:08 PM, Greg Smith wrote:


On Thu, 17 Jan 2008, Tom Lane wrote:

There isn't any good way to guarantee time coherence of dumps  
across two databases.


Whether there's a good way depends on what you're already doing.   
If you're going to the trouble of making a backup using PITR  
anyway, it's not hard to stop applying new logs to that replica and  
dump from it to get a point in time backup across all the  
databases.  That's kind of painful now because you have to start  
the server to run pg_dumpall, so resuming recovery is difficult,  
but you can play filesystem tricks to make that easier.


Actually, this exact scenario brings up a question I was thinking of  
last night.  If you stop a PITR standby server and bring it up to  
dump from, will all of the database files have something written to  
them at some point during the dump?  Transactional information is  
what I'd assume would be written, if so, but I'm not really sure of  
the low-level details there.


Erik Jones

DBA | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com






Re: [GENERAL] pg_dumpall

2008-01-17 Thread Alvaro Herrera
Glyn Astill wrote:
 Out of interest, how does pg_dump manage to do a snapshot of a
 database at an instant in time?

 My mental picture of pg_dump was just a series of queries dumping out
 the tables...

begin;
set transaction isolation level serializable;

--- begin dumping stuff;
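Spelled out a little more (a sketch of the effect, not pg_dump's actual
source; the table names are borrowed from Glyn's sales/sold example
earlier in the thread):

```sql
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- In releases before 9.1, SERIALIZABLE means snapshot isolation, so
-- every statement below sees the database as of this point in time.

COPY sales TO STDOUT;   -- concurrent inserts into sales are invisible
COPY sold  TO STDOUT;   -- and so are concurrent updates to sold

COMMIT;
```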



-- 
Alvaro Herrera        http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [GENERAL] pg_dumpall

2008-01-17 Thread Tom Lane
Alvaro Herrera [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 That's intentional, because it doesn't.  What you get is a pg_dump
 snapshot of each database in sequence; those snapshots don't all
 correspond to the same time instant.  There isn't any good way to
 guarantee time coherence of dumps across two databases.

 The fine point possibly being missed is that each database's dump
 produced by pg_dumpall is, of course, self-consistent.

Right, but Steve already knew that.

Hmm ... it suddenly strikes me that Simon's transaction snapshot
cloning idea could provide a way to get exactly coherent dumps from
multiple databases in the same cluster.  Maybe he already realized that,
but I didn't.

regards, tom lane



[GENERAL] pg_dumpall and authentication

2007-11-09 Thread Tom Hart
I'm sure you guys have heard this about 100 times, and I've done some 
research on Google and found out some things, but I still have a couple 
questions.


As I'm sure you may have guessed from the subject, I'm trying to 
schedule (under windows) pg_dumpall to run each night/morning/full 
moon/whatever. The hitch in this is that it asks for a password for each 
database as it dumps it. I know I can use the PGPASS environment 
variable, or a ~/.pgpass file. What I'm wondering is what's considered 
'best practice' in practical applications. What solutions do you guys 
use? Is it worth changing PGPASSFILE to point to a different .pgpass?


Thanks in advance for any assistance.

Thomas R. Hart II
[EMAIL PROTECTED]



Re: [GENERAL] pg_dumpall and authentication

2007-11-09 Thread Steve Atkins


On Nov 9, 2007, at 8:52 AM, Tom Hart wrote:

I'm sure you guys have heard this about 100 times, and I've done  
some research on Google and found out some things, but I still have  
a couple questions.


As I'm sure you may have guessed from the subject, I'm trying to  
schedule (under windows) pg_dumpall to run each night/morning/full  
moon/whatever. The hitch in this is that it asks for a password for  
each database as it dumps it. I know I can use the PGPASS  
environment variable, or a ~/.pgpass file. What I'm wondering is  
what's considered 'best practice' in practical applications. What  
solutions do you guys use? Is it worth changing PGPASSFILE to point  
to a different .pgpass?


Any of those approaches should be fine. I'd probably stick with the  
default pgpass file, just for the sake of whoever may have to  
maintain it next.


I tend to create a unix user just for doing backups and other  
scheduled maintenance, then give that user access to the database via  
ident authentication from the local system only. If PG-on-Windows has  
equivalent functionality that's another approach to consider.
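For reference, the pg_hba.conf entry for that setup looks roughly like
this (the role name 'backup' is invented; older releases spell the
method 'ident sameuser' rather than plain 'ident'):

```
# TYPE  DATABASE  USER    METHOD
local   all       backup  ident
```

With that line, connections over the local socket authenticate as
whatever OS user is running the process, with no password involved.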


Cheers,
  Steve




Re: [GENERAL] pg_dumpall and authentication

2007-11-09 Thread Tom Hart

Steve Atkins wrote:


On Nov 9, 2007, at 8:52 AM, Tom Hart wrote:

I'm sure you guys have heard this about 100 times, and I've done some 
research on Google and found out some things, but I still have a 
couple questions.


As I'm sure you may have guessed from the subject, I'm trying to 
schedule (under windows) pg_dumpall to run each night/morning/full 
moon/whatever. The hitch in this is that it asks for a password for 
each database as it dumps it. I know I can use the PGPASS environment 
variable, or a ~/.pgpass file. What I'm wondering is what's 
considered 'best practice' in practical applications. What solutions 
do you guys use? Is it worth changing PGPASSFILE to point to a 
different .pgpass?


Any of those approaches should be fine. I'd probably stick with the 
default pgpass file, just for the sake of whoever may have to maintain 
it next.


I tend to create a unix user just for doing backups and other 
scheduled maintenance, then give that user access to the database via 
ident authentication from the local system only. If PG-on-Windows has 
equivalent functionality that's another approach to consider.
Ok, here's what I think is going to work best for me. I'm going to 
create a user with very little access rights, and then I'm going to set 
up a scheduled task in windows to run pg_dumpall on the machine that 
houses the db (and where the backup will be stored). On that machine 
I'll have a pgpass file that I've placed somewhere where only the dummy 
backup account can access it and pointed to it with the PGPASSFILE 
environment variable. I think I'll be able to run a scheduled task 
associated with that name without giving them login abilities.
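Roughly, the unix-flavored sketch of that setup looks like this (the
user name, password, and file location are all invented; on Windows the
default location is %APPDATA%\postgresql\pgpass.conf instead):

```shell
#!/bin/sh
# Keep the credentials somewhere only the backup account can read.
PGPASSFILE="$HOME/backup_pgpass"    # hypothetical custom location
export PGPASSFILE

# One line per credential: hostname:port:database:username:password
# ('*' matches any value in that field).
cat > "$PGPASSFILE" <<'EOF'
localhost:5432:*:backup_user:s3cret
EOF

# libpq silently ignores the file unless it is private to its owner.
chmod 600 "$PGPASSFILE"

# The scheduled job can then run without a password prompt:
# pg_dumpall -U backup_user -f /backups/all.sql
```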


Sound pretty solid to you guys?

BTW, this isn't protecting a ton of data, but the data itself is pretty 
sensitive, so security is a concern. I appreciate your help in building 
as solid a system as I can.
BTW2, I already told somebody today that Windows and security is like a 
cherry pie with gravy on top, but I don't get a choice here, so try to 
understand :-)




[GENERAL] pg_dumpall to psql -f NOT working

2007-08-12 Thread Ketema
Hello.  I performed a pg_dumpall of my 8.1 db cluster and upgraded to
8.2.4.  The dump went fine and the script file appears to be fine.  I
have a mixture of UTF8 and SQL_ASCII databases in the dump.  However I
am not able to restore my data after the upgrade.  I ran this from
the command line:
psql -f myfile -U postgres postgres

The process starts OK, then it fails during data restoration.  What I
see on the command line is a bunch of invalid commands because of psql
interpreting data as commands.

I had specifically not used INSERT statements because the documentation
said the COPY commands were much faster, now I am regretting it.  How
can I get psql to interpret the dump file correctly and restore my
data?

Thanks.
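One way to narrow this down (a sketch; 'myfile' is the dump name from
the post, and the file contents below are a stand-in for illustration):
pg_dumpall scripts interleave \connect and SET client_encoding commands
with COPY data, and an encoding mismatch there is a common reason psql
misreads COPY data as commands.

```shell
#!/bin/sh
# Stand-in for the real dump file; a genuine pg_dumpall script
# interleaves these commands with COPY data blocks.
cat > myfile <<'EOF'
\connect utf8_db
SET client_encoding = 'UTF8';
\connect ascii_db
SET client_encoding = 'SQL_ASCII';
EOF

# See which database/encoding pairs the restore switches through:
grep -n -E 'connect|client_encoding' myfile

# Then replay, stopping at the first real error so the cascade of
# "invalid command" noise doesn't hide the root cause (needs a live
# server, so it is commented out here):
# psql -v ON_ERROR_STOP=1 -f myfile -U postgres postgres
```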




Re: [GENERAL] pg_dumpall to psql -f NOT working

2007-08-12 Thread Pavel Stehule
2007/8/12, Ketema [EMAIL PROTECTED]:
 Hello.  I performed a pg_dumpall of my 8.1 db cluster and upgraded to
 8.2.4.  The dump went fine and the script file appears to be fine.  I
 have a mixture of UTF8 and SQL_ASCII databases in the dump.  However I
 am not able to restore my data after the upgrade.  I ran this from
 the command line:
 psql -f myfile -U postgres postgres

 The process starts OK, then it fails during data restoration.  What I
 see on the command line is a bunch of invalid commands because of psql
 interpreting data as commands.

 I had specifically not used INSERT statements because the documentation
 said the COPY commands were much faster, now I am regretting it.  How
 can I get psql to interpret the dump file correctly and restore my
 data?

 Thanks.


Hello

try pg_restore myfile

Regards
Pavel Stehule



Re: [GENERAL] pg_dumpall and version confusion

2007-03-16 Thread Dave Page


 --- Original Message ---
 From: Joshua D. Drake [EMAIL PROTECTED]
 To: Dave Page [EMAIL PROTECTED]
 Sent: 15/03/07, 23:51:18
 Subject: Re: [GENERAL] pg_dumpall and version confusion
 
 If you want that, create a wrapper program that calls to different
 statically compiled versions of pg_dump.

I can't even begin to imagine how difficult that would be on Windows!

/D



Re: [GENERAL] pg_dumpall and version confusion

2007-03-16 Thread Jorge Godoy
Dave Page [EMAIL PROTECTED] writes:

 --- Original Message ---
 From: Joshua D. Drake [EMAIL PROTECTED]
 To: Dave Page [EMAIL PROTECTED]
 Sent: 15/03/07, 23:51:18
 Subject: Re: [GENERAL] pg_dumpall and version confusion
 
 If you want that, create a wrapper program that calls to different
 statically compiled versions of pg_dump.

 I can't even begin to imagine how difficult that would be on Windows!

As difficult as a new .bat file? 

-- 
Jorge Godoy  [EMAIL PROTECTED]



Re: [GENERAL] pg_dumpall and version confusion

2007-03-16 Thread Dave Page
Jorge Godoy wrote:
 Dave Page [EMAIL PROTECTED] writes:
 
 --- Original Message ---
 From: Joshua D. Drake [EMAIL PROTECTED]
 To: Dave Page [EMAIL PROTECTED]
 Sent: 15/03/07, 23:51:18
 Subject: Re: [GENERAL] pg_dumpall and version confusion

 If you want that, create a wrapper program that calls to different
 statically compiled versions of pg_dump.
 I can't even begin to imagine how difficult that would be on Windows!
 
 As difficult as a new .bat file? 
 

That's the easy bit. The difficult bit is building the statically linked
utilities with ssl, gettext and kerberos support. We found when porting
PostgreSQL in the first place that for many of the supporting libraries,
Windows is an afterthought and where on unix there might be static and
dynamic builds, on Windows once dynamic is done, static doesn't matter.

Regards, Dave



Re: [GENERAL] pg_dumpall and version confusion

2007-03-16 Thread Joshua D. Drake
Dave Page wrote:
 
 --- Original Message ---
 From: Joshua D. Drake [EMAIL PROTECTED]
 To: Dave Page [EMAIL PROTECTED]
 Sent: 15/03/07, 23:51:18
 Subject: Re: [GENERAL] pg_dumpall and version confusion

 If you want that, create a wrapper program that calls to different
 statically compiled versions of pg_dump.
 
 I can't even begin to imagine how difficult that would be on Windows!

Well supporting Windows in general is difficult ;), but perhaps it would
only be hard the first time because the build process could be automated?

J


 
 /D
 
 


-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/




Re: [GENERAL] pg_dumpall and version confusion

2007-03-16 Thread Tony Caduto

Joshua D. Drake wrote:

 I am afraid that I have to agree with Tom here. Pg_dump has plenty of
 other deficiencies; let's not add to complexity by having an
 --output-version.

 If you want that, create a wrapper program that calls to different
 statically compiled versions of pg_dump.

 Joshua D. Drake

Well for me that would not be an option.  I use Object Pascal and can't 
statically link C code into my apps.  Doing the statically linked thing 
would also bloat any resulting binary.


I don't think the whole --output-version thing would be a good idea 
either, but it would be a simple matter to add some logic for the
GRANT ON SEQUENCE, which seems to be the only thing that is really 
causing problems at least between 8.1 and 8.2.


The old way of granting permissions on sequences still works on 8.2 
right?  If so then maybe a switch to disable GRANT ON SEQUENCE would do 
the trick.


Later,

--
Tony Caduto
AM Software Design
http://www.amsoftwaredesign.com
Home of PG Lightning Admin for Postgresql
Your best bet for Postgresql Administration 





Re: [GENERAL] pg_dumpall and version confusion

2007-03-16 Thread Tom Lane
Tony Caduto [EMAIL PROTECTED] writes:
 I don't think the whole --output-version thing would be a good idea 
 either, but it would be a simple matter to add some logic for the
 GRANT ON SEQUENCE, which seems to be the only thing that is really 
 causing problems at least between 8.1 and 8.2.

You mean it's the only thing you've run into.  There are a *lot* of
changes in the output of different pg_dump versions.  Even focusing
on the narrow question of sequence changes between 8.1 and 8.2,
what of the change from emitting SERIAL to emitting a separate
CREATE SEQUENCE and ALTER SEQUENCE OWNED BY?

regards, tom lane



[GENERAL] pg_dumpall and version confusion

2007-03-15 Thread Alan J Batsford

I inherited an existing system with no documents on how it was originally
configured, so I'll provide as much as I can. OS is CentOS 4.3 Linux
Distribution.

When I search my system for anything postgres I find a boatload of 8.0.4
source, and a boatload of 7.4.8 documentation.

postgres --version yields 8.1.4 (that's no typo)
psql --version yields 7.4.8
pg_dump --version yields 7.4.8
pg_dumpall --version yields 7.4.8

When doing 'which' on all of the above binaries I get /usr/bin/ for all
4.

The original problem was receiving 'pg_dumpall: query failed: ERROR:
column datpath does not exist' error when trying to run pg_dumpall. I
learned that this is expected error if pg_dumpall is too new for the
database, but it claims to be 7.4.8. I hate to ask a terribly vague
question but how do I find out what version I'm running when postgres and
psql don't agree? There's no trace of 8.1.4 files on the system so I'm
without a lead as to how I have a binary for it.

Thanks for any assistance you can provide.




Re: [GENERAL] pg_dumpall and version confusion

2007-03-15 Thread Tom Lane
Alan J Batsford [EMAIL PROTECTED] writes:
 When I search my system for anything postgres I find a boatload of 8.0.4
 source, and a boatload of 7.4.8 documentation.

rpm -qa | grep postgres would probably prove enlightening.  It sounds
like you have postgresql-server at 8.1.4 and the other subpackages at
older releases, which is pretty odd; the RPMs should have had
cross-requires that would prevent that.

 The original problem was recieving 'pg_dumpall: query failed: ERROR:
 column datpath does not exist' error when trying to run pg_dumpall. I
 learned that this is expected error if pg_dumpall is too new for the
 database, but it claims to be 7.4.8.

No, usually you have version skew problems if pg_dumpall is too *old*
for the server, which is exactly the case here.  We try to make pg_dump
cope with prior server releases, but since we lack a time machine we
can't go back and teach old versions about subsequent server changes...

regards, tom lane



Re: [GENERAL] pg_dumpall and version confusion

2007-03-15 Thread Tony Caduto

Tom Lane wrote:

We try to make pg_dump
cope with prior server releases, but since we lack a time machine we
can't go back and teach old versions about subsequent server changes...
  

Tom,
How come version 8.2 of pg_dump uses GRANT ON SEQUENCE when dumping a 
8.1 or older database?
Was it just an oversight?  Seems it should be a simple matter to add some 
logic that says IF version >= 8.2 THEN use GRANT ON SEQUENCE else don't use it.


Bruce told me just the opposite of what you said in the above message.

Thanks,

--
Tony Caduto
AM Software Design
http://www.amsoftwaredesign.com
Home of PG Lightning Admin for Postgresql
Your best bet for Postgresql Administration 





Re: [GENERAL] pg_dumpall and version confusion

2007-03-15 Thread Alvaro Herrera
Tony Caduto wrote:
 Tom Lane wrote:
 We try to make pg_dump
 cope with prior server releases, but since we lack a time machine we
 can't go back and teach old versions about subsequent server changes...
   
 Tom,
 How come version 8.2 of pg_dump uses GRANT ON SEQUENCE when dumping a 
 8.1 or older database?

What's wrong with that?  8.2 will understand the GRANT ON SEQUENCE
without a problem.

 Bruce told me just the opposite of what you said in the above message.

So Bruce told you that hackers do have a time machine?  If so, he hasn't
let me know about it, and I'm very interested.  I'll ask him.

-- 
Alvaro Herrera        http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [GENERAL] pg_dumpall and version confusion

2007-03-15 Thread Tony Caduto

Alvaro Herrera wrote:


What's wrong with that?  8.2 will understand the GRANT ON SEQUENCE
without a problem.

  

Yes, but 8.1 and earlier won't!

If you dump a 8.1 database (with 8.2 pg_dump) and then use that dump to 
restore to another 8.1  server, any permissions on sequences will fail 
to restore because the 8.2 pg_dump can't handle NOT using GRANT ON 
SEQUENCE on the lower version of the server.
Would it really be that big of a deal to add some logic to 8.2 pg_dump 
to say: Hey I am not dumping a 8.2 server, so don't use GRANT ON SEQUENCE?


most Admin tools ship with the latest version of pg_dump and restore, so 
If I attempt to restore that dump via pgAdmin III (or other tools) which 
is using 8.2 versions of dump and restore it will fail on a 8.1 server.


Do you see the point I am trying to make?   Should admin tool vendors 
start shipping every version of pg_dump now?


In an earlier message about this same thing Bruce said (I am quoting from 
memory so it's not exact) that we don't try and make pg_dump backwards 
compatible and to use the dump that came with whatever version you are 
dumping from.  So to me that seemed opposite of what Tom said.


If I misquoted anyone I  apologize  in advance.

Thanks,

--
Tony Caduto
AM Software Design
http://www.amsoftwaredesign.com
Home of PG Lightning Admin for Postgresql
Your best bet for Postgresql Administration 





Re: [GENERAL] pg_dumpall and version confusion

2007-03-15 Thread Dave Page

Tony Caduto wrote:

Alvaro Herrera wrote:


What's wrong with that?  8.2 will understand the GRANT ON SEQUENCE
without a problem.

  

Yes, but 8.1 and earlier won't!

If you dump a 8.1 database (with 8.2 pg_dump) and then use that dump to 
restore to another 8.1  server, any permissions on sequences will fail 
to restore because the 8.2 pg_dump can't handle NOT using GRANT ON 
SEQUENCE on the lower version of the server.
Would it really be that big of a deal to add some logic to 8.2 pg_dump 
to say: Hey I am not dumping a 8.2 server, so don't use GRANT ON SEQUENCE?


most Admin tools ship with the latest version of pg_dump and restore, so 
If I attempt to restore that dump via pgAdmin III (or other tools) which 
is using 8.2 versions of dump and restore it will fail on a 8.1 server.


Do you see the point I am trying to make?   Should admin tool vendors 
start shipping every version of pg_dump now?


This is a problem I've been thinking about on and off recently - and I 
am starting to come to the conclusion that shipping each version of the 
utilities is the only way things are likely to work unless someone puts 
some really significant effort into adding backwards compatibility modes 
to pg_dump (which I imagine is likely to meet resistance if offered as a 
patch anyway).


That will be a real PITA though - we already have a bunch of code in 
pgAdmin to find just the current versions of PostgreSQL's and EDB's 
pg_dump et al. (yes, they are different), nevermind having to do it 
right back to 7.3 which is the earliest version we currently support.


The other issue will be having to ship multiple copies of libpq and 
supporting libraries for each version :-(


For the moment though, our SVN trunk code will allow you to override the 
paths to the PostgreSQL and EnterpriseDB utilities, so you can point 
pgAdmin at the appropriate version for your server.


Regards, Dave



Re: [GENERAL] pg_dumpall and version confusion

2007-03-15 Thread Tony Caduto

Dave Page wrote:
This is a problem I've been thinking about on and off recently - and I 
am starting to come to the conclusion that shipping each version of 
the utilities is the only way things are likely to work unless someone 
puts some really significant effort into adding backwards 
compatibility modes to pg_dump (which I imagine is likely to meet 
resistance if offered as a patch anyway).




I never had an issue before the 8.2 dump and the GRANT ON SEQUENCE. 
The version differences in PGSQL are nothing compared to what goes on 
with MySQL.


Maybe that would be a good Google summer of code project :-)  (make 
pg_dump more backwards compatible to at least 8.0)


Later,

--
Tony Caduto
AM Software Design
http://www.amsoftwaredesign.com
Home of PG Lightning Admin for Postgresql
Your best bet for Postgresql Administration 





Re: [GENERAL] pg_dumpall and version confusion

2007-03-15 Thread Scott Marlowe
On Thu, 2007-03-15 at 11:53, Tony Caduto wrote:
 Dave Page wrote:
  This is a problem I've been thinking about on and off recently - and I 
  am starting to come to the conclusion that shipping each version of 
  the utilities is the only way things are likely to work unless someone 
  puts some really significant effort into adding backwards 
  compatibility modes to pg_dump (which I imagine is likely to meet 
  resistance if offered as a patch anyway).
 
 
 I never had an issue before the 8.2 dump and the GRANT ON SEQUENCE. 
 The version differences in PGSQL are nothing compared to what goes on 
 with MySQL.
 
 Maybe that would be a good Google summer of code project :-)  (make 
 pg_dump more backwards compatible to at least 8.0)

It would be quite useful to have a -sourcever and -targetver flag in
pg_dump that understood the latest version and the last two or three
versions.  So, if you had pg_dump for v8.2 you could do something like:

pg_dump -sourcever 8.2 -targetver 8.0 > dump.sql



Re: [GENERAL] pg_dumpall and version confusion

2007-03-15 Thread Tom Lane
Tony Caduto [EMAIL PROTECTED] writes:
 Alvaro Herrera wrote:
 What's wrong with that?  8.2 will understand the GRANT ON SEQUENCE
 without a problem.
 
  Yes, but 8.1 and earlier won't!

You're missing the point entirely.  The versioning contract for pg_dump
is that version M.N can dump from servers of versions <= M.N, but the
dump it produces is meant to load into server versions >= M.N.  If you
want a dump that will reload into 8.1, use 8.1's pg_dump.

pg_dump is messy enough already just trying to support this contract.
I have *no* interest in trying to get it to support an --output-version
switch...

regards, tom lane



Re: [GENERAL] pg_dumpall and version confusion

2007-03-15 Thread Dave Page

Tom Lane wrote:

I have *no* interest in trying to get it to support an --output-version
switch...


There speaks a man who has the luxury of not having to worry about 
multi-version admin tools :-(


/D



Re: [GENERAL] pg_dumpall and version confusion

2007-03-15 Thread Alvaro Herrera
Dave Page wrote:
 Tom Lane wrote:
 I have *no* interest in trying to get it to support an --output-version
 switch...
 
 There speaks a man who has the luxury of not having to worry about 
 multi-version admin tools :-(

Is it more difficult for you to ship a pg_dump-8.0, pg_dump-8.1, etc
along pgAdmin?  I think you (and phpPgAdmin?) are already shipping
pg_dump binaries, right?

-- 
Alvaro Herrera        http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.


