Hi All,
I have an 8.4 server and I'd like to use the --section parameter of pg_dump and
pg_restore, which is available only in 9.2.
1. Is it, in general, safe to use the 9.2 tools on an 8.4 server? AFAIK the tools
are backward compatible; at least in the case of plain SQL commands it should
be compatible, right?
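For what it's worth, a minimal sketch of the workflow being asked about (host and database names are invented; the 9.2 binary path is an assumption):
  # dump the 8.4 server using the newer 9.2 client tools
  /usr/pgsql-9.2/bin/pg_dump -h old84host -Fc -f mydb.dump mydb
  # restore in three passes using the 9.2-only --section option
  /usr/pgsql-9.2/bin/pg_restore --section=pre-data -d mydb mydb.dump
  /usr/pgsql-9.2/bin/pg_restore --section=data -d mydb mydb.dump
  /usr/pgsql-9.2/bin/pg_restore --section=post-data -d mydb mydb.dump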
suhas.basavaraj12 wrote:
> We will be dumping data from version 9.0 and restore to 9.1.
That should work fine, as long as you use pg_dump from version
9.1 to dump the 9.0 database.
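A sketch of that advice (host names are invented):
  # run the 9.1 pg_dump against the 9.0 server, then restore into 9.1
  /usr/pgsql-9.1/bin/pg_dump -h old90host -Fc -f mydb.dump mydb
  /usr/pgsql-9.1/bin/pg_restore -h new91host -C -d postgres mydb.dump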
-Kevin
Thanks for the info. We will be dumping data from version 9.0 and restoring to
9.1.
Rgrds
Suhas
On 10/01/2013 09:47, Arun Padule wrote:
> Yes, you can dump data from one version of Postgres and restore it to another
> version, but you may later face issues with casting of data types.
As the documentation says, you need to make the dump with pg_dump from the
"destination" version. But if you want to migrat
Hi,
Yes, you can dump data from one version of Postgres and restore it to another
version, but you may later face issues with casting of data types.
E.g.
WHERE '5' = 5 works on version 8.0,
but the same WHERE clause throws a data type mismatch on version 9.1.
This is just one example; there might be many
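The best-known instance of this is 8.3's removal of most implicit casts to text; a sketch (the statements are illustrative, not from this thread):
  # comparing a text column to an integer relied on an implicit cast pre-8.3
  psql -d mydb -c "SELECT * FROM t WHERE textcol = 5;"
  #   8.2 and earlier: works via an implicit cast
  #   8.3 and later:   ERROR: operator does not exist: text = integer
  # an explicit cast works on all versions
  psql -d mydb -c "SELECT * FROM t WHERE textcol = 5::text;"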
Hi,
Can we dump data from any Postgres version and restore it to any version
of Postgres?
If not, can anyone tell which version of data is compatible with which
version?
Rgrds
Suhas.B
Neil Morgan writes:
> I am running a PostgreSQL 8.3 server (not my choice, would prefer 9.1) but I
> am experiencing memory issues when using pg_dump.
8.3.what?
> Does anyone have any ideas please?
For starters, turn on log_statement so you can see what query is
triggering this. It'd be even
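A sketch of that first step (the data directory path is an assumption; on 8.3 this is a config edit plus reload):
  echo "log_statement = 'all'" >> /var/lib/pgsql/data/postgresql.conf
  pg_ctl -D /var/lib/pgsql/data reload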
Dear All,
I am running a PostgreSQL 8.3 server (not my choice, I would prefer 9.1), but I am
experiencing memory issues when using pg_dump.
I have looked on the forums for memory issues, and can say that the data is not
corrupt.
We are running a VM with RHEL6, 4GB RAM, 2 CPUs and an 80GB HDD.
Does any
Alanoly Andrews writes:
> On this issue, instead of going for a newer version of xlc, as suggested, I
> opted to get a newer version of the Postgres source code, 9.1.4. After
> compiling it with the same xlc version, I found that pg_dump works as
> expected. So, the problem appears to be somewh
at least for binaries created from it for AIX (6.1).
Regards.
Alanoly.
Alanoly Andrews writes:
> Is there any reported bug with pg_dump in Postgres 9.1 on AIX? The
> following command hangs for "ever" and has to be interrupted. It creates a
> zero-length file.
We had a recent report of strange server-side behavior on AIX that went
away after rebuilding with a newer xlc.
Hello,
Is there any reported bug with pg_dump in Postgres 9.1 on AIX? The following
command hangs for "ever" and has to be interrupted. It creates a zero-length
file.
pg_dump -Fc alan1 > alan1.dmp
If I run the command in verbose mode, I see that it stops at "saving
database definition
I am seeing what appears to me to be strange behavior during pg_dump backups.
These pg_dump backups have been running for weeks with no issue, and run very
quickly.
Here is the previous day's run from the log:
2012-05-23 07:10:04 PDT::@:[14715]: LOG: checkpoint starting: time
2012-05-23 07:10:04 PD
Hello
Please can I be removed from the mailing list? I receive many emails like
this.
Thanks
ЄLIZANDЯO GALLEGOS V.
Elizandro Gallegos wrote:
> Please can I be removed from the mailing list
The answer was in the email to which you responded. Did you have
trouble using the referenced page?
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-admin
-Kevin
Chander Ganesan writes:
> I'm running into a weird issue with PostgreSQL 9.1.3 and PostGIS 2.0
> when trying to dump a table - no matter what table I try to dump in this
> database, I find that I get the same error, as evidenced below (scroll
> down for relevant data/error output.)
2200 would be the OID of the public schema.
Hi All,
I'm running into a weird issue with PostgreSQL 9.1.3 and PostGIS 2.0
when trying to dump a table - no matter what table I try to dump in this
database, I find that I get the same error, as evidenced below (scroll
down for relevant data/error output.)
Any ideas as to what might be the
Hi,
I am trying to create a daily backup cron script, but it fails with the error
below.
Any pointers to resolve this would be greatly appreciated.
Thanks,
Mitesh Shah
mitesh.s...@stripes39.com
*(1) Error:*
bash-3.2$ sh pg_backup_rotated_orig.sh
Making backup directory in /Users/miteshshah/Docum
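For context, a minimal sketch of such a cron job (the schedule, paths and database name are all assumptions, not taken from the failing script):
  # crontab entry: custom-format dump every night at 02:00 (% must be escaped in crontab)
  0 2 * * * /usr/bin/pg_dump -Fc -f /Users/miteshshah/backups/mydb-$(date +\%F).dump mydb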
Hi,
Once I had the same problem with a non-existing schema. In my case removing
all the references to the schema worked fine, and I have had no problems with the
db ever since. It was PG 8.2.1, and since the operation we migrated all the
DBs, including that one, to 9.0.6, and everything works fine.
Hope this helps.
Hello,
We have some problems using pg_dump. We get the following error:
pg_dump: schema with OID 145167 does not exist
I found one entry in pg_type and another one in pg_class.
I was able to remove the one in pg_class, but when we try to remove the
row in pg_type, I get an
"Paul Wouters" wrote:
> We have some problems using pg_dump. We get the following error:
>
> pg_dump: schema with OID 145167 does not exist
Make sure you have a copy of the entire PostgreSQL data directory
tree before trying to fix corruption.
> In the table pg_depend I have also e referenc
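A sketch of how one might locate the orphaned catalog rows (the OID is the one from the error; the database name is invented):
  psql -d mydb -c "SELECT oid, typname FROM pg_type WHERE typnamespace = 145167;"
  psql -d mydb -c "SELECT oid, relname FROM pg_class WHERE relnamespace = 145167;"
  psql -d mydb -c "SELECT * FROM pg_depend WHERE refobjid = 145167;"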
Hi,
I'm trying to take a backup of the data of a particular table using pg_dump.
I used double quotes for the table name, but the output is:
pg_dump: no tables were found.
Command used:
pg_dump -h localhost -p 5432 -U postgres -W -F p -a -t '"TestTable"' -f
DbBackup/BackupTableActions.sql TestDataBase
This problem
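For reference, the quoting that usually works (a sketch, not the thread's confirmed resolution; the Windows variant is shown because quoting rules differ there):
  # POSIX shell: single-quote so the double quotes reach pg_dump itself
  pg_dump -h localhost -p 5432 -U postgres -a -t '"TestTable"' -f BackupTableActions.sql TestDataBase
  # Windows cmd.exe: escape the inner double quotes instead
  pg_dump -h localhost -p 5432 -U postgres -a -t "\"TestTable\"" -f BackupTableActions.sql TestDataBase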
Kieren Scott wrote:
> I need to migrate some data (a few GB) from an 8.4 database to
> an 8.3 database using pg_dump. What is the best way to achieve
> this?
This is always a little tricky because you may have objects in your
later-release database which can't be represented in the older
release.
Hi,
I need to migrate some data (a few GB) from an 8.4 database to an 8.3
database using pg_dump. What is the best way to achieve this?
E.g. run pg_dump from the 8.3 host, pointing it at the 8.4 host, and include the
version mismatch parameter?
Or run pg_dump on the 8.4 host, then zip the 8.4
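One possible shape for this (a sketch assuming the schema already exists on the 8.3 side; host names are invented):
  # data-only plain-text dump from 8.4, replayed directly into 8.3
  pg_dump -h host84 -a mydb | psql -h host83 -d mydb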
Tom,
That did the trick. I made a bad assumption that the shared_memory
was causing the problem and not the other way around. I set it up to
256; the last attempt was 128 and it still failed, so I am not sure what value
would have given me success (128 - 256), but it needed quite a bit
more.
Thanks for your help.
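The parameter being raised is not visible in these truncated excerpts; the usual culprit for "out of shared memory" during a schema-only dump is max_locks_per_transaction, so a sketch of the change would be:
  # raise the lock table size in postgresql.conf, then restart (path assumed)
  echo "max_locks_per_transaction = 256" >> /var/lib/pgsql/data/postgresql.conf
  pg_ctl -D /var/lib/pgsql/data restart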
jtke...@verizon.net writes:
> I am having a problem running "pg_dump -s database" on one system
> while it runs fine on another system.
> When I run the following dump on the Ubuntu system I get:
> pg_dump -s DB >/tmp/DB_schema_only.dmp
> pg_dump: WARNING: out of shared memory
> pg_dump: SQL
I am having a problem running "pg_dump -s database" on one system
while it runs fine on another system.
Both databases are nearly identical (minor changes to schemas and
tables).
The older system is a Red Hat x.x (32 bit) with 12GiB of memory, running
postgresql 8.4.3 (32 bit).
On the newer system it is
On 13/06/2011 09:56, Ibrahim Harrani wrote:
> Hi,
>
> I am using PostgreSQL 9.0. I would like to dump some tables and all
> functions and triggers in the database.
> If I dump with the pg_dump -c parameter, it removes the public schema as
> well.
With -c, the drop of the public schema does not cascade,
Hi,
I am using PostgreSQL 9.0. I would like to dump some tables and all
functions and triggers in the database.
If I dump with the pg_dump -c parameter, it removes the public schema as
well. The pg_dump -C output has the CREATE TABLE statements,
but I can't restore that output directly (because the tables are
already
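A sketch of one way to scope the drops (table names are invented; with -c restricted to -t, pg_dump emits DROP TABLE only for the listed tables, not DROP SCHEMA):
  pg_dump -c -t orders -t customers mydb > tables.sql
  psql -d mydb -f tables.sql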
When I use pg_dump, then after a few minutes it shows an error:
This probably means the server terminated abnormally before or while
processing the request. pg_dump: The command was: FETCH 100 FROM
_pg_dump_cursor
Until two months ago the backup ran normally, but for the past few months the
backup gets only 869Mb done after
Elliot Chance writes:
> Wouldn't that mean at some point it would be advisable to be using 64bit
> transaction IDs? Or would that change too much of the codebase?
It's not so much "too much of the codebase" as "nobody wants another 8
bytes added to per-row overhead". Holding a transaction open
Vladimir Rusinov wrote:
> I think it would be advisable not to use pg_dump on such load.
Agreed.
> Use fs- or storage-level snapshots instead.
Or PITR backup techniques. Or hot/warm standby. Or streaming
replication. Or one of the many good trigger-based replication
products. Just abou
Hi,
This is a hypothetical problem but not an impossible situation. Just curious
about what would happen.
Let's say you have an OLTP server that keeps very busy on a large database. In
this large database you have one or more tables on super-fast storage like a
Fusion-io card, which is handling
Glen,
Did you drop the indexes prior to the restore? If not, try doing so and
recreating the indexes afterwards. That will also speed up the data load.
Bob Lunney
On Sep 28, 2010, at 6:26 PM, Marc Mamin wrote:
> But if I prefix my pattern with the schema name, then I finally get the
> expected result:
>
> pg_dump -i -v -nXXX -T 'XXX.*2008*' -T 'XXX.*2009*' -T 'XXX.*201001*' -T
> 'XXX.*201002*' .
>
>
> seems that the use of the -n flag requires
Here is a strange behaviour:
I first simplified my syntax with multiple -T flags:
pg_dump -i -v -nXXX -T '*2008*' -T '*2009*' -T '*201001*' -T '*201002*'
.
Still not working.
But if I prefix my patterns with the schema name, then I finally get the
expected result:
pg_dump -i -
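In other words (a sketch restating the working command above; XXX stands in for the real schema name, as in the original): when -n is given, the -T exclusion patterns need to be schema-qualified to match:
  pg_dump -v -n XXX -T 'XXX.*2008*' -T 'XXX.*2009*' mydb > dump.sql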
Hello,
I'm trying to export a schema with multiple table exclusions:
pg_dump -i -v -nXXX -T
'*20((08[0-9]+)|(09[0-9]+)|(100[1-8][0-9]+)|(1009[0-1][0-9]+))'
Unfortunately, the filter does not work as expected (no tables at all
are excluded).
When I try the same patte
One solution is to run VACUUM and REINDEX on the db before pg_dump.
The second thing to do (it's risky, I admit) is to create a file with
this name and fill it with zeros. The pg_dump should then run without
problems.
2010/9/24 Benjamin Arai, Ph.D. :
> The file does not exist. What does that mean?
>
The first thing I would do is check whether the relation exists (e.g. by
checking pg_class for the relfilenode of the relation and then finding
it physically in the PGDATA folder), or check whether the file is
corrupted, for example by finding the name and type of the relation and
trying to make some man
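A sketch of that first check (database and table names are invented):
  # map the relation to its on-disk file number
  relfilenode=$(psql -At -d mydb -c "SELECT relfilenode FROM pg_class WHERE relname = 'mytable';")
  # then confirm the file exists under the database's directory in PGDATA
  ls -l "$PGDATA"/base/*/"$relfilenode"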
Hello,
The server is still running but pg_dump outputs the following error. What
should I do?
Thanks,
Benjamin
OUTPUT:
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR: cache lookup failed for index
1531353157
pg_dump: The command was: SELECT t.tableoid, t.oid, t.relname a
"Benjamin Arai, Ph.D." writes:
> The server is still running but pg_dumps output the following error. What
> should I do?
Try reindexing pg_index in whichever database is giving trouble.
Depending on what PG version you are using (which is something that
should ALWAYS be mentioned in any kind of
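A sketch of that suggestion (database name is invented; run as superuser in the affected database, noting that some older releases require standalone mode to reindex system catalogs):
  psql -d mydb -c "REINDEX TABLE pg_index;"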
Is the index 1531353157? To be clear, I would just run:
REINDEX INDEX 1531353157
Thanks,
Benjamin
On Tue, 2010-09-21 at 13:32 -0700, Benjamin Arai, Ph.D. wrote:
> Hello,
>
> The server is still running but pg_dump outputs the following error.
> What should I do?
Try reindexing the index.
Joshua D. Drake
--
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com
Kevin Grittner wrote:
> Silvio Brandani wrote:
>> We have a standby database
>> During pg_dump
> Hmm... I just noticed that word "standby" in there. Can you
> elaborate on what you mean by that?
It means it is an instance refreshed (via rsync) from another instance
Silvio Brandani wrote:
> We have a standby database version postgres 8.3.1 on linux .
You should seriously consider upgrading to a more recent 8.3 bug fix
release. The most current is now 8.3.11. Please read this:
http://www.postgresql.org/support/versioning
There was a bug fix related t
We have a standby database, version postgres 8.3.1, on Linux.
During pg_dump we get the error:
-- pg_dump: SQL command failed
-- pg_dump: Error message from server: ERROR: missing chunk number 0
for toast value 254723406
-- pg_dump: The command was: COPY helpdesk.attachments_data (id,
filedata,
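The resolution is not visible in these truncated excerpts; one common first step for "missing chunk number" errors, offered here only as a generic sketch, is to reindex the affected table (which covers its TOAST table too) and retry the dump:
  psql -d mydb -c "REINDEX TABLE helpdesk.attachments_data;"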
Kevin Kempter writes:
> On Thursday 03 June 2010 11:18, Tom Lane wrote:
>> Bizarre ... that command really oughtn't be invoking any non-builtin
>> operator, but the OID is too high for a builtin. What do you get from
>> "select 33639::regoperator"?
> postgres=# select 33639::regoperator
> postgr
Kevin Kempter writes:
> pg_dump: Error message from server: ERROR: could not find hash function for
> hash operator 33639
Bizarre ... that command really oughtn't be invoking any non-builtin
operator, but the OID is too high for a builtin. What do you get from
"select 33639::regoperator"?
Hi all;
I'm seeing these errors when running a pg_dump of the postgres database:
Running: [pg_dump --schema-only postgres > postgres.ddl]
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR: could not find hash function for
hash operator 33639
pg_dump: The command was: SELECT
fida aljounaidi wrote:
> For migration purpose (from 8.2 to 8.4)
8.2.what? to 8.4.what?
> (this is my dump command "pg_dump mydb | gzip > /var/dump/db.gz")
Using pg_dump from which version?
> It fails with this error message:
> pg_dump: SQL command failed
> pg_dump: Error message from server:
Hi
For migration purposes (from 8.2 to 8.4), I'm trying to make a dump of a 16
GB database.
(This is my dump command: "pg_dump mydb | gzip > /var/dump/db.gz")
It fails with this error message:
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR: invalid memory alloc req
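One way to narrow down this kind of error, used elsewhere in these threads (the table name is invented), is to dump tables one at a time until the failing one is found:
  pg_dump -t suspect_table mydb > /dev/null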
Achilleas Mantzios writes:
> Then I did
> # CREATE TABLE mail_entity2 AS SELECT * FROM mail_entity;
> which went fine
> but, for some crazy reason, pg_dump on mail_entity2 also results in an error:
> srv:~> pg_dump -t mail_entity2 > /dev/null
> pg_dump: SQL command failed
> pg_dump: Error messa
Hello,
just coming back from a rescue marathon on this remote server I was telling you about.
As I said, the last problem was while doing a
pg_dump dynacom
(dynacom is my db's name)
I kept getting
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR: compressed data is corrupt
pg_du
"Campbell, Lance" writes:
> How do you tell pg_dump to use SSL when loading data from one server
> into another server?
It will likely do so by default, but if you want to be sure you can do
export PGSSLMODE=require
before starting pg_dump. See
http://www.postgresql.org/docs/8.4/static/
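A sketch of that (host and database names are invented):
  export PGSSLMODE=require
  pg_dump -h remote.example.com -Fc -f mydb.dump mydb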
PostgreSQL 8.4.3
Assume you have two servers running PostgreSQL with SSL on.
How do you tell pg_dump to use SSL when loading data from one server
into another server?
Thanks,
Lance Campbell
Software Architect/DBA/Project Manager
Web Services at Public Affairs
217-333-0382
"Matt Janssen" writes:
> When migrating our Postgres databases from 32 to 64-bit systems, including
> large binary objects, how well will this work?
> 32-bit server) pg_dump --format=c --blobs --file=backup.pg mydb
> 64-bit server) pg_restore -d mydb backup.pg
Should be fine; but remember that o
When migrating our Postgres databases from 32 to 64-bit systems, including
large binary objects, how well will this work?
32-bit server) pg_dump --format=c --blobs --file=backup.pg mydb
64-bit server) pg_restore -d mydb backup.pg
I'm hoping that PG's compressed custom archive format is
I keep getting a pg_dump version mismatch error. How should one deal
with different versions of pg_dump, normally?
C:\Program Files\pgAdmin III\1.8\pg_dump.exe -h 192.168.222.129 -p 5433 -U
postgres -F c -b -v -f "C:\Documents and
Settings\steven\Desktop\template.backup" template_postgis
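A quick way to see both versions involved (the server address is taken from the command above; this assumes a psql client is on hand):
  pg_dump --version
  psql -h 192.168.222.129 -p 5433 -U postgres -c "SELECT version();"
The general rule is to run a pg_dump at least as new as the server it is dumping.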
Glen Brown wrote:
> I am using Ubuntu 8LTS on both systems. How can I tell where the
> space is going?
Maybe someone has a more sophisticated way, but I'd be poking around
with "du -shx" requests against the contents of various directories
during the run. Maybe run "vmstat 1" in another shell,
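A sketch of that kind of poking (the data directory path is an assumption):
  # watch where the space goes during the restore
  du -shx /var/lib/postgresql/8.4/main/base/*
  # and watch memory/IO pressure in another shell
  vmstat 1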
I am using Ubuntu 8LTS on both systems. How can I tell where the
space is going?
Thanks for the help
-glen
Glen Brown
Glen Brown wrote:
> When I dump this table using pg_dump -Fc it creates a 15 gb file. I
> am trying to restore in into a database that has 100gb of free disk
> space and it consumes it all and fails to finish the restore.
What is the platform? (I remember having problems with large file
handl
Backing up 170GB in 28 hours definitely doesn't sound right, and I am
almost certain it has nothing to do with pg_dump, but rather your hardware,
i.e. server, disk, etc. With 170GB, the backup should be
I am not sure where I should post this, but I am running into problems trying
to restore a large table. I am running 8.4.1 on all servers. The table is
about 25GB in size and most of that is toasted. It has about 2.5m records.
When I dump this table using pg_dump -Fc it creates a 15GB file. I am
tr
On Feb 12, 2010, at 4:58 AM, Renato Oliveira wrote:
> Dear all,
>
> I have a server running 8.2.4 with a database 170GB in size.
> Currently I am backing it up using pg_dump, and it takes around 28 hours,
> sadly.
That's suspiciously slow for a pg_dump alone. I have a ~168 GB database which
Dear all,
I have a server running 8.2.4 with a database 170GB in size.
Currently I am backing it up using pg_dump, and it takes around 28 hours, sadly.
I was asked to check the newly created DUMP file against the live
database and compare records.
I personally cannot see an easy or quic
Hi Marc,
On Fri, Oct 23, 2009 at 03:52:16PM +0200, Marc Mamin wrote:
> > You might add pigz as a post-processing step and disable compression in
> > pg_dump.
>
> The problem with this solution is that it makes it necessary to
> decompress the dump entirely before using pg_restore (or did I mis
Hello,
I'm using pg_dump intensively, and until now I've been using the plaintext format,
which allows me to pipe the output to pigz (http://zlib.net/pigz/).
This is the fastest way I've found to generate compressed backups.
(These backups are very rarely used.)
I would like to switch to the custom format i
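For reference, the pipeline being described, with its restore path (the database name is invented):
  # plain-format dump piped straight through pigz
  pg_dump -Fp mydb | pigz > mydb.sql.gz
  # restore by decompressing back into psql
  pigz -dc mydb.sql.gz | psql -d mydb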
I'm finding that pg_dumps are dumping out, right near the end, the
following sequence of grants that are causing our QA folk a little bit
of concern:
REVOKE ALL ON SCHEMA public FROM PUBLIC;
REVOKE ALL ON SCHEMA public FROM chris;
GRANT ALL ON SCHEMA public TO chris;
GRANT ALL ON SCHEMA public TO PUBLIC;
"Mary Sipple" writes:
> I'm using postgresql 8.2.3 and am trying to run pg_dump with some tables
> excluded. It seems that no matter what I try pg_dump core dumps on me --
> "Segmentation Fault (core dumped)". The -t flag works fine but -T does not.
> Even excluding just one table gives me the seg
I'm using postgresql 8.2.3 and am trying to run pg_dump with some tables
excluded. It seems that no matter what I try, pg_dump core dumps on me --
"Segmentation Fault (core dumped)". The -t flag works fine but -T does not.
Even excluding just one table gives me the segmentation fault.
This works fine
Andy Shellam writes:
> I've just re-created this using the following steps on a blank database:
> 1. Create a new database using a role with a default search path of
> "$user", public.
> 2. Create a schema in that database (myschema)
> 3. Create a sequence in the test schema (mysequence)
> 4. Cr
> No, it isn't. If the search_path was "product" when the table
> definition was loaded,
No it wasn't. When the table was initially created (from scratch, not
from the dump), the search path was the default of "$user", public.
I've just re-created this using the following steps on a blank data
Hi Tom,
> The reason it's printed as just 'tax_id' is that that relation should be
> first in the search_path at this point.
Yes, that's true - it's in the search path because (so I believe)
pg_dump is adding a "SET search_path..." line before it carries out the
commands in the schema, which works when the dump is restored, but when
running as a normal user, the search path is the default
Andy Shellam writes:
> When I pg_dump the schema, the resulting SQL is:
> ...
> CREATE SCHEMA product;
> ...
> SET search_path = product, pg_catalog;
> ...
> CREATE SEQUENCE tax_id
> INCREMENT BY 1
> NO MAXVALUE
> NO MINVALUE
> CACHE 1;
> ...
> CREATE TABLE tax (
> id smallint
I've come across an issue with pg_dump from 8.3.7 (running on Windows).
I'm using pg_dump to dump the schema only of the database for a system
I'm currently developing.
The other day I had to re-create the database using the latest dump, and
for a lot of the tables I now get the error "relation ... does not exist"
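One workaround that follows from this thread (a sketch; whether it was the accepted fix is not visible here) is to schema-qualify the sequence inside the column default, so it no longer depends on the caller's search_path:
  psql -d mydb -c "ALTER TABLE product.tax ALTER COLUMN id SET DEFAULT nextval('product.tax_id');"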