Re: [GENERAL] Backup/Restore of single table in multi TB database

2008-07-19 Thread Simon Riggs

On Fri, 2008-07-18 at 20:25 -0400, Francisco Reyes wrote:

 Does pg_snapclone work mostly on large rows, or will it also be faster
 than pg_dump for narrow tables?

It allows you to run your dump in multiple pieces. That's got nothing to
do with narrow or wide.
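
Purely to illustrate the piecewise idea (this is not pg_snapclone's
actual interface; the database, table, and column names here are made
up):

  # dump one table in two disjoint pieces, in parallel
  psql -d mydb -c "COPY (SELECT * FROM foo WHERE id <= 1000000) TO STDOUT" > foo.part1 &
  psql -d mydb -c "COPY (SELECT * FROM foo WHERE id > 1000000) TO STDOUT" > foo.part2 &
  wait

Note the two sessions don't share a snapshot, so under concurrent
writes the pieces aren't mutually consistent; coordinating that is the
hard part a dedicated tool has to solve.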

-- 
 Simon Riggs   www.2ndQuadrant.com
 PostgreSQL Training, Services and Support




Re: [GENERAL] Backup/Restore of single table in multi TB database

2008-07-18 Thread Francisco Reyes

Simon Riggs wrote:

Have a look at pg_snapclone. It's specifically designed to significantly
improve dump times for very large objects.

http://pgfoundry.org/projects/snapclone/
  
Also, in case the original poster is not aware, pg_dump allows you to
back up a single table.

Just add -t <table name>.
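
For example (the database name here is made up):

  pg_dump -t foo mydb > foo.sql        # plain-text dump of just table foo
  pg_dump -Fc -t foo mydb > foo.dump   # custom format, restore with pg_restore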



Does pg_snapclone work mostly on large rows, or will it also be faster
than pg_dump for narrow tables?




Re: [GENERAL] Backup/Restore of single table in multi TB database

2008-05-08 Thread Simon Riggs
On Wed, 2008-05-07 at 15:24 -0700, John Smith wrote:

 Actually, I forgot to mention one more detail in my original post.
 For the table that we're looking to backup, we also want to be able to
 do incremental backups.  pg_dump will cause the entire table to be
 dumped out each time it is invoked.
 
 With the pg_{start,stop}_backup approach, incremental backups could be
 implemented by, for example, rsync'ing the data files and applying the
 incremental WAL. So if table foo didn't change very much since the
 first backup, we would only need to rsync a small amount of data plus
 the WAL to get an incremental backup for table foo.
 
 Besides picking up data on unwanted tables from the WAL (e.g., bar
 would appear in our recovered database even though we only wanted
 foo), do you see any other problems with this pg_{start,stop}_backup
 approach?  Admittedly, it does seem a bit hacky.

You wouldn't be the first to ask to restore only a single table.

I can produce a custom version that does that if you like, though I'm
not sure that feature would be accepted into the main code.

-- 
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com




[GENERAL] Backup/Restore of single table in multi TB database

2008-05-07 Thread John Smith
Hi,

I have a large database (multiple TBs) where I'd like to be able to do
a backup/restore of just a particular table (call it foo).  Because
the database is large, the time for a full backup would be
prohibitive.  Also, whatever backup mechanism we do use needs to keep
the system online (i.e., users must still be allowed to update table
foo while we're taking the backup).

After reading the documentation, it seems like the following might
work.  Suppose the database has two tables foo and bar, and we're only
interested in backing up table foo:

1. Call pg_start_backup

2. Use the pg_class table in the catalog to get the data file names
for tables foo and bar.

3. Copy the system files and the data file for foo.  Skip the data file for bar.

4. Call pg_stop_backup()

5. Copy WAL files generated between 1. and 4. to another location.

Later, if we want to restore the database somewhere with just table
foo, we just use postgres's normal recovery mechanism and point it at
the files we backed up in step 3 and the WAL files from step 5.
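
Concretely, something like this (names and paths are made up for
illustration):

  -- 1. begin the backup
  SELECT pg_start_backup('foo_backup');
  -- 2. find the on-disk file names
  SELECT relname, relfilenode FROM pg_class WHERE relname IN ('foo', 'bar');
  -- 3. at the OS level, copy $PGDATA but skip bar's files, e.g.
  --    rsync -a --exclude='<bar_relfilenode>*' $PGDATA/ /backup/data/
  -- 4. end the backup
  SELECT pg_stop_backup();
  -- 5. copy the WAL segments archived between steps 1 and 4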

Does anyone see a problem with this approach (e.g., correctness,
performance, etc.)?  Or is there perhaps an alternative approach using
some other postgresql mechanism that I'm not aware of?

Thanks!
- John



Re: [GENERAL] Backup/Restore of single table in multi TB database

2008-05-07 Thread David Wilson
On Wed, May 7, 2008 at 4:02 PM, John Smith [EMAIL PROTECTED] wrote:

  Does anyone see a problem with this approach (e.g., correctness,
  performance, etc.)?  Or is there perhaps an alternative approach using
  some other postgresql mechanism that I'm not aware of?

Did you already look at and reject pg_dump for some reason? You can
restrict it to specific tables, and it can run concurrently with a
running system. Your database is large, but how large are the
individual tables you're interested in backing up? pg_dump will be
slower than a file copy, but it may be sufficient for your purpose, and
it guarantees a consistent dump.

I'm fairly certain that you have to be very careful about doing simple
file copies while the system is running, as the files may end up out
of sync based on when each individual one is copied. I haven't done it
myself, but I do know that there are a lot of caveats that someone
with more experience doing that type of backup can hopefully point you
to.

-- 
- David T. Wilson
[EMAIL PROTECTED]



Re: [GENERAL] Backup/Restore of single table in multi TB database

2008-05-07 Thread Joshua D. Drake
On Wed, 7 May 2008 13:02:57 -0700
John Smith [EMAIL PROTECTED] wrote:

 Hi,
 
 I have a large database (multiple TBs) where I'd like to be able to do
 a backup/restore of just a particular table (call it foo).  Because
 the database is large, the time for a full backup would be
 prohibitive.  Also, whatever backup mechanism we do use needs to keep
 the system online (i.e., users must still be allowed to update table
 foo while we're taking the backup).

 Does anyone see a problem with this approach (e.g., correctness,
 performance, etc.)?  Or is there perhaps an alternative approach using
 some other postgresql mechanism that I'm not aware of?

Why are you not just using pg_dump -t? Are you saying that a pg_dump
backup of the single table takes too long? Perhaps you could use Slony
with table sets?
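
As a rough sketch of the Slony idea (an abridged slonik script; the
cluster name, node connections, and set numbering are made up, and the
nodes are assumed to be configured already):

  cluster name = mycluster;
  node 1 admin conninfo = 'host=master dbname=mydb';
  node 2 admin conninfo = 'host=standby dbname=mydb';
  create set (id = 1, origin = 1, comment = 'just table foo');
  set add table (set id = 1, origin = 1, id = 1,
                 fully qualified name = 'public.foo');
  subscribe set (id = 1, provider = 1, receiver = 2, forward = no);

The subscriber then keeps public.foo continuously up to date, which is
effectively an incremental backup of that one table (note the table
needs a primary key for Slony to replicate it).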

Joshua D. Drake



-- 
The PostgreSQL Company since 1997: http://www.commandprompt.com/ 
PostgreSQL Community Conference: http://www.postgresqlconference.org/
United States PostgreSQL Association: http://www.postgresql.us/
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate






Re: [GENERAL] Backup/Restore of single table in multi TB database

2008-05-07 Thread Joshua D. Drake
On Wed, 7 May 2008 16:09:45 -0400
David Wilson [EMAIL PROTECTED] wrote:

 I'm fairly certain that you have to be very careful about doing simple
 file copies while the system is running, as the files may end up out
 of sync based on when each individual one is copied. I haven't done it
 myself, but I do know that there are a lot of caveats that someone
 with more experience doing that type of backup can hopefully point you
 to.

Besides the fact that it seems to be a fairly hacky thing to do... it
is going to be fragile. Consider:

(serverA) CREATE TABLE foo ();
(serverB) CREATE TABLE foo ();

(serverA) INSERT INTO foo ... ;
(serverA) ALTER TABLE foo ADD COLUMN ... ;

Oops...

(serverA) ALTER TABLE foo DROP COLUMN ... ;

You now have different versions of the files on serverA than on
serverB, even though the table name is the same.

Joshua D. Drake

 


-- 
The PostgreSQL Company since 1997: http://www.commandprompt.com/ 
PostgreSQL Community Conference: http://www.postgresqlconference.org/
United States PostgreSQL Association: http://www.postgresql.us/
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate






Re: [GENERAL] Backup/Restore of single table in multi TB database

2008-05-07 Thread Simon Riggs
On Wed, 2008-05-07 at 13:02 -0700, John Smith wrote:

 I have a large database (multiple TBs) where I'd like to be able to do
 a backup/restore of just a particular table (call it foo).  Because
 the database is large, the time for a full backup would be
 prohibitive.  Also, whatever backup mechanism we do use needs to keep
 the system online (i.e., users must still be allowed to update table
 foo while we're taking the backup). 

Have a look at pg_snapclone. It's specifically designed to significantly
improve dump times for very large objects.

http://pgfoundry.org/projects/snapclone/

-- 
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com




Re: [GENERAL] Backup/Restore of single table in multi TB database

2008-05-07 Thread Tom Lane
John Smith [EMAIL PROTECTED] writes:
 After reading the documentation, it seems like the following might
 work.  Suppose the database has two tables foo and bar, and we're only
 interested in backing up table foo:

 1. Call pg_start_backup

 2. Use the pg_class table in the catalog to get the data file names
 for tables foo and bar.

 3. Copy the system files and the data file for foo.  Skip the data file for 
 bar.

 4. Call pg_stop_backup()

 5. Copy WAL files generated between 1. and 4. to another location.

 Later, if we want to restore the database somewhere with just table
 foo, we just use postgres's normal recovery mechanism and point it at
 the files we backed up in step 3 and the WAL files from step 5.

 Does anyone see a problem with this approach

Yes: it will not work, not even a little bit, because the WAL files will
contain updates for all the tables.  You can't just not have the tables
there during restore.

Why are you not using pg_dump?

regards, tom lane



Re: [GENERAL] Backup/Restore of single table in multi TB database

2008-05-07 Thread John Smith
Hi Tom,

Actually, I forgot to mention one more detail in my original post.
For the table that we're looking to backup, we also want to be able to
do incremental backups.  pg_dump will cause the entire table to be
dumped out each time it is invoked.

With the pg_{start,stop}_backup approach, incremental backups could be
implemented by, for example, rsync'ing the data files and applying the
incremental WAL. So if table foo didn't change very much since the
first backup, we would only need to rsync a small amount of data plus
the WAL to get an incremental backup for table foo.
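
For example (paths are made up; <dboid> and <foo_relfilenode> would
come from pg_database and pg_class):

  # re-sync only foo's data files since the last backup
  rsync -a $PGDATA/base/<dboid>/<foo_relfilenode>* /backup/base/
  # ...plus whatever WAL segments were archived since the last run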

Besides picking up data on unwanted tables from the WAL (e.g., bar
would appear in our recovered database even though we only wanted
foo), do you see any other problems with this pg_{start,stop}_backup
approach?  Admittedly, it does seem a bit hacky.

Thanks,
- John

On Wed, May 7, 2008 at 2:41 PM, Tom Lane [EMAIL PROTECTED] wrote:
 John Smith [EMAIL PROTECTED] writes:
   After reading the documentation, it seems like the following might
   work.  Suppose the database has two tables foo and bar, and we're only
   interested in backing up table foo:

   1. Call pg_start_backup

   2. Use the pg_class table in the catalog to get the data file names
   for tables foo and bar.

   3. Copy the system files and the data file for foo.  Skip the data file 
 for bar.

   4. Call pg_stop_backup()

   5. Copy WAL files generated between 1. and 4. to another location.

   Later, if we want to restore the database somewhere with just table
   foo, we just use postgres's normal recovery mechanism and point it at
  the files we backed up in step 3 and the WAL files from step 5.

   Does anyone see a problem with this approach

  Yes: it will not work, not even a little bit, because the WAL files will
  contain updates for all the tables.  You can't just not have the tables
  there during restore.

  Why are you not using pg_dump?

 regards, tom lane

