Hi,

First let me state that I'm not a DBA, but a developer.  I know enough about 
databases to be dangerous, but not enough to make any money at it. ;-)

We are using large objects with OIDs as part of our data model.  One of our 
processes is to back up the database every night.  Our DBAs told me that pg_dump 
was taking too long.  So they decided to shut down Postgres, TAR up the data 
directory to a backup location, restart Postgres, then copy the backup TAR to 
tape.
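
To make the current process concrete, here is a rough Python sketch of what the 
nightly job does today.  The data directory and backup path are placeholders, 
not our real layout:

#!/usr/bin/env python3
# Rough sketch of the nightly backup as it works today (paths are assumptions).
import subprocess
import tarfile
from datetime import date

DATA_DIR = "/var/lib/pgsql/data"                      # assumed data directory
BACKUP_TAR = f"/backups/pgdata-{date.today()}.tar"    # assumed backup location

# Stop the cluster so the files on disk are consistent.
subprocess.run(["pg_ctl", "stop", "-D", DATA_DIR, "-m", "fast"], check=True)
try:
    # TAR the whole data directory to the backup location.
    with tarfile.open(BACKUP_TAR, "w") as tar:
        tar.add(DATA_DIR, arcname="pgdata")
finally:
    # Restart Postgres even if the TAR step fails; the copy to tape happens later.
    subprocess.run(["pg_ctl", "start", "-D", DATA_DIR, "-w"], check=True)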

For our smaller sites, 200GB or less (mostly LOBs), it takes less than an hour 
to shut down, TAR, and restart (writing to tape is not part of this time frame). 
Some of our larger sites have 500GB+ worth of data, which is mostly LOBs.  Our 
DBAs want to move the LOBs out of the database, store them on the file system, 
and have each record store a path to its binary file.  I'd like to come up with 
a better and faster backup solution that allows the LOBs to stay in the DB.
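
For anyone not familiar with our setup, here is a rough psycopg2 sketch of the 
two storage models I'm talking about.  The table and column names ("documents", 
"content_oid", "content_path") and the connection string are made up for 
illustration, not our real schema:

# Sketch only, assuming psycopg2 and a hypothetical "documents" table.
import psycopg2

conn = psycopg2.connect("dbname=mydb")   # hypothetical connection string

def store_as_lob(data: bytes) -> None:
    """What we do today: keep the binary in a large object, keyed by its OID."""
    lob = conn.lobject(0, "wb")           # oid=0 creates a new large object
    lob.write(data)
    with conn.cursor() as cur:
        cur.execute("INSERT INTO documents (content_oid) VALUES (%s)", (lob.oid,))
    conn.commit()

def store_as_file(data: bytes, path: str) -> None:
    """What the DBAs propose: write the binary to disk and keep only its path."""
    with open(path, "wb") as f:
        f.write(data)
    with conn.cursor() as cur:
        cur.execute("INSERT INTO documents (content_path) VALUES (%s)", (path,))
    conn.commit()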

A few things to note: once a LOB gets inserted into the DB, it is never 
updated.  It may be deleted on rare occasions, but never updated.  Also, the 
DBAs are against incremental backups, and I don't blame them, sort of.

I'm open to any ideas.  The servers are pretty standard.  They initially come 
with four 1TB hard drives run in RAID 10, so they have 2TB available.  There is 
another controller card and space for four more drives.  We want to keep cost 
down, but uptime is very important.  Even though I'm not a sys admin either, I 
was wondering if there would be a way to replicate the DB on the two different 
RAID sets, "halt" one to do the backup, then reinitialize it so that it would 
sync up with the other RAID set.

Other than adding a secondary server to do data replication, I'm open to ideas, 
even if it means moving the LOBs onto the file system.  I just need a backup 
solution that scales well once we exceed 1TB.

Thank you,

Todd
