On Tuesday, December 20, 2005 at 22:02, David Rees wrote:
cpool uses hard links. rsync by default does not preserve hard links,
which is why the copy you make explodes in size. If you specify -H,
rsync will preserve hard links, but expect it to take a LONG time.
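As a sketch of what that looks like (paths and the helper name are illustrative, not from the thread): BackupPC's pool files are hard-linked into the per-host trees under pc/, so the whole data directory has to go through a single rsync run for -H to be able to reconnect the links.

```shell
# Hypothetical helper: mirror a BackupPC data directory in one pass,
# preserving hard links. On a large pool, -H costs a lot of memory and time.
copy_pool() {
    src=$1 dst=$2
    # -a: archive mode; -H: preserve hard links; --delete: mirror exactly
    rsync -aH --delete "$src/" "$dst/"
}
```

Used as, e.g., `copy_pool /var/lib/backuppc /mnt/offsite/backuppc`; syncing cpool and pc in separate runs can never preserve the links between them.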
just to give an idea of
On Wed, Dec 21, 2005 at 11:29:33AM +0100, Guus Houtzager wrote:
A colleague of mine wrote just that script. I had the problem of needing
to migrate my backuppc with all data to another server and ran into the
whole hardlink / memory issue. So my colleague wrote a script that
rsyncs the /pc
On Wed, 2005-12-21 at 22:53 +1100, Vincent Ho wrote:
Guus Houtzager wrote:
Run the shellscript in a screen (if you prefer), sit back and let it do
its work.
It seems that in the main calling script, if you change this line,
rsync -av rsync://10.0.0.2/remotebackup/pc/$i/ .
to this:
rsync -av --delete rsync://10.0.0.2/remotebackup/pc/$i/ .
On Wed, 2005-12-21 at 12:30, Marty wrote:
That brings up another question for anyone here -- does cp -al work
(within the filesystem) on the pool, or on the cpool, or is it also
prohibitively time-consuming? If it works (and you don't run out of
inodes) then it seems you could use it to
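For what it's worth, the idea is easy to try on a small tree first. `cp -al` builds a parallel directory tree whose files are hard links to the originals, so it consumes inodes and directory entries but no file data (a sketch; the function name is made up here):

```shell
# Sketch: clone a tree as hard links (same filesystem only).
# Each "copy" shares the original file's inode, so no data is duplicated.
clone_tree() {
    cp -al "$1" "$2"
}
```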
Sent: Wednesday, December 21, 2005 12:49 PM
To: Marty
Cc: backuppc-users@lists.sourceforge.net
Subject: Re: [BackupPC-users] Cpool question
On Wed, 2005-12-21 at 13:22, Marty wrote:
you find the pool link.
- Wade
-----Original Message-----
From: Paul Fox
Sent: Wednesday, December 21, 2005 1:52 PM
To: backuppc-users@lists.sourceforge.net
Subject: Re: [BackupPC-users] Cpool question
I don't see why not. Just
On 12/21 06:53 , Guus Houtzager wrote:
Ok, here are the scripts.
Cool. This is just in time for the server migration I need to do right now.
I'm planning on doing it over NFS rather than rsyncd or rsync-over-ssh, but
I might give these scripts a try.
I was originally planning on just
Hi all,
I have a question about how cpool stores its information.
I have everything running well with my backup system, but now I want to
rsync all of /var/lib/backuppc to an external hard drive for an offsite
backup.
When I run the rsync, everything syncs fine except the cpool
I use rsync -vaHuSx --delete cpool /backup/cpool
It still explodes in size. I can't necessarily use RAID or dd because the drive sizes are different: my external drive is much smaller than my actual backup drive. At my backup's current space usage, I can fit it on the external drive, but I
There's no perfect way to make a copy of the pool. The best is
to do a byte-for-byte copy, either to an identical hard drive, or
by using software RAID as a syncing mechanism for your two disks
(i.e., run as a broken RAID pair most of the time, and only add
the second half when
If you are simply copying the pool, why not try cp -a.
doesn't work.
Damn!
I confess I haven't searched the archives to see if anybody has
suggested
cp in the past :-)
they have. :-)
Damn again! I should have checked. You know, as I was sending the message, I
was thinking as soon
Just everything within the cpool directory. All the other directories match up in size.
On Tue, 2005-12-20 at 16:34 -0500, Paul Fox wrote:
...
But then, I
). It should have
a link count of at least 2.
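A quick way to check that, as a hedged example (the helper name is made up): any pool file that a backup actually uses should show two or more links, so files with a link count of exactly 1 are orphans.

```shell
# Sketch: report pool files that nothing links to (link count == 1).
# find's -links test reads the same st_nlink field that `stat -c %h` prints.
orphans() {
    find "$1" -type f -links 1
}
```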
- Wade
From: Jon Scottorn
Sent: Tuesday, December 20, 2005 3:39 PM
To: backuppc-users@lists.sourceforge.net
Subject: Re: [BackupPC-users] Cpool question
- Wade
From: Jon Scottorn
Sent: Tuesday, December 20, 2005 3:09 PM
To: backuppc-users@lists.sourceforge.net
Subject: Re: [BackupPC-users] Cpool question
On Tue, 2005-12-20 at 16:13, Jon Scottorn wrote:
On Tue, 2005-12-20 at 16:10 -0600, Brown, Wade ASL (GE Healthcare)
wrote:
Also, are you rsyncing/copying individual directories
(/backup/cpool, /backup/hosts, etc.)? Or the entire /backup
directory?
Individual directories, because otherwise
On Tue, 2005-12-20 at 17:00, Craig Barratt wrote:
I have been experimenting with a perl script that generates a large
tar file for copying the BackupPC data.
Could you do one that rebuilds the hardlinks after the fact? Then
you could copy one PC directory at a time, do the link step and
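One way such a relink step could look, purely as a sketch (this hashes whole files with md5sum and assumes no whitespace in paths; BackupPC's real pool uses its own partial-file hashing and directory layout):

```shell
# Hypothetical: after copying trees without -H, merge files with identical
# content back into hard links. The first path seen for each hash becomes
# the "pool" copy; later duplicates are re-linked to it.
relink_dupes() {
    find "$1" -type f -exec md5sum {} + | sort |
    while read -r sum path; do
        if [ "$sum" = "$prev" ]; then
            ln -f "$ppath" "$path"   # replace duplicate with a hard link
        else
            prev=$sum
            ppath=$path
        fi
    done
}
```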