Sorry to hit the list with this, but once in a while I wander back to see
if there have been any new developments on this topic and to share what
little I find as I go.
I just found an excellent (and free) imaging and backup application for
XP called DriveImage XML that will create a v
On 12/21 05:05 , Carl Wilhelm Soderstrom wrote:
> I was originally planning on just doing tar over netcat; which shouldn't run
> into the memory issues too badly, and yet *should* preserve the hardlinks if
> I do it all at once... but I've not tried it thoroughly before.
I'm giving tar over netcat
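For anyone wanting to try the same route, a minimal sketch of tar over netcat;
the port number, paths, and hostname below are placeholders (nothing from this
thread), and the flags assume GNU tar and the traditional netcat:

    # On the new server: listen and unpack in place, preserving permissions.
    nc -l -p 2000 | tar -xpf - -C /var/lib/backuppc

    # On the old server: stream the whole tree as a single tar archive.
    # Because everything goes through one archive, tar records later
    # occurrences of a hardlinked file as links rather than extra copies.
    tar -cf - -C /var/lib/backuppc . | nc newserver 2000

As Carl notes above, this should sidestep the worst of the rsync hardlink
memory problem, though tar still has to remember the linked inodes it has
already written.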
On 12/21 06:53 , Guus Houtzager wrote:
> Ok, here are the scripts.
Cool. This is just in time for the server migration I need to do right now.
I'm planning on doing it over nfs rather than rsyncd or rsync-over-ssh; but
I might give these scripts a bit of a try.
I was originally planning on just
On 12/21/05, Olivier LAHAYE <[EMAIL PROTECTED]> wrote:
>
> Please ignore my previous message, it contains the wrong package version,
> sorry.
>
> Now the answer ;-)
>
> Just use this source package and rebuild it; it should work. It's what I'm
> using at my work. I'm not responsible if you lose data though ;-)
Naw, I'm not talking about replacing the current system. This is just
for making a "modified" copy on a remote server or external disk.
@Marty: our backup strategies differ. The only reason I move data
offsite is in case my data center is destroyed (OK, there are some other
reasons, but you get the
Paul Fox wrote:
>
> I don't see why not. Just change the hardlink into a file whose
> contents are the name of the pool file it pointed to. That seems like
> a trivial one-liner in a script, both converting and unconverting the
> link. Since it seems too easy, there must be a "gotcha" which I am
> missi
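Purely as a sketch of what that converting step could look like (not an
existing BackupPC tool; GNU find/join assumed, paths are examples, and
whitespace-free filenames are assumed): build an inode-to-pool-name map, then
replace every multiply-linked file under pc/ with a one-line file naming its
pool counterpart.

    cd /var/lib/backuppc
    # Map inode numbers to (compressed) pool file names.
    find cpool -type f -printf '%i %p\n' | sort > /tmp/inode2pool
    # Replace each hardlinked file under pc/ with the name of its pool file.
    find pc -type f -links +1 -printf '%i %p\n' | sort | \
        join /tmp/inode2pool - | \
        while read inode pool pcfile; do
            rm -f "$pcfile" && printf '%s\n' "$pool" > "$pcfile"
        done

One likely gotcha is exactly the bookkeeping above: you need some way to know
which pool file a link pointed to, which here means a full scan of both trees.
The reverse step on the destination is the same idea in the other direction
(see the sketch further down).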
Brown, Wade ASL (GE Healthcare) wrote:
If we had the hash values saved when BackupPC_link runs, we could
reconnect the links much faster.
I thought the pool filenames *were* the hash. If that's true, then
they are already "saved" and available.
In the case of an off-site backup, I wonder
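For what it's worth, that matches my understanding: a pool entry's name is
essentially an MD5-based hash of the file's contents (with a numeric suffix
when different files collide), and the per-pc entries are just extra hardlinks
to it. So finding which pool file a given backup file maps to only needs an
inode match; for example (placeholder path, GNU stat/find assumed):

    f=/var/lib/backuppc/pc/somehost/0/fshare/fsomefile   # any file under pc/
    find /var/lib/backuppc/cpool -inum "$(stat -c %i "$f")"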
On Wed, 2005-12-21 at 13:22, Marty wrote:
> >> That brings up another question for anyone here -- does cp -al work
> >> (within the filesystem) on the pool, or on the cpool, or is it also
> >> prohibitively time-consuming? If it works (and you don't run out of
> >> inodes) then it seems you cou
Les Mikesell wrote:
On Wed, 2005-12-21 at 12:30, Marty wrote:
That brings up another question for anyone here -- does cp -al work
(within the filesystem) on the pool, or on the cpool, or is it also
prohibitively time-consuming? If it works (and you don't run out of
inodes) then it seems you
If we had the hash values saved when BackupPC_link runs, we could
reconnect the links much faster.
In the case of an off-site backup, I wonder if the cpool could be
copied directly and the per-pc files changed so that they are no
longer hardlinks but simply contain the hash.
The idea here
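A sketch of what the receiving end of that idea could look like, assuming the
per-pc files were rewritten (as sketched further up) to contain just the name
of their pool file; again only an illustration, not an existing tool:

    cd /var/lib/backuppc
    find pc -type f | while read -r f; do
        pool=$(head -c 512 "$f")     # a converted file holds only a pool path
        case $pool in
            cpool/*|pool/*)
                # Heuristic: only relink when the contents look like a pool
                # path; a real data file starting with "cpool/" would fool it.
                [ -f "$pool" ] && ln -f "$pool" "$f" ;;
        esac
    done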
On Wed, 2005-12-21 at 12:30, Marty wrote:
> That brings up another question for anyone here -- does cp -al work
> (within the filesystem) on the pool, or on the cpool, or is it also
> prohibitively time-consuming? If it works (and you don't run out of
> inodes) then it seems you could use it t
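As far as the mechanics go, cp -al on the pool or cpool is just a
hardlink-farm copy within one filesystem, so it does work; the open question
is whether creating one new directory entry per pooled file is fast enough on
a big pool. A minimal example (the destination name is arbitrary):

    # No file data is copied; every entry in the new tree is another
    # hardlink to the same inode. Must stay on the same filesystem.
    cp -al /var/lib/backuppc/cpool /var/lib/backuppc/cpool.copy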
Guus Houtzager wrote:
Run the shell script in a screen session (if you prefer), sit back, and let
it do its work.
It seems that in the main calling script, if you change this line:
rsync -av rsync://10.0.0.2/remotebackup/pc/$i/ .
to this:
rsync -av --delete rsync://10.0.0.2/remotebackup/pc/$i/ .
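(Presumably the point of adding --delete is that removals on the source, e.g.
expired backups or retired hosts, get propagated to the copy as well.) For
context, a hypothetical per-host loop such a line might sit in; this is only
an illustration, not the actual script posted here, with the IP and module
name taken from the quoted line:

    cd /var/lib/backuppc/pc
    for i in *; do
        [ -d "$i" ] || continue
        ( cd "$i" &&
          rsync -av --delete "rsync://10.0.0.2/remotebackup/pc/$i/" . )
    done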
Hi,
Ok, here are the scripts. I've used them to migrate BackupPC with all its
data from one server to another. Put both scripts in /var/lib/backuppc.
Make sure both are executable for the backuppc user.
Note on pathnames: I'm running Debian; other distros may have stuff in
different places.
First cop
Please ignore my previous message, it contains the wrong package version,
sorry.
Now the answer ;-)
Just use this source package and rebuild it; it should work. It's what I'm
using at my work. I'm not responsible if you lose data though ;-)
I've attached a "nosource" package to reduce the attac
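For anyone new to RPM-based distros, rebuilding a source package generally
boils down to something like the following; the package file name is a
placeholder and the output directory varies by distro (Mandriva traditionally
used /usr/src/RPM):

    # Rebuild binary packages from the source package, then install.
    rpmbuild --rebuild backuppc-<version>.src.rpm
    rpm -ivh /usr/src/RPM/RPMS/noarch/backuppc-*.rpm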
Hi all,
I've been using BackupPC on Debian for months and I'm very happy with it.
I now have to run it on Mandriva 2006, but unfortunately there is no
package for this Linux distribution (as far as I know). And I must
admit that I'm not very familiar with RedHat-based distros at the
moment...
So
Guus Houtzager wrote:
On Wed, 2005-12-21 at 22:53 +1100, Vincent Ho wrote:
On Wed, Dec 21, 2005 at 11:29:33AM +0100, Guus Houtzager wrote:
A colleague of mine wrote just that script. I had the problem of needing
to migrate my backuppc with all data to another server and ran into the
w
On Wed, 2005-12-21 at 22:53 +1100, Vincent Ho wrote:
> On Wed, Dec 21, 2005 at 11:29:33AM +0100, Guus Houtzager wrote:
>
> > A colleague of mine wrote just that script. I had the problem of needing
> > to migrate my backuppc with all data to another server and ran into the
> > whole hardlink / mem
On Wed, Dec 21, 2005 at 11:29:33AM +0100, Guus Houtzager wrote:
> A colleague of mine wrote just that script. I had the problem of needing
> to migrate my backuppc with all data to another server and ran into the
> whole hardlink / memory issue. So my colleague wrote a script that
> rsyncs the /pc
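For anyone new to the thread, the "whole hardlink / memory issue" refers to
what happens with the obvious single-pass copy, e.g. (hostname is a
placeholder):

    # Correct but memory-hungry: with -H rsync has to keep track of every
    # hardlinked inode it has seen, which gets enormous on a BackupPC pool.
    rsync -aH /var/lib/backuppc/ root@newserver:/var/lib/backuppc/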
Hi,
On Tue, 2005-12-20 at 17:23 -0600, Les Mikesell wrote:
> On Tue, 2005-12-20 at 17:00, Craig Barratt wrote:
>
> > I have been experimenting with a perl script that generates a large
> > tar file for copying the BackupPC data.
>
> Could you do one that rebuilds the hardlinks after the fact?