I'd use xfsdump rather than dd (I'm using an XFS filesystem), since an
xfsdump archive can be restored onto a partition of any type and any size.
Moreover, it is browsable using xfsrestore.
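A sketch of that workflow — the device names, mount points, and labels below are assumptions for illustration, not from this thread:

```shell
# Sketch only: adjust paths and labels to your setup.
# Level-0 (full) dump of the XFS filesystem holding the pool;
# -L is the session label, -M the media label, -f the destination file:
xfsdump -l 0 -L "backuppc-full" -M "rotation-1" \
        -f /mnt/removable/backuppc.xfsdump /var/lib/backuppc

# Browse the dump interactively before committing to a restore:
xfsrestore -i -f /mnt/removable/backuppc.xfsdump /tmp/restore-browse

# Restore onto any freshly made filesystem, whatever its size:
xfsrestore -f /mnt/removable/backuppc.xfsdump /var/lib/backuppc
```

Both commands need root and an XFS source filesystem; the interactive `-i` mode is what makes the dump browsable.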
The only problem I have is that I'm unable to find any hot-swappable SATA bays.
Any links to such equipment would be cool.
Rega
On Mon, 15 Aug 2005 16:38:18 -0400
Justin Pessa <[EMAIL PROTECTED]> wrote:
> I am interested in seeing what others do to move their backup data to
> other media for rotational backups.
>
> Currently I use a series of hotswappable SATA drives and rsync to
> transfer the contents of /var/lib/back
On Mon, 2005-08-15 at 15:38, Justin Pessa wrote:
> I am interested in seeing what others do to move their backup data to
> other media for rotational backups.
>
> Currently I use a series of hotswappable SATA drives and rsync to
> transfer the contents of /var/lib/backuppc to the removable disk. I
I am interested in seeing what others do to move their backup data to other media for rotational backups.
Currently I use a series of hotswappable SATA drives and rsync to
transfer the contents of /var/lib/backuppc to the removable disk. Is
there a better solution?
My current backup scheme is si
When I tried to use a replicated backup archive on a new server, most
of the host directories in /var/lib/backuppc/pc/ were deleted as soon
as backuppc attempted to start backups, followed by an error message that
the (non-existent) host directory could not be accessed, or similar wording.
The only
i think this is incorrect. a tar archive is produced using the
index of a _completed_ backup. you can't extract an in-process
backup.
and as for the cleanup process, it's only cleaning up files that
have no references, so by definition it can't affect an
extraction.
Good questions.. I would think if nothing else you could end up with
potentially invalid hardlink targets in the tar archive if the
background cleanup process is running, but I'm not sure.
Check out the howto for LVM information. It should explain everything.
http://www.tldp.org/HOWTO/LVM-HOWTO
Is what Dan wrote right?
But... if I go to the web interface and run the archive command while
the backuppc service is running, will I get the same inconsistent tar
archive?
What do you mean by "inconsistent"? That it isn't the latest archive, or
that it contains wrong/corrupted data?
Pleas
On 08/15 10:31 , Julian Robbins wrote:
> I am missing something, or is it right that my 'normal' pool is always
> empty?
compression for files is turned on by default. you only get files in your
uncompressed pool if you explicitly turn off compression (either
system-wide or per-host).
for inst
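In BackupPC this is controlled by the `$Conf{CompressLevel}` setting; a sketch of the relevant config.pl fragment (the file path shown is the Debian default and may differ on your system):

```perl
# /etc/backuppc/config.pl (or a per-host config.pl override)
$Conf{CompressLevel} = 0;    # 0 = no compression: new files go to pool/
# $Conf{CompressLevel} = 3;  # 1..9 = zlib level: new files go to cpool/
```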
I believe you could potentially have an inconsistent tar archive if the
backuppc service is running while you are generating your archive. The
safest method would be to use LVM volumes on your backup server, and
then just create an LVM snapshot and back up the snapshot so you won't
need to shu
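A sketch of that snapshot approach, assuming the pool lives on a logical volume /dev/vg0/backuppc — all VG/LV names, sizes, and mount points below are illustrative, not from the thread:

```shell
# Sketch only: requires root and free extents in the volume group.
# Create the snapshot; the 2G is scratch space for blocks that change
# on the origin while the snapshot exists:
lvcreate --snapshot --size 2G --name backuppc-snap /dev/vg0/backuppc

# Mount it read-only (add -o nouuid if the filesystem is XFS):
mkdir -p /mnt/backuppc-snap
mount -o ro /dev/vg0/backuppc-snap /mnt/backuppc-snap

# Archive from the frozen view while the BackupPC daemon keeps running:
tar -cf /mnt/removable/backuppc.tar -C /mnt/backuppc-snap .

# Tear down:
umount /mnt/backuppc-snap
lvremove -f /dev/vg0/backuppc-snap
```

If the snapshot's scratch space fills up before you finish, the snapshot is invalidated, so size it for the write rate of your pool.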
Hi all,
below is the approach I use on a backuppc server for a second copy of the
archive on an iPod via USB (but you can use a bigger, faster USB hard
disk).
My test server runs Debian sarge with a 2.6.10 or 2.6.12 kernel + udev,
on a Linear software RAID with 2x40 GB HDs for the data and a 6 GB HD
Hi all
I just wondered: I have never ever seen any files in my normal pool,
only files in the compressed pool. I have run backuppc for over a year
now, and restored many times.
I am missing something, or is it right that my 'normal' pool is always
empty?
2005/8/14 10:00:00 Running Back