iiuc BackupPC_fixLinks.pl
(http://sourceforge.net/apps/mediawiki/backuppc/index.php?title=BackupPC_FixLinks)
ought not ignore multiply linked files when considering which files in
the pc tree might need to be linked into the pool. just because files
are multiply linked doesn't mean they're linked into the pool.
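a quick way to see the distinction, as a rough sketch (assuming TopDir is
/var/lib/backuppc and GNU find; this is slow on a big pool): count the pc/
inodes that never appear in the (c)pool at all, whatever their link count.

    cd /var/lib/backuppc || exit 1
    # inodes present anywhere in the pool
    find pool cpool -type f -printf '%i\n' | sort -u > /tmp/pool-inodes
    # pc/ inodes with no pool counterpart, regardless of how many links they have
    find pc -type f -printf '%i\n' | sort -u \
        | comm -23 - /tmp/pool-inodes | wc -l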
> I HAVE moved everything from /var/lib/backuppc to /mnt/backuppc (a
> different hard drive).
>
> AHA!! Doing a tree -a in /var/lib/backuppc I see a "cpool" there with
> LOTS of directories and files!!!
>
> So, somewhere I must have to point the cpool and log directories at the
> new location,
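one common approach, rather than chasing every configured path, is to keep the
old TopDir path alive as a symlink (or bind mount) to the new volume -- a
sketch, with the init script name assumed; the stray cpool that accumulated in
the old location still has to be merged or re-pooled:

    /etc/init.d/backuppc stop              # or however the daemon is managed
    # set the stray data aside so it can be inspected and merged later
    mv /var/lib/backuppc /var/lib/backuppc.orphaned
    # the old, hard-wired path now resolves to the new volume
    ln -s /mnt/backuppc /var/lib/backuppc
    /etc/init.d/backuppc start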
On 2011-03-18 05:46, Neal Becker wrote:
> I'm interested in setting up linux->linux backup. I don't like the idea of
> giving permission for machine1 as user backup to ssh to machine2 as root.
> What
> are the options?
>
> 1. Can ssh be restricted so that the only command user backup can run is
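one common pattern, sketched here with a made-up wrapper path and key: pin the
backup user's key in root's authorized_keys on machine2 to a forced command
that only accepts rsync started in server mode.

    # ~root/.ssh/authorized_keys on machine2 -- one line, key elided:
    command="/usr/local/bin/rsync-only",no-pty,no-port-forwarding,no-agent-forwarding,no-X11-forwarding ssh-ed25519 AAAA... backup@machine1

    # /usr/local/bin/rsync-only -- permit only rsync in server mode:
    #!/bin/sh
    case "$SSH_ORIGINAL_COMMAND" in
        "rsync --server"*) exec $SSH_ORIGINAL_COMMAND ;;
        *) echo "rejected: $SSH_ORIGINAL_COMMAND" >&2; exit 1 ;;
    esac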
for my offsite backups i've a script selecting the latest full and
incremental from each ~backuppc/pc/*, along with the logs and "backups"
files
if/when this is restored, what will be needed?
something to rebuild cpool no doubt.
perhaps also editing of the "backups" files to reflect what's act
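if/when that restore happens, something like BackupPC_fixLinks.pl from the wiki
(linked further up) should be able to walk the restored pc/ tree and rebuild
the (c)pool links -- very roughly, with the install path only a guess; check
the wiki page for the script's real options:

    /etc/init.d/backuppc stop
    # re-link the restored pc/ files into a rebuilt (c)pool
    sudo -u backuppc /usr/local/bin/BackupPC_fixLinks.pl
    /etc/init.d/backuppc start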
i've been having good success with a script that selects only the most
recent full and most recent incremental for each backup in the pc
directory, as well as the set of backups last successfully
transferred, and rsync's that set offsite, with -H. for me, this
still deduplicates, and keeps a reaso
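roughly what that selection could look like, as a sketch (TopDir and the
destination are assumptions; the real script also grabs the per-host logs, and
the single rsync run is what lets -H keep the links shared between the chosen
backups):

    #!/bin/sh
    TOP=/var/lib/backuppc               # assumed TopDir
    DEST=offsite:/srv/backuppc-copy     # hypothetical destination
    cd "$TOP" || exit 1
    set --
    for b in pc/*/backups; do
        d=$(dirname "$b")
        # column 1 = backup number, column 2 = type in the "backups" file
        full=$(awk -F'\t' '$2=="full" {n=$1} END {print n}' "$b")
        incr=$(awk -F'\t' '$2=="incr" {n=$1} END {print n}' "$b")
        [ -n "$full" ] && set -- "$@" "$d/$full"
        [ -n "$incr" ] && set -- "$@" "$d/$incr"
        set -- "$@" "$b"                # keep the "backups" file itself
    done
    # one rsync invocation so -H can recreate the shared hard links offsite
    rsync -aHR "$@" "$DEST"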
> rsync'ing the BackupPC data pool is generally recommended against. The
> number of hardlinks causes explosive growth in rsync's memory consumption,
> and while you may be able to get away with it if you have 20GB of data
> (depending on how much memory you have), you will likely run out of
> memory.
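a rough way to gauge the scale before trying it (TopDir assumed): count the
pooled files with more than one link, since each one is an entry rsync -H has
to hold in memory.

    find /var/lib/backuppc/cpool /var/lib/backuppc/pool -type f -links +1 | wc -l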
mothy Omer wrote:
> On 3 January 2011 15:59, gregwm wrote:
>
>> > When i run a "du -hs" on a client's folder under the pc dir, is the
>> > result the amount of file storage the client is using (which could be
>> > shared by others)?
>> >
> When i run a "du -hs" on a client's folder under the pc dir, is the result
> the amount of file storage the client is using (which could be shared by
> others)?
>
> For example, I have comA and comB that both have the same 2GB file backed up.
> du -hs on both of their folders will res
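the wrinkle is that GNU du counts each hard-linked inode only once per
invocation, so where shared data gets charged depends on which argument
reaches it first -- easy to see directly (host names are the example ones
above):

    cd /var/lib/backuppc || exit 1
    du -sh pc/comA                  # comA's full footprint, shared data included
    du -sh pc/comA pc/comB          # shared inodes get charged to comA only
    du -sh cpool pc/comA pc/comB    # pooled data charged to cpool; the pc/
                                    # figures drop to what isn't pooled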
"pull" is fine for most circumstances, but i have an instance where "push"
is the only option. assuming i don't run nightly or trashclean remotely,
any danger in mounting the backuppc volume via sshfs and running
BackupPC_dump?
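for what it's worth, the mechanics would look roughly like this (host names,
paths and mount options are assumptions, and whether the pool behaves well
over sshfs is exactly the open question):

    # on the pushing machine: mount the server's TopDir over sshfs, then
    # run the dump for this host as the backuppc user
    sshfs backuppc@server:/var/lib/backuppc /var/lib/backuppc -o reconnect
    sudo -u backuppc /usr/share/backuppc/bin/BackupPC_dump -f thishost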
--
hmm, i rather expect the pool check doesn't come after all the transfers
but is interleaved with them; if i'm right, the temporary ballooning you
describe should only occur a file at a time.
On 2010-12-06, Ed McDonagh wrote:
> On Mon, 2010-12-06 at 10:47 -0500, Ken D'Ambrosio wrote:
>
> > i've used rsync -qPHSa with some success. however, if you have lots of
> > links, and not terribly much memory, rsync gobbles memory in proportion
> > to how many hardlinks it's trying to match up. so, ironically, i use
> > storebackup to make an offsite copy of my backuppc volume.
>
> Is
>
> I'm archiving the BackupPC backup folder (/var/lib/BackupPC) to an
> external disk with rsync.
>
> However, it looks like rsync is expanding the hard links into full copies?
>
> My total disk usage on the backup server is 407g, and the space used on
> the external drive is up to 726g.
>
> (using rsync -avh --delete)
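that growth is the classic symptom of copying without -H: every hard link
lands on the destination as an independent copy. if memory allows, something
along these lines (destination path hypothetical):

    # -H recreates the hard links on the destination instead of writing
    # each link out as a separate file; expect heavy memory use on a big pool
    rsync -aHvh --delete /var/lib/BackupPC/ /mnt/external/BackupPC/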
>
> to copy my backuppc volume offsite i wrote a script to pick
> (from /pc/*/backups) the 2 most recent incremental and the
> 2 most recent full backups from each backup set and rsync all that to the
> remote site. i'm ignoring (c)pool but the hardlinks still apply amongst the
> selected backups.
> ...saving to an Amazon s3 share...
> ..."So you have a nice
> non-redundant repo, and you want to make it redundant before you push it
> over the net??? Talk sense man!"
>
> The main question:
> ==
> He thinks it would be more bandwidth-efficient to tar up and encrypt the
> pool, whic
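for reference, the tar-and-encrypt idea being debated would be something like
this sketch (paths and passphrase handling are placeholders):

    # stream the pool tree through tar and symmetric gpg; tar keeps the
    # hard links, but the whole archive gets rebuilt and re-sent every run
    tar -C /var/lib/backuppc -cf - cpool pc \
        | gpg --symmetric --cipher-algo AES256 -o /mnt/staging/backuppc.tar.gpg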
i'd just exclude them by name/pattern until a better answer surfaces
> At our site, files larger than 10GB are usually recreated faster than
> restored from backup, therefore we added to the "RsyncExtraArgs" the
> parameter
> "--max-size=100".
>
> Although this parameter is visible in the r
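one thing worth checking outside BackupPC first (paths hypothetical):
--max-size understands unit suffixes such as 10G, and a dry run shows which
files the filter would actually skip.

    # -n (dry run) with -v lists what would transfer without copying anything
    rsync -avn --max-size=10G /data/bigfiles/ /tmp/size-filter-test/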
>> /e4/v/h/backuppc/bin/BackupPC_zcat LOG.2.z
>> /e4/v/h/backuppc/bin/BackupPC_zcat LOG.1.z
>> denied at /v/h/backuppc/bin/BackupPC_dump line 193
>> 2010-10-28 17:15:11 admin : Can't read /bc/backuppcdata/pc: No such
>> file or directory at /v/h/backuppc/bin/BackupPC_sendEmail line 165.
>
> Why ar
umm,
Cpool nightly clean removed 190 files from where??
the mobo died on 10/14, a new server was purchased, complete with new
disks. the orig server's primary volume was also installed, but not the
original server's backuppc volume. on 10/28 i created a fresh empty
backuppc volume, tried starting backuppc
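the "Can't read /bc/backuppcdata/pc" error above is what an empty TopDir looks
like to the daemon; a minimal sketch of recreating the layout it expects,
assuming the daemon user is backuppc:

    # recreate the top-level directories BackupPC wants under the new,
    # empty TopDir, owned by the daemon user
    install -d -o backuppc -g backuppc \
        /bc/backuppcdata/pc /bc/backuppcdata/pool /bc/backuppcdata/cpool \
        /bc/backuppcdata/trash /bc/backuppcdata/log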