On Mon, Feb 15, 2010 at 04:32:41PM -0500, tribat wrote:
Would be cool to get the BackupPC TopDir on an encrypted container so I could
back up machines that are running an encrypted OS.
I found out that dm-crypt can be layered under a loop-mounted filesystem
container to encrypt all the data in
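For reference, a file-backed dm-crypt/LUKS container for the TopDir might look like the following. This is only a sketch: it requires root, and the paths, size, and filesystem choice are made-up examples, not anything from the thread.

```shell
# Create a 10 GiB container file and put a LUKS-encrypted ext4 fs in it.
# (Modern cryptsetup sets up the loop device for a regular file itself.)
dd if=/dev/zero of=/srv/topdir.img bs=1M count=10240
cryptsetup luksFormat /srv/topdir.img          # prompts for a passphrase
cryptsetup luksOpen   /srv/topdir.img topdir   # maps /dev/mapper/topdir
mkfs.ext4 /dev/mapper/topdir
mount /dev/mapper/topdir /var/lib/backuppc     # BackupPC TopDir lives here
```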
On Wed, Jan 20, 2010 at 11:07:22AM -0500, Ken Long wrote:
Hello!
I have a BackupPC server that is currently running out of drive space.
Currently, we are backing up both the servers and the workstations to
one single machine. The problem is that I'd really rather not
grow the
On Thu, Jan 07, 2010 at 05:38:14PM -0700, Kyle Anderson wrote:
Pieter,
Thank you for kindly providing this script. I also have an accounting need to
get some sort of reasonable estimate for how much space they are occupying,
and
I don't want to do a du.
Since files that occur in multiple
On Tue, Dec 01, 2009 at 12:36:21PM +0100, Tomasz Chmielewski wrote:
Tyler J. Wagner wrote:
This is a frequent question. The answer is: no such statistic exists.
A host does not use a given amount of space at all. All files are
pooled, and one cannot say how much of the pool a
On Tue, Dec 01, 2009 at 09:28:50AM -0500, Jeffrey J. Kosowsky wrote:
Pieter Wuille wrote at about 13:18:33 +0100 on Tuesday, December 1, 2009:
What you can do is count the allocated space for each directory and
file, but divide the numbers for files by (nHardlinks+1). This way you
end up
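A minimal sketch of the per-hardlink accounting idea above (not Pieter's actual script): charge each file size/nlink bytes, so a pooled file is apportioned across all of its references instead of being counted in full for every host. GNU find and awk are assumed; adjust the divisor if you want to count the pool's own link separately.

```shell
# Fair-share disk usage for a tree containing hard links: charge each
# file (size / link count) bytes, so a file hard-linked N times
# contributes its size once in total across all N references.
fair_du() {
    find "$1" -type f -printf '%s %n\n' \
        | awk '{ total += $1 / $2 } END { printf "%.0f\n", total }'
}
```

Running `fair_du /var/lib/backuppc/pc/somehost` (hypothetical path) prints that host's apportioned byte count.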
On Tue, Nov 24, 2009 at 05:49:37PM -0500, Steve wrote:
I agree - almost every newbie that picks up BackupPC makes this
mistake - the more experienced you are with old-school config files
the MORE likely you are to assume changing this is all you need to do.
A note in the docs and/or a link to
On Thu, Nov 19, 2009 at 12:42:04PM +0100, Christian Völker wrote:
Hi,
[I know, you don't want to have this topic again, treat it as a collection]
has somebody tried to rsync a file system image?
As my BackupPC pool is too large for any backup program (~600GB) to
follow all the
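One commonly suggested answer to the question above, as a sketch rather than a tested recipe: rsync the image file with `--inplace`, so the receiver updates changed blocks in the existing destination file instead of rebuilding a full temporary copy, and `--partial`, so an interrupted transfer can resume. The host/path names in the usage line are made up.

```shell
# Sync a large filesystem image; only changed blocks cross the wire.
# --inplace: write into the destination file directly (no temp copy,
#            so no extra ~600 GB of scratch space on the receiver)
# --partial: keep partially transferred data if interrupted
sync_image() {
    rsync --inplace --partial "$1" "$2"
}
```

Usage would be something like `sync_image /backups/pool.img offsite:/backups/pool.img`.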
On Fri, Sep 11, 2009 at 04:47:31PM +1000, Adam Goryachev wrote:
Timothy J Massey wrote:
Of course, now we've come full circle: how do you copy a physical
block device in an rsync-like manner? :)
Why not just use lvm to take a snapshot, use dd to take 2G chunks (or
whatever size you want)
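A sketch of Adam's chunking idea (volume names and sizes are illustrative): after `lvcreate --snapshot` has produced a stable device, cut it into fixed-size files that rsync can then transfer one at a time.

```shell
# Split a block device (or image file) into fixed-size chunk files.
# Run against an LVM snapshot device, e.g. /dev/vg0/pool-snap made with:
#   lvcreate --snapshot --size 5G --name pool-snap /dev/vg0/pool
split_chunks() {    # split_chunks <source> <prefix> <chunk-MiB>
    src=$1; prefix=$2; mb=$3; i=0
    while dd if="$src" of="$prefix.$i" bs=1M skip=$((i * mb)) \
             count="$mb" 2>/dev/null && [ -s "$prefix.$i" ]; do
        i=$((i + 1))
    done
    rm -f "$prefix.$i"    # drop the empty chunk written at end-of-device
}
```

Each chunk file can then be rsynced individually, and `cat chunk.*` (in order) reconstructs the original device contents.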
On Wed, Sep 09, 2009 at 09:33:03AM +0200, Christian Völker wrote:
Hi,
as I have the same issue with storing my BackupPC outside I tried
another way the last days:
First, my environment:
28 hosts to back up. Mostly idle machines with minor
On Wed, Sep 02, 2009 at 01:08:27PM -0500, Les Mikesell wrote:
Pieter Wuille wrote:
You're very right, and I thought about it too. Instead of using a RAID1 on
the offsite backup, there are two separate backups on the offsite machine,
and synchronisation switches between them. This also
Hello everyone,
while trying to come up with a way to efficiently synchronise a BackupPC archive
on one server with a remote, encrypted offsite backup, the following problems
arise:
* As often pointed out on this list, filesystem-level synchronisation is
extremely cpu and memory-intensive. Not
On Wed, Sep 02, 2009 at 10:14:05AM -0500, Les Mikesell wrote:
Pieter Wuille wrote:
In our case, the BackupPC pool is stored on an XFS filesystem on an LVM
volume, allowing a xfsfreeze/sync/snapshot/xfsunfreeze, and using
devfiles.pl on the snapshot. Instead of xfsfreeze+unfreeze, a backuppc
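The freeze/snapshot sequence described above, spelled out as a sketch (mount point and volume names are assumed; requires root; the command is `xfs_freeze`):

```shell
# Quiesce XFS so the snapshot is consistent, snapshot the LV, thaw.
xfs_freeze -f /var/lib/backuppc                               # freeze writes
lvcreate --snapshot --size 5G --name pool-snap /dev/vg0/pool
xfs_freeze -u /var/lib/backuppc                               # resume writes
# ...back up /dev/vg0/pool-snap, then: lvremove -f /dev/vg0/pool-snap
```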
On Thu, Jul 09, 2009 at 10:52:02AM -0400, shion wrote:
Hello,
I want to restore a backup via the command line.
I have already tried to use the BackupPC_restore command but I don't know
what I should do with the third parameter (reqFileName).
I don't have a list of all files, which should
On Wed, Jul 08, 2009 at 10:39:14AM -0500, Les Mikesell wrote:
Filipe Brandenburger wrote:
$Conf{WakeupSchedule} = ...
The default configuration causes BackupPC to run the BackupPC_nightly job
when there are backup jobs which may need to run during the night...
Comments, anyone?
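For context, the setting in question is a BackupPC config entry like the one below. In BackupPC 3.x, BackupPC_nightly starts at the first hour listed, so that hour should be one with no regular backup traffic; the hours shown are just the shipped default, not a recommendation from this thread.

```perl
# $Conf{WakeupSchedule}: hours (0-23) at which BackupPC wakes up and
# queues backups.  BackupPC_nightly runs at the FIRST hour listed.
$Conf{WakeupSchedule} = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12,
                         13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23];
```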
On Thu, Jun 18, 2009 at 11:30:18AM +0200, Chris Picton wrote:
On Thu, 2009-06-18 at 18:34 +1000, Adam Goryachev wrote:
Chris Picton wrote:
Hi all
I am trying to perform a restore of some important data, but have hit a
snag.
Through the web interface, I can browse all
On Sun, Jun 14, 2009 at 04:58:41PM -0400, Jeffrey J. Kosowsky wrote:
Adam Goryachev wrote at about 04:03:31 +1000 on Monday, June 15, 2009:
Chris Baker wrote:
It seems that I cannot reliably restore for more than two hours or
so. The restores just quit for no apparent reason.
On Tue, Jun 02, 2009 at 04:17:18PM -0400, Jeffrey J. Kosowsky wrote:
Pieter Wuille wrote at about 21:40:46 +0200 on Tuesday, June 2, 2009:
On Tue, Jun 02, 2009 at 12:31:54PM -0400, Jeffrey J. Kosowsky wrote:
Pieter Wuille wrote at about 17:57:06 +0200 on Tuesday, June 2, 2009:
cut
On Wed, Jun 3, 2009 at 5:23 PM, John Rouillard
rouilj-backu...@renesys.com wrote:
On Tue, Jun 02, 2009 at 01:46:39PM -0700, Craig Barratt wrote:
I recently heard about lessfs, which runs on top of FUSE to provide
a file system that does block-level de-duplication. See:
What is the status of
On Wed, Jun 03, 2009 at 07:36:22PM -0400, Jeffrey J. Kosowsky wrote:
Holger Parplies wrote at about 23:45:35 +0200 on Wednesday, June 3, 2009:
Hi,
Peter Walter wrote on 2009-06-03 16:15:37 -0400 [Re: [BackupPC-users]
Backing up a BackupPC server]:
[...]
My understanding is
On Tue, Jun 02, 2009 at 11:44:11AM +0200, Tino Schwarze wrote:
On Mon, Jun 01, 2009 at 06:15:52PM -0400, Stephane Rouleau wrote:
Is the blockdevel-level rsync-like solution going to be something
publicly available?
We certainly intend to, but no guarantee it ever gets finished. Except
Hello,
because of a need to restore files from backuppc in a more flexible way than
through the web-interface (a particular directory in a whole bunch of hosts
at the same time) and some googling, I stumbled upon Stephen Day's fuse system
for backuppc.
It had a few shortcomings, such as not
On Tue, Jun 02, 2009 at 12:31:54PM -0400, Jeffrey J. Kosowsky wrote:
Pieter Wuille wrote at about 17:57:06 +0200 on Tuesday, June 2, 2009:
cut
If anyone's interested in trying/looking at it:
https://svn.ulyssis.org/repos/sipa/backuppc-fuse/backuppcfs.pl
cut
It is only tested on one
On Sun, May 31, 2009 at 11:22:13AM -0400, Stephane Rouleau wrote:
Pieter Wuille wrote:
This is how we handle backups of the backuppc pool:
* the pool itself is on a LUKS-encrypted XFS filesystem, on an LVM
volume, on a software RAID1 of two 1 TB disks.
* twice a week following
Hello list,
I don't know how common this usage is, but in our setup we have a lot of
backuppc hosts that are physically located on only a few machines. It
would be nice if it were possible to allow hosts on different machines to
be backed up simultaneously, but prevent simultaneous backups (dumps)
On Tue, May 19, 2009 at 05:51:29PM +0200, Boniforti Flavio wrote:
Hi,
there is a regular discussion on how to back up/move/copy the
backuppc pool. Did anyone try to backup the pool with bacula?
Hello there...
I don't know about bacula, but would like myself also to get a backup of