I am using $Conf{ClientNameAlias} quite happily with rsync/ssh unix
hosts, but I can't get it to work with a Windows/smb server.
I assume it should work, or is that a bad assumption?
Thanks,
Brendan.
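For what it's worth, the alias is normally set in a per-host config file (the exact location varies by BackupPC version); a minimal sketch for an smb host, where both hostnames below are made up:

```perl
# Hypothetical per-host config for a host called "winbox".
$Conf{ClientNameAlias} = 'real-fileserver';   # name BackupPC actually connects to
$Conf{XferMethod}      = 'smb';
```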
On 03/07 09:58 , Chris Willard wrote:
> I have just installed BackupPC on my Debian system using the package
> manager. I was not given a choice of where to install the backups. Is
> there a way to change the location as I would like to use a separate disk
> for backups. Would it work if I moved
Hi Chris,
To change the location where BackupPC writes its files, find the file Lib.pm
and modify the $TopDir in it.
Or, if you prefer, a symbolic link will also work (it's what I use). Here's
how I move the BackupPC files to a different location (in this case, a hard
drive that's been mounted as
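The symlink approach can be sketched like this. The paths below are throwaway stand-ins (in real life OLD would be /var/lib/backuppc, the Debian default $TopDir, and NEW a directory on the new disk), and the BackupPC daemon should be stopped before moving anything:

```shell
TMP=$(mktemp -d)
OLD="$TMP/var-lib-backuppc"     # stands in for the real $TopDir
NEW="$TMP/mnt-backupdisk"       # stands in for the new disk
mkdir -p "$OLD" "$NEW"
echo "pool data" > "$OLD/pool-file"   # fake content standing in for the pool
cp -a "$OLD/." "$NEW/"          # -a preserves perms, ownership and hardlinks
mv "$OLD" "$OLD.orig"           # keep the original tree until verified
ln -s "$NEW" "$OLD"             # BackupPC now follows the symlink
cat "$OLD/pool-file"            # prints: pool data
```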
Hi,
I have just installed BackupPC on my Debian system using the package
manager. I was not given a choice of where to install the backups. Is
there a way to change the location as I would like to use a separate disk
for backups. Would it work if I moved the data from the current location
and
On Tue, 2006-03-07 at 12:46, David Brown wrote:
> There are no links. Each file has one entry, under the pool directory.
> All that has to be managed is creation and deletion of these files. It is
> not difficult to be able to easily recover from a crash in either of these
> scenarios, as long a
I agree. It is sometimes nice to be able to step down through the client's
tree and look for a specific file.
So, what's the drawback of using a database to manage the tree?
Obviously, you only have a single hash tree that contains all backups
and you wouldn't be able to browse it for a specific file.
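The pool/hardlink scheme being debated can be seen with two commands; a throwaway sketch using temp paths and a fake hash name:

```shell
# Demo of the hardlink pooling idea: one inode, two names.
TMP=$(mktemp -d)
mkdir -p "$TMP/pool" "$TMP/pc/host1"
echo "file body" > "$TMP/pool/a1b2"     # pool copy, named by (fake) content hash
ln "$TMP/pool/a1b2" "$TMP/pc/host1/f"   # backup tree links to the same inode
stat -c %h "$TMP/pool/a1b2"             # prints: 2  (hard link count)
```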
On Tue, Mar 07, 2006 at 12:15:50PM -0600, Les Mikesell wrote:
> The piece that has to be atomic has to do with the actual pool
> file, so unless you move the data into the database as well
> you can't atomically manage the links or ever be sure that
> they are actually correct. And if you move th
> > okay, right? it's only when you want to preserve or copy your
> > pool that there's an issue? (or am i neglecting something? i
> > might well be.)
>
> Even just the normal process of looking at the pool, either to see if a
> file is present, or as part of the cleanup scan is much slow
On Tue, 2006-03-07 at 11:55, David Brown wrote:
> >
> > All you'll do by trying is lose the atomic nature of the hardlinks.
> > You aren't ever going to have the data at the same time you know all
> > of its names so you can store them close together. Just throw in
> > lots of ram and let caching d
We had the same problem trying to restore Unix user accounts (dot-files,
symlinks) onto a Windows machine.
We tried many unzip programs. The only one able to restore the data was 7-Zip.
Regards,
Olivier.
On Tuesday 07 March 2006 16:22, Jean-Michel Beuken wrote:
> Hello,
>
> I have a problem to expand t
On Tue, Mar 07, 2006 at 11:49:40AM -0600, Les Mikesell wrote:
> > I still say it is going to be a lot easier to change how backuppc works
> > than it is going to be to find a filesystem that will deal with this very
> > unusual use case well.
>
> All you'll do by trying is lose the atomic nature
On Tue, 2006-03-07 at 11:10, David Brown wrote:
> The depth isn't really the issue. It is that they are created under one
> tree, and hardlinked to another tree. The normal FS optimization of
> putting the inodes of files in a given directory near each other breaks
> down, and the directories in
On Tue, Mar 07, 2006 at 12:20:04PM -0500, Paul Fox wrote:
> to clarify -- in the "normal" case, where the backup data is
> usually not read, but only written, the current filesystems are
> okay, right? it's only when you want to preserve or copy your
> pool that there's an issue? (or am i neglec
> The depth isn't really the issue. It is that they are created under one
> tree, and hardlinked to another tree. The normal FS optimization of
> putting the inodes of files in a given directory near each other breaks
> down, and the directories in the pool end up with files of very diverse
On Tue, Mar 07, 2006 at 10:21:15AM -0600, Carl Wilhelm Soderstrom wrote:
> ok. point taken.
> Bonnie does create a very shallow tree for these files, but it's only a
> directory or two deep.
The depth isn't really the issue. It is that they are created under one
tree, and hardlinked to another t
I did a quick search of the archive, and only one thread about error 50001
came up, with no apparent resolution.
http://sourceforge.net/mailarchive/message.php?msg_id=10431724
Craig suggested checking your PC's disk for
errors. Have you looked for specific errors in the pc's XferLOG? Do you
have tons of
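One way to look, sketched on a fake log excerpt. The real per-host log lives under $TopDir/pc/&lt;host&gt;/ as XferLOG.*.z and is read with BackupPC_zcat; the sample lines below are invented:

```shell
# Invented sample of smbtar log lines; in practice, pipe BackupPC_zcat
# output into the same grep instead of a file.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
  create d 755 0/0 0 Documents
Error reading file \\server\share\a.doc : NT_STATUS_ACCESS_DENIED
  create   644 0/0 1024 b.txt
EOF
grep -ciE 'error|NT_STATUS' "$LOG"    # prints: 1  (matching lines)
```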
On 03/07 08:14 , David Brown wrote:
> Unfortunately, the resultant filesystem has very little resemblance to the
> file tree that backuppc writes. I'm not sure if there is any utility that
> creates this kind of tree, and I would argue that backuppc shouldn't be
> either, since it is so hard on th
On Tue, Mar 07, 2006 at 09:23:36AM -0600, Carl Wilhelm Soderstrom wrote:
> I'm experimenting with an external firewire drive enclosure, and I formatted
> it with 3 different filesystems, then used bonnie++ to generate 10GB of
> sequential data, and 1,024,000 small files between 1000 and 100 bytes
I just installed a second backuppc server after giving up trying to mirror
my 1.5T USB RAID arrays. It's almost identical to my first one, but on
several *nix hosts, I'm getting:
Running: /usr/bin/ssh -q -x -l root apollo /usr/bin/rsync --server
--sender --numeric-ids --perms --owner --group --de
On 03/07 09:54 , Les Mikesell wrote:
> See if you can find a benchmark program called 'postmark'. This
> used to be available from NetApp but I haven't been able to find
> a copy recently. It specifically tests creation and deletion
> of lots of small files. When I used it years ago it showed
>
I am getting an error from my BackupPC software. The backup starts and runs
for a while, then it gives the following:
2006-03-06 10:31:16 Got fatal error during xfer (Too many smbtar errors (50001))
I am not sure what is causing this problem; has anyone seen this before?
On Tue, 2006-03-07 at 09:23, Carl Wilhelm Soderstrom wrote:
> Am I missing something here? Am I mis-interpreting the data? Is there anyone
> else out there with more bonnie experience than I, who can suggest other
> things to try to gain more surety about this?
See if you can find a benchmark pro
On 03/07 04:43 , Guus Houtzager wrote:
> I think you're right. I have 2 suggestions for additional testing. It's my
> experience that backuppc became really really slow after a few weeks when
> more data began to accumulate. Could you test ext3 again, but with a few
> million more files? I'm als
Hi,
On Tuesday 07 March 2006 16:23, Carl Wilhelm Soderstrom wrote:
> I'm experimenting with an external firewire drive enclosure, and I
> formatted it with 3 different filesystems, then used bonnie++ to generate
> 10GB of sequential data, and 1,024,000 small files between 1000 and 100
> bytes in s
I'm experimenting with an external firewire drive enclosure, and I formatted
it with 3 different filesystems, then used bonnie++ to generate 10GB of
sequential data, and 1,024,000 small files between 1000 and 100 bytes in
size.
I tried it with xfs, reiserfs, and ext3; and contrary to a lot of hype
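For reproducibility, a bonnie++ invocation matching the numbers above might look like the following; the mount point is a placeholder, and the flag values are my reading of the bonnie++ -s/-n options (-s takes MB, -n takes a file count in multiples of 1024 plus max:min file sizes in bytes):

```sh
# Example only: /mnt/firewire is a placeholder mount point.
# -s 10240            10 GB of sequential data (size in MB)
# -n 1000:1000:100    1000*1024 = 1,024,000 files, each 100..1000 bytes
bonnie++ -d /mnt/firewire -s 10240 -n 1000:1000:100
```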
Hello,
I have a problem expanding the restore.zip file generated with the
CGI interface (I use Option 2: Download Zip archive, "Make archive
relative to /", "compress=5").
The size of the restore.zip file on the Windows XP desktop is correct,
but if I use WinZip v9, PowerArchiver, PKZIP or "Open wi
On 03/07 03:09 , Stephen Vaughan wrote:
> I'm having problems backing up one of my servers. The backup goes for about
> 1 1/2 hours then it just cuts out and says timeout error.
try setting:
$Conf{ClientTimeout} = 432000;
either in config.pl or the per-host config files. See if that solves your
Hello all,
I'm running currently two backupservers which are doing a nice job.
The backupservers are doing backups using rsync over ssh.
OS : Mandriva Linux release 2006.0 (Official) for i586
kernel : 2.6.12
rsync : version 2.6.6 protocol version 29
raid 5
The problem I have now is that a coup
On Monday, 6 March 2006 09:37, Chaiwat Maneeboon wrote:
CM > Hi, all
CM > I am using BackupPC for the first time. My server is Ubuntu 5.10; the
CM > client is Windows XP Pro SP2.
CM > I installed BackupPC from the Debian stable repository. config.pl is
CM > left at its defaults.
CM > Windows xp