Re: [BackupPC-users] Copying in a file instead of backing up?

2009-01-14 Thread Rich Rauenzahn
Les Mikesell wrote: Johan Ehnberg wrote: OK. I can see now why this is true. But it seems like one could extend BackupPC's rsync protocol handling to check the pool for a file with the same checksum before syncing. This could give a real speedup on large files. This would be possible at least for

Re: [BackupPC-users] Copying in a file instead of backing up?

2009-01-14 Thread Rich Rauenzahn
I thought about this a little a year or so ago -- enough to try to understand the rsync perl modules (failed!). I thought perhaps what would be best is a Berkeley DB/tied-hash lookup table/cache that would map rsync checksums + file size to a pool item. The local rsync
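A minimal sketch of that cache idea in Perl, assuming the caller already has the rsync digest in hand; the database path and the remember()/lookup() helpers are made up for illustration, not BackupPC internals:

    #!/usr/bin/perl
    # Sketch only: map "rsync digest + size" to a pool path via a
    # Berkeley DB tied hash, so a match can be linked instead of synced.
    use strict;
    use warnings;
    use DB_File;

    tie my %cache, 'DB_File', '/var/lib/backuppc/rsync-digest.db'
        or die "cannot tie cache: $!";

    sub remember {                     # call when a file enters the pool
        my ($digest, $size, $pool_path) = @_;
        $cache{"$digest,$size"} = $pool_path;
    }

    sub lookup {                       # call before transferring a file
        my ($digest, $size) = @_;
        return $cache{"$digest,$size"};    # undef => not in the pool
    }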

Re: [BackupPC-users] Backing up the backup to an external USB drive

2008-12-12 Thread Rich Rauenzahn
Yesterday I read a little about LVM snapshots; I didn't know that LVM had this feature. When I read that suggestion I thought a snapshot was a sort of dd. Right now I am running dd on the snapshot; I opened some space in the LVM by reducing the root size and created the snapshot.
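For anyone following along, the snapshot-then-dd sequence looks roughly like this (volume names, the snapshot size, and the USB device are examples; assumes free extents in the volume group):

    # freeze a point-in-time view of the pool volume
    lvcreate --snapshot --size 5G --name pool-snap /dev/vg0/backuppc
    # block-for-block copy of the frozen snapshot to the USB drive
    dd if=/dev/vg0/pool-snap of=/dev/sdc1 bs=4M
    # drop the snapshot once the copy finishes
    lvremove -f /dev/vg0/pool-snap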

Re: [BackupPC-users] merge host from one pool into another pool

2008-11-25 Thread Rich Rauenzahn
Msquared wrote: I guess the simple question is, then: is there a tool you can run that will de-dupe and/or re-merge with the pool? How about a Linux tool like either of these? There is a risk of linking two files that are identical in content but semantically different... like an attrib file

Re: [BackupPC-users] Child Exited Prematurely

2008-11-24 Thread Rich Rauenzahn
This is most likely a TCP timeout or other network problem. Rsync added a TCP keep-alive option in protocol version 29 (if I recall correctly), but it is not currently supported in File::RsyncP, which BackupPC uses. It's too bad rsync doesn't have any kind of hook/plugin system where you
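Until File::RsyncP grows that option, one workaround is to let ssh send its own keepalives. A sketch based on the documented default RsyncClientCmd, with only the two -o options added:

    $Conf{RsyncClientCmd} = '$sshPath -q -x'
        . ' -o ServerAliveInterval=60 -o ServerAliveCountMax=5'
        . ' -l root $host $rsyncPath $argList+';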

Re: [BackupPC-users] Is there any way for BackupPC to restore hard links properly?

2008-11-10 Thread Rich Rauenzahn
Craig Barratt wrote: This results in one subtle bug that can't be easily fixed: if you switch the Xfer method from tar to rsync, old backups that have hardlinks stored with tar won't be correctly restored with rsync. The workaround is to generate a tar file and extract it, or switch the Xfer
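The tar workaround, sketched (host name, dump number, and share are examples, and the BackupPC_tarCreate path varies by distribution); tar itself recreates the hardlinks on extraction:

    /usr/share/backuppc/bin/BackupPC_tarCreate -h myhost -n 42 -s /home . \
        | ssh root@myhost 'cd / && tar -xpf -'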

Re: [BackupPC-users] Tailing the XferLOG

2008-11-07 Thread Rich Rauenzahn
Kevin DeGraaf wrote: Is there a way to tail the XferLOG during an rsync-over-ssh dump? How about something like:

    mkfifo pipe
    tail -f --bytes=+0 XferLOG.z > pipe &
    BackupPC_zcat pipe

Another option is to run strace against the process writing the files and filter for just the open() calls. Rich

Re: [BackupPC-users] Feature Request: Link to latest Full

2008-11-07 Thread Rich Rauenzahn
Nils Breunese (Lemonbit) wrote: If you want to send an archive of a backup to tape that you can restore (without BackupPC), check out 'Archive Functions' in the BackupPC documentation: http://backuppc.sourceforge.net/faq/BackupPC.html#archive_functions Nils Breunese. I've always

Re: [BackupPC-users] rsync - writefd_unbuffered failed to write

2008-10-27 Thread Rich Rauenzahn
Matthias Meyer wrote: Hi, I've installed cygwin and rsync on a Vista client and want to back it up to BackupPC 3.1.0-2 on my Linux server:

    2008/10/17 10:30:28 [1432] rsync: writefd_unbuffered failed to write 4 bytes [sender]: Connection reset by peer (104)
    2008/10/17 10:30:28 [1432] rsync

Re: [BackupPC-users] Move BackupPC

2008-08-08 Thread Rich Rauenzahn
Diederik De Deckere wrote: anyway, you should consider NOT moving to raid5 as it is very, very slow with backuppc. Specifically, write speed is less than half that of a raid1 and way less than a raid0+1. I'm not sure if speed is an issue here since all backups are taken overnight.

Re: [BackupPC-users] Just to make sure: How to move/copy /var/lib/backuppc to another place (RAID1)

2008-08-06 Thread Rich Rauenzahn
Rob Owens wrote: Holger Parplies wrote: Your best options remain to either do a block level copy of the file system (dd) or to start over. One suggestion I've heard on this list before, which may be a good one for you, is to simply start over with a new pool but save the existing

Re: [BackupPC-users] BackupPCd: still dead?

2008-07-15 Thread Rich Rauenzahn
dan wrote: there is no official message that says it's dead, but it's dead. No activity for 2 full years, and Keene's last post to this list was about 2 full years ago. I'd guess it's completely dead! On Thu, Jul 10, 2008 at 3:19 PM, dnk [EMAIL PROTECTED] wrote:

Re: [BackupPC-users] Incremental directory structure

2008-06-24 Thread Rich Rauenzahn
I don't want to get into a war about filesystem formats, but perhaps this is a valid data point for XFS. I don't know about other filesystem types either... You might like to check out/talk to some XFS experts, and see what they say about your very slow performance There may be some

Re: [BackupPC-users] Need access to raw mailing list archive

2008-02-21 Thread Rich Rauenzahn
Curtis Preston wrote: Hey, folks! It's W. Curtis Preston, the author of Backup & Recovery and the webmaster of www.backupcentral.com. I'd like to add the backuppc-users list to the backup software mailing lists that I support on BackupCentral.com. I added the NetWorker, NetBackup, and TSM

Re: [BackupPC-users] Enhancing WAN link transfers

2008-02-19 Thread Rich Rauenzahn
dan wrote: no, incrementals are more efficient on bandwidth. They do a less strenuous test to determine if a file has changed. At the expense of CPU power on both sides, you can compress the rsync traffic either with rsync -z. Have you tried rsync -z? Last I heard, BackupPC's rsync

Re: [BackupPC-users] Improving security, and user options

2008-02-05 Thread Rich Rauenzahn
Joe Krahn wrote: (Maybe this should be posted to -devel?) Unrestricted remote root access by a non-root user is generally not a secure design. There are many ways to restrict the access to backup This seems like a good chance to explain how I handle the rsync security -- I prefer it
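One common way to narrow that root access (not necessarily the setup the poster describes) is an authorized_keys forced command on the client that only permits rsync running as a read-only sender; a sketch:

    #!/bin/sh
    # validate-rsync: installed via command="/root/validate-rsync" in
    # /root/.ssh/authorized_keys on the client. Anything other than a
    # read-only rsync sender invocation is refused.
    case "$SSH_ORIGINAL_COMMAND" in
        *"rsync --server --sender "*)
            exec $SSH_ORIGINAL_COMMAND   # intentionally unquoted: split into args
            ;;
        *)
            echo "rejected: $SSH_ORIGINAL_COMMAND" >&2
            exit 1
            ;;
    esac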

Re: [BackupPC-users] Improving security, and user options

2008-02-05 Thread Rich Rauenzahn
There are several secure ways to set up a read-only backup system, but that loses the convenience of browsing and restoring files via the web interface. But, users can still directly download files or tar archives, so it is a reasonable approach, and probably the right thing to do for now.

Re: [BackupPC-users] Best way to backup localhost?

2008-01-22 Thread Rich Rauenzahn
B. Cook wrote: Hello All, I'm setting up a new machine trying out different things.. Do I need to setup sshd/rsync so that the backuppc 'user' can have full access to / ? Or is there a better, more efficient way? Thanks in advance. I prefer to let rsyncd do the privilege raise
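A sketch of what that looks like on the client, in /etc/rsyncd.conf (module name, address, and paths are examples): rsyncd runs as root and exports the filesystem read-only, so the backuppc user never needs a root shell.

    uid = root
    gid = root

    [rootfs]
        path = /
        read only = yes
        auth users = backuppc
        secrets file = /etc/rsyncd.secrets
        hosts allow = 192.168.1.5

The password lives in /etc/rsyncd.secrets (one "backuppc:password" line, mode 600), with the matching $Conf{RsyncdUserName} and $Conf{RsyncdPasswd} on the server.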

Re: [BackupPC-users] How to manually add a single file to a backed up host

2008-01-15 Thread Rich Rauenzahn
Les Mikesell wrote: Adam Goryachev wrote: In this case, it no longer matters, and for those that are interested in how to use rsync to transfer a 20G file full of 0's in a few minutes over a very slow connection, here is how:

    rsync --partial --progress source destination

Then, cancel the
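If I'm reading the (truncated) recipe right, the point is that once a partial copy exists, rsync's delta-transfer algorithm matches the repeated zero blocks, so the second pass moves almost nothing over the wire; a sketch:

    rsync --partial --progress source destination   # let a chunk land...
    # ...then cancel with Ctrl-C; --partial keeps the partial file in place
    rsync --partial --progress source destination   # finishes via delta-transfer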

Re: [BackupPC-users] Update Re: How to speed up backup process?

2008-01-11 Thread Rich Rauenzahn
the previous post mentioned something about the compression being for the pool. In fact you can also compress the rsync stream with the -z flag, which works great on limited-bandwidth connections as long as you have a decent CPU. The ssh stream... I tried compressing the rsync stream and

Re: [BackupPC-users] scheduling backups

2008-01-10 Thread Rich Rauenzahn
This seems to make sense; however, when I set it up at first the host(s) could not be backed up because of a host-not-found error. Placing hostA_share1 and hostA_share2 and so on in the /etc/backuppc/hosts file seemed to cause this, as they are not real host names and fail an nmblookup
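The usual fix is $Conf{ClientNameAlias}: keep the pseudo-host names in the hosts file, and point each one at the real machine in its per-host config. A sketch for hostA_share1.pl (host name is an example):

    $Conf{ClientNameAlias} = 'hosta.example.com';

Each pseudo-host then gets its own $Conf{RsyncShareName} so the shares back up on separate schedules.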

Re: [BackupPC-users] Backing up the BackupPC server

2008-01-07 Thread Rich Rauenzahn
Nils Breunese (Lemonbit) wrote: This topic is discussed pretty regularly on this mailing list. Please also search the archives. Because of the heavy use of hardlinks, breaking the pool up into smaller batches is not really feasible, and indeed rsync doesn't really handle very large

Re: [BackupPC-users] File::RsyncP installation problem.

2007-12-06 Thread Rich Rauenzahn
Matthew Metzger wrote:

    /bin/sh: cc: command not found

Install a compiler/gcc...?

    make[1]: *** [Digest.o] Error 127
    make[1]: Leaving directory `/home/sysadmin/File-RsyncP-0.68/Digest'
    make: *** [subdirs] Error 2

I get the same type of errors when trying to install the
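Error 127 is the shell failing to find cc, so installing a compiler and rebuilding usually clears it. On a Red Hat-style system, for example (adjust for your distro):

    yum install gcc
    cd File-RsyncP-0.68
    perl Makefile.PL && make && make test && make install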

Re: [BackupPC-users] Compression level

2007-12-05 Thread Rich Rauenzahn
John Pettitt wrote: What happens is the newly transferred file is compared against candidates in the pool with the same hash value, and if one exists it's just linked; the new file is not compressed. It seems to me that if you want to change the compression in the pool the way to go

Re: [BackupPC-users] (no subject)

2007-12-05 Thread Rich Rauenzahn
Carl Keil wrote: I'm sorry for the delay; I'm just now getting a chance to try the perl -ne suggestion. What do you mean by backuppc's header info? How would I search for that? Thanks for your suggestions. backuppc stores the backed-up files in compressed blocks with an
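That header is why plain zcat/grep won't work on pool files; decompress with BackupPC_zcat first. A sketch of searching one backup tree for a string (paths follow a Debian-style install; adjust host and backup number):

    find /var/lib/backuppc/pc/myhost/42 -type f -name 'f*' |
    while read f; do
        /usr/share/backuppc/bin/BackupPC_zcat "$f" | grep -q 'needle' && echo "$f"
    done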

Re: [BackupPC-users] Compression level

2007-12-04 Thread Rich Rauenzahn
[EMAIL PROTECTED] wrote: Hello, I would like to have some information about compression levels. I'm still doing several tests on compression and I would like to have your opinion about something: I think that there is very little difference between level 1 and level 9. I thought that
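This is easy to measure on your own data; BackupPC compresses with zlib, so gzip levels are a reasonable stand-in (bash sketch; sample.tar is a placeholder for a representative chunk of your backup set):

    for n in 1 5 9; do
        echo "level $n:"
        time gzip -$n -c sample.tar | wc -c    # compressed size at each level
    done

Already-compressed data (media files, packages) barely shrinks at any level, which is one reason levels 1 and 9 often end up closer than expected.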

Re: [BackupPC-users] Recommended partitioning for BackupPC on Ubuntu server?

2007-11-30 Thread Rich Rauenzahn
Angus Scott-Fleming wrote: I'm trying to set up a BackupPC server with an eye to the future so that I can expand storage easily, so I want LVM with RAID (unfortunately it'll have to be RAID 1, as I can't see that LVM and software-RAID 5 work together, but that's another issue :-().
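For what it's worth, LVM sits happily on top of a software-RAID device of any level, RAID 5 included, since md just presents another block device; a sketch (device names and sizes are examples):

    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
    pvcreate /dev/md0
    vgcreate vg0 /dev/md0
    lvcreate -L 200G -n backuppc vg0
    mkfs -t ext3 /dev/vg0/backuppc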

Re: [BackupPC-users] Full backup bandwidth reduction

2007-11-27 Thread Rich Rauenzahn
Gene Horodecki wrote: I had that problem as well.. so I uhh.. well, I fiddled with the backup directory on the backuppc server and moved them around so that backuppc wouldn't see I had moved them remotely.. Not something I would exactly recommend doing... although it worked. Great

Re: [BackupPC-users] Full backup bandwidth reduction

2007-11-27 Thread Rich Rauenzahn
Gene Horodecki wrote: Sounds reasonable... What did you do about the attrib file? I noticed there is a file called 'attrib' in each of the pool directories with some binary data in it. Nothing... it just contains permissions, etc. That's why I did another full after the move -- then all

Re: [BackupPC-users] adjusting compression level

2007-11-20 Thread Rich Rauenzahn
[EMAIL PROTECTED] wrote: Hello, I installed BackupPC 3.0.0 on a Fedora Core 6 server in order to back up Windows XP clients. For your information, I use rsyncd. I'm checking and testing all the different compression levels to compare them. The test is based on 3.00 GB in a full backup.

Re: [BackupPC-users] Backing up servers behind a PPPoEo-connection

2007-10-01 Thread Rich Rauenzahn
Peter Carlsson wrote: Hello! I'm using BackupPC version 2.1.2pl1 on a Debian (stable) server to successfully back up itself as well as two other Debian machines on the local network. What is the best/easiest way to configure BackupPC to also back up two other machines (one Debian and on

[BackupPC-users] Backing up TIVO

2007-09-21 Thread Rich Rauenzahn
Now that I have your attention =-)... I just wanted to report I got it working. I got tired of tarring up my /var/hack customizations and ftp'ing them. I realized rsync would be more efficient and then thought -- heck, if I find rsync for TIVO, I might as well add it to my BackupPC

Re: [BackupPC-users] UserCmdCheckStatus not working?

2007-09-21 Thread Rich Rauenzahn
Kimball Larsen wrote: Hey, that's a good idea. You certainly did not misunderstand the problem, and I believe that will work. I do wish that there was a way to prevent backuppc from even creating the filesystem in the first place, but having a script clean it up is ok. Not
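For the record, the pre-dump hook can refuse to start the backup at all when the volume isn't mounted, which gets close to the "don't create it in the first place" wish. A sketch, where checkmount.sh is a hypothetical client-side script that exits non-zero if the mount is missing:

    $Conf{DumpPreUserCmd}     = '$sshPath -l root $host /usr/local/bin/checkmount.sh';
    $Conf{UserCmdCheckStatus} = 1;    # abort the backup if the command fails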

Re: [BackupPC-users] UserCmdCheckStatus not working?

2007-09-21 Thread Rich Rauenzahn
Ambrose Li wrote: On 21/09/2007, Rich Rauenzahn [EMAIL PROTECTED] wrote: Not familiar with Macs -- but -- shouldn't the mountpoint have permissions such that only the OS can add directories? That's not exactly how things work on Macs. They have a /tmp-like directory /Volumes

Re: [BackupPC-users] files already in pool are downloaded

2007-09-12 Thread Rich Rauenzahn
The problem is that BackupPC and rsync use different checksum algorithms. This has been discussed many times. I believe there is a specialized client being developed (BackupPCd) which may allow such speedups, but, as Les says, What would be the problem with a lookup table of rsync

Re: [BackupPC-users] Backing up one share uncompressed

2007-09-12 Thread Rich Rauenzahn
Mark Allison wrote: Thanks - I haven't tested this but I assumed that compression would be much slower. As all the files are MP3s (already compressed), bzip2 is unlikely to be able to compress them much further. I'm currently using a setting of 5, bzip2. Thanks for the replies, I'll

Re: [BackupPC-users] files already in pool are downloaded

2007-09-11 Thread Rich Rauenzahn
Rob Owens wrote: My understanding is that with tar and smb, all files are downloaded (and then discarded if they're already in the pool). Rsync is smart enough, though, not to download files already in the pool. -Rob I was about to post the same thing. I moved/renamed some

Re: [BackupPC-users] files already in pool are downloaded

2007-09-11 Thread Rich Rauenzahn
Les Mikesell wrote: Rich Rauenzahn wrote: Rob Owens wrote: My understanding is that with tar and smb, all files are downloaded (and then discarded if they're already in the pool). Rsync is smart enough, though, not to download files already in the pool. -Rob I was about

[BackupPC-users] rsync --compress broken ?

2007-08-21 Thread Rich Rauenzahn
Whenever I use these options, rsync seems to work and transfer files, but nothing ever seems to actually get written to the backup dirs:

    $Conf{RsyncArgs} = [
        # defaults, except I added the compress flags
        '--numeric-ids',
        '--perms',
        '--owner',
        '--group',
        '-D',
        '--links',

Re: [BackupPC-users] pooling of compressed and non-compressed backups

2007-08-21 Thread Rich Rauenzahn
Rob Owens wrote: If I have 2 hosts that contain common files, and compression is enabled on one but not the other, will these hosts' files ever get pooled? What if compression is enabled on both, but different compression levels are set? Thanks -Rob I'm curious about this as well,

Re: [BackupPC-users] pooling of compressed and non-compressed backups

2007-08-21 Thread Rich Rauenzahn
Compression is done on the server side after the transfer. What's the point of using different methods? According to the docs, compressed and uncompressed files aren't pooled but different levels are. The only way to get compression over the wire is to add the -C option to ssh - and
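Concretely, that's one flag added to the documented default RsyncClientCmd; a sketch, set per host in its .pl config so only the WAN clients pay the CPU cost:

    $Conf{RsyncClientCmd} = '$sshPath -C -q -x -l root $host $rsyncPath $argList+';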

Re: [BackupPC-users] pooling of compressed and non-compressed backups

2007-08-21 Thread Rich Rauenzahn
Les Mikesell wrote: Don't you run several concurrent backups? Unless you limit it to the number of CPUs in the server the high compression versions will still be stealing cycles from the LAN backups. I'm backing up 5 machines. Only one is on the internet, and the amount of CPU time/sec the

Re: [BackupPC-users] pooling of compressed and non-compressed backups

2007-08-21 Thread Rich Rauenzahn
Rob Owens wrote: Rich Rauenzahn wrote: For example, to compress a 5,861,382 byte mp3 file with bzip2 -9 takes 3.3 seconds. That's 1,776,176 bytes/sec. Rich, I just tried bzip'ing an ogg file and found that it got slightly larger. The reason, I believe, is that formats like ogg, mp3

Re: [BackupPC-users] pooling of compressed and non-compressed backups

2007-08-21 Thread Rich Rauenzahn
I'm curious about this as well, but would like to add to the question -- what if I'm backing up some hosts across the internet and set the compression to bzip2 -9, but local hosts on the LAN to gzip -4? I believe I read that the pool checksums are based on the uncompressed

Re: [BackupPC-users] wishlist: full backups whenever incrementals get too large

2007-08-21 Thread Rich Rauenzahn
If this is really the way backuppc does incremental backups, I think backuppc should be a bit more incremental with its incremental backups. Instead of comparing against the last full, it should compare against the last full and incremental backups. This would solve this problem and make
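BackupPC 3.x already has a knob for this: $Conf{IncrLevels}. A sketch where each day's incremental is taken against the previous day's backup rather than the last full (the default is a single level, [1]):

    $Conf{IncrLevels} = [1, 2, 3, 4, 5, 6];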