Les Mikesell wrote:
Johan Ehnberg wrote:
OK. I can see now why this is true. But it seems like one could
rewrite the backuppc rsync protocol to check the pool for a file with
same checksum before syncing. This could give some real speedup on
long files. This would be possible at least for
I thought about this a little a year or so ago -- enough to try to
understand the rsync perl modules (and fail!).
I thought perhaps what would work best is a berkeley db/tied hash lookup
table/cache that would map rsync checksum + file size to a pool
item. The local rsync
Yesterday I read a little about LVM snapshots; I didn't know that LVM
had this feature. When I read that suggestion I thought a snapshot
was a sort of dd. Right now I am running a dd on the snapshot: I freed
some space in the LVM by shrinking the root volume and created the snapshot.
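For anyone wanting to try the same thing, a rough sketch of the snapshot-plus-dd step (volume group, LV name and target path are made up here):
lvcreate --snapshot --size 2G --name backuppc-snap /dev/vg0/backuppc   # 2G of copy-on-write space
dd if=/dev/vg0/backuppc-snap of=/mnt/external/backuppc-pool.img bs=4M  # image the frozen pool
lvremove -f /dev/vg0/backuppc-snap                                     # drop the snapshot when done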
Msquared wrote:
I guess the simple question is, then: is there a tool you can run that
will de-dupe and/or re-merge with the pool?
How about a linux tool like either of these? There is a risk of
linking two files that are the same, but are semantically different...
like an attrib file
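If you just want to see what's duplicated before touching anything, something like this works (the pool path is a guess, and it deliberately skips attrib files for the reason above); any actual re-linking you'd still do by hand:
find /var/lib/backuppc/pc -type f ! -name attrib -print0 \
| xargs -0 md5sum | sort | uniq -w32 --all-repeated=separate   # group identical files by checksum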
This is most likely a TCP timeout or other network problem.
Rsync added a TCP keep-alive option in protocol version 29
(if I recall correctly), but it is not currently supported by the
File::RsyncP module that BackupPC uses.
It's too bad rsync doesn't have any kind of hook/plugin system where you
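In the meantime a workable stopgap is to turn on keepalives one layer down, in ssh itself (the path below assumes the backuppc user has a normal home directory; adjust to taste):
cat >> ~backuppc/.ssh/config <<'EOF'
Host *
    ServerAliveInterval 60
    ServerAliveCountMax 10
EOF
# or put the same options on the ssh command BackupPC runs, e.g.
#   ssh -o ServerAliveInterval=60 -o ServerAliveCountMax=10 ...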
Craig Barratt wrote:
This results in one subtle bug that can't be easily fixed: if you
switch the Xfer method from tar to rsync, old backups that have
hardlinks stored with tar won't be correctly restored with rsync.
The workaround is to generate a tar file and extract it, or switch
the Xfer
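The tar-file route looks roughly like this (host, share, backup number and install path are placeholders; check BackupPC_tarCreate's usage message for the exact flags):
/usr/share/backuppc/bin/BackupPC_tarCreate -h somehost -n 123 -s / . \
| ssh root@somehost 'cd / && tar xpf -'   # unpack on the client, preserving permissions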
Kevin DeGraaf wrote:
Is there a way to tail the XferLOG during an rsync-over-ssh dump?
How about something like:
mkfifo pipe
tail -f --bytes=+0 XferLOG.z > pipe &
BackupPC_zcat pipe
Another option is to run strace against the process writing the files
and filter for just the open() calls.
Rich
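Something along these lines (the PID is whatever BackupPC_dump/rsync process you're interested in):
strace -f -p 12345 -e trace=open,openat 2>&1 | grep -v ENOENT   # show file opens, dropping the "No such file" noise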
Nils Breunese (Lemonbit) wrote:
If you want to send an archive of a backup to tape that you can
restore (without BackupPC), check out 'Archive Functions' in the
BackupPC documentation:
http://backuppc.sourceforge.net/faq/BackupPC.html#archive_functions
Nils Breunese.
I've always
Matthias Meyer wrote:
Hi,
I've installed cygwin and rsync on a Vista client and want to back it up
to BackupPC 3.1.0-2 on my Linux server:
2008/10/17 10:30:28 [1432] rsync: writefd_unbuffered failed to write 4 bytes
[sender]: Connection reset by peer (104)
2008/10/17 10:30:28 [1432] rsync
Diederik De Deckere wrote:
anyway. you should consider NOT moving to raid5 as it is very very
slow with backuppc. specifically, write speed is less than half
that of a raid1 and way less than a raid0+1.
I'm not sure if speed is an issue here since all backups are taken
over night.
Rob Owens wrote:
Holger Parplies wrote:
Your best options remain to either do a block level copy of the file system
(dd) or to start over.
One suggestion I've heard on this list before, which may be a good one
for you, is to simply start over with a new pool but save the existing
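For the block-level route, a sketch (device names are placeholders, and this assumes an ext3 pool filesystem being moved to a larger disk):
umount /var/lib/backuppc
dd if=/dev/sdb1 of=/dev/sdc1 bs=4M      # raw copy, so the hardlinks come along for free
e2fsck -f /dev/sdc1                     # required before resizing
resize2fs /dev/sdc1                     # grow the filesystem to fill the new partition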
dan wrote:
there is no official message that says it's dead, but it's dead. no
activity for 2 full years, and keene's last post to this list was like
2 full years ago.
i'd guess it's completely dead!
On Thu, Jul 10, 2008 at 3:19 PM, dnk [EMAIL PROTECTED] wrote:
I don't want to get into a war about filesystem formats, but perhaps
this is a valid data point for XFS. I don't know about other filesystem
types either...
You might like to check out/talk to some XFS experts, and see what they
say about your very slow performance. There may be some
Curtis Preston wrote:
Hey, folks! It's W. Curtis Preston, the author of Backup & Recovery and
the webmaster of www.backupcentral.com.
I'd like to add the backuppc-users list to the backup software mailing
lists that I support on BackupCentral.com. I added the NetWorker,
NetBackup, and TSM
dan wrote:
no, incrementals are more efficient on bandwidth. they do a less
strenuous test to determine if a file has changed.
at the expense of CPU power on both sides, you can compress the rsync
traffic either with rsync -z
Have you tried rsync -z? Last I heard, BackupPC's rsync
Joe Krahn wrote:
(Maybe this should be posted to -devel?)
Unrestricted remote root access by a non-root user is generally not a
secure design. There are many ways to restrict the access to backup
This seems like a good chance to explain how I handle the rsync security
-- I prefer it
There are several secure ways to set up a read-only backup system, but
that loses the convenience of browsing and restoring files via the web
interface. Users can still directly download files or tar archives, though,
so it is a reasonable approach, and probably the right thing to do for now.
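One well-known way to pin down the root ssh key BackupPC uses is a forced command in the client's authorized_keys, so the key can only ever start rsync in sender mode (the key and the exact rsync argument list are abbreviated; the arguments have to match what your BackupPC server actually sends):
# /root/.ssh/authorized_keys on the client:
command="rsync --server --sender -logDtprx . /",no-pty,no-port-forwarding,no-X11-forwarding,no-agent-forwarding ssh-rsa AAAA... backuppc@server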
B. Cook wrote:
Hello All,
I'm setting up a new machine and trying out different things...
Do I need to setup sshd/rsync so that the backuppc 'user' can have
full access to / ?
Or is there a better, more efficient way?
Thanks in advance.
I prefer to let rsyncd handle the privilege escalation.
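A sketch of that setup on the client, with the module name, credentials and the server's address all made up; rsyncd runs as root locally, so the backuppc user never gets a root shell:
cat > /etc/rsyncd.conf <<'EOF'
[root]
    path = /
    read only = yes
    uid = root
    gid = root
    auth users = backuppc
    secrets file = /etc/rsyncd.secrets
    hosts allow = 192.168.1.10
EOF
echo 'backuppc:somesecret' > /etc/rsyncd.secrets
chmod 600 /etc/rsyncd.secrets
rsync --daemon      # or start it from (x)inetd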
Les Mikesell wrote:
Adam Goryachev wrote:
In this case, it no longer matters, and for those that are interested in
how to use rsync to transfer a 20G file full of 0's in a few minutes
over a very slow connection, here is how:
rsync --partial --progress source destination
Then, cancel the
the previous post mentioned something about the compression being for
the pool. In fact you can also compress the rsync stream with the -z
flag which works great on limited bandwidth connections as long as you
have a decent CPU. The ssh stream
I tried compressing the rsync stream and
This seems to make sense; however, when I set it up, at first the
host(s) could not be backed up because of a 'host not found' error.
Placing hostA_share1 and hostA_share2 and so on in the
/etc/backuppc/hosts file seemed to cause this as they are not real
host names and fail an nmblookup
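The usual fix is to keep the made-up names in the hosts file and point each of them at the real machine with $Conf{ClientNameAlias} in that entry's per-host config (Debian-style paths assumed):
cat > /etc/backuppc/hostA_share1.pl <<'EOF'
$Conf{ClientNameAlias} = 'hostA';        # real, resolvable host name
$Conf{RsyncShareName}  = ['/share1'];    # only this share for this entry
EOF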
Nils Breunese (Lemonbit) wrote:
This topic is discussed pretty regularly on this mailing list.
Please also search the archives. Because of the heavy use of hardlinks,
breaking the pool up into smaller batches is not really feasible, and
indeed rsync doesn't really handle very large
Matthew Metzger wrote:
/bin/sh: cc: command not found
Install a compiler/gcc...?
make[1]: *** [Digest.o] Error 127
make[1]: Leaving directory `/home/sysadmin/File-RsyncP-0.68/Digest'
make: *** [subdirs] Error 2
---
I get the same type of errors when trying to install the
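For the missing-compiler case the fix is just to install a toolchain and rebuild (package names differ per distro):
apt-get install build-essential      # Debian/Ubuntu
# yum install gcc make               # Red Hat/Fedora/CentOS
cd File-RsyncP-0.68
perl Makefile.PL && make && make test && make install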
John Pettitt wrote:
What happens is the newly transferred file is compared against candidates
in the pool with the same hash value, and if one exists it's just
linked. The new file is not compressed. It seems to me that if you
want to change the compression in the pool the way to go
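If the goal is to turn an existing uncompressed pool into a compressed one, BackupPC ships a helper for exactly that; roughly (the install path is a guess, and check the script's usage message before running it against a live pool):
/etc/init.d/backuppc stop
su - backuppc -c '/usr/share/backuppc/bin/BackupPC_compressPool'
/etc/init.d/backuppc start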
Carl Keil wrote:
I'm sorry for the delay, I'm just now getting a chance to try the perl
-ne suggestion. What do you mean by backuppc's header info? How would I
search for that?
Thanks for your suggestions,
backuppc stores the compressed backed up files in compressed blocks with
an
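For searching inside them, the trick is to run the file through BackupPC_zcat (mentioned earlier in the XferLOG thread) before grepping; the mangled path below is only an example of what the stored names look like:
/usr/share/backuppc/bin/BackupPC_zcat \
  '/var/lib/backuppc/pc/somehost/123/f%2f/fhome/fuser/fnotes.txt' | grep -i 'pattern'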
[EMAIL PROTECTED] wrote:
Hello,
I would like some information about compression levels.
I'm still doing several tests with compression and I would like to
have your opinion on something:
I think that there is very little difference between level 1 and
level 9.
I thought that
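One quick way to put numbers on it is to time compression at both levels on a representative chunk of your data (sample.tar is a placeholder; gzip is a close stand-in for BackupPC's zlib):
for level in 1 9; do
  echo "level $level:"
  time gzip -$level -c sample.tar | wc -c    # compressed size in bytes
done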
Angus Scott-Fleming wrote:
I'm trying to set up a BackupPC server with an eye to the future so
that I can expand storage easily, so I want LVM with RAID
(unfortunately it'll have to be RAID 1, as I can't see that LVM and
software-RAID 5 work together, but that's another issue :-().
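For what it's worth, LVM sits happily on top of a Linux software RAID 5 array; a sketch with made-up device names and sizes:
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
pvcreate /dev/md0
vgcreate backupvg /dev/md0
lvcreate --size 500G --name backuppc backupvg
mkfs.ext3 /dev/backupvg/backuppc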
Gene Horodecki wrote:
I had that problem as well.. so I uhh.. well, I fiddled with the backup
directory on the backuppc server and moved them around so that backuppc
wouldn't see I had moved them remotely.. Not something I would exactly
recommend doing... although it worked.
Great
Gene Horodecki wrote:
Sounds reasonable... What did you do about the attrib file? I noticed
there is a file called 'attrib' in each of the pool directories with
some binary data in it.
Nothing... it just contains permissions, etc. That's why I did another
full after the move -- then all
[EMAIL PROTECTED] wrote:
Hello,
I installed BackupPC 3.0.0 on a Fedora Core 6 server in order to
back up a Windows XP client.
For your information, I use rsyncd.
I'm checking and testing all the different compression levels to compare
them.
The test is based on 3.00 GB in a full backup.
Peter Carlsson wrote:
Hello!
I'm using BackupPC version 2.1.2pl1 on a Debian (stable) server to
successfully back up itself as well as two other Debian machines on
the local network.
What is the best/easiest way to configure BackupPC to also back up
two other machines (one Debian and on
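Adding machines is mostly just a new line per host in the hosts file plus, optionally, a per-host config file for anything that differs from the defaults (Debian-style paths and made-up host names below):
cat >> /etc/backuppc/hosts <<'EOF'
debian3    0    backuppc
debian4    0    backuppc
EOF
# per-host overrides go in e.g. /etc/backuppc/debian3.pl, then reload:
/etc/init.d/backuppc reload     # or send the BackupPC daemon a HUP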
Now that I have your attention =-)...
I just wanted to report I got it working.
I got tired of tarring up my /var/hack customizations and ftp'ing them.
I realized rsync would be more efficient and then thought -- heck, if I
find rsync for TiVo, I might as well add it to my BackupPC
Kimball Larsen wrote:
Hey, that's a good idea. You certainly did not misunderstand the
problem, and I believe that will work. I do wish that there was a
way to prevent backuppc from even creating the filesystem in the
first place, but having a script clean it up is ok.
Not
Ambrose Li wrote:
On 21/09/2007, Rich Rauenzahn [EMAIL PROTECTED] wrote:
Not familiar with Macs -- but -- shouldn't the mountpoint have
permissions such that only the OS can add directories?
That's not exactly how things work on Macs. They have a /tmp-like
directory /Volumes
The problem is that BackupPC and rsync use different checksum algorithms.
This has been discussed many times. I believe there is a specialized client
being developed (BackupPCd) which may allow such speedups, but, as Les
says,
What would be the problem with a lookup table of rsync
Mark Allison wrote:
Thanks - I haven't tested this but I assumed that compression would be
much slower. As all the files are MP3s (already compressed), bzip2 is
unlikely to be able to compress them much further. I'm currently using a
setting of 5, bzip2.
Thanks for the replies, I'll
Rob Owens wrote:
My understanding is that with tar and smb, all files are downloaded (and
then discarded if they're already in the pool). Rsync is smart enough,
though, not to download files already in the pool.
-Rob
I was about to post the same thing. I moved/renamed some
Les Mikesell wrote:
Rich Rauenzahn wrote:
Rob Owens wrote:
My understanding is that with tar and smb, all files are downloaded (and
then discarded if they're already in the pool). Rsync is smart enough,
though, not to download files already in the pool.
-Rob
I was about
Whenever I use these options, rsync seems to work and transfer
files but nothing ever seems to actually get written to the backup
dirs:
$Conf{RsyncArgs} = [ # defaults, except I added the compress flags.
'--numeric-ids',
'--perms',
'--owner',
'--group',
'-D',
'--links',
'--hard-links',
'--times',
'--block-size=2048',
'--recursive',
# plus the compress flags added above (elided here); File::RsyncP cannot
# handle rsync-level compression, which would explain the empty backup dirs.
];
Rob Owens wrote:
If I have 2 hosts that contain common files, and compression is enabled
on one but not the other, will these hosts' files ever get pooled?
What if compression is enabled on both, but different compression levels
are set?
Thanks
-Rob
I'm curious about this as well,
Compression is done on the server side after the transfer. What's the
point of using different methods? According to the docs, compressed
and uncompressed files aren't pooled but different levels are. The
only way to get compression over the wire is to add the -C option to
ssh - and
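Concretely, that means editing the ssh invocation in $Conf{RsyncClientCmd}; a per-host override might look like this (Debian-style path, and the rest of the line is just the stock default with -C added):
cat >> /etc/backuppc/remotehost.pl <<'EOF'
$Conf{RsyncClientCmd} = '$sshPath -C -q -x -l root $host $rsyncPath $argList+';
EOF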
Les Mikesell wrote:
Don't you run several concurrent backups? Unless you limit it to the
number of CPUs in the server the high compression versions will still
be stealing cycles from the LAN backups.
I'm backing up 5 machines. Only one is on the internet, and the amount
of CPU time/sec the
Rob Owens wrote:
Rich Rauenzahn wrote:
For example, to compress a 5,861,382 byte mp3 file with bzip2 -9 takes
3.3 seconds. That's 1,776,176 bytes/sec.
Rich, I just tried bzip'ing an ogg file and found that it got slightly
larger. The reason, I believe, is that formats like ogg, mp3
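It's easy to reproduce on any already-compressed file (the filename is a placeholder):
ls -l song.ogg
bzip2 -9 -k song.ogg        # -k keeps the original around
ls -l song.ogg.bz2          # typically the same size or slightly larger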
I'm curious about this as well, but would like to add to the question --
what if I'm backing up some hosts across the internet and set the
compression to bzip2 -9, but for local hosts on the LAN I set gzip -4?
I believe I read that the pool checksums are based on the uncompressed
If this is really the way backuppc does incremental backups, I think backuppc
should be a bit more incremental with its incremental backups. Instead of
comparing against the last full, it should compare against the last full and
incremental backups. This would solve this problem and make
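For what it's worth, BackupPC 3.x grew exactly this knob: $Conf{IncrLevels} lets each incremental be taken against the previous lower-level backup instead of always against the last full (Debian-style path below):
cat >> /etc/backuppc/config.pl <<'EOF'
$Conf{IncrLevels} = [1, 2, 3, 4, 5, 6];   # multi-level incrementals
EOF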