Re: [BackupPC-users] 100,000+ errors in last nights backup
Holger, I started to reply to your e-mail but my system crashed. The messages log suggests that backuppc may have been the culprit. See below.

On Thu, 13 Aug 2009 02:19:25 +0200 Holger Parplies wb...@parplies.de wrote:

Hi,

Jeffrey J. Kosowsky wrote on 2009-08-12 18:12:09 -0400 [Re: [BackupPC-users] 100,000+ errors in last nights backup]:
Steve Blackwell wrote at about 14:33:54 -0400 on Wednesday, August 12, 2009:
Steve Blackwell wrote at about 11:18:36 -0400 on Wednesday, August 12, 2009:
On Wed, 12 Aug 2009 10:06:37 -0400 Jeffrey J. Kosowsky backu...@kosowsky.org wrote:

Try manually doing something like the following from the command line:

link b32585c3cc30b7ebb556d335a08554e3 /media/disk/pc/steve/151/f%2f/froot/f.gconf/fdesktop/fgnome/faccessibility/fkeyboard/f%25gconf.xml

Where do I need to be to run this?

You need to be in $TopDir/cpool/b/3/2 ... or rather, it should be

sudo -u backuppc ln $TopDir/cpool/b/3/2/b32585c3cc30b7ebb556d335a08554e3 /media/disk/pc/steve/151/f%2f/froot/f.gconf/fdesktop/fgnome/faccessibility/fkeyboard/f%25gconf.xml

My $TopDir is /media/disk. I ran the command above and it appeared to work OK. I don't have the exact result anymore because of the crash.

(or use 'link' if you prefer). You need to get $TopDir right, though. See below.

# sudo -u backuppc link b32585c3cc30b7ebb556d335a08554e3 /media/disk/pc/steve/151/f%2f/froot/f.gconf/fdesktop/fgnome/faccessibility/fkeyboard/f%25gconf.xml
link: cannot create link `/media/disk/pc/steve/151/f%2f/froot/f.gconf/fdesktop/fgnome/faccessibility/fkeyboard/f%25gconf.xml' to `b32585c3cc30b7ebb556d335a08554e3': No such file or directory

Yes, most directories won't contain a file with that name ;-).

Jeffrey, I'm now thinking that two backups have somehow been scheduled at the same time. See the Les Miskell thread.

I agree with that except for the name. But I don't think it's the problem, at least not the one you're looking for. I can't imagine why two backups of the same host would be scheduled simultaneously, and I don't think it's a good idea to do so :-). BackupPC doesn't usually do this. Have you changed the code in any way? Have you seen such a thing happen before?

Could I have started the backuppc service twice somehow? I certainly haven't changed any of the code. I haven't written any perl since perl 4 circa 1995. 8-)

[snip]

So, I believe we're back to the issue of what you did wrong when moving $TopDir. I don't remember reading which version of BackupPC

When moving $TopDir? I haven't moved it. It's always been /media/disk ever since I installed backuppc.

you are using. What did you do to move $TopDir? Have a look at the old location, wherever that was. Are there files with recent modification times below the cpool/ directory?

Below /media/disk/cpool?
BackupPC_nightly reports files there, but

# ls -l /media/disk
total 48
drwxr-x--- 18 backuppc root      4096 2009-08-12 20:00 cpool
drwx------  2 root     root     16384 2008-07-16 23:54 lost+found
drwxr-x---  4 backuppc root      4096 2009-08-09 23:06 pc
drwxr-x---  2 backuppc root      4096 2008-07-26 18:53 pool
drwxr-x---  2 backuppc root      4096 2009-08-11 01:25 trash

So cpool has today's date.

# ls -l /media/disk/cpool
total 128
drwxr-x--- 18 backuppc backuppc 4096 2009-06-28 15:30 0
drwxr-x--- 18 backuppc backuppc 4096 2009-06-28 15:30 1
drwxr-x--- 18 backuppc backuppc 4096 2009-06-28 15:30 2
drwxr-x--- 18 backuppc backuppc 4096 2009-06-28 15:30 3
drwxr-x--- 18 backuppc backuppc 4096 2009-06-28 15:30 4
drwxr-x--- 18 backuppc backuppc 4096 2009-06-28 15:30 5
drwxr-x--- 18 backuppc backuppc 4096 2009-06-28 15:30 6
drwxr-x--- 18 backuppc backuppc 4096 2009-06-28 15:30 7
drwxr-x--- 18 backuppc backuppc 4096 2009-06-28 15:30 8
drwxr-x--- 18 backuppc backuppc 4096 2009-06-28 15:30 9
drwxr-x--- 18 backuppc backuppc 4096 2009-06-28 15:30 a
drwxr-x--- 18 backuppc backuppc 4096 2009-06-28 15:30 b
drwxr-x--- 18 backuppc backuppc 4096 2009-06-28 15:30 c
drwxr-x--- 18 backuppc backuppc 4096 2009-06-28 15:30 d
drwxr-x--- 18 backuppc backuppc 4096 2009-06-28 15:30 e
drwxr-x--- 18 backuppc backuppc 4096 2009-06-28 15:30 f

Nothing since 6/28. This is where I got to when the machine crashed. Looking at /var/log/messages I see this:

Aug 12 20:00:12 steve kernel: [ cut here ]
Aug 12 20:00:12 steve kernel: WARNING: at lib/list_debug.c:51 list_del+0x41/0x60()
Aug 12 20:00:12 steve kernel: Hardware name: To Be Filled By O.E.M.
Aug 12 20:00:12 steve kernel: list_del corruption. next->prev should be c14c2838, but was c14c2878
Aug 12 20:00:12 steve kernel: Modules linked in: snd_seq_midi vfat fat autofs4 w83627ehf hwmon_vid hwmon nf_conntrack_netbios_ns nf_conntrack_ipv6 ip6t_ipv6header ip6t_REJECT ip6table_filter ip6_tables ipv6 p4_clockmod fuse dm_multipath uinput snd_usb_audio snd_usb_lib snd_emu10k1_synth
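For anyone wanting to repeat the manual re-link test discussed above, here is a rough sketch of the sequence. The hash and destination path are the ones from this thread, and the $TopDir value is Steve's; substitute whatever your own install uses:

# Confirm the pool file really exists under the live $TopDir first:
TOPDIR=/media/disk
POOLFILE=$TOPDIR/cpool/b/3/2/b32585c3cc30b7ebb556d335a08554e3
ls -l "$POOLFILE"

# Re-create the missing hardlink as the backuppc user so ownership stays right:
sudo -u backuppc ln "$POOLFILE" \
    "$TOPDIR/pc/steve/151/f%2f/froot/f.gconf/fdesktop/fgnome/faccessibility/fkeyboard/f%25gconf.xml"

# Both names should now share one inode, with a link count of 2 or more:
stat -c '%h %i %n' "$POOLFILE"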
Re: [BackupPC-users] What is the meaning of repeated, max chain and max links in the logfile?
Matthias writes:

Every day I get a message in __LOGDIR__/LOG:

2009-08-12 02:35:49 Cpool is 322.19GB, 1142028 files (860 repeated, 31 max chain, 11424 max links), 4369 directories

What is the meaning of:

repeated

The total number of pool files with hash collisions. Since the hash is not computed over the entire file contents, it's common to get files with the same hash.

max chain

The biggest set of pool files with the same hash. This is a potential performance bottleneck if the number gets too large, since an incoming file needs to be matched against every pool file in the chain. That means each file is O(n), and n files from a full backup all with the same hash take O(n^2) comparisons. So it's a problem when this number gets too big.

max links

The maximum number of hardlinks on any pool file. The minimum is 2 (anything with 1 link is deleted, since it is no longer used). This means that across all your backups you have one file that appears 11424 times.

Craig
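A couple of one-liners for poking at these numbers on your own pool, assuming the usual 3.x cpool layout where colliding files share a hash prefix and get _0, _1, ... suffixes; the $TopDir value and hash below are only examples, so substitute your own:

TOPDIR=/var/lib/backuppc
# Members of one collision chain (all pool files sharing the same hash prefix):
ls -l "$TOPDIR"/cpool/b/3/2/b32585c3cc30b7ebb556d335a08554e3*
# Pool files with an unusually high hardlink count ("max links" candidates):
find "$TOPDIR/cpool" -type f -links +1000 -printf '%n %p\n' | sort -rn | head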
Re: [BackupPC-users] Fwd: Backup fails
On Thu, Aug 13, 2009 at 12:20:20PM +0200, Michael Aram wrote:

Hello all, thank you for your answers. I changed the backup command (added -q) and got rid of the first error. Thanks. However, unfortunately, my backup server wasn't able to successfully back up my remote machine. I have to mention that I want to back up ~100GB over a normal 25MBit xDSL connection over the internet. It uses rsync (not rsyncd) between two Ubuntu machines. The backup always fails after a couple of hours with signal PIPE. I think the machine being backed up just resets the connection or something. How can I detect the reason for the problem? There is no /var/log/rsync.log or anything like it on the remote machine.

[...]

Negotiated protocol version 28
Sent exclude: /proc
Sent exclude: /tmp
Xfer PIDs are now 10929,11621
[ skipped 114214 lines ]
sys/block/md0/dev: md4 doesn't match: will retry in phase 1; file removed
Remote[1]: rsync: read errors mapping /sys/block/md0/dev: No data available (61)

You should exclude /sys and /proc from the backup - these are virtual file systems anyway. Rsync might copy your whole hard disk (block devices), kernel image etc.

HTH!

Tino.

--
What we nourish flourishes. - Was wir nähren erblüht.
www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de
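If it helps to see the idea spelled out, this is roughly what those exclusions look like at the plain-rsync level. It is an illustration only: in BackupPC itself the excludes belong in the host's BackupFilesExclude setting rather than a hand-run command, and the destination path below is made up:

# Pull the client's root while skipping the virtual filesystems:
rsync -av --exclude=/proc --exclude=/sys --exclude=/tmp \
    root@client:/ /srv/backup/client-root/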
Re: [BackupPC-users] Fwd: Backup fails
Also check if your /mnt and /media have something mounted on them, like external USB devices, DVDs or CDs.

Cheers,
Pedro

On Thursday 13 August 2009 11:34:26 Tino Schwarze wrote:

On Thu, Aug 13, 2009 at 12:20:20PM +0200, Michael Aram wrote:

Hello all, thank you for your answers. I changed the backup command (added -q) and got rid of the first error. Thanks. However, unfortunately, my backup server wasn't able to successfully back up my remote machine. I have to mention that I want to back up ~100GB over a normal 25MBit xDSL connection over the internet. It uses rsync (not rsyncd) between two Ubuntu machines. The backup always fails after a couple of hours with signal PIPE. I think the machine being backed up just resets the connection or something. How can I detect the reason for the problem? There is no /var/log/rsync.log or anything like it on the remote machine.

[...]

Negotiated protocol version 28
Sent exclude: /proc
Sent exclude: /tmp
Xfer PIDs are now 10929,11621
[ skipped 114214 lines ]
sys/block/md0/dev: md4 doesn't match: will retry in phase 1; file removed
Remote[1]: rsync: read errors mapping /sys/block/md0/dev: No data available (61)

You should exclude /sys and /proc from the backup - these are virtual file systems anyway. Rsync might copy your whole hard disk (block devices), kernel image etc.

HTH!

Tino.

--
Pedro M. S. Oliveira
IT Consultant
Email: pedro.olive...@dri.pt
URL: http://www.dri.pt | http://www.linux-geex.com
Pólo Tecnológico de Lisboa, Estrada do Paço do Lumiar, Lote 1, Sala 14 – 1600-546 Lisboa
Telefone: +351 21 715 30 55  Fax: +351 21 715 30 57
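A quick way to check that from the client, if you want; this just looks for anything mounted under /mnt or /media that the backup would descend into:

mount | grep -E 'on /(mnt|media)'   # anything mounted below /mnt or /media?
ls /mnt /media                      # and what directories are sitting there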
Re: [BackupPC-users] 100,000+ errors in last nights backup
Steve Blackwell wrote:

So, I believe we're back to the issue of what you did wrong when moving $TopDir. I don't remember reading which version of BackupPC

When moving $TopDir? I haven't moved it. It's always been /media/disk ever since I installed backuppc.

If you installed from the sourceforge tarball you can set the location anywhere you want. However, it is a very common problem with packaged (rpm/deb) installations that people try to move the storage location after installation, and it doesn't work to simply change $TopDir once it is set. But I didn't think that was your problem, since you were at backup 150 before seeing a problem. There is still the possibility of a physical disk error. External drives aren't all that reliable - but that should be reported in 'dmesg'.

--
Les Mikesell
lesmikes...@gmail.com
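If you want to rule the disk in or out, a couple of things worth running on the server; the device name below is only an example, so use whatever the external drive actually shows up as:

# Look for I/O errors or USB resets around the time of the failed backup:
dmesg | grep -iE 'i/o error|ata[0-9]|usb.*reset|end_request'
# If smartmontools is installed, ask the drive itself for its health and error counters:
smartctl -H -A /dev/sdb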
Re: [BackupPC-users] 100,000+ errors in last nights backup
Last night's backup, after the restart due to the crash, appears to have worked OK. Here is the server log file:

2009-08-13 01:00:00 Running 2 BackupPC_nightly jobs from 0..15 (out of 0..15)
2009-08-13 01:00:00 Running BackupPC_nightly -m 0 127 (pid=5801)
2009-08-13 01:00:00 Running BackupPC_nightly 128 255 (pid=5802)
2009-08-13 01:00:00 Next wakeup is 2009-08-13 02:00:00
2009-08-13 01:00:02 Started incr backup on steve (pid=5803, share=/)
2009-08-13 01:06:37 Finished admin1 (BackupPC_nightly 128 255)
2009-08-13 01:06:37 BackupPC_nightly now running BackupPC_sendEmail
2009-08-13 01:06:47 Finished admin (BackupPC_nightly -m 0 127)
2009-08-13 01:06:47 Pool nightly clean removed 0 files of size 0.00GB
2009-08-13 01:06:47 Pool is 0.00GB, 0 files (0 repeated, 0 max chain, 0 max links), 1 directories
2009-08-13 01:06:47 Cpool nightly clean removed 1 files of size 0.00GB
2009-08-13 01:06:47 Cpool is 15.90GB, 397513 files (48553 repeated, 11 max chain, 794 max links), 4369 directories
2009-08-13 01:29:59 Finished incr backup on steve
2009-08-13 01:29:59 Running BackupPC_link steve (pid=5977)
2009-08-13 01:30:21 Finished steve (BackupPC_link steve)

and the corresponding log file from steve:

2009-08-13 01:00:02 incr backup started back to 2009-08-12 00:00:06 (backup #151) for directory /
2009-08-13 01:29:59 incr backup 152 complete, 41555 files, 0 bytes, 0 xferErrs (0 bad files, 0 bad shares, 0 other)

I didn't change any settings. I think it's worse when things magically start to work than when they continue to fail. :-P

Steve
Re: [BackupPC-users] What is the meaning of repeated, max chain and max links in the logfile?
Craig Barratt wrote at about 00:47:09 -0700 on Thursday, August 13, 2009:

Matthias writes:

Every day I get a message in __LOGDIR__/LOG:

2009-08-12 02:35:49 Cpool is 322.19GB, 1142028 files (860 repeated, 31 max chain, 11424 max links), 4369 directories

What is the meaning of:

repeated

The total number of pool files with hash collisions. Since the hash is not computed over the entire file contents, it's common to get files with the same hash.

max chain

The biggest set of pool files with the same hash. This is a potential performance bottleneck if the number gets too large, since an incoming file needs to be matched against every pool file in the chain. That means each file is O(n), and n files from a full backup all with the same hash take O(n^2) comparisons. So it's a problem when this number gets too big.

Just as a reminder, back to another thread regarding the advantages of storing the full md5sum in the envelope: instead of O(n^2) byte-by-byte comparisons, one would only have to do O(n^2) md5sum comparisons, or potentially none (or just a filesystem lookup, which you have to do anyway) if the full-file md5sum is used as the index.

max links

The maximum number of hardlinks on any pool file. The minimum is 2 (anything with 1 link is deleted, since it is no longer used). This means that across all your backups you have one file that appears 11424 times.

Craig
Re: [BackupPC-users] filelistrecieve failed, but manual command works
For the sake of completeness, I got it working. I compiled the entire toolchain from scratch and got a successful sync on the first try. I have no idea which link in the chain was breaking. I may investigate further if I find the time.

-james

On Aug 11, 2009, at 2:08 PM, James Kyle wrote:

Also, perl 5.8.9

-james

On Aug 11, 2009, at 1:43 PM, Les Mikesell wrote:

James Kyle wrote: Tested another target, same behavior.

What OS and rsync version? Do you have any targets that are working?

--
Les Mikesell
lesmikes...@gmail.com
Re: [BackupPC-users] 100,000+ errors in last nights backup
On Thu, 13 Aug 2009 10:18:20 -0500 Les Mikesell lesmikes...@gmail.com wrote:

Steve Blackwell wrote: I didn't change any settings. I think it's worse when things magically start to work than when they continue to fail. :-P

Running Fedora or some other close-to-beta OS?

F10. I use the oldest supported version in an attempt to step away from the bleeding edge. That strategy has worked up until this version.

Steve.
[BackupPC-users] patch for auto-detection of Bonjour (apple) clients for 3.2
I patched BackupPC_dump to look for Bonjour clients. My apologies if this is not the most correct way to do so.

What this allows is: if you have an Apple client with hostname foo and Bonjour name foo.local, you can enter the client's name as foo and BackupPC_dump will auto-detect its Bonjour name. This means your clients don't have to run an SMB service if you don't want them to, without redundantly entering .local for all your clients.

[Attachment: patch-bin-backuppc_dump.diff]

Cheers,
-james
Re: [BackupPC-users] patch for auto-detection of Bonjour (apple) clients for 3.2
James Kyle wrote at about 12:00:46 -0700 on Thursday, August 13, 2009:

I patched BackupPC_dump to look for Bonjour clients. My apologies if this is not the most correct way to do so. What this allows is: if you have an Apple client with hostname foo and Bonjour name foo.local, you can enter the client's name as foo and BackupPC_dump will auto-detect its Bonjour name. This means your clients don't have to run an SMB service if you don't want them to, without redundantly entering .local for all your clients.

[Attachment: patch-bin-backuppc_dump.diff - save to a file]

I'm not sure I would want this patch rolled into the sources, since all it really does is check 'hostname'.local and assume that if it exists then it must be a Bonjour name. But in the *nix world, 'foohost.local' is itself a valid name which may or may not be related to 'foohost'. So this in general seems more like a hack than a robust, general solution.

I'm not sure what the problem is with just appending '.local' to the names of Bonjour hosts in the BackupPC 'hosts' file. Alternatively, just create the aliases in the /etc/hosts file or equivalent.
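Whichever route you take, it's worth confirming from the BackupPC server that the name you put in the hosts file actually resolves; foo is just the example host from this thread:

ping -c1 foo.local   # the mDNS/Bonjour name, if the server has an mDNS resolver (e.g. avahi/nss-mdns)
getent hosts foo     # whatever /etc/hosts, DNS or your nsswitch setup returns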
Re: [BackupPC-users] patch for auto-detection of Bonjour (apple) clients for 3.2
Hi,

On Thu, Aug 13, 2009 at 15:14, Jeffrey J. Kosowsky backu...@kosowsky.org wrote:

I'm not sure what the problem is with just appending '.local' to the names of Bonjour hosts in the BackupPC 'hosts' file. Alternatively, just create the aliases in the /etc/hosts file or equivalent.

Or add "search local" to /etc/resolv.conf

HTH,
Filipe
Re: [BackupPC-users] patch for auto-detection of Bonjour (apple) clients for 3.2
Jeffrey J. Kosowsky wrote:

James Kyle wrote at about 12:00:46 -0700 on Thursday, August 13, 2009:

I patched BackupPC_dump to look for Bonjour clients. My apologies if this is not the most correct way to do so. What this allows is: if you have an Apple client with hostname foo and Bonjour name foo.local, you can enter the client's name as foo and BackupPC_dump will auto-detect its Bonjour name. This means your clients don't have to run an SMB service if you don't want them to, without redundantly entering .local for all your clients.

[Attachment: patch-bin-backuppc_dump.diff - save to a file]

I'm not sure I would want this patch rolled into the sources, since all it really does is check 'hostname'.local and assume that if it exists then it must be a Bonjour name. But in the *nix world, 'foohost.local' is itself a valid name which may or may not be related to 'foohost'. So this in general seems more like a hack than a robust, general solution.

I'm not sure what the problem is with just appending '.local' to the names of Bonjour hosts in the BackupPC 'hosts' file. Alternatively, just create the aliases in the /etc/hosts file or equivalent.

This should be even easier:

echo "search local" >> /etc/resolv.conf
[BackupPC-users] rsync clients run out of memory
I'm using BackupPC 3.1.0 and I'm having problems with rsync clients running out of memory and crashing (the Linux kernel's OOM killer is unleashed, wreaking all sorts of havoc). Apparently this is a known problem if a lot of files are being synced and one or both ends of the rsync transfer are older than rsync 3.0 (see http://www.samba.org/rsync/FAQ.html#5). The client machines have rsync 3.0, but it looks like BackupPC uses an older version.

I just switched to tar, so we'll see if that makes the problem go away. I'd prefer to use rsync. Is there any chance BackupPC could use a newer version of File::RsyncP that supports rsync protocol version 30?

Thanks,
Richard
Re: [BackupPC-users] patch for auto-detection of Bonjour (apple) clients for 3.2
This should be even easier:

echo "search local" >> /etc/resolv.conf

awesome, done.
Re: [BackupPC-users] rsync clients run out of memory
Richard Hansen wrote at about 16:52:01 -0400 on Thursday, August 13, 2009:

I'm using BackupPC 3.1.0 and I'm having problems with rsync clients running out of memory and crashing (the Linux kernel's OOM killer is unleashed, wreaking all sorts of havoc). Apparently this is a known problem if a lot of files are being synced and one or both ends of the rsync transfer are older than rsync 3.0 (see http://www.samba.org/rsync/FAQ.html#5). The client machines have rsync 3.0, but it looks like BackupPC uses an older version. I just switched to tar, so we'll see if that makes the problem go away. I'd prefer to use rsync. Is there any chance BackupPC could use a newer version of File::RsyncP that supports rsync protocol version 30?

This has been discussed many times on the list - please see the archives. Bottom line is that the memory issue seems to be solved in rsync 3.0 even though only protocol 28 is used. Upgrading to protocol version 30 is a *big* deal and would likely require a significant rewrite of perl-File-RsyncP. Again, please review the archives...
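For anyone hitting this, a quick way to see what the two ends are actually running; the perl one-liner just prints the installed File::RsyncP module version, and the "Negotiated protocol version 28" line earlier in this digest is what BackupPC 3.x ends up speaking regardless of the client's rsync:

rsync --version | head -1                                    # on the client
perl -MFile::RsyncP -e 'print "$File::RsyncP::VERSION\n"'    # on the BackupPC server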