Hi,

Scott wrote on 01.05.2007 at 23:26:39 [[BackupPC-users] BackupPC and OS X]:
> I'm giving BackupPC 3.0.0 a try on my home network. The BackupPC
> server is CentOS5 and the clients are OS X 10.4. The hardware is an
> Athlon XP 2200+, 512MB RAM, and a few IDE drives without RAID. I was
> able to configure BackupPC today and completed a backup and test
> restore of the $HOME dirs on two different Mac clients. So far I have
> two questions.
>
> 1) I'm using /usr/bin/tar for 10.4.9. Before setting the backuppc
> tar cmds, I tested the command-line switches under Terminal. I ran
> the following on my client to verify the tar options:
>
> a) Back up the Music folder: "tar -c -v -f Music.tar --totals
>    --one-file-system Music"
> b) From the command line: "rm -rf Music"
> c) Restore the Music folder: "tar -x -p -v -f Music.tar"
>
> Everything looked okay except the Music folder icon was now just a
> plain OS X folder icon, not the "blessed" Music folder icon a default
> OS X account gets. Can anyone explain this? I saw similar behavior
> for the Documents folder that I restored while doing a full BackupPC
> restore test.

Resource forks. You know more about Macs than I do (what they look
like, for instance ;-). The list archives know even more.

> 2) For my GigE wired Mac OS X client, BackupPC reports ~3.10MB/s.
> This is using ssh + tar. Normal NFS copies to the same disk on this
> client and server avg ~18MB/s. There are a few factors I can think of
> that would be slowing down BackupPC: compression, pooling, and ssh
> encryption. I've changed my ssh command to use -c blowfish rather
> than the default cipher, which is more robust but slower.

For all of the reasons you mentioned, you will never get anywhere near
the raw transfer speed. So? How much data do you need to back up? Is
speed worth worrying about?

As you seem to be on a local network, you might consider using rsh
instead of ssh, or switching to rsyncd as transport method to get rid
of encryption. Or use NFS: mount the client's disk on the BackupPC
server and back up via tar without ssh. You can keep the client name
in the configuration as it is, and that will even do something useful:
ping the host before attempting the backup. You would probably use
$Conf{DumpPreUserCmd} to mount and $Conf{DumpPostUserCmd} to umount
the NFS volume (a rough config.pl sketch follows at the end of this
mail). That may be incompatible with your goal of not modifying the
client, though, and it might not be a good idea if your BackupPC
server has other important server functions and you don't want the
trouble you may get if the client goes down before umounting ...

> If I disable compression now, how will that affect pooling with my
> current full backup and future incrementals?

Unchanged files will have a copy both in pool and cpool (at least
after the next full backup). More precisely: files in cpool will no
longer be considered for pooling with newly received files. A copy in
pool will be used or created as necessary. The copy in cpool will
eventually expire, but until then you potentially need space for both.

If you change the compression level, that will only affect new files
put into the pool, i.e. old files with a different compression level
will be linked to rather than a new copy being created with the new
compression level.

Due to the implementation of pooling, your second full backup may be
much faster than the first: on the first backup, all files need to be
compressed in order to create the pool. On the second backup (with
tar), all files will be re-transferred, but then the pool files will
be decompressed to do the match rather than the transferred files
compressed. Decompression is significantly faster. Only new files
added to the pool will need to be compressed (meaning new or changed
files not matching a file already in the pool).

BackupPC does some magic to ensure that no intermediate copies of
files already in the pool need to be stored. This may come at a cost
if you have long chains in the pool (e.g. many files of which the
first 1 MB is identical but the rest is not). Your server status page
tells you "Pool hashing gives N repeated files with longest chain M" -
are the values of N and M especially high?

> Is there any tuning that can be done with pooling to allow for
> faster backup speeds?

Not that I know of. In theory, you can change the hashing algorithm,
but you'll have to start over if you do, and I doubt there is much to
be gained except headaches.
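
Coming back to your folder-icon question: if you want to see whether
tar is dropping HFS/Finder metadata, something along these lines might
show it. This is only a sketch - I have no Mac to test on, and I'm
assuming GetFileInfo from Apple's developer tools is installed and
works on folders; the paths are just examples:

  # look at the Finder attribute flags before the round trip
  # (an uppercase 'C' in the output should mean the custom-icon bit is set)
  /Developer/Tools/GetFileInfo -a ~/Music

  # back up, then restore into a throw-away location, as in your test
  tar -c -f /tmp/Music.tar -C ~ Music
  mkdir /tmp/restore
  tar -x -p -f /tmp/Music.tar -C /tmp/restore

  # compare the flags on the restored copy
  /Developer/Tools/GetFileInfo -a /tmp/restore/Music

If the flags differ, it's tar dropping metadata; if they are
identical, the icon comes from somewhere else and someone with more
Mac knowledge than me will have to explain it.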
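
And here is the promised sketch of the NFS variant in config.pl terms.
I haven't tested this exact setup, and the hostname (mac-client), the
export (/Users) and the mount point (/mnt/mac-client) are made up -
adjust to taste:

  $Conf{XferMethod}      = 'tar';
  # back up the local NFS mount instead of ssh'ing to the client
  $Conf{TarShareName}    = ['/mnt/mac-client'];
  $Conf{TarClientCmd}    = 'env LC_ALL=C $tarPath -c -v -f - -C $shareName+ --totals';
  # mount before and umount after the dump; both commands run on the server.
  # Note: BackupPC runs as its own user, so the mount may need an fstab
  # entry with the 'user' option (or sudo) to work without root.
  $Conf{DumpPreUserCmd}  = '/bin/mount -t nfs mac-client:/Users /mnt/mac-client';
  $Conf{DumpPostUserCmd} = '/bin/umount /mnt/mac-client';
  $Conf{UserCmdCheckStatus} = 1;   # abort the dump if the mount fails

The restore command would need the same treatment if you ever restore
that way, and the client still gets pinged under its normal name, as
mentioned above.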

Scott wrote on 02.05.2007 at 07:49:46 [Re: [BackupPC-users] BackupPC and OS X]:
> On May 2, 2007, at 5:59 AM, Jamie Lists wrote:
> > this may sound weird, but we have pretty much the same setup as
> > you and we're finding the cause of our terrible backup speeds to
> > be a problem with ssh speeds on centos.
> > [...]
> The Mac and the Linux server are connected by GigE on the same
> switch. An NFS copy of some 600MB ISO files shows about 15MB/s
> average. This is measured on the Linux server using iptraf. I'm
> fairly certain it's not my network. :)

That is the point Jamie was making - ssh slowing things down which are
otherwise fast.

As a side note, you won't get faster speeds than for copying large
files. Try copying 600MB of C library header files, thumbnails and
maildir-type mail folders :-). Or, in fact, try copying your target
directory tree with 'cp -R' over NFS.

> During the BackupPC run, I noticed top showed virtually no wait and
> it mostly ran at 80% CPU. That could be due to the compression
> (default value 3 in use). Disk is cheap, so I'm not really worried
> about saving 10-15% using compression. It could also be encryption.

You're talking about the BackupPC server, right? What type of file
system is your pool on? Which mount options do you use?

> So you're suggesting I replace ssh with sftp under BackupPC and
> retest?

No. I think it was "find out if ssh is the cause and get a fixed ssh".

> Thus far my test plan is:
>
> 1) Retest the full backup now that ssh -c blowfish is set and see
>    what performance looks like
> 2) Retest the full backup with ssh -c blowfish and compression set
>    to 0
> 3) Test ssh + tar speeds from the Linux server outside of BackupPC
>    to verify the performance being seen

All of that seems to make sense.

> 4) Any other testing suggested, rsh or sftp for example.

- rsh: yes, should be quite simple.
- sftp: I wouldn't know how that would work.
- rsyncd: might be faster, could also be slower.
- NFS: simple too, if you're exporting in that direction and with the
  no_root_squash option anyway. Ask if you need more details.
- cp -R: maybe your expectations are simply too high for the data you
  are backing up?

Regards,
Holger
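
P.S.: for point 3 of your test plan, something like this should give
you the raw ssh + tar throughput without BackupPC involved. The
hostname, user and path are placeholders, and you may want to run it
both with and without -c blowfish:

  time ssh -c blowfish root@mac-client \
      'env LC_ALL=C /usr/bin/tar -c -f - -C /Users scott' | wc -c

Divide the byte count by the elapsed time and compare that with what
BackupPC reports. If you already see ~3 MB/s here, ssh (or the CentOS
ssh problem Jamie mentioned) is your bottleneck; if it is much faster,
look at compression and the pool disk on the server side.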