Les Mikesell wrote:
> Evren Yurtesen wrote:
>
>> The problem is the reads, not writes. It takes a long time for BackupPC
>> to figure out if the file should be backed up or not. Backups take a
>> very long time even when there are few files which are backed up.
>
> That's only true with rsync, and especially rsync fulls, where the
> contents are compared as well as the timestamps. But I thought your
> complaint was about the hardlinks and data storage. The transfer speed
> value doesn't make a lot of sense in the rsync case anyway because you
> often spend a lot of time computing to avoid having to transfer anything
> - and you have the entire directory in RAM while doing it, which may
> reduce your disk buffering. What is the wall clock time for a run, and is
> it reasonable for having to read through both the client and server copies?
I am using rsync, but the problem is that it still has to go through a
lot of hard links to figure out whether files should be backed up or not.

>> But I have to disagree, again :) When you make writes and the filesystem
>> is mounted with the 'sync' option, which is the default on FreeBSD, then
>> all operations are done synchronously. Your program must have been doing
>> something between the truncate and the actual write operation which
>> caused the file to be empty for a period of time.
>
> Yes, it was writing a small amount to the file and the data write was
> always deferred with this kind of problem:
> http://www.complang.tuwien.ac.at/anton/sync-metadata-updates.html

You shouldn't trust that article; it is flawed in places. For example, it
says "as demonstrated by soft updates (see below), which don't even
require fsck upon crashing," but that is wrong. Soft updates mean you can
run fsck in the background, but it must still be run after a crash (the
system does it automatically anyhow). Also, regarding the problem in the
article: if async updates had been used, the file would have been lost
anyway, since the writes would have been buffered. The information given
is vague. I am not an expert on this, but the inconsistency that makes
the BSD developers wary of async updates is probably not the same kind
described there. In either case, you can simply enable sync or async
updates on UFS to change this behaviour; you didn't have to change
filesystems because of a deficiency in UFS.

>> The only difference is that FreeBSD uses sync by default because they
>> claim (there is no point arguing whether it is true or not; there are
>> enough threads about it if you google :) ) that it is safer, while
>> Linux uses async.
>
> It is safer for the file system if you are concerned that fsck can't fix
> it. It is not safer for the data of any particular file.
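To make the first point concrete: even when almost nothing has changed, the backup still has to lstat() every entry in a large, heavily hard-linked tree before it can decide to skip it. A rough Python sketch of that per-file cost (this is only an illustration of the traversal, not BackupPC's actual code; `scan_tree` is a made-up name):

```python
import os

def scan_tree(root):
    """Walk a tree the way an incremental backup must: one lstat()
    per entry just to compare timestamps/sizes, even for files that
    will not be transferred at all."""
    files = 0
    hardlinked = 0
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            st = os.lstat(os.path.join(dirpath, name))
            files += 1
            if st.st_nlink > 1:  # pooled / hard-linked file
                hardlinked += 1
    return files, hardlinked
```

With millions of pooled files, those lstat() calls dominate the run time regardless of how little data actually moves.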
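Incidentally, the truncate-then-write failure mode that article describes can be sidestepped at the application level whatever the mount options: write the new contents to a temporary file, fsync() it, then rename() it over the original, so a crash leaves either the old file or the new one, never an empty one. A minimal sketch (assuming POSIX rename semantics; the `.tmp` suffix is just illustrative):

```python
import os

def atomic_write(path, data):
    """Replace `path` with `data` without ever exposing a truncated file."""
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())  # force the data to disk before the rename
    os.rename(tmp, path)      # atomic replacement on POSIX filesystems
```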
> Anyway, crashes are pretty much a thing of the past, and I protect
> against most kinds of failures by raid-mirroring to an external drive
> that is rotated off-site weekly. I'm running a journaled reiserfs
> because it is very fast at creating and deleting files, but ext3 with
> the right options should also work.

If you check the namesys.com benchmarks, you will see that they only
tested reiserfs against ext2/ext3/xfs/jfs and conveniently forgot to test
against UFS2. You can also see at the end of the page that reiserfs's
slight performance edge comes at twice the CPU usage (and extX is even
faster in certain operations):
http://www.namesys.com/benchmarks.html

Here are other test results from Sun, a little old (from 2004):
http://www.sun.com/software/whitepapers/solaris10/fs_performance.pdf

I am not saying one is faster than another or anything like that. Sun's
results do look too good to be true anyhow; they probably tweaked some
stuff. Then again, could they lie? Wouldn't that be bad for their
reputation?

>>>>>> Perhaps, but there is a difference if they are moving 10 times or
>>>>>> 100000 times. Where the difference is that the possibility of
>>>>>> failure due to mechanical problems increases 10000 times.
>>>>>
>>>>> No, it doesn't make a lot of difference as long as the drive
>>>>> doesn't overheat. The head only moves so fast and it doesn't
>>>>> matter if it does it continuously. However, if your system has
>>>>> sufficient RAM, it will cache and optimize many of the things that
>>>>> might otherwise need an additional seek and access.
>>>>
>>>> I can't see how you reach this conclusion.
>>>
>>> Observation... I run hundreds of servers, many of which are 5 or more
>>> years old. The disk failures have had no correlation to the server
>>> activity.
>>
>> Hard drive manufacturers disagree with you. They do consider the
>> duty-cycles when calculating the MTBF values.
>> http://www.digit-life.com/articles/storagereliability/
>
> Vendor MTBF values are very unrealistic. The trade rags seem to have
> caught on to this recently:
> http://www.eweek.com/article2/0%2C1895%2C2105551%2C00.asp
> If you want to know what the vendor really expects, look at the
> warranty instead.

It is a logical fact that more movement = more wear.

Thanks,
Evren

_______________________________________________
BackupPC-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/
