Evren Yurtesen wrote:

> The problem is the reads, not the writes. It takes a long time for 
> BackupPC to figure out whether a file should be backed up or not. 
> Backups take a very long time even when only a few files are actually 
> backed up.

That's only true with rsync, and especially rsync fulls, where the 
contents are compared as well as the timestamps. But I thought your 
complaint was about the hardlinks and data storage.  The transfer-speed 
value doesn't make a lot of sense in the rsync case anyway, because you 
often spend a lot of time computing in order to avoid transferring 
anything - and you hold the entire directory in RAM while doing it, 
which may reduce your disk buffering.  What is the wall-clock time for a 
run, and is it reasonable given that both the client and server copies 
have to be read through?

> But I have to disagree, again :) When you make writes and the 
> filesystem is mounted with the 'sync' option, which is the default on 
> FreeBSD, then all operations are done synchronously. Your program must 
> have been doing something between the truncate and the actual write 
> operation, which caused the file to be empty for a period of time. 

Yes, it was writing a small amount of data to the file, and the data 
write was always deferred - the kind of problem described here:
http://www.complang.tuwien.ac.at/anton/sync-metadata-updates.html
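The window is easy to demonstrate. Here is a minimal sketch (filename and buffer behavior are illustrative, assuming Python's default buffered I/O): opening with `'w'` truncates the file right away, but the written data sits in a userspace buffer until it is flushed, so anyone reading the file in between sees it empty.

```python
import os

# Opening with 'w' truncates the file immediately (a metadata-level
# change); the data itself stays in a userspace buffer until flushed.
f = open('demo.txt', 'w')
f.write('important data')          # deferred: not yet on disk

# A reader arriving in this window sees a zero-length file.
size_during = os.path.getsize('demo.txt')

f.flush()
os.fsync(f.fileno())               # force the data down before closing
f.close()
size_after = os.path.getsize('demo.txt')

print(size_during, size_after)     # 0 14
```

With sync metadata updates, the truncate is committed to disk promptly while the deferred data write is not, which is how a crash in that window leaves an empty file behind.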

> The only difference is that FreeBSD uses sync by default because they 
> claim (there is no point arguing whether it is true or not; there are 
> enough threads about it if you Google :) ) that it is safer, while 
> Linux uses async.

It is safer for the file system if you are concerned that fsck can't fix 
it.  It is not safer for the data of any particular file.  Anyway, 
crashes are pretty much a thing of the past, and I protect against most 
kinds of failure by RAID-mirroring to an external drive that is rotated 
off-site weekly.  I'm running a journaled reiserfs because it is very 
fast at creating and deleting files, but ext3 with the right options 
should also work.

>>>>> Perhaps, but there is a difference if they are moving 10 times or 
>>>>> 100000 times. The difference is that the possibility of failure 
>>>>> due to mechanical problems increases 10000 times.
>>>>
>>>> No, it doesn't make a lot of difference as long as the drive doesn't 
>>>> overheat.  The head only moves so fast and it doesn't matter if it 
>>>> does it continuously.  However, if your system has sufficient RAM, 
>>>> it will cache and optimize many of the things that might otherwise 
>>>> need an additional seek and access.
>>>
>>> I can't see how you reached this conclusion.
>>
>> Observation... I run hundreds of servers, many of which are 5 or more 
>> years old.  The disk failures have had no correlation to the server 
>> activity.
> 
> Hard drive manufacturers disagree with you. They do consider duty 
> cycles when calculating their MTBF values.
> http://www.digit-life.com/articles/storagereliability/

Vendor MTBF values are very unrealistic. The trade rags seem to have 
caught on to this recently: 
http://www.eweek.com/article2/0%2C1895%2C2105551%2C00.asp
If you want to know what the vendor really expects, look at the warranty 
instead.
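As a rough illustration of why those figures look unrealistic (my own back-of-the-envelope calculation, using the standard constant-failure-rate model that vendor MTBF figures assume): a claimed MTBF of 1,000,000 hours works out to well under a 1% annualized failure rate, far below what large field surveys have reported.

```python
import math

def annualized_failure_rate(mtbf_hours):
    """Convert an MTBF figure to an expected annual failure rate,
    assuming a constant (exponential) failure rate - which is itself
    a generous assumption for mechanical drives."""
    hours_per_year = 24 * 365
    return 1 - math.exp(-hours_per_year / mtbf_hours)

# A typical vendor claim of 1,000,000 hours MTBF:
print(round(annualized_failure_rate(1_000_000) * 100, 2))  # ~0.87 (%)
```

If drives really failed at under 1% per year for their whole service life, a three-year warranty would be nearly free to offer on much longer horizons too - which is why the warranty length is the more honest signal.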

-- 
   Les Mikesell
    [EMAIL PROTECTED]


_______________________________________________
BackupPC-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/