Holger Parplies wrote:

> Hi,
> 
> Evren Yurtesen wrote on 29.03.2007 at 17:04:05 [Re: [BackupPC-users] very 
> slow backup speed]:
> 
>>Holger Parplies wrote:
>>
>>>"Most backup programs" take "do a full backup" to mean "read *everything*
>>>over the network and write it to backup medium". BackupPC modifies this to
>>>mean "read *everything* over the network and store *unchanged* contents
>>>efficiently by using hardlinks" while taking the notion of "unchanged" very
>>>seriously. If previous backups have become corrupted for some reason, that
>>>is no good reason to feel fine with storing unchanged contents in unchanged
>>>(corrupted) form.
>>
>>I am willing to take that risk; who are you to disagree? I am not asking
>>for checksum checking to be removed. I am just asking that people who want
>>to take that risk should be able to disable it.
> 
> 
> well, go ahead and implement it. If you want other people to do it for you,
> you'll at least have to convince them that your idea is not plain braindead.
> Isn't that obvious? And you'll have to accept others making a clear
> statement like:
> 
> I wouldn't want to use backup software that can easily be misconfigured to
> make compromises concerning data security in favour of speed.


You are missing something here; please read below (but you might want to stop
using BackupPC :D)

Right away, the moment you enable 'tar', BackupPC would be misconfigured by
your definition. Should tar support be removed? Tar cannot check checksums at
all (plus it has other flaws: it can miss newer/changed files).
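
For illustration, enabling it takes a single line in config.pl (a sketch from
memory; double-check against your BackupPC version's documentation):

    # Switch a host (or the whole server) to the tar transport.
    # With tar, changed files are detected by mtime alone; no
    # rsync-style checksum comparison is done at all.
    $Conf{XferMethod} = 'tar';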

> You've made a convincing demonstration that it can already easily be
> misconfigured to be slow, and you've also shown that some people would
> try dubious things to speed it up, and, finally, you've shown that people

I guess everybody who is using 'tar' has been doing 'dubious things' :)

> would blame the author for their mistakes. That should compel the author
> to implement what you want, shouldn't it?

I didn't blame anybody; I just said BackupPC was working slowly, and it was
working slowly, very slowly indeed. The checksum-seed option seems to be
doing its trick, though.
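
For anyone who finds this thread later: enabling checksum caching is done in
config.pl roughly as below. The seed value 32761 is the fixed magic value the
BackupPC documentation gives for File::RsyncP; treat the argument lists as
placeholders for whatever your installation already uses.

    # Append --checksum-seed=32761 to both argument lists to enable
    # rsync checksum caching (avoids recomputing block and file
    # checksums on the server for every full backup):
    $Conf{RsyncArgs} = [
        # ... your existing rsync arguments ...
        '--checksum-seed=32761',
    ];
    $Conf{RsyncRestoreArgs} = [
        # ... your existing rsync restore arguments ...
        '--checksum-seed=32761',
    ];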

I am thankful to people who wrote suggestions here in this forum, I tried all 
of 
those suggestions one by one. I think that shows that I took them seriously 
even 
though some of them looked like long shots. Eventually one of the suggestions 
seems to be working.

Not as fast as I would like but it is way better now and within acceptable 
levels (0.2mbyts/sec was really not acceptable).

>>Sure, the way it works now fits your usage, but that doesn't mean new
>>features shouldn't be added.
> 
> 
> I'm not saying it fits my usage. I'm saying the concepts make sense. And
> you're not asking for new features, you're complaining that the impossible
> has not been achieved yet. Right. You're only asking for lowering bandwidth

There has been a misunderstanding here. I was only talking about BackupPC
comparing checksums to decide whether a file should be backed up at all, and
disabling that is not impossible; it is already disabled when you use tar,
for example.

I have nothing against it using checksums when storing the file (which would
indeed be impossible to disable).
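
To spell the distinction out in plain rsync terms (my understanding of the
transfer, not a quote from the BackupPC code):

    # What a BackupPC full over rsync effectively does today:
    rsync -a --ignore-times ...   # contents checksummed for every file
    # What I am proposing would behave more like a plain:
    rsync -a ...                  # quick check: size + mtime decide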

> requirements at no extra cost. I wonder why nobody has thought of that
> before you.

Actually, on the contrary: depending on the situation, what I suggest might
increase or decrease bandwidth requirements, while making backups (especially
full backups) faster and less CPU-intensive, since checksum checks would not
be done for each and every file. That is why people suggest using tar to get
better speed, even though it is flawed.

Even with this disabled, checksums would still be compared whenever a file
with non-matching attributes is found, so it should still be possible to
detect corruption in the backed-up data. After all, when you enable checksum
caching, only 1% of the checksums in the backed-up data are re-verified by
default. (That setting is what sped things up for me; if you are using it,
you must disable it to get more thorough file checks.)
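
The knob in question, if I am reading the config documentation right:

    # Fraction of cached checksums that get re-verified against the
    # pool file contents during each backup (default is 1%):
    $Conf{RsyncCsumCacheVerifyProb} = 0.01;
    # Set to 1 to re-verify every cached checksum (slow but thorough),
    # or to 0 to skip verification entirely (fastest, least safe).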

So it is OK to suggest that people use tar, even though it has the same flaw
and more, but making something better than tar, yet perhaps slightly worse
than the current rsync implementation at catching changed files, is not a
good idea? How come?

From the BackupPC documentation:
-------------
For SMB and tar, BackupPC uses the modification time (mtime) to determine
which files have changed since the last lower-level backup. That means SMB
and tar incrementals are not able to detect deleted files, renamed files or
new files whose modification time is prior to the last lower-level backup.
-------------
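
In other words, the change test those transports rely on boils down to
something like this minimal sketch (the names and the cutoff time are
illustrative only, not taken from the BackupPC source):

    use File::Find;

    # Hypothetical start time of the last lower-level backup:
    my $last_backup_start = time() - 24 * 3600;

    find(sub {
        return unless -f;
        my $mtime = (stat _)[9];          # modification time
        print "would back up: $File::Find::name\n"
            if $mtime > $last_backup_start;
        # Deleted files, renamed files, and new files carrying an old
        # mtime never pass this test, so they are missed until the
        # next full backup.
    }, '/etc');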

Thanks,
Evren
