Hi Toby,

> The plan was to include something like a SHA1 hash of the original file in
> the response headers. Then once the file has been decoded you can check to
> make sure it matches. If not you can resend the request without the block
> hash header and get the file the old-fashioned way.
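For illustration, the scheme described in the quote might look roughly like the sketch below: per-block weak rolling checksums mixed with a random per-request seed (as suggested in the reply), plus a strong whole-page SHA1 check that fails hard rather than resending. All names, the block size, and the seeding scheme here are assumptions for the sketch, not the actual implementation, and rsync's real checksum differs in detail.

```python
import hashlib
import os

BLOCK_SIZE = 4096  # illustrative; the real block size is a protocol choice


def rolling_checksum(block: bytes, seed: int) -> int:
    """Weak Adler-32-style rolling checksum, mixed with a per-request
    random seed so that a retried request hashes differently."""
    a = seed & 0xFFFF
    b = 0
    for byte in block:
        a = (a + byte) & 0xFFFF
        b = (b + a) & 0xFFFF
    return (b << 16) | a


def block_hashes(data: bytes, seed: int) -> list[int]:
    """Per-block weak hashes the client would send along with its request
    (together with the seed itself)."""
    return [rolling_checksum(data[i:i + BLOCK_SIZE], seed)
            for i in range(0, len(data), BLOCK_SIZE)]


def check_decoded(page: bytes, expected_sha1_hex: str) -> bytes:
    """Verify the reconstructed page against the strong whole-page hash
    from the response headers.  On mismatch, fail as if there had been a
    network outage rather than automatically resending a request that may
    not be idempotent."""
    if hashlib.sha1(page).hexdigest() != expected_sha1_hex:
        raise IOError("reconstructed page does not match strong hash")
    return page


# The client picks a fresh random seed for every request, so that if the
# user asks for the page again the per-block hashing is different.
seed = int.from_bytes(os.urandom(2), "big")
hashes = block_hashes(b"some cached copy of the page", seed)
```

Note that `check_decoded` deliberately raises instead of retrying: the retry decision belongs to the user, exactly because the request may have had side effects.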
Re-sending HTTP requests can be dangerous. The request might have triggered an action like "delete the last person from the list"; when you resend it, it could delete two users rather than one. Remember that one of the aims of this work is to allow caching of dynamic requests, so you can't just assume the pages are marked as cacheable (which usually implies that a second request won't do any harm).

Certainly including a strong whole-page hash is a good idea, but if the strong hash doesn't match, then I think you need to return an error, just as if you had hit a network outage.

The per-block rolling hash should also be randomly seeded, as Martin mentioned. That way, if the user does ask for the page again, the hashing will be different. You need to send that seed along with the request.

In practice, hashing errors will be extremely rare. It is extremely rare for rsync to need a second pass, and it uses a much weaker rolling hash (I think I used 16 bits by default for the per-block hashes). The ability to do multiple passes is what allows rsync to get away with such a small hash; in fact, I remember that when I was testing the multiple-pass code I needed to weaken it even further to get any reasonable chance of a second pass, so I could be sure the code worked.

Cheers, Tridge

_______________________________________________
Server-devel mailing list
Server-devel@lists.laptop.org
http://lists.laptop.org/listinfo/server-devel