On Wed, Jul 14, 2010 at 1:35 AM, Roger Binns <rog...@rogerbinns.com> wrote:
> On 07/13/2010 05:30 PM, Jim Wilcoxson wrote:
> > I don't think this would work, because the problem described is that the
> > writes aren't making it to disk. If pages don't make it to disk, the old
> > pages will be present, with the old, and valid checksums.
>
> You are assuming the checksums are stored in the page they checksum. That
> would only detect corruption of that page. You could have pages that store
> the checksums of numerous other pages, so both the checksum page and the
> data page would have to fail to make it to disk. Yes, there are scenarios
> where you could still get old apparently valid pages, but those are harder
> to happen.

It seems there are several levels of checking possible:

- a checksum on the page itself lets you detect some errors, with no extra I/O
- checksum pages for a group of pages let you detect missing writes within the
  group, with some extra I/O
- a checksum of all checksum pages lets you detect missing writes for an
  entire commit, with even more extra I/O

How much extra I/O depends on the size of the db, the page size, and how much
memory is available for caching checksum pages.

Scott mentioned that a detection system without the ability to correct might
not be useful, but I think it is useful. Not as good as correction, of course,
but useful because:

- it might prevent the application program from issuing a bogus error message
  like "the row you asked for isn't in the database"; lots of time could be
  spent in the weeds chasing down a misleading error
- some applications might have backup copies of the database; they could
  display an error message and revert to a backup

Jim
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
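P.S. The group-checksum idea discussed above can be sketched in a few lines.
This is a toy model, not anything from SQLite: page size, group size, and the
choice of SHA-1 are all arbitrary assumptions. The point it illustrates is
that a stale page (a write that never reached disk) still carries a valid
*self* checksum, but no longer matches the checksum recorded for it in the
group's checksum page, so the missing write is detectable:

```python
import hashlib

PAGE_SIZE = 4096   # hypothetical page size
GROUP_SIZE = 8     # hypothetical number of data pages per checksum page

def page_checksum(data: bytes) -> bytes:
    """Per-page checksum: only catches corruption of the page itself."""
    return hashlib.sha1(data).digest()

class CheckedStore:
    """Toy page store with one checksum page per GROUP_SIZE data pages."""

    def __init__(self, npages: int):
        self.pages = [bytes(PAGE_SIZE) for _ in range(npages)]
        ngroups = (npages + GROUP_SIZE - 1) // GROUP_SIZE
        # each "checksum page" holds the checksums of the pages in its group
        self.group_sums = [[page_checksum(bytes(PAGE_SIZE))] * GROUP_SIZE
                           for _ in range(ngroups)]

    def write(self, pageno: int, data: bytes, lose_write: bool = False):
        # the checksum page is always updated; the data-page write may be
        # "lost" to simulate a write that never made it to disk
        self.group_sums[pageno // GROUP_SIZE][pageno % GROUP_SIZE] = \
            page_checksum(data)
        if not lose_write:
            self.pages[pageno] = data

    def verify(self, pageno: int) -> bool:
        expected = self.group_sums[pageno // GROUP_SIZE][pageno % GROUP_SIZE]
        return page_checksum(self.pages[pageno]) == expected

store = CheckedStore(16)
store.write(3, b"\x01" * PAGE_SIZE)
print(store.verify(3))                       # write landed: checksums agree
store.write(5, b"\x02" * PAGE_SIZE, lose_write=True)
print(store.verify(5))                       # stale page caught by group checksum
```

As Roger notes, this only fails when the checksum page *and* the data page
both miss the disk, which is the harder coincidence.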