Simon, your design idea does not reflect reality; it is weak, and the lack of
experience with this topic shows.

"You won't have anything to commit.  If your application really had crashed
it wouldn't have any transaction data to commit.  If your application had
not crashed the transaction would always have worked."

No: you can have a partial upload, a broken socket pipe, et cetera. You are
also assuming that a version of the file is not already on the remote, and
that after a crash you will be able to recover the local copy anyway.


There are two invariants to check:

 + local == remote after any network transaction
 + local == remote at rest

After an incident:
 + if the data is not on the remote, test the integrity of the local copy
 + if it is on the remote, make sure both copies are safe
 + if only the remote copy survives, restore / force a sync, since you may
have hit an interrupt (it happens with Box)
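A minimal sketch of that post-incident checklist, in Python. The flag names
and the action strings are hypothetical labels of mine, not any real API;
each flag is assumed to come from your own integrity check (e.g. a hash
comparison):

```python
def recovery_action(remote_ok: bool, local_ok: bool) -> str:
    """Map the post-incident state to an action, following the
    checklist above.  Each flag means "this copy exists and passed
    an integrity check (e.g. its hash matches what we last signed)".
    """
    if remote_ok and local_ok:
        return "both-safe"            # both copies verified, nothing to do
    if remote_ok and not local_ok:
        # only the remote survived the interrupt: restore / force a sync
        return "restore-from-remote"
    if local_ok:
        # nothing usable on the remote: re-upload the verified local copy
        return "upload-local"
    return "no-good-copy"             # neither copy verified: escalate
```

The point is that the decision depends only on which copies verify, not on
what the application thinks it last committed.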

1- the network flow can be interrupted; no power failure is needed for that
to happen. You can also face an undetected broken pipe, which is exactly why
you need to be notified by the network polling API you use.
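One way to make a truncated transfer detectable instead of silent is to
length-prefix the payload, so the receiver can tell a short read from a
complete one. A sketch in Python over raw sockets (the helper names are
mine, not from any particular library):

```python
import socket


def send_with_length(sock: socket.socket, payload: bytes) -> None:
    # Prefix the payload with its length so the receiver can detect
    # a truncated transfer instead of silently accepting it.
    sock.sendall(len(payload).to_bytes(8, "big") + payload)


def recv_exactly(sock: socket.socket, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:  # peer closed mid-transfer: broken/interrupted pipe
            raise ConnectionError(f"short read: got {len(buf)} of {n} bytes")
        buf += chunk
    return buf


def recv_with_length(sock: socket.socket) -> bytes:
    n = int.from_bytes(recv_exactly(sock, 8), "big")
    return recv_exactly(sock, n)
```

A receiver built this way raises on a broken pipe rather than treating the
partial data as a complete file.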

2- the journal tweaking only concerns the SQLite file and is specific to it,
so it is the wrong design. Make it work for anything, using the common,
regular approach of hashing/signing local against remote to ensure the
integrity of the data. At the very least, that is the whole purpose of this
discussion: how can I be sure that, whatever happens, I have my data in good
shape somewhere?
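The hashing check itself can be as simple as comparing a streamed SHA-1 of
the local file against the digest reported for the remote copy. A sketch in
Python; `remote_digest` is assumed to come from your storage provider's
metadata API, which is an assumption, since not every service exposes one:

```python
import hashlib


def sha1_of(path: str, chunk_size: int = 1 << 16) -> str:
    """Stream the file through SHA-1 so large databases don't need to
    fit in memory."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk_size)
            if not block:
                break
            h.update(block)
    return h.hexdigest()


def in_sync(local_path: str, remote_digest: str) -> bool:
    # A mismatch means the upload was partial or one side changed:
    # commit/sync again, per the original suggestion in this thread.
    return sha1_of(local_path) == remote_digest
```

This works for any file, SQLite or not, which is the point of point 2 above.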

On Tue, Jul 15, 2014 at 9:16 AM, Simon Slavin <slav...@bigfraud.org> wrote:

>
> On 15 Jul 2014, at 2:53pm, mm.w <0xcafef...@gmail.com> wrote:
>
> > ok sorry, I did not red all thru, may you simply sha1 local and remote ?
> if
> > != commit again
>
> You won't have anything to commit.  If your application really had crashed
> it wouldn't have any transaction data to commit.  If your application had
> not crashed the transaction would always have worked.
>
> Anything that might sync a file automatically can make a mistake like this:
>
> 1) computer A and computer B both have local copies of the database open
> 2) users of computer A and computer B both make changes to their local
> copies
> 3) computer A and computer B both close their local copies
>
> Now the automatic syncing routine kicks in and notices that both copies
> have been modified since the last sync.  Whichever copy it chooses, the
> changes made to the other copy are still going to be lost.
>
> Also, since the sync process doesn't understand that the journal file is
> intimately related to the database file, it can notice one file was updated
> and copy that across to another computer, and leave the other file as it
> was.  While SQLite will notice that the two files don't match, and will not
> corrupt its database by trying to update it with the wrong journal, there's
> no way to tell whether you are going to get the data before the last
> transaction was committed or after.
>
> My recommendation to the OP is not to do any programming around this at
> all, since whatever programming you come up with will not be dependable.
>  The routines for checking unexpected journal files in SQLite are very
> clever.  Just leave SQLite to sort out rare crashes by itself, which it
> does pretty well.
>
> If, on the other hand, crashes aren't rare then I agree with the other
> poster to this thread who said that time is better spent diagnosing the
> cause of your crashes.
>
> Simon.
> _______________________________________________
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
