I've also seen Google search results claiming that "opportunistic"
(sometimes written "optimistic") caching on Windows clients may cause
data corruption, in the context of other file-based database libraries.

Oplocks by themselves won't cause corruption.  The way oplocks work is
that the file server tells a client it is the only one with the file
open, so the client can cache data locally and doesn't have to send
every write and lock operation through to the server immediately.  If
another client opens the same file, that open is held up and the first
client is asked to relinquish the oplock.  The first client then sends
its uncommitted data, applies any locks and so on, after which the
second client's open succeeds.
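The grant/break sequence above can be sketched as a toy simulation.
This is purely illustrative: all class and method names here are
invented, and real oplocks are negotiated inside the SMB/CIFS protocol
by the redirector and server, not at application level.

```python
# Toy model of the oplock grant/break sequence.  All names are
# invented for illustration; real oplock negotiation happens inside
# the SMB/CIFS protocol, not in application code.

class Server:
    def __init__(self):
        self.data = b""           # committed file contents on the server
        self.oplock_holder = None

    def open(self, client):
        if self.oplock_holder is None:
            # First opener is granted an oplock and may cache writes.
            self.oplock_holder = client
            client.has_oplock = True
        else:
            # A second open is held up until the first client flushes
            # its cached writes and relinquishes the oplock.
            self.oplock_holder.break_oplock()
            self.oplock_holder = None
        return self.data

class Client:
    def __init__(self, server):
        self.server = server
        self.cache = b""
        self.has_oplock = False

    def open(self):
        self.cache = self.server.open(self)

    def write(self, data):
        if self.has_oplock:
            self.cache += data         # cached locally, not yet on server
        else:
            self.server.data += data   # write-through, no oplock held

    def break_oplock(self):
        # Flush uncommitted data back to the server, then give up the lock.
        self.server.data = self.cache
        self.has_oplock = False

server = Server()
a = Client(server)
a.open()
a.write(b"hello")                # cached on client A only
assert server.data == b""        # the server hasn't seen the write yet

b = Client(server)
b.open()                         # forces A's oplock to be broken
assert server.data == b"hello"   # A's cached write is now committed
```

Note that the second client cannot observe a half-flushed state: its
open does not return until the first client's cached data has reached
the server, which is exactly why oplocks alone are safe.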

This scheme allows significant performance improvements in the situation
where only one machine has a file open, or when many machines have the
same file open read-only.

The default configuration of Samba and Windows servers is to give out oplocks. Since the Windows CIFS server is part of the kernel, it
can hold up opens by any other process to maintain data integrity.
Samba can ensure that other Samba connections maintain data integrity,
but it can't do so for other UNIX processes.

The lesson is that you should be *very* careful if you have multiple
network protocols to the same file (eg CIFS/SMB and NFS) or are
mixing local and remote access to the same file.

There is also another failure mode: the client holds an oplock and
tells the process using SQLite that all the data is safely committed,
but then, when the file is closed or an oplock break message arrives,
it is unable to actually send the cached data back to the server, or
fails to do so in a timely enough manner.

Windows clients always ask for oplocks by default.  Generally you can
prevent an oplock from being held by opening the same file a second
time with different permissions (eg requesting read-only access).  Use
Ethereal to verify whether the oplock is actually broken.

Roger
