When I first learned that SQLite had problems with network file systems, I read 
a ton of material to learn why there doesn't seem to be a network file system 
that implements locking properly.  I ended up with …

A) It slows access a lot.  Even with clever hashing to check for collisions, it 
takes time to figure out whether your range is already locked.

B) Different use-cases have different preferences for retry-and-timeout times.  
It's one more thing for admins to configure, and many admins don't get it right.

C) It's hard to debug.  There are numerous different orders in which different 
clients can lock and unlock ranges.  You have to run a random simulator to try 
them all.  The logic to deal with them properly is not as simple as you'd 
think.  Consider, for example, ranges which are not identical but do overlap.
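To make the overlap point concrete, here is a minimal sketch (in Python, purely 
illustrative; the function name and ranges are made up, and a real lock manager 
is far more involved) of why identical ranges are cheap to detect but 
overlapping ones are not:

```python
# Hypothetical sketch: identical lock ranges can be found with a simple
# hash lookup keyed on (start, length), but overlapping ranges require
# an interval test against every existing lock.

def overlaps(a_start, a_len, b_start, b_len):
    """True if byte ranges [a_start, a_start+a_len) and
    [b_start, b_start+b_len) intersect."""
    return a_start < b_start + b_len and b_start < a_start + a_len

# (100, 50) and (120, 100) are not equal, so a hash lookup misses them,
# yet they conflict:
assert overlaps(100, 50, 120, 100)
# Adjacent ranges do not conflict:
assert not overlaps(0, 10, 10, 5)
```

A lock server has to run a check like this (or keep an interval tree) for every 
lock request, which is part of why A) above costs time.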

D) It's mostly a waste of time.  Most client software doesn't know how to deal 
with a BUSY status and either crashes – which annoys the admin and user – or 
retries immediately – which pegs the CPU.  After all, most client software just 
wants to read a whole file or write a whole file.  And if two people save the 
same word processing document at nearly the same time, who's to say who was 
first?
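For SQLite specifically, there is a middle ground between crashing and 
spinning: a busy timeout, under which the library retries with short sleeps for 
a configurable period.  A minimal sketch using Python's stdlib sqlite3 module 
(the filename is made up for illustration):

```python
# Sketch: sqlite3's `timeout` parameter sets SQLite's busy handler, so a
# locked database is retried (with brief sleeps) for up to that many
# seconds before "database is locked" is raised, instead of failing
# immediately or busy-waiting.

import sqlite3

# Retry a locked database for up to 5 seconds before giving up.
conn = sqlite3.connect("example.db", timeout=5.0)
conn.execute("CREATE TABLE IF NOT EXISTS t (x INTEGER)")
conn.commit()
conn.close()
```

This is exactly the retry-and-timeout knob from B): SQLite exposes it 
per-connection, but most client software for plain network filesystems has no 
equivalent.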

Still, I wonder why someone working on a Linux network file system, or APFS, or 
ZFS, hasn't done it.
_______________________________________________
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
