On 13 Aug 2012, at 3:11pm, Marc L. Allen <mlal...@outsitenetworks.com> wrote:

> I wonder if it would be possible to refine the cache locking mechanism.
> 
> If I understand the modified DB/Table locking semantics when running under a 
> shared-cache, it appears that the cache page should be protected against 
> readers and writers.

All shared assets must be protected against readers and writers.  A reader 
needs to know that nothing is going to scramble the data while it's reading.  A 
writer must know that nothing is reading the data it's about to change.  (A 
little handwaving and a few exceptions, but that's the gist of it.)
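
Roughly, the rule amounts to a reader/writer lock.  A minimal sketch (not 
SQLite's actual code; the names and the POSIX rwlock are just for 
illustration):

    #include <pthread.h>
    #include <string.h>

    /* Hypothetical shared asset guarded by a reader/writer lock. */
    static pthread_rwlock_t page_lock = PTHREAD_RWLOCK_INITIALIZER;
    static char page_data[4096];

    /* A reader takes the lock shared: many readers may hold it at once,
       but no writer can scramble page_data while any of them is reading. */
    void read_page(char *out)
    {
        pthread_rwlock_rdlock(&page_lock);
        memcpy(out, page_data, sizeof(page_data));
        pthread_rwlock_unlock(&page_lock);
    }

    /* A writer takes the lock exclusive: it waits until every reader is
       done and keeps new readers out until the change is complete. */
    void write_page(const char *in)
    {
        pthread_rwlock_wrlock(&page_lock);
        memcpy(page_data, in, sizeof(page_data));
        pthread_rwlock_unlock(&page_lock);
    }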

> Perhaps only the list of pages in the cache need to be protected by a 
> higher-level mutex?  That is, if you want access to a page, you grab the 
> mutex, flag the page as 'in-use' with a reference counter, release the mutex, 
> and go on about your business.

You are proposing, in effect, one mutex per page.  That is a scheme many 
DBMSes use, but it would /greatly/ slow down SQLite.  It would also complicate 
the code quite a bit, since a write to one page often spills over onto 
neighbouring pages.
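
For reference, the proposal boils down to a pin/unpin pattern like the sketch 
below (structure and function names are invented, not SQLite's):

    #include <pthread.h>

    /* Hypothetical page-cache entry for the proposed scheme. */
    typedef struct CachePage {
        int pgno;          /* page number */
        int nRef;          /* how many users currently hold this page */
        char data[4096];
    } CachePage;

    /* One higher-level mutex protecting the list of pages and the counts. */
    static pthread_mutex_t cache_mutex = PTHREAD_MUTEX_INITIALIZER;

    /* Hold the cache-wide mutex only long enough to flag the page as
       in-use, then release it and work on the page without the mutex. */
    void page_pin(CachePage *p)
    {
        pthread_mutex_lock(&cache_mutex);
        p->nRef++;
        pthread_mutex_unlock(&cache_mutex);
    }

    void page_unpin(CachePage *p)
    {
        pthread_mutex_lock(&cache_mutex);
        p->nRef--;
        pthread_mutex_unlock(&cache_mutex);
    }

    /* Note: a writer would still have to wait for nRef to drop to zero
       before changing the page -- and a write that spills onto
       neighbouring pages needs all of them, which is where the
       complication comes in. */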

> If you have multiple readers, they would be able to access the physical page 
> concurrently.  When access is complete, the reference count would be 
> decremented.

To get the effect of this, simply stop using shared-cache.  Let each 
connection have its own cache.  That way each connection knows that nothing 
else is messing with its cache.
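
In code that is just a matter of not turning shared-cache on, or of asking for 
a private cache per connection ("test.db" is a placeholder filename):

    #include <sqlite3.h>

    /* Open a connection that uses its own private page cache, even if
       shared-cache mode has been enabled elsewhere in the process. */
    int open_private(sqlite3 **pDb)
    {
        /* Shared-cache is off by default; this just makes it explicit. */
        sqlite3_enable_shared_cache(0);

        return sqlite3_open_v2("test.db", pDb,
                               SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE |
                               SQLITE_OPEN_PRIVATECACHE,
                               NULL);
    }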

Simon.
