I don't believe I'm suggesting one mutex per page.

If I understand correctly, the purpose of the overall mutex is to prevent a 
page from being removed out from underneath a user.  If the standard DB locking 
semantics are working properly, I think there is no possibility of a page's 
data being modified underneath another user.  That would be no different 
from a physical DB page being modified underneath another user.

If the above is true, cache protection semantics are strictly concerned with 
page management.  That is, a page is requested that is not in the cache and 
needs to be inserted into it; if the cache is full, another page needs to be 
released.  All that is required is protecting pages currently in use from being 
released.

I think that, instead of a mutex serializing access to the entire cache, all 
that is needed is a mutex serializing access to the cache meta-data, plus 
reference counts to help the page replacement algorithm make a good choice 
of which page to remove.

-----Original Message-----
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Simon Slavin
Sent: Monday, August 13, 2012 10:23 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Multi-Thread Reads to SQLite Database


On 13 Aug 2012, at 3:11pm, Marc L. Allen <mlal...@outsitenetworks.com> wrote:

> I wonder if it would be possible to refine the cache locking mechanism.
> 
> If I understand the modified DB/Table locking semantics when running under a 
> shared-cache, it appears that the cache page should be protected against 
> readers and writers.

All shared assets must be protected against readers and writers.  A reader 
needs to know that nothing is going to scramble the data while it's reading.  A 
writer must know that nothing is reading the data it's about to change.  (A 
little handwaving and a few exceptions, but that's the gist of it.)

> Perhaps only the list of pages in the cache need to be protected by a 
> higher-level mutex?  That is, if you want access to a page, you grab the 
> mutex, flag the page as 'in-use' with a reference counter, release the mutex, 
> and go on about your business.

You are proposing one mutex per page.  This is a system which many DBMSes use 
but it would /greatly/ slow down SQLite.  Also it would complicate the code 
quite a bit since a write to one page often leaks over to neighbouring pages.

> If you have multiple readers, they would be able to access the physical page 
> concurrently.  When access is complete, the reference count would be 
> decremented.

To get the effect of this, simply stop using shared-cache.  Let each process 
have its own cache.  That way each process knows nothing is messing with its 
cache.

Simon.
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users