On Nov 19, 2008, at 12:27 AM, Igor Sereda wrote:

> Hi,
>
> We have recently upgraded from 3.5.9 to 3.6.5 and one of the tests that
> ensures certain SQLite behavior now fails. The test basically checks
> how cache spill is handled:
>
> SESSION THREAD 1                    SESSION THREAD 2
>
> Open session
> Launch SELECT, keep stmt
> (assert SHARED lock is held)
> Start session 2 ------------->      Open session
>                                    Adjust cache size to 5 pages
>                                    INSERT data definitely larger than cache
>                                    (assert RESERVED lock)
>                                    (at some point assert cache spill:
>                                     try EXCLUSIVE lock => fail)
>
>
> The test expected that at some point during session 2 SQLITE_BUSY
> would occur and the transaction would be rolled back - that is how it
> worked on 3.5.9.
>
> What happens now is that all INSERTs complete successfully. A
> subsequent COMMIT returns SQLITE_BUSY, but the transaction is *not*
> rolled back. Also, attempts to acquire further SHARED locks fail. So
> it looks like the writer session holds a PENDING lock.
>
> This change looks to be for the better, but I couldn’t find any
> reference to it in the change log or here in the forum. So I wonder:
>
> 1. Is this an intentional change, and will it stay in future versions?

Yes. And probably.

> 2. How is it implemented with regard to the serialized isolation
> level? SHARED lock holders should be able to read the whole database
> in a consistent state, while at the same time the writer is obviously
> able to change as much as it needs, spilling changes to disk.

The change is that if a cache-spill fails because it can't get the
EXCLUSIVE lock required to write to the database, the page cache
is allowed to grow indefinitely (well, until malloc() fails) to
accommodate dirty pages.
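
A minimal sketch of that scenario against the C API might look as
follows (the file name, the table t(x) and the row counts are
assumptions, not taken from the actual test; error handling is trimmed):

  /* Reader holds a SHARED lock via an unfinished SELECT while a second
  ** connection with a tiny cache writes far more than 5 pages.
  ** Assumes test.db contains a non-empty table t(x). */
  #include <stdio.h>
  #include "sqlite3.h"

  int main(void){
    sqlite3 *dbReader, *dbWriter;
    sqlite3_stmt *pSelect = 0;
    int rc = SQLITE_OK, i;

    sqlite3_open("test.db", &dbReader);
    sqlite3_open("test.db", &dbWriter);

    /* Session 1: step a SELECT and keep the statement open so the
    ** read transaction (SHARED lock) stays alive. */
    sqlite3_prepare_v2(dbReader, "SELECT x FROM t", -1, &pSelect, 0);
    sqlite3_step(pSelect);              /* must return SQLITE_ROW */

    /* Session 2: 5-page cache, then write roughly 1MB in one transaction. */
    sqlite3_exec(dbWriter, "PRAGMA cache_size=5", 0, 0, 0);
    sqlite3_exec(dbWriter, "BEGIN", 0, 0, 0);
    for(i=0; i<10000 && rc==SQLITE_OK; i++){
      rc = sqlite3_exec(dbWriter,
          "INSERT INTO t VALUES(randomblob(100))", 0, 0, 0);
    }

    /* 3.5.9: one of the INSERTs eventually fails with SQLITE_BUSY when
    ** the cache spill cannot get the EXCLUSIVE lock.  3.6.5: all INSERTs
    ** succeed (the cache just grows) and SQLITE_BUSY shows up here
    ** instead, with the transaction still open and the writer holding
    ** a PENDING lock. */
    printf("last INSERT rc=%d\n", rc);
    rc = sqlite3_exec(dbWriter, "COMMIT", 0, 0, 0);
    printf("COMMIT rc=%d (SQLITE_BUSY is %d)\n", rc, SQLITE_BUSY);

    sqlite3_finalize(pSelect);          /* releases the SHARED lock */
    sqlite3_close(dbReader);
    sqlite3_close(dbWriter);
    return 0;
  }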

There are other related changes as well. If one cache is forced to exceed
its configured limit (i.e. the value configured by PRAGMA cache_size),
then the effective cache size limit for other database connections
in the process is temporarily reduced to compensate. Basically SQLite
tries not to cache more than a global limit of pages, where that global
limit is the sum of the configured cache-size limits for all database
connections in the process.
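
To put rough numbers on that (the 2000-page figures below are invented
for illustration, not from this thread): with two connections the global
budget is simply the sum of the two PRAGMA settings.

  #include "sqlite3.h"

  int main(void){
    sqlite3 *db1, *db2;
    sqlite3_open("a.db", &db1);
    sqlite3_open("b.db", &db2);

    /* Two connections at 2000 pages each: the global target is the sum,
    ** roughly 4000 pages for the whole process. */
    sqlite3_exec(db1, "PRAGMA cache_size=2000", 0, 0, 0);
    sqlite3_exec(db2, "PRAGMA cache_size=2000", 0, 0, 0);

    /* If db1 is later forced to hold, say, 3000 dirty pages because a
    ** spill is blocked, db2's effective limit is temporarily squeezed
    ** toward ~1000 so that the combined total stays near 4000.  The
    ** compensation happens inside the pager; there is nothing further
    ** to call here. */

    sqlite3_close(db1);
    sqlite3_close(db2);
    return 0;
  }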

Upcoming versions of SQLite will feature an API that allows users to
supply their own global page cache implementation (you can see this in
CVS at the moment). This can be useful for embedded systems that need
to centrally control the way in which scarce memory resources are shared
between SQLite page caches and the rest of the system.
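
Until that API lands, a related knob that already exists in the 3.6.x
line is SQLITE_CONFIG_PAGECACHE, which hands SQLite a single fixed block
of memory from which page-cache slots for the whole process are
allocated. A minimal sketch (the slot size and count are made-up
numbers; the per-page overhead depends on the page size and version):

  #include <stdio.h>
  #include "sqlite3.h"

  /* One static pool for all page caches in the process.  1300 bytes per
  ** slot is a guess at "page size plus per-page overhead". */
  #define SLOT_SIZE 1300
  #define N_SLOTS   400
  static char aPageCache[SLOT_SIZE * N_SLOTS];

  int main(void){
    int rc;

    /* Must run before the library is initialized, i.e. before the first
    ** sqlite3_open() or an explicit sqlite3_initialize(). */
    rc = sqlite3_config(SQLITE_CONFIG_PAGECACHE,
                        aPageCache, SLOT_SIZE, N_SLOTS);
    if( rc!=SQLITE_OK ){
      fprintf(stderr, "SQLITE_CONFIG_PAGECACHE failed: %d\n", rc);
      return 1;
    }
    sqlite3_initialize();

    /* ... open connections and work as usual; page-cache memory now
    ** comes from aPageCache until the pool is exhausted, after which
    ** SQLite falls back to the general-purpose allocator. */

    sqlite3_shutdown();
    return 0;
  }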

Dan.

