Re: [sqlite] Asynchronous I/O and shared cache
On Thu, 19 Nov 2009 06:51:05 -0800 (PST), presta wrote:

> According to the documentation:
>
> "database connection in read-uncommitted mode does not attempt to obtain
> read-locks before reading from database tables as described above. This
> can lead to inconsistent query results if another database connection
> modifies a table while it is being read, but it also means that a
> read-transaction opened by a connection in read-uncommitted mode can
> neither block nor be blocked by any other connection."
>
> Correct me if I'm wrong, but if "shared cache" access is locked for each
> sqlite3_step(), then it's not possible to have inconsistent query
> results, and no "real" parallel read/write is possible.

The "duration" of a sqlite3_step() is one row, not one statement, let
alone one transaction. The reads and writes are interleaved.

A SELECT in read-uncommitted mode will retrieve consistent rows, but the
result set as a whole is not consistent: it may contain rows that have
already been deleted by the writing transaction, and omit rows that have
not been inserted yet.

--
  (  Kees Nuyt  )
c[_]
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
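The dirty-read behavior described above can be sketched with Python's stdlib sqlite3 wrapper over the same C library. This is a minimal, illustrative sketch, not code from the thread: the database name "ru_demo" is made up, and it assumes the underlying SQLite build has shared-cache support enabled (the default in stock builds). A reader connection with PRAGMA read_uncommitted set sees a row the writer has not committed, so a result set can contain data that later disappears:

```python
import sqlite3

# Two connections sharing one in-memory cache ("ru_demo" is an
# illustrative name; assumes a shared-cache-enabled SQLite build).
uri = "file:ru_demo?mode=memory&cache=shared"
writer = sqlite3.connect(uri, uri=True, isolation_level=None)
reader = sqlite3.connect(uri, uri=True, isolation_level=None)
reader.execute("PRAGMA read_uncommitted = 1")

writer.execute("CREATE TABLE t(x)")
writer.execute("BEGIN")
writer.execute("INSERT INTO t VALUES (1)")

# The reader takes no table read-lock, so it sees the uncommitted row.
seen_during = reader.execute("SELECT count(*) FROM t").fetchone()[0]

writer.execute("ROLLBACK")

# After the rollback the row is gone: the earlier read was "dirty".
seen_after = reader.execute("SELECT count(*) FROM t").fetchone()[0]

print(seen_during, seen_after)
```

Without the read_uncommitted pragma the reader's SELECT on the locked table would instead fail with SQLITE_LOCKED until the writer's transaction ends.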
Re: [sqlite] Asynchronous I/O and shared cache
According to the documentation:

"database connection in read-uncommitted mode does not attempt to obtain
read-locks before reading from database tables as described above. This
can lead to inconsistent query results if another database connection
modifies a table while it is being read, but it also means that a
read-transaction opened by a connection in read-uncommitted mode can
neither block nor be blocked by any other connection."

Correct me if I'm wrong, but if "shared cache" access is locked for each
sqlite3_step(), then it's not possible to have inconsistent query
results, and no "real" parallel read/write is possible.

Regards

>> I don't know what Dan meant by his words, but AFAIK there's no mutex
>> making an exclusive grab of the shared cache by the sqlite3_step()
>> call. There is only a mutex making sqlite3_step() execution exclusive
>> for the connection object.
>
> I meant the mutex that is a member of the BtShared struct
> (BtShared.mutex). Grabbed by the call to sqlite3VdbeMutexEnterArray()
> at the top of sqlite3VdbeExec() and not released until that function
> returns.
>
> Pavel is right, technically it's not grabbed by sqlite3_step(). But
> 99% of the time spent in sqlite3_step() will be spent in a single call
> to sqlite3VdbeExec(), so the effect is similar.
>
> Dan.
>
>> Pavel
>>
>> On Wed, Nov 18, 2009 at 8:40 AM, presta wrote:
>>>
>>> I'm confused by Dan Kennedy's statement:
>>>
>>> "Each shared-cache has its own mutex. The mutex is held for the
>>> duration of each sqlite3_step() call. So the way you're defining it
>>> here, you can't have "real" concurrency when using shared-cache mode
>>> in any case."
>>>
>>> So, it seems a little contradictory to then say "with shared cache
>>> they will be parallelized pretty effectively in the same file too".

--
View this message in context:
http://old.nabble.com/Asynchronous-I-O-and-shared-cache-tp26402983p26421364.html
Sent from the SQLite mailing list archive at Nabble.com.
Re: [sqlite] Asynchronous I/O and shared cache
> I meant the mutex that is a member of the BtShared struct
> (BtShared.mutex). Grabbed by the call to sqlite3VdbeMutexEnterArray()
> at the top of sqlite3VdbeExec() and not released until that function
> returns.

Okay, now I see that, and I don't like it at all. Of course it doesn't
eliminate the ability to run two write transactions on different tables
simultaneously, but that ability now narrows significantly to the cases
where a transaction consists of more than one update and thus more than
one call to sqlite3_step().

Do you know the reasons why it was made that way? What operations are
performed on the shared cache that cannot be parallelized and require
such a wide exclusive lock (especially in the case of read-only
transactions, though the case of writes to different tables is
questionable too)?

Pavel

On Wed, Nov 18, 2009 at 11:55 PM, Dan Kennedy wrote:
>
> On Nov 18, 2009, at 10:03 PM, Pavel Ivanov wrote:
>
>> I don't know what Dan meant by his words, but AFAIK there's no mutex
>> making an exclusive grab of the shared cache by the sqlite3_step()
>> call. There is only a mutex making sqlite3_step() execution exclusive
>> for the connection object.
>
> I meant the mutex that is a member of the BtShared struct
> (BtShared.mutex). Grabbed by the call to sqlite3VdbeMutexEnterArray()
> at the top of sqlite3VdbeExec() and not released until that function
> returns.
>
> Pavel is right, technically it's not grabbed by sqlite3_step(). But
> 99% of the time spent in sqlite3_step() will be spent in a single call
> to sqlite3VdbeExec(), so the effect is similar.
>
> Dan.
>
>> Pavel
>>
>> On Wed, Nov 18, 2009 at 8:40 AM, presta wrote:
>>>
>>> I'm confused by Dan Kennedy's statement:
>>>
>>> "Each shared-cache has its own mutex. The mutex is held for the
>>> duration of each sqlite3_step() call. So the way you're defining it
>>> here, you can't have "real" concurrency when using shared-cache mode
>>> in any case."
>>>
>>> So, it seems a little contradictory to then say "with shared cache
>>> they will be parallelized pretty effectively in the same file too".
Re: [sqlite] Asynchronous I/O and shared cache
On Nov 18, 2009, at 10:03 PM, Pavel Ivanov wrote:

> I don't know what Dan meant by his words, but AFAIK there's no mutex
> making an exclusive grab of the shared cache by the sqlite3_step()
> call. There is only a mutex making sqlite3_step() execution exclusive
> for the connection object.

I meant the mutex that is a member of the BtShared struct
(BtShared.mutex). Grabbed by the call to sqlite3VdbeMutexEnterArray()
at the top of sqlite3VdbeExec() and not released until that function
returns.

Pavel is right, technically it's not grabbed by sqlite3_step(). But
99% of the time spent in sqlite3_step() will be spent in a single call
to sqlite3VdbeExec(), so the effect is similar.

Dan.

> Pavel
>
> On Wed, Nov 18, 2009 at 8:40 AM, presta wrote:
>>
>> I'm confused by Dan Kennedy's statement:
>>
>> "Each shared-cache has its own mutex. The mutex is held for the
>> duration of each sqlite3_step() call. So the way you're defining it
>> here, you can't have "real" concurrency when using shared-cache mode
>> in any case."
>>
>> So, it seems a little contradictory to then say "with shared cache
>> they will be parallelized pretty effectively in the same file too".
Re: [sqlite] Asynchronous I/O and shared cache
I don't know what Dan meant by his words, but AFAIK there's no mutex
making an exclusive grab of the shared cache by the sqlite3_step() call.
There is only a mutex making sqlite3_step() execution exclusive for the
connection object.

Pavel

On Wed, Nov 18, 2009 at 8:40 AM, presta wrote:
>
> I'm confused by Dan Kennedy's statement:
>
> "Each shared-cache has its own mutex. The mutex is held for the duration
> of each sqlite3_step() call. So the way you're defining it here, you
> can't have "real" concurrency when using shared-cache mode in any case."
>
> So, it seems a little contradictory to then say "with shared cache they
> will be parallelized pretty effectively in the same file too".
Re: [sqlite] Asynchronous I/O and shared cache
I'm confused by Dan Kennedy's statement:

"Each shared-cache has its own mutex. The mutex is held for the duration
of each sqlite3_step() call. So the way you're defining it here, you
can't have "real" concurrency when using shared-cache mode in any case."

So, it seems a little contradictory to then say "with shared cache they
will be parallelized pretty effectively in the same file too".

--
View this message in context:
http://old.nabble.com/Asynchronous-I-O-and-shared-cache-tp26402983p26407922.html
Sent from the SQLite mailing list archive at Nabble.com.
Re: [sqlite] Asynchronous I/O and shared cache
A shared cache instance is created on a file-by-file basis, i.e. if you
open connections to file1.db and file2.db they will have different cache
instances, and any manipulations of these database files won't influence
one another at all (any write operations can be executed in parallel).

But if you open several connections to the same database file (from the
same process, of course) they will all use the same instance of the
shared cache (hence the word "shared"). In this case all write operations
will be "serialized", but only on a table-by-table basis: if one
connection is doing some updates in one table and another connection is
doing updates in another table, they can be executed in parallel. But if
different connections try to update the same table(s) they will be
"serialized".

So for the purpose of parallelizing transactions that work with different
tables you don't have to split them into different databases - with
shared cache they will be parallelized pretty effectively in the same
file too.

Pavel

On Wed, Nov 18, 2009 at 8:19 AM, presta wrote:
>
> To be more precise, I would like to parallelize write operations on
> different tables, so potentially in different db files.
>
> That's why I'm thinking about using multiple databases (one per table),
> the shared cache system and asynchronous I/O.
>
> If a shared cache were shared across different databases, write
> operations would be "serialized"; so according to all the replies it
> seems that a shared cache is created for each different db instance?
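The table-by-table locking described above can be sketched with Python's stdlib sqlite3 wrapper. This is an illustrative sketch, not code from the thread: the database name "sc_demo" and the table names are made up, and it assumes a shared-cache-enabled SQLite build. While one connection holds an open write transaction on table a, a second connection through the same shared cache can still read table b, but a read of table a fails with a table-level lock error:

```python
import sqlite3

# Two connections to the same in-memory database through one shared cache
# ("sc_demo" is an illustrative name).
uri = "file:sc_demo?mode=memory&cache=shared"
w = sqlite3.connect(uri, uri=True, isolation_level=None)
r = sqlite3.connect(uri, uri=True, isolation_level=None)

w.execute("CREATE TABLE a(x)")
w.execute("CREATE TABLE b(x)")

# The writer's open transaction write-locks table a only.
w.execute("BEGIN")
w.execute("INSERT INTO a VALUES (1)")

# Table b is not locked, so this read proceeds.
b_count = r.execute("SELECT count(*) FROM b").fetchone()[0]

# Table a is write-locked: the read fails with SQLITE_LOCKED.
try:
    r.execute("SELECT count(*) FROM a").fetchone()
    a_locked = False
except sqlite3.OperationalError:
    a_locked = True

w.execute("ROLLBACK")
print(b_count, a_locked)
```

This is the serialization Pavel describes: it happens per table, not per file, so writes to disjoint tables in one database can proceed concurrently.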
Re: [sqlite] Asynchronous I/O and shared cache
To be more precise, I would like to parallelize write operations on
different tables, so potentially in different db files.

That's why I'm thinking about using multiple databases (one per table),
the shared cache system and asynchronous I/O.

If a shared cache were shared across different databases, write
operations would be "serialized"; so according to all the replies it
seems that a shared cache is created for each different db instance?

--
View this message in context:
http://old.nabble.com/Asynchronous-I-O-and-shared-cache-tp26402983p26407565.html
Sent from the SQLite mailing list archive at Nabble.com.
Re: [sqlite] Asynchronous I/O and shared cache
>> So, is it possible to have more than one shared cache within a single
>> process?
>
> Open the same database twice, using two different handles. At least I
> think it will work.

Nope, it won't. That's the purpose of shared cache: if you open the same
database several times with different connections, they will all use the
same cache instance (and in fact all use only one file handle for I/O
operations).

Although opening the same database file using different paths (e.g.
/some/path/to/db and /some/path/./to/db) will probably work and will
force SQLite to use different cache instances for such connections.

Pavel

On Wed, Nov 18, 2009 at 5:11 AM, Simon Slavin wrote:
>
> On 18 Nov 2009, at 10:00am, presta wrote:
>
>> So, is it possible to have more than one shared cache within a single
>> process?
>
> Open the same database twice, using two different handles. At least I
> think it will work.
>
> Simon.
Re: [sqlite] Asynchronous I/O and shared cache
On 18 Nov 2009, at 10:00am, presta wrote:

> So, is it possible to have more than one shared cache within a single
> process?

Open the same database twice, using two different handles. At least I
think it will work.

Simon.
Re: [sqlite] Asynchronous I/O and shared cache
Thanks, I will try to use the shared cache with asynchronous I/O.

"Each shared-cache has its own mutex"... So, is it possible to have more
than one shared cache within a single process? One shared cache per db?

--
View this message in context:
http://old.nabble.com/Asynchronous-I-O-and-shared-cache-tp26402983p26405154.html
Sent from the SQLite mailing list archive at Nabble.com.
Re: [sqlite] Asynchronous I/O and shared cache
On Nov 18, 2009, at 1:25 PM, presta wrote:

> Hello,
>
> I'm wondering whether shared cache and the read-uncommitted isolation
> level are possible with asynchronous I/O enabled?

I haven't tried, but I assume it is possible. The two features don't
really interact.

> In sqlite3async.c I see a shared mutex between read and write
> operations, so I doubt that it is possible to have real concurrency
> between read and write...

Each shared-cache has its own mutex. The mutex is held for the duration
of each sqlite3_step() call. So the way you're defining it here, you
can't have "real" concurrency when using shared-cache mode in any case.

However, using asynchronous I/O allows sqlite3_step() to return more
quickly when writing to the database file (because it doesn't actually
do any I/O; it just adds writes to a write-queue that is serviced by a
background thread).

Dan.