Thank you very much for the explanation... My interpretation of the comments 
inside the test_server.c code was incorrect. My first pass at the code was 
correct: everything goes through the server, except things like bind, column, 
etc.
 
 Maybe some rewording of the comments is in order?
 
 **    (3)  Database connections on a shared cache use table-level locking
 **         instead of file-level locking for improved concurrency.
 **
 **    (4)  Database connections on a shared cache can optionally be
 **         set to READ UNCOMMITTED isolation.  (The default isolation for
 **         SQLite is SERIALIZABLE.)  When this occurs, readers will
 **         never be blocked by a writer and writers will not be
 **         blocked by readers.  There can still only be a single writer
 **         at a time, but multiple readers can simultaneously exist with
 **         that writer.  This is a huge increase in concurrency.
 **
 ** To summarize the rationale for using a client/server approach: prior
 ** to SQLite version 3.3.0 it probably was not worth the trouble.  But
 ** with SQLite version 3.3.0 and beyond you can get significant performance
 ** and concurrency improvements and memory usage reductions by going
 ** client/server.
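 
 For reference, here is how I read those two items in code (just a sketch of 
my understanding of the 3.3.x API, so it may well be wrong; error checking 
omitted and the file name is only an example):
 
     /* Shared cache has to be turned on before the connection is
     ** opened in this thread. */
     sqlite3_enable_shared_cache(1);
 
     sqlite3 *db;
     sqlite3_open("test.db", &db);
 
     /* Opt this connection in to READ UNCOMMITTED isolation (the
     ** default is serializable), so its reads are not blocked by a
     ** writer using the same shared cache. */
     sqlite3_exec(db, "PRAGMA read_uncommitted=1;", 0, 0, 0);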
 
 My thought was that if I wanted to perform selects concurrently on an SMP 
system, I would need two threads, each able to read concurrently; for that to 
happen, they would need to use the connection that was created by the "server" 
to perform the selects. But this seems to contradict the "enable shared 
cache" approach... 
 
 I just don't see how this improves concurrency when using the client/server 
approach. Clients a and b send selects to the server, and the server performs 
a, then b... Not concurrent at all. 
 
 I don't see the benefit of table-level locking in this scenario either. I'm 
obviously missing something here. What? 
 
 Thanks for any guidance... 
 
 Ken
 
[EMAIL PROTECTED] wrote:

Ken wrote:
> Hi all,
>
> I have a piece of code that utilizes test_server.c (the master thread);
> there are 3 threads, each performing separate tasks, that get a
> (shared) connection and set PRAGMA read_uncommitted=1.
> My understanding is that this would allow each individual thread
> to concurrently execute a select statement?

Please reread the comments on test_server.c.  When shared cache
is enabled, database connections must remain in the same thread
in which they are created.  And the cache is only shared between
connections in the same thread.  Thus the read_uncommitted=1 pragma
only affects connections in the same thread.

The intent of shared cache and read_uncommitted=1 is to allow SQLite
to be used to build an in-process database server thread.  The
server thread can open multiple connections to the same database
file, one connection for each client, such that all connections share
the same cache.  This gives performance advantages and also reduces
memory consumption on low-memory embedded devices.  SQLite uses
table-level locking in a shared cache, for increased concurrency.
And if read_uncommitted is turned on, SQLite never locks out a read
request for an even bigger concurrency boost.
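Roughly, the server thread's setup might look like this (a simplified
sketch, not the actual test_server.c code; N_CLIENT is illustrative):

    /* Everything here runs inside the server thread, so every
    ** connection is created and used in the thread that owns the
    ** shared cache. */
    int i;
    sqlite3 *aDb[N_CLIENT];              /* one connection per client */
    sqlite3_enable_shared_cache(1);
    for(i=0; i<N_CLIENT; i++){
      sqlite3_open("test.db", &aDb[i]);
      /* Optional: let this client's reads proceed even while another
      ** client holds a write lock on the same table. */
      sqlite3_exec(aDb[i], "PRAGMA read_uncommitted=1;", 0, 0, 0);
    }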

Look at test_server.c and see that it starts a separate server
thread that handles all database operations on behalf of clients.
The clients do not themselves attempt to access the database. 
Instead, each client sends its database requests to the server
thread, the server processes the request on behalf of the client,
then sends the results back to the client.
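The message passing itself can be as simple as a mutex-protected work
queue plus a per-request condition variable.  Something along these
lines (again a simplified sketch, not the code in test_server.c; for
brevity it runs every request on one connection, whereas test_server.c
gives each client its own):

    #include <pthread.h>
    #include <sqlite3.h>

    typedef struct Message Message;
    struct Message {
      const char *zSql;        /* SQL the client wants executed */
      int rc;                  /* result code filled in by the server */
      int done;                /* set to 1 when the server has finished */
      pthread_mutex_t mu;
      pthread_cond_t cond;
      Message *pNext;          /* next request in the work queue */
    };

    /* A LIFO queue is good enough for a sketch. */
    static Message *queueHead = 0;
    static pthread_mutex_t queueMu = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t queueCond = PTHREAD_COND_INITIALIZER;

    /* Client side: hand the request to the server and block until the
    ** server says it is done.  The client never touches SQLite. */
    int client_exec(const char *zSql){
      Message msg;
      msg.zSql = zSql;  msg.rc = SQLITE_OK;  msg.done = 0;
      pthread_mutex_init(&msg.mu, 0);
      pthread_cond_init(&msg.cond, 0);
      pthread_mutex_lock(&queueMu);
      msg.pNext = queueHead;  queueHead = &msg;
      pthread_cond_signal(&queueCond);
      pthread_mutex_unlock(&queueMu);
      pthread_mutex_lock(&msg.mu);
      while( !msg.done ) pthread_cond_wait(&msg.cond, &msg.mu);
      pthread_mutex_unlock(&msg.mu);
      return msg.rc;
    }

    /* Server side: pop requests and run them on a connection that was
    ** opened in this same thread. */
    void *server_main(void *pArg){
      sqlite3 *db = (sqlite3*)pArg;      /* opened by the server thread */
      for(;;){
        Message *pMsg;
        pthread_mutex_lock(&queueMu);
        while( queueHead==0 ) pthread_cond_wait(&queueCond, &queueMu);
        pMsg = queueHead;  queueHead = pMsg->pNext;
        pthread_mutex_unlock(&queueMu);
        pMsg->rc = sqlite3_exec(db, pMsg->zSql, 0, 0, 0);
        pthread_mutex_lock(&pMsg->mu);
        pMsg->done = 1;
        pthread_cond_signal(&pMsg->cond);
        pthread_mutex_unlock(&pMsg->mu);
      }
      return 0;
    }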

--
D. Richard Hipp  

