I think that you misunderstood the shared cache description. The cache is shared by many connections, but connections may not be passed between threads. Each thread must maintain and use its own connection. In our case a thread has an associated control block and the connection handle resides there.

As long as you only access each Sqlite connection from the thread which created it, you share the cache and it works fine.
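
To make that concrete, here is a minimal sketch of the pattern, assuming pthreads; DB_PATH and worker_main are illustrative names, not our actual code:

/* Minimal sketch of the per-thread connection pattern described above.
 * Assumes pthreads; DB_PATH and worker_main are illustrative names. */
#include <sqlite3.h>
#include <pthread.h>
#include <stdio.h>

#define DB_PATH "app.db"

static void *worker_main(void *arg)
{
    sqlite3 *db = NULL;
    (void)arg;

    /* The connection is opened by this thread and used only by this thread. */
    if (sqlite3_open(DB_PATH, &db) != SQLITE_OK) {
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        sqlite3_close(db);
        return NULL;
    }

    /* ... prepare/step/finalize statements on db from this thread only ... */

    sqlite3_close(db);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    /* Enable shared cache before any connections are opened; connections
     * opened afterwards in this process then share one page cache. */
    sqlite3_enable_shared_cache(1);

    pthread_create(&t1, NULL, worker_main, NULL);
    pthread_create(&t2, NULL, worker_main, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}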

The locking is a separate issue and is aimed at avoiding the dreaded "busy wait". We use a leisurely busy wait to handle multi-process Sqlite using file locks; the technique is not to delay after a busy is intercepted but to force a process time-slice yield. In a server, however, our intention is to avoid these inefficiencies by using the more efficient synchronization features. As you would appreciate, a few percent better efficiency on your server means a corresponding increase in the number of possible users.
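
For the multi-process case, the yield-on-busy idea looks roughly like this; the retry cap of 1000 is an arbitrary illustration, not our actual figure:

/* Rough sketch of the "leisurely busy wait" described above: on a busy
 * lock, give up the time slice and retry rather than sleeping. */
#include <sqlite3.h>
#include <sched.h>

static int yield_on_busy(void *unused, int n_prior_calls)
{
    (void)unused;
    if (n_prior_calls > 1000)
        return 0;          /* give up: the call then returns SQLITE_BUSY */
    sched_yield();         /* no delay, just surrender the time slice */
    return 1;              /* tell Sqlite to retry the lock */
}

/* After opening a connection:
 *   sqlite3_busy_handler(db, yield_on_busy, NULL);
 */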

Ken wrote:
John,

The server can maintain a "shared cache", but if a thread also opens the DB then that execution line will not have a "shared cache" but rather a cache per thread. Only the server thread may open and act upon the connection utilizing a shared cache on behalf of the client. The client may not do things to the connection handle such as open, prepare, step, reset, close, finalize. At least that's my understanding of the "shared_cache" mode.

Using a locking primitive internally does simplify the code. But I'll contend that if you are using multiple threads, each having a connection to a DB with a locking structure for internal synchronization, then you are not using the sqlite shared cache, and you will not benefit from sqlite's locking internals (reader/writer starvation). And if it is write intensive and concurrent, you might as well have a single connection that is shared across all threads.

I guess my point was that inside the server thread, once a transaction is entered on behalf of a client, then only that activity may continue and no others. So in my design I only had two choices: re-enqueue the message inside the server until the transactional thread completed, or return an error to the client. I preferred keeping the message on the queue waiting to be serviced. This is also programmatically a pain in the arse, since you must guarantee the client doesn't abandon its responsibilities and exit without sending a close command into the server thread, resulting in a permanently blocked server queue. You can test this behaviour using the src/test_server.c code and some client connections into the test_server thread.

Or I may just be totally off my rocker... and that's ok too.

Ken

John Stanton <[EMAIL PROTECTED]> wrote:

That is why the Sqlite locking is not a good fit for a threaded server. Why not use thread locks instead and achieve the synchronization with minimum overhead and latency? You do miss out on a couple of Sqlite features doing that (the pending and reserved locks which help with concurrency and write starvation), so you need to balance the benefits of them against the detrimental effects of polling.

In our older embedded Sqlite threaded applications we just serialized Sqlite access using a mutex because concurrency was not a prime issue, but we use read/write locks in a higher-traffic Sqlite-based multi-threaded application server.

After experimentation, which included some erroneous attempts at cache sharing, we have a strategy in place which uses the Sqlite shared cache and assigns an rwlock to each open database. Each thread has its own DB connection with a pointer to the locking structure for the open database. That gives good throughput, since it holds each database open while the server runs and maintains one cache per database, accelerating reads. The downside is that we have to figure out a replacement for the FTS2 accesses used for text searching.
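
Roughly, the arrangement looks like the sketch below; the structure and function names are illustrative rather than lifted from the server:

/* Sketch of the arrangement described above: one rwlock per open database,
 * one connection per thread pointing at that lock. Names are illustrative. */
#include <sqlite3.h>
#include <pthread.h>

struct db_entry {               /* one per open database file */
    pthread_rwlock_t rwlock;    /* readers run concurrently, writers alone */
    const char      *path;
};

struct thread_ctl {             /* per-thread control block */
    sqlite3         *conn;      /* connection owned by this thread only */
    struct db_entry *db;        /* shared lock for the database it targets */
};

/* A read-only request: take the lock shared so reads overlap. */
static int run_read(struct thread_ctl *tc, const char *sql,
                    int (*cb)(void*,int,char**,char**), void *arg)
{
    int rc;
    pthread_rwlock_rdlock(&tc->db->rwlock);
    rc = sqlite3_exec(tc->conn, sql, cb, arg, NULL);
    pthread_rwlock_unlock(&tc->db->rwlock);
    return rc;
}

/* A write request: take the lock exclusively so writers never collide. */
static int run_write(struct thread_ctl *tc, const char *sql)
{
    int rc;
    pthread_rwlock_wrlock(&tc->db->rwlock);
    rc = sqlite3_exec(tc->conn, sql, NULL, NULL, NULL);
    pthread_rwlock_unlock(&tc->db->rwlock);
    return rc;
}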

Since we no longer use POSIX file locking we compile Sqlite without it to trim some redundant overhead.

It looks like we can replace FTS with user functions using a text indexing method recycled from another product.
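
Something along these lines, where TEXT_CONTAINS and the naive strstr() match are placeholders for the real indexing method:

/* Illustrative sketch of replacing FTS with an application-defined function.
 * TEXT_CONTAINS and the strstr() comparison are placeholders only. */
#include <sqlite3.h>
#include <string.h>

static void text_contains(sqlite3_context *ctx, int argc, sqlite3_value **argv)
{
    const char *haystack = (const char *)sqlite3_value_text(argv[0]);
    const char *needle   = (const char *)sqlite3_value_text(argv[1]);
    (void)argc;

    if (haystack == NULL || needle == NULL) {
        sqlite3_result_null(ctx);
        return;
    }
    sqlite3_result_int(ctx, strstr(haystack, needle) != NULL);
}

/* Register once per connection. */
static int register_text_contains(sqlite3 *db)
{
    return sqlite3_create_function(db, "TEXT_CONTAINS", 2, SQLITE_UTF8, NULL,
                                   text_contains, NULL, NULL);
}

/* Usage:  SELECT id FROM docs WHERE TEXT_CONTAINS(body, 'sqlite'); */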

The server in question services AJAX-style WWW pages where there are large numbers of short read transactions and minimum latency is required to achieve a snappy response. It manages to achieve sub-millisecond responses to database RPCs from the WWW browser.

BTW, with help from this forum we realized that our attempts to achieve shared cache and FTS together were doomed to fail for fundamental architectural reasons, and we abandoned the effort. In retrospect we were trying to implement PostgreSQL with Sqlite, and that was not a rational project.

The Sqlite based application server allows a central site to support many databases, each one specific to sets of users located globally. Sqlite's single file databases make this very simple to administer. Each database does not have a large number of users, relieving the concurrency load.

For further background on using Sqlite this way look at the way Mozilla implements it using shared cache.

Finally, it is important to recognize that Sqlite is not Oracle; it is a well-conceived kit of tools to permit a developer to embed SQL database capability into an application and to make it fit transparently. The developer has the source and nothing is chiselled in stone.

Ken wrote:

John,
The sqlite api won't block; it will return an SQLITE_BUSY type error to any other transactions that are attempted, correct? So there is no sqlite blocking, which is a good thing when writing a server. The clients will always block waiting upon a response from the server. The server simply keeps the client requests enqueued until it can service them some time later.
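
To make that pattern concrete, a rough sketch follows; request_t and the queue functions are hypothetical placeholders, not test_server.c code:

/* Sketch of the non-blocking pattern described above: the server never
 * waits on Sqlite; a busy request simply goes back on the work queue.
 * request_t, queue_pop, queue_push, execute and reply are placeholders. */
#include <sqlite3.h>

typedef struct request request_t;          /* a queued client request */
extern request_t *queue_pop(void);         /* placeholder queue API */
extern void       queue_push(request_t*);  /* placeholder queue API */
extern int        execute(request_t*);     /* runs the SQL, returns rc */
extern void       reply(request_t*, int);  /* answers the waiting client */

void server_loop(void)
{
    for (;;) {
        request_t *req = queue_pop();      /* blocks only on the queue */
        int rc = execute(req);             /* returns at once if locked */
        if (rc == SQLITE_BUSY)
            queue_push(req);               /* retry later; client keeps waiting */
        else
            reply(req, rc);                /* success or a real error */
    }
}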



-----------------------------------------------------------------------------
To unsubscribe, send email to [EMAIL PROTECTED]
-----------------------------------------------------------------------------