On 10 Mar 2017, at 9:34pm, Keith Medcalf <kmedc...@dessus.com> wrote:

> You mean physical reads?  I suppose this would be possible, as long as the 
> working sets of all your read queries are able to fit in the cache 
> simultaneously.  If not, you are likely to get more cache thrash with the 
> cache being shared than if it is not shared, since you are using the same 
> cache for all connections, rather than one per connection that will contain 
> only the working set for the queries processed on that connection.
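
For reference, the per-connection cache Keith describes is sized with PRAGMA 
cache_size.  An untested sketch, assuming an already-open connection db (a 
positive value is a page count; a negative value would mean KiB instead):

    /* Give this connection a private cache of roughly 10000 pages. */
    sqlite3_exec(db, "PRAGMA cache_size = 10000;", NULL, NULL, NULL);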

There are two different patterns of use.  In one, the different 
threads/processes usually care about different rows (perhaps in different 
tables); in that case, a shared cache is of very little benefit.  In the 
other, the threads/processes usually read and update the same parts of the 
file; in that case, sharing the cache can provide a great improvement in 
throughput.
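
If you want to try it, shared cache is enabled per connection at open time.  
A minimal, untested sketch, with "test.db" standing in for your database, and 
with both connections in the same process (which is all shared cache covers):

    #include <sqlite3.h>
    #include <stdio.h>

    int main(void) {
        sqlite3 *db1, *db2;
        int flags = SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE
                  | SQLITE_OPEN_SHAREDCACHE;

        /* Two connections to the same file, sharing one page cache. */
        if (sqlite3_open_v2("test.db", &db1, flags, NULL) != SQLITE_OK ||
            sqlite3_open_v2("test.db", &db2, flags, NULL) != SQLITE_OK) {
            fprintf(stderr, "open failed\n");
            return 1;
        }

        /* ... run your workload on db1 and db2 ... */

        sqlite3_close(db2);
        sqlite3_close(db1);
        return 0;
    }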

Modified for SQLite, of course, because almost every change touches the 
beginning of the database file and the beginning of the journal file, so even 
writers working on otherwise unrelated rows end up sharing a few hot pages.

But yes, as Keith points out, there’s no way to know which optimization(s) will 
benefit your particular setup without trying them.  And you shouldn’t waste a 
lot of time doing anything non-standard unless a vanilla setup is too slow.  
You are not trying to provide the fastest possible program; you are trying to 
provide a program which is fast enough.
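
"Trying them" can be as simple as timing the same workload under each 
configuration.  A rough, untested sketch, where run_workload() is a 
hypothetical function you supply to run your real queries, and "test.db" 
again stands in for your database (clock() measures CPU time, so wall-clock 
timing may suit an I/O-bound workload better):

    #include <sqlite3.h>
    #include <stdio.h>
    #include <time.h>

    /* Hypothetical: runs your real queries on the given connection. */
    extern void run_workload(sqlite3 *db);

    static double time_workload(const char *uri) {
        sqlite3 *db;
        /* SQLITE_OPEN_URI makes "?cache=shared" in the filename work. */
        sqlite3_open_v2(uri, &db,
            SQLITE_OPEN_READWRITE | SQLITE_OPEN_URI, NULL);
        clock_t t0 = clock();
        run_workload(db);
        clock_t t1 = clock();
        sqlite3_close(db);
        return (double)(t1 - t0) / CLOCKS_PER_SEC;
    }

    int main(void) {
        printf("private cache: %.3fs\n", time_workload("file:test.db"));
        printf("shared cache:  %.3fs\n",
               time_workload("file:test.db?cache=shared"));
        return 0;
    }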

Simon.