On Friday, 10 March, 2017 10:57, Jens Alfke <j...@mooseyard.com> wrote:
>> On Mar 9, 2017, at 7:55 AM, Keith Medcalf <kmedc...@dessus.com> wrote:

>> Why are you using SHARED_CACHE since it does not sound like you have
>> memory constraints on cache size, which is the primary (only) reason you
>> would ever want to use shared cache since the penalties for doing so are
>> significant.

> What about for reducing I/O? If the cache is shared, presumably the number
> of read calls will be correspondingly reduced.

You mean physical reads?  I suppose that is possible, as long as the working 
set of all your read queries fits in the cache simultaneously.  If not, you are 
likely to get more cache thrash with a shared cache than without one, since all 
connections are contending for the same cache rather than each having its own 
cache containing only the working set of the queries processed on that 
connection.
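For what it is worth, a rough way to test whether the working set fits is to 
enlarge the page cache on a connection and see whether the extra physical reads 
go away.  A minimal C sketch -- the file name and page count are placeholders I 
made up, not anything from your setup:

#include <sqlite3.h>
#include <stdio.h>

int main(void) {
    sqlite3 *db;
    if (sqlite3_open("app.db", &db) != SQLITE_OK) {  /* placeholder file name */
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        sqlite3_close(db);
        return 1;
    }
    /* cache_size is in pages when positive, KiB when negative;
       8000 pages is an arbitrary number chosen for the experiment. */
    sqlite3_exec(db, "PRAGMA cache_size = 8000;", 0, 0, 0);
    /* ... run the read queries here and compare the physical I/O ... */
    sqlite3_close(db);
    return 0;
}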
 
> I’m considering using a shared cache to cut down on I/O, as well as
> memory. If I use a pool of connections for read-only queries from multiple
> threads, it seems like a win to have them share a cache. 

Maybe.  It depends on whether the overhead of managing a shared cache (and the 
possible thrashing of that cache) exceeds whatever I/O it saves you over 
separate per-connection caches.  That will depend on your workload.  It is easy 
enough to experiment, however, since you only change one parameter to enable 
shared cache for the connections in the pool.
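Something like the following is what I mean by changing one parameter: open the 
pool connections once with SQLITE_OPEN_SHAREDCACHE and once with 
SQLITE_OPEN_PRIVATECACHE and measure both under your real workload.  Only a 
sketch -- the file name and pool size are placeholders:

#include <sqlite3.h>

#define POOL_SIZE 4              /* placeholder pool size */

static sqlite3 *pool[POOL_SIZE];

/* Open the read-only pool either with a shared cache or with the
   default private caches, so the two configurations can be compared. */
int open_read_pool(int use_shared_cache) {
    int flags = SQLITE_OPEN_READONLY |
                (use_shared_cache ? SQLITE_OPEN_SHAREDCACHE
                                  : SQLITE_OPEN_PRIVATECACHE);
    for (int i = 0; i < POOL_SIZE; i++) {
        if (sqlite3_open_v2("app.db", &pool[i], flags, NULL) != SQLITE_OK)
            return -1;           /* "app.db" is a placeholder file name */
    }
    return 0;
}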

> I’d use a separate connection with its own cache for writes. Does that make 
> sense?

Yes.  However, each write will of course invalidate the cached pages held by 
the other connections (whether shared or not).  Though if the database is in 
WAL mode, I would expect the caches to be invalidated only after a checkpoint 
... but I am not certain of this.
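If it helps, here is roughly what I have in mind for the single writer: a 
private-cache read/write connection with WAL enabled, checkpointed explicitly.  
Again just a sketch with a placeholder file name, and the effect on the 
readers' caches is my guess above, not something I have verified:

#include <sqlite3.h>

/* Single write connection: read/write, private cache, WAL journal. */
int open_writer(sqlite3 **out) {
    int rc = sqlite3_open_v2("app.db", out,   /* placeholder file name */
                             SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE |
                             SQLITE_OPEN_PRIVATECACHE, NULL);
    if (rc != SQLITE_OK) return rc;
    return sqlite3_exec(*out, "PRAGMA journal_mode=WAL;", 0, 0, 0);
}

/* Explicit passive checkpoint; it is after this point that the readers
   would (presumably) need to reload any pages the writer changed. */
int checkpoint_writer(sqlite3 *db) {
    return sqlite3_wal_checkpoint_v2(db, NULL, SQLITE_CHECKPOINT_PASSIVE,
                                     NULL, NULL);
}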



