>All I was able to determine was that setting temp_store = 2 (MEMORY)
>did speed up the queries but I've no idea if using that setting is risky
>on a lower spec pc.

I "tested" this and setting temp_store to MEMORY is the same as placing the 
database itself in :memory:.  That is, the page_cache and temp_page_cache size 
(as approriate) has zero effect.
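
For reference, here is a minimal sketch (in the sqlite3 shell, against a 
file-backed database) of the settings being compared; the cache sizes below 
are placeholders, not recommendations:

    PRAGMA temp_store = 2;           -- 2 = MEMORY: temp tables/indices are kept in RAM
    PRAGMA main.cache_size = -2000;  -- negative value = approximate limit in KiB (best effort)
    PRAGMA temp.cache_size = -2000;  -- cache for the temp database
    -- With temp_store = MEMORY, the temp cache size no longer bounds how much
    -- RAM the temp objects themselves consume; they grow like a :memory: database.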

That is, the size and memory consumption of both temp objects and objects in 
the database are limited only by the ability of the system to allocate virtual 
storage, and once that is used up, an "OutOfMemory" error occurs, with whatever 
consequences may ensue from that.

If, however, the database is a file, or the temp_store is a file, then the 
applicable page cache size specifications provide a best-effort limitation 
(note that it is best effort, not a guarantee) on the amount of memory consumed 
in the page_cache for each object type (database or temp).
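
For contrast, a sketch of the file-backed configuration described here, where 
the cache sizes act as the best-effort ceilings (again, placeholder values):

    PRAGMA temp_store = 1;           -- 1 = FILE: temp objects spill to a temp file on disk
    PRAGMA main.cache_size = -4000;  -- ~4 MB best-effort cap on the main database's page cache
    PRAGMA temp.cache_size = -1000;  -- ~1 MB best-effort cap on the temp database's page cache
    -- Pages beyond these limits get written back to the files rather than held
    -- in RAM, so memory use is bounded (best effort, not a guarantee).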

So yes, if you run with temp_store = 2 on a computer with 4K of RAM you will 
be limited to using a maximum of 4K of RAM.  On the other hand, if you run on 
a computer that has RAM in excess of the supported process virtual size, then 
you will keep consuming RAM until the process has allocated all the arena that 
the OS allows it to allocate, at which point the process gets an out-of-memory 
error.  Whether the "process" memory allocator runs out of allocatable arena 
before the OS runs out of storage is defined by the OS (and the amount of 
virtual storage actually installed in the computer).
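
As an aside (and only as a sketch, since it is not something the original 
question asked about), if the worry is unbounded growth on a machine with a 
large virtual address space, SQLite also has a soft heap limit that asks the 
library to try to stay under a given total heap size.  It is advisory and best 
effort, and availability depends on your SQLite version:

    PRAGMA soft_heap_limit = 268435456;  -- ask SQLite to try to keep heap usage under ~256 MB
    PRAGMA soft_heap_limit;              -- returns the current limit (0 means no limit)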

So "risky" is a questionable concept.  Everything is "risky".  I believe what 
you are trying to express is the probability of an out of memory condition, and 
that is an entirely different thing.  If your application is controlling a 
life-and-death process then it is quite likely that your CONSEQUENCE of failure 
is HIGH (immediate death of one of more persons).  

If you have the temp_store on disk, then the probability of running out of 
memory is equal to the probability of running out of disk space, which is a 
separate question.  On the other hand, if you have temp_store in memory and 
only have 4K of virtual storage, then the probability that you will experience 
out-of-memory is high.  Alternatively, if you are using a computer that has 4 
petabytes of RAM, you have temp_store in memory, AND your software is written 
and running such that it can use all that RAM, then the probability of running 
out of RAM is remote.

Of course, these probabilities do not take into account the probability of 
"slow lingering death of one or more persons" because, since you cannot afford 
actual RAM, you instead use spinning-rust-backed "Virtual Storage", which 
introduces "random latency" into your systems.


---
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.



