> Mmm, I was thinking that I'd decrease the cache_size to something like 20
> when using the RAM drive, since I don't need caching any more then.
> 
> I have inserted more timing code and I am now convinced I have an I/O
> problem. When I coax the OS to fully cache my (smaller, 400000-row) db file
> (which takes 2-3 runs), sqlite can do lookups at about 50000 per second.
> With the file uncached this value falls as low as 500.
> 
> Now I need to get the system administrators to make me that RAM drive.

The obvious problem with a RAM drive is that the data is not persisted,
so if you lose power... you get the idea.

I may be drowned as a witch for suggesting this, but since you have ample
RAM and CPUs and you want the file to be in OS cache all day for quick
ad-hoc lookups, just put the following in cron to be run every few minutes:

   cat your.db > /dev/null

If the file is already in OS cache, this is a very quick operation.
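A crontab entry for this might look like the following; the
every-five-minutes schedule and the absolute path are just placeholders,
adjust to taste:

```shell
# m   h dom mon dow  command
*/5 * *   *   *      cat /path/to/your.db > /dev/null
```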

There may be OS-specific ways to keep the image of the file in RAM
without the cron/cat hack. Some modern, smart OS pagers may decide not
to keep the file cached in memory if they suspect it will not be used
again, so see what cat alternative works on your OS.
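One such alternative, sketched here in Python just for illustration: mmap
the file, hint the kernel with madvise where available (Python 3.8+, POSIX
systems), and touch every page so it really ends up in the page cache.
Untested on your platform, so treat it as a sketch:

```python
import mmap
import os

def advise_willneed(path):
    """Map the file, ask the kernel to pre-fault it, then touch each
    page so it lands in the OS page cache. Returns pages touched.
    (mmap.madvise needs Python 3.8+; flags are platform-specific.)"""
    fd = os.open(path, os.O_RDONLY)
    try:
        size = os.fstat(fd).st_size
        if size == 0:
            return 0
        mm = mmap.mmap(fd, size, prot=mmap.PROT_READ)
        try:
            # Hint first, where the platform supports it.
            if hasattr(mm, "madvise") and hasattr(mmap, "MADV_WILLNEED"):
                mm.madvise(mmap.MADV_WILLNEED)
            # Then touch one byte per page to force each page in.
            touched = 0
            for off in range(0, size, mmap.PAGESIZE):
                mm[off]
                touched += 1
            return touched
        finally:
            mm.close()
    finally:
        os.close(fd)
```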

Whether you use the RAM-drive approach or the keep-the-db-in-OS-cache
approach, do keep cache_size low for all your sqlite processes, as you
mention. A large private cache in every process is a waste of system RAM,
since each one duplicates pages the OS is already caching.
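Per connection, that can be as simple as the following sketch (Python's
sqlite3 module used purely for illustration; "your.db" is your placeholder
name and 20 is the value you mention):

```python
import sqlite3

# Open a connection with a deliberately small page cache, on the
# assumption that the OS cache (or RAM drive) does the real caching.
conn = sqlite3.connect("your.db")
conn.execute("PRAGMA cache_size = 20;")  # ~20 pages instead of the default
(cache_pages,) = conn.execute("PRAGMA cache_size;").fetchone()
conn.close()
```

Note that cache_size is per connection, so it must be set again each time
you open the database.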


       