Mmm, I was thinking that I would decrease the cache_size to something like
20 when using the RAM drive, since I won't need caching anymore then.
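
Something like this is what I have in mind (a rough sketch; the value is in
pages, so at my 8K page size 20 pages is only ~160K of cache):

    /* "db" is the already-open connection. With the file on a RAM
       drive, SQLite-side caching is largely redundant.            */
    sqlite3_exec(db, "PRAGMA cache_size = 20;", 0, 0, 0);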

I have inserted more timing code and I am now convinced I have an I/O
problem. When I coax the OS into fully caching my (smaller, 400000-row) db
file (which takes about 2-3 runs), SQLite can do lookups at about 50000 per
second. With the file uncached, this value falls as low as 500.
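
The timing harness I added is roughly this shape (a sketch; "lookup", "key"
and "val" stand in for the real schema, and error checking is omitted):

    #include <sys/time.h>
    #include <sqlite3.h>

    static double now_sec(void)
    {
        struct timeval tv;
        gettimeofday(&tv, 0);
        return tv.tv_sec + tv.tv_usec / 1e6;
    }

    /* Wall-clock rate of n single-row lookups. Wall time, not CPU
       time, so that time spent waiting on I/O is counted too.     */
    static double lookups_per_sec(sqlite3 *db, int n)
    {
        sqlite3_stmt *stmt;
        double start = now_sec();
        int i;
        sqlite3_prepare_v2(db, "SELECT val FROM lookup WHERE key = ?;",
                           -1, &stmt, 0);
        for (i = 0; i < n; i++) {
            sqlite3_bind_int(stmt, 1, i);
            sqlite3_step(stmt);
            sqlite3_reset(stmt);
        }
        sqlite3_finalize(stmt);
        return n / (now_sec() - start);
    }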

Now I need to get the system administrators to make me that RAM drive.


ken-33 wrote:
> 
> The RAM drive is unlikely to work. It will still have the same cache
> invalidation. 
>   You need to get things logically working first. RAM drives are great for
> improving performance where seek and rotational access costs dominate.
>   
> 
> pompomJuice <[EMAIL PROTECTED]> wrote:
>   
> Aarrgh.
> 
> That is the one thing that I won't be able to do. It would require a
> complete system redesign. I can adapt my program easily, but getting it to
> work in the greater scheme of things would be a nightmare.
> 
> My current efforts are focused on creating a RAM drive and putting the
> file in there. I hope it works.
> 
> 
> ken-33 wrote:
>> 
>> Can you consolidate your multiple binaries into a single binary?
>> Then use threading and SQLite's shared caching to perform the lookups and
>> updates.
>> That way the cache wouldn't get invalidated?
>> 
>> Someone else here correct me if this is a bad idea!!!
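>> 
>> Something like this is what I have in mind (a rough sketch against the C
>> API; how far the shared cache reaches across threads depends on the
>> SQLite version, so treat it as a starting point):
>> 
>>     #include <sqlite3.h>
>> 
>>     /* Must run before any connection is opened. With shared cache
>>        on, connections in this process share one page cache, so an
>>        update through one connection does not force the others to
>>        reload everything from disk.                                */
>>     sqlite3_enable_shared_cache(1);
>> 
>>     sqlite3 *lookups, *updates;           /* names are invented    */
>>     sqlite3_open("lookup.db", &lookups);  /* file name is invented */
>>     sqlite3_open("lookup.db", &updates);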
>> 
>> 
>> 
>> pompomJuice wrote: 
>> I suspected something like this, as it makes sense.
>> 
>> I have multiple binaries/different connections (and I cannot make them
>> share a connection) using this one lookup table, and depending on which
>> connection checks first, it will update the table.
>> 
>> My question is then: if any one connection makes any change to the
>> database (not necessarily to the huge lookup table), will all the other
>> connections invalidate their entire cache? Or is the cache dropped per
>> table/btree?
>> 
>> Thanks for that response. I can already move ahead with better knowledge
>> of how the caching works.
>> 
>> Regards.
>> 
>> 
>> 
>> Dan Kennedy-4 wrote:
>>> 
>>> On Tue, 2007-06-19 at 01:06 -0700, pompomJuice wrote:
>>>> Hello there.
>>>> 
>>>> I need some insight into how SQLite's caching works. I have a database
>>>> that is quite large (5Gb) sitting on a production server whose I/O is
>>>> severely taxed. This causes my SQLite db to perform very poorly. Most
>>>> of the time my application just sits there and uses about 10% of a CPU,
>>>> where it would use 100% on test systems with idle I/O. Effectively,
>>>> what the application does is constantly perform lookups as fast as it
>>>> can.
>>>> 
>>>> To counteract this I increased the page size to 8192 (Unix server with
>>>> advfs having 8K block sizes) and increased the MAX_PAGES value in
>>>> sqliteInt.h to 512000. This worked. My application starts at low memory
>>>> usage and gradually gains more memory. As it gains more memory it uses
>>>> more CPU, eventually reaching a point where it uses 100% CPU and 5Gb of
>>>> RAM.
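>>>> 
>>>> In pragma terms, those two knobs look roughly like this (a sketch from
>>>> the C side; page_size only takes effect before the database file is
>>>> created, and cache_size is the runtime counterpart of the MAX_PAGES
>>>> compile-time default):
>>>> 
>>>>     /* 8K pages to match the filesystem block size.             */
>>>>     sqlite3_exec(db, "PRAGMA page_size = 8192;", 0, 0, 0);
>>>>     /* Up to 512000 pages (~4Gb at 8K per page) in the cache.   */
>>>>     sqlite3_exec(db, "PRAGMA cache_size = 512000;", 0, 0, 0);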
>>>> 
>>>> Every now and then the lookup table is updated. As soon as the
>>>> application does this, however, the performance drops back to a crawl
>>>> and slowly builds up again as described in the previous paragraph. The
>>>> memory usage stays at 5Gb. All I can think of is that the update
>>>> invalidates the cache. 
>>> 
>>> Probably right.
>>> 
>>>> The update
>>>> is not very big, say 200000 rows in a table that has about 45 million
>>>> rows.
>>>> 
>>>> What exactly is happening here?
>>> 
>>> Are you using 3.3.17? And is it an external process (or at least a
>>> different connection) doing the update?
>>> 
>>> If so, the update is modifying the pager change-counter, invalidating
>>> the pager cache held by the lookup application. The lookup app has
>>> to start loading pages from the disk again, instead of just reading
>>> its cache.
>>> 
>>> The only way around this performance hit is to do the UPDATE through
>>> the lookup app, using the same database connection.
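>>> 
>>> In other words, something along these lines (a sketch; the file, table
>>> and column names are invented):
>>> 
>>>     /* Lookups and the periodic UPDATE share one connection, so
>>>        the connection knows about its own write and keeps its
>>>        cached pages (except those the UPDATE actually changed). */
>>>     sqlite3 *db;
>>>     sqlite3_open("lookup.db", &db);
>>>     sqlite3_exec(db, "UPDATE lookup SET val = 'new' WHERE key = 1;",
>>>                  0, 0, 0);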
>>> 
>>> Dan.
>>> 
>>>> Regards.
>>> 
>>> 
> 
