That's exactly why I thought SQLite would work here.

16-processor machine
~40 GB RAM
EMC storage
Running a huge Oracle 10g database
Running a 3rd-party application that generates huge amounts of I/O.
Part of this 3rd-party application is my application, which does the lookups.

1.) Data comes in in the form of files.
2.) The 3rd-party application decodes these tickets/records and prepares
them for insert into Oracle.
3.) As these records flow by, we use key fields inside them to do lookups
against this lightweight SQLite database, as doing them against Oracle
would be even slower.
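In rough code, the lookup step looks something like this (a Python sketch on my part; the table and column names are made up for illustration, and in production the database is a ~5 GB file rather than an in-memory one):

```python
import sqlite3

# Illustrative sketch only: table/column names are invented, and the
# real database is a ~5 GB file on disk, not an in-memory one.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lookup_table (key TEXT PRIMARY KEY, value TEXT)")
conn.execute("INSERT INTO lookup_table VALUES ('ticket-123', 'subscriber-A')")

def lookup(key):
    # Each decoded record supplies key fields; an indexed point query
    # is what makes SQLite faster than a round trip to Oracle here.
    row = conn.execute(
        "SELECT value FROM lookup_table WHERE key = ?", (key,)
    ).fetchone()
    return row[0] if row else None

print(lookup("ticket-123"))  # -> subscriber-A
```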

When I deployed my solution I initially set the cache size very small, as
I thought we could make gains by letting the OS cache the SQLite db file
instead. This strategy failed miserably. The 3rd-party app generates so
much I/O that my SQLite file simply has no chance of staying cached.

So I then modified my program to adjust PRAGMA cache_size to see if that
would help. Initially it did nothing; only after I edited
SQLITE_DEFAULT_CACHE_SIZE in sqliteInt.h (which I see is now in limits)
did my program start consuming massive amounts of RAM. That worked
brilliantly for a while, until the lookup table was updated. All programs
went dead slow again, and it took them a good 2 hours to fully cache
that 5 GB file again.
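For reference, the runtime knob I was adjusting is the pager cache, which is counted in pages rather than bytes (a sketch; the numbers here are illustrative, not my production values):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# cache_size is counted in pages of page_size bytes each, so the
# memory the cache can use is roughly cache_size * page_size.
page_size = conn.execute("PRAGMA page_size").fetchone()[0]
conn.execute("PRAGMA cache_size = 100000")  # illustrative page count
cache_pages = conn.execute("PRAGMA cache_size").fetchone()[0]
print(cache_pages, page_size)  # cache_pages -> 100000
```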

I am panicking now, as I have been working on this project for a few
months and it seems to be doomed. The only course of action I see is
finding a way to somehow increase the I/O performance, as that is where I
believe the problem resides. Copying the DB file to and from the file
system is very fast, so I am at a loss as to why my application is not
getting the throughput. Maybe because it's not accessing the file
sequentially... I don't know.
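One idea I may try (purely an assumption on my part, not something I have verified): pre-warming the OS cache by streaming the whole DB file sequentially before the lookups start, since sequential access is exactly the case where this storage is fast:

```python
import os
import tempfile

def warm_cache(path, chunk=1 << 20):
    # Stream the file front to back in 1 MB chunks so the OS page
    # cache gets populated by fast sequential reads.
    total = 0
    with open(path, "rb") as f:
        while True:
            buf = f.read(chunk)
            if not buf:
                break
            total += len(buf)
    return total  # bytes read

# Demo on a small temp file standing in for the 5 GB database.
fd, path = tempfile.mkstemp()
os.write(fd, b"x" * 3_000_000)
os.close(fd)
print(warm_cache(path))  # -> 3000000
os.unlink(path)
```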

Thanks for the help.

Regards.



Christian Smith wrote:
> 
> pompomJuice uttered:
> 
>>
>> I suspected something like this, as it makes sense.
>>
>> I have multiple binaries/different connections ( and I cannot make them
>> share a connection ) using this one lookup table and depending on which
>> connection checks first, it will update the table.
> 
> 
> What is your working set like? Are all processes on the same machine?
> 
> Sounds like you might benefit from increasing the amount of RAM on this 
> machine. You may be thrashing the OS cache, as your lookup process hogs 
> memory for its own cache, pushing out old pages from the OS filesystem 
> cache.
> 
> If a RAM upgrade is not feasible, then try reducing the cache of the lookup 
> process, so that the OS cache isn't forced out of memory so easily. Then, 
> when the lookup process has a cache miss, its missed page is more likely 
> to be in the OS memory cache, and copied to the lookup process at memory 
> copy speed.
> 
> As you may have guessed, choosing the correct cache size for the lookup 
> process may involve several tuning iterations.
> 
> 
> Christian
> 
> --
>      /"\
>      \ /    ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
>       X                           - AGAINST MS ATTACHMENTS
>      / \
> 
> -----------------------------------------------------------------------------
> To unsubscribe, send email to [EMAIL PROTECTED]
> -----------------------------------------------------------------------------
> 
> 
> 

-- 
Sent from the SQLite mailing list archive at Nabble.com.


