Dear SQLite Community,

Please help me with your expertise and advice.

Environment:

   - A Windows service that reads/updates any one of 60 different settings
   (integer values and strings) stored in an SQLite database (a .db file on
   disk). Another Windows service (over which I have no control) also updates
   this .db file once in a while. So the SQLite DB is the primary settings store.


Some Analysis:

ProcMon shows 80-90 'ReadFile' requests per transaction in my application.

A transaction finishes in 1-3 seconds, and a user can start at most 15
transactions at any given instant. The frequency of transactions is not
heavy; it is intermittent at best, say at most 10 transactions in 2
minutes.

I suspect that each ReadFile shown in ProcMon does not actually hit the
disk, since the OS file cache can satisfy many of them.
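
One thing I am unsure about: since the whole file is only 1-2 MB, I assume
it should fit entirely in SQLite's own page cache, so I have been considering
simply raising the cache size on the connection, roughly like this (the
2048 KiB figure is just my assumption based on the current file size):

    #include <sqlite3.h>

    /* Raise the page cache so the whole 1-2 MB settings DB can stay cached
       on this connection. A negative value means size in KiB. */
    static int set_cache_size(sqlite3 *db)
    {
        return sqlite3_exec(db, "PRAGMA cache_size = -2048;", 0, 0, 0);
    }

Would that actually reduce the number of ReadFile calls per transaction?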


How can I improve the service's performance here? Keep in mind:


   - There will be no new inserts (ignoring the addition of a few hundred URLs
   once a day, which overwrite the older ones). The size of the DB will be more
   or less constant, in the range of 1-2 MB, or at most 5 MB.
   - Most of the time the service is reading values; the read path is roughly
   the sketch after this list.
   - Another service updates some of these settings (say, once an hour).
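
For context, the read path is essentially a single-row lookup against a
key/value table. Simplified, and with illustrative table/column names, it
looks roughly like this:

    #include <sqlite3.h>
    #include <string.h>

    /* Simplified read path: look up one setting by name.
       The "settings" table and its columns are illustrative names. */
    static int read_setting(sqlite3 *db, const char *name,
                            char *out, size_t out_len)
    {
        sqlite3_stmt *stmt;
        int rc = sqlite3_prepare_v2(db,
            "SELECT value FROM settings WHERE name = ?1;", -1, &stmt, 0);
        if (rc != SQLITE_OK) return rc;

        sqlite3_bind_text(stmt, 1, name, -1, SQLITE_STATIC);
        rc = sqlite3_step(stmt);
        if (rc == SQLITE_ROW) {
            const unsigned char *v = sqlite3_column_text(stmt, 0);
            strncpy(out, v ? (const char *)v : "", out_len - 1);
            out[out_len - 1] = '\0';
            rc = SQLITE_OK;
        }
        sqlite3_finalize(stmt);
        return rc;
    }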


Intention:
Super fast response to the user.


Is this case a good candidate for using an SQLite in-memory database?
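
If it is, what I have in mind is opening an in-memory database at startup
and copying the on-disk file into it with the online backup API (and doing
the same again whenever the other service changes the file). A sketch of
what I mean, with an illustrative path:

    #include <sqlite3.h>

    /* Sketch: load the on-disk settings DB into an in-memory database
       using the online backup API. The file path is illustrative. */
    static int load_into_memory(sqlite3 **out_mem)
    {
        sqlite3 *disk = 0, *mem = 0;
        int rc = sqlite3_open("C:\\ProgramData\\MyApp\\settings.db", &disk);
        if (rc == SQLITE_OK) rc = sqlite3_open(":memory:", &mem);
        if (rc == SQLITE_OK) {
            sqlite3_backup *bk = sqlite3_backup_init(mem, "main", disk, "main");
            if (bk) {
                sqlite3_backup_step(bk, -1);   /* copy the whole database */
                sqlite3_backup_finish(bk);
            }
            rc = sqlite3_errcode(mem);
        }
        sqlite3_close(disk);
        if (rc == SQLITE_OK) *out_mem = mem; else sqlite3_close(mem);
        return rc;
    }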

If not, how can I reliably know the number of hits to the SQLite .db file
on the disk?
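
ProcMon only shows me ReadFile calls, which the OS file cache may satisfy
anyway. The closest thing I have found so far is sqlite3_db_status(), which
I believe exposes per-connection page-cache hit/miss counters, something
like the sketch below. Is that the right way to count actual reads from
the file?

    #include <sqlite3.h>
    #include <stdio.h>

    /* Print SQLite's own page-cache hit/miss counters for one connection. */
    static void print_cache_stats(sqlite3 *db)
    {
        int hit = 0, miss = 0, unused = 0;
        sqlite3_db_status(db, SQLITE_DBSTATUS_CACHE_HIT,  &hit,  &unused, 0);
        sqlite3_db_status(db, SQLITE_DBSTATUS_CACHE_MISS, &miss, &unused, 0);
        printf("page cache hits: %d, misses (reads from the file): %d\n",
               hit, miss);
    }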

Thank you.
