Hello,

>> If you want to improve memory utilization, just put everything in one
>> database.  Each attached database in each connection has its own cache.
>> With that many data sources you're going to get a very uneven distribution
>> of cache utilization, and very high worst-case usage.

Unfortunately, my application requires independent databases rather than
a single one, as each database lives on a different device and holds
that device's data. Multiple devices connect to my server application,
which gathers the records from the independent databases by attaching
them together. I could not find a better solution than ATTACH for
joining these databases, because after gathering the information I need
to sort the collected records and present them together. That is why I
sometimes try to select a lot of records with a single query.
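
Roughly, the pattern I use looks like the sketch below; the file names,
the table name "records" and the sort column "name" are only placeholders
for my real schema.

    /*
     * Rough sketch of the attach-and-merge pattern described above.
     * Attaching 30 databases also assumes a build with
     * SQLITE_MAX_ATTACHED raised above its default of 10.
     */
    #include <stdio.h>
    #include <sqlite3.h>

    int main(void)
    {
        sqlite3 *db;
        sqlite3_stmt *stmt;

        if (sqlite3_open("device01.db", &db) != SQLITE_OK) return 1;

        /* Attach the remaining device databases to the first one. */
        sqlite3_exec(db, "ATTACH DATABASE 'device02.db' AS dev02;", 0, 0, 0);
        sqlite3_exec(db, "ATTACH DATABASE 'device03.db' AS dev03;", 0, 0, 0);
        /* ... one ATTACH per device ... */

        /* Merge the per-device tables and sort the combined result. */
        const char *sql =
            "SELECT * FROM main.records "
            "UNION ALL SELECT * FROM dev02.records "
            "UNION ALL SELECT * FROM dev03.records "
            "ORDER BY name;";

        if (sqlite3_prepare_v2(db, sql, -1, &stmt, 0) == SQLITE_OK) {
            /* The first sqlite3_step() does the gather-and-sort work;
               later steps just hand back the next sorted row. */
            while (sqlite3_step(stmt) == SQLITE_ROW)
                printf("%s\n", (const char *)sqlite3_column_text(stmt, 0));
            sqlite3_finalize(stmt);
        }
        sqlite3_close(db);
        return 0;
    }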

Maybe someone can suggest a better option for this situation. Is there a
way to clear the cache before I run a select, to make sure enough cache
is available for it? I already use the soft heap limit feature.
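
For reference, the calls I mean look roughly like this; the byte figures
are just examples, and as far as I know sqlite3_release_memory() only
reclaims page-cache memory in builds with SQLITE_ENABLE_MEMORY_MANAGEMENT.

    #include <sqlite3.h>

    /* Hint SQLite to shrink its heap before a large SELECT. */
    void trim_memory_before_big_select(void)
    {
        /* Keep total heap usage near this soft limit (example value). */
        sqlite3_soft_heap_limit(2 * 1024 * 1024);

        /* Try to free up to 1 MB of cached pages right now. */
        sqlite3_release_memory(1024 * 1024);
    }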

>> "Just 250 records" implies a sort.  Unless an index is available, a
>> sort requires doing the whole query and sorting the results.  That's
>> the large lead time.  After that it is just popping records off the
>> stack, so to speak.

I do have a primary key index on my very first column, which is simply
the rowid itself. I did not create other indexes, as they would add more
memory overhead. My databases are simple: I create a database with a
single table of 65 columns and write data into it. I then attach all the
databases together and run selects that, of course, ORDER BY not the
primary key but some other column of type TEXT. Do you think this will
slow things down?
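
If an index on that TEXT column turned out to be worth its memory cost,
creating it would look something like this sketch ("records" and "name"
are placeholder names):

    #include <sqlite3.h>

    /* Index the TEXT column used in ORDER BY so a single-table query
       can walk the index instead of sorting. */
    int create_sort_index(sqlite3 *db)
    {
        return sqlite3_exec(db,
            "CREATE INDEX IF NOT EXISTS idx_records_name "
            "ON records(name);",
            0, 0, 0);
    }

As far as I know, though, an ORDER BY over a UNION ALL of many attached
tables still sorts the merged rows, so such an index pays off most once
the data sits in a single table.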

Regards,
-Akbar


On Thu, Mar 25, 2010 at 05:22:04PM +0100, Akbar Syed scratched on the wall:
> I have been trying to improve the performance and memory usage of my
> application, which has a maximum of 30 databases attached. In total I
> have 31 databases, with 30 databases attached to the first one. Each
> database has a single table with approx. 65 columns, and the records in
> each table may go up to 50,000. I have 31 connections for 31 databases,
> i.e. 1 connection per database. For each connection I have a cache size
> of 500 pages (1 page = 1 KB), a temporary cache of 500 pages, and for
> each attached database a cache size of 100 pages. My aim is to minimize
> memory usage as much as I can while keeping the reading speed tolerable.
> I don't mind the writing speed, but I do care about reading speed. In
> one attempt, I would select all the records from all the databases;
> that is why I am using attached databases with a single query.
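
  For reference, per-database cache sizes like those can be set with
  schema-qualified pragmas, roughly as in this sketch (the schema name
  "dev02" is only a placeholder for whatever name the database was
  attached under):

      #include <sqlite3.h>

      void configure_caches(sqlite3 *db)
      {
          /* 500 pages for the main database of this connection. */
          sqlite3_exec(db, "PRAGMA main.cache_size = 500;", 0, 0, 0);

          /* 100 pages for an attached database (repeat per schema). */
          sqlite3_exec(db, "PRAGMA dev02.cache_size = 100;", 0, 0, 0);
      }
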
  If you want to improve memory utilization, just put everything in one
  database.  Each attached database in each connection has its own cache.
  With that many data sources you're going to get a very uneven distribution
  of cache utilization, and very high worst-case usage.
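
  One way to read that suggestion, with purely illustrative names, is a
  single table keyed by a device identifier, so that one page cache and
  one index cover every device's rows:

      /* Illustrative consolidated schema, not the actual one. */
      static const char *consolidated_schema =
          "CREATE TABLE IF NOT EXISTS records ("
          "  device_id INTEGER NOT NULL,"
          "  name      TEXT"
          "  /* ... the remaining ~63 columns ... */"
          ");"
          "CREATE INDEX IF NOT EXISTS idx_records_name ON records(name);";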

> In one attempt I tried to fetch just 250 records of 65 columns from 31
> databases, and I observed that I spend approx. 1-5 seconds in the first
> call to sqlite3_step(), while the subsequent calls to sqlite3_step()
> take only microseconds.
  "Just 250 records" implies a sort.  Unless an index is available, a
  sort requires doing the whole query and sorting the results.  That's
  the large lead time.  After that it is just popping records off the
  stack, so to speak.
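
  One way to check whether a particular query is paying that sorting cost
  is to run it through EXPLAIN QUERY PLAN and look for a temporary sort
  step; a minimal sketch (the exact wording of the plan rows varies
  between SQLite versions):

      #include <stdio.h>
      #include <sqlite3.h>

      /* Print the query plan rows for a statement. */
      void show_plan(sqlite3 *db, const char *sql)
      {
          char *eqp = sqlite3_mprintf("EXPLAIN QUERY PLAN %s", sql);
          sqlite3_stmt *stmt;

          if (sqlite3_prepare_v2(db, eqp, -1, &stmt, 0) == SQLITE_OK) {
              while (sqlite3_step(stmt) == SQLITE_ROW) {
                  int i, n = sqlite3_column_count(stmt);
                  for (i = 0; i < n; i++) {
                      const char *v = (const char *)sqlite3_column_text(stmt, i);
                      printf("%s ", v ? v : "");
                  }
                  printf("\n");
              }
              sqlite3_finalize(stmt);
          }
          sqlite3_free(eqp);
      }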

   -j
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
