Hi Anish,

when a database hits the disk, there's not much you can
do about it. You can increase the memory of your system so
that the entire database fits into memory. If the database
is "cold" (the system has just started and the database is not
yet in the file system cache), you can read the entire database
file once to pull it into the cache. Unfortunately, this does
not help much if the database is too big to fit into memory.

Another trick that could work in some cases: if you know you
have 50,000 requests and you know the order in which the data
is stored in the database, you can sort the requests into the
order in which the rows are physically stored before you access
the database. Normally you would not know the exact order, but
if you run a VACUUM on the database, you know the data is
ordered by ROWID. That only works in cases where you query the
data by ROWID, though.
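
A minimal sketch of the cache-warming idea, in C; the file name
"test.db" and the buffer size are placeholders, not anything from
your setup:

    #include <stdio.h>

    /* Read the whole database file once so the operating system
       pulls every page into the file system cache. */
    static void warm_cache(const char *path)
    {
        char buf[64 * 1024];
        FILE *f = fopen(path, "rb");
        if (f == NULL)
            return;
        while (fread(buf, 1, sizeof buf, f) > 0)
            ;  /* discard the data; filling the cache is the point */
        fclose(f);
    }

    /* e.g. warm_cache("test.db"); before the first query */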
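
And a sketch of the sorted-access trick: sort the requested
ROWIDs ascending, then fetch in that order so that (after a
VACUUM) the disk is read mostly sequentially. The table and
column names ("data", "seq_number", "payload") are assumptions,
not your actual schema:

    #include <stdlib.h>
    #include <sqlite3.h>

    static int cmp_rowid(const void *a, const void *b)
    {
        sqlite3_int64 x = *(const sqlite3_int64 *)a;
        sqlite3_int64 y = *(const sqlite3_int64 *)b;
        return (x > y) - (x < y);
    }

    static void fetch_sorted(sqlite3 *db, sqlite3_int64 *ids, size_t n)
    {
        sqlite3_stmt *stmt;
        size_t i;
        qsort(ids, n, sizeof ids[0], cmp_rowid);  /* physical order */
        sqlite3_prepare_v2(db,
            "SELECT payload FROM data WHERE seq_number = ?",
            -1, &stmt, NULL);
        for (i = 0; i < n; i++) {
            sqlite3_bind_int64(stmt, 1, ids[i]);
            while (sqlite3_step(stmt) == SQLITE_ROW) {
                /* use sqlite3_column_blob(stmt, 0) here */
            }
            sqlite3_reset(stmt);
        }
        sqlite3_finalize(stmt);
    }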

In general, SQLite is great as long as all data fits into
memory. I have no comparison with other database systems as to
how they perform once you hit the disk.

Michael


Anish Enos Mathew wrote:
Hi Michael,
I came to know that increasing the page size would help
performance, so I used PRAGMA to set the page size to 32768
with the command

    sqlite3_exec(db, "PRAGMA page_size = 32768", NULL, NULL, NULL);

Still the result is the same: a time of 110 sec for 50,000
retrievals. Can you suggest a method by which the retrieval
performance can be increased?
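
One thing worth checking: PRAGMA page_size is silently ignored
on a database file that already contains data; it only takes
effect when issued on a brand-new database before the first
table is created (or, in newer SQLite versions, when followed
by a VACUUM). A minimal sketch, with placeholder file and table
names:

    #include <sqlite3.h>

    int create_db(void)
    {
        sqlite3 *db;
        /* the file must not exist yet for the pragma to matter */
        if (sqlite3_open("test.db", &db) != SQLITE_OK)
            return 1;
        sqlite3_exec(db, "PRAGMA page_size = 32768",
                     NULL, NULL, NULL);
        /* the first write fixes the page size for the life
           of the file */
        sqlite3_exec(db, "CREATE TABLE data ("
                         "seq_number INTEGER PRIMARY KEY, "
                         "payload BLOB)", NULL, NULL, NULL);
        sqlite3_close(db);
        return 0;
    }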

-----Original Message-----
From: Michael Sizaki [mailto:[EMAIL PROTECTED]

Sent: Saturday, May 20, 2006 12:00 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] High retrieval time. Please help

Anish,

So my problem of retrieving 1,000,000 random records 1,000,000
times works fine for 15-byte records, but it is taking too long
for 1 kB records: almost 102 seconds for retrieving 50,000
records of size 1 kB. Can you suggest a way to reduce the time
taken? I haven't made any changes in my program; the only change
I made was adding a primary key to the seq_number column in my
data insertion program.

My guess is that with 15-byte records your operating system
keeps the entire database in the file cache and no real IO is
done. When you use 1 kB records, the entire database no longer
fits into memory (1,000,000 rows of 1 kB is roughly 1 GB,
against about 15 MB for the 15-byte rows), and therefore real
IO is done, which slows database access down dramatically. I
guess if you watch the CPU usage in the 15-byte case it's close
to 100%, and in the 1 kB case it's very low (and you have a lot
of disk access).
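
A rough way to confirm this: compare CPU time against
wall-clock time around the retrieval loop. If the wall time is
much larger than the CPU time, the process is mostly waiting on
the disk. run_queries() below is a hypothetical stand-in for
the actual 50,000-retrieval loop:

    #include <stdio.h>
    #include <time.h>

    void run_queries(void);  /* hypothetical: the retrieval loop */

    int main(void)
    {
        clock_t c0 = clock();     /* CPU time used by the process */
        time_t  w0 = time(NULL);  /* wall-clock time */
        run_queries();
        printf("cpu %.1fs, wall %.1fs\n",
               (double)(clock() - c0) / CLOCKS_PER_SEC,
               difftime(time(NULL), w0));
        return 0;
    }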

Michael




