Nuno, unfortunately your psychic skills are a bit off on this one.  Sorry I 
wasn't more explicit.  I am not using any LIMIT or OFFSET to do any virtual 
scrolling.  Basically I have table A, which has 900 rows, and table B, which 
has 180,000 rows (900 * 200) and a foreign key relationship back to table A.  
So for each row in table A, there are 200 rows in table B.  My query is 
essentially "SELECT * FROM TableB WHERE ID = TableA.ID", and I'm executing it 
900 times, once for each row in table A.
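
To make that concrete, each iteration does roughly the following (simplified -- 
the real code copies every record into an array inside the step loop, and I've 
glossed over whether the statement is re-prepared each time or reset and 
reused between iterations):

    #include <sqlite3.h>

    /* Sketch only: read the ~200 TableB rows belonging to one TableA row.
       Called once per TableA row, 900 times in total. */
    static int read_child_rows(sqlite3 *db, sqlite3_int64 table_a_id)
    {
        sqlite3_stmt *stmt = NULL;
        int rc = sqlite3_prepare_v2(db,
            "SELECT * FROM TableB WHERE ID = ?", -1, &stmt, NULL);
        if (rc != SQLITE_OK)
            return rc;

        sqlite3_bind_int64(stmt, 1, table_a_id);

        while ((rc = sqlite3_step(stmt)) == SQLITE_ROW) {
            /* copy the current record into the in-memory array */
        }

        sqlite3_finalize(stmt);
        return (rc == SQLITE_DONE) ? SQLITE_OK : rc;
    }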

When I start the 900 read iterations (always in the same order), the first one 
generally completes in about 50ms, and by the last read it's taking roughly 
1000ms.  Sometimes it slows down immediately, sometimes not until around the 
100th iteration.  The only absolutely reproducible aspect is that it always 
slows down eventually, and once it does, it never speeds back up.  I don't 
believe it's a locking issue since my timer doesn't start until the query is 
successfully executed.
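
In terms of the sketch above, the timer only wraps the step/copy loop; 
preparing the statement is outside the measured region.  Roughly this, with 
now_ms() standing in for whatever clock the real code uses:

    #include <sqlite3.h>
    #include <time.h>

    /* Stand-in millisecond clock, used only for this illustration. */
    static double now_ms(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1000.0 + ts.tv_nsec / 1.0e6;
    }

    /* Time only the sqlite3_step()/copy loop for an already-prepared,
       already-bound statement. */
    static double timed_step_loop(sqlite3_stmt *stmt)
    {
        double t0 = now_ms();
        while (sqlite3_step(stmt) == SQLITE_ROW) {
            /* copy the current record into the array */
        }
        return now_ms() - t0;   /* ~50 ms early on, ~1000 ms by the end */
    }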

Any ideas?  Would the occasional write operation in the midst of these reads 
cause any permanent slowdown in the read time?  Thanks.

Mike Borland

-----Original Message-----
From: Nuno Lucas [mailto:ntlu...@gmail.com] 
Sent: Friday, June 12, 2009 7:16 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] sqlite3_step performance degradation

On Sat, Jun 13, 2009 at 1:52 AM, Mike Borland <mike.borl...@cygnetscada.com> wrote:
> I have a fairly complex program where I am seeing a performance
> degradation of the sqlite3_step() function.  Basically I'm iterating
> roughly 200 rows at a time, over and over.  The only work happening
> during the iteration is I'm copying the record into an array.  At first,
> sqlite3_step() takes less than a millisecond to run.  After 0-50
> iterations, it's taking anywhere from 10-100ms.
>
> Does anybody have any insight into what's happening behind the scenes
> with this function to help me track down the cause?  I appreciate it!

You should explicitly say what your SQL query is. Without that we can
only guess.

My current "psychic" guess is that you are using LIMIT to obtain those
200 rows, one "page" at a time, and as you go advancing "pages" it
becomes slower and slower.
If this is true, then you should re-think your design as LIMIT just
skips the rows, but it will  "generate" them before, meaning it
becomes slower as you advance on the offset given.
Look at http://www.sqlite.org/cvstrac/wiki?p=ScrollingCursor (and
notice the "What not to do" at the end, talking about "LIMIT" and
"OFFSET").

If my psychic abilities are becoming weak, then please supply your exact
query that is getting slower (and maybe your database schema) and then
someone can give you an exact answer.


Regards,
~Nuno Lucas

>
> Mike Borland
>
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users