Hi --

Okay.  I've finally gotten my execution time down from 30 minutes to 50 seconds
in the preliminary one-time insertion-heavy scenario, and from 20 minutes to 2
minutes in the subsequent read-heavy scenario.  22 minutes of the speedup in the
first scenario came from indexing a key field.  It turns out it wasn't the
SELECTs that were causing the problem, as I had thought; their execution time
remained the same whether the field was indexed or not.  It was rather a COUNT
on the field, done to derive an order number for maintaining ordered collections
of subsets of these rows in the application.  Then, of the remaining 8 minutes,
I eliminated 7 by getting rid of the SELECTs altogether; I had been checking
whether an entry was already there before inserting it, so as not to insert it
twice.  That check was deemed not valuable enough to warrant the roughly 7
minutes it was consuming, since we're basically iterating through a bunch of
objects once to store these rows in the first place.
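
For context, the change amounts to roughly the following (the table and column
names here are placeholders, not our real schema):

    -- old pattern: one probe per object, then an insert only if absent
    --   SELECT 1 FROM items WHERE key_field = ?;
    --   INSERT INTO items (key_field, payload) VALUES (?, ?);

    -- new pattern: just insert; if the duplicate protection ever matters
    -- again, a UNIQUE index on key_field plus INSERT OR IGNORE should give
    -- the same guarantee without the extra SELECT round trip per object
    INSERT INTO items (key_field, payload) VALUES (?, ?);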

Yet the following oddity still remains:  when we create the index
programmatically before adding the rows, it doesn't seem to have any effect.
Both interactively in the Firefox tool and programmatically from the second
phase of my application, a select takes about 18ms on average -- the same
amount of time as without the index being there at all.  Only when I manually
drop and recreate the index from within the Firefox tool does the index kick in
and reduce select time to about 1ms.  I would have expected that with the index
definition in place, the index would be maintained as rows were added, so that
by the time we finished adding 100,000 rows we would have a nicely indexed
table with 1ms select times.  Unfortunately, the portion of the program that
adds the rows shouldn't have to know anything about indexing, so it isn't
really possible to create the index programmatically after we add the rows.
We really need it to be part of the original schema definition and just work
correctly from the start.
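
For what it's worth, here is roughly what I mean by having the index be part of
the schema, plus the checks I can run from the Firefox tool or from code (again
with placeholder names standing in for our real tables):

    -- index declared up front, before the bulk insert of ~100,000 rows
    CREATE TABLE items (id INTEGER PRIMARY KEY, key_field TEXT, payload TEXT);
    CREATE INDEX idx_items_key ON items(key_field);

    -- after loading, this should report a search using idx_items_key
    -- rather than a full table scan
    EXPLAIN QUERY PLAN SELECT payload FROM items WHERE key_field = 'some-key';

    -- refreshing the planner's statistics is another thing I could try
    ANALYZE;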

Do you have any thoughts on this?  Am I correct in thinking this is not the
desired behavior?  Or is there perhaps a setting I can tweak to change it?

Thanks much for any further help you can give (although you've probably already
gone above and beyond the call of duty).

Have a great weekend.

Mike 

-----Original Message-----
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Richard Hipp
Sent: Friday, October 25, 2013 10:13 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Trying to figure out how to circumvent 
sqlite3_win32_mbcs_to_utf8

On Fri, Oct 25, 2013 at 10:05 AM, Mike Clagett
<mike.clag...@mathworks.com> wrote:

> I believe the code that I use to insert a record may somehow check to 
> see if it's already there first, which might explain why the index 
> would have improved performance.
>

Yes, that would definitely explain a lot.


--
D. Richard Hipp
d...@sqlite.org
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users