On 10 Sep 2013, at 4:15pm, Harmen de Jong - CoachR Group B.V. 
<har...@coachr.com> wrote:

> That is something we suspected too. We already ran some tests in which we 
> measured the time taken by all memory allocations executed in the entire 
> operation. In total, for the 10000-tables test, this was somewhere around 
> 25 msec. Since this is only a small overhead, and the instruction count, as 
> you point out, grows linearly, it still does not explain the polynomial 
> increase in preparation time.

Polynomial time would be explained if the software is scanning every item in a 
list of lists for each table mentioned.  So my guess at what is happening is 
that for each DELETE FROM statement SQLite has to look up the entry being 
deleted, and then check that no item in any of the other tables refers to it.
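
To put rough numbers on that guess: if preparing each of the N DELETE 
statements has to walk schema entries for all N tables, the total work is on 
the order of N * N = N^2 steps.  At N = 10000 that is about 10^8 entry 
visits, which would dwarf the roughly 25 msec of linear allocation overhead 
you measured, and is consistent with polynomial growth in preparation time.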

This makes sense given how FOREIGN KEYs are implemented in SQLite: there is no 
master 'check-for-changes' table, so every table that carries the foreign key 
has to be scanned.  It is irrelevant that none of these tables contains any 
entries: the table (actually, the index) still has to be found before SQLite 
can establish that it is empty.  And because SQLite hashes table names rather 
than scanning one simple list, it ends up scanning a list of lists.  Hence, 
polynomial time.
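
For anyone who wants to see the effect directly, here is a minimal sketch 
(not the original poster's test; the table and column names are made up) 
that builds one parent table plus N child tables, each declaring a FOREIGN 
KEY back to the parent, and then times the preparation of a single DELETE on 
the parent.  If that one prepare grows linearly with N, preparing one DELETE 
per table grows quadratically.  Compile with: gcc repro.c -lsqlite3

#include <sqlite3.h>
#include <stdio.h>
#include <time.h>

int main(void) {
    sqlite3 *db;
    sqlite3_stmt *stmt;
    char sql[256];
    int n = 10000;   /* table count from the test described in the thread */

    /* Error checking omitted to keep the sketch short. */
    sqlite3_open(":memory:", &db);
    sqlite3_exec(db, "PRAGMA foreign_keys=ON;", 0, 0, 0);
    sqlite3_exec(db, "CREATE TABLE parent(id INTEGER PRIMARY KEY);", 0, 0, 0);

    /* Wrap the schema build in a transaction so setup stays fast. */
    sqlite3_exec(db, "BEGIN;", 0, 0, 0);
    for (int i = 0; i < n; i++) {
        snprintf(sql, sizeof sql,
                 "CREATE TABLE child%d(id INTEGER PRIMARY KEY,"
                 " pid REFERENCES parent(id));", i);
        sqlite3_exec(db, sql, 0, 0, 0);
    }
    sqlite3_exec(db, "COMMIT;", 0, 0, 0);

    /* Time only statement preparation: this is where the thread reports
     * the polynomial growth as the table count increases. */
    clock_t t0 = clock();
    sqlite3_prepare_v2(db, "DELETE FROM parent WHERE id=1;", -1, &stmt, 0);
    clock_t t1 = clock();

    printf("prepare with %d child tables: %.3f msec\n",
           n, 1000.0 * (t1 - t0) / CLOCKS_PER_SEC);

    sqlite3_finalize(stmt);
    sqlite3_close(db);
    return 0;
}

Running it with n = 1000, 2000, 4000, ... should show whether the single 
prepare scales with the number of child tables.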

I see that E. Pasma has posted that the long scanning time has been fixed in a 
later release.  Good.

Simon.