I noticed SQLite is using a lot of memory (up to 300MB before it hits a
data segment size ulimit and fails) during a delete operation.  This is
reproducible using the 'sqlite3' executable distributed with SQLite.  My
platform is SQLite 3.6.13 on Solaris 10 (SPARC), but the problem is
reproducible on Linux as well if I copy the database file to a Linux
machine.  The relevant table schema is as follows:

  CREATE TABLE old_conns (a TEXT, b INTEGER, c TEXT, d TEXT, e TEXT,
                          start INTEGER, end INTEGER, f INTEGER,
                          g INTEGER, h INTEGER, i INTEGER);
  CREATE INDEX end_idx ON old_conns ( end );

The delete operation fails as follows:

  sqlite> DELETE FROM old_conns WHERE end < strftime('%s', 'now', '-7 days');
  SQL error: out of memory

I would have expected the memory used by a delete operation to be small,
but maybe some large temporary table is being built in memory for this
operation?  Perhaps the fact that it has to update the index along with
the delete is causing memory usage it wouldn't otherwise incur if there
were no index?  It still fails if I set "PRAGMA temp_store = 1", which I
believe instructs SQLite to put temporary tables on disk instead of in
memory.
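
For reference, this is how I set and checked the pragma in the shell
(the documented values are 0 = DEFAULT, 1 = FILE, 2 = MEMORY):

  sqlite> PRAGMA temp_store = 1;   -- 1 = FILE: temporary tables on disk
  sqlite> PRAGMA temp_store;
  1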

The database file, as seen via 'ls -al', is about 8GB in size (although
I've never vacuumed it), and the 'old_conns' table holds about 68 million
rows.  I expect roughly 50% of them would be deleted by the above
statement, but I have yet to see it succeed.
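
The only workaround I can think of is to delete in chunks from a driving
script, on the (unverified) assumption that the memory growth scales with
the number of rows touched per statement.  A rough, untested sketch, with
an arbitrary chunk size:

  DELETE FROM old_conns
   WHERE rowid IN (SELECT rowid FROM old_conns
                    WHERE end < strftime('%s', 'now', '-7 days')
                    LIMIT 100000);   -- chunk size is a guess
  SELECT changes();  -- loop from the script while this is > 0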

Is there any obvious explanation for this?  Any administrative controls I
can use to prevent it from happening?

Thanks.
-- 
Benjamin Rutt