Hi,

Are there any circumstances in which a vacuum could corrupt a database?

I am storing about 300000 images in an SQLite database, where each row
contains a couple of integers and a BLOB of data (average size 80Kb).

There are some indices on the integer columns, and the final database is
around 25Gb on disk. This is a lot of data, but the database is read-only
after creation and performance is excellent.
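For reference, the schema is essentially as follows (simplified, with
illustrative column names):

  ----------------------------------
  -- Simplified schema: a couple of integer columns plus an ~80Kb BLOB
  -- per row, with indices on the integer columns.
  CREATE TABLE images (
      id      INTEGER PRIMARY KEY,
      width   INTEGER,
      height  INTEGER,
      data    BLOB
  );
  CREATE INDEX images_width_idx  ON images(width);
  CREATE INDEX images_height_idx ON images(height);
  ----------------------------------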


Prior to performing a vacuum, the database functions perfectly - selects
return the correct data.

Performing a vacuum takes a while, reduces the size of the database, and
does not return any errors.

After the vacuum, selects trigger sqlite3Corrupt() and sqlite3 returns "SQL
error: database disk image is malformed".


Running an integrity check immediately after the vacuum shows:

  ----------------------------------
  sqlite> pragma integrity_check;
  *** in database main ***
  Page 4198007: sqlite3BtreeInitPage() returns error code 11
  On tree page 3597839 cell 61: Child page depth differs
  Page 4214662: sqlite3BtreeInitPage() returns error code 11
  [... 36 similar lines ...]
  On tree page 4 cell 2: Child page depth differs
  Page 5082741: sqlite3BtreeInitPage() returns error code 11
  On tree page 4 cell 3: Child page depth differs
  Page 5945673: sqlite3BtreeInitPage() returns error code 11
  [... 57 similar lines ...]
  sqlite> 
  ----------------------------------

I don't think I am near any of the limits mentioned on limits.html, and I am
using pragma temp_store_directory to make sure temporary files are going to
a disk with several hundred Gb free (monitoring the attached disks shows
that only that disk is touched during the vacuum, so I don't think I am
running out of space).
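For the record, the vacuum is being run from the sqlite3 shell along these
lines (the path is illustrative):

  ----------------------------------
  sqlite> -- send temporary files to a disk with plenty of free space
  sqlite> PRAGMA temp_store_directory = '/scratch/sqlite_tmp';
  sqlite> VACUUM;
  ----------------------------------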

Normally we vacuum so that we can increase the page size to 32Kb, but
omitting that step and going straight from a working database with the
default page size (1Kb) to a vacuumed database with the same page size also
corrupts the file.

The first problem listed is on page 4198007, which with 1Kb pages is a few
thousand pages past the 4Gb mark (4Gb / 1Kb = 4194304 pages, so page 4198007
is roughly 3700 pages beyond it) - which may just be a coincidence, as I
didn't think there was a 4Gb limit.


Is there any reason this operation should fail, like a 4Gb ceiling on some
operations?

If not, I will try to create a test case to reproduce it, but I was
wondering if anyone had seen something similar before.
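For what it's worth, the test case I have in mind would be along these lines
(randomblob() standing in for the real image data, with a small driver
script doing the ~300000 inserts):

  ----------------------------------
  sqlite> CREATE TABLE images (id INTEGER PRIMARY KEY,
     ...>     width INTEGER, height INTEGER, data BLOB);
  sqlite> CREATE INDEX images_width_idx ON images(width);
  sqlite> -- repeated ~300000 times from a driver script:
  sqlite> INSERT INTO images(width, height, data)
     ...>     VALUES (abs(random()) % 4096, abs(random()) % 4096,
     ...>             randomblob(80 * 1024));
  sqlite> VACUUM;
  sqlite> PRAGMA integrity_check;
  ----------------------------------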


-dair
___________________________________________________
d...@refnum.com              http://www.refnum.com/

