--- Jim Correia <[EMAIL PROTECTED]> wrote:

> I notice that SQLite 3.4.0 and later impose hard limits on some
> sizes. I'm running into a problem where a .dump/.load cycle fails on
> a database with columns that have blobs which are about 2MB in size.
>
> Looking at the source for 3.5.3 (I can't find a tarball of 3.4 on the
> web site, but I'm using 3.4 since that is what ships on Mac OS X 10.5),
> I see:
>
> /*
> ** The maximum length of a TEXT or BLOB in bytes. This also
> ** limits the size of a row in a table or index.
> **
> ** The hard limit is the ability of a 32-bit signed integer
> ** to count the size: 2^31-1 or 2147483647.
> */
> #ifndef SQLITE_MAX_LENGTH
> # define SQLITE_MAX_LENGTH 1000000000
> #endif
>
> and, more importantly:
>
> /*
> ** The maximum length of a single SQL statement in bytes.
> ** The hard limit here is the same as SQLITE_MAX_LENGTH.
> */
> #ifndef SQLITE_MAX_SQL_LENGTH
> # define SQLITE_MAX_SQL_LENGTH 1000000
> #endif
>
> Is the comment wrong, or the source? The value is not the same as
> SQLITE_MAX_LENGTH; it is in fact much smaller.
>
> If this is intentional, what is the recommended replacement
> for .dump/.load for large rows?
You have to recompile with a larger value for SQLITE_MAX_SQL_LENGTH, via a compiler -D flag or other means. Monotone ran into the same issue when dumping/restoring databases with large BLOBs:

http://lists.gnu.org/archive/html/monotone-devel/2007-09/msg00246.html

I think the default value is too small, but as long as you're able to compile and use your own copy of the library, it's not much trouble.
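For example, a build of the amalgamation with the limit raised might look like this (a sketch; the exact value and source layout are up to you — 2147483647, i.e. 2^31-1, is the hard ceiling the source comment mentions):

```shell
# Rebuild SQLite with a larger single-statement limit so that
# .dump output containing multi-megabyte BLOB literals can be
# re-read. The value below is the 2^31-1 hard ceiling; any value
# large enough for your dump lines will do.
gcc -DSQLITE_MAX_SQL_LENGTH=2147483647 -O2 -c sqlite3.c -o sqlite3.o
```

The same -D flag can be added to CFLAGS when building the sqlite3 shell itself, which is what actually parses the .dump output.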