Tom Poindexter wrote:
On Fri, Dec 12, 2003 at 12:01:04AM +0000, Tom Poindexter wrote:

I'm trying to load a fairly large table (~7.5 million rows, 1.5 GB of raw
data) with the 'copy' command, and I'm getting a fatal SQLite error:


I think large-file support didn't get compiled into my build; I confirmed
with:
        nm os.o | grep open

and saw that the object didn't use the open64 system call.  AIX apparently
uses '_LARGE_FILES' as the define to turn on 64-bit I/O.  I've got a
new build cranking and will check results in the morning.
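
A quick way to double-check the effective offset width is a one-line test
program (a sketch, assuming a typical cc; the file name is made up):

        /* lfs_check.c: prints the size of off_t for this build.
           Compile once plain and once with -D_LARGE_FILES and compare;
           8 means 64-bit file offsets, 4 means 32-bit. */
        #include <stdio.h>
        #include <sys/types.h>

        int main(void) {
            printf("sizeof(off_t) = %d\n", (int)sizeof(off_t));
            return 0;
        }

For example:

        cc -D_LARGE_FILES lfs_check.c -o lfs_check && ./lfs_check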


I hope this fixes the problem. But I still think there is a bug, since the library should have returned SQLITE_FULL, not SQLITE_CORRUPT.

No, wait.  I remember seeing something like this before.  If things are
compiled so that large-file support is only half-way enabled, you can
get into a situation where "off_t" is defined as a 32-bit integer instead
of a 64-bit integer.  Then when you write past 4GB, the OS does not complain
(thus no SQLITE_FULL error), but the off_t wraps around and you start
overwriting the first part of the file.  This leads quickly to
SQLITE_CORRUPT.  I should come up with an assert() to detect this
problem....
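
One minimal sketch of such a check (not actual SQLite code; the typedef
name here is invented) is a compile-time size test:

        #include <sys/types.h>

        /* Compilation fails (negative array size) if off_t is
           narrower than 64 bits, catching the half-enabled
           large-file case before any wrap-around can occur. */
        typedef char off_t_must_be_64_bits[sizeof(off_t) >= 8 ? 1 : -1];

A runtime assert( sizeof(off_t)>=8 ) would do the same job, but only in
builds where asserts are enabled, which raises the next question.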

BTW, did you compile with -DNDEBUG? Are assert()s disabled?
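
(When NDEBUG is defined, <assert.h> compiles every assert() away to
nothing, so a corruption-detecting assert would be silently skipped.
Roughly:)

        /* effectively what <assert.h> does under -DNDEBUG */
        #ifdef NDEBUG
        #define assert(ignore) ((void)0)
        #endif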

--
D. Richard Hipp -- [EMAIL PROTECTED] -- 704.948.4565

