On Aug 10, 2010, at 11:55 AM, Victor Morales-Duarte wrote:

> As it turns out, I can reproduce the failure using a single huge
> insert. The code that I'm including below compiles under bcc32 from
> Embarcadero's C++ Builder 2007 and cl from vs2005. Since it's more
> likely that people have MS compilers available, the compilation line,
> with this source file and the sqlite3 3.7.0 amalgamation files in
> the same folder, is:
>
> cl -EHsc -Fefail.exe main.cpp sqlite3.c
>
> You can then invoke fail.exe with a single command-line argument of
> 80000000, like this:
>
> fail.exe 80000000
>
> The source for the executable is listed below. If you're wondering
> why the numbers being inserted are more complicated than they need
> to be, it's because I wanted the table and indices to look as much
> as possible like the actual data that our application stores in
> sqlite; I had not yet realized that the failure could be reproduced
> by simply inserting. Beware that there is no handling of incorrect
> command-line arguments.
>
> If you monitor this executable with perfmon and watch its virtual
> bytes, you'll see them hit 2GB, and the next time the insert
> statement is stepped, it fails with a disk I/O error.
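
For reference, a minimal sketch of that kind of program (single
connection, WAL mode, one huge insert transaction) might look like
the following. The schema and values are placeholders rather than
Victor's actual source; the page and cache sizes match the figures
discussed below.

/* Minimal sketch, not the original source: insert N rows inside one
** huge transaction and report where sqlite3_step() first fails. */
#include <stdio.h>
#include <stdlib.h>
#include "sqlite3.h"

int main(int argc, char **argv)
{
    long long n = atoll(argv[1]);   /* e.g. 80000000; no argument checks */
    long long i;
    sqlite3 *db;
    sqlite3_stmt *stmt;

    sqlite3_open("test.db", &db);
    sqlite3_exec(db, "PRAGMA page_size=4096;", 0, 0, 0);
    sqlite3_exec(db, "PRAGMA cache_size=20000;", 0, 0, 0);
    sqlite3_exec(db, "PRAGMA journal_mode=WAL;", 0, 0, 0);
    sqlite3_exec(db, "CREATE TABLE t(a INTEGER, b REAL);", 0, 0, 0);
    sqlite3_exec(db, "CREATE INDEX t_a ON t(a);", 0, 0, 0);
    sqlite3_exec(db, "BEGIN;", 0, 0, 0);
    sqlite3_prepare_v2(db, "INSERT INTO t VALUES(?,?)", -1, &stmt, 0);

    for (i = 0; i < n; i++) {
        sqlite3_bind_int64(stmt, 1, i);
        sqlite3_bind_double(stmt, 2, i * 0.5);
        if (sqlite3_step(stmt) != SQLITE_DONE) {
            fprintf(stderr, "row %lld: %s\n", i, sqlite3_errmsg(db));
            return 1;
        }
        sqlite3_reset(stmt);
    }

    sqlite3_exec(db, "COMMIT;", 0, 0, 0);
    sqlite3_finalize(stmt);
    sqlite3_close(db);
    return 0;
}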

When this happens, how large are the "test.db-wal" and "test.db-shm"
files on disk?

Here, using g++/Linux, the test.db-shm file (the one memory-mapped
into the process's address space) grows to about 24 MB, and the
test.db-wal file grows to about 12 GB. This is as expected: the *-shm
file requires 8 bytes of space for each page written into the *-wal
log, so with your 4KB pages the *-wal file should be roughly 512
times the size of the *-shm. (24 MB of *-shm indexes about 3 million
WAL frames, and 3 million 4KB pages is roughly 12 GB.)

When the transaction is committed, the database is checkpointed.
In the checkpoint code there is a big heap memory allocation, say
2.5 bytes for each page in the WAL, so in this case maybe 6-7MB.
It's not ideal to be making allocations this big, but it shouldn't
cause any trouble for a desktop PC.
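
A checkpoint can also be requested explicitly once a transaction has
committed; a sketch using the API available in 3.7.0 (db being the
open connection from the sketch above):

/* Sketch: explicitly checkpoint after a COMMIT, rather than relying
** on the automatic checkpoint. NULL means all attached databases. */
if (sqlite3_wal_checkpoint(db, NULL) != SQLITE_OK)
    fprintf(stderr, "checkpoint failed: %s\n", sqlite3_errmsg(db));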

Memory usage here peaks at around 130MB. That's the 85MB of
configured cache space (20,000 * 4KB pages, plus per-page overhead),
plus the ~24MB mapping of the test.db-shm file, plus the big
allocation made during the checkpoint. Plus something else, I
suppose.

> If you break up the insert into chunks
> _and_close_the_connection_between_chunks_, then the error does
> not occur.

Does this imply that if you add an sqlite3_exec("COMMIT; BEGIN;")
every 10,000 inserts, without closing the connection, the program
still does not run to completion?
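
In other words, inside the insert loop of a program like the sketch
above, something like:

/* Sketch: commit and immediately restart the transaction every
** 10,000 rows, keeping the connection open the whole time. Each
** COMMIT gives the automatic checkpoint a chance to run and reset
** the WAL, so the *-wal and *-shm files stay small. */
if (i % 10000 == 9999) {
    sqlite3_exec(db, "COMMIT; BEGIN;", 0, 0, 0);
}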

Dan.
