[sqlite] Problem with incremental_vacuum and WAL

2011-04-05 Thread Filip Navara
Hello,

we are having a problem with a database that originated on the computer of
one of our customers.

The database is used in WAL mode with auto_vacuum=1 and page_size=1024.

When running the pragma incremental_vacuum(1); command, the WAL file
grows to 14 MB, while we would expect it to grow only to about 4 KB (1 KB
per page plus some overhead). This causes the transaction to run for a much
longer time than expected, and eventually the WAL file grows to several
gigabytes when we try to run incremental_vacuum for 4096 pages.
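
For reference, this is roughly how the database is configured and how the
vacuum is invoked. A minimal C sketch against a scratch file ("repro.db" is
just an example name), not our real schema:

#include <stdio.h>
#include <sqlite3.h>

int main(void) {
  sqlite3 *db;
  char *err = 0;

  if (sqlite3_open("repro.db", &db) != SQLITE_OK) return 1;
  /* page_size and auto_vacuum must be set before the database is populated */
  sqlite3_exec(db, "PRAGMA page_size=1024;", 0, 0, &err);
  sqlite3_exec(db, "PRAGMA auto_vacuum=1;", 0, 0, &err);
  sqlite3_exec(db, "PRAGMA journal_mode=WAL;", 0, 0, &err);

  /* ... create tables, insert and delete data so a freelist builds up ... */

  /* This is the statement that makes the WAL file balloon for us: */
  if (sqlite3_exec(db, "PRAGMA incremental_vacuum(1);", 0, 0, &err) != SQLITE_OK) {
    fprintf(stderr, "incremental_vacuum failed: %s\n", err);
    sqlite3_free(err);
  }
  sqlite3_close(db);
  return 0;
}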

Additional facts:
- The database was created with SQLite 3.7.5 running on Windows
- pragma integrity_check; reports ok
- Some freelist trunk pages contain weird values in the number-of-entries
field

I have attached most of the dumps that I could create with the regular
SQLite tools (showdb dbheader, sqlite3_analyzer output, beginning of
the free list dump). I'm willing to send the whole database file
(~5Gb) to sqlite developers on request.

My questions are:
- Is it expected behavior that pragma incremental_vacuum(1); can
create a 14 MB WAL file even though the page size is just 1 KB?
- If not, is it a known bug? If so, under what circumstances?
- Is the free list corrupted? And if it is, how could this happen, and
should pragma integrity_check; verify it?

Best regards,
Filip Navara

-- showdb mail_data.dat dbheader --

Pagesize: 1024
Available pages: 1..1389458
 000: 53 51 4c 69 74 65 20 66 6f 72 6d 61 74 20 33 00 SQLite format 3.
 010: 04 00 02 02 00 40 20 20 00 00 00 2f 00 55 33 92 .@  .../.U3.
 020: 00 4b 47 99 00 54 53 34 00 00 00 01 00 00 00 04 .KG..TS4
 030: 00 00 00 00 00 00 00 04 00 00 00 01 00 00 00 00 
 040: 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00 00 
 050: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 2f .../
 060: 00 2d e2 1d 00  .-...
Decoded:
 010: 04 00  1024  Database page size
 012: 02  2  File format write version
 013: 02  2  File format read version
 014: 00  0  Reserved space at end of page
 018: 00 00 00 2f  47  File change counter
 01c: 00 55 33 92 5583762  Size of database in pages
 020: 00 4b 47 99 4933529  Page number of first freelist page
 024: 00 54 53 34 5526324  Number of freelist pages
 028: 00 00 00 01   1  Schema cookie
 02c: 00 00 00 04   4  Schema format version
 030: 00 00 00 00   0  Default page cache size
 034: 00 00 00 04   4  Largest auto-vac root page
 038: 00 00 00 01   1  Text encoding
 03c: 00 00 00 00   0  User version
 040: 00 00 00 01   1  Incremental-vacuum mode
 044: 00 00 00 00   0  meta[7]
 048: 00 00 00 00   0  meta[8]
 04c: 00 00 00 00   0  meta[9]
 050: 00 00 00 00   0  meta[10]
 054: 00 00 00 00   0  meta[11]
 058: 00 00 00 00   0  meta[12]
 05c: 00 00 00 2f  47  Change counter for version number
 060: 00 2d e2 1d 3007005  SQLite version number

-- showdb mail_data.dat 4933529t (truncated) --

Pagesize: 1024
Available pages: 1..1389458
Decode of freelist trunk page 4933529:
 000: 00 0b 47 9a  739226  Next freelist trunk page
 004: b5 b7 bc b5   -1246249803  Number of entries on this page
Decode of freelist trunk page 739226:
 000: 00 0b 47 9b  739227  Next freelist trunk page
 004: 9f 65 86 9f   -1620736353  Number of entries on this page
Decode of freelist trunk page 739227:
 000: 00 0b 47 9c  739228  Next freelist trunk page
 004: d9 ed ea d9   -638719271  Number of entries on this page
Decode of freelist trunk page 739228:
 000: 00 0b 47 9d  739229  Next freelist trunk page
 004: d9 ed ea d9   -638719271  Number of entries on this page
Decode of freelist trunk page 739229:
 000: 00 0b 47 9e  739230  Next freelist trunk page
 004: db f0 ed db   -604967461  Number of entries on this page
Decode of freelist trunk page 739230:
 000: 00 0b 47 9f  739231  Next freelist trunk page
 004: db f0 ed db   -604967461  Number of entries on this page
Decode of freelist trunk page 739231:
 000: 00 0b 47 a1  739233  Next freelist trunk page
 004: db f0 ed db   -604967461  Number of entries on this page
Decode of freelist trunk page 739233:
 000: 00 0b 47 a2  739234  Next freelist trunk page
 004: d8 ec e9 d8   -655562280  Number of entries on this page
Decode of freelist trunk page 739234:
 000: 00 0b 47 a3  739235  Next freelist trunk page
 004: d8 ec e9 d8   -655562280  Number of entries on this page
...
Decode of freelist trunk page 739463:
 000: 00 0b 48 88  739464  Next freelist trunk page
 004: e9 d8 ec e9   -371659543  Number of entries on this page
Decode of freelist trunk page 739464:
 000: 00 0b 47 8d  739213  Next freelist trunk page
 004: 00 00 00 f8 248  Number of entries on this page
Decode of freelist trunk page 739213:
 000: 00 0b 46 93  738963  Next freelist trunk page
 004: 00 00 00 f8 248  Number of entries on this page

Re: [sqlite] insert statement using temp variable

2011-04-05 Thread Igor Tandetnik
RAKESH HEMRAJANI rakesh_hemraj...@hotmail.com wrote:
 int i=0;
 rc = sqlite3_exec(db, "insert into emp values(i);", 0, 0, &zErrMsg);

Use sqlite3_prepare_v2, sqlite3_step, sqlite3_bind_* et al to run a 
parameterized query. Something like this:

sqlite3_stmt* stmt;
sqlite3_prepare_v2(db, "insert into emp values(?);", -1, &stmt, NULL);
for (int i = 0; i < 10; ++i) {
  sqlite3_bind_int(stmt, 1, i);  /* bind the current value of i to the ? placeholder */
  sqlite3_step(stmt);            /* execute the INSERT */
  sqlite3_reset(stmt);           /* rewind so the statement can be run again */
}
sqlite3_finalize(stmt);

This example also shows how you can run the same statement multiple times with 
different values for parameters.
-- 
Igor Tandetnik



Re: [sqlite] Problem with incremental_vacuum and WAL

2011-04-05 Thread Filip Navara
On Tue, Apr 5, 2011 at 11:49 AM, Filip Navara filip.nav...@gmail.com wrote:
 Hello,

 we are having problem with database that originated on computer of one
 of our customers.

 The database is used in WAL mode with auto_vacuum=1 and page_size=1024.

 When running the pragma incremental_vacuum(1); command the WAL file
 grows to 14Mb, while we would expect it to grow only to about 4Kb (1Kb
 per page + some overhead). This causes the transaction to run for much
 longer time than expected and eventually the WAL file grows to several
 gigabytes when we try to run incremental_vacuum for 4096 pages.

 Additional facts:
 - The database was created with SQLite 3.7.5 running on Windows
 - pragma integrity_check; reports ok
 - Some free trunk list pages contain weird values in the number of
 entries field

 I have attached most of the dumps that I could create with the regular
 SQLite tools (showdb dbheader, sqlite3_analyzer output, beginning of
 the free list dump). I'm willing to send the whole database file
 (~5Gb) to sqlite developers on request.

 My questions are:
 - Is is expected behavior that pragma incremental_vacuum(1); can
 create 14Mb WAL file even though the page size is just 1Kb?
 - If not, is it a know bug? If yes, in what circumstances?
 - Is the free list corrupted? And if it is, how could this happen and
 should pragma integrity_check; verify it?

 Best regards,
 Filip Navara

Also, the freelist trunk pages seem to be filled with 3-byte patterns.
Each page has a different pattern, but they look very similar. I'm not
sure whether these patterns come from the original data or not.

02D1E6000  00 0B 47 9A B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC
02D1E6010  B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5
02D1E6020  B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7
02D1E6030  BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC
02D1E6040  B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5
02D1E6050  B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7
02D1E6060  BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC
02D1E6070  B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5
02D1E6080  B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7
02D1E6090  BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC
02D1E60A0  B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5
02D1E60B0  B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7
02D1E60C0  BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC
02D1E60D0  B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5
02D1E60E0  B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7
02D1E60F0  BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC
02D1E6100  B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5
02D1E6110  B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7
02D1E6120  BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC
02D1E6130  B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5
02D1E6140  B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7
02D1E6150  BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC
02D1E6160  B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5
02D1E6170  B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7
02D1E6180  BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC
02D1E6190  B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5
02D1E61A0  B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7
02D1E61B0  BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC
02D1E61C0  B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5
02D1E61D0  B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7
02D1E61E0  BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC
02D1E61F0  B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5
02D1E6200  B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7
02D1E6210  BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC
02D1E6220  B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5
02D1E6230  B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7
02D1E6240  BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC B5 B7 BC

Re: [sqlite] import thousands of documents in SQLite

2011-04-05 Thread Matt Young
How about a text editor with search and replace?  There must be a list of
the files in text form somewhere; with search and replace you can turn it
into a batch script that uses .import.

On Mon, Apr 4, 2011 at 2:41 PM, Simon Slavin slav...@bigfraud.org wrote:


 On 4 Apr 2011, at 9:04pm, Gert Van Assche wrote:

  We need to import thousands of documents in an SQLite db for use with
 FTS.
  The FTS part I understand (more or less) but I don't know how to import
 so
  many docs in the DB.
  Does anyone know a tool to do this?
  I won't be the one doing the import, but users that can't work on command
  line (or we don't want them to do this).

 Employ a programmer to write you a program that will do it.  That's what
 they're for.

 Simon.



[sqlite] sql syntax grammar: single-source

2011-04-05 Thread Vitali Kiruta
Hi everybody,

I hope somebody could clarify this for me. According to the sql
grammar definition, the single-source production
does not allow the table-alias after the join-source

See http://www.sqlite.org/syntaxdiagrams.html#single-source

However, when I'm trying to run this query it works just fine:

select t.* from (t1 join t2) as t;

Is it an oversight in the definition or am I missing something?

Thanks in advance
Vitali


Re: [sqlite] sql syntax grammar: single-source

2011-04-05 Thread Richard Hipp
On Tue, Apr 5, 2011 at 8:30 AM, Vitali Kiruta kir...@gmail.com wrote:

 Hi everybody,

 I hope somebody could clarify this for me. According to the sql
 grammar definition, the single-source production
 does not allow the table-alias after the join-source

 See http://www.sqlite.org/syntaxdiagrams.html#single-source

 However, when I'm trying to run this query it works just fine:

 select t.* from (t1 join t2) as t;

 Is it an oversight in the definition or am I missing something?


This currently works.  But because it is not part of the language spec, we
do not guarantee that we will continue to support it.
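
For what it is worth, you can get the same effect within the documented
grammar by turning the parenthesized join into a sub-select, which may carry
an alias. A minimal sketch, assuming db is an open handle and that t1 and t2
exist:

sqlite3_stmt *pStmt;
/* The sub-select in FROM may be given an alias, per the documented grammar. */
int rc = sqlite3_prepare_v2(db,
    "SELECT t.* FROM (SELECT * FROM t1 JOIN t2) AS t;",
    -1, &pStmt, 0);
if (rc == SQLITE_OK) {
  while (sqlite3_step(pStmt) == SQLITE_ROW) {
    /* ... read columns with sqlite3_column_*() ... */
  }
}
sqlite3_finalize(pStmt);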



 Thanks in advance
 Vitali




-- 
D. Richard Hipp
d...@sqlite.org


[sqlite] Balance between keeping DBs open vs repeated open/close

2011-04-05 Thread Technology Lighthouse
My application makes use of a number of separate SQLite DB files (in 
some cases > 100). Each DB is handled by its own thread, which may be 
making frequent small writes, or sleeping extensively then occasionally 
making a more substantial number of writes. I'm trying to decide on a 
policy for keeping DBs open versus repeatedly opening/closing them.

Are there any hard or practical limits on the number of SQLite DBs that 
can be held open at the same time? And is there any data on the relative 
performance cost of the open operation?

Any info or rules of thumb would be appreciated!

-- 
Paul Roberts


Re: [sqlite] Build instructions for Winodws with unicode support

2011-04-05 Thread tiwaris

Thanks for the help.

I finally resolved the problem. The compile flags that I was using on
Windows were the culprit. The following set of commands correctly builds the
SQLite library (build environment is Microsoft Platform SDK v6.1).

cl.exe /O2 /GL /D WIN32 /D _WINDLL /D _UNICODE /D UNICODE /MD /W3 /c
/Wp64 /TC /D SQLITE_ENABLE_COLUMN_METADATA /D SQLITE_ENABLE_FTS3 /D
SQLITE_THREADSAFE=1 /D SQLITE_ENABLE_ICU /I
../external/icu/Win32/include sqlite3.c

cl.exe /O2 /GL /D WIN32 /D _DLL /D _WINDLL /D _UNICODE /D UNICODE
/MD /W3 /c /Wp64 /TC /D SQLITE_ENABLE_COLUMN_METADATA /D
SQLITE_ENABLE_FTS3 /D SQLITE_THREADSAFE=1 /D SQLITE_ENABLE_ICU
NativeDB.c

link.exe /INCREMENTAL:NO /DLL /SUBSYSTEM:CONSOLE /OPT:REF /OPT:ICF /LTCG
/MACHINE:X86 /LIBPATH:../external/icu/Win32/lib /out:sqlite.dll
NativeDB.obj sqlite3.obj icuuc.lib icuin.lib kernel32.lib user32.lib
gdi32.lib winspool.lib comdlg32.lib advapi32.lib shell32.lib ole32.lib
oleaut32.lib uuid.lib 

mt.exe /manifest sqlite.dll.manifest /outputresource:"sqlite.dll;#2"



Pavel Ivanov-2 wrote:
 
 Basically, the column with name model has data type BLOB, and null is
 being written to that column.
 
 I think your problem is with jdbc driver (I guess its setBytes
 implemented via the text data type, not blob) and with the fact that
 writeBuffer[0] is equal to 0. Changing writeBuffer[0] to something
 other than 0 could prove that.
 
 
 Pavel
 
 




Re: [sqlite] Balance between keeping DBs open vs repeated open/close

2011-04-05 Thread Stephan Beal
On Tue, Apr 5, 2011 at 4:05 PM, Technology Lighthouse tlho...@gmail.com wrote:

 Are there any hard or practical limits on the number of SQLite DBs that
 can be held open at the same time?


Not quite an answer to your whole question, but possibly of interest...

The OS environment determines how many file descriptors can be open at one
time. If your OS runs out of file descriptors, it won't be able to open any
more. Note that "file descriptor" does not just mean files, but also network
connections (at the kernel level, sockets and files are, at least on Unix
systems, basically the same thing).

i have NO idea what the file handle limits are on non-Unix systems, but
having, say, 200 dbs, each with 2(?) file handles (one for the journal and
one for the db?), shouldn't be a problem on default Unix configurations (on
some systems the limit can be raised if needed: google for "man 1 ulimit").

If some of your DBs spuriously cannot be opened then you are possibly
running into the file descriptor limit and you may be forced to raise that
limit (if possible) or start closing DBs which you do not currently need.
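
a quick Unix-only sketch for checking the per-process limit from code, in
case that is handier than ulimit:

#include <stdio.h>
#include <sys/resource.h>

int main(void) {
  struct rlimit rl;
  /* RLIMIT_NOFILE = max number of open file descriptors for this process.
     The soft limit can be raised up to the hard limit without privileges. */
  if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
    perror("getrlimit");
    return 1;
  }
  printf("soft limit: %llu, hard limit: %llu\n",
         (unsigned long long)rl.rlim_cur, (unsigned long long)rl.rlim_max);
  return 0;
}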

-- 
- stephan beal
http://wanderinghorse.net/home/stephan/


Re: [sqlite] Balance between keeping DBs open vs repeated open/close

2011-04-05 Thread Simon Slavin

On 5 Apr 2011, at 3:05pm, Technology Lighthouse wrote:

 My application makes use of a number of separate SQLite DB files (in 
 some cases > 100). Each DB is handled by its own thread, which may be 
 making frequent small writes, or sleeping extensively then occasionally 
 making a more substantial number of writes. I'm trying to decide on a 
 policy for keeping DBs open versus repeatedly opening/closing them.
 
 Are there any hard or practical limits on the number of SQLite DBs that 
 can be held open at the same time? And is there any data on the relative 
 performance cost of the open operation?

As Stephan remarks this is more to do with the operating system than SQLite.  
In fact I think all your questions are tightly bound to characteristics of your 
operating system, several of them giving completely different answers even 
between different versions of Windows.  Windows 7, for instance, is slower at 
opening files than Windows XP, but does more caching once a file is open.  
Unix, on the other hand, does caching on the first read rather than when the 
file is initially opened.

So I suspect you should run some experiments with your own particular setup: 
the OS your customers will use, the amount of memory they'll have in their 
computers, and the access speeds their hard disks will have.

The natural question is whether all these really need to be in separate files.  
Do all these databases have the same schemas (same tables with the same 
columns) ?

Simon.


Re: [sqlite] sql syntax grammar: single-source

2011-04-05 Thread Vitali Kiruta
 This currently works.  But because it is not part of the language spec, we
 do not guarantee that we will continue to support it.

Thanks a lot for the quick answer.
Do you mean the SQLite language spec, or the SQL standard?

I would be very much in favor of keeping this behavior. It makes the
select statement more modular and simplifies SQL code generation.

Regards,
Vitali


Re: [sqlite] Request for an example code use async IO

2011-04-05 Thread Ricky Huang
On Apr 4, 2011, at 7:10 PM, Pavel Ivanov wrote:

 Can someone be so kind as to provide a short example of initializing 
 asynchronous module, opening DB for read/write, create a table, and write 
 some data to it?
 
 […] All you need to do is to initialize async module and call 
 sqlite3async_run in the dedicated thread. […]

Thanks for getting back to me Pavel.

So let me see if I understand what you said:

Main thread:
  - Starts child thread
  - sqlite3_open_v2()
  - sqlite3_exec(), sqlite3_prepare_v2()/sqlite3_step(), etc
  - sqlite3_close()
  - Halts and joins child thread

Child thread:
  - sqlite3async_initialize()
  - sqlite3async_run()
  - sqlite3async_shutdown()


is that correct?


Re: [sqlite] Request for an example code use async IO

2011-04-05 Thread Pavel Ivanov
To avoid races you should do this:

Main thread:
 - sqlite3async_initialize()
 - Starts child thread
 - sqlite3_open_v2()
 - sqlite3_exec(), sqlite3_prepare_v2()/sqlite3_step(), etc
 - sqlite3_close()
 - Halts and joins child thread
 - sqlite3async_shutdown()
 - exits

Child thread:
 - sqlite3async_run()
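
In code it would look roughly like this. An untested sketch, assuming the
sqlite3async.h interface from the async extension, pthreads, and an example
database name:

#include <pthread.h>
#include <sqlite3.h>
#include "sqlite3async.h"

/* The child thread does nothing but service the async write queue. */
static void *async_thread(void *unused) {
  (void)unused;
  sqlite3async_run();
  return 0;
}

int main(void) {
  pthread_t tid;
  sqlite3 *db;

  sqlite3async_initialize(0, 1);   /* register the async VFS as the default VFS */
  pthread_create(&tid, 0, async_thread, 0);

  sqlite3_open_v2("test.db", &db,
                  SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE, 0);
  sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS t(x);", 0, 0, 0);
  sqlite3_exec(db, "INSERT INTO t VALUES(1);", 0, 0, 0);
  sqlite3_close(db);

  /* Ask sqlite3async_run() to return once the queue is empty, then join. */
  sqlite3async_control(SQLITEASYNC_HALT, SQLITEASYNC_HALT_IDLE);
  pthread_join(tid, 0);
  sqlite3async_shutdown();
  return 0;
}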


Pavel


On Tue, Apr 5, 2011 at 1:45 PM, Ricky Huang rhhu...@soe.ucsd.edu wrote:
 On Apr 4, 2011, at 7:10 PM, Pavel Ivanov wrote:

 Can someone be so kind as to provide a short example of initializing 
 asynchronous module, opening DB for read/write, create a table, and write 
 some data to it?

 […] All you need to do is to initialize async module and call 
 sqlite3async_run in the dedicated thread. […]

 Thanks for getting back to me Pavel.

 So let me see if I understand what you said:

 Main thread:
  - Starts child thread
  - sqlite3_open_v2()
  - sqlite3_exec(), sqlite3_prepare_v2()/sqlite3_step(), etc
  - sqlite3_close()
  - Halts and joins child thread

 Child thread:
  - sqlite3async_initialize()
  - sqlite3async_run()
  - sqlite3async_shutdown()


 is that correct?


Re: [sqlite] Help with changing the code

2011-04-05 Thread Simon Slavin

On 5 Apr 2011, at 6:54pm, Guilherme Batista wrote:

 let's say I get the number of rows from the first table scan of the
 execution. I would compare it with the number of rows defined for the table
 in sqlite_stat1 (I ran the analyze once, and there are no index in my
 tables). If the difference between them is too big, the optimizer may have
 built an inefficient query execution plan.

I think you've left out a lot of considerations there.  You will get far more 
of a speed up simply by creating appropriate indexes than by anything you 
describe here.

 So I would stop the query
 execution, execute the analyze for that table and start the query
 optimization all over again.
 Of course this would improve almost nothing. But it's just the beginning of
 my study...

Okay, you're trying to pick apart the inner workings of SQLite, and are 
assuming that the query analyzer makes mistakes.  In that case I have a 
suggestion for you: Have an option that does an ANALYZE on every table.  Have 
your users run it overnight once every six months.

But first, and more important, make the indexes appropriate to your searches.
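
For example, something as small as this, run from a maintenance task, covers
both points. A sketch only: db is an open handle, and the table and column
names are placeholders:

/* Give the planner an index to use, and keep its statistics fresh.
   "orders" and "customer_id" are made-up names. */
sqlite3_exec(db,
    "CREATE INDEX IF NOT EXISTS idx_orders_customer ON orders(customer_id);",
    0, 0, 0);
sqlite3_exec(db, "ANALYZE;", 0, 0, 0);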

Simon.


[sqlite] BEGIN DEFERRED TRANSACTION causes core dump in a heavily periodic load with BEGIN EXCLUSIVE/IMMEDIATE TRANSACTION in version 3.7.5

2011-04-05 Thread ChingChang Hsiao
I believe it is a bug in 3.7.5. It didn't happen in 3.6.22. It causes a core dump 
when using BEGIN DEFERRED TRANSACTION in one of our applications, which accesses 
the DB periodically (every 1 second). There are other applications that access the 
same DB periodically, but they use BEGIN EXCLUSIVE/IMMEDIATE TRANSACTION. Only the 
application with BEGIN DEFERRED TRANSACTION dumps core. It seems that it doesn't 
get the lock for some reason and fails in an assert. After changing from BEGIN 
DEFERRED TRANSACTION to BEGIN EXCLUSIVE TRANSACTION in this application, the 
problem is gone. The core dump report is shown below.

ChingChang


(gdb) bt
#0  0x3370bb04 in raise () from /lib/libc.so.6
#1  0x3370d2f4 in abort () from /lib/libc.so.6
#2  0x337032a4 in __assert_fail () from /lib/libc.so.6
#3  0x100dc940 in btreeInvokeBusyHandler (pArg=0x102b3b50) at sqlite3.c:47153
#4  0x1013f1dc in sqlite3VdbeHalt (p=0x103ae298) at sqlite3.c:38543
#5  0x1018fda8 in sqlite3VdbeExec (p=<value optimized out>) at sqlite3.c:63340
#6  sqlite3Step (p=0x103ae298) at sqlite3.c:59036
#7  0x101987e8 in sqlite3_step (pStmt=0x103ae298) at sqlite3.c:59101
#8  0x1016cb7c in sqlite3_exec (db=0x10856e18, zSql=0x106b3aa4 "COMMIT;",
xCallback=0, pArg=0x0, pzErrMsg=0x388a87c0) at sqlite3.c:84523
#9  0x1003f744 in SqlQuery::execw (this=0x388a8844,
sql_stmt=0x106b3aa4 "COMMIT;", context=0x101b91b8 "SlotUtilEvent.cpp",
linenum=69, warnings=<value optimized out>) at SqlQuery.cpp:281
#10 0x10089db8 in SlotUtilEvent::run (this=0x10a81e94) at SlotUtilEvent.cpp:94
#11 0x10003f40 in HwMonListener::run (this=0x106b28a8)
at 
/mnt/local/cch/bugfix_test_11_01_02232011/isg6000/isg6k/mgmt-crd/linuxapps/hwmon/hwmon.cpp:1993
#12 0x10025c8c in Thread::start_thread (arg=0x106b28a8) at thread.cpp:199
#13 0x334265cc in ?? () from /lib/libpthread.so.0
#14 0x337b0b88 in clone () from /lib/libc.so.6
Backtrace stopped: previous frame inner to this frame (corrupt stack?)


Re: [sqlite] BEGIN DEFERRED TRANSACTION causes core dump in a heavily periodic load with BEGIN EXCLUSIVE/IMMEDIATE TRANSACTION in version 3.7.5

2011-04-05 Thread Richard Hipp
On Tue, Apr 5, 2011 at 2:39 PM, ChingChang Hsiao 
chingchang.hs...@overturenetworks.com wrote:

 I believe it is a bug in 3.7.5. It didn't happen in 3.6.22. It causes core
 dump when using BEGIN DEFERRED TRANSACTION in one of our application to
 access DB periodically(every 1 second) . There are other applications access
 the same DB periodically but using BEGIN EXCLUSIVE/IMMEDIATE TRANSACTION.
 Only the application with BEGIN DEFERRED TRANSACTION went to core dump. It
 seems that it doesn't get the lock for some reasons and fails in assert.
 After changing from BEGIN DEFERRED TRANSACTION to BEGIN EXCLUSIVE
 TRANSACTION in this application, the problem is gone. The core dump report
 is shown as below.

 ChingChang


Please recompile using -O0 instead of -O3.  Rerun your test and send us the
resulting stack trace, and we will investigate further.





 (gdb) bt
 #0  0x3370bb04 in raise () from /lib/libc.so.6
 #1  0x3370d2f4 in abort () from /lib/libc.so.6
 #2  0x337032a4 in __assert_fail () from /lib/libc.so.6
 #3  0x100dc940 in btreeInvokeBusyHandler (pArg=0x102b3b50) at
 sqlite3.c:47153
 #4  0x1013f1dc in sqlite3VdbeHalt (p=0x103ae298) at sqlite3.c:38543
 #5  0x1018fda8 in sqlite3VdbeExec (p=value optimized out) at
 sqlite3.c:63340
 #6  sqlite3Step (p=0x103ae298) at sqlite3.c:59036
 #7  0x101987e8 in sqlite3_step (pStmt=0x103ae298) at sqlite3.c:59101
 #8  0x1016cb7c in sqlite3_exec (db=0x10856e18, zSql=0x106b3aa4 COMMIT;,
xCallback=0, pArg=0x0, pzErrMsg=0x388a87c0) at sqlite3.c:84523
 #9  0x1003f744 in SqlQuery::execw (this=0x388a8844,
sql_stmt=0x106b3aa4 COMMIT;, context=0x101b91b8 SlotUtilEvent.cpp,
linenum=69, warnings=value optimized out) at SqlQuery.cpp:281
 #10 0x10089db8 in SlotUtilEvent::run (this=0x10a81e94) at
 SlotUtilEvent.cpp:94
 #11 0x10003f40 in HwMonListener::run (this=0x106b28a8)
at
 /mnt/local/cch/bugfix_test_11_01_02232011/isg6000/isg6k/mgmt-crd/linuxapps/hwmon/hwmon.cpp:1993
 #12 0x10025c8c in Thread::start_thread (arg=0x106b28a8) at thread.cpp:199
 #13 0x334265cc in ?? () from /lib/libpthread.so.0
 #14 0x337b0b88 in clone () from /lib/libc.so.6
 Backtrace stopped: previous frame inner to this frame (corrupt stack?)




-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] BEGIN DEFERRED TRANSACTION causes core dump in a heavily periodic load with BEGIN EXCLUSIVE/IMMEDIATE TRANSACTION in version 3.7.5

2011-04-05 Thread Dan Kennedy
On 04/06/2011 01:39 AM, ChingChang Hsiao wrote:
 I believe it is a bug in 3.7.5. It didn't happen in 3.6.22. It causes core 
 dump when using BEGIN DEFERRED TRANSACTION in one of our application to 
 access DB periodically(every 1 second) . There are other applications access 
 the same DB periodically but using BEGIN EXCLUSIVE/IMMEDIATE TRANSACTION. 
 Only the application with BEGIN DEFERRED TRANSACTION went to core dump. It 
 seems that it doesn't get the lock for some reasons and fails in assert. 
 After changing from BEGIN DEFERRED TRANSACTION to BEGIN EXCLUSIVE 
 TRANSACTION in this application, the problem is gone. The core dump report 
 is shown as below.

 ChingChang



In frame 3, what do you get for "p *pBt"? And "p *pBt->db"?

Dan.


 (gdb) bt
 #0  0x3370bb04 in raise () from /lib/libc.so.6
 #1  0x3370d2f4 in abort () from /lib/libc.so.6
 #2  0x337032a4 in __assert_fail () from /lib/libc.so.6
 #3  0x100dc940 in btreeInvokeBusyHandler (pArg=0x102b3b50) at sqlite3.c:47153
 #4  0x1013f1dc in sqlite3VdbeHalt (p=0x103ae298) at sqlite3.c:38543
 #5  0x1018fda8 in sqlite3VdbeExec (p=value optimized out) at sqlite3.c:63340
 #6  sqlite3Step (p=0x103ae298) at sqlite3.c:59036
 #7  0x101987e8 in sqlite3_step (pStmt=0x103ae298) at sqlite3.c:59101
 #8  0x1016cb7c in sqlite3_exec (db=0x10856e18, zSql=0x106b3aa4 COMMIT;,
  xCallback=0, pArg=0x0, pzErrMsg=0x388a87c0) at sqlite3.c:84523
 #9  0x1003f744 in SqlQuery::execw (this=0x388a8844,
  sql_stmt=0x106b3aa4 COMMIT;, context=0x101b91b8 SlotUtilEvent.cpp,
  linenum=69, warnings=value optimized out) at SqlQuery.cpp:281
 #10 0x10089db8 in SlotUtilEvent::run (this=0x10a81e94) at SlotUtilEvent.cpp:94
 #11 0x10003f40 in HwMonListener::run (this=0x106b28a8)
  at 
 /mnt/local/cch/bugfix_test_11_01_02232011/isg6000/isg6k/mgmt-crd/linuxapps/hwmon/hwmon.cpp:1993
 #12 0x10025c8c in Thread::start_thread (arg=0x106b28a8) at thread.cpp:199
 #13 0x334265cc in ?? () from /lib/libpthread.so.0
 #14 0x337b0b88 in clone () from /lib/libc.so.6
 Backtrace stopped: previous frame inner to this frame (corrupt stack?)




[sqlite] read full txt file in one record

2011-04-05 Thread Gert Van Assche
Dear all,

what would be the best way to read a full txt file into one record?
Now, when I use
 .import FILE TABLE
I get every line in a separate record.

I need all the lines of one file in one record.
The next record should be filled with all the lines of another file.
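
In other words, I am after the effect of something like this little C sketch
(one row per file; the table name, column names and database name are just
examples):

#include <stdio.h>
#include <stdlib.h>
#include <sqlite3.h>

/* Read one whole text file into memory and store it as a single row. */
static void insert_file(sqlite3 *db, const char *path) {
  FILE *f = fopen(path, "rb");
  if (!f) return;
  fseek(f, 0, SEEK_END);
  long n = ftell(f);
  rewind(f);
  char *buf = malloc(n + 1);
  size_t got = fread(buf, 1, n, f);
  buf[got] = 0;
  fclose(f);

  sqlite3_stmt *stmt;
  sqlite3_prepare_v2(db, "INSERT INTO docs(name, body) VALUES(?, ?);",
                     -1, &stmt, 0);
  sqlite3_bind_text(stmt, 1, path, -1, SQLITE_STATIC);
  sqlite3_bind_text(stmt, 2, buf, -1, SQLITE_TRANSIENT);
  sqlite3_step(stmt);
  sqlite3_finalize(stmt);
  free(buf);
}

int main(int argc, char **argv) {
  sqlite3 *db;
  sqlite3_open("docs.db", &db);
  sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS docs(name TEXT, body TEXT);",
               0, 0, 0);
  for (int i = 1; i < argc; i++)
    insert_file(db, argv[i]);   /* one record per file given on the command line */
  sqlite3_close(db);
  return 0;
}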

Thanks for your advice.

Gert


Re: [sqlite] Help with changing the code

2011-04-05 Thread Simon Slavin

On 5 Apr 2011, at 8:47pm, Guilherme Batista wrote:

 Yes it's true.
 But I'm not trying to improve the SQLite performance. I'm just studying the
 query optimization of databases in general, and a better way to optimize if
 the tables do not have indexes and if the query is complex and deals with
 big amount of data. It may probably be useless but it's just a study.

Oh okay, I get it.  In that case you can't do better than study the source code 
for SQLite.  That way you'll know exactly how it works.

And you can insert your own diagnostics (print statements) in it and compile 
your own version, which gives you all the internal information you need.

Simon.


[sqlite] substr fails in a query: looks like a bug

2011-04-05 Thread Eric Promislow
I have full paths of a set of resources in a database, and am using
substrings to find arbitrary hierarchical resources.  This worked fine on
Windows using SQLite 3.6.5, but on Linux with 3.5.9 the query fails.

The schema includes tables:
paths:(integer id, string path)
common_details:(integer path_id (fkey), string type, string name)

select p3.id, cd3.name, cd2.name, p2.path as p2, length(p1.path) as len_p1,
p1.path as p1, substr(p2.path, length(p1.path) + 1, 1) as p1Char
from paths as p1, paths as p2, paths as p3,
 common_details as cd3, common_details as cd2, common_details as cd1
where cd1.name = 'Abbreviations' and cd1.type = 'folder'
  and cd2.type = 'folder' and (cd2.name like 'Perl' or cd2.name like
'General')
  and cd1.path_id = p1.id
  and cd2.path_id = p2.id
  and substr(p2.path, 1, length(p1.path)) = p1.path
  and (substr(p2.path, length(p1.path) + 1, 1) == '/'
or substr(p2.path, length(p1.path) + 1, 1) != '/')
  and cd3.type = 'snippet' and cd3.name like 'fore'
  and cd3.path_id = p3.id
  and substr(p3.path, 1, length(p2.path)) = p2.path ;

Notice the clause in the middle of the query:
  and (substr(p2.path, length(p1.path) + 1, 1) = '/'
or substr(p2.path, length(p1.path) + 1, 1) != '/')

If I comment out this full clause, the query returns the expected result
set.

But if I leave it uncommented, the query returns an empty set.  But the clause
should be tautologically true -- either the character after the prefix is '/'
or it isn't.

Looks like a bug to me.  The code ran fine on Windows (where I used
only the '=' test, not the '!=' test). I do have a workaround -- it's to
return the paths in my result set and do the substring test in Python
instead of sqlite.  But I'd prefer having sqlite run this code.

- Eric


Re: [sqlite] substr fails in a query: looks like a bug

2011-04-05 Thread Simon Slavin

On 5 Apr 2011, at 11:59pm, Eric Promislow wrote:

 Notice the clause in the middle of the query:
  and (substr(p2.path, length(p1.path) + 1, 1) = /
or substr(p2.path, length(p1.path) + 1, 1) != /)
 
 If I comment out this full clause, the query returns the expected result
 set.

Could you make us a demonstration sample?  Make a database with one row in it and 
the simplest SELECT you can that gives one result on Windows and a different one 
on Unix.

Simon.


Re: [sqlite] substr fails in a query: looks like a bug

2011-04-05 Thread Richard Hipp
On Tue, Apr 5, 2011 at 6:59 PM, Eric Promislow eric.promis...@gmail.com wrote:

 Notice the clause in the middle of the query:
  and (substr(p2.path, length(p1.path) + 1, 1) = /
or substr(p2.path, length(p1.path) + 1, 1) != /)

 If I comment out this full clause, the query returns the expected result
 set.

 But if I leave it uncommented, the query returns an empty set.  But the
 clause
 should be tautologically true -- either the character after the prefix is
 / or it
 isn't.


Can you please post values of p2.path and p1.path for which the above
expression returns false?

-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] substr fails in a query: looks like a bug

2011-04-05 Thread Igor Tandetnik
On 4/5/2011 6:59 PM, Eric Promislow wrote:
 Notice the clause in the middle of the query:
and (substr(p2.path, length(p1.path) + 1, 1) = /
  or substr(p2.path, length(p1.path) + 1, 1) != /)

 If I comment out this full clause, the query returns the expected result
 set.

 But if I leave it uncommented, the query returns an empty set.  But the
 clause
 should be tautologically true -- either the character after the prefix is
 / or it
 isn't.

The third possibility is that substr() returns NULL, in which case both 
comparisons will be false.
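
A small self-contained demonstration; the NULL could come from a NULL
p1.path or p2.path:

#include <stdio.h>
#include <sqlite3.h>

/* Print each column; NULL results arrive as NULL pointers. */
static int cb(void *unused, int argc, char **argv, char **col) {
  (void)unused;
  for (int i = 0; i < argc; i++)
    printf("%s = %s\n", col[i], argv[i] ? argv[i] : "NULL");
  return 0;
}

int main(void) {
  sqlite3 *db;
  sqlite3_open(":memory:", &db);
  /* substr(NULL, ...) is NULL; NULL = '/' and NULL != '/' are both NULL,
     so the OR of the two is not true and such a row is filtered out. */
  sqlite3_exec(db,
      "SELECT substr(NULL, 2, 1) IS NULL AS sub_is_null,"
      "       (substr(NULL, 2, 1) = '/' OR substr(NULL, 2, 1) != '/') AS tautology;",
      cb, 0, 0);
  sqlite3_close(db);
  return 0;
}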
-- 
Igor Tandetnik



Re: [sqlite] substr fails in a query: looks like a bug

2011-04-05 Thread Eric Promislow
This won't be a trivial case to reproduce -- I need to create two tables,
with several rows, and my main db tool is broken right now.  I did find a
workaround,
groveling over the results in Python.

If I bind the values to literals like so:
select ... 'str1' as p1_path and ... 'str2' as p2_path where

the substr clause works.  So I do need to create a real table to
repro this.

On Tue, Apr 5, 2011 at 5:12 PM, Igor Tandetnik itandet...@mvps.org wrote:

 On 4/5/2011 6:59 PM, Eric Promislow wrote:
  Notice the clause in the middle of the query:
 and (substr(p2.path, length(p1.path) + 1, 1) = /
   or substr(p2.path, length(p1.path) + 1, 1) != /)
 
  If I comment out this full clause, the query returns the expected result
  set.
 
  But if I leave it uncommented, the query returns an empty set.  But the
  clause
  should be tautologically true -- either the character after the prefix is
  / or it
  isn't.

 The third possiblity is that substr() returns null, in which case both
 comparisons will be false.
 --
 Igor Tandetnik




[sqlite] No journal vs WAL journal

2011-04-05 Thread Nikolaus Rath
Hello,

I'm experimenting with creation of a 156 MB database from scratch. I've
set synchronous = NORMAL and locking_mode = exclusive.

With journal_mode = off and without wrapping the INSERTs into a
transaction, creating the db takes 54 seconds.

With journal_mode = WAL (and still no transaction), it takes an
impressive 900 seconds.

With journal_mode = WAL and the INSERTs packed into a transaction, the
total time is 60 seconds.

I am slightly surprised by this. I was expecting that the difference
between transaction/no transaction with WAL is just the writing of an
additional few bytes of commit marker into the WAL file with every
insert. But apparently something much more time consuming is happening
as well.

Furthermore, if I turn off auto checkpointing, the WAL file grows to
more than 5 GB without transactions, but only to about 922 MB with a
transaction. Are the commit markers really taking that much space?

Can someone shed some light on what factor I am missing here?


Thanks,

   -Nikolaus

-- 
 »Time flies like an arrow, fruit flies like a Banana.«

  PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6  02CF A9AD B7F8 AE4E 425C


Re: [sqlite] No journal vs WAL journal

2011-04-05 Thread Igor Tandetnik
On 4/5/2011 10:01 PM, Nikolaus Rath wrote:
 I'm experimenting with creation of a 156 MB database from scratch. I've
 set synchronous = NORMAL and locking_mode = exlusive.

 With journal_mode = off and without wrapping the INSERTs into a
 transaction, creating the db takes 54 seconds.

 With journal_mode = WAL (and still no transaction), it takes an
 impressive 900 seconds.

 With journal_mode = WAL and the INSERTs packed into a transaction, the
 total time is 60 seconds.

 I am slightly surprised by this. I was expecting that the difference
 between transaction/no transaction with WAL is just the writing of an
 additional few bytes of commit marker into the WAL file with every
 insert. But apparently something much more time consuming is happening
 as well.

Committing a transaction involves waiting until the data is physically 
transferred to the disk surface (as opposed to sitting in OS buffers). 
This requires at least two complete revolutions of the disk platter. 
With an above-average hard drive running at 15000 RPM, you can have a 
throughput of 125 transactions per second, at best. How many rows are 
you inserting?
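
If the inserts are not already batched, something along these lines usually
makes the difference. A sketch only: db is an open handle, t is the target
table, and nRows is the number of rows:

/* One explicit transaction around the whole batch: one synchronous
   commit at the end instead of one per INSERT. */
sqlite3_exec(db, "BEGIN;", 0, 0, 0);
sqlite3_stmt *stmt;
sqlite3_prepare_v2(db, "INSERT INTO t VALUES(?);", -1, &stmt, 0);
for (int i = 0; i < nRows; ++i) {
  sqlite3_bind_int(stmt, 1, i);
  sqlite3_step(stmt);
  sqlite3_reset(stmt);
}
sqlite3_finalize(stmt);
sqlite3_exec(db, "COMMIT;", 0, 0, 0);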
-- 
Igor Tandetnik



Re: [sqlite] Problem with incremental_vacuum and WAL

2011-04-05 Thread Dan Kennedy
On 04/05/2011 04:49 PM, Filip Navara wrote:
 Hello,

 we are having problem with database that originated on computer of one
 of our customers.

 The database is used in WAL mode with auto_vacuum=1 and page_size=1024.

 When running the pragma incremental_vacuum(1); command the WAL file
 grows to 14Mb, while we would expect it to grow only to about 4Kb (1Kb
 per page + some overhead). This causes the transaction to run for much
 longer time than expected and eventually the WAL file grows to several
 gigabytes when we try to run incremental_vacuum for 4096 pages.

 Additional facts:
 - The database was created with SQLite 3.7.5 running on Windows
 - pragma integrity_check; reports ok
 - Some free trunk list pages contain weird values in the number of
 entries field

 I have attached most of the dumps that I could create with the regular
 SQLite tools (showdb dbheader, sqlite3_analyzer output, beginning of
 the free list dump). I'm willing to send the whole database file
 (~5Gb) to sqlite developers on request.

 My questions are:
 - Is is expected behavior that pragma incremental_vacuum(1); can
 create 14Mb WAL file even though the page size is just 1Kb?
 - If not, is it a know bug? If yes, in what circumstances?
 - Is the free list corrupted? And if it is, how could this happen and
 should pragma integrity_check; verify it?

Was there a *-wal file in the file-system when you ran the
[showdb] tool? [showdb] just reads the database file, it is
not smart enough to know when it should read pages out of the
wal file instead of the db. So if there is a wal file that
contains content, [showdb] could report corruption.
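
If you want to rule that out, checkpoint the WAL back into the main database
file (and close all connections so the *-wal file goes away) before running
[showdb]. A sketch, using the file name from your dump:

#include <sqlite3.h>

int main(void) {
  sqlite3 *db;
  if (sqlite3_open("mail_data.dat", &db) != SQLITE_OK) return 1;
  /* Copy any content still in the *-wal file into the database file.
     Equivalent to "PRAGMA wal_checkpoint;". */
  sqlite3_wal_checkpoint(db, 0);
  sqlite3_close(db);
  return 0;
}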

Also, can you try with this patch?

   http://www.sqlite.org/src/ci/311d0b613d

It might help with the 14MB wal files.

Dan.






