[sqlite] File locking too harsh?

2010-02-16 Thread Marian Aldenhoevel

Hi,

I am trying to use sqlite (3.6.22) on a Linux system that I have built 
from scratch. That system currently uses kernel 2.6.32.8 and is 
uClibc-based.


The problem I am facing is that sqlite3 cannot lock database files. Any 
statement that needs a lock fails, telling me the database is already locked.


~ # sqlite3 /test "create table A (B integer);"
Error: database is locked

An strace of that command is attached. The problem surfaces here, I think:

open("/test", O_RDWR|O_CREAT|O_LARGEFILE, 0644) = 3
...
fcntl64(3, F_SETLK64, {type=F_RDLCK, whence=SEEK_SET, start=1073741824, 
len=1}, 0xbfae5690) = -1 EACCES (Permission denied)


I have verified that this is not a mundane issue of access privileges. I 
can open /test in an editor and modify it as I like.


So apparently something is wrong with that fcntl64-call. Any idea what 
could cause this?


I feel this is not truly an sqlite3 issue, but since everything else I am 
using this system for works fine, it is one to me :-).


Ciao, MM
~ # strace sqlite3 /test "create table A (B integer);"
execve("/usr/bin/sqlite3", ["sqlite3", "/test", "create table A (B integer);"], 
[/* 12 vars */]) = 0
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 
0xb77aa000
stat("/etc/ld.so.cache", 0xbfae6e48)= -1 ENOENT (No such file or directory)
stat("/etc/ld.so.preload", 0xbfae7024)  = -1 ENOENT (No such file or directory)
open("/lib/libsqlite3.so.0", O_RDONLY)  = -1 ENOENT (No such file or directory)
open("/lib/libsqlite3.so.0", O_RDONLY)  = -1 ENOENT (No such file or directory)
open("/usr/lib/libsqlite3.so.0", O_RDONLY) = 3
fstat(3, {st_mode=S_IFREG|0755, st_size=424608, ...}) = 0
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 
0xb77a9000
read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\220F\0\0004\0\0\0"..., 
4096) = 4096
mmap2(NULL, 425984, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7741000
mmap2(0xb7741000, 416432, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED, 3, 0) = 
0xb7741000
mmap2(0xb77a7000, 5536, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED, 3, 0x66) = 
0xb77a7000
close(3)= 0
munmap(0xb77a9000, 4096)= 0
open("/lib/libdl.so.0", O_RDONLY)   = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=9044, ...}) = 0
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 
0xb77a9000
read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0|\7\0\0004\0\0\0"..., 
4096) = 4096
mmap2(NULL, 16384, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb773d000
mmap2(0xb773d000, 4920, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED, 3, 0) = 
0xb773d000
mmap2(0xb773f000, 4144, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED, 3, 0x1) = 
0xb773f000
close(3)= 0
munmap(0xb77a9000, 4096)= 0
open("/lib/libc.so.0", O_RDONLY)= 3
fstat(3, {st_mode=S_IFREG|0644, st_size=285328, ...}) = 0
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 
0xb77a9000
read(3, 
"\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\224\251\0\0004\0\0\0"..., 
4096) = 4096
mmap2(NULL, 307200, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb76f2000
mmap2(0xb76f2000, 280032, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED, 3, 0) = 
0xb76f2000
mmap2(0xb7737000, 5175, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED, 3, 0x44) = 
0xb7737000
mmap2(0xb7739000, 15884, PROT_READ|PROT_WRITE, 
MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0xb7739000
close(3)= 0
munmap(0xb77a9000, 4096)= 0
open("/lib/libdl.so.0", O_RDONLY)   = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=9044, ...}) = 0
close(3)= 0
open("/lib/libc.so.0", O_RDONLY)= 3
fstat(3, {st_mode=S_IFREG|0644, st_size=285328, ...}) = 0
close(3)= 0
open("/lib/libc.so.0", O_RDONLY)= 3
fstat(3, {st_mode=S_IFREG|0644, st_size=285328, ...}) = 0
close(3)= 0
stat("/lib/ld-uClibc.so.0", {st_mode=S_IFREG|0755, st_size=21096, ...}) = 0
mprotect(0xb773f000, 4096, PROT_READ)   = 0
mprotect(0xb7737000, 4096, PROT_READ)   = 0
mprotect(0xb77b, 4096, PROT_READ)   = 0
ioctl(0, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig icanon echo ...}) = 0
ioctl(1, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig icanon echo ...}) = 0
ioctl(0, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig icanon echo ...}) = 0
rt_sigaction(SIGINT, {0x804e4c5, [INT], SA_RESTORER|SA_RESTART, 0xb76fc99b}, 
{SIG_DFL, [], 0}, 8) = 0
access("/test", F_OK)   = 0
brk(0)  = 0x8053000
brk(0x8054000)  = 0x8054000
stat64("/test", {st_mode=S_IFREG|0644, st_size=0, ...}) = 0
open("/test", O_RDWR|O_CREAT|O_LARGEFILE, 0644) = 3
fcntl64(3, F_GETFD) = 0
fcntl64(3, F_SETFD, FD_CLOEXEC) = 0
fstat64(3, {st_mode=S_IFREG|0644, st_size=0, ...}) = 0
_llseek(3, 0, [0], SEEK_SET)= 0

[sqlite] Detecting real updates

2009-10-22 Thread Marian Aldenhoevel
Hi,

My application downloads data from the internet, parses and 
transforms it, and (currently) stores it in an sqlite3 database. The data 
parses out into a small, variable number of records with just two fields 
each.

Whenever that data changes, either the number of records or the actual 
values in existing records, my application needs to forward the complete 
set to an embedded system over a serial link.

So the intermediate storage is mainly used to have a status-quo to 
compare against.

Is there a way to have sqlite3 detect the actual changes after I did a 
number of INSERT OR UPDATE statements? Using a trigger maybe?

If so, I could keep the intermediate storage nicely organized and still 
not incur a lot of read-then-update overhead to detect the changes.
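Triggers can indeed do this. A minimal sketch (Python for brevity; the table and column names are made up): an AFTER UPDATE trigger with a WHEN clause fires only when a value actually changed, so no-op updates leave the change log untouched:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE data  (key TEXT PRIMARY KEY, value TEXT);
CREATE TABLE dirty (key TEXT PRIMARY KEY);

-- Log every genuinely new row...
CREATE TRIGGER data_ins AFTER INSERT ON data
BEGIN
  INSERT OR IGNORE INTO dirty VALUES (NEW.key);
END;

-- ...but log an UPDATE only when the value really changed.
CREATE TRIGGER data_upd AFTER UPDATE ON data
WHEN OLD.value IS NOT NEW.value
BEGIN
  INSERT OR IGNORE INTO dirty VALUES (NEW.key);
END;
""")

conn.execute("INSERT INTO data VALUES ('a', '1')")
conn.execute("DELETE FROM dirty")  # pretend the full set was just forwarded
conn.execute("UPDATE data SET value = '1' WHERE key = 'a'")  # no real change
conn.execute("UPDATE data SET value = '2' WHERE key = 'a'")  # real change
print(conn.execute("SELECT key FROM dirty").fetchall())  # → [('a',)]
```

After an update cycle, a non-empty dirty table means the complete set needs forwarding; clearing it afterwards resets the detector for the next cycle.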

The alternative is to prepare the data in the format that would be sent 
over the link, record that in a blob, and check for changes on that single 
item, losing the ability to easily query the data to check it manually.

Ciao, MM
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Design for concurrency

2009-09-08 Thread Marian Aldenhoevel
Hi,

I am currently designing a system where I am planning to use SQLite3 and 
would like some advice on basic architecture. The system is for a 
point-of-sale-like semi-embedded application.

The program will be running on quite limited hardware with Compact Flash 
for storage. The database currently holds about 350k records; on disk 
that amounts to a 48MB SQLite database file. That number is 
likely to grow over the lifetime of the application, maybe by a factor 
of two or so.

The main operation will be single-record queries, with records 
identified via the primary key or one of two other indexed columns. These 
reads are initiated by user interaction and it is not predictable 
when they occur. There may be a few per day, or one every minute.

These reads need to be quick, that is the overriding design criterion. 
Say they may take two or three seconds at most, an arbitrary upper bound 
for the sake of discussion. At first glance that poses no problem at 
all, those reads are very fast.

But there will be updates to the database as well. These are cyclic at 
preplanned times, several times a day and may want to update anything 
from a few hundred to a few thousand records. The data is fetched from a 
website as CSV and parsed and transformed into INSERT OR UPDATE statements.

Now the problem becomes one of concurrency: How can I ensure an upper 
bound on the time it takes to do the single-record reads in this scenario?
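One knob SQLite itself offers here is the busy timeout: a reader waits up to a configured time for a writer's lock to clear before giving up with SQLITE_BUSY. A minimal sketch (Python for brevity; in the C++ program the equivalent would be sqlite3_busy_timeout(db, 2000), and the schema here is illustrative):

```python
import sqlite3

# timeout=2.0: block at most 2 seconds waiting for a writer's lock,
# then raise "database is locked" instead of waiting indefinitely.
conn = sqlite3.connect(":memory:", timeout=2.0)
conn.execute("CREATE TABLE kfz (kfznr TEXT PRIMARY KEY, crc32 INTEGER)")
conn.execute("INSERT INTO kfz VALUES ('48482364', 12345)")
conn.commit()

# The single-record read, answered via the primary-key index.
row = conn.execute("SELECT crc32 FROM kfz WHERE kfznr = ?",
                   ("48482364",)).fetchone()
print(row[0])  # → 12345
```

Note that the timeout only bounds the *waiting*: it does not help if a single COMMIT holds the write lock for minutes, so it pairs naturally with short write transactions as in approach B below.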

Technically my program will be a single multithreaded executable 
written in C++. I can assume that this program will be the only one 
using the database, so synchronisation mechanisms outside of SQLite are 
acceptable, but it would be nice to do without.

A) The naive approach.

For the update start a transaction, do all the INSERT OR UPDATES in one 
batch and then commit it.

I have implemented that in a predecessor-version that would not allow 
any other approach and is not concurrent. Users are locked out during 
the update which is unacceptable for the new design.

I have timed the COMMIT to take anything between 30 seconds and 3 
minutes, depending on the number of updated records.

As I understand SQLite locking no reads can be serviced in the time it 
actually takes to COMMIT the transaction. Right?

So that won't work.

B) A little less naive approach.

Instead of batching all the updates into one transaction, only batch 
some. Tune the batch size so that each individual COMMIT is short enough 
for the reads' time constraint to be met. ACIDity is not an issue here: 
partial updates (some records updated, others not) may be applied without 
any ill effect, and the remaining records would then be updated as part 
of the next cycle.

I would have to make sure that reads get a chance even if the writing 
process starts fresh transactions in a tight loop. I think that can be 
done with the SQLite concurrency system if I understand it correctly.

The total time for the update would of course be greatly increased due 
to it being split into many transactions. By how much remains to be 
tested; I have no idea. But that is not a big problem in itself as long 
as the reads are still being serviced.
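Approach B can be sketched like this (Python for brevity; "INSERT OR REPLACE" is SQLite's spelling of the insert-or-update operation, and the schema is illustrative):

```python
import sqlite3

def apply_in_batches(conn, rows, batch_size=200):
    """Apply the updates in many small transactions so that no single
    COMMIT holds the write lock for long. Partial updates are fine here;
    rows missed by an interrupted cycle are retried in the next one."""
    cur = conn.cursor()
    for i in range(0, len(rows), batch_size):
        with conn:  # one short transaction per batch, committed on exit
            cur.executemany(
                "INSERT OR REPLACE INTO kfz (kfznr, crc32) VALUES (?, ?)",
                rows[i:i + batch_size],
            )

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE kfz (kfznr TEXT PRIMARY KEY, crc32 INTEGER)")
apply_in_batches(conn, [(str(n), n * 7) for n in range(1000)])
total = conn.execute("SELECT COUNT(*) FROM kfz").fetchone()[0]
print(total)  # → 1000
```

Between batches the write lock is released, so pending readers get their turn; batch_size is the tuning knob trading total update time against worst-case read latency.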

C) A silly idea.

I could also split the database file into two copies.

One "active" copy that is used to service the reads. And one "in 
transit" copy that is being updated.

So at the start of an update cycle I would make a copy of the active 
database file. Either on the filesystem (would that be safe? A hot copy 
of a SQLite database file that is only read from?) or using SQLite itself.

Then I would update the copy in one transaction. Commit it. And after 
the commit somehow flag the copy as active so that the next read will be 
from that copy.

This approach would decouple reads and writes at the price of added 
complexity for the switch, making sure it's all well-synchronized, etc. 
Homemade concurrency.

Making a copy of the database on the CF card currently takes around two 
minutes. So that would have to be added on top of the update time and 
the single-batch commit time. But it's a constant (well, for a given 
database size only of course) and does not interfere with the reads. So 
no problem here.



Those are the strategies I was able to think of so far. Comments are 
welcome. Better ideas as well. Please point out my dumb errors in any case.

Ciao, MM



Re: [sqlite] Indexing problem

2009-02-26 Thread Marian Aldenhoevel
Hi,

> > CREATE TABLE IF NOT EXISTS KFZ (
> 
> Is that as reported by the command-line sqlite3 executable program, or 
> is it from some script that you hope is the one that was used to create 
> the table?

That is from the script I _know_ is the one that created the table. I 
will send output from the commandline client later.

Yes, kfznr is declared TEXT and inserted as text. Using a prepared 
statement and bound parameters. Again I can show code when I'm back later.

Ciao, MM


[sqlite] Indexing problem

2009-02-26 Thread Marian Aldenhoevel
Hi,

I am having a strange problem with a sqlite3 database. See the following 
transcript:

 > sqlite3 kdb "select * from kfz where kfznr=48482364;"
 > 48482364|48|0|0C|00|00|0||20|5B93|1746294314|||0|GP-T 1006|0

kfznr is the primary key, so this is to be expected. Now two queries as 
fired from the application code:

 > sqlite3 kdb "select * from kfz where CRC32=-797785824;"
 > 48482364|48|0|0C|00|00|0||20|5B93|-797785824|||0|GP-T 1006|0
 > 20209001|20|1|3C|00|32|24999||13|CE42|-797785824|||0|FL-HH 11|1
 > 20209001|20|1|3C|00|32|24999||13|CE42|-797785824|||0|FL-HH 11|1

 > kdb "select * from kfz where CRC32=-1509747892;"
 > 48482364|48|0|0C|00|00|0||20|5B93|-1509747892|||0|GP-T 1006|0
 > 20209667|20|1|3C|00|32|202880||99|4FBD|-1509747892|||0|FL-AK 98|1
 > 20209667|20|1|3C|00|32|202880||99|4FBD|-1509747892|||0|FL-AK 98|1

What could cause 48482364 to show up in both results with a different 
value for CRC32?

The table is defined like this:

 > CREATE TABLE IF NOT EXISTS KFZ (
 >   kfznr TEXT PRIMARY KEY,
 >   partnernr INTEGER,
 >   sendtoTA INTEGER,
 >   saeule TEXT,
 >   berechtigung2 TEXT,
 >   berechtigung TEXT,
 >   a_km TEXT,
 >   max_km TEXT,
 >   kont TEXT,
 >   pincode TEXT,
 >   CRC32 INTEGER,
 >   verweis BLOB,
 >   handynummer TEXT,
 >   sperrung TEXT,
 >   kennzeichen TEXT,
 >   kontingentierung INTEGER);
 >
 > CREATE INDEX IF NOT EXISTS IDX_KFZ_PARTNERNR ON KFZ (sendtoTA,partnernr);
 > CREATE INDEX IF NOT EXISTS IDX_KFZ_CRC32 ON KFZ (CRC32);
 > CREATE INDEX IF NOT EXISTS IDX_KFZ_VERWEIS ON KFZ (VERWEIS);
 > CREATE INDEX IF NOT EXISTS IDX_KFZ_HANDYNUMMER ON KFZ (HANDYNUMMER);
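For symptoms like these (the same row surfacing under two different indexed values, plus duplicate rows), a damaged index is a plausible culprit, since an equality query on CRC32 is answered from IDX_KFZ_CRC32 rather than from the table itself. SQLite can check and rebuild its indexes; a hedged diagnostic sketch (Python for brevity, run here against an illustrative in-memory copy of the schema; point the connection at the real kdb file to check it):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE KFZ (kfznr TEXT PRIMARY KEY, CRC32 INTEGER)")
conn.execute("CREATE INDEX IDX_KFZ_CRC32 ON KFZ (CRC32)")

# integrity_check cross-checks every index entry against its table row.
result = conn.execute("PRAGMA integrity_check").fetchone()[0]
print(result)  # → ok
if result != "ok":
    conn.execute("REINDEX KFZ")  # drop and rebuild all indexes on the table
    conn.commit()
```

If integrity_check reports index errors on the real database, REINDEX rebuilds the indexes from the table data and should make the two queries consistent again.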

Ciao, MM


Re: [sqlite] Determine number of records in table

2008-12-03 Thread Marian Aldenhoevel
Hi,

> And will only work if you never delete any rows from the table.

Yes, I am aware of that limitation, and I am not ever deleting apart from 
a full truncate. I have yet to test that case; if it should turn out not 
to work I can live with a DROP TABLE or even deleting the database file. It 
is that kind of project :-).

Ciao, MM


Re: [sqlite] Determine number of records in table

2008-12-03 Thread Marian Aldenhoevel
Hi,

 > select max(rowid) from sometable;

Looks good and is instantaneous. Thank you very much.

Ciao, MM


[sqlite] Determine number of records in table

2008-12-03 Thread Marian Aldenhoevel
Hi,

SELECT COUNT(*) FROM sometable;

Takes 10 seconds on my test case (340,000 rows, database on CF, dead slow 
CPU).

Is there a quicker way? Does SQLite maybe store the total number of 
records somewhere else?

The table only ever grows; there are no DELETEs on it, ever, apart from 
complete truncations now and then. Can the number of rows be estimated 
from the total table size?
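The max(rowid) trick suggested in the replies relies on exactly this never-delete property: with pure appends, rowids run 1..N, and max(rowid) is answered in constant time from the right edge of the table b-tree. A sketch (Python for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sometable (payload TEXT)")
conn.executemany("INSERT INTO sometable VALUES (?)", [("row",)] * 1000)
conn.commit()

# O(1): descends one edge of the table b-tree.
approx = conn.execute("SELECT max(rowid) FROM sometable").fetchone()[0]
# O(n): walks the whole table.
exact = conn.execute("SELECT COUNT(*) FROM sometable").fetchone()[0]
print(approx, exact)  # → 1000 1000
```

The two agree only while no rows were ever deleted; after a truncate-and-refill the rowid sequence restarts, so that case would need testing as noted above.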

Ciao, MM


Re: [sqlite] Unhappy with performance

2008-11-01 Thread Marian Aldenhoevel
Hi,

I have now sanitized the logic in the rest of the code to not require
these flags anymore. Thus I got rid of the frequent updates to each
record, which were an abomination in the first place.

So I am left with decent DB-Operations which SQLite can manage perfectly
well.

Happy now. Thanks for all the help and for fantastic software.

That said I can elaborate a bit:

> That is a tad over 36,000 b-tree nodes per second.  What are your actual 
> performance requirements?

There are no hard requirements. But a 10-minute query every now and then
really is out of scope :-).

I have inherited this piece of software. It used a fixed-size array in
memory, blasted 1:1 in binary format out to disk periodically, as a
database.

That was fine except for two problems that needed addressing:

1) The device is limited to 128MB of RAM. That is for kernel,
application and data.

The current DB was 40MB. The number of records is growing quickly with
the business and is projected to become a problem soon. Also, because of
the fixed size, a new maximum size would have to be decided on and about
500 systems in the field upgraded, only to repeat the exercise when that
new maximum turns out to be too small again.

2) Updating changed records is slow even in RAM because there is no
indexing whatsoever. A larger number of overall records means more
updates per time frame, the device is unresponsive while updating, and
this is starting to become a problem.

Problem 2) I could have fixed by adding an indexing scheme, but 1) is
inherently unfixable. Any solution requires a switch to a disk-based
system, and any such system is going to be slower than
stuff-it-all-into-a-contiguous-block-of-RAM. That is perfectly
acceptable as long as the slowdown is well constrained. And it is now.

So it was a shootout between a homebrew on-disk structure plus indexing,
something like Berkeley DB, or an SQL engine.

I preferred the latter because:

- It would radically simplify the application code. And it did: The new
version is less than 10% of the application LoC of the old one and much,
much cleaner. It almost looks like a designed piece of software now, as
opposed to a smoldering heap of, of, something.

- Also it would give me easy access to the database for debugging.
Having a commandline tool to browse, query and update the data (and not
having to write it myself) is a real plus.

- Once the update switching to the DB-based code has been rolled out,
changes to the format of the data become much easier to handle (there
have been several cases in the past where string fields needed resizing
and so on; don't ask, it's all very sad).

I was planning to clean up the client code to that DB backend anyway.
That is the part doing all the ridiculous updates. But I had planned to
do so in a separate cycle. It IS an extremely ugly codebase and still
breaks whenever I look at it hard enough.

But now that I have upgraded the DB backend to SQLite and fixed the basic
algorithms in the client code, it is beginning to resemble a
real database application and already works much better.

Ciao, MM


Re: [sqlite] Unhappy with performance

2008-10-31 Thread Marian Aldenhoevel
Hi,

> Are you able to benchmark it using an actual PC's local hard drive?
> Just for comparison.  To be fair, you'd have to use the same build of
> sqlite, or at one that was built the same way.

That would be quite an effort.

For a quick data-point I copied the database file to my 
development-machine running Ubuntu Server 7.x.

The statement runs in 11s on that system. That still feels somewhat 
excessive for a simple "update 'em all", but I have no data to fairly 
compare it with.

Ciao, MM


Re: [sqlite] Unhappy with performance

2008-10-31 Thread Marian Aldenhoevel
Hi,

> Considering that all or most of the records have the same value in 
> musttrans column, do you really need an index on it? Try dropping the 
> index, see if it helps.

They have the same value in my test. In the real application that field 
is used as a status field and most of the records will have 0, and a few 
dozen something else.

I have repeated the test without the index. The difference is negligible 
and probably within the normal noise of my crude 
benchmarking technique.

 > What happens when you run the update inside a transaction?

I tried it like this:

 > time sqlite3 kfzdb 'begin ; update kfz set musttrans=5 ; end'

No significant change in runtime either.

Thank you for your suggestions so far. Anything else?

The alternative is cleaning up the homebrew version used so far and 
adding some sort of indexing scheme to it. And I am definitely NOT 
looking forward to having to do that. It is extremely yucky code!

Ciao, MM


[sqlite] Unhappy with performance

2008-10-31 Thread Marian Aldenhoevel
Hi,

I have tried converting a program from a homebrew "database" to sqlite3 
for easier maintenance and hopefully better performance. While the 
former is easily achieved, the performance is not making me happy. The 
system is a "semi-embedded" small form-factor x86 machine with 128MB of 
RAM booting and running off CF. The OS is a 2.4.18-based Linux built from 
scratch.

I have run several tests outlined below and I can't get decent 
UPDATE-Performance out of my database. Apparently I am doing something 
horribly wrong. Can someone enlighten me?

The DB consists of a single table I am creating like this:

CREATE TABLE IF NOT EXISTS KFZ (
kfznr TEXT PRIMARY KEY,
saeule TEXT,
berechtigung2 TEXT,
berechtigung TEXT,
a_km TEXT,
max_km TEXT,
kont TEXT,
pincode TEXT,
CRC32 INTEGER,
verweis BLOB,
sperrung TEXT,
isNew INTEGER,
mustTrans INTEGER,
kennzeichen TEXT,
kontingentierung INTEGER);

CREATE INDEX IF NOT EXISTS IDX_KFZ_MUSTRANS ON KFZ (mustTrans);

CREATE INDEX IF NOT EXISTS IDX_KFZ_CRC32 ON KFZ (CRC32);

Then I insert about 30 records in the context of a transaction. That 
takes a while, but works reasonably well. The result is a DB file of 
about 30MB.

The problem is with bulk-updating:

 > # time sqlite3 kfzdb 'update kfz set musttrans=3'
 > real    10m 7.75s
 > user    8m 49.73s
 > sys     0m 24.29s

10 minutes is too long.

I must be doing something wrong. My database is on CF memory, and I 
suspected that to be the problem. To verify that I mounted a tmpfs, 
copied the DB there (taking 5.7s), and reran the test. Using memory 
instead of disk brings the total down to just under 9 minutes.

So disk I/O is probably not the cause. The runtime is dominated by user-space 
time, and while the command is running the CPU is at 99% usage by sqlite3.

Next I tried several of the suggestions from the SQLite Optimization 
FAQ[1]. I timed the final combination of most of them:

 > # time sqlite3 kfzdb 'pragma synchronous=OFF ; pragma count_changes=OFF ;
 >   pragma journal_mode=OFF ; pragma temp_store=MEMORY ;
 >   update kfz set musttrans=3'
 > off
 > real    8m 29.87s
 > user    8m 17.64s
 > sys     0m 8.10s

So no substantial improvement.

Finally I repeated the test using a simpler table consisting only of the 
column musttrans and 30 records. Updating that took about the same 
amount of time.

Ciao, MM

[1] http://web.utk.edu/~jplyon/sqlite/SQLite_optimization_FAQ.html


[sqlite] Segfault in initialization

2008-10-25 Thread Marian Aldenhoevel
Hi,

I am trying to add sqlite3 to a C program to replace a hand-made 
on-disk data structure that has proven to be cumbersome to change and 
inefficient.

Unfortunately the program crashes very early in the initialization 
before the first line of my own code executes, making the problem 
difficult to debug.

The Linux system this is to run on is built from scratch using the T2 
build system (www.t2-project.org). I am cross-compiling the system from 
an Ubuntu system for a semi-embedded machine. The program itself is 
cross-compiled on top of that using the standard GNU autotools.

The sqlite3 command-line tool works fine, so I suspect the library 
itself has been built OK and that I am doing something wrong in my 
compile-and-link setup. Because, as said above, none of my own code gets 
executed; the program never reaches main().

Are there any common pitfalls for beginners like me that could cause 
this problem?

The only idea I have had so far was to run both programs under strace. The 
output of strace for sqlite3 is at

http://www.marian-aldenhoevel.de/tmp/sqlite.txt

And the output from my program is at

http://www.marian-aldenhoevel.de/tmp/kbox.txt

I cannot see anything obvious that precedes the problem.

Ciao, MM