The largest drawback of this approach is scalability -- in my case I
often deal with databases of 20-30 closely related tables, many of
which can hold 1,000,000 or more records, with multiple relations
between them. Copying these into memory before each query would be
very time consuming. Having two copies of the database (one in memory
and one on disk) and ensuring that changes are made to both as part of
a transaction would be more effective, but it would require a large
amount of memory!
-Ken
On 19-Apr-05, at 3:36 AM, Ben Clewett wrote:
There are two things I am working on. Firstly, just looping around
the execution:
int rc;
do {
    rc = execute(sql);
} while (rc == SQLITE_BUSY || rc == SQLITE_LOCKED);
This does the job, but it is not very nice. It is also impossible to
tell the difference between a genuine lock caused by bad programming
and a lock caused by heavy load.
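One way to make the spin loop reportable rather than endless is to bound
the number of retries. Here is a minimal sketch of that idea using
Python's built-in sqlite3 module rather than the C API; the function and
variable names are illustrative, not part of any SQLite interface:

```python
import os
import sqlite3
import tempfile
import time

# A scratch database file (path is illustrative).
db = os.path.join(tempfile.mkdtemp(), "demo.db")

# isolation_level=None -> autocommit; timeout=0 -> fail at once on a lock.
writer = sqlite3.connect(db, timeout=0, isolation_level=None)
reader = sqlite3.connect(db, timeout=0, isolation_level=None)

writer.execute("CREATE TABLE t (x)")
writer.execute("BEGIN IMMEDIATE")  # take and hold the write lock

def execute_with_retry(conn, sql, attempts=5, delay=0.01):
    """Retry a statement while the database reports it is locked,
    but give up after a bounded number of attempts."""
    for _ in range(attempts):
        try:
            return conn.execute(sql)
        except sqlite3.OperationalError as e:
            if "locked" not in str(e):
                raise
            time.sleep(delay)
    raise RuntimeError("still locked after %d attempts" % attempts)

err = None
try:
    execute_with_retry(reader, "INSERT INTO t VALUES (1)", attempts=3)
except RuntimeError as e:
    err = str(e)
print(err)  # still locked after 3 attempts

writer.execute("ROLLBACK")  # release the lock; now the retry succeeds
execute_with_retry(reader, "INSERT INTO t VALUES (1)")
```

Capping the attempts does not tell you *why* the database was locked, but
at least a programming error surfaces as a hard failure instead of an
infinite loop.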
Secondly, an alternate simple API to complete a SELECT query. My
version will load the table into memory, then complete the query.
That way no lock is held, and I can use the data for as long
as I want. (This is the way MySQL and PostgreSQL work.) I can also
stream this table down a TCP/IP connection, although it will be
only a table of text, and will require freeing after use. (Has anyone
done this already, to save me the work?) If my version works, I'll try
and offer a patch.
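Something close to this already exists in the C API: sqlite3_get_table()
returns the whole result set as a table of text strings, which the caller
must later release with sqlite3_free_table(). The same copy-then-release
pattern is easy to sketch with Python's built-in sqlite3 module (table
and column names here are made up for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE prices (item TEXT, cost REAL);
    INSERT INTO prices VALUES ('apple', 0.40), ('pear', 0.55);
""")

cur = conn.execute("SELECT item, cost FROM prices ORDER BY item")
rows = cur.fetchall()  # copy the entire result set into memory
cur.close()            # the statement is finished; no read lock remains

# The copied rows can now be kept, serialised, or streamed over TCP/IP
# for as long as needed without blocking other users of the database.
for item, cost in rows:
    print(item, cost)
```

The trade-off is the same one discussed above: the whole result set must
fit in memory, so it suits moderate result sizes rather than
million-row tables.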
But I would still like to offer encouragement to the great developers
of SQLite to get row locking working. For me, this would transform
SQLite into a very professional system.
Kind regards,
Ben Clewett.
Will Leshner wrote:
On 4/18/05, Ben Clewett <[EMAIL PROTECTED]> wrote:
I see in the manual there are plans to use row locking. This would
solve my problems, allowing me to load the database from multiple
processes to a far greater extent without fear of SQLITE_BUSY, as
well as simplifying my programs.
I am currently experimenting with using triggers to do row locking and
it seems to work quite well, though I don't know if it will help solve
your particular problem.
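Will's message doesn't include his trigger scheme, but one plausible way
to emulate row locking with triggers is a side table of locked row ids
plus a BEFORE UPDATE trigger that raises an error when the target row is
marked. This is a speculative sketch, not his actual code; all table,
trigger, and column names are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL);
    INSERT INTO accounts VALUES (1, 100.0), (2, 50.0);

    -- Side table recording which rows are currently "locked".
    CREATE TABLE row_locks (account_id INTEGER PRIMARY KEY);

    -- Refuse updates to any account listed in row_locks.
    CREATE TRIGGER guard_locked_rows
    BEFORE UPDATE ON accounts
    FOR EACH ROW
    WHEN EXISTS (SELECT 1 FROM row_locks WHERE account_id = OLD.id)
    BEGIN
        SELECT RAISE(ABORT, 'row is locked');
    END;
""")

conn.execute("INSERT INTO row_locks VALUES (1)")  # "lock" account 1

err = None
try:
    conn.execute("UPDATE accounts SET balance = 0 WHERE id = 1")
except sqlite3.DatabaseError as e:
    err = str(e)
print(err)  # row is locked

# Rows that are not marked can still be updated normally.
conn.execute("UPDATE accounts SET balance = 60 WHERE id = 2")
```

Note this only guards against updates through SQL; each application must
cooperate by inserting into and deleting from the lock table itself, and
stale lock rows need cleaning up if a process dies.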