This is true: a SELECT * from a table larger than the machine's memory would not work at all, and any large table would be slow.

A solution used by MySQL is to have two cursors.

The standard cursor returns a memory-resident table containing the complete result of the query. It is ideal for small queries: no locking problems, easy to program with, and good for concurrency. You can move forward and back through the result set and know its dimensions. This is still very fast.

Then they provide a second cursor which works like SQLite, returning one row at a time; this is faster and better for large tables.

However, neither cursor locks more than a row at a time, and then only for as long as it takes to make a copy of the row. So concurrency still works very well.
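
For the curious, these two cursors are exposed in the MySQL C API as mysql_store_result() (buffer the whole result client-side) and mysql_use_result() (stream rows one at a time). A minimal sketch; the connection setup is omitted, and the handle conn, the table t and its columns are invented for illustration:

#include <stdio.h>
#include <mysql/mysql.h>

/* Buffered cursor: the whole result is copied into client memory,
   so we can ask for its dimensions up front and seek freely. */
void buffered_query(MYSQL *conn)
{
    if (mysql_query(conn, "SELECT id, name FROM t") != 0)
        return;
    MYSQL_RES *res = mysql_store_result(conn);
    if (res == NULL)
        return;
    printf("%lu rows\n", (unsigned long) mysql_num_rows(res));
    MYSQL_ROW row;
    while ((row = mysql_fetch_row(res)) != NULL)
        printf("%s\t%s\n", row[0], row[1]);
    mysql_free_result(res);
}

/* Streaming cursor: rows are fetched from the server one at a time,
   so memory use stays flat even for a huge table, but the row count
   is not known until every row has been read. */
void streaming_query(MYSQL *conn)
{
    if (mysql_query(conn, "SELECT id, name FROM t") != 0)
        return;
    MYSQL_RES *res = mysql_use_result(conn);
    if (res == NULL)
        return;
    MYSQL_ROW row;
    while ((row = mysql_fetch_row(res)) != NULL)
        printf("%s\t%s\n", row[0], row[1]);
    mysql_free_result(res);
}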

I would hate to see SQLite become MySQL; there is already one too many of those :) But they may have some good ideas.

Regards, Ben.


Ken & Deb Allen wrote:
The largest drawback of this approach is scalability -- in my case I often deal with databases of 20-30 closely related tables with multiple relations, many of which can have 1,000,000 or more records in them. Copying this into memory before each query would be very time-consuming. Having two copies of the database (one in memory and one on disk) and ensuring that changes are made to both as part of a transaction would be more effective, but this would require large amounts of memory!

-Ken

On 19-Apr-05, at 3:36 AM, Ben Clewett wrote:

There are two things I am working on. Firstly, just looping around the execution:

/* Retry while the database reports a transient lock. */
int rc;
do {
    rc = execute(sql);
} while (rc == SQLITE_BUSY || rc == SQLITE_LOCKED);

This does the job, but is not very nice. It is also impossible to tell the difference between a genuine lock caused by bad programming and a lock caused by heavy load.
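
One mitigation (a sketch only, with an arbitrary retry count and delay, and assuming the same hypothetical execute() helper returning an SQLite result code) is to bound the retries and back off between attempts, so a genuine deadlock eventually surfaces as an error instead of spinning forever:

#include <unistd.h>      /* usleep() */
#include "sqlite3.h"

extern int execute(const char *sql);   /* hypothetical helper from above */

int execute_retry(const char *sql)
{
    int rc = SQLITE_BUSY;
    int tries;
    for (tries = 0; tries < 100; tries++) {
        rc = execute(sql);
        if (rc != SQLITE_BUSY && rc != SQLITE_LOCKED)
            return rc;         /* success, or a genuine error */
        usleep(10 * 1000);     /* back off 10 ms before retrying */
    }
    return rc;   /* still locked after ~1 second: likely a real problem */
}

SQLite can also do a version of this retry internally if you set sqlite3_busy_timeout() on the connection.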

Secondly, an alternative simple API to complete a SELECT query. My version will load the table into memory, then complete the query. That way no locks are held, and I can use the data for as long as I want. (This is the way MySQL and PostgreSQL work.) I can also stream this table down a TCP/IP connection, although it will only be a table of text, and will require freeing after use. (Has anyone done this already, to save me the work?) If my version works, I'll try and offer a patch.
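
As a footnote, SQLite already provides part of this: sqlite3_get_table() executes a query and returns the entire result as an in-memory array of text strings (the first row being the column names), which must be released afterwards with sqlite3_free_table(). A minimal sketch:

#include <stdio.h>
#include "sqlite3.h"

void dump_query(sqlite3 *db, const char *sql)
{
    char **result;
    char *err = NULL;
    int nrow, ncol, r, c;

    if (sqlite3_get_table(db, sql, &result, &nrow, &ncol, &err) != SQLITE_OK) {
        fprintf(stderr, "error: %s\n", err ? err : "unknown");
        sqlite3_free(err);
        return;
    }
    /* Row 0 holds the column names; rows 1..nrow hold the data. */
    for (r = 0; r <= nrow; r++) {
        for (c = 0; c < ncol; c++)
            printf("%s\t", result[r * ncol + c] ? result[r * ncol + c] : "NULL");
        printf("\n");
    }
    sqlite3_free_table(result);   /* the text copy must be freed after use */
}

No locks are held once the call returns, so the copy could be streamed down a TCP/IP connection at leisure.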

But I would still like to offer encouragement to the great developers of SQLite to get row locking working. For me this would transform SQLite into a very professional system.

Kind regards,

Ben Clewett.



Will Leshner wrote:

On 4/18/05, Ben Clewett <[EMAIL PROTECTED]> wrote:

I see in the manual there are plans to use Row Locking. This would solve my problems, allowing me to load the database from multiple processes to a far greater degree without fear of SQLITE_BUSY, as well as simplifying my programs.

I am currently experimenting with using triggers to do row locking and it seems to work quite well, though I don't know if it will help solve your particular problem.
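
Will doesn't show his scheme, so the following is purely a guess at one possible shape: a lock table plus a BEFORE UPDATE trigger that aborts any update to a row claimed by another owner. The table name t, the row_locks schema and the owner value 'me' are all invented for illustration:

#include "sqlite3.h"

/* Hypothetical sketch of trigger-based row locking: updates to rows
   of t that appear in row_locks under another owner are aborted. */
int install_row_locks(sqlite3 *db)
{
    const char *sql =
        "CREATE TABLE row_locks (tbl TEXT, row_id INTEGER, owner TEXT);"
        "CREATE TRIGGER t_row_lock BEFORE UPDATE ON t "
        "BEGIN "
        "  SELECT RAISE(ABORT, 'row is locked') "
        "  WHERE EXISTS (SELECT 1 FROM row_locks "
        "                WHERE tbl = 't' AND row_id = OLD.rowid "
        "                  AND owner <> 'me'); "
        "END;";
    /* sqlite3_exec runs both statements; any error comes back as rc. */
    return sqlite3_exec(db, sql, NULL, NULL, NULL);
}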
