Thank you for your response

Simon Slavin <slavins@...> writes:
> Another way would be to use the LIMIT clause on UPDATE

Yes, this was the very first thing I tried.
But without an index on C1, the time of each LIMITed query
is not bounded. For example, on 1 million rows the first
UPDATE T set C1 = calculation(C2) where C1 is NULL LIMIT 10000
takes 1 second, but the last takes 4.5 seconds,
which is dangerously close to the timeout.
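For reference, here is a minimal Python sketch of that batching loop (T, C1, C2 are the names from this thread; C2 * 2 is just a stand-in for calculation(C2)). Since UPDATE ... LIMIT needs the SQLITE_ENABLE_UPDATE_DELETE_LIMIT compile-time option, the sketch emulates it with a rowid subquery; note it has the same flaw I observed, because without an index each batch re-scans the table to find NULL rows:

```python
import sqlite3

def run_batches(conn, batch=10_000):
    """Update NULL-C1 rows in fixed-size batches, committing after each
    batch so no single transaction holds the write lock for too long.

    The rowid subquery emulates UPDATE ... LIMIT, which is only
    available when SQLite is compiled with
    SQLITE_ENABLE_UPDATE_DELETE_LIMIT."""
    while True:
        cur = conn.execute(
            "UPDATE T SET C1 = C2 * 2 "  # stand-in for calculation(C2)
            "WHERE rowid IN (SELECT rowid FROM T WHERE C1 IS NULL LIMIT ?)",
            (batch,))
        conn.commit()
        if cur.rowcount == 0:  # no NULL rows left
            break
```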

> Do you have an index on T(C1) ?  
> That should dramatically reduce the search time.

I tried adding an index.
It slowed down the original insertion of records into the table by about
25%, which is unacceptable to me.
It also slows down the update, because the index has to be maintained as well.
Note that C1 is not the only column on which I would like to run such updates,
so I would have to add several indexes, which would be even worse.

But the index does, indeed, bound the time of a single update query
UPDATE T set C1 = calculation(C2) where C1 is NULL LIMIT 10000
to about 2 seconds.
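One option I am considering to soften the index cost is a partial index: only the rows still awaiting calculation (C1 IS NULL) get indexed, so the index stays small, shrinks as updates complete, and insert overhead drops once a row's C1 is filled in. A sketch of the idea (t_c1_pending is a name I made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE T(C1, C2)")

# Partial index: only rows with C1 IS NULL are indexed, so it stays
# small and empties out as the batched updates make progress.
conn.execute("CREATE INDEX t_c1_pending ON T(C1) WHERE C1 IS NULL")

# The planner can use it for the batching query's WHERE clause:
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT rowid FROM T WHERE C1 IS NULL").fetchall()
print(plan)
```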

> UPDATE T SET C1 = calculation(C2) WHERE C1 IS NULL AND rowid < 10000

This is the best solution when the table is freshly created
and max(rowid) == number of rows.
But after many deletes and inserts, many rowids will be unused;
max(rowid) may grow without bound, and so will the number of
UPDATE statements required.
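A variant that avoids the gap problem would be to pick each window's end from the live rowids instead of stepping by a fixed amount, so every UPDATE covers exactly one batch of existing rows no matter how sparse the rowids are. A sketch (again, C2 * 2 stands in for calculation(C2)):

```python
import sqlite3

def update_by_rowid_windows(conn, batch=10_000):
    """Walk the table in rowid order; the end of each window is the
    batch-th live rowid, so gaps left by deletes are skipped and the
    number of UPDATE statements is ceil(live_rows / batch)."""
    last = 0
    while True:
        row = conn.execute(
            "SELECT rowid FROM T WHERE rowid > ? "
            "ORDER BY rowid LIMIT 1 OFFSET ?",
            (last, batch - 1)).fetchone()
        if row is None:
            # fewer than `batch` rows remain; finish with the table max
            end = conn.execute("SELECT MAX(rowid) FROM T").fetchone()[0]
            if end is None or end <= last:
                break
        else:
            end = row[0]
        conn.execute(
            "UPDATE T SET C1 = C2 * 2 "  # stand-in for calculation(C2)
            "WHERE C1 IS NULL AND rowid > ? AND rowid <= ?",
            (last, end))
        conn.commit()
        last = end
```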

So, no perfect solution so far.

I wonder whether it is theoretically possible
to commit in response to an interrupt or a special progress_handler
return value.
I am not asking for this to be implemented in standard SQLite,
but maybe I can do it as my own custom patch?
Or is it completely impossible?
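As far as I can tell, the closest existing mechanism is that a non-zero progress_handler return already stops the running statement with SQLITE_INTERRUPT; but my understanding is that the interrupted statement's partial changes are then rolled back rather than committed, which is exactly why batching is needed. A sketch demonstrating that behavior through Python's sqlite3 (a simplification; details may vary with journal mode and transaction settings):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE T(C1, C2)")
conn.executemany("INSERT INTO T(C1, C2) VALUES (NULL, ?)",
                 [(i,) for i in range(1000)])
conn.commit()

# Abort the next statement after ~100 VDBE opcodes, standing in for
# "time limit reached".
conn.set_progress_handler(lambda: 1, 100)
try:
    conn.execute("UPDATE T SET C1 = C2 * 2 WHERE C1 IS NULL")
except sqlite3.OperationalError:
    pass  # statement stopped with SQLITE_INTERRUPT
conn.set_progress_handler(None, 0)
conn.rollback()

# The interrupted statement's partial work was not committed.
done = conn.execute(
    "SELECT COUNT(*) FROM T WHERE C1 IS NOT NULL").fetchone()[0]
print(done)
```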

Thank you



_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
