As you are running in WAL mode, readers will still see the state of the 
data from BEFORE you started the delete transaction. This also holds for 
readers that start while your delete transaction is running. Nobody is 
pulling the rug out from under any transaction at all.
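That snapshot behavior can be observed with two connections (a minimal sketch using Python's sqlite3 module; the file, table, and row counts are illustrative):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path, isolation_level=None)  # autocommit mode
writer.execute("PRAGMA journal_mode=WAL")
writer.execute("CREATE TABLE t(x)")
writer.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(100)])

reader = sqlite3.connect(path, isolation_level=None)

writer.execute("BEGIN")
writer.execute("DELETE FROM t")  # the long "bulk delete", not yet committed

# the reader still sees the pre-transaction snapshot
during = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]

writer.execute("COMMIT")
after = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(during, after)  # 100 0
```

The reader's SELECT during the open transaction returns all 100 rows; only after COMMIT does a new read see the deletion.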

Even though you are running in WAL mode, any client process attempting to write 
will still be blocked until AFTER the delete transaction FINISHES. As you have 
discovered, this means HOURS in your case.
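The writer exclusion can also be seen directly: a second connection that tries to write while the delete transaction holds the write lock gets SQLITE_BUSY ("database is locked") once its busy timeout expires (again a sketch with Python's sqlite3; names and the 100 ms timeout are illustrative):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
deleter = sqlite3.connect(path, isolation_level=None)
deleter.execute("PRAGMA journal_mode=WAL")
deleter.execute("CREATE TABLE t(x)")
deleter.execute("BEGIN IMMEDIATE")  # take the write lock, like the long delete
deleter.execute("DELETE FROM t")

other = sqlite3.connect(path, timeout=0.1)  # give up after 100 ms
blocked = False
try:
    other.execute("INSERT INTO t VALUES (1)")
except sqlite3.OperationalError:
    blocked = True  # "database is locked"
print(blocked)  # True

deleter.execute("COMMIT")
```

In your case the second writer would wait not 100 ms but for the full duration of the delete.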

The "total replacement" algorithm should minimize the overall effort for your 
"bulk delete" operation. The improvements may be several orders of magnitude, 
but you will have to find out the exact scale for your data and your 
application and decide if the improved processing time is compatible with your 
requirements.
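One way to read "total replacement" is: copy the rows you want to keep into a fresh table, then swap it for the original, so the indexes are built once over the survivors instead of being updated row by row during a huge delete. A sketch of that reading via Python's sqlite3 module (the table name, schema, and keep condition are made-up examples):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
db = sqlite3.connect(path)
db.execute("CREATE TABLE t(id INTEGER PRIMARY KEY, flag INT)")
db.executemany("INSERT INTO t(flag) VALUES (?)", [(i % 2,) for i in range(10)])
db.commit()

# copy the survivors, then swap tables; any indexes (and triggers) on t
# must be recreated on the new, smaller table after the rename
db.executescript("""
BEGIN;
CREATE TABLE t_new(id INTEGER PRIMARY KEY, flag INT);
INSERT INTO t_new SELECT * FROM t WHERE flag = 0;  -- keep condition (example)
DROP TABLE t;
ALTER TABLE t_new RENAME TO t;
COMMIT;
""")
kept = db.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(kept)  # 5
```

Whether this wins depends on the keep/delete ratio; it pays off most when the surviving set is small relative to the table.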

The "piecemeal delete" approach requires adapting the delete statement(s) so 
that the time taken fits into your requirements, e.g.

DELETE FROM table WHERE rowid IN (SELECT rowid FROM table WHERE <deleteable> 
LIMIT <quickdeletelimit>);

with <quickdeletelimit> replaced by the number of rows that gives an 
acceptable latency for concurrent updates, and the delete transactions 
spaced out so that other clients have a chance to perform their writes too.

Current websites attempt to achieve 500ms reaction times, so maybe running a 
delete that takes 100ms every second will do...
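Wrapped in a loop, the piecemeal delete above might look like this (a sketch with Python's sqlite3; the batch size, pause, and "deletable" condition are placeholders to tune for your workload):

```python
import os
import sqlite3
import tempfile
import time

path = os.path.join(tempfile.mkdtemp(), "demo.db")
db = sqlite3.connect(path, isolation_level=None)
db.execute("PRAGMA journal_mode=WAL")
db.execute("CREATE TABLE t(id INTEGER PRIMARY KEY, old INT)")
db.executemany("INSERT INTO t(old) VALUES (?)", [(1,)] * 1000)

BATCH = 100  # pick so each delete stays inside the latency budget (e.g. ~100 ms)
while True:
    cur = db.execute(
        "DELETE FROM t WHERE rowid IN "
        "(SELECT rowid FROM t WHERE old = 1 LIMIT ?)", (BATCH,))
    if cur.rowcount == 0:
        break  # nothing deletable left
    time.sleep(0.01)  # space the transactions out so other writers get a turn
remaining = db.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(remaining)  # 0
```

Each iteration is a short transaction, so readers and writers are only ever blocked for one batch at a time.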

-----Original Message-----
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
behalf of Dinu
Sent: Monday, December 18, 2017 11:03
To: sqlite-users@mailinglists.sqlite.org
Subject: [EXTERNAL] Re: [sqlite] Atomic DELETE index optimisation?

Rowan Worth-2 wrote
> I'm not sure what you're worried about? Dropping and recreating
> identical indices within a transaction won't cause a visible structure
> change to concurrent readers -- that's the point of a transaction.

I honestly don't see how, in any DB system, the client process would not crash 
if the index it's running a cursor on were to be removed. Even if SQLite were 
to pull this magic out of the hat, starving client processes for the lack of 
an index (a full scan query would probably take in excess of 30s) would 
quickly pile up the clients to the point where one would have to kill them 
anyway.
So with this in mind, I'm really not looking for a barbaric fix to this; I'm 
more trying to understand the problem and find a viable, semantically stable 
solution (and maybe trigger some improvements in SQLite, if there's a system 
bug).

_______________________________________________
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


___________________________________________
 Gunter Hick | Software Engineer | Scientific Games International GmbH | 
Klitschgasse 2-4, A-1130 Vienna | FN 157284 a, HG Wien, DVR: 0430013 | (O) +43 
1 80100 - 0

May be privileged. May be confidential. Please delete if not the addressee.