Re: [sqlite] Atomic DELETE index optimisation?

2017-12-17 Thread Rowan Worth
On 18 December 2017 at 06:59, Dinu wrote:
> Richard Hipp-3 wrote
>> Can you try this:
>> (1) BEGIN TRANSACTION
>> (2) DROP all indexes from the table
>> (3) Do the DELETE operation
>> (4) CREATE all indexes again
>> (5) COMMIT
> Thanks Richard, as mentioned earlier,

Re: [sqlite] Atomic DELETE index optimisation?

2017-12-17 Thread Keith Medcalf
Hmmm. 4.6 GB in the Filesystem cache (the Level 2 cache), and only 2 pages (78 MB or thereabouts) in the process pagecache (the Level 1 cache). And with only 4.6 GB in the Level 2 cache, the I/O rate drops to almost zero. If you "moved" that memory (or some of it) from Level 2 to Level

Re: [sqlite] Atomic DELETE index optimisation?

2017-12-17 Thread Keith Medcalf
Ok, just checking. The database I was referring to was an Historian Collector so the amount of data that it needed to maintain was time limited so it was more or less a FIFO by tag. --- The fact that there's a Highway to Hell but only a Stairway to Heaven says a lot about anticipated traffic

Re: [sqlite] Atomic DELETE index optimisation?

2017-12-17 Thread Simon Slavin
On 17 Dec 2017, at 10:57pm, Dinu wrote: > Na, they are not a single-time use rows, otherwise I'd have used a FIFO :) > Every now and then, a large portion of the table becomes obsolete by > external factors. Are you issuing one DELETE command with a WHERE clause which

Re: [sqlite] Atomic DELETE index optimisation?

2017-12-17 Thread Dinu
Richard Hipp-3 wrote > Can you try this: Thanks Richard, as mentioned earlier, any structure change is unacceptable due to concurrent reader clients.

Re: [sqlite] Atomic DELETE index optimisation?

2017-12-17 Thread Dinu
Keith Medcalf wrote > I had a similar problem with a multi-terabyte database once upon a time. Na, they are not single-use rows, otherwise I'd have used a FIFO :) Every now and then, a large portion of the table becomes obsolete by external factors.

Re: [sqlite] Atomic DELETE index optimisation?

2017-12-17 Thread Dinu
SQLite 3.19.3 CentOS 7 (64 bit) RAM: 6G total, 4.4G buff/cache, 0.6G unused Sqlite memory: 133M RES, 0 SWAP, 0 DIRTY - don't know if it's relevant anymore, the query seems to have entered into another execution phase, it looks like now it's flushing the WAL. No other PRAGMA No FKs, no triggers.

Re: [sqlite] Atomic DELETE index optimisation?

2017-12-17 Thread Keith Medcalf
I had a similar problem with a multi-terabyte database once upon a time. The solution was simply to run the DELETE every time you insert, thus keeping the database continuously "pruned" by continuous removal of a small number of records rather than periodically trying to get rid of a lot all
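Keith's continuous-pruning approach can be sketched with Python's stdlib sqlite3 module. The table name, retention window, and data below are invented for illustration; the point is that each INSERT carries one small DELETE, so the table never accumulates millions of doomed rows.

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)
con.execute("CREATE TABLE samples(ts INTEGER PRIMARY KEY, val REAL)")

KEEP = 100  # hypothetical retention window, in "ticks"

def insert_and_prune(ts, val):
    # One tiny DELETE per INSERT keeps the table continuously pruned,
    # instead of a periodic multi-million-row DELETE.
    con.execute("BEGIN")
    con.execute("INSERT INTO samples VALUES (?, ?)", (ts, val))
    con.execute("DELETE FROM samples WHERE ts <= ?", (ts - KEEP,))
    con.execute("COMMIT")

for t in range(500):
    insert_and_prune(t, t * 0.5)

count = con.execute("SELECT COUNT(*) FROM samples").fetchone()[0]
print(count)  # 100
```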

Re: [sqlite] Atomic DELETE index optimisation?

2017-12-17 Thread Richard Hipp
On 12/17/17, Dinu wrote:
> We have ~32M records out of which we are deleting about ~24M.

Can you try this:

(1) BEGIN TRANSACTION
(2) DROP all indexes from the table
(3) Do the DELETE operation
(4) CREATE all indexes again
(5) COMMIT

-- D. Richard Hipp d...@sqlite.org
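Richard's five steps can be sketched with Python's sqlite3 module. The schema, index names, and row counts are hypothetical, and note this does not address Dinu's concurrent-reader constraint raised later in the thread:

```python
import sqlite3

# isolation_level=None puts the connection in autocommit mode, so the
# BEGIN/COMMIT below are exactly what SQLite executes.
con = sqlite3.connect(":memory:", isolation_level=None)
con.executescript("""
    CREATE TABLE t(id INTEGER PRIMARY KEY, tag TEXT, val REAL);
    CREATE INDEX t_tag ON t(tag);
    CREATE INDEX t_val ON t(val);
""")
con.executemany("INSERT INTO t(tag, val) VALUES (?, ?)",
                [("tag%d" % (i % 10), float(i)) for i in range(1000)])

# Steps (1)-(5): drop indexes, bulk-delete, rebuild, all in one transaction.
con.execute("BEGIN TRANSACTION")
con.execute("DROP INDEX t_tag")
con.execute("DROP INDEX t_val")
con.execute("DELETE FROM t WHERE val < 750")
con.execute("CREATE INDEX t_tag ON t(tag)")
con.execute("CREATE INDEX t_val ON t(val)")
con.execute("COMMIT")

remaining = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(remaining)  # 250
```

With the indexes gone, the DELETE only rewrites the table B-tree; the index B-trees are then rebuilt once from sorted data instead of being pruned row by row.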

Re: [sqlite] Atomic DELETE index optimisation?

2017-12-17 Thread Keith Medcalf
(1) Since you are in WAL, have you set pragma SYNCHRONOUS=1 (NORMAL)? (Don't know if it will make any difference in your particular case). (2) The page_cache size makes a HUGE difference. Modified pages are ONLY flushed to the WAL file when they overflow the cache. The fewer modified pages
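Both suggestions can be sketched as pragmas on the connection (the file name is hypothetical; per the SQLite docs, a negative cache_size is interpreted as kibibytes rather than pages):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "example.db")  # hypothetical DB file
con = sqlite3.connect(path)

con.execute("PRAGMA journal_mode=WAL")
# (1) In WAL mode, synchronous=NORMAL syncs only at checkpoints.
con.execute("PRAGMA synchronous=NORMAL")
# (2) Enlarge the page cache so modified pages overflow to the WAL less
#     often; -1048576 KiB is roughly a 1 GiB cache.
con.execute("PRAGMA cache_size=-1048576")

mode = con.execute("PRAGMA journal_mode").fetchone()[0]
sync = con.execute("PRAGMA synchronous").fetchone()[0]
print(mode, sync)  # wal 1
```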

Re: [sqlite] Atomic DELETE index optimisation?

2017-12-17 Thread Dinu
INSERTs are taking constant time as they should; the DELETE is quite obviously making blatantly non-linear progress.

Re: [sqlite] Atomic DELETE index optimisation?

2017-12-17 Thread Dinu
Thanks, it's something worth trying. However, no, that can't be the only reason. Populating the table via script with the same data takes less than 10 minutes, including side processing... So the disproportion between insert and delete performance is huge.

Re: [sqlite] Atomic DELETE index optimisation?

2017-12-17 Thread Simon Slavin
On 17 Dec 2017, at 10:18pm, Dinu wrote: > For the table swap operation, that is out of the question, the DB is running > on a live system that has multiple clients. Foreign keys or not, any > structure change crashes all clients (DB structure has changed). Too high a >

Re: [sqlite] Atomic DELETE index optimisation?

2017-12-17 Thread Dinu
For the table swap operation, that is out of the question; the DB is running on a live system that has multiple clients. Foreign keys or not, any structure change crashes all clients ("DB structure has changed"). Too high a price to pay for a delete that may happen routinely.

Re: [sqlite] Atomic DELETE index optimisation?

2017-12-17 Thread Dinu
By "one time only", I mean that, in my understanding, the way most DBs handle a DELETE is this: cache the ROWIDs while deleting data rows from the main table and from the indexes, then, when all ROWIDs are explored, sort the ROWID stream and prune the trees from the sorted stream. This is both highly efficient
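A user-level approximation of that idea: collect the doomed rowids up front, then delete them in sorted batches, so each pass walks the B-trees in key order. The schema, obsolescence predicate, and batch size below are invented for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)
con.execute("CREATE TABLE t(id INTEGER PRIMARY KEY, obsolete INTEGER)")
con.execute("CREATE INDEX t_obs ON t(obsolete)")
con.executemany("INSERT INTO t(obsolete) VALUES (?)",
                [(1 if i % 4 else 0,) for i in range(10_000)])

# Gather the doomed rowids once, already sorted by the primary key.
rowids = [r[0] for r in con.execute(
    "SELECT id FROM t WHERE obsolete = 1 ORDER BY id")]

BATCH = 500  # kept under SQLite's historical 999-parameter limit
for i in range(0, len(rowids), BATCH):
    chunk = rowids[i:i + BATCH]
    placeholders = ",".join("?" * len(chunk))
    con.execute("BEGIN")
    con.execute("DELETE FROM t WHERE id IN (%s)" % placeholders, chunk)
    con.execute("COMMIT")

left = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(left)  # 2500
```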

Re: [sqlite] Atomic DELETE index optimisation?

2017-12-17 Thread Simon Slavin
On 17 Dec 2017, at 10:01pm, Dinu wrote: > The CPU is capped up badly, the HDD > is at idle level so this also hints to the same issue. Apparently your program/OS is trying to do everything in cache. This may be self-defeating for this operation. Your description of

Re: [sqlite] Atomic DELETE index optimisation?

2017-12-17 Thread Clemens Ladisch
Dinu wrote: > The WAL size is in excess of 9G atm. > The WAL size is growing ever slower (was about 10M/s, now it's 1M every 2s, > slowing down). This indicates to me a non-linear process that I can link > only to the B-trees, it's the only non-linear component I can think of that > could cause

Re: [sqlite] Atomic DELETE index optimisation?

2017-12-17 Thread Dinu
Ok, so to address the size of the problem properly: We have ~32M records out of which we are deleting about ~24M. The raw size of the DB is about 20G out of which 15G goes away. Under these circumstances any page cache becomes irrelevant. The delete operation, which is a single delete run from

Re: [sqlite] Atomic DELETE index optimisation?

2017-12-17 Thread Simon Slavin
On 17 Dec 2017, at 8:53pm, Dinu Marina wrote: > It became apparent to me from performance measurements that the DELETE > operation is very slow, when operating on large recordsets with indexes > involved. My deduction is that SQLite updates the indexes for every deleted

Re: [sqlite] Atomic DELETE index optimisation?

2017-12-17 Thread Keith Medcalf
Your page cache is probably too small to contain the working set of database pages that require modification. There should be no difference between "deleting" and "inserting": both require changes to all applicable B+ and B- Trees. Though of course, it can be VERY slow if you have SECURE

Re: [sqlite] Odd question

2017-12-17 Thread Keith Medcalf
How do you know the insert is not working? Have you verified that the data is not being inserted, or are you simply relying on a popup provided by the application? If the latter, you should be aware that the application can pop up whatever it wants at any time. For example, it could claim the

Re: [sqlite] Atomic DELETE index optimisation?

2017-12-17 Thread Dinu Marina
Update: we are also using WAL; I have noticed during a huge delete, the WAL also grows huge. Could this also be a reason for slowness, that SQLite duplicates every block that hosts a delete? Is there any way to work around this? On 17.12.2017 22:53, Dinu Marina wrote: Hi all, It became
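One workaround sketch for the WAL growth (schema, payload, and batch sizes are hypothetical): delete in batches and checkpoint between batches, so the WAL is truncated repeatedly instead of accumulating every modified page of one huge DELETE.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "big.db")
con = sqlite3.connect(path, isolation_level=None)
con.execute("PRAGMA journal_mode=WAL")
con.execute("CREATE TABLE t(id INTEGER PRIMARY KEY, payload TEXT)")
con.execute("BEGIN")
con.executemany("INSERT INTO t(payload) VALUES (?)",
                [("x" * 100,) for _ in range(20_000)])
con.execute("COMMIT")

while True:
    con.execute("BEGIN")
    cur = con.execute(
        "DELETE FROM t WHERE id IN (SELECT id FROM t LIMIT 2000)")
    con.execute("COMMIT")
    if cur.rowcount == 0:
        break
    # TRUNCATE resets the WAL file to zero length when no reader blocks it.
    con.execute("PRAGMA wal_checkpoint(TRUNCATE)")

left = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(left)  # 0
```

The trade-off is that the DELETE is no longer atomic as a whole; readers can observe the intermediate states between batches.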

[sqlite] Atomic DELETE index optimisation?

2017-12-17 Thread Dinu Marina
Hi all, It became apparent to me from performance measurements that the DELETE operation is very slow when operating on large recordsets with indexes involved. My deduction is that SQLite updates the indexes for every deleted row, which is painstakingly slow... Since the DELETE should be

Re: [sqlite] Odd question

2017-12-17 Thread nomad
On Sat Dec 16, 2017 at 07:53:22PM +, Nelson, Erik - 2 wrote:
> For unfortunate reasons, I need a query that does an insert and also
> returns at least one row... for example, something along the lines of
>
> Select 1 as value from (insert into table1 values(a, b, c))
>
> Or
>
> Select
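SQLite has no SELECT-from-INSERT form; versions from 3.35 (2021, well after this thread) support `INSERT ... RETURNING`. A portable sketch for older versions runs the INSERT and then reads `last_insert_rowid()`/`changes()` in a follow-up SELECT on the same connection (the table layout is assumed from the quoted example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1(a, b, c)")

cur = con.cursor()
cur.execute("INSERT INTO table1 VALUES (?, ?, ?)", (1, 2, 3))
# last_insert_rowid() and changes() are per-connection, so this SELECT
# reliably describes the INSERT that just ran on this connection.
row = cur.execute("SELECT last_insert_rowid(), changes()").fetchone()
print(row)  # (1, 1)
```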

Re: [sqlite] sqlite3_aggregate_context() in xFinal callback does not zeroes out memory

2017-12-17 Thread Lifepillar
On 17/12/2017 11:40, Lifepillar wrote: When I call a custom aggregate function on an empty table T, e.g., `select myaggr(A) from T`, the sqlite3_aggregate_context() call in the xFinal callback does not return zeroed out memory, but a block of seemingly uninitialized memory. Is that expected? If

[sqlite] sqlite3_aggregate_context() in xFinal callback does not zeroes out memory

2017-12-17 Thread Lifepillar
When I call a custom aggregate function on an empty table T, e.g., `select myaggr(A) from T`, the sqlite3_aggregate_context() call in the xFinal callback does not return zeroed out memory, but a block of seemingly uninitialized memory. Is that expected? If so, how do I know that the xStep
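Per the SQLite documentation, `sqlite3_aggregate_context(ctx, N)` with N > 0 zeroes the block it allocates on its first call, and the usual xFinal idiom is to call it with N = 0, which returns NULL when xStep never ran, so genuinely uninitialized memory would suggest reading before that first allocation. The empty-table case can be illustrated at the Python level (class and data invented; depending on the version, Python's wrapper may skip finalize entirely when no rows matched):

```python
import sqlite3

# When the table is empty, step() never runs, so any aggregate state must
# be initialised up front (here in __init__; in the C API, the block that
# sqlite3_aggregate_context() allocates is zeroed for you).
class MyAggr:
    def __init__(self):
        self.total = 0  # explicit init; never rely on lazy setup in step()

    def step(self, value):
        self.total += value

    def finalize(self):
        return self.total

con = sqlite3.connect(":memory:")
con.create_aggregate("myaggr", 1, MyAggr)
con.execute("CREATE TABLE t(a INTEGER)")
empty_result = con.execute("SELECT myaggr(a) FROM t").fetchone()[0]
con.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])
full_result = con.execute("SELECT myaggr(a) FROM t").fetchone()[0]
print(empty_result, full_result)
```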