Re: [sqlite] Atomic DELETE index optimisation?

2017-12-17 Thread Rowan Worth
On 18 December 2017 at 06:59, Dinu  wrote:

> Richard Hipp-3 wrote
> > Can you try this:
>
> > (1) BEGIN TRANSACTION
> > (2) DROP all indexes from the table
> > (3) Do the DELETE operation
> > (4) CREATE all indexes again
> > (5) COMMIT
> Thanks Richard, as mentioned earlier, any structure change is unacceptable
> due to concurrent reader clients.
>

I'm not sure what you're worried about? Dropping and recreating identical
indices within a transaction won't cause a visible structure change to
concurrent readers -- that's the point of a transaction. Unless perhaps
those readers are so old they're using the deprecated sqlite3_prepare
rather than prepare_v2, and fall over when encountering SQLITE_SCHEMA?
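(A minimal sketch of the point above, using Python's stdlib sqlite3 module; the table and index names are illustrative, not from the thread. The drop, bulk delete, and recreate all happen inside one transaction, so the schema visible at COMMIT is identical to the schema before BEGIN:)

```python
import sqlite3

# Hypothetical schema for illustration only.
db = sqlite3.connect(":memory:", isolation_level=None)  # manual transaction control
db.execute("CREATE TABLE t(id INTEGER PRIMARY KEY, v TEXT)")
db.execute("CREATE INDEX t_v ON t(v)")
db.executemany("INSERT INTO t(v) VALUES (?)", [("x%d" % i,) for i in range(1000)])

# Drop the index, do the bulk DELETE, recreate the index -- all in one
# transaction, so no committed state ever shows the intermediate schema.
db.execute("BEGIN")
db.execute("DROP INDEX t_v")
db.execute("DELETE FROM t WHERE id % 4 != 0")
db.execute("CREATE INDEX t_v ON t(v)")
db.execute("COMMIT")

indexes = [r[1] for r in db.execute("PRAGMA index_list('t')")]
remaining = db.execute("SELECT count(*) FROM t").fetchone()[0]
```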

-Rowan
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Atomic DELETE index optimisation?

2017-12-17 Thread Keith Medcalf

Hmmm.  4.6 GB in the Filesystem cache (the Level 2 cache), and only 20000 pages 
(78 MB or thereabouts) in the process pagecache (the Level 1 cache).  And with 
only 4.6 GB in the Level 2 cache, the I/O rate drops to almost zero.  If you 
"moved" that memory (or some of it) from Level 2 to Level 1 you would increase 
performance tremendously.

pragma cache_size=262144; -- 1 GB page cache

Will increase the page cache to 1 GB.  Might not be big enough, but should be 
much better.
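(A quick sketch of how that page count is derived, in Python with the stdlib sqlite3 module. A positive cache_size is a page count, so a ~1 GiB cache is the target byte size divided by the page size -- 262144 pages at the 4096-byte default. A negative value such as -1048576 would instead request the size directly in KiB:)

```python
import sqlite3

db = sqlite3.connect(":memory:")
page_size = db.execute("PRAGMA page_size").fetchone()[0]  # typically 4096

# Positive cache_size = number of pages; compute pages for a ~1 GiB cache.
target_bytes = 1 << 30
pages = target_bytes // page_size
db.execute("PRAGMA cache_size=%d" % pages)
cache_pages = db.execute("PRAGMA cache_size").fetchone()[0]
```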

---
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.


>-Original Message-
>From: sqlite-users [mailto:sqlite-users-
>boun...@mailinglists.sqlite.org] On Behalf Of Dinu
>Sent: Sunday, 17 December, 2017 15:55
>To: sqlite-users@mailinglists.sqlite.org
>Subject: Re: [sqlite] Atomic DELETE index optimisation?
>
>SQLite 3.19.3
>CentOS 7 (64 bit)
>RAM: 6G total, 4.4G buff/cache, 0.6G unused
>Sqlite memory: 133M RES, 0 SWAP, 0 DIRTY - don't know if it's
>relevant
>anymore, the query seems to have entered into another execution
>phase, it
>looks like now it's flushing the WAL.
>No other PRAGMA
>No FKs, no triggers.
>
>
>
>
>
>
>--
>Sent from: http://sqlite.1065341.n5.nabble.com/





Re: [sqlite] Atomic DELETE index optimisation?

2017-12-17 Thread Keith Medcalf

Ok, just checking.  The database I was referring to was an Historian Collector, 
so the amount of data it needed to maintain was time-limited; it was 
more or less a FIFO by tag.

---
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.


>-Original Message-
>From: sqlite-users [mailto:sqlite-users-
>boun...@mailinglists.sqlite.org] On Behalf Of Dinu
>Sent: Sunday, 17 December, 2017 15:58
>To: sqlite-users@mailinglists.sqlite.org
>Subject: Re: [sqlite] Atomic DELETE index optimisation?
>
>Keith Medcalf wrote
>> I had a similar problem with a multi-terabyte database once upon a
>time.
>
>Na, they are not a single-time use rows, otherwise I'd have used a
>FIFO :)
>Every now and then, a large portion of the table becomes obsolete by
>external factors.
>
>
>
>--
>Sent from: http://sqlite.1065341.n5.nabble.com/





Re: [sqlite] Atomic DELETE index optimisation?

2017-12-17 Thread Simon Slavin


On 17 Dec 2017, at 10:57pm, Dinu  wrote:

> Na, they are not a single-time use rows, otherwise I'd have used a FIFO :)
> Every now and then, a large portion of the table becomes obsolete by
> external factors.

Are you issuing one DELETE command with a WHERE clause which deletes lots of 
rows or are you issuing lots of DELETE commands ?

Simon.


Re: [sqlite] Atomic DELETE index optimisation?

2017-12-17 Thread Dinu
Richard Hipp-3 wrote
> Can you try this:

Thanks Richard, as mentioned earlier, any structure change is unacceptable
due to concurrent reader clients.



--
Sent from: http://sqlite.1065341.n5.nabble.com/


Re: [sqlite] Atomic DELETE index optimisation?

2017-12-17 Thread Dinu
Keith Medcalf wrote
> I had a similar problem with a multi-terabyte database once upon a time.  

Na, they are not a single-time use rows, otherwise I'd have used a FIFO :)
Every now and then, a large portion of the table becomes obsolete by
external factors.



--
Sent from: http://sqlite.1065341.n5.nabble.com/


Re: [sqlite] Atomic DELETE index optimisation?

2017-12-17 Thread Dinu
SQLite 3.19.3
CentOS 7 (64 bit)
RAM: 6G total, 4.4G buff/cache, 0.6G unused
Sqlite memory: 133M RES, 0 SWAP, 0 DIRTY - don't know if it's relevant
anymore, the query seems to have entered into another execution phase, it
looks like now it's flushing the WAL.
No other PRAGMA
No FKs, no triggers.






--
Sent from: http://sqlite.1065341.n5.nabble.com/


Re: [sqlite] Atomic DELETE index optimisation?

2017-12-17 Thread Keith Medcalf

I had a similar problem with a multi-terabyte database once upon a time.  

The solution was simply to run the DELETE every time you insert, thus keeping 
the database continuously "pruned" by removing a small number of records at a 
time rather than periodically trying to get rid of a lot all at once.
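(A minimal Python sketch of this prune-on-insert pattern, using the stdlib sqlite3 module; the table name and retention window are illustrative. Each insert also trims the oldest overflow rows, so no large backlog ever builds up:)

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE log(id INTEGER PRIMARY KEY, payload TEXT)")

KEEP = 100  # illustrative retention window

def insert_and_prune(payload):
    # Delete a handful of stale rows on every insert, instead of
    # accumulating millions and deleting them all at once later.
    with db:
        db.execute("INSERT INTO log(payload) VALUES (?)", (payload,))
        db.execute(
            "DELETE FROM log WHERE id <= (SELECT max(id) FROM log) - ?",
            (KEEP,),
        )

for i in range(500):
    insert_and_prune("row %d" % i)
final = db.execute("SELECT count(*) FROM log").fetchone()[0]
```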

---
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.


>-Original Message-
>From: sqlite-users [mailto:sqlite-users-
>boun...@mailinglists.sqlite.org] On Behalf Of Keith Medcalf
>Sent: Sunday, 17 December, 2017 15:43
>To: SQLite mailing list
>Subject: Re: [sqlite] Atomic DELETE index optimisation?
>
>
>(1) Since you are in WAL, have you set pragma SYNCHRONOUS=1 (NORMAL)?
>(Don't know if it will make any difference in your particular case).
>
>(2) The page_cache size makes a HUGE difference.  Modified pages are
>ONLY flushed to the WAL file when they overflow the cache.  The fewer
>modified pages there are to flush, the less I/O is performed.  The
>default pagecache is about 2 pages which is not much at all.  If
>you do not see I/O to the drive it is probably going to the OS
>Filesystem cache.  This is still slower than going directly to the
>pagecache to access the tree data.  This will appear as CPU usage
>rather than I/O usage.  The fact that it started out writing LOTS of
>modified pages which slowly became fewer and fewer indicates that the
>pagecache is too small and that the B-Tree structure is continuously
>being reloaded.
>
>What is the version of SQLite?
>The Operating System and its bitness (32/64)?
>The bitness (32/64) of the shell?
>How much RAM is in the machine?
>How much RAM is the process using (both the VM commit size, and the
>Working Set size)?
>Have you set any pragmas other than JOURNAL_MODE=WAL?
>Do you have any Foreign Keys defined on or against the table you are
>deleting from?
>If so, is foreign key enforcement turned on?
>If so, are there recursive foreign keys?
>Are the proper indexes defined for any foreign keys?  On both parent
>and child?
>Are there any ON DELETE triggers?
>
>The simplest fix might very well be to compile your own shell with
>UPDATE DELETE LIMIT enabled, and delete the records in smaller
>batches by putting a LIMIT on the delete statement and running it
>until all the records are deleted.  Or perhaps dropping all the
>indexes, deleting the records, then re-creating the indexes.
>
>---
>The fact that there's a Highway to Hell but only a Stairway to Heaven
>says a lot about anticipated traffic volume.
>
>
>>-Original Message-
>>From: sqlite-users [mailto:sqlite-users-
>>boun...@mailinglists.sqlite.org] On Behalf Of Dinu
>>Sent: Sunday, 17 December, 2017 15:02
>>To: sqlite-users@mailinglists.sqlite.org
>>Subject: Re: [sqlite] Atomic DELETE index optimisation?
>>
>>Ok, so to address the size of the problem properly:
>>
>>We have ~32M records out of which we are deleting about ~24M.
>>The raw size of the DB is about 20G out of which 15G goes away.
>Under
>>these
>>circumstances any page cache becomes irrelevant.
>>The delete operation, which is a single delete run from the sqlite3
>>tool, is
>>running for 3 hours now.
>>The WAL size is in excess of 9G atm.
>>The WAL size is growing ever slower (was about 10M/s, now it's 1M
>>every 2s,
>>slowing down). This indicates to me a non-linear process that I can
>>link
>>only to the B-trees, it's the only non-linear component I can think
>>of that
>>could cause this slowing-down-to-a-drag. The CPU is capped up badly,
>>the HDD
>>is at idle level so this also hints to the same issue.
>>
>>In reply to your remarks:
>>- I understand the B-trees need to be modified. However, if you
>>prune,
>>(maybe rebalance), write-to-disk every node at a time, that is
>hugely
>>stressful and inefficient when you are pruning half of a 32M nodes
>>tree.
>>Since the operation is atomic, the indexes could be updated one time
>>and one
>>time only.
>>
>>So, what to do? I think this delete may never end... by the time it
>>took to
>>write this reply, the WAL growth has succumbed to 1M every 4s.
>>
>>
>>
>>--
>>Sent from: http://sqlite.1065341.n5.nabble.com/
>
>
>





Re: [sqlite] Atomic DELETE index optimisation?

2017-12-17 Thread Richard Hipp
On 12/17/17, Dinu  wrote:
>
> We have ~32M records out of which we are deleting about ~24M.

Can you try this:

(1) BEGIN TRANSACTION
(2) DROP all indexes from the table
(3) Do the DELETE operation
(4) CREATE all indexes again
(5) COMMIT
-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Atomic DELETE index optimisation?

2017-12-17 Thread Keith Medcalf

(1) Since you are in WAL, have you set pragma SYNCHRONOUS=1 (NORMAL)? (Don't 
know if it will make any difference in your particular case).

(2) The page_cache size makes a HUGE difference.  Modified pages are ONLY 
flushed to the WAL file when they overflow the cache.  The fewer modified pages 
there are to flush, the less I/O is performed.  The default pagecache is about 
2000 pages which is not much at all.  If you do not see I/O to the drive it is 
probably going to the OS Filesystem cache.  This is still slower than going 
directly to the pagecache to access the tree data.  This will appear as CPU 
usage rather than I/O usage.  The fact that it started out writing LOTS of 
modified pages which slowly became fewer and fewer indicates that the pagecache 
is too small and that the B-Tree structure is continuously being reloaded.

What is the version of SQLite?  
The Operating System and its bitness (32/64)?  
The bitness (32/64) of the shell?  
How much RAM is in the machine?  
How much RAM is the process using (both the VM commit size, and the Working Set 
size)?
Have you set any pragmas other than JOURNAL_MODE=WAL?
Do you have any Foreign Keys defined on or against the table you are deleting 
from?
If so, is foreign key enforcement turned on?
If so, are there recursive foreign keys?
Are the proper indexes defined for any foreign keys?  On both parent and child?
Are there any ON DELETE triggers?

The simplest fix might very well be to compile your own shell with UPDATE 
DELETE LIMIT enabled, and delete the records in smaller batches by putting a 
LIMIT on the delete statement and running it until all the records are deleted. 
 Or perhaps dropping all the indexes, deleting the records, then re-creating 
the indexes.
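(A sketch of the batched-delete idea in Python with the stdlib sqlite3 module. Stock builds lack DELETE ... LIMIT, which requires the SQLITE_ENABLE_UPDATE_DELETE_LIMIT compile flag, but the same batching works with a rowid subquery; each iteration is its own short transaction, so the WAL never has to absorb the whole delete at once. Table and column names are illustrative:)

```python
import sqlite3

db = sqlite3.connect(":memory:", isolation_level=None)  # autocommit
db.execute("CREATE TABLE t(id INTEGER PRIMARY KEY, keep INTEGER)")
db.executemany("INSERT INTO t(keep) VALUES (?)",
               [(1 if i % 4 == 0 else 0,) for i in range(10000)])

BATCH = 500
while True:
    # Emulate DELETE ... LIMIT with a rowid subquery; commit each batch
    # so no single transaction has to hold millions of dirty pages.
    db.execute("BEGIN")
    cur = db.execute("DELETE FROM t WHERE rowid IN "
                     "(SELECT rowid FROM t WHERE keep = 0 LIMIT ?)", (BATCH,))
    db.execute("COMMIT")
    if cur.rowcount == 0:
        break

remaining = db.execute("SELECT count(*) FROM t").fetchone()[0]
```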

---
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.


>-Original Message-
>From: sqlite-users [mailto:sqlite-users-
>boun...@mailinglists.sqlite.org] On Behalf Of Dinu
>Sent: Sunday, 17 December, 2017 15:02
>To: sqlite-users@mailinglists.sqlite.org
>Subject: Re: [sqlite] Atomic DELETE index optimisation?
>
>Ok, so to address the size of the problem properly:
>
>We have ~32M records out of which we are deleting about ~24M.
>The raw size of the DB is about 20G out of which 15G goes away. Under
>these
>circumstances any page cache becomes irrelevant.
>The delete operation, which is a single delete run from the sqlite3
>tool, is
>running for 3 hours now.
>The WAL size is in excess of 9G atm.
>The WAL size is growing ever slower (was about 10M/s, now it's 1M
>every 2s,
>slowing down). This indicates to me a non-linear process that I can
>link
>only to the B-trees, it's the only non-linear component I can think
>of that
>could cause this slowing-down-to-a-drag. The CPU is capped up badly,
>the HDD
>is at idle level so this also hints to the same issue.
>
>In reply to your remarks:
>- I understand the B-trees need to be modified. However, if you
>prune,
>(maybe rebalance), write-to-disk every node at a time, that is hugely
>stressful and inefficient when you are pruning half of a 32M nodes
>tree.
>Since the operation is atomic, the indexes could be updated one time
>and one
>time only.
>
>So, what to do? I think this delete may never end... by the time it
>took to
>write this reply, the WAL growth has succumbed to 1M every 4s.
>
>
>
>--
>Sent from: http://sqlite.1065341.n5.nabble.com/





Re: [sqlite] Atomic DELETE index optimisation?

2017-12-17 Thread Dinu
INSERTs are taking constant time as they should; the DELETE is quite
obviously making blatantly non-linear progress.



--
Sent from: http://sqlite.1065341.n5.nabble.com/


Re: [sqlite] Atomic DELETE index optimisation?

2017-12-17 Thread Dinu
Thanks, it's something worth trying.
However, no, that can't be the only reason. Populating scriptically with the
same data takes less than 10 minutes, including side processing... So the
disproportion between insert and delete performance is huge.



--
Sent from: http://sqlite.1065341.n5.nabble.com/


Re: [sqlite] Atomic DELETE index optimisation?

2017-12-17 Thread Simon Slavin


On 17 Dec 2017, at 10:18pm, Dinu  wrote:

> For the table swap operation, that is out of the question, the DB is running
> on a live system that has multiple clients. Foreign keys or not, any
> structure change crashes all clients (DB structure has changed). Too high a
> price to pay for a delete that may happen routinely.

Okay.  In that case you might try this.  The reason it may be fast is that 
SQLite has DELETE without a WHERE clause optimized.  It happens very quickly.

BEGIN IMMEDIATE
create a temporary table with the same columns as MyTable, but no indexes
copy the rows you need to keep to the temporary table
DELETE FROM MyTable
copy all rows from the temporary table back to the original table
COMMIT

As with the last suggestion, the only reason I think this may be faster is that 
you are keeping less than half the rows in the original table.
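(A minimal Python sketch of those five steps, using the stdlib sqlite3 module; table and column names are illustrative. The survivors go to an unindexed temp table, the original is truncated with the optimized no-WHERE DELETE, and the survivors are copied back, so the schema never changes:)

```python
import sqlite3

db = sqlite3.connect(":memory:", isolation_level=None)  # manual transactions
db.execute("CREATE TABLE MyTable(id INTEGER PRIMARY KEY, survive INTEGER)")
db.execute("CREATE INDEX MyTable_survive ON MyTable(survive)")
db.executemany("INSERT INTO MyTable(survive) VALUES (?)",
               [(1 if i % 3 == 0 else 0,) for i in range(9000)])

db.execute("BEGIN IMMEDIATE")
# Park the survivors in an unindexed temp table ...
db.execute("CREATE TEMP TABLE keeper AS SELECT * FROM MyTable WHERE survive = 1")
# ... truncate the original (DELETE with no WHERE is optimized) ...
db.execute("DELETE FROM MyTable")
# ... and copy them back; no visible schema change at any point.
db.execute("INSERT INTO MyTable SELECT * FROM keeper")
db.execute("DROP TABLE keeper")
db.execute("COMMIT")

survivors = db.execute("SELECT count(*) FROM MyTable").fetchone()[0]
```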

Simon.


Re: [sqlite] Atomic DELETE index optimisation?

2017-12-17 Thread Dinu
For the table swap operation, that is out of the question, the DB is running
on a live system that has multiple clients. Foreign keys or not, any
structure change crashes all clients (DB structure has changed). Too high a
price to pay for a delete that may happen routinely.



--
Sent from: http://sqlite.1065341.n5.nabble.com/


Re: [sqlite] Atomic DELETE index optimisation?

2017-12-17 Thread Dinu
By "one time only", I mean that, in my understanding, the way most DBs handle a
DELETE is this: cache the ROWIDs while deleting data rows from the main table and
from the indexes, then when all ROWIDs are explored, sort the ROWID stream,
and prune the trees from a sorted stream. This is both highly efficient
(just like inserts, deletes of already ordered records are very efficient)
and highly parallelizable.
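(The strategy described above can be approximated in userland; a Python sketch with the stdlib sqlite3 module, table names illustrative. The doomed rowids are gathered first, sorted, and deleted in ascending-key batches so each pass walks the B-trees in order rather than at random:)

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t(id INTEGER PRIMARY KEY, obsolete INTEGER)")
db.executemany("INSERT INTO t(obsolete) VALUES (?)",
               [(i % 2,) for i in range(10000)])

# Collect and sort the rowids to delete, then prune in ordered batches.
doomed = sorted(r[0] for r in db.execute("SELECT rowid FROM t WHERE obsolete = 1"))
with db:  # one transaction around all batches
    for i in range(0, len(doomed), 1000):
        batch = doomed[i:i + 1000]
        db.execute("DELETE FROM t WHERE rowid IN (%s)"
                   % ",".join("?" * len(batch)), batch)

left = db.execute("SELECT count(*) FROM t").fetchone()[0]
```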



--
Sent from: http://sqlite.1065341.n5.nabble.com/


Re: [sqlite] Atomic DELETE index optimisation?

2017-12-17 Thread Simon Slavin


On 17 Dec 2017, at 10:01pm, Dinu  wrote:

> The CPU is capped up badly, the HDD
> is at idle level so this also hints to the same issue.

Apparently your program/OS is trying to do everything in cache.  This may be 
self-defeating for this operation.

Your description of what’s happening is reasonable, and suggests SQLite is 
working the way it should do.

> We have ~32M records out of which we are deleting about ~24M.

It might be faster to create a new table with the rows which survive, then 
delete the whole old table.

1) Rename old table.
2) CREATE new table.
3) Copy the rows which survive from the old table.  Depending on your selection 
criteria you might be able to use

INSERT INTO MyTable SELECT * FROM OldTable WHERE survive=1

4) DROP OldTable
5) Create the indexes you need on MyTable

If you have FOREIGN KEYs it may be problematic to do this.
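(Steps 1-5 above as a Python sketch with the stdlib sqlite3 module; table, column, and index names are illustrative. Note this does perform a schema change, which is exactly the caveat raised elsewhere in the thread:)

```python
import sqlite3

db = sqlite3.connect(":memory:", isolation_level=None)  # manual transactions
db.execute("CREATE TABLE MyTable(id INTEGER PRIMARY KEY, survive INTEGER)")
db.executemany("INSERT INTO MyTable(survive) VALUES (?)",
               [(1 if i % 3 == 0 else 0,) for i in range(9000)])

db.execute("BEGIN")
db.execute("ALTER TABLE MyTable RENAME TO OldTable")                        # 1
db.execute("CREATE TABLE MyTable(id INTEGER PRIMARY KEY, survive INTEGER)") # 2
db.execute("INSERT INTO MyTable SELECT * FROM OldTable WHERE survive = 1")  # 3
db.execute("DROP TABLE OldTable")                                           # 4
db.execute("CREATE INDEX MyTable_survive ON MyTable(survive)")              # 5
db.execute("COMMIT")

kept = db.execute("SELECT count(*) FROM MyTable").fetchone()[0]
```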

Simon.


Re: [sqlite] Atomic DELETE index optimisation?

2017-12-17 Thread Clemens Ladisch
Dinu wrote:
> The WAL size is in excess of 9G atm.
> The WAL size is growing ever slower (was about 10M/s, now it's 1M every 2s,
> slowing down). This indicates to me a non-linear process that I can link
> only to the B-trees, it's the only non-linear component I can think of that
> could cause this slowing-down-to-a-drag.

The WAL file contains the new version(s) of any modified page; for every page
access, the database has to search in the WAL for the newest version.

Consider using DELETE/TRUNCATE/PERSIST journal mode instead; any changed
pages are copied to the journal before being modified, so there is never
more than one copy of a page in the journal.  And during normal operation,
the database never has to look at the contents of an old page.

Alternatively, try copying all the other rows to a new table, and dropping
the old one.
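(Switching the journal mode is a one-liner; a Python sketch with the stdlib sqlite3 module, on a throwaway temp-file database since journal_mode is a no-op on in-memory databases. The pragma returns the mode actually in effect:)

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
db = sqlite3.connect(path, isolation_level=None)

# With TRUNCATE (or DELETE/PERSIST), the rollback journal holds at most
# one copy of each changed page, instead of a WAL's growing chain of
# page versions that every read must search.
mode = db.execute("PRAGMA journal_mode=TRUNCATE").fetchone()[0]
```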


Regards,
Clemens


Re: [sqlite] Atomic DELETE index optimisation?

2017-12-17 Thread Dinu
Ok, so to address the size of the problem properly:

We have ~32M records out of which we are deleting about ~24M.
The raw size of the DB is about 20G out of which 15G goes away. Under these
circumstances any page cache becomes irrelevant.
The delete operation, which is a single delete run from the sqlite3 tool, is
running for 3 hours now.
The WAL size is in excess of 9G atm.
The WAL size is growing ever slower (was about 10M/s, now it's 1M every 2s,
slowing down). This indicates to me a non-linear process that I can link
only to the B-trees, it's the only non-linear component I can think of that
could cause this slowing-down-to-a-drag. The CPU is capped up badly, the HDD
is at idle level so this also hints to the same issue.

In reply to your remarks:
- I understand the B-trees need to be modified. However, if you prune,
(maybe rebalance), write-to-disk every node at a time, that is hugely
stressful and inefficient when you are pruning half of a 32M nodes tree.
Since the operation is atomic, the indexes could be updated one time and one
time only.

So, what to do? I think this delete may never end... by the time it took to
write this reply, the WAL growth has succumbed to 1M every 4s. 



--
Sent from: http://sqlite.1065341.n5.nabble.com/


Re: [sqlite] Atomic DELETE index optimisation?

2017-12-17 Thread Simon Slavin
On 17 Dec 2017, at 8:53pm, Dinu Marina  wrote:

> It became apparent to me from performance measurements that the DELETE 
> operation is very slow, when operating on large recordsets with indexes 
> involved. My deduction is that SQLite updates the indexes for every deleted 
> row, which is painstakingly slow...
> 
> Since the DELETE should be atomic, the index update could also be atomic, and 
> a mass pruning of the index tree is hugely faster than individual removals.
> 
> My question is: is my deduction correct? Is there any way to fix/improve this 
> in userland? Or are there prospects for this to be improved in SQLite in the 
> foreseeable future?

The data for the table is held in one tree, occupying one set of pages.
The data for each index is held in a separate tree, occupying its own set of 
pages.
So if you have N indexes on the table, N+1 sets of pages need to be updated.

If a DELETE deletes more than one row, then SQLite does the caching you’d 
expect when working out which pages to update.  SQLite does this efficiently 
and work went into optimizing for speed.

If the timing of DELETE statements is important to you, use journal mode of 
PERSIST, MEMORY or WAL.  You might want to compare PERSIST and WAL and find out 
which performs better for you.  Remember that journal mode is saved with the 
database: you only need to set it once for the file, and SQLite will retrieve 
the setting when it reopens the file.

If you are doing more than one DELETE at a time, enclose them all in a 
transaction:

BEGIN
DELETE …
DELETE …
COMMIT

This can be far faster than individual commands, since the 'dirty' pages are 
written back once at the end of the transaction rather than the end of each 
command.

On 17 Dec 2017, at 9:06pm, Dinu Marina  wrote:

> Update: we are also using WAL; I have noticed during a huge delete, the WAL 
> also grows huge. Could this also be a reason for slowness, that SQLite 
> duplicates every block that hosts a delete? Is there any way to work around 
> this?

Correct.  While a transaction with DELETE is being carried out, SQLite must 
temporarily store both the old and the new versions of all pages affected.  
This is to allow recovery if your computer crashes or loses power before the 
transaction is complete.

Simon.


Re: [sqlite] Atomic DELETE index optimisation?

2017-12-17 Thread Keith Medcalf

Your page cache is probably too small to contain the working set of database 
pages that require modification.  There should be no difference between 
"deleting" and "inserting".  Both require changes to all applicable B+ and B- 
Trees.

Though of course, it can be VERY slow if you have SECURE DELETE enabled since 
that requires overwriting the row rather than merely removing it from the 
applicable trees.

The WAL file contains images of changed pages AFTER they are changed.  So 
firstly the fact that you see pages added to the WAL file means the changed 
pages cannot be maintained in the page cache (it is too small).  Secondly it 
can mean that you have the database and WAL file stored on slow media (ie, a 
360 RPM floppy disk rather than a 2600 MB/s NVMe disk).

One of the resources on the computer is being consumed 100%.  Once you figure 
out what that resource is, you can address the issue.

---
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.


>-Original Message-
>From: sqlite-users [mailto:sqlite-users-
>boun...@mailinglists.sqlite.org] On Behalf Of Dinu Marina
>Sent: Sunday, 17 December, 2017 14:06
>To: sqlite-users@mailinglists.sqlite.org
>Subject: Re: [sqlite] Atomic DELETE index optimisation?
>
>Update: we are also using WAL; I have noticed during a huge delete,
>the
>WAL also grows huge. Could this also be a reason for slowness, that
>SQLite duplicates every block that hosts a delete? Is there any way
>to
>work around this?
>
>
>On 17.12.2017 22:53, Dinu Marina wrote:
>> Hi all,
>> It became apparent to me from performance measurements that the
>DELETE
>> operation is very slow, when operating on large recordsets with
>> indexes involved. My deduction is that SQLite updates the indexes
>for
>> every deleted row, which is painstakingly slow...
>>
>> Since the DELETE should be atomic, the index update could also be
>> atomic, and a mass pruning of the index tree is hugely faster than
>> individual removals.
>>
>> My question is: is my deduction correct? Is there any way to
>> fix/improve this in userland? Or are there prospects for this to be
>> improved in SQLite in the foreseeable future?
>>
>> Thanks,
>> Dinu
>>
>





Re: [sqlite] Odd question

2017-12-17 Thread Keith Medcalf

How do you know the insert is not working?  Have you verified that the data is 
not being inserted or are you simply relying on a popup provided by the 
application?  If the later, you should be aware that the application can pop-up 
whatever it wants at any time.  For example, it could claim the statement 
failed because the moon is not made of blue stilton.  While the moon is indeed 
not made of blue stilton, that is not (and cannot possibly be) the reason that 
the statement failed.  It does not mean that the statement in fact failed at 
all, only that the application popped up a pop saying it failed bue to the moon 
not being made of blue stilton.

By the time the "application" knows that no results were returned it is already 
too late, the statement has already been run to completion.  The application is 
likely just "spitting out a message" based on their being no results.

Unless of course the application has been deliberately designed to preclude 
insert/update statements by wrapping each statement execution in a transaction, 
which it then does a rollback on.  Or perhaps by making sure the first token of 
the statement is the word "SELECT" ...  If this is the case then perhaps you 
can get a version of the application that does not have these, rather 
intentionally created, restrictions.

---
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.


>-Original Message-
>From: sqlite-users [mailto:sqlite-users-
>boun...@mailinglists.sqlite.org] On Behalf Of Nelson, Erik - 2
>Sent: Saturday, 16 December, 2017 14:44
>To: SQLite mailing list
>Subject: Re: [sqlite] Odd question
>
>Keith Medcalf wrote on Sent: Saturday, December 16, 2017 4:04 PM
>
>>Your method of achieving a solution to whatever problem you are
>trying to solve will not work.
>
>>Perhaps you can state the problem you are trying to solve without
>making an assumption as to how to solve that problem, someone may be
>able to solve the problem with a workable solution rather than an
>unworkable solution.
>
>I'm using an application that I can't change.  I can give it multiple
>queries to run but the application assumes that each query will
>produce at least one row and causes an error if that's not the case.
>I want to cause some rows to be inserted into a table but plain
>insert queries don't work because they violate the application's
>assumption that a result will be returned.
>
>>-Original Message-
>>From: sqlite-users [mailto:sqlite-users-
>>boun...@mailinglists.sqlite.org] On Behalf Of Nelson, Erik - 2
>>Sent: Saturday, 16 December, 2017 12:53
>>To: SQLite mailing list
>>Subject: [sqlite] Odd question
>>
>>For unfortunate reasons, I need a query that does an insert and also
>>returns at least one row... for example, something along the lines
>of
>>
>>Select 1 as value from (insert into table1 values(a, b, c))
>>
>>Or
>>
>>Select coalesce((insert into table1 values(a, b, c)), 1) as value
>>
>>I've tried a number of options but haven't been able to get anything
>>to work.  Is it possible?
>>
>=
>
>-
>-
>This message, and any attachments, is for the intended recipient(s)
>only, may contain information that is privileged, confidential and/or
>proprietary and subject to important terms and conditions available
>at http://www.bankofamerica.com/emaildisclaimer.   If you are not the
>intended recipient, please delete this message.





Re: [sqlite] Atomic DELETE index optimisation?

2017-12-17 Thread Dinu Marina
Update: we are also using WAL; I have noticed during a huge delete, the 
WAL also grows huge. Could this also be a reason for slowness, that 
SQLite duplicates every block that hosts a delete? Is there any way to 
work around this?



On 17.12.2017 22:53, Dinu Marina wrote:

Hi all,
It became apparent to me from performance measurements that the DELETE 
operation is very slow, when operating on large recordsets with 
indexes involved. My deduction is that SQLite updates the indexes for 
every deleted row, which is painstakingly slow...


Since the DELETE should be atomic, the index update could also be 
atomic, and a mass pruning of the index tree is hugely faster than 
individual removals.


My question is: is my deduction correct? Is there any way to 
fix/improve this in userland? Or are there prospects for this to be 
improved in SQLite in the foreseeable future?


Thanks,
Dinu





[sqlite] Atomic DELETE index optimisation?

2017-12-17 Thread Dinu Marina

Hi all,
It became apparent to me from performance measurements that the DELETE 
operation is very slow, when operating on large recordsets with indexes 
involved. My deduction is that SQLite updates the indexes for every 
deleted row, which is painstakingly slow...


Since the DELETE should be atomic, the index update could also be 
atomic, and a mass pruning of the index tree is hugely faster than 
individual removals.


My question is: is my deduction correct? Is there any way to fix/improve 
this in userland? Or are there prospects for this to be improved in 
SQLite in the foreseeable future?


Thanks,
Dinu



Re: [sqlite] Odd question

2017-12-17 Thread nomad
On Sat Dec 16, 2017 at 07:53:22PM +, Nelson, Erik - 2 wrote:
> For unfortunate reasons, I need a query that does an insert and also
> returns at least one row... for example, something along the lines of
> 
> Select 1 as value from (insert into table1 values(a, b, c))
> 
> Or
> 
> Select coalesce((insert into table1 values(a, b, c)), 1) as value
> 
> I've tried a number of options but haven't been able to get anything
> to work.  Is it possible?

How about a user defined function that does the insert in the
background?

SELECT my_insert(table_name, val1, val2);
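(A Python sketch of the user-defined-function idea with the stdlib sqlite3 module; the function and table names are illustrative. Running the INSERT on the same connection is safe here because the outer SELECT has no FROM clause and so reads no table while the function executes:)

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE table1(a, b, c)")

def my_insert(a, b, c):
    # Perform the side-effect insert, then return a scalar so the
    # calling application sees the one-row result it insists on.
    db.execute("INSERT INTO table1 VALUES (?, ?, ?)", (a, b, c))
    return 1

db.create_function("my_insert", 3, my_insert)
value = db.execute("SELECT my_insert(1, 2, 3) AS value").fetchone()[0]
rows = db.execute("SELECT count(*) FROM table1").fetchone()[0]
```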

-- 
Mark Lawrence


Re: [sqlite] sqlite3_aggregate_context() in xFinal callback does not zero out memory

2017-12-17 Thread Lifepillar

On 17/12/2017 11:40, Lifepillar wrote:

When I call a custom aggregate function on an empty table T, e.g.,
`select myaggr(A) from T`, the sqlite3_aggregate_context() call in the
xFinal callback does not return zeroed out memory, but a block of
seemingly uninitialized memory. Is that expected? If so, how do I know
that the xStep callback has not been called?

The code looks like this:

typedef struct Aggr Aggr;
struct Aggr {
   Value value;
   uint32_t count;
};

static void
myaggrStep(sqlite3_context* ctx, int argc, sqlite3_value** argv) {
   Aggr* sum = (Aggr*)sqlite3_aggregate_context(ctx, sizeof(Aggr*));


Oh dear. Never mind, I have just noticed the spurious pointer.

Sorry for the noise,
Life.



[sqlite] sqlite3_aggregate_context() in xFinal callback does not zero out memory

2017-12-17 Thread Lifepillar

When I call a custom aggregate function on an empty table T, e.g.,
`select myaggr(A) from T`, the sqlite3_aggregate_context() call in the
xFinal callback does not return zeroed out memory, but a block of
seemingly uninitialized memory. Is that expected? If so, how do I know
that the xStep callback has not been called?

The code looks like this:

typedef struct Aggr Aggr;
struct Aggr {
  Value value;
  uint32_t count;
};

static void
myaggrStep(sqlite3_context* ctx, int argc, sqlite3_value** argv) {
  Aggr* sum = (Aggr*)sqlite3_aggregate_context(ctx, sizeof(Aggr*));

  if (sum == 0) return;

  if (sqlite3_value_type(argv[0]) == SQLITE_NULL)
return;

  if (sum->count == 0) { // First invocation
// Initialize aggr->value
sum->count++;
  }
  else {
// Update aggr->value
sum->count++;
  }
}

static void myaggrFinal(sqlite3_context* ctx) {
  Aggr* aggr = (Aggr*)sqlite3_aggregate_context(ctx, sizeof(Aggr*));
  if (aggr == 0) return;
  if (aggr->count == 0) { // (*)
// xStep was not called, set default value
  }
  // Set result using aggr->value
}

(*) The problem is that in the situation described above, aggr->count
here is not zero.

Using SQLite 3.21.0.

Thanks,
Life.
