On 18 December 2017 at 06:59, Dinu wrote:
> Richard Hipp-3 wrote
> > Can you try this:
>
> > (1) BEGIN TRANSACTION
> > (2) DROP all indexes from the table
> > (3) Do the DELETE operation
> > (4) CREATE all indexes again
> > (5) COMMIT
> Thanks Richard, as mentioned earlier,
Hmmm. 4.6 GB in the Filesystem cache (the Level 2 cache), and only 2 pages
(78 MB or thereabouts) in the process pagecache (the Level 1 cache). And with
only 4.6 GB in the Level 2 cache, the I/O rate drops to almost zero. If you
"moved" that memory (or some of it) from Level 2 to Level 1
Ok, just checking. The database I was referring to was a Historian
Collector, so the amount of data it needed to maintain was time-limited;
it was more or less a FIFO by tag.
---
The fact that there's a Highway to Hell but only a Stairway to Heaven says a
lot about anticipated traffic
On 17 Dec 2017, at 10:57pm, Dinu wrote:
> Na, they are not single-use rows, otherwise I'd have used a FIFO :)
> Every now and then, a large portion of the table becomes obsolete by
> external factors.
Are you issuing one DELETE command with a WHERE clause which
Richard Hipp-3 wrote
> Can you try this:
Thanks Richard, as mentioned earlier, any structure change is unacceptable
due to concurrent reader clients.
Keith Medcalf wrote
> I had a similar problem with a multi-terabyte database once upon a time.
Na, they are not single-use rows, otherwise I'd have used a FIFO :)
Every now and then, a large portion of the table becomes obsolete by
external factors.
SQLite 3.19.3
CentOS 7 (64 bit)
RAM: 6G total, 4.4G buff/cache, 0.6G unused
SQLite memory: 133M RES, 0 SWAP, 0 DIRTY. Don't know if that's still
relevant; the query seems to have entered another execution phase, and it
looks like it is now flushing the WAL.
No other PRAGMAs set.
No FKs, no triggers.
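For reference, the effective settings can be confirmed from the same
connection with standard pragmas (a minimal sketch; the comments are
reminders of the defaults, not values reported by this setup):

    PRAGMA journal_mode;  -- returns 'wal' when WAL is active
    PRAGMA synchronous;   -- 2 = FULL (the default), 1 = NORMAL
    PRAGMA page_size;     -- bytes per page, typically 1024 or 4096
    PRAGMA cache_size;    -- pages if positive, KiB if negative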
I had a similar problem with a multi-terabyte database once upon a time.
The solution was simply to run the DELETE every time you insert, thus
keeping the database continuously "pruned" by the continuous removal of a
small number of records rather than periodically trying to get rid of a
lot all at once.
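A minimal sketch of that pattern, assuming a hypothetical samples table
with an epoch-seconds created_at column and a 24-hour retention window
(names and numbers are illustrative, not from the thread):

    BEGIN;
    INSERT INTO samples(tag, value, created_at)
        VALUES ('tag1', 42.0, strftime('%s','now'));
    -- Prune a few expired rows on every insert; the subquery's LIMIT
    -- keeps each write transaction, and therefore the WAL, small.
    DELETE FROM samples
     WHERE rowid IN (SELECT rowid FROM samples
                      WHERE created_at < strftime('%s','now') - 86400
                      LIMIT 10);
    COMMIT;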
On 12/17/17, Dinu wrote:
>
> We have ~32M records out of which we are deleting about ~24M.
Can you try this:
(1) BEGIN TRANSACTION
(2) DROP all indexes from the table
(3) Do the DELETE operation
(4) CREATE all indexes again
(5) COMMIT
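Spelled out as SQL, the recipe looks roughly like this (table and index
names are hypothetical placeholders, not from the thread):

    BEGIN TRANSACTION;
    DROP INDEX idx_big_a;               -- (2) drop every index on the table
    DROP INDEX idx_big_b;
    DELETE FROM big WHERE expired = 1;  -- (3) the bulk delete, now index-free
    CREATE INDEX idx_big_a ON big(a);   -- (4) rebuild the indexes in bulk
    CREATE INDEX idx_big_b ON big(b);
    COMMIT;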
--
D. Richard Hipp
d...@sqlite.org
(1) Since you are in WAL, have you set pragma SYNCHRONOUS=1 (NORMAL)? (Don't
know if it will make any difference in your particular case).
(2) The page_cache size makes a HUGE difference. Modified pages are ONLY
flushed to the WAL file when they overflow the cache. The fewer modified
pages spill from the cache, the less gets written to the WAL.
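In SQL, the two settings being suggested look like this (the cache value
is illustrative only):

    PRAGMA synchronous = NORMAL;   -- 1; fewer fsyncs per commit in WAL mode
    PRAGMA cache_size = -1048576;  -- negative means KiB, i.e. ~1 GiB of cache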
INSERTs are taking constant time, as they should; the DELETE is quite
obviously making blatantly non-linear progress.
Thanks, it's something worth trying.
However, no, that can't be the only reason. Populating the table from a
script with the same data takes less than 10 minutes, including side
processing... So the disproportion between insert and delete performance
is huge.
On 17 Dec 2017, at 10:18pm, Dinu wrote:
> For the table swap operation, that is out of the question, the DB is running
> on a live system that has multiple clients. Foreign keys or not, any
> structure change crashes all clients (DB structure has changed). Too high a
>
For the table swap operation, that is out of the question, the DB is running
on a live system that has multiple clients. Foreign keys or not, any
structure change crashes all clients (DB structure has changed). Too high a
price to pay for a delete that may happen routinely.
By "one time only", I mean that, in my understanding, the way most DBs
handle a DELETE is this: cache the ROWIDs while deleting data rows from
the main table and from the indexes; then, when all ROWIDs are explored,
sort the ROWID stream and prune the trees from the sorted stream. This is
both highly efficient
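SQLite does not expose that internal strategy, but a rough user-level
approximation can be sketched in plain SQL (big and expired are
hypothetical names; this illustrates the idea, it is not what any engine
does internally):

    -- Materialize the victim rowids once, in sorted order...
    CREATE TEMP TABLE doomed AS
        SELECT rowid AS id FROM big WHERE expired = 1 ORDER BY id;
    -- ...then prune the table against the precomputed, sorted id list.
    DELETE FROM big WHERE rowid IN (SELECT id FROM doomed);
    DROP TABLE doomed;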
On 17 Dec 2017, at 10:01pm, Dinu wrote:
> The CPU is maxed out badly, while the HDD
> is at idle level, so this also hints at the same issue.
Apparently your program/OS is trying to do everything in cache. This may be
self-defeating for this operation.
Your description of
Dinu wrote:
> The WAL size is in excess of 9G atm.
> The WAL size is growing ever slower (was about 10M/s, now it's 1M every 2s,
> slowing down). This indicates to me a non-linear process that I can link
> only to the B-trees; it's the only non-linear component I can think of that
> could cause
Ok, so to address the size of the problem properly:
We have ~32M records, out of which we are deleting about ~24M.
The raw size of the DB is about 20G, of which 15G goes away. Under these
circumstances any page cache becomes irrelevant.
The delete operation, which is a single delete run from
On 17 Dec 2017, at 8:53pm, Dinu Marina wrote:
> It became apparent to me from performance measurements that the DELETE
> operation is very slow when operating on large recordsets with indexes
> involved. My deduction is that SQLite updates the indexes for every deleted
Your page cache is probably too small to contain the working set of
database pages that require modification. There should be no difference
between "deleting" and "inserting": both require changes to all
applicable B+ and B-Trees.
Though of course, it can be VERY slow if you have SECURE
How do you know the insert is not working? Have you verified that the data is
not being inserted or are you simply relying on a popup provided by the
application? If the latter, you should be aware that the application can
pop up
whatever it wants at any time. For example, it could claim the
Update: we are also using WAL; I have noticed that during a huge delete,
the WAL also grows huge. Could this also be a reason for the slowness,
that SQLite duplicates every block that hosts a delete? Is there any way
to work around this?
On 17.12.2017 22:53, Dinu Marina wrote:
Hi all,
It became
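On the "work around this" question: two documented pragmas bound WAL
growth (the size value is illustrative; neither setting comes from this
thread):

    PRAGMA wal_checkpoint(TRUNCATE);        -- checkpoint, then truncate the
                                            -- -wal file to zero bytes
    PRAGMA journal_size_limit = 104857600;  -- cap the -wal file at ~100 MB
                                            -- after later checkpoints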
Hi all,
It became apparent to me from performance measurements that the DELETE
operation is very slow when operating on large recordsets with indexes
involved. My deduction is that SQLite updates the indexes for every
deleted row, which is painstakingly slow...
Since the DELETE should be
On Sat Dec 16, 2017 at 07:53:22PM +, Nelson, Erik - 2 wrote:
> For unfortunate reasons, I need a query that does an insert and also
> returns at least one row... for example, something along the lines of
>
> Select 1 as value from (insert into table1 values(a, b, c))
>
> Or
>
> Select
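For what it's worth, later SQLite versions added a standard way to do
exactly this: the RETURNING clause, introduced in 3.35.0 (2021), long
after this thread:

    -- An INSERT that also produces a result row (requires SQLite 3.35+):
    INSERT INTO table1 VALUES(1, 2, 3) RETURNING rowid AS value;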
On 17/12/2017 11:40, Lifepillar wrote:
When I call a custom aggregate function on an empty table T, e.g.,
`select myaggr(A) from T`, the sqlite3_aggregate_context() call in the
xFinal callback does not return zeroed out memory, but a block of
seemingly uninitialized memory. Is that expected? If
When I call a custom aggregate function on an empty table T, e.g.,
`select myaggr(A) from T`, the sqlite3_aggregate_context() call in the
xFinal callback does not return zeroed out memory, but a block of
seemingly uninitialized memory. Is that expected? If so, how do I know
that the xStep