Simon,
Absolutely no need to apologise. We should apologise for all the time we
have taken from other people :(
We recognise that the COLLATE NOCASE is inappropriate for our database.
We suspect it was added by a SQLite tool we used some time ago. We are
going to use this opportunity to
Chris,
I'll try and summarise.
1. We have a 60GB database collecting data. This database is accessed by
a single process once every five mins and around 5MB of data (approx
600-800 rows) is added. Data has never been deleted.
2. The database is getting too big for the server it's hosted on.
On 31 Jul 2018, at 2:59pm, Rob Willett wrote:
> We've created a new table based on your ideas, moved the collate into the
> table, analysed the database. We did **not** add COLLATE NOCASE to the
> columns which are defined as integers. Would that make a difference?
What you did is correct. I
I've been following this thread with interest, but this just doesn't make
sense...
> Logically speaking SQLite shouldn't notice the difference in row order,
> but things do slow down, even with analyse.
Are you accessing each row via its ID? Even so, that should still be
indexed.
I thought you
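(A quick way to check whether an ID lookup actually uses an index is
EXPLAIN QUERY PLAN; a minimal sketch, assuming the Disruptions table and
Disruption_id column from earlier in the thread:)

EXPLAIN QUERY PLAN
SELECT * FROM Disruptions WHERE "Disruption_id" = ?;
-- expected: SEARCH Disruptions USING INDEX ... ("Disruption_id"=?)
-- a plain "SCAN Disruptions" would mean a full table scan instead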
Dear all,
We think we have now found the issue with the slow commits.
We believe this is due to an inherent (and old) defect in our database
design. We think our original design has an implicit ordering of rows in
a table; while the table is only ever growing, this flaw in the design
isn't apparent
Simon,
As an exercise we have just added in COLLATE NOCASE to our integer
columns.
Whoops! We thought this would make no difference, but it's added an extra
70% to our processing times.
We've now got to the stage where we can make changes quickly, so we'll
back that change out and go back to t
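(For anyone reproducing this, a minimal sketch of backing the change out;
SQLite can't alter a column's collation in place, so the table has to be
rebuilt. The two-column schema here is hypothetical:)

-- the experiment added a text collation to an integer column:
--   "Disruption_id" INTEGER COLLATE NOCASE
-- backing it out means recreating the table without it:
CREATE TABLE Disruptions_reverted (
  "Disruption_id" INTEGER,              -- no collation on integer columns
  "version"       TEXT COLLATE NOCASE   -- NOCASE kept for text columns only
);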
Simon,
Apologies for taking so long to get back; we've been building a test
system and it's taken a long time.
We're just getting round to trying your ideas out to see what difference
they make.
We've created a new table based on your ideas, moved the collate into
the table, and analysed the database.
Droedel,
Fortunately we have no performance issues in production with the 60GB
database. These issues came out in testing (which is what testing is
for).
We're investigating the newly generated IDs as we speak, or we will be
once we get our replica production system set up.
sqlite3_analyzer
Hi Rob,
Answers are in the text below
On Mon, Jul 30, 2018, at 13:45, Rob Willett wrote:
> Droedel,
>
> We don't think there is significant read access. The database is a
> single database on a single thread in a single process. The only access
> to it is a Perl script that logs the incoming information.
Warren,
The hardware is different; at the time we didn't want to spin up a
complete production replica as that's quite expensive. We used a smaller
machine. Both have the same type of back-end spinning RAID array, but we
would think that writing 4-5MB of changed data back shouldn't take 7
seconds.
On Jul 30, 2018, at 5:53 AM, Rob Willett wrote:
>
> I would wonder why writing the data to a 60GB database and doing a commit is
> fast and writing exactly the same data to the 600MB database is different.
> The programs for doing it are the same, the database schema is identical.
I assume the
Warren,
On 30 Jul 2018, at 12:28, Warren Young wrote:
On Jul 30, 2018, at 4:51 AM, Rob Willett
wrote:
The system has been running for 2-3 years
Has performance changed over that span? Which direction?
Performance hasn't changed on the large 60GB database. It's pretty
consistent.
Droedel,
We don't think there is significant read access. The database is a
single database on a single thread in a single process. The only access
to it is a Perl script that logs the incoming information. We never have
two accesses at the same time.
We have a nagging feeling (and that's all
Hi Rob,
Is there significant read access (iostat: r/s) during these slow writes ? If
yes, it might be due to a small cache, requiring the database to read (index)
pages before updating them.
And is the data you're adding in both databases (large/small) added at the end
of the table using the autoincremented ID?
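(If a small cache is the culprit, it can be enlarged per connection; a
sketch, with the size picked arbitrarily:)

PRAGMA cache_size = -200000;  -- negative value means KiB, so roughly 200 MB
-- the default cache is only a few MB; with a 60GB file the hot index
-- pages may not fit, forcing reads before every write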
On Jul 30, 2018, at 4:51 AM, Rob Willett wrote:
>
> The system has been running for 2-3 years
Has performance changed over that span? Which direction?
> we have taken the opportunity to try and prune the database from 60GB down to
> 600MB.
SQLite’s speed is only weakly affected by database size.
Droedel,
Thanks for the comprehensive reply. We have actually done all of this.
The system has been running for 2-3 years and we have taken the
opportunity to try and prune the database from 60GB down to 600MB.
Currently the live system is working OK with a 60GB database, but our
test system
Hi,
When having bad performance, I usually first try to find out if the slowness is
due to disk throughput (sequential), slow random access or something else. In
Linux, try "iostat -xtc 5". Do this with and without your application writing
to disk.
If you see high CPU %iowait and high %util on your disks,
Please try moving your COLLATE clauses into the table definition. e.g. instead
of
CREATE UNIQUE INDEX "Disruptions_Idx3" ON Disruptions ("version" COLLATE
NOCASE ASC, "Disruption_id" COLLATE NOCASE ASC, "location" COLLATE NOCASE ASC);
Your table definition should have
"version" i
Thanks for the mail.
We ran analyze with no indexes; it made no difference.
We recreated the indexes and ran analyze again. The very long commit is
back, this time it took 14 secs :)
It does appear that the indexes have something to do with this, even
though they have not been an issue until now.
Rob
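(The sequence above as SQL, using the index definition quoted earlier in
the thread:)

DROP INDEX IF EXISTS "Disruptions_Idx3";
ANALYZE;   -- no difference without the index
CREATE UNIQUE INDEX "Disruptions_Idx3" ON Disruptions
  ("version" COLLATE NOCASE ASC, "Disruption_id" COLLATE NOCASE ASC,
   "location" COLLATE NOCASE ASC);
ANALYZE;   -- the very long commit returns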
Anton,
Dropped the indexes and created them without order.
We'll need to look at what your second para means. It could be a major
and massive change.
Rob
On 29 Jul 2018, at 11:52, Djelf wrote:
Rob,
Try creating indexes without order.
Or, try to make a column with a hash of the values entering the index
Rob,
Try creating indexes without order.
Or, try to make a column with a hash of the values entering the index and
search for the value via that index. This will complicate the logic of your
program, but it will decrease the volume of the database, and possibly
significantly speed up both reading and writing.
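(A sketch of the hash-column idea; sha3() ships in the sqlite3
command-line shell, so an application would need to load the shathree
extension or compute the hash itself. The row_hash column is
hypothetical:)

ALTER TABLE Disruptions ADD COLUMN row_hash;
UPDATE Disruptions
   SET row_hash = sha3("version" || '|' || "Disruption_id" || '|' || "location");
CREATE INDEX "Disruptions_HashIdx" ON Disruptions (row_hash);
-- lookups then probe one narrow index instead of a wide three-column one:
SELECT * FROM Disruptions
 WHERE row_hash = sha3(:version || '|' || :id || '|' || :location);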
On July 29, 2018 5:47:29 AM EDT, Rob Willett
wrote:
>John,
>
>Thanks for the prompt reply and a very good question.
>
>We've dropped the indexes and the commit is now very quick, approx two
>seconds
>
>However the overall performance of the run is much the same, as other
>areas of the code are now significantly slower.
Also, kindly clarify one bit - I'm not sure whether you use the word
"commit" as an easy substitute for the entire process of updating the DB
(i.e. all SQL INSERT/UPDATE code that runs up to and including the COMMIT
statement), or if you actually mean the "COMMIT" operation, because the
things t
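(The two readings measure different things; a sketch of where the line
falls:)

BEGIN;
-- the INSERT/UPDATE statements mostly change pages in memory and are
-- usually quick
INSERT INTO Disruptions ("version", "Disruption_id", "location")
VALUES (:v, :id, :loc);
COMMIT;   -- this is where changes are made durable on disk; timing just
          -- this statement isolates the write/fsync cost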
On 2018/07/29 11:47 AM, Rob Willett wrote:
John,
Thanks for the prompt reply and a very good question.
We've dropped the indexes and the commit is now very quick, approx two
seconds
However the overall performance of the run is much the same, as other
areas of the code are now significantly slower.
John,
Thanks for the prompt reply and a very good question.
We've dropped the indexes and the commit is now very quick, approx two
seconds
However the overall performance of the run is much the same, as other
areas of the code are now significantly slower, whereas before they were
quick.
What is the performance without the indexes?
On Sun, 29 Jul 2018 10:20:11 +0100
"Rob Willett" wrote:
> Update 1
>
> We've copied the disruptions table to a new table, dropped the old
> table, copied the new table back in and recreated all the indexes.
>
> Exactly the same commit performance.
>
Update 1
We've copied the disruptions table to a new table, dropped the old
table, copied the new table back in and recreated all the indexes.
Exactly the same commit performance.
We've also tracked the -shm and -wal files and they are around 5MB in
size.
Mmmm, tricky (as Deep Thought said)
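(For reference, a sketch of that copy/rebuild procedure; the hypothetical
two-column schema stands in for the real one:)

BEGIN;
CREATE TABLE Disruptions_new (
  "Disruption_id" INTEGER,
  "version"       TEXT COLLATE NOCASE
);
INSERT INTO Disruptions_new SELECT "Disruption_id", "version" FROM Disruptions;
DROP TABLE Disruptions;
ALTER TABLE Disruptions_new RENAME TO Disruptions;
COMMIT;
-- indexes are not carried over and must be recreated afterwards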
Hi,
Background
We've been refactoring our database to reduce its size. Through
some simple logic we've managed to pull out 99% of the data, reducing
the size from 51GB down to approx 600MB. This logic has been to remove
rows that are almost the same but not quite identical. As with all
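(A hypothetical sketch of that kind of pruning, keeping the newest row in
each group of near-duplicates; the grouping columns are assumptions:)

DELETE FROM Disruptions
 WHERE rowid NOT IN (SELECT MAX(rowid)
                       FROM Disruptions
                      GROUP BY "Disruption_id", "version", "location");
VACUUM;   -- reclaim the freed pages and shrink the file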