Thanks to both Christian Smith and John Stanton for your posts.

> On Wed, 19 Oct 2005, Christian Smith wrote:
> From the VDBE output you originally posted, you are doing a fair
> amount of work for each deleted row:
> - Index search to find the next row from EXAMINATIONS to delete
> - Removing the row from 3 indexes on EXAMINATIONS
> - (trigger) Remove related row in SPIRO_TEST from 2 indexes on SPIRO_TEST
> - (trigger) Remove related row in SPIRO_TEST
> - Remove the row from EXAMINATIONS
> 
> Check your cache size. If the above work is causing the 75-page cache
> to thrash, you're likely to hit worst-case performance, as the
> thrashing pages may be accessed in a cyclical fashion. I'm not sure
> how likely it is that your page cache is too small. How big is a row
> of data, typically?
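
(For context, the trigger Christian refers to is along these lines. This is
a sketch only; the EXAM_ID/ID column names are illustrative, not our actual
schema:

    CREATE TRIGGER EXAM_DELETE_TRIGGER AFTER DELETE ON EXAMINATIONS
    BEGIN
      -- remove the related test data when an examination is deleted
      DELETE FROM SPIRO_TEST WHERE EXAM_ID = OLD.ID;
    END;
)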

Our cache size is 75 pages of 8192 bytes = 600 KB.

The maximum size of an EXAMINATIONS record is about 500 bytes, of which
450 bytes is a varchar field. In the test example the average size of an
EXAMINATIONS record is 60 bytes, as not much text is saved.

The maximum size of a SPIRO_TEST record is about 5 KB; this will vary from
test to test, but in the test example the size of each SPIRO_TEST record is
fixed at approximately 1 KB.

Based on this I don't think we should be thrashing the cache, although I am
unsure exactly how SQLite behaves here.
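
A quick way to test the thrashing theory (a sketch, assuming the standard
SQLite PRAGMAs are available in our build) would be to temporarily enlarge
the cache and re-run the delete:

    -- 4x our current 75 pages, i.e. roughly 2.4 MB at 8192 bytes/page
    PRAGMA cache_size = 300;
    DELETE FROM EXAMINATIONS WHERE ...;  -- our existing delete statement

If the delete gets dramatically faster with the larger cache, thrashing is
the problem; if not, we can rule it out.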

> Also, a 60x slowdown is not unexpected. The PC version, while probably
> having the same SQLite page cache size in the SQLite app itself, will
> most likely be reading and writing to the OS's cache at memory-to-memory
> copy speed most of the time, with synchronous writes only done when
> needed. The embedded platform you're using probably writes straight to
> FLASH, which is necessarily a synchronous operation if your OS doesn't
> have a cache between your app and the FLASH FS. While flash writes are
> low latency, they are also low bandwidth, and won't be within an order
> of magnitude of the performance of a desktop PC writing to the OS
> filesystem cache.
> 
> Finally, you give no indication of the actual CPU speed of the embedded
> platform. It's quite reasonable to assume a development PC could be an
> order of magnitude faster in sheer integer throughput. I'm amazed how
> slow my 50MHz microSPARC-based SPARCclassic is. Such a platform would
> not be much, if at all, slower than a modern embedded platform, and has
> the benefit of gobs of RAM, but still runs the same code at least two
> orders of magnitude slower than my Athlon XP 1700-based desktop. You
> have to keep your performance expectations realistic. You are, after
> all, running a complete, ACID-transactional, SQL relational database.


The maximum CPU speed of our ARM7 chip is 71 MHz.
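
On the flash-write point: one experiment we could run (again a sketch, using
SQLite's standard synchronous PRAGMA; we would need to think carefully about
crash safety before shipping this) is to relax the sync behaviour so that
page writes are not each flushed straight to flash:

    PRAGMA synchronous = NORMAL;  -- or OFF; trades durability for speed
    DELETE FROM EXAMINATIONS WHERE ...;  -- our existing delete statement
    PRAGMA synchronous = FULL;    -- restore the default afterwards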

> Others have indicated that dropping indexes might help when deleting or
> inserting records. However, have you tried simply not having indexes at
> all? Would that cause unacceptable slowdown? Perhaps, for the demo query
> from the original post, just keep the DATE index on EXAMINATIONS, and
> use full table scans for queries based on EXAM_TYPE and STATUS_FLAG.
> The truth is, given the small number of EXAM_TYPE and STATUS_FLAG
> values (I presume), you're as well off just doing table scans when
> looking for specific exam types and statuses. Indexes only really help
> when you have a large variation in values with few collisions. Doing
> this will leave a single index update in addition to the actual row
> removals, which should improve performance.

I have tried permanently dropping the indexes on EXAM_TYPE and STATUS_FLAG,
and this gives some improvement. Indeed, the STATUS_FLAG index does seem to
be worthless, and in the initial version of the software we will have only
one EXAM_TYPE (although this will increase with each module we release over
the next few months).

I have also tried the suggested method of dropping the EXAM_PATIENT_ID_INDEX
index on the EXAMINATIONS table before the delete and rebuilding it on
completion, as sketched below. I cannot drop the remaining indexes, as they
are used during the delete operation itself; removing them slows the whole
operation down.
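
For reference, the drop-and-rebuild approach looks like this (a sketch; the
PATIENT_ID column name is my assumption from the index name, and the WHERE
clause stands in for our actual delete criteria):

    BEGIN;
    DROP INDEX EXAM_PATIENT_ID_INDEX;
    DELETE FROM EXAMINATIONS WHERE ...;
    -- rebuild the index once the bulk delete is done
    CREATE INDEX EXAM_PATIENT_ID_INDEX ON EXAMINATIONS(PATIENT_ID);
    COMMIT;

Doing the whole thing in one transaction should also mean SQLite syncs the
journal once, rather than once per statement.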

The latest changes have reduced the time to delete the same number of
records from 3:45 to 2:53. That is still a long time to wait, but any
saving is welcome, especially as the test is for a 50%-full scenario;
assuming the time scales roughly linearly with the number of records, at
99% full we can expect it to take around 6 minutes.

Thanks again for your help.

If anyone has other ideas on how we can optimise this further, please let
me know.


Mark

