On 9 Jul 2009, at 2:58pm, Rizzuto, Raymond wrote:

> I have 4 servers, all with 4 cores.  This is to handle a volume of  
> 10-20 million orders per day.
>
> Most of the work load (~90%) is unrelated to the database.  In fact,  
> I added the database just to allow me to offload orders out of  
> memory when they look done so that the app doesn't run out of  
> memory.  It is a 32-bit app, so it typically dies when it hits ~2.5  
> gig.  Moving to a 64-bit app was not an option.

Okay, you're way ahead of most of my recommendations and obviously  
understand what you're doing.

> In approx .2% of the orders, I guess wrong, and have to read the  
> order back in from the db to process additional activity.  I could  
> remove indices to improve write performance, however the hit on read  
> performance might outweigh the gain.  Since the processing is  
> supposed to be in near-realtime, the delay in that case might be  
> excessive.  Still, it may be worth trying that.

Since it takes very little extra coding to test the effect on speed,  
it might be worth experimenting with unusual index methods.  If you read  
the database only for two operations, both of which need all records  
to do with a particular order, it might be worth experimenting with  
having no primary key, just one index on the 'order number' column.   
One DBMS I used to use was famously faster if you used 'DESC' in  
indices, since records you were normally looking for were more likely  
to be recent records than extremely old ones, and the search algorithm  
worked faster that way.
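
As a rough sketch of that experiment (table and column names here are  
made up, not from your schema): a table with no primary key and a  
single DESC index on the order number.  SQLite accepts DESC in index  
definitions, but whether it actually helps depends on your query  
patterns, so measure both directions before committing to one.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# No PRIMARY KEY, just one index on the order-number column.
conn.execute("CREATE TABLE orders (order_number INTEGER, payload TEXT)")
# DESC index: may favour lookups that tend to hit recent orders.
conn.execute("CREATE INDEX idx_orders_num ON orders (order_number DESC)")

conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [(1, "new"), (2, "new"), (2, "fill")],
)
# The two read paths you describe both fetch all records for one order:
rows = conn.execute(
    "SELECT payload FROM orders WHERE order_number = ?", (2,)
).fetchall()
print(sorted(rows))
```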

> I am using begin/commit around the writes since each order can  
> require between 1 and 5 writes.

If you're writing orders in big batches, and if your previous  
statements about crash-recovery are true, then it might be worth  
putting begin/commit just around each batch of orders instead of each  
individual order.  You might want to take it even further: by analogy  
with a non-SQL DBMS, I once wrote a logging program which did a COMMIT  
only just before a SELECT was needed, or when the logging program was  
quit.  However, I don't know how SQLite acts if you have thousands of  
uncommitted changes: it might get slower if you have that many.
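
In Python's sqlite3 terms, batching the commit looks like this (again,  
the table and the batch shape are illustrative, not your actual code):  
one BEGIN/COMMIT wraps the 1-5 writes for every order in the batch,  
rather than each order getting its own transaction.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_number INTEGER, line TEXT)")

def write_batch(conn, batch):
    """Wrap all writes for a whole batch of orders in one transaction."""
    with conn:  # one BEGIN ... COMMIT (ROLLBACK on exception)
        for order_number, lines in batch:
            for line in lines:  # each order needs 1-5 writes
                conn.execute(
                    "INSERT INTO orders VALUES (?, ?)",
                    (order_number, line),
                )

write_batch(conn, [(1, ["new", "fill"]), (2, ["new"])])
count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(count)  # 3
```

The trade-off is crash exposure: everything in an uncommitted batch is  
lost on a crash, which is why this only makes sense given what you said  
about your recovery requirements.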

> Ken made a suggestion that I create separate db's for each thread.   
> Since the threads don't share data, that approach would work for me,  
> and eliminate all contention.  Each db would have the same schema,  
> of course.

If you don't share data, that might be good.  You'd need to devise a  
system for order codes, of course, so that you can tell which dataset  
a particular order is in.
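
One simple scheme (purely a sketch; the modulo rule and file names are  
assumptions, not anything from your setup): encode the owning thread  
into the order number so the right per-thread database file can be  
recovered from the order number alone.

```python
NUM_THREADS = 4  # one database file per thread, same schema in each

def db_for_order(order_number: int) -> str:
    """Recover which per-thread database file holds a given order."""
    return f"orders_{order_number % NUM_THREADS}.db"

print(db_for_order(10))  # orders_2.db
```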

Simon.
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
