On Tue, 2003-11-18 at 00:55, Arthur C. Hsu wrote:
> Hello,
>
> I have an in-memory DB and try to perform inserts into its tables. There
> seems to be a performance bottleneck for continuous inserts: the row
> insertion speed drops dramatically after the first 6000 rows are
> inserted. That is:
>
> Time elapsed (s)   Rows inserted   Rows inserted per 30 seconds
> ----------------   -------------   ----------------------------
>        30                 7920           7920
>        60                10711           2791
>        90                13147           2436
>       120                14944           1797
>       150                16598           1654
>       180                17878           1280
>       210                19609           1731
>       240                20711           1102
>
> I've already tuned my SQL to use precompiled execution (sqlite_compile
> and sqlite_step).
> I've modified MAX_PAGES to 1048576, MAX_BYTES_PER_ROW to 17646, and
> TEMP_STORE to 3; my projected in-memory DB size is 50M.
> I've tried using PRAGMA cache_size to enlarge the cache; it helps a
> little, but not much (say, from 1500 inserts per 30 seconds to 1510).
>
> Any clues on how I can squeeze out more performance, or is the limitation
> by design? I just can't see why the first 6000 rows are amazingly fast
> but the speed then drops so dramatically.
It would really help if we could see an example of the queries/code that
generate this problem, including the CREATE TABLE/INDEX definitions. And
yes, this does mean including data (or a generator) that can reproduce the
problem. You also didn't specify when your transactions are being created
or committed. Your benchmark should also report how much memory is
actually being used: you expect it to be approximately 50M, but you
haven't explained why that should be the case.

I'll share my suspicion, but I'm not prepared to stand by it until I see
the above: allocating that much space for cache means you could be using
more than three times the amount of RAM you expect, possibly even more.
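One thing worth spelling out about transaction boundaries: if you never
issue an explicit BEGIN, SQLite wraps every single INSERT in its own
transaction, which is the classic cause of slow bulk inserts. Here's a
minimal, hypothetical sketch (the table, column names, and payload are
invented, and it uses plain sqlite_exec rather than your compile/step
loop) of batching the inserts into one explicit transaction with the
SQLite 2.x C API:

    /* Hypothetical sketch: bulk inserts into an in-memory database,
     * wrapped in a single explicit transaction (SQLite 2.x API). */
    #include <stdio.h>
    #include "sqlite.h"

    int main(void)
    {
        char *err = 0;
        int i;
        sqlite *db = sqlite_open(":memory:", 0, &err);
        if (db == 0) {
            fprintf(stderr, "open failed: %s\n", err ? err : "unknown");
            return 1;
        }
        sqlite_exec(db, "CREATE TABLE t(a INTEGER, b TEXT)", 0, 0, &err);

        /* Without this BEGIN, each INSERT below is its own transaction
         * and pays the full commit cost every time. */
        sqlite_exec(db, "BEGIN", 0, 0, &err);
        for (i = 0; i < 20000; i++) {
            /* %q quotes the string argument for SQL */
            char *sql = sqlite_mprintf(
                "INSERT INTO t VALUES(%d, '%q')", i, "payload");
            sqlite_exec(db, sql, 0, 0, &err);
            sqlite_freemem(sql);
        }
        sqlite_exec(db, "COMMIT", 0, 0, &err);

        sqlite_close(db);
        return 0;
    }

If your real code already batches like this and the slowdown persists,
then transaction boundaries aren't the issue, and the memory accounting
above becomes the interesting question.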