On Thu, Mar 13, 2008 at 12:41:15PM +0500, Zoltan Bencik scratched on the wall:

> Hello,
>
> Thank you all for answering my questions.
>
> On Thu, Mar 13, 2008 at 8:23 AM, Jay A. Kreibich <[EMAIL PROTECTED]> wrote:
> > On Wed, Mar 12, 2008 at 11:50:57PM +0500, Zoltan Bencik scratched on the
> > wall:
> > > My questions are:
> > > -- Will be this way of indexing speed up my ordered retrieval of data?
> >
> >   If you can, it would be best to batch them in a
> >   transaction to help with that.
>
> I'm not really sure I understand/know what this is about. Can you please
> give me some instructions regarding this batch processing.
  Inserts become more expensive when you add indexes, since the system
  needs to update each index along with the table.  Additionally, any
  statement that writes has the overhead of having to lock the database,
  plus a number of other operations that are "per transaction" rather
  than "per statement."

  If your usage pattern involves inserting a moderate to large number of
  records all at once (and the design of your table makes me think that
  might be the case), you can greatly improve the total insert time by
  batching 10 to 100 (or more) INSERT statements into a single
  transaction.  This isn't any special "batch load" kind of system; it's
  just a means to reduce the "per transaction" overhead.

  Example in pseudo-code:

    # start a new transaction
    BEGIN
    foreach ( record ) {
        INSERT record
        if ( record_number mod 100 == 0 ) {
            # close off this transaction and start a new one
            COMMIT
            BEGIN
        }
    }
    # close off the final transaction (may be empty)
    COMMIT

  If, on the other hand, you're adding single records here and there,
  this won't make any difference.

   -j

-- 
Jay A. Kreibich < J A Y @ K R E I B I.C H >

"'People who live in bamboo houses should not throw pandas.' Jesus said
that." - "The Ninja", www.AskANinja.com, "Special Delivery 10: Pop!Tech 2006"

_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
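[Editor's note: the batching pattern from the pseudo-code above can be
sketched concretely with Python's standard-library sqlite3 module. The
table name ("log") and its columns are invented for illustration; the
original thread does not describe the poster's schema. Passing
isolation_level=None puts the connection in autocommit mode so the
explicit BEGIN/COMMIT statements are not second-guessed by the module's
own transaction handling.]

```python
import sqlite3

# Autocommit mode: we manage transactions ourselves with BEGIN/COMMIT.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE log (ts INTEGER, value TEXT)")

# Hypothetical data to insert; any iterable of rows works the same way.
records = [(i, "row %d" % i) for i in range(1000)]

cur = conn.cursor()
cur.execute("BEGIN")                       # start the first transaction
for n, rec in enumerate(records, 1):
    cur.execute("INSERT INTO log VALUES (?, ?)", rec)
    if n % 100 == 0:
        cur.execute("COMMIT")              # close off this batch
        cur.execute("BEGIN")               # and start the next one
cur.execute("COMMIT")                      # close the final (possibly empty) batch

print(conn.execute("SELECT COUNT(*) FROM log").fetchone()[0])  # -> 1000
```

Grouping every 100 inserts means the per-transaction cost (locking,
journal setup, fsync) is paid roughly 10 times here instead of 1000.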