> > > BTW, in case you don't do that yet your best performance will be if
> > > you prepare your UPDATE and INSERT statements only once and then do
> > > bind + step + reset in that 100k times loop.
> > >  
> >  
> > In principle I agree, but since the temporary-table version is blindingly
> > fast up to the update-the-disk portion, it's definitely not a bottleneck at
> > this point.
> >  
>  
> I was talking about your initial implementation when you did 100k times
> > update_counter(k1, k2, count=count+1, expires=now+count*1day)
> > if rows_updated != 1: insert_counter(k1, k2, count=1, expires=now+1day)
> >  
>  
> Not about your final version with one INSERT OR REPLACE. Was your
> statement about the same thing? If so, I didn't understand what you
> meant.
>  
>  

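To be concrete about which two shapes we're comparing, here's roughly how
each of them looks through the stdlib sqlite3 module. The counters table and
its column names are just placeholders for this sketch (not my real schema),
and the INSERT OR REPLACE form assumes (k1, k2) is the table's primary key:

    import sqlite3
    import time

    DAY = 86400
    conn = sqlite3.connect("counters.db")

    def bump_two_step(k1, k2, now):
        # Initial shape: try the UPDATE, fall back to an INSERT if no row
        # matched.  "count" on the right-hand side reads the pre-update
        # value, so count + 1 is the new count.
        cur = conn.execute(
            "UPDATE counters SET count = count + 1, "
            "expires = :now + (count + 1) * :day "
            "WHERE k1 = :k1 AND k2 = :k2",
            {"k1": k1, "k2": k2, "now": now, "day": DAY})
        if cur.rowcount != 1:
            conn.execute(
                "INSERT INTO counters (k1, k2, count, expires) "
                "VALUES (:k1, :k2, 1, :now + :day)",
                {"k1": k1, "k2": k2, "now": now, "day": DAY})

    # Final shape: one INSERT OR REPLACE keyed on the (k1, k2) primary key
    # that re-reads the old count and writes the incremented row in a
    # single statement.
    upsert_sql = (
        "INSERT OR REPLACE INTO counters (k1, k2, count, expires) "
        "SELECT :k1, :k2, "
        "  COALESCE((SELECT count FROM counters "
        "            WHERE k1 = :k1 AND k2 = :k2), 0) + 1, "
        "  :now + (COALESCE((SELECT count FROM counters "
        "                    WHERE k1 = :k1 AND k2 = :k2), 0) + 1) * :day")

    def bump_upsert(k1, k2, now):
        conn.execute(upsert_sql,
                     {"k1": k1, "k2": k2, "now": now, "day": DAY})

    now = int(time.time())
    bump_upsert("some-k1", "some-k2", now)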

I just meant that even the naïve way of running the prepared statements
through Python's sqlite3 module (which may or may not cache them; I assume it
doesn't) was already so fast that I'm not worried about shaving a few
milliseconds off re-preparing the statements every time, when the actual
problem occurs at a lower level than that.
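
For what it's worth, the module apparently does keep a small per-connection
cache of compiled statements (sized by the cached_statements argument to
sqlite3.connect()), so repeated execute() calls with an identical SQL string
mostly skip the re-prepare anyway. The closest thing to an explicit
prepare/bind/step/reset loop from Python is executemany(), which compiles the
statement once and re-binds it for each parameter set. Continuing the sketch
above, with 'pairs' standing in for whatever produces the 100k keys:

    # Build the parameter sets up front; 'pairs' is a placeholder for the
    # real source of the 100k (k1, k2) keys.
    params = [{"k1": k1, "k2": k2, "now": now, "day": DAY}
              for k1, k2 in pairs]

    # executemany() compiles upsert_sql once and then only re-binds and
    # steps it per parameter set -- effectively bind + step + reset in C.
    conn.executemany(upsert_sql, params)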

So yeah, preparing the statement once and re-binding it every time would speed
things up, but by so little that I'd rather solve the real problem of reducing
the time taken by the disk writes first.
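
On that front, the usual first things to try are putting the whole batch
inside one explicit transaction, so there's a single commit (and fsync) at
the end rather than one per statement, and relaxing the journal/synchronous
pragmas, which trades a little durability on power loss for far fewer syncs.
A minimal sketch, continuing from the same connection and statement as above:

    # Fewer fsyncs: WAL journaling plus synchronous=NORMAL gives up a little
    # durability on power loss in exchange for much less waiting on the disk.
    conn.execute("PRAGMA journal_mode=WAL")
    conn.execute("PRAGMA synchronous=NORMAL")

    # One explicit transaction around the whole batch: dirty pages are
    # written and synced once at commit, not once per statement.
    with conn:
        conn.executemany(upsert_sql, params)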

_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
