Hi Ran,

no, sorry. It's running on a normal Linux PC, running 24/7.
No laptop, no CPU frequency stepping. The time measurement is
probably a bit less accurate at the low repetition counts, but
not by that much.
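
Just to put a rough number on that, here is a minimal sketch (plain
Tcl, nothing to do with sqlite; the repetition counts and the trivial
loop body are only illustrative) to see how noisy [time] itself is at
low counts:

  foreach reps {1 5 10 50 100 1000 10000} {
      # time a trivial body: any spread shown here is measurement
      # noise and per-call overhead, not query cost
      puts "reps=$reps -> [time {expr {1 + 1}} $reps]"
  }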

It's really a mind-boggler.

Thanks and kind regards

Ulrich


On Wednesday 22 February 2006 11:27, Ran wrote:
> Could it be connected to the stepping up of the CPU? Do you run those tests
> on a laptop? That would at least explain why the runs with many iterations
> are faster (the CPU has time to step up).
> It does not explain why the 10 and 5 are fast as well (maybe with few
> iterations the time measurement is less accurate), but hmm... maybe it
> could explain part of the phenomenon?
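>
> A quick way to check that hypothesis (assuming a Linux box with the
> usual cpufreq sysfs interface; the path below may differ on other
> setups) would be to read the current frequency while the test runs:
>
>   set f [open /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq]
>   puts "current CPU frequency: [string trim [read $f]] kHz"
>   close $f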
>
> Ran
>
> On 2/22/06, Ulrich Schöbel <[EMAIL PROTECTED]> wrote:
> > Hi Adrian,
> >
> > I tried your script and got, after a slight modification, quite
> > consistent results. When I ran it as is, I got slightly varying
> > time results with a peak in the 50 to 100 region. Then I
> > commented out all lines concerning the deletion, creation
> > and filling, to get the pure retrieval times. From then on
> > I got the following almost invariable results:
> >
> > t(1)=538 microseconds per iteration
> > t(5)=69.2 microseconds per iteration
> > t(10)=39.9 microseconds per iteration
> > t(50)=391.48 microseconds per iteration
> > t(100)=215.61 microseconds per iteration
> > t(500)=73.154 microseconds per iteration
> > t(1000)=54.753 microseconds per iteration
> > t(5000)=40.9094 microseconds per iteration
> > t(10000)=39.4558 microseconds per iteration
> >
> > The t(1) time is probably due to Tcl's bytecode engine, but
> > the t(50) and t(100) times are inexplicable, at least to me.
> >
> > The 'mini database' you use is, apart from a few additional
> > fields, almost identical to the one I used in my previous tests.
> >
> > Do you get similar results?
> >
> > I have to disagree with your statement that Tcl has garbage
> > collection. It doesn't, at least not in the sense of a routine
> > that collects unused memory and frees it at arbitrary times,
> > e.g. while the interpreter is idle. Tcl collects its garbage
> > the moment it arises: Tcl's objects are reference counted, and
> > as soon as the count drops to zero the object is cleaned up.
> > This costs time, of course, but it happens exactly when the
> > garbage is due, so the cleanup time is simply included in the
> > regular execution times. It should not produce the peaks I see
> > at t(50) and t(100).
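> >
> > As a minimal sketch of what I mean (the value and its size here are
> > arbitrary, just large enough to make the cleanup noticeable):
> >
> >   # build a large string; the variable holds the only reference
> >   set big [string repeat x 10000000]
> >   # unsetting drops the last reference, so the deallocation cost is
> >   # paid right here, inside this command, not at some later idle time
> >   puts "unset big: [time {unset big}]"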
> >
> > Thanks for your help
> >
> > Ulrich
> >
> > On Wednesday 22 February 2006 02:45, Adrian Ho wrote:
> > > On Wed, Feb 22, 2006 at 01:13:19AM +0200, Ulrich Schöbel wrote:
> > > > I don't think it's an interface problem. I'm using Tcl, more or less
> > > > the 'natural' language for sqlite. Tcl doesn't have garbage
> > > > collection.
> > >
> > > Tcl certainly *does* have garbage collection:
> > >
> > > <http://wiki.tcl.tk/3096>
> > > <http://wiki.tcl.tk/12144>
> > >
> > > > The strangest thing is that I can reproduce this behaviour.
> > > > I'm absolutely clueless. I stumbled over it by coincidence:
> > > > I tried 1000 repetitions, which was quite fast, so I tried 10000,
> > > > which was even faster. This led me to the (obviously wrong)
> > > > conclusion that sqlite spends some time parsing the SQL.
> > > > Next I tried 100 repetitions, expecting a bit more than
> > > > 76 microseconds. The 310 microseconds didn't really bother me,
> > > > so I tried 10 reps, expecting even more. Then came the surprise:
> > > > only 67 microseconds.
> > > >
> > > > My first thought was that something like a busy disk interfered
> > > > just when I tried the 100 reps. But the results were reproducible,
> > > > deviating by only a few microseconds.
> > >
> > > Try running the following script and see if there's an odd pattern to
> > > the timing variations:
> > >
> > > #!/usr/bin/env tclsh
> > > package require sqlite3
> > > if {[file exists aho.db]} {
> > >   file delete aho.db
> > > }
> > > sqlite3 db aho.db
> > > db eval {create table cust_persons ( first_name string, last_name string )}
> > > db eval {insert into cust_persons values ('Adrian','Ho')}
> > > db eval {insert into cust_persons values ('Thunder','Lightning')}
> > > foreach rounds {1 5 10 50 100 500 1000 5000 10000} {
> > >   puts "t($rounds)=[time {db eval {select * from cust_persons where first_name = 'Adrian'}} $rounds]"
> > > }
> > > db close
> > >
> > > - Adrian
