Hi,

If you read the code more closely, you'll find that timeit is wrapped
only around the select, not around the insert.
We wrote the insert code so that the first round populates the database.
After the first round you comment out the insert code and run the
benchmark several times, so that only the selects are executed and timed.
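
In outline the structure is like this (a minimal sketch, not the actual
fast_db.pl; the table and column names here are invented):

    use DBI;
    use Benchmark qw(timeit timestr);

    my $dbh = DBI->connect('dbi:Pg:dbname=bench', 'user', 'pass',
                           { RaiseError => 1 });

    # Round one only: populate the database, then comment this out
    # for the timed rounds.
    # $dbh->do('INSERT INTO hits (id, val) VALUES (?, ?)', undef, $_, $_)
    #     for 1 .. 1000;

    # Only the select sits inside timeit, so only the select is measured.
    my $t = timeit(20, sub {
        $dbh->selectall_arrayref('SELECT * FROM hits WHERE id < 100');
    });
    print timestr($t), "\n";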

Leaping from this error to the axiom that "benchmarks are useless" is
unfortunate. Shouldn't we instead be ironing out the errors and running
benchmarks which are good?

Your recommendation is to pick the DB best suited to your app. But how?
a) Hire a guru who has seen all kinds of apps on different DBs and can
give us an answer we can run with, or
b) Run a benchmark of critical programs which represent our app across
databases and find what performs best.
I've read plenty of literature on DB features. Every DB claims every
feature (except MySQL, which does not have commit!), so you can't make
out a thing from DB literature alone.

We believe that we have extracted the core of our application into this
small program, and that many more applications like it will benefit from
this benchmark. Clearly, a non-transactional system (one with heavy
selects and very few updates) can use this benchmark as a relative
comparison among different access methods.

 Wake up, Cees... you can't just preside over a discussion like this :-)

 Thanks and Regards,

 S Muthu Ganesh & V Murali
Differentiated Software Solutions Pvt. Ltd.,
90, 3rd Cross, 2nd Main,
Ganga Nagar,
Bangalore - 560 032
Phone : 91 80 3631445, 3431470
Visit us at www.diffsoft.com

> ----- Original Message -----
> From: Cees Hek <[EMAIL PROTECTED]>
> To: Clayton Cottingham aka drfrog <[EMAIL PROTECTED]>
> Cc: <[EMAIL PROTECTED]>
> Sent: Thursday, April 19, 2001 8:08 PM
> Subject: [OT] Re: Fast DB access
>
>
> > On 18 Apr 2001, Clayton Cottingham aka drfrog wrote:
> >
> > > [drfrog]$ perl fast_db.pl
> > > postgres
> > > 16 wallclock secs ( 0.05 usr + 0.00 sys =  0.05 CPU) @ 400.00/s (n=20)
> > > mysql
> > >  3 wallclock secs ( 0.07 usr + 0.00 sys =  0.07 CPU) @ 285.71/s (n=20)
> > > postgres
> > > 17 wallclock secs ( 0.06 usr + 0.00 sys =  0.06 CPU) @ 333.33/s (n=20)
> > > mysql
> > >  3 wallclock secs ( 0.01 usr + 0.01 sys =  0.02 CPU) @ 1000.00/s (n=20)
> > >
> > >
> > > correct me if I'm wrong, but if fast_db.pl is working right,
> > > the first set is insert and
> > > the second set is select
> >
> > I am mad at myself for getting dragged into this, but I couldn't help
> > myself...
> >
> > You are crippling PostgreSQL by doing a tonne of inserts with a commit
> > after each statement.  This completely misses the fact that PostgreSQL
> > is transaction based whereas MySQL is not.  Turn off AutoCommit and do
> > a commit at the end of the insert loop.
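> >
> > A minimal sketch of that change with DBI (hypothetical table and
> > column names, not the ones from fast_db.pl):
> >
> >     use DBI;
> >
> >     # AutoCommit off: the inserts batch into a single transaction
> >     # instead of paying for a commit per statement.
> >     my $dbh = DBI->connect('dbi:Pg:dbname=bench', 'user', 'pass',
> >                            { AutoCommit => 0, RaiseError => 1 });
> >
> >     my $sth = $dbh->prepare('INSERT INTO hits (id, val) VALUES (?, ?)');
> >     $sth->execute($_, $_) for 1 .. 1000;
> >
> >     # One commit for the whole loop -- this is the change that matters.
> >     $dbh->commit;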
> >
> > Also, if your selects are taking just as long as your inserts then you
> > must have other problems as well.  Did you set up any indices for the
> > columns of your table, or is that considered "optimizing the database"
> > and therefore not valid in your benchmark?
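> >
> > For instance, an index on the column the benchmark's WHERE clause
> > filters on (again a hypothetical name) is a one-line change:
> >
> >     $dbh->do('CREATE INDEX hits_id_idx ON hits (id)');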
> >
> > Benchmarks like this are pretty much useless (actually 99% of all
> > benchmarks are useless).
> >
> > Use the database that best fits your needs based on the features it
> > supports, and the experience you have using it.  If you find your
> > database is too slow, look into optimizing it because there are usually
> > hundreds of things you can do to make a database faster (faster disks,
> > more ram, faster CPU, fixing indices, optimizing queries, etc...).
> >
> > Don't pick a database because a benchmark on the web somewhere says it's
> > the fastest...
> >
> > Sorry for the rant, I'll go back to sleep now...
> >
> > Cees
> >
> > >
> > > find attached the modified version of fast_db.pl
> > > I used to conduct this test
> > >
> > >
> > > comp stats:
> > > running the stock RPMs from Mandrake 7.2 for both
> > > PostgreSQL and MySQL:
> > > MySQL 3.23.23-beta and
> > > PostgreSQL 7.0.2
> > >
> > > [drfrog@nomad desktop]$ uname -a
> > > Linux nomad.localdomain 2.2.18 #2 Tue Apr 17 22:55:04 PDT 2001 i686 unknown
> > >
> > > [drfrog]$ cat /proc/meminfo
> > > total:   used:    free:  shared: buffers:  cached:
> > > Mem:  257511424 170409984 87101440 24219648 96067584 44507136
> > > Swap: 254943232        0 254943232
> > > MemTotal:    251476 kB
> > > MemFree:      85060 kB
> > > MemShared:    23652 kB
> > > Buffers:      93816 kB
> > > Cached:       43464 kB
> > > SwapTotal:   248968 kB
> > > SwapFree:    248968 kB
> > > [drfrog]$ cat /proc/cpuinfo
> > > processor : 0
> > > vendor_id : AuthenticAMD
> > > cpu family : 6
> > > model : 3
> > > model name : AMD Duron(tm) Processor
> > > stepping : 1
> > > cpu MHz : 697.535
> > > cache size : 64 KB
> > > fdiv_bug : no
> > > hlt_bug : no
> > > sep_bug : no
> > > f00f_bug : no
> > > coma_bug : no
> > > fpu : yes
> > > fpu_exception : yes
> > > cpuid level : 1
> > > wp : yes
> > > flags : fpu vme de pse tsc msr pae mce cx8 sep mtrr pge mca cmov pat
> > > pse36 psn mmxext mmx fxsr 3dnowext 3dnow
> > > bogomips : 1392.64
> > >
> > >
> > >
> > > I will recompile both the newest PostgreSQL and MySQL,
> > > not using any optimizing techniques at all. I'll post the
> > > config scripts I use.
> > > On Tue, 17 Apr 2001 18:24:43 -0700, clayton said:
> > >
> > > > Matt Sergeant wrote:
> > > >
> > > >  > On Tue, 17 Apr 2001, Differentiated Software Solutions Pvt. Ltd., wrote:
> > > >  >
> > > >  >> H/W : Celeron 433 with 64 MB RAM, IDE HDD using RH 6.1, perl
> > > >  >> 5.005, Postgres 6.5.3
> > > >  >
> > > >  >
> > > >  > This is a very very old version of postgresql. Try it again
> > > >  > with 7.1 for more respectable results.
> > > >  >
> > > >
> > > >
> > > >  I'm very glad to see this thread
> > > >
> > > >  I wanted a good benchmark for postgres and mysql
> > > >  {I hope to transpose the SQL properly!}
> > > >
> > > >  I do have 7.1 installed and it is very sweet
> > > >
> > > >  I'll report back when I rerun under postgresql at the very least
> > > >
> > > >
> > > >
> > > >
> > >
> > >
> >
> > --
> > Cees Hek
> > SiteSuite Corporation
> > [EMAIL PROTECTED]
> >
>
