Thanks Roger.

I understand aspects of your position.

First up, as you imply, there are many cases where people call out
that the SQLite tests are invalid.  I would like to shore up against
that by increasing both the number of tests and the range of system
dimensions being tested.

The current test is trivial; however, it identifies caching behaviour
on the host very easily.  (I am still waiting for some data to
determine whether Ubuntu's KVM implementation ignores synchronous
file I/O requests.  The numbers indicate that it does, but the KVM
guys are stuck in the disbelief/denial stage, so I need the last few
facts to either stand down or convince them. :)

I am not looking for new tests to be developed or created, only for a
domain expert or similar to offer guidance on how to answer the
following question in a real-world, relevant way:

  'How do I determine what impact the host OS/environment has on SQLite?'

I know it is broad and, depending on point of view, may be divisive,
but it is still something there is interest in.  I want to move the
comments from 'the test is invalid' to 'the test is invalid for me'.
(Although I am the first to say that a heck of a lot of people drop
the 'for me' in almost all cases. :)

I am personally thinking of a composite test that contains the
following classes of performance tests:
    1) Trivial looped tests (the current set)
    2) Complex queries (again looped)
    3) Large data set
    4) Large field set
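For class 1, a minimal sketch of what I have in mind (illustrative only,
not the Phoronix code) is a looped single-row-transaction INSERT test run
under different PRAGMA synchronous settings.  If a host or hypervisor is
silently absorbing the syncs, the synchronous=FULL timing collapses toward
the synchronous=OFF timing, which is exactly the KVM symptom above.  The
table and column names here are made up for the example:

```python
# Illustrative sketch: time N one-row transactions under a given
# PRAGMA synchronous mode.  With synchronous=FULL, each COMMIT should
# force a sync to stable storage; if FULL runs nearly as fast as OFF,
# the host/guest stack is likely not honouring synchronous file I/O.
import os
import sqlite3
import tempfile
import time

def timed_inserts(synchronous, rows=500):
    """Return elapsed seconds for `rows` single-row transactions."""
    path = os.path.join(tempfile.mkdtemp(), "bench.db")
    # isolation_level=None gives autocommit; we issue BEGIN/COMMIT ourselves
    # so that every row is its own transaction (one sync per row at FULL).
    conn = sqlite3.connect(path, isolation_level=None)
    conn.execute("PRAGMA synchronous = %s" % synchronous)
    conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
    start = time.perf_counter()
    for i in range(rows):
        conn.execute("BEGIN")
        conn.execute("INSERT INTO t (val) VALUES (?)", ("row-%d" % i,))
        conn.execute("COMMIT")
    elapsed = time.perf_counter() - start
    conn.close()
    return elapsed

if __name__ == "__main__":
    for mode in ("OFF", "FULL"):
        print("synchronous=%s: %.3fs" % (mode, timed_inserts(mode)))
```

On bare hardware with a rotating disk, FULL is typically orders of
magnitude slower than OFF for this loop; a near-identical result on a
guest is the red flag.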

Any thoughts?

Regards... Matthew


On 10/3/09, Roger Binns <rog...@rogerbinns.com> wrote:
>
> Matthew Tippett wrote:
>> Any takers?
>
> It isn't clear what you want.  It mostly appears to be that you want
> people to fix the Phoronix test suite.  That is really their problem!
>
> SQLite already includes various speed tests.  For other people the only
> benchmark that is relevant is their own.  Some think 10MB is a large
> database while others are in the tens of gigabytes.  Some strings are short
> (eg names), some are longer (eg full pathnames) and others very long (eg
> genetic sequences).  Some use SQLite to store 3D data while others are plain
> old tables that even Excel could handle.  Some transactions are small,
> others are large as percentages of the tables.  Quite simply there is no way
> someone else's benchmark is going to be representative of what you do.
>
> If Phoronix wants to be relevant then they need to decide what it is they
> are benchmarking.  You can make SQLite use lots of CPU by doing sorts on
> large data sets.  You can make it do lots of I/O by making a query access
> far more data than fits in the SQLite cache and the operating system cache
> (the latter is largely a function of spare RAM).  You can make up synthetic
> scenarios such as "pet shop", "gene splicer", "web log analyzer" etc and
> code up to them.  A critique of those is easy providing the SQL executed and
> a .dump of the database are provided so they can be reproduced by the shell.
>
> Roger
> _______________________________________________
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>

-- 
Sent from my mobile device