Dawn Wolthuis wrote:
>> 5) Do similar tests using common ODBC/OLEDB tools.
> 
> You will be hard-pressed to get products like OpenQM to participate in
> that test ;-) There can be useful information with such tests, but if
> the products are never or rarely used with such tools, then those
> products can be written out of the performance testing altogether.

With mv.NET we can query any MV environment from ADO.NET as though it were
a relational data source, and use the same schema against all MV
environments.  This is one of the reasons I mention mv.NET when people ask
about getting MV data to/from an RDBMS.  You don't need to use
platform-specific pseudo-relational query interfaces like U2 BCI or D3
OpenDB.  When I wasn't selling mv.NET, people called me a Microsoft bigot
(even after I had gone down the Java and other paths).  Now that I am
selling the software, some people look at this as an ad - a guy can't win.
;)  With regard to benchmarks, I would recommend using the native ODBC
interface of an MV environment wherever one is available, but as a common
denominator I think mv.NET can be used across the board so that all MV
environments can participate in these tests.  And for a more
apples-to-apples test, it's good to compare ADO.NET access to an RDBMS
with ADO.NET access to an MVDBMS.
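
To make the "common denominator" point concrete, here is a minimal sketch
of the kind of shared code path I have in mind, using only the generic
ADO.NET classes.  The mv.NET provider invariant name and the connection
strings below are placeholders, not the product's real values - check the
vendor documentation for those:

    using System;
    using System.Data.Common;

    class CommonQueryDemo
    {
        // Run the same query through any registered ADO.NET provider.
        static void Query(string provider, string connStr, string sql)
        {
            DbProviderFactory factory = DbProviderFactories.GetFactory(provider);
            using (DbConnection conn = factory.CreateConnection())
            {
                conn.ConnectionString = connStr;
                conn.Open();
                using (DbCommand cmd = conn.CreateCommand())
                {
                    cmd.CommandText = sql;
                    using (DbDataReader rdr = cmd.ExecuteReader())
                    {
                        while (rdr.Read())
                            Console.WriteLine(rdr[0]);
                    }
                }
            }
        }

        static void Main()
        {
            string sql = "SELECT NAME FROM CUSTOMERS";
            // Real invariant name for the SQL Server provider:
            Query("System.Data.SqlClient", "connection string here", sql);
            // Placeholder invariant name for the mv.NET provider:
            Query("MvNet.Provider", "connection string here", sql);
        }
    }

The consuming code is identical for both data sources; only the provider
name and connection string differ, so any timing differences you measure
come from the database itself rather than from two different client APIs.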

I think other comments here are correct that there's no way we can get a
true apples-to-apples comparison, but with atomic testing and more common
denominators we can rule out excuses about connectivity and query
languages, and with enough tests we can get a good sampling of quality
even if we rule out some tests as somewhat invalid.

>... each test can
> favor one platform or another unless you look at actual solutions to
> problems built for each environment and ask the quality requirement
> questions about it, including performance.

That's true.  By running the same tests with the best tools available on
each side, we can rule out issues caused by one platform being forced to
use tools that are better optimized for another environment.

 
>...  I became convinced we
> needed the bake-off approach, starting with actual requirements. We
> need to eat the results (the baked goods) and get a full range of
> comparisons of the various implementations of a solution for the same
> problem in order to compare them in a way that would be helpful toward
> making a decision.

Performance statistics are just one factor in a product assessment.  It's
important to have those numbers, but as you imply, that shouldn't be the
most important factor.  A benchmark will tell you how fast transactions are
processed but nothing about how long it took to define the schema or write
the code/queries that do the transactions, or about the sorts of
optimization decisions that were made in creating the database tables.
Comparisons between MV environments and relational ones should involve an
analysis of how long it takes to make things work and maintain them
afterward, not just how fast the system runs once everything is done.
This is really where MV shines.  If someone is going to do a pure
benchmark, they should focus on nothing but performance.  But if someone
is going to create a full DBMS comparison, then some bake-off tests should
be included as well.

> By the way, Tony, I did come up with a business model for doing such
> bake-offs that would be sustainable once off the ground, but could
> only come up with one that was prohibitive in the need for
> considerable up front dollars

If I or anyone else wanted to make the investment of time, we could write
a ton of tests, run them against various platforms, and then sell the
results to people who want them.  Unfortunately I'm not in the "build it
and they will come" business anymore.

> plus a need for the process to be fair
> (unbiased) and also perceived as fair.  As I am sure our politicians
> know, I suspect it is difficult to be unbiased if dollars are coming
> from here and not there, and impossible for others to believe you are
> unbiased if you are getting dollars from here and not there.  --dawn

Anyone who really knows me knows my sense of ethics and can be sure my
work is unbiased.  I don't care where the money comes from; I do the best
job I can and would never skew results.  In any case, a project like this
can't be run in a vacuum.  I think it would need to be done with oversight
from a diverse committee to ensure proper conduct and effective
implementation.  The introduction of bias or incompetence into testing
like this would be a tragedy and a big waste of time.  I think most of us
are interested in knowing the real capabilities of these environments so
that we know where we stand, and where we have shortcomings we can
petition our vendors to make improvements.

Regarding perceptions, any effort like this would be suspected of bias, so
as I said, it needs to be done in a manner subject to peer review.
Unfortunately some people believe what they want to believe anyway - that
just goes with the territory.

If something like this is ever done, I think it would be best if it
weren't done by the DBMS vendors themselves.

T
-------
u2-users mailing list
u2-users@listserver.u2ug.org
To unsubscribe please visit http://listserver.u2ug.org/
