Anthony W. Youngman wrote:

In article <[EMAIL PROTECTED]>, Lauri Pietarinen
<[EMAIL PROTECTED]> writes


So in your opinion, is the problem

1) SQL is so hard that the average programmer will not know how to use it
efficiently



Nope




or
2) Relational (or SQL-) DBMSs are just too slow



Yes.



If 2), then why don't we get a bit more concrete? Could you give
an example of a query that in your experience would be too slow using
a standard SQL database (e.g. Oracle or MySQL)? We could then
actually try it out on some machine and compare. I suggest using
the customer-order-order_detail-product database.



Okay. Give me a FORMULA that returns a time in seconds for your query.


Let's assume I want to print a statement of how many invoices were sent
to a customer, along with various details of those invoices. My invoice
file is indexed by company/month, and we can reasonably assume that the
time taken to produce the statement is infinitesimal compared to the
time taken to retrieve the invoice data from disk. For MV

T = (2 + N) * ST * 1.05

Where T is the time taken to produce the report, N is the number of
invoices, and ST is the hard disk seek time.
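
Just to have a concrete number to compare against (my own assumed figures,
not yours: say N = 100 invoices and an average seek time of ST = 8 ms), your
formula gives

T = (2 + 100) * 0.008 * 1.05 ≈ 0.86 seconds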

First of all, it is important to note that a central component of every modern SQL DBMS is
the buffer pool (or cache), meaning that in a reasonably well-tuned database you get very few
disk I/Os, even when *writing* data into tables.


SQL DBMSs are also very clever at using indexes, i.e. if they can find all the necessary data
in an index they will not even look at the table, so to speak.


And, even presuming conservatively that there is no data in the cache, depending on how
the data is clustered you will get more than one row per disk read (= one 8K page in most(?) systems).


So, assuming the (simplified) example

Customer(cust_id, .....)
Order(order_id, cust_id,...)
OrderDetail(order_id, prod_id, ...)
Product(prod_id,....)

If you created a clustering index on
Customer(cust_id)
Order(cust_id)
OrderDetail(order_id)

And presumed that the average row lengths are
customer = 1K
order = 500 bytes
orderDetail = 300 bytes

You would get, with 3 I/Os:
- 8 customer rows
- 16 order rows
- 24 order detail rows (which would only apply to one order)

So, granted, that would result in one I/O per order, which is more than
in your example.
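
For concreteness, the clustering above could be declared roughly as follows
(DB2-flavoured syntax, just a sketch; other products spell physical clustering
differently, e.g. Oracle uses index-organized tables or table clusters):

  -- clustering indexes matching the design above
  -- ("Order" is quoted because ORDER is a reserved word)
  CREATE INDEX cust_clust   ON Customer    (cust_id)  CLUSTER;
  CREATE INDEX order_clust  ON "Order"     (cust_id)  CLUSTER;
  CREATE INDEX detail_clust ON OrderDetail (order_id) CLUSTER;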

I could now denormalise OrderDetail so that it also contains cust_id
and cluster it by cust_id
(which might cause you trouble down the road if the customer of an order
can change), in which case, with 3 I/Os, I would get
- 8 customer rows
- 16 order rows
- 24 order detail rows (which would all apply to one customer)

Now the number of I/Os would depend on how many detail rows
we have per customer.
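
In DDL terms that denormalisation is just the following (a sketch again; in
DB2, for instance, a REORG would be needed afterwards to physically re-order
the rows):

  -- add the customer key to the detail table and re-cluster by it
  ALTER TABLE OrderDetail ADD COLUMN cust_id INTEGER;
  DROP INDEX detail_clust;
  CREATE INDEX detail_clust ON OrderDetail (cust_id) CLUSTER;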

And, of course, because we are using sequential prefetch, we would be
getting more than one block (8? 16?) per disk seek, so it is a hard comparison to
make, but I suspect it would be about equal to your example.


Now, that was a *conservative* estimate, and we assumed that we did not have
any rows lying around in the (global!) cache. As the size of the cache grows in
proportion to the size of the total database, we can assume less and less disk I/O.


Note also that the cache can be configured in many ways: you can put different
tables (or indexes) in different caches, and even change the size of a cache
on the fly (you might want a bigger cache during the evening and night when your
batch programs are running), so you can rig your system to favour certain
types of queries.
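
As an illustration of that kind of rigging (DB2-flavoured again, and the sizes
are made up):

  -- a separate, larger pool for the order data
  CREATE BUFFERPOOL order_bp SIZE 50000 PAGESIZE 8K;
  CREATE TABLESPACE order_ts PAGESIZE 8K BUFFERPOOL order_bp;
  -- grow it on the fly for the nightly batch window
  ALTER BUFFERPOOL order_bp SIZE 200000;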


I haven't even gone into the topic of using thick (covering) indexes, with which table access can
be avoided entirely (= we read into memory only the interesting columns).
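
For example (the date and total columns are hypothetical, just to show the
idea): with an index like the one below, the query following it is answered
from the index alone, and the Order table itself is never touched.

  -- a "thick" index covering everything the query needs
  CREATE INDEX order_thick ON "Order" (cust_id, order_date, order_total);

  SELECT order_date, order_total
  FROM   "Order"
  WHERE  cust_id = ?;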

Now, in your example, what if the product department comes along and
wants to make a report of sales per product? What would be your formula
in that case?
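
(On the SQL side that report is just one aggregate query, something like the
following, where qty, price and prod_name are assumed columns not shown in the
schema above:

  SELECT   p.prod_id, p.prod_name, SUM(d.qty * d.price) AS sales
  FROM     OrderDetail d
  JOIN     Product p ON p.prod_id = d.prod_id
  GROUP BY p.prod_id, p.prod_name;

and it is the optimizer, not the programmer, that picks the indexes and the
access path.)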

And what if I were just reading customer data? Would the same formula
apply (= (2+N)*ST*1.05)?

But as I understand relational theory, such a question is completely
outside the scope of the theory. Seeing as it tries to divorce the
database logic from the practical implementation ...

The theory, indeed, does not say anything about buffer pools, but by decoupling logic
from implementation we leave the implementor (the DBMS) free to do as it sees fit.
As DBMS technology advances, we get faster systems without having to change our
programs.


When we design databases, we can decouple logical planning from performance
considerations, which, you must agree, are two separate issues.

And you know it's been proven that Huffman coding is the most efficient
compression algorithm? (Actually, it isn't - it's been proven it can't
be improved upon, which isn't the same thing...). Can you improve on the
formula I've just given you? Given that if we could change the 1.05 to 1
then we can prove it can't be improved upon ... again - I've taken the
liberty of assuming that a MV FILE is equivalent to an entity if we
assume the relational designer has been thinking in an entity-attribute-
relation sort of way. My maths isn't good enough to prove it, but I
think it would be pretty easy to prove that accessing data as "one and
only one complete entity" at a time is the most efficient way.

I think that in a typical system your cache hit ratio would approach 90%,
so that could mean, on average, only 0.1 disk seeks per access.


best regards,
Lauri Pietarinen




