Joerg Bruehe wrote:
Hi Dan, all!

Dan Trainor wrote:

Thanks for the prompt reply, Augusto -

I completely understand what you're saying. To have anything such as a real-time measurement of exact table sizes would be an incredible performance degradation, not to mention the overhead and the like.


Right.
This is the conflict between fast operation (wanted by everybody) and maintaining statistics (shared data = bottleneck).


I think I'm willing to accept that while data is being sent to the database server, I won't get an exact reading of database/table/row size. This makes complete sense. What I am after, however, is how to get the exact size of a database/table/row when NO connections are being made.


What is the "exact size" you refer to?
Is it
a) the resources consumed on the disk (file size etc),
b) the data, index, and metadata stored (not including any gaps),
c) the "valid user data" which would be returned by SELECT statements?

Remember: In order to speed up further operations (inserts), a DBMS may not shrink disk structures even if data get deleted (freeing up space). So as long as the data are only growing, these three aspects may correlate closely, but when updates and deletes start, this may change.
(Compare the use of heap space by "malloc()" and "free()" in C.)
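A minimal C sketch of that comparison (illustrative only, not part of the original mail): freeing blocks makes their space reusable by later malloc() calls, but as long as live blocks remain scattered through the heap, the allocator cannot shrink it.

    #include <stdlib.h>

    #define NBLOCKS 10000

    int main(void)
    {
        char *blocks[NBLOCKS];
        int i;

        /* Grow the heap with many small allocations. */
        for (i = 0; i < NBLOCKS; i++)
            if ((blocks[i] = malloc(256)) == NULL)
                return 1;

        /* Free every other block: the space becomes reusable by
           later malloc() calls, but because live blocks remain
           scattered through the heap, the allocator cannot shrink
           it -- just as deleting rows leaves gaps inside a table's
           data file without making the file smaller. */
        for (i = 0; i < NBLOCKS; i += 2)
            free(blocks[i]);

        return 0;
    }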

In the MySQL case, different table handlers will show different behaviour, further complicating the picture.


Say I started MySQL with no networking. That way, I could positively ensure that no data modification would be taking place. Is this possible?


For a), I recommend using operating system tools (on Unix: "du"); see the example below.
For b) and c), I do not know the answer, but there may be ways.
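For example (a sketch only; /var/lib/mysql is just a common default data directory, and "mydb" is a placeholder database name -- adjust both to your installation):

    # on-disk size of one database's directory
    du -sh /var/lib/mysql/mydb

    # on-disk size of the whole data directory
    du -sh /var/lib/mysql

Note that with some table handlers (for instance InnoDB using a shared tablespace) the per-database directory may not hold all of the data, so the per-database figure can understate actual usage.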

Depending on the database size, getting any exact answer to b) or c) may take longer than the "typical user" (a vague term, I know) is willing to wait.


Regards,
Jörg


Good morning, Jörg -

I think what I was going for was resources consumed on disk. However, when talking about the ndbcluster table type in a Cluster environment, it would be resources consumed in memory. I believe that's more appropriate for the cluster@ list, which I'll check in a few minutes.

I believe that 'du' will give me appropriate numbers and a close enough estimate of what I need to look for here.

I appreciate your help.

Thanks!
-dant

