Hi all,

I am in the process of planning the construction of a very large
database and wanted to do a reality check beforehand. In this
database a typical table would have 100,000,000 rows, and some tables
could be as large as 100 times that, i.e. 10,000,000,000 rows. I am wondering:

1- Is this possible at all?
2- How do the index files grow with the number of rows? Is the growth
more or less linear, or should I expect some explosion in size as the
number of rows increases?
3- I would need to do joins across as many as 5 tables of that size,
with the joins done on appropriately indexed columns (a rough sketch of
the kind of query I have in mind follows these questions). What would
you expect the performance to be like? For my purposes it doesn't need
to be real time, but a response within about 15 minutes is probably
necessary.
4- Some of these tables might need to sit on an NFS-mounted file
system. Would that be a completely crazy thing to do?
5- What amount of server memory would you consider the minimum to
handle this DB?

And lastly:

6- Would any other DBMS than MySQL, say a commercial one, be better
equipped to handle data of this size?
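
To make question 3 more concrete, here is a rough sketch of the kind of
query I have in mind. The table and column names are made up purely for
illustration; my real schema is not settled yet:

    -- Illustrative only: five large tables joined on columns that
    -- would each carry an index. Names and schema are invented.
    SELECT u.name, d.model, COUNT(*) AS event_count
    FROM events e
    JOIN sessions s  ON s.id = e.session_id
    JOIN users u     ON u.id = s.user_id
    JOIN devices d   ON d.id = s.device_id
    JOIN locations l ON l.id = e.location_id
    WHERE e.created_at >= '2002-01-01'
    GROUP BY u.name, d.model;

All of the join columns (the id and *_id columns above) would be
indexed, and the largest tables would be in the 10^8 to 10^10 row range
mentioned above.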

Thanks. Any relayed experience on the subject of large databases would
be very much appreciated.

Best,
Murad

