I've been asked to put together a very large (well, it's large to me)
database, and while MySQL is great for my current uses, I haven't had
experience with anything at this scale.

The database will have about 88 tables, with up to 100 fields per table.
There is a _lot_ of interlinking among the tables, and each "transaction"
will carry about 10 KB of data.  By the end of the first year, almost
500,000 transactions will be in the database.  Unfortunately, I can't be
more specific, as another party is designing the database specification,
and I don't have a copy of it yet.
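A quick back-of-envelope on volume (my own rough math, assuming the 10 KB
figure is per transaction):

    500,000 transactions x 10 KB/transaction = ~5 GB of raw data in year one

plus whatever overhead the indexes add on top of that.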

Now, if I were to use MySQL, I would want to use one of the transactional
table types (InnoDB or BDB).  I haven't had any experience with these; how
do their performance and reliability compare to the default MyISAM tables?
(Obviously, the transactions themselves are a plus for reliability.)
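From what I can tell from the manual, the setup would look something like
this (a minimal sketch with made-up table and column names, since I don't
have the spec yet):

    -- InnoDB table so that inserts can be wrapped in a transaction
    CREATE TABLE transactions (
        id    INT NOT NULL AUTO_INCREMENT,
        data  BLOB,              -- each row carries roughly 10 KB
        PRIMARY KEY (id)
    ) TYPE=InnoDB;

    BEGIN;
    INSERT INTO transactions (data) VALUES ('...');
    COMMIT;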

My question is: will MySQL be able to handle this amount and complexity of
data well, and how much better would, say, Oracle or even MS SQL Server 2000
be?  What about PostgreSQL?  PostgreSQL's foreign-key relationships,
constraints, views, and stored procedures would all be beneficial, but not
at the cost of suitable performance.
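For example, the kind of interlinking I have in mind would be declared
something like this in PostgreSQL (again, made-up names, just to show the
shape of it):

    -- Parent table
    CREATE TABLE customers (
        customer_id  SERIAL PRIMARY KEY,
        name         TEXT NOT NULL
    );

    -- Child table: the foreign key keeps the interlinking consistent
    CREATE TABLE orders (
        order_id     SERIAL PRIMARY KEY,
        customer_id  INTEGER NOT NULL REFERENCES customers (customer_id),
        amount       NUMERIC(10,2) CHECK (amount >= 0)
    );

    -- A view to hide the join from the application
    CREATE VIEW customer_totals AS
        SELECT c.customer_id, c.name, SUM(o.amount) AS total
        FROM customers c JOIN orders o ON o.customer_id = c.customer_id
        GROUP BY c.customer_id, c.name;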

It would be much appreciated if someone with more experience developing
databases of this scale could give me some advice on the pros and cons of
each platform.

Thanks,

Sam

