You should search the archives. This topic has come up quite a few
times before, mostly in reference to improving performance. Of course,
people's definitions of "large" differ.
If you think large is 10 or 20 million records, then you will find
material in the archives discussing databases of that size. If you are
looking for billions of records, you may not find anything. I would
search on "million" and "billion" and see what comes up.
The databases I work with are fairly small, with the largest tables
containing only a few hundred thousand records.
I haven't heard much about crashes or corruption except in specific
setups, and usually only when first trying to set things up. Compiler
flags have usually solved those problems.
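For context, a source build where such compiler flags come into play looks roughly like the sketch below; the specific flags and install prefix are illustrative assumptions, not a known fix for any particular crash:

```shell
# Hypothetical build of MySQL from source with explicit compiler flags.
# Which flags (if any) resolve a given crash depend on the platform and
# compiler; the values here are placeholders, not recommendations.
CFLAGS="-O2" CXXFLAGS="-O2 -felide-constructors" \
  ./configure --prefix=/usr/local/mysql
make
make install
```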
On Thursday, November 13, 2003, at 03:24 PM, ow wrote:
Anyone? Hope that does not mean that MySQL is not used with large dbs
... :)
Thanks
--- ow <[EMAIL PROTECTED]> wrote:
Hi,
We are considering using MySQL or PostgreSQL for an app that handles a
large volume of data. Somehow, it appears (mostly from the mailing
lists) that MySQL does not handle large volumes of data very well
(crashes, db corruptions, etc.).
Would people who use MySQL for large volumes of data share their
experience: number of tables, avg/max number of records in a table, db
size, time that it takes to dump/restore the db, potential problems?
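On the dump/restore timing question, one way to measure it is simply to wrap the standard tools in `time`; the database name, credentials, and file paths below are placeholders:

```shell
# Hypothetical timing of a full dump and restore cycle.
# "mydb" and the root credentials are placeholder values.
time mysqldump --opt -u root -p mydb > mydb.sql

# Restore into a separate database to avoid clobbering the original.
time mysql -u root -p mydb_restored < mydb.sql
```

Wall-clock times measured this way depend heavily on disk speed, table types, and index counts, so numbers from one setup rarely transfer directly to another.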
Thanks in advance
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]
--
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search & Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577