We are thinking of moving to MySQL. We have a table of several tens
of millions of rows, with two indices, which will be accessed by
roughly 100 different processes. At any one time, 5 or so of the
processes will be doing selects on the table, while 40 or so will be
doing updates. However, no two processes will ever try to update the
same row at once.
Can MySQL handle this efficiently and without allowing the table or
indices to become corrupt?
(The total throughput we need is on the order of 100 indexed updates
per second; currently we are running a single 900 MHz Athlon with a generic
IDE disk, but we would buy more processors if that would help.)
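To make the access pattern concrete, here is a minimal sketch of the workload described above. This is an assumption-laden illustration, not a MySQL benchmark: the table name (`jobs`), columns, and row counts are invented, and Python's built-in sqlite3 stands in for MySQL purely to show what "indexed single-row updates, no two writers touching the same row" looks like.

```python
import sqlite3
import time

# Hypothetical table standing in for the tens-of-millions-of-rows table
# described in the question (scaled down so the sketch runs quickly).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute(
    "CREATE TABLE jobs (id INTEGER PRIMARY KEY, status INTEGER, owner INTEGER)"
)
# One secondary index, analogous to the "two indices" mentioned above.
cur.execute("CREATE INDEX idx_owner ON jobs (owner)")
cur.executemany(
    "INSERT INTO jobs VALUES (?, ?, ?)",
    [(i, 0, i % 100) for i in range(10_000)],
)
conn.commit()

# Each update targets a distinct primary-key value, mirroring the stated
# property that no two processes ever update the same row at once.
start = time.perf_counter()
for i in range(1_000):
    cur.execute("UPDATE jobs SET status = 1 WHERE id = ?", (i,))
conn.commit()
elapsed = time.perf_counter() - start
rate = 1_000 / elapsed
print(f"{rate:.0f} indexed updates/sec")
```

The question then amounts to whether MySQL can sustain roughly 100 such single-row indexed updates per second under 40-odd concurrent writers without corrupting the table or its indices.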