On Fri, 5 Mar 2004 22:14, Tomàs Núñez Lirola <[EMAIL PROTECTED]> wrote:
> We're planning a new website where we will use a DB with 500,000 to
> 1,000,000 records. We are now deciding which database server we will use.
> We've read that MySQL has big problems with 150,000 records or more, and
> also that PostgreSQL is very slow with that many records. But we don't
> have any experience, so we must rely on other people's experience.

How big are these records?  Usually records are no more than 1K in size, so
at 1,000,000 records the entire database is only about 1GB and should fit
into cache.  I've run databases much bigger than that on hardware that was
OK by 1999 standards (but sucks badly by today's standards) and it was fine.
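Here's a rough back-of-envelope sizing as a sketch; the 1K average record
size and the index overhead multiplier are assumptions, so measure your real
schema before relying on the numbers:

    # Rough sizing estimate, assuming ~1K per record as above.
    records = 1000000          # upper end of the stated range
    avg_record_size = 1024     # bytes per record (assumption)
    index_overhead = 1.5       # rough multiplier for indexes and table overhead

    total_bytes = records * avg_record_size * index_overhead
    print("Estimated size: %.2f GiB" % (total_bytes / 2.0**30))   # ~1.43 GiB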

Of course it really depends on what exactly you are doing: how many indexes,
how many programs may be writing at the same time, whether you need
transactions, etc.  But given RAM prices I suggest first making sure that
your RAM is about the same size as the database if at all possible.  If you
can do that then, apart from 5-10 minutes at startup, IO performance is
totally dependent on writes.  Then get a battery-backed write-back disk
cache for the best write performance (maybe use data journalling and put an
external journal on a device from http://www.umem.com ).
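A minimal sketch of that external-journal setup, assuming ext3 on Linux; the
device paths and mount point below are hypothetical placeholders, and on real
hardware you would just run the equivalent commands by hand:

    # Sketch of ext3 data journalling with an external journal, as suggested
    # above.  Device names are hypothetical -- substitute your own.
    import subprocess

    JOURNAL_DEV = "/dev/umem0"          # hypothetical battery-backed device
    DATA_DEV = "/dev/sda3"              # hypothetical partition for the DB
    MOUNT_POINT = "/var/lib/postgres"   # hypothetical mount point

    def run(cmd):
        print("+ " + " ".join(cmd))
        subprocess.check_call(cmd)

    # Make the battery-backed device a dedicated journal device.
    run(["mke2fs", "-O", "journal_dev", JOURNAL_DEV])

    # Create the data filesystem with its journal on that external device.
    run(["mke2fs", "-j", "-J", "device=" + JOURNAL_DEV, DATA_DEV])

    # Mount with full data journalling so writes hit the fast journal first.
    run(["mount", "-t", "ext3", "-o", "data=journal", DATA_DEV, MOUNT_POINT])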

Probably getting the performance you want is easy if you have the right
budget and are able to be a little creative with the way you install things
(e.g. the uMem device).  The REAL issue will probably be redundancy.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/    Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page

