Hervé Piedvache wrote:


No ... as I have said ... how will I manage a database with a table of maybe 250,000,000 records? I'll need incredible servers to get quick access or index reads ... no?


So what we would like is a pool of small servers able to act as one virtual server ... which is what's called a cluster ... no?

I know they are not using PostgreSQL ... but how does a company like Google manage a database of incredible size with such quick access?

Probably by carefully partitioning their data. I can't imagine anything being fast on a single table in the 250,000,000-tuple range. Nor can I really imagine any database that efficiently splits a single table across multiple machines (or even inefficiently, unless some internal partitioning is being done).
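To make the partitioning idea concrete, here is a minimal sketch of hash partitioning: rows are routed to one of several small servers by hashing the key, so each node only has to store and index a fraction of the 250,000,000 records. The server names and the route() helper are illustrative assumptions, not anything from this thread or from PostgreSQL itself.

```python
import hashlib

# Hypothetical pool of small servers (names are made up for illustration).
SERVERS = ["pg-node-0", "pg-node-1", "pg-node-2", "pg-node-3"]

def route(record_id: int) -> str:
    """Pick the server responsible for a given record id by hashing it."""
    digest = hashlib.md5(str(record_id).encode()).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]

# Every lookup touches exactly one node, and each node ends up holding
# roughly 1/len(SERVERS) of the data.
counts = {s: 0 for s in SERVERS}
for rid in range(100_000):
    counts[route(rid)] += 1
print(counts)
```

The application (or some middleware layer) has to do this routing itself, which is exactly the "work at your end" mentioned below.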

So, you'll have to do some work at your end and not just hope that
a "magic bullet" is available.

Once you've got the data partitioned, the question becomes one of
how to enhance performance/scalability.  Have you considered RAIDb?


--
Steve Wampler -- [EMAIL PROTECTED]
The gods that smiled on your birth are now laughing out loud.
