On 21 Oct 2004, at 01:30, Dennis Gearon wrote:

> I am designing something that may be the size of yahoo, google, ebay, etc.

Grrr. Geek wet-dream.

> Just ONE many to many table could possibly have the following characteristics:
>
>    3,600,000,000 records
>    each record is 9 fields of INT4/DATE

I don't do this myself (my data is only 3 gig, and most of that is in blobs), but people have repeatedly reported such sizes on this list.
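
For a rough feel of the raw heap size those numbers imply, a back-of-envelope calculation helps (the ~28 bytes of per-tuple overhead here is an assumption, and indexes, dead tuples and padding aren't counted):

        -- 9 INT4/DATE columns at 4 bytes each, plus assumed per-tuple overhead
        select 3600000000::bigint * (9 * 4 + 28) / (1024^3) as approx_gb;
        -- roughly 215 GB of heap data, before any indexes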


Check
        http://archives.postgresql.org/pgsql-admin/2001-01/msg00188.php

... but the best you can do is just to try it out. With a few commands in the 'psql' query tool you can easily populate a ridiculously large database ("insert into foo select * from foo" a few times).

In a few hours you'll have some feel for it.
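
A minimal sketch of that doubling trick, with a made-up table shaped roughly like the one described above (column names invented for illustration):

        -- hypothetical 9-column INT4/DATE table
        create table foo (
            a int4, b int4, c int4, d int4,
            e int4, f int4, g int4, h int4,
            added_on date
        );

        insert into foo values (1, 2, 3, 4, 5, 6, 7, 8, current_date);

        -- each of these doubles the row count; 32 doublings of a single row
        -- takes you past 4 billion rows (disk space permitting)
        insert into foo select * from foo;
        insert into foo select * from foo;
        -- ...and so on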

> Other tables will have about 5 million records of about the same size.

There are lots of scenarios here to lessen this.

What you'll have to worry about most is the access pattern and update frequency.
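
For instance (a sketch only, table and column names invented), on a table that size you'd want indexes that match whatever the dominant access pattern turns out to be, and you'd check that your real queries actually use them:

        -- hypothetical many-to-many table
        create table item_tag (
            item_id  int4 not null,
            tag_id   int4 not null,
            added_on date not null
        );

        create index item_tag_item_idx on item_tag (item_id);

        -- verify the planner uses the index for your typical lookup
        explain select * from item_tag where item_id = 42;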


There's a lot of info out there. You may need any of the following:
 • clustering; the 'slony' project seems to be popular around here
 • concurrency of updating
 • connection pooling, maybe via Apache or some Java thingy
 • securing yourself against hardware errors

This list is a goldmine. Search the archives for discussions and pointers. Search interfaces are at

        http://archives.postgresql.org/pgsql-general/
        http://archives.postgresql.org/pgsql-admin/

... or download the list archive mbox files into your mail program and use that (which is what I do).

d.
--
David Helgason,
Business Development et al.,
Over the Edge I/S (http://otee.dk)
Direct line +45 2620 0663
Main line +45 3264 5049



