On 11/1/06, mos wrote:
Sure, I've thought of those too. But has anyone gotten Firebird to store 700-800 GB tables? Can you split Firebird's .gdb file across drives?

The main problem with tables that size is maintaining the index. My upper limit for MySQL is 100 million rows; after that, any new rows that are added take much longer to insert because the index tree has to be maintained. I definitely recommend cramming as much memory into the box as humanly possible, because indexes of that size will need it. Probably the simplest solution for MySQL is to use MERGE tables.

I know some people with MySQL, Oracle, and MS SQL have terabyte tables, but I haven't heard of other databases storing tables that large. So if you or anyone else has used Firebird or PostgreSQL to store terabyte tables, I'd certainly be interested in hearing about it. :)
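For anyone who hasn't used them: a MERGE table is just a view-like MyISAM wrapper over several identically-defined MyISAM tables, so each underlying table (and its index) stays small. A minimal sketch (table and column names here are made up for illustration):

```sql
-- Two identically-defined MyISAM tables, e.g. one per year
CREATE TABLE log_2005 (
    id  INT NOT NULL,
    msg VARCHAR(255),
    INDEX (id)
) ENGINE=MyISAM;

CREATE TABLE log_2006 LIKE log_2005;

-- The MERGE table presents them as one; INSERT_METHOD=LAST sends
-- new rows to the last table in the UNION list
CREATE TABLE log_all (
    id  INT NOT NULL,
    msg VARCHAR(255),
    INDEX (id)
) ENGINE=MERGE UNION=(log_2005, log_2006) INSERT_METHOD=LAST;

SELECT COUNT(*) FROM log_all;  -- scans both underlying tables
```

The catch is that the definitions must match exactly, and you have to ALTER the UNION list yourself when you add a new underlying table.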
What is the big deal about a TB? Now, if you get past 20 TB you might want to team up with one of the commercial PostgreSQL supporters (Fujitsu, EnterpriseDB, Greenplum, etc.), but Sun even sells appliances for 100 TB PostgreSQL databases.

Jochem

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]