On Wed, Aug 10, 2011 at 1:26 AM, Trevyn Meyer <[email protected]> wrote:
> I have a client who expects to have some large tables.  10-100 GB of data.
>
> Do you have any suggestions how to handle those?  I suspect even if
> indexed you would have speed issues?
>
> I can imagine a rolling system, where tables are rolled like log files
> or something?  Any suggestions on how to handle 20 tables with lots of
> rows? Break them into smaller ones?

I don't mean to troll or anything, but if your client is open to other
databases, PostgreSQL has had considerable work done to handle tables
of that size. According to its documentation [1], the maximum supported
table size is 32 TB.

As for performance, there are several approaches. Skype, for example,
wrote PL/Proxy, a database partitioning tool [2] written as a
procedural language for PostgreSQL. Here's a tutorial [3] for it that
should give you a good idea of how that approach works.
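
To give a rough feel for it, here's a minimal sketch of a PL/Proxy
partitioned call in the style of that tutorial (the cluster name
'usercluster' and the function name are illustrative, not anything
specific to your client's schema):

    -- Routes the call to the partition chosen by hashing the username.
    -- A plain PostgreSQL function with the same signature must exist on
    -- each partition database and do the actual lookup there.
    CREATE FUNCTION get_user_email(i_username text)
    RETURNS SETOF text AS $$
        CLUSTER 'usercluster';
        RUN ON hashtext(i_username);
    $$ LANGUAGE plproxy;

The proxy database only routes calls; the data lives on the partition
nodes, so each individual table stays far smaller than the 10-100 GB
range you're worried about.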

Roberto

[1] http://www.postgresql.org/about/
[2] http://pgfoundry.org/projects/plproxy/
[3] http://plproxy.projects.postgresql.org/doc/tutorial.html
