Mark Roberts wrote:
1. 2.5-3TB, plus several others that are a fraction of that size.


...


5. They do pretty well, actually.  Our aggregate fact tables regularly
join to metadata tables, and we see average query return times of
10-30s.  We make some use of denormalized mviews for
chained/hierarchical metadata tables.
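
Roughly the idea (table and column names here are made up, and this
sketch assumes a PostgreSQL version with WITH RECURSIVE and native
materialized views; an older or hand-rolled setup would maintain the
flattened table itself instead):

    -- Flatten a self-referencing metadata table into one row per node,
    -- carrying the full path, so fact tables can join a single flat
    -- table instead of walking the chain at query time.
    CREATE MATERIALIZED VIEW category_flat AS
    WITH RECURSIVE tree AS (
        SELECT id, parent_id, name, name::text AS full_path, 1 AS depth
        FROM   category
        WHERE  parent_id IS NULL
        UNION ALL
        SELECT c.id, c.parent_id, c.name,
               t.full_path || ' > ' || c.name, t.depth + 1
        FROM   category c
        JOIN   tree t ON t.id = c.parent_id
    )
    SELECT id, parent_id, name, full_path, depth
    FROM   tree;

    CREATE UNIQUE INDEX ON category_flat (id);

    -- Refreshed after metadata loads; CONCURRENTLY requires the unique
    -- index above but doesn't lock out readers.
    REFRESH MATERIALIZED VIEW CONCURRENTLY category_flat;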

Just out of curiosity, how do you replicate that amount of data?
...

A few notes: our database really boils down to a very typical ETL
database: medium/high input (write) volume with low-latency access
required.  I can provide a developer's view of what it takes to keep a
database of this size running, but I'm under no illusion that it's
actually a "large" database.

I'd go into more detail, but I'd hate to ramble.  If anyone's
actually interested in any specific parts, feel free to ask. :)

I'd be very interested in a developer's view of running and maintaining
a database of this size, particularly which choices made during
development might have been different for a smaller database.  I'm also
curious about the maintenance needed to keep a database this size
healthy over time.

Regards,
/roppert
Also, if you feel that we're doing "something wrong", feel free to
comment there too. :)

-Mark



