* Kevin Grittner ([email protected]) wrote:
> > Could some of you please share some info on such scenarios- where
> > you are supporting/designing/developing databases that run into at
> > least a few hundred GBs of data (I know, that is small by todays'
> > standards)?
Just saw this, so figured I'd comment:
tsf=> \l+
                                                List of databases
   Name    |  Owner   | Encoding |  Collation  |    Ctype    | Access privileges |  Size   | Tablespace | Description
-----------+----------+----------+-------------+-------------+-------------------+---------+------------+-------------
 beac      | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =Tc/postgres      | 1724 GB | pg_default |
Doesn't look very pretty, but the point is that it's 1.7TB. There are a
few other, smaller databases on that system too. PG handles it quite
well, though this one is primarily used for data-mining.
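For what it's worth, the same size figure can be pulled for a single
database without the full \l+ listing, using PostgreSQL's built-in
pg_database_size() and pg_size_pretty() functions (the database name
'beac' here is just taken from the output above):

```sql
-- Human-readable size of one database (e.g. "1724 GB")
SELECT pg_size_pretty(pg_database_size('beac'));

-- Sizes of all databases in the cluster, largest first
SELECT datname,
       pg_size_pretty(pg_database_size(datname)) AS size
FROM pg_database
ORDER BY pg_database_size(datname) DESC;
```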
Thanks,
Stephen
