Hi Artur,

Some general comments:

I'd look at partitioning and tablespaces to better manage the files where the 
data is stored, and at some efficiently parallelised disks behind the 
filesystems. You might also look at tuning the filesystem & OS parameters, so 
it is a mix of hardware, OS, filesystem & database setup to optimise for such 
a situation.
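
As a very rough sketch (the table, column, tablespace and directory names below 
are only placeholders, and the partition layout would need to match how you 
actually query the data), monthly partitions spread across tablespaces might 
look something like this:

  -- tablespaces on separate disks/arrays (the directories must already exist)
  CREATE TABLESPACE ts_2009_06 LOCATION '/vol1/pgdata/2009_06';
  CREATE TABLESPACE ts_2009_07 LOCATION '/vol2/pgdata/2009_07';

  -- parent table holds no data itself
  CREATE TABLE ticks (
      symbol     text        NOT NULL,
      trade_time timestamptz NOT NULL,
      price      numeric     NOT NULL,
      volume     bigint      NOT NULL
  );

  -- one child table per month, each on its own tablespace; the CHECK
  -- constraints let the planner skip irrelevant months when
  -- constraint_exclusion is enabled
  CREATE TABLE ticks_2009_06 (
      CHECK (trade_time >= '2009-06-01' AND trade_time < '2009-07-01')
  ) INHERITS (ticks) TABLESPACE ts_2009_06;

  CREATE TABLE ticks_2009_07 (
      CHECK (trade_time >= '2009-07-01' AND trade_time < '2009-08-01')
  ) INHERITS (ticks) TABLESPACE ts_2009_07;

  CREATE INDEX ticks_2009_06_time ON ticks_2009_06 (trade_time)
      TABLESPACE ts_2009_06;
  CREATE INDEX ticks_2009_07_time ON ticks_2009_07 (trade_time)
      TABLESPACE ts_2009_07;

New rows then need to be routed into the right child table, either by the 
application or by a trigger/rule on the parent, and bulk loads can COPY 
straight into the month's child table.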

For data retrieval, clustered indexes may help, but as this requires a physical 
reordering of the data on disk (and PostgreSQL does not maintain that ordering 
as new rows arrive), it may be impractical on tables this size.
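
If you do try it, PostgreSQL's CLUSTER command rewrites the whole table in 
index order and holds an exclusive lock while it runs, so it is more realistic 
on a closed-off monthly partition than on a live table. Purely as an 
illustration, reusing the hypothetical names from the sketch above:

  -- reorder a finished month's partition by its time index, then re-analyse
  CLUSTER ticks_2009_06 USING ticks_2009_06_time;
  ANALYZE ticks_2009_06;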


Cheers,

  Brent Wood



Brent Wood
DBA/GIS consultant
NIWA, Wellington
New Zealand
>>> Artur <a_wron...@gazeta.pl> 06/16/09 3:30 AM >>>
Hi!

We are thinking of building a stock-related search engine.
It is an experimental project, just for fun.

The problem is that we expect to have more than 250 GB of data every month.
This data would be in two tables, with about 50,000,000 new rows every month.

We want to have access to all the data, mostly for generating user-requested 
(aggregating) reports.
We would have about 10 TB of data in three years.

Do you think it is possible to build this with PostgreSQL, and do you have any 
idea how to start? :)


Thanks in advance,
Artur





NIWA is the trading name of the National Institute of Water & Atmospheric 
Research Ltd.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
