Anton Nikiforov writes:

Dear All!
I have a question about how PostgreSQL will manage a huge number of rows.
I have a project where 10 million records will be added to the database every half hour, and they need to be calculated, summarized, and managed.
I'm planning to have a few servers, each receiving something like a million records, which will then store the data on the central server in report-ready format.
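To make the question concrete, the per-collector summarization step I picture looks roughly like this (a sketch only; the table and column names are made up):

    -- hypothetical raw and summary tables on a collector server
    CREATE TABLE raw_records (
        src_ip inet        NOT NULL,
        bytes  bigint      NOT NULL,
        stamp  timestamptz NOT NULL
    );
    CREATE TABLE summary (
        src_ip       inet        NOT NULL,
        total_bytes  bigint      NOT NULL,
        period_start timestamptz NOT NULL
    );
    -- roll the last half hour up into report-ready rows
    INSERT INTO summary (src_ip, total_bytes, period_start)
        SELECT src_ip, sum(bytes), now() - interval '30 minutes'
        FROM raw_records
        WHERE stamp >= now() - interval '30 minutes'
        GROUP BY src_ip;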
I know that a million records can be managed by Postgres (I have a database with 25 million records and it is working just fine).
But I'm worried about the central database mentioned above, which will have to store 240 million records daily and accumulate this data for years.
I cannot even imagine the hardware needed to collect monthly statistics. So my question is: is this a task for Postgres, or should I be thinking about Oracle or DB2?
I'm also thinking about replicating data between two servers for redundancy; what would you suggest for this?
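If the built-in streaming replication of a recent PostgreSQL is the right tool here, I would expect the primary-side setup to look roughly like this (a sketch under that assumption, not a tested recipe):

    -- primary-side settings for built-in streaming replication
    ALTER SYSTEM SET wal_level = 'replica';
    ALTER SYSTEM SET max_wal_senders = 3;
    -- the standby would then be cloned with pg_basebackup and
    -- kept in sync over the streaming protocol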
And the data migration problem is still an open issue for me: how do I migrate data from fast devices (a RAID array) to slower devices (an MO library or something similar) while still keeping access to it?
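If tablespaces are the right answer here, I imagine moving older data onto the slow device like this (the path and table name are made up):

    -- a tablespace on the slow device (path is hypothetical)
    CREATE TABLESPACE archive LOCATION '/mnt/mo_library/pgdata';
    -- move an old month there; it stays queryable, just slower
    ALTER TABLE summary_2004_01 SET TABLESPACE archive;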

And one more question: does Postgres have something like Oracle's table partitioning, to store data according to some rule, such as by group of data sources (an IP network or the like)?
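For example, what I would like is something in the spirit of the classic inheritance-based sketch below, with one child table per source network (the names and networks are only illustrative):

    -- parent table plus per-network children via inheritance,
    -- each child carrying a CHECK constraint on its network
    CREATE TABLE traffic (
        src_ip inet        NOT NULL,
        bytes  bigint      NOT NULL,
        stamp  timestamptz NOT NULL
    );
    CREATE TABLE traffic_net10 (
        CHECK (src_ip <<= inet '10.0.0.0/8')
    ) INHERITS (traffic);
    CREATE TABLE traffic_net172 (
        CHECK (src_ip <<= inet '172.16.0.0/12')
    ) INHERITS (traffic);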

--
Best regards,
Anton Nikiforov
