On Mon, Jan 25, 2010 at 10:53 PM, nair rajiv <nair...@gmail.com> wrote:

> Hello,
>
>           I am working on a project that will extract structured content
> from Wikipedia and put it in our database. Before loading the data, I wrote
> a script to find out how many rows each table would have once the data is
> in, and I found that one table will have approximately 5 crore (50 million)
> entries after data harvesting.
> Is it advisable to keep so much data in one table?
>

Keeping that much data in a single table is generally not ideal, but whether
it is actually a problem depends on your application and how the database is
used.


>           I have read about 'partitioning' a table. Another idea I have is
> to break the table into different tables after the number of rows in a
> table has reached a certain limit, say 10 lacs (1 million). For example,
> dividing a table 'datatable' into 'datatable_a', 'datatable_b', each having
> 10 lac (1 million) entries.
>

I think this won't help much if you have a single machine. Partition the
table and keep the data on different nodes. Have a look at tools like
pgpool-II.
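
If you do decide to partition within a single PostgreSQL instance, the usual
approach is inheritance-based range partitioning with CHECK constraints. A
minimal sketch, assuming a hypothetical 'datatable' keyed by an integer 'id'
(the column names and range boundaries below are made up for illustration):

  CREATE TABLE datatable (
      id      bigint NOT NULL,
      title   text,
      content text
  );

  -- one child table per id range; the CHECK constraint lets the planner
  -- skip partitions that cannot match the query's WHERE clause
  CREATE TABLE datatable_a (
      CHECK (id >= 0 AND id < 10000000)
  ) INHERITS (datatable);

  CREATE TABLE datatable_b (
      CHECK (id >= 10000000 AND id < 20000000)
  ) INHERITS (datatable);

  -- route inserts on the parent into the right child
  CREATE OR REPLACE FUNCTION datatable_insert() RETURNS trigger AS $$
  BEGIN
      IF NEW.id < 10000000 THEN
          INSERT INTO datatable_a VALUES (NEW.*);
      ELSE
          INSERT INTO datatable_b VALUES (NEW.*);
      END IF;
      RETURN NULL;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER datatable_insert_trigger
      BEFORE INSERT ON datatable
      FOR EACH ROW EXECUTE PROCEDURE datatable_insert();

  -- constraint_exclusion must be enabled for SELECTs to prune partitions
  SET constraint_exclusion = on;

Queries against the parent still see all the children, so you get the effect
of the manual datatable_a / datatable_b split without the application having
to know which physical table to hit.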


> I needed advice on whether I should go for partitioning or the approach I
> have thought of.
>           We have an HP server with 32GB RAM and 16 processors. The storage
> has 24TB of disk space (1TB/HD). We have put the disks on RAID-5. It would
> be great if we could know which parameters in the postgres configuration
> file can be changed so that the database makes the best use of the server
> we have.
>

What would your total database size be? What IOPS can your storage deliver?
You should partition the database, keep the data across multiple nodes, and
process them in parallel.
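
For the size question, one way to get a number is to load a sample of the
harvested data, measure it with the built-in size functions, and extrapolate
('datatable' below is just the hypothetical table name from the earlier
mail):

  -- whole database, human readable
  SELECT pg_size_pretty(pg_database_size(current_database()));

  -- one table including its indexes and TOAST data
  SELECT pg_size_pretty(pg_total_relation_size('datatable'));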


> For example, parameters that would increase the speed of inserts and selects.
>
>
>
pgfoundry.org/projects/pgtune/ - have a look and check the docs.
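
pgtune generates a starting point from your RAM and workload type. On a
dedicated 32GB box, the parameters it usually touches look something like
the following (the numbers are only ballpark guesses, not tuned
recommendations for your workload):

  # postgresql.conf: rough starting values for a dedicated 32GB server
  shared_buffers = 8GB              # roughly 25% of RAM
  effective_cache_size = 24GB       # what the OS cache will likely hold
  work_mem = 64MB                   # per sort/hash; watch concurrent queries
  maintenance_work_mem = 1GB        # speeds up CREATE INDEX and VACUUM
  checkpoint_segments = 64          # fewer, larger checkpoints while loading
  checkpoint_completion_target = 0.9
  wal_buffers = 16MB

For the initial Wikipedia load itself, using COPY in large batches and
creating the indexes only after the data is in will usually buy you more
than any single setting above.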

> Thank you in advance
> Rajiv Nair
