On Mon, 25 Jan 2010, Viji V Nair wrote:
I think this won't help that much if you have a single machine. Partition the
table and keep the data in different nodes. Have a look at tools like
pgpool-II.
So partitioning. You have three choices:
1. Use a single table
2. Partition the table on the
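Partitioning splits one logical table into smaller physical pieces so a query only has to scan the piece its key falls in. As a language-agnostic sketch of the idea (the bucket count, key choice, and hash are illustrative assumptions, not anything specified in this thread):

```python
# Illustrative sketch: hash-partition rows by key into N buckets,
# mimicking how a partitioned table routes rows to child tables.
N_PARTITIONS = 4  # assumed bucket count, for illustration only

def partition_for(key: str) -> int:
    # Stable hash so the same key always lands in the same partition.
    return sum(key.encode()) % N_PARTITIONS

partitions = {i: [] for i in range(N_PARTITIONS)}
for row in [("Delhi", 1), ("Paris", 2), ("Tokyo", 3)]:
    partitions[partition_for(row[0])].append(row)

# A lookup now scans one partition instead of the whole table.
hits = [r for r in partitions[partition_for("Paris")] if r[0] == "Paris"]
```

The win is the same whether the "partitions" are child tables on one machine or shards on different nodes: less data per lookup.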
On Mon, 25 Jan 2010, nair rajiv wrote:
I am working on a project that will take out structured content from
wikipedia and put it in our database...
there is a table which will approximately have 5 crore entries after data
harvesting.
Have you asked the Wikimedia Foundation if they mind you
On Tue, Jan 26, 2010 at 5:15 PM, Matthew Wakeling <matt...@flymine.org> wrote:
Viji V Nair wrote:
A 15k rpm SAS drive will give you a throughput of 12MB and 120 IOPS.
Now you can calculate the number of disks, specifically spindles, for
getting your desired throughput and IOPS
I think you mean 120MB/s for that first part. Regardless, presuming you
can provision a
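The spindle-count arithmetic Viji describes is simple: divide each target by the per-drive figure, round up, and take the larger result, because the workload is limited by whichever resource runs out first. A small sketch using the corrected per-drive numbers from this thread (120 MB/s and 120 IOPS per 15k SAS drive); the 600 MB/s and 5000 IOPS targets below are made-up example values:

```python
import math

def spindles_needed(target_mbps, target_iops,
                    per_disk_mbps=120, per_disk_iops=120):
    """Disks needed to satisfy both a throughput and an IOPS target."""
    by_throughput = math.ceil(target_mbps / per_disk_mbps)
    by_iops = math.ceil(target_iops / per_disk_iops)
    # Whichever requirement needs more disks wins.
    return max(by_throughput, by_iops)

# Random-I/O-heavy workloads are usually IOPS-bound:
# 600/120 = 5 disks by throughput, but ceil(5000/120) = 42 by IOPS.
print(spindles_needed(600, 5000))
```

Note this ignores RAID write penalties and controller limits, which the later messages in this thread point out can dominate in practice.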
On Tue, Jan 26, 2010 at 11:11 PM, Greg Smith <g...@2ndquadrant.com> wrote:
Viji V Nair wrote:
There are catches in the SAN controllers also. SAN vendors won't give
that much information regarding their internal controller design. They
will say they have 4 external 4G ports; you should also check how many
internal ports they have and how the controllers are
Hello,
I am working on a project that will take out structured content from Wikipedia
and put it in our database. Before putting the data into the database I wrote
a script to find out the number of rows every table would be having after the
data is in, and I found there is a table which
nair rajiv <nair...@gmail.com> wrote:
I found there is a table which will approximately have 5 crore
entries after data harvesting.
Is it advisable to keep so much data in one table?
That's 50,000,000 rows, right? At this site, you're looking at a
non-partitioned table with more than seven
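The conversion Kevin does is worth spelling out for readers unfamiliar with Indian numbering: one lakh (also spelled lac) is 10^5 and one crore is 10^7, so 5 crore is indeed 50,000,000:

```python
LAKH = 10**5   # 1,00,000 in Indian digit grouping
CRORE = 10**7  # 1,00,00,000

rows = 5 * CRORE
print(f"{rows:,}")  # prints 50,000,000
```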
Kevin Grittner wrote:
You should remember that words like lac and crore are
On Tue, Jan 26, 2010 at 1:01 AM, Craig James <craig_ja...@emolecules.com> wrote:
On Tuesday 26 January 2010 01:39:48 nair rajiv wrote:
On Tue, Jan 26, 2010 at 6:19 AM, Andres Freund <and...@anarazel.de> wrote:
On Tue, Jan 26, 2010 at 9:18 AM, nair rajiv <nair...@gmail.com> wrote: