I just wanted to thank everyone for your input on my question. You've
given me a lot of tools to solve my problem here.
Orion
On Fri, 2006-02-10 at 16:39, Ragnar wrote:
> On Fri, 2006-02-10 at 11:24 +0100, Markus Schaber wrote:
>
> > For lots of non-read-only database workloads, RAID5 is a performance
> > killer. RAID 1/0 might be better, or having two mirrors of two disks
> > each, the first mirror holding system, swap, an
On Fri, 2006-02-10 at 11:24 +0100, Markus Schaber wrote:
> For lots of non-read-only database workloads, RAID5 is a performance
> killer. RAID 1/0 might be better, or having two mirrors of two disks
> each, the first mirror holding system, swap, and the PostgreSQL WAL
> files, the second one holding
> was originally designed for Postgres 7.0 on a PIII 500MHz and some
Argh.
> 1) The database is very large, the largest table has 40 million tuples.
Are these simple types (like a few ints, text...)?
How much space does it use on disk? Can it fit in RAM?
> 2) The datab
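The disk-footprint questions above can be answered straight from the server; a minimal sketch, assuming PostgreSQL 8.1 or later (the table name is hypothetical — substitute the 40-million-row table):

```sql
-- Heap size vs. heap-plus-indexes size for the largest table.
SELECT pg_size_pretty(pg_relation_size('my_big_table'))       AS heap_size,
       pg_size_pretty(pg_total_relation_size('my_big_table')) AS total_size;
```

Comparing total_size against installed RAM tells you whether the working set can plausibly stay cached.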
Hi, Henry,
Orion Henry wrote:
> 1) The database is very large, the largest table has 40 million tuples.
I'm afraid this doesn't qualify as '_very_ large' yet, but it
definitely is large enough to warrant some deep thought. :-)
> 1) The data is easily partitionable by client ID. In an
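Data that is "easily partitionable by client ID" maps naturally onto the table-inheritance partitioning available in PostgreSQL of this era (8.1 constraint exclusion); a sketch with made-up table and column names, one child table per client:

```sql
-- Parent table plus one child per client. With constraint_exclusion
-- enabled, the planner skips children whose CHECK rules out the client_id.
CREATE TABLE sales (
    client_id integer NOT NULL,
    item      text,
    amount    numeric
);

CREATE TABLE sales_client_1 (
    CHECK (client_id = 1)
) INHERITS (sales);

-- A query such as
--   SELECT sum(amount) FROM sales WHERE client_id = 1;
-- then touches only sales_client_1 instead of all partitions.
```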
Hi, Greg,
Greg Stark wrote:
>>(Aside question: if I were to find a way to use COPY and I were loading
>>data on a single client_id, would dropping just the indexes for that client_id
>>accelerate the load?)
> Dropping indexes would accelerate the load but unless you're loading a large
> numbe
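The drop/load/rebuild cycle Greg describes pairs well with partial indexes, since a per-client partial index is exactly "just the indexes for that client_id"; a sketch under that assumption (index, table, and file names are hypothetical):

```sql
-- Bulk-load one client's rows with its partial index out of the way,
-- then rebuild it once: a single sort beats millions of b-tree inserts.
BEGIN;
DROP INDEX sales_idx_client42;
COPY sales FROM '/tmp/client_42.copy';
CREATE INDEX sales_idx_client42 ON sales (item) WHERE client_id = 42;
COMMIT;
```

Indexes on other clients' rows still see every inserted tuple, so the win shrinks as the number of global (non-partial) indexes grows.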
On 2/9/06, Orion Henry <[EMAIL PROTECTED]> wrote:
>
> Hello All,
>
> I've inherited a PostgreSQL database that I would like to refactor. It
> was originally designed for Postgres 7.0 on a PIII 500MHz and some
> design decisions were made that don't make sense any more. Here's the
> problem:
>
>
Orion Henry <[EMAIL PROTECTED]> writes:
> What I would LIKE to do but am afraid I will hit a serious performance wall
> (or am missing an obvious / better way to do it)
>
> 1) Merge all 133 client tables into a single new table, add a client_id
> column,
> do the data partitioning on the index
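One reading of "do the data partitioning on the index" is a single merged table whose indexes carry the per-client separation, either as a composite index led by client_id or as one partial index per client; a sketch under those assumptions (all names hypothetical):

```sql
-- Merge the per-client tables into one, carrying the client as a column.
CREATE TABLE all_clients (
    client_id integer NOT NULL,
    item      text,
    amount    numeric
);

INSERT INTO all_clients
SELECT 17, item, amount FROM client_17;  -- repeat for each of the 133 tables

-- Composite index: one index serves every client's lookups ...
CREATE INDEX all_clients_idx ON all_clients (client_id, item);

-- ... or a partial index per client keeps index maintenance separable:
-- CREATE INDEX all_clients_17_idx ON all_clients (item) WHERE client_id = 17;
```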
On Thu, Feb 09, 2006 at 11:07:06AM -0800, Orion Henry wrote:
>
> Hello All,
>
> I've inherited a PostgreSQL database that I would like to refactor. It
> was originally designed for Postgres 7.0 on a PIII 500MHz and some
> design decisions were made that don't make sense any more. Here's the
Hello All,
I've inherited a PostgreSQL database that I would like to refactor. It
was originally designed for Postgres 7.0 on a PIII 500MHz and some
design decisions were made that don't make sense any more. Here's the
problem:
1) The database is very large, the largest table has 40 mil