I just wanted to thank everyone for your input on my question. You've
given me a lot of tools to solve my problem here.
Orion
---(end of broadcast)---
TIP 4: Have you searched our list archives?
http://archives.postgresql.org
Hi, Greg,
Greg Stark wrote:
(Aside question: if I were to find a way to use COPY and I were loading
data on a single client_id, would dropping just the indexes for that client_id
accelerate the load?)
Dropping indexes would accelerate the load but unless you're loading a large
number of
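If the per-client data is covered by partial indexes, the drop/load/recreate cycle can be limited to the one client being loaded. A minimal sketch, assuming a hypothetical `transactions` table, index name, and client_id value (adapt to your schema):

```sql
-- Sketch only: table, index, and client_id = 42 are hypothetical names.
-- Drop the partial index covering the client being loaded...
DROP INDEX transactions_client42_idx;

-- ...bulk-load that client's rows with COPY (much faster than row-by-row
-- INSERTs, since each row skips the per-statement overhead)...
COPY transactions FROM '/tmp/client42.dat';

-- ...then rebuild the partial index in a single pass over the new data.
CREATE INDEX transactions_client42_idx
    ON transactions (posted_date)
    WHERE client_id = 42;
```

Note that any full (non-partial) index on the table still gets updated for every row during the COPY, so this only pays off when the indexes really are partial per client.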
Hi, Henry,
Orion Henry wrote:
1) The database is very large, the largest table has 40 million tuples.
I'm afraid this doesn't qualify as '_very_ large' yet, but it
definitely is large enough to have some deep thoughts about it. :-)
1) The data is easily partitionable by client ID. In an
was originally designed for Postgres 7.0 on a PIII 500MHz and some
Argh.
1) The database is very large, the largest table has 40 million tuples.
Are these simple types (a few ints, text, ...)?
How much space does it use on disk? Can it fit in RAM?
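To answer the on-disk-size question, recent PostgreSQL releases (8.1 and later) ship built-in size functions; a quick sketch, with 'big_table' as a placeholder for the real table name:

```sql
-- Heap alone vs. heap plus indexes and TOAST; 'big_table' is a placeholder.
SELECT pg_size_pretty(pg_relation_size('big_table'))       AS heap_size,
       pg_size_pretty(pg_total_relation_size('big_table')) AS total_size;
```

Comparing total_size against installed RAM gives a rough idea of whether the working set can stay cached.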
2) The
On Fri, 2006-02-10 at 11:24 +0100, Markus Schaber wrote:
For lots of non-read-only database workloads, RAID5 is a performance
killer. RAID 1/0 might be better, or having two mirrors of two disks
each, the first mirror holding system, swap, and the PostgreSQL WAL
files, the second one holding the
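One way to put the WAL on that first mirror is to symlink pg_xlog out of the data directory. A sketch with placeholder paths (the /tmp demo paths are assumptions; on a real cluster, point the variables at your actual data directory and WAL mirror, and stop the postmaster first):

```shell
# Placeholder paths for illustration; substitute the real locations.
PGDATA=${PGDATA:-/tmp/pgdata-demo}
WALDIR=${WALDIR:-/tmp/pg_xlog-demo}

# Demo setup so the sketch is self-contained; a real cluster already
# has a populated pg_xlog directory.
mkdir -p "$PGDATA/pg_xlog" "$WALDIR"

# Move any existing WAL segments onto the dedicated mirror...
mv "$PGDATA/pg_xlog"/* "$WALDIR"/ 2>/dev/null || true

# ...then replace the directory with a symlink so PostgreSQL keeps
# writing WAL to the same path, now backed by the other spindles.
rmdir "$PGDATA/pg_xlog"
ln -s "$WALDIR" "$PGDATA/pg_xlog"
```

This separates sequential WAL writes from the random heap I/O on the second mirror, which is the point of the layout described above.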
On Fri, 2006-02-10 at 16:39, Ragnar wrote:
On Fri, 2006-02-10 at 11:24 +0100, Markus Schaber wrote:
For lots of non-read-only database workloads, RAID5 is a performance
killer. RAID 1/0 might be better, or having two mirrors of two disks
each, the first mirror holding system, swap, and the
Hello All,
I've inherited a PostgreSQL database that I would like to refactor. It
was originally designed for Postgres 7.0 on a PIII 500MHz and some
design decisions were made that don't make sense any more. Here's the
problem:
1) The database is very large, the largest table has 40 million tuples.