I have had to bump the stats on a partitioned table in order to get the
planner to use an index over a seqscan. This has worked well in making
the system perform as it needs to, reducing one query's execution time
from 45 seconds to 1 second.
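For reference, the knob involved is the per-column statistics target. The table and column names below are placeholders, not the actual schema from this report:

```sql
-- Raise the sampling target for the column the planner misestimates
-- (the default in 8.1 is 10; values up to 1000 are allowed).
ALTER TABLE my_partition ALTER COLUMN my_col SET STATISTICS 200;
-- The new target only takes effect after the table is re-analyzed.
ANALYZE my_partition;
```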
The one problem I have run into came when I was trying to run a bulk
data load using the COPY command on PGSQL 8.1.0.
After loading about 3,500,000 records it ran out of memory - I am
assuming because it ran out of space to store such a large transaction.
Does the COPY command offer a similar feature to Oracle's SQL*Loader
where you
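One common workaround for out-of-memory failures on very large loads is to split the input and load it in smaller transactions rather than one giant COPY. The sketch below shows only the batching logic; the table name and the COPY call it would feed are hypothetical, not part of the original message:

```python
def batches(lines, batch_size):
    """Group an iterable of input lines into lists of at most batch_size.

    Each list is meant to be loaded in its own transaction (for example
    one COPY per batch), so a failure or resource limit only affects
    one chunk instead of the whole 3,500,000-row load.
    """
    batch = []
    for line in lines:
        batch.append(line)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch

# Usage sketch (file path and loader are assumptions):
# for chunk in batches(open("data.txt"), 1_000_000):
#     ... issue one COPY (or psql \copy) per chunk, committing after each
```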
If I have followed the chain correctly, I saw that you were trying to
run an update statement on a large number of records in a large table,
right? I have changed my strategy in the past for this type of problem.
I don't know if it would have fixed this problem or not, but I have seen
with
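The message is cut off before the strategy itself, but one common variant of this approach is to update in bounded slices so each transaction stays small. The table, column, and key names below are illustrative assumptions only:

```sql
-- Hypothetical schema: repeat for successive key ranges, committing
-- between slices, instead of running one huge UPDATE.
UPDATE big_table
   SET flag = true
 WHERE id >= 0 AND id < 100000;
-- next slice: WHERE id >= 100000 AND id < 200000; and so on.
```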
From: Michael Fuhr [EMAIL PROTECTED]
To: Kevin Keith [EMAIL PROTECTED]
CC: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Copy command not writing complete data to text file
Date: Thu, 22 Sep 2005 08:52:36 -0600
On Thu, Sep 22, 2005 at 08:27:00AM -0500, Kevin Keith wrote:
The platform is FreeBSD.
I am coming from an Oracle background, where for bulk data loads there
were several options that let me disable writing to the redo log to speed
up the load (e.g. direct path load, setting the session to no archive
logging, or marking the affected tables as NOLOGGING).
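PostgreSQL's closest analogue, assuming WAL archiving is disabled, is that COPY into a table created (or truncated) in the same transaction can skip WAL entirely; whether 8.1.0 already includes this optimization is worth confirming in the release notes. The table and file path below are placeholders:

```sql
BEGIN;
-- Creating the target inside the same transaction lets COPY avoid
-- WAL when archiving is off, similar in spirit to an Oracle
-- direct/no-logging load.
CREATE TABLE staging_load (id integer, payload text);
COPY staging_load FROM '/path/to/data.txt';
COMMIT;
```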
locks
Date: Thu, 25 Aug 2005 12:08:11 -0400
Kevin Keith [EMAIL PROTECTED] writes:
I have a question regarding blocking locks in the pg database. I ran into a
process which terminated abnormally, and to fully clear the locks it left
behind I had to reboot the system (probably restarting postmaster would have
had the same effect). This was a personal development system so this
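Rather than rebooting, the stuck backend can usually be identified through pg_locks and dealt with directly. A sketch against the 8.1-era catalogs (column names such as procpid are version-specific, and the cleanup step is an assumption about what the situation allows):

```sql
-- List lock holders and waiters, with what each backend is running.
SELECT l.pid, l.relation::regclass, l.mode, l.granted,
       a.current_query
  FROM pg_locks l
  JOIN pg_stat_activity a ON a.procpid = l.pid
 ORDER BY l.granted;
-- Once the offending pid is known, its current query can be cancelled:
SELECT pg_cancel_backend(12345);  -- 12345 is a placeholder pid
```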