On 19 Oct 2004 at 17:35, Josh Close wrote:
> Well, I didn't find a whole lot in the list archives, so I emailed
> that list with a few more questions. My postgres server is just
> crawling right now :(
Unlike many other database engines, the shared buffers of Postgres are
not a private cache of
On Wed, 20 Oct 2004 08:00:55 +0100, Gary Doades [EMAIL PROTECTED] wrote:
> Unlike many other database engines, the shared buffers of Postgres are
> not a private cache of the database data. They are a working area shared
> between all the backend processes. This needs to be tuned for number
> of
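Gary's point is that shared_buffers is a working area sized in 8 kB pages, not a cache you grow to fill RAM. A sketch of the era-appropriate postgresql.conf knobs (all values are illustrative assumptions, not settings from the thread):

```ini
# postgresql.conf -- illustrative values, not from the thread
# shared_buffers is counted in 8 kB pages: 10000 pages = ~80 MB
shared_buffers = 10000
# per-sort working memory in kB (the pre-8.0 name was sort_mem)
sort_mem = 8192
# tell the planner how much the OS filesystem cache is holding,
# in 8 kB pages: 100000 pages = ~800 MB
effective_cache_size = 100000
```

The OS page cache does the bulk of the caching; effective_cache_size only informs the planner, it allocates nothing.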
It's slow due to several things happening all at once. There are a lot
of inserts and updates happening. There is periodically a bulk insert
of 500k - 1 million rows. I'm doing a vacuum analyze every
hour due to the number of transactions happening, and a vacuum full
every night. All this
On Wed, 2004-10-20 at 07:25, Josh Close wrote:
> It's slow due to several things happening all at once. There are a lot
> of inserts and updates happening. There is periodically a bulk insert
> of 500k - 1 million rows. I'm doing a vacuum analyze every
> hour due to the number of transactions
On Wed, 20 Oct 2004 09:52:25 -0600, Scott Marlowe [EMAIL PROTECTED] wrote:
> 1: Is the bulk insert being done inside of a single transaction, or as
> individual inserts?
The bulk insert is being done by COPY FROM STDIN. It copies in 100,000
rows at a time, then disconnects, reconnects, and copies
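The batch-wise COPY pattern Josh describes can be sketched in Python with psycopg2; the `bulk_load` helper, the table name, and the passed-in connection are illustrative assumptions, not code from the thread (and the disconnect/reconnect step is left out):

```python
import io
from itertools import islice

def batches(rows, size=100_000):
    """Yield successive lists of at most `size` rows from any iterable."""
    it = iter(rows)
    while True:
        chunk = list(islice(it, size))
        if not chunk:
            return
        yield chunk

def to_copy_buffer(rows):
    """Render rows as the tab-separated text COPY ... FROM STDIN expects."""
    return io.StringIO("".join("\t".join(map(str, r)) + "\n" for r in rows))

def bulk_load(conn, table, rows, size=100_000):
    """One COPY and one commit per batch -- the 100k-rows-at-a-time
    pattern from the thread."""
    for chunk in batches(rows, size):
        with conn.cursor() as cur:
            cur.copy_expert(f"COPY {table} FROM STDIN", to_copy_buffer(chunk))
        conn.commit()
```

Committing per batch keeps any one transaction (and its WAL footprint) bounded, at the cost of partial loads being visible if a later batch fails.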
On 20 Oct 2004 at 11:37, Josh Close wrote:
> On Wed, 20 Oct 2004 09:52:25 -0600, Scott Marlowe [EMAIL PROTECTED] wrote:
> > 1: Is the bulk insert being done inside of a single transaction, or as
> > individual inserts?
> The bulk insert is being done by COPY FROM STDIN. It copies in 100,000
> rows at
On Wed, Oct 20, 2004 at 08:25:22 -0500, Josh Close [EMAIL PROTECTED] wrote:
> It's slow due to several things happening all at once. There are a lot
> of inserts and updates happening. There is periodically a bulk insert
> of 500k - 1 million rows. I'm doing a vacuum analyze every
> hour
On Wed, 20 Oct 2004 18:47:25 +0100, Gary Doades [EMAIL PROTECTED] wrote:
> What about triggers? Also constraints (check constraints, integrity
> constraints)? All these will slow the inserts/updates down.
No triggers or constraints. There are some foreign keys, but the
tables that have the inserts
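Foreign keys matter here even without triggers: every inserted row costs a lookup in the referenced table. A common workaround is to drop the constraint around the bulk load and restore it afterwards; the table and constraint names below are hypothetical, not from the thread:

```sql
-- Hypothetical names, for illustration only.
ALTER TABLE orders DROP CONSTRAINT orders_customer_id_fkey;

COPY orders FROM STDIN;   -- bulk load runs without per-row FK checks

ALTER TABLE orders
    ADD CONSTRAINT orders_customer_id_fkey
    FOREIGN KEY (customer_id) REFERENCES customers (id);
```

Re-adding the constraint validates the whole table in one pass, which is usually far cheaper than per-row checks, but the load runs unprotected in between.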
On Wed, 20 Oct 2004 13:35:43 -0500, Bruno Wolff III [EMAIL PROTECTED] wrote:
> You might not need to do the vacuum fulls that often. If your hourly
> vacuums have a high enough fsm setting, they should be keeping the database
> from continually growing in size. At that point daily vacuum fulls
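Bruno's "fsm setting" refers to the free space map of that era (these parameters were removed in PostgreSQL 8.4): plain VACUUM can only make dead-row space reusable if the map is large enough to track it, otherwise the table bloats until a VACUUM FULL. Values below are illustrative assumptions:

```ini
# postgresql.conf (7.x/8.x era) -- illustrative values
# must cover every page with reclaimable space, across all tables
max_fsm_pages = 200000
# number of tables/indexes tracked in the map
max_fsm_relations = 1000
```

Running VACUUM VERBOSE shows how much free space was found, which is the usual way to judge whether these are set high enough.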
On 20 Oct 2004 at 13:34, Josh Close wrote:
> > How long does 100,000 rows take to insert exactly?
> I believe with the bulk inserts, 100k only takes a couple mins.
Hmm, that seems a bit slow. How big are the rows you are inserting? Have you checked
the cpu and IO usage during the inserts? You
On Wed, 20 Oct 2004 19:59:38 +0100, Gary Doades [EMAIL PROTECTED] wrote:
> Hmm, that seems a bit slow. How big are the rows you are inserting? Have you checked
> the cpu and IO usage during the inserts? You will need to do some kind of cpu/IO
> monitoring to determine where the bottleneck is.
The
On 20 Oct 2004 at 14:09, Josh Close wrote:
> On Wed, 20 Oct 2004 19:59:38 +0100, Gary Doades [EMAIL PROTECTED] wrote:
> > Hmm, that seems a bit slow. How big are the rows you are inserting? Have you checked
> > the cpu and IO usage during the inserts? You will need to do some kind of cpu/IO
On Wed, 20 Oct 2004 20:49:54 +0100, Gary Doades [EMAIL PROTECTED] wrote:
> Is this the select(1) query? Please post an explain analyze for this and any other
> slow queries.
I think it took so long because it wasn't cached. The second time I ran
it, it took less than a second. How can you tell if
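The second, faster run is the buffer and OS cache at work: the first execution paid for the disk reads. Running the statement under EXPLAIN ANALYZE twice makes this visible in the reported actual times (the query below is a hypothetical example, not the one from the thread):

```sql
-- Hypothetical query; compare "actual time" between a cold and a warm run.
EXPLAIN ANALYZE
SELECT count(1)
FROM orders
WHERE created_on > '2004-10-01';
```

If the plan is identical but the actual times drop sharply on the second run, the difference is caching, not the planner.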
On 20 Oct 2004 at 15:36, Josh Close wrote:
> On Wed, 20 Oct 2004 20:49:54 +0100, Gary Doades [EMAIL PROTECTED] wrote:
> > Is this the select(1) query? Please post an explain analyze for this and any other
> > slow queries.
> I think it took so long because it wasn't cached. The second time I ran
On Wed, 20 Oct 2004 23:43:54 +0100, Gary Doades [EMAIL PROTECTED] wrote:
> You will need to tell us the number of updates/deletes you are having. This will
> determine the vacuum needs. If the bulk of the data is inserted, you may only need to
> analyze frequently, not vacuum.
In order to get more
I know this is kind of a debate, but how much RAM do I give postgres?
I've seen many places say around 10-15%, or some say 25%... If all
this server is doing is running postgres, why can't I give it 75%+?
Should the limit be as much as possible, as long as the server doesn't
use any swap?
Any
Josh Close [EMAIL PROTECTED] writes:
> I know this is kind of a debate, but how much RAM do I give postgres?
> I've seen many places say around 10-15%, or some say 25%... If all
> this server is doing is running postgres, why can't I give it 75%+?
> Should the limit be as much as possible as long as
On Tue, 19 Oct 2004 17:42:16 -0400, Tom Lane [EMAIL PROTECTED] wrote:
> The short answer is no; the sweet spot for shared_buffers is usually on
> the order of 10000 buffers, and trying to go for 75% of RAM isn't
> going to do anything except hurt. For the long answer see the
> pgsql-performance list
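Tom's number looks small only because shared_buffers is counted in 8 kB disk pages, not bytes. A quick check of what the suggested sweet spot means in megabytes, next to a hypothetical 75% of a 2 GB machine expressed the same way:

```python
PAGE_KB = 8  # PostgreSQL's default block size

def buffers_to_mb(n_buffers: int) -> float:
    """Convert a shared_buffers page count to megabytes."""
    return n_buffers * PAGE_KB / 1024

print(buffers_to_mb(10_000))    # the "order of 10000" sweet spot: 78.125 MB
print(buffers_to_mb(196_608))   # 75% of a 2 GB box as 8 kB pages: 1536.0 MB
```

So the recommended setting is tens of megabytes; the rest of RAM is better left to the OS page cache, which PostgreSQL reads through anyway.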