Re: [GENERAL] how much ram do i give postgres?

2004-10-20 Thread Gary Doades
On 19 Oct 2004 at 17:35, Josh Close wrote:

> Well, I didn't find a whole lot in the list archives, so I emailed
> that list with a few more questions. My postgres server is just
> crawling right now :(
 

Unlike many other database engines, the shared buffers of Postgres are 
not a private cache of the database data. They are a working area shared 
between all the backend processes. This needs to be tuned for the number 
of connections and overall workload, *not* for the amount of your database 
that you want to keep in memory. There is still lots of debate about what 
the sweet spot is. Maybe there isn't one, but it's not normally 75% of 
RAM.

If anything, the effective_cache_size needs to be 75% of (available) 
RAM, as this tells Postgres how much of your database the *OS* is 
likely to cache in memory.
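
Purely as a sketch (assuming the stock 8k page size and, say, a 1GB
machine; the exact numbers are placeholders, not recommendations), that
works out to postgresql.conf settings along these lines:

   shared_buffers = 10000         # ~80MB of shared working area
   effective_cache_size = 96000   # ~750MB, in 8k pages: what the OS may cache

Note that effective_cache_size allocates nothing; it is only a hint to
the planner about how much of the database is likely to already be
sitting in the OS cache.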

Having said that, I think you will need to define "crawling". Is it 
updates/inserts that are slow? That may be triggers/rules/referential 
integrity checking etc. slowing them down. If it is selects that are slow, 
that may be incorrect indexes or sub-optimal queries. You need to show us 
what you are trying to do and what the results are.

Regards,
Gary.




Re: [GENERAL] how much ram do i give postgres?

2004-10-20 Thread Josh Close
On Wed, 20 Oct 2004 08:00:55 +0100, Gary Doades [EMAIL PROTECTED] wrote:
> Unlike many other database engines, the shared buffers of Postgres are
> not a private cache of the database data. They are a working area shared
> between all the backend processes. This needs to be tuned for the number
> of connections and overall workload, *not* for the amount of your database
> that you want to keep in memory. There is still lots of debate about what
> the sweet spot is. Maybe there isn't one, but it's not normally 75% of
> RAM.
>
> If anything, the effective_cache_size needs to be 75% of (available)
> RAM, as this tells Postgres how much of your database the *OS* is
> likely to cache in memory.
>
> Having said that, I think you will need to define "crawling". Is it
> updates/inserts that are slow? That may be triggers/rules/referential
> integrity checking etc. slowing them down. If it is selects that are slow,
> that may be incorrect indexes or sub-optimal queries. You need to show us
> what you are trying to do and what the results are.

It's slow due to several things happening all at once. There are a lot
of inserts and updates happening. There is periodically a bulk insert
of 500k - 1 million rows happening. I'm doing a vacuum analyze every
hour due to the amount of transactions happening, and a vacuum full
every night. All this has caused selects to be very slow. At times, a
select count(1) from a table will take several minutes. I don't think
selects would have to wait on locks taken by inserts/updates, would they?

I would just like to do anything possible to help speed this up.

-Josh



Re: [GENERAL] how much ram do i give postgres?

2004-10-20 Thread Weiping

> It's slow due to several things happening all at once. There are a lot
> of inserts and updates happening. There is periodically a bulk insert
> of 500k - 1 million rows happening. I'm doing a vacuum analyze every
> hour due to the amount of transactions happening, and a vacuum full
> every night. All this has caused selects to be very slow. At times, a
> select count(1) from a table will take several minutes. I don't think
> selects would have to wait on locks taken by inserts/updates, would they?
>
> I would just like to do anything possible to help speed this up.

If there are really many rows in the table, select count(1) will be a
bit slow, because PostgreSQL uses a sequential scan to count the rows.
If the query is of another kind, then checking whether there are indexes
on the search conditions, or using the EXPLAIN command to see the query
plan, would help greatly.
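
For example (table and column names here are only placeholders):

   EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;

If the plan shows a sequential scan over a big table for a selective
condition like that, an index on customer_id is probably missing.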

By the way, what version of PostgreSQL are you running? Older versions
(pre-7.4?) still suffer from index space bloat.

regards
Laser


Re: [GENERAL] how much ram do i give postgres?

2004-10-20 Thread Scott Marlowe
On Wed, 2004-10-20 at 07:25, Josh Close wrote:

> It's slow due to several things happening all at once. There are a lot
> of inserts and updates happening. There is periodically a bulk insert
> of 500k - 1 million rows happening. I'm doing a vacuum analyze every
> hour due to the amount of transactions happening, and a vacuum full
> every night. All this has caused selects to be very slow. At times, a
> select count(1) from a table will take several minutes. I don't think
> selects would have to wait on locks taken by inserts/updates, would they?

1: Is the bulk insert being done inside of a single transaction, or as
individual inserts?

2: Are your fsm settings high enough for an hourly vacuum to be
effective?

3: How selective is the where clause for your select count(1) query?  If
there is no where clause, or the where clause isn't very selective, then
there will be a sequential scan every time.  Since PostgreSQL has to hit
the table after using an index anyway, if it's going to retrieve a fair
percent of a table it just goes right to a seq scan, which, for
PostgreSQL, is the right thing to do.
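
To illustrate, with made-up table and column names:

   -- no where clause: always a full sequential scan
   SELECT count(1) FROM transactions;

   -- a selective where clause lets the planner consider an index on logged_at
   SELECT count(1) FROM transactions WHERE logged_at > '2004-10-19';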

Post explain analyze of your slowest queries to the performance list
if you can.




Re: [GENERAL] how much ram do i give postgres?

2004-10-20 Thread Josh Close
On Wed, 20 Oct 2004 09:52:25 -0600, Scott Marlowe [EMAIL PROTECTED] wrote:
> 1: Is the bulk insert being done inside of a single transaction, or as
> individual inserts?

The bulk insert is being done by COPY FROM STDIN. It copies in 100,000
rows at a time, then disconnects, reconnects, and copies 100k more,
and repeats till done. There are no indexes on the tables that the
copy is being done into either, so it won't be slowed down by that at
all.
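
(For reference, the shape of that kind of load -- table and file names
invented here -- is just:

   psql mydb -c "\copy staging_rows from 'chunk_01.dat'"

\copy reads the file on the client and feeds it through COPY ... FROM
STDIN, one chunk per connection as described above.)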

 
> 2: Are your fsm settings high enough for an hourly vacuum to be
> effective?

What is fsm? I'll tell you when I find that out.

 
> 3: How selective is the where clause for your select count(1) query?  If
> there is no where clause, or the where clause isn't very selective, then
> there will be a sequential scan every time.  Since PostgreSQL has to hit
> the table after using an index anyway, if it's going to retrieve a fair
> percent of a table it just goes right to a seq scan, which, for
> PostgreSQL, is the right thing to do.

There was no where clause.

 
> Post explain analyze of your slowest queries to the performance list
> if you can.

I don't think it's a query problem (though I'm sure I could optimize
them more), 'cause the same query takes a long time when there are
other queries happening, and not long at all when nothing else is
going on.

-Josh



Re: [GENERAL] how much ram do i give postgres?

2004-10-20 Thread Gary Doades
On 20 Oct 2004 at 11:37, Josh Close wrote:

> On Wed, 20 Oct 2004 09:52:25 -0600, Scott Marlowe [EMAIL PROTECTED] wrote:
> > 1: Is the bulk insert being done inside of a single transaction, or as
> > individual inserts?
>
> The bulk insert is being done by COPY FROM STDIN. It copies in 100,000
> rows at a time, then disconnects, reconnects, and copies 100k more,
> and repeats till done. There are no indexes on the tables that the
> copy is being done into either, so it won't be slowed down by that at
> all.

What about triggers? Also constraints (check constraints, integrity
constraints)? All of these will slow the inserts/updates down.

If you have integrity constraints make sure you have indexes on the 
referenced columns in the referenced tables and make sure the data 
types are the same.
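
A sketch of that kind of index (all names invented here): with a foreign
key from orders.customer_id to customers.id, the referenced side already
has its primary key index, and the referencing side gets:

   CREATE INDEX orders_customer_id_idx ON orders (customer_id);

so the checks fired by updates/deletes on customers don't have to seq
scan orders, and customer_id should be declared the same type as
customers.id.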

How long does 100,000 rows take to insert exactly?

How many updates are you performing each hour?

Regards,
Gary.





Re: [GENERAL] how much ram do i give postgres?

2004-10-20 Thread Bruno Wolff III
On Wed, Oct 20, 2004 at 08:25:22 -0500,
  Josh Close [EMAIL PROTECTED] wrote:
 
> It's slow due to several things happening all at once. There are a lot
> of inserts and updates happening. There is periodically a bulk insert
> of 500k - 1 million rows happening. I'm doing a vacuum analyze every
> hour due to the amount of transactions happening, and a vacuum full
> every night. All this has caused selects to be very slow. At times, a
> select count(1) from a table will take several minutes. I don't think
> selects would have to wait on locks taken by inserts/updates, would they?

You might not need to do the vacuum fulls that often. If your hourly
vacuums have a high enough fsm setting, they should keep the database
from continually growing in size. At that point daily vacuum fulls are
overkill, and if they are slowing down things you want to run quickly,
you should cut back on them.
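
For illustration (the numbers are placeholders, not recommendations),
the relevant postgresql.conf settings are:

   max_fsm_pages = 100000     # pages with free space that vacuum can track
   max_fsm_relations = 1000   # tables/indexes that can be tracked

Depending on version, a database-wide vacuum verbose reports at the end
how much free space map it actually needed, which is the number to size
these against.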



Re: [GENERAL] how much ram do i give postgres?

2004-10-20 Thread Josh Close
On Wed, 20 Oct 2004 18:47:25 +0100, Gary Doades [EMAIL PROTECTED] wrote:
> What about triggers? Also constraints (check constraints, integrity
> constraints)? All of these will slow the inserts/updates down.

No triggers or constraints. There are some foreign keys, but the
tables that take the inserts don't have anything on them, not even
indexes, to help speed up the inserts.

 
> If you have integrity constraints make sure you have indexes on the
> referenced columns in the referenced tables and make sure the data
> types are the same.
>
> How long does 100,000 rows take to insert exactly?

I believe with the bulk inserts, 100k only takes a couple of minutes.

 
> How many updates are you performing each hour?

I'm not sure about this. Is there a pg stats table I can look at to
find this out? I suppose I could do a count on the timestamp
also. I'll let you know when I find out.
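
(Assuming the stats collector is running -- stats_start_collector and
stats_row_level turned on -- something like this should show cumulative
per-table counts; sampling it an hour apart gives the hourly rate:

   SELECT relname, n_tup_ins, n_tup_upd, n_tup_del
   FROM pg_stat_user_tables
   ORDER BY n_tup_upd DESC;
)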

 
> Regards,
> Gary.
 




Re: [GENERAL] how much ram do i give postgres?

2004-10-20 Thread Josh Close
On Wed, 20 Oct 2004 13:35:43 -0500, Bruno Wolff III [EMAIL PROTECTED] wrote:
> You might not need to do the vacuum fulls that often. If your hourly
> vacuums have a high enough fsm setting, they should keep the database
> from continually growing in size. At that point daily vacuum fulls are
> overkill, and if they are slowing down things you want to run quickly,
> you should cut back on them.

I have the vacuum_mem set at 32M right now. I haven't changed the fsm
settings at all though.

-Josh



Re: [GENERAL] how much ram do i give postgres?

2004-10-20 Thread Gary Doades
On 20 Oct 2004 at 13:34, Josh Close wrote:

> > How long does 100,000 rows take to insert exactly?
>
> I believe with the bulk inserts, 100k only takes a couple of minutes.

Hmm, that seems a bit slow. How big are the rows you are inserting?
Have you checked the cpu and IO usage during the inserts? You will need
to do some kind of cpu/IO monitoring to determine where the bottleneck is.
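
On a Linux box, say, something as simple as

   vmstat 1
   iostat -x 1

run while a bulk load is going will show whether the time is being spent
on cpu or waiting on the disks.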

What hardware is this on? Sorry if you specified it earlier, I can't
seem to find mention of it.

Cheers,
Gary.




Re: [GENERAL] how much ram do i give postgres?

2004-10-20 Thread Josh Close
On Wed, 20 Oct 2004 19:59:38 +0100, Gary Doades [EMAIL PROTECTED] wrote:
> Hmm, that seems a bit slow. How big are the rows you are inserting?
> Have you checked the cpu and IO usage during the inserts? You will need
> to do some kind of cpu/IO monitoring to determine where the bottleneck is.

The bulk inserts don't take full cpu. Between 40% and 80%. On the
other hand, a select will take 99% cpu.

 
> What hardware is this on? Sorry if you specified it earlier, I can't
> seem to find mention of it.

It's on a P4 HT with 1,128 megs ram.

-Josh



Re: [GENERAL] how much ram do i give postgres?

2004-10-20 Thread Gary Doades
On 20 Oct 2004 at 14:09, Josh Close wrote:

> On Wed, 20 Oct 2004 19:59:38 +0100, Gary Doades [EMAIL PROTECTED] wrote:
> > Hmm, that seems a bit slow. How big are the rows you are inserting?
> > Have you checked the cpu and IO usage during the inserts? You will
> > need to do some kind of cpu/IO monitoring to determine where the
> > bottleneck is.
>
> The bulk inserts don't take full cpu. Between 40% and 80%. On the
> other hand, a select will take 99% cpu.

Is this the select count(1) query? Please post an explain analyze for
this and any other slow queries.

I would expect the selects to take 99% cpu if all the data you were
trying to select was already in memory. Is this the case in general? I
can do a select count(1) on a 500,000 row table in about 1 second on an
Athlon 2800+ if all the data is cached. It takes about 25 seconds if it
has to fetch it from disk.

I have just done a test by inserting (via COPY) 149,000 rows into a
table with 23 columns, mostly numeric, some int4, 4 timestamps. This
took 28 seconds on my Windows XP desktop, Athlon 2800+, 7200 rpm SATA
disk, Postgres 8.0 beta 2. It used around 20% to 40% cpu during the
copy. The only index was the int4 primary key, nothing else.

How does this compare?

> > What hardware is this on? Sorry if you specified it earlier, I can't
> > seem to find mention of it.
>
> It's on a P4 HT with 1,128 megs ram.

Disk system??

Regards,
Gary.




Re: [GENERAL] how much ram do i give postgres?

2004-10-20 Thread Josh Close
On Wed, 20 Oct 2004 20:49:54 +0100, Gary Doades [EMAIL PROTECTED] wrote:
> Is this the select count(1) query? Please post an explain analyze for
> this and any other slow queries.

I think it took so long 'cause it wasn't cached. The second time I ran
it, it took less than a second. How can you tell if something is
cached? Is there a way to see what's in cache?

 
> I would expect the selects to take 99% cpu if all the data you were
> trying to select was already in memory. Is this the case in general? I
> can do a select count(1) on a 500,000 row table in about 1 second on an
> Athlon 2800+ if all the data is cached. It takes about 25 seconds if it
> has to fetch it from disk.

I think that's what's going on here.

 
> I have just done a test by inserting (via COPY) 149,000 rows into a
> table with 23 columns, mostly numeric, some int4, 4 timestamps. This
> took 28 seconds on my Windows XP desktop, Athlon 2800+, 7200 rpm SATA
> disk, Postgres 8.0 beta 2. It used around 20% to 40% cpu during the
> copy. The only index was the int4 primary key, nothing else.

Well, there are 3 text columns or so, and that's why the COPY takes
longer than yours. That hasn't been a big issue though. It copies fast
enough.

 
> How does this compare?
>
> Disk system??

It's an IDE RAID 1 config I believe, so it's not too fast. It will
soon be on a SCSI RAID 5 array. That should help speed some things up
also.

 
> Regards,
> Gary.

What about the postgresql.conf settings? This is what I have and why.

shared_buffers = 21250

This is 174 megs, which is 15% of total ram. I read somewhere that it
should be between 12-15% of total ram.

sort_mem = 32768

This is default.

vacuum_mem = 32768

This is 32 megs. I put it that high because of something I read here
http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html

#max_fsm_pages = 20000

Default. I would think this could be upped more, but I don't know how much.

effective_cache_size = 105750

This is 846 megs of ram, which is 75% of total mem. I put it there
'cause of a reply I got on the performance list.

I made all these changes today, and haven't had much of a chance to
speed test postgres since.

Any thoughts on these settings?

-Josh



Re: [GENERAL] how much ram do i give postgres?

2004-10-20 Thread Gary Doades
On 20 Oct 2004 at 15:36, Josh Close wrote:

> On Wed, 20 Oct 2004 20:49:54 +0100, Gary Doades [EMAIL PROTECTED] wrote:
> > Is this the select count(1) query? Please post an explain analyze for
> > this and any other slow queries.
>
> I think it took so long 'cause it wasn't cached. The second time I ran
> it, it took less than a second. How can you tell if something is
> cached? Is there a way to see what's in cache?

No. The OS caches the data as read from the disk. If you need the data
to be in memory for performance then you need to make sure you have
enough available RAM to hold your typical result sets if possible.

> What about the postgresql.conf settings? This is what I have and why.
>
> sort_mem = 32768
>
> This is default.

This is not the default. The default is 1024. You are telling Postgres
to use 32 megs for *each* sort that is taking place. If you have several
queries each performing large sorts you can quickly eat up available RAM
this way. If you will only have a small number of concurrent queries
performing sorts then this may be OK. Don't forget, a single query can
perform more than one sort operation. If you have 10 large sorts
happening at the same time, you can eat up to 320 megs this way!
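
A common middle ground, just as a sketch, is to keep the global setting
modest and raise it per session only for the queries that need big sorts:

   SET sort_mem = 32768;   -- this session only
   -- ... run the big sorting query ...
   RESET sort_mem;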

You will need to tell us the number of updates/deletes you are having.
This will determine the vacuum needs. If the bulk of the data is
inserted you may only need to analyze frequently, not vacuum.

In order to get more help you will need to supply the update/delete
frequency and the explain analyze output from your queries.

Regards,
Gary.




Re: [GENERAL] how much ram do i give postgres?

2004-10-20 Thread Josh Close
On Wed, 20 Oct 2004 23:43:54 +0100, Gary Doades [EMAIL PROTECTED] wrote:
> You will need to tell us the number of updates/deletes you are having.
> This will determine the vacuum needs. If the bulk of the data is
> inserted you may only need to analyze frequently, not vacuum.
>
> In order to get more help you will need to supply the update/delete
> frequency and the explain analyze output from your queries.

I will have to gather this information for you.

-Josh



[GENERAL] how much ram do i give postgres?

2004-10-19 Thread Josh Close
I know this is kind of a debate, but how much ram do I give postgres?
I've seen many places say around 10-15%, and some say 25%... If all
this server is doing is running postgres, why can't I give it 75%+?
Should the limit be as much as possible, as long as the server doesn't
use any swap?

Any thoughts would be great, but I'd like to know why.

Thanks.

-Josh



Re: [GENERAL] how much ram do i give postgres?

2004-10-19 Thread Tom Lane
Josh Close [EMAIL PROTECTED] writes:
> I know this is kind of a debate, but how much ram do I give postgres?
> I've seen many places say around 10-15%, and some say 25%... If all
> this server is doing is running postgres, why can't I give it 75%+?
> Should the limit be as much as possible, as long as the server doesn't
> use any swap?

The short answer is no; the sweet spot for shared_buffers is usually on
the order of 10000 buffers, and trying to go for 75% of RAM isn't
going to do anything except hurt.  For the long answer see the
pgsql-performance list archives.

regards, tom lane



Re: [GENERAL] how much ram do i give postgres?

2004-10-19 Thread Josh Close
On Tue, 19 Oct 2004 17:42:16 -0400, Tom Lane [EMAIL PROTECTED] wrote:
> The short answer is no; the sweet spot for shared_buffers is usually on
> the order of 10000 buffers, and trying to go for 75% of RAM isn't
> going to do anything except hurt.  For the long answer see the
> pgsql-performance list archives.
>
> regards, tom lane

Well, I didn't find a whole lot in the list archives, so I emailed
that list with a few more questions. My postgres server is just
crawling right now :(

-Josh
