[PERFORM] Postgres performance Linux vs FreeBSD

2007-02-21 Thread Jacek Zaręba

Hello, I've set up 2 identical machines, hp server 1ghz p3,
768mb ram, 18gb scsi3 drive. On the first one I've installed
Debian/GNU 4.0 Linux, on the second FreeBSD 6.2. On both
machines I've installed Postgresql 8.2.3 from sources.
Now the point :)) According to my tests postgres on the Linux
box runs much faster than on FreeBSD; here are my results:

*** setting up ***
create table foo as select x from generate_series(1,2500000) x;
vacuum foo;
checkpoint;
\timing

******************

*** BSD **********
actual=# select count(*) from foo;
  count
---------
 2500000
(1 row)

Time: 1756.455 ms
actual=# explain analyze select count(*) from foo;
                                                      QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=34554.20..34554.21 rows=1 width=0) (actual time=12116.841..12116.843 rows=1 loops=1)
   ->  Seq Scan on foo  (cost=0.00..28304.20 rows=2500000 width=0) (actual time=9.276..6435.890 rows=2500000 loops=1)
 Total runtime: 12116.989 ms
(3 rows)

Time: 12117.803 ms

******************


*** LIN **********
actual=# select count(*) from foo;
  count
---------
 2500000
(1 row)

Time: 1362,193 ms
actual=# EXPLAIN ANALYZE
actual-# select count(*) from foo;
                                                      QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=34554.20..34554.21 rows=1 width=0) (actual time=4737.243..4737.244 rows=1 loops=1)
   ->  Seq Scan on foo  (cost=0.00..28304.20 rows=2500000 width=0) (actual time=0.058..2585.170 rows=2500000 loops=1)
 Total runtime: 4737.363 ms
(3 rows)

Time: 4738,367 ms
actual=#
******************

Just a word about the filesystems I've used:
BSD:
/dev/da0s1g on /usr/local/pgsql (ufs, local, noatime, soft-updates)

LIN:
/dev/sda7 on /usr/local/pgsql type xfs (rw,noatime)


My question is simple :) what's wrong with the FreeBSD box??
What's the rule for computing gettimeofday() time??

Thanks for any advice :))
..and have a nice day!!

J.




Re: [PERFORM] Postgres performance Linux vs FreeBSD

2007-02-21 Thread Merlin Moncure

On 2/21/07, Jacek Zaręba [EMAIL PROTECTED] wrote:

Hello, I've set up 2 identical machines, hp server 1ghz p3,
768mb ram, 18gb scsi3 drive. On the first one I've installed
Debian/GNU 4.0 Linux, on the second FreeBSD 6.2. On both
machines I've installed Postgresql 8.2.3 from sources.
Now the point :)) According to my tests postgres on the Linux
box runs much faster than on FreeBSD, here are my results:

*** setting up **
create table foo as select x from generate_series(1,2500000) x;
vacuum foo;
checkpoint;
\timing

*

*** BSD *
actual=# select count(*) from foo;
  count
---------
 2500000
(1 row)

Time: 1756.455 ms
actual=# explain analyze select count(*) from foo;
                                                      QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=34554.20..34554.21 rows=1 width=0) (actual time=12116.841..12116.843 rows=1 loops=1)
   ->  Seq Scan on foo  (cost=0.00..28304.20 rows=2500000 width=0) (actual time=9.276..6435.890 rows=2500000 loops=1)
 Total runtime: 12116.989 ms
(3 rows)

Time: 12117.803 ms

**


*** LIN **
actual=# select count(*) from foo;
  count
---------
 2500000
(1 row)

Time: 1362,193 ms
actual=# EXPLAIN ANALYZE
actual-# select count(*) from foo;
                                                      QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=34554.20..34554.21 rows=1 width=0) (actual time=4737.243..4737.244 rows=1 loops=1)
   ->  Seq Scan on foo  (cost=0.00..28304.20 rows=2500000 width=0) (actual time=0.058..2585.170 rows=2500000 loops=1)
 Total runtime: 4737.363 ms
(3 rows)

Time: 4738,367 ms
actual=#
**

Just a word about the filesystems I've used:
BSD:
/dev/da0s1g on /usr/local/pgsql (ufs, local, noatime, soft-updates)

LIN:
/dev/sda7 on /usr/local/pgsql type xfs (rw,noatime)


My question is simple :) what's wrong with the FreeBSD box??
What's the rule for computing gettimeofday() time??


'explain analyze' can't be reliably used to compare results from
different operating systems... 1756ms vs. 1362ms is a win for linux but
not a blowout, and there might be other things going on...

merlin



Re: [PERFORM] Postgres performance Linux vs FreeBSD

2007-02-21 Thread Bill Moran
In response to Jacek Zaręba [EMAIL PROTECTED]:

 Hello, I've set up 2 identical machines, hp server 1ghz p3,
 768mb ram, 18gb scsi3 drive. On the first one I've installed
 Debian/GNU 4.0 Linux, on the second FreeBSD 6.2. On both
 machines I've installed Postgresql 8.2.3 from sources.
 Now the point :)) According to my tests postgres on the Linux
 box runs much faster than on FreeBSD, here are my results:
 
 *** setting up **
 create table foo as select x from generate_series(1,2500000) x;
 vacuum foo;
 checkpoint;
 \timing
 
 *
 
 *** BSD *
 actual=# select count(*) from foo;
   count
 ---------
  2500000
 (1 row)
 
 Time: 1756.455 ms
 actual=# explain analyze select count(*) from foo;
                                                       QUERY PLAN
 ------------------------------------------------------------------------------------------------------------------------
  Aggregate  (cost=34554.20..34554.21 rows=1 width=0) (actual time=12116.841..12116.843 rows=1 loops=1)
    ->  Seq Scan on foo  (cost=0.00..28304.20 rows=2500000 width=0) (actual time=9.276..6435.890 rows=2500000 loops=1)
  Total runtime: 12116.989 ms
 (3 rows)
 
 Time: 12117.803 ms
 
 **
 
 
 *** LIN **
 actual=# select count(*) from foo;
   count
 ---------
  2500000
 (1 row)
 
 Time: 1362,193 ms
 actual=# EXPLAIN ANALYZE
 actual-# select count(*) from foo;
                                                       QUERY PLAN
 ------------------------------------------------------------------------------------------------------------------------
  Aggregate  (cost=34554.20..34554.21 rows=1 width=0) (actual time=4737.243..4737.244 rows=1 loops=1)
    ->  Seq Scan on foo  (cost=0.00..28304.20 rows=2500000 width=0) (actual time=0.058..2585.170 rows=2500000 loops=1)
  Total runtime: 4737.363 ms
 (3 rows)
 
 Time: 4738,367 ms
 actual=#
 **
 
 Just a word about the filesystems I've used:
 BSD:
 /dev/da0s1g on /usr/local/pgsql (ufs, local, noatime, soft-updates)
 
 LIN:
 /dev/sda7 on /usr/local/pgsql type xfs (rw,noatime)
 
 
 My question is simple :) what's wrong with the FreeBSD box??
 What's the rule for computing gettimeofday() time??

I can't speak to the gettimeofday() question, but I have a slew of comments
regarding other parts of this email.

The first thing that I expect most people will comment on is your testing
strategy.  You don't give a lot of details, but it seems as if you ran
1 query on each server, 1 run on each.  If you actually did more tests,
you should provide that information; otherwise, people will criticize your
testing strategy instead of looking at the problem.

The other side to this is that you haven't shown enough information about
your alleged problem to even start to investigate it.

-- 
Bill Moran
Collaborative Fusion Inc.



Re: [PERFORM] How to debug performance problems

2007-02-21 Thread Ray Stell

I'd like to have a toolbox prepared for when performance goes south.
I'm clueless.  Would someone mind providing some detail about how to
measure these four items Craig listed:

1. The first thing is to find out which query is taking a lot of time.

2. A long-running transaction keeps vacuum from working.

3. A table grows just enough to pass a threshold in the
   planner and a drastically different plan is generated.

4. An index has become bloated and/or corrupted, and you
   need to run the REINDEX command.
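
(A minimal hedged sketch of where to start on items 1 and 2; the
threshold values are purely illustrative, and 8.2's pg_stat_activity has
no xact_start column, so query_start is the closest available proxy for
how long a session has been busy:)

-- item 1: have the server log any statement slower than one second
-- (in postgresql.conf):  log_min_duration_statement = 1000

-- items 1 and 2: find long-running queries, and sessions sitting
-- '<IDLE> in transaction', which is what keeps vacuum from working
SELECT procpid, usename, query_start, current_query
  FROM pg_stat_activity
 WHERE current_query = '<IDLE> in transaction'
    OR (current_query <> '<IDLE>'
        AND query_start < now() - interval '5 minutes')
 ORDER BY query_start;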

Thx.





On Wed, Aug 30, 2006 at 11:45:06AM -0700, Jeff Frost wrote:
 On Wed, 30 Aug 2006, Joe McClintock wrote:
 
 I ran a vacuum, analyze and reindex on the database with no change in
 performance; query time was still 37+ sec, a little worse. On our test
 system I found that a db_dump from production and then restore brought the
 database back to full performance. So in desperation I shut down the
 production application, backed up the production database, renamed the
 production db, created a new empty production db and restored the
 production backup to the empty db. After a successful db restore and
 restart of the web application, everything was then up and running like a
 top.
 
 Joe,
 
 I would guess that since the dump/restore yielded good performance once 
 again, a VACUUM FULL would have also fixed the problem.  How are your FSM 
 settings in the conf file?  Can you run VACUUM VERBOSE and send us the last 
 10 or so lines of output?
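 
 (A hedged aside on what those conf lines look like in postgresql.conf;
 the values below are only illustrative, not recommendations:)
 
   max_fsm_pages = 200000      # page slots to track free space in
   max_fsm_relations = 1000    # relations tracked by the free space map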
 
 A good article on FSM settings can be found here:
 
 http://www.pervasive-postgres.com/instantkb13/article.aspx?id=10087&cNode=5K1C3W
 
 You probably should consider setting up autovacuum and definitely should 
 upgrade to at least 8.0.8 if not 8.1.4 when you get the chance.
 
 When you loaded the new data did you delete or update old data or was it 
 just a straight insert?
 
 -- 
 Jeff Frost, Owner [EMAIL PROTECTED]
 Frost Consulting, LLC http://www.frostconsultingllc.com/
 Phone: 650-780-7908   FAX: 650-649-1954
 







--

On Mon, Feb 19, 2007 at 10:02:46AM -0800, Craig A. James wrote:
 Andreas Tille wrote:
 My web application was running fine for years without any problem
 and the performance was satisfying.  Some months ago I added a
 table containing 4500000 data rows ...
 
 Since about two weeks the application became *drastically* slower
 and I urgently have to bring back the old performance.  As I said,
 I'm talking about functions accessing tables that did not increase
 over several years and should behave more or less the same.
 
 Don't assume that the big table you added is the source of the problem.  It 
 might be, but more likely it's something else entirely.  You indicated that 
 the problem didn't coincide with creating the large table.
 
 There are a number of recurring themes on this discussion group:
 
  * A long-running transaction keeps vacuum from working.
 
  * A table grows just enough to pass a threshold in the
planner and a drastically different plan is generated.
  
  * An index has become bloated and/or corrupted, and you
need to run the REINDEX command.
 
 And several other common problems.
 
 The first thing is to find out which query is taking a lot of time.  I'm no
 expert, but there have been several explanations on this forum recently on how
 to find your top time-consuming queries.  Once you find them, then EXPLAIN
 ANALYZE should get you started.
 Craig
 

-- 
You have no chance to survive make your time.



Re: [PERFORM] How to debug performance problems

2007-02-21 Thread Craig A. James

Ray,


I'd like to have a toolbox prepared for when performance goes south.
I'm clueless.  Would someone mind providing some detail about how to
measure these four items Craig listed:


I hope I didn't give the impression that these were the only things to look at
... those four items just popped into my head, because they've come up
repeatedly in this forum.  There are surely more things that could be suspect;
perhaps others could add to your list.

You can find the answers to each of the four topics I mentioned by looking
through the archives of this list.  It's a lot of work.  It would be really
nice if there were some full-time employee somewhere whose job was to monitor
this group and pull the common themes together into a nice, tidy manual.
But this is open-source development, and there is no such person, so you have
to dig in and find it yourself.

Craig



Re: [PERFORM] How to debug performance problems

2007-02-21 Thread Ray Stell
On Wed, Feb 21, 2007 at 08:09:49AM -0800, Craig A. James wrote:
 I hope I didn't give the impression that these were the only thing to look 
 at ... those four items just popped into my head, because they've come up 
 repeatedly in this forum.  There are surely more things that could be 
 suspect; perhaps others could add to your list.

I'm only clueless about the details of pg, not db perf concepts.  Really,
a mechanism to determine where the system is spending the response
time is key.  As you pointed out, the added table may not be the issue.
In fact, if you can't measure where the db time is being spent
you will be lucky to fix a performance issue, since you don't really
know what resources need to be addressed.  


 so you have to dig in and find it yourself.

this afternoon, maybe.



Re: [PERFORM] How to debug performance problems

2007-02-21 Thread Mark Stosberg
Ray Stell wrote:
 I'd like to have a toolbox prepared for when performance goes south.
 I'm clueless.  Would someone mind providing some detail about how to
 measure these four items Craig listed:
 
 1. The first thing is to find out which query is taking a lot of time.
 
 2. A long-running transaction keeps vacuum from working.
 
 3. A table grows just enough to pass a threshold in the
planner and a drastically different plan is generated.

I just ran into a variation of this:

3.5 A table grows so large that VACUUMing it takes an extremely long
time, interfering with the general performance of the system.

In our case, we think the table had about 36 million rows when it hit
that threshold.

I'm now working on a better solution for that table.

  Mark



[PERFORM] How to avoid vacuuming a huge logging table

2007-02-21 Thread Mark Stosberg
Our application has a table that is only logged to, and infrequently
used for reporting. There are generally no deletes or updates.

Recently, the sheer size (an estimated 36 million rows) caused a serious
problem because it prevented a vacuum analyze on the whole database
from finishing in a timely manner.

As I understand, a table with this usage pattern wouldn't need to be
vacuumed anyway.

I'm looking for general advice from people who have faced the same
issue. I'm looking at a number of alternatives:

1. Once a month, we could delete and archive old rows, for possible
re-import later if we need to report on them. It would seem this would
need to be done as proper insert statements for re-importing. (Maybe
there is a solution for that with table partitioning? A rough sketch
follows below.)

2. We could find a way to exclude the table from vacuuming, and let it
grow even larger. Putting the table in its own database would
accomplish that, but it would be nice to avoid the overhead of a second
database connection.

3. Take a really different approach. Log in CSV format to text files
instead, and only import the date ranges we need on demand if a report
is requested on the data.
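
(A minimal sketch of the partitioning idea from option 1, using 8.2-style
table inheritance; the table and column names are invented for
illustration. Retiring a month then becomes a COPY plus a DROP TABLE,
with no deletes and nothing left behind for vacuum:)

create table activity_log (
    logged_at   timestamptz not null,
    message     text
);

-- one child table per month; the CHECK constraint lets the planner
-- (with constraint_exclusion = on) skip irrelevant months when a
-- report is date-ranged
create table activity_log_2007_02 (
    check ( logged_at >= '2007-02-01' and logged_at < '2007-03-01' )
) inherits (activity_log);

-- the application inserts into the current month's child table directly;
-- archiving an old month for possible re-import later is then:
copy activity_log_2006_01 to '/archive/activity_log_2006_01.copy';
drop table activity_log_2006_01;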

Thanks for any tips.

Mark



[PERFORM] General advice on user functions

2007-02-21 Thread Dan Harris
I have a new task of automating the export of a very complex Crystal 
Report.  One thing I have learned in the last 36 hours is that the 
export process to PDF is really, really, slooww..


Anyway, that is none of your concern.  But, I am thinking that I can 
somehow utilize some of PG's strengths to work around the bottleneck in 
Crystal.  The main problem seems to be that tens of thousands of rows of 
data must be summarized in the report and calculations made.  Based on 
my recent experience, I'd say that this task would be better suited to 
PG than relying on Crystal Reports to do the summarizing.


The difficulty I'm having is that the data needed is from about 50 
different snapshots of counts over time.  The queries are very simple, 
however I believe I am going to need to combine all of these queries 
into a single function that runs all 50 and then returns just the 
count(*) of each as a separate column in a single row.


I have been Googling for hours and reading about PL/pgSQL functions in
the PG docs, and I have yet to find examples that return multiple items
in a single row.  I have seen cases that return sets, but that
appears to be returning multiple rows, not columns.  Maybe I'm
barking up the wrong tree?


Here's the gist of what I need to do:

1) query the count of rows that occurred between 14 months ago and 12 months
ago for a given criterion, then count the rows that occurred between 2
months ago and current.  Repeat for 50 different where clauses.


2) return each count(*) as a column so that in the end I can say:

select count_everything( ending_date );

and have it return to me:

count_a_lastyear  count_a_last60  count_b_lastyear  count_b_last60
----------------  --------------  ----------------  --------------
             100             150               200             250

I'm not even sure if a function is what I'm after, maybe this can be 
done in a view?  I am embarrassed to ask something that seems like it 
should be easy, but some key piece of knowledge is escaping me on this.


I don't expect someone to write this for me, I just need a nudge in the 
right direction and maybe a URL or two to get me started.


Thank you for reading this far.

-Dan



[PERFORM] Auto-Vacuum in 8.1 was ineffective for me. 8.2 may work better?

2007-02-21 Thread Mark Stosberg

When I upgraded a busy database system to PostgreSQL 8.1, I was excited
about AutoVacuum, and promptly enabled it, and turned off the daily
vacuum process.

(
I set the following, as well as the option to enable auto vacuuming
stats_start_collector = true
stats_row_level = true
)

I could see in the logs that related activity was happening, but within
a few days, the performance became horrible, and enabling the regular
vacuum fixed it.

Eventually autovacuum was completely disabled.

What could have happened? Is 8.2 more likely to just work in this
regard? Is it the table-specific tuning that I would have needed to do?

I realize getting autovacuuming to work could be one way to exclude the
large table I wrote about in a recent post.

   Mark



Re: [PERFORM] Auto-Vacuum in 8.1 was ineffective for me. 8.2 may work better?

2007-02-21 Thread Alvaro Herrera
Mark Stosberg wrote:
 
 When I upgraded a busy database system to PostgreSQL 8.1, I was excited
 about AutoVacuum, and promptly enabled it, and turned off the daily
 vacuum process.
 
 (
 I set the following, as well as the option to enable auto vacuuming
 stats_start_collector = true
 stats_row_level = true
 )
 
 I could see in the logs that related activity was happening, but within
 a few days, the performance became horrible, and enabling the regular
 vacuum fixed it.
 
 Eventually autovacuum was completely disabled.

This has been tracked down to a bug in 8.1's Windows port.  See
http://people.planetpostgresql.org/mha/index.php?/archives/134-8.1-on-win32-pgstat-and-autovacuum.html

-- 
Alvaro Herrera                                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [PERFORM] How to avoid vacuuming a huge logging table

2007-02-21 Thread Greg Sabino Mullane



 Take a really different approach. Log in CSV format to text files
 instead, and only import the date ranges we need on demand if a report
 is requested on the data.

Seems like more work than a separate database to me. :)

 2. We could find a way to exclude the table from vacuuming, and let it
 grow even larger. Putting the table in its own database would
 accomplish that, but it would be nice to avoid the overhead of a second
 database connection.

Specific exclusions are generally what I've done for similar problems in
the past. If you can live without the per-database summary at the end of
the vacuum, you can do something like this:

SET search_path = 'pg_catalog';
SELECT set_config('search_path',
  current_setting('search_path')||','||quote_ident(nspname),'false')
  FROM pg_namespace
  WHERE nspname <> 'pg_catalog'
  ORDER BY 1;

\t
\o pop
SELECT 'vacuum verbose analyze '||quote_ident(relname)||';' 
  FROM pg_class
  WHERE relkind = 'r'
  AND relname <> 'ginormous_table'
  ORDER BY 1;
\o
\i pop

Or put any tables you don't want vacuumed by this script into their own schema:

...
SELECT 'vacuum verbose analyze '||quote_ident(relname)||';' 
  FROM pg_class c, pg_namespace n
  WHERE relkind = 'r'
  AND relnamespace = n.oid
  AND nspname = 'novac'
  ORDER BY 1;
...

Just flip the equality operator, and you've got a way to vacuum just those 
excluded tables, for example once a week during a slow time.


--
Greg Sabino Mullane [EMAIL PROTECTED]
End Point Corporation
PGP Key: 0x14964AC8 200702211402
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8





Re: [PERFORM] Postgres performance Linux vs FreeBSD

2007-02-21 Thread Dimitri Fontaine
On Wednesday, 21 February 2007 at 10:57, Jacek Zaręba wrote:
 Now the point :)) According to my tests postgres on Linux
 box run much faster then on FreeBSD, here are my results:

You may want to run a more specific benchmark, i.e. one using your own
application's queries. For this, you can consider the Tsung and pgfouine tools:
  http://tsung.erlang-projects.org/
  http://pgfouine.projects.postgresql.org/tsung.html

Regards,
-- 
Dimitri Fontaine



Re: [PERFORM] General advice on user functions

2007-02-21 Thread Merlin Moncure

On 2/21/07, Dan Harris [EMAIL PROTECTED] wrote:

I have a new task of automating the export of a very complex Crystal
Report.  One thing I have learned in the last 36 hours is that the
export process to PDF is really, really, slooww..

Anyway, that is none of your concern.  But, I am thinking that I can
somehow utilize some of PG's strengths to work around the bottleneck in
Crystal.  The main problem seems to be that tens of thousands of rows of
data must be summarized in the report and calculations made.  Based on
my recent experience, I'd say that this task would be better suited to
PG than relying on Crystal Reports to do the summarizing.

The difficulty I'm having is that the data needed is from about 50
different snapshots of counts over time.  The queries are very simple,
however I believe I am going to need to combine all of these queries
into a single function that runs all 50 and then returns just the
count(*) of each as a separate column in a single row.

I have been Googling for hours and reading about PL/pgSQL functions in
the PG docs, and I have yet to find examples that return multiple items
in a single row.  I have seen cases that return sets, but that
appears to be returning multiple rows, not columns.  Maybe I'm
barking up the wrong tree?

Here's the gist of what I need to do:

1) query the count of rows that occurred between 14 months ago and 12 months
ago for a given criterion, then count the rows that occurred between 2
months ago and current.  Repeat for 50 different where clauses.

2) return each count(*) as a column so that in the end I can say:

select count_everything( ending_date );

and have it return to me:

count_a_lastyear  count_a_last60  count_b_lastyear  count_b_last60
----------------  --------------  ----------------  --------------
             100             150               200             250

I'm not even sure if a function is what I'm after, maybe this can be
done in a view?  I am embarrassed to ask something that seems like it
should be easy, but some key piece of knowledge is escaping me on this.


this could be done in a view, a function, or a view/function combo.
you can select multiple counts at once like this:

select (select count(*) from foo) as foo, (select count(*) from bar) as bar;

but this may not be appropriate in some cases where something complex
is going on.  you may certainly return multiple columns from a single
call using one of two methods:

* out parameters (8.1+)
* custom type

both of which basically return a record instead of a scalar.  any
function call can be wrapped in a view which can be as simple as

create view foo as select * from my_count_proc();

this is especially advised if you want to float input parameters over
a table and also filter the inputs via 'where'.
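
(A minimal sketch of the out-parameter approach, hedged: the table and
column names below are invented, and only two of the fifty counts are
shown to keep it short.)

create or replace function count_everything(
    ending_date date,
    out count_a_lastyear bigint,
    out count_a_last60   bigint)
as $$
begin
    select count(*) into count_a_lastyear
      from events
     where kind = 'a'
       and happened between ending_date - interval '14 months'
                        and ending_date - interval '12 months';

    select count(*) into count_a_last60
      from events
     where kind = 'a'
       and happened >= ending_date - interval '2 months';

    -- ...repeat for the remaining where clauses...
end;
$$ language plpgsql;

select * from count_everything('2007-02-21');

-- or wrapped in a view, as described above:
-- create view all_counts as select * from count_everything(current_date);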

merlin



Re: [PERFORM] Auto-Vacuum in 8.1 was ineffective for me. 8.2 may work better?

2007-02-21 Thread Alvaro Herrera
Mark Stosberg wrote:
 Alvaro Herrera wrote:
  Mark Stosberg wrote:
  When I upgraded a busy database system to PostgreSQL 8.1, I was excited
  about AutoVacuum, and promptly enabled it, and turned off the daily
  vacuum process.
 
  (
  I set the following, as well as the option to enable auto vacuuming
  stats_start_collector = true
  stats_row_level = true
  )
 
  I could see in the logs that related activity was happening, but within
  a few days, the performance became horrible, and enabling the regular
  vacuum fixed it.
 
  Eventually autovacuum was completely disabled.

  This has been tracked down to a bug in 8.1's Windows port.  See
  http://people.planetpostgresql.org/mha/index.php?/archives/134-8.1-on-win32-pgstat-and-autovacuum.html
 
 Thanks for the response Alvaro. This would have been on FreeBSD.

Oh, maybe I misread your OP :-)  By "completely disabled" I thought
you meant it was frozen, i.e., it ran, but did nothing.

 Let me ask the question a different way: Is simply setting the two
 values plus enabling autovacuuming generally enough, or is further
 tweaking commonplace?

I assume your FSM configuration is already good enough?

What you should do is find out which tables are not getting vacuumed
often enough (e.g. by using contrib/pgstattuple repeatedly and seeing
where dead space is increasing) and tweak the autovacuum settings to have
them vacuumed more often.  This is done by inserting appropriate tuples
into pg_autovacuum.
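
(A hedged example of such a tuple: the table name and thresholds are
purely illustrative, the columns are those of the 8.1/8.2 pg_autovacuum
catalog, and -1 means "fall back to the server-wide setting".)

insert into pg_autovacuum
    (vacrelid, enabled,
     vac_base_thresh, vac_scale_factor,
     anl_base_thresh, anl_scale_factor,
     vac_cost_delay, vac_cost_limit)
values
    ('public.busy_table'::regclass, true,
     200, 0.05,    -- vacuum after 200 + 5% of the table's rows are dead
     100, 0.02,    -- analyze somewhat more eagerly than that
     -1, -1);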

-- 
Alvaro Herrera                                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [PERFORM] Postgres performance Linux vs FreeBSD

2007-02-21 Thread Mark Kirkwood

Jacek Zaręba wrote:

Hello, I've set up 2 identical machines, hp server 1ghz p3,
768mb ram, 18gb scsi3 drive. On the first one I've installed
Debian/GNU 4.0 Linux, on the second FreeBSD 6.2. On both
machines I've installed Postgresql 8.2.3 from sources.
Now the point :)) According to my tests postgres on the Linux
box runs much faster than on FreeBSD, here are my results:



With respect to 'select count(*) from ...' being slower on FreeBSD, 
there are a number of things to try to make FreeBSD faster for this sort 
of query. Two I'm currently using are:


- setting sysctl vfs.read_max to 16 or 32
- rebuilding the relevant filesystem with 32K blocks and 4K frags
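
(Concretely, a hedged sketch of those two changes against the da0s1g
slice from the original post; read_max as suggested above, and note that
newfs recreates the filesystem, so the data must be dumped and restored
around it:)

# sysctl vfs.read_max=32                  # read-ahead; add to /etc/sysctl.conf to persist
# newfs -U -b 32768 -f 4096 /dev/da0s1g   # 32K blocks, 4K frags, soft-updates (-U)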

I have two (almost) identical systems - one running Gentoo, one running 
FreeBSD 6.2. With the indicated changes the FreeBSD system performs 
pretty much the same as the Gentoo one.


With respect to the 'explain analyze' times, FreeBSD has a more accurate 
and more expensive gettimeofday call - which hammers its 'explain 
analyze' times compared to Linux.
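
(A rough sanity check from the numbers in the original post, assuming
EXPLAIN ANALYZE makes two timing calls per row fetched from the seq scan,
i.e. about 5 million calls for the 2.5M-row table: FreeBSD spends
(12117 - 1756) ms / 5M, roughly 2 µs per gettimeofday call, while Linux
spends (4737 - 1362) ms / 5M, roughly 0.7 µs; that accounts for most of
the gap between the instrumented and uninstrumented runs.)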


Cheers

Mark




Re: [PERFORM] General advice on user functions

2007-02-21 Thread Dan Harris

Thank you all for your ideas.  I appreciate the quick response.

-Dan



Re: [PERFORM] How to avoid vacuuming a huge logging table

2007-02-21 Thread D'Arcy J.M. Cain
On Wed, 21 Feb 2007 21:58:33 -
Greg Sabino Mullane [EMAIL PROTECTED] wrote:
 SELECT 'vacuum verbose analyze '||quote_ident(nspname)||'.'||quote_ident(relname)||';'
   FROM pg_class c, pg_namespace n
   WHERE relkind = 'r'
   AND relnamespace = n.oid
   AND nspname = 'novac'
   ORDER BY 1;

I assume you meant AND nspname != 'novac'

-- 
D'Arcy J.M. Cain darcy@druid.net |  Democracy is three wolves
http://www.druid.net/darcy/|  and a sheep voting on
+1 416 425 1212 (DoD#0082)(eNTP)   |  what's for dinner.
