Re: [PERFORM] Configuration Advice

2007-01-18 Thread Jeremy Haile
 I once had a query which would operate on a recordlist and 
 see whether there were any gaps larger than 1 between consecutive 
 primary keys.

Would you mind sharing the query you described?  I am attempting to do
something similar now. 



[PERFORM] Autoanalyze settings with zero scale factor

2007-01-18 Thread Jeremy Haile
Some of my very large tables (10 million rows) need to be analyzed by
autovacuum on a frequent basis.  Rather than specifying this as a
percentage of table size + base threshold, I wanted to specify it as an
explicit number of rows.

I changed the table-specific settings so that the ANALYZE base threshold
was 5000 and the ANALYZE scale factor is 0.  According to the documented
formula: analyze threshold = analyze base threshold + analyze scale
factor * number of tuples, I assumed that this would cause the table to
be analyzed every time 5000 tuples were inserted/updated/deleted.
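
For reference, plugging the numbers above into the documented formula gives (illustrative arithmetic only, not server output):

-- analyze threshold = analyze base threshold + analyze scale factor * number of tuples
-- with a base threshold of 5000 and a scale factor of 0, the table size drops out entirely:
SELECT 5000 + 0 * 10000000 AS analyze_threshold;   -- = 5000 changed tuples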

However, the tables have been updated with tens of thousands of inserts
and the table has still not been analyzed (according to
pg_stat_user_tables).  Does a scale factor of 0 cause the table to never
be analyzed?  What am I doing wrong?

I'm using PG 8.2.1.

Thanks,
Jeremy Haile



Re: [PERFORM] Autoanalyze settings with zero scale factor

2007-01-18 Thread Jeremy Haile
 Unless it's just a bug, my only guess is that autovacuum may be getting 
 busy at times (vacuuming large tables for example) and hasn't had a 
 chance to even look at that table for a while, and by the time it gets 
 to it, there have been tens of thousands of inserts.  Does that sound 
 plausible?

Possible, but I think your next suggestion is more likely.

 Also, are other auto-vacuums and auto-analyzes showing up in the 
 pg_stats table?  Maybe it's a stats system issue.

No tables have been vacuumed or analyzed today.  I had thought that this
problem was due to my pg_autovacuum changes, but perhaps not.  I
restarted PostgreSQL (in production - yikes).  About a minute after being
restarted, the autovac process fired up.

What could get PG in a state where autovac isn't running?  Is there
anything I should watch to debug or monitor for this problem in the
future?  I wish I'd noticed whether or not the stats collector process
was running before I restarted.
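
For future reference, a quick way to watch for this is to check the per-table timestamps the stats collector exposes, and to confirm the collector itself is enabled (a sketch assuming 8.2's pg_stat_user_tables columns; both settings below must be on for autovacuum to do anything):

SELECT relname, last_vacuum, last_autovacuum, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY relname;

SHOW stats_start_collector;
SHOW stats_row_level;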



Re: [PERFORM] Autoanalyze settings with zero scale factor

2007-01-18 Thread Jeremy Haile
Well - it hadn't run on any table in over 24 hours (according to
pg_stat_user_tables).  My tables are constantly being inserted into and
deleted from, and the autovacuum settings are pretty aggressive.  I also
had not seen the autovac process running in the past 24 hours. (although
I wasn't watching it *all* the time)

So - as far as I could tell it wasn't running.


On Thu, 18 Jan 2007 16:30:17 -0500, Tom Lane [EMAIL PROTECTED] said:
 Jeremy Haile [EMAIL PROTECTED] writes:
  No tables have been vacuumed or analyzed today.  I had thought that this
  problem was due to my pg_autovacuum changes, but perhaps not.  I
  restarted PostgreSQL (in production - yikes)  About a minute after being
  restarted, the autovac process fired up.
 
  What could get PG in a state where autovac isn't running?
 
 Um, are you sure it wasn't?  The autovac process is not an always-there
 thing, it quits after each pass and then the postmaster starts a new one
 awhile later.
 
   regards, tom lane



Re: [PERFORM] PG8.2.1 choosing slow seqscan over idx scan

2007-01-17 Thread Jeremy Haile
Thanks for the great info Chad.  I'm learning a lot from this thread!

 347434 rows * 156 bytes = 52MB (reasonable it could be held in your
 shared buffers, which makes Tom's suggestion very plausible, the 
 index scan may not be cheaper -- because it is all cached)

Maybe - I tried running the same query for an older time range that is
less likely to be cached.  The index scan took longer than my previous
example, but still only took 16 seconds, compared to the 87 seconds
required to seqscan the table.  When I can, I'll restart the machine and
run a comparison again to get a pure test.


 One of the reasons why the sequential scan is slower is because the
 optimizer doesn't know the data you are requesting is sitting in the
 cache (and it is VERY unlikely you have the entire table in cache, 
 unless it is a heavily used table or very small table, which it's probably 
 not).  

This is a large table (3 million rows).  Rows are typically inserted in
date order, although large numbers of rows are deleted every night. 
Basically, this table contains a list of transactions in a rolling time
window.  So inserts happen throughout the day, and then a day's worth of
old rows are deleted every night.  The most accessed rows are going to
be today's rows, which is a small subset of the overall table. (maybe
14%)

 One thing to be careful of here is that you really need to consider what
 is the primary use of the table, and what are the major queries you will be
 launching against it.  But you could improve the correlation by
 rebuilding the table ordered by the transaction_date column, but it may
 screw up other range scans.

Date is almost always a criterion in scans of this table.  As mentioned
earlier, the table is naturally built in date order - so would
rebuilding the table help?  Is it possible that even though I'm
inserting in date order, since I delete rows so often the physical
correlation would get disrupted as disk pages are reused?  Perhaps
clustering on the transaction_date index and periodically running
cluster would help?  Does vacuum full help with this at all?
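
If clustering does turn out to help, the 8.2 form of the command would be roughly the following (a sketch, assuming the transaction_date index name that appears later in this thread; CLUSTER takes an exclusive lock and rewrites the table):

CLUSTER transaction_facts_transaction_date_idx ON transaction_facts;
ANALYZE transaction_facts;   -- refresh statistics after the rewrite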

 Another option is partitioning.  I wouldn't do any of this
 stuff, until you find out the last tweak you made still holds true, give
 it a few days, perhaps test it after a clean reboot of the server.

Yeah - partitioning makes a lot of sense and I've thought about doing
this in the past.  Although I do run queries that cross multiple days,
most of my queries only look at today's data, so the physical disk
organization would likely be much better with a partitioned table setup.
 Also, since I usually delete old data one day at a time, I could simply
drop the old day's partition.  This would make vacuuming much less of an
issue.  

But I won't be making any changes immediately, so I'll continue to run
tests given your advice.

Thanks again,
Jeremy Haile





Re: [PERFORM] PG8.2.1 choosing slow seqscan over idx scan

2007-01-17 Thread Jeremy Haile
 How much memory does the box have
2GB

 Yes, it takes up space
Well, I upped max_fsm_pages to 200 because vacuums were failing
with it set to 150.  However, I'm now autovacuuming, which might be
keeping my fsm lower.  I didn't realize that setting it too high had
negative effects, so I'll try to get a better handle on how large this
needs to be.
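
One way to get a handle on it is the summary a database-wide VACUUM VERBOSE prints at the end - it reports roughly how many page slots are in use and how many are needed to track all free space, which can be compared against max_fsm_pages (exact wording approximate):

VACUUM VERBOSE;
-- read the final INFO/DETAIL lines: if the number of page slots needed exceeds
-- max_fsm_pages, free space is being lost and the setting should be raised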



Re: [PERFORM] PG8.2.1 choosing slow seqscan over idx scan

2007-01-17 Thread Jeremy Haile
 That's about 32% dead rows.  Might be worth scheduling a vacuum full,
 but it's not like I was afraid it might be.  It looks to me like you
 could probably use a faster I/O subsystem in that machine though.

I'll try to schedule a full vacuum tonight.  As far as I/O - it's using
SAN over fiber.  Not as fast as internal SCSI though...



Re: [PERFORM] PG8.2.1 choosing slow seqscan over idx scan

2007-01-17 Thread Jeremy Haile
 It would be nice if the database could
 learn to estimate these values, as newer versions of Oracle do.

That would be really nice since it would take some of the guess work out
of it.

 Yes, cluster would rebuild the table for you.  I wouldn't do anything too
 intrusive, run with the random_page_cost lowered, perhaps vacuum full,
 reindex, and see what happens.  

I'll try doing the vacuum full and reindex tonight since they require
exclusive locks.
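
For concreteness, tonight's maintenance would look something like this (a sketch using the table name from this thread; both commands take exclusive locks):

VACUUM FULL transaction_facts;     -- compacts the heap, but tends to bloat indexes
REINDEX TABLE transaction_facts;   -- rebuild the indexes afterwards
ANALYZE transaction_facts;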

 Yep, my thoughts exactly.  Partitioning support in PostgreSQL is there,
 but it needs a bit more of a tighter integration into the core (I shouldn't
 have to create a view, n tables, n rules, etc).  Additionally, I have read
 that at some point when you have y partitions the performance degrades,
 haven't really looked into it myself.

Yeah - I haven't setup partitioning in PostgreSQL before, although I've
read quite a bit about it.  I've talked about getting improved syntax
for partitioning in PostgreSQL.  MySQL's syntax is much simpler and more
intuitive compared to setting them up with Postgres - it would be nice
if PostgreSQL adopted a similar syntax where partitions were first-class
citizens.



Re: [PERFORM] PG8.2.1 choosing slow seqscan over idx scan

2007-01-17 Thread Jeremy Haile
 Also, look at the thread going by about index bloat by 4x.  You'll
 likely want to reindex after a vacuum full since vacuum full doesn't
 reclaim space in indexes and in fact often bloats indexes.

Thanks for the pointer.  That thread might indeed apply to my situation.
 I'm going to reindex the table tonight.

Jeremy Haile



Re: [PERFORM] PG8.2.1 choosing slow seqscan over idx scan

2007-01-17 Thread Jeremy Haile
Interesting - I haven't seen that tool before.  I'll have to check it
out when I get a chance.  Thanks!


On Wed, 17 Jan 2007 20:32:37 +0100, Tomas Vondra [EMAIL PROTECTED] said:
  That's about 32% dead rows.  Might be worth scheduling a vacuum full,
  but it's not like I was afraid it might be.  It looks to me like you
  could probably use a faster I/O subsystem in that machine though.
  
  If the random page cost being lower fixes your issues, then I'd just run
  with it lower for now.  note that while lowering it may fix one query,
  it may break another.  Tuning pgsql, like any database, is as much art
  as science...
 
 A nice feature of postgresql is the ability to log the 'slow queries'
 (exceeding some time limit) - you can use it to compare the performance
 of various settings. We're using it to 'detect' stupid SQL etc.
 
 Just set it reasonably (the value depends on you), for example we used
 about 500ms originally and after about two months of improvements we
 lowered it to about 100ms.
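 
 For reference, the setting being described here is a single postgresql.conf line (the threshold value is just an illustration):
 
 log_min_duration_statement = 500   # log any statement running longer than 500 ms; -1 disables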
 
 You can analyze the log by hand, but about a year ago I wrote a
 tool to parse it and build a set of HTML reports (with an overview and
 details about each query) along with graphs and examples of queries.
 
 You can get it here: http://opensource.pearshealthcyber.cz/
 
 Just beware, it's written in PHP and it definitely is not perfect:
 
(1) memory requirements (about 4x the size of the log)
(2) not too fast (about 20mins of [EMAIL PROTECTED] for a 200MB log)
(3) it requires a certain log format (see the page)
 
 I did some improvements to the script recently, but forgot to upload it.
 I'll do that tomorrow.
 
 Tomas
 



[PERFORM] PG8.2.1 choosing slow seqscan over idx scan

2007-01-16 Thread Jeremy Haile
Running PostgreSQL 8.2.1 on Win32.   The query planner is choosing a seq
scan over index scan even though index scan is faster (as shown by
disabling seqscan).  Table is recently analyzed and row count estimates
seem to be in the ballpark.

Another tidbit - I haven't done a vacuum full ever, although I vacuum
analyze regularly (and autovacuum).  I recently noticed that the PG
data drive is 40% fragmented (NTFS).  Could that be making the seqscan
slower than it should be?  Regardless of the fragmentations affect on
performance, is the query planner making a good decision here?


SOME CONFIGURATION PARAMS
effective_cache_size=1000MB
random_page_cost=3
default_statistics_target=50
shared_buffers=400MB
temp_buffers=10MB
work_mem=10MB
checkpoint_segments=12


QUERY
select merchant_dim_id, 
   dcms_dim_id,
   sum(success) as num_success, 
   sum(failed) as num_failed, 
   count(*) as total_transactions,
   (sum(success) * 1.0 / count(*)) as success_rate 
from transaction_facts 
where transaction_date >= '2007-1-16'
and transaction_date < '2007-1-16 15:20'
group by merchant_dim_id, dcms_dim_id;


EXPLAIN ANALYZE (enable_seqscan=true)
HashAggregate  (cost=339573.01..340089.89 rows=15904 width=16) (actual
time=140606.593..140650.573 rows=10549 loops=1)
  ->  Seq Scan on transaction_facts  (cost=0.00..333928.25 rows=322558
  width=16) (actual time=19917.957..140036.910 rows=347434 loops=1)
Filter: ((transaction_date >= '2007-01-16 00:00:00'::timestamp
without time zone) AND (transaction_date < '2007-01-16
15:20:00'::timestamp without time zone))
Total runtime: 140654.813 ms


EXPLAIN ANALYZE (enable_seqscan=false)
HashAggregate  (cost=379141.53..379658.41 rows=15904 width=16) (actual
time=3720.838..3803.748 rows=10549 loops=1)
  ->  Bitmap Heap Scan on transaction_facts  (cost=84481.80..373496.76
  rows=322558 width=16) (actual time=244.568..3133.741 rows=347434
  loops=1)
Recheck Cond: ((transaction_date >= '2007-01-16
00:00:00'::timestamp without time zone) AND (transaction_date <
'2007-01-16 15:20:00'::timestamp without time zone))
->  Bitmap Index Scan on transaction_facts_transaction_date_idx
(cost=0.00..84401.16 rows=322558 width=0) (actual
time=241.994..241.994 rows=347434 loops=1)
  Index Cond: ((transaction_date >= '2007-01-16
  00:00:00'::timestamp without time zone) AND
  (transaction_date < '2007-01-16 15:20:00'::timestamp
  without time zone))
Total runtime: 3810.795 ms



Re: [PERFORM] PG8.2.1 choosing slow seqscan over idx scan

2007-01-16 Thread Jeremy Haile
Thanks Tom!  Reducing random_page_cost to 2 did the trick for this
query.  It now favors the index scan.

Even if this is a cached situation, I wouldn't expect a difference of 3
min vs 3 seconds.   

Even if unrelated, do you think disk fragmentation would have negative
effects?  Is it worth trying to defragment the drive on a regular basis
in Windows?

Jeremy Haile


On Tue, 16 Jan 2007 16:39:07 -0500, Tom Lane [EMAIL PROTECTED] said:
 Jeremy Haile [EMAIL PROTECTED] writes:
  Running PostgreSQL 8.2.1 on Win32.   The query planner is choosing a seq
  scan over index scan even though index scan is faster (as shown by
  disabling seqscan).  Table is recently analyzed and row count estimates
  seem to be in the ballpark.
 
 Try reducing random_page_cost a bit.  Keep in mind that you are probably
 measuring a fully-cached situation here, if you repeated the test case.
 If your database fits into memory reasonably well then that's fine and
 you want to optimize for that case ... but otherwise you may find
 yourself pessimizing the actual behavior.
 
   regards, tom lane



Re: [PERFORM] Partitioning

2007-01-11 Thread Jeremy Haile
Well - whether or not MySQL's implementation of partitioning has some
deficiency, it sure is a lot easier to set up than PostgreSQL.  And I
don't think there is any technical reason that setting up partitioning
on Postgres couldn't be very easy and still be robust.

On Thu, 11 Jan 2007 13:59:20 +0100, Mikael Carneholm
[EMAIL PROTECTED] said:
  On Fri, Jan 05, 2007 at 12:47:08PM +0100, Mikael Carneholm wrote:
   Take a look at the set of partitioning functions I wrote shortly
 after
   the 8.1 release:
  
   http://www.studenter.hb.se/~arch/files/part_functions.sql
  
   You could probably work something out using those functions (as-is,
 or
   as inspiration) together with pgAgent
   (http://www.pgadmin.org/docs/1.4/pgagent.html)
  
   /Mikael
  
  Those are  some great functions.
  
 
 Well, they're less than optimal in one aspect: they add one rule per
 partition, making them unsuitable for OLTP type applications (actually:
 any application where insert performance is crucial). Someone with time
 and/or energy could probably fix that, I guess...patches are welcome :)
 
 /Mikael
 
 
 



[PERFORM] Slow inner join, but left join is fast

2007-01-10 Thread Jeremy Haile
I have a query made by joining two subqueries where the outer query
performing the join takes significantly longer to run than the two
subqueries.  

The first subquery runs in 600ms.  The second subquery runs in 700ms. 
But the outer query takes 240 seconds to run!  Both of the two
subqueries only return 8728 rows.  

Changing the inner join to a left join makes the outer query run in
about 1000ms (which is great), but I don't understand why the inner join
is so slow!

I'm using PostgreSQL 8.2.1.  Any ideas?

QUERY PLAN (Inner Join) - takes 240 seconds
---
Nested Loop  (cost=17.46..17.56 rows=1 width=120)
  Join Filter: ((a.merchant_dim_id = b.merchant_dim_id) AND
  (a.dcms_dim_id = b.dcms_dim_id))
  ->  HashAggregate  (cost=8.71..8.74 rows=1 width=16)
->  Index Scan using transaction_facts_transaction_date_idx on
transaction_facts  (cost=0.00..8.69 rows=1 width=16)
  Index Cond: ((transaction_date >= '2007-01-09
  00:00:00'::timestamp without time zone) AND
  (transaction_date < '2007-01-09 09:30:00'::timestamp
  without time zone))
  ->  HashAggregate  (cost=8.75..8.78 rows=1 width=16)
->  HashAggregate  (cost=8.71..8.72 rows=1 width=55)
  ->  Index Scan using
  transaction_facts_transaction_date_idx on
  transaction_facts  (cost=0.00..8.69 rows=1 width=55)
Index Cond: ((transaction_date >= '2007-01-09
00:00:00'::timestamp without time zone) AND
(transaction_date < '2007-01-09 09:30:00'::timestamp
without time zone))


QUERY PLAN (Left Join) - takes one second
---
Merge Left Join  (cost=304037.63..304064.11 rows=2509 width=120)
  Merge Cond: ((a.dcms_dim_id = b.dcms_dim_id) AND (a.merchant_dim_id =
  b.merchant_dim_id))
  ->  Sort  (cost=152019.45..152025.72 rows=2509 width=64)
Sort Key: a.dcms_dim_id, a.merchant_dim_id
->  HashAggregate  (cost=151771.15..151852.69 rows=2509
width=16)
  ->  Bitmap Heap Scan on transaction_facts
  (cost=5015.12..150419.90 rows=77214 width=16)
Recheck Cond: ((transaction_date >= '2007-01-09
00:00:00'::timestamp without time zone) AND
(transaction_date < '2007-01-09 09:30:00'::timestamp
without time zone))
->  Bitmap Index Scan on
transaction_facts_transaction_date_idx
(cost=0.00..4995.81 rows=77214 width=0)
  Index Cond: ((transaction_date >= '2007-01-09
  00:00:00'::timestamp without time zone) AND
  (transaction_date < '2007-01-09
  09:30:00'::timestamp without time zone))
  ->  Sort  (cost=152018.18..152020.54 rows=943 width=64)
Sort Key: b.dcms_dim_id, b.merchant_dim_id
->  Subquery Scan b  (cost=151931.51..151971.59 rows=943
width=64)
  ->  HashAggregate  (cost=151931.51..151962.16 rows=943
  width=16)
->  HashAggregate  (cost=151578.11..151672.35
rows=9424 width=55)
  ->  Bitmap Heap Scan on transaction_facts
  (cost=5015.12..150419.90 rows=77214 width=55)
Recheck Cond: ((transaction_date >=
'2007-01-09 00:00:00'::timestamp without
time zone) AND (transaction_date <
'2007-01-09 09:30:00'::timestamp without
time zone))
->  Bitmap Index Scan on
transaction_facts_transaction_date_idx
(cost=0.00..4995.81 rows=77214 width=0)
  Index Cond: ((transaction_date >=
  '2007-01-09 00:00:00'::timestamp
  without time zone) AND
  (transaction_date < '2007-01-09
  09:30:00'::timestamp without time
  zone))


QUERY
---
select a.merchant_dim_id, a.dcms_dim_id, 
   a.num_success, a.num_failed, a.total_transactions,
   a.success_rate,
   b.distinct_num_success, b.distinct_num_failed,
   b.distinct_total_transactions, b.distinct_success_rate
from (

-- SUBQUERY 1
select merchant_dim_id, 
   dcms_dim_id,
   sum(success) as num_success, 
   sum(failed) as num_failed, 
   count(*) as total_transactions,
   (sum(success) * 1.0 / count(*)) as success_rate 
from transaction_facts 
where transaction_date >= '2007-1-9'
and transaction_date < '2007-1-9 9:30'
group by merchant_dim_id, dcms_dim_id

) as a inner join (

-- SUBQUERY 2
select merchant_dim_id, 
   

Re: [PERFORM] Slow inner join, but left join is fast

2007-01-10 Thread Jeremy Haile
The table should have been analyzed, but to make sure I ran analyze on
the table before executing the explain analyze queries.  Well - problem
solved.  This time the inner join query runs quickly.  

I still don't understand why the inner join would be so different from
the left join prior to the analyze.  It looks like the number of rows
expected in the original query plan for the inner join was 1 (not correct,
since it was really 8728).  The left join query had the exact same
subqueries but expected 77214 rows to be returned from them, which was
still not correct but resulted in a better query plan.

After the recent analyze, here's the new inner join query plan.  I won't
bother pasting the left join plan, since it is almost identical now
(including row counts).  FYI - the result of the queries is (and always
was) identical for inner join and left join.


QUERY PLAN (inner join)
Merge Join  (cost=279457.86..279479.83 rows=43 width=120) (actual
time=626.771..670.275 rows=8728 loops=1)
  Merge Cond: ((a.dcms_dim_id = b.dcms_dim_id) AND (a.merchant_dim_id =
  b.merchant_dim_id))
  ->  Sort  (cost=139717.30..139722.38 rows=2029 width=64) (actual
  time=265.669..269.878 rows=8728 loops=1)
Sort Key: a.dcms_dim_id, a.merchant_dim_id
->  HashAggregate  (cost=139519.61..139585.56 rows=2029
width=16) (actual time=211.368..247.429 rows=8728 loops=1)
  ->  Bitmap Heap Scan on transaction_facts
  (cost=4427.62..138316.05 rows=68775 width=16) (actual
  time=21.858..100.998 rows=65789 loops=1)
Recheck Cond: ((transaction_date >= '2007-01-09
00:00:00'::timestamp without time zone) AND
(transaction_date < '2007-01-09 09:30:00'::timestamp
without time zone))
->  Bitmap Index Scan on
transaction_facts_transaction_date_idx
(cost=0.00..4410.42 rows=68775 width=0) (actual
time=21.430..21.430 rows=65789 loops=1)
  Index Cond: ((transaction_date >= '2007-01-09
  00:00:00'::timestamp without time zone) AND
  (transaction_date < '2007-01-09
  09:30:00'::timestamp without time zone))
  ->  Sort  (cost=139740.56..139742.67 rows=843 width=64) (actual
  time=361.083..365.418 rows=8728 loops=1)
Sort Key: b.dcms_dim_id, b.merchant_dim_id
->  Subquery Scan b  (cost=139663.76..139699.59 rows=843
width=64) (actual time=308.567..346.135 rows=8728 loops=1)
  ->  HashAggregate  (cost=139663.76..139691.16 rows=843
  width=16) (actual time=308.563..337.677 rows=8728 loops=1)
->  HashAggregate  (cost=139347.68..139431.97
rows=8429 width=55) (actual time=198.093..246.591
rows=48942 loops=1)
  ->  Bitmap Heap Scan on transaction_facts
  (cost=4427.62..138316.05 rows=68775 width=55)
  (actual time=24.080..83.988 rows=65789
  loops=1)
Recheck Cond: ((transaction_date >=
'2007-01-09 00:00:00'::timestamp without
time zone) AND (transaction_date <
'2007-01-09 09:30:00'::timestamp without
time zone))
->  Bitmap Index Scan on
transaction_facts_transaction_date_idx
(cost=0.00..4410.42 rows=68775 width=0)
(actual time=23.596..23.596 rows=65789
loops=1)
  Index Cond: ((transaction_date >=
  '2007-01-09 00:00:00'::timestamp
  without time zone) AND
  (transaction_date < '2007-01-09
  09:30:00'::timestamp without time
  zone))
Total runtime: 675.638 ms



On Wed, 10 Jan 2007 12:15:44 -0500, Tom Lane [EMAIL PROTECTED] said:
 Jeremy Haile [EMAIL PROTECTED] writes:
  I have a query made by joining two subqueries where the outer query
  performing the join takes significantly longer to run than the two
  subqueries.  
 
 Please show EXPLAIN ANALYZE results, not just EXPLAIN.
 Also, have you analyzed your tables recently?
 
   regards, tom lane



Re: [PERFORM] Slow inner join, but left join is fast

2007-01-10 Thread Jeremy Haile
I'm pretty sure it didn't analyze in between  - autovac is turned off
and I ran the test multiple times before posting.  

But since I can't reproduce it anymore, I can't be 100% sure.  And it
certainly doesn't make sense that the estimate for the index scan would
change based on an unrelated join condition.

If I ever get it to happen again, I'll be more careful and repost if it
is a real issue.  Thanks for pointing me in the right direction!


On Wed, 10 Jan 2007 13:38:15 -0500, Tom Lane [EMAIL PROTECTED] said:
 Jeremy Haile [EMAIL PROTECTED] writes:
  I still don't understand why the inner join would be so different from
  the left join prior to the analyze.
 
 Are you sure you hadn't analyzed in between?  Or maybe autovac did it
 for you?  The reason for the plan change is the change from estimating
 1 row matching the transaction_date range constraint, to estimating lots
 of them, and the join type away up at the top would surely not have
 affected that.
 
   regards, tom lane



Re: [PERFORM] Slow inner join, but left join is fast

2007-01-10 Thread Jeremy Haile
Another random idea - does PostgreSQL do any caching of query plans?
Even on the session level?

I ran these queries from the same Query window, so my idea is that maybe
the inner join plan was cached prior to an automatic analyze being run.  

But I'm doubting PostgreSQL would do something like that.  And of
course, if PostgreSQL doesn't cache query plans - this idea is bogus =)
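
For what it's worth, plans normally get reused only when the client prepares them explicitly, along these lines (statement name and parameters are illustrative):

PREPARE txn_range(timestamp, timestamp) AS
  SELECT merchant_dim_id, dcms_dim_id, count(*)
  FROM transaction_facts
  WHERE transaction_date >= $1 AND transaction_date < $2
  GROUP BY merchant_dim_id, dcms_dim_id;

EXECUTE txn_range('2007-01-09', '2007-01-09 09:30');   -- reuses the plan built at PREPARE time
DEALLOCATE txn_range;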


On Wed, 10 Jan 2007 13:38:24 -0500, Jeremy Haile [EMAIL PROTECTED]
said:
 I'm pretty sure it didn't analyze in between  - autovac is turned off
 and I ran the test multiple times before posting.  
 
 But since I can't reproduce it anymore, I can't be 100% sure.  And it
 certainly doesn't make sense that the estimate for the index scan would
 change based on an unrelated join condition.
 
 If I ever get it to happen again, I'll be more careful and repost if it
 is a real issue.  Thanks for pointing me in the right direction!
 
 
 On Wed, 10 Jan 2007 13:38:15 -0500, Tom Lane [EMAIL PROTECTED] said:
  Jeremy Haile [EMAIL PROTECTED] writes:
   I still don't understand why the inner join would be so different from
   the left join prior to the analyze.
  
  Are you sure you hadn't analyzed in between?  Or maybe autovac did it
  for you?  The reason for the plan change is the change from estimating
  1 row matching the transaction_date range constraint, to estimating lots
  of them, and the join type away up at the top would surely not have
  affected that.
  
  regards, tom lane
 



Re: [PERFORM] Slow inner join, but left join is fast

2007-01-10 Thread Jeremy Haile
I did create and drop an index at some point while looking at this
issue.  But I definitely reran both of the queries (and explains) after
the index was dropped, so I don't understand why there would be a
difference between the inner and left query plans.  (which were run
back-to-back more than once.)  Anyway - I'll let you know if something
similar happens again.

Thanks,
Jeremy Haile 


On Wed, 10 Jan 2007 14:22:35 -0500, Tom Lane [EMAIL PROTECTED] said:
 Jeremy Haile [EMAIL PROTECTED] writes:
  Another random idea - does PostgreSQL do any caching of query plans?
 
 Only if the client specifies it, either by PREPARE or the equivalent
 protocol-level message.  I dunno what client software you were using,
 but I think few if any would PREPARE behind your back.  Might be worth
 checking into though, if you've eliminated autovacuum.
 
 Actually there's another possibility --- did you create any indexes on
 the table in between?  CREATE INDEX doesn't do a full stats update, but
 it does count the rows and update pg_class.reltuples.  But it's hard to
 believe that'd have caused as big a rowcount shift as we see here ...
 
   regards, tom lane



Re: [PERFORM] Partitioning

2007-01-10 Thread Jeremy Haile
I really wish that PostgreSQL supported a nice partitioning syntax
like MySQL has.  

Here is an example:
CREATE TABLE tr (id INT, name VARCHAR(50), purchased DATE)
  PARTITION BY RANGE( YEAR(purchased) ) (
  PARTITION p0 VALUES LESS THAN (1990),
  PARTITION p1 VALUES LESS THAN (1995),
  PARTITION p2 VALUES LESS THAN (2000),
  PARTITION p3 VALUES LESS THAN (2005)
);

And to drop a partition:
ALTER TABLE tr DROP PARTITION p2;


This seems so much more intuitive and simpler than what is required to
set it up in PostgreSQL.  Does PostgreSQL's approach to table
partitioning have any advantage over MySQL?  Is a nicer syntax planned
for Postgres?
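
For comparison, the 8.2-era inheritance approach amounts to something like the following sketch (table, rule, and range names are illustrative, and one CHECK constraint plus one rule or trigger is needed per partition):

CREATE TABLE tr (id int, name varchar(50), purchased date);

CREATE TABLE tr_p2 (CHECK (purchased >= DATE '1995-01-01' AND purchased < DATE '2000-01-01'))
  INHERITS (tr);
CREATE TABLE tr_p3 (CHECK (purchased >= DATE '2000-01-01' AND purchased < DATE '2005-01-01'))
  INHERITS (tr);

-- route inserts on the parent to the right child:
CREATE RULE tr_insert_p3 AS ON INSERT TO tr
  WHERE NEW.purchased >= DATE '2000-01-01' AND NEW.purchased < DATE '2005-01-01'
  DO INSTEAD INSERT INTO tr_p3 VALUES (NEW.*);

-- postgresql.conf: constraint_exclusion = on  (so scans can skip irrelevant children)

-- dropping a partition is then simply:
DROP TABLE tr_p2;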


On Wed, 10 Jan 2007 14:20:06 -0600, Jim C. Nasby [EMAIL PROTECTED] said:
 BTW, someone coming up with a set of functions to handle partitioning
 for the general 'partition by time' case would make a GREAT project on
 pgFoundry.
 
 On Fri, Jan 05, 2007 at 12:47:08PM +0100, Mikael Carneholm wrote:
  Take a look at the set of partitioning functions I wrote shortly after
  the 8.1 release:
  
  http://www.studenter.hb.se/~arch/files/part_functions.sql
  
  You could probably work something out using those functions (as-is, or
  as inspiration) together with pgAgent
  (http://www.pgadmin.org/docs/1.4/pgagent.html)
  
  /Mikael
  
   -Original Message-
   From: [EMAIL PROTECTED]
  [mailto:pgsql-performance-
   [EMAIL PROTECTED] On Behalf Of Arnau
   Sent: den 5 januari 2007 12:02
   To: pgsql-performance@postgresql.org
   Subject: [PERFORM] Partitioning
   
   Hi all,
   
  I'm not sure if this question fits in the topic of this list.
   
  I'm interested in partitioning and it's the first time I'd use it.
   There is an issue I don't know how you handle it. Lets say I'm
    interested in storing monthly based statistical data like the example of
    http://www.postgresql.org/docs/8.2/static/ddl-partitioning.html. What I
    don't like about this approach is that the monthly tables, rules... must
    be created manually, or at least I haven't found any other option.
    
   My question is how do you manage this? Do you have a cron task that
    creates these monthly elements (tables, rules, ...) automatically, or
    is there another approach that doesn't require external things like
   cron - only PostgreSQL?
   --
   Arnau
   
  
  
  
 
 -- 
 Jim Nasby[EMAIL PROTECTED]
 EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)
 



Re: [PERFORM] Performance of PostgreSQL on Windows vs Linux

2007-01-10 Thread Jeremy Haile
Hey Jim - 

Thanks for the feedback.  The server has dual Xeons with HyperThreading
enabled - so perhaps I should try disabling it.  How much performance
boost have you seen by disabling it?  Of course, the bottleneck in my
case is more on the I/O or RAM side, not the CPU side.

Jeremy Haile


On Wed, 10 Jan 2007 14:15:26 -0600, Jim C. Nasby [EMAIL PROTECTED] said:
 On Wed, Jan 03, 2007 at 12:24:24PM -0500, Jeremy Haile wrote:
  I am sure that this has been discussed before, but I can't seem to find
  any recent posts. (I am running PostgreSQL 8.2)
  
  I have always ran PostgreSQL on Linux in the past, but the company I am
  currently working for uses Windows on all of their servers.  I don't
  have the luxury right now of running my own benchmarks on the two OSes,
  but wanted to know if anyone else has done a performance comparison.  Is
  there any significant differences?
 
 One thing to consider... I've seen a case or two where pgbench running
 on windows with HyperThreading enabled was actually faster than with it
 turned off. (General experience has been that HT hurts PostgreSQL). I
 suspect that the windows kernel may have features that allow it to
 better utilize HT than linux.
 
 Of course if you don't have HT... it doesn't matter. :)
 -- 
 Jim Nasby[EMAIL PROTECTED]
 EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)



Re: [PERFORM] Partitioning

2007-01-10 Thread Jeremy Haile
You can do list partitioning in MySQL:
http://dev.mysql.com/doc/refman/5.1/en/partitioning-list.html

My comment was not meant as a criticism of PostgreSQL's current state -
I'm glad that it has partitioning.  I'm simply wondering if there are
any plans of adopting a more user-friendly syntax in the future similar
to MySQL partitioning support.  Having first-class citizen support of
partitions would also allow some nice administrative GUIs and views to
be built for managing them.  

Jeremy Haile


On Wed, 10 Jan 2007 15:09:31 -0600, Jim C. Nasby [EMAIL PROTECTED] said:
 On Wed, Jan 10, 2007 at 03:28:00PM -0500, Jeremy Haile wrote:
  This seems so much more intuitive and simpler than what is required to
  set it up in PostgreSQL.  Does PostgreSQL's approach to table
  partitioning have any advantage over MySQL?  Is a nicer syntax planned
  for Postgres?
 
 The focus was to get the base functionality working, and working
 correctly. Another consideration is that there's multiple ways to
 accomplish the partitioning; exposing the basic functionality without
 enforcing a given interface provides more flexibility (ie: it appears
 that you can't do list partitioning with MySQL, while you can with
 PostgreSQL).
 -- 
 Jim Nasby[EMAIL PROTECTED]
 EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)



Re: [PERFORM] High inserts, bulk deletes - autovacuum vs scheduled

2007-01-10 Thread Jeremy Haile
 BTW, that's the default values for analyze... the defaults for vacuum
 are 2x that.

Yeah - I was actually more concerned that tables would need to be
analyzed more often than I was about vacuuming too often, so I used
analyze as the example.  Since my app is inserting constantly throughout
the day and querying for recent data - I want to make sure the query
planner realizes that there are lots of rows with new timestamps on
them.  In other words, if I run a query like select * from mytable where
timestamp > '9:00am', I want to make sure it hasn't been a day since
the table was analyzed - otherwise the planner will think there are zero
rows with timestamps after 9:00am today.

 What's more important
 is to make sure critical tables (such as queue tables) are getting
 vacuumed frequently so that they stay small. 

Is the best way to do that usually to lower the scale factors?  Is it
ever a good approach to lower the scale factor to zero and just set the
thresholds to a pure number of rows? (when setting it for a specific
table)

Thanks,
Jeremy Haile



[PERFORM] High inserts, bulk deletes - autovacuum vs scheduled vacuum

2007-01-09 Thread Jeremy Haile
I am developing an application that has very predictable database
operations:
  -inserts several thousand rows into 3 tables every 5 minutes. (table
  contain around 10 million rows each)
  -truncates and rebuilds aggregate tables of this data every 5 minutes.
  (several thousand rows each)
  -regular reads of aggregate table and sometimes large tables by user
  interaction
  -every night, hundreds of thousands of rows are deleted from these 3
  tables (old data)
  -20-30 other tables get inserted/updated slowly throughout the day

In order to optimize performance of the inserts, I disabled
autovacuum/row-level stats and instead run vacuum analyze on the whole
DB every hour.  However this operation takes around 20 minutes of each
hour.  This means that the database is involved in vacuum/analyzing
tables 33% of the time.
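
For reference, the hourly job itself boils down to this (a sketch; the scheduler that runs it is outside PostgreSQL):

VACUUM ANALYZE;   -- whole database, run hourly
-- if only planner statistics are the concern, a plain ANALYZE of the busiest
-- tables is far cheaper than vacuuming everything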

I'd like any performance advice, but my main concern is the amount of
time vacuum/analyze runs and its possible impact on the overall DB
performance.  Thanks!


I am running 8.2 (will be 8.2.1 soon).  The box is Windows with 2GB RAM
connected to a SAN over fiber.  The data and pg_xlog are on separate
partitions.  

Modified configuration:
effective_cache_size = 1000MB
random_page_cost = 3
default_statistics_target = 50
maintenance_work_mem = 256MB
shared_buffers = 400MB
temp_buffers = 10MB
work_mem = 10MB
max_fsm_pages = 150
checkpoint_segments = 30
stats_row_level = off
stats_start_collector = off



Re: [PERFORM] High inserts, bulk deletes - autovacuum vs scheduled

2007-01-09 Thread Jeremy Haile
Good advice on the partitioning idea.  I may have to restructure some of
my queries, since some of them query across the whole range - but it may
be a much more performant solution.  How is the performance when
querying across a set of partitioned tables vs. querying on a single
table with all rows?  This may be a long term idea I could tackle, but
is probably not feasible for my current time-frame.  

Does my current approach of disabling autovacuum and manually vacuuming
once-an-hour sound like a good idea, or would I likely have better
results by auto-vacuuming and turning row-level stats back on?


On Tue, 09 Jan 2007 19:02:25 +0100, Florian Weimer [EMAIL PROTECTED]
said:
 * Jeremy Haile:
 
  I'd like any performance advice, but my main concern is the amount of
  time vacuum/analyze runs and its possible impact on the overall DB
  performance.  Thanks!
 
 You could partition your data tables by date and discard old data
 simply by dropping the tables.  This is far more effective than
 vacuuming, but obviously, this approach cannot be used in all cases
 (e.g. if you need more dynamic expiry rules).
 
 -- 
 Florian Weimer[EMAIL PROTECTED]
 BFK edv-consulting GmbH   http://www.bfk.de/
 Kriegsstraße 100  tel: +49-721-96201-1
 D-76133 Karlsruhe fax: +49-721-96201-99



Re: [PERFORM] Performance of PostgreSQL on Windows vs Linux

2007-01-04 Thread Jeremy Haile
Thanks for the recommendations.  I wasn't familiar with those packages!

On Thu, 4 Jan 2007 00:46:32 +0100, Dimitri Fontaine [EMAIL PROTECTED]
said:
 On Thursday, 4 January 2007 00:18, Magnus Hagander wrote:
  But to get a good answer on if the difference is
  significant enough to matter, you really need to run some kind of simple
  benchmark on *your* workload.
 
 To easily stress test a couple of servers and compare results on *your* 
 workload, please consider using both pgfouine[1,2] and tsung[3].
 
 The companion tool tsung-ploter[4] (for plotting several results using
 common graphs, hence scales) may also be useful.
 
 [1]: http://pgfouine.projects.postgresql.org/
 [2]: http://pgfouine.projects.postgresql.org/tsung.html
 [3]: http://tsung.erlang-projects.org/
 [4]: http://debian.dalibo.org/unstable/tsung-ploter_0.1-1.tar.gz
 
 Regards,
 -- 
 Dimitri Fontaine
 http://www.dalibo.com/



Re: [PERFORM] Performance of PostgreSQL on Windows vs Linux

2007-01-04 Thread Jeremy Haile
Thanks for the response!  I know I have to benchmark them to get a real
answer.  I am just looking to hear someone say "We benchmarked Linux vs.
Windows with similar configuration and hardware and experienced a 25%
performance boost in Linux" or "We benchmarked them and found no
significant difference."

I realize the situation varies based on usage patterns, but I'm just
looking for some general info based on anyone else's experiences.  

My usage pattern is a single application that hits the database.  The
application uses a connection pool, so opening lots of connections is
not a huge issue.  However - it does have very large tables and
regularly queries and inserts into these tables.  I insert several
million rows into 3 tables every day - and also delete about the same
amount.



On Thu, 04 Jan 2007 00:18:23 +0100, Magnus Hagander
[EMAIL PROTECTED] said:
 Jeremy Haile wrote:
  I am sure that this has been discussed before, but I can't seem to find
  any recent posts. (I am running PostgreSQL 8.2)
  
  I have always ran PostgreSQL on Linux in the past, but the company I am
  currently working for uses Windows on all of their servers.  I don't
  have the luxury right now of running my own benchmarks on the two OSes,
  but wanted to know if anyone else has done a performance comparison.  Is
  there any significant differences?
 
 That depends on your usage pattern. There are certainly cases where the
 Win32 version will be significantly slower.
 For example, if you open a lot of new connections, that is a lot more
 expensive on Windows since each connection needs to execute a new
 backend due to the lack of fork().
 
 I don't think you'll find any case where the Windows version is faster
 than Linux ;-) But to get a good answer on if the difference is
 significant enough to matter, you really need to run some kind of simple
 benchmark on *your* workload.
 
 //Magnus



Re: [PERFORM] Performance of PostgreSQL on Windows vs Linux

2007-01-04 Thread Jeremy Haile
I'm using 8.2.  I don't know when I'll get a chance to run my own
benchmarks.  (I don't currently have access to a Windows and Linux
server with similar hardware/configuration) But when/if I get a chance
to run them, I will post the results here.

Thanks for the feedback.

Jeremy Haile


On Thu, 04 Jan 2007 10:23:51 -0500, Tom Lane [EMAIL PROTECTED] said:
 Jeremy Haile [EMAIL PROTECTED] writes:
  Thanks for the response!  I know I have to benchmark them to get a real
  answer.  I am just looking to hear someone say We benchmarked Linux vs.
  Windows with similar configuration and hardware and experienced a 25%
  performance boost in Linux.  or We benchmarked them and found no
  significant difference.  
 
 I've heard anecdotal reports both ways: there's no difference and
 there's a big difference.  So there's no substitute for benchmarking
 your own application.
 
 I think one big variable in this is which PG version you are testing.
 We've been gradually filing down some of the rough edges in the native
 Windows port, so I'd expect that the performance gap is closing over
 time.  I don't know how close to closed it is in 8.2, but I'd surely
 suggest that you do your benchmarking with 8.2.
 
   regards, tom lane



[PERFORM] Performance of PostgreSQL on Windows vs Linux

2007-01-03 Thread Jeremy Haile
I am sure that this has been discussed before, but I can't seem to find
any recent posts. (I am running PostgreSQL 8.2)

I have always run PostgreSQL on Linux in the past, but the company I am
currently working for uses Windows on all of their servers.  I don't
have the luxury right now of running my own benchmarks on the two OSes,
but wanted to know if anyone else has done a performance comparison.  Is
there any significant differences?



Re: [PERFORM] High update activity, PostgreSQL vs BigDBMS

2007-01-02 Thread Jeremy Haile
More specifically, you should set the noatime,data=writeback options in
fstab on ext3 partitions for best performance.  Correct?

 it doesn't really belong here but ext3 has
 journal (data and meta data journaled)
 ordered (meta data journaled, but data written before meta data (default))
 writeback (meta data only journaled)
 modes.
 
 The performance differences between ordered and meta data only
 journaling should be very small anyway
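
For concreteness, an illustrative /etc/fstab line for the options in question (device and mount point are assumptions; whether writeback is appropriate for the data directory is exactly the trade-off discussed above):

/dev/sdb1   /var/lib/pgsql/data   ext3   noatime,data=writeback   0 0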



[PERFORM] Config parameters

2007-01-02 Thread Jeremy Haile
I'm curious what parameters you guys typically *always* adjust on new
PostgreSQL installs.  

I am working with a database that contains several large tables (10-20
million) and many smaller tables (hundreds of rows).  My system has 2 GB
of RAM currently, although I will be upping it to 4GB soon.

My motivation in asking this question is to make sure I'm not making a
big configuration no-no by missing a parameter, and also for my own
checklist of parameters I should almost always set when configuring a
new install.  

The parameters that I almost always change when installing a new system
are shared_buffers, max_fsm_pages, checkpoint_segments, and
effective_cache_size.

Are there any parameters missing that always should be changed when
deploying to a decent server?  




Re: [PERFORM] Config parameters

2007-01-02 Thread Jeremy Haile
What is a decent default setting for work_mem and maintenance_work_mem,
considering I am regularly querying tables that are tens of millions of
rows and have 2-4 GB of RAM?

Also - what is the best way to determine decent settings for
temp_buffers and random_page_cost?


On Tue, 02 Jan 2007 16:34:19 +, Richard Huxton dev@archonet.com
said:
 Jeremy Haile wrote:
  I'm curious what parameters you guys typically *always* adjust on new
  PostgreSQL installs.  
 
  The parameters that I almost always change when installing a new system
  is shared_buffers, max_fsm_pages, checkpoint_segments, and
  effective_cache_size.
 
 Always: work_mem, maintenance_work_mem
 Also consider temp_buffers and random_page_cost.
 
 A lot will depend on how much of the data you handle ends up cached.
 
 -- 
Richard Huxton
Archonet Ltd



Re: [PERFORM] Config parameters

2007-01-02 Thread Jeremy Haile
Thanks for the information!

Are there any rule-of-thumb starting points for these values that you
use when setting up servers?  I'd at least like a starting point for
testing different values.  

For example, I'm sure setting a default work_mem of 100MB is usually
overkill - but is 5MB usually a reasonable number?  20MB?  My system
does not have a huge number of concurrent users, but they are hitting
large tables.  I'm not sure what numbers people usually use here
successfully.

For maintenance_work_mem, I turned off autovacuum to save on
performance, but run a vacuum analyze once an hour.  My current database
characteristics are heavy insert (bulk inserts every 5 minutes) and
medium amount of selects on large, heavily indexed tables.

For temp_buffers - any rule of thumb starting point?  What's the best
way to evaluate if this number is adjusted correctly?

For random_page_cost - is the default of 4 pretty good for most drives? 
Do you usually bump it up to 3 on modern servers?  I've usually done
internal RAID setups, but the database I'm currently working on is
hitting a SAN over fiber.

I realize that these values can vary a lot based on a variety of factors
- but I'd love some more advice on what good rule-of-thumb starting
points are for experimentation and how to evaluate whether the values
are set correctly. (in the case of temp_buffers and work_mem especially)


On Tue, 02 Jan 2007 18:49:54 +, Richard Huxton dev@archonet.com
said:
 Jeremy Haile wrote:
  What is a decent default setting for work_mem and maintenance_work_mem,
  considering I am regularly querying tables that are tens of millions of
  rows and have 2-4 GB of RAM?
 
 Well, work_mem will depend on your query-load. Queries that do a lot of 
 sorting should benefit from increased work_mem. You only have limited 
 RAM though, so it's a balancing act between memory used to cache disk 
 and per-process sort memory. Note that work_mem is per sort, so you can 
 use multiples of that amount in a single query. You can issue a set to 
 change the value for a session.
 
 How you set maintenance_work_mem will depend on whether you vacuum 
 continually (e.g. autovacuum) or at set times.
 
  Also - what is the best way to determine decent settings for
  temp_buffers and random_page_cost?
 
 With all of these, testing I'm afraid. The only sure thing you can say 
 is that random_page_cost should be 1 if all your database fits in RAM.
 
 -- 
Richard Huxton
Archonet Ltd



Re: [PERFORM] Config parameters

2007-01-02 Thread Jeremy Haile
 So, on a 4 Gig machine you could divide 1G (25%) by the total possible
 connections, then again by the average number of sorts you'd expect per
 query / connection to get an idea.  

Thanks for the advice.  I'll experiment with higher work_mem settings,
as I am regularly doing sorts on large datasets.  I imagine the default
setting isn't very optimal in my case.


 Did you turn off stats collection as well?  That's really the major
 performance issue with autovacuum, not autovacuum itself.  

I did turn off stats collection.  I'm not sure how much of a difference
it makes, but I was trying to squeeze every ounce of performance out of
the database.  


 I.e. the cure may be worse than the disease.  OTOH, if you don't delete
 / update often, then don't worry about it.

I hardly ever delete/update.  I update regularly, but only on small
tables so it doesn't make as big of a difference.  I do huge inserts,
which is why turning off stats/autovacuum gives me some performance
benefit.  I usually only do deletes nightly in large batches, so
vacuuming/analyzing once an hour works fairly well.


 Haven't researched temp_buffers at all.

Do you usually change temp_buffers?  Mine is currently at the default
setting.  I guess I could arbitrarily bump it up - but I'm not sure what
the consequences would be or how to tell if it is set correctly.


 random_page_cost is the hardest to come up with the proper setting.  

This definitely sounds like the hardest to figure out.  (since it seems
to be almost all trial-and-error.)  I'll play with some different values.
 This is only used by the query planner, right?  How much of a
performance difference does it usually make to tweak this number?  (i.e.
how much performance difference would someone usually expect when they
find that 2.5 works better than 4?)


 While you can't
 change buffers on the fly, you can change work_mem and random_page_cost
 on the fly, per connection, to see the change.

Thanks for the advice.  I was aware you could change work_mem on the
fly, but didn't think about setting random_page_cost on-the-fly.
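
A per-session sketch of that kind of experiment (8.2 accepts memory units in SET; values are illustrative):

SET work_mem = '50MB';
SET random_page_cost = 2.5;
-- rerun the problem query with EXPLAIN ANALYZE and compare plans/timings
RESET work_mem;
RESET random_page_cost;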



Re: [PERFORM] URGENT: Out of disk space pg_xlog

2006-12-23 Thread Jeremy Haile
  Once you free some space on the data partition and restart, you should
  be good to go --- there will be no loss of committed transactions, since
  all the operations are in pg_xlog.  Might take a little while to replay
  all that log though :-(
 
 Amazing that all works.  What I did not see is confirmation from the
 user that the data directory filled up _before_ pg_xlog filled up.

After I freed up space on the pg_xlog partition and restarted, it took
some time to replay all of the log (15-20 minutes) and everything
recovered with no data corruption!  However, the theory about the data
partition filling up first didn't happen in my case.  The data partition
was (and still is) less than 50% utilized.  My pg_xlog files typically
run around 400MB, but the long-running update filled up the entire
10GB partition.  (It is now a 70 GB partition.)

So, I'm still not sure what caused the problem.  When I get back to work
(or maybe sooner), I'll take a look in the PG logs and post anything
that looks suspicious here.  Thanks for all of your comments and
suggestions.  Even though I haven't figured out the root of the problem
yet, they've been very informative.



[PERFORM] URGENT: Out of disk space pg_xlog

2006-12-22 Thread Jeremy Haile
I created a 10GB partition for pg_xlog and ran out of disk space today
during a long running update.  My checkpoint_segments is set to 12, but
there are 622 files in pg_xlog.  What size should the pg_xlog partition
be?  

Postmaster is currently not starting up (critical for my organization)
and reports FATAL: the database system is starting up.

The log reports:
2006-12-22 10:50:09 LOG:  checkpoint record is at 2E/87A323C8
2006-12-22 10:50:09 LOG:  redo record is at 2E/8729A6E8; undo record is
at 0/0; shutdown FALSE
2006-12-22 10:50:09 LOG:  next transaction ID: 0/25144015; next OID:
140986
2006-12-22 10:50:09 LOG:  next MultiXactId: 12149; next MultiXactOffset:
24306
2006-12-22 10:50:09 LOG:  database system was not properly shut down;
automatic recovery in progress
2006-12-22 10:50:09 LOG:  redo starts at 2E/8729A6E8


This has been running for 20 minutes.  What can I do?  Please help!



Re: [PERFORM] URGENT: Out of disk space pg_xlog

2006-12-22 Thread Jeremy Haile
Sorry for my rushed posting, as I was in a bit of a panic.

We moved the pg_xlog directory over to a 70GB partition, and after 15-20
minutes the automatic recovery finished.  Everything is working fine
now.

I would still appreciate a PG guru explaining how to estimate size for a
pg_xlog partition.  It seems like it can vary considerably depending on
how intensive your current transactions are.  Is there a way to
determine a maximum?

On Fri, 22 Dec 2006 11:06:46 -0500, Jeremy Haile [EMAIL PROTECTED]
said:
 I created a 10GB partition for pg_xlog and ran out of disk space today
 during a long running update.  My checkpoint_segments is set to 12, but
 there are 622 files in pg_xlog.  What size should the pg_xlog partition
 be?  
 
 Postmaster is currently not starting up (critical for my organization)
 and reports FATAL: The database system is starting up .
 
 The log reports:
 2006-12-22 10:50:09 LOG:  checkpoint record is at 2E/87A323C8
 2006-12-22 10:50:09 LOG:  redo record is at 2E/8729A6E8; undo record is
 at 0/0; shutdown FALSE
 2006-12-22 10:50:09 LOG:  next transaction ID: 0/25144015; next OID:
 140986
 2006-12-22 10:50:09 LOG:  next MultiXactId: 12149; next MultiXactOffset:
 24306
 2006-12-22 10:50:09 LOG:  database system was not properly shut down;
 automatic recovery in progress
 2006-12-22 10:50:09 LOG:  redo starts at 2E/8729A6E8
 
 
 This has been running for 20 minutes.  What can I do?  Please help!
 


Re: [PERFORM] URGENT: Out of disk space pg_xlog

2006-12-22 Thread Jeremy Haile
The archive_status directory is empty.  I've never seen any files in
there and I've never set archive_command.

Well, the problem has since resolved itself, but here is what is in the
directory now.  Previously there were hundreds of files, but these
disappeared after Postgres performed the automatic recovery.

12/22/2006  11:16 AM        16,777,216 0001003000D2
12/22/2006  11:17 AM        16,777,216 0001003000D3
12/22/2006  11:17 AM        16,777,216 0001003000D4
12/22/2006  11:17 AM        16,777,216 0001003000D5
12/22/2006  11:18 AM        16,777,216 0001003000D6
12/22/2006  11:19 AM        16,777,216 0001003000D7
12/22/2006  11:19 AM        16,777,216 0001003000D8
12/22/2006  11:19 AM        16,777,216 0001003000D9
12/22/2006  11:19 AM        16,777,216 0001003000DA
12/22/2006  11:21 AM        16,777,216 0001003000DB
12/22/2006  10:07 AM        16,777,216 0001003000DC
12/22/2006  10:07 AM        16,777,216 0001003000DD
12/22/2006  10:07 AM        16,777,216 0001003000DE
12/22/2006  10:33 AM        16,777,216 0001003000DF
12/22/2006  10:08 AM        16,777,216 0001003000E0
12/22/2006  10:32 AM        16,777,216 0001003000E1
12/22/2006  10:08 AM        16,777,216 0001003000E2
12/22/2006  10:08 AM        16,777,216 0001003000E3
12/22/2006  10:17 AM        16,777,216 0001003000E4
12/22/2006  10:11 AM        16,777,216 0001003000E5
12/22/2006  11:10 AM        16,777,216 0001003000E6
12/22/2006  11:11 AM        16,777,216 0001003000E7
12/22/2006  11:15 AM        16,777,216 0001003000E8
12/22/2006  11:15 AM        16,777,216 0001003000E9
12/22/2006  11:15 AM        16,777,216 0001003000EA
12/22/2006  11:16 AM        16,777,216 0001003000EB
12/22/2006  11:16 AM        16,777,216 0001003000EC
12/22/2006  11:16 AM        16,777,216 0001003000ED
12/18/2006  08:52 PM    <DIR>          archive_status
              28 File(s)     469,762,048 bytes
               3 Dir(s)  10,206,756,864 bytes free

On Fri, 22 Dec 2006 17:02:43 +, Simon Riggs
[EMAIL PROTECTED] said:
 On Fri, 2006-12-22 at 11:52 -0500, Jeremy Haile wrote:
 
  I would still appreciate ... explaining how to estimate size for a
  pg_xlog partition.  It seems like it can vary considerably depending on
  how intensive your current transactions are.  Is there a way to
  determine a maximum?
 
 There should be at most 2*checkpoint_segments+1 files in pg_xlog, which
 are 16MB each. So you shouldn't be having a problem.
 
 If there are more than this, it could be because you have
 currently/previously had archive_command set and the archive command
 failed to execute correctly, or the database was shutdown/crashed prior
 to the archive commands being executed.
 
 IIRC there was a bug that allowed this to happen, but that was some time
 ago.
 
 Perhaps you could show us the dir listing, so we can check that there is
 not a new problem emerging? Can you also show us the contents of the
 pg_xlog/archive_status directory? Thanks.
 
 -- 
   Simon Riggs 
   EnterpriseDB   http://www.enterprisedb.com
 
 



Re: [PERFORM] URGENT: Out of disk space pg_xlog

2006-12-22 Thread Jeremy Haile
checkpoint_segments has been set at 12 for a while and was never set
higher than that.  (Before that it was set to the PG default - 3, I think.)

Before the server crashed I was running an update that set a boolean
flag on two large tables (10 million rows each) for transactions older
than today (roughly 80% of the rows).  The transaction ran for a long
time and I assume that is what caused pg_xlog to fill up.
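
For reference, the 2*checkpoint_segments+1 rule of thumb mentioned earlier
can be checked directly against the live setting.  This is only a sketch
(it just reads checkpoint_segments via current_setting and assumes the
default 16MB segment size):

-- rough steady-state ceiling for pg_xlog, per the
-- 2*checkpoint_segments+1 rule of thumb and 16MB WAL segments
SELECT (2 * current_setting('checkpoint_segments')::int + 1) * 16
       AS approx_max_xlog_mb;

With checkpoint_segments = 12 that works out to (2*12+1)*16 = 400MB, which
matches the ~400MB I normally see in pg_xlog - so with 622 segments (about
10GB) something was keeping far more WAL around than that formula predicts.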

On Fri, 22 Dec 2006 17:36:39 +, Simon Riggs
[EMAIL PROTECTED] said:
 On Fri, 2006-12-22 at 12:30 -0500, Jeremy Haile wrote:
  The archive_status directory is empty.  I've never seen any files in
  there and I've never set archive_command.
  
  Well, the problem has since resolved, but here is what is in the
  directory now.  Previously there were hundreds of files, but these
  disappeared after Postgres performed the automatic recovery.
 
 What were you doing before the server crashed?
 
 Did you previously have checkpoint_segments set higher? When/how was it
 reduced?
 
 -- 
   Simon Riggs 
   EnterpriseDB   http://www.enterprisedb.com
 
 



[PERFORM] Inner join vs where-clause subquery

2006-12-19 Thread Jeremy Haile
I have the following query, which performs extremely slowly:
select min(nlogid) as start_nlogid,
   max(nlogid) as end_nlogid,
   min(dtCreateDate) as start_transaction_timestamp,
   max(dtCreateDate) as end_transaction_timestamp
from activity_log_facts 
where nlogid > ( select max(a.end_nlogid) from
activity_log_import_history a)
and dtCreateDate < '2006-12-18 9:10'


If I change the where clause to use the return value of the subquery, it
runs very fast:
select min(nlogid) as start_nlogid,
   max(nlogid) as end_nlogid,
   min(dtCreateDate) as start_transaction_timestamp,
   max(dtCreateDate) as end_transaction_timestamp
from activity_log_facts 
where nlogid > 402123456
and dtCreateDate < '2006-12-18 9:10'


If I change the query to the following, it runs fast:
select min(nlogid) as start_nlogid,
   max(nlogid) as end_nlogid,
   min(dtCreateDate) as start_transaction_timestamp,
   max(dtCreateDate) as end_transaction_timestamp
from activity_log_facts 
inner join ( select max(end_nlogid) as previous_nlogid from
activity_log_import_history) as a 
on activity_log_facts.nlogid > a.previous_nlogid
where dtCreateDate < ${IMPORT_TIMESTAMP}


I am running PG 8.2.  Why is this the case?  Shouldn't the query
planner be smart enough to know that the first query is the same as the
second and third?  The inner query does not refer to any columns outside
of itself.  I personally find the first query easiest to read and wish
it performed well.

Jeremy Haile



Re: [PERFORM] Inner join vs where-clause subquery

2006-12-19 Thread Jeremy Haile
Here is the explain analyze output:

Result  (cost=9.45..9.46 rows=1 width=0) (actual
time=156589.390..156589.391 rows=1 loops=1)
  InitPlan
-  Result  (cost=0.04..0.05 rows=1 width=0) (actual
time=0.034..0.034 rows=1 loops=1)
  InitPlan
-  Limit  (cost=0.00..0.04 rows=1 width=4) (actual
time=0.029..0.030 rows=1 loops=1)
  -  Index Scan Backward using
  activity_log_import_history_end_nlogid_idx on
  activity_log_import_history a  (cost=0.00..113.43
  rows=2877 width=4) (actual time=0.027..0.027 rows=1
  loops=1)
Filter: (end_nlogid IS NOT NULL)
-  Limit  (cost=0.00..1.19 rows=1 width=12) (actual
time=0.052..0.052 rows=0 loops=1)
  -  Index Scan using activity_log_facts_pkey on
  activity_log_facts  (cost=0.00..1831613.82 rows=1539298
  width=12) (actual time=0.050..0.050 rows=0 loops=1)
Index Cond: (nlogid > $1)
Filter: ((nlogid IS NOT NULL) AND (dtcreatedate <
'2006-12-18 09:10:00'::timestamp without time zone))
-  Limit  (cost=0.00..1.19 rows=1 width=12) (actual
time=0.006..0.006 rows=0 loops=1)
  -  Index Scan Backward using activity_log_facts_pkey on
  activity_log_facts  (cost=0.00..1831613.82 rows=1539298
  width=12) (actual time=0.004..0.004 rows=0 loops=1)
Index Cond: (nlogid > $1)
Filter: ((nlogid IS NOT NULL) AND (dtcreatedate <
'2006-12-18 09:10:00'::timestamp without time zone))
-  Limit  (cost=0.00..3.51 rows=1 width=12) (actual
time=100221.955..100221.955 rows=0 loops=1)
  -  Index Scan using activity_log_facts_dtcreatedate_idx on
  activity_log_facts  (cost=0.00..5406927.50 rows=1539298
  width=12) (actual time=100221.953..100221.953 rows=0 loops=1)
Index Cond: (dtcreatedate < '2006-12-18
09:10:00'::timestamp without time zone)
Filter: ((dtcreatedate IS NOT NULL) AND (nlogid > $1))
-  Limit  (cost=0.00..3.51 rows=1 width=12) (actual
time=56367.367..56367.367 rows=0 loops=1)
  -  Index Scan Backward using
  activity_log_facts_dtcreatedate_idx on activity_log_facts 
  (cost=0.00..5406927.50 rows=1539298 width=12) (actual
  time=56367.364..56367.364 rows=0 loops=1)
Index Cond: (dtcreatedate < '2006-12-18
09:10:00'::timestamp without time zone)
Filter: ((dtcreatedate IS NOT NULL) AND (nlogid > $1))
Total runtime: 156589.605 ms


On Tue, 19 Dec 2006 16:31:41 +, Richard Huxton dev@archonet.com
said:
 Jeremy Haile wrote:
  I have the following query which performs extremely slow:
  select min(nlogid) as start_nlogid,
 max(nlogid) as end_nlogid,
 min(dtCreateDate) as start_transaction_timestamp,
 max(dtCreateDate) as end_transaction_timestamp
  from activity_log_facts 
  where nlogid > ( select max(a.end_nlogid) from
  activity_log_import_history a)
  and dtCreateDate < '2006-12-18 9:10'
 
 Can you post the EXPLAIN ANALYSE for this one please? That'll show us 
 exactly what it's doing.
 
 -- 
Richard Huxton
Archonet Ltd



Re: [PERFORM] Inner join vs where-clause subquery

2006-12-19 Thread Jeremy Haile
Here's the query and explain analyze using the result of the sub-query
substituted: 

QUERY
explain analyze select min(nlogid) as start_nlogid,
   max(nlogid) as end_nlogid,
   min(dtCreateDate) as start_transaction_timestamp,
   max(dtCreateDate) as end_transaction_timestamp
from activity_log_facts
where nlogid > 478287801
and dtCreateDate < '2006-12-18 9:10'

EXPLAIN ANALYZE
Aggregate  (cost=657.37..657.38 rows=1 width=12) (actual
time=0.018..0.019 rows=1 loops=1)
  -  Index Scan using activity_log_facts_nlogid_idx on
  activity_log_facts  (cost=0.00..652.64 rows=472 width=12) (actual
  time=0.014..0.014 rows=0 loops=1)
Index Cond: (nlogid > 478287801)
Filter: (dtcreatedate < '2006-12-18 09:10:00'::timestamp without
time zone)
Total runtime: 0.076 ms


Sorry if the reason should be obvious, but I'm not the best at
interpreting the explains.  Why is this explain so much simpler than the
other query plan (with the subquery)?



On Tue, 19 Dec 2006 18:23:06 +, Richard Huxton dev@archonet.com
said:
 Jeremy Haile wrote:
  Here is the explain analyze output:
 
 Well, the row estimates are about as far out as you can get:
 
-  Index Scan using activity_log_facts_pkey on
activity_log_facts  (cost=0.00..1831613.82 rows=1539298
width=12) (actual time=0.050..0.050 rows=0 loops=1)
 
-  Index Scan Backward using activity_log_facts_pkey on
activity_log_facts  (cost=0.00..1831613.82 rows=1539298
width=12) (actual time=0.004..0.004 rows=0 loops=1)
 
-  Index Scan using activity_log_facts_dtcreatedate_idx on
activity_log_facts  (cost=0.00..5406927.50 rows=1539298
width=12) (actual time=100221.953..100221.953 rows=0 loops=1)
 
-  Index Scan Backward using
activity_log_facts_dtcreatedate_idx on activity_log_facts 
(cost=0.00..5406927.50 rows=1539298 width=12) (actual
time=56367.364..56367.364 rows=0 loops=1)
 
 Hmm - it's using the indexes on dtCreateDate and nlogid which seems 
 broadly sensible, and then plans to limit the results for min()/max(). 
 However, it's clearly wrong about how many rows will satisfy
   nlogid > (select max(a.end_nlogid) from activity_log_import_history a)
 
  select min(nlogid) as start_nlogid,
 max(nlogid) as end_nlogid,
 min(dtCreateDate) as start_transaction_timestamp,
 max(dtCreateDate) as end_transaction_timestamp
  from activity_log_facts 
  where nlogid > ( select max(a.end_nlogid) from
  activity_log_import_history a)
  and dtCreateDate < '2006-12-18 9:10'
 
 If you run explain on the other forms of your query, I'd guess it's much 
 more accurate. There's a simple way to see if that is the issue. Run the 
 sub-query and substitute the actual value returned into the query above. 
 Then, try the same but with a prepared query. If it's down to nlogid 
 estimates then the first should be fast and the second slow.
 
 -- 
Richard Huxton
Archonet Ltd



Re: [PERFORM] Inner join vs where-clause subquery

2006-12-19 Thread Jeremy Haile
Makes sense.  It is NOT executing the subquery more than once, is it?

On Tue, 19 Dec 2006 20:02:35 +, Richard Huxton dev@archonet.com
said:
 Jeremy Haile wrote:
  Here's the query and explain analyze using the result of the sub-query
  substituted: 
  
  QUERY
  explain analyze select min(nlogid) as start_nlogid,
 max(nlogid) as end_nlogid,
 min(dtCreateDate) as start_transaction_timestamp,
 max(dtCreateDate) as end_transaction_timestamp
  from activity_log_facts
  where nlogid > 478287801
  and dtCreateDate < '2006-12-18 9:10'
  
  EXPLAIN ANALYZE
  Aggregate  (cost=657.37..657.38 rows=1 width=12) (actual
  time=0.018..0.019 rows=1 loops=1)
-  Index Scan using activity_log_facts_nlogid_idx on
activity_log_facts  (cost=0.00..652.64 rows=472 width=12) (actual
time=0.014..0.014 rows=0 loops=1)
  Index Cond: (nlogid > 478287801)
  Filter: (dtcreatedate < '2006-12-18 09:10:00'::timestamp without
  time zone)
  Total runtime: 0.076 ms
  
  
  Sorry if the reason should be obvious, but I'm not the best at
  interpreting the explains.  Why is this explain so much simpler than the
  other query plan (with the subquery)?
 
 Because it's planning it with knowledge of what nlogids it's filtering 
 by. It knows it isn't going to get many rows back with nlogid >
 478287801. In your previous explain it thought a large number of rows
 would match and was trying not to sequentially scan the 
 activity_log_facts table.
 
 Ideally, the planner would evaluate the subquery in your original form 
 (it should know it's only getting one row back from max()). Then it 
 could plan the query as above. I'm not sure how tricky that is to do
 though.
 
 -- 
Richard Huxton
Archonet Ltd



Re: [PERFORM] Inner join vs where-clause subquery

2006-12-19 Thread Jeremy Haile
I'm still confused as to why the inner join version ran so much faster
than the where-clause version.  

Here's the inner join query and explain output:
select min(nlogid) as start_nlogid,
   max(nlogid) as end_nlogid,
   min(dtCreateDate) as start_transaction_timestamp,
   max(dtCreateDate) as end_transaction_timestamp
from activity_log_facts
inner join ( select max(end_nlogid) as previous_nlogid from
activity_log_import_history) as a
on activity_log_facts.nlogid > a.previous_nlogid
where dtCreateDate < '2006-12-18 9:10'

Aggregate  (cost=246226.95..246226.96 rows=1 width=12)
  -  Nested Loop  (cost=49233.27..231209.72 rows=1501722 width=12)
-  Result  (cost=0.04..0.05 rows=1 width=0)
  InitPlan
-  Limit  (cost=0.00..0.04 rows=1 width=4)
  -  Index Scan Backward using
  activity_log_import_history_end_nlogid_idx on
  activity_log_import_history  (cost=0.00..114.97
  rows=2913 width=4)
Filter: (end_nlogid IS NOT NULL)
-  Bitmap Heap Scan on activity_log_facts 
(cost=49233.23..210449.44 rows=1660817 width=12)
  Recheck Cond: (activity_log_facts.nlogid >
  a.previous_nlogid)
  Filter: (dtcreatedate < '2006-12-18 09:10:00'::timestamp
  without time zone)
  -  Bitmap Index Scan on activity_log_facts_nlogid_idx 
  (cost=0.00..49233.23 rows=1660817 width=0)
Index Cond: (activity_log_facts.nlogid >
a.previous_nlogid)


Since the inner join is basically the same thing as doing the
where-clause subquery, why does it generate a far different plan?



On Tue, 19 Dec 2006 20:02:35 +, Richard Huxton dev@archonet.com
said:
 Jeremy Haile wrote:
  Here's the query and explain analyze using the result of the sub-query
  substituted: 
  
  QUERY
  explain analyze select min(nlogid) as start_nlogid,
 max(nlogid) as end_nlogid,
 min(dtCreateDate) as start_transaction_timestamp,
 max(dtCreateDate) as end_transaction_timestamp
  from activity_log_facts
  where nlogid > 478287801
  and dtCreateDate < '2006-12-18 9:10'
  
  EXPLAIN ANALYZE
  Aggregate  (cost=657.37..657.38 rows=1 width=12) (actual
  time=0.018..0.019 rows=1 loops=1)
-  Index Scan using activity_log_facts_nlogid_idx on
activity_log_facts  (cost=0.00..652.64 rows=472 width=12) (actual
time=0.014..0.014 rows=0 loops=1)
  Index Cond: (nlogid > 478287801)
  Filter: (dtcreatedate < '2006-12-18 09:10:00'::timestamp without
  time zone)
  Total runtime: 0.076 ms
  
  
  Sorry if the reason should be obvious, but I'm not the best at
  interpreting the explains.  Why is this explain so much simpler than the
  other query plan (with the subquery)?
 
 Because it's planning it with knowledge of what nlogids it's filtering 
 by. It knows it isn't going to get many rows back with nlogid >
 478287801. In your previous explain it thought a large number of rows
 would match and was trying not to sequentially scan the 
 activity_log_facts table.
 
 Ideally, the planner would evaluate the subquery in your original form 
 (it should know it's only getting one row back from max()). Then it 
 could plan the query as above. I'm not sure how tricky that is to do
 though.
 
 -- 
Richard Huxton
Archonet Ltd



[PERFORM] Postgres and Ingres R3 / SAN

2006-03-06 Thread Jeremy Haile
Clustering solutions for PostgreSQL are currently pretty limited.  Slony
could be a good option in the future, but it currently only supports
Master-Slave replication (not true clustering) and in my experience is a
pain to set up and administer.  Bizgres MPP has a lot of promise,
especially for data warehouses, but it currently doesn't have the best
OLTP database performance.  

So, I had a couple of questions:
1) I have heard bad things from people on this list regarding SANs - but
is there a better alternative for a high performance database cluster?
(both for redundancy and performance)  I've heard internal storage
touted before, but then you have to do something like master-master
replication to get horizontal scalability and write performance will
suffer.

2) Has anyone on this list had experience using Ingres R3 in a clustered
environment?  I am considering using Ingres R3's built-in clustering
support with a SAN, but am interested to know other people's experiences
before we start toying with this possibility.  Any experience with the
Ingres support from Computer Associates?  Good/bad?

Jeremy



[PERFORM] Slow query

2006-02-22 Thread Jeremy Haile
I am running a query that joins against several large tables (~5 million
rows each).  The query takes an extremely long time to run, and the
explain output is a bit beyond my level of understanding.  It is an
auto-generated query, so the aliases are fairly ugly.  I can clean them
up (rename them) if it would help.  Also, let me know if I can send any
more information that would help (e.g. table schema).

Also, are there any resources where I can get a better understanding of
what PostgreSQL means when it says "Sort", "Sort Key", "Bitmap Index
Scan", "Hash Cond", etc. - and how to recognize problems by looking at
the output?  I can understand the output for simple queries (e.g. is the
planner using an index or performing a seq. scan), but when you get to
more complex queries like the one below I lose my way =)

I would really appreciate it if someone from this list could tell me if
there is anything that is obviously wrong with the query or schema and
what I could do to improve the performance.

PostgreSQL 8.1
RedHat Enterprise Linux 4

--QUERY
select distinct city4_.region_id as region1_29_, city4_1_.name as
name29_, city4_.state_id as state2_30_ 
from registered_voters registered0_ 
 inner join registered_voter_addresses addresses1_ on
 registered0_.registered_voter_id=addresses1_.registered_voter_id 
 inner join registered_voter_addresses_regions regions2_ on
 addresses1_.address_id=regions2_.registered_voter_addresses_address_id 
 inner join regions region3_ on
 regions2_.regions_region_id=region3_.region_id 
 inner join cities city4_ on
 addresses1_.city_id=city4_.region_id 
 inner join regions city4_1_ on
 city4_.region_id=city4_1_.region_id 
where region3_.region_id='093c44e8-f3b2-4c60-8be3-2b4d148f9f5a' 
order by city4_1_.name


--EXPLAIN/ANALYZE OUTPUT
Unique  (cost=3572907.42..3623589.94 rows=4076438 width=93) (actual
time=2980825.714..3052333.753 rows=1124 loops=1)
  -  Sort  (cost=3572907.42..3585578.05 rows=5068252 width=93) (actual
time=2980825.710..2987407.888 rows=4918204 loops=1)
Sort Key: city4_1_.name, city4_.region_id, city4_.state_id
-  Hash Join  (cost=717783.40..1430640.10 rows=5068252
width=93) (actual time=1400141.559..2016131.467 rows=4918204 loops=1)
  Hash Cond:
((outer.registered_voter_addresses_address_id)::text =
(inner.address_id)::text)
  -  Bitmap Heap Scan on
registered_voter_addresses_regions regions2_  (cost=54794.95..575616.49
rows=5116843 width=80) (actual time=45814.469..155044.478 rows=4918205
loops=1)
Recheck Cond:
('093c44e8-f3b2-4c60-8be3-2b4d148f9f5a'::text =
(regions_region_id)::text)
-  Bitmap Index Scan on
reg_voter_address_region_region_idx  (cost=0.00..54794.95 rows=5116843
width=0) (actual time=45807.157..45807.157 rows=4918205 loops=1)
  Index Cond:
('093c44e8-f3b2-4c60-8be3-2b4d148f9f5a'::text =
(regions_region_id)::text)
  -  Hash  (cost=642308.89..642308.89 rows=741420
width=173) (actual time=1354217.934..1354217.934 rows=4918204 loops=1)
-  Hash Join  (cost=328502.66..642308.89
rows=741420 width=173) (actual time=204565.031..1268303.832 rows=4918204
loops=1)
  Hash Cond:
((outer.registered_voter_id)::text =
(inner.registered_voter_id)::text)
  -  Seq Scan on registered_voters
registered0_  (cost=0.00..173703.02 rows=4873202 width=40) (actual
time=0.005..39364.261 rows=4873167 loops=1)
  -  Hash  (cost=303970.34..303970.34
rows=748528 width=213) (actual time=204523.861..204523.861 rows=4918204
loops=1)
-  Hash Join  (cost=263.22..303970.34
rows=748528 width=213) (actual time=101.628..140936.062 rows=4918204
loops=1)
  Hash Cond:
((outer.city_id)::text = (inner.region_id)::text)
  -  Seq Scan on
registered_voter_addresses addresses1_  (cost=0.00..271622.23
rows=4919923 width=120) (actual time=0.025..98416.667 rows=4918205
loops=1)
  -  Hash  (cost=260.35..260.35
rows=1147 width=173) (actual time=101.582..101.582 rows=1147 loops=1)
-  Hash Join 
(cost=48.80..260.35 rows=1147 width=173) (actual time=88.608..98.984
rows=1147 loops=1)
  Hash Cond:
((outer.region_id)::text = (inner.region_id)::text)
  -  Seq Scan on
regions city4_1_  (cost=0.00..162.39 rows=7539 width=53) (actual
time=0.048..35.204 rows=7539 loops=1)
  -  Hash 
(cost=45.93..45.93 rows=1147 width=120) (actual time=48.896..48.896
rows=1147 loops=1)
-  Nested Loop
 (cost=0.00..45.93 rows=1147 width=120) (actual time=35.791..47.012
rows=1147 

[PERFORM] Reliability recommendations

2006-02-15 Thread Jeremy Haile
We are a small company looking to put together the most cost effective
solution for our production database environment.  Currently in
production Postgres 8.1 is running on this machine:

Dell 2850
2 x 3.0 Ghz Xeon 800Mhz FSB 2MB Cache
4 GB DDR2 400 Mhz
2 x 73 GB 10K SCSI RAID 1 (for xlog and OS)
4 x 146 GB 10K SCSI RAID 10 (for postgres data)
Perc4ei controller

The above is a standard Dell box with nothing added or modified beyond
the options available directly through Dell. We had a bad processor last
week that effectively put us down for an entire weekend. Though it was
the web server that failed, the experience has caused us to step back
and spend time coming up with a more reliable/fail-safe solution that
can reduce downtime.

Our load won't be substantial so extreme performance and load balancing
are not huge concerns. We are looking for good performance, at a good
price, configured in the most redundant, high availability manner
possible. Availability is the biggest priority.

I sent our scenario to our sales team at Dell and they came back with
all manner of SAN, DAS, and configuration options costing as much as $50k.

We have the budget to purchase 2-3 additional machines along the lines
of the one listed above. As a startup with a limited budget, what would
this list suggest as options for clustering/replication or setting our
database up well in general?



Re: [PERFORM] opinion on disk speed

2005-12-09 Thread Jeremy Haile
 one other note, you probably don't want to use all the disks in a raid10 
 array, you probably want to split a pair of them off into a separate
 raid1  array and put your WAL on it.

Is a RAID 1 array of two disks sufficient for WAL?  What's a typical
setup for a high performance PostgreSQL installation?  RAID 1 for WAL
and RAID 10 for data?  

I've read that splitting the WAL and data offers huge performance
benefits.  How much additional benefit is gained by moving indexes to
another RAID array?  Would you typically set the indexes RAID array up
as RAID 1 or 10?  





[PERFORM] Insert performance slows down in large batch

2005-12-01 Thread Jeremy Haile
I am importing roughly 15 million rows in one batch transaction.  I am
currently doing this through batch inserts of around 500 at a time,
although I am looking at ways to do this via multiple (one-per-table)
copy commands for performance reasons.  
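
For what it's worth, this is the shape of the per-table COPY I have in
mind.  It is only a sketch - the table name, column list, and file path
are placeholders, not my real schema:

-- server-side COPY: the file must be readable on the database server
-- and the session must be a superuser
COPY my_fact_table (id, parent_id, created_at, amount)
    FROM '/tmp/my_fact_table.tsv';

-- from a client machine, the psql equivalent would be:
-- \copy my_fact_table (id, parent_id, created_at, amount) from 'my_fact_table.tsv'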

I am currently running: PostgreSQL 8.0.4, Redhat Enterprise Linux 4,
ext3, all-on-one partition.  I am aware of methods of improving
performance by changing ext3 mounting options, splitting WAL, data, and
indexes to separate physical disks, etc.  I have also adjusted my
shared_buffers, work_mem, maintenance_work_mem, and checkpoint_segments
and can post their values if anyone thinks it is relevant to my question
(See questions at the bottom)  
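
For reference, if they're wanted, I'd just pull them straight from
pg_settings (a sketch; plain SHOW works for each one individually too):

SELECT name, setting
FROM pg_settings
WHERE name IN ('shared_buffers', 'work_mem',
               'maintenance_work_mem', 'checkpoint_segments');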

What confuses me is that at the beginning of the import, I am inserting
roughly 25,000 rows every 7 seconds...and by the time I get towards the
end of the import, it is taking 145 seconds for the same number of rows.
The inserts are spread across 4 tables and I have dropped all indexes
and constraints on these tables, including foreign keys, unique keys,
and even primary keys (even though I think the primary key doesn't
affect insert performance).  The entire bulk import is done in a single
transaction.

The result is a table with 4.8 million rows, two tables with 4.8*2
million rows, and another table with several thousand rows.

So, my questions are:
1) Why does the performance degrade as the table sizes grow?  Shouldn't
the insert performance remain fairly constant if there are no indexes or
constraints?  

2) Is there anything I can do to figure out where the time is being
spent?  Will postgres log any statistics or information to help me
diagnose the problem?  I have pasted a fairly representative sample of
vmstat below my e-mail in case it helps, although I'm not quite sure how
to interpret it in this case.

3) Any other advice, other than the things I listed above (I am aware of
using copy, ext3 tuning, multiple disks, tuning postgresql.conf
settings)? 

Thanks in advance,
Jeremy Haile


#vmstat 2 20
procs ---memory-- ---swap-- -io --system--
cpu
 r  b   swpd   free   buff  cache   si   sobibo   incs us sy
 id wa
 1  0   9368   4416   2536 177878400   124513 2  2 
 0 96  2
 1  0   9368   4416   2536 177878400 0 0 100553 25 
 0 75  0
 1  1   9368   3904   2544 177932000 12164 6 1103   262 24 
 1 59 16
 1  0   9368   3704   2552 177938000 1625624 1140   344 23 
 1 53 23
 1  1   9368   2936   2560 178012000 16832 6 1143   359 23 
 1 52 24
 1  1   9368   3328   2560 177971200 13120 0    285 24 
 1 58 18
 1  0   9368   4544   2560 177855600  5184 0 1046   141 25 
 0 67  8
 1  1   9368   3776   2568 177929600  7296 6 1064   195 24 
 0 67  9
 1  0   9368   4480   2568 177854800  4096 0 1036   133 24 
 0 69  6
 1  0   9368   4480   2576 177860800  7504 0 1070   213 23 
 0 67 10
 1  0   9368   3136   2576 177990000  9536 0 1084   235 23 
 0 66 10
 1  1   9368   3072   2584 177996000 13632 6 1118   313 24 
 1 60 16
 1  0   9368   4480   2592 177859200  857624 1075   204 24 
 0 63 12
 1  0   9368   4480   2592 177859200 0 6 100452 25 
 0 75  0
 1  0   9368   4544   2600 177865200 0 6 100555 25 
 0 75  0
 1  1   9368   3840   2600 177933200 11264 4 1098   260 24 
 0 63 13
 1  1   9368   3072   2592 178015600 1708814 1145   346 24 
 1 51 24
 1  1   9368   4096   2600 177912800 16768 6 1140   360 23 
 1 54 21
 1  1   9368   3840   2600 177933200 16960 0 1142   343 24 
 1 54 22
 1  0   9368   3436   2596 177967600 16960 0 1142   352 24 
 1 53 23
