Re: [PERFORM] 100x slowdown for nearly identical tables

2013-05-01 Thread Tom Lane
Craig James  writes:
> On Wed, May 1, 2013 at 5:18 PM, Tom Lane  wrote:
>> It looks like old_str_conntab is more or less clustered by "id",
>> and str_conntab not so much.  You could try EXPLAIN (ANALYZE, BUFFERS)
>> (on newer PG versions) to verify how many distinct pages are getting
>> touched during the indexscan.

> Yeah, now that you say it, it's obvious.  The original table was built with
> ID from a sequence, so it's going to be naturally clustered by ID.  The new
> table was built by reloading the data in alphabetical order by supplier
> name, so it would have scattered the IDs all over the place.

> I guess I could actually cluster the new table, but since that one table
> holds about 90% of the total data in the database, that would be a chore.
> Probably better to find a more efficient way to do the query.

Just out of curiosity, you could try forcing a bitmap indexscan to see
how much that helps.  The planner evidently thinks "not at all", but
it's been wrong before ;-)
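
One way to do that (a sketch, session-local; note the reporter's 8.4 only has plain EXPLAIN ANALYZE, while the BUFFERS option needs 9.0+):

```sql
BEGIN;
-- Disabling plain index scans usually pushes the planner to a
-- bitmap index scan for a range predicate like this one.
SET LOCAL enable_indexscan = off;
EXPLAIN ANALYZE
SELECT id, 1 FROM str_conntab
WHERE id >= 12009977 AND id <= 12509976
ORDER BY id;
ROLLBACK;  -- SET LOCAL dies with the transaction
```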

regards, tom lane


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] In progress INSERT wrecks plans on table

2013-05-01 Thread Mark Kirkwood

On 02/05/13 02:06, Tom Lane wrote:

> Mark Kirkwood  writes:
>> I am concerned that the deafening lack of any replies to my original
>> message is a result of folk glancing at your original quick reply and
>> thinking... incomplete problem spec...ignore... when that is not the
>> case - yes I should have muttered "9.2" in the original email, but we
>> have covered that now.
>
> No, I think it's more that we're trying to get to beta, and so anything
> that looks like new development is getting shuffled to folks' "to
> look at later" queues.  The proposed patch is IMO a complete nonstarter
> anyway; but I'm not sure what a less bogus solution would look like.



Yeah, I did think that beta might be consuming everyone's attention (of 
course immediately *after* sending the email)!


And yes, the patch was merely to illustrate the problem rather than any 
serious attempt at a solution.


Regards

Mark





Re: [PERFORM] 100x slowdown for nearly identical tables

2013-05-01 Thread Craig James
On Wed, May 1, 2013 at 5:18 PM, Tom Lane  wrote:

> Craig James  writes:
> > I have two tables that are nearly identical, yet the same query runs 100x
> > slower on the newer one. ...
>
> > db=> explain analyze select id, 1 from str_conntab
> > where (id >= 12009977 and id <= 12509976) order by id;
>
> >  Index Scan using new_str_conntab_pkey_3217 on str_conntab
> >   (cost=0.00..230431.33 rows=87827 width=4)
> >   (actual time=65.771..51341.899 rows=48613 loops=1)
> >Index Cond: ((id >= 12009977) AND (id <= 12509976))
> >  Total runtime: 51350.556 ms
>
> > db=> explain analyze select id, 1 from old_str_conntab
> > where (id >= 12009977 and id <= 12509976) order by id;
>
> >  Index Scan using str_conntab_pkey on old_str_conntab
> >  (cost=0.00..82262.56 rows=78505 width=4)
> >  (actual time=38.327..581.235 rows=48725 loops=1)
> >Index Cond: ((id >= 12009977) AND (id <= 12509976))
> >  Total runtime: 586.071 ms
>
> It looks like old_str_conntab is more or less clustered by "id",
> and str_conntab not so much.  You could try EXPLAIN (ANALYZE, BUFFERS)
> (on newer PG versions) to verify how many distinct pages are getting
> touched during the indexscan.
>

Yeah, now that you say it, it's obvious.  The original table was built with
ID from a sequence, so it's going to be naturally clustered by ID.  The new
table was built by reloading the data in alphabetical order by supplier
name, so it would have scattered the IDs all over the place.

I guess I could actually cluster the new table, but since that one table
holds about 90% of the total data in the database, that would be a chore.
Probably better to find a more efficient way to do the query.
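
For reference, clustering the new table would look something like the following sketch - though CLUSTER takes an ACCESS EXCLUSIVE lock and rewrites the whole 20 GB table, so it is indeed a chore:

```sql
-- Rewrite str_conntab in primary-key order, then refresh statistics.
CLUSTER str_conntab USING new_str_conntab_pkey_3217;
ANALYZE str_conntab;
```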

Thanks,
Craig


>
> regards, tom lane
>


Re: [PERFORM] 100x slowdown for nearly identical tables

2013-05-01 Thread Tom Lane
Craig James  writes:
> I have two tables that are nearly identical, yet the same query runs 100x
> slower on the newer one. ...

> db=> explain analyze select id, 1 from str_conntab
> where (id >= 12009977 and id <= 12509976) order by id;

>  Index Scan using new_str_conntab_pkey_3217 on str_conntab
>   (cost=0.00..230431.33 rows=87827 width=4)
>   (actual time=65.771..51341.899 rows=48613 loops=1)
>Index Cond: ((id >= 12009977) AND (id <= 12509976))
>  Total runtime: 51350.556 ms

> db=> explain analyze select id, 1 from old_str_conntab
> where (id >= 12009977 and id <= 12509976) order by id;

>  Index Scan using str_conntab_pkey on old_str_conntab
>  (cost=0.00..82262.56 rows=78505 width=4)
>  (actual time=38.327..581.235 rows=48725 loops=1)
>Index Cond: ((id >= 12009977) AND (id <= 12509976))
>  Total runtime: 586.071 ms

It looks like old_str_conntab is more or less clustered by "id",
and str_conntab not so much.  You could try EXPLAIN (ANALYZE, BUFFERS)
(on newer PG versions) to verify how many distinct pages are getting
touched during the indexscan.
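
On 9.0 or later, that check would look like this (a sketch using the query from the report):

```sql
EXPLAIN (ANALYZE, BUFFERS)
SELECT id, 1 FROM str_conntab
WHERE id >= 12009977 AND id <= 12509976
ORDER BY id;
-- The "Buffers: shared hit=... read=..." lines show how many pages the
-- index scan touched; a scattered table touches far more distinct pages.
```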

regards, tom lane




[PERFORM] 100x slowdown for nearly identical tables

2013-05-01 Thread Craig James
I have two tables that are nearly identical, yet the same query runs 100x
slower on the newer one.  The two tables have the same number of rows (+/-
about 1%), and are roughly the same size:

db=> SELECT relname AS table_name,
db-> pg_size_pretty(pg_relation_size(oid)) AS table_size,
db-> pg_size_pretty(pg_total_relation_size(oid)) AS total_size
db-> FROM pg_class
db-> WHERE relkind in ('r','i')
db-> ORDER BY pg_relation_size(oid) DESC;
    table_name   | table_size | total_size
-----------------+------------+------------
 old_str_conntab | 26 GB      | 27 GB
 str_conntab     | 20 GB      | 20 GB

Both tables have a single index, the primary key.  The new table has
several more columns, but they're mostly empty (note that the new table is
SMALLER, yet it is 100x slower).

I've already tried "reindex table ..." and "analyze table".  No difference.

This is running on PG 8.4.17 and Ubuntu 10.04.  Data is in a RAID10 (8
disks), and WAL is on a RAID1, both controlled by an LSI 3WARE  9650SE-12ML
with BBU.

If I re-run the same query, both the old and new tables drop to about 35
msec.  But the question is, why is the initial query so fast on the old
table, and so slow on the new table?  I have three other servers with
similar or identical hardware/software, and this happens on all of them,
including on a 9.1.2 version of Postgres.

Thanks in advance...
Craig


db=> explain analyze select id, 1 from str_conntab
where (id >= 12009977 and id <= 12509976) order by id;

 Index Scan using new_str_conntab_pkey_3217 on str_conntab
  (cost=0.00..230431.33 rows=87827 width=4)
  (actual time=65.771..51341.899 rows=48613 loops=1)
   Index Cond: ((id >= 12009977) AND (id <= 12509976))
 Total runtime: 51350.556 ms

db=> explain analyze select id, 1 from old_str_conntab
where (id >= 12009977 and id <= 12509976) order by id;

 Index Scan using str_conntab_pkey on old_str_conntab
 (cost=0.00..82262.56 rows=78505 width=4)
 (actual time=38.327..581.235 rows=48725 loops=1)
   Index Cond: ((id >= 12009977) AND (id <= 12509976))
 Total runtime: 586.071 ms

db=> \d str_conntab
          Table "registry.str_conntab"
      Column      |  Type   | Modifiers
------------------+---------+-----------
 id               | integer | not null
 contab_len       | integer |
 contab_data      | text    |
 orig_contab_len  | integer |
 orig_contab_data | text    |
 normalized       | text    |
Indexes:
    "new_str_conntab_pkey_3217" PRIMARY KEY, btree (id)
Referenced by:
    TABLE "parent" CONSTRAINT "fk_parent_str_conntab_parent_id_3217"
        FOREIGN KEY (parent_id) REFERENCES str_conntab(id)
    TABLE "version" CONSTRAINT "fk_version_str_conntab_version_id_3217"
        FOREIGN KEY (version_id) REFERENCES str_conntab(id)

db=> \d old_str_conntab
  Table "registry.old_str_conntab"
   Column    |  Type   | Modifiers
-------------+---------+-----------
 id          | integer | not null
 contab_len  | integer |
 contab_data | text    |
Indexes:
    "str_conntab_pkey" PRIMARY KEY, btree (id)
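
One cheap way to check how well each table is physically ordered by id is to look at the planner's correlation statistic - near +/-1 means the heap is stored roughly in id order, near 0 means the ids are scattered (a sketch):

```sql
SELECT tablename, attname, correlation
FROM pg_stats
WHERE tablename IN ('str_conntab', 'old_str_conntab')
  AND attname = 'id';
```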


Re: [PERFORM] [BUGS] BUG #8130: Hashjoin still gives issues

2013-05-01 Thread Jeff Davis
On Wed, 2013-05-01 at 17:44 +0200, Stefan de Konink wrote:
> Combined with the recent bugfix regarding hash 
> estimation, it gives me a good indication that there might be a bug.

To which recent bugfix are you referring?

The best venue for fixing an issue like this is pgsql-performance -- it
doesn't make too much difference whether it's a "bug" or not.
Performance problems sometimes end up as bugs and sometimes end up being
treated more like an enhancement; but most of the progress is made on
pgsql-performance regardless.

Regards,
Jeff Davis







Re: [PERFORM] [BUGS] BUG #8130: Hashjoin still gives issues

2013-05-01 Thread Igor Neyman


> -Original Message-
> 

> 
> The original query:
> 
> select * from ambit_privateevent_calendars as a, ambit_privateevent as
> b, ambit_calendarsubscription as c, ambit_calendar as d where
> c.calendar_id = d.id and a.privateevent_id = b.id and c.user_id = 1270
> and  c.calendar_id = a.calendar_id and c.STATUS IN (1, 8, 2, 15, 18, 4,
> 12, 20) and not b.main_recurrence = true;
> 
> select b.id from ambit_privateevent_calendars as a, ambit_privateevent
> as b, ambit_calendarsubscription as c, ambit_calendar as d where
> c.calendar_id = d.id and a.privateevent_id = b.id and c.user_id = 1270
> and  c.calendar_id = a.calendar_id and c.STATUS IN (1, 8, 2, 15, 18, 4,
> 12, 20) and not b.main_recurrence = true;
> 
> (select * => select b.id, the star query is *fastest*)
> 
> We compare:
> http://explain.depesz.com/s/jRx
> http://explain.depesz.com/s/eKE
> 
> 
> By setting "set enable_hashjoin = off;" throughput in our entire
> application increased 30-fold, which was a bit unexpected
> but highly appreciated. The plan of the last query switched to a
> mergejoin:
> 
> http://explain.depesz.com/s/AWB
> 
> It is also visible that after hashjoin is off, the b.id query is faster
> than the * query (as would be expected).
> 
> 
> Our test machine is overbudgeted, with 4x the memory of the entire database
> (~4GB), and uses the stock PostgreSQL settings.
> 
> 
> Stefan
> 

I'd suggest that you adjust the Postgres configuration, specifically the memory 
settings (shared_buffers, work_mem, effective_cache_size), to reflect your 
hardware config, and see how it affects your query.
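
As a rough illustration only - the right values depend entirely on the hardware and workload - a postgresql.conf fragment might look like:

```
# Hypothetical starting points for a dedicated box with ~16 GB RAM.
shared_buffers = 4GB           # often ~25% of RAM
work_mem = 32MB                # per sort/hash node, per backend - be careful
effective_cache_size = 12GB    # planner hint: shared_buffers + OS cache
```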

Regards,
Igor Neyman




Re: [PERFORM] Deterioration in performance when query executed in multi threads

2013-05-01 Thread Igor Neyman


> -Original Message-
> From: pgsql-performance-ow...@postgresql.org [mailto:pgsql-performance-
> ow...@postgresql.org] On Behalf Of Anne Rosset
> Sent: Wednesday, May 01, 2013 1:10 PM
> To: k...@rice.edu
> Cc: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] Deterioration in performance when query executed
> in multi threads
> 
> Thanks Ken. I am going to test with different pool sizes and see if I
> see any improvements.
> Are there other configuration options I should look at? I was
> thinking of playing with shared_buffers.
> 
> Thanks,
> Anne
> 
> -Original Message-
> From: k...@rice.edu [mailto:k...@rice.edu]
> Sent: Wednesday, May 01, 2013 9:27 AM
> To: Anne Rosset
> Cc: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] Deterioration in performance when query executed
> in multi threads
> 
> On Wed, May 01, 2013 at 04:07:55PM +, Anne Rosset wrote:
> > Hi Ken,
> > Thanks for your answer. My test is actually running with jboss 7/jdbc
> and the connection pool is defined  with min-pool-size =10 and max-
> pool-size=400.
> >
> > Why would you think it is an issue with the connection pool?
> >
> > Thanks,
> > Anne
> >
> 
> Hi Anne,
> 
> You want to be able to run as many jobs productively at once as your
> hardware is capable of supporting. Usually something starting at 2 x the
> number of CPUs is best.
> If you make several runs increasing the size of the pool each time, you
> will see a maximum throughput somewhere near there and then the
> performance will decrease as you add more and more connections. You can
> then use that sweet spot.
> Your test harness should make that pretty easy to find.
> 
> Regards,
> Ken
> 
> 

Anne,

Before expecting advice on specific changes to Postgres configuration 
parameters, you should provide this list with your hardware configuration, 
Postgres version, and your current Postgres configuration parameters (at least 
those that changed from the defaults).
And, if you are testing with a specific query, it would be nice if you provided 
the results of:

EXPLAIN ANALYZE <your query>;

along with the definitions of the database objects (tables, indexes) involved 
in this select.

Also, you mention a client-side connection pooler.  In my experience, server-side 
poolers, such as the PgBouncer mentioned earlier, are much more effective.
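
For illustration, a minimal PgBouncer setup might look like this (hypothetical names, paths, and values):

```ini
; pgbouncer.ini sketch - clients connect to port 6432 instead of 5432
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
default_pool_size = 20     ; keeps actual server connections bounded
max_client_conn = 400      ; the application may still open many clients
```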

Regards,
Igor Neyman





Re: [PERFORM] Deterioration in performance when query executed in multi threads

2013-05-01 Thread Anne Rosset
Thanks Ken. I am going to test with different pool sizes and see if I see any 
improvements.
Are there other configuration options I should look at? I was thinking of 
playing with shared_buffers.

Thanks,
Anne

-Original Message-
From: k...@rice.edu [mailto:k...@rice.edu] 
Sent: Wednesday, May 01, 2013 9:27 AM
To: Anne Rosset
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Deterioration in performance when query executed in 
multi threads

On Wed, May 01, 2013 at 04:07:55PM +, Anne Rosset wrote:
> Hi Ken,
> Thanks for your answer. My test is actually running with jboss 7/jdbc and the 
> connection pool is defined  with min-pool-size =10 and max-pool-size=400.
> 
> Why would you think it is an issue with the connection pool?
> 
> Thanks,
> Anne
> 

Hi Anne,

You want to be able to run as many jobs productively at once as your hardware 
is capable of supporting. Usually something starting at 2 x the number of CPUs 
is best.
If you make several runs increasing the size of the pool each time, you will 
see a maximum throughput somewhere near there and then the performance will 
decrease as you add more and more connections. You can then use that sweet spot.
Your test harness should make that pretty easy to find.

Regards,
Ken




Re: [PERFORM] Deterioration in performance when query executed in multi threads

2013-05-01 Thread Scott Marlowe
On Wed, May 1, 2013 at 10:26 AM, k...@rice.edu  wrote:
> On Wed, May 01, 2013 at 04:07:55PM +, Anne Rosset wrote:
>> Hi Ken,
>> Thanks for your answer. My test is actually running with jboss 7/jdbc and 
>> the connection pool is defined  with min-pool-size =10 and max-pool-size=400.
>>
>> Why would you think it is an issue with the connection pool?
>>
>> Thanks,
>> Anne
>>
>
> Hi Anne,
>
> You want to be able to run as many jobs productively at once as your hardware 
> is
> capable of supporting. Usually something starting at 2 x the number of CPUs
> is best.
> If you make several runs increasing the size of the pool each time, you will
> see a maximum throughput somewhere near there and then the performance will
> decrease as you add more and more connections. You can then use that sweet 
> spot.
> Your test harness should make that pretty easy to find.

Here's a graph of tps from pgbench on a 48 core / 32 drive battery
backed cache RAID machine:
https://plus.google.com/u/0/photos/117090950881008682691/albums/5537418842370875697/5537418902326245874
Note that on that machine, the peak is between 40 and 50 clients at once.
Note also the asymptote levelling off at 2800tps. This is a good
indication of how the machine will behave if overloaded / connection
pooling goes crazy etc.
So yeah I suggest Anne do what you're saying and chart it. It should
be obvious where the sweet spot is.
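
The sweep-and-chart idea can be sketched in a few lines; the numbers below are made up to mirror the shape of the graph (peak near 48 clients, throughput degrading under overload):

```python
# Hypothetical (clients, tps) pairs from a pgbench sweep such as:
#   pgbench -c $CLIENTS -j 4 -T 60 mydb
measurements = [
    (8, 900), (16, 1700), (32, 2500), (40, 2750),
    (48, 2800), (64, 2650), (100, 2300), (200, 1800),
]

def sweet_spot(samples):
    """Return the client count that gave the highest observed throughput."""
    return max(samples, key=lambda s: s[1])[0]

print(sweet_spot(measurements))
```

With real measurements, charting tps against client count makes the sweet spot (and the drop-off past it) obvious at a glance.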




Re: [PERFORM] Deterioration in performance when query executed in multi threads

2013-05-01 Thread k...@rice.edu
On Wed, May 01, 2013 at 04:07:55PM +, Anne Rosset wrote:
> Hi Ken,
> Thanks for your answer. My test is actually running with jboss 7/jdbc and the 
> connection pool is defined  with min-pool-size =10 and max-pool-size=400.
> 
> Why would you think it is an issue with the connection pool?
> 
> Thanks,
> Anne
> 

Hi Anne,

You want to be able to run as many jobs productively at once as your hardware is
capable of supporting. Usually something starting at 2 x the number of CPUs is best.
If you make several runs increasing the size of the pool each time, you will
see a maximum throughput somewhere near there and then the performance will
decrease as you add more and more connections. You can then use that sweet spot.
Your test harness should make that pretty easy to find.

Regards,
Ken




Re: [PERFORM] Deterioration in performance when query executed in multi threads

2013-05-01 Thread Anne Rosset
Hi Ken,
Thanks for your answer. My test is actually running with JBoss 7/JDBC and the 
connection pool is defined with min-pool-size=10 and max-pool-size=400.

Why would you think it is an issue with the connection pool?

Thanks,
Anne


-Original Message-
From: k...@rice.edu [mailto:k...@rice.edu] 
Sent: Wednesday, May 01, 2013 7:13 AM
To: Anne Rosset
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Deterioration in performance when query executed in 
multi threads

On Wed, May 01, 2013 at 02:05:06PM +, Anne Rosset wrote:
> Hi all,
> We are running a stress test that executes one select query with multiple 
> threads.
> The query executes very fast (10ms). It returns 100 rows.  I see 
> deterioration in the performance when we have multiple threads executing the 
> query. With 100 threads, the query takes between 3s and 8s.
> 
> I suppose there is a way to tune our database. What are the parameters 
> I should look into? (shared_buffers?, wal_buffers?)
> 
> Thanks for your help,
> Anne

Try a connection pooler like pgbouncer to keep the number of simultaneous 
queries bounded to a reasonable number. You will actually get better 
performance.

Regards,
Ken




Re: [PERFORM] [BUGS] BUG #8130: Hashjoin still gives issues

2013-05-01 Thread Stefan de Konink

Dear Tom,


On Wed, 1 May 2013, Tom Lane wrote:
>> What can we do to provide a bit more information?
>
> https://wiki.postgresql.org/wiki/Slow_Query_Questions
>
> There is no particularly good reason to think this is a bug; please
> take it up on pgsql-performance if you have more questions.


I beg to disagree: the performance of the select * query and the select b.id 
query are both "hot". The result is a fundamentally different query plan 
(and performance). Combined with the recent bugfix regarding hash 
estimation, it gives me a good indication that there might be a bug.


I am not deep into the PostgreSQL query optimiser, but the above shows that 
different select lists can change the entire query plan (and * is, out of 
the box, in fact 30 times faster than b.id). When hashjoin is disabled, the 
entire query is - depending on the system checked - 2 to 30x faster.
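
As a side note, if this workaround is kept, scoping it per session is safer than changing it globally - enable_hashjoin is a diagnostic knob, not a tuning parameter (a sketch):

```sql
SET enable_hashjoin = off;   -- affects only the current session
-- ... run and time the query here ...
RESET enable_hashjoin;       -- restore the default afterwards
```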



The original query:

select * from ambit_privateevent_calendars as a, ambit_privateevent as b, 
ambit_calendarsubscription as c, ambit_calendar as d where c.calendar_id = 
d.id and a.privateevent_id = b.id and c.user_id = 1270 and  c.calendar_id 
= a.calendar_id and c.STATUS IN (1, 8, 2, 15, 18, 4, 12, 20) and not 
b.main_recurrence = true;


select b.id from ambit_privateevent_calendars as a, ambit_privateevent as 
b, ambit_calendarsubscription as c, ambit_calendar as d where c.calendar_id = 
d.id and a.privateevent_id = b.id and c.user_id = 1270 and  c.calendar_id 
= a.calendar_id and c.STATUS IN (1, 8, 2, 15, 18, 4, 12, 20) and not 
b.main_recurrence = true;


(select * => select b.id, the star query is *fastest*)

We compare:
http://explain.depesz.com/s/jRx
http://explain.depesz.com/s/eKE


By setting "set enable_hashjoin = off;" throughput in our entire
application increased 30-fold, which was a bit unexpected 
but highly appreciated. The plan of the last query switched to a mergejoin:


http://explain.depesz.com/s/AWB

It is also visible that with hashjoin off, the b.id query is faster 
than the * query (as would be expected).



Our test machine is overbudgeted, with 4x the memory of the entire database 
(~4GB), and uses the stock PostgreSQL settings.



Stefan




Re: [PERFORM] Deterioration in performance when query executed in multi threads

2013-05-01 Thread k...@rice.edu
On Wed, May 01, 2013 at 02:05:06PM +, Anne Rosset wrote:
> Hi all,
> We are running a stress test that executes one select query with multiple 
> threads.
> The query executes very fast (10ms). It returns 100 rows.  I see 
> deterioration in the performance when we have multiple threads executing the 
> query. With 100 threads, the query takes between 3s and 8s.
> 
> I suppose there is a way to tune our database. What are the parameters I 
> should look into? (shared_buffers?, wal_buffers?)
> 
> Thanks for your help,
> Anne

Try a connection pooler like pgbouncer to keep the number of simultaneous 
queries
bounded to a reasonable number. You will actually get better performance.

Regards,
Ken




Re: [PERFORM] In progress INSERT wrecks plans on table

2013-05-01 Thread Tom Lane
Mark Kirkwood  writes:
> I am concerned that the deafening lack of any replies to my original 
> message is a result of folk glancing at your original quick reply and 
> thinking... incomplete problem spec...ignore... when that is not the 
> case - yes I should have muttered "9.2" in the original email, but we 
> have covered that now.

No, I think it's more that we're trying to get to beta, and so anything
that looks like new development is getting shuffled to folks' "to
look at later" queues.  The proposed patch is IMO a complete nonstarter
anyway; but I'm not sure what a less bogus solution would look like.

regards, tom lane




[PERFORM] Deterioration in performance when query executed in multi threads

2013-05-01 Thread Anne Rosset
Hi all,
We are running a stress test that executes one select query with multiple 
threads.
The query executes very fast (10ms). It returns 100 rows.  I see deterioration 
in the performance when we have multiple threads executing the query. With 100 
threads, the query takes between 3s and 8s.

I suppose there is a way to tune our database. What are the parameters I should 
look into? (shared_buffers?, wal_buffers?)




Thanks for your help,
Anne


Re: [PERFORM] In progress INSERT wrecks plans on table

2013-05-01 Thread Mark Kirkwood

On 26/04/13 15:34, Gavin Flower wrote:
> On 26/04/13 15:19, Mark Kirkwood wrote:
>> While in general you are quite correct - in the above case
>> (particularly as I've supplied a test case) it should be pretty
>> obvious that any moderately modern version of postgres on any
>> supported platform will exhibit this.
>
> While I admit that I did not look closely at your test case - I am aware
> that several times changes to Postgres from one minor version to
> another, can have drastic unintended side effects (which might, or might
> not, be relevant to your situation). Besides, it helps set the scene,
> and is one less thing that needs to be deduced.


Indeed - however, my perhaps slightly grumpy reply to your email was 
based on an impression of over-keenness to dismiss my message without 
reading it (!) and a - dare I say it - one-size-fits-all presentation of 
"here are the hoops to jump through". Now I spent a reasonable amount of 
time preparing the message and its attendant test case - and a comment 
such as yours, based on *not reading it*... errrm... well, let's say I think 
we can/should do better.


I am concerned that the deafening lack of any replies to my original 
message is a result of folk glancing at your original quick reply and 
thinking... incomplete problem spec...ignore... when that is not the 
case - yes I should have muttered "9.2" in the original email, but we 
have covered that now.


Regards


Mark

