Re: [PERFORM] slow plan for min/max

2003-09-09 Thread Josh Berkus
Greg,

 The only connection to MVCC is that the obvious solution doesn't work,
 namely storing a cache of the aggregate in the table information.

Well, that solution also doesn't work if you use a WHERE condition or JOIN, 
now does it?
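
For illustration (hypothetical table and column names): a cached table-wide
value can answer the first query below, but not the second, because the
maximum now depends on the predicate:

 -- answerable from a per-table cache of max(id)
 SELECT max(id) FROM orders;

 -- not answerable from that cache
 SELECT max(id) FROM orders WHERE customer_id = 42;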

 So what would it take to implement this for all aggregates? Where I think
 all really just means min(), max(), first(), last().

Um, what the heck are first() and last()?   These are not supported aggregates 
... table rows are *not* ordered.

 For min() and max() it would have to indicate not only that only the first
 or last record is necessary but also the sort order to impose.

I think Tom already suggested this based on adding a field to CREATE 
AGGREGATE.  But I think implementation isn't as simple as you think it is.
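
As a sketch only -- this is not syntax 7.3 supports, just an illustration of
what tying a sort operator to an aggregate definition might look like:

 CREATE AGGREGATE my_max (
     BASETYPE = int4,
     SFUNC = int4larger,
     STYPE = int4,
     SORTOP = >   -- hypothetical option: the ordering that would let the
                  -- planner answer the aggregate with one index probe
 );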

 Now the problem I see is if there's no index on the sort order imposed, and
 the previous step wasn't a merge join or something else that would return
 the records in order then it's not necessarily any faster to sort the
 records and return only some. It might be for small numbers of records, but
 it might be faster to just read them all in and check each one for min/max
 the linear way.

Yes, Tom mentioned this also.  Working out the rules whereby the planner could 
decide the viability of index use is a non-trivial task.

-- 
Josh Berkus
Aglio Database Solutions
San Francisco



Re: [PERFORM] slow plan for min/max

2003-09-09 Thread Thomas Swan
Tom Lane wrote:

 scott.marlowe [EMAIL PROTECTED] writes:
  On Mon, 8 Sep 2003, Neil Conway wrote:
   As was pointed out in a thread a couple days ago, MIN/MAX() optimization
   has absolutely nothing to do with MVCC. It does, however, make
   optimizing COUNT() more difficult.

  Not exactly.  While max(id) is easily optimized by query replacement,
  more complex aggregates will still have performance issues that would not
  be present in a row locking database.  i.e. max((field1/field2)*field3) is
  still going to cost more to process, isn't it?

 Er, what makes you think that would be cheap in any database?

 Postgres would actually have an advantage given its support for
 expressional indexes (nee functional indexes).  If we had an optimizer
 transform to convert MAX() into an index scan, I would expect it to be
 able to match up max((field1/field2)*field3) with an index on
 ((field1/field2)*field3).

Would it be possible to rewrite min and max at the parser level into a
select/subselect that repeats the original conditions and adds an ORDER BY on
the target expression (descending or ascending) with LIMIT 1, thereby avoiding
the penalties of altering the default aggregate behavior?  Would it yield
anything beneficial?
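
For instance (hypothetical table and column names), the rewrite in question
would turn

 SELECT min(x) FROM t WHERE y = 5;

into something like

 SELECT x FROM t WHERE y = 5 ORDER BY x ASC LIMIT 1;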





Re: [PERFORM] slow plan for min/max

2003-09-08 Thread scott.marlowe
On Sun, 7 Sep 2003, Pailloncy Jean-Gérard wrote:

Asking a question about why max(id) is so much slower than 'select id order 
by id desc limit 1', Pailloncy said:

 I ask for the same thing.
 That's better !

This is a frequently asked question about something that isn't likely to 
change any time soon.

Basically, PostgreSQL uses an MVCC locking system that makes massively 
parallel operation possible, but costs in certain areas, and one of those 
areas is aggregate performance over large sets.  MVCC makes it very hard 
to optimize all but the simplest of aggregates, and even those 
optimizations which are possible would wind up being quite ugly at the 
parser level.

You might want to search the archives in the last couple years for this 
subject, as it's come up quite often.
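
For the record, the usual hand rewrite looks like this (hypothetical table
name; note that if the column is nullable the DESC form needs the IS NOT NULL
clause, since max() ignores NULLs but ORDER BY ... DESC sorts them first):

 SELECT id FROM some_table
  WHERE id IS NOT NULL
  ORDER BY id DESC
  LIMIT 1;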




Re: [PERFORM] slow plan for min/max

2003-09-08 Thread Neil Conway
On Mon, 2003-09-08 at 11:56, scott.marlowe wrote:
 Basically, Postgresql uses an MVCC locking system that makes massively 
 parallel operation possible, but costs in certain areas, and one of those 
 areas is aggregate performance over large sets.  MVCC makes it very hard 
 to optimize all but the simplest of aggregates, and even those 
 optimizations which are possible would wind up being quite ugly at the 
 parser level.

As was pointed out in a thread a couple days ago, MIN/MAX() optimization
has absolutely nothing to do with MVCC. It does, however, make
optimizing COUNT() more difficult.
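
(For a whole-table count, the cheap substitute people usually reach for is the
planner's own row estimate, which is only approximate; hypothetical table
name:)

 SELECT reltuples FROM pg_class WHERE relname = 'some_table';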

-Neil





Re: [PERFORM] slow plan for min/max

2003-09-08 Thread Christopher Browne
[EMAIL PROTECTED] (scott.marlowe) writes:
 On Sun, 7 Sep 2003, Pailloncy Jean-Gérard wrote:

 Asking a question about why max(id) is so much slower than select id order 
 by id desc limit 1, Pailloncy said:

 I ask for the same thing.  That's better !

 This is a Frequently asked question about something that isn't
 likely to change any time soon.

 Basically, Postgresql uses an MVCC locking system that makes
 massively parallel operation possible, but costs in certain areas,
 and one of those areas is aggregate performance over large sets.
 MVCC makes it very hard to optimize all but the simplest of
 aggregates, and even those optimizations which are possible would
 wind up being quite ugly at the parser level.

MVCC makes it difficult to optimize aggregates resembling COUNT(*) or
SUM(somefield), at least vis-a-vis having this available for a whole table
(e.g. - you have to be doing 'SELECT COUNT(*), SUM(SOMEFIELD) FROM
THIS_TABLE' with NO WHERE clause).

But there is nothing about MVCC that makes it particularly difficult
to handle the transformation:

 select max(field) from some_table where another_field > still_another_field;

   (which isn't particularly efficient) into

 select field from some_table where another_field > still_another_field
  order by field desc limit 1;

The problems observed are thus:

 1.  If the query asks for other data, it might be necessary to scan
 the table to get the other data, making the optimization
 irrelevant;

 2.  If there's a good index to key on, the transformed version might
 be a bunch quicker, but it is nontrivial to determine that, a
 priori;

 3.  It would be a fairly hairy optimization to throw into the query
 optimizer, so people are reluctant to try to do so.

Note that MVCC has _nothing_ to do with any of those three problems.

The MVCC-related point is that there is reluctance to create some
special case that will be troublesome to maintain instead of having
some comprehensive handling of _all_ aggregates.  It seems a better
idea to fix them all rather than to kludge things up by fixing one
after another.
-- 
let name=cbbrowne and tld=cbbrowne.com in name ^ @ ^ tld;;
http://cbbrowne.com/info/lisp.html
Signs of a Klingon Programmer -  10. A TRUE  Klingon Warrior does not
comment his code!



Re: [PERFORM] slow plan for min/max

2003-09-08 Thread scott.marlowe
On Mon, 8 Sep 2003, Neil Conway wrote:

 On Mon, 2003-09-08 at 11:56, scott.marlowe wrote:
  Basically, Postgresql uses an MVCC locking system that makes massively 
  parallel operation possible, but costs in certain areas, and one of those 
  areas is aggregate performance over large sets.  MVCC makes it very hard 
  to optimize all but the simplest of aggregates, and even those 
  optimizations which are possible would wind up being quite ugly at the 
  parser level.
 
 As was pointed out in a thread a couple days ago, MIN/MAX() optimization
 has absolutely nothing to do with MVCC. It does, however, make
 optimizing COUNT() more difficult.

Not exactly.  While max(id) is easily optimized by query replacement, 
more complex aggregates will still have performance issues that would not 
be present in a row locking database.  i.e. max((field1/field2)*field3) is 
still going to cost more to process, isn't it?





Re: [PERFORM] slow plan for min/max

2003-09-08 Thread Tom Lane
scott.marlowe [EMAIL PROTECTED] writes:
 On Mon, 8 Sep 2003, Neil Conway wrote:
 As was pointed out in a thread a couple days ago, MIN/MAX() optimization
 has absolutely nothing to do with MVCC. It does, however, make
 optimizing COUNT() more difficult.

 Not exactly.  While max(id) is easily optimized by query replacement, 
 more complex aggregates will still have performance issues that would not 
 be present in a row locking database.  i.e. max((field1/field2)*field3) is 
 still going to cost more to process, isn't it?

Er, what makes you think that would be cheap in any database?

Postgres would actually have an advantage given its support for
expressional indexes (nee functional indexes).  If we had an optimizer
transform to convert MAX() into an index scan, I would expect it to be
able to match up max((field1/field2)*field3) with an index on
((field1/field2)*field3).
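
A sketch of what that pairing could look like, assuming expression-index
support and a hypothetical table t:

 CREATE INDEX t_expr_idx ON t (((field1 / field2) * field3));

 -- the query the proposed MAX() transform would effectively run
 SELECT (field1 / field2) * field3 AS val
   FROM t
  ORDER BY (field1 / field2) * field3 DESC
  LIMIT 1;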

regards, tom lane



Re: [PERFORM] slow plan for min/max

2003-09-08 Thread Greg Stark

scott.marlowe [EMAIL PROTECTED] writes:

 Basically, Postgresql uses an MVCC locking system that makes massively 

As discussed, uh, a few days ago, this particular problem is not caused by
MVCC but by postgres having a general purpose aggregate system and not having
special code for handling min/max. Aggregates normally require access to every
record they're operating on, not just the first or last in some particular
order. You'll note the LIMIT 1/DISTINCT ON work-around works fine with MVCC...
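
(A sketch of the DISTINCT ON form for the grouped case, with hypothetical
table and columns: latest order_date per customer in one index-friendly pass.)

 SELECT DISTINCT ON (customer_id) customer_id, order_date
   FROM orders
  ORDER BY customer_id, order_date DESC;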

-- 
greg




Re: [PERFORM] slow plan for min/max

2003-09-08 Thread Matt Clark
 Actually, referring down to later parts of this thread, why can't this
 optimisation be performed internally for built-in types?  I understand the
 issue with aggregates over user-defined types, but surely optimising max()
 for int4, text, etc is safe and easy?

Sorry, missed the bit about user-defined functions.  So I should have said
built-in functions operating over built-in types.  Which does sound more
complicated, but anyone redefining max() is surely not in a position to seek
sympathy if they lose performance?




Re: [PERFORM] slow plan for min/max

2003-09-08 Thread Matt Clark
 Matt Clark [EMAIL PROTECTED] writes:
  Actually, referring down to later parts of this thread, why can't this
  optimisation be performed internally for built-in types?  I understand the
  issue with aggregates over user-defined types, but surely optimising max()
  for int4, text, etc is safe and easy?

 I can't see that the datatype involved has anything to do with it.
 None of the issues that come up in making the planner do this are
 datatype-specific.  You could possibly avoid adding some columns
 to pg_aggregate if you instead hard-wired the equivalent knowledge
 (for builtin types only) into some code somewhere, but a patch that
 approached it that way would be rejected as unmaintainable.

I don't pretend to have any useful knowledge of the internals of this, so
much of what I write may seem like noise to you guys.  The naive question is
'I have an index on X, so finding max(X) should be trivial, so why can't the
planner exploit that triviality?'.  AFAICS the short sophisticated answer is
that it just isn't trivial in the general case.

Upon rereading the docs on aggregates I see that it really isn't trivial at
all.  Not even knowing things like 'this index uses the same function as
this aggregate' gets you very far, because of the very general nature of the
implementation of aggs.

So it should be flagged very prominently in the docs that max() and min()
are, most of the time, not what people should be using, because an index
does the same job much better for anything other than tiny tables.

Know what we (OK, I) need?  An explicitly non-aggregate max() and min(),
implemented differently, so they can be optimised.  Let's call them
idx_max() and idx_min(), which completely bypass the standard aggregate
code.  Because let's face it, in most cases where you regularly want a max
or a min you have an index defined, and you want the DB to use it.
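
Something in that spirit can already be faked per column with a plain SQL
function (a sketch only, assuming next_index_time from the original post is a
timestamp):

 CREATE FUNCTION idx_max_next_index_time() RETURNS timestamp AS '
     SELECT next_index_time
       FROM url
      WHERE next_index_time IS NOT NULL
      ORDER BY next_index_time DESC
      LIMIT 1
 ' LANGUAGE sql;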

And I would volunteer to do it, I would, but you really don't want my C in
your project ;-)  I do volunteer to do some doc tweaking though - who do I
talk to?

M




[PERFORM] slow plan for min/max

2003-09-07 Thread Pailloncy Jean-Gérard
I have:
psql (PostgreSQL) 7.3.2
I made a modification to 'access/index/indexam.c' where I commented out:
#ifdef NOT_USED
    if (scan->keys_are_unique && scan->got_tuple)
    {
        if (ScanDirectionIsForward(direction))
        {
            if (scan->unique_tuple_pos <= 0)
                scan->unique_tuple_pos++;
        }
        else if (ScanDirectionIsBackward(direction))
        {
            if (scan->unique_tuple_pos >= 0)
                scan->unique_tuple_pos--;
        }
        if (scan->unique_tuple_pos == 0)
            return heapTuple;
        else
            return NULL;
    }
#endif
I do not remember the reference for the bug, but the fix was planned for 7.4.
I do:
psql=# \di
[skip]
 public | url_next_index_time  | index | postgresql | url
 [skip]
(11 rows)
I have an index on the next_index_time field of table url.

psql=# explain select min(next_index_time) from url \g
                              QUERY PLAN
----------------------------------------------------------------------
 Aggregate  (cost=85157.70..85157.70 rows=1 width=4)
   ->  Seq Scan on url  (cost=0.00..80975.56 rows=1672856 width=4)
(2 rows)
A silly seq scan of the whole table.

psql=# explain SELECT next_index_time FROM url ORDER BY next_index_time LIMIT 1 \g
                                   QUERY PLAN
-------------------------------------------------------------------------------
 Limit  (cost=0.00..0.20 rows=1 width=4)
   ->  Index Scan using url_next_index_time on url
       (cost=0.00..340431.47 rows=1672856 width=4)
(2 rows)
I ask for the same thing.
That's better !
Why does the planner do that?

Jean-Gérard Pailloncy
Paris, France