On 12/7/15 9:54 AM, Tom Lane wrote:
Jim Nasby <jim.na...@bluetreble.com> writes:
> On 12/6/15 10:38 AM, Tom Lane wrote:
>> I said "in most cases".  You can find example cases to support almost any
>> weird planner optimization no matter how expensive and single-purpose;
>> but that is the wrong way to think about it.  What you have to think about
>> is average cases, and in particular, not putting a drag on planning time
>> in cases where no benefit ensues.  We're not committing any patches that
>> give one uncommon case an 1100X speedup by penalizing every other query 10%,
>> or even 1%; especially not when there may be other ways to fix it.
> This is a problem that seriously hurts Postgres in data warehousing
> applications.
Please provide some specific examples.  I remain skeptical that this
would make a useful difference all that often in the real world ...
and handwaving like that does nothing to change my opinion.  What do
the queries look like, and why would deducing an extra inequality
condition help them?
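
For what it's worth, the general shape I have in mind looks something like the query below. This is only an illustrative sketch with made-up table names, not one of the actual cases: a range predicate on one side of an equijoin that the planner does not propagate to the other side, even though the deduced condition could prune partitions or enable an index scan on the much larger table.

SELECT count(*)
  FROM orders o
  JOIN order_lines l ON l.order_date = o.order_date
 WHERE o.order_date >= DATE '2015-01-01';
-- Deducing "l.order_date >= DATE '2015-01-01'" from the equijoin would let the
-- planner skip most of order_lines instead of scanning the whole table.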

I was speaking more broadly than this particular case. A lot of planner improvements get shot down because of the planning overhead they would add. That's the right trade-off when milliseconds count, but spending an extra 60 seconds (a planning eternity) to shave an hour off a warehouse/reporting query would be an easy win.

There needs to be some way to give the planner an idea of how much effort it should expend. GEQO and the *_collapse_limit settings address this from one direction (putting a cap on planner effort), but I think we need something that works the other way: "I know this query will take a long time, so expend extra effort on planning it."
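
For concreteness, the existing knobs all limit effort; what I'm imagining would raise it. The planner_effort setting in the sketch below is purely hypothetical; only the GEQO and collapse-limit settings exist today:

-- Existing settings that cap how hard the planner tries on big join problems:
SET geqo = on;                  -- fall back to the genetic optimizer for large joins
SET geqo_threshold = 12;        -- switch to GEQO at this many FROM items
SET geqo_effort = 5;            -- within GEQO, trade planning time for plan quality (1-10)
SET from_collapse_limit = 8;    -- stop flattening subqueries past this many FROM items
SET join_collapse_limit = 8;    -- stop reordering explicit JOINs past this many items

-- Hypothetical knob in the other direction (does not exist):
-- SET planner_effort = 'high';  -- "this query will run for hours, so plan it harder"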
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com

