2014-10-20 21:59 GMT-02:00 Tom Lane <t...@sss.pgh.pa.us>:

> Marco Di Cesare <marco.dices...@pointclickcare.com> writes:
> > We are using a BI tool that generates a query with an unusually large
> > number of joins. My understanding is that with this many joins the
> > Postgres query planner can't possibly use an exhaustive search, so it
> > drops into a heuristics algorithm. Unfortunately, the query runs quite
> > slowly (~35 seconds) and seems to ignore primary keys and indexes where
> > available.
>
> > Query plan here (sorry had to anonymize):
> > http://explain.depesz.com/s/Uml
>
> It's difficult to make any detailed comments when you've shown us only an
> allegedly-bad query plan, and not either the query itself or the table
> definitions.
>
> However, it appears to me that the query plan is aggregating over a rather
> large number of join rows, and there are very few constraints that would
> allow eliminating rows.  So I'm not at all sure there is a significantly
> better plan available.  Are you claiming this query was instantaneous
> on SQL Server?
>
> The only thing that jumps out at me as possibly improvable is that with
> a further increase in work_mem, you could probably get it to change the
> last aggregation step from Sort+GroupAggregate into HashAggregate,
> which'd likely run faster ... assuming you can spare some more memory.
>
>                         regards, tom lane
>



Hi,

As Tom said, work_mem seems like a good place to start.
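
For example, a minimal sketch of the setting change Tom suggests. The
256MB figure and the table and column names (facts, customers) are
illustrative assumptions, not taken from the original query:

    -- Raise work_mem for the current session only; size the value to
    -- your available RAM and the number of concurrent connections.
    SET work_mem = '256MB';

    -- Re-check the plan of the slow query (query shape is hypothetical):
    EXPLAIN ANALYZE
    SELECT c.region, count(*)
    FROM facts f
    JOIN customers c ON c.id = f.customer_id
    GROUP BY c.region;
    -- With enough work_mem, the final Sort + GroupAggregate step can
    -- become a HashAggregate, which usually runs faster.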

Here are some other considerations you might take into account:
https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server

There's also the opportunity to tune the query itself (if it's not
automatically generated by your BI tool). You can often speed up a
query by joining against filtered sub-selects instead of joining the
entire tables, as in the sketch below.
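
For instance, a hypothetical sketch of that pattern (the orders and
customers tables and their columns are made up for illustration):

    -- Joining the entire table and filtering afterwards:
    SELECT o.id, c.name
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    WHERE o.created_at >= DATE '2014-01-01';

    -- Joining a pre-filtered sub-select, so fewer rows feed the join:
    SELECT o.id, c.name
    FROM (SELECT id, customer_id
          FROM orders
          WHERE created_at >= DATE '2014-01-01') o
    JOIN customers c ON c.id = o.customer_id;

Whether this wins depends on the plan the optimizer picks, but the idea
is to shrink the row count entering each join as early as possible.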

BR

Felipe
