Hello,
I have a little 2-table JOIN, GROUP BY, ORDER BY query that does a sequential
scan on a multi-million row table. I _thought_ I had all the appropriate
indices, but apparently I do not. I was wondering if anyone can spot a way I
can speed up this query.
The query currently takes...
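The archive cuts the original post off here, so the query text and EXPLAIN output are missing. As a point of reference, here is a hypothetical sketch of the query shape being described, using only the table and column names that appear in the plans quoted further down (user_url_tag, user_url, user_url_id); the tag and user_id columns and the filter value are assumptions, not the poster's actual schema:

  -- Hypothetical 2-table JOIN / GROUP BY / ORDER BY over the
  -- multi-million row table; run under EXPLAIN ANALYZE to see
  -- which plan the optimizer picks.
  EXPLAIN ANALYZE
  SELECT t.tag, count(*) AS uses
  FROM user_url_tag t
  JOIN user_url u ON u.id = t.user_url_id
  WHERE u.user_id = 1234        -- assumed filter, ~2500 matching rows
  GROUP BY t.tag
  ORDER BY uses DESC;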
On Wed, May 10, 2006 at 13:13:59 -0500,
[EMAIL PROTECTED] wrote:
> Hello,
>
> I have a little 2-table JOIN, GROUP BY, ORDER BY query that does a sequential
> scan on a multi-million row table. I _thought_ I had all the appropriate
> indices, but apparently I do not. I was wondering if anyone can spot a way I
> can speed up this query.
To: [EMAIL PROTECTED]
Cc: pgsql-sql@postgresql.org
Sent: Wednesday, May 10, 2006 3:23:29 PM
Subject: Re: [SQL] Help with a seq scan on multi-million row table
On Wed, May 10, 2006 at 13:13:59 -0500,
[EMAIL PROTECTED] wrote:
> Hello,
>
> I have a little 2-table JOIN, GROUP BY, ORDER BY query that does a sequential
> scan on a multi-million row table.
<[EMAIL PROTECTED]> writes:
>-> Hash Join (cost=2797.65..140758.50 rows=3790 width=10)
> (actual time=248.530..380635.132 rows=8544 loops=1)
> Hash Cond: ("outer".user_url_id = "inner".id)
> -> Seq Scan on user_url_tag userurltag0_
The query is generated from something called Hibernate, and I'm not sure if that will let me
set enable_hashjoin=off through its API...
Thanks,
Otis
- Original Message
From: Tom Lane <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Cc: pgsql-sql@postgresql.org
Sent: Wednesday, May 10, 2006 8:27:01 PM
Subject: Re: [SQL] Help with a seq scan on multi-million row table
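For what it's worth, enable_hashjoin does not have to be set through Hibernate at all: it can be issued per session, or attached to the database role the application logs in as, so every new connection picks it up. A sketch (the role name webapp is hypothetical):

  -- Per session, e.g. from psql, to reproduce the faster plan:
  SET enable_hashjoin = off;

  -- Or attach it to the role Hibernate connects as, so no application
  -- changes are needed (role name is an assumption):
  ALTER USER webapp SET enable_hashjoin = off;

Disabling hash joins wholesale is a blunt instrument, though; fixing the row estimate, as discussed below, is the more targeted fix.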
<[EMAIL PROTECTED]> writes:
> Aha! set hashjoin=off did the trick.
>-> Index Scan using ix_user_url_tag_user_url_id on
> user_url_tag userurltag0_ (cost=0.00..157.34 rows=103 width=14) (actual
> time=1.223..1.281 rows=5 loops=1666)
> Index Cond: (userurlta
The whole table has 6-7 M rows.
That query matches about 2500 rows.
If there are other things I can play with and help narrow this down, please let
me know.
Thanks,
Otis
- Original Message
From: Tom Lane <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Cc: pgsql-sql@postgresql.org
Sent: Wednesday, May 10,
Hi, Otis,
[EMAIL PROTECTED] wrote:
> I'm not sure which numbers you are referring to when you said the estimate is
> off, but here are some numbers:
> The whole table has 6-7 M rows.
> That query matches about 2500 rows.
>
> If there are other things I can play with and help narrow this down, please
> let me know.
Markus Schaber <[EMAIL PROTECTED]> writes:
> Did you try to set a higher statistics target for this column?
Yeah, I was about to suggest the same after looking at the pg_stats
entry. It shows estimated num_distinct as 60825, ie only about 60k
distinct values in the column, which sounds kinda low for a table that size.
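The number Tom is quoting comes from the pg_stats view. Assuming the column in question is user_url_tag.user_url_id, something like this shows it:

  -- Planner statistics for the join column; n_distinct is the
  -- estimate Tom refers to as num_distinct.
  SELECT n_distinct, null_frac, correlation
  FROM pg_stats
  WHERE tablename = 'user_url_tag'
    AND attname = 'user_url_id';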
933
This number grows daily; I'm not sure by how much, probably 5k a day currently.
Thanks,
Otis
- Original Message
From: Markus Schaber <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Cc: pgsql-sql@postgresql.org
Sent: Thursday, May 11, 2006 6:33:55 AM
Subject: Re: [SQL] Help with a seq scan on multi-million row table
On Thu, May 11, 2006 at 10:09:44AM -0700, [EMAIL PROTECTED] wrote:
> Hi Markus & Tom,
>
> Higher statistics for this column hm, I'd love to try changing
> it to see how that changes things, but I'm afraid I don't know how
> to do that. How can I change the statistics target value for this
> column?
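The reply is not preserved in this snippet, but the standard answer is ALTER TABLE ... SET STATISTICS followed by ANALYZE. A sketch, assuming the column from the plans above and an arbitrary target of 500:

  -- Raise the per-column statistics target (the global default was 10
  -- in PostgreSQL releases of this era), then re-ANALYZE so the planner
  -- gets the more detailed statistics.
  ALTER TABLE user_url_tag ALTER COLUMN user_url_id SET STATISTICS 500;
  ANALYZE user_url_tag;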
- Original Message
From: Andrew Sullivan <[EMAIL PROTECTED]>
To: pgsql-sql@postgresql.org
Sent: Thursday, May 11, 2006 1:18:08 PM
Subject: Re: [SQL] Help with a seq scan on multi-million row table
On Thu, May 11, 2006 at 10:09:44AM -0700, [EMAIL PROTECTED] wrote:
> Hi Markus & Tom,
>