Greg's book is awesome. It really gives a lot of information and tips on
performance. I mostly remember the material about hardware, OS, and
PostgreSQL configuration, and such. Not much on the EXPLAIN part.
Arrived this morning :)
But the row estimates are not precise at the top of the join/filter.
It thinks there will be 2120 rows, but there are only 11.
Ah... I didn't spot that one...
Yes, you are right there - this is probably a slightly atypical query of
this sort; actually, 2120 is a pretty good guess.
On Claudio's
The difference between the estimated cost and the actual cost of your
queries, under relatively precise row estimates, seems to suggest your
effective_cache_size (e_c_s) or random_page_cost (r_p_c) aren't a
reflection of your hardware's performance.
Wow, so tweaking these has fixed it and then some. It now picks a slightly
different plan than
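
For reference, the kind of per-session tweak being described looks something
like this (the values are purely illustrative and have to come from your own
hardware):

  SET effective_cache_size = '88GB';  -- ballpark of the RAM available for caching
  SET random_page_cost = 2.0;         -- lower when random I/O is mostly cached or on SSD
  -- then re-run the query under EXPLAIN ANALYZE and compare the chosen plans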
Subject: Re: [PERFORM] Slow query: bitmap scan troubles
Well, you don't need to take anything down. Most settings that change planner
decisions can be tuned on a per-query basis by issuing SET commands in a given
session. This should not affect other queries more than is needed to run the
query in the way
On Tue, 2012-12-04 at 15:42 -0800, Jeff Janes wrote:
On Tue, Dec 4, 2012 at 10:03 AM, postgre...@foo.me.uk wrote:
[...]
Is there some nice bit of literature somewhere that explains what sort of
costs are associated with the different types of lookup?
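
The cost model itself is driven by a handful of planner constants; a quick
way to list them (this assumes only the standard pg_settings view):

  SELECT name, setting, short_desc
  FROM pg_settings
  WHERE name LIKE '%_cost';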
I've heard good things about Greg
...@foo.me.uk; postgres performance list
Subject: Re: [PERFORM] Slow query: bitmap scan troubles
Jeff Janes jeff.ja...@gmail.com writes:
I now see where the cost is coming from. In commit 21a39de5809 (first
appearing in 9.2) the fudge-factor cost estimate for large indexes
was increased about 10-fold
I also wonder if increasing default_statistics_target (say, 10x), or just
doing ALTER TABLE ... SET STATISTICS for particular tables, will help.
It will make the planner produce more precise estimates. Do not forget to
ANALYZE after changing it.
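
Concretely, something like the following; the table is from the plans above
but the column name is a made-up placeholder (the per-column form is
ALTER TABLE ... ALTER COLUMN ... SET STATISTICS):

  ALTER TABLE position_effect ALTER COLUMN account_id SET STATISTICS 1000;  -- default target is 100
  ANALYZE position_effect;  -- recollect statistics with the new target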
Thanks Sergey, I will try this too.
I think the bother
Hi Jeff
It kind of does. The expected speed is predicated on the number of rows
being 200-fold higher. If the number of rows actually were that much higher,
the two speeds might be closer together. That is why it would be
interesting to see a more typical case where the actual number of rows
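
The mismatch is visible right in the EXPLAIN ANALYZE output; a made-up plan
line with the shape being discussed here would read:

  Nested Loop  (cost=0.00..1234.56 rows=2120 width=16) (actual time=0.050..1.200 rows=11 loops=1)

i.e. estimated rows=2120 against actual rows=11.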
On Wed, Dec 5, 2012 at 9:43 AM, Claudio Freire klaussfre...@gmail.com wrote:
On Wed, Dec 5, 2012 at 2:39 PM, Jeff Janes jeff.ja...@gmail.com wrote:
I'm not sure that this change would fix your problem, because it might
also change the costs of the alternative plans in a way that
neutralizes
On Thu, Dec 6, 2012 at 2:27 PM, Jeff Janes jeff.ja...@gmail.com wrote:
On Wed, Dec 5, 2012 at 9:43 AM, Claudio Freire klaussfre...@gmail.com wrote:
On Wed, Dec 5, 2012 at 2:39 PM, Jeff Janes jeff.ja...@gmail.com wrote:
I'm not sure that this change would fix your problem, because it might
also
On Thu, Dec 6, 2012 at 12:05 PM, Claudio Freire klaussfre...@gmail.com wrote:
On Thu, Dec 6, 2012 at 2:27 PM, Jeff Janes jeff.ja...@gmail.com wrote:
On Wed, Dec 5, 2012 at 9:43 AM, Claudio Freire klaussfre...@gmail.com
wrote:
As far as I can see in the explain, the misestimation is 3x~4x, not
On Tue, Dec 4, 2012 at 3:42 PM, Jeff Janes jeff.ja...@gmail.com wrote:
(Regarding http://explain.depesz.com/s/4MWG:)
But I am curious about how the cost estimate for the primary key lookup
is arrived at:
Index Scan using cons_pe_primary_key on position_effect
(cost=0.00..42.96
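
Very roughly (this is a sketch, not the exact genericcostestimate
arithmetic), an index scan estimate combines page I/O with per-tuple CPU
terms:

  cost ~ pages_fetched * random_page_cost
       + tuples_fetched * cpu_index_tuple_cost
       + qual_evaluations * cpu_operator_cost
       + a size-based fudge factor for large indexes (the term the commits
         discussed in this thread keep changing)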
On Wed, Dec 5, 2012 at 2:39 PM, Jeff Janes jeff.ja...@gmail.com wrote:
I'm not sure that this change would fix your problem, because it might
also change the costs of the alternative plans in a way that
neutralizes things. But I suspect it would fix it. Of course, a
correct estimate of the
Jeff Janes jeff.ja...@gmail.com writes:
I now see where the cost is coming from. In commit 21a39de5809 (first
appearing in 9.2) the fudge-factor cost estimate for large indexes
was increased about 10-fold, which really hits this index hard.
This was fixed in commit bf01e34b556 Tweak
Hi guys (and girls)
I've been banging my head over this for a few days now, so if any of you kind
souls could take a minute to take a look at this I would be eternally
grateful.
I have a pretty straightforward query that is very slow by default, and
about 70 times faster when I set
-performance@postgresql.org
Subject: [PERFORM] Slow query: bitmap scan troubles
Hi guys (and girls)
I've been banging my head over this for a few days now, so if any of you kind
souls could take a minute to take a look at this I would be eternally
grateful.
I have a pretty straightforward query
On Tue, Dec 4, 2012 at 12:06 PM, postgre...@foo.me.uk wrote:
Slow version with bitmapscan enabled: http://explain.depesz.com/s/6I7
Fast version with bitmapscan disabled: http://explain.depesz.com/s/4MWG
If you check the fast plan, it has a higher estimated cost than the
slow plan.
The
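
The toggle behind the two plans is an ordinary session GUC, so the comparison
is easy to reproduce in one session (for diagnosis only, not something to
leave set):

  SET enable_bitmapscan = off;  -- session-local; steers the planner away from bitmap scans
  -- re-run the query under EXPLAIN ANALYZE here, then put things back:
  RESET enable_bitmapscan;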
On Tue, Dec 4, 2012 at 7:27 AM, Claudio Freire klaussfre...@gmail.com wrote:
On Tue, Dec 4, 2012 at 12:06 PM, postgre...@foo.me.uk wrote:
Slow version with bitmapscan enabled: http://explain.depesz.com/s/6I7
Fast version with bitmapscan disabled: http://explain.depesz.com/s/4MWG
If you check
On Tue, Dec 4, 2012 at 2:22 PM, Jeff Janes jeff.ja...@gmail.com wrote:
On Tue, Dec 4, 2012 at 7:27 AM, Claudio Freire klaussfre...@gmail.com wrote:
On Tue, Dec 4, 2012 at 12:06 PM, postgre...@foo.me.uk wrote:
Slow version with bitmapscan enabled: http://explain.depesz.com/s/6I7
Fast version
But the row estimates are not precise at the top of the join/filter.
It thinks there will be 2120 rows, but there are only 11.
Ah... I didn't spot that one...
Yes, you are right there - this is probably a slightly atypical query of
this sort; actually, 2120 is a pretty good guess.
On Claudio's
But the row estimates are not precise at the top of the join/filter.
It thinks there will be 2120 rows, but there are only 11.
So it seems like there is a negative correlation between the two tables
which is not recognized.
Yes, you are right there. I am only just beginning to understand how to
On Tue, Dec 4, 2012 at 3:03 PM, postgre...@foo.me.uk wrote:
Though that doesn't account for the 70x difference between the actual speeds
of the two queries, given a pretty similar expected speed (does it?).
It does go some way to explaining why a bad choice of plan was made.
I still
On Tue, Dec 4, 2012 at 3:31 PM, Philip Scott psc...@foo.me.uk wrote:
r_p_c 2 -> 1 (s_p_c 1 -> 0.5):
Is this really necessary?
(looks like a no-op, unless your CPU is slow - halving both page costs keeps
their ratio intact, so only their weight relative to the cpu_* costs changes)
Well, you don't need to take anything down. Most settings that change
planner decisions can be tuned on a per-query basis by issuing SET commands
in a given session. This should not affect other queries more than is needed
to run the query in the way the planner chooses.
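
For instance, scoped to a single transaction so nothing leaks into the rest
of the session (the query here is just a placeholder):

  BEGIN;
  SET LOCAL enable_bitmapscan = off;  -- reverts automatically at COMMIT/ROLLBACK
  EXPLAIN ANALYZE SELECT * FROM position_effect WHERE id = 42;  -- placeholder query
  COMMIT;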
Best regards, Vitalii Tymchyshyn
To: Philip Scott
Cc: postgre...@foo.me.uk; postgres performance list
Subject: Re: [PERFORM] Slow query: bitmap scan troubles
On Tue, Dec 4, 2012 at 3:31 PM, Philip Scott psc...@foo.me.uk wrote:
r_p_c 2 -> 1 (s_p_c 1 -> 0.5):
Is this really necessary?
(looks like a no-op, unless your CPU is slow)
Subject: Re: [PERFORM] Slow query: bitmap scan troubles
Well, you don't need to take anything down. Most settings that change planner
decisions can be tuned on a per-query basis by issuing SET commands in a given
session. This should not affect other queries more than is needed to run the
query
postgre...@foo.me.uk wrote:
Ah okay, thanks. I knew I could set various things but not
effective_work_mem (I tried reloading the edited config file but
it didn't seem to pick it up)
Check the server log, maybe there was a typo or capitalization
error.
To test on a single connection you
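
To see whether a reload actually took, and where the live value came from,
you can ask the server directly (assuming effective_cache_size is the setting
that was meant; there is no effective_work_mem, which may be the typo in
question):

  SHOW effective_cache_size;
  -- or, with the origin of the current value:
  SELECT name, setting, unit, source
  FROM pg_settings
  WHERE name = 'effective_cache_size';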
On Tue, Dec 4, 2012 at 9:47 AM, postgre...@foo.me.uk wrote:
eagerly awaiting 6pm when I can bring the DB down and start tweaking. The
effective_work_mem setting is going from 6GB to 88GB, which I think will make
quite a difference.
I also wonder if increasing default_statistics_target (say, 10x)
On Tue, Dec 4, 2012 at 9:47 AM, postgre...@foo.me.uk wrote:
But the row estimates are not precise at the top of the join/filter.
It thinks there will be 2120 rows, but there are only 11.
Ah... I didn't spot that one...
Yes, you are right there - this is probably a slightly atypical query of
On Tue, Dec 4, 2012 at 10:03 AM, postgre...@foo.me.uk wrote:
Though that doesn't account for the 70x difference between the actual speeds
of the two queries, given a pretty similar expected speed (does it?).
It kind of does. The expected speed is predicated on the number of
rows being