Stelian Iancu stel...@iancu.ch writes:
I have Postgres 9.3 running on a Linux machine with 32GB RAM. I have a
fairly large database (some tables with approx. 1 mil. records) and I
have the following query:
[ 13-way join joined to a 3-way join ]
Think you'll need to raise join_collapse_limit
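For reference, Tom's suggestion amounts to roughly this sketch (the value 16 is illustrative; it just needs to exceed the number of relations being joined):

    SET join_collapse_limit = 16;  -- default is 8; a 13-way join exceeds it
    SET from_collapse_limit = 16;  -- often raised in tandem
    -- then re-run EXPLAIN ANALYZE on the original query

Both settings can be changed per session, so postgresql.conf need not be touched.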
Hello,
I have Postgres 9.3 running on a Linux machine with 32GB RAM. I have a
fairly large database (some tables with approx. 1 mil. records) and I
have the following query:
SELECT * FROM (
SELECT DISTINCT c.ext_content_id AS type_1_id,
substring(c.ext_content_id::text, 1, 13) AS
On Mon, Jan 27, 2014, at 9:20, salah jubeh wrote:
Hello Stelian,
Hello,
Have you tried the func_table module? I think it will help you to
eliminate all the joins.
No, I haven't. I can have a look later, thanks.
Regards
On Mon, Jan 27, 2014, at 7:06, Tom Lane wrote:
Stelian Iancu stel...@iancu.ch writes:
I have Postgres 9.3 running on a Linux machine with 32GB RAM. I have a
fairly large database (some tables with approx. 1 mil. records) and I
have the following query:
[ 13-way join joined to a 3-way
Hello Stelian,
Have you tried the func_table module? I think it will help you to eliminate
all the joins.
Regards
On Monday, January 27, 2014 5:54 PM, Stelian Iancu stel...@iancu.ch wrote:
Hello,
I have Postgres 9.3 running on a Linux machine with 32GB RAM. I have a
fairly large
My developers have had the same issue.
Postgres 9.2.3 on Linux 5.6.
The query planner estimates (for a 27-table join) that using the nestloop
is faster, when in fact it is not. A hashjoin returns results faster. We've
set enable_nestloop = false and have gotten good results. The problem is,
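For the record, scoping that change to one session, rather than setting it globally, might look like this sketch:

    SET enable_nestloop = off;  -- session-local; postgresql.conf is untouched
    -- run the 27-table join here
    RESET enable_nestloop;      -- restore the default for this session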
On 28/01/14 08:10, bobJobS wrote:
My developers have had the same issue.
Postgres 9.2.3 on Linux 5.6.
The latest Linux kernel is 3.13 (https://www.kernel.org), so I assume
5.6 is a distribution version.
So which distribution of Linux are you using?
Cheers,
Gavin
What are your thoughts on the right way to use SSDs in a RAID to
enhance postgres I/O performance? In an earlier reply, you
indicated one of a RAID1+0 consisting of several spindles,
NVRAM-based solution (SSD or PCIe card), or a SAN
Well, it's a tiered approach. If you can identify your
On 12/19/2013 03:24 PM, Sergey Konoplev wrote:
2. You are limited by IO
I would also suggest upgrading your storage in this case.
I think this is the case. If I recall correctly, his setup includes a
single RAID-1 for everything, and he only has 32GB of RAM. In fact, the
WAL traffic
On 12/20/2013 09:57 AM, Sev Zaslavsky wrote:
There is a separate RAID-1 for WAL, another for tablespace and another
for operating system.
I tend to stick to DB-size / 10 as a minimum, but I also have an OLTP
system. For a more OLAP-type workload, the ratio is negotiable.
The easiest way to tell is
On 21/12/13 05:11, Shaun Thomas wrote:
[...]
Of course, don't forget to buy modules in multiples of four, otherwise
you're not taking advantage of all the CPU's memory channels. :)
Note some processors have 3 (three) memory channels! And I know of some
with 4 memory channels. So it is
Hello,
I've got a very simple table with a very simple SELECT query, but it
takes longer on the initial run than I'd like, so I want to see if there
is a strategy to optimize this.
Table rt_h_nbbo contains several hundred million rows. All rows for a
given entry_date are appended to this
On 12/12/2013 11:30 AM, Sev Zaslavsky wrote:
_First question_ is: Does loading 24Gb of data in 21 sec seem about
right (hardware specs at bottom of email)?
That's actually pretty good. 24GB is a lot of data to process.
_Second question_: Is it possible to tell postgres to physically store
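The second question is closest to what CLUSTER does; a sketch under an assumed index name (note CLUSTER takes an exclusive lock and is not maintained as new rows arrive, so it must be re-run):

    CREATE INDEX rt_h_nbbo_entry_date_idx ON rt_h_nbbo (entry_date);
    CLUSTER rt_h_nbbo USING rt_h_nbbo_entry_date_idx;  -- physically reorder the table
    ANALYZE rt_h_nbbo;                                 -- refresh planner statistics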
Hi,
I'm seeing a slow running query. After some optimization with indexes,
it appears that the query plan is correct, it's just slow. Running the
query twice, not surprisingly, is very fast, due to OS caching or
shared_buffers caching. If the parameters for the query are different,
however,
I don't have much info on disks, since this is a virtual server on
linode. It is running ubuntu 12.04, 8cpus, 4GB ram, 95GB ext3 volume
(noatime). Hopefully that's helpful.
Bryce
Claudio Freire, December 12, 2013, 12:15 PM:
On Thu, Dec 12, 2013 at 5:10 PM, Bryce
On Thu, Dec 12, 2013 at 5:20 PM, Bryce Covert
br...@brycecovertoperations.com wrote:
I don't have much info on disks, since this is a virtual server on linode.
It is running ubuntu 12.04, 8cpus, 4GB ram, 95GB ext3 volume (noatime).
Hopefully that's helpful.
Bryce
Well, did you run
Hey Claudio,
Thanks a lot for the help. I'm not familiar with explain buffers, but
here's the results:
Limit (cost=0.00..648.71 rows=50 width=8) (actual
time=653.681..52328.707 rows=50 loops=1)
Buffers: shared hit=7875 read=9870
-> GroupAggregate (cost=0.00..55672.36 rows=4291 width=8)
Also, I was reading this:
http://wiki.postgresql.org/wiki/What's_new_in_PostgreSQL_9.2, and I
realized that index-only scans weren't introduced until 9.2. I tried
creating a covered index for this, but I don't think it helps in this
situation.
Bryce
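For reference, the covering index Bryce describes might look like the sketch below (the foreign-key column name is assumed; on 9.2 an index-only scan also needs an up-to-date visibility map, which is why vacuuming matters):

    CREATE INDEX electricusage_covering_idx
        ON electricusage (premiseaccount_id, usage);  -- column names assumed
    VACUUM ANALYZE electricusage;  -- sets the visibility map so index-only scans become possible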
Claudio Freire
On Thu, Dec 12, 2013 at 6:16 PM, Bryce Covert
br...@brycecovertoperations.com wrote:
Thanks a lot for the help. I'm not familiar with explain buffers, but
here's the results:
Limit (cost=0.00..648.71 rows=50 width=8) (actual
time=653.681..52328.707 rows=50 loops=1)
Buffers: shared
It's strange that it isn't sequential at least for the electric usage,
as I've clustered using the index that it's using.
I had work_mem set to 128MB. I tried bumping it to 1024MB, and I don't
think I see a difference in the query plan.
Would you think upgrading to 9.2 would help much here? Using a
Not sure if this is
helpful, but I tried upgrading to 9.2, and here's what I got:
-
Limit (cost=0.00..535.78 rows=50 width=8) (actual
time=1037.376..135043.945 rows=50 loops=1)
Output: premiseaccount.id, (sum(electricusage.usage))
Buffers: shared hit=4851 read=18718
-> GroupAggregate
On Thu, Dec 12, 2013 at 6:49 PM, Bryce Covert
br...@brycecovertoperations.com wrote:
It's strange that it isn't sequential at least for the electric usage, as
I've clustered using the index that it's using.
electricusage is the inner part of the nested loop, which means it will do
~3000
On Thu, Dec 12, 2013 at 3:04 PM, Bryce Covert
br...@brycecovertoperations.com wrote:
Not sure if this is helpful, but I tried upgrading to 9.2, and here's what
I got:
-
Limit (cost=0.00..535.78 rows=50 width=8) (actual
time=1037.376..135043.945 rows=50 loops=1)
Output:
It looks like you guys
were right. I think vacuum analyzing this made it do an IOS. It seems
like materialized views are going to be the best bet. I see how that
would allow sequential reading. Thanks!
Bryce
Jeff Janes, December 12, 2013, 4:01 PM:
On Thu, Dec 12, 2013 at
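The materialized-view plan Bryce mentions could look like this sketch (CREATE MATERIALIZED VIEW is 9.3+ syntax; on 9.2 the same effect needs a plain summary table refreshed by hand; all names here are assumed):

    CREATE MATERIALIZED VIEW premise_usage_summary AS
    SELECT premiseaccount_id, sum(usage) AS total_usage
    FROM electricusage
    GROUP BY premiseaccount_id;          -- precomputes the aggregate once
    CREATE INDEX ON premise_usage_summary (total_usage);
    -- later, on a schedule:
    REFRESH MATERIALIZED VIEW premise_usage_summary;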
On Tue, Sep 3, 2013 at 2:34 PM, Jesper Krogh jes...@krogh.cc wrote:
On 03/09/13 09:47, Craig Ringer wrote:
On 09/03/2013 03:46 PM, jes...@krogh.cc wrote:
Hi.
I have a strange situation where generating the query plan takes 6s+ and
executing it takes very little time.
How do you determine
Hi.
I have a strange situation where generating the query plan takes 6s+ and
executing it takes very little time.
2013-09-03 09:19:38.726 db=# explain select table.id from db.table left
join db.tablepro on table.id = tablepro.table_id where table.fts @@
to_tsquery('english','q12345') ;
On 09/03/2013 03:46 PM, jes...@krogh.cc wrote:
Hi.
I have a strange situation where generating the query plan takes 6s+ and
executing it takes very little time.
How do you determine that it's planning time at fault here?
Please take separate timing for:
PREPARE testq AS select table.id
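Craig's measurement technique, sketched (the statement body is truncated above, so the ellipsis stands for the original query; note that on newer releases the first EXECUTE may itself plan a custom plan):

    \timing on
    PREPARE testq AS SELECT ... ;  -- parse/rewrite/plan-side work is timed here
    EXECUTE testq;                 -- execution is timed here; compare the two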
On 03/09/13 09:47, Craig Ringer wrote:
On 09/03/2013 03:46 PM, jes...@krogh.cc wrote:
Hi.
I have a strange situation where generating the query plan takes 6s+ and
executing it takes very little time.
How do you determine that it's planning time at fault here?
Not that I'm sure, but the timing
On 03/11/2013 06:56 PM, Tom Lane wrote:
And that means that you get the inefficient plan wherein the
foo-to-tiny_foo join is computed in its entirety.
:(
That's unfortunate, though I guess it makes sense. I moved the join in
the view into the SELECT clause as an EXISTS, and that seems to
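A sketch of that EXISTS rewrite, built from Shaun's test case below (the join column is assumed, and it presumes a matching tiny_foo row always has a non-null small_label):

    CREATE VIEW v_fast_view AS
    SELECT foo.*,
           EXISTS (SELECT 1
                   FROM tiny_foo tf
                   WHERE tf.foo_id = foo.id) AS has_small_label  -- no whole-join computation
    FROM foo;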
Shaun Thomas stho...@optionshouse.com writes:
A developer was complaining about a view he created to abstract an added
column in a left join. ...
Curious, I whipped up this test case:
CREATE VIEW v_slow_view AS
SELECT foo.*, tf.small_label IS NOT NULL AS has_small_label
FROM foo
John Lumby johnlu...@hotmail.com writes:
Meanwhile, I have one other suggestion aimed specifically at problematic
CTEs:
Would it be reasonable to provide a new Planner Configuration option :
enable_nestloop_cte_inner (boolean)
Enables or disables the query planner's use of
On 2012-10-09 23:09:21
Tom Lane wrote:
re subject Why am I getting great/terrible estimates with these CTE queries?
You're assuming the case where the estimate is better is better for a
reason ... but it's only better as a result of blind dumb luck. The
outer-level query planner
Since a CTE is already an optimization fence, you can go further and make it
a temporary table.
CREATE TABLE; ANALYZE; SELECT should make the optimizer's work much easier.
On 18 Feb 2013, 18:45, John Lumby johnlu...@hotmail.com wrote:
On 2012-10-09 23:09:21
Tom Lane wrote:
re subject Why am I
Vitalii wrote
Since a CTE is already an optimization fence, you can go further and make
it a temporary table.
CREATE TABLE; ANALYZE; SELECT should make the optimizer's work much easier.
Thanks Vitalii - yes, you are right, and I have used that technique on
other cases like this.
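The technique, spelled out as a sketch (the CTE body and outer query are placeholders for the originals):

    CREATE TEMP TABLE cte_result AS
    SELECT ... ;          -- the body of the problematic CTE
    ANALYZE cte_result;   -- gives the planner real row counts and statistics
    SELECT ... FROM cte_result JOIN ... ;  -- the outer query, now planned with facts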
On Friday, February 08, 2013 6:06 PM Karolis Pocius wrote:
I've tried changing autovacuum_analyze_scale_factor as well as setting the
job_batches table to auto-analyze every 500 changes (by setting the scale
factor to 0 and the threshold to 500), but I still keep running into that
issue, sometimes
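For reference, the per-table settings Karolis describes correspond to:

    ALTER TABLE job_batches SET (
        autovacuum_analyze_scale_factor = 0,   -- ignore table size
        autovacuum_analyze_threshold = 500     -- analyze after ~500 row changes
    );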
This query http://pgsql.privatepaste.com/359bed8e9e gets executed every
500 ms and normally completes really quickly
http://explain.depesz.com/s/poVQ, but the more job_batches table
(http://pgsql.privatepaste.com/eaf6d63fd2) gets used, the slower this
query gets, to the point where it takes
On Tue, Feb 5, 2013 at 9:15 AM, Will Platnick wplatn...@gmail.com wrote:
We upgraded from PG 9.1 to 9.2. Since the upgrade, the # of active queries
has risen significantly, especially during our peak time when lots of
users are logging in. According to New Relic, this query is now taking up
Good eye, I totally missed that!
Any ideas on how to troubleshoot this delay?
On Wednesday, February 6, 2013 at 3:51 AM, Pavan Deolasee wrote:
On Tue, Feb 5, 2013 at 9:15 AM, Will Platnick wplatn...@gmail.com
(mailto:wplatn...@gmail.com) wrote:
We upgraded from PG 9.1 to 9.2. Since the
Will Platnick wplatn...@gmail.com wrote:
Will Platnick wplatn...@gmail.com wrote:
The only thing that stands out is: on your production server I see
Total runtime: 7.515 ms, but the top node in EXPLAIN ANALYZE shows
actual time as 0.179 ms. Not sure where that additional time is being
spent
Clients are technically our pgbouncer, which is on the same machine. The explain
analyze was run locally through psql, direct to PostgreSQL.
On Wednesday, February 6, 2013 at 11:22 AM, Kevin Grittner wrote:
Will Platnick wplatn...@gmail.com (mailto:wplatn...@gmail.com) wrote:
Will Platnick
On Wed, Feb 6, 2013 at 9:52 PM, Kevin Grittner kgri...@ymail.com wrote:
Will Platnick wplatn...@gmail.com wrote:
Will Platnick wplatn...@gmail.com wrote:
The only thing that stands out is: on your production server I see
Total runtime: 7.515 ms, but the top node in EXPLAIN ANALYZE shows
On 05.02.2013 05:45, Will Platnick wrote:
We upgraded from PG 9.1 to 9.2. Since the upgrade, the # of active queries has
risen significantly, especially during our peak time when lots of users are
logging in. According to New Relic, this query is now taking up the most
amount of time
We upgraded from PG 9.1 to 9.2. Since the upgrade, the # of active queries has
risen significantly, especially during our peak time when lots of users are
logging in. According to New Relic, this query is now taking up the most
amount of time during peak activity and my pg_stat_activity and
On 10.01.2013 19:17, Jeff Janes wrote:
On Thu, Jan 10, 2013 at 5:32 AM, Andrzej Zawadzki zawa...@wp.pl wrote:
Why does that happen? All configurations are identical. Only the engine is
different.
Could you post explain (analyze, buffers) instead of just explain?
Impossible, 1h of waiting and I've
On 10.01.2013 19:48, Matheus de Oliveira wrote:
On Thu, Jan 10, 2013 at 11:32 AM, Andrzej Zawadzki zawa...@wp.pl
mailto:zawa...@wp.pl wrote:
Hi!
Small query run on 9.0 very fast:
SELECT * from sygma_arrear sar where sar.arrear_import_id = (
select
Hi!
Small query run on 9.0 very fast:
SELECT * from sygma_arrear sar where sar.arrear_import_id = (
select sa.arrear_import_id from sygma_arrear sa, arrear_import ai
where sa.arrear_flag_id = 2
AND sa.arrear_import_id = ai.id
AND ai.import_type_id = 1
On Thu, Jan 10, 2013 at 5:32 AM, Andrzej Zawadzki zawa...@wp.pl wrote:
Why does that happen? All configurations are identical. Only the engine is
different.
Could you post explain (analyze, buffers) instead of just explain?
Also, if you temporarily set enable_seqscan=off on 9.2, what plan do
you
On Thu, Jan 10, 2013 at 11:32 AM, Andrzej Zawadzki zawa...@wp.pl wrote:
Hi!
Small query run on 9.0 very fast:
SELECT * from sygma_arrear sar where sar.arrear_import_id = (
select sa.arrear_import_id from sygma_arrear sa, arrear_import ai
where sa.arrear_flag_id = 2
Greg's book is awesome. It really gives a lot of information/tips/whatever
on performance. I mostly remember all the information about hardware, OS,
PostgreSQL configuration, and such. Not much on the EXPLAIN part.
Arrived this morning :)
But the row estimates are not precise at the top of the join/filter.
It thinks there will be 2120 rows, but there are only 11.
Ah... I didn't spot that one...
Yes, you are right there - this is probably a slightly atypical query of
this sort actually, 2012 is a pretty good guess.
On Claudio's
The difference between cost estimation and actual cost of your queries,
under relatively precise row estimates, seems to suggest your e_c_s or r_p_c
aren't a reflection of your hardware's performance.
Wow, so tweaking these has fixed it and then some. It now picks a slightly
different plan than
Subject: Re: [PERFORM] Slow query: bitmap scan troubles
Well, you don't need to put anything down. Most settings that change planner
decisions can be tuned on a per-query basis by issuing SET commands in a given
session. This should not affect other queries more than is needed to run the
query in the way
On Tue, 2012-12-04 at 15:42 -0800, Jeff Janes wrote:
On Tue, Dec 4, 2012 at 10:03 AM, postgre...@foo.me.uk wrote:
[...]
Is there some nice bit of literature somewhere that explains what sort of
costs are associated with the different types of lookup?
I've heard good things about Greg
...@foo.me.uk; postgres performance list
Subject: Re: [PERFORM] Slow query: bitmap scan troubles
Jeff Janes jeff.ja...@gmail.com writes:
I now see where the cost is coming from. In commit 21a39de5809 (first
appearing in 9.2) the fudge factor cost estimate for large indexes
was increased by about 10 fold
I also wonder if increasing (say x10) default_statistics_target, or just
doing ALTER TABLE SET STATISTICS for particular tables, will help.
It will make the planner produce more precise estimates. Do not forget to
ANALYZE after changing it.
Thanks Sergey, I will try this too.
I think the bother
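Sergey's per-table option, sketched (the table and column names here are assumed for illustration, not taken from the thread):

    ALTER TABLE position_effect
        ALTER COLUMN account_id SET STATISTICS 1000;  -- per-column; default is 100
    ANALYZE position_effect;  -- required before the planner sees the new stats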
Hi Jeff
It kind of does. The expected speed is predicated on the number of rows
being 200-fold higher. If the number of rows actually was that much higher,
the two speeds might be closer together. That is why it would be
interesting to see a more typical case where the actual number of rows
On Wed, Dec 5, 2012 at 9:43 AM, Claudio Freire klaussfre...@gmail.com wrote:
On Wed, Dec 5, 2012 at 2:39 PM, Jeff Janes jeff.ja...@gmail.com wrote:
I'm not sure that this change would fix your problem, because it might
also change the costs of the alternative plans in a way that
neutralizes
On Thu, Dec 6, 2012 at 2:27 PM, Jeff Janes jeff.ja...@gmail.com wrote:
On Wed, Dec 5, 2012 at 9:43 AM, Claudio Freire klaussfre...@gmail.com wrote:
On Wed, Dec 5, 2012 at 2:39 PM, Jeff Janes jeff.ja...@gmail.com wrote:
I'm not sure that this change would fix your problem, because it might
also
On Thu, Dec 6, 2012 at 12:05 PM, Claudio Freire klaussfre...@gmail.com wrote:
On Thu, Dec 6, 2012 at 2:27 PM, Jeff Janes jeff.ja...@gmail.com wrote:
On Wed, Dec 5, 2012 at 9:43 AM, Claudio Freire klaussfre...@gmail.com
wrote:
As far as I can see on the explain, the misestimation is 3x~4x not
On Tue, Dec 4, 2012 at 3:42 PM, Jeff Janes jeff.ja...@gmail.com wrote:
(Regarding http://explain.depesz.com/s/4MWG, wrote)
But I am curious about how the cost estimate for the primary key look
up is arrived at:
Index Scan using cons_pe_primary_key on position_effect
(cost=0.00..42.96
On Wed, Dec 5, 2012 at 2:39 PM, Jeff Janes jeff.ja...@gmail.com wrote:
I'm not sure that this change would fix your problem, because it might
also change the costs of the alternative plans in a way that
neutralizes things. But I suspect it would fix it. Of course, a
correct estimate of the
Jeff Janes jeff.ja...@gmail.com writes:
I now see where the cost is coming from. In commit 21a39de5809 (first
appearing in 9.2) the fudge factor cost estimate for large indexes
was increased by about 10 fold, which really hits this index hard.
This was fixed in commit bf01e34b556 Tweak
Hi guys (and girls)
I've been banging my head over this for a few days now so if any of you kind
souls could take a minute to take a look at this I would be eternally
grateful.
I have a pretty straightforward query that is very slow by default, and
about 70 times faster when I set
To: pgsql-performance@postgresql.org
Subject: [PERFORM] Slow query: bitmap scan troubles
Hi guys (and girls)
I've been banging my head over this for a few days now so if any of you kind
souls could take a minute to take a look at this I would be eternally
grateful.
I have a pretty straightforward query
On Tue, Dec 4, 2012 at 12:06 PM, postgre...@foo.me.uk wrote:
Slow version with bitmapscan enabled: http://explain.depesz.com/s/6I7
Fast version with bitmapscan disabled: http://explain.depesz.com/s/4MWG
If you check the fast plan, it has a higher cost compared to
the slow plan.
The
On Tue, Dec 4, 2012 at 7:27 AM, Claudio Freire klaussfre...@gmail.com wrote:
On Tue, Dec 4, 2012 at 12:06 PM, postgre...@foo.me.uk wrote:
Slow version with bitmapscan enabled: http://explain.depesz.com/s/6I7
Fast version with bitmapscan disabled: http://explain.depesz.com/s/4MWG
If you check
On Tue, Dec 4, 2012 at 2:22 PM, Jeff Janes jeff.ja...@gmail.com wrote:
On Tue, Dec 4, 2012 at 7:27 AM, Claudio Freire klaussfre...@gmail.com wrote:
On Tue, Dec 4, 2012 at 12:06 PM, postgre...@foo.me.uk wrote:
Slow version with bitmapscan enabled: http://explain.depesz.com/s/6I7
Fast version
But the row estimates are not precise at the top of the join/filter.
It thinks there will be 2120 rows, but there are only 11.
So it seems like there is a negative correlation between the two tables
which is not recognized.
Yes, you are right there. I am only just beginning to understand how to
On Tue, Dec 4, 2012 at 3:03 PM, postgre...@foo.me.uk wrote:
Though that doesn't account for the 70x difference between the speed of the
two queries in actuality given a pretty similar expected speed (does it?).
It does go some way to explaining why a bad choice of plan was made.
I still
On Tue, Dec 4, 2012 at 3:31 PM, Philip Scott psc...@foo.me.uk wrote:
r_p_c 2 -> 1 (s_p_c 1 -> 0.5):
Is this really necessary?
(looks like a no-op, unless your CPU is slow)
Well, you don't need to put anything down. Most settings that change
planner decisions can be tuned on a per-query basis by issuing SET commands in
a given session. This should not affect other queries more than is needed
to run the query in the way the planner chooses.
Best regards, Vitalii Tymchyshyn
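Per-query tuning of the kind being discussed, as a sketch (the values mirror the r_p_c/s_p_c change mentioned elsewhere in this thread):

    BEGIN;
    SET LOCAL random_page_cost = 1.0;  -- reverts automatically at COMMIT/ROLLBACK
    SET LOCAL seq_page_cost = 0.5;
    -- run the bitmap-scan query here; other sessions are unaffected
    COMMIT;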
To: Philip Scott
Cc: postgre...@foo.me.uk; postgres performance list
Subject: Re: [PERFORM] Slow query: bitmap scan troubles
On Tue, Dec 4, 2012 at 3:31 PM, Philip Scott psc...@foo.me.uk wrote:
r_p_c 2 -> 1 (s_p_c 1 -> 0.5):
Is this really necessary?
(looks like a no-op, unless your CPU is slow
Subject: Re: [PERFORM] Slow query: bitmap scan troubles
Well, you don't need to put anything down. Most settings that change planner
decisions can be tuned on a per-query basis by issuing SET commands in a given
session. This should not affect other queries more than is needed to run the
query
postgre...@foo.me.uk wrote:
Ah okay, thanks. I knew I could set various things but not
effective_work_mem (I tried reloading the edited config file but
it didn't seem to pick it up)
Check the server log, maybe there was a typo or capitalization
error.
To test on a single connection you
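What single-connection testing looks like, sketched (presumably effective_cache_size was meant; there is no "effective_work_mem" GUC, which would explain the reload appearing to ignore it):

    SHOW effective_cache_size;            -- confirm the current value
    SET effective_cache_size = '88GB';    -- session-local change
    EXPLAIN SELECT ... ;                  -- does the plan flip for this session?
    RESET effective_cache_size;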
On Tue, Dec 4, 2012 at 9:47 AM, postgre...@foo.me.uk wrote:
eagerly awaiting 6pm when I can bring the DB down and start tweaking. The
effective_work_mem setting is going from 6GB to 88GB, which I think will make
quite a difference.
I also wonder if increasing (say x10) of
On Tue, Dec 4, 2012 at 9:47 AM, postgre...@foo.me.uk wrote:
But the row estimates are not precise at the top of the join/filter.
It thinks there will be 2120 rows, but there are only 11.
Ah... I didn't spot that one...
Yes, you are right there - this is probably a slightly atypical query of
On Tue, Dec 4, 2012 at 10:03 AM, postgre...@foo.me.uk wrote:
Though that doesn't account for the 70x difference between the speed of the
two queries in actuality given a pretty similar expected speed (does it?).
It kind of does. The expected speed is predicated on the number of
rows being
explain analyze
select a.ID, a.provider, a.hostname, a.username, a.eventTimeStamp,
a.AIPGUID, a.submissionGUID, a.parentSubmissionGUID, a.sizeArchived,
a.addedContentString, a.addedContentSizesString, a.removedContentString,
a.removedContentSizesString, a.modifiedContentString,
Can someone shed some light on the following query?
Any help would certainly be appreciated!
Thanks -
*
Maria Wilson
Nasa/Langley Research Center
Hampton, Virginia
m.l.wil...@nasa.gov
*
explain analyze
select a.ID, a.provider, a.hostname, a.username, a.eventTimeStamp,
a.AIPGUID,
Albe Laurenz *EXTERN* wrote
Increasing the statistics for test_result.id_recipe_version
had no effect?
I am going to normalize the table some more before partitioning.
How do you think that partitioning will help?
I increased the statistics in steps up to 5000 (with vacuum analyse) -
AndyG wrote:
Increasing the statistics for test_result.id_recipe_version
had no effect?
I increased the statistics in steps up to 5000 (with vacuum analyse) -
Seems
to be as good as it gets.
http://explain.depesz.com/s/z2a
Just out of curiosity, do you get a better plan with
Much better...
http://explain.depesz.com/s/uFi
But why? Is there a way to force the planner into this?
But why? Is there a way to force the planner into this?
I don't know enough about the planner to answer the why,
but the root of the problem seems to be the mis-estimate
for the join between test_result and recipe_version
(1348 instead of 21983 rows).
That makes the planner think that a nested
Externalizing the limit has improved the speed a lot. Distinct is half a
second faster than group by.
http://explain.depesz.com/s/vP1
with tmp as (
select distinct tr.nr as tnr
, tr.time_end as tend
, c.id_board as cb
, c.id_board_mini as cbm
, ti.id_test_result as itr
from test_item ti
,
Hello all,
I have been pulling my hair out over the last few days trying to get any useful performance out of the following
painfully slow query.
The query is JPA-created; I've just cleaned the aliases to make it more
readable.
Using 'distinct' or 'group by' delivers about the same results,
Andy wrote:
I have been pulling my hair out over the last few days trying to get
any useful performance out of the
following
painfully slow query.
The query is JPA-created; I've just cleaned the aliases to make it
more readable.
Using 'distinct' or 'group by' delivers about the same results,
Thanks very much Laurenz.
I'll put your suggestions into motion right away and let you know the
results.
Albe Laurenz *EXTERN* wrote
BTW, you seem to have an awful lot of indexes defined, some
of which seem redundant.
I am in the process of pruning unused/useless indexes on this database -
A marginal improvement.
http://explain.depesz.com/s/y63
I am going to normalize the table some more before partitioning.
AndyG wrote:
A marginal improvement.
http://explain.depesz.com/s/y63
That's what I thought.
Increasing the statistics for test_result.id_recipe_version
had no effect?
I am going to normalize the table some more before partitioning.
How do you think that partitioning will help?
Yours,
Hi
2012/8/8 Jeff Janes jeff.ja...@gmail.com:
On Tue, Aug 7, 2012 at 5:07 PM, Stefan Keller sfkel...@gmail.com wrote:
Hi Craig
Clever proposal!
I tried to adapt it slightly to the hstore involved.
Now I'm having a weird problem that PG says that relation 'p' does not
exist.
Why does PG
On Thu, Aug 9, 2012 at 4:00 AM, Stefan Keller sfkel...@gmail.com wrote:
Hi
2012/8/8 Jeff Janes jeff.ja...@gmail.com:
On Tue, Aug 7, 2012 at 5:07 PM, Stefan Keller sfkel...@gmail.com wrote:
Hi Craig
Clever proposal!
I tried to adapt it slightly to the hstore involved.
Now I'm having a
Hi
I have an interesting query to be optimized related to this one [1].
The query definition is: Select all buildings that have more than one
pharmacy and more than one school within a radius of 1000m.
The problem is that I think that this query is inherently O(n^2). In
fact the solution I
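One hedged way to express the requirement per building, so "same building" is built in (PostGIS plus hstore assumed; every table and column name here is illustrative, not from the thread):

    SELECT b.osm_id, b.way AS building_geometry
    FROM osm_polygon b
    WHERE b.tags -> 'building' IS NOT NULL
      AND (SELECT count(*) FROM osm_point p
           WHERE p.tags -> 'amenity' = 'pharmacy'
             AND ST_DWithin(b.way, p.way, 1000)) > 1
      AND (SELECT count(*) FROM osm_point p
           WHERE p.tags -> 'amenity' = 'school'
             AND ST_DWithin(b.way, p.way, 1000)) > 1;

With GiST indexes on the geometry columns, each ST_DWithin can use an index probe, though the per-building subqueries are still the O(n^2)-ish part under discussion.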
On 7 August 2012, 14:01, Stefan Keller wrote:
Hi
I have an interesting query to be optimized related to this one [1].
The query definition is: Select all buildings that have more than one
pharmacy and more than one school within a radius of 1000m.
The problem is that I think that this query
Your proposal lacks the requirement that it's the same building from
which the pharmacies and schools are reachable.
But I'll think about it.
Yours, S.
2012/8/7 Tomas Vondra t...@fuzzy.cz:
On 7 August 2012, 14:01, Stefan Keller wrote:
Hi
I have an interesting query to be optimized related to this one
On 7 August 2012, 14:22, Stefan Keller wrote:
Your proposal lacks the requirement that it's the same building from
which the pharmacies and schools are reachable.
But I'll think about it.
I don't know the dataset, so I assumed the osm_id identifies the
building - then the intersect should work as an AND
On Tue, Aug 7, 2012 at 5:01 AM, Stefan Keller sfkel...@gmail.com wrote:
Hi
I have an interesting query to be optimized related to this one [1].
The query definition is: Select all buildings that have more than one
pharmacy and more than one school within a radius of 1000m.
The problem is
Hi Craig
Clever proposal!
I tried to adapt it slightly to the hstore involved.
Now I'm having a weird problem that PG says that relation 'p' does not exist.
Why does PG recognize table b in the subquery but not table p?
Any ideas?
-- Stefan
SELECT b.way AS building_geometry
FROM
(SELECT way
On Tue, Aug 7, 2012 at 5:07 PM, Stefan Keller sfkel...@gmail.com wrote:
Hi Craig
Clever proposal!
I tried to adapt it slightly to the hstore involved.
Now I'm having a weird problem that PG says that relation 'p' does not
exist.
Why does PG recognize table b in the subquery but not table