__
> From: Tom Lane
> To:
> Date: 07.08.2013 17:53
> Subject: Re: [PERFORM] RE: [PERFORM] Re: [PERFORM] Sub-optimal plan for a
> paginated query on a view with another view inside of it.
>
> CC: "Igor Neyman", "Pavel Stehule", "
writes:
> "Total runtime: 9.313 ms" in pgAdmin
> "Total runtime:Â 9.363 ms" in psql.
> But timing after the query finished was 912.842 ms in psql.
Well, that's the downside of increasing join_collapse_limit and
from_collapse_limit: you might get a better plan, but it takes a lot
longer to get it.
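That also explains the timing gap above: "Total runtime" in EXPLAIN ANALYZE covers execution only, while psql's wall-clock figure includes planning. A minimal sketch for observing the planner overhead from psql, assuming a hypothetical view name:

\timing on
SET join_collapse_limit = 32;   -- values from this thread
SET from_collapse_limit = 32;
EXPLAIN ANALYZE SELECT * FROM paginated_view LIMIT 50;  -- hypothetical view
-- "Total runtime" is executor time only; psql's "Time:" line also includes
-- parse and plan time, so the difference approximates planner cost.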
From: sl...@centrum.sk [mailto:sl...@centrum.sk]
Sent: Wednesday, August 07, 2013 11:34 AM
To: Igor Neyman; Pavel Stehule
Cc: pgsql-performance@postgresql.org
Subject: RE: [PERFORM] Re: [PERFORM] Sub-optimal plan for a paginated query on
a view with another view inside of it.
I got:
"
" , Pavel Stehule
Date: 07.08.2013 16:48
Subject: RE: [PERFORM] Re: [PERFORM] Sub-optimal plan for a paginated query on
a view with another view inside of it.
CC: "pgsql-performance@postgresql.org"
From: sl...@centrum.sk [mailto:sl...@centrum.sk]
Sent: Wednesday, August 0
_
>> From: Igor Neyman
>> To: "sl...@centrum.sk", Pavel Stehule
>>
>> Date: 07.08.2013 15:47
>> Subject: RE: [PERFORM] Re: [PERFORM] Sub-optimal plan for a paginated
>> query on a view with another view inside of it.
>>
>
>>
2013/8/7 Igor Neyman :
>
>
> From: pgsql-performance-ow...@postgresql.org
> [mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of sl...@centrum.sk
> Sent: Wednesday, August 07, 2013 8:43 AM
> To: Pavel Stehule
> Cc: pgsql-performance@postgresql.org
> Subject: [PE
From: sl...@centrum.sk [mailto:sl...@centrum.sk]
Sent: Wednesday, August 07, 2013 10:43 AM
To: Igor Neyman; Pavel Stehule
Cc: pgsql-performance@postgresql.org
Subject: RE: [PERFORM] Re: [PERFORM] Sub-optimal plan for a paginated query on
a view with another view inside of it.
You're right
it
all.
I did this via pgAdmin, but that shouldn't matter, should it?
Thank you,
Peter Slapansky
__
From: Igor Neyman
To: "sl...@centrum.sk", Pavel Stehule
Date: 07.08.2013 15:47
Subject: RE: [PERFORM] Re: [PERFORM
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of sl...@centrum.sk
Sent: Wednesday, August 07, 2013 8:43 AM
To: Pavel Stehule
Cc: pgsql-performance@postgresql.org
Subject: [PERFORM] Re: [PERFORM] Sub-optimal plan for a paginated query on a
view with another view inside of it.
All three settings have been set to 32:
http://explain.depesz.com/s/cj2
Thank you.
Peter Slapansky
__
From: Pavel Stehule
To:
Date: 06.08.2013 21:01
Subject: Re: [PERFORM] Sub-optimal plan for a paginate
Hello
Please send the result of EXPLAIN ANALYZE.
Please use http://explain.depesz.com/ for saving the plan.
There are more than 8 joins, so try setting geqo_threshold to 16,
join_collapse_limit to 16, and from_collapse_limit to 16.
Regards
Pavel Stehule
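A minimal sketch of Pavel's suggestion, applied per-session so other connections keep the defaults:

SET geqo_threshold = 16;        -- stay with the exhaustive planner longer
SET join_collapse_limit = 16;   -- let the planner reorder more joins
SET from_collapse_limit = 16;   -- likewise for pulled-up subqueries
-- then re-run the query under EXPLAIN ANALYZE in the same session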
2013/8/2 :
> Good day,
>
> I have a perfo
Good day,
I have a performance issue when JOINing a view within another view more than
once.
The query takes over three seconds to execute, which is too long in this case.
It's not a problem if the tables are nearly empty, but that isn't the case on
the production database.
I suspect the pla
Linux or ZFS without sync.
Best regards,
Patrick
--- On Tue, 1/8/13, k...@rice.edu wrote:
From: k...@rice.edu
Subject: Re: [PERFORM] Sub optimal performance with default setting of
Postgresql with FreeBSD 9.1 on ZFS
To: "Patrick Dung"
Cc: pgsql-performance@postgresql.org
Date: Tuesday,
Hi Patrick,
You really need a flash ZIL with ZFS to handle syncs effectively. Setting
synchronous_commit to off is the best you can do without it. Not that that's
bad; we do that here as well.
Regards,
Ken
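Ken's suggestion in configuration form, a sketch; synchronous_commit = off can lose the last few commits after a crash, but it cannot corrupt the database:

-- cluster-wide, in postgresql.conf:
--   synchronous_commit = off
-- or per-session / per-transaction, which narrows the risk window:
SET synchronous_commit = off;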
On Tue, Jan 08, 2013 at 01:18:02AM +0800, Patrick Dung wrote:
> Hi,
>
> Updated information
Hi,
Updated information in this post.
I have installed PostgreSQL 9.2.2 (compiled by gcc) in FreeBSD 9.1 i386.
The pgsql base directory is in a ZFS dataset.
I have noticed the performance is sub-optimal, but I know the default
setting should be the safest one to use (without possible d
Hello,
SubPlan 2
-> Seq Scan on item_price (cost=0.00..423.30 rows=1 width=8)
(actual time=1.914..1.914 rows=0 loops=10669)
Filter: ((item_id = $1) AND (zone_id =
'OUsEaRcAA3jQrg42WHUm8A'::bpchar) AND (price_type = 0) AND
((size_name)::text = ($2)::text))
This means that, for ever
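The Filter line shows exactly which columns the correlated subplan probes, so a composite index could turn each of those 10669 sequential scans into an index lookup; a sketch, with the DDL assumed from the plan:

-- sketch: match the subplan's filter so each execution becomes an index probe
-- (column list mirrors the Filter: line above; index name is hypothetical)
CREATE INDEX item_price_lookup_idx
    ON item_price (item_id, zone_id, price_type, size_name);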
Hi all,
(Sorry, I know this is a repeat, but if you're using message threads,
the previous one was a reply to an OLD subject.)
The query below is fairly fast if the commented sub-select stays
commented out, but once I include that column, it takes over 10 minutes to
return results. Can someone shed s
Hi all,
The query below is fairly fast if the commented sub-select stays
commented out, but once I include that column, it takes over 10 minutes to
return results. Can someone shed some light on it? I was able to redo
the query using left joins instead, and it only marginally increased
result time. T
2009/10/30 Grzegorz Jaśkiewicz :
> for explains, use http://explain.depesz.com/
> besides, why are you using left join?
> the equivalent of IN () is just JOIN, not LEFT JOIN.
> And please, format your query so it's readable without twisting eyeballs
> before sending.
I prefer to have things posted to th
for explains, use http://explain.depesz.com/
besides, why are you using left join?
the equivalent of IN () is just JOIN, not LEFT JOIN.
And please, format your query so it's readable without twisting eyeballs
before sending.
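A sketch of the semantic difference Grzegorz is pointing at, using hypothetical tables t and z:

-- IN: each row of t appears at most once, and only when a match exists
SELECT t.* FROM t WHERE t.x IN (SELECT yy FROM z WHERE colname LIKE 'aaa%');
-- plain JOIN: the same filter, but rows of t duplicate unless yy is unique,
-- hence the DISTINCT
SELECT t.* FROM t
  JOIN (SELECT DISTINCT yy FROM z WHERE colname LIKE 'aaa%') s ON t.x = s.yy;
-- a LEFT JOIN instead also keeps t rows with NO match, padded with NULLs,
-- which is a different result entirely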
I had posted this on another thread, but did not get a response. Here
it is again:
explain analyze select thedate,sent.watch as wat, nod.point as fwl,
act.acttype, intf.pointofcontact, func.getNum(snum) as sss,
func.getNum(dnum) as ddd, dddport,
aaa.aaacol,szone.ssszn as ssszone, dzone.dddzn as
dd
On Thu, Oct 29, 2009 at 10:10 AM, Anj Adu wrote:
> Join did not help. A sequential scan is still being done. The
> hardcoded value in the IN clause performs the best. The time
> difference is more than an order of magnitude.
If you want help debugging a performance problem, you need to post
your
s.
Bob
--- On Thu, 10/29/09, Anj Adu wrote:
> From: Anj Adu
> Subject: Re: [PERFORM] sub-select in IN clause results in sequential scan
> To: "Angayarkanni"
> Cc: "Grzegorz Jaśkiewicz" ,
> pgsql-performance@postgresql.org
> Date: Thursday, October 29, 200
Join did not help. A sequential scan is still being done. The
hardcoded value in the IN clause performs the best. The time
difference is more than an order of magnitude.
2009/10/29 Angayarkanni :
>
> 2009/10/29 Grzegorz Jaśkiewicz
>>
>>
>> On Wed, Oct 28, 2009 at 6:13 PM, Anj Adu wrote:
>>>
>>>
2009/10/29 Grzegorz Jaśkiewicz
>
>
> On Wed, Oct 28, 2009 at 6:13 PM, Anj Adu wrote:
>
>> Postgres consistently does a sequential scan on the child partitions
>> for this query
>>
>> select * from partitioned_table
>> where partitioned_column > current_timestamp - interval '8 days'
>> and x in
On Wed, Oct 28, 2009 at 6:13 PM, Anj Adu wrote:
> Postgres consistently does a sequential scan on the child partitions
> for this query
>
> select * from partitioned_table
> where partitioned_column > current_timestamp - interval '8 days'
> and x in (select yy from z where colname like 'aaa%')
>
Postgres consistently does a sequential scan on the child partitions
for this query
select * from partitioned_table
where partitioned_column > current_timestamp - interval '8 days'
and x in (select yy from z where colname like 'aaa%')
If I replace the query with
select * from partitioned_table
w
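A sketch of the join rewrite discussed in this thread, using its table and column names:

SELECT p.*
  FROM partitioned_table p
  JOIN (SELECT DISTINCT yy FROM z WHERE colname LIKE 'aaa%') s ON p.x = s.yy
 WHERE p.partitioned_column > current_timestamp - interval '8 days';
-- the subselect's values are still unknown at plan time, which is why the
-- hardcoded IN list can be planned so much better against the partitions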
On Thu, Sep 10, 2009 at 10:56 AM, Tom Lane wrote:
> bricklen writes:
> > I just created a new index as Tom said, and the query *does* use the new
> > index (where ofid precedes date in the definition).
>
> And is it indeed faster than the other alternatives?
>
>regards, t
bricklen writes:
> I just created a new index as Tom said, and the query *does* use the new
> index (where ofid precedes date in the definition).
And is it indeed faster than the other alternatives?
regards, tom lane
On Thu, Sep 10, 2009 at 10:07 AM, bricklen wrote:
> On Thu, Sep 10, 2009 at 10:02 AM, Robert Haas wrote:
>
>> On Thu, Sep 10, 2009 at 12:56 PM, bricklen wrote:
>> > On Thu, Sep 10, 2009 at 8:43 AM, Tom Lane wrote:
>> >>
>> >> bricklen writes:
>> >> > Is there any other data I can provide to sh
On Thu, Sep 10, 2009 at 10:02 AM, Robert Haas wrote:
> On Thu, Sep 10, 2009 at 12:56 PM, bricklen wrote:
> > On Thu, Sep 10, 2009 at 8:43 AM, Tom Lane wrote:
> >>
> >> bricklen writes:
> >> > Is there any other data I can provide to shed some light on this?
> >>
> >> The table and index defini
On Thu, Sep 10, 2009 at 12:56 PM, bricklen wrote:
> On Thu, Sep 10, 2009 at 8:43 AM, Tom Lane wrote:
>>
>> bricklen writes:
>> > Is there any other data I can provide to shed some light on this?
>>
>> The table and index definitions?
>>
>> The straight indexscan would probably win if the index c
On Thu, Sep 10, 2009 at 9:57 AM, Robert Haas wrote:
> 2009/9/10 :
> >> Playing around with seq_page_cost (1) and random_page_cost (1), I can
> get
> >> the correct index selected. Applying those same settings to our
> production
> >> server does not produce the optimal plan, though.
> >
> > I do
2009/9/10 :
>> Playing around with seq_page_cost (1) and random_page_cost (1), I can get
>> the correct index selected. Applying those same settings to our production
>> server does not produce the optimal plan, though.
>
> I doubt setting seq_page_cost and random_page_cost to the same value is
>
On Thu, Sep 10, 2009 at 8:43 AM, Tom Lane wrote:
> bricklen writes:
> > Is there any other data I can provide to shed some light on this?
>
> The table and index definitions?
>
> The straight indexscan would probably win if the index column order
> were ofid, date instead of date, ofid. I can't
bricklen writes:
> Is there any other data I can provide to shed some light on this?
The table and index definitions?
The straight indexscan would probably win if the index column order
were ofid, date instead of date, ofid. I can't tell if you have
any other queries for which the existing colu
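Tom's suggestion as DDL, a sketch (table and index names assumed):

-- equality column first, range column second: the scan then covers one
-- contiguous slice of the index for a given ofid
CREATE INDEX sometable_ofid_date_idx ON sometable (ofid, date);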
> Playing around with seq_page_cost (1) and random_page_cost (1), I can get
> the correct index selected. Applying those same settings to our production
> server does not produce the optimal plan, though.
I doubt setting seq_page_cost and random_page_cost to the same value is
reasonable - random a
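Cost settings are per-session GUCs, so they can be tested without touching the server configuration; a sketch:

SET seq_page_cost = 1.0;
SET random_page_cost = 1.0;  -- equal costs model a fully cached database
-- re-run the query under EXPLAIN ANALYZE here, then restore the defaults:
RESET seq_page_cost;
RESET random_page_cost;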
Hi Tomas,
2009/9/10
> > default_statistics_target = 100 (tried with 500, no change). Vacuum
> > analyzed
> > before initial query, and after each change to default_statistics_target.
>
> Modifying the statistics target is useful only if the estimates are
> seriously off, which is not your case -
> default_statistics_target = 100 (tried with 500, no change). Vacuum
> analyzed
> before initial query, and after each change to default_statistics_target.
Modifying the statistics target is useful only if the estimates are
seriously off, which is not your case - so it won't help, at least not
re
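If a single column's estimates ever do need help, the target can also be raised per column rather than globally; a sketch with hypothetical names:

-- cheaper than raising default_statistics_target for the whole database
ALTER TABLE mytable ALTER COLUMN mycolumn SET STATISTICS 500;
ANALYZE mytable;  -- the new target takes effect at the next analyze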
In the following query, we are seeing a sub-optimal plan being chosen. The
following results are after running the query several times (after each
change).
dev1=# select version();
version
---
Hi, Scott and Hale,
Scott Marlowe wrote:
> Make sure analyze has been run and that the statistics are fairly
> accurate.
It might also help to increase the statistics_target on the column in
question.
HTH,
Markus
--
Markus Schaber | Logical Tracking&Tracing International AG
Dipl. Inf. | So
On Wed, 2006-08-02 at 07:17, H Hale wrote:
> Initial testing was with data that essentially looks like a single collection
> with many items.
> I then changed this to have 60 collections of 50 items.
> The result, much better (but not optimum) use of indexes, but a seq scan still
> used.
>
> Tu
Initial testing was with data that essentially looks like a single collection
with many items. I then changed this to have 60 collections of 50 items.
The result: much better (but not optimum) use of indexes, but a seq scan was
still used. Turning seq scan off, all indexes were used. Query was much fas
Not sure if this helps solve the problem but... (see below) As new records
are added, indexes are used for a while and then at some point postgres
switches to seq scan. It is repeatable. Any suggestions/comments to try and
solve this are welcome. Thanks. Data is as follows: capsa.flatommemberre
Tom,
It is unique.
Indexes:
    "flatomfilesysentry_pkey" PRIMARY KEY, btree (objectid)
    "capsa_flatomfilesysentry_name_idx" btree (name)
Foreign-key constraints:
    "objectid" FOREIGN KEY (objectid) REFERENCES capsa_sys.master(objectid) ON DELETE CASCADE
Tom Lane <[EMAIL PROTECTED]> wrote:
H Hale <[EMAIL PROTECTED]> writes:
> -> Bitmap Heap Scan on flatomfilesysentry (cost=2.00..274.38 rows=3238
> width=30) (actual time=0.011..0.013 rows=1 loops=6473)
> Recheck Cond: (flatomfilesysentry.objectid = "outer".dstobj)
> -> Bitmap Index Scan on flatomfilesysentr
Look at that second seq-scan (on flatommemberrelation) - it's looping 5844
times (once for each row in flatomfilesysentry). I'd expect PG to materialise
the seq-scan once and then join (unless I'm missing something, the subselect
just involves the one test against a constant). I'm guessing something
Rod Taylor <[EMAIL PROTECTED]> writes:
>> Nested Loop IN Join (cost=0.00..1386.45 rows=5809 width=14) (actual
>> time=2.933..101467.463 rows=5841 loops=1)
>> Join Filter: ("outer".objectid = "inner".dstobj)
>> -> Seq Scan on flatomfilesysentry (cost=0.00..368.09 rows=5809
>> width=30) (actual ti
> capsa=# explain analyze select name from capsa.flatomfilesysentry
> where objectid in ( select dstobj from capsa.flatommemberrelation
> where srcobj = 'c1c7304a-1fe1-11db-8af7-001143214409');
>
>
>QUERY PLAN
> -
H Hale wrote:
I am testing a query that has a sub-select. The query performance is very
poor, as shown below, due to the use of sequential scans. The actual row
count of both tables is also shown. It appears the row count shown by
explain analyze does not match the actual count. Columns d
I am testing a query that has a sub-select. The query performance is very
poor, as shown below, due to the use of sequential scans. The actual row
count of both tables is also shown. It appears the row count shown by
explain analyze does not match the actual count. Columns dstobj, srcobj &
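When estimated and actual row counts diverge this much, refreshing the optimizer statistics is the first step; a sketch using the table names from the thread:

ANALYZE capsa.flatomfilesysentry;
ANALYZE capsa.flatommemberrelation;
-- then compare rows= (estimate) against actual ... rows= in a fresh
-- EXPLAIN ANALYZE; large gaps usually mean stale or insufficient statistics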