2017-11-15 20:58 GMT+01:00 Gunther :
>
> On 11/15/2017 8:12, Pavel Stehule wrote:
>
> There is a wrong plan due to a wrong estimation.
>
> For this query you should penalize the nested loop:
>
> set enable_nestloop to off;
>
> before evaluating this query.
>
>
> You are not the only one with this issue. M
On 11/15/2017 8:12, Pavel Stehule wrote:
There is a wrong plan due to a wrong estimation.
For this query you should penalize the nested loop:
set enable_nestloop to off;
before evaluating this query.
You are not the only one with this issue. May I suggest looking at this
thread a little earlier th
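Pavel's suggestion can be applied per session so that it only affects the problem query; a minimal sketch (the query itself is a placeholder):

```sql
-- Discourage the planner from choosing a nested loop join,
-- for this session only; other sessions are unaffected.
SET enable_nestloop TO off;

EXPLAIN ANALYZE
SELECT ...;  -- the slow query goes here

-- Restore the default afterwards.
RESET enable_nestloop;
```

This is a diagnostic knob rather than a fix: if the plan improves, the underlying problem is still the row-count misestimate.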
On Wed, Nov 15, 2017 at 03:03:39PM +0530, Samir Magar wrote:
> I am having performance issues with one of the queries.
> The query is taking 39 min to fetch 3.5 million records.
>
> I want to reduce that time to 15 mins.
> Could you please suggest something to improve its performance?
> "HashAggregate (cost=4
2017-11-15 13:54 GMT+01:00 Samir Magar :
> please find the EXPLAIN ANALYZE output.
>
> On Wed, Nov 15, 2017 at 3:13 PM, Pavel Stehule
> wrote:
>
>> Hi
>>
>> please send EXPLAIN ANALYZE output.
>>
>> Regards
>>
>> Pavel
>>
>> 2017-11-15 10:33 GMT+01:00 Samir Magar :
>>
>>> Hello,
>>> I am having p
please find the EXPLAIN ANALYZE output.
On Wed, Nov 15, 2017 at 3:13 PM, Pavel Stehule
wrote:
> Hi
>
> please send EXPLAIN ANALYZE output.
>
> Regards
>
> Pavel
>
> 2017-11-15 10:33 GMT+01:00 Samir Magar :
>
>> Hello,
>> I am having performance issues with one of the queries.
>> The query is takin
Hi
please send EXPLAIN ANALYZE output.
Regards
Pavel
2017-11-15 10:33 GMT+01:00 Samir Magar :
> Hello,
> I am having performance issues with one of the queries.
> The query is taking 39 min to fetch 3.5 million records.
>
> I want to reduce that time to 15 mins.
> Could you please suggest something
Hello,
I am having performance issues with one of the queries.
The query is taking 39 min to fetch 3.5 million records.
I want to reduce that time to 15 mins.
Could you please suggest something to improve its performance?
server configuration:
CPUs = 4
memory = 16 GB
shared_buffers = 3 GB
work_mem = 100MB
eff
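For a report like this, the list usually asks for the actual execution plan, not just the estimate; a minimal sketch (the query itself is a placeholder):

```sql
-- ANALYZE runs the query and reports real row counts and timings;
-- BUFFERS shows how much data each node read from cache vs. disk.
EXPLAIN (ANALYZE, BUFFERS)
SELECT ...;  -- the 39-minute query goes here
```

Comparing the estimated `rows=` against the actual `rows=` per node is what reveals the kind of misestimate discussed later in this thread.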
Based on my initial hunch that something resulting from all the ALTERs was
making the PostgreSQL planner end up with bad plans, I tried a pg_dump and
pg_restore. Now the 'bad' query comes back in 70 seconds (compared to 20
minutes earlier) and the rewritten query still comes back in 2 seconds. So
we wi
Hello,
>
> If that does not help, you'll have to change the query probably. The
> problem is the explain analyze you've provided
> (http://explain.depesz.com/s/MY1) does not match the query from your
> yesterday's post so we can't really help with it.
Thanks for the pointers. I think I posted the
On 4 September 2011, 20:06, Jayadevan wrote:
> I don't think I understood all that. Anyway, is there a way to fix this -
> either by rewriting the query or by creating an index? The output does
> match
> what I am expecting. It does take more than 10 times the time taken by
> Oracle for the same result
I don't think I understood all that. Anyway, is there a way to fix this -
either by rewriting the query or by creating an index? The output does match
what I am expecting. It does take more than 10 times the time taken by
Oracle for the same result, with PostgreSQL taking more than 20 minutes. I
a
"Kevin Grittner" writes:
> Thanks for posting the query and related schema. I tried working
> through it, but I keep coming back to this sort, and wondering how a
> sort can have 1121 rows as input and 2673340321 rows as output. Does
> anyone have any ideas on what could cause that?
Mergejoin r
Jayadevan M wrote:
> Here is the explain analyze
> http://explain.depesz.com/s/MY1
> PostgreSQL 9.0.4 on x86_64-pc-solaris2.10
> work_mem = 96MB
Thanks for posting the query and related schema. I tried working
through it, but I keep coming back to this sort, and wondering how a
sort can h
Order by ... upper(xyz) — do you have a functional index on these?
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
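If the ORDER BY really uses upper(...), an expression index matching it can let the planner avoid the sort entirely; a sketch with hypothetical table and column names:

```sql
-- Hypothetical names; adjust to the real schema.
CREATE INDEX memmst_upper_name_idx ON memmst (upper(memname));

-- The planner can only use this index when the query contains
-- the exact same expression:
SELECT * FROM memmst ORDER BY upper(memname);
```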
Here goes. I think it might be difficult to go through all these
definitions...
PRGMEMACCMST
      Table "public.prgmemaccmst"
 Column | Type                 | Modifiers
--------+----------------------+-----------
 cmpcod | character varying(5) |
Jayadevan M wrote:
>> And the schema of the tables involved, and any indexes on them.
> The details of the tables and indexes may take a bit of effort to
> explain. Will do that.
In psql you can do \d to get a decent summary.
Without seeing the query and the table definitions, it's hard t
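For reference, the psql meta-commands look like this (using a table name from this thread):

```sql
-- In psql: list all relations, then describe one table,
-- which prints its columns, types, indexes and constraints.
\d
\d prgmemaccmst
```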
>
> A really interesting part is the sort near the bottom -
>
> -> Sort (cost=1895.95..1896.49 rows=215 width=61) (actual
> time=25.926..711784.723 rows=2673340321 loops=1)
> Sort Key: memmst.memshpsta
> Sort Method: quicksort Memory: 206kB
> -> Nested Loop (cost=0.01..1887.62 r
On 31 August 2011, 13:19, Jayadevan M wrote:
> Hello,
>
>> >
>> > Please run EXPLAIN ANALYZE on the query and post that, it's hard to
>> > say what's wrong from just the query plan, without knowing where the
>> > time is actually spent.
>> Here is the explain analyze
>> http://explain.depesz.com
Missed looping in the community...
On Wed, Aug 31, 2011 at 5:01 PM, Venkat Balaji wrote:
> Could you help us know the tables and columns on which indexes are built?
>
> The query is performing a sort on the key upper(column), and that is
> where I believe the cost is high.
>
> The 'upper' functio
Hello,
> >
> > Please run EXPLAIN ANALYZE on the query and post that, it's hard to
> > say what's wrong from just the query plan, without knowing where the
> > time is actually spent.
> Here is the explain analyze
> http://explain.depesz.com/s/MY1
Going through the url tells me that statis
Hello,
>
> Please run EXPLAIN ANALYZE on the query and post that, it's hard to say
> what's wrong from just the query plan, without knowing where the time is
> actually spent.
Here is the explain analyze
http://explain.depesz.com/s/MY1
Regards,
Jayadevan
Hello,
> Please run EXPLAIN ANALYZE on the query and post that, it's hard to say
> what's wrong from just the query plan, without knowing where the time is
> actually spent. And the schema of the tables involved, and any indexes
> on them. (see also http://wiki.postgresql.org/wiki/SlowQueryQue
Where is the query? And also paste the \d to show the tables and
indexes.
-Sushant.
On Wed, 2011-08-31 at 14:30 +0530, Jayadevan M wrote:
> Hello all,
> I have a query which takes about 20 minutes to execute and retrieves
> 2000-odd records. The explain for the query is pasted here
> http://exp
On 31.08.2011 12:00, Jayadevan M wrote:
Hello all,
I have a query which takes about 20 minutes to execute and retrieves
2000-odd records. The explain for the query is pasted here
http://explain.depesz.com/s/52f
The same query, with similar data structures/indexes and data comes back
in 50 seconds
Hello all,
I have a query which takes about 20 minutes to execute and retrieves
2000-odd records. The explain for the query is pasted here
http://explain.depesz.com/s/52f
The same query, with similar data structures/indexes and data comes back
in 50 seconds in Oracle. We just ported the product t
Jonathan Gray
-----Original Message-----
From: Chris [mailto:[EMAIL PROTECTED]
Sent: Tuesday, July 24, 2007 2:36 AM
To: Jonathan Gray
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Query performance issue
Jonathan Gray wrote:
> Chris,
>
> Creating indexes on the customercl
Jonathan Gray wrote:
Chris,
Creating indexes on the customerclass table does speed up the queries but
still does not create the plan we are looking for (using the double index
with a backward index scan on the orders table).
Stupid question - why is that particular plan your "goal" plan?
The
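A backward index scan on the orders table would typically come from a composite index whose leading column matches the join or filter key; a sketch with hypothetical column names, loosely following the test case:

```sql
-- Hypothetical schema: orders(customerclass_id, order_date, ...).
CREATE INDEX orders_class_date_idx
    ON orders (customerclass_id, order_date);

-- An ORDER BY ... DESC over the trailing indexed column can then be
-- answered by an Index Scan Backward, avoiding an explicit sort:
SELECT *
  FROM orders
 WHERE customerclass_id = 42
 ORDER BY order_date DESC
 LIMIT 10;
```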
007 1:51 AM
To: Jonathan Gray
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Query performance issue
Chris wrote:
> Jonathan Gray wrote:
>> We're experiencing a query performance problem related to the planner
>> and its ability to perform a specific type of merge.
>
Chris wrote:
Jonathan Gray wrote:
We’re experiencing a query performance problem related to the planner
and its ability to perform a specific type of merge.
We have created a test case (as attached, or here:
http://www3.streamy.com/postgres/indextest.sql) which involves a
hypothetical cus
Jonathan Gray wrote:
We’re experiencing a query performance problem related to the planner
and its ability to perform a specific type of merge.
We have created a test case (as attached, or here:
http://www3.streamy.com/postgres/indextest.sql) which involves a
hypothetical customer ordering
From: "Josh Berkus" <[EMAIL PROTECTED]>
To: "Stefano Bonnin" <[EMAIL PROTECTED]>
Sent: Monday, August 30, 2004 6:54 PM
Subject: Re: [PERFORM] Query performance issue with 8.0.0beta1
> Stefano,
>
> > This is my postgres.conf, I have changed only the work_mem and
> > s
- Original Message -
From: "Stefano Bonnin" <[EMAIL PROTECTED]>
To: "Josh Berkus" <[EMAIL PROTECTED]>
Sent: Monday, August 30, 2004 4:13 PM
Subject: Re: [PERFORM] Query performance issue with 8.0.0beta1
> This is my postgres.conf, I have changed onl
Stefano,
> Hi, I have just installed 8.0.0beta1 and I noticed that some queries are
> slower than the 7.4.2 queries.
Seems unlikely. How have you configured postgresql.conf? Did you
configure it for the 8.0 database?
--
Josh Berkus
Aglio Database Solutions
San Francisco
Is it possible (to mix two threads) that you had CLUSTERed the table on the old
database in a way that retrieved the records in this query faster?
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster
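For reference, clustering physically reorders a table by an index, which can make range scans over that key much cheaper; in the 7.4/8.0-era syntax, using the index and table names from this thread:

```sql
-- Rewrite "SNS_DATA" in the physical order of snsdata_codpar.
-- Note: this takes an exclusive lock, and the ordering is not
-- maintained for rows inserted or updated later.
CLUSTER snsdata_codpar ON "SNS_DATA";
ANALYZE "SNS_DATA";
```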
7.4.2
> Aggregate  (cost=46817.89..46817.89 rows=1 width=0) (actual time=401.216..401.217 rows=1 loops=1)
>   ->  Index Scan using snsdata_codpar on "SNS_DATA"  (cost=0.00..46817.22 rows=268 width=0) (actual time=165.948..400.258 rows=744 loops=1)
>         Index Cond: (("Cod_Par")::text =
Hi, I have just installed 8.0.0beta1 and I noticed that some queries are
slower than the 7.4.2 queries.
After:
pg_dump my_database > mydb.sql (from 7.4.2)
psql my_new_database
VACUUM FULL ANALYZE
*** With the old db on 7.4.2 ***
explain analyze SELECT count(*) FROM "SNS_DATA"
WHERE "Data_Arri