On Thursday 09 June 2011 16:04, anthony.ship...@symstream.com wrote:
> I must be doing something really wrong to get this to happen:
Yes I did. Ignore that.
--
Anthony Shipman | flailover systems: When one goes down it
anthony.ship...@symstream.com | flails about until the other
On Wednesday 08 June 2011 19:47, t...@fuzzy.cz wrote:
> Have you tried to create a composite index on those two columns? Not sure
> if that helps but I'd try that.
>
> Tomas
Do you mean
create index tdiag_index2 ON tdiag(diag_id, create_time);
Should this be in addition to or instead of the existing index?
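Either way, one quick check (a sketch only, reusing the query quoted later in this thread) is to build the candidate index and see whether the planner actually picks it up:

create index tdiag_index2 on tdiag (diag_id, create_time);
explain analyze
select * from tdiag where (create_time >= '2011-06-03 09:49:04.00+0'
    and create_time < '2011-06-06 09:59:04.00+0')
 order by diag_id limit 1;
-- drop index tdiag_index2;   -- if the plan does not use it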
Samuel Gendler wrote:
Sure, but if it is a query that is slow enough for a time estimate to
be useful, odds are good that stats that are that far out of whack
would actually be interesting to whoever is looking at the time
estimate, so showing some kind of 'N/A' response once things have
gotte
-- Forwarded message --
From: Claudio Freire
Date: Wed, Jun 8, 2011 at 11:57 PM
Subject: Re: [PERFORM] poor performance when recreating constraints on
large tables
To: Samuel Gendler
On Wed, Jun 8, 2011 at 9:57 PM, Samuel Gendler
wrote:
> Sure, but if it is a query that is slow
On Wed, Jun 8, 2011 at 12:53 PM, Kevin Grittner wrote:
> Samuel Gendler wrote:
>
> > The planner knows how many rows are expected for each step of the
> > query plan, so it would be theoretically possible to compute how
> > far along it is in processing a query based on those estimates,
> > wouldn't it?
Oooo...some bad math there. Thanks.
On Wed, 2011-06-08 at 12:38 -0700, Samuel Gendler wrote:
>
>
> On Wed, Jun 8, 2011 at 12:03 PM, Tony Capobianco
> wrote:
> My current setting is 22G. According to some documentation, I want to
> set effective_cache_size to my OS disk cache + shared_buffers.
Samuel Gendler wrote:
> The planner knows how many rows are expected for each step of the
> query plan, so it would be theoretically possible to compute how
> far along it is in processing a query based on those estimates,
> wouldn't it?
And it is sometimes off by orders of magnitude. How muc
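To put invented but illustrative numbers on that point: with

  progress = rows_returned_so_far / planner_estimated_rows

a node estimated at rows=10000 that actually produces 1,000,000 rows would
report "100% done" after only 1% of its real work, while an estimate too high
by the same factor would leave the indicator crawling at 1% when the node has
already finished.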
On Wed, Jun 8, 2011 at 12:28 PM, Robert Haas wrote:
> On Mon, Jun 6, 2011 at 6:10 PM, Mike Broers wrote:
> > Thanks for the suggestion, maintenance_work_mem is set to the default of
> > 16MB on the host that was taking over an hour as well as on the host that
> > was taking less than 10 minutes.
On Wed, Jun 8, 2011 at 12:03 PM, Tony Capobianco wrote:
> My current setting is 22G. According to some documentation, I want to
> set effective_cache_size to my OS disk cache + shared_buffers. In this
> case, I have 4 quad-core processors with 512K cache (8G) and my
> shared_buffers is 7680M.
Tony Capobianco wrote:
> According to some documentation, I want to set
> effective_cache_size to my OS disk cache + shared_buffers.
That seems reasonable, and is what has worked well for me.
> In this case, I have 4 quad-core processors with 512K cache (8G)
> and my shared_buffers is 7680M.
On Mon, Jun 6, 2011 at 6:10 PM, Mike Broers wrote:
> Thanks for the suggestion, maintenance_work_mem is set to the default of
> 16MB on the host that was taking over an hour as well as on the host that
> was taking less than 10 minutes. I tried setting it to 1GB on the faster
> test server and it
My current setting is 22G. According to some documentation, I want to
set effective_cache_size to my OS disk cache + shared_buffers. In this
case, I have 4 quad-core processors with 512K cache (8G) and my
shared_buffers is 7680M. Therefore my effective_cache_size should be
approximately 16G? Mo
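As a side note, effective_cache_size is only a planner hint (nothing is allocated), and the OS-cache part of the sum should come from the operating system (for example the "cached" column of free), not from the CPUs' 512K on-chip caches. A minimal postgresql.conf sketch using the thread's own figure:

# postgresql.conf -- illustrative; confirm the OS cache figure with free or vmstat
# effective_cache_size ~= OS page cache + shared_buffers
effective_cache_size = 16GB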
2011/6/8 Tony Capobianco :
> pg_dw=# show random_page_cost ;
>  random_page_cost
> ------------------
>  4
> (1 row)
>
> Time: 0.299 ms
> pg_dw=# show seq_page_cost ;
>  seq_page_cost
> ---------------
>  1
> (1 row)
>
> Time: 0.250 ms
> pg_dw=# show work_mem ;
>  work_mem
> ----------
>  768MB
> (1 row)
Tony Capobianco writes:
> Well, this ran much better. However, I'm not sure if it's because of
> set enable_nestloop = 0, or because I'm executing the query twice in a
> row, where previous results may be cached. I will try this setting in
> my code for when this process runs later today and see what the
> result is.
Hi All,
Let me clarify this a bit.
The memory values are ridiculous, you're completely correct. I've
since fixed that (it had no effect, sadly).
I've adjusted the shared buffers to about 400MB, per the tuning
guide's advice to set that to around 1/4 of system memory (the AWS
instance in question has 1.7GB).
pg_dw=# show random_page_cost ;
 random_page_cost
------------------
 4
(1 row)
Time: 0.299 ms
pg_dw=# show seq_page_cost ;
 seq_page_cost
---------------
 1
(1 row)
Time: 0.250 ms
pg_dw=# show work_mem ;
 work_mem
----------
 768MB
(1 row)
On Wed, 2011-06-08 at 18:27 +0200, Pavel Stehule wrote:
Well, this ran much better. However, I'm not sure if it's because of
set enable_nestloop = 0, or because I'm executing the query twice in a
row, where previous results may be cached. I will try this setting in
my code for when this process runs later today and see what the result
is.
Thanks!
pg
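One way to separate the two effects (plan change vs. warm cache) is to run the statement more than once under each setting in the same session; a sketch using the query from this thread:

EXPLAIN ANALYZE
SELECT o.emailcampaignid, count(memberid) opencnt
  FROM openactivity o, ecr_sents s
 WHERE s.emailcampaignid = o.emailcampaignid
 GROUP BY o.emailcampaignid;       -- run this twice at the default setting first
SET enable_nestloop = off;         -- session-local, nothing persists
EXPLAIN ANALYZE
SELECT o.emailcampaignid, count(memberid) opencnt
  FROM openactivity o, ecr_sents s
 WHERE s.emailcampaignid = o.emailcampaignid
 GROUP BY o.emailcampaignid;       -- ...then twice with nested loops disabled
RESET enable_nestloop;

If the second default-setting run is still slow on warm cache while the enable_nestloop = off run is fast, the improvement came from the plan change rather than from caching.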
Hello
what is your settings for
random_page_cost, seq_page_cost and work_mem?
Regards
Pavel Stehule
2011/6/8 Tony Capobianco :
> Here's the explain analyze:
>
> pg_dw=# explain analyze CREATE TABLE ecr_opens with (FILLFACTOR=100)
> as
> select o.emailcampaignid, count(memberid) opencnt
> from
Here's the explain analyze:
pg_dw=# explain analyze CREATE TABLE ecr_opens with (FILLFACTOR=100)
as
select o.emailcampaignid, count(memberid) opencnt
from openactivity o,ecr_sents s
where s.emailcampaignid = o.emailcampaignid
group by o.emailcampaignid;
QUERY PLAN
On 08.06.11 18:40, Tony Capobianco wrote:
pg_dw=# set enable_nestloop =0;
SET
Time: 0.165 ms
pg_dw=# explain CREATE TABLE ecr_opens with (FILLFACTOR=100)
pg_dw-# as
pg_dw-# select o.emailcampaignid, count(memberid) opencnt
pg_dw-# from openactivity o,ecr_sents s
pg_dw-# where s.emailcampaignid = o.emailcampaignid
pg_dw-# group by o.emailcampaignid;
* Tony Capobianco (tcapobia...@prospectiv.com) wrote:
> HashAggregate  (cost=4391163.81..4391288.05 rows=9939 width=12)
>   ->  Hash Join  (cost=14.78..4344767.23 rows=9279316 width=12)
>         Hash Cond: (o.emailcampaignid = s.emailcampaignid)
>         ->  Seq Scan on openactivity o  (cost=
pg_dw=# set enable_nestloop =0;
SET
Time: 0.165 ms
pg_dw=# explain CREATE TABLE ecr_opens with (FILLFACTOR=100)
pg_dw-# as
pg_dw-# select o.emailcampaignid, count(memberid) opencnt
pg_dw-# from openactivity o,ecr_sents s
pg_dw-# where s.emailcampaignid = o.emailcampaignid
pg_dw-# group by o.emailcampaignid;
Tony Capobianco writes:
> pg_dw=# explain CREATE TABLE ecr_opens with (FILLFACTOR=100)
> pg_dw-# as
> pg_dw-# select o.emailcampaignid, count(memberid) opencnt
> pg_dw-# from openactivity o,ecr_sents s
> pg_dw-# where s.emailcampaignid = o.emailcampaignid
> pg_dw-# group by o.emailcampaignid;
> On Postgres, this same query takes about 58 minutes (could not run
> explain analyze because it is in progress):
>
> pg_dw=# explain CREATE TABLE ecr_opens with (FILLFACTOR=100)
> pg_dw-# as
> pg_dw-# select o.emailcampaignid, count(memberid) opencnt
> pg_dw-# from openactivity o,ecr_sents s
>
We are this close to moving our data warehouse from Oracle to
Postgres. This query is identical on both systems, but runs much, much
faster on Oracle. Our Postgres host has far superior hardware and
tuning parameters have been set via pgtune. Most everything else runs
faster in Postgres, exc
On 06/08/2011 06:30 AM, t...@fuzzy.cz wrote:
shared_buffers | 16MB
work_mem | 250MB
This seems a bit suspicious. Are you sure you want to keep the
shared_buffers so small and work_mem so large at the same time? There
probably are workloads where this is th
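For contrast, a more conventional split on a small (~1.7GB) machine would put more into shared_buffers and far less into per-operation work_mem; a sketch only, since the right values depend on concurrency and workload:

# postgresql.conf -- illustrative, not prescriptive
shared_buffers = 400MB     # roughly 1/4 of RAM; changing it requires a restart
work_mem = 16MB            # per sort/hash per backend, so large values multiply quickly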
On 06/08/2011 07:08 PM, t...@fuzzy.cz wrote:
Thanks for including explain analyze output.
Is there any chance you can pop the full explains (not just excerpts) in
here:
http://explain.depesz.com/
?
I believe he already did that - there's a link below each query.
Gah, I'm blind. Thanks.
> shared_buffers | 16MB
> work_mem | 250MB
This seems a bit suspicious. Are you sure you want to keep the
shared_buffers so small and work_mem so large at the same time? There
probably
> Thanks for including explain analyze output.
>
> Is there any chance you can pop the full explains (not just excerpts) in
> here:
>
> http://explain.depesz.com/
>
> ?
I believe he already did that - there's a link below each query.
Tomas
Hello
2011/6/8 :
> On Wednesday 08 June 2011 18:39, Pavel Stehule wrote:
>> if you use FOR statement, there should be a problem in using a
>> implicit cursor - try to set a GUC cursor_tuple_fraction to 1.0.
> Alas this is mammoth replicator, equivalent to PG 8.3 and it doesn't have that
> parameter.
> On Wednesday 08 June 2011 17:39, Claudio Freire wrote:
>> Of course optimally executing a plan with limit is a lot different
>> than one without.
>
> I imagined that limit just cuts out a slice of the query results.
> If it can find 8 rows in 0.5 seconds then I would have thought that
> retur
On Wednesday 08 June 2011 18:39, Pavel Stehule wrote:
> if you use FOR statement, there should be a problem in using a
> implicit cursor - try to set a GUC cursor_tuple_fraction to 1.0.
Alas this is mammoth replicator, equivalent to PG 8.3 and it doesn't have that
parameter.
--
Anthony Shipman
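For servers that do have that GUC (it arrived in releases newer than the 8.3-based build mentioned here), Pavel's suggestion would look roughly like this, as a sketch:

SET cursor_tuple_fraction = 1.0;  -- plan cursors / PL/pgSQL FOR loops as if all rows will be fetched
-- ... run the FOR loop or cursor-based query here ...
RESET cursor_tuple_fraction;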
On 8/06/2011 10:58 AM, John Williams wrote:
-> Bitmap Heap Scan on logparser_entry
(cost=4119.06..21520.55 rows=68787 width=8) (actual
time=107.032..444.864 rows=16168 loops=1)
Recheck Cond: ((event_type)::text = ANY
('{Attack,"DoT Tick","Critical Attack"}
Hi All,
I'm having issues with a set of fairly related queries in my
application. EXPLAIN ANALYZE is showing them all to be getting stuck
performing roughly the same operation:
-> Bitmap Heap Scan on logparser_entry
(cost=4119.06..21520.55 rows=68787 width=8) (actual
time=107.032..
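The gap between the estimate (rows=68787) and reality (rows=16168) in that node suggests the planner's statistics for event_type may be coarse or stale. A generic first step to try, sketched with the table and column names taken from the plan excerpt, is to collect finer-grained statistics and re-check the plan:

ALTER TABLE logparser_entry ALTER COLUMN event_type SET STATISTICS 1000;
ANALYZE logparser_entry;
-- then re-run EXPLAIN ANALYZE and compare estimated vs. actual row counts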
2011/6/8 :
> On Wednesday 08 June 2011 17:39, Claudio Freire wrote:
>> Of course optimally executing a plan with limit is a lot different
>> than one without.
>
> I imagined that limit just cuts out a slice of the query results.
> If it can find 8 rows in 0.5 seconds then I would have thought
On Wednesday 08 June 2011 17:39, Claudio Freire wrote:
> Of course optimally executing a plan with limit is a lot different
> than one without.
I imagined that limit just cuts out a slice of the query results.
If it can find 8 rows in 0.5 seconds then I would have thought that
returning just
> What seems odd to me is that the only difference between the two is the
> limit
> clause:
>
> select * from tdiag where (create_time >= '2011-06-03
> 09:49:04.00+0' and create_time < '2011-06-06 09:59:04.00+0') order
> by
> diag_id limit 1;
>
> select * from tdiag where (create_time >= '2
On Wed, Jun 8, 2011 at 7:08 AM, wrote:
> What seems odd to me is that the only difference between the two is the limit
> clause
Why would that seem odd?
Of course optimally executing a plan with limit is a lot different
than one without.
Just... why are you sorting by diag_id?
I believe you w
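Claudio's question points at one common resolution, sketched here as an illustration rather than what the thread settled on: if the application only needs "the first row in the time window", ordering by create_time lets an index on the time column (or on both columns, hypothetical name below) supply rows in order and stop after one:

create index tdiag_time_id on tdiag (create_time, diag_id);  -- hypothetical name
select * from tdiag where (create_time >= '2011-06-03 09:49:04.00+0'
    and create_time < '2011-06-06 09:59:04.00+0')
 order by create_time, diag_id limit 1;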