It looks like you guys were right. I think vacuum analyzing this made it
do an index-only scan. It seems like materialized views are going to be
the best bet. I see how that would allow sequential reading. Thanks!
Bryce
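For anyone landing on this thread later, the materialized-view route might look roughly like the sketch below. Table and column names (premiseaccount, electricusage.usage) come from the plans quoted in this thread, but the join column is an assumption. Note also that native CREATE MATERIALIZED VIEW only arrived in PostgreSQL 9.3; on the 9.2 server discussed here, a summary table built with CREATE TABLE AS plays the same role.

```sql
-- Sketch only: precompute per-account totals so the hot query becomes a
-- (near-)sequential read of one small relation. The join column
-- eu.premise_account_id is assumed, not taken from the thread.
CREATE MATERIALIZED VIEW premise_usage_totals AS
SELECT pa.id AS premiseaccount_id,
       sum(eu.usage) AS total_usage
FROM premiseaccount pa
JOIN electricusage eu ON eu.premise_account_id = pa.id
GROUP BY pa.id;

CREATE INDEX ON premise_usage_totals (total_usage);

-- Rebuild periodically (nightly cron, etc.); the totals are stale
-- between refreshes, which is the usual trade-off.
REFRESH MATERIALIZED VIEW premise_usage_totals;
```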
Jeff Janes
December 12, 2013
4:01 PM
On Thu, Dec 12, 2013 at 3:04 PM, Bryce Covert <
br...@brycecovertoperations.com> wrote:
> Not sure if this is helpful, but I tried upgrading to 9.2, and here's what
> I got:
>
> Limit (cost=0.00..535.78 rows=50 width=8) (actual
> time=1037.376..135043.945 rows=50 loops=1)
>   Output: premiseaccount.id, (sum(electricusage.usage))
>   Buffers: shared hit=4851 read=18718 [...]
On Thu, Dec 12, 2013 at 6:49 PM, Bryce Covert <
br...@brycecovertoperations.com> wrote:
> It's strange that it isn't sequential at least for the electric usage, as
> I've clustered using the index that it's using.
>
electricusage is the inner part of the nested loop, which means it will do
~3000 [...]
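From the plan fragments quoted in this thread (a GroupAggregate producing premiseaccount.id and sum(electricusage.usage) under a LIMIT 50), the query presumably has roughly the shape below. This is a reconstruction, not the original SQL; the join column and the placeholder filter are guesses.

```sql
-- Reconstructed shape, not the poster's actual query. Under the quoted
-- plan, the nested loop probes the electricusage index once per
-- premiseaccount row (~3000 times here), and each probe touches its own
-- set of index and heap pages, so a cold cache pays for scattered reads.
SELECT pa.id, sum(eu.usage)
FROM premiseaccount pa
JOIN electricusage eu ON eu.premise_account_id = pa.id
GROUP BY pa.id
HAVING sum(eu.usage) > 0      -- placeholder; the real filter is unknown
ORDER BY sum(eu.usage)
LIMIT 50;
```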
Not sure if this is helpful, but I tried upgrading to 9.2, and here's
what I got:

Limit (cost=0.00..535.78 rows=50 width=8) (actual
time=1037.376..135043.945 rows=50 loops=1)
  Output: premiseaccount.id, (sum(electricusage.usage))
  Buffers: shared hit=4851 read=18718
  -> GroupAggregate [...]
It's strange that it isn't sequential at least for the electric usage,
as I've clustered using the index that it's using.

I had work_mem set to 128mb. I tried bumping it to 1024mb, and I don't
think I see a difference in the query plan.

Would you think upgrading to 9.2 would help much here? Using a cov [...]
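A side note on the work_mem experiment above: work_mem only affects nodes that sort, hash, or build in-memory bitmaps. A plan dominated by nested-loop index scans reads the same pages regardless of the setting, which would explain seeing no change in the plan. A quick session-level test (query elided, as in the thread) looks like:

```sql
-- Try a setting for the current session only; no server restart needed.
SET work_mem = '256MB';
EXPLAIN (ANALYZE, BUFFERS)
SELECT ...;            -- the slow query; not shown in the thread
RESET work_mem;
```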
On Thu, Dec 12, 2013 at 6:16 PM, Bryce Covert <
br...@brycecovertoperations.com> wrote:
>
> Thanks a lot for the help. I'm not familiar with explain buffers, but
> here's the results:
>
> Limit (cost=0.00..648.71 rows=50 width=8) (actual
> time=653.681..52328.707 rows=50 loops=1)
>   Buffers: shared hit=7875 read=9870 [...]
Also, I was reading this:
http://wiki.postgresql.org/wiki/What's_new_in_PostgreSQL_9.2, and I
realized that index-only scans weren't introduced until 9.2. I tried
creating a covered index for this, but I don't think it helps in this
situation.
Bryce
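This ties in with the final message in the thread, where vacuum analyzing made the query use an index-only scan: on 9.2 an index-only scan needs both an index containing every column the query touches and a visibility map kept current by vacuum. A sketch, with assumed column names:

```sql
-- 9.2-era "covering" index: there is no INCLUDE clause until
-- PostgreSQL 11, so every needed column goes into the key itself.
-- Column names here are assumptions.
CREATE INDEX electricusage_account_usage_idx
    ON electricusage (premise_account_id, usage);

-- Index-only scans also require an up-to-date visibility map, which
-- only vacuum maintains; on a freshly loaded or heavily updated table
-- the executor keeps fetching heap pages until this has run:
VACUUM ANALYZE electricusage;
```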
Claudio Freire
December 12, 2013
Hey Claudio,

Thanks a lot for the help. I'm not familiar with explain buffers, but
here's the results:

Limit (cost=0.00..648.71 rows=50 width=8) (actual
time=653.681..52328.707 rows=50 loops=1)
  Buffers: shared hit=7875 read=9870
  -> GroupAggregate (cost=0.00..55672.36 rows=4291 width=8) [...]
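For anyone else unfamiliar with the buffers output: it comes from the BUFFERS option of EXPLAIN, and the hit/read split is what shows whether the time is going to disk.

```sql
EXPLAIN (ANALYZE, BUFFERS) SELECT ...;  -- query elided, as in the thread
-- In output like "Buffers: shared hit=7875 read=9870", 7875 pages were
-- found already in shared_buffers and 9870 (8 kB each, so roughly 77 MB)
-- had to come from the OS cache or disk, which is where a 52-second
-- cold run spends its time.
```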
On Thu, Dec 12, 2013 at 5:20 PM, Bryce Covert <
br...@brycecovertoperations.com> wrote:
> I don't have much info on disks, since this is a virtual server on linode.
> It is running ubuntu 12.04, 8cpus, 4GB ram, 95GB ext3 volume (noatime).
> Hopefully that's helpful.
>
> Bryce
>
Well, did you run [...]
I don't have much info on disks, since this is a virtual server on
linode. It is running ubuntu 12.04, 8cpus, 4GB ram, 95GB ext3 volume
(noatime). Hopefully that's helpful.
Bryce
Claudio Freire
December 12, 2013
12:15 PM
On Thu, Dec 12, 2013 at 5:10 PM, Bryce Covert
wrote:
> Hi,
>
> I'm seeing a slow running query. After some optimization with indexes, it
> appears that the query plan is correct, it's just slow. Running the query
> twice, not surprisingly, is very fast, due to OS caching or shared_buffers
> caching. [...]
Hi,

I'm seeing a slow running query. After some optimization with indexes,
it appears that the query plan is correct, it's just slow. Running the
query twice, not surprisingly, is very fast, due to OS caching or
shared_buffers caching. If the parameters for the query are different,
however, [...]
Merlin Moncure writes:
> It's tempting to say, "there should be a limit to backend local cache"
> but it's not clear if the extra tracking is really worth it all things
> considered. There was some discussion about this (see the archives).

Yeah --- there actually was a limit on total catcache size [...]
On Wed, Dec 11, 2013 at 9:39 PM, Tom Lane wrote:
> Mack Talcott writes:
>> The pattern I am seeing is that postgres processes keep growing in
>> shared (this makes sense as they access more of the shared memory, as
>> you've pointed out) but also process-specific memory as they run more
>> queries [...]
Okay thanks.
That's what I wanted to know.
Janek Sendrowski
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance