On 07/11/12 16:31, David Boreham wrote:
I'm bringing up a new type of server using Intel E5-2620 (unisocket)
which was selected for good SpecIntRate performance vs cost/power (201
for $410 and 95W).
Was assuming it was 6-core but I just noticed it has HT which is
currently enabled since I see 12 cores in /proc/cpuinfo
I'm bringing up a new type of server using Intel E5-2620 (unisocket)
which was selected for good SpecIntRate performance vs cost/power (201
for $410 and 95W).
Was assuming it was 6-core but I just noticed it has HT which is
currently enabled since I see 12 cores in /proc/cpuinfo
Question f
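A quick way to check whether those 12 entries are physical cores or HT siblings is to compare the topology fields in /proc/cpuinfo (a sketch for x86 Linux; the field names are the standard kernel ones):

```shell
# Count logical CPUs (each hyper-thread is one "processor" entry)
grep -c '^processor' /proc/cpuinfo

# Per-socket topology: if "siblings" is twice "cpu cores",
# Hyper-Threading is enabled on this package.
grep -m1 '^cpu cores' /proc/cpuinfo
grep -m1 '^siblings' /proc/cpuinfo
```

On an E5-2620 with HT on, this should report 12 processors but "cpu cores : 6".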
On Tue, Nov 6, 2012 at 12:57 PM, Rodrigo Rosenfeld Rosas
wrote:
> I would strongly consider investigation of hstore type along with
> gist/gin index.
> select * from company_transaction where contract_attributes @>
> 'State=>Delaware, Paid=Y';
> etc
>
>
> I'm not very familiar with hstore yet but
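For reference, Merlin's idea can be sketched like this (table and column names follow his example; note that hstore literals use the `key=>value` form, so `Paid=Y` above would be written `Paid=>Y`):

```sql
-- hstore ships as a contrib extension
CREATE EXTENSION IF NOT EXISTS hstore;

CREATE TABLE company_transaction (
    id                  serial PRIMARY KEY,
    contract_attributes hstore
);

-- A GIN index makes @> ("contains") lookups cheap
CREATE INDEX ON company_transaction USING gin (contract_attributes);

SELECT *
FROM company_transaction
WHERE contract_attributes @> 'State=>Delaware, Paid=>Y'::hstore;
```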
Merlin Moncure writes:
> hm, wouldn't timing the time to generate a raw EXPLAIN (that is,
> without ANALYZE) give a rough estimate of planning time? better to
> rule it out before OP goes to the trouble...
Well, we still wouldn't know *why* there was a problem ...
regards, tom lane
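The check Merlin proposes is easy to run from psql (a sketch; `\timing` measures client-side elapsed time, and the query shown is only a placeholder for the slow one):

```sql
-- In psql: a plain EXPLAIN plans the query without executing it,
-- so the elapsed time is roughly the planning time.
\timing on
EXPLAIN SELECT 1;          -- substitute the actual slow query here
EXPLAIN ANALYZE SELECT 1;  -- planning + execution, for comparison
```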
On Tue, Nov 6, 2012 at 1:45 PM, Tom Lane wrote:
> Rodrigo Rosenfeld Rosas writes:
>> On 06-11-2012 17:24, Tom Lane wrote:
>>> Can you put together a self-contained test case to duplicate these
>>> results? I'm prepared to believe there's some sort of planner
>>> regression involved here, but we'll never find it without a test case.
On 06.11.2012 21:24, Petr Praus wrote:
On 6 November 2012 14:17, Gunnar "Nick" Bluth <gunnar.bl...@pro-open.de> wrote:
On 06.11.2012 21:08, Petr Praus wrote:
2MB: http://explain.depesz.com/s/ul1
4MB: http://explain.depesz.com/s/IlVu
10MB: http://explain.depesz.com/s/afx3
On 06.11.2012 21:08, Petr Praus wrote:
2MB: http://explain.depesz.com/s/ul1
4MB: http://explain.depesz.com/s/IlVu
10MB: http://explain.depesz.com/s/afx3
12MB: http://explain.depesz.com/s/i0vQ
See the change in the plan between 10MB and 12MB, directly at top level?
That narrows the thing down
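Since `work_mem` can be changed per session, the threshold between the two plans can be bisected without touching postgresql.conf (a sketch; the SELECT stands in for the query behind the plans above):

```sql
SET work_mem = '11MB';      -- halfway between the 10MB and 12MB runs
EXPLAIN ANALYZE SELECT 1;   -- placeholder for the real query
RESET work_mem;
```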
Rodrigo Rosenfeld Rosas writes:
> On 06-11-2012 17:24, Tom Lane wrote:
>> Can you put together a self-contained test case to duplicate these
>> results? I'm prepared to believe there's some sort of planner
>> regression involved here, but we'll never find it without a test case.
> I'd love to
On 06.11.2012 18:38, Petr Praus wrote:
Yes, but note that this happens only in Linux. Increasing work_mem on
my iMac increases performance (but the queries are slower under OSX
than on virtualized Ubuntu on the same machine). Over the weekend, I
tried the same test on my Ubuntu home machine
On 06-11-2012 17:24, Tom Lane wrote:
Rodrigo Rosenfeld Rosas writes:
On 06-11-2012 16:42, Merlin Moncure wrote:
Hm -- looking at your 'slow' 9.2 query, it is reporting that the query
took 3 seconds (reported times are in milliseconds). How are you
timing the data? What happens when you run explain analyze
Rodrigo Rosenfeld Rosas writes:
> On 06-11-2012 16:42, Merlin Moncure wrote:
>> Hm -- looking at your 'slow' 9.2 query, it is reporting that the query
>> took 3 seconds (reported times are in milliseconds). How are you
>> timing the data? What happens when you run explain analyze
>> from p
On 05.11.2012 18:09, Jeff Janes wrote:
On Mon, Nov 5, 2012 at 8:48 AM, Claudio Freire wrote:
On Mon, Nov 5, 2012 at 1:44 PM, Jeff Janes wrote:
Well, I'm pretty sure that having more work_mem is a good thing (tm)
normally ;-)
In my experience when doing sorts in isolation, having more work_mem
On 06-11-2012 16:42, Merlin Moncure wrote:
On Tue, Nov 6, 2012 at 12:09 PM, Rodrigo Rosenfeld Rosas
wrote:
http://explain.depesz.com/s/ToX (fast on 9.1)
http://explain.depesz.com/s/65t (fast on 9.2)
http://explain.depesz.com/s/gZm (slow on 9.1)
http://explain.depesz.com/s/END (slow on 9.2 - funny that the generated URL was END
On Tue, Nov 6, 2012 at 12:09 PM, Rodrigo Rosenfeld Rosas
wrote:
> http://explain.depesz.com/s/ToX (fast on 9.1)
> http://explain.depesz.com/s/65t (fast on 9.2)
> http://explain.depesz.com/s/gZm (slow on 9.1)
> http://explain.depesz.com/s/END (slow on 9.2 - funny that the generated URL
> was END wh
On 06-11-2012 15:36, Tom Lane wrote:
Merlin Moncure writes:
Problem could also be no statistics (run ANALYZE to test) or some
other configuration problem (like index locale), or a bona fide
regression.
I'm wondering about join_collapse_limit in particular --- if that wasn't
cranked up in the 9.1 installation
I've raised both to 25 in PG 9.2 and reloaded the server. Didn't make
any difference. :(
Thanks for the suggestion anyway.
Cheers,
Rodrigo.
On 06-11-2012 16:08, k...@rice.edu wrote:
Hi Rodrigo,
It looks like a lot of joins and 9.2 does some optimizations that
internally add additional joins.
Hi Merlin,
On 06-11-2012 15:22, Merlin Moncure wrote:
On Tue, Nov 6, 2012 at 11:11 AM, Rodrigo Rosenfeld Rosas
wrote:
Hello, this is my first message to this list, so sorry if this is not the
right place to discuss this or if some data is missing from this message.
I'll gladly send any data you request
Hi Rodrigo,
It looks like a lot of joins and 9.2 does some optimizations that
internally add additional joins. Did you try raising the
join_collapse_limit and maybe the from_collapse_limit from the
default values of 8?
Regards,
Ken
On Tue, Nov 06, 2012 at 03:11:58PM -0200, Rodrigo Rosenfeld Rosas wrote:
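Ken's suggestion boils down to the following session-level settings (defaults are 8; 25 matches the value Rodrigo reports trying later in the thread):

```sql
-- Let the planner consider collapsing larger join trees
SET join_collapse_limit = 25;
SET from_collapse_limit = 25;
-- then re-run EXPLAIN ANALYZE on the slow query
```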
Denis writes:
> Here is the output of EXPLAIN ANALYZE. It took 5 seconds but usually it
> takes from 10 to 15 seconds when I am doing backup.
> Sort (cost=853562.04..854020.73 rows=183478 width=219) (actual
> time=5340.477..5405.604 rows=183924 loops=1)
Hmmm ... so the problem here isn't that y
Merlin Moncure writes:
> Problem could also be no statistics (run ANALYZE to test) or some
> other configuration problem (like index locale), or a bona fide
> regression.
I'm wondering about join_collapse_limit in particular --- if that wasn't
cranked up in the 9.1 installation, it would be pure
On Tue, Nov 6, 2012 at 11:11 AM, Rodrigo Rosenfeld Rosas
wrote:
> Hello, this is my first message to this list, so sorry if this is not the
> right place to discuss this or if some data is missing from this message.
>
> I'll gladly send any data you request that would help us to understand this
>
Hello, this is my first message to this list, so sorry if this is not
the right place to discuss this or if some data is missing from this
message.
I'll gladly send any data you request that would help us to understand
this issue. I don't believe I'm allowed to share the actual database
dump,
Tom Lane-2 wrote:
> Denis <socsam@...> writes:
>> I've read all the posts in thread, and as I understood in version 9.2
>> some
>> patches were applied to improve pg_dump speed. I've just installed
>> PostgreSQL 9.2.1 and I still have the same problem. I have a database
>> with
>> 2600 schemas in it.
@Victor,
Thanks for the information, seems valid to me.
Regards,
Willem Leenen
> Date: Tue, 6 Nov 2012 16:52:01 +0200
> Subject: Re: [PERFORM] help with too slow query
> From: vyego...@gmail.com
> To: willem_lee...@hotmail.com
> CC: p.jime...@ismsolar.com; pgsql-performance@postgresql.org
Denis writes:
> I've read all the posts in thread, and as I understood in version 9.2 some
> patches were applied to improve pg_dump speed. I've just installed
> PostgreSQL 9.2.1 and I still have the same problem. I have a database with
> 2600 schemas in it. I try to dump each schema individually, but it takes too much
2012/11/6 Willem Leenen :
> @Victor,
>
> Is the reason of the wrong cardinality estimations of the join indeed due to
> wrong statistics? I thought that the full table scan was due to the index on
> the timefield couldn't be used with this predicate:
>
> time_stamp > date_trunc('month', current_date - interval '11 months')
@Victor,
Is the reason of the wrong cardinality estimations of the join indeed due to
wrong statistics? I thought that the full table scan was due to the index on
the timefield couldn't be used with this predicate:
time_stamp > date_trunc('month', current_date - interval '11 months')
It s
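Worth noting: `date_trunc('month', current_date - interval '11 months')` is a *stable* expression, so a plain btree index on `time_stamp` can in principle be used with this predicate; whether the planner picks it depends on the estimated row fraction. A sketch with a hypothetical table:

```sql
-- Hypothetical table standing in for the one in the query
CREATE TABLE readings (time_stamp timestamptz, value numeric);
CREATE INDEX ON readings (time_stamp);

-- The stable right-hand side is evaluated once at executor startup,
-- so an index scan on time_stamp is available to the planner:
EXPLAIN
SELECT count(*)
FROM readings
WHERE time_stamp > date_trunc('month', current_date - interval '11 months');
```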
I've read all the posts in thread, and as I understood in version 9.2 some
patches were applied to improve pg_dump speed. I've just installed
PostgreSQL 9.2.1 and I still have the same problem. I have a database with
2600 schemas in it. I try to dump each schema individually, but it takes too
much
2012/11/6 Pedro Jiménez Pérez
> Ok, here we go:
>
> I'm using postgresql version 8.0
>
> Here is my query that is too slow: http://explain.depesz.com/s/GbQ
Well, I would start with a note, that 8.0 is not supported anymore:
http://www.postgresql.org/support/versioning/
Please, consider upgrading.
Ok, here we go:
I'm using postgresql version 8.0
Here is my query that is too slow: http://explain.depesz.com/s/GbQ
***
EXPLAIN analyze
select round(CAST(sum(var_value) AS numeric