On 23.5.2011 19:01, Maciek Sakrejda wrote:
>> You're probably reading it wrong. The sort itself takes about 1 ms (just
>> subtract the numbers in "actual=").
>
> I thought it was cost=startup_cost..total_cost. That is not quite the
> same thing, since startup_cost is effectively "cost to prod
On Mon, May 23, 2011 at 1:01 PM, Maciek Sakrejda wrote:
>> You're probably reading it wrong. The sort itself takes about 1 ms (just
>> subtract the numbers in "actual=").
>
> I thought it was cost=startup_cost..total_cost. That is not quite the
> same thing, since startup_cost is effectively "cost
> You're probably reading it wrong. The sort itself takes about 1 ms (just
> subtract the numbers in "actual=").
I thought it was cost=startup_cost..total_cost. That is not quite the
same thing, since startup_cost is effectively "cost to produce first
row", and Sort can't really operate in a "streaming" fashion.
On 19.5.2011 23:13, Strange, John W wrote:
> Am I reading this right in that the sort is taking almost 8 seconds?
You're probably reading it wrong. The sort itself takes about 1 ms (just
subtract the numbers in "actual="). If you include all the overhead it
takes about 2.3 seconds (the hash j
"Strange, John W" wrote:
> Am I reading this right in that the sort is taking almost 8
> seconds?
> -> Sort ... actual time=14186.977..14287.068
> -> Hash Join ... actual time=6000.728..12037.492
The run time of the sort is the difference between 12037 ms and
14287 ms (the completion times of the hash join that feeds it and of
the sort itself).
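Working that out with the numbers quoted above:

  14287.068 ms  (Sort finishes)
- 12037.492 ms  (Hash Join, its input, finishes)
--------------
   2249.576 ms  (about 2.25 s spent in the sort itself)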
Paste the entire explain analyze output into the form at
http://explain.depesz.com/ and you'll get a nicely human readable output
which shows both the inclusive and exclusive time spent on each step of the
query. It also highlights any steps which show inaccurate statistics. It
will also give you
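One convenient way to capture the full plan text for pasting into the
form, from within psql (the query itself is a placeholder):

\o plan.txt
EXPLAIN ANALYZE SELECT ... ;
\o

plan.txt then holds the complete output, ready to paste.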
Am I reading this right in that the sort is taking almost 8 seconds?
"GroupAggregate (cost=95808.09..95808.14 rows=1 width=142) (actual
time=14186.999..14694.524 rows=315635 loops=1)"
" Output: sq.tag, sq.instrument, s.d1, s.d2, s.d3, s.d4, s.d5, s.d6, s.d7,
s.d8, s.d9, s.d10, sum(sq.v)"
" Bu
Thanks Greg. You were right. If I set my sort_mem to 1G (yes, I have
loads of memory, but only for testing purposes), then I don't see
anything written to disk. So an in-memory sort requires more memory
than the reported on-disk storage (in-memory tuples carry per-tuple
bookkeeping overhead that the packed on-disk representation avoids).
Stalin
-Original Message-
From: Greg Stark [mailto:greg.st...@enter
To: Subbiah Stalin-XCGF84
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Sort performance
On Thu, Jan 29, 2009 at 3:15 PM, Subbiah Stalin-XCGF84
wrote:
> I'm in the process of tuning a query that does a sort on a huge dataset.
> With work_mem set to 2M, I see the
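A way to watch the memory-vs-disk crossover Stalin describes above (a
sketch; trace_sort is a developer option, normally compiled in, and the
names below are placeholders):

SET trace_sort = on;   -- tuplesort then reports progress in the server log
SET work_mem = '2MB';  -- the '2MB' units syntax needs 8.2 or later
SELECT col1 FROM some_table ORDER BY col1 LIMIT 500;
-- The log lines then say whether the sort finished in memory
-- ("internal sort ended") or had to spill ("external sort ended").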
Robert Haas writes:
> On Thu, Jan 29, 2009 at 3:15 PM, Subbiah Stalin-XCGF84
> wrote:
>>
>> I see the sort operation spilling to disk, writing up to 430MB and then
>> returning the first 500 rows. Our query is of the sort
>>
>> Now if I set the work_mem to 500MB (I did this in a psql session without
>>
On Thu, Jan 29, 2009 at 3:15 PM, Subbiah Stalin-XCGF84
wrote:
> I'm in the process of tuning a query that does a sort on a huge dataset.
> With work_mem set to 2M, I see the sort operation spilling to disk, writing
> up to 430MB and then returning the first 500 rows. Our query is of the sort
>
> select
Hi All,
I'm in the process of tuning a query that does a sort on a huge dataset.
With work_mem set to 2M, I see the sort operation spilling to disk,
writing up to 430MB, and then returning the first 500 rows. Our query is
of the sort
select col1, col2... from table where col1 like 'aa%' order by col1 limit 500
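Worth noting for a LIMIT query like this: from 8.3 on the executor can
run the sort as a bounded "top-N heapsort", keeping only the 500 best
rows in memory instead of sorting the whole dataset. A sketch of
checking which strategy you got (names are placeholders):

EXPLAIN ANALYZE
SELECT col1, col2
FROM some_table
WHERE col1 LIKE 'aa%'
ORDER BY col1
LIMIT 500;
-- The Sort node reports its strategy, e.g. "Sort Method: top-N heapsort"
-- when the bound applies, or "Sort Method: external merge Disk: ...kB"
-- when it spills.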
> You're correct that there's poor quality junk out there. I was not
> talking about it, only reasonable quality components.
>
> Ron
>
>
> -Original Message-
> From: Kurt De Grave <[EMAIL PROTECTED]>
> Sent: Nov 10, 2005 5:40 AM
> To: Ron Peacetree <[EMAIL PROT
-Original Message-
From: Kurt De Grave <[EMAIL PROTECTED]>
Sent: Nov 10, 2005 5:40 AM
To: Ron Peacetree <[EMAIL PROTECTED]>
Cc: Charlie Savage <[EMAIL PROTECTED]>, pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Sort performance on large tables
On Wed, 9 Nov 2005,
DB server. It makes all kinds of problems just not exist.
Ron
-Original Message-
From: Simon Riggs <[EMAIL PROTECTED]>
Sent: Nov 9, 2005 4:35 AM
To: Charlie Savage <[EMAIL PROTECTED]>, Luke Lonergan <[EMAIL PROTECTED]>
Cc: pgsql-performance@postgresql.org
Subject: Re: [PE
Hi Simon,
Thanks for the response.
PostgreSQL can do HashAggregates as well as GroupAggregates, just like
Oracle. HashAggs avoid the sort phase, so would improve performance
considerably. The difference in performance you are getting is because
of the different plan used. Did you specific
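A quick way to see which aggregate strategy the planner picked (a
sketch, using the table and column names from elsewhere in the thread):

EXPLAIN
SELECT tlid, min(ogc_fid)
FROM completechain
GROUP BY tlid;
-- "HashAggregate" at the top means no sort was needed, but the whole
-- hash table of groups must fit in work_mem; "GroupAggregate" means
-- the input is sorted first. An ORDER BY tlid on the query nudges the
-- planner toward the sorted (GroupAggregate) plan.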
On Tue, 2005-11-08 at 00:05 -0700, Charlie Savage wrote:
> Setup: Dell Dimension 3000, Suse 10, 1GB RAM, PostgreSQL 8.1 RC 1 with
> I want to extract data out of the file, with the most important values
> being stored in a column called tlid. The tlid field is an integer, and
> the values are
I solved it in my application by the following type of query.
It assumes, of course, that you have an index on tlid.
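The query itself is cut off in this snippet; one index-friendly shape
that matches the description (an assumption, not necessarily the
poster's exact query) is:

SELECT DISTINCT ON (tlid) tlid, ogc_fid
FROM completechain
ORDER BY tlid, ogc_fid;
-- With an index on (tlid, ogc_fid) this can be satisfied by walking
-- the index in order and keeping the first ogc_fid per tlid, which
-- avoids sorting the whole table.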
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of
Charlie Savage
Sent: Tuesday, November 08, 2005 2:05 AM
To: pgsql-performance
I'd set up a trigger to maintain summary tables perhaps...
Chris
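A minimal sketch of that idea (names are hypothetical, it only covers
INSERTs, and concurrent writers would need more care):

CREATE TABLE tlid_summary (
    tlid    integer PRIMARY KEY,
    min_fid integer NOT NULL
);

CREATE FUNCTION maintain_tlid_summary() RETURNS trigger AS $$
BEGIN
    -- Keep the smallest ogc_fid seen so far for each tlid.
    UPDATE tlid_summary
       SET min_fid = LEAST(min_fid, NEW.ogc_fid)
     WHERE tlid = NEW.tlid;
    IF NOT FOUND THEN
        INSERT INTO tlid_summary (tlid, min_fid)
        VALUES (NEW.tlid, NEW.ogc_fid);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER completechain_summary
AFTER INSERT ON completechain
FOR EACH ROW EXECUTE PROCEDURE maintain_tlid_summary();

The slow GROUP BY then becomes a plain read of tlid_summary.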
Charlie Savage wrote:
Thanks everyone for the feedback.
I tried increasing work_mem:
set work_mem to 30;
select tlid, min(ogc_fid)
from completechain
group by tlid;
The results are:
"GroupAggregate (cost=9041602.80..1000
It's an int4.
Charlie
Tom Lane wrote:
Charlie Savage <[EMAIL PROTECTED]> writes:
Thus the time decreased from 8486 seconds to 5279 seconds - which is a
nice improvement. However, that still leaves PostgreSQL about 9 times
slower.
BTW, what data type are you sorting, exactly? If it's a stri
Charlie Savage <[EMAIL PROTECTED]> writes:
> Thus the time decreased from 8486 seconds to 5279 seconds - which is a
> nice improvement. However, that still leaves PostgreSQL about 9 times
> slower.
BTW, what data type are you sorting, exactly? If it's a string type,
what is your LC_COLLATE set to?
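The LC_COLLATE question matters because sorting strings under a non-C
locale goes through strcoll(), which can be several times slower than
plain byte-wise comparison. A quick check:

SHOW lc_collate;
-- 'C' (or 'POSIX') sorts by byte value, which is fast; a locale such
-- as 'en_US.UTF-8' uses locale-aware comparison, which is slower.
-- (Moot here, as the column turned out to be an int4.)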
Thanks everyone for the feedback.
I tried increasing work_mem:
set work_mem to 30;
select tlid, min(ogc_fid)
from completechain
group by tlid;
The results are:
"GroupAggregate (cost=9041602.80..10003036.88 rows=48071704 width=8)
(actual time=4371749.523..5106162.256 rows=47599910 loops=1)
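For scale: an "actual time" ending at 5106162.256 ms means the
GroupAggregate finished after roughly 5106 seconds (about 85 minutes),
consistent with the 5279-second total quoted elsewhere in the thread.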
On Tue, 8 Nov 2005, Luke Lonergan wrote:
> Stephan,
>
> On 11/8/05 9:38 AM, "Stephan Szabo" <[EMAIL PROTECTED]> wrote:
>
> >> >
> >> > Just as we find with a similar comparison (with a "popular commercial,
> >> > proprietary database" :-) Though some might suggest you increase
> >> > work_mem or
Stephan,
On 11/8/05 9:38 AM, "Stephan Szabo" <[EMAIL PROTECTED]> wrote:
>
> Just as we find with a similar comparison (with a "popular commercial,
> proprietary database" :-) Though some might sugge
On Tue, 8 Nov 2005, Luke Lonergan wrote:
> > SELECT tlid, min(ogc_fid)
> > FROM completechain
> > GROUP BY tlid
> > ORDER BY tlid;
> >
> > Even with this, it was more than an order of magnitude faster than PostgreSQL.
> > Which makes me think I have somehow misconfigured PostgreSQL
> > (see the relevant par
It assumes, of course, that you have an index on tlid.
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of
> Charlie Savage
> Sent: Tuesday, November 08, 2005 2:05 AM
> To: pgsql-performance@postgresql.org
> Subject: [PERFORM] Sort performance on large tables
>
Charlie,
> Should I expect results like this? I realize that the
> computer is quite low-end and is very IO bound for this
> query, but I'm still surprised that the sort operation takes so long.
It's the sort performance of Postgres that's your problem.
> Out of curiosity, I set up an Oracle
Charlie Savage wrote:
Hi everyone,
I have a question about the performance of sort.
Note it takes over 10 times longer to do the sort than the full
sequential scan.
Should I expect results like this? I realize that the computer is quite
low-end and is very IO bound for this query, but I'm
Hi everyone,
I have a question about the performance of sort.
Setup: Dell Dimension 3000, Suse 10, 1GB RAM, PostgreSQL 8.1 RC 1 with
PostGIS, 1 built-in 80 GB IDE drive, 1 SATA Seagate 400GB drive. The
IDE drive has the OS and the WAL files, the SATA drive the database.
From hdparm the max I