On Mon, Feb 22, 2010 at 3:44 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Yang Zhang <yanghates...@gmail.com> writes:
>> On Mon, Feb 22, 2010 at 1:13 PM, Pavel Stehule <pavel.steh...@gmail.com> 
>> wrote:
>>> the speed depends on the setting of work_mem. Try to increase 
>>> work_mem
>
>> It's already at 20000kB
>
> According to your original posting, you're trying to sort something like
> a gigabyte of data.  20MB is peanuts.  I wouldn't recommend increasing
> the value across-the-board, but setting it to several hundred meg for
> this particular query might help.  How much RAM in your machine anyway?

We have 16GB of RAM, but again, Unix sort (and even our own
hand-rolled merge sort) handles this data quickly without consuming
additional memory.

All the same, we increased work_mem to 1GB, and the query still does
not complete.
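For what it's worth, a setting like this can be raised for just the
current session or transaction rather than server-wide, which is
closer to what Tom suggested; a sketch, with a placeholder table name
(the thread only mentions the transactionid column):

```sql
-- Raise sort memory for this session only; the server-wide value
-- in postgresql.conf is left untouched.
SET work_mem = '1GB';
SELECT * FROM my_table ORDER BY transactionid;

-- Or scope the change to a single transaction with SET LOCAL:
BEGIN;
SET LOCAL work_mem = '1GB';
SELECT * FROM my_table ORDER BY transactionid;
COMMIT;
```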

>
> Also, the fact that mysql is faster suggests that having an index does help.
> Possibly the data is nearly ordered by transactionid, in which case an
> indexscan would not have random-access problems and would be much faster
> than an explicit sort.

Note that earlier in the thread I tried running this query with an
index scan, but it was still much slower.
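(For anyone reproducing this: EXPLAIN ANALYZE shows which plan the
server actually chose and where the time went. A sketch with the same
placeholder table name; enable_seqscan = off discourages, but does not
forbid, sequential scans, and is a common way to test whether the
index-scan plan is competitive:

```sql
SET enable_seqscan = off;
EXPLAIN ANALYZE
SELECT * FROM my_table ORDER BY transactionid;
```
)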
--
Yang Zhang
http://www.mit.edu/~y_z/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general