On Mon, Feb 22, 2010 at 2:39 PM, Scott Marlowe <scott.marl...@gmail.com> wrote:
> On Mon, Feb 22, 2010 at 12:30 PM, Yang Zhang <yanghates...@gmail.com> wrote:
>> This isn't some microbenchmark. This is part of our actual analytical
>> application. We're running large-scale graph partitioning algorithms.
>
> It's important to see how it runs if you can fit more / most of the
> data set into memory by cranking up work_mem to something really big
> (like a gigabyte or two) and if the query planner can switch to some
> sort of hash algorithm.
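
(For reference, I read that suggestion as setting something along these
lines in the session before running the query -- the exact value is just
illustrative -- so the planner can consider a hash join / hash aggregate
instead of sorting:

    SET work_mem = '2GB';
)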

We're actually using a very small dataset right now, but being bounded by
memory capacity is not a scalable approach for our application.

>
> Also, can you cluster the table on transactionid ?
>

We can, but that doesn't really address the core issue for us: the sort
exists only to perform a self merge join on transactionid, and the *very
next step* is a GROUP BY on a.tableid, a.tupleid, b.tableid, b.tupleid,
which requires yet another sort for the group aggregate.
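
To make the shape concrete, the query is roughly of this form (the table
name "tuples" is just a placeholder; the join and grouping columns are the
ones above):

    SELECT a.tableid, a.tupleid, b.tableid, b.tupleid, count(*)
      FROM tuples a
      JOIN tuples b ON a.transactionid = b.transactionid
     GROUP BY a.tableid, a.tupleid, b.tableid, b.tupleid;

So clustering on transactionid helps the first sort at best; the join
output still has to be re-sorted (or hashed) on the four grouping columns.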
--
Yang Zhang
http://www.mit.edu/~y_z/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Reply via email to