Konstantinos Agouros <[EMAIL PROTECTED]> writes:
> Is there a way in Postgres to make use of the machine's extra CPU(s) for
> single tasks, such as importing the data and running the somewhat
> intensive selects that result from the sheer amount of data?

Maybe I'm missing something, but it seems like all you need to do is
run the data import and the selects in different processes (multiple
backends).

There isn't any way to apply multiple CPUs in a single SELECT, if that's
what you were hoping for.  Perhaps you could break down the data
reduction task into independent subqueries, but that will take some
thought :-(
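
As a rough illustration of that decomposition idea (a sketch only, not
PostgreSQL internals: the key ranges, worker count, and in-memory "table"
here are invented for the example), each independent subquery scans one
key range and computes a partial aggregate in its own process, and the
partials are combined at the end:

```python
from concurrent.futures import ProcessPoolExecutor

# Toy stand-in for a table column; pretend this is a large table.
DATA = list(range(1, 101))

def partial_sum(key_range):
    lo, hi = key_range
    # In Postgres, each worker would instead run something like
    #   SELECT sum(x) FROM t WHERE id >= lo AND id < hi
    # over its own backend connection.
    return sum(v for v in DATA if lo <= v < hi)

if __name__ == "__main__":
    # Non-overlapping ranges covering 1..100, so the partials
    # add up to the same answer as one big SELECT sum(x).
    ranges = [(1, 26), (26, 51), (51, 76), (76, 101)]
    with ProcessPoolExecutor(max_workers=4) as pool:
        partials = list(pool.map(partial_sum, ranges))
    print(sum(partials))  # → 5050
```

The catch, as noted above, is finding a partitioning where the subqueries
really are independent; aggregates like SUM or COUNT split cleanly, but
many reductions do not.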

                        regards, tom lane