[EMAIL PROTECTED] (Michael Stone) writes:

> On Fri, Mar 24, 2006 at 01:21:23PM -0500, Chris Browne wrote:
>>A naive read on this is that you might start with one backend process,
>>which then spawns 16 more.  Each of those backends is scanning through
>>one of those 16 files; they then throw relevant tuples into shared
>>memory to be aggregated/joined by the central one.
>
> Of course, table scanning is going to be IO limited in most cases, and
> having every query spawn 16 independent IO threads is likely to slow
> things down in more cases than it speeds them up. It could work if you
> have a bunch of storage devices, but at that point it's probably
> easier and more direct to implement a clustered approach.

All stipulated, yes.  It obviously wouldn't be terribly useful to scan
more aggressively than the available I/O bandwidth can support.  The
point is simply that this is one of the places where concurrent
processing could do some good...
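
To make the pattern concrete, here is a minimal sketch in Go of the
fan-out scan being described; it is emphatically *not* PostgreSQL
internals (which are C, with real backends and real shared memory),
just an illustration under assumed names.  scanPartition,
fakePartitionData, and maxConcurrentIO are all hypothetical.  A
channel stands in for the shared-memory tuple queue, and the
semaphore addresses Michael's objection by capping how many scans
hit the disk at once:

    // Sketch of the pattern: a coordinator starts one scanner per
    // partition; scanners push matching tuples into a shared queue;
    // the coordinator aggregates centrally.  maxConcurrentIO bounds
    // simultaneous scans so 16 workers don't fight over one spindle.
    package main

    import (
        "fmt"
        "sync"
    )

    type tuple struct {
        key   string
        value int
    }

    // scanPartition stands in for a sequential scan of one partition
    // file; a real backend would read pages and apply the query's quals.
    func scanPartition(id int, matches func(tuple) bool, out chan<- tuple) {
        for _, t := range fakePartitionData(id) {
            if matches(t) {
                out <- t
            }
        }
    }

    // fakePartitionData is invented data so the sketch runs standalone.
    func fakePartitionData(id int) []tuple {
        return []tuple{
            {key: fmt.Sprintf("p%d-a", id), value: id},
            {key: fmt.Sprintf("p%d-b", id), value: id * 2},
        }
    }

    func main() {
        const partitions = 16
        const maxConcurrentIO = 4 // roughly: number of independent storage devices

        out := make(chan tuple, 64)
        ioSlots := make(chan struct{}, maxConcurrentIO)

        var wg sync.WaitGroup
        for i := 0; i < partitions; i++ {
            wg.Add(1)
            go func(id int) {
                defer wg.Done()
                ioSlots <- struct{}{}        // acquire an I/O slot
                defer func() { <-ioSlots }() // release it after the scan
                scanPartition(id, func(t tuple) bool { return t.value%2 == 0 }, out)
            }(i)
        }

        // Close the shared queue once every scanner has finished.
        go func() { wg.Wait(); close(out) }()

        // Central aggregation: here, just a sum over matching tuples.
        total := 0
        for t := range out {
            total += t.value
        }
        fmt.Println("aggregate:", total)
    }

With maxConcurrentIO set to the number of independent devices, you
keep the concurrency win where the hardware can absorb it and avoid
the seek-storm case Michael describes when it can't.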
-- 
let name="cbbrowne" and tld="acm.org" in name ^ "@" ^ tld;;
http://cbbrowne.com/info/spiritual.html
Save the whales. Collect the whole set. 
