Hi,
I'm having trouble optimizing PostgreSQL for an admittedly heinous
worst-case scenario load.
Testbed:
dual P3 1.3 GHz box with 2GB RAM
two IDE 120GB drives on separate channels (DMA on), OS on one, DB on the
other, some swap on each (totalling 2.8GB).
RH Linux 8.
I've installed PG 7.3.4 from source.
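
As an aside, the stock 7.3 defaults are quite conservative for a 2GB box.
A starting-point sketch (parameter names are 7.3-era; the values are guesses
to benchmark against, not recommendations):

  -- session-level settings for testing; shared_buffers itself can only be
  -- raised in postgresql.conf and needs a postmaster restart
  SET sort_mem = 8192;                -- kB per sort/hash step
  SET effective_cache_size = 196608;  -- in 8kB pages, i.e. ~1.5GB of assumed OS cache
  SET random_page_cost = 3;           -- default of 4 often overestimates IDE seek cost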
This is an interesting effect that I have run into more than once, and I
have a question concerning it:
We see that the cost for this query went from roughly 12,000,000 to about
12,000. Of course, we cannot assume that the time of execution will be
directly proportional to this, and also, the weight fa
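
For reference, EXPLAIN ANALYZE prints the planner's estimated cost next to
the actual run time, which is the easiest way to see how well the two track
each other. The query below is just a stand-in for whichever query is being
measured:

  EXPLAIN ANALYZE
  SELECT count(DISTINCT song_id) FROM ssa_candidate;
  -- compare "cost=..." (planner estimate, in arbitrary units)
  -- against "actual time=..." (milliseconds)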
Thanks, that sped things up a bit, from 7.6 sec. to about 5.5 sec. However,
the plan still includes a sequential scan on ssa_candidate:
Aggregate  (cost=12161.11..12161.11 rows=1 width=35)
  ->  Merge Join  (cost=11611.57..12111.12 rows=19996 width=35)
        ->  Sort  (cost=11488.27..11488.27 r
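
One quick session-level test for whether the planner can use an index here
at all: disable sequential scans and re-run EXPLAIN. If it still seq-scans,
no usable index plan exists. (The IN list is elided in the quoted query, so
it is elided here too.)

  SET enable_seqscan = off;   -- testing only; never a production setting
  EXPLAIN SELECT count(DISTINCT song_id) FROM ssa_candidate WHERE style_id IN (...);
  SET enable_seqscan = on;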
Not sure about tuning, but it seems to me that this query would be much more
efficient if it were rewritten like this (especially if the style_id columns
on both tables are indexed):
SELECT count(DISTINCT song_id) AS X
FROM ssa_candidate SC JOIN station_subgenre SS ON SC.style_id = SS.style_id
WHERE SS.
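
For completeness, the indexes that suggestion assumes would look something
like this. The index names are made up; only the table and column names come
from the thread:

  CREATE INDEX ssa_candidate_style_id_idx ON ssa_candidate (style_id);
  CREATE INDEX station_subgenre_style_id_idx ON station_subgenre (style_id);
  ANALYZE ssa_candidate;        -- refresh planner statistics afterwards
  ANALYZE station_subgenre;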
"Michael T. Halligan" <[EMAIL PROTECTED]> writes:
> The query sorts through about 80k rows.. here's the query
> --
> SELECT count(*) FROM (
> SELECT DISTINCT song_id FROM ssa_candidate WHERE
> style_id IN (
>
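
Worth noting for 7.3: IN (subquery) was a known planner weak spot before 7.4,
and the stock advice was to rewrite it as EXISTS. A sketch of that form,
reusing the names above; the subquery body is assumed, since the original is
truncated:

  SELECT count(*) FROM (
    SELECT DISTINCT song_id FROM ssa_candidate SC
    WHERE EXISTS (
      SELECT 1 FROM station_subgenre SS
      WHERE SS.style_id = SC.style_id
      -- plus whatever filter the original IN-subquery applied
    )
  ) AS t;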
Hi.. I seem to be running into a bottleneck on a query, and I'm not
sure what the bottleneck is.
The machine is a dual-processor P3 750 with 2 gigs of (PC100) memory,
and three 72 GB disks set up in RAID 5. Right now I'm just testing our
DB for speed (we're porting from Oracle).. later on
We'r