Brendan Curran <[EMAIL PROTECTED]> writes:
> I'll tack up a note to the online documentation letting people know so
> that it's a little more explicitly clear that when you choose IN on
> data that isn't explicitly unique (to the planner, i.e. post-analyze)
> you get the baggage of a forced unique
Well, IN != EXISTS != JOIN. EXISTS just stops as soon as it finds a
record. For some cases it's equivalent to IN, but not all. IN has to
de-duplicate its list in some fashion. For small IN lists, you can do
this with an OR, but at some point you need to switch to an actual
unique (actually, I
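The distinction can be sketched against the tables that appear later in this thread (`email_record` and `suppress`); the exact schema is assumed from the quoted query, not confirmed:

```sql
-- IN: the planner must de-duplicate the subquery's output, which is
-- where a forced Unique/Sort step can show up in the plan.
SELECT email_record_id
FROM ONLY email_record er
WHERE email_list_id = 9
  AND email IN (SELECT email FROM suppress);

-- EXISTS: evaluated per outer row and can stop at the first match,
-- so no de-duplication of the subquery is needed.
SELECT email_record_id
FROM ONLY email_record er
WHERE email_list_id = 9
  AND EXISTS (SELECT 1 FROM suppress s WHERE s.email = er.email);
```

The two forms return the same rows here, but a plain JOIN is only equivalent when suppress.email is unique; otherwise duplicate suppress rows would duplicate output rows.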
Hi, Brendan,
Brendan Curran wrote:
>> What prevents you from using an aggregate function?
>
> I guess I could actually obtain the results in an aggregate function and
> use those to maintain a summary table. There is a web view that requires
> 'as accurate as possible' numbers to be queried per g
On Wed, Oct 11, 2006 at 10:53:41AM -0600, Brendan Curran wrote:
> Interestingly, and thank you to Tom and Jim, the explicit JOIN improved
> performance tremendously (RESULTS BELOW). I converted the entire query
> to use explicit joins instead of IN and EXISTS and discovered acceptable
> performa
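A sketch of the kind of rewrite described (hypothetical, modeled on the query quoted later in the thread): the IN subquery becomes an explicit join, with DISTINCT added in case suppress.email holds duplicates:

```sql
-- Explicit-join form of the IN query. DISTINCT preserves the IN
-- semantics when suppress.email is not known to be unique.
SELECT DISTINCT er.email_record_id
FROM ONLY email_record er
JOIN suppress s ON s.email = er.email
WHERE er.email_list_id = 9;
```

Without the DISTINCT, a duplicated email in suppress would emit the same email_record_id twice, which is exactly the hazard the planner's Unique step guards against for IN.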
What prevents you from using an aggregate function?
I guess I could actually obtain the results in an aggregate function and use those to maintain a
summary table. There is a web view that requires 'as accurate as possible' numbers to be queried per
group (all 40 groups are displayed on the
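A minimal sketch of such a summary table; the table name and refresh strategy are assumptions, not from the thread (and the clear-and-rebuild refresh suits the PostgreSQL of that era, which had no upsert):

```sql
-- Hypothetical per-group summary table for the web view.
CREATE TABLE email_list_summary (
    email_list_id int8 PRIMARY KEY,
    record_count  int8 NOT NULL
);

-- Rebuild the per-group counts after each load, inside one
-- transaction so readers never observe an empty table.
BEGIN;
DELETE FROM email_list_summary;
INSERT INTO email_list_summary (email_list_id, record_count)
SELECT email_list_id, count(*)
FROM ONLY email_record
GROUP BY email_list_id;
COMMIT;
```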
Brendan Curran <[EMAIL PROTECTED]> writes:
> So much time is being spent in the Unique and Sort leaves... I would
> think that it wouldn't need to do the unique portion, since there is no
> DISTINCT clause...
There's nothing in that query suggesting that suppress.email is unique.
If you know tha
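One way to tell the planner about uniqueness, if the column really is unique (a sketch; the index name is hypothetical and the message above is cut off before its actual advice):

```sql
-- Declaring uniqueness lets the planner treat the subquery's output
-- as already duplicate-free, avoiding the Unique/Sort step.
CREATE UNIQUE INDEX suppress_email_key ON suppress (email);
ANALYZE suppress;
```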
On Tue, Oct 10, 2006 at 05:46:18PM -0600, Brendan Curran wrote:
> Tom Lane wrote:
> >Brendan Curran <[EMAIL PROTECTED]> writes:
> >>Tom Lane wrote:
> >>>Have you tried doing EXPLAIN ANALYZE of each of the INSERT/DELETE steps?
> >>FIRST INSERT (Just the select is explained):
> >EXPLAIN
Brendan Curran <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> Have you tried doing EXPLAIN ANALYZE of each of the INSERT/DELETE steps?
> FIRST INSERT (Just the select is explained):
EXPLAIN ANALYZE, please, not just EXPLAIN.
regards, tom lane
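The difference being asked for, as a sketch: EXPLAIN prints only the planner's estimates, while EXPLAIN ANALYZE actually executes the statement and reports real times and row counts alongside them:

```sql
-- Estimates only; the query is not run.
EXPLAIN
SELECT email FROM suppress;

-- Runs the query; each plan node gains "actual time=... rows=...".
EXPLAIN ANALYZE
SELECT email FROM suppress;
```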
Brendan Curran <[EMAIL PROTECTED]> writes:
> CREATE TEMP TABLE temp_list_suppress(email_record_id int8);
> INSERT INTO temp_list_suppress
> SELECT email_record_id from ONLY email_record er
> WHERE email_list_id = 9 AND email IN
> (select email from suppress);
> CREATE INDEX uniq
I currently have a db supporting what is for the most part an OLAP data
warehousing application.
One table (good data) has roughly 120 million rows, divided into roughly
40 different relational groups (logically by foreign key). Every time I
add data to this table, I need to afterwards scrub t