"Sergey Konoplev" <[EMAIL PROTECTED]> writes:
> You are right. I've found the odd thing (that completely drives me
> mad) in postgresql.conf.
> You are able to reproduce slow-not-in queries by switching
> constraint_exclusion to on in your postgresql.conf and running my test
> (which is attached to the first message).

Hmph.  It's trying to see if the NOT IN condition is self-contradictory,
which of course it isn't, but the predicate_refuted_by machinery isn't
smart enough to determine that except by running through all N^2
combinations of the individual x <> const conditions :-(.

(It's not really any smarter about the IN case either, but that only
takes constant time, not O(N^2), because it will stop after finding that
the first equality condition doesn't refute itself.)

We could respond to this in a number of ways:

1. "Tough, don't do that."

2. Put some arbitrary limit on the number of subconditions in an AND or
OR clause before we give up and don't attempt to prove anything about it.

3. Put in a narrow hack that will get us out of this specific case, but
might still allow very slow proof attempts in other large cases.

The specific narrow hack I'm considering for #3 goes like this: in this
case, we repeatedly pass btree_predicate_proof two clauses "x <> const1"
and "x <> const2", and after some fairly expensive probing of the system
catalogs it finds out that there's no way to prove that the former
refutes the latter.  But when considering two ScalarArrayOps, the two
operators will be the same for all of the sub-clauses, and so we could
check once to find out that we can't refute anything.  (It also seems
interesting to cache that catalog lookup in cases where we might be able
to prove something.)

Comments?

			regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
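[Editor's illustration, not part of the original message.] The O(N^2) behavior and the proposed fix can be modeled in a short Python sketch. This is not PostgreSQL source: `pair_can_refute` and `not_in_is_self_contradictory` are invented stand-ins for `btree_predicate_proof` and the `predicate_refuted_by` loop, under the assumption that whether one "x <> const" clause can refute another depends only on the operator pair, so one simulated catalog probe can serve all N^2 pairwise attempts:

```python
# Sketch (hypothetical names): NOT IN (c1..cN) expands to the AND of
# clauses "x <> ci".  Proving that AND self-contradictory would require
# finding one sub-clause that refutes another; since "<>" never refutes
# "<>", all N*N pairs are tried in vain.  Caching the per-operator-pair
# answer (the "narrow hack" above) means one probe instead of N*N.
from itertools import product

_pair_cache = {}       # (op_a, op_b) -> can this operator pair refute?
catalog_probes = 0     # counts simulated expensive catalog lookups

def pair_can_refute(op_a, op_b):
    """Stand-in for btree_predicate_proof's catalog probing."""
    global catalog_probes
    if (op_a, op_b) not in _pair_cache:
        catalog_probes += 1                  # the expensive part, done once
        _pair_cache[(op_a, op_b)] = False    # "<>" never refutes "<>"
    return _pair_cache[(op_a, op_b)]

def not_in_is_self_contradictory(consts):
    """Try every pair of sub-clauses, as predicate_refuted_by does."""
    return any(pair_can_refute('<>', '<>')
               for _ in product(consts, consts))

consts = list(range(200))                    # a 200-element NOT IN list
assert not not_in_is_self_contradictory(consts)
assert catalog_probes == 1                   # cached: 1 probe, not 40000
```

Without the cache, every one of the 40,000 pair probes would hit the (simulated) catalogs; the loop itself is still quadratic here, but option #3 would also let the real code bail out of the loop entirely once the operator pair is known to be unable to refute anything.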