I wrote:
> We could respond to this in a number of ways:

> 1. "Tough, don't do that."

> 2. Put some arbitrary limit on the number of subconditions in an AND or
> OR clause before we give up and don't attempt to prove anything about
> it.

> 3. Put in a narrow hack that will get us out of this specific case,
> but might still allow very slow proof attempts in other large cases.

> The specific narrow hack I'm considering for #3 goes like this: in this
> case, we repeatedly pass btree_predicate_proof two clauses "x <> const1"
> and "x <> const2", and after some fairly expensive probing of the system
> catalogs it finds out that there's no way to prove that the former
> refutes the latter.  But when considering two ScalarArrayOps, the two
> operators will be the same for all of the sub-clauses, and so we could
> check once to find out that we can't refute anything.  (It also seems
> interesting to cache that catalog lookup in cases where we might be able
> to prove something.)

I find that it's not too hard to cache the operator lookup, and that
helps somewhat, but putting in a short-circuit path so that the test is
made only once per ScalarArrayOpExpr is a lot harder than I expected.
The problem is the double recursion in predicate_refuted_by_recurse ---
you can stop the recursion when you are looking at two
ScalarArrayOpExprs at the same time, but that only shuts off one of
three recursion paths that are going to end up iterating over the
lists.

So option #2 with a cutoff of 100 items or so is looking like the
best response.

Thoughts?

                        regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)