Chris Browne <[EMAIL PROTECTED]> writes:
> I once ran into a situation where Slony-I generated a query that
> made the parser blow out (some sort of memory problem, apparently
> running out of stack space somewhere); it was just short of 640K
> long, and so
> we figured that evidently it was wrong to conclude that "640K ought to
> be enough for anybody."

> Neil Conway was an observer; he was speculating that, with some
> (possibly nontrivial) change to the parser, we should have been able
> to cope with it.

> The query consisted mostly of a NOT IN clause where the list had some
> atrocious number of entries in it (all integers).

FWIW, we do seem to have improved that as of 8.2.  Assuming your entries
were 6-or-so-digit integers (about eight bytes each with the comma and
space), that would have been on the order of 80K entries, and we can
manage it --- not amazingly fast, but it doesn't blow out the stack
anymore.
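
For anyone who wants to reproduce the test, a quick sketch along these
lines will generate such a query (the table and column names here are
made up for illustration):

    # Build an 80K-entry NOT IN list --- roughly 640K of query text ---
    # and write it out so it can be fed to psql with \i or -f.
    entries = ", ".join(str(n) for n in range(100000, 180000))
    query = "SELECT * FROM some_table WHERE xid NOT IN (%s);" % entries
    with open("bigquery.sql", "w") as f:
        f.write(query + "\n")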

> (Aside: I wound up writing a "query compressor" (now in 1.2) which
> would read that list and, if it was at all large, try to squeeze any
> sets of consecutive integers into sets of "NOT BETWEEN" clauses.
> Usually, the lists of XIDs were more or less consecutive, and
> frequently, in the cases where the query got to MBs in size, there
> would be sets of hundreds or even thousands of consecutive integers
> such that we'd be left with a tiny query after this...)

Probably still a win.
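
For the archives: the compression amounts to collapsing runs of
consecutive integers, something like this sketch (not the actual
Slony-I code; the function name and clause format here are mine):

    def compress_not_in(xids):
        # Collapse a non-empty list of integers into runs, then emit
        # NOT BETWEEN for each run and a plain <> for singletons.
        xids = sorted(set(xids))
        runs = []
        start = prev = xids[0]
        for x in xids[1:]:
            if x == prev + 1:
                prev = x
            else:
                runs.append((start, prev))
                start = prev = x
        runs.append((start, prev))
        clauses = []
        for lo, hi in runs:
            if lo == hi:
                clauses.append("xid <> %d" % lo)
            else:
                clauses.append("xid NOT BETWEEN %d AND %d" % (lo, hi))
        return " AND ".join(clauses)

    # compress_not_in([1, 2, 3, 4, 7, 9, 10]) gives
    # "xid NOT BETWEEN 1 AND 4 AND xid <> 7 AND xid NOT BETWEEN 9 AND 10"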

                        regards, tom lane
