Re: [GENERAL] max_expr_depth

2001-06-18 Thread Tom Lane

Joseph Shraibman <[EMAIL PROTECTED]> writes:
> Do you really think I should do 1000 updates in a transaction instead of
> an IN with 1000 items?  I can do my buffer flush any way I want but I'd
> have to think the overhead of making 1000 calls to the backend would
> more than overwhelm the cost of the big OR statement (especially if the
> server and client aren't on the same machine).

If your choices are 1000 separate updates or a 1000-way IN, then maybe
the IN will be faster, but it's likely not as fast as it could be.
The best plan you can hope for from the IN is one indexscan pass per
IN item.  You'd get the same sort of plan from the 1000 updates, but
it'd cost 1000 iterations of the parser and planner, so the updates
likely will come out behind.

The real issue is whether you could get a better plan (merge or hash
join, say) from a join to a temp table.  Not sure about that --- unless
you go to the trouble of vacuuming the temp table, the planner won't
know much about it and is likely to pick an unhelpful plan anyway.
So maybe you should stick with what you have.  You're likely to run into
trouble if you try to scale it to ~ 10000 IN values; that max_expr_depth
check does exist for a reason.  But at ~ 1000 values it doesn't sound
too awful.
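
For concreteness, a temp-table version might look like the sketch below
("mytable" and the id-loading step are placeholders, and VACUUM has to
run outside any transaction block):

    CREATE TEMP TABLE target_ids (id integer);
    -- load the ids here, e.g. with COPY or a batch of INSERTs
    INSERT INTO target_ids VALUES (42);
    -- give the planner real statistics so it can consider a merge
    -- or hash join instead of guessing
    VACUUM ANALYZE target_ids;
    -- Postgres's UPDATE ... FROM extension expresses the join
    UPDATE mytable SET status = 2
      FROM target_ids
     WHERE mytable.id = target_ids.id
       AND mytable.status = 1;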

regards, tom lane




Re: [GENERAL] max_expr_depth

2001-06-18 Thread Tom Lane

Joseph Shraibman <[EMAIL PROTECTED]> writes:
> I recently tried to do a big update with postgres 7.1.2.  The update was
> something like
> UPDATE table SET status = 2 WHERE id IN (a few thousand entries)  AND
> status = 1;
> and I got:
> ERROR:  Expression too complex: nesting depth exceeds max_expr_depth =
> 10000

How many is "a few thousand"?  About 10000 by any chance?  That
"IN (a, b, ...)" will expand to "((id = a) OR (id = b) OR ...)"
which would set off the expression-too-complex detector right about
10000 ORs, if I'm not mistaken.

You could crank up the max_expr_depth SET variable if you are so
inclined, but frankly performance of this query is going to suck
anyway.  I'd recommend sticking the target id values into a temp
table that you can join against, instead.
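
If you do decide to raise the limit instead, it's one session-local
setting (the 20000 below is an arbitrary value, picked only to clear a
roughly-10000-deep expression):

    SET max_expr_depth = 20000;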

As for why we have an expression-too-complex check, it's because
mysql's crashme test used to provoke a stack overflow crash...

regards, tom lane




Re: [GENERAL] max_expr_depth

2001-06-18 Thread Ryan Mahoney

I don't know the cause, but if you only have to run this procedure once in 
a while, you could select all the records that need to be updated, use 
a text editor to build a few thousand single UPDATE statements, then save 
that file and feed it to the postgres backend through psql.
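
If the list is long, you can even skip the text editor and let the
database write the statements for you. A sketch, with "mytable" standing
in for the real table (the ::text cast sidesteps any ambiguity in the
string concatenation):

    -- Run this in psql with output captured to a file (\o updates.sql),
    -- then feed the file back in:  psql yourdb -f updates.sql
    SELECT 'UPDATE mytable SET status = 2 WHERE id = ' || id::text || ';'
      FROM mytable
     WHERE status = 1;

Wrapping the generated file in a single BEGIN/COMMIT also avoids a
separate commit per statement.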

Good Luck!

-r

At 08:50 PM 6/18/01 -0400, Joseph Shraibman wrote:

>I recently tried to do a big update with postgres 7.1.2.  The update was
>something like
>UPDATE table SET status = 2 WHERE id IN (a few thousand entries)  AND
>status = 1;
>
>
>and I got:
>ERROR:  Expression too complex: nesting depth exceeds max_expr_depth =
>10000
>
>What exactly caused this and how do I work around it?
>
>
>--
>Joseph Shraibman
>[EMAIL PROTECTED]
>Increase signal to noise ratio.  http://www.targabot.com
>


