On 20/10/2023 19:39, Stephen Frost wrote:
> Greetings,
> * Andrei Lepikhov (a.lepik...@postgrespro.ru) wrote:
>> The only issue I worry about is the uncertainty and clutter that can be
>> created by this feature. In the worst case, when we have a complex error
>> stack (including the extension's CATCH sections, exceptions in stored
>> procedures, etc.), the backend will throw the memory limit error repeatedly.
>
> I'm not seeing what additional uncertainty or clutter there is- this is,
> again, exactly the same as what happens today on a system with
> overcommit disabled and I don't feel like we get a lot of complaints
> about this today.

Maybe I missed something, or I just see this feature from a different point of view (as an extension developer), but so far overcommit is easier to live with: it kills the process, so after restart the backend or background worker starts from a clean internal state. With this limit enabled, we have to assume that every function call can raise an error, and guard such calls with PG_CATCH sections that roll local and module-level variables back to their initial state. That complicates development.

Of course, this limit is a good feature, but from my point of view it would be better to kill a memory-consuming backend than to throw an error, at least until we have a technique to repeat query planning with a chance of building a more memory-efficient plan.
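
To illustrate the pattern I mean, here is a minimal sketch of what extension code would need around any allocation-heavy call, assuming a hypothetical module-level cache (my_cached_state and my_build_state are made-up names, not from any real extension):

#include "postgres.h"

/* Hypothetical module-level cache that must stay consistent across errors. */
static void *my_cached_state = NULL;

/* Hypothetical helper; its palloc can now fail with the memory-limit ERROR. */
static void *
my_build_state(void)
{
    return palloc0(1024);
}

static void
my_do_work(void)
{
    MemoryContext oldcxt = CurrentMemoryContext;

    PG_TRY();
    {
        /* Any allocation in here may now raise the memory-limit error. */
        my_cached_state = my_build_state();
    }
    PG_CATCH();
    {
        /* Restore the caller's context and reset module state, then propagate. */
        MemoryContextSwitchTo(oldcxt);
        my_cached_state = NULL;
        PG_RE_THROW();
    }
    PG_END_TRY();
}

With the OOM killer, none of this bookkeeping is needed, because the process simply goes away and restarts with fresh state.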

--
regards,
Andrei Lepikhov
Postgres Professional


