Re: Limiting memory allocation

2022-05-24 Thread Bruce Momjian
On Tue, May 24, 2022 at 09:55:16PM -0400, Tom Lane wrote: > Bruce Momjian writes: > > On Tue, May 24, 2022 at 09:20:43PM -0400, Tom Lane wrote: > >> Then you just rendered all the planner's estimates fantasies. > > > That's what I was asking --- if the planner's estimates are based on the

Re: Limiting memory allocation

2022-05-24 Thread Tom Lane
Bruce Momjian writes: > On Tue, May 24, 2022 at 09:20:43PM -0400, Tom Lane wrote: >> Then you just rendered all the planner's estimates fantasies. > That's what I was asking --- if the planner's estimates are based on the > size of work_mem --- I thought you said it is not. The planner's

Re: Limiting memory allocation

2022-05-24 Thread Bruce Momjian
On Tue, May 24, 2022 at 09:20:43PM -0400, Tom Lane wrote: > Bruce Momjian writes: > > On Tue, May 24, 2022 at 07:40:45PM -0400, Tom Lane wrote: > >> (1) There are not a predetermined number of allocations. For example, > >> if we do a given join as nestloop+inner index scan, that doesn't require

Re: Limiting memory allocation

2022-05-24 Thread Tom Lane
Bruce Momjian writes: > On Tue, May 24, 2022 at 07:40:45PM -0400, Tom Lane wrote: >> (1) There are not a predetermined number of allocations. For example, >> if we do a given join as nestloop+inner index scan, that doesn't require >> any large amount of memory; but if we do it as merge or hash

Re: Limiting memory allocation

2022-05-24 Thread Bruce Momjian
On Tue, May 24, 2022 at 07:40:45PM -0400, Tom Lane wrote: > Bruce Momjian writes: > > If the plan output is independent of work_mem, > > ... it isn't ... Good. > > I always wondered why we > > didn't just determine the number of simultaneous memory requests in the > > plan and just allocate

Re: Limiting memory allocation

2022-05-24 Thread Tom Lane
Bruce Momjian writes: > If the plan output is independent of work_mem, ... it isn't ... > I always wondered why we > didn't just determine the number of simultaneous memory requests in the > plan and just allocate accordingly, e.g. if there are four simultaneous > memory requests in the plan,

Re: Limiting memory allocation

2022-05-24 Thread Bruce Momjian
On Tue, May 24, 2022 at 11:49:27AM -0400, Robert Haas wrote: > It's always seemed to me that the principled thing to do would be to > make work_mem a per-query budget rather than a per-node budget, and > have add_path() treat memory usage as an independent figure of merit > -- and also discard any
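
Robert's suggestion amounts to treating cost and memory as a two-dimensional figure of merit: a candidate plan fragment is worth keeping only if no already-kept fragment is at least as cheap and uses no more memory. A toy sketch of that dominance test with made-up numbers — DemoPath and its fields are purely illustrative, not PostgreSQL's Path or add_path():

#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-in for a planner path; not PostgreSQL's Path. */
typedef struct
{
    const char *name;
    double      cost;    /* estimated execution cost */
    double      mem_kb;  /* estimated memory footprint */
} DemoPath;

/*
 * A candidate is dominated if some kept path is at least as cheap
 * and needs no more memory; such a candidate can be discarded.
 */
static bool
dominated(const DemoPath *cand, const DemoPath *kept, int nkept)
{
    for (int i = 0; i < nkept; i++)
    {
        if (kept[i].cost <= cand->cost && kept[i].mem_kb <= cand->mem_kb)
            return true;
    }
    return false;
}

int
main(void)
{
    DemoPath kept[] = {
        {"nestloop+index", 120.0, 64.0},
        {"hash join", 80.0, 4096.0},
    };
    DemoPath cand = {"merge join", 150.0, 8192.0};

    printf("%s is %s\n", cand.name,
           dominated(&cand, kept, 2) ? "discarded" : "kept");
    return 0;
}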

Re: Limiting memory allocation

2022-05-24 Thread Robert Haas
On Fri, May 20, 2022 at 7:09 PM Tomas Vondra wrote: > I wonder if we might eventually use this to define memory budgets. One > of the common questions I get is how do you restrict the user from > setting work_mem too high or doing too many memory-hungry things. > Currently there's no way to do

Re: Limiting memory allocation

2022-05-23 Thread Jan Wieck
On 5/20/22 19:08, Tomas Vondra wrote: Well, we already have the memory-accounting built into the memory context infrastructure. It kinda does the same thing as the malloc() wrapper, except that it does not publish the information anywhere and it's per-context (so we have to walk the context
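
To make the malloc()-wrapper idea concrete, here is a minimal standalone sketch that keeps a running total of live allocations. It is not PostgreSQL's memory-context accounting; publishing the counter somewhere visible (e.g. shared memory), which the message notes is the missing piece, is left out:

#include <stdatomic.h>
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Running total of live tracked bytes; a real implementation would
 * publish this (e.g. in shared memory) so other processes can see it. */
static _Atomic size_t total_allocated = 0;

/* Header stored in front of each allocation so the size can be
 * subtracted again on free; the union keeps the payload aligned. */
typedef union
{
    size_t      size;
    max_align_t align;
} chunk_header;

static void *
tracked_malloc(size_t size)
{
    chunk_header *hdr = malloc(sizeof(chunk_header) + size);

    if (hdr == NULL)
        return NULL;
    hdr->size = size;
    atomic_fetch_add(&total_allocated, size);
    return hdr + 1;             /* memory just past the header */
}

static void
tracked_free(void *ptr)
{
    if (ptr != NULL)
    {
        chunk_header *hdr = (chunk_header *) ptr - 1;

        atomic_fetch_sub(&total_allocated, hdr->size);
        free(hdr);
    }
}

int
main(void)
{
    char *buf = tracked_malloc(1024);

    memset(buf, 'x', 1024);
    printf("live tracked bytes: %zu\n", atomic_load(&total_allocated));
    tracked_free(buf);
    printf("live tracked bytes: %zu\n", atomic_load(&total_allocated));
    return 0;
}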

Re: Limiting memory allocation

2022-05-20 Thread Tomas Vondra
On 5/20/22 21:50, Stephen Frost wrote: > Greetings, > > ... > >>> How exactly this would work is unclear to me; maybe one >>> process keeps an eye on it in an OS-specific manner, > > There seems to be a lot of focus on trying to implement this as "get the > amount of free memory from the OS and
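
For reference, the "amount of free memory from the OS" being debated is typically MemAvailable from /proc/meminfo. A minimal Linux-only sketch (not part of this thread's patches) that reads it:

#include <stdio.h>

/* Returns MemAvailable in kilobytes, or -1 if it cannot be read.
 * Linux-specific; inside a container this reflects the host unless
 * something remaps /proc, which is part of why this approach is
 * questioned in the thread. */
static long
read_mem_available_kb(void)
{
    FILE *fp = fopen("/proc/meminfo", "r");
    char  line[256];
    long  kb = -1;

    if (fp == NULL)
        return -1;
    while (fgets(line, sizeof(line), fp) != NULL)
    {
        if (sscanf(line, "MemAvailable: %ld kB", &kb) == 1)
            break;
    }
    fclose(fp);
    return kb;
}

int
main(void)
{
    printf("MemAvailable: %ld kB\n", read_mem_available_kb());
    return 0;
}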

Re: Limiting memory allocation

2022-05-20 Thread Stephen Frost
Greetings, * Oleksii Kliukin (al...@hintbits.com) wrote: > > On 18. May 2022, at 17:11, Alvaro Herrera wrote: > > On 2022-May-18, Jan Wieck wrote: > >> Maybe I'm missing something, but what is it that you would actually > >> consider > >> a solution? Knowing your current memory consumption

Re: Limiting memory allocation

2022-05-20 Thread Oleksii Kliukin
Hi, > On 18. May 2022, at 17:11, Alvaro Herrera wrote: > > On 2022-May-18, Jan Wieck wrote: > >> Maybe I'm missing something, but what is it that you would actually consider >> a solution? Knowing your current memory consumption doesn't make the need >> for allocating some right now go away.

Re: Limiting memory allocation

2022-05-19 Thread Dmitry Dolgov
> On Wed, May 18, 2022 at 04:49:24PM -0400, Joe Conway wrote: > On 5/18/22 16:20, Alvaro Herrera wrote: > > On 2022-May-18, Joe Conway wrote: > > > > > On 5/18/22 11:11, Alvaro Herrera wrote: > > > > > > Apparently, if the cgroup goes over the "high" limit, the processes are > > > > *throttled*.

Re: Limiting memory allocation

2022-05-18 Thread Joe Conway
On 5/18/22 16:20, Alvaro Herrera wrote: On 2022-May-18, Joe Conway wrote: On 5/18/22 11:11, Alvaro Herrera wrote: > Apparently, if the cgroup goes over the "high" limit, the processes are > *throttled*. Then if the group goes over the "max" limit, OOM-killer is > invoked. You may be

Re: Limiting memory allocation

2022-05-18 Thread Alvaro Herrera
On 2022-May-18, Joe Conway wrote: > On 5/18/22 11:11, Alvaro Herrera wrote: > > Apparently, if the cgroup goes over the "high" limit, the processes are > > *throttled*. Then if the group goes over the "max" limit, OOM-killer is > > invoked. > You may be misinterpreting "throttle" in this

Re: Limiting memory allocation

2022-05-18 Thread Joe Conway
On 5/18/22 11:11, Alvaro Herrera wrote: Now that's where cgroup's memory limiting features would prove useful, if they weren't totally braindead: https://www.kernel.org/doc/Documentation/cgroup-v2.txt Apparently, if the cgroup goes over the "high" limit, the processes are *throttled*. Then if
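
For context, the cgroup-v2 behavior under discussion is that exceeding memory.high throttles the group (reclaim is forced) while exceeding memory.max invokes the OOM killer. A hedged sketch of setting both limits from C, assuming a pre-created cgroup at the illustrative path /sys/fs/cgroup/postgres and sufficient privileges:

#include <stdio.h>

/* Writes one value into a cgroup-v2 control file; returns 0 on success. */
static int
set_cgroup_limit(const char *cgroup_dir, const char *file, const char *value)
{
    char  path[512];
    FILE *fp;

    snprintf(path, sizeof(path), "%s/%s", cgroup_dir, file);
    fp = fopen(path, "w");
    if (fp == NULL)
        return -1;
    fprintf(fp, "%s\n", value);
    return fclose(fp);
}

int
main(void)
{
    const char *cg = "/sys/fs/cgroup/postgres";  /* illustrative path */

    /* Above "high" the kernel throttles/reclaims the group's processes. */
    if (set_cgroup_limit(cg, "memory.high", "8G") != 0)
        perror("memory.high");
    /* Above "max" the OOM killer is invoked for the group. */
    if (set_cgroup_limit(cg, "memory.max", "10G") != 0)
        perror("memory.max");
    return 0;
}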

Re: Limiting memory allocation

2022-05-18 Thread Stephen Frost
Greetings, * Jan Wieck (j...@wi3ck.info) wrote: > On 5/17/22 18:30, Stephen Frost wrote: > >This isn’t actually a solution though and that’s the problem- you end up > >using swap but if you use more than “expected” the OOM killer comes in and > >happily blows you up anyway. Cgroups are containers

Re: Limiting memory allocation

2022-05-18 Thread Jan Wieck
On 5/18/22 11:11, Alvaro Herrera wrote: On 2022-May-18, Jan Wieck wrote: Maybe I'm missing something, but what is it that you would actually consider a solution? Knowing your current memory consumption doesn't make the need for allocating some right now go away. What do you envision the

Re: Limiting memory allocation

2022-05-18 Thread Stephen Frost
Greetings, * Tom Lane (t...@sss.pgh.pa.us) wrote: > Stephen Frost writes: > > On Tue, May 17, 2022 at 18:12 Tom Lane wrote: > >> ulimit might be interesting to check into as well. The last time I > >> looked, it wasn't too helpful for this on Linux, but that was years ago. > > > Unfortunately

Re: Limiting memory allocation

2022-05-18 Thread Alvaro Herrera
On 2022-May-18, Jan Wieck wrote: > Maybe I'm missing something, but what is it that you would actually consider > a solution? Knowing your current memory consumption doesn't make the need > for allocating some right now go away. What do you envision the response of > PostgreSQL to be if we had

Re: Limiting memory allocation

2022-05-18 Thread Ronan Dunklau
On Wednesday, 18 May 2022 at 16:23:34 CEST, Jan Wieck wrote: > On 5/17/22 18:30, Stephen Frost wrote: > > Greetings, > > On Tue, May 17, 2022 at 18:12 Tom Lane wrote: > > Jan Wieck writes: > > > On 5/17/22 15:42, Stephen

Re: Limiting memory allocation

2022-05-18 Thread Jan Wieck
On 5/17/22 18:30, Stephen Frost wrote: Greetings, On Tue, May 17, 2022 at 18:12 Tom Lane wrote: Jan Wieck writes: > On 5/17/22 15:42, Stephen Frost wrote: >> Thoughts? > Using cgroups one can actually force a certain

Re: Limiting memory allocation

2022-05-17 Thread Tom Lane
Stephen Frost writes: > On Tue, May 17, 2022 at 18:12 Tom Lane wrote: >> ulimit might be interesting to check into as well. The last time I > looked, it wasn't too helpful for this on Linux, but that was years ago. > > Unfortunately I really don’t think anything here has materially changed in
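
On the ulimit angle, the programmatic equivalent is setrlimit(). A minimal sketch assuming 64-bit Linux; glibc serves large requests via mmap(), which RLIMIT_DATA only began counting in kernel 4.7 — one reason ulimit historically "wasn't too helpful" here:

#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int
main(void)
{
    struct rlimit rl;

    /* Cap the data segment at 1 GiB.  On Linux >= 4.7 this also covers
     * private anonymous mmap(), which glibc uses for large allocations;
     * on older kernels large mallocs can sail past the limit. */
    rl.rlim_cur = 1024UL * 1024 * 1024;
    rl.rlim_max = 1024UL * 1024 * 1024;
    if (setrlimit(RLIMIT_DATA, &rl) != 0)
    {
        perror("setrlimit");
        return 1;
    }

    /* Beyond the limit, malloc() returns NULL instead of the process
     * later meeting the OOM killer. */
    void *p = malloc(2UL * 1024 * 1024 * 1024);
    printf("2 GiB malloc %s\n", p != NULL ? "succeeded" : "failed");
    free(p);
    return 0;
}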

Re: Limiting memory allocation

2022-05-17 Thread Stephen Frost
Greetings, On Tue, May 17, 2022 at 18:12 Tom Lane wrote: > Jan Wieck writes: > > On 5/17/22 15:42, Stephen Frost wrote: > >> Thoughts? > > > Using cgroups one can actually force a certain process (or user, or > > service) to use swap if and when that service is using more memory than > > it

Re: Limiting memory allocation

2022-05-17 Thread Tom Lane
Jan Wieck writes: > On 5/17/22 15:42, Stephen Frost wrote: >> Thoughts? > Using cgroups one can actually force a certain process (or user, or > service) to use swap if and when that service is using more memory than > it was "expected" to use. I wonder if we shouldn't just provide

Re: Limiting memory allocation

2022-05-17 Thread Jan Wieck
On 5/17/22 15:42, Stephen Frost wrote: Thoughts? Yes. The main and foremost problem is a server that is used for multiple services and they behave differently when it comes to memory allocation. One service just allocates like we have petabytes of RAM, then uses little of it, while another

Limiting memory allocation

2022-05-17 Thread Stephen Frost
Greetings, An ongoing issue in container environments where Kubernetes is being used is that setting the overcommit parameters on the base system will impact all of the processes on that system and not all of them handle malloc failing as gracefully as PG does and may allocate more than what they
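
For reference, the overcommit behavior in question is controlled by the kernel-wide vm.overcommit_memory setting (0 = heuristic, 1 = always overcommit, 2 = strict accounting), and under Kubernetes it is a property of the node rather than the pod. A small Linux-only sketch that reads the current policy:

#include <stdio.h>

int
main(void)
{
    FILE *fp = fopen("/proc/sys/vm/overcommit_memory", "r");
    int   mode;

    if (fp == NULL)
    {
        perror("/proc/sys/vm/overcommit_memory");
        return 1;
    }
    if (fscanf(fp, "%d", &mode) != 1)
    {
        fclose(fp);
        return 1;
    }
    fclose(fp);

    /* 0 = heuristic overcommit (default), 1 = always overcommit,
     * 2 = strict accounting based on CommitLimit. */
    printf("vm.overcommit_memory = %d\n", mode);
    return 0;
}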