On Mon, Nov 23, 2020 at 02:30:09PM +0100, Daniel Borkmann wrote:
> On 11/19/20 6:37 PM, Roman Gushchin wrote:
> > Currently bpf is using the memlock rlimit for the memory accounting.
> > This approach has its downsides and over time has created a significant
> > amount of problems:
> > 
> > 1) The limit is per-user, but because most bpf operations are performed
> >     as root, the limit has little value.
> > 
> > 2) It's hard to come up with a specific maximum value, especially because
> >     the counter is shared with non-bpf users (e.g. memlock() users).
> >     Any specific value is either too low, creating false failures,
> >     or too high and therefore useless.
> > 
> > 3) Charging is not connected to the actual memory allocation. Bpf code
> >     should manually calculate the estimated cost and precharge the counter,
> >     and then take care of uncharging, including all fail paths.
> >     It adds to the code complexity and makes it easy to leak a charge.
> > 
> > 4) There is no simple way of getting the current value of the counter.
> >     We've used drgn for it, but it's far from being convenient.
> > 
> > 5) Cryptic -EPERM is returned on exceeding the limit. Libbpf even had
> >     a function to "explain" this case for users.
> > 
> > In order to overcome these problems let's switch to the memcg-based
> > memory accounting of bpf objects. With the recent addition of the percpu
> > memory accounting, now it's possible to provide a comprehensive accounting
> > of the memory used by bpf programs and maps.
> > 
> > This approach has the following advantages:
> > 1) The limit is per-cgroup and hierarchical. It's way more flexible and
> >     allows better control over memory usage by different workloads. Of course,
> >     it requires enabled cgroups and kernel memory accounting and a properly
> >     configured cgroup tree, but that's the default configuration for a modern
> >     Linux system.
> > 
> > 2) The actual memory consumption is taken into account. It happens
> >     automatically at allocation time if the __GFP_ACCOUNT flag is passed.
> >     Uncharging is also performed automatically when the memory is released.
> >     So the code on the bpf side becomes simpler and safer.
> > 
> > 3) There is a simple way to get the current value and statistics.
> > 
> > In general, if a process performs a bpf operation (e.g. creates or updates
> > a map), its memory cgroup is charged. However, map updates performed from
> > an interrupt context are charged to the memory cgroup of the process
> > that created the map.
> > 
> > Providing a 1:1 replacement for the rlimit-based memory accounting is
> > a non-goal of this patchset. Users and memory cgroups are completely
> > orthogonal, so it's not possible even in theory.
> > Memcg-based memory accounting requires a properly configured cgroup tree
> > to be actually useful. However, that is how memory is managed on a modern
> > Linux system.

Hi Daniel!

> 
> The cover letter here only describes the advantages of this series, but leaves
> out discussion of the disadvantages. They definitely must be part of the series
> to provide a clear description of the semantic changes to readers.

Honestly, I don't see them as disadvantages. Cgroups are basic units in which
resource control limits/guarantees/accounting are expressed. If there are
no cgroups created and configured in the system, it's obvious (maybe only to me)
that no limits are applied.

Users (rlimits) are to some extent similar units, but they do not provide
a comprehensive resource control system. Some parts are deprecated (like rss
limits), some parts are just missing. Aside from bpf, nobody uses users to
control memory as a physical resource. It simply doesn't work (and never did).
If somebody expects that a non-privileged user can't harm the system by
depleting its memory (and other resources), that's simply not correct. There
are multiple ways of doing it, and accounting or not accounting bpf maps
doesn't really change anything. If we see these limits not as a real security
mechanism, but as a way to prevent "mistakes" which can harm the system, that's
legitimate to some extent. The question is only whether it justifies the amount
of problems we had with these limits.

Switching to memory cgroups, which are the way memory control is expressed,
IMO doesn't need additional justification. During the last year I remember
2 or 3 occasions when various people (internally in Fb and on public mailing
lists) were asking why bpf memory is not accounted to memory cgroups. I think
it's basically expected these days.

I'll try to make it more obvious that we're switching from users to cgroups
and describe the consequences of this on an unconfigured system. I'll update
the cover letter.
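
To make the mechanism part a bit more concrete, here is a rough sketch of the
idea (only an illustration of __GFP_ACCOUNT, not the exact code from the
series): once an allocation is done with __GFP_ACCOUNT, charging and uncharging
follow the allocation itself, so there is no manual precharging and no
unwinding of charges on error paths left on the bpf side.

	/*
	 * Rough illustration only: with __GFP_ACCOUNT the allocator charges
	 * the current memory cgroup and the charge is dropped automatically
	 * in kfree(), so no manual (un)charging is needed on the bpf side.
	 */
	static void *bpf_map_area_alloc_sketch(size_t size, int numa_node)
	{
		return kmalloc_node(size, GFP_KERNEL | __GFP_ACCOUNT | __GFP_NOWARN,
				    numa_node);
	}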

> Last time we discussed them, they were i) no mem limits in general on
> unprivileged users when memory cgroups are not configured in the kernel,
> and ii) no mem limits by default if not configured in the cgroup specifically.
> Did we make any progress on these in the meantime? How do we want to address
> them? What is the concrete justification to not address them?

I don't see how they can or should be addressed.
Cgroups are the way the resource consumption of a group of processes can be
limited. If there are no cgroups configured, it means all resources are
available to everyone. Maybe a user wants to use the whole memory for a bpf
map? Why not?

Do you have any specific use case in mind?
If you see real value in the old system (I don't) that can justify the
additional complexity of keeping both in a working state, we can discuss this
option too. We can make the switch in a few steps, if you think it's too risky.

> 
> Also I wonder what the risks of regressions are here, for example, if an
> existing orchestrator has configured memory cgroup limits that are tailored
> to the application's needs... Now, with a kernel upgrade, BPF will start to
> interfere, e.g. if a BPF program attached to cgroups (e.g.
> connect/sendmsg/recvmsg or a general cgroup skb egress hook) starts charging
> to the process' memcg due to map updates?

Well, if somebody has a tight memory limit and large bpf map(s), they can see
a "regression". However, kernel memory usage and the accounting implementation
details vary from version to version, so nobody should expect that limits set
once will work forever. If for some strange reason this creates a critical
problem, as a workaround it's possible to disable kernel memory accounting
as a whole (via a boot option).
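
For completeness, the workaround I mean (IIRC, worth double-checking against
kernel-parameters.txt) is disabling kernel memory accounting entirely on the
kernel command line, so bpf allocations would not be charged to any memory
cgroup:

	cgroup.memory=nokmem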

Actually, it seems that the usefulness of strict limits is limited in general,
because it's hard to come up with and assign any specific value. They are
always either too relaxed (and have no value) or too strict (and cause
production issues). Memory cgroups are generally moving towards soft limits
and protections. But that's a separate topic...

Thanks!
