On 9/5/25 12:37, Andreas Hasenack wrote:
> Here are some more thoughts from me. This took me a long time to write,
> read, and rewrite. Consider this me brainstorming on pros and cons. I
> like apparmor and am happy to see a push to confine more applications,
> and thanks for offering a strategy for doing that.
> 
thank you for taking the time, it is helpful

>> The number of policies in this package is very large. When no policy cache
>> exists (as on first installation), building it can take a very long time.
>> Even when a cache exists, loading all policies is not instantaneous.
> 
> The upgrade took double the time, and there were no changes. I just
> repeated "dpkg -i" with the same package.

strange, definitely something to dig into

> I also noticed the package
> doesn't use dh_apparmor, so no dh_apparmor snippets are in the rendered
> postinst, maybe that is where some debhelper smarts are missing. I
> didn't investigate further.
> 

most of the smarts are actually in the parser. With that said, having
the profiles split out from the main apparmor package, it does make
sense to look at using dh_apparmor
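
For reference, wiring dh_apparmor into an application package is a
small change. A sketch for a hypothetical package "foo" shipping a
profile as debian/usr.bin.foo (package and profile names are made up,
not from the packages under discussion):

```make
# debian/rules for a hypothetical package "foo" shipping its own
# AppArmor profile as debian/usr.bin.foo
%:
	dh $@

override_dh_install:
	dh_install
	# dh_apparmor generates the postinst/prerm snippets that reload
	# the profile on install/upgrade and clean it up on removal
	dh_apparmor --profile-name=usr.bin.foo -pfoo
```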

>>   - It allows the AppArmor team to review carefully profiles, maintain them, 
>> ensure their coherency and
>> how they interact with each other.
>> - It allows to decouple profiles from the application maintainers, that 
>> don't necessarily have the
>> necessary AppArmor knowledge.
> 
> But I think we would want coupling. Without it, the profile can evolve
> in one direction, and the application in another, and the confinement

sadly, from experience that happens when coupled too. Coupling has
also led to a different kind of drift, where the upstream profile is
being updated differently from the Ubuntu one. Sometimes that is
because the Ubuntu profile is being revised and better suited to
Ubuntu, but often the upstream version is simply evolving faster. We
have also had the problem of Ubuntu package versions being updated in
less-than-secure ways just to make problems go away.

We are certainly partly to blame for this. There needs to be an active
sync happening, and we need tooling, and tracking to make sure it is
happening.

> will break. You could have an old version of the profile installed and a
> new version of the application package installed. Users could have

yes, and again from experience the reverse is also true: the profile
doesn't get updated in the package, but the upstream profile has been.

> pinned an application package to a specific version. Users could want a
> profile fix from bin:apparmor.d-N+1, but keep another profile shipped in
> bin:apparmor.d-N for another application because in N+1 it broke their
> use case.
> 
yes. A very valid point.

>>   - That also allows updating profiles without needing to update the
>> application package.
> 
> Conversely, you would be updating a package that ships 1500 profiles.
> You would be fixing a bug for one profile, and could be introducing a
> bug in another (bugs happen).
> 

possible but much less likely than the reverse. With profiles the bugs
tend to stay localized to the given profile, unless the bug is
introduced in the abstractions/tunables, or in cross domain rules.

abstraction/tunable bugs will affect all profiles, whether packaged
together or separately.

the cross domain rules case strongly favors updating the profiles
together.


> I think at the core I have two objections to this whole approach:
> 
> 1) all profiles loaded even when not needed, leading to the problems in 
> comment #6. You explained several optimizations, but to me the best 
> optimization is to not load what is not needed :)

fair point. With that said, the outlined optimizations are still
needed. Even if every profile were split out into the various
packages, you are still going to have 100s of profiles to load.

> 2) decoupling with the application: high risk of the profile being
> meant for one version of the app, but a later one has different
> requirements that do not match the profile anymore. This discrepancy
> looks easier and quicker to catch if the profile is together with the
> application. The risk of updates to this single package approach also
> seems much higher.

yes, there is more risk here. Experience has shown this to not be as
much of a problem as one might fear, and that the coupling has caused
its own drift problems.

Profiles really don't exist in isolation anymore, especially when more
of the system becomes confined. Cross domain interactions favour
moving policy as a unit. Application updates obviously favour keeping
the profile with the application.

Another variable not discussed is confinement models. Switching
confinement models again favours keeping policy as a unit, or at
least tightly synced, due to cross domain interactions.

Are we going to be using different confinement models? Yes, we really
need to move towards this. There will be standard confinement, a
looser classic/developer environment, and even more restrictive secure
or MLS-style environments. Using things like conditionals, we can
certainly still split profiles out into various application packages.

But switching between confinement models does favor keeping policy
together.

The reality is there will have to be some kind of mix, and we need to
figure out how to best keep the profiles in sync. Making sure updates
are flowing into upstream, and from upstream back into Ubuntu, and
where appropriate, having the Ubuntu versions keep a delta, etc.

> 
> Now, you make a good point about package maintainers not necessarily
> having the apparmor knowledge, or even a desire to confine their
> application. Us suddenly injecting an apparmor profile into their

yeah, we have had very bad luck with this over the years.

> package is rude and disruptive. And we would also have potentially up to
> 1500 new delta pieces added to debian packages.
> 
yep.

> How can we crack this nut?
> 
A hybrid approach, with better and more tooling. I wouldn't say no to
a lot more people working on the problem either ;)

> Have you guys thought of ways to still ship all profiles in a separate
> binary package, but not load them unless they are needed? Unless the
> application they are meant to confine is installed? Can we play some

yes. There are a couple of ways to do this. Probably the easiest is
setting disable symlinks, but having a local tunables directory,
where a boolean file can be dropped in, also works. Possible but not
as good solutions involve installing the profiles to an alternate
source location, and having packages either copy them, or make
symlinks to them, when the package is installed.
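
As a rough illustration of those first two mechanisms, here is a
sketch run against a scratch directory standing in for
/etc/apparmor.d, so it works without root; the profile name and the
boolean are made up:

```shell
#!/bin/sh
# Scratch directory standing in for /etc/apparmor.d
APPARMOR_D=$(mktemp -d)
mkdir -p "$APPARMOR_D/disable" "$APPARMOR_D/tunables/local"

# A stand-in profile, as the apparmor.d package might ship it
printf 'profile foo /usr/bin/foo {\n}\n' > "$APPARMOR_D/foo"

# Mechanism 1: a symlink in disable/ tells the load scripts/parser
# to skip this profile entirely
ln -s "$APPARMOR_D/foo" "$APPARMOR_D/disable/foo"

# Mechanism 2: drop a boolean into a local tunables directory; the
# profile preamble could then gate its rules on an "if" condition
printf '$enable_foo = false\n' > "$APPARMOR_D/tunables/local/foo"
```

A package's postinst could then flip either the symlink or the
boolean when the confined application appears or disappears.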

> tricks with triggers?
> 
indeed, triggers are interesting. We were already looking at them to
cause policy compiles on kernel install, so that we can hopefully have
policy cached at boot. But I hadn't considered using them to
enable/disable profiles for a given package.
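
A file trigger could make the enable/disable flip automatic. A
hypothetical sketch for the apparmor.d package; the watched path,
profile name, and disable-symlink convention are all illustrative,
not a worked-out design:

```shell
# debian/triggers -- wake apparmor.d's postinst whenever a watched
# application binary appears or disappears:
#
#   interest-noawait /usr/bin/foo
#
# postinst fragment handling the trigger:
case "$1" in
    triggered)
        if [ -x /usr/bin/foo ]; then
            # application present: enable and load its profile
            rm -f /etc/apparmor.d/disable/foo
            apparmor_parser -r /etc/apparmor.d/foo || true
        else
            # application gone: disable and unload
            ln -sf /etc/apparmor.d/foo /etc/apparmor.d/disable/foo
            apparmor_parser -R /etc/apparmor.d/foo || true
        fi
        ;;
esac
```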


> I guess similar problems and discussions were had in the past about the
> kernel modules package (we have two binary packages for kernel modules
> IIRC), and linux-firmware (which also installs a whole bunch of binary
> blobs regardless if you have that hardware or not: you *could* have it
> in the future). But none of these are loaded by default: they are just
> files available on disk, in case they are needed.
> 

right

> Some other thoughts:
> a) a promotion plan: what happens once a profile matures, and can be shipped 
> with the application? What are the conditions? What packaging changes will be 
> needed then? We will have to add careful breaks/replaces, following 
> https://wiki.debian.org/PackageTransition to avoid conflicts like in comment 
> #5

So we don't have a good metric to determine when a profile is mature,
but it will certainly be tied to bugs/feedback and how often it is
updated.

The biggest condition is the support of the package maintainer. Other
conditions would be around to what degree a profile/package is a leaf
or a node (something with lots of dependencies and cross-domain
interactions).

Profiles moving out of the main src/binary are going to need an
annotation about which package installs them. It might be possible
that the breaks/replaces could serve as the annotation. Ideally the
upstream version of the profile will remain in the source tarball,
and moving a profile entails dropping it from the install list and
adding the necessary breaks/replaces.

The other part I want is some tooling that we can run to check the
(not installed) source versions against the profiles that have been
split out into other packages, so that we can periodically (maybe
every update of the source package) run a sync and use that to feed
updates back to upstream, and submit updates (when needed) to the
other packages.

> b) or is the plan to always ship the profile in the distro via
> bin:apparmor.d, to be available in case the application package is
> installed, and never ship it in the application package itself? Counting
> on the fact that the current installation times can be made faster and
> have it consume less memory?

this is a possibility, though probably in a slightly more flexible
incarnation, and ideally still not loading profiles that we don't need
to.

The reality is optimizations are only going to get us so far. Loading
anything that isn't needed takes more cpu and memory. There are
practicalities to consider, but we should be working towards the best
we can achieve, within the constraints we have.

In this scenario we would use an overlay, something we want to
introduce anyway. The overlay would become something like local
profiles : application-packaged profiles : base apparmor profiles.

This would allow us to install a base source profile, and still allow
applications to install profiles. Ideally we would still be
coordinating/syncing between the application and the apparmor-packaged
version of a profile, but giving an overlay layer to packaging opens
up some flexibility.
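
The precedence could be sketched like this. This is a toy resolver
over scratch directories; the layer names and the resolve_profile
helper are hypothetical, just to show the lookup order:

```shell
#!/bin/sh
# Three overlay layers, highest priority first:
# local > application-packaged > base (apparmor.d package)
LOCAL_D=$(mktemp -d); PKG_D=$(mktemp -d); BASE_D=$(mktemp -d)

resolve_profile() {
    # Return the highest-priority copy of the named profile
    for layer in "$LOCAL_D" "$PKG_D" "$BASE_D"; do
        if [ -e "$layer/$1" ]; then
            echo "$layer/$1"
            return 0
        fi
    done
    return 1
}

printf 'base copy\n' > "$BASE_D/foo"
printf 'pkg copy\n'  > "$PKG_D/foo"
resolve_profile foo   # the application-packaged copy shadows the base one
```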

To disable profiles, either disable symlinks or whiteouts in the
overlay could be used. I am not sure which is the best mechanism
within the Ubuntu/Debian packaging, but I would assume we could use
either triggers or dh_apparmor depending on the situation.


> c) testing plan: how can the profiles from src:apparmor.d be tested? We would 
> have to have an autopkgtest in src:apparmor.d for package bin:FOO that would 
> install *both* bin:apparmor.d and bin:FOO, and from your comments looks like 
> that fails due to OOM, going back to the optimization problem.

yes, they can be tested. We have been running autopkgtests for the
profiles in apparmor.d, and yes, there is an OOM issue for some
packages with the current defaults. The OOMs can be dealt with by
increasing memory, or by not loading the whole profile set for a given
test. We are at a painful, less-than-ideal stage atm. But with a
combination of optimizations and packaging work we should be able to
fix the OOM issues.

> d) what about more restricted systems like raspberry PIs, are they out of
> scope for this package, at this stage?

definitely opt in to the pain atm. Long term they are in scope, but we
need to address the core issues first. Hopefully between the current
set of optimizations and getting the packaging sorted out, that will
be enough for raspberry PIs. But some restricted systems really are
going to need more: more optimization, new options for dividing
policy, and shipping precompiled policy (another form of optimization
not previously discussed).
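
For the precompiled-policy angle, the existing parser cache machinery
is the likely starting point. Something along these lines (the cache
location is illustrative) could run at image-build or kernel-install
time instead of on the device:

```shell
# Pre-build the binary policy cache without loading into the kernel,
# so boot-time loads hit the cache instead of recompiling
apparmor_parser --write-cache --cache-loc=/var/cache/apparmor \
    --skip-kernel-load /etc/apparmor.d/
```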


> e) What will SRUs look like for src:apparmor.d? How many profiles would you 
> be updating in one go? How many applications would have to be tested 
> separately?

Ideally a small set. SRUs are painful. Doing an SRU per profile that
needs to be updated would just be crazy, but packing too much into an
SRU is also bad.

Testing would depend on which profiles are being updated. For a leaf
profile/application, you might get away with testing just a single
application. But if we update something more core, say systemd (as a
worst case scenario; it isn't actually confined by the current
apparmor.d packaging), there would have to be extensive testing.


> f) What happens if I have a host spawning dozens of LXD containers, and all 
> those containers install bin:apparmor.d? "Don't do it"? :)
> 
indeed. atm this is opt into the pain.

Long term, another optimization comes into play. The kernel load does
a dedup check. Currently this just drops duplicate loads, saving on
the whole replacement dance.

However, we are doing fairly fine-grained reference counting in the
kernel, with an eye towards sharing between different profiles.
Currently we are at the point where profiles loaded together are
close to being able to share components.

Once we get there, dedup can be extended and the container could pick
up a reference count to units already loaded.

> I also understand this is following an upstream project, which has all
> these profiles in a git repository/tarball, and having one source debian
> package mimicking that makes sense. But even with optimizations, unless
> they are really fantastic, I don't see right now what this will look

well, cumulatively they will be. I have some educated guesses, and
size will see a bigger improvement than time, but any one increment
won't be enough. It's going to take some time, and like you said, the
best optimization is to just not load it if it isn't used.

> like in the long term.
> 
So I think distro packaging is different than the upstream source. As
a distro, we do packaging however it makes the best sense for us. This
may mean more packaging work on our end.

With all that said, mimicking the upstream packaging here was
deliberate, but only as a first step. The upstream packaging
philosophy is trying to provide a base for as many distros as possible
with as little work for the upstream as possible.

A distro willing to put some work in can and should improve the
packaging and make it fit the distro's needs. Hopefully some of the
work we do on the Ubuntu and even Debian side can be fed back into the
upstream side to improve its work as well.


> Now, I'm not the final word on this. This just appeared on my radar for

no, but your feedback is very important. You come at it from a
different angle than we do, which gives a different set of data points
to consider.

> sponsorship reasons, and I have a passion for application confinement,
> having written some apparmor profiles in the recent past. I truly
> welcome others to join the discussion, and have no objections to be
> proven wrong.
>

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2121409

Title:
  [FFE] add a new apparmor.d package containing several apparmor
  profiles

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/apparmor/+bug/2121409/+subscriptions


-- 
ubuntu-bugs mailing list
[email protected]
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
