The following will reproduce the issue in a disco VM with a disco LXD
container:

Initial setup:
1. have an up-to-date disco VM
$ cat /proc/version_signature 
Ubuntu 5.0.0-11.12-generic 5.0.6

2. sudo snap install lxd
3. sudo adduser `id -un` lxd
4. newgrp lxd
5. sudo lxd init # use defaults
6. . /etc/profile.d/apps-bin-path.sh

After this, note the SFS_MOUNTPOINT bug:
1. lxc launch ubuntu-daily:d d-testapparmor
2. lxc exec d-testapparmor /lib/apparmor/apparmor.systemd reload
3. fix /lib/apparmor/rc.apparmor.functions to define
   SFS_MOUNTPOINT="${SECURITYFS}/${MODULE}" at the top of
   is_container_with_internal_policy() (see the sketch after this list), i.e.
   lxc exec d-testapparmor vi /lib/apparmor/rc.apparmor.functions
4. lxc exec d-testapparmor -- sh -x /lib/apparmor/apparmor.systemd reload
   # notice apparmor_parser was called
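
For reference, the step 3 edit is roughly the following (a sketch of the
intent only, not the exact change; SECURITYFS and MODULE are variables that
rc.apparmor.functions already defines earlier in the file):

is_container_with_internal_policy() {
    # added: define SFS_MOUNTPOINT before the checks below use it
    SFS_MOUNTPOINT="${SECURITYFS}/${MODULE}"

    # ... rest of the existing function body unchanged ...
}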

At this point, these were called (as seen from the sh -x output, above):

/sbin/apparmor_parser --write-cache --replace -- /etc/apparmor.d
/sbin/apparmor_parser --write-cache --replace -- /var/lib/snapd/apparmor/profiles

but no profiles were loaded:
$ lxc exec d-testapparmor aa-status

Note the weird parser error when trying to load an individual profile:
$ lxc exec d-testapparmor -- apparmor_parser -r /etc/apparmor.d/sbin.dhclient 
AppArmor parser error for /etc/apparmor.d/sbin.dhclient in /etc/apparmor.d/tunables/home at line 25: Could not process include directory '/etc/apparmor.d/tunables/home.d' in 'tunables/home.d'
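
To see what the parser trips over there, the include line it complains about
and the directory it points to can be inspected directly (paths taken from
the error message above; these are purely diagnostic commands):
$ lxc exec d-testapparmor -- sed -n '25p' /etc/apparmor.d/tunables/home
$ lxc exec d-testapparmor -- ls -ld /etc/apparmor.d/tunables/home.d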

Stopping and starting the container doesn't help:
$ lxc stop d-testapparmor
$ lxc start d-testapparmor
$ lxc exec d-testapparmor aa-status
apparmor module is loaded.
0 profiles are loaded.
0 profiles are in enforce mode.
0 profiles are in complain mode.
0 processes have profiles defined.
0 processes are in enforce mode.
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.


Note, under 5.0.0-8.9 and with the SFS_MOUNTPOINT fix, the tunables error goes 
away:
$ lxc exec d-testapparmor -- apparmor_parser -r /etc/apparmor.d/sbin.dhclient
$

and the profiles load on container start:
$ lxc exec d-testapparmor aa-status
apparmor module is loaded.
27 profiles are loaded.
27 profiles are in enforce mode.
   /sbin/dhclient
   /snap/core/6673/usr/lib/snapd/snap-confine
   /snap/core/6673/usr/lib/snapd/snap-confine//mount-namespace-capture-helper
   /usr/bin/man
   /usr/lib/NetworkManager/nm-dhcp-client.action
   /usr/lib/NetworkManager/nm-dhcp-helper
   /usr/lib/connman/scripts/dhclient-script
   /usr/lib/snapd/snap-confine
   /usr/lib/snapd/snap-confine//mount-namespace-capture-helper
   /usr/sbin/tcpdump
   man_filter
   man_groff
   nvidia_modprobe
   nvidia_modprobe//kmod
   snap-update-ns.core
   snap-update-ns.lxd
   snap.core.hook.configure
   snap.lxd.activate
   snap.lxd.benchmark
   snap.lxd.buginfo
   snap.lxd.check-kernel
   snap.lxd.daemon
   snap.lxd.hook.configure
   snap.lxd.hook.install
   snap.lxd.lxc
   snap.lxd.lxd
   snap.lxd.migrate
0 profiles are in complain mode.
0 processes have profiles defined.
0 processes are in enforce mode.
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.

However, 5.0.0-11.12 has fixes for lxd and apparmor, and 11.12 also starts
using shiftfs. Very interestingly, if I create a container under 5.0.0-8.9,
apply the SFS_MOUNTPOINT fix and start it under 5.0.0-11.12, then policy
loads and everything seems fine; there are no shiftfs mounts for that
container:

$ lxc exec d-testapparmor -- grep shiftfs /proc/self/mountinfo
$

*but* if I create the container under 11.12, I see the problems and there are 
shiftfs mounts:
$ lxc exec shiftfs-testapparmor -- grep shiftfs /proc/self/mountinfo
1042 443 0:78 / / rw,relatime - shiftfs /var/snap/lxd/common/lxd/storage-pools/default/containers/shiftfs-testapparmor/rootfs rw,passthrough=3
1067 1043 0:57 /shiftfs-testapparmor /dev/.lxd-mounts rw,relatime master:216 - tmpfs tmpfs rw,size=100k,mode=711
1514 1042 0:78 /snap /snap rw,relatime shared:626 - shiftfs /var/snap/lxd/common/lxd/storage-pools/default/containers/shiftfs-testapparmor/rootfs rw,passthrough=3

** Also affects: linux (Ubuntu)
   Importance: Undecided
       Status: New

** Changed in: linux (Ubuntu)
       Status: New => Confirmed

** Changed in: linux (Ubuntu)
     Assignee: (unassigned) => John Johansen (jjohansen)

https://bugs.launchpad.net/bugs/1824812

Title:
  apparmor does not start in Disco LXD containers

Status in AppArmor:
  Triaged
Status in apparmor package in Ubuntu:
  Triaged
Status in libvirt package in Ubuntu:
  Invalid
Status in linux package in Ubuntu:
  Confirmed

Bug description:
  In LXD containers, apparmor now skips starting.

  Steps to reproduce:
  1. start LXD container
    $ lxc launch ubuntu-daily:d d-testapparmor
    (disco to trigger the issue, cosmic as reference)
  2. check the default profiles loaded
    $ aa-status

  => In cosmic, and until recently in disco, this will list plenty of
profiles active even in the default install.
  Cosmic:
    25 profiles are loaded.
    25 profiles are in enforce mode.
  Disco:
    15 profiles are loaded.
    15 profiles are in enforce mode.

  All of the 15 remaining profiles are from snaps.
  The apparmor service actually states that it refuses to start:

  $ systemctl status apparmor
  ...
  Apr 15 13:56:12 testkvm-disco-to apparmor.systemd[101]: Not starting AppArmor in container
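
  That message comes from the container check in the apparmor init code;
  paraphrased (not the verbatim script), the logic in
  /lib/apparmor/apparmor.systemd is roughly:

    if systemd-detect-virt --quiet --container && \
       ! is_container_with_internal_policy ; then
        echo "Not starting AppArmor in container"
        exit 0
    fi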

  I can get those profiles (the default installed ones) loaded, for example:
    $ sudo apparmor_parser -r /etc/apparmor.d/sbin.dhclient
  makes it appear in aa-status:
    22 profiles are in enforce mode.
     /sbin/dhclient

  I was wondering about this because I found my guest with no (=0) profiles
  loaded. But as shown above, profiles seemed fine after "apparmor_parser -r"
  and after package installs. That solved the puzzle: on package install,
  apparmor_parser is called via the dh_apparmor snippet, and that path works
  (see the sketch below).
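
  For context, the dh_apparmor maintainer-script snippet is roughly of this
  shape (paraphrased, and "usr.sbin.foo" is only a placeholder profile name):

    if [ "$1" = "configure" ]; then
        # reload the shipped profile if the parser is present
        if [ -x /sbin/apparmor_parser ] && [ -f /etc/apparmor.d/usr.sbin.foo ]; then
            apparmor_parser -r /etc/apparmor.d/usr.sbin.foo || true
        fi
    fi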

  To fully disable all of them:
    $ lxc stop <container>
    $ lxc start <container>
    $ lxc exec d-testapparmor aa-status
  apparmor module is loaded.
  0 profiles are loaded.
  0 profiles are in enforce mode.
  0 profiles are in complain mode.
  0 processes have profiles defined.
  0 processes are in enforce mode.
  0 processes are in complain mode.
  0 processes are unconfined but have a profile defined.

  That would match the service doing an early exit, as shown in the
  systemctl status output above. The package install or manual load works,
  but none are loaded automatically by the service, e.g. on container
  restart.

  --- --- ---

  This bug started as:
  Migrations to Disco trigger "Unable to find security driver for model 
apparmor"

  This is most likely related to my KVM-in-LXD setup, but it worked fine
  for years and I'd like to sort out what broke. I have already migrated to
  Disco's qemu 3.1, which makes me doubt this is a generic issue in qemu
  3.1.

  The virt tests that run cross-release work fine starting from X/B/C, but
  all those chains now fail when migrating to Disco with:
    $ lxc exec testkvm-cosmic-from -- virsh migrate --unsafe --live kvmguest-bionic-normal qemu+ssh://10.21.151.207/system
    error: unsupported configuration: Unable to find security driver for model apparmor

  I need to analyze what changed.

