Public bug reported:

When creating a new VM definition using "virsh define", no AppArmor profile is
created, and consequently the VM cannot be started.
Furthermore, a manually created AppArmor profile could not be loaded either
(according to the error message).
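
Rough reproduction steps (the XML file name is assumed; the domain XML itself is listed further below):

  virsh define win7_wd2t.xml       # domain is defined, but no AppArmor profile is generated
  ls /etc/apparmor.d/libvirt/      # no libvirt-<uuid> file appears for the new domain
  virsh start win7_wd2t            # fails with the AppArmor error shown below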

Release: Ubuntu Trusty
uname -r: 3.13.0-34-generic

Libvirt package versions
||/ Name            Version                Architecture  Description
+++-===============-======================-=============-==============================================================
ii  libvirt-bin     1.2.2-0ubuntu13.1.2    amd64         programs for the libvirt library
ii  libvirt-doc     1.2.2-0ubuntu13.1.2    all           documentation for the libvirt library
ii  libvirt0        1.2.2-0ubuntu13.1.2    amd64         library for interfacing with different virtualization systems
ii  libvirtodbc0    6.1.6+repack-0ubuntu3  amd64         high-performance database - ODBC libraries

Qemu package versions
||/ Name                 Version                Architecture  Description
+++-====================-======================-=============-==============================================================
un  qemu                 <none>                 <none>        (no description available)
un  qemu-common          <none>                 <none>        (no description available)
ii  qemu-keymaps         2.0.0+dfsg-2ubuntu1.2  all           QEMU keyboard maps
ii  qemu-kvm             2.0.0+dfsg-2ubuntu1.2  amd64         QEMU Full virtualization on x86 hardware (transitional package)
un  qemu-kvm-spice       <none>                 <none>        (no description available)
ii  qemu-launcher        1.7.4-1ubuntu2         all           GTK+ front-end to QEMU computer emulator
un  qemu-system          <none>                 <none>        (no description available)
ii  qemu-system-common   2.0.0+dfsg-2ubuntu1.2  amd64         QEMU full system emulation binaries (common files)
un  qemu-system-i386     <none>                 <none>        (no description available)
ii  qemu-system-x86      2.0.0+dfsg-2ubuntu1.2  amd64         QEMU full system emulation binaries (x86)
un  qemu-system-x86-64   <none>                 <none>        (no description available)
un  qemu-user            <none>                 <none>        (no description available)
un  qemu-user-static     <none>                 <none>        (no description available)
ii  qemu-utils           2.0.0+dfsg-2ubuntu1.2  amd64         QEMU utilities
un  qemuctl              <none>                 <none>        (no description available)

AppArmor package versions
||/ Name                      Version               Architecture  Description
+++-=========================-=====================-=============-==============================================================
ii  apparmor                  2.8.95~2430-0ubuntu5  amd64         User-space parser utility for AppArmor
un  apparmor-docs             <none>                <none>        (no description available)
un  apparmor-easyprof         <none>                <none>        (no description available)
un  apparmor-easyprof-ubuntu  <none>                <none>        (no description available)
un  apparmor-parser           <none>                <none>        (no description available)
un  apparmor-profiles         <none>                <none>        (no description available)
un  apparmor-utils            <none>                <none>        (no description available)

xml imported via virsh define
=======================================================
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>win7_wd2t</name>
  <uuid>1c19f09f-9e6c-4c2e-a13a-8bed5628f6f0</uuid>
  <memory unit='KiB'>6291456</memory>
  <currentMemory unit='KiB'>6291456</currentMemory>
  <vcpu placement='static'>2</vcpu>
  <os>
    <type arch='x86_64' machine='q35'>hvm</type>
    <loader>/usr/share/qemu/bios.bin</loader>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
    <bootmenu enable='yes'/>
  </os>
  <features>
    <acpi/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='4096'/>
    </hyperv>
  </features>
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>Haswell</model>
    <topology sockets='1' cores='2' threads='1'/>
  </cpu>
  <clock offset='localtime'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/wd2t-lvm-data/lvwin7_kvm'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/programming_data/isos/Windows7_Ultimate.iso'/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/programming_data/isos/virtio-win-0.1-74.iso'/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='1' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/programming_data/isos/winfiles_2.iso'/>
      <target dev='hdd' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='1' target='0' unit='1'/>
    </disk>
    <controller type='ide' index='0'/>
    <controller type='sata' index='0'/>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='dmi-to-pci-bridge'/>
    <controller type='pci' index='2' model='pci-bridge'/>
    <interface type='user'>
      <mac address='00:00:00:0a:5b:2c'/>
      <model type='virtio'/>
    </interface>
    <input type='tablet' bus='usb'/>
    <sound model='ac97'/>
    <memballoon model='virtio'/>
  </devices>
  <qemu:commandline>
    <qemu:arg value='-rtc'/>
    <qemu:arg value='base=localtime'/>
    <qemu:arg value='-nodefconfig'/>
    <qemu:arg value='-spice'/>
    <qemu:arg value='port=5902,disable-ticketing'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,host=02:00.0,bus=pcie.0'/>
  </qemu:commandline>
</domain>
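
For reference, the dump below ("xml created by virsh") is what libvirt stores after the define; it can be retrieved with something like:

  virsh dumpxml win7_wd2t
  # or read directly from /etc/libvirt/qemu/win7_wd2t.xml, which carries the auto-generated warning header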


xml created by virsh
=======================================================
<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
  virsh edit win7_wd2t
or other application using the libvirt API.
-->

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>win7_wd2t</name>
  <uuid>1c19f09f-9e6c-4c2e-a13a-8bed5628f6f0</uuid>
  <memory unit='KiB'>6291456</memory>
  <currentMemory unit='KiB'>6291456</currentMemory>
  <vcpu placement='static'>2</vcpu>
  <os>
    <type arch='x86_64' machine='pc-q35-2.0'>hvm</type>
    <loader>/usr/share/qemu/bios.bin</loader>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
    <bootmenu enable='yes'/>
  </os>
  <features>
    <acpi/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='4096'/>
    </hyperv>
  </features>
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>Haswell</model>
    <topology sockets='1' cores='2' threads='1'/>
  </cpu>
  <clock offset='localtime'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/wd2t-lvm-data/lvwin7_kvm'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/programming_data/isos/Windows7_Ultimate.iso'/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/programming_data/isos/virtio-win-0.1-74.iso'/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='1' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/programming_data/isos/winfiles_2.iso'/>
      <target dev='hdd' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='1' target='0' unit='1'/>
    </disk>
    <controller type='ide' index='0'/>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='dmi-to-pci-bridge'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>
    </controller>
    <controller type='pci' index='2' model='pci-bridge'>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x01' function='0x0'/>
    </controller>
    <interface type='user'>
      <mac address='00:00:00:0a:5b:2c'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>
    </interface>
    <input type='tablet' bus='usb'/>
    <sound model='ac97'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x02' function='0x0'/>
    </sound>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x04' function='0x0'/>
    </memballoon>
  </devices>
  <qemu:commandline>
    <qemu:arg value='-rtc'/>
    <qemu:arg value='base=localtime'/>
    <qemu:arg value='-nodefconfig'/>
    <qemu:arg value='-spice'/>
    <qemu:arg value='port=5902,disable-ticketing'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,host=02:00.0,bus=pcie.0'/>
  </qemu:commandline>
</domain>


The VM is defined OK but will not start. The initial message (no AppArmor profile
available) as reported by virt-manager is shown below.
(Note the message is correct: there is no profile for this VM in
/etc/apparmor.d/libvirt.)
================================================================
as reported by virt-manager
--------------------------------------
Error starting domain: internal error: cannot load AppArmor profile 
'libvirt-1c19f09f-9e6c-4c2e-a13a-8bed5628f6f0'

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 96, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 117, in tmpcb
    callback(*args, **kwargs)
  File "/usr/share/virt-manager/virtManager/domain.py", line 1162, in startup
    self._backend.create()
  File "/usr/lib/python2.7/dist-packages/libvirt.py", line 866, in create
    if ret == -1: raise libvirtError ('virDomainCreate() failed', dom=self)
libvirtError: internal error: cannot load AppArmor profile 
'libvirt-1c19f09f-9e6c-4c2e-a13a-8bed5628f6f0'

found in libvirt log
-------------------------
2014-08-31 03:54:07.571+0000: 14549: info : libvirt version: 1.2.2
2014-08-31 03:54:07.571+0000: 14549: error : virCommandWait:2399 : internal 
error: Child process (/usr/lib/libvirt/virt-aa-helper -p 0 -c -u 
libvirt-1c19f09f-9e6c-4c2e-a13a-8bed5628f6f0) unexpected exit status 1: 
virt-aa-helper: error: /usr/share/qemu/bios.bin
virt-aa-helper: error: skipped restricted file
virt-aa-helper: error: invalid VM definition

2014-08-31 03:54:07.571+0000: 14549: error :
AppArmorGenSecurityLabel:451 : internal error: cannot load AppArmor
profile 'libvirt-1c19f09f-9e6c-4c2e-a13a-8bed5628f6f0'
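
The failing helper command from the log can be re-run by hand to reproduce the error outside libvirtd (a sketch; virt-aa-helper reads the domain XML on stdin):

  virsh dumpxml win7_wd2t > /tmp/win7_wd2t.xml
  /usr/lib/libvirt/virt-aa-helper -p 0 -c -u libvirt-1c19f09f-9e6c-4c2e-a13a-8bed5628f6f0 < /tmp/win7_wd2t.xml
  # should exit 1 with the same "skipped restricted file" / "invalid VM definition" messages as above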


Then manually created AppArmor profile files in /etc/apparmor.d/libvirt and retried
(initially copied from a working VM's files, inheriting permissions etc.).
The files created are named libvirt-1c19f09f-9e6c-4c2e-a13a-8bed5628f6f0 and
libvirt-1c19f09f-9e6c-4c2e-a13a-8bed5628f6f0.files.
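
The manually created profile can be sanity-checked outside libvirt (a sketch; apparmor_parser and aa-status come from the apparmor package):

  apparmor_parser -r /etc/apparmor.d/libvirt/libvirt-1c19f09f-9e6c-4c2e-a13a-8bed5628f6f0
  aa-status | grep 1c19f09f     # shows whether the profile is actually loaded in the kernel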
==========================================================
as reported by virt-manager
--------------------------------------
Error starting domain: internal error: cannot load AppArmor profile 
'libvirt-1c19f09f-9e6c-4c2e-a13a-8bed5628f6f0'

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 96, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 117, in tmpcb
    callback(*args, **kwargs)
  File "/usr/share/virt-manager/virtManager/domain.py", line 1162, in startup
    self._backend.create()
  File "/usr/lib/python2.7/dist-packages/libvirt.py", line 866, in create
    if ret == -1: raise libvirtError ('virDomainCreate() failed', dom=self)
libvirtError: internal error: cannot load AppArmor profile 
'libvirt-1c19f09f-9e6c-4c2e-a13a-8bed5628f6f0'


found in libvirt log
-------------------------
2014-08-31 03:58:27.710+0000: 15013: info : libvirt version: 1.2.2
2014-08-31 03:58:27.710+0000: 15013: error : virCommandWait:2399 : internal 
error: Child process (/usr/lib/libvirt/virt-aa-helper -p 0 -r -u 
libvirt-1c19f09f-9e6c-4c2e-a13a-8bed5628f6f0) unexpected exit status 1: 
virt-aa-helper: error: /usr/share/qemu/bios.bin
virt-aa-helper: error: skipped restricted file
virt-aa-helper: error: invalid VM definition

2014-08-31 03:58:27.710+0000: 15013: error :
AppArmorGenSecurityLabel:451 : internal error: cannot load AppArmor
profile 'libvirt-1c19f09f-9e6c-4c2e-a13a-8bed5628f6f0'

/etc/libvirt/qemu.conf (the QEMU driver configuration, shown below) was modified as follows (see the  # ============= RE Change ============== #  tags)
=====================================
# Master configuration file for the QEMU driver.
# All settings described here are optional - if omitted, sensible
# defaults are used.

# VNC is configured to listen on 127.0.0.1 by default.
# To make it listen on all public interfaces, uncomment
# this next option.
#
# NB, strong recommendation to enable TLS + x509 certificate
# verification when allowing public access
#
#vnc_listen = "0.0.0.0"

# Enable this option to have VNC served over an automatically created
# unix socket. This prevents unprivileged access from users on the
# host machine, though most VNC clients do not support it.
#
# This will only be enabled for VNC configurations that do not have
# a hardcoded 'listen' or 'socket' value. This setting takes preference
# over vnc_listen.
#
#vnc_auto_unix_socket = 1

# Enable use of TLS encryption on the VNC server. This requires
# a VNC client which supports the VeNCrypt protocol extension.
# Examples include vinagre, virt-viewer, virt-manager and vencrypt
# itself. UltraVNC, RealVNC, TightVNC do not support this
#
# It is necessary to setup CA and issue a server certificate
# before enabling this.
#
#vnc_tls = 1


# Use of TLS requires that x509 certificates be issued. The
# default it to keep them in /etc/pki/libvirt-vnc. This directory
# must contain
#
#  ca-cert.pem - the CA master certificate
#  server-cert.pem - the server certificate signed with ca-cert.pem
#  server-key.pem  - the server private key
#
# This option allows the certificate directory to be changed
#
#vnc_tls_x509_cert_dir = "/etc/pki/libvirt-vnc"


# The default TLS configuration only uses certificates for the server
# allowing the client to verify the server's identity and establish
# an encrypted channel.
#
# It is possible to use x509 certificates for authentication too, by
# issuing a x509 certificate to every client who needs to connect.
#
# Enabling this option will reject any client who does not have a
# certificate signed by the CA in /etc/pki/libvirt-vnc/ca-cert.pem
#
#vnc_tls_x509_verify = 1


# The default VNC password. Only 8 bytes are significant for
# VNC passwords. This parameter is only used if the per-domain
# XML config does not already provide a password. To allow
# access without passwords, leave this commented out. An empty
# string will still enable passwords, but be rejected by QEMU,
# effectively preventing any use of VNC. Obviously change this
# example here before you set this.
#
#vnc_password = "XYZ12345"


# Enable use of SASL encryption on the VNC server. This requires
# a VNC client which supports the SASL protocol extension.
# Examples include vinagre, virt-viewer and virt-manager
# itself. UltraVNC, RealVNC, TightVNC do not support this
#
# It is necessary to configure /etc/sasl2/qemu.conf to choose
# the desired SASL plugin (eg, GSSPI for Kerberos)
#
#vnc_sasl = 1


# The default SASL configuration file is located in /etc/sasl2/
# When running libvirtd unprivileged, it may be desirable to
# override the configs in this location. Set this parameter to
# point to the directory, and create a qemu.conf in that location
#
#vnc_sasl_dir = "/some/directory/sasl2"


# QEMU implements an extension for providing audio over a VNC connection,
# though if your VNC client does not support it, your only chance for getting
# sound output is through regular audio backends. By default, libvirt will
# disable all QEMU sound backends if using VNC, since they can cause
# permissions issues. Enabling this option will make libvirtd honor the
# QEMU_AUDIO_DRV environment variable when using VNC.
#
#vnc_allow_host_audio = 0


# SPICE is configured to listen on 127.0.0.1 by default.
# To make it listen on all public interfaces, uncomment
# this next option.
#
# NB, strong recommendation to enable TLS + x509 certificate
# verification when allowing public access
#
#spice_listen = "0.0.0.0"


# Enable use of TLS encryption on the SPICE server.
#
# It is necessary to setup CA and issue a server certificate
# before enabling this.
#
#spice_tls = 1


# Use of TLS requires that x509 certificates be issued. The
# default it to keep them in /etc/pki/libvirt-spice. This directory
# must contain
#
#  ca-cert.pem - the CA master certificate
#  server-cert.pem - the server certificate signed with ca-cert.pem
#  server-key.pem  - the server private key
#
# This option allows the certificate directory to be changed.
#
#spice_tls_x509_cert_dir = "/etc/pki/libvirt-spice"


# The default SPICE password. This parameter is only used if the
# per-domain XML config does not already provide a password. To
# allow access without passwords, leave this commented out. An
# empty string will still enable passwords, but be rejected by
# QEMU, effectively preventing any use of SPICE. Obviously change
# this example here before you set this.
#
#spice_password = "XYZ12345"


# Enable use of SASL encryption on the SPICE server. This requires
# a SPICE client which supports the SASL protocol extension.
#
# It is necessary to configure /etc/sasl2/qemu.conf to choose
# the desired SASL plugin (eg, GSSPI for Kerberos)
#
#spice_sasl = 1

# The default SASL configuration file is located in /etc/sasl2/
# When running libvirtd unprivileged, it may be desirable to
# override the configs in this location. Set this parameter to
# point to the directory, and create a qemu.conf in that location
#
#spice_sasl_dir = "/some/directory/sasl2"


# By default, if no graphical front end is configured, libvirt will disable
# QEMU audio output since directly talking to alsa/pulseaudio may not work
# with various security settings. If you know what you're doing, enable
# the setting below and libvirt will passthrough the QEMU_AUDIO_DRV
# environment variable when using nographics.
#
#nographics_allow_host_audio = 1


# Override the port for creating both VNC and SPICE sessions (min).
# This defaults to 5900 and increases for consecutive sessions
# or when ports are occupied, until it hits the maximum.
#
# Minimum must be greater than or equal to 5900 as lower number would
# result into negative vnc display number.
#
# Maximum must be less than 65536, because higher numbers do not make
# sense as a port number.
#
#remote_display_port_min = 5900
#remote_display_port_max = 65535

# VNC WebSocket port policies, same rules apply as with remote display
# ports.  VNC WebSockets use similar display <-> port mappings, with
# the exception being that ports start from 5700 instead of 5900.
#
#remote_websocket_port_min = 5700
#remote_websocket_port_max = 65535

# The default security driver is SELinux. If SELinux is disabled
# on the host, then the security driver will automatically disable
# itself. If you wish to disable QEMU SELinux security driver while
# leaving SELinux enabled for the host in general, then set this
# to 'none' instead. It's also possible to use more than one security
# driver at the same time, for this use a list of names separated by
# comma and delimited by square brackets. For example:
#
#       security_driver = [ "selinux", "apparmor" ]
#
# Notes: The DAC security driver is always enabled; as a result, the
# value of security_driver cannot contain "dac".  The value "none" is
# a special value; security_driver can be set to that value in
# isolation, but it cannot appear in a list of drivers.
#
#security_driver = "selinux"
# ============= RE Change ============== #
# Didn't help
security_driver = "apparmor"
#security_driver = "none"

# If set to non-zero, then the default security labeling
# will make guests confined. If set to zero, then guests
# will be unconfined by default. Defaults to 1.
#security_default_confined = 1

# If set to non-zero, then attempts to create unconfined
# guests will be blocked. Defaults to 0.
#security_require_confined = 1
# ============= RE Change ============== #
security_require_confined = 0

# The user for QEMU processes run by the system instance. It can be
# specified as a user name or as a user id. The qemu driver will try to
# parse this value first as a name and then, if the name doesn't exist,
# as a user id.
#
# Since a sequence of digits is a valid user name, a leading plus sign
# can be used to ensure that a user id will not be interpreted as a user
# name.
#
# Some examples of valid values are:
#
#       user = "qemu"   # A user named "qemu"
#       user = "+0"     # Super user (uid=0)
#       user = "100"    # A user named "100" or a user with uid=100
#
#user = "root"
# ============= RE Change ============== #
user = "root"

# The group for QEMU processes run by the system instance. It can be
# specified in a similar way to user.
#group = "root"
# ============= RE Change ============== #
group = "root"

# Whether libvirt should dynamically change file ownership
# to match the configured user/group above. Defaults to 1.
# Set to 0 to disable file ownership changes.
#dynamic_ownership = 1


# What cgroup controllers to make use of with QEMU guests
#
#  - 'cpu' - use for schedular tunables
#  - 'devices' - use for device whitelisting
#  - 'memory' - use for memory tunables
#  - 'blkio' - use for block devices I/O tunables
#  - 'cpuset' - use for CPUs and memory nodes
#  - 'cpuacct' - use for CPUs statistics.
#
# NB, even if configured here, they won't be used unless
# the administrator has mounted cgroups, e.g.:
#
#  mkdir /dev/cgroup
#  mount -t cgroup -o devices,cpu,memory,blkio,cpuset none /dev/cgroup
#
# They can be mounted anywhere, and different controllers
# can be mounted in different locations. libvirt will detect
# where they are located.
#
#cgroup_controllers = [ "cpu", "devices", "memory", "blkio", "cpuset", "cpuacct" ]

# This is the basic set of devices allowed / required by
# all virtual machines.
#
# As well as this, any configured block backed disks,
# all sound device, and all PTY devices are allowed.
#
# This will only need setting if newer QEMU suddenly
# wants some device we don't already know about.
#
#cgroup_device_acl = [
#    "/dev/null", "/dev/full", "/dev/zero",
#    "/dev/random", "/dev/urandom",
#    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
#    "/dev/rtc","/dev/hpet", "/dev/vfio/vfio"
#]
#
# ============= RE Change ============== #
cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc","/dev/hpet", "/dev/vfio/vfio",
    "/dev/vfio/1", "/dev/vfio/15", "/dev/vfio/16", "/dev/vfio/17"
]


# The default format for Qemu/KVM guest save images is raw; that is, the
# memory from the domain is dumped out directly to a file.  If you have
# guests with a large amount of memory, however, this can take up quite
# a bit of space.  If you would like to compress the images while they
# are being saved to disk, you can also set "lzop", "gzip", "bzip2", or "xz"
# for save_image_format.  Note that this means you slow down the process of
# saving a domain in order to save disk space; the list above is in descending
# order by performance and ascending order by compression ratio.
#
# save_image_format is used when you use 'virsh save' or 'virsh managedsave'
# at scheduled saving, and it is an error if the specified save_image_format
# is not valid, or the requested compression program can't be found.
#
# dump_image_format is used when you use 'virsh dump' at emergency
# crashdump, and if the specified dump_image_format is not valid, or
# the requested compression program can't be found, this falls
# back to "raw" compression.
#
# snapshot_image_format specifies the compression algorithm of the memory save
# image when an external snapshot of a domain is taken. This does not apply
# on disk image format. It is an error if the specified format isn't valid,
# or the requested compression program can't be found.
#
#save_image_format = "raw"
#dump_image_format = "raw"
#snapshot_image_format = "raw"

# When a domain is configured to be auto-dumped when libvirtd receives a
# watchdog event from qemu guest, libvirtd will save dump files in directory
# specified by auto_dump_path. Default value is /var/lib/libvirt/qemu/dump
#
#auto_dump_path = "/var/lib/libvirt/qemu/dump"

# When a domain is configured to be auto-dumped, enabling this flag
# has the same effect as using the VIR_DUMP_BYPASS_CACHE flag with the
# virDomainCoreDump API.  That is, the system will avoid using the
# file system cache while writing the dump file, but may cause
# slower operation.
#
#auto_dump_bypass_cache = 0

# When a domain is configured to be auto-started, enabling this flag
# has the same effect as using the VIR_DOMAIN_START_BYPASS_CACHE flag
# with the virDomainCreateWithFlags API.  That is, the system will
# avoid using the file system cache when restoring any managed state
# file, but may cause slower operation.
#
#auto_start_bypass_cache = 0

# If provided by the host and a hugetlbfs mount point is configured,
# a guest may request huge page backing.  When this mount point is
# unspecified here, determination of a host mount point in /proc/mounts
# will be attempted.  Specifying an explicit mount overrides detection
# of the same in /proc/mounts.  Setting the mount point to "" will
# disable guest hugepage backing.
#
# NB, within this mount point, guests will create memory backing files
# in a location of $MOUNTPOINT/libvirt/qemu
#
#hugetlbfs_mount = "/dev/hugepages"
# ============= RE Change ============== #
hugetlbfs_mount = "/dev/hugepages"


# Path to the setuid helper for creating tap devices.  This executable
# is used to create <source type='bridge'> interfaces when libvirtd is
# running unprivileged.  libvirt invokes the helper directly, instead
# of using "-netdev bridge", for security reasons.
#bridge_helper = "/usr/libexec/qemu-bridge-helper"


# If clear_emulator_capabilities is enabled, libvirt will drop all
# privileged capabilities of the QEmu/KVM emulator. This is enabled by
# default.
#
# Warning: Disabling this option means that a compromised guest can
# exploit the privileges and possibly do damage to the host.
#
#clear_emulator_capabilities = 1
# ============= RE Change ============== #
clear_emulator_capabilities = 0

# If enabled, libvirt will have QEMU set its process name to
# "qemu:VM_NAME", where VM_NAME is the name of the VM. The QEMU
# process will appear as "qemu:VM_NAME" in process listings and
# other system monitoring tools. By default, QEMU does not set
# its process title, so the complete QEMU command (emulator and
# its arguments) appear in process listings.
#
#set_process_name = 1


# If max_processes is set to a positive integer, libvirt will use
# it to set the maximum number of processes that can be run by qemu
# user. This can be used to override default value set by host OS.
# The same applies to max_files which sets the limit on the maximum
# number of opened files.
#
#max_processes = 0
#max_files = 0


# mac_filter enables MAC addressed based filtering on bridge ports.
# This currently requires ebtables to be installed.
#
#mac_filter = 1


# By default, PCI devices below non-ACS switch are not allowed to be assigned
# to guests. By setting relaxed_acs_check to 1 such devices will be allowed to
# be assigned to guests.
#
#relaxed_acs_check = 1
# ============= RE Change ============== #
relaxed_acs_check = 1


# If allow_disk_format_probing is enabled, libvirt will probe disk
# images to attempt to identify their format, when not otherwise
# specified in the XML. This is disabled by default.
#
# WARNING: Enabling probing is a security hole in almost all
# deployments. It is strongly recommended that users update their
# guest XML <disk> elements to include  <driver type='XXXX'/>
# elements instead of enabling this option.
#
#allow_disk_format_probing = 1


# To enable 'Sanlock' project based locking of the file
# content (to prevent two VMs writing to the same
# disk), uncomment this
#
#lock_manager = "sanlock"


# Set limit of maximum APIs queued on one domain. All other APIs
# over this threshold will fail on acquiring job lock. Specially,
# setting to zero turns this feature off.
# Note, that job lock is per domain.
#
#max_queued = 0

###################################################################
# Keepalive protocol:
# This allows qemu driver to detect broken connections to remote
# libvirtd during peer-to-peer migration.  A keepalive message is
# sent to the daemon after keepalive_interval seconds of inactivity
# to check if the daemon is still responding; keepalive_count is a
# maximum number of keepalive messages that are allowed to be sent
# to the daemon without getting any response before the connection
# is considered broken.  In other words, the connection is
# automatically closed approximately after
# keepalive_interval * (keepalive_count + 1) seconds since the last
# message received from the daemon.  If keepalive_interval is set to
# -1, qemu driver will not send keepalive requests during
# peer-to-peer migration; however, the remote libvirtd can still
# send them and source libvirtd will send responses.  When
# keepalive_count is set to 0, connections will be automatically
# closed after keepalive_interval seconds of inactivity without
# sending any keepalive messages.
#
#keepalive_interval = 5
#keepalive_count = 5


# Use seccomp syscall whitelisting in QEMU.
# 1 = on, 0 = off, -1 = use QEMU default
# Defaults to -1.
#
#seccomp_sandbox = 1


# Override the listen address for all incoming migrations. Defaults to
# 0.0.0.0, or :: if both host and qemu are capable of IPv6.
#migration_address = "127.0.0.1"


# Override the port range used for incoming migrations.
#
# Minimum must be greater than 0, however when QEMU is not running as root,
# setting the minimum to be lower than 1024 will not work.
#
# Maximum must not be greater than 65535.
#
#migration_port_min = 49152
#migration_port_max = 49215
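
With these changes in place, which security driver libvirt actually uses can be verified with (a sketch):

  virsh capabilities | grep -A2 secmodel    # host <secmodel> should report "apparmor"
  aa-status | grep libvirt-                 # lists any loaded per-domain libvirt profiles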

** Affects: libvirt (Ubuntu)
     Importance: Undecided
         Status: New

https://bugs.launchpad.net/bugs/1363567

Title:
  Virsh Define Does Not Create AppArmor Profile
