Hello Christian,

Attached is a Xenial SRU patch based on this commit:
https://github.com/libvirt/libvirt/commit/7e667664d28f90bf6916604a55ebad7e2d85305b
Here is the guest definition used in my testing:
http://paste.ubuntu.com/25288141/

Without the patch, the error is reproduced:

  root@buneary:/home/ubuntu# virsh start reproducer2
  error: Failed to start domain reproducer2
  error: internal error: process exited while connecting to monitor: mlockall: Cannot allocate memory
  2017-08-11T03:59:54.936275Z qemu-system-x86_64: locking memory failed

With the patch, the guest starts correctly:

  root@buneary:/var/lib/uvtool/libvirt/images# virsh start reproducer2
  Domain reproducer2 started

I ran the following systemtap probe:

  probe kernel.function("sys_setrlimit") {
      printf("locked: %d -> %s\n", pid(),
             sprintf("%s, %s", _rlimit_resource_str($resource),
                     _struct_rlimit_u($rlim)))
  }

Its output shows that the RLIMIT_MEMLOCK process limit is set to 9007199254740991 (VIR_DOMAIN_MEMORY_PARAM_UNLIMITED):

  root@buneary:/home/ubuntu# stap -g qemu_probe.stap -kv -g --suppress-time-limits
  Pass 1: parsed user script and 465 library scripts using 112796virt/48440res/6664shr/42136data kb, in 130usr/40sys/164real ms.
  Pass 2: analyzed script: 1 probe, 5 functions, 100 embeds, 0 globals using 164772virt/101848res/7988shr/94112data kb, in 660usr/120sys/776real ms.
  Pass 3: translated to C into "/tmp/stapVXuoMi/stap_30601_src.c" using 164772virt/102040res/8180shr/94112data kb, in 10usr/0sys/5real ms.
  Pass 4: compiled C into "stap_30601.ko" in 2160usr/540sys/3231real ms.
  Pass 5: starting run.
  locked: 31028 -> RLIMIT_MEMLOCK, [9007199254740991,9007199254740991]

** Changed in: libvirt (Ubuntu Artful)
   Status: Confirmed => Fix Released

** Changed in: libvirt (Ubuntu Artful)
   Status: Fix Released => Fix Committed

** Patch added: "fix-1708305-xenial.debdiff"
   https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1708305/+attachment/4930463/+files/fix-1708305-xenial.debdiff

--
You received this bug notification because you are a member of नेपाली भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1708305

Title:
  Realtime feature mlockall: Cannot allocate memory

Status in libvirt package in Ubuntu:
  Fix Committed
Status in libvirt source package in Xenial:
  In Progress
Status in libvirt source package in Zesty:
  In Progress
Status in libvirt source package in Artful:
  Fix Committed

Bug description:
  [Environment]

  root@buneary:~# lsb_release -a
  No LSB modules are available.
  Distributor ID: Ubuntu
  Description:    Ubuntu 16.04.2 LTS
  Release:        16.04
  Codename:       xenial

  root@buneary:~# uname -r
  4.10.0-29-generic

  Reproducible also with the 4.4 kernel.

  [Description]

  When the guest memory backing is defined using the <locked/> stanza together with hugepages, as follows:

    <memoryBacking>
      <hugepages>
        <page size='1' unit='GiB' nodeset='0'/>
        <page size='1' unit='GiB' nodeset='1'/>
      </hugepages>
      <nosharedpages/>
      <locked/>
    </memoryBacking>

  (Full guest definition: http://paste.ubuntu.com/25229162/)

  the guest fails to start with the following error:

    2017-08-02 20:25:03.714+0000: starting up libvirt version: 1.3.1, package: 1ubuntu10.12 (Christian Ehrhardt <christian.ehrha...@canonical.com> Wed, 19 Jul 2017 08:28:14 +0200), qemu version: 2.5.0 (Debian 1:2.5+dfsg-5ubuntu10.14), hostname: buneary.seg
    LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/kvm-spice -name reproducer2 -S -machine pc-i440fx-2.5,accel=kvm,usb=off -cpu host -m 124928 -realtime mlock=on -smp 32,sockets=16,cores=1,threads=2 -object memory-backend-file,id=ram-node0,prealloc=yes,mem-path=/dev/hugepages/libvirt/qemu,share=yes,size=64424509440,host-nodes=0,policy=bind -numa node,nodeid=0,cpus=0-15,memdev=ram-node0 -object memory-backend-file,id=ram-node1,prealloc=yes,mem-path=/dev/hugepages/libvirt/qemu,share=yes,size=66571993088,host-nodes=1,policy=bind -numa node,nodeid=1,cpus=16-31,memdev=ram-node1 -uuid 2460778d-979b-4024-9a13-0c3ca04b18ec -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-reproducer2/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/uvtool/libvirt/images/test-ds.qcow,format=qcow2,if=none,id=drive-virtio-disk0,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x3,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -vnc 127.0.0.1:0 -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4 -msg timestamp=on
    Domain id=14 is tainted: host-cpu
    char device redirected to /dev/pts/1 (label charserial0)
    mlockall: Cannot allocate memory
    2017-08-02T20:25:37.732772Z qemu-system-x86_64: locking memory failed
    2017-08-02 20:25:37.811+0000: shutting down

  This happens because the RLIMIT_MEMLOCK limit set via setrlimit is too low for mlockall to succeed given the large amount of guest memory. There is a libvirt upstream patch that enforces the presence of the hard_limit stanza when <locked/> is used in the memory backing settings:

  https://github.com/libvirt/libvirt/commit/c2e60ad0e5124482942164e5fec088157f5e716a

  Memory locking can only work properly if the memory locking limit for the QEMU process has been raised appropriately: the default one is extremely low, so there's no way the guest will fit in there.

  The commit https://github.com/libvirt/libvirt/commit/7e667664d28f90bf6916604a55ebad7e2d85305b is also required when using hugepages together with the <locked/> stanza.
  [Suggested Fix]

  * https://github.com/libvirt/libvirt/commit/c2e60ad0e5124482942164e5fec088157f5e716a
  * https://github.com/libvirt/libvirt/commit/7e667664d28f90bf6916604a55ebad7e2d85305b

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1708305/+subscriptions

_______________________________________________
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to     : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp