Re: [libvirt] increase number of libvirt threads by starting transient guest domain - is it a bug?
Hi,

I have found the time and tested Xen 4.4.1 with libvirt 1.2.6, and the problem with the threads has disappeared. But now, after every restore of a Xen VM (a Windows XP HVM guest), I see an open file handle (via lsof -p `pgrep libvirtd`) to a file in /var/lib/xen named qemu-resume.vm-id, and what really confuses me is that the entry is marked as deleted. If I look in the directory, the file really is no longer there, but libvirtd still holds a handle to that non-existent file!? I know that Xen creates such a file before it starts the VM, and I also know that Xen deletes that file afterwards, but I don't understand why it is still listed among libvirtd's open files after the VM has been started/destroyed.

all the best
max

- Original message - From: web2 usterman...@web.de - To: Jim Fehlig jfeh...@suse.com, web2 usterman...@web.de - Cc: Michal Privoznik mpriv...@redhat.com, libvirt-users redhat.com, libvirt-list redhat.com - Date: Thu, 2 Oct 2014 07:49:03 +0000 -

- Original message - From: Jim Fehlig jfeh...@suse.com - To: web2 usterman...@web.de - Cc: Michal Privoznik mpriv...@redhat.com, libvirt-users redhat.com, libvirt-list redhat.com - Date: Wed, 01 Oct 2014 21:38:05 -0600 -

web2 wrote:

Hi

- Original message - From: Michal Privoznik mpriv...@redhat.com - To: web2 usterman...@web.de, libvirt-users redhat.com, libvirt-list redhat.com - Date: Wed, 01 Oct 2014 18:12:45 +0200 -

On 01.10.2014 10:31, web2 wrote:

Hello, sorry for my later answer.
So, after I started libvirtd (no VMs running) and attached gdb, I get the following threads:

(gdb) info thread
  Id  Target Id                          Frame
  11  Thread 0x7f18fef4e700 (LWP 20695) libvirtd pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
  10  Thread 0x7f18fe74d700 (LWP 20696) libvirtd pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
  9   Thread 0x7f18fdf4c700 (LWP 20697) libvirtd pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
  8   Thread 0x7f18fd74b700 (LWP 20698) libvirtd pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
  7   Thread 0x7f18fcf4a700 (LWP 20699) libvirtd pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
  6   Thread 0x7f18fc749700 (LWP 20700) libvirtd pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
  5   Thread 0x7f18fbf48700 (LWP 20701) libvirtd pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
  4   Thread 0x7f18fb747700 (LWP 20702) libvirtd pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
  3   Thread 0x7f18faf46700 (LWP 20703) libvirtd pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
  2   Thread 0x7f18fa745700 (LWP 20704) libvirtd pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
* 1   Thread 0x7f190892f840 (LWP 20694) libvirtd 0x7f190677d7cd in poll () at ../sysdeps/unix/syscall-template.S:81

If I restore a persistent domain, I see the following in gdb:

Detaching after fork from child process 20880.
Detaching after fork from child process 20882.
[New Thread 0x7f190893d700 (LWP 20883)]
Detaching after fork from child process 20890.
Detaching after fork from child process 20906.

(gdb) info thread
  Id  Target Id                          Frame
  12  Thread 0x7f190893d700 (LWP 20883) libvirtd 0x7f1906e6684d in read () at ../sysdeps/unix/syscall-template.S:81
  11  Thread 0x7f18fef4e700 (LWP 20695) libvirtd pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
  10  Thread 0x7f18fe74d700 (LWP 20696) libvirtd pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
  9   Thread 0x7f18fdf4c700 (LWP 20697) libvirtd pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
  8   Thread 0x7f18fd74b700 (LWP 20698) libvirtd pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
  7   Thread 0x7f18fcf4a700 (LWP 20699) libvirtd pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
  6   Thread 0x7f18fc749700 (LWP 20700) libvirtd pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
  5   Thread 0x7f18fbf48700 (LWP 20701) libvirtd pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
  4   Thread 0x7f18fb747700 (LWP 20702) libvirtd pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
  3   Thread 0x7f18faf46700 (LWP 20703) libvirtd pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
  2   Thread 0x7f18fa745700 (LWP 20704) libvirtd pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
* 1   Thread 0x7f190892f840 (LWP 20694) libvirtd 0x7f190677d7cd in poll () at ../sysdeps/unix/syscall-template.S:81
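The "(deleted)" entry is ordinary Linux behavior rather than anything libvirt-specific: an unlinked file survives as long as some process still holds an open descriptor to it. A minimal shell sketch (no Xen or libvirt involved) reproduces exactly the state lsof reports for the qemu-resume file:

```shell
# Open a file, unlink it, and show that /proc still lists the descriptor:
# the fd symlink's target gains a " (deleted)" suffix, just like the
# lingering qemu-resume handle in libvirtd's fd table.
tmp=$(mktemp)
exec 9>"$tmp"        # hold an open write descriptor on the file
rm -f "$tmp"         # unlink it; the inode stays alive through fd 9
deleted=$(ls -l "/proc/$$/fd/9" | grep -c '(deleted)')
echo "deleted marker seen: $deleted"
exec 9>&-            # only closing the fd actually releases the inode
```

So a qemu-resume handle that survives the VM teardown points at a descriptor that libvirtd (or the libxl driver underneath it) opened and never closed, not at a file Xen failed to delete.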
Re: [libvirt] increase number of libvirt threads by starting transient guest domain - is it a bug?
- Original message - From: Jim Fehlig jfeh...@suse.com - To: web2 usterman...@web.de - Cc: Michal Privoznik mpriv...@redhat.com, libvirt-users redhat.com, libvirt-list redhat.com - Date: Wed, 01 Oct 2014 21:38:05 -0600 -

web2 wrote:

Hi

- Original message - From: Michal Privoznik mpriv...@redhat.com - To: web2 usterman...@web.de, libvirt-users redhat.com, libvirt-list redhat.com - Date: Wed, 01 Oct 2014 18:12:45 +0200 -

On 01.10.2014 10:31, web2 wrote:

Hello, sorry for my later answer. So, after I started libvirtd (no VMs running) and attached gdb, I saw the threads listed below; after restoring a persistent domain a new thread appeared, and after destroying the VM it was still there. [quoted gdb thread listings snipped; identical to those quoted in full earlier in the thread]
Re: [libvirt] increase number of libvirt threads by starting transient guest domain - is it a bug?
Hi Jim,

I use libvirt 1.1.3.5 on Fedora Core 20; the VMs I started are Xen VMs. I also took a look at libvirt 1.1.2 on an openSUSE 13.1 and at libvirt 1.0.2 on an openSUSE 12.3, and there I did not see threads that are created and never closed. But on the openSUSE systems the VMs are KVM/qemu. On the Fedora system I will attach gdb to the libvirtd process and hope to see what is going on.

all the best
max

- Original message - From: Jim Fehlig jfeh...@suse.com - To: usterman...@web.de - Cc: libvir-list@redhat.com, libvirt-us...@redhat.com - Date: Tue, 30 Sep 2014 18:44:17 -0600 -

usterman...@web.de wrote:

hello, if I start a transient guest domain via virsh create abcd.xml I see an additional libvirt thread and also some open files:

pstree -h `pgrep libvirtd`
libvirtd───11*[{libvirtd}]

libvirtd 3016 root 21w REG  253,0 6044 1052094 /var/log/libvirt/libxl/abcd.log
libvirtd 3016 root 22r FIFO 0,8 0t0 126124 pipe
libvirtd 3016 root 23w FIFO 0,8 0t0 126124 pipe
libvirtd 3016 root 24u REG  0,37 0 4 /proc/xen/privcmd
libvirtd 3016 root 25u unix 0x8807d2c3ad80 0t0 126125 socket
libvirtd 3016 root 26r FIFO 0,8 0t0 126127 pipe
libvirtd 3016 root 27w FIFO 0,8 0t0 126127 pipe
libvirtd 3016 root 28r FIFO 0,8 0t0 124783 pipe
libvirtd 3016 root 29w FIFO 0,8 0t0 124783 pipe
libvirtd 3016 root 30r FIFO 0,8 0t0 127140 pipe
libvirtd 3016 root 31w FIFO 0,8 0t0 127140 pipe

If I destroy this VM via virsh destroy abcd, I see that the additional thread still exists and the list of open files is the same.

I don't see this behavior with current libvirt git master:

# pstree `pgrep libvirtd`
libvirtd───10*[{libvirtd}]
# virsh create ./test.xml
Domain test created from ./test.xml
# pstree `pgrep libvirtd`
libvirtd───11*[{libvirtd}]
# virsh destroy test
Domain test destroyed
# pstree `pgrep libvirtd`
libvirtd───10*[{libvirtd}]

All file descriptors opened when the domain was created were closed as well. What version of libvirt are you using?
Regards,
Jim

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
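Jim's before/after comparison can be scripted without pstree or lsof; the counts both tools report come straight from /proc. A small sketch (the `snapshot` helper name is mine, not from the thread):

```shell
# Print "threads=N fds=M" for a pid: every thread is a directory under
# /proc/<pid>/task and every open descriptor a symlink under /proc/<pid>/fd.
snapshot() {
    printf 'threads=%s fds=%s\n' \
        "$(ls "/proc/$1/task" | wc -l)" \
        "$(ls "/proc/$1/fd" | wc -l)"
}

# Demonstrated on the current shell; on the Xen host you would run
# snapshot "$(pgrep -o libvirtd)" before "virsh create" and again after
# "virsh destroy" -- matching numbers mean nothing leaked.
snapshot "$$"
```

If the two snapshots differ after a create/destroy cycle, something (a worker thread or a pipe pair) was left behind, which is exactly the symptom reported here.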
Re: [libvirt] increase number of libvirt threads by starting transient guest domain - is it a bug?
Hello, sorry for my later answer.

So, after I started libvirtd (no VMs running) and attached gdb, I get the following threads:

[gdb thread listing snipped; it is quoted in full earlier in the thread: threads 2-11 blocked in pthread_cond_wait, thread 1 in poll()]

If I restore a persistent domain, I see the following in gdb:

Detaching after fork from child process 20880.
Detaching after fork from child process 20882.
[New Thread 0x7f190893d700 (LWP 20883)]
Detaching after fork from child process 20890.
Detaching after fork from child process 20906.

[gdb thread listing snipped; also quoted in full earlier: a new thread 12 (LWP 20883) blocked in read(), the other threads unchanged]

If I now destroy this VM, I get the following:

(gdb) info thread
  Id  Target Id                          Frame
  12  Thread 0x7f190893d700 (LWP 20883) libvirtd 0x7f1906e6684d in read () at ../sysdeps/unix/syscall-template.S:81
  11  Thread 0x7f18fef4e700 (LWP 20695) libvirtd pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
  10  Thread 0x7f18fe74d700 (LWP 20696) libvirtd pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
  9   Thread 0x7f18fdf4c700 (LWP 20697) libvirtd pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
  8   Thread 0x7f18fd74b700 (LWP 20698) libvirtd pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
  7   Thread 0x7f18fcf4a700 (LWP 20699) libvirtd pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
  6   Thread 0x7f18fc749700 (LWP 20700) libvirtd pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
  5   Thread 0x7f18fbf48700 (LWP 20701) libvirtd pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
  4   Thread 0x7f18fb747700 (LWP 20702) libvirtd pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
  3   Thread 0x7f18faf46700 (LWP 20703) libvirtd pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
  2   Thread 0x7f18fa745700 (LWP 20704) libvirtd pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
* 1   Thread 0x7f190892f840 (LWP 20694) libvirtd 0x7f190677d7cd in poll () at ../sysdeps/unix/syscall-template.S:81
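Listings like the ones above are easier to compare when boiled down to a count of threads per blocking call. A small awk sketch (the `summarize` name and the two sample lines are illustrative, shaped like the quoted output with the paths shortened):

```shell
# Count how many threads of a gdb "info threads" dump sit in each frame:
# the function name is the field two places before the "at" file:line part.
summarize() {
    awk '{ for (i = 3; i <= NF; i++) if ($i == "at") print $(i - 2) }' "$@" |
        sort | uniq -c
}

# Sample dump in the shape quoted above:
cat > /tmp/threads.txt <<'EOF'
2 Thread 0x7f18fa745700 (LWP 20704) libvirtd pthread_cond_wait@@GLIBC_2.3.2 () at pthread_cond_wait.S:185
1 Thread 0x7f190892f840 (LWP 20694) libvirtd 0x7f190677d7cd in poll () at syscall-template.S:81
EOF
summarize /tmp/threads.txt
```

For the sample it reports one thread in poll and one in pthread_cond_wait; run on a full dump it makes a leaked extra thread (the one stuck in read()) stand out immediately.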
Re: [libvirt] increase number of libvirt threads by starting transient guest domain - is it a bug?
On 01.10.2014 10:31, web2 wrote:

Hello, sorry for my later answer. So, after I started libvirtd (no VMs running) and attached gdb, I get the following threads: [quoted gdb thread listings snipped; identical to those quoted in full earlier in the thread]
Re: [libvirt] increase number of libvirt threads by starting transient guest domain - is it a bug?
Hi

- Original message - From: Michal Privoznik mpriv...@redhat.com - To: web2 usterman...@web.de, libvirt-users redhat.com, libvirt-list redhat.com - Date: Wed, 01 Oct 2014 18:12:45 +0200 -

On 01.10.2014 10:31, web2 wrote:

Hello, sorry for my later answer. [quoted gdb thread listings snipped; identical to those quoted in full earlier in the thread]
Re: [libvirt] increase number of libvirt threads by starting transient guest domain - is it a bug?
web2 wrote:

Hi

- Original message - From: Michal Privoznik mpriv...@redhat.com - To: web2 usterman...@web.de, libvirt-users redhat.com, libvirt-list redhat.com - Date: Wed, 01 Oct 2014 18:12:45 +0200 -

On 01.10.2014 10:31, web2 wrote:

Hello, sorry for my later answer. [quoted gdb thread listings snipped; identical to those quoted in full earlier in the thread]
Re: [libvirt] increase number of libvirt threads by starting transient guest domain - is it a bug?
usterman...@web.de wrote:

hello, if I start a transient guest domain via virsh create abcd.xml I see an additional libvirt thread and also some open files: [pstree and lsof output snipped; quoted in full earlier in the thread] If I destroy this VM via virsh destroy abcd, I see that the additional thread still exists and the list of open files is the same.

I don't see this behavior with current libvirt git master:

# pstree `pgrep libvirtd`
libvirtd───10*[{libvirtd}]
# virsh create ./test.xml
Domain test created from ./test.xml
# pstree `pgrep libvirtd`
libvirtd───11*[{libvirtd}]
# virsh destroy test
Domain test destroyed
# pstree `pgrep libvirtd`
libvirtd───10*[{libvirtd}]

All file descriptors opened when the domain was created were closed as well. What version of libvirt are you using?

Regards,
Jim
Re: [libvirt] increase number of libvirt threads by starting transient guest domain - is it a bug?
On 26.09.2014 09:46, usterman...@web.de wrote:
> hello, if i start a transient guest domain via "virsh create abcd.xml" i see an additional libvirtd thread and also some open files:
>
> pstree -h `pgrep libvirtd`
> libvirtd───11*[{libvirtd}]
>
> libvirtd 3016 root 21w REG  253,0          6044 1052094 /var/log/libvirt/libxl/abcd.log
> libvirtd 3016 root 22r FIFO 0,8            0t0  126124  pipe
> libvirtd 3016 root 23w FIFO 0,8            0t0  126124  pipe
> libvirtd 3016 root 24u REG  0,37           0    4       /proc/xen/privcmd
> libvirtd 3016 root 25u unix 0x8807d2c3ad80 0t0  126125  socket
> libvirtd 3016 root 26r FIFO 0,8            0t0  126127  pipe
> libvirtd 3016 root 27w FIFO 0,8            0t0  126127  pipe
> libvirtd 3016 root 28r FIFO 0,8            0t0  124783  pipe
> libvirtd 3016 root 29w FIFO 0,8            0t0  124783  pipe
> libvirtd 3016 root 30r FIFO 0,8            0t0  127140  pipe
> libvirtd 3016 root 31w FIFO 0,8            0t0  127140  pipe
>
> if i destroy this vm via "virsh destroy abcd", i see that the additional thread still exists and the list of open files is unchanged. if i start the transient guest domain again, i observe an increase in the number of libvirtd threads and also in the list of open files:
>
> [root@localhost libxl]# pstree -h `pgrep libvirtd`
> libvirtd───12*[{libvirtd}]
>
> libvirtd 3016 root 21w REG  253,0          13783 1052094 /var/log/libvirt/libxl/abcd.log
> libvirtd 3016 root 22r FIFO 0,8            0t0   126124  pipe
> libvirtd 3016 root 23w FIFO 0,8            0t0   126124  pipe
> libvirtd 3016 root 24u REG  0,37           0     4       /proc/xen/privcmd
> libvirtd 3016 root 25u unix 0x8807d2c3ad80 0t0   126125  socket
> libvirtd 3016 root 26r FIFO 0,8            0t0   126127  pipe
> libvirtd 3016 root 27w FIFO 0,8            0t0   126127  pipe
> libvirtd 3016 root 28r FIFO 0,8            0t0   124783  pipe
> libvirtd 3016 root 29w FIFO 0,8            0t0   124783  pipe
> libvirtd 3016 root 30r FIFO 0,8            0t0   127140  pipe
> libvirtd 3016 root 31w FIFO 0,8            0t0   127140  pipe
> libvirtd 3016 root 32w REG  253,0          13783 1052094 /var/log/libvirt/libxl/abcd.log
> libvirtd 3016 root 33r FIFO 0,8            0t0   129039  pipe
> libvirtd 3016 root 34w FIFO 0,8            0t0   129039  pipe
> libvirtd 3016 root 35u REG  0,37           0     4       /proc/xen/privcmd
> libvirtd 3016 root 36u unix 0x8807d398bb80 0t0   129040  socket
> libvirtd 3016 root 37r FIFO 0,8            0t0   129042  pipe
> libvirtd 3016 root 38w FIFO 0,8            0t0   129042  pipe
> libvirtd 3016 root 39r FIFO 0,8            0t0   129043  pipe
> libvirtd 3016 root 40w FIFO 0,8            0t0   129043  pipe
> libvirtd 3016 root 41r FIFO 0,8            0t0   129044  pipe
> libvirtd 3016 root 42w FIFO 0,8            0t0   129044  pipe
>
> if i destroy the domain again, define it via "virsh define abcd.xml" and then start it via "virsh start abcd", the number of libvirtd threads does not increase again and the number of open files stays the same. My question: is it normal that for transient guest domains the created libvirtd thread still exists, along with the open files, after i destroy the domain? or is it a bug?

The thread is created as a result of this commit:

commit 03b3f8940af1a3179b7d5e19b25f54290bde10e0
Author:     Jim Fehlig <jfeh...@suse.com>
AuthorDate: Fri Jan 31 23:06:35 2014 -0700
Commit:     Jim Fehlig <jfeh...@suse.com>
CommitDate: Thu Feb 6 10:17:58 2014 -0700

    libxl: handle domain shutdown events in a thread

    Handling the domain shutdown event within the event handler seems a bit
    unfair to libxl's event machinery. Domain shutdown could take
    considerable time. E.g. if the shutdown reason is reboot, the domain
    must be reaped and then started again. Spawn a shutdown handler thread
    to do this work, allowing libxl's event machinery to go about its
    business.

    Signed-off-by: Jim Fehlig <jfeh...@suse.com>

Which was then modified by 0a840e23 (pure code movement). The thing is, libvirt listens for events from hypervisors, and some of the actions taken in response may take ages to finish. However, events are processed single-threaded, so they effectively serialize. For the long-lasting actions (like shutdown; in libxl, in fact, only shutdown) a new thread is spawned so the event loop is not burdened for too long. The thread is created with the detached attribute, so once the shutdown is handled the thread should just disappear.

Michal
Re: [libvirt] increase number of libvirt threads by starting transient guest domain - is it a bug?
Hi Michal,

thank you for your answer. So if i understand that correctly, no matter whether i shutdown or destroy the domain, and no matter whether it is a transient or a persistent vm, the thread should disappear, right? In my case the threads still exist, even an hour after i destroyed the domain (and did not start any new one). For information, i use libvirt-1.1.35 on Fedora Core 20.

all the best
max

Sent: Friday, 26 September 2014, 11:21
From: Michal Privoznik mpriv...@redhat.com
To: usterman...@web.de, libvir-list@redhat.com, libvirt-us...@redhat.com
Subject: Re: [libvirt] increase number of libvirt threads by starting transient guest domain - is it a bug?

On 26.09.2014 09:46, usterman...@web.de wrote:
> hello, if i start a transient guest domain via "virsh create abcd.xml" i see an additional libvirtd thread and also some open files:
> [...]