[libvirt] [virtio-pci] unbind port from virtio-pci causes kernel crash

2015-08-19 Thread Clarylin L
I am running a KVM-based virtual guest. The guest has two virtio-based ports.
I am trying to unbind the ports from their current driver (by echo
:00:06.0 > /sys/bus/pci/drivers/virtio-pci/unbind) and getting a kernel
crash. I don't know how to move forward; any suggestions are highly
appreciated.
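For reference, the sysfs unbind step being described can be sketched as below. The function only formats the command rather than writing to sysfs, and the full PCI address "0000:00:06.0" is an assumed expansion of the truncated address above — substitute the real address of your device.

```shell
# Format (but do not execute) the sysfs unbind command for a virtio-pci device.
# "0000:00:06.0" is a hypothetical full PCI address; the address in the
# original message is truncated, so substitute your own.
build_unbind_cmd() {
    printf 'echo %s > /sys/bus/pci/drivers/virtio-pci/unbind\n' "$1"
}

build_unbind_cmd "0000:00:06.0"
```

On a host managed by libvirt, an alternative worth trying is to detach the device through libvirt (e.g. virsh detach-interface) instead of unbinding the driver inside the guest, so teardown is coordinated by the hypervisor.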

[  110.034006] WARNING: at kernel/irq/manage.c:937 __free_irq+0x205/0x210()
[  110.034006] Hardware name: Standard PC (i440FX + PIIX, 1996)
[  110.034006] Modules linked in: rte_kni igb_uio knpusim(P) knpushm(P) virtio_scsi virtio_blk vmw_pvscsi ata_piix mptsas mptspi mptscsih mptbase ub be2net ixgbevf enic ixgbe mdio e1000e e1000 vmxnet3 virtio_net bnx2 3c59x uhci_hcd ehci_hcd
[  110.034006] Pid: 5578, comm: sh Tainted: P W 2.6.38-staros-v3-ssi-64 #2
[  110.034006] Call Trace:
[  110.034006]  [810a1905] ? __free_irq+0x205/0x210
[  110.034006]  [810a1905] ? __free_irq+0x205/0x210
[  110.034006]  [81056710] ? warn_slowpath_common+0x90/0xc0
[  110.034006]  [8105675a] ? warn_slowpath_null+0x1a/0x20
[  110.034006]  [810a1905] ? __free_irq+0x205/0x210
[  110.034006]  [810a1957] ? free_irq+0x47/0x90
[  110.034006]  [812f96a3] ? vp_del_vqs+0x73/0xb0
[  110.034006]  [a00192f6] ? virtnet_del_vqs+0x36/0x50 [virtio_net]
[  110.034006]  [a001ab16] ? remove_vq_common+0x36/0x40 [virtio_net]
[  110.034006]  [a001c47f] ? virtnet_remove+0x3f/0xbc0 [virtio_net]
[  110.034006]  [812f8612] ? virtio_dev_remove+0x22/0x50
[  110.034006]  [8133a176] ? __device_release_driver+0x66/0xc0
[  110.034006]  [8133a8ed] ? device_release_driver+0x2d/0x40
[  110.034006]  [81339b8d] ? bus_remove_device+0x7d/0xe0
[  110.034006]  [81336a98] ? device_del+0x128/0x1a0
[  110.034006]  [81336b32] ? device_unregister+0x22/0x60
[  110.034006]  [812f8682] ? unregister_virtio_device+0x12/0x20
[  110.034006]  [8155fe2d] ? virtio_pci_remove+0x1d/0x20
[  110.034006]  [812b6ee7] ? pci_device_remove+0x37/0x70
[  110.034006]  [8133a176] ? __device_release_driver+0x66/0xc0
[  110.034006]  [8133a8ed] ? device_release_driver+0x2d/0x40
[  110.034006]  [81339c8b] ? driver_unbind+0x9b/0xc0
[  110.034006]  [81338bbc] ? drv_attr_store+0x2c/0x30
[  110.034006]  [8117f0a6] ? sysfs_write_file+0xe6/0x150
[  110.034006]  [8111fd7e] ? vfs_write+0xce/0x170
[  110.034006]  [81120485] ? sys_write+0x55/0x90
[  110.034006]  [8103fc40] ? sysenter_dispatch+0x7/0x37
[  110.034006]  [81563e49] ? trace_hardirqs_on_thunk+0x3a/0x3c
[  110.034006] ---[ end trace 4eaa2a86a8e2da24 ]---

[  110.074389] general protection fault:  [#1] PREEMPT SMP

[  110.075006] last sysfs file: /sys/bus/pci/drivers/virtio-pci/unbind
--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list

Re: [libvirt] Problem with setting up KVM guests to use HugePages

2015-06-11 Thread Clarylin L
On Thu, Jun 11, 2015 at 12:00 AM, Michal Privoznik mpriv...@redhat.com
wrote:

 [please keep the list CC'ed]

 On 10.06.2015 20:09, Clarylin L wrote:
  Hi Michal,
 
  Thanks a lot.
 
  If 100 hugepages are pre-allocated, the guest can start without the number
  of free hugepages decreasing. Since the guest requires 128 hugepages, it is
  kind of expected that the guest would not take its memory from hugepages.
 
  Before guest start,
 
  [root@local ~]# cat /proc/meminfo | grep uge
 
  AnonHugePages:         0 kB
  HugePages_Total:     100
  HugePages_Free:      100
  HugePages_Rsvd:        0
  HugePages_Surp:        0
  Hugepagesize:    1048576 kB
 
  After:
 
  [root@local ~]# cat /proc/meminfo | grep uge
 
  AnonHugePages:  134254592 kB
  HugePages_Total:     100
  HugePages_Free:      100
  HugePages_Rsvd:        0
  HugePages_Surp:        0
  Hugepagesize:    1048576 kB
 
  There are no -mem-prealloc or -mem-path options in the qemu command

 And there can't be. From the command line below, you are defining 2 NUMA
 nodes for your guest. In order to instruct qemu to back their memory with
 huge pages, you need it to support the memory-backend-file object, which
 was introduced in qemu-2.1.0.
 The other option you have is to not use guest NUMA nodes, in which case
 the global -mem-path can be used.


Michal, you were correct. My current version is qemu-1.5.3. After I updated
it to 2.1.2, I was able to use the <page> element under <hugepages> and the
problem was solved. It seems the <page> element is required -- without it,
starting the guest reported an error.
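For readers hitting the same error, a minimal sketch of the element in question, following libvirt's domain XML format (the 1 GiB page size matches the Hugepagesize output above; adjust nodeset to your guest NUMA cells):

```xml
<memoryBacking>
  <hugepages>
    <page size='1' unit='G' nodeset='0-1'/>
  </hugepages>
</memoryBacking>
```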


 
  [root@local ~]# ps -ef | grep qemu
 
  qemu  3403 1 99 17:42 ?        00:36:42 /usr/libexec/qemu-kvm
  -name qvpc-di-03-sf -S -machine pc-i440fx-rhel7.0.0,accel=kvm,usb=off
 -cpu
  host -m 131072 -realtime mlock=off -smp 32,sockets=2,cores=16,threads=1
  -numa node,nodeid=0,cpus=0-15,mem=65536 -numa
  node,nodeid=1,cpus=16-31,mem=65536 -uuid
  e1b72349-4a0b-4b91-aedc-fd34e92251e4 -smbios type=1,serial=SCALE-SLOT-03
  -no-user-config -nodefaults -chardev
 
 socket,id=charmonitor,path=/var/lib/libvirt/qemu/qvpc-di-03-sf.monitor,server,nowait
  -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc
 -no-shutdown
  -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
 -drive
 
 file=/var/lib/libvirt/images/asr5700/qvpc-di-03-sf-hda.img,if=none,id=drive-ide0-0-0,format=qcow2
  -device
  ide-hd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=2
 -drive
  if=none,id=drive-ide0-1-0,readonly=on,format=raw -device
  ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1
  -chardev pty,id=charserial0 -device
  isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1
  -device isa-serial,chardev=charserial1,id=serial1 -vnc 127.0.0.1:0 -vga
  cirrus -device i6300esb,id=watchdog0,bus=pci.0,addr=0x3 -watchdog-action
  reset -device vfio-pci,host=08:00.0,id=hostdev0,bus=pci.0,addr=0x5
 -device
  vfio-pci,host=09:00.0,id=hostdev1,bus=pci.0,addr=0x6 -device
  vfio-pci,host=0a:00.0,id=hostdev2,bus=pci.0,addr=0x7 -device
  virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4 -msg timestamp=on
 
 
  If 140 hugepages are preallocated, the guest cannot start; it complains
  about not enough memory.
 
 
  The libvirt version is shown as follows:
 
  virsh # version
 
  Compiled against library: libvirt 1.2.8
 
  Using library: libvirt 1.2.8
 
  Using API: QEMU 1.2.8
 
  Running hypervisor: QEMU 1.5.3
 
 



  Also, the guest configuration contains a numa section. The hugepages are
  uniformly distributed to the two nodes. In this case, do I need to make
  additional configurations to enable the usage of hugepages?
 
    <numatune>
      <memory mode='strict' nodeset='0-1'/>
    </numatune>
 

 This says that all the memory for your guest should be pinned onto host
 nodes 0-1. If you want to be more specific, you can explicitly wire
 guest NUMA nodes onto host NUMA nodes in a 1:N relationship (where N can
 even be 1, in which case you get 1:1), e.g.:

 <memnode cellid='0' mode='preferred' nodeset='0'/>

 Michal
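Putting the memnode suggestion together for this two-cell guest, the numatune section could look like the following sketch. The 1:1 mapping of guest cells onto host nodes 0 and 1 is an assumption for illustration:

```xml
<numatune>
  <memory mode='strict' nodeset='0-1'/>
  <memnode cellid='0' mode='strict' nodeset='0'/>
  <memnode cellid='1' mode='strict' nodeset='1'/>
</numatune>
```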


Re: [libvirt] Problem with setting up KVM guests to use HugePages

2015-06-11 Thread Clarylin L
Hi Michal,

I also tried the other option you mentioned ("The other option you have is
to not use guest NUMA nodes, in which case global -mem-path can be used.")
by removing from the xml:

  <numatune>
    <memory mode='strict' nodeset='0-1'/>
  </numatune>

while keeping the older qemu-1.5.3, which does not support the <page>
element. It still complained about not enough memory when starting the guest.

Did I miss something?

On Thu, Jun 11, 2015 at 12:00 AM, Michal Privoznik mpriv...@redhat.com
wrote:

 [quoted text identical to the message above trimmed]


Re: [libvirt] Problem with setting up KVM guests to use HugePages

2015-06-11 Thread Clarylin L
With the older qemu-1.5.3, I can start the guest by removing the following
lines from the xml config file, and hugepages are correctly used:

<numa>
  <cell id='0' cpus='0-15' memory='67108864'/>
  <cell id='1' cpus='16-31' memory='67108864'/>
</numa>

I believe these lines were used to create NUMA nodes for the guest.
Removing them means no NUMA nodes for the guest, I guess.


There is one thing I don't quite understand, though. In the xml file, I kept

  <numatune>
    <memory mode='strict' nodeset='0-1'/>
  </numatune>

Since no NUMA nodes were created, how come this section is still accepted?
And what does the nodeset refer to here?
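As a side check on the numbers in this thread: the guest is started with -m 131072 (MiB), i.e. 128 GiB, so with the 1 GiB Hugepagesize shown earlier it needs 128 hugepages, which is why 100 pre-allocated pages are not enough. A small sketch of that arithmetic:

```shell
# Number of hugepages needed for a guest, rounding up.
# Arguments: guest memory in KiB, hugepage size in KiB.
needed_hugepages() {
    guest_kib=$1
    page_kib=$2
    echo $(( (guest_kib + page_kib - 1) / page_kib ))
}

# Guest from this thread: -m 131072 MiB with 1048576 KiB (1 GiB) pages.
needed_hugepages $(( 131072 * 1024 )) 1048576
```

This prints 128, matching the "guest requires 128 hugepages" statement earlier in the thread.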



On Thu, Jun 11, 2015 at 10:27 AM, Clarylin L clear...@gmail.com wrote:

 [quoted text identical to the messages above trimmed]

