Re: [libvirt-users] Breaking a virtlockd lock?

2018-07-03 Thread Daniel P. Berrangé
On Tue, Jul 03, 2018 at 10:20:29AM -0400, Steve Gaarder wrote:
> I have several Qemu/kvm servers running VMs hosted on an NFS share, and am
> using virtlockd.  (lock_manager = "lockd" in qemu.conf)  After a power
> failure, one of the VMs will not start, claiming that it is locked. How do I
> get out of this?

Libvirt uses fcntl() for locking disk images.  In NFS v2 and v3, locking is
a side-band protocol: when an NFS client host dies while holding locks, the
server will not release them. When the host comes back online it tells the
server to flush all the locks it previously held.  The problems obviously
arise if your dead host doesn't come back online, as nothing will then
release the locks and so other hosts won't be able to lock the VM's disks.
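
If you want to see which fcntl locks virtlockd currently holds on a host,
something along these lines should work (lslocks is part of util-linux;
the exact output will of course vary):

# show POSIX (fcntl) locks held by the virtlockd process
lslocks -p $(pidof virtlockd)

# or inspect the kernel's lock table directly
grep POSIX /proc/locks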

In NFS v4 the situation is much improved, as locking is part of the main
protocol, implemented as continually renewed leases. Thus when a client host
dies, it is possible for the server to time out any locks it held without
waiting for the host to come back online.

My best recommendation would thus be to use NFS v4.  Note that there's still
a 60 second timeout by default (IIRC) before the server releases the dead
client's locks.
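
For example, to mount the share explicitly as NFSv4 on the clients, and to
check the lease time on a Linux knfsd server (the hostname and export path
here are just placeholders):

# client: force an NFSv4 mount of the images share
mount -t nfs -o vers=4.1 nfs-server:/export/images /var/lib/libvirt/images

# server (Linux knfsd): current lease time, in seconds
cat /proc/fs/nfsd/nfsv4leasetime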

Take a read of "man 5 nfs" if you want to learn more - see the section
headings

   "Using file locks with NFS"

and

   "NFS version 4 Leases"


Regards,
Daniel
-- 
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|



[libvirt-users] Breaking a virtlockd lock?

2018-07-03 Thread Steve Gaarder
I have several QEMU/KVM servers running VMs hosted on an NFS share, and am
using virtlockd (lock_manager = "lockd" in qemu.conf).  After a power
failure, one of the VMs will not start, claiming that it is locked. How do
I get out of this?
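
(For reference, the setting in question lives in /etc/libvirt/qemu.conf:

# /etc/libvirt/qemu.conf
lock_manager = "lockd"

with further lockspace options, if any, in /etc/libvirt/qemu-lockd.conf.)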


thanks,

Steve Gaarder
System Administrator, Dept of Mathematics
Cornell University, Ithaca, NY, USA
gaar...@math.cornell.edu



[libvirt-users] multiple devices in the same iommu group in L1 guest

2018-07-03 Thread Yalan Zhang
Hi,

I have a guest with vIOMMU enabled, but inside the guest several devices
end up in the same iommu group.
Could someone help check whether I missed something?
Thank you very much!

1. guest xml:
# virsh edit q
...
  <os>
    <type ...>hvm</type>
    <loader ...>/usr/share/OVMF/OVMF_CODE.secboot.fd</loader>
    <nvram>/var/lib/libvirt/qemu/nvram/q_VARS.fd</nvram>
  </os>
...
[the rest of the guest XML, including the <devices> section with the vIOMMU
device and the PCI controllers, was mangled by the mail archive]
...
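
For comparison, the vIOMMU-related elements normally look something like
this (a minimal sketch per the libvirt domain XML docs, assuming a q35
machine type and QEMU's split irqchip):

  <features>
    <ioapic driver='qemu'/>
  </features>
  <devices>
    <iommu model='intel'>
      <driver intremap='on' caching_mode='on'/>
    </iommu>
  </devices>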
2. guest has 'intel_iommu=on' in its kernel cmdline; reboot the guest
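
One way to set and then verify the parameter inside the guest (grubby is
assumed to be available, as on RHEL-style distros):

# add intel_iommu=on to every installed guest kernel
grubby --update-kernel=ALL --args="intel_iommu=on"

# after the reboot, confirm it is on the command line
cat /proc/cmdline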

3. log in guest to check:
# dmesg | grep -i DMAR
[0.000000] ACPI: DMAR 000000007d83f000 00050 (v01 BOCHS  BXPCDMAR 00000001 BXPC 00000001)
[0.000000] DMAR: IOMMU enabled
[0.155178] DMAR: Host address width 39
[0.155180] DMAR: DRHD base: 0x000000fed90000 flags: 0x1
[0.155221] DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 12008c22260286 ecap f00f5e
[0.155228] DMAR: ATSR flags: 0x1
[0.155231] DMAR-IR: IOAPIC id 0 under DRHD base 0xfed90000 IOMMU 0
[0.155232] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[0.156843] DMAR-IR: Enabled IRQ remapping in x2apic mode
[2.112369] DMAR: No RMRR found
[2.112505] DMAR: dmar0: Using Queued invalidation
[2.112669] DMAR: Setting RMRR:
[2.112671] DMAR: Prepare 0-16MiB unity mapping for LPC
[2.112820] DMAR: Setting identity map for device 0000:00:1f.0 [0x0 - 0xff]
[2.211577] DMAR: Intel(R) Virtualization Technology for Directed I/O
===> This is expected

# dmesg | grep -i iommu | grep device
[2.212267] iommu: Adding device 0000:00:00.0 to group 0
[2.212287] iommu: Adding device 0000:00:01.0 to group 1
[2.212372] iommu: Adding device 0000:00:02.0 to group 2
[2.212392] iommu: Adding device 0000:00:02.1 to group 2
[2.212411] iommu: Adding device 0000:00:02.2 to group 2
[2.212444] iommu: Adding device 0000:00:02.3 to group 2
[2.212464] iommu: Adding device 0000:00:02.4 to group 2
[2.212482] iommu: Adding device 0000:00:02.5 to group 2
[2.212520] iommu: Adding device 0000:00:1d.0 to group 3
[2.212533] iommu: Adding device 0000:00:1d.1 to group 3
[2.212541] iommu: Adding device 0000:00:1d.2 to group 3
[2.212550] iommu: Adding device 0000:00:1d.7 to group 3
[2.212567] iommu: Adding device 0000:00:1f.0 to group 4
[2.212576] iommu: Adding device 0000:00:1f.2 to group 4
[2.212585] iommu: Adding device 0000:00:1f.3 to group 4
[2.212599] iommu: Adding device 0000:01:00.0 to group 2
[2.212605] iommu: Adding device 0000:02:01.0 to group 2
[2.212621] iommu: Adding device 0000:04:00.0 to group 2
[2.212634] iommu: Adding device 0000:05:00.0 to group 2
[2.212646] iommu: Adding device 0000:06:00.0 to group 2
[2.212657] iommu: Adding device 0000:07:00.0 to group 2
===> several devices in the same iommu group
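
Inside the guest the groups can also be enumerated straight from sysfs,
e.g.:

# list every device, grouped by IOMMU group
find /sys/kernel/iommu_groups/ -type l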

# virsh nodedev-dumpxml pci_0000_07_00_0
<device>
  <name>pci_0000_07_00_0</name>
  <path>/sys/devices/pci0000:00/0000:00:02.5/0000:07:00.0</path>
  <parent>pci_0000_00_02_5</parent>
  <driver>
    <name>e1000</name>
  </driver>
  <capability type='pci'>
    <domain>0</domain>
    <bus>7</bus>
    <slot>0</slot>
    <function>0</function>
    <product id='0x100e'>82540EM Gigabit Ethernet Controller</product>
    <vendor id='0x8086'>Intel Corporation</vendor>
    <iommuGroup number='2'>
      <address domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
      <address domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
      <address domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
      <address domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
      <address domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
      <address domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
      <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      <address domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>
      <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
      <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
      <address domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
      <address domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </iommuGroup>
  </capability>
</device>

Thus, the device cannot be attached to the L2 guest:
# cat hostdev.xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
  </source>
</hostdev>
# virsh attach-device rhel hostdev.xml
error: Failed to attach device from hostdev.xml
error: internal error: unable to execute QEMU command 'device_add': vfio
error: 0000:07:00.0: group 2 is not viable
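
That message is vfio's group-viability check: every endpoint in an IOMMU
group must be bound to vfio-pci (or left unbound) before any one of them
can be assigned. The companion devices blocking the assignment can be
listed via sysfs:

# show all devices sharing the IOMMU group with 0000:07:00.0
ls /sys/bus/pci/devices/0000:07:00.0/iommu_group/devices/

The usual reason for such a large group in an L1 guest is that the
controllers the devices sit behind (legacy PCI bridges, or PCIe ports
without ACS) do not guarantee isolation, so the guest kernel has to place
everything behind them into a single group.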


---
Best Regards,
Yalan Zhang
IRC: yalzhang
Internal phone: 8389413