Re: [libvirt] [RFC] require for suggestions on support for ivshmem device
Hi all,

On 07/21/2014 04:38 AM, Wang Rui wrote:
> On 2014/7/17 17:37, Martin Kletzander wrote:
>> On Tue, May 20, 2014 at 11:17:32AM +0200, Martin Kletzander wrote:
>>> On Wed, May 14, 2014 at 08:23:21AM +0000, Wangrui (K) wrote:
>>>> Hi,
>>>> Libvirt does not currently support the ivshmem (Inter-VM Shared Memory)
>>>> device, so I would like to know whether there is any plan to support it
>>>> in the future. If not, I would like to contribute a series of patches
>>>> to do so.
>>>
>>> I came back to this mail right now because I need to have this
>>> implemented. Is there any progress on your side with this, or should I
>>> try hitting this?

I am working right now on supporting ivshmem in libvirt. Please see my
github: https://github.com/6WIND/libvirt/commits/rfc_ivshmem_support

I will try to send RFC patches to the mailing list in the next few days.

Maxime

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
Re: [libvirt] [RFC] require for suggestions on support for ivshmem device
On 2014/7/17 17:37, Martin Kletzander wrote:
> On Tue, May 20, 2014 at 11:17:32AM +0200, Martin Kletzander wrote:
>> On Wed, May 14, 2014 at 08:23:21AM +0000, Wangrui (K) wrote:
>>> Hi,
>>> Libvirt does not currently support the ivshmem (Inter-VM Shared Memory)
>>> device, so I would like to know whether there is any plan to support it
>>> in the future. If not, I would like to contribute a series of patches
>>> to do so.
>>
>> I came back to this mail right now because I need to have this
>> implemented. Is there any progress on your side with this, or should I
>> try hitting this?

There's some experimental progress, but not good enough to send patches yet.
Sure, you can have a try. I will keep an eye on your patches.

You mentioned shm_unlink below. IIUC, QEMU doesn't have code to clean up
the shm; libvirt should do the cleanup job.

[...]

>>> There are two ways to use ivshmem with qemu (please refer to
>>> http://qemu.weilnetz.de/qemu-doc.html#pcsys_005fother_005fdevs ):
>>>
>>> 1. Guests map a POSIX shared memory region into the guest as a PCI
>>> device that enables zero-copy communication to the application level
>>> of the guests. The basic syntax is:
>>>
>>>   qemu-system-i386 -device ivshmem,size=<size in format accepted by -m>[,shm=<shm name>]
>>>
>>> 2. If desired, interrupts can be sent between guest VMs accessing the
>>> same shared memory region. Interrupt support requires using a shared
>>> memory server and using a chardev socket to connect to it. An example
>>> syntax when using the shared memory server is:
>>>
>>>   qemu-system-i386 -device ivshmem,size=<size in format accepted by -m>[,chardev=<id>][,msi=on][,ioeventfd=on][,vectors=n][,role=peer|master]
>>>   qemu-system-i386 -chardev socket,path=<path>,id=<id>
>>>
>>> The respective XML configurations for the above two qemu command lines
>>> are shown below:
>>>
>>> Example 1: automatically attach device with KVM
>>>
>>>   <devices>
>>>     <ivshmem role='master'>
>>>       <memory name='dom-ivshmem' size='2'/>
>>>     </ivshmem>
>>>   </devices>
>>>
>>> NOTE: size means ivshmem size in unit MB, name means shm name. role is
>>> optional; it may be set to master or peer, and the default is master.
>>
>> What do these roles mean, I mean what's the difference between master
>> and peer, and why is it only used with the chardev? Does it mean master
>> can only send interrupts or...? Just curious.
>
> @Cam (Cc'd) I was wondering about the role= options, so I looked into
> the code. It looks like role=peer just effectively disables migration.
> Did I miss any other difference?
>
> From libvirt's POV I'd have a few more questions if I may. How does
> migration work (if there's role=master) WRT other guests using the same
> shm? I found no shm_unlink call in the QEMU sources (but again, I'm not
> experienced in QEMU's internals); does that mean that cleanup should be
> done by libvirt?
>
> Thank you for any info provided.
>
> Martin
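[Editor's note] The POSIX shared-memory object behind qemu's shm= option, and the shm_unlink cleanup question raised above, can be sketched from the host side. This is a minimal illustration (not libvirt or QEMU code) using Python's multiprocessing.shared_memory, which wraps shm_open/shm_unlink; the name "dom-ivshmem" is taken from the XML example above:

```python
from multiprocessing import shared_memory

# Create the named POSIX shm object (appears as /dev/shm/dom-ivshmem on
# Linux); this is the region a guest's ivshmem PCI BAR would map.
shm = shared_memory.SharedMemory(name="dom-ivshmem", create=True,
                                 size=2 * 1024 * 1024)  # size='2' (MB)

# Any process attaching to the same name sees the same bytes (zero-copy).
shm.buf[:5] = b"hello"
peer = shared_memory.SharedMemory(name="dom-ivshmem")
assert bytes(peer.buf[:5]) == b"hello"

# close() only detaches a mapping; unlink() is the shm_unlink step that
# removes the name -- the cleanup step the thread concludes libvirt would
# need to perform, since QEMU does not unlink the object itself.
peer.close()
shm.close()
shm.unlink()
```

Note that without the final unlink() the region would persist in /dev/shm after every process detaches, which is exactly the stale-state concern discussed in this thread.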
Re: [libvirt] [RFC] require for suggestions on support for ivshmem device
On Tue, May 20, 2014 at 11:17:32AM +0200, Martin Kletzander wrote:
> On Wed, May 14, 2014 at 08:23:21AM +0000, Wangrui (K) wrote:
>> Hi,
>> Libvirt does not currently support the ivshmem (Inter-VM Shared Memory)
>> device, so I would like to know whether there is any plan to support it
>> in the future. If not, I would like to contribute a series of patches
>> to do so.

I came back to this mail right now because I need to have this
implemented. Is there any progress on your side with this, or should I
try hitting this?

[...]

>> There are two ways to use ivshmem with qemu (please refer to
>> http://qemu.weilnetz.de/qemu-doc.html#pcsys_005fother_005fdevs ):
>>
>> 1. Guests map a POSIX shared memory region into the guest as a PCI
>> device that enables zero-copy communication to the application level
>> of the guests. The basic syntax is:
>>
>>   qemu-system-i386 -device ivshmem,size=<size in format accepted by -m>[,shm=<shm name>]
>>
>> 2. If desired, interrupts can be sent between guest VMs accessing the
>> same shared memory region. Interrupt support requires using a shared
>> memory server and using a chardev socket to connect to it. An example
>> syntax when using the shared memory server is:
>>
>>   qemu-system-i386 -device ivshmem,size=<size in format accepted by -m>[,chardev=<id>][,msi=on][,ioeventfd=on][,vectors=n][,role=peer|master]
>>   qemu-system-i386 -chardev socket,path=<path>,id=<id>
>>
>> The respective XML configurations for the above two qemu command lines
>> are shown below:
>>
>> Example 1: automatically attach device with KVM
>>
>>   <devices>
>>     <ivshmem role='master'>
>>       <memory name='dom-ivshmem' size='2'/>
>>     </ivshmem>
>>   </devices>
>>
>> NOTE: size means ivshmem size in unit MB, name means shm name. role is
>> optional; it may be set to master or peer, and the default is master.
>
> What do these roles mean, I mean what's the difference between master
> and peer, and why is it only used with the chardev? Does it mean master
> can only send interrupts or...? Just curious.

@Cam (Cc'd) I was wondering about the role= options, so I looked into
the code. It looks like role=peer just effectively disables migration.
Did I miss any other difference?

From libvirt's POV I'd have a few more questions if I may. How does
migration work (if there's role=master) WRT other guests using the same
shm? I found no shm_unlink call in the QEMU sources (but again, I'm not
experienced in QEMU's internals); does that mean that cleanup should be
done by libvirt?

Thank you for any info provided.

Martin
Re: [libvirt] [RFC] require for suggestions on support for ivshmem device
Thank you for the reply.

-----Original Message-----
From: Martin Kletzander [mailto:mklet...@redhat.com]
Sent: Tuesday, May 20, 2014 5:18 PM
To: Wangrui (K)
Cc: libvir-list@redhat.com; Zhangbo (Oscar); Yanqiangjun; Zengjunliang; Moyuxiang; jdene...@redhat.com
Subject: Re: [libvirt] [RFC] require for suggestions on support for ivshmem device

> On Wed, May 14, 2014 at 08:23:21AM +0000, Wangrui (K) wrote:
>> Hi,
>> Libvirt does not currently support the ivshmem (Inter-VM Shared Memory)
>> device, so I would like to know whether there is any plan to support it
>> in the future. If not, I would like to contribute a series of patches
>> to do so.
>>
>> On Jan 28, Wangyufei (James) asked about this question, and Daniel
>> replied with two suggestions:
>> 1. Libvirt should be capable of configuring the guest's XML for ivshmem.
>> 2. An ivshmem daemon is needed to run on the host to support it; libvirt
>> is recommended to provide such a daemon.
>> Please refer to
>> https://www.redhat.com/archives/libvir-list/2014-January/msg01335.html
>> for details.
>>
>> What I'll do later is the 1st suggestion; the 2nd one is left to be
>> accomplished by someone else.
>>
>> Here is the detailed work I'll do to support configuration of the guest
>> in libvirt:
>> virDomainDefParseXML: parse ivshmem device XML when defining dom.xml
>> virDomainDeviceInfoIterateInternal: iterate ivshmem devices
>> qemuAssignDevicePCISlots: assign ivshmem device PCI slots
>> virDomainDefFormatInternal: format ivshmem device XML (e.g. virsh edit dom)
>> virDomainDefFree: free ivshmem device def
>> qemuBuildCommandLine: build the ivshmem device command line when the VM starts
>> qemuAssignDeviceAliases: assign ivshmem device aliases when the VM starts
>> virDomainDeviceDefParse: attach and parse ivshmem device XML
>> qemuDomainAttachDeviceConfig: attach ivshmem device XML persistently
>> qemuDomainAttachDeviceLive: attach ivshmem device online
>> qemuDomainDetachDeviceConfig: detach ivshmem device XML persistently
>> qemuDomainDetachDeviceLive: detach ivshmem device online
>
> Don't forget about checking for the qemu capability and erroring out
> properly in case it's not supported. You probably know you can use
> qemuBuildChrChardevStr() for the '-chardev' part of the command line;
> various backends are supported and the code is in already.

OK. Thanks for the reminder.

> The idea looks good; it would be a nice improvement to have. About the
> daemon, you mean it would be another daemon we have in the repo like
> virtlockd, I suppose.

Yes, I think the daemon can be libvirtd or another one. The existing
ivshmem daemon was just a proof-of-concept demo by the original
developers (as Dan said). Maybe libvirt will provide the daemon in the
future.

>> There are two ways to use ivshmem with qemu (please refer to
>> http://qemu.weilnetz.de/qemu-doc.html#pcsys_005fother_005fdevs ):
>>
>> 1. Guests map a POSIX shared memory region into the guest as a PCI
>> device that enables zero-copy communication to the application level
>> of the guests. The basic syntax is:
>>
>>   qemu-system-i386 -device ivshmem,size=<size in format accepted by -m>[,shm=<shm name>]
>>
>> 2. If desired, interrupts can be sent between guest VMs accessing the
>> same shared memory region. Interrupt support requires using a shared
>> memory server and using a chardev socket to connect to it. An example
>> syntax when using the shared memory server is:
>>
>>   qemu-system-i386 -device ivshmem,size=<size in format accepted by -m>[,chardev=<id>][,msi=on][,ioeventfd=on][,vectors=n][,role=peer|master]
>>   qemu-system-i386 -chardev socket,path=<path>,id=<id>
>>
>> The respective XML configurations for the above two qemu command lines
>> are shown below:
>>
>> Example 1: automatically attach device with KVM
>>
>>   <devices>
>>     <ivshmem role='master'>
>>       <memory name='dom-ivshmem' size='2'/>
>>     </ivshmem>
>>   </devices>
>>
>> NOTE: size means ivshmem size in unit MB, name means shm name. role is
>> optional; it may be set to master or peer, and the default is master.
>
> What do these roles mean, I mean what's the difference between master
> and peer, and why is it only used with the chardev? Does it mean master
> can only send interrupts or...? Just curious.

The role is not always used with the chardev (see Example 1 and 2).
IIUC, master and peer only act differently in migration: master will
migrate the shared memory to the destination but peer will not. The
function of sending interrupts you mentioned is provided by the daemon.

With role=master, the guest will copy the shared memory on migration to
the destination host. With role=peer, the guest will not be able to
migrate with the device attached; in the peer case, the device should be
detached and then reattached after migration using the PCI hotplug
support (please refer to
http://qemu.weilnetz.de/qemu-doc.html#pcsys_005fother_005fdevs).

Example2:
Re: [libvirt] [RFC] require for suggestions on support for ivshmem device
On Wed, May 14, 2014 at 08:23:21AM +0000, Wangrui (K) wrote:
> Hi,
> Libvirt does not currently support the ivshmem (Inter-VM Shared Memory)
> device, so I would like to know whether there is any plan to support it
> in the future. If not, I would like to contribute a series of patches
> to do so.
>
> On Jan 28, Wangyufei (James) asked about this question, and Daniel
> replied with two suggestions:
> 1. Libvirt should be capable of configuring the guest's XML for ivshmem.
> 2. An ivshmem daemon is needed to run on the host to support it; libvirt
> is recommended to provide such a daemon.
> Please refer to
> https://www.redhat.com/archives/libvir-list/2014-January/msg01335.html
> for details.
>
> What I'll do later is the 1st suggestion; the 2nd one is left to be
> accomplished by someone else.
>
> Here is the detailed work I'll do to support configuration of the guest
> in libvirt:
> virDomainDefParseXML: parse ivshmem device XML when defining dom.xml
> virDomainDeviceInfoIterateInternal: iterate ivshmem devices
> qemuAssignDevicePCISlots: assign ivshmem device PCI slots
> virDomainDefFormatInternal: format ivshmem device XML (e.g. virsh edit dom)
> virDomainDefFree: free ivshmem device def
> qemuBuildCommandLine: build the ivshmem device command line when the VM starts
> qemuAssignDeviceAliases: assign ivshmem device aliases when the VM starts
> virDomainDeviceDefParse: attach and parse ivshmem device XML
> qemuDomainAttachDeviceConfig: attach ivshmem device XML persistently
> qemuDomainAttachDeviceLive: attach ivshmem device online
> qemuDomainDetachDeviceConfig: detach ivshmem device XML persistently
> qemuDomainDetachDeviceLive: detach ivshmem device online

Don't forget about checking for the qemu capability and erroring out
properly in case it's not supported. You probably know you can use
qemuBuildChrChardevStr() for the '-chardev' part of the command line;
various backends are supported and the code is in already.

The idea looks good; it would be a nice improvement to have. About the
daemon, you mean it would be another daemon we have in the repo like
virtlockd, I suppose.

> There are two ways to use ivshmem with qemu (please refer to
> http://qemu.weilnetz.de/qemu-doc.html#pcsys_005fother_005fdevs ):
>
> 1. Guests map a POSIX shared memory region into the guest as a PCI
> device that enables zero-copy communication to the application level
> of the guests. The basic syntax is:
>
>   qemu-system-i386 -device ivshmem,size=<size in format accepted by -m>[,shm=<shm name>]
>
> 2. If desired, interrupts can be sent between guest VMs accessing the
> same shared memory region. Interrupt support requires using a shared
> memory server and using a chardev socket to connect to it. An example
> syntax when using the shared memory server is:
>
>   qemu-system-i386 -device ivshmem,size=<size in format accepted by -m>[,chardev=<id>][,msi=on][,ioeventfd=on][,vectors=n][,role=peer|master]
>   qemu-system-i386 -chardev socket,path=<path>,id=<id>
>
> The respective XML configurations for the above two qemu command lines
> are shown below:
>
> Example 1: automatically attach device with KVM
>
>   <devices>
>     <ivshmem role='master'>
>       <memory name='dom-ivshmem' size='2'/>
>     </ivshmem>
>   </devices>
>
> NOTE: size means ivshmem size in unit MB, name means shm name. role is
> optional; it may be set to master or peer, and the default is master.

What do these roles mean, I mean what's the difference between master
and peer, and why is it only used with the chardev? Does it mean master
can only send interrupts or...? Just curious.

> Example 2: manually attach device with static PCI slot 4 requested
>
>   <devices>
>     <ivshmem role='master'>
>       <memory name='dom-ivshmem' size='2'/>
>       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
>     </ivshmem>
>   </devices>
>
> Example 3: automatically attach device with KVM
>
>   <devices>
>     <ivshmem role='master' type='unix'>
>       <source mode='connect' path='/tmp/ivshmem'/>
>       <memory name='dom-ivshmem' size='2'/>
>     </ivshmem>
>   </devices>
>
> NOTE: path means the shared memory socket path which is set by the
> daemon. source mode and type are similar to vmchannel.
>
> I'm looking forward to your suggestions, thank you very much.
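[Editor's note] The contract between the shared memory server and QEMU's chardev socket (the path= in Example 3) is, at its core, passing an open file descriptor for the shm region over a UNIX socket. Below is a minimal sketch of just that mechanism, making no claim about the real ivshmem server's protocol, using Python's socket.send_fds/recv_fds (Python 3.9+, SCM_RIGHTS underneath); a socketpair and a temp file stand in for the chardev socket and the shm region:

```python
import os
import socket
import tempfile

# A socketpair stands in for the UNIX chardev socket between the shared
# memory server and QEMU.
srv, qemu = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

# The "server" side opens a file standing in for the shm region...
fd, path = tempfile.mkstemp()
os.write(fd, b"shared region")
os.lseek(fd, 0, os.SEEK_SET)

# ...and hands the descriptor itself (not the path) to the peer.
socket.send_fds(srv, [b"x"], [fd])

# The peer receives a new fd referring to the same open file description,
# so both sides now share the same underlying region.
msg, fds, flags, addr = socket.recv_fds(qemu, 1024, 1)
assert os.read(fds[0], 13) == b"shared region"

os.close(fd)
os.close(fds[0])
os.unlink(path)
srv.close()
qemu.close()
```

Passing the fd rather than a path is what lets the server also distribute eventfds for the interrupt support mentioned above, since eventfds have no filesystem name at all.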
Re: [libvirt] [RFC] require for suggestions on support for ivshmem device
Thank you for bringing this up. I'm not experienced with the inner
workings of libvirt, but I'm happy to help in any way I can in terms of
clarifying ivshmem's behaviour.

Cheers,
Cam

On Wed, May 14, 2014 at 2:23 AM, Wangrui (K) moon.wang...@huawei.com wrote:
> Hi,
> Libvirt does not currently support the ivshmem (Inter-VM Shared Memory)
> device, so I would like to know whether there is any plan to support it
> in the future. If not, I would like to contribute a series of patches
> to do so.
>
> On Jan 28, Wangyufei (James) asked about this question, and Daniel
> replied with two suggestions:
> 1. Libvirt should be capable of configuring the guest's XML for ivshmem.
> 2. An ivshmem daemon is needed to run on the host to support it; libvirt
> is recommended to provide such a daemon.
> Please refer to
> https://www.redhat.com/archives/libvir-list/2014-January/msg01335.html
> for details.
>
> What I'll do later is the 1st suggestion; the 2nd one is left to be
> accomplished by someone else.
>
> Here is the detailed work I'll do to support configuration of the guest
> in libvirt:
> virDomainDefParseXML: parse ivshmem device XML when defining dom.xml
> virDomainDeviceInfoIterateInternal: iterate ivshmem devices
> qemuAssignDevicePCISlots: assign ivshmem device PCI slots
> virDomainDefFormatInternal: format ivshmem device XML (e.g. virsh edit dom)
> virDomainDefFree: free ivshmem device def
> qemuBuildCommandLine: build the ivshmem device command line when the VM starts
> qemuAssignDeviceAliases: assign ivshmem device aliases when the VM starts
> virDomainDeviceDefParse: attach and parse ivshmem device XML
> qemuDomainAttachDeviceConfig: attach ivshmem device XML persistently
> qemuDomainAttachDeviceLive: attach ivshmem device online
> qemuDomainDetachDeviceConfig: detach ivshmem device XML persistently
> qemuDomainDetachDeviceLive: detach ivshmem device online
>
> There are two ways to use ivshmem with qemu (please refer to
> http://qemu.weilnetz.de/qemu-doc.html#pcsys_005fother_005fdevs ):
>
> 1. Guests map a POSIX shared memory region into the guest as a PCI
> device that enables zero-copy communication to the application level
> of the guests. The basic syntax is:
>
>   qemu-system-i386 -device ivshmem,size=<size in format accepted by -m>[,shm=<shm name>]
>
> 2. If desired, interrupts can be sent between guest VMs accessing the
> same shared memory region. Interrupt support requires using a shared
> memory server and using a chardev socket to connect to it. An example
> syntax when using the shared memory server is:
>
>   qemu-system-i386 -device ivshmem,size=<size in format accepted by -m>[,chardev=<id>][,msi=on][,ioeventfd=on][,vectors=n][,role=peer|master]
>   qemu-system-i386 -chardev socket,path=<path>,id=<id>
>
> The respective XML configurations for the above two qemu command lines
> are shown below:
>
> Example 1: automatically attach device with KVM
>
>   <devices>
>     <ivshmem role='master'>
>       <memory name='dom-ivshmem' size='2'/>
>     </ivshmem>
>   </devices>
>
> NOTE: size means ivshmem size in unit MB, name means shm name. role is
> optional; it may be set to master or peer, and the default is master.
>
> Example 2: manually attach device with static PCI slot 4 requested
>
>   <devices>
>     <ivshmem role='master'>
>       <memory name='dom-ivshmem' size='2'/>
>       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
>     </ivshmem>
>   </devices>
>
> Example 3: automatically attach device with KVM
>
>   <devices>
>     <ivshmem role='master' type='unix'>
>       <source mode='connect' path='/tmp/ivshmem'/>
>       <memory name='dom-ivshmem' size='2'/>
>     </ivshmem>
>   </devices>
>
> NOTE: path means the shared memory socket path which is set by the
> daemon. source mode and type are similar to vmchannel.
>
> I'm looking forward to your suggestions, thank you very much.
[libvirt] [RFC] require for suggestions on support for ivshmem device
Hi,

Libvirt does not currently support the ivshmem (Inter-VM Shared Memory)
device, so I would like to know whether there is any plan to support it
in the future. If not, I would like to contribute a series of patches
to do so.

On Jan 28, Wangyufei (James) asked about this question, and Daniel
replied with two suggestions:
1. Libvirt should be capable of configuring the guest's XML for ivshmem.
2. An ivshmem daemon is needed to run on the host to support it; libvirt
is recommended to provide such a daemon.
Please refer to
https://www.redhat.com/archives/libvir-list/2014-January/msg01335.html
for details.

What I'll do later is the 1st suggestion; the 2nd one is left to be
accomplished by someone else.

Here is the detailed work I'll do to support configuration of the guest
in libvirt:
virDomainDefParseXML: parse ivshmem device XML when defining dom.xml
virDomainDeviceInfoIterateInternal: iterate ivshmem devices
qemuAssignDevicePCISlots: assign ivshmem device PCI slots
virDomainDefFormatInternal: format ivshmem device XML (e.g. virsh edit dom)
virDomainDefFree: free ivshmem device def
qemuBuildCommandLine: build the ivshmem device command line when the VM starts
qemuAssignDeviceAliases: assign ivshmem device aliases when the VM starts
virDomainDeviceDefParse: attach and parse ivshmem device XML
qemuDomainAttachDeviceConfig: attach ivshmem device XML persistently
qemuDomainAttachDeviceLive: attach ivshmem device online
qemuDomainDetachDeviceConfig: detach ivshmem device XML persistently
qemuDomainDetachDeviceLive: detach ivshmem device online

There are two ways to use ivshmem with qemu (please refer to
http://qemu.weilnetz.de/qemu-doc.html#pcsys_005fother_005fdevs ):

1. Guests map a POSIX shared memory region into the guest as a PCI
device that enables zero-copy communication to the application level of
the guests. The basic syntax is:

  qemu-system-i386 -device ivshmem,size=<size in format accepted by -m>[,shm=<shm name>]

2. If desired, interrupts can be sent between guest VMs accessing the
same shared memory region. Interrupt support requires using a shared
memory server and using a chardev socket to connect to it. An example
syntax when using the shared memory server is:

  qemu-system-i386 -device ivshmem,size=<size in format accepted by -m>[,chardev=<id>][,msi=on][,ioeventfd=on][,vectors=n][,role=peer|master]
  qemu-system-i386 -chardev socket,path=<path>,id=<id>

The respective XML configurations for the above two qemu command lines
are shown below:

Example 1: automatically attach device with KVM

  <devices>
    <ivshmem role='master'>
      <memory name='dom-ivshmem' size='2'/>
    </ivshmem>
  </devices>

NOTE: size means ivshmem size in unit MB, name means shm name. role is
optional; it may be set to master or peer, and the default is master.

Example 2: manually attach device with static PCI slot 4 requested

  <devices>
    <ivshmem role='master'>
      <memory name='dom-ivshmem' size='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </ivshmem>
  </devices>

Example 3: automatically attach device with KVM

  <devices>
    <ivshmem role='master' type='unix'>
      <source mode='connect' path='/tmp/ivshmem'/>
      <memory name='dom-ivshmem' size='2'/>
    </ivshmem>
  </devices>

NOTE: path means the shared memory socket path which is set by the
daemon. source mode and type are similar to vmchannel.

I'm looking forward to your suggestions, thank you very much.
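[Editor's note] As a rough illustration of the fields the proposed virDomainDefParseXML changes would have to extract, here is a sketch parsing the Example 3 XML from this proposal. It is written in Python with xml.etree.ElementTree purely to show the attribute structure, not as a stand-in for libvirt's actual C parsing code:

```python
import xml.etree.ElementTree as ET

# Example 3 from the proposal: chardev-backed ivshmem with a source socket.
XML = """
<devices>
  <ivshmem role='master' type='unix'>
    <source mode='connect' path='/tmp/ivshmem'/>
    <memory name='dom-ivshmem' size='2'/>
  </ivshmem>
</devices>
"""

dev = ET.fromstring(XML).find("ivshmem")
role = dev.get("role", "master")               # optional; defaults to master
name = dev.find("memory").get("name")          # POSIX shm name
size_mb = int(dev.find("memory").get("size"))  # size in MB
source = dev.find("source")                    # present only for the server case
path = source.get("path") if source is not None else None

assert (role, name, size_mb, path) == ("master", "dom-ivshmem", 2, "/tmp/ivshmem")
```

For Example 1, the same code would yield path = None, which is the cue to emit the plain shm= variant of the -device option rather than the chardev= one.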