Patches on libvirt patchew are outdated

2023-12-17 Thread Han Han
Hi,
The libvirt patchew instance at https://patchew.org/Libvirt/ has not been updated for
more than a month. Has it been dropped? If not, please sync the patches to it.


Re: [libvirt PATCH 1/5] Add loongarch cpu support

2023-12-17 Thread lixianglai

Hi Andrea:


On Thu, Dec 14, 2023 at 02:08:45PM +0800, xianglai li wrote:

From: lixianglai 

Add loongarch cpu support: define the new cpu type 'loongarch64'
and implement its driver functions.

Signed-off-by: lixianglai 

We usually prefer the full name to be used both for git authorship
and DCO purposes. I believe this would be "Xianglai Li" in your case.



Ok, I'll correct it in the next version :)



---
  po/POTFILES |   1 +
  src/conf/schemas/basictypes.rng |   1 +
  src/cpu/cpu.c   |   2 +
  src/cpu/cpu.h   |   2 +
  src/cpu/cpu_loongarch.c | 742 
  src/cpu/cpu_loongarch.h |  25 ++
  src/cpu/cpu_loongarch_data.h|  37 ++
  src/cpu/meson.build |   1 +
  src/qemu/qemu_capabilities.c|   1 +
  src/qemu/qemu_domain.c  |   4 +
  src/util/virarch.c  |   2 +
  src/util/virarch.h  |   4 +
  12 files changed, 822 insertions(+)
  create mode 100644 src/cpu/cpu_loongarch.c
  create mode 100644 src/cpu/cpu_loongarch.h
  create mode 100644 src/cpu/cpu_loongarch_data.h

This patch breaks the test suite:

   $ make -C .../libvirt/build/build-aux sc_preprocessor_indentation
   make: Entering directory '.../libvirt/build/build-aux'
   cppi: .../libvirt/src/cpu/cpu_loongarch.h: line 23: not properly indented
   cppi: .../libvirt/src/cpu/cpu_loongarch_data.h: line 23: not properly indented
   cppi: .../libvirt/src/cpu/cpu_loongarch_data.h: line 31: not properly indented
   incorrect preprocessor indentation
   make: *** [.../libvirt/build-aux/syntax-check.mk:500: sc_preprocessor_indentation] Error 1
   make: Leaving directory '.../libvirt/build/build-aux'

Should be an easy enough fix.

Please make sure that 'meson test' runs successfully after every
single patch in the series, and that you have optional test tools
such as cppi installed.
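
(For reference, this is the indentation style that cppi and the
sc_preprocessor_indentation check enforce: the '#' stays in column 1 and
the directive keyword is indented by one space per nesting level. The
macro names below are made up purely for illustration:)

    #if defined(__loongarch__)        /* level 0: no space after '#' */
    # define HOST_IS_LOONGARCH 1      /* level 1: one space */
    # if HOST_IS_LOONGARCH
    #  define NESTED_EXAMPLE 1        /* level 2: two spaces */
    # endif
    #endif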



Ok, I think the problem was not detected during 'meson test' because I did
not have cppi installed.

I will install cppi and run 'meson test' again for each patch.





+++ b/src/conf/schemas/basictypes.rng
@@ -470,6 +470,7 @@
       <value>x86_64</value>
       <value>xtensa</value>
       <value>xtensaeb</value>
+      <value>loongarch64</value>

This list is sorted alphabetically; please ensure that it remains
that way after your changes. Not all lists in libvirt are sorted
alphabetically, but generally speaking if you see one that is you
should keep it that way.



Ok, I'll correct it in the next version.





+++ b/src/cpu/cpu_loongarch.c
@@ -0,0 +1,742 @@
+static const virArch archs[] = { VIR_ARCH_LOONGARCH64 };
+
+typedef struct _virCPULoongArchVendor virCPULoongArchVendor;
+struct _virCPULoongArchVendor {
+    char *name;
+};
+
+typedef struct _virCPULoongArchModel virCPULoongArchModel;
+struct _virCPULoongArchModel {
+    char *name;
+    const virCPULoongArchVendor *vendor;
+    virCPULoongArchData data;
+};
+
+typedef struct _virCPULoongArchMap virCPULoongArchMap;
+struct _virCPULoongArchMap {
+    size_t nvendors;
+    virCPULoongArchVendor **vendors;
+    size_t nmodels;
+    virCPULoongArchModel **models;
+};

This CPU driver appears to be directly modeled after the ppc64
driver. I wonder if all the complexity is necessary at this point in
time? Wouldn't it perhaps be better to start with a very bare-bone
CPU driver, modeled after the riscv64 one, and then grow from there
as the demand for more advanced features becomes apparent?
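
(For a sense of scale, a bare-bones driver in the riscv64 style could be
little more than the table below. This is only a sketch: the exact
cpuArchDriver fields and callback signatures are assumptions based on the
other in-tree drivers, not code from this series.)

    static const virArch archs[] = { VIR_ARCH_LOONGARCH64 };

    static int
    virCPULoongArchUpdate(virCPUDef *guest G_GNUC_UNUSED,
                          const virCPUDef *host G_GNUC_UNUSED,
                          bool relative G_GNUC_UNUSED)
    {
        /* No model or feature handling yet. */
        return 0;
    }

    struct cpuArchDriver cpuDriverLoongArch = {
        .name = "loongarch64",
        .arch = archs,
        .narch = G_N_ELEMENTS(archs),
        .update = virCPULoongArchUpdate,
    };

Growing that into a ppc64-style map of vendors and models can then happen
later, once model and feature data actually matter.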



Well, I think I will try to follow the riscv64 driver for the cpu driver
implementation in the next version.




+static int
+virCPULoongArchGetHostPRID(void)
+{
+    return 0x14c010;
+}

Hardcoding the host CPU's PRID...


+static int
+virCPULoongArchGetHost(virCPUDef *cpu,
+                       virDomainCapsCPUModels *models)
+{
+    virCPUData *cpuData = NULL;
+    virCPULoongArchData *data;
+    int ret = -1;
+
+    if (!(cpuData = virCPUDataNew(archs[0])))
+        goto cleanup;
+
+    data = &cpuData->data.loongarch;
+    data->prid = g_new0(virCPULoongArchPrid, 1);
+    if (!data->prid)
+        goto cleanup;
+
+
+    data->len = 1;
+
+    data->prid[0].value = virCPULoongArchGetHostPRID();
+    data->prid[0].mask = 0x00ul;

... and corresponding mask is definitely not acceptable. You'll need
to implement a function that fetches the value dynamically by using
whatever mechanism is appropriate, and of course ensure that such
code is only ever run on a loongarch64 host.

But again, do we really need that complexity right now? The riscv64
driver doesn't have any of that and is usable for many purposes.
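
(For illustration only, a standalone sketch of how the PRID could be read
at runtime instead of being hard-coded. It assumes the CPUCFG instruction,
whose configuration word 0 holds the processor identity, and it only
attempts the read when built for loongarch; this is not code from the
series.)

    #include <stdint.h>
    #include <stdio.h>

    static int64_t
    get_host_prid(void)
    {
    #if defined(__loongarch__)
        uint32_t prid;
        uint32_t word = 0;  /* CPUCFG word 0: processor identity (PRID) */

        __asm__ volatile("cpucfg %0, %1" : "=r"(prid) : "r"(word));
        return prid;
    #else
        return -1;  /* not a loongarch host: report "unknown" rather than a fake value */
    #endif
    }

    int
    main(void)
    {
        int64_t prid = get_host_prid();

        if (prid < 0)
            printf("not running on a loongarch host\n");
        else
            printf("PRID: 0x%08x\n", (unsigned int)prid);
        return 0;
    }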


Okay, so the hard coding here is a little bit inappropriate, and I feel 
like I can do without the complexity,


I'm not sure, but I can try to simplify this.


+static virCPUDef *
+virCPULoongArchDriverBaseline(virCPUDef **cpus,
+                              unsigned int ncpus,
+                              virDomainCapsCPUModels *models ATTRIBUTE_UNUSED,
+                              const char **features ATTRIBUTE_UNUSED,
+                              bool

Re: [libvirt PATCH v2 10/15] xen: explicitly set hostdev driver.type at runtime, not in postparse

2023-12-17 Thread Laine Stump

On 11/27/23 10:12 AM, Peter Krempa wrote:

On Mon, Nov 06, 2023 at 02:38:55 -0500, Laine Stump wrote:

Xen only supports a single type of PCI hostdev assignment, so it is
superfluous to have an explicit <driver> element peppered throughout the
config. It *is* necessary to have the driver type explicitly set in
the hostdev object before calling into the hypervisor-agnostic "hostdev
manager" though (otherwise the hostdev manager doesn't know whether it
should do Xen-specific setup, or VFIO-specific setup).

Historically, the Xen driver has checked for "default" driver type
(i.e. not set in the XML), and set it to "xen", during the XML
postparse, thus guaranteeing that it will be set by the time the
object is sent to the hostdev manager at runtime, but also setting it
so early that a simple round-trip of parse-format results in the XML
always containing an explicit <driver> element, even if that
wasn't specified in the original XML.


Generally stuff that is written to the definition at parse time is done
so that it doesn't change in the future, thus preserving ABI of machines
created with a config where the default was not entered yet.


I don't think there was any intentional "lock in the default setting in 
the config to preserve ABI" in this case - it all seems to just be 
convenience/serendipity. From what I can see in the original code adding 
"PCI passthrough" to the libxl driver, they were just filling in the 
driver name in the xml because 1) it was there, and 2) they had just 
split out some of the code from qemu into "common code" to be used by 
both libxl and QEMU, needed a way to specify which driver was calling 
into this common code, saw the driver name as a convenient way to do it, 
and noticed that lots of other values that weren't set by the user were 
being set in the postparse, so that's where they set it in their code.


(BTW, there was code in the original commit adding PCI passthrough to 
libxl (commit 6225cb3df5a5732868719462f0a602ef11f7fa03) that logged an 
error if the driver name was anything other than empty (in which case it 
was set to "xen") or "xen". That code is still there today (see 
libxlNodeDeviceDetachFlags), so the Xen driver definitely only supports 
the one type of PCI device assignment.)




This commit message doesn't make an argument why the change is needed,
so I'm a bit reluctant...


Well... the less use there is of the old <driver> syntax, the
better, and preventing it from being written into every single new
config file that contains a hostdev means less use. Also this change makes
the libxl driver more consistent with the behavior of the qemu driver
(which has never set an explicit default in the postparse - it waits until
the devices are being initialized for domain startup/hotplug before it
sets the driver type, and it also always sets it to the same value).


I suppose I could omit this patch, but I think it's just perpetuating
unintentional code that is inconsistent with the other driver (qemu),
which makes it more difficult for a newcomer to understand
what's going on with the <driver> element and what the proper way is
to handle it for some new hypervisor driver that decides to support
PCI hostdev assignment.






The QEMU driver *doesn't* set driver.type during postparse though;
instead, it waits until domain startup time (or device attach time for
hotplug), and sets the driver.type then. The result is that a
parse-format round trip of the XML in the QEMU driver *doesn't* add in
the .

This patch modifies the Xen code to behave similarly to the QEMU code
- the PostParse just checks for a driver.type that isn't supported by
the Xen driver, and any explicit setting to "xen" is deferred until domain
runtime rather than during the postparse.

This delayed setting of driver.type of course results in slightly
different xml2xml parse-format results, so the unit test data is
modified accordingly.

Signed-off-by: Laine Stump 
---
  src/libxl/libxl_domain.c  | 65 +++
  src/libxl/libxl_driver.c  | 25 ---
  tests/libxlxml2domconfigdata/moredevs-hvm.xml |  1 -
  tests/xlconfigdata/test-fullvirt-pci.xml  |  2 -
  tests/xmconfigdata/test-pci-dev-syntax.xml|  2 -
  tests/xmconfigdata/test-pci-devs.xml  |  2 -
  6 files changed, 69 insertions(+), 28 deletions(-)

diff --git a/src/libxl/libxl_domain.c b/src/libxl/libxl_domain.c
index 22482adfa6..ecdf57e655 100644
--- a/src/libxl/libxl_domain.c
+++ b/src/libxl/libxl_domain.c
@@ -160,8 +160,15 @@ libxlDomainDeviceDefPostParse(virDomainDeviceDef *dev,
 
     if (hostdev->mode == VIR_DOMAIN_HOSTDEV_MODE_SUBSYS &&
         hostdev->source.subsys.type == VIR_DOMAIN_HOSTDEV_SUBSYS_TYPE_PCI &&
-        pcisrc->driver.type == VIR_DEVICE_HOSTDEV_PCI_DRIVER_TYPE_DEFAULT)
-        pcisrc->driver.type = VIR_DEVICE_HOSTDEV_PCI_DRIVER_TYPE_XEN;
+        pcisrc->driver.type != VIR_DEVICE_HOSTDEV_PCI_DRIVER_TYPE_DEFAULT &&
+        pcisrc->driver.type != VIR_DEVICE_HOSTDEV_PCI_DRIVER_TYPE_XEN) {
+
+
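
(The quoted hunk above is cut short and only shows the relaxed postparse
check. The deferred runtime side would presumably be a small fragment along
these lines, reusing the enum values visible in the diff; a sketch, not the
actual patch:)

    /* at domain startup / hotplug time, not in postparse */
    if (pcisrc->driver.type == VIR_DEVICE_HOSTDEV_PCI_DRIVER_TYPE_DEFAULT)
        pcisrc->driver.type = VIR_DEVICE_HOSTDEV_PCI_DRIVER_TYPE_XEN;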

Re: [libvirt PATCH v2 09/15] tests: remove explicit <driver name='vfio'/> from hostdev test cases

2023-12-17 Thread Laine Stump

On 11/27/23 10:03 AM, Peter Krempa wrote:

On Mon, Nov 06, 2023 at 02:38:54 -0500, Laine Stump wrote:

The long-deprecated use of <driver name='vfio'/> in domain xml
for <hostdev> devices was only ever necessary during the period when
libvirt (and the Linux kernel) supported both VFIO and "legacy KVM"
styles of hostdev device assignment for QEMU. This became pointless
many years ago when legacy KVM device assignment was removed from the
kernel, and support for that style of device assignment was completely
disabled in the libvirt source in 2019 (commit
v5.6.0-316-g2e7225ea8c).

Nevertheless, there were instances of <driver name='vfio'/> in the
unit test data that were then (unnecessarily) propagated to several
more tests over the years. This patch cleans out those unnecessary
explicit settings of driver name='vfio' in all QEMU unit test data,
proving that the attribute is no longer (externally) needed while also
making it simpler to properly test when a later patch "re-animates"
the driver name attribute for a slightly different (but related) use.

Signed-off-by: Laine Stump 
---


Few issues:
1) There are few bits with domain XML bits that seem to mention it:

tests/qemuhotplugtestdevices/qemuhotplug-hostdev-pci.xml:      <driver name='vfio'/>
tests/qemuhotplugtestdomains/qemuhotplug-base-live+hostdev-pci.xml:      <driver name='vfio'/>
tests/qemuhotplugtestdomains/qemuhotplug-pseries-base-live+hostdev-pci.xml:      <driver name='vfio'/>



I *thought* that I needed to keep those for now (until patch 12/15), but 
it turns out I didn't, so I removed them too.



tests/qemumemlockdata/qemumemlock-pc-hardlimit+hostdev.xml:      <driver name='vfio'/>
tests/qemumemlockdata/qemumemlock-pc-hardlimit+locked+hostdev.xml:      <driver name='vfio'/>
tests/qemumemlockdata/qemumemlock-pc-hostdev-nvme.xml:      <driver name='vfio'/>
tests/qemumemlockdata/qemumemlock-pc-hostdev.xml:      <driver name='vfio'/>
tests/qemumemlockdata/qemumemlock-pc-locked+hostdev.xml:      <driver name='vfio'/>
tests/qemumemlockdata/qemumemlock-pseries-hardlimit+hostdev.xml:      <driver name='vfio'/>
tests/qemumemlockdata/qemumemlock-pseries-hardlimit+locked+hostdev.xml:      <driver name='vfio'/>
tests/qemumemlockdata/qemumemlock-pseries-hostdev.xml:      <driver name='vfio'/>
tests/qemumemlockdata/qemumemlock-pseries-locked+hostdev.xml:      <driver name='vfio'/>


I had previously also left these in because the test was failing when I 
removed them, then I forgot to go back and investigate why. Now that 
you've reminded me to look back at it, I see this failure was due to the 
patch 11/15 - "conf: replace virHostdevIsVFIODevice with 
virHostdevIsPCIDevice" - not yet being applied. I've switched the order 
of the patches and now am able to remove these as well.


> tests/qemustatusxml2xmldata/modern-in.xml:      <driver name='vfio'/>

This is status XML generated by


2) There's documentation that seems to mention it:

docs/formatdomain.rst:   <driver name='vfio'/>
docs/formatnetwork.rst:  <driver name='vfio'/>
docs/formatnetworkport.rst:   <driver name='vfio'/>
docs/pci-addresses.rst:    <driver name='vfio'/>


removed (I forgot to search in docs)



3) You are not leaving any example with the existing syntax. To prevent
regression, either keep some in or create a specific new test case to
cover this now-corner-case.


One of the test cases added in patch 12/15 covers the "legacy" case of 
"driver name='vfio'" along with the new "driver type='vfio'", and the 
new usage of "driver name='vfio-pci-igbvf'".




With at least 2 & 3 addressed:

Reviewed-by: Peter Krempa 


Re: [libvirt PATCH 0/5] add loongarch support for libvirt

2023-12-17 Thread lixianglai

Hi Andrea:



On Thu, Dec 14, 2023 at 02:08:44PM +0800, xianglai li wrote:

Hello, Everyone:
   This patch series adds libvirt support for loongarch. Although the bios
path and name have not been officially integrated into qemu, and we think
there are still many shortcomings, we are pushing a version of the patch
series to the community based on earlier feedback, hoping to hear
everyone's opinions.

Sharing your work earlier rather than later is definitely a good
approach when it comes to open source development, so I appreciate
you doing this :)



Thank you very much for your affirmation and encouragement!



   loongarch's virtual machine bios is not yet available in qemu, so you can 
get it from the following link
https://github.com/loongson/Firmware/tree/main/LoongArchVirtMachine

Great to see that edk2 support has already been mainlined! An
excellent next step would be to get an edk2-loongarch64 package into
the various distros... Please consider working with the maintainers
for edk2 in Fedora to make that happen, as it would significantly
lower the barrier for interested people to get involved.



Yes, we will do that. Currently the loongarch code is being moved from
the edk2-platforms repository into the main edk2 tree; I think once that
work is completed, we will have an edk2 installation package.




(Note: you should clone the repository using git instead of downloading the
files via wget, or you'll get xml.)
We named the bios edk2-loongarch64-code.fd; edk2-loongarch64-vars.fd is used
to store pflash images of non-volatile variables. After installing
qemu-system-loongarch64, you need to manually copy these two files to the
/usr/share/qemu directory.

As I have implicitly pointed out in the comment to one of the
patches, these paths are not correct.

The /usr/share/qemu/ directory is owned by the QEMU package, and
other components should not drop their files in there. The exception
is the /usr/share/qemu/firmware/ directory, which is specifically
designed for interoperation.

The edk2 files should be installed to /usr/share/edk2/loongarch64/,
following the convention established by existing architectures. Once
the directory name already contains architecture information, you can
use shorter and less unique names for the files themselves.



I think edk2-loongarch64-code.fd can be the loongarch bios that comes
with the qemu package, and in that case installing it to /usr/share/qemu
makes sense.

For the separately generated loongarch edk2 package, the installation
path can be /usr/share/edk2/loongarch64 as you suggest, with the file
named QEMU_EFI.fd.





   Well, if you have completed the above steps, I think you can now install a
loongarch virtual machine. You can install it through the virt-manager
graphical interface or through virt-install; here is an example of installing
it using virt-install:

virt-install  \
--virt-type=qemu \
--name  loongarch-test \
--memory 4096 \
--vcpus=4 \
--arch=loongarch64 \
--boot cdrom \
--disk device=cdrom,bus=scsi,path=/root/livecd-fedora-mate-4.loongarch64.iso \
--disk path=/var/lib/libvirt/images/debian12-loongarch64.qcow2,size=10,format=qcow2,bus=scsi \
--network network=default \
--osinfo archlinux   \
--feature acpi=true \

This looks a bit out of place: virt-install should automatically
enable the ACPI feature if it's advertised as available by libvirt.

Please take a look at virQEMUCapsInitGuestFromBinary() and consider
updating it so that ACPI support for loongarch is advertised.
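
(Roughly the kind of change being suggested, shown as a hypothetical
fragment: the helper name and the existing architecture condition are
assumptions about the current code in virQEMUCapsInitGuestFromBinary(),
not a verified diff.)

    if (ARCH_IS_X86(guestarch) ||
        guestarch == VIR_ARCH_AARCH64 ||
        guestarch == VIR_ARCH_LOONGARCH64)   /* new: advertise ACPI for loongarch */
        virCapabilitiesAddGuestFeatureWithToggle(guest,
                                                 VIR_CAPS_GUEST_FEATURE_TYPE_ACPI,
                                                 true, true);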



Ok, I'll fix that in the next version.



lixianglai (5):
   Add loongarch cpu support
   Add loongarch cpu model and vendor info
   Config some capabilities for loongarch virt machine
   Implement the method of getting host info for loongarch
   Add bios path for loongarch

The information provided in the cover letter, including pointers to
the various not-yet-upstreamed changes and instructions on how to
test everything, is very much appreciated!

Ok, I will provide more detailed instructions on changes and testing in
the next version.


Unfortunately I didn't have enough time to take things for a spin, so
I've limited myself to a relatively quick review.

In addition to the comments that I've provided for the code that is
there, I need to point out what is *not* there: specifically, any
kind of test :)

Before this can be considered for inclusion, we need to have some
test coverage. It doesn't have to be incredibly exhaustive, but at
least the basics need to be addressed. If you look for files that
contain "riscv64" in their names in the tests/ directory you should
get a decent idea of what kind of coverage we will need.



Ok, I will refer to the "riscv64" file in the tests directory to add 
loongarch64 related test cases.





That's all I have for now. I'll talk to you again in 2024 :)

Ok, thank you very much for taking time out of your busy schedule to 
review these patches.



Wish you a merry Christmas in 

Re: [libvirt PATCH v2 05/15] conf: put hostdev PCI backend into a struct

2023-12-17 Thread Laine Stump

On 11/27/23 9:53 AM, Peter Krempa wrote:

@@ -29973,14 +29973,10 @@ virDomainNetDefActualToNetworkPort(virDomainDef *dom,
         break;
 
     case VIR_DEVICE_HOSTDEV_PCI_DRIVER_TYPE_XEN:
-        virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
-                       _("Unexpected PCI backend 'xen'"));
-        break;
-


Falling through to virReportEnumRangeError from such cases isn't
something we normally do as virReportEnumRangeError is usually meant for
cases when the value is unknown.


(Finally going through all of these)

the ..._XEN value does exist in the enum for the value in the NetDef 
object, but there's no equivalent for it in the 
virNetworkForwardDriverNameType enum that's used on the NetworkPort 
object being converted *to*, so this actually *is* a range error 
(although you're correct that it's not the normal case of a completely 
unknown value).


So I would take the time to make a separate case / error message for 
this particular value, however the patch immediately after this patch 
completely removes all this code anyway (it switches to using the same 
enum for both objects), so I'm just leaving it as it is.
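
(For context, a separate per-value case would look roughly like the
fragment below, reusing the identifiers from the quoted diff. The error
message wording is invented, and as noted the whole block is deleted by
the following patch anyway.)

    case VIR_DEVICE_HOSTDEV_PCI_DRIVER_TYPE_XEN:
        virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                       _("PCI backend 'xen' is not supported for network ports"));
        break;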
