Re: [PATCH v2 1/2] hw/i386/pc: pc_system_ovmf_table_find: Assert that flash was parsed

2021-06-30 Thread Dov Murik



On 30/06/2021 17:35, Philippe Mathieu-Daudé wrote:
> On 6/30/21 3:38 PM, Tom Lendacky wrote:
>> On 6/30/21 12:46 AM, Dov Murik wrote:
>>> Add assertion in pc_system_ovmf_table_find that verifies that the flash
>>> was indeed previously parsed (looking for the OVMF table) by
>>> pc_system_parse_ovmf_flash.
>>>
>>> Now pc_system_ovmf_table_find distinguishes between "no one called
>>> pc_system_parse_ovmf_flash" (which will abort due to assertion failure)
>>> and "the flash was parsed but no OVMF table was found, or it is invalid"
>>> (which will return false).
>>>
>>> Suggested-by: Philippe Mathieu-Daudé 
>>> Signed-off-by: Dov Murik 
>>
>> Does the qemu coding style prefer not initializing the bool to false since
>> it will default to that?
> 
> Indeed, you are right, and checkpatch will block this patch:
> 
> ERROR: do not initialise statics to 0 or NULL
> #33: FILE: hw/i386/pc_sysfw.c:129:
> +static bool ovmf_flash_parsed = false;
> 
> total: 1 errors, 0 warnings, 28 lines checked

oops, missed that in my flow.

Sent a v3 series with this fix.


> 
>> Otherwise,
>>
>> Reviewed-by: Tom Lendacky 

Thanks, Tom!

-Dov



[PATCH v3 2/2] hw/i386/pc: Document pc_system_ovmf_table_find

2021-06-30 Thread Dov Murik
Suggested-by: Philippe Mathieu-Daudé 
Signed-off-by: Dov Murik 
Reviewed-by: Philippe Mathieu-Daudé 
---
 hw/i386/pc_sysfw.c | 11 +++
 1 file changed, 11 insertions(+)

diff --git a/hw/i386/pc_sysfw.c b/hw/i386/pc_sysfw.c
index e353f2a4e9..6ddce92a86 100644
--- a/hw/i386/pc_sysfw.c
+++ b/hw/i386/pc_sysfw.c
@@ -179,6 +179,17 @@ static void pc_system_parse_ovmf_flash(uint8_t *flash_ptr, size_t flash_size)
 ovmf_table += tot_len;
 }
 
+/**
+ * pc_system_ovmf_table_find - Find the data associated with an entry in OVMF's
+ * reset vector GUIDed table.
+ *
+ * @entry: GUID string of the entry to lookup
+ * @data: Filled with a pointer to the entry's value (if not NULL)
+ * @data_len: Filled with the length of the entry's value (if not NULL). Pass
+ *            NULL here if the length of data is known.
+ *
+ * Return: true if the entry was found in the OVMF table; false otherwise.
+ */
 bool pc_system_ovmf_table_find(const char *entry, uint8_t **data,
int *data_len)
 {
-- 
2.25.1




[PATCH v3 1/2] hw/i386/pc: pc_system_ovmf_table_find: Assert that flash was parsed

2021-06-30 Thread Dov Murik
Add assertion in pc_system_ovmf_table_find that verifies that the flash
was indeed previously parsed (looking for the OVMF table) by
pc_system_parse_ovmf_flash.

Now pc_system_ovmf_table_find distinguishes between "no one called
pc_system_parse_ovmf_flash" (which will abort due to assertion failure)
and "the flash was parsed but no OVMF table was found, or it is invalid"
(which will return false).

Suggested-by: Philippe Mathieu-Daudé 
Signed-off-by: Dov Murik 
Reviewed-by: Tom Lendacky 
---
 hw/i386/pc_sysfw.c | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/hw/i386/pc_sysfw.c b/hw/i386/pc_sysfw.c
index 6ce37a2b05..e353f2a4e9 100644
--- a/hw/i386/pc_sysfw.c
+++ b/hw/i386/pc_sysfw.c
@@ -126,6 +126,7 @@ void pc_system_flash_cleanup_unused(PCMachineState *pcms)
 
 #define OVMF_TABLE_FOOTER_GUID "96b582de-1fb2-45f7-baea-a366c55a082d"
 
+static bool ovmf_flash_parsed;
 static uint8_t *ovmf_table;
 static int ovmf_table_len;
 
@@ -136,10 +137,12 @@ static void pc_system_parse_ovmf_flash(uint8_t *flash_ptr, size_t flash_size)
 int tot_len;
 
 /* should only be called once */
-if (ovmf_table) {
+if (ovmf_flash_parsed) {
 return;
 }
 
+ovmf_flash_parsed = true;
+
 if (flash_size < TARGET_PAGE_SIZE) {
 return;
 }
@@ -183,6 +186,8 @@ bool pc_system_ovmf_table_find(const char *entry, uint8_t **data,
 int tot_len = ovmf_table_len;
 QemuUUID entry_guid;
 
+assert(ovmf_flash_parsed);
+
if (qemu_uuid_parse(entry, &entry_guid) < 0) {
 return false;
 }
-- 
2.25.1




[PATCH v3 0/2] hw/i386/pc: Clarify pc_system_ovmf_table_find usage

2021-06-30 Thread Dov Murik
Add assertion to verify that the flash was parsed (looking for the OVMF table),
and add documentation for pc_system_ovmf_table_find.

v3:
 - [style] remove static initialization to 'false'

v2:
 - add assertion (insert patch 1/2)

Dov Murik (2):
  hw/i386/pc: pc_system_ovmf_table_find: Assert that flash was parsed
  hw/i386/pc: Document pc_system_ovmf_table_find

 hw/i386/pc_sysfw.c | 18 +-
 1 file changed, 17 insertions(+), 1 deletion(-)

-- 
2.25.1




Re: [RFC v6 10/13] target/s390x: use kvm_enabled() to wrap call to kvm_s390_get_hpage_1m

2021-06-30 Thread Al Cho
Hi Cornelia,

Sorry for the delayed reply.
I think it may not be worth it; as you said, it seems to be the only call
site for kvm_s390_get_hpage_1m().
So I think we could keep it.

Thanks,
AL

From: Cornelia Huck 
Sent: Wednesday, June 30, 2021 11:21 PM
To: Al Cho ; qemu-devel@nongnu.org ; 
qemu-s3...@nongnu.org 
Cc: Claudio Fontana ; Al Cho ; José 
Ricardo Ziviani ; Claudio Fontana 
Subject: Re: [RFC v6 10/13] target/s390x: use kvm_enabled() to wrap call to 
kvm_s390_get_hpage_1m

On Tue, Jun 29 2021, "Cho, Yu-Chen"  wrote:

> this will allow to remove the kvm stubs.
>
> Signed-off-by: Claudio Fontana 
> Signed-off-by: Cho, Yu-Chen 
> ---
>  target/s390x/diag.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/target/s390x/diag.c b/target/s390x/diag.c
> index c17a2498a7..8405f69df0 100644
> --- a/target/s390x/diag.c
> +++ b/target/s390x/diag.c
> @@ -20,6 +20,7 @@
>  #include "hw/s390x/ipl.h"
>  #include "hw/s390x/s390-virtio-ccw.h"
>  #include "hw/s390x/pv.h"
> +#include "sysemu/kvm.h"
>  #include "kvm_s390x.h"
>
>  int handle_diag_288(CPUS390XState *env, uint64_t r1, uint64_t r3)
> @@ -168,7 +169,7 @@ out:
>  return;
>  }
>
> -if (kvm_s390_get_hpage_1m()) {
> +if (kvm_enabled() && kvm_s390_get_hpage_1m()) {

I think I asked before whether we should introduce a
s390_huge_page_backing() wrapper (which might be overkill)... any
opinions on that? I'm not really opposed to this patch here, either.

>  error_report("Protected VMs can currently not be backed with "
>   "huge pages");
>  env->regs[r1 + 1] = DIAG_308_RC_INVAL_FOR_PV;



Re: [PATCH] hw/pci/pcie_port: Rename "enable-native-hotplug" property

2021-06-30 Thread David Gibson
On Wed, Jun 23, 2021 at 04:47:47PM +0200, Julia Suvorova wrote:
> PCIE_SLOT property renamed to "native-hotplug" to be more concise
> and consistent with other properties.
> 
> Signed-off-by: Julia Suvorova 
> Reviewed-by: Igor Mammedov 
> Reviewed-by: Marcel Apfelbaum 

Reviewed-by: David Gibson 

> ---
>  hw/i386/pc_q35.c   | 4 ++--
>  hw/pci/pcie_port.c | 2 +-
>  2 files changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/hw/i386/pc_q35.c b/hw/i386/pc_q35.c
> index a0ec7964cc..04b4a4788d 100644
> --- a/hw/i386/pc_q35.c
> +++ b/hw/i386/pc_q35.c
> @@ -243,8 +243,8 @@ static void pc_q35_init(MachineState *machine)
>NULL);
>  
>  if (acpi_pcihp) {
> -object_register_sugar_prop(TYPE_PCIE_SLOT, "enable-native-hotplug",
> -  "false", true);
> +object_register_sugar_prop(TYPE_PCIE_SLOT, "native-hotplug",
> +   "false", true);
>  }
>  
>  /* irq lines */
> diff --git a/hw/pci/pcie_port.c b/hw/pci/pcie_port.c
> index a410111825..da850e8dde 100644
> --- a/hw/pci/pcie_port.c
> +++ b/hw/pci/pcie_port.c
> @@ -148,7 +148,7 @@ static Property pcie_slot_props[] = {
>  DEFINE_PROP_UINT8("chassis", PCIESlot, chassis, 0),
>  DEFINE_PROP_UINT16("slot", PCIESlot, slot, 0),
>  DEFINE_PROP_BOOL("hotplug", PCIESlot, hotplug, true),
> -DEFINE_PROP_BOOL("enable-native-hotplug", PCIESlot, native_hotplug, true),
> +DEFINE_PROP_BOOL("native-hotplug", PCIESlot, native_hotplug, true),
>  DEFINE_PROP_END_OF_LIST()
>  };
>  

-- 
David Gibson| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson


signature.asc
Description: PGP signature


Re: [PATCH v5 4/7] hw/pci/pcie: Do not set HPC flag if acpihp is used

2021-06-30 Thread David Gibson
On Thu, Jun 17, 2021 at 09:07:36PM +0200, Julia Suvorova wrote:
> Instead of changing the hot-plug type in _OSC register, do not
> set the 'Hot-Plug Capable' flag. This way guest will choose ACPI
> hot-plug if it is preferred and leave the option to use SHPC with
> pcie-pci-bridge.
> 
> The ability to control hot-plug for each downstream port is retained,
> while 'hotplug=off' on the port means all hot-plug types are disabled.
> 
> Signed-off-by: Julia Suvorova 
> Reviewed-by: Igor Mammedov 

Reviewed-by: David Gibson 

> ---
>  include/hw/pci/pcie_port.h |  5 -
>  hw/acpi/pcihp.c|  8 
>  hw/core/machine.c  |  1 -
>  hw/i386/pc_q35.c   | 11 +++
>  hw/pci/pcie.c  |  8 +++-
>  hw/pci/pcie_port.c |  1 +
>  6 files changed, 31 insertions(+), 3 deletions(-)
> 
> diff --git a/include/hw/pci/pcie_port.h b/include/hw/pci/pcie_port.h
> index bea8ecad0f..e25b289ce8 100644
> --- a/include/hw/pci/pcie_port.h
> +++ b/include/hw/pci/pcie_port.h
> @@ -57,8 +57,11 @@ struct PCIESlot {
>  /* Disable ACS (really for a pcie_root_port) */
>  booldisable_acs;
>  
> -/* Indicates whether hot-plug is enabled on the slot */
> +/* Indicates whether any type of hot-plug is allowed on the slot */
>  boolhotplug;
> +
> +boolnative_hotplug;
> +
>  QLIST_ENTRY(PCIESlot) next;
>  };
>  
> diff --git a/hw/acpi/pcihp.c b/hw/acpi/pcihp.c
> index 5355618608..7a6bc1b31e 100644
> --- a/hw/acpi/pcihp.c
> +++ b/hw/acpi/pcihp.c
> @@ -31,6 +31,7 @@
>  #include "hw/pci/pci.h"
>  #include "hw/pci/pci_bridge.h"
>  #include "hw/pci/pci_host.h"
> +#include "hw/pci/pcie_port.h"
>  #include "hw/i386/acpi-build.h"
>  #include "hw/acpi/acpi.h"
>  #include "hw/pci/pci_bus.h"
> @@ -332,6 +333,13 @@ void acpi_pcihp_device_plug_cb(HotplugHandler *hotplug_dev, AcpiPciHpState *s,
>  object_dynamic_cast(OBJECT(dev), TYPE_PCI_BRIDGE)) {
>  PCIBus *sec = pci_bridge_get_sec_bus(PCI_BRIDGE(pdev));
>  
> +/* Remove all hot-plug handlers if hot-plug is disabled on slot */
> +if (object_dynamic_cast(OBJECT(dev), TYPE_PCIE_SLOT) &&
> +!PCIE_SLOT(pdev)->hotplug) {
> +qbus_set_hotplug_handler(BUS(sec), NULL);
> +return;
> +}
> +
>  qbus_set_hotplug_handler(BUS(sec), OBJECT(hotplug_dev));
>  /* We don't have to overwrite any other hotplug handler yet */
>  assert(QLIST_EMPTY(&sec->child));
> diff --git a/hw/core/machine.c b/hw/core/machine.c
> index 55b9bc7817..6ed0575d81 100644
> --- a/hw/core/machine.c
> +++ b/hw/core/machine.c
> @@ -582,7 +582,6 @@ static void machine_set_memdev(Object *obj, const char *value, Error **errp)
>  ms->ram_memdev_id = g_strdup(value);
>  }
>  
> -
>  static void machine_init_notify(Notifier *notifier, void *data)
>  {
>  MachineState *machine = MACHINE(qdev_get_machine());
> diff --git a/hw/i386/pc_q35.c b/hw/i386/pc_q35.c
> index 46a0f196f4..a0ec7964cc 100644
> --- a/hw/i386/pc_q35.c
> +++ b/hw/i386/pc_q35.c
> @@ -37,6 +37,7 @@
>  #include "sysemu/kvm.h"
>  #include "hw/kvm/clock.h"
>  #include "hw/pci-host/q35.h"
> +#include "hw/pci/pcie_port.h"
>  #include "hw/qdev-properties.h"
>  #include "hw/i386/x86.h"
>  #include "hw/i386/pc.h"
> @@ -136,6 +137,7 @@ static void pc_q35_init(MachineState *machine)
>  ram_addr_t lowmem;
>  DriveInfo *hd[MAX_SATA_PORTS];
>  MachineClass *mc = MACHINE_GET_CLASS(machine);
> +bool acpi_pcihp;
>  
>  /* Check whether RAM fits below 4G (leaving 1/2 GByte for IO memory
>   * and 256 Mbytes for PCI Express Enhanced Configuration Access Mapping
> @@ -236,6 +238,15 @@ static void pc_q35_init(MachineState *machine)
>  object_property_set_link(OBJECT(machine), PC_MACHINE_ACPI_DEVICE_PROP,
>   OBJECT(lpc), _abort);
>  
> +acpi_pcihp = object_property_get_bool(OBJECT(lpc),
> +  "acpi-pci-hotplug-with-bridge-support",
> +  NULL);
> +
> +if (acpi_pcihp) {
> +object_register_sugar_prop(TYPE_PCIE_SLOT, "enable-native-hotplug",
> +  "false", true);
> +}
> +
>  /* irq lines */
>  gsi_state = pc_gsi_create(&x86ms->gsi, pcmc->pci_enabled);
>  
> diff --git a/hw/pci/pcie.c b/hw/pci/pcie.c
> index fd0fa157e8..6e95d82903 100644
> --- a/hw/pci/pcie.c
> +++ b/hw/pci/pcie.c
> @@ -529,7 +529,13 @@ void pcie_cap_slot_init(PCIDevice *dev, PCIESlot *s)
> PCI_EXP_SLTCAP_PIP |
> PCI_EXP_SLTCAP_AIP |
> PCI_EXP_SLTCAP_ABP);
> -if (s->hotplug) {
> +
> +/*
> + * Enable native hot-plug on all hot-plugged bridges unless
> + * hot-plug is disabled on the slot.
> + */
> +if (s->hotplug &&
> +(s->native_hotplug || DEVICE(dev)->hotplugged)) {
>  

Re: [PATCH v5 1/7] hw/acpi/pcihp: Enhance acpi_pcihp_disable_root_bus() to support Q35

2021-06-30 Thread David Gibson
On Thu, Jun 17, 2021 at 09:07:33PM +0200, Julia Suvorova wrote:
> PCI Express does not allow hot-plug on pcie.0. Check for Q35 in
> acpi_pcihp_disable_root_bus() to be able to forbid hot-plug using the
> 'acpi-root-pci-hotplug' flag.
> 
> Signed-off-by: Julia Suvorova 
> Reviewed-by: Igor Mammedov 
> Reviewed-by: Marcel Apfelbaum 

Reviewed-by: David Gibson 

> ---
>  hw/acpi/pcihp.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/hw/acpi/pcihp.c b/hw/acpi/pcihp.c
> index 4999277d57..09f531e941 100644
> --- a/hw/acpi/pcihp.c
> +++ b/hw/acpi/pcihp.c
> @@ -122,13 +122,14 @@ static void acpi_set_pci_info(void)
>  static void acpi_pcihp_disable_root_bus(void)
>  {
>  static bool root_hp_disabled;
> +Object *host = acpi_get_i386_pci_host();
>  PCIBus *bus;
>  
>  if (root_hp_disabled) {
>  return;
>  }
>  
> -bus = find_i440fx();
> +bus = PCI_HOST_BRIDGE(host)->bus;
>  if (bus) {
> /* setting the hotplug handler to NULL makes the bus non-hotpluggable */
>  qbus_set_hotplug_handler(BUS(bus), NULL);

-- 
David Gibson| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson


signature.asc
Description: PGP signature


Re: [PATCH v5 2/7] hw/i386/acpi-build: Add ACPI PCI hot-plug methods to Q35

2021-06-30 Thread David Gibson
On Thu, Jun 17, 2021 at 09:07:34PM +0200, Julia Suvorova wrote:
> Implement notifications and gpe to support q35 ACPI PCI hot-plug.
> Use 0xcc4 - 0xcd7 range for 'acpi-pci-hotplug' io ports.
> 
> Signed-off-by: Julia Suvorova 
> Reviewed-by: Igor Mammedov 
> Reviewed-by: Marcel Apfelbaum 

I don't know ACPI or x86 particular well, so I could well have missed
something, but..

[snip]
> @@ -392,6 +392,9 @@ static void build_append_pci_bus_devices(Aml *parent_scope, PCIBus *bus,
>  
>  if (!pdev) {
>  if (bsel) { /* add hotplug slots for non present devices */
> +if (pci_bus_is_express(bus) && slot > 0) {
> +break;
> +}
>  dev = aml_device("S%.02X", PCI_DEVFN(slot, 0));
>  aml_append(dev, aml_name_decl("_SUN", aml_int(slot)));
>  aml_append(dev, aml_name_decl("_ADR", aml_int(slot << 16)));
> @@ -516,7 +519,7 @@ static void build_append_pci_bus_devices(Aml *parent_scope, PCIBus *bus,
>  QLIST_FOREACH(sec, >child, sibling) {
>  int32_t devfn = sec->parent_dev->devfn;
>  
> -if (pci_bus_is_root(sec) || pci_bus_is_express(sec)) {
> +if (pci_bus_is_root(sec)) {
>  continue;
>  }

.. what will this logic do if we encounter a PCIe-switch.  AIUI, it
should be possible to hotplug 1 slot under each downstream port, but
we can't hotplug anything directly under the upstream port.  AFAICT
both the upstream and downstream ports will show up as 'is_bridge'
though.

So, IIUC we want to traverse a PCIe upstream switch port, but not
generate hotplug slots until we encounter the downstream ports below
it.

-- 
David Gibson| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson


signature.asc
Description: PGP signature


Re: [PATCH v5 7/7] bios-tables-test: Update golden binaries

2021-06-30 Thread David Gibson
On Thu, Jun 17, 2021 at 09:07:39PM +0200, Julia Suvorova wrote:
> Add ACPI hot-plug registers to DSDT Q35 tables.
> Changes in the tables:
> 
> +Scope (_SB.PCI0)
> +{
> +OperationRegion (PCST, SystemIO, 0x0CC4, 0x08)
> +Field (PCST, DWordAcc, NoLock, WriteAsZeros)
> +{
> +PCIU,   32,
> +PCID,   32
> +}
> +
> +OperationRegion (SEJ, SystemIO, 0x0CCC, 0x04)
> +Field (SEJ, DWordAcc, NoLock, WriteAsZeros)
> +{
> +B0EJ,   32
> +}
> +
> +OperationRegion (BNMR, SystemIO, 0x0CD4, 0x08)
> +Field (BNMR, DWordAcc, NoLock, WriteAsZeros)
> +{
> +BNUM,   32,
> +PIDX,   32
> +}
> +
> +Mutex (BLCK, 0x00)
> +Method (PCEJ, 2, NotSerialized)
> +{
> +Acquire (BLCK, 0x)
> +BNUM = Arg0
> +B0EJ = (One << Arg1)
> +Release (BLCK)
> +Return (Zero)
> +}
> +
> +Method (AIDX, 2, NotSerialized)
> +{
> +Acquire (BLCK, 0x)
> +BNUM = Arg0
> +PIDX = (One << Arg1)
> +Local0 = PIDX /* \_SB_.PCI0.PIDX */
> +Release (BLCK)
> +Return (Local0)
> +}
> +
> +Method (PDSM, 6, Serialized)
> +{
> +If ((Arg0 == ToUUID ("e5c937d0-3553-4d7a-9117-ea4d19c3434d") /* 
> Device Labeling Interface */))
> +{
> +Local0 = AIDX (Arg4, Arg5)
> +If ((Arg2 == Zero))
> +{
> +If ((Arg1 == 0x02))
> +{
> +If (!((Local0 == Zero) | (Local0 == 0x)))
> +{
> +Return (Buffer (One)
> +{
> + 0x81
>  // .
> +})
> +}
> +}
> +
> +Return (Buffer (One)
> +{
> + 0x00 // 
> .
> +})
> +}
> +ElseIf ((Arg2 == 0x07))
> +{
> +Local1 = Package (0x02)
> +{
> +Zero,
> +""
> +}
> +Local1 [Zero] = Local0
> +Return (Local1)
> +}
> +}
> +}
> +}
> +
> ...
> 
>  Scope (_GPE)
>  {
>  Name (_HID, "ACPI0006" /* GPE Block Device */)  // _HID: Hardware ID
> +Method (_E01, 0, NotSerialized)  // _Exx: Edge-Triggered GPE, 
> xx=0x00-0xFF
> +{
> +Acquire (\_SB.PCI0.BLCK, 0x)
> +\_SB.PCI0.PCNT ()
> +Release (\_SB.PCI0.BLCK)
> +}
> ...
> 
> +
> +Device (PHPR)
> +{
> +Name (_HID, "PNP0A06" /* Generic Container Device */)  // _HID: 
> Hardware ID
> +Name (_UID, "PCI Hotplug resources")  // _UID: Unique ID
> +Name (_STA, 0x0B)  // _STA: Status
> +Name (_CRS, ResourceTemplate ()  // _CRS: Current Resource 
> Settings
> +{
> +IO (Decode16,
> +0x0CC4, // Range Minimum
> +0x0CC4, // Range Maximum
> +0x01,   // Alignment
> +0x18,   // Length
> +)
> +})
> +}
>  }
> ...
> 
> And if there is a port in configuration:
> 
>  Device (S10)
>  {
>  Name (_ADR, 0x0002)  // _ADR: Address
> +Name (BSEL, Zero)
> +Device (S00)
> +{
> +Name (_SUN, Zero)  // _SUN: Slot User Number
> +Name (_ADR, Zero)  // _ADR: Address
> +Method (_EJ0, 1, NotSerialized)  // _EJx: Eject Device, 
> x=0-9
> +{
> +PCEJ (BSEL, _SUN)
> +}
> +
> +Method (_DSM, 4, Serialized)  // _DSM: Device-Specific 
> Method
> +{
> +Return (PDSM (Arg0, Arg1, Arg2, Arg3, BSEL, _SUN))
> +}
> +}
> +
> ...
> 
> +Method (DVNT, 2, NotSerialized)
> +{
> +If ((Arg0 & One))
> +{
> +Notify (S00, Arg1)
> +}
> ...
> 
> Signed-off-by: Julia Suvorova 
> ---
>  tests/qtest/bios-tables-test-allowed-diff.h |  11 ---
>  tests/data/acpi/q35/DSDT| Bin 7859 -> 8289 bytes
>  tests/data/acpi/q35/DSDT.acpihmat   | Bin 9184 -> 9614 bytes
>  tests/data/acpi/q35/DSDT.bridge | Bin 7877 

Re: [PATCH v5 3/7] hw/acpi/ich9: Enable ACPI PCI hot-plug

2021-06-30 Thread David Gibson
On Thu, Jun 17, 2021 at 09:07:35PM +0200, Julia Suvorova wrote:
> Add acpi_pcihp to ich9_pm as part of
> 'acpi-pci-hotplug-with-bridge-support' option. Set default to false.
> 
> Signed-off-by: Julia Suvorova 
> Reviewed-by: Igor Mammedov 
> Reviewed-by: Marcel Apfelbaum 
> ---
>  hw/i386/acpi-build.h   |  1 +
>  include/hw/acpi/ich9.h |  3 ++
>  hw/acpi/ich9.c | 67 ++
>  hw/acpi/pcihp.c|  5 +++-
>  hw/i386/acpi-build.c   |  2 +-
>  5 files changed, 76 insertions(+), 2 deletions(-)
> 
> diff --git a/hw/i386/acpi-build.h b/hw/i386/acpi-build.h
> index 487ec7710f..0dce155c8c 100644
> --- a/hw/i386/acpi-build.h
> +++ b/hw/i386/acpi-build.h
> @@ -10,5 +10,6 @@ extern const struct AcpiGenericAddress x86_nvdimm_acpi_dsmio;
>  #define ACPI_PCIHP_BNMR_BASE 0x10
>  
>  void acpi_setup(void);
> +Object *acpi_get_i386_pci_host(void);
>  
>  #endif
> diff --git a/include/hw/acpi/ich9.h b/include/hw/acpi/ich9.h
> index 596120d97f..a329ce43ab 100644
> --- a/include/hw/acpi/ich9.h
> +++ b/include/hw/acpi/ich9.h
> @@ -24,6 +24,7 @@
>  #include "hw/acpi/acpi.h"
>  #include "hw/acpi/cpu_hotplug.h"
>  #include "hw/acpi/cpu.h"
> +#include "hw/acpi/pcihp.h"
>  #include "hw/acpi/memory_hotplug.h"
>  #include "hw/acpi/acpi_dev_interface.h"
>  #include "hw/acpi/tco.h"
> @@ -55,6 +56,8 @@ typedef struct ICH9LPCPMRegs {
>  AcpiCpuHotplug gpe_cpu;
>  CPUHotplugState cpuhp_state;
>  
> +bool use_acpi_hotplug_bridge;
> +AcpiPciHpState acpi_pci_hotplug;
>  MemHotplugState acpi_memory_hotplug;
>  
>  uint8_t disable_s3;
> diff --git a/hw/acpi/ich9.c b/hw/acpi/ich9.c
> index 4daa79ec8d..bcbd567cb0 100644
> --- a/hw/acpi/ich9.c
> +++ b/hw/acpi/ich9.c
> @@ -217,6 +217,26 @@ static const VMStateDescription vmstate_cpuhp_state = {
>  }
>  };
>  
> +static bool vmstate_test_use_pcihp(void *opaque)
> +{
> +ICH9LPCPMRegs *s = opaque;
> +
> +return s->use_acpi_hotplug_bridge;
> +}
> +
> +static const VMStateDescription vmstate_pcihp_state = {
> +.name = "ich9_pm/pcihp",
> +.version_id = 1,
> +.minimum_version_id = 1,
> +.needed = vmstate_test_use_pcihp,
> +.fields  = (VMStateField[]) {
> +VMSTATE_PCI_HOTPLUG(acpi_pci_hotplug,
> +ICH9LPCPMRegs,
> +NULL, NULL),
> +VMSTATE_END_OF_LIST()
> +}
> +};
> +
>  const VMStateDescription vmstate_ich9_pm = {
>  .name = "ich9_pm",
>  .version_id = 1,
> @@ -238,6 +258,7 @@ const VMStateDescription vmstate_ich9_pm = {
>  &vmstate_memhp_state,
>  &vmstate_tco_io_state,
>  &vmstate_cpuhp_state,
> +&vmstate_pcihp_state,
>  NULL
>  }
>  };
> @@ -259,6 +280,7 @@ static void pm_reset(void *opaque)
>  }
>  pm->smi_en_wmask = ~0;
>  
> +acpi_pcihp_reset(&pm->acpi_pci_hotplug, true);

Doesn't this need to be protected by if (pm->use_acpi_hotplug_bridge)
? Otherwise pm->acpi_pci_hotplug won't be initialized.

>  acpi_update_sci(&pm->acpi_regs, pm->irq);
>  }
>  
> @@ -297,6 +319,18 @@ void ich9_pm_init(PCIDevice *lpc_pci, ICH9LPCPMRegs *pm,
>  pm->enable_tco = true;
>  acpi_pm_tco_init(&pm->tco_regs, &pm->io);
>  
> +if (pm->use_acpi_hotplug_bridge) {
> +acpi_pcihp_init(OBJECT(lpc_pci),
> +&pm->acpi_pci_hotplug,
> +pci_get_bus(lpc_pci),
> +pci_address_space_io(lpc_pci),
> +true,
> +ACPI_PCIHP_ADDR_ICH9);
> +
> +qbus_set_hotplug_handler(BUS(pci_get_bus(lpc_pci)),
> + OBJECT(lpc_pci));
> +}
> +
>  pm->irq = sci_irq;
>  qemu_register_reset(pm_reset, pm);
>  pm->powerdown_notifier.notify = pm_powerdown_req;
> @@ -368,6 +402,20 @@ static void ich9_pm_set_enable_tco(Object *obj, bool value, Error **errp)
>  s->pm.enable_tco = value;
>  }
>  
> +static bool ich9_pm_get_acpi_pci_hotplug(Object *obj, Error **errp)
> +{
> +ICH9LPCState *s = ICH9_LPC_DEVICE(obj);
> +
> +return s->pm.use_acpi_hotplug_bridge;
> +}
> +
> +static void ich9_pm_set_acpi_pci_hotplug(Object *obj, bool value, Error **errp)
> +{
> +ICH9LPCState *s = ICH9_LPC_DEVICE(obj);
> +
> +s->pm.use_acpi_hotplug_bridge = value;
> +}
> +
>  void ich9_pm_add_properties(Object *obj, ICH9LPCPMRegs *pm)
>  {
>  static const uint32_t gpe0_len = ICH9_PMIO_GPE0_LEN;
> @@ -376,6 +424,7 @@ void ich9_pm_add_properties(Object *obj, ICH9LPCPMRegs *pm)
>  pm->disable_s3 = 0;
>  pm->disable_s4 = 0;
>  pm->s4_val = 2;
> +pm->use_acpi_hotplug_bridge = false;
>  
>  object_property_add_uint32_ptr(obj, ACPI_PM_PROP_PM_IO_BASE,
>  &pm->pm_io_base, OBJ_PROP_FLAG_READ);
> @@ -399,6 +448,9 @@ void ich9_pm_add_properties(Object *obj, ICH9LPCPMRegs *pm)
>  object_property_add_bool(obj, ACPI_PM_PROP_TCO_ENABLED,
>   ich9_pm_get_enable_tco,
>   

RE: [PATCH] migration: Move bitmap_mutex out of migration_bitmap_clear_dirty()

2021-06-30 Thread Wang, Wei W
On Thursday, July 1, 2021 4:08 AM, Peter Xu wrote:
> Taking the mutex every time for each dirty bit to clear is too slow, 
> especially we'll
> take/release even if the dirty bit is cleared.  So far it's only used to sync 
> with
> special cases with qemu_guest_free_page_hint() against migration thread,
> nothing really that serious yet.  Let's move the lock to be upper.
> 
> There're two callers of migration_bitmap_clear_dirty().
> 
> For migration, move it into ram_save_iterate().  With the help of MAX_WAIT
> logic, we'll only run ram_save_iterate() for no more than 50ms-ish time, so 
> taking
> the lock once there at the entry.  It also means any call sites to
> qemu_guest_free_page_hint() can be delayed; but it should be very rare, only
> during migration, and I don't see a problem with it.
> 
> For COLO, move it up to colo_flush_ram_cache().  I think COLO forgot to take
> that lock even when calling ramblock_sync_dirty_bitmap(), where another
> example is migration_bitmap_sync() who took it right.  So let the mutex cover
> both the
> ramblock_sync_dirty_bitmap() and migration_bitmap_clear_dirty() calls.
> 
> It's even possible to drop the lock so we use atomic operations upon rb->bmap
> and the variable migration_dirty_pages.  I didn't do it just to still be 
> safe, also
> not predictable whether the frequent atomic ops could bring overhead too e.g.
> on huge vms when it happens very often.  When that really comes, we can
> keep a local counter and periodically call atomic ops.  Keep it simple for 
> now.
> 

If free page opt is enabled, 50ms waiting time might be too long for handling 
just one hint (via qemu_guest_free_page_hint)?
How about making the lock conditional?
e.g.
#define QEMU_LOCK_GUARD_COND(lock, cond) {
if (cond)
QEMU_LOCK_GUARD(lock);
}
Then in migration_bitmap_clear_dirty:
QEMU_LOCK_GUARD_COND(&rs->bitmap_mutex, rs->fpo_enabled);


Best,
Wei



[PATCH 14/20] python/aqmp: add QMP event support

2021-06-30 Thread John Snow
This class was designed as a "mix-in" primarily so that the feature
could be given its own treatment in its own python file.

It gets quite a bit too long otherwise.

Signed-off-by: John Snow 

---

Yes, the docstring is long. I recommend looking at the generated Sphinx
output for that part instead. You can review the markup itself if you
are a masochist.

Signed-off-by: John Snow 
---
 python/qemu/aqmp/__init__.py |   2 +
 python/qemu/aqmp/events.py   | 878 +++
 2 files changed, 880 insertions(+)
 create mode 100644 python/qemu/aqmp/events.py

diff --git a/python/qemu/aqmp/__init__.py b/python/qemu/aqmp/__init__.py
index c1ec68a023..ae87436470 100644
--- a/python/qemu/aqmp/__init__.py
+++ b/python/qemu/aqmp/__init__.py
@@ -22,6 +22,7 @@
 # the COPYING file in the top-level directory.
 
 from .error import AQMPError, MultiException
+from .events import EventListener
 from .message import Message
 from .protocol import ConnectError, Runstate
 
@@ -30,6 +31,7 @@
 __all__ = (
 # Classes, most to least important
 'Message',
+'EventListener',
 'Runstate',
 
 # Exceptions, most generic to most explicit
diff --git a/python/qemu/aqmp/events.py b/python/qemu/aqmp/events.py
new file mode 100644
index 00..140465255e
--- /dev/null
+++ b/python/qemu/aqmp/events.py
@@ -0,0 +1,878 @@
+"""
+AQMP Events and EventListeners
+==============================
+Asynchronous QMP uses `EventListener` objects to listen for events. An
+`EventListener` is a FIFO event queue that can be pre-filtered to listen
+for only specific events. Each `EventListener` instance receives its own
+copy of events that it hears, so events may be consumed without fear or
+worry for depriving other listeners of events they need to hear.
+
+
+EventListener Tutorial
+----------------------
+
+In all of the following examples, we assume that we have a
+:py:class:`~qmp_protocol.QMP` object instantiated named ``qmp`` that is
+already connected.
+
+
+`listener()` context blocks with one name
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The most basic usage is by using the `listener()` context manager to
+construct them:
+
+.. code:: python
+
+   with qmp.listener('STOP') as listener:
+   await qmp.execute('stop')
+   await listener.get()
+
+The listener is active only for the duration of the ‘with’ block. This
+instance listens only for ‘STOP’ events.
+
+
+`listener()` context blocks with two or more names
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Multiple events can be selected for by providing any ``Iterable[str]``:
+
+.. code:: python
+
+   with qmp.listener(('STOP', 'RESUME')) as listener:
+   await qmp.execute('stop')
+   event = await listener.get()
+   assert event['event'] == 'STOP'
+
+   await qmp.execute('cont')
+   event = await listener.get()
+   assert event['event'] == 'RESUME'
+
+
+`listener()` context blocks with no names
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+By omitting names entirely, you can listen to ALL events.
+
+.. code:: python
+
+   with qmp.listener() as listener:
+   await qmp.execute('stop')
+   event = await listener.get()
+   assert event['event'] == 'STOP'
+
+This isn’t a very good use case for this feature: In a non-trivial
+running system, we may not know what event will arrive next. Grabbing
+the top of a FIFO queue returning multiple kinds of events may be prone
+to error.
+
+
+Using async iterators to retrieve events
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If you’d like to simply watch what events happen to arrive, you can use
+the listener as an async iterator:
+
+.. code:: python
+
+   with qmp.listener() as listener:
+   async for event in listener:
+   print(f"Event arrived: {event['event']}")
+
+This is analogous to the following code:
+
+.. code:: python
+
+   with qmp.listener() as listener:
+   while True:
+   event = await listener.get()
+   print(f"Event arrived: {event['event']}")
+
+This event stream will never end, so these blocks will never terminate.
+
+
+Using asyncio.Task to concurrently retrieve events
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Since a listener’s event stream will never terminate, it is not likely
+useful to use that form in a script. For longer-running clients, we can
+create event handlers by using `asyncio.Task` to create concurrent
+coroutines:
+
+.. code:: python
+
+   async def print_events(listener):
+   try:
+   async for event in listener:
+   print(f"Event arrived: {event['event']}")
+   except asyncio.CancelledError:
+   return
+
+   with qmp.listener() as listener:
+   task = asyncio.Task(print_events(listener))
+   await qmp.execute('stop')
+   await qmp.execute('cont')
+   task.cancel()
+   await task
+
+However, there is no guarantee that these events will be received by the
+time we leave this context block. Once the context block is exited, the
+listener will cease to hear any new 

[PATCH 15/20] python/aqmp: add QMP protocol support

2021-06-30 Thread John Snow
The star of our show!

Add most of the QMP protocol, sans support for actually executing
commands. No problem, that happens in the next two commits.

Signed-off-by: John Snow 
---
 python/qemu/aqmp/__init__.py |   2 +
 python/qemu/aqmp/qmp_protocol.py | 257 +++
 2 files changed, 259 insertions(+)
 create mode 100644 python/qemu/aqmp/qmp_protocol.py

diff --git a/python/qemu/aqmp/__init__.py b/python/qemu/aqmp/__init__.py
index ae87436470..68d98cca75 100644
--- a/python/qemu/aqmp/__init__.py
+++ b/python/qemu/aqmp/__init__.py
@@ -25,11 +25,13 @@
 from .events import EventListener
 from .message import Message
 from .protocol import ConnectError, Runstate
+from .qmp_protocol import QMP
 
 
 # The order of these fields impact the Sphinx documentation order.
 __all__ = (
 # Classes, most to least important
+'QMP',
 'Message',
 'EventListener',
 'Runstate',
diff --git a/python/qemu/aqmp/qmp_protocol.py b/python/qemu/aqmp/qmp_protocol.py
new file mode 100644
index 00..5872bfc017
--- /dev/null
+++ b/python/qemu/aqmp/qmp_protocol.py
@@ -0,0 +1,257 @@
+"""
+QMP Protocol Implementation
+
+This module provides the `QMP` class, which can be used to connect and
+send commands to a QMP server such as QEMU. The QMP class can be used to
+either connect to a listening server, or used to listen and accept an
+incoming connection from that server.
+"""
+
+import logging
+from typing import (
+Dict,
+List,
+Mapping,
+Optional,
+)
+
+from .error import ProtocolError
+from .events import Events
+from .message import Message
+from .models import Greeting
+from .protocol import AsyncProtocol
+from .util import bottom_half, pretty_traceback, upper_half
+
+
+class _WrappedProtocolError(ProtocolError):
+"""
+Abstract exception class for Protocol errors that wrap an Exception.
+
+:param error_message: Human-readable string describing the error.
+:param exc: The root-cause exception.
+"""
+def __init__(self, error_message: str, exc: Exception):
+super().__init__(error_message)
+self.exc = exc
+
+def __str__(self) -> str:
+return f"{self.error_message}: {self.exc!s}"
+
+
+class GreetingError(_WrappedProtocolError):
+"""
+An exception occurred during the Greeting phase.
+
+:param error_message: Human-readable string describing the error.
+:param exc: The root-cause exception.
+"""
+
+
+class NegotiationError(_WrappedProtocolError):
+"""
+An exception occurred during the Negotiation phase.
+
+:param error_message: Human-readable string describing the error.
+:param exc: The root-cause exception.
+"""
+
+
+class QMP(AsyncProtocol[Message], Events):
+"""
+Implements a QMP client connection.
+
+QMP can be used to establish a connection as either the transport
+client or server, though this class always acts as the QMP client.
+
+:param name: Optional nickname for the connection, used for logging.
+
+Basic script-style usage looks like this::
+
+  qmp = QMP('my_virtual_machine_name')
+  await qmp.connect(('127.0.0.1', 1234))
+  ...
+  res = await qmp.execute('block-query')
+  ...
+  await qmp.disconnect()
+
+Basic async client-style usage looks like this::
+
+  class Client:
+  def __init__(self, name: str):
+  self.qmp = QMP(name)
+
+  async def watch_events(self):
+  try:
+  async for event in self.qmp.events:
+  print(f"Event: {event['event']}")
+  except asyncio.CancelledError:
+  return
+
+  async def run(self, address='/tmp/qemu.socket'):
+  await self.qmp.connect(address)
+  asyncio.create_task(self.watch_events())
+  await self.qmp.runstate_changed.wait()
+  await self.qmp.disconnect()
+
+See `aqmp.events` for more detail on event handling patterns.
+"""
+#: Logger object used for debugging messages.
+logger = logging.getLogger(__name__)
+
+def __init__(self, name: Optional[str] = None) -> None:
+super().__init__(name)
+Events.__init__(self)
+
+#: Whether or not to await a greeting after establishing a connection.
+self.await_greeting: bool = True
+
+#: Whether or not to perform capabilities negotiation upon connection.
+#: Implies `await_greeting`.
+self.negotiate: bool = True
+
+# Cached Greeting, if one was awaited.
+self._greeting: Optional[Greeting] = None
+
+@upper_half
+async def _begin_new_session(self) -> None:
+"""
+Initiate the QMP session.
+
+Wait for the QMP greeting and perform capabilities negotiation.
+
+:raise GreetingError: When the greeting is not understood.
+:raise NegotiationError: If the negotiation fails.
+:raise EOFError: When the server unexpectedly hangs up.
+:raise OSError: For 
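The `_WrappedProtocolError` hierarchy above chains a human-readable message with the root-cause exception. A minimal standalone sketch of the same pattern (the class names here are illustrative stand-ins, not the patch's real imports):

```python
class ProtocolError(Exception):
    """Base class for protocol-level failures (sketch)."""
    def __init__(self, error_message: str):
        super().__init__(error_message)
        self.error_message = error_message


class WrappedProtocolError(ProtocolError):
    """Keeps a root-cause exception alongside a descriptive message."""
    def __init__(self, error_message: str, exc: Exception):
        super().__init__(error_message)
        self.exc = exc

    def __str__(self) -> str:
        # Render as "<our message>: <root cause>", as the patch does.
        return f"{self.error_message}: {self.exc!s}"


# Simulate a greeting failure caused by an EOF on the socket.
root_cause = EOFError("server hung up")
err = WrappedProtocolError("Greeting failed", root_cause)
rendered = str(err)
```

The wrapped exception stays reachable via `err.exc`, so callers can inspect the true cause after catching the higher-level error.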

[PATCH 18/20] python/aqmp: add _raw() execution interface

2021-06-30 Thread John Snow
This is added in anticipation of wanting it for a synchronous wrapper
for the iotest interface. Normally, execute() and execute_msg() both
raise QMP errors in the form of Python exceptions.

Many iotests expect the entire reply as-is. To reduce churn there, add a
private execution interface that will ease the transition. However, I
do not wish to encourage its use, so it will remain a private interface.

Signed-off-by: John Snow 
---
 python/qemu/aqmp/qmp_protocol.py | 25 +
 1 file changed, 25 insertions(+)

diff --git a/python/qemu/aqmp/qmp_protocol.py b/python/qemu/aqmp/qmp_protocol.py
index 3c16cdc213..36baef9fb3 100644
--- a/python/qemu/aqmp/qmp_protocol.py
+++ b/python/qemu/aqmp/qmp_protocol.py
@@ -454,6 +454,31 @@ async def _execute(self, msg: Message, assign_id: bool = 
True) -> Message:
 exec_id = await self._issue(msg)
 return await self._reply(exec_id)
 
+@upper_half
+@require(Runstate.RUNNING)
+async def _raw(
+self,
+msg: Union[Message, Mapping[str, object], bytes]
+) -> Message:
+"""
+Issue a fairly raw `Message` to the QMP server and await a reply.
+
+An AQMP execution ID will be assigned, so it isn't *truly* raw.
+
+:param msg:
+A Message to send to the server. It may be a `Message`, any
+Mapping (including Dict), or raw bytes.
+
+:return: Execution reply from the server.
+:raise ExecInterruptedError:
+When the reply could not be retrieved because the connection
+was lost, or some other problem.
+"""
+# 1. convert generic Mapping or bytes to a QMP Message
+# 2. copy Message objects so that we assign an ID only to the copy.
+msg = Message(msg)
+return await self._execute(msg)
+
 @upper_half
 @require(Runstate.RUNNING)
 async def execute_msg(self, msg: Message) -> object:
-- 
2.31.1
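The distinction this commit message draws — `execute()` raising on QMP errors versus a raw interface returning the reply verbatim — can be sketched with plain dicts (all names below are illustrative stand-ins, not the real AQMP API):

```python
class ExecuteError(Exception):
    """Raised when a reply carries an 'error' member (sketch only)."""
    def __init__(self, desc: str, reply: dict):
        super().__init__(desc)
        self.reply = reply


def execute(reply: dict) -> object:
    """High-level style: unwrap 'return', raise on 'error'."""
    if 'error' in reply:
        raise ExecuteError(reply['error']['desc'], reply)
    return reply['return']


def execute_raw(reply: dict) -> dict:
    """iotest style: hand back the full reply verbatim, never raise."""
    return reply


err_reply = {'error': {'class': 'GenericError', 'desc': 'oops'}, 'id': 1}
raw = execute_raw(err_reply)      # full reply, as iotests expect to log it
try:
    execute(err_reply)            # high-level path raises instead
    raised = False
except ExecuteError:
    raised = True
```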




[PATCH 17/20] python/aqmp: add execute() interfaces

2021-06-30 Thread John Snow
Add execute() and execute_msg().

_execute() is split into _issue() and _reply() halves so that
hypothetical subclasses of QMP that want to support different execution
paradigms can do so.

I anticipate a synchronous interface may have need of separating the
send/reply phases. However, I do not wish to expose that interface here
and want to actively discourage it, so they remain private interfaces.

Signed-off-by: John Snow 
---
 python/qemu/aqmp/__init__.py |   4 +-
 python/qemu/aqmp/qmp_protocol.py | 203 +--
 2 files changed, 199 insertions(+), 8 deletions(-)

diff --git a/python/qemu/aqmp/__init__.py b/python/qemu/aqmp/__init__.py
index 68d98cca75..5cd7df87c6 100644
--- a/python/qemu/aqmp/__init__.py
+++ b/python/qemu/aqmp/__init__.py
@@ -25,7 +25,7 @@
 from .events import EventListener
 from .message import Message
 from .protocol import ConnectError, Runstate
-from .qmp_protocol import QMP
+from .qmp_protocol import QMP, ExecInterruptedError, ExecuteError
 
 
 # The order of these fields impact the Sphinx documentation order.
@@ -39,6 +39,8 @@
 # Exceptions, most generic to most explicit
 'AQMPError',
 'ConnectError',
+'ExecuteError',
+'ExecInterruptedError',
 
 # Niche topics
 'MultiException',
diff --git a/python/qemu/aqmp/qmp_protocol.py b/python/qemu/aqmp/qmp_protocol.py
index 04c8a8cb54..3c16cdc213 100644
--- a/python/qemu/aqmp/qmp_protocol.py
+++ b/python/qemu/aqmp/qmp_protocol.py
@@ -7,8 +7,7 @@
 incoming connection from that server.
 """
 
-# The import workarounds here are fixed in the next commit.
-import asyncio  # pylint: disable=unused-import # noqa
+import asyncio
 import logging
 from typing import (
 Dict,
@@ -21,8 +20,8 @@
 from .error import AQMPError, ProtocolError
 from .events import Events
 from .message import Message
-from .models import Greeting
-from .protocol import AsyncProtocol
+from .models import ErrorResponse, Greeting
+from .protocol import AsyncProtocol, Runstate, require
 from .util import bottom_half, pretty_traceback, upper_half
 
 
@@ -59,11 +58,32 @@ class NegotiationError(_WrappedProtocolError):
 """
 
 
+class ExecuteError(AQMPError):
+"""
+Exception raised by `QMP.execute()` on RPC failure.
+
+:param error_response: The RPC error response object.
+:param sent: The sent RPC message that caused the failure.
+:param received: The raw RPC error reply received.
+"""
+def __init__(self, error_response: ErrorResponse,
+ sent: Message, received: Message):
+super().__init__(error_response.error.desc)
+#: The sent `Message` that caused the failure
+self.sent: Message = sent
+#: The received `Message` that indicated failure
+self.received: Message = received
+#: The parsed error response
+self.error: ErrorResponse = error_response
+#: The QMP error class
+self.error_class: str = error_response.error.class_
+
+
 class ExecInterruptedError(AQMPError):
 """
-Exception raised when an RPC is interrupted.
+Exception raised by `execute()` (et al) when an RPC is interrupted.
 
-This error is raised when an execute() statement could not be
+This error is raised when an `execute()` statement could not be
 completed.  This can occur because the connection itself was
 terminated before a reply was received.
 
@@ -106,6 +126,27 @@ class ServerParseError(_MsgProtocolError):
 """
 
 
+class BadReplyError(_MsgProtocolError):
+"""
+An execution reply was successfully routed, but not understood.
+
+If a QMP message is received with an 'id' field to allow it to be
+routed, but is otherwise malformed, this exception will be raised.
+
+A reply message is malformed if it is missing either the 'return' or
+'error' keys, or if the 'error' value has missing keys or members of
+the wrong type.
+
+:param error_message: Human-readable string describing the error.
+:param msg: The malformed reply that was received.
+:param sent: The message that was sent that prompted the error.
+"""
+def __init__(self, error_message: str, msg: Message, sent: Message):
+super().__init__(error_message, msg)
+#: The sent `Message` that caused the failure
+self.sent = sent
+
+
 class QMP(AsyncProtocol[Message], Events):
 """
 Implements a QMP client connection.
@@ -165,6 +206,9 @@ def __init__(self, name: Optional[str] = None) -> None:
 # Cached Greeting, if one was awaited.
 self._greeting: Optional[Greeting] = None
 
+# Command ID counter
+self._execute_id = 0
+
 # Incoming RPC reply messages
 self._pending: Dict[str, 'asyncio.Queue[QMP._PendingT]'] = {}
 
@@ -332,12 +376,136 @@ def _cleanup(self) -> None:
 self._greeting = None
 assert not self._pending
 
+@upper_half
+def _get_exec_id(self) -> str:
+exec_id = f"__aqmp#{self._execute_id:05d}"
+   
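The `_get_exec_id` helper visible above formats a monotonically increasing counter into IDs like `__aqmp#00000`. A standalone sketch of that numbering scheme (the real method increments an instance attribute on `QMP`; this closure-based factory is just an illustration):

```python
import itertools


def make_id_factory(prefix: str = "__aqmp"):
    """Return a callable yielding increasing IDs like '__aqmp#00000'."""
    counter = itertools.count()

    def next_id() -> str:
        # Zero-padded to five digits, matching the patch's format string.
        return f"{prefix}#{next(counter):05d}"

    return next_id


next_id = make_id_factory()
first, second = next_id(), next_id()
```

Because every reply queue in `_pending` is keyed by one of these strings, uniqueness per connection is all that matters; the padding only makes logs sort nicely.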

[PATCH 13/20] python/aqmp: add well-known QMP object models

2021-06-30 Thread John Snow
The QMP spec doesn't define very many objects that are iron-clad in
their format, but there are a few. This module makes it trivial to
validate them without relying on an external third-party library.

Signed-off-by: John Snow 
---
 python/qemu/aqmp/models.py | 133 +
 1 file changed, 133 insertions(+)
 create mode 100644 python/qemu/aqmp/models.py

diff --git a/python/qemu/aqmp/models.py b/python/qemu/aqmp/models.py
new file mode 100644
index 00..24c94123ac
--- /dev/null
+++ b/python/qemu/aqmp/models.py
@@ -0,0 +1,133 @@
+"""
+QMP Data Models
+
+This module provides simplistic data classes that represent the few
+structures that the QMP spec mandates; they are used to verify incoming
+data to make sure it conforms to spec.
+"""
+# pylint: disable=too-few-public-methods
+
+from collections import abc
+from typing import (
+Any,
+Mapping,
+Optional,
+Sequence,
+)
+
+
+class Model:
+"""
+Abstract data model, representing some QMP object of some kind.
+
+:param raw: The raw object to be validated.
+:raise KeyError: If any required fields are absent.
+:raise TypeError: If any required fields have the wrong type.
+"""
+def __init__(self, raw: Mapping[str, Any]):
+self._raw = raw
+
+def _check_key(self, key: str) -> None:
+if key not in self._raw:
+raise KeyError(f"'{self._name}' object requires '{key}' member")
+
+def _check_value(self, key: str, type_: type, typestr: str) -> None:
+assert key in self._raw
+if not isinstance(self._raw[key], type_):
+raise TypeError(
+f"'{self._name}' member '{key}' must be a {typestr}"
+)
+
+def _check_member(self, key: str, type_: type, typestr: str) -> None:
+self._check_key(key)
+self._check_value(key, type_, typestr)
+
+@property
+def _name(self) -> str:
+return type(self).__name__
+
+def __repr__(self) -> str:
+return f"{self._name}({self._raw!r})"
+
+
+class Greeting(Model):
+"""
+Defined in qmp-spec.txt, section 2.2, "Server Greeting".
+
+:param raw: The raw Greeting object.
+:raise KeyError: If any required fields are absent.
+:raise TypeError: If any required fields have the wrong type.
+"""
+def __init__(self, raw: Mapping[str, Any]):
+super().__init__(raw)
+#: 'QMP' member
+self.QMP: QMPGreeting  # pylint: disable=invalid-name
+
+self._check_member('QMP', abc.Mapping, "JSON object")
+self.QMP = QMPGreeting(self._raw['QMP'])
+
+
+class QMPGreeting(Model):
+"""
+Defined in qmp-spec.txt, section 2.2, "Server Greeting".
+
+:param raw: The raw QMPGreeting object.
+:raise KeyError: If any required fields are absent.
+:raise TypeError: If any required fields have the wrong type.
+"""
+def __init__(self, raw: Mapping[str, Any]):
+super().__init__(raw)
+#: 'version' member
+self.version: Mapping[str, object]
+#: 'capabilities' member
+self.capabilities: Sequence[object]
+
+self._check_member('version', abc.Mapping, "JSON object")
+self.version = self._raw['version']
+
+self._check_member('capabilities', abc.Sequence, "JSON array")
+self.capabilities = self._raw['capabilities']
+
+
+class ErrorResponse(Model):
+"""
+Defined in qmp-spec.txt, section 2.4.2, "error".
+
+:param raw: The raw ErrorResponse object.
+:raise KeyError: If any required fields are absent.
+:raise TypeError: If any required fields have the wrong type.
+"""
+def __init__(self, raw: Mapping[str, Any]):
+super().__init__(raw)
+#: 'error' member
+self.error: ErrorInfo
+#: 'id' member
+self.id: Optional[object] = None  # pylint: disable=invalid-name
+
+self._check_member('error', abc.Mapping, "JSON object")
+self.error = ErrorInfo(self._raw['error'])
+
+if 'id' in raw:
+self.id = raw['id']
+
+
+class ErrorInfo(Model):
+"""
+Defined in qmp-spec.txt, section 2.4.2, "error".
+
+:param raw: The raw ErrorInfo object.
+:raise KeyError: If any required fields are absent.
+:raise TypeError: If any required fields have the wrong type.
+"""
+def __init__(self, raw: Mapping[str, Any]):
+super().__init__(raw)
+#: 'class' member, with an underscore to avoid conflicts in Python.
+self.class_: str
+#: 'desc' member
+self.desc: str
+
+self._check_member('class', str, "string")
+self.class_ = self._raw['class']
+
+self._check_member('desc', str, "string")
+self.desc = self._raw['desc']
-- 
2.31.1
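The validation behavior these Model classes enforce — a `KeyError` for a missing member, a `TypeError` for a wrongly-typed one — can be exercised with a small standalone sketch of `_check_member`:

```python
from collections import abc


def check_member(raw, key, type_, typestr, name):
    """Mirror of Model._check_member: presence check, then type check."""
    if key not in raw:
        raise KeyError(f"'{name}' object requires '{key}' member")
    if not isinstance(raw[key], type_):
        raise TypeError(f"'{name}' member '{key}' must be a {typestr}")


good = {'QMP': {'version': {}, 'capabilities': []}}
check_member(good, 'QMP', abc.Mapping, "JSON object", 'Greeting')  # passes

try:
    check_member({'QMP': 42}, 'QMP', abc.Mapping, "JSON object", 'Greeting')
    type_ok = True
except TypeError:
    type_ok = False

try:
    check_member({}, 'QMP', abc.Mapping, "JSON object", 'Greeting')
    key_ok = True
except KeyError:
    key_ok = False
```

Checking against `collections.abc` types rather than concrete `dict`/`list` is what lets any JSON-object-like mapping pass, exactly as in the patch.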




[PATCH 16/20] python/aqmp: Add message routing to QMP protocol

2021-06-30 Thread John Snow
Add the ability to handle and route messages in qmp_protocol.py. The
interface for actually sending anything isn't added until the next
commit.

Signed-off-by: John Snow 
---
 python/qemu/aqmp/qmp_protocol.py | 98 +++-
 1 file changed, 96 insertions(+), 2 deletions(-)

diff --git a/python/qemu/aqmp/qmp_protocol.py b/python/qemu/aqmp/qmp_protocol.py
index 5872bfc017..04c8a8cb54 100644
--- a/python/qemu/aqmp/qmp_protocol.py
+++ b/python/qemu/aqmp/qmp_protocol.py
@@ -7,15 +7,18 @@
 incoming connection from that server.
 """
 
+# The import workarounds here are fixed in the next commit.
+import asyncio  # pylint: disable=unused-import # noqa
 import logging
 from typing import (
 Dict,
 List,
 Mapping,
 Optional,
+Union,
 )
 
-from .error import ProtocolError
+from .error import AQMPError, ProtocolError
 from .events import Events
 from .message import Message
 from .models import Greeting
@@ -56,6 +59,53 @@ class NegotiationError(_WrappedProtocolError):
 """
 
 
+class ExecInterruptedError(AQMPError):
+"""
+Exception raised when an RPC is interrupted.
+
+This error is raised when an execute() statement could not be
+completed.  This can occur because the connection itself was
+terminated before a reply was received.
+
+The true cause of the interruption will be available via `disconnect()`.
+"""
+
+
+class _MsgProtocolError(ProtocolError):
+"""
+Abstract error class for protocol errors that have a `Message` object.
+
+This Exception class is used for protocol errors where the `Message`
+was mechanically understood, but was found to be inappropriate or
+malformed.
+
+:param error_message: Human-readable string describing the error.
+:param msg: The QMP `Message` that caused the error.
+"""
+def __init__(self, error_message: str, msg: Message):
+super().__init__(error_message)
+#: The received `Message` that caused the error.
+self.msg: Message = msg
+
+def __str__(self) -> str:
+return "\n".join([
+super().__str__(),
+f"  Message was: {str(self.msg)}\n",
+])
+
+
+class ServerParseError(_MsgProtocolError):
+"""
+The Server sent a `Message` indicating parsing failure.
+
+i.e. A reply has arrived from the server, but it is missing the "ID"
+field, indicating a parsing error.
+
+:param error_message: Human-readable string describing the error.
+:param msg: The QMP `Message` that caused the error.
+"""
+
+
 class QMP(AsyncProtocol[Message], Events):
 """
 Implements a QMP client connection.
@@ -98,6 +148,9 @@ async def run(self, address='/tmp/qemu.socket'):
 #: Logger object used for debugging messages.
 logger = logging.getLogger(__name__)
 
+# Type alias for pending execute() result items
+_PendingT = Union[Message, ExecInterruptedError]
+
 def __init__(self, name: Optional[str] = None) -> None:
 super().__init__(name)
 Events.__init__(self)
@@ -112,6 +165,9 @@ def __init__(self, name: Optional[str] = None) -> None:
 # Cached Greeting, if one was awaited.
 self._greeting: Optional[Greeting] = None
 
+# Incoming RPC reply messages
+self._pending: Dict[str, 'asyncio.Queue[QMP._PendingT]'] = {}
+
 @upper_half
 async def _begin_new_session(self) -> None:
 """
@@ -191,10 +247,27 @@ async def _negotiate(self) -> None:
 self.logger.error("%s:\n%s\n", emsg, pretty_traceback())
 raise
 
+@bottom_half
+async def _bh_disconnect(self, force: bool = False) -> None:
+await super()._bh_disconnect(force)
+
+if self._pending:
+self.logger.debug("Cancelling pending executions")
+keys = self._pending.keys()
+for key in keys:
+self.logger.debug("Cancelling execution '%s'", key)
+self._pending[key].put_nowait(
+ExecInterruptedError("Disconnected")
+)
+
+self.logger.debug("QMP Disconnected.")
+
 @bottom_half
 async def _on_message(self, msg: Message) -> None:
 """
 Add an incoming message to the appropriate queue/handler.
+
+:raise ServerParseError: When Message has no 'event' nor 'id' member
 """
 # Incoming messages are not fully parsed/validated here;
 # do only light peeking to know how to route the messages.
@@ -204,7 +277,27 @@ async def _on_message(self, msg: Message) -> None:
 return
 
 # Below, we assume everything left is an execute/exec-oob response.
-# ... Which we'll implement in the next commit!
+
+if 'id' not in msg:
+# This is (very likely) a server parsing error.
+# It doesn't inherently belong to any pending execution.
+# Instead of performing clever recovery, just terminate.
+# See "NOTE" in qmp-spec.txt, section 2.4.2
+raise 
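The routing policy `_on_message` implements — peek for an 'event' member first, then require an 'id' to route replies to their pending queue — can be sketched end-to-end with a plain dict of queues (names are illustrative):

```python
import asyncio


async def demo():
    pending = {'__aqmp#00000': asyncio.Queue()}  # exec-id -> reply queue
    events = []

    async def on_message(msg: dict):
        # Light peeking only: route by 'event' first, then by 'id'.
        if 'event' in msg:
            events.append(msg)
            return
        if 'id' not in msg:
            # Per qmp-spec.txt 2.4.2, a reply without 'id' signals a
            # server-side parse error; there is nothing to route it to.
            raise ValueError("server could not parse our message")
        await pending[msg['id']].put(msg)

    await on_message({'event': 'SHUTDOWN', 'data': {}})
    await on_message({'id': '__aqmp#00000', 'return': {}})
    reply = await pending['__aqmp#00000'].get()
    return events, reply


events, reply = asyncio.run(demo())
```

The queue-per-execution design is what lets `_bh_disconnect` later inject an `ExecInterruptedError` into every pending entry, waking all waiters at once.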

[PATCH 09/20] python/aqmp: add AsyncProtocol.accept() method

2021-06-30 Thread John Snow
It's a little messier than connect(), because asyncio's server interface
wasn't designed to accept *precisely one* connection. Such is life.

Signed-off-by: John Snow 
---
 python/qemu/aqmp/protocol.py | 85 ++--
 1 file changed, 82 insertions(+), 3 deletions(-)

diff --git a/python/qemu/aqmp/protocol.py b/python/qemu/aqmp/protocol.py
index dd8564ee02..a32a8cbbf6 100644
--- a/python/qemu/aqmp/protocol.py
+++ b/python/qemu/aqmp/protocol.py
@@ -242,6 +242,24 @@ def runstate(self) -> Runstate:
 """The current `Runstate` of the connection."""
 return self._runstate
 
+@upper_half
+@require(Runstate.IDLE)
+async def accept(self, address: Union[str, Tuple[str, int]],
+ ssl: Optional[SSLContext] = None) -> None:
+"""
+Accept a connection and begin processing message queues.
+
+If this call fails, `runstate` is guaranteed to be set back to `IDLE`.
+
+:param address:
+Address to listen to; UNIX socket path or TCP address/port.
+:param ssl: SSL context to use, if any.
+
+:raise StateError: When the `Runstate` is not `IDLE`.
+:raise ConnectError: If a connection could not be accepted.
+"""
+await self._new_session(address, ssl, accept=True)
+
 @upper_half
 @require(Runstate.IDLE)
 async def connect(self, address: Union[str, Tuple[str, int]],
@@ -302,7 +320,8 @@ def _set_state(self, state: Runstate) -> None:
 @upper_half
 async def _new_session(self,
address: Union[str, Tuple[str, int]],
-   ssl: Optional[SSLContext] = None) -> None:
+   ssl: Optional[SSLContext] = None,
+   accept: bool = False) -> None:
 """
 Establish a new connection and initialize the session.
 
@@ -311,9 +330,10 @@ async def _new_session(self,
 to be set back to `IDLE`.
 
 :param address:
-Address to connect to;
+Address to connect to/listen on;
 UNIX socket path or TCP address/port.
 :param ssl: SSL context to use, if any.
+:param accept: Accept a connection instead of connecting when `True`.
 
 :raise ConnectError:
 When a connection or session cannot be established.
@@ -332,7 +352,10 @@ async def _new_session(self,
 
 phase = "connection"
 try:
-await self._do_connect(address, ssl)
+if accept:
+await self._do_accept(address, ssl)
+else:
+await self._do_connect(address, ssl)
 
 phase = "session"
 await self._begin_new_session()
@@ -351,6 +374,62 @@ async def _new_session(self,
 
 assert self.runstate == Runstate.RUNNING
 
+@upper_half
+async def _do_accept(self, address: Union[str, Tuple[str, int]],
+ ssl: Optional[SSLContext] = None) -> None:
+"""
+Acting as the transport server, accept a single connection.
+
+:param address:
+Address to listen on; UNIX socket path or TCP address/port.
+:param ssl: SSL context to use, if any.
+
+:raise OSError: For stream-related errors.
+"""
+self.logger.debug("Awaiting connection ...")
+connected = asyncio.Event()
+server: Optional[asyncio.AbstractServer] = None
+
+async def _client_connected_cb(reader: asyncio.StreamReader,
+   writer: asyncio.StreamWriter) -> None:
+"""Used to accept a single incoming connection, see below."""
+nonlocal server
+nonlocal connected
+
+# A connection has been accepted; stop listening for new ones.
+assert server is not None
+server.close()
+await server.wait_closed()
+server = None
+
+# Register this client as being connected
+self._reader, self._writer = (reader, writer)
+
+# Signal back: We've accepted a client!
+connected.set()
+
+if isinstance(address, tuple):
+coro = asyncio.start_server(
+_client_connected_cb,
+host=address[0],
+port=address[1],
+ssl=ssl,
+backlog=1,
+)
+else:
+coro = asyncio.start_unix_server(
+_client_connected_cb,
+path=address,
+ssl=ssl,
+backlog=1,
+)
+
+server = await coro # Starts listening
+await connected.wait()  # Waits for the callback to fire (and finish)
+assert server is None
+
+self.logger.debug("Connection accepted")
+
 @upper_half
 async def _do_connect(self, address: Union[str, Tuple[str, int]],
   ssl: Optional[SSLContext] = None) -> None:
-- 
2.31.1
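The single-accept trick in `_do_accept` — close the listener from inside the client-connected callback — can be demonstrated standalone. This sketch plays both peers on one event loop and skips the SSL, UNIX-socket, and error handling of the real code:

```python
import asyncio


async def accept_one():
    """Accept exactly one client, then stop listening (pattern sketch)."""
    connected = asyncio.Event()
    server = None
    endpoints = {}

    async def client_connected_cb(reader, writer):
        nonlocal server
        server.close()            # a client arrived; refuse any others
        server = None
        endpoints['reader'], endpoints['writer'] = reader, writer
        connected.set()           # signal back: we've accepted a client

    server = await asyncio.start_server(
        client_connected_cb, host='127.0.0.1', port=0, backlog=1)
    port = server.sockets[0].getsockname()[1]

    # Play the peer's role from the same loop, purely for demonstration.
    peer_reader, peer_writer = await asyncio.open_connection('127.0.0.1', port)
    await connected.wait()        # callback fired; one connection held
    still_listening = server is not None

    peer_writer.close()
    endpoints['writer'].close()
    return still_listening


still_listening = asyncio.run(accept_one())
```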




[PATCH 12/20] python/aqmp: add QMP Message format

2021-06-30 Thread John Snow
The Message class is here primarily to serve as a solid type for mypy
static typing, allowing unambiguous annotation and documentation.

We can also stuff JSON serialization and deserialization into this class
itself so it can be re-used even outside this infrastructure.

Signed-off-by: John Snow 
---
 python/qemu/aqmp/__init__.py |   4 +-
 python/qemu/aqmp/message.py  | 207 +++
 2 files changed, 210 insertions(+), 1 deletion(-)
 create mode 100644 python/qemu/aqmp/message.py

diff --git a/python/qemu/aqmp/__init__.py b/python/qemu/aqmp/__init__.py
index 5c44fabeea..c1ec68a023 100644
--- a/python/qemu/aqmp/__init__.py
+++ b/python/qemu/aqmp/__init__.py
@@ -22,12 +22,14 @@
 # the COPYING file in the top-level directory.
 
 from .error import AQMPError, MultiException
+from .message import Message
 from .protocol import ConnectError, Runstate
 
 
 # The order of these fields impact the Sphinx documentation order.
 __all__ = (
-# Classes
+# Classes, most to least important
+'Message',
 'Runstate',
 
 # Exceptions, most generic to most explicit
diff --git a/python/qemu/aqmp/message.py b/python/qemu/aqmp/message.py
new file mode 100644
index 00..3a4b283032
--- /dev/null
+++ b/python/qemu/aqmp/message.py
@@ -0,0 +1,207 @@
+"""
+QMP Message Format
+
+This module provides the `Message` class, which represents a single QMP
+message sent to or from the server.
+"""
+
+import json
+from json import JSONDecodeError
+from typing import (
+Dict,
+Iterator,
+Mapping,
+MutableMapping,
+Optional,
+Union,
+)
+
+from .error import ProtocolError
+
+
+class Message(MutableMapping[str, object]):
+"""
+Represents a single QMP protocol message.
+
+QMP uses JSON objects as its basic communicative unit; so this
+Python object is a :py:obj:`~collections.abc.MutableMapping`. It may
+be instantiated from either another mapping (like a `dict`), or from
+raw `bytes` that still need to be deserialized.
+
+Once instantiated, it may be treated like any other MutableMapping::
+
+>>> msg = Message(b'{"hello": "world"}')
+>>> assert msg['hello'] == 'world'
+>>> msg['id'] = 'foobar'
+>>> print(msg)
+{
+  "hello": "world",
+  "id": "foobar"
+}
+
+It can be converted to `bytes`::
+
+>>> msg = Message({"hello": "world"})
+>>> print(bytes(msg))
+b'{"hello":"world"}'
+
+Or back into a garden-variety `dict`::
+
+   >>> dict(msg)
+   {'hello': 'world'}
+
+
+:param value: Initial value, if any.
+:param eager:
+When `True`, attempt to serialize or deserialize the initial value
+immediately, so that conversion exceptions are raised during
+the call to ``__init__()``.
+"""
+# pylint: disable=too-many-ancestors
+
+def __init__(self,
+ value: Union[bytes, Mapping[str, object]] = b'', *,
+ eager: bool = True):
+self._data: Optional[bytes] = None
+self._obj: Optional[Dict[str, object]] = None
+
+if isinstance(value, bytes):
+self._data = value
+if eager:
+self._obj = self._deserialize(self._data)
+else:
+self._obj = dict(value)
+if eager:
+self._data = self._serialize(self._obj)
+
+# Methods necessary to implement the MutableMapping interface, see:
+# 
https://docs.python.org/3/library/collections.abc.html#collections.abc.MutableMapping
+
+# We get pop, popitem, clear, update, setdefault, __contains__,
+# keys, items, values, get, __eq__ and __ne__ for free.
+
+def __getitem__(self, key: str) -> object:
+return self._object[key]
+
+def __setitem__(self, key: str, value: object) -> None:
+self._object[key] = value
+self._data = None
+
+def __delitem__(self, key: str) -> None:
+del self._object[key]
+self._data = None
+
+def __iter__(self) -> Iterator[str]:
+return iter(self._object)
+
+def __len__(self) -> int:
+return len(self._object)
+
+# Dunder methods not related to MutableMapping:
+
+def __repr__(self) -> str:
+return f"Message({self._object!r})"
+
+def __str__(self) -> str:
+"""Pretty-printed representation of this QMP message."""
+return json.dumps(self._object, indent=2)
+
+def __bytes__(self) -> bytes:
+"""bytes representing this QMP message."""
+if self._data is None:
+self._data = self._serialize(self._obj or {})
+return self._data
+
+#
+
+@property
+def _object(self) -> Dict[str, object]:
+"""
+A `dict` representing this QMP message.
+
+Generated on-demand, if required. This property is private
+because it returns an object that could be used to invalidate
+the internal state of the `Message` object.
+"""
+ 
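The `Message` class caches both representations and lazily regenerates the `bytes` form after mutation. A minimal sketch of that caching scheme, without the full MutableMapping surface (class name and serialization details here are illustrative; the real class delegates to `_serialize`/`_deserialize`):

```python
import json


class LazyMessage:
    """Sketch of Message's caching: bytes and dict forms kept in sync lazily."""
    def __init__(self, value):
        self._data = None         # cached bytes form
        self._obj = None          # cached dict form
        if isinstance(value, bytes):
            self._data = value
            self._obj = json.loads(value)
        else:
            self._obj = dict(value)
            self._data = json.dumps(self._obj).encode('utf-8')

    def __setitem__(self, key, value):
        self._obj[key] = value
        self._data = None         # mutation invalidates the cached bytes

    def __bytes__(self):
        if self._data is None:    # regenerate on demand
            self._data = json.dumps(self._obj).encode('utf-8')
        return self._data


msg = LazyMessage(b'{"hello": "world"}')
msg['id'] = 'foobar'
out = json.loads(bytes(msg))
```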

[PATCH 20/20] python/aqmp: add scary message

2021-06-30 Thread John Snow
Add a warning whenever AQMP is imported, to gently steer people away
from using it for the time being.

Signed-off-by: John Snow 
---
 python/qemu/aqmp/__init__.py | 14 ++
 1 file changed, 14 insertions(+)

diff --git a/python/qemu/aqmp/__init__.py b/python/qemu/aqmp/__init__.py
index 5cd7df87c6..f85500e0a2 100644
--- a/python/qemu/aqmp/__init__.py
+++ b/python/qemu/aqmp/__init__.py
@@ -21,6 +21,8 @@
 # This work is licensed under the terms of the GNU GPL, version 2.  See
 # the COPYING file in the top-level directory.
 
+import warnings
+
 from .error import AQMPError, MultiException
 from .events import EventListener
 from .message import Message
@@ -28,6 +30,18 @@
 from .qmp_protocol import QMP, ExecInterruptedError, ExecuteError
 
 
+_WMSG = """
+
+The Asynchronous QMP library is currently in development and its API
+should be considered highly fluid and subject to change. It should
+not be used by any other scripts checked into the QEMU tree.
+
+Proceed with caution!
+"""
+
+warnings.warn(_WMSG, FutureWarning)
+
+
 # The order of these fields impact the Sphinx documentation order.
 __all__ = (
 # Classes, most to least important
-- 
2.31.1
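Callers who knowingly opt in can silence an import-time `FutureWarning` like this one with the stdlib filter machinery; a standalone sketch (using a local function in place of the actual module import):

```python
import warnings


def import_with_warning():
    """Mimic the module-level warnings.warn fired at import time."""
    warnings.warn("API is unstable", FutureWarning)


# Default behavior: the warning fires and can be observed.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    import_with_warning()

# Opt-out: a knowing caller suppresses just FutureWarning.
with warnings.catch_warnings(record=True) as suppressed:
    warnings.simplefilter("ignore", FutureWarning)
    import_with_warning()
```

Note that Python caches imports, so for a real module the warning fires only on the *first* import in a process; the filter must be installed before that.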




[PATCH 10/20] python/aqmp: add _cb_outbound and _cb_inbound logging hooks

2021-06-30 Thread John Snow
Add hooks designed to log/filter incoming/outgoing messages. The primary
intent for these is to be able to support iotests which may want to log
messages with specific filters for reproducible output.

Another use is for plugging into Urwid frameworks; all messages in/out
can be automatically added to a rendering list for the purposes of a
qmp-shell like tool.

Signed-off-by: John Snow 
---
 python/qemu/aqmp/protocol.py | 50 +---
 1 file changed, 46 insertions(+), 4 deletions(-)

diff --git a/python/qemu/aqmp/protocol.py b/python/qemu/aqmp/protocol.py
index a32a8cbbf6..72c9e95198 100644
--- a/python/qemu/aqmp/protocol.py
+++ b/python/qemu/aqmp/protocol.py
@@ -176,6 +176,11 @@ class AsyncProtocol(Generic[T]):
  can be written after the super() call.
  - `_on_message`:
  Actions to be performed when a message is received.
+ - `_cb_outbound`:
+ Logging/Filtering hook for all outbound messages.
+ - `_cb_inbound`:
+ Logging/Filtering hook for all inbound messages.
+ This hook runs *before* `_on_message()`.
 
 :param name:
 Name used for logging messages, if any. By default, messages
@@ -700,6 +705,43 @@ async def _bh_recv_message(self) -> None:
 # Section: Message I/O
 # 
 
+@upper_half
+@bottom_half
+def _cb_outbound(self, msg: T) -> T:
+"""
+Callback: outbound message hook.
+
+This is intended for subclasses to be able to add arbitrary
+hooks to filter or manipulate outgoing messages. The base
+implementation does nothing but log the message without any
+manipulation of the message.
+
+:param msg: raw outbound message
+:return: final outbound message
+"""
+self.logger.debug("--> %s", str(msg))
+return msg
+
+@upper_half
+@bottom_half
+def _cb_inbound(self, msg: T) -> T:
+"""
+Callback: inbound message hook.
+
+This is intended for subclasses to be able to add arbitrary
+hooks to filter or manipulate incoming messages. The base
+implementation does nothing but log the message without any
+manipulation of the message.
+
+This method does not "handle" incoming messages; it is a filter.
+The actual "endpoint" for incoming messages is `_on_message()`.
+
+:param msg: raw inbound message
+:return: processed inbound message
+"""
+self.logger.debug("<-- %s", str(msg))
+return msg
+
 @upper_half
 @bottom_half
 async def _do_recv(self) -> T:
@@ -728,8 +770,8 @@ async def _recv(self) -> T:
 
 :return: A single (filtered, processed) protocol message.
 """
-# A forthcoming commit makes this method less trivial.
-return await self._do_recv()
+message = await self._do_recv()
+return self._cb_inbound(message)
 
 @upper_half
 @bottom_half
@@ -759,7 +801,7 @@ async def _send(self, msg: T) -> None:
 
 :raise OSError: For problems with the underlying stream.
 """
-# A forthcoming commit makes this method less trivial.
+msg = self._cb_outbound(msg)
 self._do_send(msg)
 
 @bottom_half
@@ -774,6 +816,6 @@ async def _on_message(self, msg: T) -> None:
 directly cause the loop to halt, so logic may be best-kept
 to a minimum if at all possible.
 
-:param msg: The incoming message
+:param msg: The incoming message, already logged/filtered.
 """
 # Nothing to do in the abstract case.
-- 
2.31.1
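Overriding these hooks in a subclass is how iotest-style filtering would look; a sketch with stand-in class names (not the real `AsyncProtocol`):

```python
class Proto:
    """Base with identity hooks, mirroring AsyncProtocol's defaults."""
    def _cb_outbound(self, msg):
        return msg

    def _cb_inbound(self, msg):
        return msg

    def recv(self, msg):
        # The filter runs *before* any message handling, as in _recv().
        return self._cb_inbound(msg)


class FilteredProto(Proto):
    """Subclass hook: scrub a timestamp field for reproducible logs."""
    def __init__(self):
        self.log = []

    def _cb_inbound(self, msg):
        msg = {k: v for k, v in msg.items() if k != 'timestamp'}
        self.log.append(('<--', msg))   # rendering list, qmp-shell style
        return msg


p = FilteredProto()
clean = p.recv({'event': 'SHUTDOWN', 'timestamp': {'seconds': 1}})
```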




[PATCH 07/20] python/aqmp: add runstate state machine to AsyncProtocol

2021-06-30 Thread John Snow
This serves a few purposes:

1. Protect interfaces when it's not safe to call them (via @require)

2. Add an interface by which an async client can determine if the state
has changed, for the purposes of connection management.

Signed-off-by: John Snow 
---
 python/qemu/aqmp/__init__.py |   5 +-
 python/qemu/aqmp/protocol.py | 133 +--
 2 files changed, 133 insertions(+), 5 deletions(-)

diff --git a/python/qemu/aqmp/__init__.py b/python/qemu/aqmp/__init__.py
index e003c898bd..5c44fabeea 100644
--- a/python/qemu/aqmp/__init__.py
+++ b/python/qemu/aqmp/__init__.py
@@ -22,11 +22,14 @@
 # the COPYING file in the top-level directory.
 
 from .error import AQMPError, MultiException
-from .protocol import ConnectError
+from .protocol import ConnectError, Runstate
 
 
 # The order of these fields impact the Sphinx documentation order.
 __all__ = (
+# Classes
+'Runstate',
+
 # Exceptions, most generic to most explicit
 'AQMPError',
 'ConnectError',
diff --git a/python/qemu/aqmp/protocol.py b/python/qemu/aqmp/protocol.py
index beb7e12d9c..a99a191982 100644
--- a/python/qemu/aqmp/protocol.py
+++ b/python/qemu/aqmp/protocol.py
@@ -12,11 +12,10 @@
 
 import asyncio
 from asyncio import StreamReader, StreamWriter
+from enum import Enum
+from functools import wraps
 from ssl import SSLContext
-# import exceptions will be removed in a forthcoming commit.
-# The problem stems from pylint/flake8 believing that 'Any'
-# is unused because of its only use in a string-quoted type.
-from typing import (  # pylint: disable=unused-import # noqa
+from typing import (
 Any,
 Awaitable,
 Callable,
@@ -26,6 +25,7 @@
 Tuple,
 TypeVar,
 Union,
+cast,
 )
 
 from .error import AQMPError, MultiException
@@ -45,6 +45,20 @@
 _FutureT = TypeVar('_FutureT', bound=Optional['asyncio.Future[Any]'])
 
 
+class Runstate(Enum):
+"""Protocol session runstate."""
+
+#: Fully quiesced and disconnected.
+IDLE = 0
+#: In the process of connecting or establishing a session.
+CONNECTING = 1
+#: Fully connected and active session.
+RUNNING = 2
+#: In the process of disconnecting.
+#: Runstate may be returned to `IDLE` by calling `disconnect()`.
+DISCONNECTING = 3
+
+
 class ConnectError(AQMPError):
 """
 Raised when the initial connection process has failed.
@@ -66,6 +80,75 @@ def __str__(self) -> str:
 return f"{self.error_message}: {self.exc!s}"
 
 
+class StateError(AQMPError):
+"""
+An API command (connect, execute, etc) was issued at an inappropriate time.
+
+This error is raised when a command like
+:py:meth:`~AsyncProtocol.connect()` is issued at an inappropriate
+time.
+
+:param error_message: Human-readable string describing the state violation.
+:param state: The actual `Runstate` seen at the time of the violation.
+:param required: The `Runstate` required to process this command.
+
+"""
+def __init__(self, error_message: str,
+ state: Runstate, required: Runstate):
+super().__init__(error_message)
+self.error_message = error_message
+self.state = state
+self.required = required
+
+
+F = TypeVar('F', bound=Callable[..., Any])  # pylint: disable=invalid-name
+
+
+# Don't Panic.
+def require(required_state: Runstate) -> Callable[[F], F]:
+"""
+Decorator: protect a method so it can only be run in a certain `Runstate`.
+
+:param required_state: The `Runstate` required to invoke this method.
+:raise StateError: When the required `Runstate` is not met.
+"""
+def _decorator(func: F) -> F:
+# _decorator is the decorator that is built by calling the
+# require() decorator factory; e.g.:
+#
+# @require(Runstate.IDLE) def # foo(): ...
+# will replace 'foo' with the result of '_decorator(foo)'.
+
+@wraps(func)
+def _wrapper(proto: 'AsyncProtocol[Any]',
+ *args: Any, **kwargs: Any) -> Any:
+# _wrapper is the function that gets executed prior to the
+# decorated method.
+
+if proto.runstate != required_state:
+if proto.runstate == Runstate.CONNECTING:
+emsg = "Client is currently connecting."
+elif proto.runstate == Runstate.DISCONNECTING:
+emsg = ("Client is disconnecting."
+" Call disconnect() to return to IDLE state.")
+elif proto.runstate == Runstate.RUNNING:
+emsg = "Client is already connected and running."
+elif proto.runstate == Runstate.IDLE:
+emsg = "Client is disconnected and idle."
+else:
+assert False
+raise StateError(emsg, proto.runstate, required_state)
+# No StateError, so call the wrapped method.
+return func(proto, *args, **kwargs)
+
+# 

[PATCH 19/20] python/aqmp: add asyncio_run compatibility wrapper

2021-06-30 Thread John Snow
Merely as a convenience for users stuck on Python 3.6. It isn't used by
the library itself.

Signed-off-by: John Snow 
---
 python/qemu/aqmp/util.py | 20 
 1 file changed, 20 insertions(+)

diff --git a/python/qemu/aqmp/util.py b/python/qemu/aqmp/util.py
index 2311be5893..356323ac70 100644
--- a/python/qemu/aqmp/util.py
+++ b/python/qemu/aqmp/util.py
@@ -109,6 +109,26 @@ async def wait_task_done(task: Optional['asyncio.Future[Any]']) -> None:
 break
 
 
+def asyncio_run(coro: Coroutine[Any, Any, T]) -> T:
+"""
+Python 3.6-compatible `asyncio.run` wrapper.
+
+:param coro: A coroutine to execute now.
+:return: The return value from the coroutine.
+"""
+# Python 3.7+
+if hasattr(asyncio, 'run'):
+# pylint: disable=no-member
+return asyncio.run(coro)  # type: ignore
+
+# Python 3.6
+loop = asyncio.get_event_loop()
+ret = loop.run_until_complete(coro)
+loop.close()
+
+return ret
+
+
 def pretty_traceback(prefix: str = "  | ") -> str:
 """
 Formats the current traceback, indented to provide visual distinction.
-- 
2.31.1
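As an illustration, the fallback logic above can be exercised standalone (a
hypothetical sketch of the same dispatch, not the packaged module):

```python
import asyncio


def asyncio_run(coro):
    """Run a coroutine to completion on Python 3.6+ (sketch)."""
    if hasattr(asyncio, 'run'):
        # Python 3.7+
        return asyncio.run(coro)

    # Python 3.6 fallback: drive the default loop manually.
    loop = asyncio.get_event_loop()
    ret = loop.run_until_complete(coro)
    loop.close()
    return ret


async def answer():
    await asyncio.sleep(0)
    return 42


print(asyncio_run(answer()))  # 42
```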




[PATCH 11/20] python/aqmp: add AsyncProtocol._readline() method

2021-06-30 Thread John Snow
This is added as a courtesy: many protocols are line-based, including
QMP. Putting it in AsyncProtocol lets us keep the QMP class
implementation just a pinch more abstract.

(And, if we decide to add a QTEST implementation later, it will need
this, too. (Yes, I have a QTEST implementation.))

Signed-off-by: John Snow 
---
 python/qemu/aqmp/protocol.py | 30 ++
 1 file changed, 30 insertions(+)

diff --git a/python/qemu/aqmp/protocol.py b/python/qemu/aqmp/protocol.py
index 72c9e95198..6a2a7be056 100644
--- a/python/qemu/aqmp/protocol.py
+++ b/python/qemu/aqmp/protocol.py
@@ -742,6 +742,36 @@ def _cb_inbound(self, msg: T) -> T:
 self.logger.debug("<-- %s", str(msg))
 return msg
 
+@upper_half
+@bottom_half
+async def _readline(self) -> bytes:
+"""
+Wait for a newline from the incoming reader.
+
+This method is provided as a convenience for upper-layer
+protocols, as many are line-based.
+
+This method *may* return a sequence of bytes without a trailing
+newline if EOF occurs, but *some* bytes were received. In this
+case, the next call will raise `EOFError`. It is assumed that
+the layer 4 protocol will decide if there is anything meaningful
+to be done with a partial message.
+
+:raise OSError: For stream-related errors.
+:raise EOFError:
+If the reader stream is at EOF and there are no bytes to return.
+:return: bytes, including the newline.
+
+"""
+assert self._reader is not None
+msg_bytes = await self._reader.readline()
+
+if not msg_bytes:
+if self._reader.at_eof():
+raise EOFError
+
+return msg_bytes
+
 @upper_half
 @bottom_half
 async def _do_recv(self) -> T:
-- 
2.31.1
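The partial-message behavior documented above comes straight from asyncio's
StreamReader; a quick demonstration with a hand-fed reader:

```python
import asyncio


async def demo():
    reader = asyncio.StreamReader()
    reader.feed_data(b'{"event": "SHUTDOWN"}\n')
    reader.feed_data(b'{"trunc')       # no trailing newline
    reader.feed_eof()

    full = await reader.readline()     # complete line, newline included
    partial = await reader.readline()  # partial bytes returned at EOF
    empty = await reader.readline()    # b'' once EOF is fully drained
    return full, partial, empty, reader.at_eof()


print(asyncio.run(demo()))
```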




[PATCH 01/20] python/pylint: Add exception for TypeVar names ('T')

2021-06-30 Thread John Snow
'T' is a common TypeVar name, allow its use.

See also https://github.com/PyCQA/pylint/issues/3401 -- In the future,
we might be able to have a separate list of acceptable names for
TypeVars exclusively.

Signed-off-by: John Snow 
---
 python/setup.cfg | 1 +
 1 file changed, 1 insertion(+)

diff --git a/python/setup.cfg b/python/setup.cfg
index 11f71d5312..cfbe17f0f6 100644
--- a/python/setup.cfg
+++ b/python/setup.cfg
@@ -100,6 +100,7 @@ good-names=i,
fh,  # fh = open(...)
fd,  # fd = os.open(...)
c,   # for c in string: ...
+   T,   # for TypeVars. See pylint#3401
 
 [pylint.similarities]
 # Ignore imports when computing similarities.
-- 
2.31.1
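For context, the kind of code this exception unblocks — a single-letter
TypeVar that pylint's invalid-name check would otherwise reject:

```python
from typing import List, TypeVar

T = TypeVar('T')  # flagged as invalid-name without the good-names entry


def first(items: List[T]) -> T:
    """Return the first element, preserving its static type."""
    return items[0]


print(first([1, 2, 3]))   # 1
print(first(['a', 'b']))  # a
```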




[PATCH 03/20] python/aqmp: add asynchronous QMP (AQMP) subpackage

2021-06-30 Thread John Snow
For now, it's empty! Soon, it won't be.

Signed-off-by: John Snow 
---
 python/qemu/aqmp/__init__.py | 27 +++
 python/qemu/aqmp/py.typed|  0
 python/setup.cfg |  1 +
 3 files changed, 28 insertions(+)
 create mode 100644 python/qemu/aqmp/__init__.py
 create mode 100644 python/qemu/aqmp/py.typed

diff --git a/python/qemu/aqmp/__init__.py b/python/qemu/aqmp/__init__.py
new file mode 100644
index 00..4c713b3ccf
--- /dev/null
+++ b/python/qemu/aqmp/__init__.py
@@ -0,0 +1,27 @@
+"""
+QEMU Monitor Protocol (QMP) development library & tooling.
+
+This package provides a fairly low-level class for communicating
+asynchronously with QMP protocol servers, as implemented by QEMU, the
+QEMU Guest Agent, and the QEMU Storage Daemon.
+
+:py:class:`~qmp_protocol.QMP` provides the main functionality of this
+package. All errors raised by this library derive from `AQMPError`; see
+`aqmp.error` for additional detail. See `aqmp.events` for an in-depth
+tutorial on managing QMP events.
+"""
+
+# Copyright (C) 2020, 2021 John Snow for Red Hat, Inc.
+#
+# Authors:
+#  John Snow 
+#
+# Based on earlier work by Luiz Capitulino .
+#
+# This work is licensed under the terms of the GNU GPL, version 2.  See
+# the COPYING file in the top-level directory.
+
+
+# The order of these fields impact the Sphinx documentation order.
+__all__ = (
+)
diff --git a/python/qemu/aqmp/py.typed b/python/qemu/aqmp/py.typed
new file mode 100644
index 00..e69de29bb2
diff --git a/python/setup.cfg b/python/setup.cfg
index e1c48eb706..bce8807702 100644
--- a/python/setup.cfg
+++ b/python/setup.cfg
@@ -27,6 +27,7 @@ packages =
 qemu.qmp
 qemu.machine
 qemu.utils
+qemu.aqmp
 
 [options.package_data]
 * = py.typed
-- 
2.31.1




[PATCH 06/20] python/aqmp: add generic async message-based protocol support

2021-06-30 Thread John Snow
This is the bare minimum that you need to establish a full-duplex async
message-based protocol with Python's asyncio.

The features to be added in forthcoming commits are:

- Runstate tracking
- Logging
- Support for incoming connections via accept()
- _cb_outbound, _cb_inbound message hooks
- _readline() method
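
A condensed sketch of the extension contract: a subclass supplies
`_do_recv()`/`_do_send()` while the generic base decides when they run
(heavily simplified and hypothetical; the real class adds queues, reader and
writer tasks, and state tracking):

```python
import asyncio
from typing import Generic, TypeVar

T = TypeVar('T')


class AsyncProtocol(Generic[T]):
    """Generic base: knows *when* to send/recv, not *how*."""

    async def _do_recv(self) -> T:
        raise NotImplementedError

    async def _do_send(self, msg: T) -> None:
        raise NotImplementedError


class EchoProtocol(AsyncProtocol[bytes]):
    """Toy implementation backed by an in-memory queue."""

    def __init__(self):
        self._queue = asyncio.Queue()

    async def _do_send(self, msg: bytes) -> None:
        await self._queue.put(msg)

    async def _do_recv(self) -> bytes:
        return await self._queue.get()


async def demo():
    proto = EchoProtocol()
    await proto._do_send(b'ping')
    return await proto._do_recv()


print(asyncio.run(demo()))  # b'ping'
```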

Signed-off-by: John Snow 

---

A note for reviewers: If you believe that it is unsafe to call
certain methods at certain times, you're absolutely correct!
These interfaces are protected in the following commit.

Some of the docstrings have dangling references, but they will resolve
themselves within the next few commits. Forgive me for not wanting to
rewrite them ... !

Signed-off-by: John Snow 
---
 python/qemu/aqmp/__init__.py |   4 +-
 python/qemu/aqmp/protocol.py | 523 +++
 python/qemu/aqmp/util.py |  54 
 3 files changed, 580 insertions(+), 1 deletion(-)
 create mode 100644 python/qemu/aqmp/protocol.py

diff --git a/python/qemu/aqmp/__init__.py b/python/qemu/aqmp/__init__.py
index 8e955d784d..e003c898bd 100644
--- a/python/qemu/aqmp/__init__.py
+++ b/python/qemu/aqmp/__init__.py
@@ -22,12 +22,14 @@
 # the COPYING file in the top-level directory.
 
 from .error import AQMPError, MultiException
+from .protocol import ConnectError
 
 
 # The order of these fields impact the Sphinx documentation order.
 __all__ = (
-# Exceptions
+# Exceptions, most generic to most explicit
 'AQMPError',
+'ConnectError',
 
 # Niche topics
 'MultiException',
diff --git a/python/qemu/aqmp/protocol.py b/python/qemu/aqmp/protocol.py
new file mode 100644
index 00..beb7e12d9c
--- /dev/null
+++ b/python/qemu/aqmp/protocol.py
@@ -0,0 +1,523 @@
+"""
+Generic Asynchronous Message-based Protocol Support
+
+This module provides a generic framework for sending and receiving
+messages over an asyncio stream. `AsyncProtocol` is an abstract class
+that implements the core mechanisms of a simple send/receive protocol,
+and is designed to be extended.
+
+In this package, it is used as the implementation for the
+:py:class:`~qmp_protocol.QMP` class.
+"""
+
+import asyncio
+from asyncio import StreamReader, StreamWriter
+from ssl import SSLContext
+# import exceptions will be removed in a forthcoming commit.
+# The problem stems from pylint/flake8 believing that 'Any'
+# is unused because of its only use in a string-quoted type.
+from typing import (  # pylint: disable=unused-import # noqa
+Any,
+Awaitable,
+Callable,
+Generic,
+List,
+Optional,
+Tuple,
+TypeVar,
+Union,
+)
+
+from .error import AQMPError, MultiException
+from .util import (
+bottom_half,
+create_task,
+flush,
+is_closing,
+upper_half,
+wait_closed,
+wait_task_done,
+)
+
+
+T = TypeVar('T')
+_TaskFN = Callable[[], Awaitable[None]]  # aka ``async def func() -> None``
+_FutureT = TypeVar('_FutureT', bound=Optional['asyncio.Future[Any]'])
+
+
+class ConnectError(AQMPError):
+"""
+Raised when the initial connection process has failed.
+
+This Exception always wraps a "root cause" exception that can be
+interrogated for additional information.
+
+:param error_message: Human-readable string describing the error.
+:param exc: The root-cause exception.
+"""
+def __init__(self, error_message: str, exc: Exception):
+super().__init__(error_message)
+#: Human-readable error string
+self.error_message: str = error_message
+#: Wrapped root cause exception
+self.exc: Exception = exc
+
+def __str__(self) -> str:
+return f"{self.error_message}: {self.exc!s}"
+
+
+class AsyncProtocol(Generic[T]):
+"""
+AsyncProtocol implements a generic async message-based protocol.
+
+This protocol assumes the basic unit of information transfer between
+client and server is a "message", the details of which are left up
+to the implementation. It assumes the sending and receiving of these
+messages is full-duplex and not necessarily correlated; i.e. it
+supports asynchronous inbound messages.
+
+It is designed to be extended by a specific protocol which provides
+the implementations for how to read and send messages. These must be
+defined in `_do_recv()` and `_do_send()`, respectively.
+
+Other callbacks have a default implementation, but are intended to be
+either extended or overridden:
+
+ - `_begin_new_session`:
+ The base implementation starts the reader/writer tasks.
+ A protocol implementation can override this call, inserting
+ actions to be taken prior to starting the reader/writer tasks
+ before the super() call; actions needing to occur afterwards
+ can be written after the super() call.
+ - `_on_message`:
+ Actions to be performed when a message is received.
+"""
+# pylint: disable=too-many-instance-attributes
+
+# -
+# Section: 

[PATCH 02/20] python/pylint: disable too-many-function-args

2021-06-30 Thread John Snow
too-many-function-args seems prone to failure when considering
things like Method Resolution Order, which mypy gets correct. When
dealing with multiple inheritance, pylint doesn't seem to understand
which method will actually get called, while mypy does.

Remove the less powerful, redundant check.

Signed-off-by: John Snow 
---
 python/setup.cfg | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/python/setup.cfg b/python/setup.cfg
index cfbe17f0f6..e1c48eb706 100644
--- a/python/setup.cfg
+++ b/python/setup.cfg
@@ -87,7 +87,7 @@ ignore_missing_imports = True
 # --enable=similarities". If you want to run only the classes checker, but have
 # no Warning level messages displayed, use "--disable=all --enable=classes
 # --disable=W".
-disable=
disable=too-many-function-args,  # mypy handles this with fewer false positives.
 
 [pylint.basic]
 # Good variable names which should always be accepted, separated by a comma.
-- 
2.31.1




[PATCH 05/20] python/aqmp: add asyncio compatibility wrappers

2021-06-30 Thread John Snow
Python 3.6 does not have all of the goodies that Python 3.7 does, and I
need to support both. Add some compatibility wrappers needed for this
purpose.

(Note: Python 3.6 is EOL December 2021.)

Signed-off-by: John Snow 
---
 python/qemu/aqmp/util.py | 77 
 1 file changed, 77 insertions(+)
 create mode 100644 python/qemu/aqmp/util.py

diff --git a/python/qemu/aqmp/util.py b/python/qemu/aqmp/util.py
new file mode 100644
index 00..c88a2201bc
--- /dev/null
+++ b/python/qemu/aqmp/util.py
@@ -0,0 +1,77 @@
+"""
+Miscellaneous Utilities
+
+This module primarily provides compatibility wrappers for Python 3.6 to
+provide some features that otherwise become available in Python 3.7+.
+"""
+
+import asyncio
+import sys
+from typing import (
+Any,
+Coroutine,
+Optional,
+TypeVar,
+)
+
+
+T = TypeVar('T')
+
+
+def create_task(coro: Coroutine[Any, Any, T],
+loop: Optional[asyncio.AbstractEventLoop] = None
+) -> 'asyncio.Future[T]':
+"""
+Python 3.6-compatible `asyncio.create_task` wrapper.
+
+:param coro: The coroutine to execute in a task.
+:param loop: Optionally, the loop to create the task in.
+
+:return: An `asyncio.Future` object.
+"""
+# Python 3.7+:
+if sys.version_info >= (3, 7):
+# pylint: disable=no-member
+if loop is not None:
+return loop.create_task(coro)
+return asyncio.create_task(coro)
+
+# Python 3.6:
+return asyncio.ensure_future(coro, loop=loop)
+
+
+def is_closing(writer: asyncio.StreamWriter) -> bool:
+"""
+Python 3.6-compatible `asyncio.StreamWriter.is_closing` wrapper.
+
+:param writer: The `asyncio.StreamWriter` object.
+:return: `True` if the writer is closing, or closed.
+"""
+if hasattr(writer, 'is_closing'):
+# Python 3.7+
+return writer.is_closing()  # type: ignore
+
+# Python 3.6:
+transport = writer.transport
+assert isinstance(transport, asyncio.WriteTransport)
+return transport.is_closing()
+
+
+async def wait_closed(writer: asyncio.StreamWriter) -> None:
+"""
+Python 3.6-compatible `asyncio.StreamWriter.wait_closed` wrapper.
+
+:param writer: The `asyncio.StreamWriter` to wait on.
+"""
+if hasattr(writer, 'wait_closed'):
+# Python 3.7+
+await writer.wait_closed()  # type: ignore
+else:
+# Python 3.6
+transport = writer.transport
+assert isinstance(transport, asyncio.WriteTransport)
+
+while not transport.is_closing():
+await asyncio.sleep(0.0)
+while transport.get_write_buffer_size() > 0:
+await asyncio.sleep(0.0)
-- 
2.31.1
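The version-dispatch pattern above can be tried in isolation; a sketch of the
same `create_task` shim:

```python
import asyncio
import sys


def create_task(coro, loop=None):
    """Schedule a coroutine as a task on Python 3.6+ (sketch)."""
    if sys.version_info >= (3, 7):
        if loop is not None:
            return loop.create_task(coro)
        return asyncio.create_task(coro)

    # Python 3.6: ensure_future accepts an explicit loop argument.
    return asyncio.ensure_future(coro, loop=loop)


async def main():
    task = create_task(asyncio.sleep(0, result='done'))
    return await task


print(asyncio.run(main()))  # done
```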




[PATCH 08/20] python/aqmp: add logging to AsyncProtocol

2021-06-30 Thread John Snow
Give the connection and the reader/writer tasks nicknames, and add
logging statements throughout.
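
The per-connection naming scheme rides on the standard library's hierarchical
loggers; a quick sketch of the `getChild` mechanism:

```python
import logging

logger = logging.getLogger('qemu.aqmp.protocol')

# An unnamed connection logs to the shared module logger ...
unnamed = logger
# ... while a named one gets its own child in the hierarchy.
named = logger.getChild('my-vm')

print(unnamed.name)  # qemu.aqmp.protocol
print(named.name)    # qemu.aqmp.protocol.my-vm
```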

Signed-off-by: John Snow 
---
 python/qemu/aqmp/protocol.py | 64 
 python/qemu/aqmp/util.py | 32 ++
 2 files changed, 90 insertions(+), 6 deletions(-)

diff --git a/python/qemu/aqmp/protocol.py b/python/qemu/aqmp/protocol.py
index a99a191982..dd8564ee02 100644
--- a/python/qemu/aqmp/protocol.py
+++ b/python/qemu/aqmp/protocol.py
@@ -14,6 +14,7 @@
 from asyncio import StreamReader, StreamWriter
 from enum import Enum
 from functools import wraps
+import logging
 from ssl import SSLContext
 from typing import (
 Any,
@@ -34,6 +35,7 @@
 create_task,
 flush,
 is_closing,
+pretty_traceback,
 upper_half,
 wait_closed,
 wait_task_done,
@@ -174,14 +176,28 @@ class AsyncProtocol(Generic[T]):
  can be written after the super() call.
  - `_on_message`:
  Actions to be performed when a message is received.
+
+:param name:
+Name used for logging messages, if any. By default, messages
+will log to 'qemu.aqmp.protocol', but each individual connection
+can be given its own logger by giving it a name; messages will
+then log to 'qemu.aqmp.protocol.${name}'.
 """
 # pylint: disable=too-many-instance-attributes
 
+#: Logger object for debugging messages from this connection.
+logger = logging.getLogger(__name__)
+
 # -
 # Section: Public interface
 # -
 
-def __init__(self) -> None:
+def __init__(self, name: Optional[str] = None) -> None:
+#: The nickname for this connection, if any.
+self.name: Optional[str] = name
+if self.name is not None:
+self.logger = self.logger.getChild(self.name)
+
 # stream I/O
 self._reader: Optional[StreamReader] = None
 self._writer: Optional[StreamWriter] = None
@@ -212,6 +228,15 @@ def __init__(self) -> None:
 #: An `asyncio.Event` that signals when `runstate` is changed.
 self.runstate_changed: asyncio.Event = asyncio.Event()
 
+def __repr__(self) -> str:
+argstr = ''
+if self.name is not None:
+argstr += f"name={self.name}"
+return "{:s}({:s})".format(
+type(self).__name__,
+argstr,
+)
+
 @property
 def runstate(self) -> Runstate:
 """The current `Runstate` of the connection."""
@@ -301,6 +326,8 @@ async def _new_session(self,
 assert self.runstate == Runstate.IDLE
 self._set_state(Runstate.CONNECTING)
 
+if not self._outgoing.empty():
+self.logger.warning("Outgoing message queue was not empty!")
 self._outgoing = asyncio.Queue()
 
 phase = "connection"
@@ -311,9 +338,15 @@ async def _new_session(self,
 await self._begin_new_session()
 
 except Exception as err:
-# Reset from CONNECTING back to IDLE.
-await self.disconnect()
 emsg = f"Failed to establish {phase}"
+self.logger.error("%s:\n%s\n", emsg, pretty_traceback())
+try:
+# Reset from CONNECTING back to IDLE.
+await self.disconnect()
+except:
+emsg = "Unexpected bottom half exceptions"
+self.logger.error("%s:\n%s\n", emsg, pretty_traceback())
+raise
 raise ConnectError(emsg, err) from err
 
 assert self.runstate == Runstate.RUNNING
@@ -330,12 +363,16 @@ async def _do_connect(self, address: Union[str, Tuple[str, int]],
 
 :raise OSError: For stream-related errors.
 """
+self.logger.debug("Connecting ...")
+
 if isinstance(address, tuple):
 connect = asyncio.open_connection(address[0], address[1], ssl=ssl)
 else:
 connect = asyncio.open_unix_connection(path=address, ssl=ssl)
 self._reader, self._writer = await connect
 
+self.logger.debug("Connected.")
+
 @upper_half
 async def _begin_new_session(self) -> None:
 """
@@ -343,8 +380,8 @@ async def _begin_new_session(self) -> None:
 """
 assert self.runstate == Runstate.CONNECTING
 
-reader_coro = self._bh_loop_forever(self._bh_recv_message)
-writer_coro = self._bh_loop_forever(self._bh_send_message)
+reader_coro = self._bh_loop_forever(self._bh_recv_message, 'Reader')
+writer_coro = self._bh_loop_forever(self._bh_send_message, 'Writer')
 
 self._reader_task = create_task(reader_coro)
 self._writer_task = create_task(writer_coro)
@@ -374,6 +411,7 @@ def _schedule_disconnect(self, force: bool = False) -> None:
 terminating execution. When `True`, terminate immediately.
 """
 if not self._dc_task:
+self.logger.debug("scheduling disconnect.")
 self._dc_task = 

[PATCH 04/20] python/aqmp: add error classes

2021-06-30 Thread John Snow
Signed-off-by: John Snow 
---
 python/qemu/aqmp/__init__.py |  7 +++
 python/qemu/aqmp/error.py| 97 
 2 files changed, 104 insertions(+)
 create mode 100644 python/qemu/aqmp/error.py

diff --git a/python/qemu/aqmp/__init__.py b/python/qemu/aqmp/__init__.py
index 4c713b3ccf..8e955d784d 100644
--- a/python/qemu/aqmp/__init__.py
+++ b/python/qemu/aqmp/__init__.py
@@ -21,7 +21,14 @@
 # This work is licensed under the terms of the GNU GPL, version 2.  See
 # the COPYING file in the top-level directory.
 
+from .error import AQMPError, MultiException
+
 
 # The order of these fields impact the Sphinx documentation order.
 __all__ = (
+# Exceptions
+'AQMPError',
+
+# Niche topics
+'MultiException',
 )
diff --git a/python/qemu/aqmp/error.py b/python/qemu/aqmp/error.py
new file mode 100644
index 00..126f77bb5c
--- /dev/null
+++ b/python/qemu/aqmp/error.py
@@ -0,0 +1,97 @@
+"""
+AQMP Error Classes
+
+This package seeks to provide semantic error classes that are intended
+to be used directly by clients when they would like to handle particular
+semantic failures (e.g. "failed to connect") without needing to know the
+enumeration of possible reasons for that failure.
+
+AQMPError serves as the ancestor for *almost* all exceptions raised by
+this package, and is suitable for use in handling semantic errors from
+this library. In most cases, individual public methods will attempt to
+catch and re-encapsulate various exceptions to provide a semantic
+error-handling interface.
+
+.. caution::
+
+The only exception that is not an `AQMPError` is
+`MultiException`. It is special, and used to encapsulate one-or-more
+exceptions of an arbitrary kind; this exception MAY be raised on
+`disconnect()` when there are two or more exceptions from the AQMP
+event loop to report back to the caller.
+
+Great pains have been taken to prevent this circumstance, but in
+certain cases these exceptions may occasionally be (unfortunately)
+visible. See `MultiException` and `AsyncProtocol.disconnect()` for
+more details.
+
+
+.. admonition:: AQMP Exception Hierarchy Reference
+
+ |   `Exception`
+ |+-- `MultiException`
+ |+-- `AQMPError`
+ | +-- `ConnectError`
+ | +-- `StateError`
+ | +-- `ExecInterruptedError`
+ | +-- `ExecuteError`
+ | +-- `ListenerError`
+ | +-- `ProtocolError`
+ |  +-- `DeserializationError`
+ |  +-- `UnexpectedTypeError`
+ |  +-- `ServerParseError`
+ |  +-- `BadReplyError`
+ |  +-- `GreetingError`
+ |  +-- `NegotiationError`
+"""
+
+from typing import Iterable, Iterator, List
+
+
+class AQMPError(Exception):
+"""Abstract error class for all errors originating from this package."""
+
+
+class ProtocolError(AQMPError):
+"""
+Abstract error class for protocol failures.
+
+Semantically, these errors are generally the fault of either the
+protocol server or a bug in this library.
+
+:param error_message: Human-readable string describing the error.
+"""
+def __init__(self, error_message: str):
+super().__init__(error_message)
+#: Human-readable error message, without any prefix.
+self.error_message: str = error_message
+
+
+class MultiException(Exception):
+"""
+Used for multiplexing exceptions.
+
+This exception is used in the case that errors were encountered in both the
+Reader and Writer tasks, and we must raise more than one.
+
+PEP 0654 seeks to remedy this clunky infrastructure, but it will not be
+available for quite some time -- possibly Python 3.11 or even later.
+
+:param exceptions: An iterable of `BaseException` objects.
+"""
+def __init__(self, exceptions: Iterable[BaseException]):
+super().__init__(exceptions)
+self._exceptions: List[BaseException] = list(exceptions)
+
+def __str__(self) -> str:
+ret = "--\n"
+ret += "Multiple Exceptions occurred:\n"
+ret += "\n"
+for i, exc in enumerate(self._exceptions):
+ret += f"{i}) {str(exc)}\n"
+ret += "\n"
+ret += "-\n"
+return ret
+
+def __iter__(self) -> Iterator[BaseException]:
+return iter(self._exceptions)
-- 
2.31.1
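To make the "niche" caveat concrete, a sketch of how a caller might unpack a
MultiException raised from disconnect() (the class trimmed to its essentials):

```python
from typing import Iterable, Iterator, List


class MultiException(Exception):
    """Carrier for several exceptions raised by independent tasks."""

    def __init__(self, exceptions: Iterable[BaseException]):
        super().__init__(exceptions)
        self._exceptions: List[BaseException] = list(exceptions)

    def __iter__(self) -> Iterator[BaseException]:
        return iter(self._exceptions)


try:
    raise MultiException([OSError('reader died'), ValueError('bad reply')])
except MultiException as multi:
    for exc in multi:
        print(type(exc).__name__, '-', exc)
```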




[PATCH 00/20] python: introduce Asynchronous QMP package

2021-06-30 Thread John Snow
GitLab: https://gitlab.com/jsnow/qemu/-/commits/python-async-qmp-aqmp
CI: https://gitlab.com/jsnow/qemu/-/pipelines/330003554
Docs: https://people.redhat.com/~jsnow/sphinx/html/qemu.aqmp.html
Based-on: <20210701020921.1679468-1-js...@redhat.com>
  [PULL 00/15] Python patches

Hi!

This patch series adds an Asynchronous QMP package to the Python
library. It offers a few improvements over the previous library:

- out-of-band support
- true asynchronous event support
- avoids undocumented interfaces abusing non-blocking sockets

This library serves as the basis for a new qmp-shell program that will
offer improved reconnection support, true asynchronous display of
events, VM and job status update notifiers, and so on.

My intent is to eventually publish this library directly to PyPI as a
standalone package. I would like to phase out our usage of the old QMP
library over time; eventually replacing it entirely with this one.

This series looks big by line count, but it's *mostly*
docstrings. Seriously!

This package has *no* external dependencies whatsoever.

Notes & Design
==

Here are some notes on the design of how the library works, to serve as
a primer for review; however I also **highly recommend** browsing the
generated Sphinx documentation for this series.

Here's that link again:
https://people.redhat.com/~jsnow/sphinx/html/qemu.aqmp.html

The core machinery is split between the AsyncProtocol and QMP
classes. AsyncProtocol provides the generic machinery, while QMP
provides the QMP-specific details.

The design uses two independent coroutines that act as the "bottom
half", a writer task and a reader task. These tasks run for the duration
of the connection and independently send and receive messages,
respectively.

A third task, disconnect, is scheduled asynchronously whenever an
unrecoverable error occurs and facilitates coalescing of the other two
tasks.

This diagram for how execute() operates may be helpful for understanding
how AsyncProtocol is laid out. The arrows indicate the direction of a
QMP message; the long horizontal dash indicates the separation between
the upper and lower half of the event loop. The queue mechanisms between
both dashes serve as the intermediaries between the upper and lower
half.

                   +---------+
                   | caller  |
                   +---------+
                        ^ |
                        | v
                   +---------+
      +----------> |execute()| ------------+
      |            +---------+             |
      |                                    |
[----------------------------------------------------]
      |                                    |
      |                                    v
+-----------+  +----------------+  +----------------+
| ExecQueue |  | EventListeners |  | Outbound Queue |
+-----------+  +----------------+  +----------------+
      ^                ^                   |
      |                |                   |
[----------------------------------------------------]
      |                |                   |
      |                |                   v
  +-----------------------+    +-----------------------+
  | Reader Task/Coroutine |    | Writer Task/Coroutine |
  +-----------------------+    +-----------------------+
              ^                            |
              |                            v
       +--------------+             +--------------+
       | StreamReader |             | StreamWriter |
       +--------------+             +--------------+

The caller will invoke execute(), which in turn will deposit a message
in the outbound send queue. This will wake up the writer task, which
will send the message over the wire.

The execute() method will then yield to wait for a reply delivered to an
execution queue created solely for that execute statement.

When a message arrives, the Reader task will unblock and route the
message either to the EventListener subsystem, or place it in the
appropriate pending execution queue.

Once a message is placed in the pending execution queue, execute() will
unblock and the execution will conclude, returning the result of the RPC
call to the caller.
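
The round trip described above can be modeled with two queues standing in for
the wire (a toy sketch; the real code correlates replies to their pending
execution queues by ID, and the reader/writer are long-running tasks):

```python
import asyncio


async def demo():
    outbound = asyncio.Queue()    # execute() -> writer task
    exec_queue = asyncio.Queue()  # reader task -> execute()

    async def bottom_half():
        # Writer side: pop the request; reader side: deliver the reply.
        msg = await outbound.get()
        await exec_queue.put({'return': {}, 'id': msg['id']})

    worker = asyncio.create_task(bottom_half())

    # execute(): enqueue the command, then yield until the reply arrives.
    await outbound.put({'execute': 'query-status', 'id': 1})
    reply = await exec_queue.get()
    await worker
    return reply


print(asyncio.run(demo()))  # {'return': {}, 'id': 1}
```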

Ugly Bits
=

- MultiException is ... wonky. I am still working out how to avoid needing it.
  See patch 04/20 for details here, or see
  https://people.redhat.com/~jsnow/sphinx/html/qemu.aqmp.error.html

  Patch 06/20 also goes into details of the ugliness; see
  AsyncProtocol._results or view the same information here:
  
https://people.redhat.com/~jsnow/sphinx/html/_modules/qemu/aqmp/protocol.html#AsyncProtocol._results

- There are quite a few lingering questions I have over the design of the
  EventListener subsystem; I wrote about those ugly bits in excruciating detail
  in patch 14/20.

  You can view them formatted nicely here:
  

Re: [PATCH v3 00/37] target/riscv: support packed extension v0.9.4

2021-06-30 Thread LIU Zhiwei



On 2021/7/1 9:30 AM, Alistair Francis wrote:

On Thu, Jun 24, 2021 at 9:14 PM LIU Zhiwei  wrote:

This patchset implements the packed extension for RISC-V on QEMU.

You can also find this patch set on my
repo(https://github.com/romanheros/qemu.git branch:packed-upstream-v3).

Features:
* support specification packed extension
   v0.9.4(https://github.com/riscv/riscv-p-spec/)
* support basic packed extension.
* support Zpsoperand.

There is now a 0.9.5; do you have plans to support that?


Thanks for pointing it out.

After reviewing the latest changes, I think the delta is small, so I will 
not update the implementation to v0.9.5. I hope the next version we 
support will be v1.0.


Thanks,
Zhiwei



Alistair


v3:
* split 32 bit vector operations.

v2:
* remove all the TARGET_RISCV64 macro.
* use tcg_gen_vec_* to accelabrate.
* update specficication to latest v0.9.4
* fix kmsxda32, kmsda32,kslra32,smal

LIU Zhiwei (37):
   target/riscv: implementation-defined constant parameters
   target/riscv: Make the vector helper functions public
   target/riscv: 16-bit Addition & Subtraction Instructions
   target/riscv: 8-bit Addition & Subtraction Instruction
   target/riscv: SIMD 16-bit Shift Instructions
   target/riscv: SIMD 8-bit Shift Instructions
   target/riscv: SIMD 16-bit Compare Instructions
   target/riscv: SIMD 8-bit Compare Instructions
   target/riscv: SIMD 16-bit Multiply Instructions
   target/riscv: SIMD 8-bit Multiply Instructions
   target/riscv: SIMD 16-bit Miscellaneous Instructions
   target/riscv: SIMD 8-bit Miscellaneous Instructions
   target/riscv: 8-bit Unpacking Instructions
   target/riscv: 16-bit Packing Instructions
   target/riscv: Signed MSW 32x32 Multiply and Add Instructions
   target/riscv: Signed MSW 32x16 Multiply and Add Instructions
   target/riscv: Signed 16-bit Multiply 32-bit Add/Subtract Instructions
   target/riscv: Signed 16-bit Multiply 64-bit Add/Subtract Instructions
   target/riscv: Partial-SIMD Miscellaneous Instructions
   target/riscv: 8-bit Multiply with 32-bit Add Instructions
   target/riscv: 64-bit Add/Subtract Instructions
   target/riscv: 32-bit Multiply 64-bit Add/Subtract Instructions
   target/riscv: Signed 16-bit Multiply with 64-bit Add/Subtract
 Instructions
   target/riscv: Non-SIMD Q15 saturation ALU Instructions
   target/riscv: Non-SIMD Q31 saturation ALU Instructions
   target/riscv: 32-bit Computation Instructions
   target/riscv: Non-SIMD Miscellaneous Instructions
   target/riscv: RV64 Only SIMD 32-bit Add/Subtract Instructions
   target/riscv: RV64 Only SIMD 32-bit Shift Instructions
   target/riscv: RV64 Only SIMD 32-bit Miscellaneous Instructions
   target/riscv: RV64 Only SIMD Q15 saturating Multiply Instructions
   target/riscv: RV64 Only 32-bit Multiply Instructions
   target/riscv: RV64 Only 32-bit Multiply & Add Instructions
   target/riscv: RV64 Only 32-bit Parallel Multiply & Add Instructions
   target/riscv: RV64 Only Non-SIMD 32-bit Shift Instructions
   target/riscv: RV64 Only 32-bit Packing Instructions
   target/riscv: configure and turn on packed extension from command line

  target/riscv/cpu.c  |   34 +
  target/riscv/cpu.h  |6 +
  target/riscv/helper.h   |  330 ++
  target/riscv/insn32.decode  |  370 +++
  target/riscv/insn_trans/trans_rvp.c.inc | 1155 +++
  target/riscv/internals.h|   50 +
  target/riscv/meson.build|1 +
  target/riscv/packed_helper.c| 3851 +++
  target/riscv/translate.c|3 +
  target/riscv/vector_helper.c|   82 +-
  10 files changed, 5824 insertions(+), 58 deletions(-)
  create mode 100644 target/riscv/insn_trans/trans_rvp.c.inc
  create mode 100644 target/riscv/packed_helper.c

--
2.17.1






Re: [PATCH 15/18] vhost-net: control virtqueue support

2021-06-30 Thread Jason Wang



On 2021/7/1 1:33 AM, Eugenio Perez Martin wrote:

On Mon, Jun 21, 2021 at 6:18 AM Jason Wang  wrote:

In the past we assumed there was no cvq, but this is not true when we
need control virtqueue support for vhost-user backends. So this patch
implements the control virtqueue support for vhost-net. As datapath,
the control virtqueue is also required to be coupled with the
NetClientState. The vhost_net_start/stop() are tweaked to accept the
number of datapath queue pairs plus the number of control
virtqueue for us to start and stop the vhost device.

Signed-off-by: Jason Wang 
---
  hw/net/vhost_net.c  | 43 ++---
  hw/net/virtio-net.c |  4 ++--
  include/net/vhost_net.h |  6 --
  3 files changed, 38 insertions(+), 15 deletions(-)

diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index ef1370bd92..fe2fd7e3d5 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -311,11 +311,14 @@ static void vhost_net_stop_one(struct vhost_net *net,
  }

  int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
-int total_queues)
+int data_qps, int cvq)

I can see the convenience of being an int, but maybe it is more clear
to use a boolean?



I tend to leave this for future extensions. E.g., we may have more than 
one cvq.






  {
  BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(dev)));
  VirtioBusState *vbus = VIRTIO_BUS(qbus);
  VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(vbus);
+int total_notifiers = data_qps * 2 + cvq;
+VirtIONet *n = VIRTIO_NET(dev);
+int nvhosts = data_qps + cvq;
  struct vhost_net *net;
  int r, e, i;
  NetClientState *peer;
@@ -325,9 +328,14 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
  return -ENOSYS;
  }

-for (i = 0; i < total_queues; i++) {
+for (i = 0; i < nvhosts; i++) {
+
+if (i < data_qps) {
+peer = qemu_get_peer(ncs, i);
+} else { /* Control Virtqueue */
+peer = qemu_get_peer(ncs, n->max_qps);

The field max_qps should be max_queues until the next patch, or maybe
we can reorder the commits and then rename the field before this
commit?



You're right, let me re-order the patches.

Thanks




Same comment later on this function and in vhost_net_stop.

Thanks!


+}

-peer = qemu_get_peer(ncs, i);
  net = get_vhost_net(peer);
  vhost_net_set_vq_index(net, i * 2);

@@ -340,14 +348,18 @@ int vhost_net_start(VirtIODevice *dev, NetClientState 
*ncs,
  }
   }

-r = k->set_guest_notifiers(qbus->parent, total_queues * 2, true);
+r = k->set_guest_notifiers(qbus->parent, total_notifiers, true);
  if (r < 0) {
  error_report("Error binding guest notifier: %d", -r);
  goto err;
  }

-for (i = 0; i < total_queues; i++) {
-peer = qemu_get_peer(ncs, i);
+for (i = 0; i < nvhosts; i++) {
+if (i < data_qps) {
+peer = qemu_get_peer(ncs, i);
+} else {
+peer = qemu_get_peer(ncs, n->max_qps);
+}
  r = vhost_net_start_one(get_vhost_net(peer), dev);

  if (r < 0) {
@@ -371,7 +383,7 @@ err_start:
  peer = qemu_get_peer(ncs , i);
  vhost_net_stop_one(get_vhost_net(peer), dev);
  }
-e = k->set_guest_notifiers(qbus->parent, total_queues * 2, false);
+e = k->set_guest_notifiers(qbus->parent, total_notifiers, false);
  if (e < 0) {
  fprintf(stderr, "vhost guest notifier cleanup failed: %d\n", e);
  fflush(stderr);
@@ -381,18 +393,27 @@ err:
  }

  void vhost_net_stop(VirtIODevice *dev, NetClientState *ncs,
-int total_queues)
+int data_qps, int cvq)
  {
  BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(dev)));
  VirtioBusState *vbus = VIRTIO_BUS(qbus);
  VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(vbus);
+VirtIONet *n = VIRTIO_NET(dev);
+NetClientState *peer;
+int total_notifiers = data_qps * 2 + cvq;
+int nvhosts = data_qps + cvq;
  int i, r;

-for (i = 0; i < total_queues; i++) {
-vhost_net_stop_one(get_vhost_net(ncs[i].peer), dev);
+for (i = 0; i < nvhosts; i++) {
+if (i < data_qps) {
+peer = qemu_get_peer(ncs, i);
+} else {
+peer = qemu_get_peer(ncs, n->max_qps);
+}
+vhost_net_stop_one(get_vhost_net(peer), dev);
  }

-r = k->set_guest_notifiers(qbus->parent, total_queues * 2, false);
+r = k->set_guest_notifiers(qbus->parent, total_notifiers, false);
  if (r < 0) {
  fprintf(stderr, "vhost guest notifier cleanup failed: %d\n", r);
  fflush(stderr);
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index bd7958b9f0..614660274c 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -285,14 +285,14 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t 
status)
  }

  n->vhost_started = 1;
-r = 

[PULL 13/15] python: Update help text on 'make clean', 'make distclean'

2021-06-30 Thread John Snow
Update for visual parity with all the remaining targets.

Signed-off-by: John Snow 
Reviewed-by: Willian Rampazzo 
Reviewed-by: Wainer dos Santos Moschetta 
Message-id: 20210629214323.1329806-14-js...@redhat.com
Signed-off-by: John Snow 
---
 python/Makefile | 11 +++
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/python/Makefile b/python/Makefile
index a14705d12e..0432ee0022 100644
--- a/python/Makefile
+++ b/python/Makefile
@@ -36,11 +36,14 @@ help:
@echo "make dev-venv"
@echo "Creates a simple venv for check-dev. ($(QEMU_VENV_DIR))"
@echo ""
-   @echo "make clean:  remove package build output."
+   @echo "make clean:"
+   @echo "Remove package build output."
@echo ""
-   @echo "make distclean:  remove venv files, qemu package forwarder,"
-   @echo " built distribution files, and everything"
-   @echo " from 'make clean'."
+   @echo "make distclean:"
+   @echo "remove pipenv/venv files, qemu package forwarder,"
+   @echo "built distribution files, and everything from 'make clean'."
+   @echo ""
+   @echo -e "Have a nice day ^_^\n"
 
 .PHONY: pipenv
 pipenv: .venv
-- 
2.31.1




[PULL 09/15] python: Fix .PHONY Make specifiers

2021-06-30 Thread John Snow
I missed the 'check-tox' target. Add that, but split the large .PHONY
specifier at the top into its component pieces and move them near the
targets they describe so that they're much harder to forget to update.

Signed-off-by: John Snow 
Reviewed-by: Wainer dos Santos Moschetta 
Reviewed-by: Willian Rampazzo 
Message-id: 20210629214323.1329806-10-js...@redhat.com
Signed-off-by: John Snow 
---
 python/Makefile | 10 --
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/python/Makefile b/python/Makefile
index d2cfa6ad8f..d34c4e35d9 100644
--- a/python/Makefile
+++ b/python/Makefile
@@ -1,5 +1,4 @@
-.PHONY: help pipenv check-pipenv check clean distclean develop
-
+.PHONY: help
 help:
@echo "python packaging help:"
@echo ""
@@ -29,25 +28,32 @@ help:
@echo " built distribution files, and everything"
@echo " from 'make clean'."
 
+.PHONY: pipenv
 pipenv: .venv
 .venv: Pipfile.lock
@PIPENV_VENV_IN_PROJECT=1 pipenv sync --dev --keep-outdated
@touch .venv
 
+.PHONY: check-pipenv
 check-pipenv: pipenv
@pipenv run make check
 
+.PHONY: develop
 develop:
pip3 install -e .[devel]
 
+.PHONY: check
 check:
@avocado --config avocado.cfg run tests/
 
+.PHONY: check-tox
 check-tox:
@tox
 
+.PHONY: clean
 clean:
python3 setup.py clean --all
 
+.PHONY: distclean
 distclean: clean
rm -rf qemu.egg-info/ .venv/ .tox/ dist/
-- 
2.31.1




[PULL 11/15] python: add 'make check-dev' invocation

2021-06-30 Thread John Snow
This is a *third* way to run the Python tests. Unlike the first two
(check-pipenv, check-tox), this version does not require any specific
interpreter version -- making it a lot easier to tell people to run it
as a quick smoketest prior to submission to GitLab CI.

Summary:

  Checked via GitLab CI:
- check-pipenv: tests our oldest python & dependencies
- check-tox: tests newest dependencies on all non-EOL python versions
  Executed only incidentally:
- check-dev: tests newest dependencies on whichever python version

('make check' does not set up any environment at all, it just runs the
tests in your current environment. All four invocations perform the
exact same tests, just in different execution environments.)

Signed-off-by: John Snow 
Reviewed-by: Willian Rampazzo 
Reviewed-by: Wainer dos Santos Moschetta 
Tested-by: Wainer dos Santos Moschetta 
Message-id: 20210629214323.1329806-12-js...@redhat.com
[Maintainer edit: added .dev-venv/ to .gitignore. --js]
Acked-by: Wainer dos Santos Moschetta 
Acked-by: Willian Rampazzo 
Signed-off-by: John Snow 
---
 python/.gitignore |  1 +
 python/Makefile   | 35 +--
 2 files changed, 34 insertions(+), 2 deletions(-)

diff --git a/python/.gitignore b/python/.gitignore
index 272ed223a8..c8b0e67fe6 100644
--- a/python/.gitignore
+++ b/python/.gitignore
@@ -14,3 +14,4 @@ qemu.egg-info/
 # virtual environments (pipenv et al)
 .venv/
 .tox/
+.dev-venv/
diff --git a/python/Makefile b/python/Makefile
index d34c4e35d9..8f8e1999c0 100644
--- a/python/Makefile
+++ b/python/Makefile
@@ -1,3 +1,5 @@
+QEMU_VENV_DIR=.dev-venv
+
 .PHONY: help
 help:
@echo "python packaging help:"
@@ -14,6 +16,11 @@ help:
@echo "Requires: Python 3.6 - 3.10, and tox."
@echo "Hint (Fedora): 'sudo dnf install python3-tox python3.10'"
@echo ""
+   @echo "make check-dev:"
+   @echo "Run tests in a venv against your default python3 version."
+   @echo "These tests use the newest dependencies."
+   @echo "Requires: Python 3.x"
+   @echo ""
@echo "make develop:Install deps for 'make check', and"
@echo " the qemu libs in editable/development mode."
@echo ""
@@ -22,6 +29,9 @@ help:
@echo "make pipenv"
@echo "Creates pipenv's virtual environment (.venv)"
@echo ""
+   @echo "make dev-venv"
+   @echo "Creates a simple venv for check-dev. ($(QEMU_VENV_DIR))"
+   @echo ""
@echo "make clean:  remove package build output."
@echo ""
@echo "make distclean:  remove venv files, qemu package forwarder,"
@@ -38,9 +48,30 @@ pipenv: .venv
 check-pipenv: pipenv
@pipenv run make check
 
+.PHONY: dev-venv
+dev-venv: $(QEMU_VENV_DIR) $(QEMU_VENV_DIR)/bin/activate
+$(QEMU_VENV_DIR) $(QEMU_VENV_DIR)/bin/activate: setup.cfg
+   @echo "VENV $(QEMU_VENV_DIR)"
+   @python3 -m venv $(QEMU_VENV_DIR)
+   @(  \
+   echo "ACTIVATE $(QEMU_VENV_DIR)";   \
+   . $(QEMU_VENV_DIR)/bin/activate;\
+   echo "INSTALL qemu[devel] $(QEMU_VENV_DIR)";\
+   make develop 1>/dev/null;   \
+   )
+   @touch $(QEMU_VENV_DIR)
+
+.PHONY: check-dev
+check-dev: dev-venv
+   @(  \
+   echo "ACTIVATE $(QEMU_VENV_DIR)";   \
+   . $(QEMU_VENV_DIR)/bin/activate;\
+   make check; \
+   )
+
 .PHONY: develop
 develop:
-   pip3 install -e .[devel]
+   pip3 install --disable-pip-version-check -e .[devel]
 
 .PHONY: check
 check:
@@ -56,4 +87,4 @@ clean:
 
 .PHONY: distclean
 distclean: clean
-   rm -rf qemu.egg-info/ .venv/ .tox/ dist/
+   rm -rf qemu.egg-info/ .venv/ .tox/ $(QEMU_VENV_DIR) dist/
-- 
2.31.1
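The venv creation that the `dev-venv` target scripts with `python3 -m venv` can also be done from Python's stdlib `venv` module. A minimal sketch (the path is a throwaway; the real Makefile additionally activates the venv and runs `make develop` inside it, which is omitted here):

```python
import tempfile
import venv
from pathlib import Path

# Stand-in for the Makefile's $(QEMU_VENV_DIR); any writable path works.
target = Path(tempfile.mkdtemp()) / ".dev-venv"

# with_pip=False keeps this sketch fast and offline; the Makefile's
# `python3 -m venv` includes pip so `make develop` can run afterwards.
builder = venv.EnvBuilder(with_pip=False)
builder.create(target)

# The venv is marked by its pyvenv.cfg file.
print(sorted(p.name for p in target.iterdir()))
```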




[PULL 12/15] python: Update help text on 'make check', 'make develop'

2021-06-30 Thread John Snow
Update for visual parity with the other targets.

Signed-off-by: John Snow 
Reviewed-by: Willian Rampazzo 
Reviewed-by: Wainer dos Santos Moschetta 
Message-id: 20210629214323.1329806-13-js...@redhat.com
Signed-off-by: John Snow 
---
 python/Makefile | 10 +++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/python/Makefile b/python/Makefile
index 8f8e1999c0..a14705d12e 100644
--- a/python/Makefile
+++ b/python/Makefile
@@ -21,10 +21,14 @@ help:
@echo "These tests use the newest dependencies."
@echo "Requires: Python 3.x"
@echo ""
-   @echo "make develop:Install deps for 'make check', and"
-   @echo " the qemu libs in editable/development mode."
+   @echo "make check:"
+   @echo "Run tests in your *current environment*."
+   @echo "Performs no environment setup of any kind."
@echo ""
-   @echo "make check:  run linters using the current environment."
+   @echo "make develop:"
   @echo "Install deps needed for 'make check',"
+   @echo "and install the qemu package in editable mode."
+   @echo "(Can be used in or outside of a venv.)"
@echo ""
@echo "make pipenv"
@echo "Creates pipenv's virtual environment (.venv)"
-- 
2.31.1




[PULL 06/15] python: Add no-install usage instructions

2021-06-30 Thread John Snow
It's not encouraged, but it's legitimate to want to know how to do it.

Signed-off-by: John Snow 
Reviewed-by: Willian Rampazzo 
Reviewed-by: Wainer dos Santos Moschetta 
Message-id: 20210629214323.1329806-7-js...@redhat.com
Signed-off-by: John Snow 
---
 python/README.rst | 28 
 1 file changed, 28 insertions(+)

diff --git a/python/README.rst b/python/README.rst
index 107786ffdc..d4502fdb60 100644
--- a/python/README.rst
+++ b/python/README.rst
@@ -37,6 +37,34 @@ See `Installing packages using pip and virtual environments
 for more information.
 
 
+Using these packages without installing them
+
+
+These packages may be used without installing them first, by using one
+of two tricks:
+
+1. Set your PYTHONPATH environment variable to include this source
+   directory, e.g. ``~/src/qemu/python``. See
+   https://docs.python.org/3/using/cmdline.html#envvar-PYTHONPATH
+
+2. Inside a Python script, use ``sys.path`` to forcibly include a search
+   path prior to importing the ``qemu`` namespace. See
+   https://docs.python.org/3/library/sys.html#sys.path
+
+A strong downside to both approaches is that they generally interfere
+with static analysis tools being able to locate and analyze the code
+being imported.
+
+Package installation also normally provides executable console scripts,
+so that tools like ``qmp-shell`` are always available via $PATH. To
+invoke them without installation, you can invoke e.g.:
+
+``> PYTHONPATH=~/src/qemu/python python3 -m qemu.qmp.qmp_shell``
+
+The mappings between console script name and python module path can be
+found in ``setup.cfg``.
+
+
 Files in this directory
 ---
 
-- 
2.31.1

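Trick 2 from the README (forcing a search path via ``sys.path`` instead of installing) looks like the sketch below. The package and path here are throwaway stand-ins built in a temp directory, purely to make the example self-contained; in practice you would insert your checkout's ``python/`` directory, e.g. ``~/src/qemu/python``.

```python
import sys
import tempfile
from pathlib import Path

# Build a throwaway package to stand in for the qemu source tree.
srcdir = Path(tempfile.mkdtemp())
pkg = srcdir / "demo_qemu"
pkg.mkdir()
(pkg / "__init__.py").write_text("GREETING = 'hello from the source tree'\n")

# Prepend the source directory to sys.path before importing,
# instead of installing the package.
sys.path.insert(0, str(srcdir))
import demo_qemu  # resolves from the source tree, not site-packages

print(demo_qemu.GREETING)
```

As the README notes, the downside is that static analysis tools won't see the package unless they are given the same path.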



[PULL 14/15] python: remove auto-generated pyproject.toml file

2021-06-30 Thread John Snow
For reasons that at-present escape me, pipenv insists on creating a stub
pyproject.toml file. This file is a nuisance, because its mere presence
changes the behavior of various tools.

For instance, this stub file will cause "pip install --user -e ." to
fail in spectacular fashion with misleading errors. "pip install -e ."
works okay, but for some reason pip does not support editable installs
to the user directory when using PEP517.

References:
  https://github.com/pypa/pip/pull/9990
  https://github.com/pypa/pip/issues/7953

As outlined in ea1213b7ccc, it is still too early for us to consider
moving to a PEP-517 exclusive package. We must support older
distributions, so squash the annoyance for now. (Python 3.6 shipped Dec
2016, PEP517 support showed up in pip sometime in 2019 or so.)

Add 'pyproject.toml' to the 'make clean' target, and also delete it
after every pipenv invocation issued by the Makefile.

Signed-off-by: John Snow 
Reviewed-by: Willian Rampazzo 
Reviewed-by: Wainer dos Santos Moschetta 
Message-id: 20210629214323.1329806-15-js...@redhat.com
Signed-off-by: John Snow 
---
 python/Makefile | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/python/Makefile b/python/Makefile
index 0432ee0022..ac46ae33e7 100644
--- a/python/Makefile
+++ b/python/Makefile
@@ -49,6 +49,7 @@ help:
 pipenv: .venv
 .venv: Pipfile.lock
@PIPENV_VENV_IN_PROJECT=1 pipenv sync --dev --keep-outdated
+   rm -f pyproject.toml
@touch .venv
 
 .PHONY: check-pipenv
@@ -91,6 +92,7 @@ check-tox:
 .PHONY: clean
 clean:
python3 setup.py clean --all
+   rm -f pyproject.toml
 
 .PHONY: distclean
 distclean: clean
-- 
2.31.1




[PULL 08/15] python: update help text for check-tox

2021-06-30 Thread John Snow
Move it up near the check-pipenv help text, and update it to suggest parity.

(At the time I first added it, I wasn't sure if I would be keeping it,
but I've come to appreciate it as it has actually helped uncover bugs I
would not have noticed without it. It should stay.)

Signed-off-by: John Snow 
Reviewed-by: Willian Rampazzo 
Reviewed-by: Wainer dos Santos Moschetta 
Message-id: 20210629214323.1329806-9-js...@redhat.com
Signed-off-by: John Snow 
---
 python/Makefile | 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/python/Makefile b/python/Makefile
index 07ad73ccd0..d2cfa6ad8f 100644
--- a/python/Makefile
+++ b/python/Makefile
@@ -9,13 +9,17 @@ help:
@echo "Requires: Python 3.6 and pipenv."
@echo "Hint (Fedora): 'sudo dnf install python3.6 pipenv'"
@echo ""
+   @echo "make check-tox:"
+   @echo "Run tests against multiple python versions."
+   @echo "These tests use the newest dependencies."
+   @echo "Requires: Python 3.6 - 3.10, and tox."
+   @echo "Hint (Fedora): 'sudo dnf install python3-tox python3.10'"
+   @echo ""
@echo "make develop:Install deps for 'make check', and"
@echo " the qemu libs in editable/development mode."
@echo ""
@echo "make check:  run linters using the current environment."
@echo ""
-   @echo "make check-tox:  run linters using multiple python versions."
-   @echo ""
@echo "make pipenv"
@echo "Creates pipenv's virtual environment (.venv)"
@echo ""
-- 
2.31.1




[PULL 04/15] python: Re-lock pipenv at *oldest* supported versions

2021-06-30 Thread John Snow
tox is already testing the most recent versions. Let's use pipenv to
test the oldest versions we claim to support. This matches the stylistic
choice to have pipenv always test our oldest supported Python version, 3.6.

The effect of this is that the python-check-pipenv CI job on gitlab will
now test against much older versions of these linters, which will help
highlight incompatible changes that might otherwise go unnoticed.

Update instructions for adding and bumping versions in setup.cfg. The
reason for deleting the line that gets added to Pipfile is largely just
to avoid having the version minimums specified in multiple places in
config checked into the tree.

(This patch was written by deleting Pipfile and Pipfile.lock, then
explicitly installing each dependency manually at a specific
version. Then, I restored the prior Pipfile and re-ran `pipenv lock
--dev --keep-outdated` to re-add the qemu dependency back to the pipenv
environment while keeping the "old" packages. It's annoying, yes, but I
think the improvement to test coverage is worthwhile.)

Signed-off-by: John Snow 
Reviewed-by: Willian Rampazzo 
Reviewed-by: Wainer dos Santos Moschetta 
Message-id: 20210629214323.1329806-5-js...@redhat.com
Signed-off-by: John Snow 
---
 python/Pipfile.lock | 113 +---
 python/setup.cfg|   4 +-
 2 files changed, 56 insertions(+), 61 deletions(-)

diff --git a/python/Pipfile.lock b/python/Pipfile.lock
index 5bb3f1b635..8ab41a3f60 100644
--- a/python/Pipfile.lock
+++ b/python/Pipfile.lock
@@ -31,19 +31,19 @@
 },
 "astroid": {
 "hashes": [
-"sha256:4db03ab5fc3340cf619dbc25e42c2cc3755154ce6009469766d7143d1fc2ee4e",
-"sha256:8a398dfce302c13f14bab13e2b14fe385d32b73f4e4853b9bdfb64598baa1975"
+"sha256:09bdb456e02564731f8b5957cdd0c98a7f01d2db5e90eb1d794c353c28bfd705",
+"sha256:6a8a51f64dae307f6e0c9db752b66a7951e282389d8362cc1d39a56f3feeb31d"
 ],
 "markers": "python_version ~= '3.6'",
-"version": "==2.5.6"
+"version": "==2.6.0"
 },
 "avocado-framework": {
 "hashes": [
-"sha256:42aa7962df98d6b78d4efd9afa2177226dc630f3d83a2a7d5baf7a0a7da7fa1b",
-"sha256:d96ae343abf890e1ef3b3a6af5ce49e35f6bded0715770c4acb325bca555c515"
+"sha256:3fca7226d7d164f124af8a741e7fa658ff4345a0738ddc32907631fd688b38ed",
+"sha256:48ac254c0ae2ef0c0ceeb38e3d3df0388718eda8f48b3ab55b30b252839f42b1"
 ],
-"markers": "python_version >= '3.6'",
-"version": "==88.1"
+"index": "pypi",
+"version": "==87.0"
 },
 "distlib": {
 "hashes": [
@@ -61,25 +61,27 @@
 },
 "flake8": {
 "hashes": [
-"sha256:07528381786f2a6237b061f6e96610a4167b226cb926e2aa2b6b1d78057c576b",
-"sha256:bf8fd46d844f616e8d47905ef3a3384edae6b4e9beb0c5101e25e3110907"
+"sha256:6a35f5b8761f45c5513e3405f110a86bea57982c3b75b766ce7b65217abe1670",
+"sha256:c01f8a3963b3571a8e6bd7a4063359aff90749e160778e03817cd9b71c9e07d2"
 ],
-"markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4'",
-"version": "==3.9.2"
+"index": "pypi",
+"version": "==3.6.0"
 },
 "fusepy": {
 "hashes": [
-"sha256:72ff783ec2f43de3ab394e3f7457605bf04c8cf288a2f4068b4cde141d4ee6bd"
+"sha256:10f5c7f5414241bffecdc333c4d3a725f1d6605cae6b4eaf86a838ff49cdaf6c",
+"sha256:a9f3a3699080ddcf0919fd1eb2cf743e1f5859ca54c2018632f939bdfac269ee"
 ],
-"version": "==3.0.1"
+"index": "pypi",
+"version": "==2.0.4"
 },
 "importlib-metadata": {
 "hashes": [
-"sha256:8c501196e49fb9df5df43833bdb1e4328f64847763ec8a50703148b73784d581",
-"sha256:d7eb1dea6d6a6086f8be21784cc9e3bcfa55872b52309bc5fad53a8ea65d"
+"sha256:90bb658cdbbf6d1735b6341ce708fc7024a3e14e99ffdc5783edea9f9b077f83",
+"sha256:dc15b2969b4ce36305c51eebe62d418ac7791e9a157911d58bfb1f9ccd8e2070"
 ],
 "markers": "python_version < '3.8'",
-"version": "==4.0.1"
+"version": "==1.7.0"
 },
 "importlib-resources": {
 "hashes": [
@@ -91,11 +93,11 @@
 },
 "isort": {
 "hashes": [
-"sha256:0a943902919f65c5684ac4e0154b1ad4fac6dcaa5d9f3426b732f1c8b5419be6",
-"sha256:2bb1680aad211e3c9944dbce1d4ba09a989f04e238296c87fe2139faa26d655d"
+"sha256:408e4d75d84f51b64d0824894afee44469eba34a4caee621dc53799f80d71ccc",
+

[PULL 15/15] python: Fix broken ReST docstrings

2021-06-30 Thread John Snow
This patch *doesn't* update all of the docstring standards across the
QEMU package directory to make our docstring usage consistent. It
*doesn't* fix the formatting to make it look pretty or reasonable in
generated output. It *does* fix a few small instances where Sphinx would
emit a build warning because of malformed ReST -- If we built our Python
docs with Sphinx.

Signed-off-by: John Snow 
Reviewed-by: Willian Rampazzo 
Reviewed-by: Wainer dos Santos Moschetta 
Message-id: 20210629214323.1329806-16-js...@redhat.com
Signed-off-by: John Snow 
---
 python/qemu/machine/__init__.py | 6 +++---
 python/qemu/machine/machine.py  | 3 ++-
 python/qemu/qmp/__init__.py | 1 +
 python/qemu/qmp/qom_common.py   | 2 +-
 python/qemu/utils/accel.py  | 2 +-
 5 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/python/qemu/machine/__init__.py b/python/qemu/machine/__init__.py
index 728f27adbe..9ccd58ef14 100644
--- a/python/qemu/machine/__init__.py
+++ b/python/qemu/machine/__init__.py
@@ -4,10 +4,10 @@
 This library provides a few high-level classes for driving QEMU from a
 test suite, not intended for production use.
 
-- QEMUMachine: Configure and Boot a QEMU VM
- - QEMUQtestMachine: VM class, with a qtest socket.
+ | QEMUQtestProtocol: send/receive qtest messages.
+ | QEMUMachine: Configure and Boot a QEMU VM
+ | +-- QEMUQtestMachine: VM class, with a qtest socket.
 
-- QEMUQtestProtocol: Connect to, send/receive qtest messages.
 """
 
 # Copyright (C) 2020-2021 John Snow for Red Hat Inc.
diff --git a/python/qemu/machine/machine.py b/python/qemu/machine/machine.py
index e3345dfa1b..d47ab3d896 100644
--- a/python/qemu/machine/machine.py
+++ b/python/qemu/machine/machine.py
@@ -545,7 +545,8 @@ def set_qmp_monitor(self, enabled: bool = True) -> None:
 @param enabled: if False, qmp monitor options will be removed from
 the base arguments of the resulting QEMU command
 line. Default is True.
-@note: call this function before launch().
+
+.. note:: Call this function before launch().
 """
 self._qmp_set = enabled
 
diff --git a/python/qemu/qmp/__init__.py b/python/qemu/qmp/__init__.py
index 376954cb6d..269516a79b 100644
--- a/python/qemu/qmp/__init__.py
+++ b/python/qemu/qmp/__init__.py
@@ -279,6 +279,7 @@ def accept(self, timeout: Optional[float] = 15.0) -> 
QMPMessage:
 None). The value passed will set the behavior of the
 underneath QMP socket as described in [1].
 Default value is set to 15.0.
+
 @return QMP greeting dict
 @raise OSError on socket connection errors
 @raise QMPConnectError if the greeting is not received
diff --git a/python/qemu/qmp/qom_common.py b/python/qemu/qmp/qom_common.py
index f82b16772d..a59ae1a2a1 100644
--- a/python/qemu/qmp/qom_common.py
+++ b/python/qemu/qmp/qom_common.py
@@ -156,7 +156,7 @@ def command_runner(
 """
 Run a fully-parsed subcommand, with error-handling for the CLI.
 
-:return: The return code from `.run()`.
+:return: The return code from `run()`.
 """
 try:
 cmd = cls(args)
diff --git a/python/qemu/utils/accel.py b/python/qemu/utils/accel.py
index 297933df2a..386ff640ca 100644
--- a/python/qemu/utils/accel.py
+++ b/python/qemu/utils/accel.py
@@ -36,7 +36,7 @@ def list_accel(qemu_bin: str) -> List[str]:
 List accelerators enabled in the QEMU binary.
 
 @param qemu_bin (str): path to the QEMU binary.
-@raise Exception: if failed to run `qemu -accel help`
+@raise Exception: if failed to run ``qemu -accel help``
 @return a list of accelerator names.
 """
 if not qemu_bin:
-- 
2.31.1
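A lightweight way to pull out the docstrings Sphinx would render, before worrying about ReST validity, is the stdlib ``ast`` module (actually validating the ReST needs docutils/Sphinx and is out of scope for this sketch; the sample source below is illustrative, not the real QEMU modules):

```python
import ast

source = '''
"""
Module docstring.

.. note:: Call this function before launch().
"""

def accept(timeout=15.0):
    """Await a greeting.

    @return QMP greeting dict
    """
'''

tree = ast.parse(source)
# Collect the module docstring plus every function/class docstring.
docs = {"<module>": ast.get_docstring(tree)}
for node in ast.walk(tree):
    if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
        docs[node.name] = ast.get_docstring(node)

print(sorted(docs))
```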




[PULL 10/15] python: only check qemu/ subdir with flake8

2021-06-30 Thread John Snow
flake8 is a little eager to check everything it can. Limit it to
checking inside the qemu namespace directory only. Update setup.cfg now
that the exclude patterns are no longer necessary.

Signed-off-by: John Snow 
Reviewed-by: Willian Rampazzo 
Reviewed-by: Wainer dos Santos Moschetta 
Tested-by: Wainer dos Santos Moschetta 
Message-id: 20210629214323.1329806-11-js...@redhat.com
Signed-off-by: John Snow 
---
 python/setup.cfg   | 2 --
 python/tests/flake8.sh | 2 +-
 2 files changed, 1 insertion(+), 3 deletions(-)

diff --git a/python/setup.cfg b/python/setup.cfg
index e730f208d3..11f71d5312 100644
--- a/python/setup.cfg
+++ b/python/setup.cfg
@@ -62,8 +62,6 @@ console_scripts =
 [flake8]
 extend-ignore = E722  # Prefer pylint's bare-except checks to flake8's
 exclude = __pycache__,
-  .venv,
-  .tox,
 
 [mypy]
 strict = True
diff --git a/python/tests/flake8.sh b/python/tests/flake8.sh
index 51e0788462..1cd7d40fad 100755
--- a/python/tests/flake8.sh
+++ b/python/tests/flake8.sh
@@ -1,2 +1,2 @@
 #!/bin/sh -e
-python3 -m flake8
+python3 -m flake8 qemu/
-- 
2.31.1




[PULL 02/15] python: expose typing information via PEP 561

2021-06-30 Thread John Snow
https://www.python.org/dev/peps/pep-0561/#specification

Create 'py.typed' files in each subpackage that indicate to mypy that
this is a typed module, so that users of any of these packages can use
mypy to check their code as well.

Note: Theoretically it's possible to ditch MANIFEST.in in favor of using
package_data in setup.cfg, but I genuinely could not figure out how to
get it to include things from the *source root* into the *package root*;
only how to include things from each subpackage. I tried!

Signed-off-by: John Snow 
Reviewed-by: Willian Rampazzo 
Reviewed-by: Wainer dos Santos Moschetta 
Message-id: 20210629214323.1329806-3-js...@redhat.com
Signed-off-by: John Snow 
---
 python/qemu/machine/py.typed | 0
 python/qemu/qmp/py.typed | 0
 python/qemu/utils/py.typed   | 0
 python/setup.cfg | 4 
 4 files changed, 4 insertions(+)
 create mode 100644 python/qemu/machine/py.typed
 create mode 100644 python/qemu/qmp/py.typed
 create mode 100644 python/qemu/utils/py.typed

diff --git a/python/qemu/machine/py.typed b/python/qemu/machine/py.typed
new file mode 100644
index 00..e69de29bb2
diff --git a/python/qemu/qmp/py.typed b/python/qemu/qmp/py.typed
new file mode 100644
index 00..e69de29bb2
diff --git a/python/qemu/utils/py.typed b/python/qemu/utils/py.typed
new file mode 100644
index 00..e69de29bb2
diff --git a/python/setup.cfg b/python/setup.cfg
index 85cecbb41b..db1639c1f2 100644
--- a/python/setup.cfg
+++ b/python/setup.cfg
@@ -19,6 +19,7 @@ classifiers =
 Programming Language :: Python :: 3.8
 Programming Language :: Python :: 3.9
 Programming Language :: Python :: 3.10
+Typing :: Typed
 
 [options]
 python_requires = >= 3.6
@@ -27,6 +28,9 @@ packages =
 qemu.machine
 qemu.utils
 
+[options.package_data]
+* = py.typed
+
 [options.extras_require]
 # Run `pipenv lock --dev` when changing these requirements.
 devel =
-- 
2.31.1
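The ``py.typed`` markers this patch adds can be created by hand as below; a sketch only, using a temp directory to stand in for the ``python/`` source root. Per PEP 561, the file is empty — its mere presence in a package tells type checkers (and downstream users running mypy) that the package ships inline type information.

```python
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())  # stand-in for the python/ source root
for sub in ("qemu/machine", "qemu/qmp", "qemu/utils"):
    pkgdir = root / sub
    pkgdir.mkdir(parents=True)
    (pkgdir / "py.typed").touch()  # empty file; its presence is the signal

markers = sorted(p.relative_to(root).as_posix() for p in root.rglob("py.typed"))
print(markers)
```

The ``[options.package_data]`` stanza in ``setup.cfg`` then ensures these markers are shipped with each subpackage.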




[PULL 05/15] python: README.rst touchups

2021-06-30 Thread John Snow
Clarifying a few points; removing the reference to 'setuptools' because
it isn't referenced anywhere else in this document and doesn't really
provide any useful information to a Python newcomer.

Adjusting the language elsewhere to be less ambiguous and have fewer
run-on sentences.

Signed-off-by: John Snow 
Reviewed-by: Willian Rampazzo 
Reviewed-by: Wainer dos Santos Moschetta 
Message-id: 20210629214323.1329806-6-js...@redhat.com
Signed-off-by: John Snow 
---
 python/README.rst | 17 +
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/python/README.rst b/python/README.rst
index dcf993819d..107786ffdc 100644
--- a/python/README.rst
+++ b/python/README.rst
@@ -7,8 +7,7 @@ then by package (e.g. ``qemu/machine``, ``qemu/qmp``, etc).
 
 ``setup.py`` is used by ``pip`` to install this tooling to the current
 environment. ``setup.cfg`` provides the packaging configuration used by
-``setup.py`` in a setuptools specific format. You will generally invoke
-it by doing one of the following:
+``setup.py``. You will generally invoke it by doing one of the following:
 
 1. ``pip3 install .`` will install these packages to your current
environment. If you are inside a virtual environment, they will
@@ -17,12 +16,13 @@ it by doing one of the following:
 
 2. ``pip3 install --user .`` will install these packages to your user's
local python packages. If you are inside of a virtual environment,
-   this will fail; you likely want the first invocation above.
+   this will fail; you want the first invocation above.
 
-If you append the ``-e`` argument, pip will install in "editable" mode;
-which installs a version of the package that installs a forwarder
-pointing to these files, such that the package always reflects the
-latest version in your git tree.
+If you append the ``--editable`` or ``-e`` argument to either invocation
+above, pip will install in "editable" mode. This installs the package as
+a forwarder ("qemu.egg-link") that points to the source tree. In so
+doing, the installed package always reflects the latest version in your
+source tree.
 
 Installing ".[devel]" instead of "." will additionally pull in required
 packages for testing this package. They are not runtime requirements,
@@ -30,6 +30,7 @@ and are not needed to simply use these libraries.
 
 Running ``make develop`` will pull in all testing dependencies and
 install QEMU in editable mode to the current environment.
+(It is a shortcut for ``pip3 install -e .[devel]``.)
 
 See `Installing packages using pip and virtual environments
 
`_
@@ -39,7 +40,7 @@ for more information.
 Files in this directory
 ---
 
-- ``qemu/`` Python package source directory.
+- ``qemu/`` Python 'qemu' namespace package source directory.
 - ``tests/`` Python package tests directory.
 - ``avocado.cfg`` Configuration for the Avocado test-runner.
   Used by ``make check`` et al.
-- 
2.31.1




[PULL 07/15] python: rename 'venv-check' target to 'check-pipenv'

2021-06-30 Thread John Snow
Well, Cleber was right, this is a better name.

In preparation for adding a different kind of virtual environment check
(One that simply uses whichever version of Python you happen to have),
rename this test 'check-pipenv' so that it matches the CI job
'check-python-pipenv'.

Remove the "If you don't know which test to run" hint, because it's not
actually likely you have Python 3.6 installed to be able to run the
test. It's still the test I'd most prefer you to run, but it's not the
test you are most likely to be able to run.

Rename the 'venv' target to 'pipenv' as well, and move the more
pertinent help text under the 'check-pipenv' target.

Signed-off-by: John Snow 
Reviewed-by: Willian Rampazzo 
Reviewed-by: Wainer dos Santos Moschetta 
Message-id: 20210629214323.1329806-8-js...@redhat.com
Signed-off-by: John Snow 
---
 python/README.rst  |  2 +-
 .gitlab-ci.d/static_checks.yml |  2 +-
 python/Makefile| 21 +++--
 3 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/python/README.rst b/python/README.rst
index d4502fdb60..9c1fceaee7 100644
--- a/python/README.rst
+++ b/python/README.rst
@@ -79,7 +79,7 @@ Files in this directory
 - ``PACKAGE.rst`` is used as the README file that is visible on PyPI.org.
 - ``Pipfile`` is used by Pipenv to generate ``Pipfile.lock``.
 - ``Pipfile.lock`` is a set of pinned package dependencies that this package
-  is tested under in our CI suite. It is used by ``make venv-check``.
+  is tested under in our CI suite. It is used by ``make check-pipenv``.
 - ``README.rst`` you are here!
 - ``VERSION`` contains the PEP-440 compliant version used to describe
   this package; it is referenced by ``setup.cfg``.
diff --git a/.gitlab-ci.d/static_checks.yml b/.gitlab-ci.d/static_checks.yml
index c5fa4fce26..b01f6ec231 100644
--- a/.gitlab-ci.d/static_checks.yml
+++ b/.gitlab-ci.d/static_checks.yml
@@ -30,7 +30,7 @@ check-python-pipenv:
   stage: test
   image: $CI_REGISTRY_IMAGE/qemu/python:latest
   script:
-- make -C python venv-check
+- make -C python check-pipenv
   variables:
 GIT_DEPTH: 1
   needs:
diff --git a/python/Makefile b/python/Makefile
index b5621b0d54..07ad73ccd0 100644
--- a/python/Makefile
+++ b/python/Makefile
@@ -1,15 +1,13 @@
-.PHONY: help venv venv-check check clean distclean develop
+.PHONY: help pipenv check-pipenv check clean distclean develop
 
 help:
@echo "python packaging help:"
@echo ""
-   @echo "make venv:   Create pipenv's virtual environment."
-   @echo "NOTE: Requires Python 3.6 and pipenv."
-   @echo "  Will download packages from PyPI."
-   @echo "Hint: (On Fedora): 'sudo dnf install python36 pipenv'"
-   @echo ""
-   @echo "make venv-check: run linters using pipenv's virtual environment."
-   @echo "Hint: If you don't know which test to run, run this one!"
+   @echo "make check-pipenv:"
+   @echo "Run tests in pipenv's virtual environment."
+   @echo "These tests use the oldest dependencies."
+   @echo "Requires: Python 3.6 and pipenv."
+   @echo "Hint (Fedora): 'sudo dnf install python3.6 pipenv'"
@echo ""
@echo "make develop:Install deps for 'make check', and"
@echo " the qemu libs in editable/development mode."
@@ -18,18 +16,21 @@ help:
@echo ""
@echo "make check-tox:  run linters using multiple python versions."
@echo ""
+   @echo "make pipenv"
+   @echo "Creates pipenv's virtual environment (.venv)"
+   @echo ""
@echo "make clean:  remove package build output."
@echo ""
@echo "make distclean:  remove venv files, qemu package forwarder,"
@echo " built distribution files, and everything"
@echo " from 'make clean'."
 
-venv: .venv
+pipenv: .venv
 .venv: Pipfile.lock
@PIPENV_VENV_IN_PROJECT=1 pipenv sync --dev --keep-outdated
@touch .venv
 
-venv-check: venv
+check-pipenv: pipenv
@pipenv run make check
 
 develop:
-- 
2.31.1




[PULL 03/15] python: Remove global pylint suppressions

2021-06-30 Thread John Snow
These suppressions only apply to a small handful of places. Instead of
disabling them globally, disable them just in the cases where we
need. The design of the machine class grew quite organically with tons
of constructor and class instance variables -- there's little chance of
meaningfully refactoring it in the near term, so just suppress the
warnings for that class.

Signed-off-by: John Snow 
Reviewed-by: Willian Rampazzo 
Reviewed-by: Wainer dos Santos Moschetta 
Message-id: 20210629214323.1329806-4-js...@redhat.com
Signed-off-by: John Snow 
---
 python/qemu/machine/machine.py | 3 +++
 python/qemu/machine/qtest.py   | 2 ++
 python/setup.cfg   | 4 +---
 3 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/python/qemu/machine/machine.py b/python/qemu/machine/machine.py
index b62435528e..e3345dfa1b 100644
--- a/python/qemu/machine/machine.py
+++ b/python/qemu/machine/machine.py
@@ -84,6 +84,7 @@ class QEMUMachine:
 ...
 # vm is guaranteed to be shut down here
 """
+# pylint: disable=too-many-instance-attributes, too-many-public-methods
 
 def __init__(self,
  binary: str,
@@ -111,6 +112,8 @@ def __init__(self,
 @param console_log: (optional) path to console log file
 @note: Qemu process is not started until launch() is used.
 '''
+# pylint: disable=too-many-arguments
+
 # Direct user configuration
 
 self._binary = binary
diff --git a/python/qemu/machine/qtest.py b/python/qemu/machine/qtest.py
index 93700684d1..d6d9c6a34a 100644
--- a/python/qemu/machine/qtest.py
+++ b/python/qemu/machine/qtest.py
@@ -116,6 +116,8 @@ def __init__(self,
  base_temp_dir: str = "/var/tmp",
  socket_scm_helper: Optional[str] = None,
  sock_dir: Optional[str] = None):
+# pylint: disable=too-many-arguments
+
 if name is None:
 name = "qemu-%d" % os.getpid()
 if sock_dir is None:
diff --git a/python/setup.cfg b/python/setup.cfg
index db1639c1f2..524789d6e0 100644
--- a/python/setup.cfg
+++ b/python/setup.cfg
@@ -87,9 +87,7 @@ ignore_missing_imports = True
 # --enable=similarities". If you want to run only the classes checker, but have
 # no Warning level messages displayed, use "--disable=all --enable=classes
 # --disable=W".
-disable=too-many-arguments,
-too-many-instance-attributes,
-too-many-public-methods,
+disable=
 
 [pylint.basic]
 # Good variable names which should always be accepted, separated by a comma.
-- 
2.31.1




[PULL 01/15] python/qom: Do not use 'err' name at module scope

2021-06-30 Thread John Snow
Pylint updated to 2.9.0 upstream, adding new warnings for things that
re-use the 'err' variable. Luckily, this only breaks the
python-check-tox job, which is allowed to fail as a warning.

Signed-off-by: John Snow 
Reviewed-by: Wainer dos Santos Moschetta 
Reviewed-by: Willian Rampazzo 
Message-id: 20210629214323.1329806-2-js...@redhat.com
Signed-off-by: John Snow 
---
 python/qemu/qmp/qom.py | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/python/qemu/qmp/qom.py b/python/qemu/qmp/qom.py
index 7ec7843d57..8ff28a8343 100644
--- a/python/qemu/qmp/qom.py
+++ b/python/qemu/qmp/qom.py
@@ -38,8 +38,8 @@
 
 try:
 from .qom_fuse import QOMFuse
-except ModuleNotFoundError as err:
-if err.name != 'fuse':
+except ModuleNotFoundError as _err:
+if _err.name != 'fuse':
 raise
 else:
 assert issubclass(QOMFuse, QOMCommand)
-- 
2.31.1
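The optional-import pattern being renamed here can be sketched in a few lines; this is an illustrative Python model using a placeholder module name (not the actual qom.py code), showing the shape of code that newer pylint versions flag when a generic name like ``err`` is reused at module scope:

```python
# Sketch of the optional-dependency import pattern, with a placeholder
# module name standing in for the optional 'fuse' dependency.
try:
    import some_optional_dep  # stand-in for an optional module
    HAVE_DEP = True
except ModuleNotFoundError as _err:
    # Only swallow the error for the optional module itself;
    # re-raise anything else (e.g. a broken transitive import).
    if _err.name != 'some_optional_dep':
        raise
    HAVE_DEP = False

print(HAVE_DEP)  # False here, since the placeholder module does not exist
```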




[PULL 00/15] Python patches

2021-06-30 Thread John Snow
The following changes since commit d940d468e29bff5eb5669c0dd8f3de0c3de17bfb:

  Merge remote-tracking branch 'remotes/quic/tags/pull-hex-20210629' into 
staging (2021-06-30 19:09:45 +0100)

are available in the Git repository at:

  https://gitlab.com/jsnow/qemu.git tags/python-pull-request

for you to fetch changes up to 5c02c865866fdd2d17e8f5507deb4aa1f74bf59f:

  python: Fix broken ReST docstrings (2021-06-30 21:57:08 -0400)


Pull request

Patch 01/15 fixes the check-python-tox test.



John Snow (15):
  python/qom: Do not use 'err' name at module scope
  python: expose typing information via PEP 561
  python: Remove global pylint suppressions
  python: Re-lock pipenv at *oldest* supported versions
  python: README.rst touchups
  python: Add no-install usage instructions
  python: rename 'venv-check' target to 'check-pipenv'
  python: update help text for check-tox
  python: Fix .PHONY Make specifiers
  python: only check qemu/ subdir with flake8
  python: add 'make check-dev' invocation
  python: Update help text on 'make check', 'make develop'
  python: Update help text on 'make clean', 'make distclean'
  python: remove auto-generated pyproject.toml file
  python: Fix broken ReST docstrings

 python/README.rst   |  47 ++---
 .gitlab-ci.d/static_checks.yml  |   2 +-
 python/.gitignore   |   1 +
 python/Makefile |  89 +++--
 python/Pipfile.lock | 113 +++-
 python/qemu/machine/__init__.py |   6 +-
 python/qemu/machine/machine.py  |   6 +-
 python/qemu/machine/py.typed|   0
 python/qemu/machine/qtest.py|   2 +
 python/qemu/qmp/__init__.py |   1 +
 python/qemu/qmp/py.typed|   0
 python/qemu/qmp/qom.py  |   4 +-
 python/qemu/qmp/qom_common.py   |   2 +-
 python/qemu/utils/accel.py  |   2 +-
 python/qemu/utils/py.typed  |   0
 python/setup.cfg|  14 ++--
 python/tests/flake8.sh  |   2 +-
 17 files changed, 187 insertions(+), 104 deletions(-)
 create mode 100644 python/qemu/machine/py.typed
 create mode 100644 python/qemu/qmp/py.typed
 create mode 100644 python/qemu/utils/py.typed

-- 
2.31.1





Re: [PATCH v3 05/37] target/riscv: SIMD 16-bit Shift Instructions

2021-06-30 Thread Alistair Francis
On Thu, Jun 24, 2021 at 9:11 PM LIU Zhiwei  wrote:
>
> Instructions include right arithmetic shift, right logic shift,
> and left shift.
>
> The shift can be an immediate or a register scalar. The
> right shift has rounding operation. And the left shift
> has saturation operation.
>
> Signed-off-by: LIU Zhiwei 

Reviewed-by: Alistair Francis 

Alistair

> ---
>  target/riscv/helper.h   |   9 ++
>  target/riscv/insn32.decode  |  17 
>  target/riscv/insn_trans/trans_rvp.c.inc |  59 ++
>  target/riscv/packed_helper.c| 104 
>  4 files changed, 189 insertions(+)
>
> diff --git a/target/riscv/helper.h b/target/riscv/helper.h
> index 629ff13402..de7b4fc17d 100644
> --- a/target/riscv/helper.h
> +++ b/target/riscv/helper.h
> @@ -1188,3 +1188,12 @@ DEF_HELPER_3(rsub8, tl, env, tl, tl)
>  DEF_HELPER_3(ursub8, tl, env, tl, tl)
>  DEF_HELPER_3(ksub8, tl, env, tl, tl)
>  DEF_HELPER_3(uksub8, tl, env, tl, tl)
> +
> +DEF_HELPER_3(sra16, tl, env, tl, tl)
> +DEF_HELPER_3(sra16_u, tl, env, tl, tl)
> +DEF_HELPER_3(srl16, tl, env, tl, tl)
> +DEF_HELPER_3(srl16_u, tl, env, tl, tl)
> +DEF_HELPER_3(sll16, tl, env, tl, tl)
> +DEF_HELPER_3(ksll16, tl, env, tl, tl)
> +DEF_HELPER_3(kslra16, tl, env, tl, tl)
> +DEF_HELPER_3(kslra16_u, tl, env, tl, tl)
> diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
> index 13e196..44c497f28a 100644
> --- a/target/riscv/insn32.decode
> +++ b/target/riscv/insn32.decode
> @@ -24,6 +24,7 @@
>  %sh5   20:5
>
>  %sh720:7
> +%sh420:4
>  %csr20:12
>  %rm 12:3
>  %nf 29:3 !function=ex_plus_1
> @@ -61,6 +62,7 @@
>  @j     . ...   imm=%imm_j  
> %rd
>
>  @sh  ..  .. .  ... . ...   shamt=%sh7 %rs1 
> %rd
> +@sh4 ..  .. .  ... . ...   shamt=%sh4  
> %rs1 %rd
>  @csr    .  ... . ...   %csr %rs1 
> %rd
>
>  @atom_ld . aq:1 rl:1 .  . ...  rs2=0 %rs1 
> %rd
> @@ -775,3 +777,18 @@ rsub8  101  . . 000 . 1110111 @r
>  ursub8 0010101  . . 000 . 1110111 @r
>  ksub8  0001101  . . 000 . 1110111 @r
>  uksub8 0011101  . . 000 . 1110111 @r
> +
> +sra16  0101000  . . 000 . 1110111 @r
> +sra16_u011  . . 000 . 1110111 @r
> +srai16 0111000  0 . 000 . 1110111 @sh4
> +srai16_u   0111000  1 . 000 . 1110111 @sh4
> +srl16  0101001  . . 000 . 1110111 @r
> +srl16_u0110001  . . 000 . 1110111 @r
> +srli16 0111001  0 . 000 . 1110111 @sh4
> +srli16_u   0111001  1 . 000 . 1110111 @sh4
> +sll16  0101010  . . 000 . 1110111 @r
> +slli16 0111010  0 . 000 . 1110111 @sh4
> +ksll16 0110010  . . 000 . 1110111 @r
> +kslli160111010  1 . 000 . 1110111 @sh4
> +kslra160101011  . . 000 . 1110111 @r
> +kslra16_u  0110011  . . 000 . 1110111 @r
> diff --git a/target/riscv/insn_trans/trans_rvp.c.inc 
> b/target/riscv/insn_trans/trans_rvp.c.inc
> index 80bec35ac9..afafa49824 100644
> --- a/target/riscv/insn_trans/trans_rvp.c.inc
> +++ b/target/riscv/insn_trans/trans_rvp.c.inc
> @@ -128,3 +128,62 @@ GEN_RVP_R_OOL(rsub8);
>  GEN_RVP_R_OOL(ursub8);
>  GEN_RVP_R_OOL(ksub8);
>  GEN_RVP_R_OOL(uksub8);
> +
> +/* 16-bit Shift Instructions */
> +GEN_RVP_R_OOL(sra16);
> +GEN_RVP_R_OOL(srl16);
> +GEN_RVP_R_OOL(sll16);
> +GEN_RVP_R_OOL(sra16_u);
> +GEN_RVP_R_OOL(srl16_u);
> +GEN_RVP_R_OOL(ksll16);
> +GEN_RVP_R_OOL(kslra16);
> +GEN_RVP_R_OOL(kslra16_u);
> +
> +static bool
> +rvp_shifti_ool(DisasContext *ctx, arg_shift *a,
> +   void (* fn)(TCGv, TCGv_ptr, TCGv, TCGv))
> +{
> +TCGv src1, dst, shift;
> +
> +src1 = tcg_temp_new();
> +dst = tcg_temp_new();
> +
> +gen_get_gpr(src1, a->rs1);
> +shift = tcg_const_tl(a->shamt);
> +fn(dst, cpu_env, src1, shift);
> +gen_set_gpr(a->rd, dst);
> +
> +tcg_temp_free(src1);
> +tcg_temp_free(dst);
> +tcg_temp_free(shift);
> +return true;
> +}
> +
> +static inline bool
> +rvp_shifti(DisasContext *ctx, arg_shift *a,
> +   void (* vecop)(TCGv, TCGv, target_long),
> +   void (* op)(TCGv, TCGv_ptr, TCGv, TCGv))
> +{
> +if (!has_ext(ctx, RVP)) {
> +return false;
> +}
> +
> +if (a->rd && a->rs1 && vecop) {
> +vecop(cpu_gpr[a->rd], cpu_gpr[a->rs1], a->shamt);
> +return true;
> +}
> +return rvp_shifti_ool(ctx, a, op);
> +}
> +
> +#define GEN_RVP_SHIFTI(NAME, VECOP, OP)  \
> +static bool trans_##NAME(DisasContext *s, arg_shift *a)  \
> +{\
> +return rvp_shifti(s, a, VECOP, OP);  \
> +}
> +
> +GEN_RVP_SHIFTI(srai16, tcg_gen_vec_sar16i_tl, 
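The rounding and saturation behaviors called out in the commit message can be modeled for a single signed 16-bit lane; this is an illustrative Python sketch of the semantics, not the QEMU helpers, and exact corner-case behavior is defined by the P-extension spec:

```python
INT16_MIN, INT16_MAX = -(1 << 15), (1 << 15) - 1

def sra16_round(x: int, shamt: int) -> int:
    # Rounding arithmetic right shift (the ".u" variants): add half of
    # the weight being discarded before shifting.
    if shamt == 0:
        return x
    return (x + (1 << (shamt - 1))) >> shamt

def ksll16(x: int, shamt: int) -> int:
    # Saturating left shift (the "k" prefix): clamp to the signed
    # 16-bit range instead of wrapping.
    r = x << shamt
    return max(INT16_MIN, min(INT16_MAX, r))

print(sra16_round(7, 1))  # rounds (7 + 1) >> 1 = 4 instead of truncating to 3
print(ksll16(0x4000, 2))  # 0x10000 clamps to INT16_MAX = 32767
```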

Re: [PATCH v3 03/37] target/riscv: 16-bit Addition & Subtraction Instructions

2021-06-30 Thread Alistair Francis
On Thu, Jun 24, 2021 at 9:08 PM LIU Zhiwei  wrote:
>
> Include 5 groups: Wrap-around (dropping overflow), Signed Halving,
> Unsigned Halving, Signed Saturation, and Unsigned Saturation.
>
> Signed-off-by: LIU Zhiwei 

Reviewed-by: Alistair Francis 

Alistair

> ---
>  target/riscv/helper.h   |  30 ++
>  target/riscv/insn32.decode  |  32 +++
>  target/riscv/insn_trans/trans_rvp.c.inc | 117 
>  target/riscv/meson.build|   1 +
>  target/riscv/packed_helper.c| 354 
>  target/riscv/translate.c|   1 +
>  6 files changed, 535 insertions(+)
>  create mode 100644 target/riscv/insn_trans/trans_rvp.c.inc
>  create mode 100644 target/riscv/packed_helper.c
>
> diff --git a/target/riscv/helper.h b/target/riscv/helper.h
> index 415e37bc37..b6a71ade33 100644
> --- a/target/riscv/helper.h
> +++ b/target/riscv/helper.h
> @@ -1149,3 +1149,33 @@ DEF_HELPER_6(vcompress_vm_b, void, ptr, ptr, ptr, ptr, 
> env, i32)
>  DEF_HELPER_6(vcompress_vm_h, void, ptr, ptr, ptr, ptr, env, i32)
>  DEF_HELPER_6(vcompress_vm_w, void, ptr, ptr, ptr, ptr, env, i32)
>  DEF_HELPER_6(vcompress_vm_d, void, ptr, ptr, ptr, ptr, env, i32)
> +
> +/* P extension function */
> +DEF_HELPER_3(radd16, tl, env, tl, tl)
> +DEF_HELPER_3(uradd16, tl, env, tl, tl)
> +DEF_HELPER_3(kadd16, tl, env, tl, tl)
> +DEF_HELPER_3(ukadd16, tl, env, tl, tl)
> +DEF_HELPER_3(rsub16, tl, env, tl, tl)
> +DEF_HELPER_3(ursub16, tl, env, tl, tl)
> +DEF_HELPER_3(ksub16, tl, env, tl, tl)
> +DEF_HELPER_3(uksub16, tl, env, tl, tl)
> +DEF_HELPER_3(cras16, tl, env, tl, tl)
> +DEF_HELPER_3(rcras16, tl, env, tl, tl)
> +DEF_HELPER_3(urcras16, tl, env, tl, tl)
> +DEF_HELPER_3(kcras16, tl, env, tl, tl)
> +DEF_HELPER_3(ukcras16, tl, env, tl, tl)
> +DEF_HELPER_3(crsa16, tl, env, tl, tl)
> +DEF_HELPER_3(rcrsa16, tl, env, tl, tl)
> +DEF_HELPER_3(urcrsa16, tl, env, tl, tl)
> +DEF_HELPER_3(kcrsa16, tl, env, tl, tl)
> +DEF_HELPER_3(ukcrsa16, tl, env, tl, tl)
> +DEF_HELPER_3(stas16, tl, env, tl, tl)
> +DEF_HELPER_3(rstas16, tl, env, tl, tl)
> +DEF_HELPER_3(urstas16, tl, env, tl, tl)
> +DEF_HELPER_3(kstas16, tl, env, tl, tl)
> +DEF_HELPER_3(ukstas16, tl, env, tl, tl)
> +DEF_HELPER_3(stsa16, tl, env, tl, tl)
> +DEF_HELPER_3(rstsa16, tl, env, tl, tl)
> +DEF_HELPER_3(urstsa16, tl, env, tl, tl)
> +DEF_HELPER_3(kstsa16, tl, env, tl, tl)
> +DEF_HELPER_3(ukstsa16, tl, env, tl, tl)
> diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
> index f09f8d5faf..57f72fabf6 100644
> --- a/target/riscv/insn32.decode
> +++ b/target/riscv/insn32.decode
> @@ -732,3 +732,35 @@ greviw 0110100 .. 101 . 0011011 @sh5
>  gorciw 0010100 .. 101 . 0011011 @sh5
>
>  slli_uw1. ... 001 . 0011011 @sh
> +
> +# *** RV32P Extension ***
> +add16  010  . . 000 . 1110111 @r
> +radd16 000  . . 000 . 1110111 @r
> +uradd16001  . . 000 . 1110111 @r
> +kadd16 0001000  . . 000 . 1110111 @r
> +ukadd160011000  . . 000 . 1110111 @r
> +sub16  011  . . 000 . 1110111 @r
> +rsub16 001  . . 000 . 1110111 @r
> +ursub160010001  . . 000 . 1110111 @r
> +ksub16 0001001  . . 000 . 1110111 @r
> +uksub160011001  . . 000 . 1110111 @r
> +cras16 0100010  . . 000 . 1110111 @r
> +rcras16010  . . 000 . 1110111 @r
> +urcras16   0010010  . . 000 . 1110111 @r
> +kcras160001010  . . 000 . 1110111 @r
> +ukcras16   0011010  . . 000 . 1110111 @r
> +crsa16 0100011  . . 000 . 1110111 @r
> +rcrsa16011  . . 000 . 1110111 @r
> +urcrsa16   0010011  . . 000 . 1110111 @r
> +kcrsa160001011  . . 000 . 1110111 @r
> +ukcrsa16   0011011  . . 000 . 1110111 @r
> +stas16 010  . . 010 . 1110111 @r
> +rstas161011010  . . 010 . 1110111 @r
> +urstas16   1101010  . . 010 . 1110111 @r
> +kstas161100010  . . 010 . 1110111 @r
> +ukstas16   1110010  . . 010 . 1110111 @r
> +stsa16 011  . . 010 . 1110111 @r
> +rstsa161011011  . . 010 . 1110111 @r
> +urstsa16   1101011  . . 010 . 1110111 @r
> +kstsa161100011  . . 010 . 1110111 @r
> +ukstsa16   1110011  . . 010 . 1110111 @r
> diff --git a/target/riscv/insn_trans/trans_rvp.c.inc 
> b/target/riscv/insn_trans/trans_rvp.c.inc
> new file mode 100644
> index 00..43f395657a
> --- /dev/null
> +++ b/target/riscv/insn_trans/trans_rvp.c.inc
> @@ -0,0 +1,117 @@
> +/*
> + * RISC-V translation routines for the RVP Standard Extension.
> + *
> + * Copyright (c) 2021 T-Head Semiconductor Co., Ltd. All rights reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the 
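The five flavors listed in the commit message differ only in how per-lane overflow is handled. A rough one-lane Python model (illustrative only; the precise semantics are defined by the P-extension spec, not this sketch):

```python
INT16_MIN, INT16_MAX = -(1 << 15), (1 << 15) - 1

def wrap16(v: int) -> int:
    # Wrap-around: keep the low 16 bits, reinterpreted as signed.
    v &= 0xffff
    return v - 0x10000 if v & 0x8000 else v

def radd16(a: int, b: int) -> int:
    # Signed halving add: the exact 17-bit sum halved can never
    # overflow 16 bits, so no saturation is needed.
    return (a + b) >> 1

def kadd16(a: int, b: int) -> int:
    # Signed saturating add: clamp the exact sum to the int16 range.
    return max(INT16_MIN, min(INT16_MAX, a + b))

print(wrap16(INT16_MAX + 1))         # wraps to -32768
print(radd16(INT16_MAX, INT16_MAX))  # 32767: halving avoids overflow
print(kadd16(INT16_MAX, 1))          # saturates at 32767
```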

Re: [PATCH v3 00/37] target/riscv: support packed extension v0.9.4

2021-06-30 Thread Alistair Francis
On Thu, Jun 24, 2021 at 9:14 PM LIU Zhiwei  wrote:
>
> This patchset implements the packed extension for RISC-V on QEMU.
>
> You can also find this patch set on my
> repo(https://github.com/romanheros/qemu.git branch:packed-upstream-v3).
>
> Features:
> * support specification packed extension
>   v0.9.4(https://github.com/riscv/riscv-p-spec/)
> * support basic packed extension.
> * support Zpsoperand.

There is now a 0.9.5, do you have plans to support that?

Alistair

>
> v3:
> * split 32 bit vector operations.
>
> v2:
> * remove all the TARGET_RISCV64 macro.
> * use tcg_gen_vec_* to accelabrate.
> * update specficication to latest v0.9.4
> * fix kmsxda32, kmsda32,kslra32,smal
>
> LIU Zhiwei (37):
>   target/riscv: implementation-defined constant parameters
>   target/riscv: Make the vector helper functions public
>   target/riscv: 16-bit Addition & Subtraction Instructions
>   target/riscv: 8-bit Addition & Subtraction Instruction
>   target/riscv: SIMD 16-bit Shift Instructions
>   target/riscv: SIMD 8-bit Shift Instructions
>   target/riscv: SIMD 16-bit Compare Instructions
>   target/riscv: SIMD 8-bit Compare Instructions
>   target/riscv: SIMD 16-bit Multiply Instructions
>   target/riscv: SIMD 8-bit Multiply Instructions
>   target/riscv: SIMD 16-bit Miscellaneous Instructions
>   target/riscv: SIMD 8-bit Miscellaneous Instructions
>   target/riscv: 8-bit Unpacking Instructions
>   target/riscv: 16-bit Packing Instructions
>   target/riscv: Signed MSW 32x32 Multiply and Add Instructions
>   target/riscv: Signed MSW 32x16 Multiply and Add Instructions
>   target/riscv: Signed 16-bit Multiply 32-bit Add/Subtract Instructions
>   target/riscv: Signed 16-bit Multiply 64-bit Add/Subtract Instructions
>   target/riscv: Partial-SIMD Miscellaneous Instructions
>   target/riscv: 8-bit Multiply with 32-bit Add Instructions
>   target/riscv: 64-bit Add/Subtract Instructions
>   target/riscv: 32-bit Multiply 64-bit Add/Subtract Instructions
>   target/riscv: Signed 16-bit Multiply with 64-bit Add/Subtract
> Instructions
>   target/riscv: Non-SIMD Q15 saturation ALU Instructions
>   target/riscv: Non-SIMD Q31 saturation ALU Instructions
>   target/riscv: 32-bit Computation Instructions
>   target/riscv: Non-SIMD Miscellaneous Instructions
>   target/riscv: RV64 Only SIMD 32-bit Add/Subtract Instructions
>   target/riscv: RV64 Only SIMD 32-bit Shift Instructions
>   target/riscv: RV64 Only SIMD 32-bit Miscellaneous Instructions
>   target/riscv: RV64 Only SIMD Q15 saturating Multiply Instructions
>   target/riscv: RV64 Only 32-bit Multiply Instructions
>   target/riscv: RV64 Only 32-bit Multiply & Add Instructions
>   target/riscv: RV64 Only 32-bit Parallel Multiply & Add Instructions
>   target/riscv: RV64 Only Non-SIMD 32-bit Shift Instructions
>   target/riscv: RV64 Only 32-bit Packing Instructions
>   target/riscv: configure and turn on packed extension from command line
>
>  target/riscv/cpu.c  |   34 +
>  target/riscv/cpu.h  |6 +
>  target/riscv/helper.h   |  330 ++
>  target/riscv/insn32.decode  |  370 +++
>  target/riscv/insn_trans/trans_rvp.c.inc | 1155 +++
>  target/riscv/internals.h|   50 +
>  target/riscv/meson.build|1 +
>  target/riscv/packed_helper.c| 3851 +++
>  target/riscv/translate.c|3 +
>  target/riscv/vector_helper.c|   82 +-
>  10 files changed, 5824 insertions(+), 58 deletions(-)
>  create mode 100644 target/riscv/insn_trans/trans_rvp.c.inc
>  create mode 100644 target/riscv/packed_helper.c
>
> --
> 2.17.1
>
>



Re: [PATCH] target/riscv: pmp: Fix some typos

2021-06-30 Thread Alistair Francis
On Sun, Jun 27, 2021 at 9:57 PM Bin Meng  wrote:
>
> %s/CSP/CSR
> %s/thie/the
>
> Signed-off-by: Bin Meng 

Thanks!

Applied to riscv-to-apply.next

Alistair

> ---
>
>  target/riscv/pmp.c | 10 +-
>  1 file changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/target/riscv/pmp.c b/target/riscv/pmp.c
> index 82ed020b10..54abf42583 100644
> --- a/target/riscv/pmp.c
> +++ b/target/riscv/pmp.c
> @@ -456,7 +456,7 @@ bool pmp_hart_has_privs(CPURISCVState *env, target_ulong 
> addr,
>  }
>
>  /*
> - * Handle a write to a pmpcfg CSP
> + * Handle a write to a pmpcfg CSR
>   */
>  void pmpcfg_csr_write(CPURISCVState *env, uint32_t reg_index,
>  target_ulong val)
> @@ -483,7 +483,7 @@ void pmpcfg_csr_write(CPURISCVState *env, uint32_t 
> reg_index,
>
>
>  /*
> - * Handle a read from a pmpcfg CSP
> + * Handle a read from a pmpcfg CSR
>   */
>  target_ulong pmpcfg_csr_read(CPURISCVState *env, uint32_t reg_index)
>  {
> @@ -502,7 +502,7 @@ target_ulong pmpcfg_csr_read(CPURISCVState *env, uint32_t 
> reg_index)
>
>
>  /*
> - * Handle a write to a pmpaddr CSP
> + * Handle a write to a pmpaddr CSR
>   */
>  void pmpaddr_csr_write(CPURISCVState *env, uint32_t addr_index,
>  target_ulong val)
> @@ -540,7 +540,7 @@ void pmpaddr_csr_write(CPURISCVState *env, uint32_t 
> addr_index,
>
>
>  /*
> - * Handle a read from a pmpaddr CSP
> + * Handle a read from a pmpaddr CSR
>   */
>  target_ulong pmpaddr_csr_read(CPURISCVState *env, uint32_t addr_index)
>  {
> @@ -593,7 +593,7 @@ target_ulong mseccfg_csr_read(CPURISCVState *env)
>
>  /*
>   * Calculate the TLB size if the start address or the end address of
> - * PMP entry is presented in thie TLB page.
> + * PMP entry is presented in the TLB page.
>   */
>  static target_ulong pmp_get_tlb_size(CPURISCVState *env, int pmp_index,
>   target_ulong tlb_sa, target_ulong 
> tlb_ea)
> --
> 2.25.1
>
>



ping Re: [PATCH] scsi: fix bug scsi resp sense is 0 when expand disk

2021-06-30 Thread wangjie (P)
ping.

On 2021/6/29 15:12, Jie Wang wrote:
> A large number of I/Os are delivered during disk capacity expansion.
> Many I/Os are extracted from the vring, and each one registers
> reqops_unit_attention when a new SCSI request is created.
> If the first registered request takes the UA, the UA is cleared
> and the other registered requests return sense 0.
> 
> Let's add req_has_ua to avoid this kind of thing.
> 
> Signed-off-by: suruifeng 
> Signed-off-by: Jie Wang 
> ---
>  hw/scsi/scsi-bus.c | 10 --
>  include/hw/scsi/scsi.h |  1 +
>  2 files changed, 9 insertions(+), 2 deletions(-)
> 
> diff --git a/hw/scsi/scsi-bus.c b/hw/scsi/scsi-bus.c
> index 2a0a98cac9..20ec4a5f74 100644
> --- a/hw/scsi/scsi-bus.c
> +++ b/hw/scsi/scsi-bus.c
> @@ -722,7 +722,13 @@ SCSIRequest *scsi_req_new(SCSIDevice *d, uint32_t tag, 
> uint32_t lun,
>* If we already have a pending unit attention condition,
>* report this one before triggering another one.
>*/
> - !(buf[0] == REQUEST_SENSE && d->sense_is_ua))) {
> + !(buf[0] == REQUEST_SENSE && d->sense_is_ua)) &&
> + /*
> +  * If we already have a req register ua ops,
> +  * other req can not register.
> +  */
> + !d->req_has_ua) {
> +d->req_has_ua = true;
>  ops = &reqops_unit_attention;
>  } else if (lun != d->lun ||
> buf[0] == REPORT_LUNS ||
> @@ -822,7 +828,7 @@ static void scsi_clear_unit_attention(SCSIRequest *req)
>ua->ascq == SENSE_CODE(REPORTED_LUNS_CHANGED).ascq)) {
>  return;
>  }
> -
> +req->dev->req_has_ua = false;
>  *ua = SENSE_CODE(NO_SENSE);
>  }
>  
> diff --git a/include/hw/scsi/scsi.h b/include/hw/scsi/scsi.h
> index 0b726bc78c..3d0cda68f6 100644
> --- a/include/hw/scsi/scsi.h
> +++ b/include/hw/scsi/scsi.h
> @@ -74,6 +74,7 @@ struct SCSIDevice
>  BlockConf conf;
>  SCSISense unit_attention;
>  bool sense_is_ua;
> +bool req_has_ua;
>  uint8_t sense[SCSI_SENSE_BUF_SIZE];
>  uint32_t sense_len;
>  QTAILQ_HEAD(, SCSIRequest) requests;
> 
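The race described in the commit message can be modeled in a few lines of Python; this is a toy model of the control flow with the fix applied, not the QEMU code:

```python
from dataclasses import dataclass

@dataclass
class Device:
    ua_pending: bool = True    # a unit-attention condition is outstanding
    req_has_ua: bool = False   # the fix: only one request may own the UA

def new_request(dev: Device) -> str:
    # Mirrors the scsi_req_new() decision with the fix applied: register
    # the unit-attention ops only if no in-flight request already did.
    if dev.ua_pending and not dev.req_has_ua:
        dev.req_has_ua = True
        return "unit_attention"
    return "normal"

dev = Device()
ops = [new_request(dev) for _ in range(3)]
print(ops)  # only the first request takes the unit-attention path
```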



Re: [PATCH 20/20] target/loongarch: Add linux-user emulation support

2021-06-30 Thread maobibo



On 2021/6/30 17:36, Alex Bennée wrote:
> 
> maobibo  writes:
> 
>> 在 2021年06月29日 21:42, Peter Maydell 写道:
>>> On Mon, 28 Jun 2021 at 13:05, Song Gao  wrote:

 Add files to linux-user/loongarch64
 Add file to default-configs
 Add loongarch to target/meson.build

 Signed-off-by: Song Gao 
 ---
  MAINTAINERS|   1 +
  default-configs/targets/loongarch64-linux-user.mak |   4 +
  include/elf.h  |   2 +
  linux-user/elfload.c   |  58 
  linux-user/loongarch64/cpu_loop.c  | 177 
  linux-user/loongarch64/signal.c| 193 +
  linux-user/loongarch64/sockbits.h  |   1 +
  linux-user/loongarch64/syscall_nr.h| 307 
 +
  linux-user/loongarch64/target_cpu.h|  36 +++
  linux-user/loongarch64/target_elf.h|  14 +
  linux-user/loongarch64/target_fcntl.h  |  12 +
  linux-user/loongarch64/target_signal.h |  28 ++
  linux-user/loongarch64/target_structs.h|  49 
  linux-user/loongarch64/target_syscall.h|  46 +++
  linux-user/loongarch64/termbits.h  | 229 +++
  linux-user/syscall_defs.h  |   8 +-
  meson.build|   2 +-
  qapi/machine-target.json   |   4 +-
  target/loongarch/meson.build   |  19 ++
  target/meson.build |   1 +
  20 files changed, 1185 insertions(+), 6 deletions(-)
>>>
>>> This is a massive patch that would benefit from being split up
>>> into multiple smaller patches.
>>>
>>> I'm told by a kernel developer that loongarch hasn't yet been
>>> accepted into the Linux kernel mainline. Until it has been, the
>>> syscall ABI for it is not yet stable, so we won't be able to take
>>> the linux-user patches for it yet. (We have been burned in the
>>> past by taking linux-user architecture support patches without
>>> realizing they weren't for a stable ABI, and then being out of
>>> sync with the eventual upstream kernel ABI that was accepted.)
>>>
>>> We can certainly do code review in the meantime, though.
>> Thanks for reviewing this big patch series. It is understandable that
>> the Linux kernel support for a new architecture should be merged first,
>> and that the linux-user emulation can follow later.
>>
>> We are planning to submit patch to linux kernel for LoongArch support,
>> there is the link:
>> https://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson.git/log/?h=loongarch-next
>>
>> And we will continue to submit softmmu support for LoongArch. Are there
>> any extra requirements for softmmu emulation of a new architecture,
>> such as gcc/binutils/BIOS?
> 
> Ideally if there are some pre-built toolchains either as part of a
> distro (we've used Debian Sid before for some) or easily to install in a
> docker container as binary tarballs (like we do for tricore) then we can
> enable basic check-tcg functionality.
> 
> Going forward having stable URLs for test images of distros means we can
> also enable check-acceptance tests.
Thanks for the guidance; that requires the Linux kernel/gcc/glibc ports to
be submitted already. My point is that linux-user emulation depends on the
kernel syscall ABI, while softmmu emulation has no such dependency; on the
contrary, system emulation can be used to verify the Linux kernel. Is there
any requirement for system emulation of a new architecture?

bibo,mao

> 
>>
>> regards
>> bibo, mao
>>
>>
>>>
>>> thanks
>>> -- PMM
>>>
> 
> 




[RFC 1/3] modules: Add CONFIG_TCG_MODULAR in config_host

2021-06-30 Thread Jose R. Ziviani
CONFIG_TCG_MODULAR is a complement to CONFIG_MODULES, making it possible
to know whether TCG will be built as a module even when the
--enable-modules option was set.

Signed-off-by: Jose R. Ziviani 
---
 meson.build | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/meson.build b/meson.build
index 2d72b8cc06..c37a2358d4 100644
--- a/meson.build
+++ b/meson.build
@@ -277,6 +277,9 @@ if not get_option('tcg').disabled()
 
   accelerators += 'CONFIG_TCG'
   config_host += { 'CONFIG_TCG': 'y' }
+  if is_tcg_modular
+config_host += { 'CONFIG_TCG_MODULAR': 'y' }
+  endif
 endif
 
 if 'CONFIG_KVM' not in accelerators and get_option('kvm').enabled()
-- 
2.32.0




[RFC 0/3] Improve module accelerator error message

2021-06-30 Thread Jose R. Ziviani
Hello!

I'm sending this as RFC because it's based on a patch still under
review[1], so I'd like to see if it makes sense.

It will improve the error message shown when an accelerator module could
not be loaded. Instead of the current assertion failure, a formatted
message will be displayed.

[1] https://patchwork.kernel.org/project/qemu-devel/list/?series=506379

Jose R. Ziviani (3):
  modules: Add CONFIG_TCG_MODULAR in config_host
  modules: Implement module_is_loaded function
  qom: Improve error message in module_object_class_by_name()

 include/qemu/module.h |  3 +++
 meson.build   |  3 +++
 qom/object.c  | 30 ++
 util/module.c | 28 +---
 4 files changed, 57 insertions(+), 7 deletions(-)

-- 
2.32.0




[RFC 3/3] qom: Improve error message in module_object_class_by_name()

2021-06-30 Thread Jose R. Ziviani
module_object_class_by_name() calls module_load_qom_one() if the object
is provided by a dynamically linked library. Such a library might not be
available at that moment - for instance, its package may not be installed
yet. Thus, instead of assertion failure messages, this patch outputs
friendlier messages.

Current error messages:
$ ./qemu-system-x86_64 -machine q35 -accel tcg -kernel /boot/vmlinuz
...
ERROR:../accel/accel-softmmu.c:82:accel_init_ops_interfaces: assertion failed: 
(ops != NULL)
Bail out! ERROR:../accel/accel-softmmu.c:82:accel_init_ops_interfaces: 
assertion failed: (ops != NULL)
[1]31964 IOT instruction (core dumped)  ./qemu-system-x86_64 ...

New error message:
$ ./qemu-system-x86_64 -machine q35 -accel tcg -kernel /boot/vmlinuz
accel-tcg-x86_64 module is missing, install the package or config the library 
path correctly.

$ make check
...
Running test qtest-x86_64/test-filter-mirror
Running test qtest-x86_64/endianness-test
accel-qtest-x86_64 module is missing, install the package or config the library 
path correctly.
accel-qtest-x86_64 module is missing, install the package or config the library 
path correctly.
accel-qtest-x86_64 module is missing, install the package or config the library 
path correctly.
accel-qtest-x86_64 module is missing, install the package or config the library 
path correctly.
accel-qtest-x86_64 module is missing, install the package or config the library 
path correctly.
accel-tcg-x86_64 module is missing, install the package or config the library 
path correctly.
...

Signed-off-by: Jose R. Ziviani 
---
 qom/object.c | 30 ++
 1 file changed, 30 insertions(+)

diff --git a/qom/object.c b/qom/object.c
index 6a01d56546..2d40245af9 100644
--- a/qom/object.c
+++ b/qom/object.c
@@ -1024,6 +1024,24 @@ ObjectClass *object_class_by_name(const char *typename)
 return type->class;
 }
 
+char *get_accel_module_name(const char *ac_name);
+
+char *get_accel_module_name(const char *ac_name)
+{
+size_t len = strlen(ac_name);
+char *module_name = NULL;
+
+if (strncmp(ac_name, "tcg-accel-ops", len) == 0) {
+#ifdef CONFIG_TCG_MODULAR
+module_name = g_strdup_printf("%s%s", "accel-tcg-", "x86_64");
+#endif
+} else if (strncmp(ac_name, "qtest-accel-ops", len) == 0) {
+module_name = g_strdup_printf("%s%s", "accel-qtest-", "x86_64");
+}
+
+return module_name;
+}
+
 ObjectClass *module_object_class_by_name(const char *typename)
 {
 ObjectClass *oc;
@@ -1031,8 +1049,20 @@ ObjectClass *module_object_class_by_name(const char *typename)
 oc = object_class_by_name(typename);
 #ifdef CONFIG_MODULES
 if (!oc) {
+char *module_name;
 module_load_qom_one(typename);
 oc = object_class_by_name(typename);
+module_name = get_accel_module_name(typename);
+if (module_name) {
+if (!module_is_loaded(module_name)) {
+fprintf(stderr, "%s module is missing, install the "
+"package or config the library path "
+"correctly.\n", module_name);
+g_free(module_name);
+exit(1);
+}
+g_free(module_name);
+}
 }
 #endif
 return oc;
-- 
2.32.0
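The typename-to-module lookup in the patch above can be summarized compactly. The following is a hypothetical Python re-expression of that logic, not the actual QEMU code: it simplifies the prefix comparison to an exact match and hard-codes the architecture, as the RFC itself does.

```python
# Hypothetical sketch: map an accelerator ops typename to the module
# expected to provide it, so a missing module can be reported with a
# friendly message instead of an assertion failure.
ACCEL_MODULE_PREFIXES = {
    "tcg-accel-ops": "accel-tcg-",
    "qtest-accel-ops": "accel-qtest-",
}

def get_accel_module_name(ac_name, arch="x86_64"):
    """Return the module name for an accelerator typename, or None."""
    prefix = ACCEL_MODULE_PREFIXES.get(ac_name)
    return prefix + arch if prefix else None
```

For example, `get_accel_module_name("tcg-accel-ops")` yields `"accel-tcg-x86_64"`, the module name checked before printing the "module is missing" message.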




[RFC 2/3] modules: Implement module_is_loaded function

2021-06-30 Thread Jose R. Ziviani
The function module_load_one() fills a hash table with all modules that
were successfully loaded. However, that table is a static variable of
module_load_one(). This patch moves the table to file scope and adds a
function that reports whether a given module has been loaded.

Signed-off-by: Jose R. Ziviani 
---
 include/qemu/module.h |  3 +++
 util/module.c | 28 +---
 2 files changed, 24 insertions(+), 7 deletions(-)

diff --git a/include/qemu/module.h b/include/qemu/module.h
index 456e190a55..01779cc7fb 100644
--- a/include/qemu/module.h
+++ b/include/qemu/module.h
@@ -14,6 +14,7 @@
 #ifndef QEMU_MODULE_H
 #define QEMU_MODULE_H
 
+#include <stdbool.h>
 
 #define DSO_STAMP_FUN glue(qemu_stamp, CONFIG_STAMP)
 #define DSO_STAMP_FUN_STR stringify(DSO_STAMP_FUN)
@@ -74,6 +75,8 @@ void module_load_qom_one(const char *type);
 void module_load_qom_all(void);
 void module_allow_arch(const char *arch);
 
+bool module_is_loaded(const char *name);
+
 /**
  * DOC: module info annotation macros
  *
diff --git a/util/module.c b/util/module.c
index 6bb4ad915a..64307b7a25 100644
--- a/util/module.c
+++ b/util/module.c
@@ -119,6 +119,8 @@ static const QemuModinfo module_info_stub[] = { {
 static const QemuModinfo *module_info = module_info_stub;
 static const char *module_arch;
 
+static GHashTable *loaded_modules;
+
 void module_init_info(const QemuModinfo *info)
 {
 module_info = info;
@@ -206,13 +208,10 @@ static int module_load_file(const char *fname, bool mayfail, bool export_symbols
 out:
 return ret;
 }
-#endif
 
 bool module_load_one(const char *prefix, const char *lib_name, bool mayfail)
 {
 bool success = false;
-
-#ifdef CONFIG_MODULES
 char *fname = NULL;
 #ifdef CONFIG_MODULE_UPGRADES
 char *version_dir;
@@ -223,7 +222,6 @@ bool module_load_one(const char *prefix, const char *lib_name, bool mayfail)
 int i = 0, n_dirs = 0;
 int ret;
 bool export_symbols = false;
-static GHashTable *loaded_modules;
 const QemuModinfo *modinfo;
 const char **sl;
 
@@ -307,12 +305,9 @@ bool module_load_one(const char *prefix, const char *lib_name, bool mayfail)
 g_free(dirs[i]);
 }
 
-#endif
 return success;
 }
 
-#ifdef CONFIG_MODULES
-
 static bool module_loaded_qom_all;
 
 void module_load_qom_one(const char *type)
@@ -377,6 +372,15 @@ void qemu_load_module_for_opts(const char *group)
 }
 }
 
+bool module_is_loaded(const char *name)
+{
+if (!loaded_modules || !g_hash_table_contains(loaded_modules, name)) {
+return false;
+}
+
+return true;
+}
+
 #else
 
 void module_allow_arch(const char *arch) {}
@@ -384,4 +388,14 @@ void qemu_load_module_for_opts(const char *group) {}
 void module_load_qom_one(const char *type) {}
 void module_load_qom_all(void) {}
 
+bool module_load_one(const char *prefix, const char *lib_name, bool mayfail)
+{
+return false;
+}
+
+bool module_is_loaded(const char *name)
+{
+return false;
+}
+
 #endif
-- 
2.32.0
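The pattern in the patch above is worth stating plainly: the record of successfully loaded modules is promoted from a function-local static to file scope, so a separate query function can consult it. A minimal Python sketch of that pattern (illustrative only, not the QEMU code):

```python
# Module registry at file scope (instead of inside the load function),
# so a separate query function can check membership.
_loaded_modules = set()

def module_load_one(name):
    """Pretend to load a module; record its name on success."""
    # Real code would dlopen() the shared object here; this sketch
    # always succeeds.
    _loaded_modules.add(name)
    return True

def module_is_loaded(name):
    """Report whether a module was previously loaded."""
    return name in _loaded_modules
```

The caller can then distinguish "module never loaded" from other failures, which is exactly what the error reporting in patch 1/3 needs.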




Re: [PATCH v4 0/4] avocado-qemu: New SMMUv3 and intel IOMMU tests

2021-06-30 Thread Wainer dos Santos Moschetta

Hi,

On 6/29/21 5:17 PM, Eric Auger wrote:

Hi Cleber, all,

On 6/29/21 4:36 PM, Eric Auger wrote:

This series adds ARM SMMU and Intel IOMMU functional
tests using Fedora cloud-init images.

ARM SMMU tests feature guests with and without RIL
(range invalidation support) using respectively fedora 33
and 31.  For each, we test the protection of virtio-net-pci
and virtio-block-pci devices. Also strict=no and passthrough
modes are tested. So there is a total of 6 tests.

The series applies on top of Cleber's series:
- [PATCH 0/3] Acceptance Tests: support choosing specific

Note:
- SMMU tests 2, 3, 5, 6 (resp. test_smmu_noril_passthrough and
test_smmu_noril_nostrict) pass but the log reports:
"WARN: Test passed but there were warnings during execution."
This seems due to the lack of hash when fetching the kernel and
initrd through fetch_asset():
WARNI| No hash provided. Cannot check the asset file integrity.

I wanted to emphasize that point and wondered how we could fix that
issue. It seems a pity that the tests get tagged as WARN due to the lack
of a sha1. Any advice?


As Willian mentioned somewhere, to suppress the WARN you can pass the
kernel and initrd checksums (sha1) to the fetch_asset() method.
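For reference, the sha1 values to pass as `asset_hash` can be computed locally with a short script; `hashlib` is in the Python standard library, and the file names below are only examples.

```python
import hashlib

def sha1_of_file(path, chunk_size=1 << 20):
    """Compute the sha1 hex digest of a file, reading it in chunks."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Example usage (file names are illustrative):
# print(sha1_of_file("vmlinuz"))
# print(sha1_of_file("initrd.img"))
```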


Below is a draft implementation. It would still need the remaining
checksums filled in and the `smmu.py` tests adjusted.


- Wainer



diff --git a/tests/acceptance/avocado_qemu/__init__.py 
b/tests/acceptance/avocado_qemu/__init__.py

index 00eb0bfcc8..83637e2654 100644
--- a/tests/acceptance/avocado_qemu/__init__.py
+++ b/tests/acceptance/avocado_qemu/__init__.py
@@ -312,6 +312,8 @@ class LinuxDistro:
 {'checksum': 'e3c1b309d9203604922d6e255c2c5d098a309c2d46215d8fc026954f3c5c27a0',
  'pxeboot_url': "https://archives.fedoraproject.org/pub/archive/fedora/"
                 "linux/releases/31/Everything/x86_64/os/images/pxeboot/",
+    'pxeboot_initrd_chksum': 'dd0340a1b39bd28f88532babd4581c67649ec5b1',
+    'pxeboot_vmlinuz_chksum': '5b6f6876e1b5bda314f93893271da0d5777b1f3c',
  'kernel_params': "root=UUID=b1438b9b-2cab-4065-a99a-08a96687f73c ro "
                   "no_timer_check net.ifnames=0 "
                   "console=tty1 console=ttyS0,115200n8"},
@@ -371,6 +373,16 @@ def pxeboot_url(self):
 """Gets the repository url where pxeboot files can be found"""
 return self._info.get('pxeboot_url', None)

+    @property
+    def pxeboot_initrd_chksum(self):
+    """Gets the pxeboot initrd file checksum"""
+    return self._info.get('pxeboot_initrd_chksum', None)
+
+    @property
+    def pxeboot_vmlinuz_chksum(self):
+    """Gets the pxeboot vmlinuz file checksum"""
+    return self._info.get('pxeboot_vmlinuz_chksum', None)
+
 @property
 def checksum(self):
 """Gets the cloud-image file checksum"""
diff --git a/tests/acceptance/intel_iommu.py 
b/tests/acceptance/intel_iommu.py

index bf8dea6e4f..a2f38ee2e9 100644
--- a/tests/acceptance/intel_iommu.py
+++ b/tests/acceptance/intel_iommu.py
@@ -55,8 +55,10 @@ def common_vm_setup(self, custom_kernel=None):

 kernel_url = self.distro.pxeboot_url + 'vmlinuz'
 initrd_url = self.distro.pxeboot_url + 'initrd.img'
-    self.kernel_path = self.fetch_asset(kernel_url)
-    self.initrd_path = self.fetch_asset(initrd_url)
+    self.kernel_path = self.fetch_asset(kernel_url,
+        asset_hash=self.distro.pxeboot_vmlinuz_chksum)
+    self.initrd_path = self.fetch_asset(initrd_url,
+        asset_hash=self.distro.pxeboot_initrd_chksum)

 def run_and_check(self):
 if self.kernel_path:



Best Regards

Eric

History:
v3 -> v4:
- I added Wainer's refactoring of KNOWN_DISTROS
into a class (last patch) and took into account his comments.

v2 -> v3:
- Intel IOMMU tests were added. Different
operating modes are tested, such as strict, caching mode, and pt.

Best Regards

Eric

The series and its dependencies can be found at:
https://github.com/eauger/qemu/tree/avocado-qemu-v4

Eric Auger (3):
   Acceptance Tests: Add default kernel params and pxeboot url to the
 KNOWN_DISTROS collection
   avocado_qemu: Add SMMUv3 tests
   avocado_qemu: Add Intel iommu tests

Wainer dos Santos Moschetta (1):
   avocado_qemu: Fix KNOWN_DISTROS map into the LinuxDistro class

  tests/acceptance/avocado_qemu/__init__.py | 118 +--
  tests/acceptance/intel_iommu.py   | 115 +++
  tests/acceptance/smmu.py  | 132 ++
  3 files changed, 332 insertions(+), 33 deletions(-)
  create mode 100644 tests/acceptance/intel_iommu.py
  create mode 100644 tests/acceptance/smmu.py






Re: [External] Re: [RFC v1] virtio/vsock: add two more queues for datagram types

2021-06-30 Thread Jiang Wang .
On Thu, Jun 24, 2021 at 7:31 AM Stefano Garzarella  wrote:
>
> On Wed, Jun 23, 2021 at 11:50:33PM -0700, Jiang Wang . wrote:
> >Hi Stefano,
> >
> >I checked virtio_net_set_multiqueue(), which will help with following
> >changes in my patch:
> >
> >#ifdef CONFIG_VHOST_VSOCK_DGRAM
> >vvc->dgram_recv_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
> >vhost_vsock_common_handle_output);
> >vvc->dgram_trans_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
> >vhost_vsock_common_handle_output);
> >#endif
> >
> >But I think there is still an issue with the following lines, right?
>
> Yep, I think so.
>
> >
> >#ifdef CONFIG_VHOST_VSOCK_DGRAM
> >struct vhost_virtqueue vhost_vqs[4];
> >#else
> >struct vhost_virtqueue vhost_vqs[2];
> >#endif
> >
> >I think the problem with feature bits is that they are set and get after
> >vhost_vsock_common_realize() and after vhost_dev_init() in 
> >drivers/vhost/vsock.c
> >But those virtqueues need to be set up correctly beforehand.
>
> I think we can follow net and scsi vhost devices, so we can set a
> VHOST_VSOCK_VQ_MAX(5), allocates all the queues in any case and then use
> only the queues acked by the guest.
>
Thanks for the advice. I checked both net and scsi and scsi is more helpful.

> >
> >I tried to test with the host kernel allocating 4 vqs, but qemu only
> >allocated 2 vqs, and
> >guest kernel will not be able to send even the vsock stream packets. I
> >think the host
> >kernel and the qemu have to agree on the number of vhost_vqs. Do you agree?
> >Did I miss something?
>
> Mmm, I need to check, but for example vhost-net calls vhost_dev_init()
> with VHOST_NET_VQ_MAX, but then the guest can decide to use only one
> couple of TX and RX queues.
>
> I'm not sure about qemu point of view, but I expected that QEMU can set
> less queues then queues allocated by the kernel. `vhost_dev.nvqs` should
> be set with the amount of queue that QEMU can handle.
>
I checked that vhost_dev.nvqs is still the maximum number of queues (4
queues). But I found a way to work around it. More details below.

> >
> >Another idea to make the setting in runtime instead of compiling time
> >is to use
> >qemu cmd-line options, then qemu can allocate 2 or 4 queues depending
> >on
> >the cmd line. This will solve the issue when the host kernel is an old
> >one( no dgram
> >support) and the qemu is a new one.
>
> I don't think this is a good idea, at most we can add an ioctl that qemu
> can use to query the kernel about allocated queues, but I still need to
> understand better if we really we need this.
>

Hmm. Both net and scsi use a qemu command-line option to configure the
number of queues. The command line is a flexible runtime setting, so I
think it is better than an ioctl. I also made the qemu option default
to allocating only two queues, to stay compatible with old versions.

> >
> >But there is still an issue when the host kernel is a new one, while
> >the qemu
> >is an old one.  I am not sure how to make the virtqueues numbers to
> >change in run-time
> >for the host kernel. In another email thread, you mentioned removing kconfig
> >in the linux kernel, I believe that is related to this qemu patch,
> >right?
>
> It was related to both, I don't think we should build QEMU and Linux
> with or without dgram support.
>
> > If so,
> >any ideas that I can make the host kernel change the number of vqs in
> >the run-time
> >or when starting up vsock? The only way I can think of is to use a
> >kernel module parameter
> >for the vsock_vhost module. Any other ideas? Thanks.
>
> I need to check better, but we should be able to do all at run time
> looking at the features field. As I said, both QEMU and kernel can
> allocate the maximum number of queues that they can handle, then enable
> only the queues allocated by the guest (e.g. during
> vhost_vsock_common_start()).
>

Yes. I checked the code and found there is an implementation bug (or
limitation) in drivers/vhost/vsock.c. In vhost_vsock_start(), if a queue
fails to init, the code cleans up all previously allocated queues. That
is why the V1 code does not work when the host kernel is new but qemu
and the guest kernel are old. I made a change there and it works now. I
will clean up the patch a little bit and send V2 soon.
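The allocate-maximum/enable-negotiated pattern discussed in this thread can be sketched as follows. All names and constants here are illustrative, not the actual vhost/QEMU API: the device allocates the maximum number of virtqueues up front and, at start time, enables only the queues implied by the guest-acked feature bits.

```python
VSOCK_VQ_MAX = 4  # rx, tx, dgram rx, dgram tx (illustrative layout)
F_DGRAM = 1 << 0  # illustrative datagram feature bit

class VsockDevice:
    def __init__(self):
        # Allocate the maximum number of queues up front ...
        self.vqs = [{"enabled": False} for _ in range(VSOCK_VQ_MAX)]

    def start(self, guest_features):
        # ... but enable only the queues the guest negotiated.
        n = VSOCK_VQ_MAX if guest_features & F_DGRAM else 2
        for vq in self.vqs[:n]:
            vq["enabled"] = True
        return n
```

An old guest that never acks the datagram feature keeps only the two stream queues enabled, which is how the scheme stays backward compatible without any #ifdef.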


> >
> >btw, I searched Linux kernel code but did not find any examples.
> >
>
> I'm a bit busy this week, I'll try to write some PoC next week if you
> can't find a working solution. (without any #ifdef :-)
>
> Thanks,
> Stefano
>



Re: [PATCH v7 4/4] Jobs based on custom runners: add job definitions for QEMU's machines

2021-06-30 Thread Wainer dos Santos Moschetta



On 6/29/21 10:26 PM, Cleber Rosa wrote:

The QEMU project has two machines (aarch64 and s390x) that can be used
for jobs that do build and run tests.  This introduces those jobs,
which are a mapping of custom scripts used for the same purpose.

Signed-off-by: Cleber Rosa 
Reviewed-by: Willian Rampazzo 
---
  .gitlab-ci.d/custom-runners.yml | 208 
  1 file changed, 208 insertions(+)

Reviewed-by: Wainer dos Santos Moschetta 


diff --git a/.gitlab-ci.d/custom-runners.yml b/.gitlab-ci.d/custom-runners.yml
index a07b27384c..061d3cdfed 100644
--- a/.gitlab-ci.d/custom-runners.yml
+++ b/.gitlab-ci.d/custom-runners.yml
@@ -12,3 +12,211 @@
  # guarantees a fresh repository on each job run.
  variables:
GIT_STRATEGY: clone
+
+# All ubuntu-18.04 jobs should run successfully in an environment
+# setup by the scripts/ci/setup/build-environment.yml task
+# "Install basic packages to build QEMU on Ubuntu 18.04/20.04"
+ubuntu-18.04-s390x-all-linux-static:
+ allow_failure: true
+ needs: []
+ stage: build
+ tags:
+ - ubuntu_18.04
+ - s390x
+ rules:
+ - if: '$CI_COMMIT_BRANCH =~ /^staging/'
+ script:
+ # --disable-libssh is needed because of https://bugs.launchpad.net/qemu/+bug/1838763
+ # --disable-glusterfs is needed because there's no static version of those libs in distro supplied packages
+ - mkdir build
+ - cd build
+ - ../configure --enable-debug --static --disable-system --disable-glusterfs 
--disable-libssh
+ - make --output-sync -j`nproc`
+ - make --output-sync -j`nproc` check V=1
+ - make --output-sync -j`nproc` check-tcg V=1
+
+ubuntu-18.04-s390x-all:
+ allow_failure: true
+ needs: []
+ stage: build
+ tags:
+ - ubuntu_18.04
+ - s390x
+ rules:
+ - if: '$CI_COMMIT_BRANCH =~ /^staging/'
+ script:
+ - mkdir build
+ - cd build
+ - ../configure --disable-libssh
+ - make --output-sync -j`nproc`
+ - make --output-sync -j`nproc` check V=1
+
+ubuntu-18.04-s390x-alldbg:
+ allow_failure: true
+ needs: []
+ stage: build
+ tags:
+ - ubuntu_18.04
+ - s390x
+ rules:
+ - if: '$CI_COMMIT_BRANCH =~ /^staging/'
+ script:
+ - mkdir build
+ - cd build
+ - ../configure --enable-debug --disable-libssh
+ - make clean
+ - make --output-sync -j`nproc`
+ - make --output-sync -j`nproc` check V=1
+
+ubuntu-18.04-s390x-clang:
+ allow_failure: true
+ needs: []
+ stage: build
+ tags:
+ - ubuntu_18.04
+ - s390x
+ rules:
+ - if: '$CI_COMMIT_BRANCH =~ /^staging/'
+   when: manual
+ script:
+ - mkdir build
+ - cd build
+ - ../configure --disable-libssh --cc=clang --cxx=clang++ --enable-sanitizers
+ - make --output-sync -j`nproc`
+ - make --output-sync -j`nproc` check V=1
+
+ubuntu-18.04-s390x-tci:
+ allow_failure: true
+ needs: []
+ stage: build
+ tags:
+ - ubuntu_18.04
+ - s390x
+ rules:
+ - if: '$CI_COMMIT_BRANCH =~ /^staging/'
+ script:
+ - mkdir build
+ - cd build
+ - ../configure --disable-libssh --enable-tcg-interpreter
+ - make --output-sync -j`nproc`
+
+ubuntu-18.04-s390x-notcg:
+ allow_failure: true
+ needs: []
+ stage: build
+ tags:
+ - ubuntu_18.04
+ - s390x
+ rules:
+ - if: '$CI_COMMIT_BRANCH =~ /^staging/'
+   when: manual
+ script:
+ - mkdir build
+ - cd build
+ - ../configure --disable-libssh --disable-tcg
+ - make --output-sync -j`nproc`
+ - make --output-sync -j`nproc` check V=1
+
+# All ubuntu-20.04 jobs should run successfully in an environment
+# setup by the scripts/ci/setup/qemu/build-environment.yml task
+# "Install basic packages to build QEMU on Ubuntu 18.04/20.04"
+ubuntu-20.04-aarch64-all-linux-static:
+ allow_failure: true
+ needs: []
+ stage: build
+ tags:
+ - ubuntu_20.04
+ - aarch64
+ rules:
+ - if: '$CI_COMMIT_BRANCH =~ /^staging/'
+ script:
+ # --disable-libssh is needed because of https://bugs.launchpad.net/qemu/+bug/1838763
+ # --disable-glusterfs is needed because there's no static version of those libs in distro supplied packages
+ - mkdir build
+ - cd build
+ - ../configure --enable-debug --static --disable-system --disable-glusterfs 
--disable-libssh
+ - make --output-sync -j`nproc`
+ - make --output-sync -j`nproc` check V=1
+ - make --output-sync -j`nproc` check-tcg V=1
+
+ubuntu-20.04-aarch64-all:
+ allow_failure: true
+ needs: []
+ stage: build
+ tags:
+ - ubuntu_20.04
+ - aarch64
+ rules:
+ - if: '$CI_COMMIT_BRANCH =~ /^staging/'
+ script:
+ - mkdir build
+ - cd build
+ - ../configure --disable-libssh
+ - make --output-sync -j`nproc`
+ - make --output-sync -j`nproc` check V=1
+
+ubuntu-20.04-aarch64-alldbg:
+ allow_failure: true
+ needs: []
+ stage: build
+ tags:
+ - ubuntu_20.04
+ - aarch64
+ rules:
+ - if: '$CI_COMMIT_BRANCH =~ /^staging/'
+ script:
+ - mkdir build
+ - cd build
+ - ../configure --enable-debug --disable-libssh
+ - make clean
+ - make --output-sync -j`nproc`
+ - make --output-sync -j`nproc` check V=1
+
+ubuntu-20.04-aarch64-clang:
+ allow_failure: true
+ needs: []
+ stage: build
+ tags:
+ - ubuntu_20.04
+ - aarch64
+ rules:
+ - if: '$CI_COMMIT_BRANCH =~ /^staging/'
+   when: manual
+ script:
+ - mkdir build
+ - cd build
+ - ../configure 

Re: [PATCH v7 3/4] Jobs based on custom runners: docs and gitlab-runner setup playbook

2021-06-30 Thread Wainer dos Santos Moschetta



On 6/29/21 10:26 PM, Cleber Rosa wrote:

To have the jobs dispatched to custom runners, gitlab-runner must
be installed, active as a service and properly configured.  The
variables file and playbook introduced here should help with those
steps.

The playbook introduced here covers the Linux distributions and
has been primarily tested on OS/machines that the QEMU project
has available to act as runners, namely:

  * Ubuntu 20.04 on aarch64
  * Ubuntu 18.04 on s390x

But, it should work on all other Linux distributions.  Earlier
versions were tested on FreeBSD too, so chances of success are
high.

Signed-off-by: Cleber Rosa 
Reviewed-by: Willian Rampazzo 
Tested-by: Willian Rampazzo 
---
  docs/devel/ci.rst  | 55 +++
  scripts/ci/setup/.gitignore|  2 +-
  scripts/ci/setup/gitlab-runner.yml | 71 ++
  scripts/ci/setup/vars.yml.template | 12 +
  4 files changed, 139 insertions(+), 1 deletion(-)
  create mode 100644 scripts/ci/setup/gitlab-runner.yml
  create mode 100644 scripts/ci/setup/vars.yml.template

Reviewed-by: Wainer dos Santos Moschetta 


diff --git a/docs/devel/ci.rst b/docs/devel/ci.rst
index bfedbb1025..b3bf3ef615 100644
--- a/docs/devel/ci.rst
+++ b/docs/devel/ci.rst
@@ -70,3 +70,58 @@ privileges, such as those from the ``root`` account or those 
obtained
  by ``sudo``.  If necessary, please refer to ``ansible-playbook``
  options such as ``--become``, ``--become-method``, ``--become-user``
  and ``--ask-become-pass``.
+
+gitlab-runner setup and registration
+
+
+The gitlab-runner agent needs to be installed on each machine that
+will run jobs.  The association between a machine and a GitLab project
+happens with a registration token.  To find the registration token for
+your repository/project, navigate on GitLab's web UI to:
+
+ * Settings (the gears-like icon at the bottom of the left hand side
+   vertical toolbar), then
+ * CI/CD, then
+ * Runners, and click on the "Expand" button, then
+ * Under "Set up a specific Runner manually", look for the value under
+   "And this registration token:"
+
+Copy the ``scripts/ci/setup/vars.yml.template`` file to
+``scripts/ci/setup/vars.yml``.  Then, set the
+``gitlab_runner_registration_token`` variable to the value obtained
+earlier.
+
+To run the playbook, execute::
+
+  cd scripts/ci/setup
+  ansible-playbook -i inventory gitlab-runner.yml
+
+Following the registration, it's necessary to configure the runner tags,
+and optionally other configurations on the GitLab UI.  Navigate to:
+
+ * Settings (the gears like icon), then
+ * CI/CD, then
+ * Runners, and click on the "Expand" button, then
+ * "Runners activated for this project", then
+ * Click on the "Edit" icon (next to the "Lock" Icon)
+
+Tags are very important as they are used to route specific jobs to
+specific types of runners, so it's a good idea to double check that
+the automatically created tags are consistent with the OS and
+architecture.  For instance, an Ubuntu 20.04 aarch64 system should
+have tags set as::
+
+  ubuntu_20.04,aarch64
+
+Because the job definition at ``.gitlab-ci.d/custom-runners.yml``
+would contain::
+
+  ubuntu-20.04-aarch64-all:
+   tags:
+   - ubuntu_20.04
+   - aarch64
+
+It's also recommended to:
+
+ * increase the "Maximum job timeout" to something like ``2h``
+ * give it a better Description
diff --git a/scripts/ci/setup/.gitignore b/scripts/ci/setup/.gitignore
index ee088604d1..f4a6183f1f 100644
--- a/scripts/ci/setup/.gitignore
+++ b/scripts/ci/setup/.gitignore
@@ -1,2 +1,2 @@
  inventory
-
+vars.yml
diff --git a/scripts/ci/setup/gitlab-runner.yml 
b/scripts/ci/setup/gitlab-runner.yml
new file mode 100644
index 00..1127db516f
--- /dev/null
+++ b/scripts/ci/setup/gitlab-runner.yml
@@ -0,0 +1,71 @@
+# Copyright (c) 2021 Red Hat, Inc.
+#
+# Author:
+#  Cleber Rosa 
+#
+# This work is licensed under the terms of the GNU GPL, version 2 or
+# later.  See the COPYING file in the top-level directory.
+#
+# This is an ansible playbook file.  Run it to set up systems with the
+# gitlab-runner agent.
+---
+- name: Installation of gitlab-runner
+  hosts: all
+  vars_files:
+- vars.yml
+  tasks:
+- debug:
+msg: 'Checking for a valid GitLab registration token'
+  failed_when: "gitlab_runner_registration_token == 
'PLEASE_PROVIDE_A_VALID_TOKEN'"
+
+- name: Create a group for the gitlab-runner service
+  group:
+name: gitlab-runner
+
+- name: Create a user for the gitlab-runner service
+  user:
+user: gitlab-runner
+group: gitlab-runner
+comment: GitLab Runner
+home: /home/gitlab-runner
+shell: /bin/bash
+
+- name: Remove the .bash_logout file when on Ubuntu systems
+  file:
+path: /home/gitlab-runner/.bash_logout
+state: absent
+  when: "ansible_facts['distribution'] == 'Ubuntu'"
+
+- name: Set the Operating System for gitlab-runner
+

Re: [PATCH v7 2/4] Jobs based on custom runners: build environment docs and playbook

2021-06-30 Thread Wainer dos Santos Moschetta



On 6/29/21 10:26 PM, Cleber Rosa wrote:

To run basic jobs on custom runners, the environment needs to be
properly set up.  The most common requirement is having the right
packages installed.

The playbook introduced here covers the QEMU's project s390x and
aarch64 machines.  At the time this is being proposed, those machines
have already had this playbook applied to them.

Signed-off-by: Cleber Rosa 
---
  docs/devel/ci.rst  |  40 +
  scripts/ci/setup/.gitignore|   2 +
  scripts/ci/setup/build-environment.yml | 116 +
  scripts/ci/setup/inventory.template|   1 +
  4 files changed, 159 insertions(+)
  create mode 100644 scripts/ci/setup/.gitignore
  create mode 100644 scripts/ci/setup/build-environment.yml
  create mode 100644 scripts/ci/setup/inventory.template


Reviewed-by: Wainer dos Santos Moschetta 



diff --git a/docs/devel/ci.rst b/docs/devel/ci.rst
index 064ffa9988..bfedbb1025 100644
--- a/docs/devel/ci.rst
+++ b/docs/devel/ci.rst
@@ -30,3 +30,43 @@ The GitLab CI jobs definition for the custom runners are 
located under::
  Custom runners entail custom machines.  To see a list of the machines
  currently deployed in the QEMU GitLab CI and their maintainers, please
  refer to the QEMU `wiki `__.
+
+Machine Setup Howto
+---
+
+For all Linux based systems, the setup can be mostly automated by the
+execution of two Ansible playbooks.  Create an ``inventory`` file
+under ``scripts/ci/setup``, such as this::
+
+  fully.qualified.domain
+  other.machine.hostname
+
+You may need to set some variables in the inventory file itself.  One
+very common need is to tell Ansible to use a Python 3 interpreter on
+those hosts.  This would look like::
+
+  fully.qualified.domain ansible_python_interpreter=/usr/bin/python3
+  other.machine.hostname ansible_python_interpreter=/usr/bin/python3
+
+Build environment
+~
+
+The ``scripts/ci/setup/build-environment.yml`` Ansible playbook will
+set up machines with the environment needed to perform builds and run
+QEMU tests.  This playbook consists of the installation of various
+required packages (and a general package update while at it).  It
+currently covers a number of different Linux distributions, but it can
+be expanded to cover other systems.
+
+The minimum required version of Ansible successfully tested in this
+playbook is 2.8.0 (a version check is embedded within the playbook
+itself).  To run the playbook, execute::
+
+  cd scripts/ci/setup
+  ansible-playbook -i inventory build-environment.yml
+
+Please note that most of the tasks in the playbook require superuser
+privileges, such as those from the ``root`` account or those obtained
+by ``sudo``.  If necessary, please refer to ``ansible-playbook``
+options such as ``--become``, ``--become-method``, ``--become-user``
+and ``--ask-become-pass``.
diff --git a/scripts/ci/setup/.gitignore b/scripts/ci/setup/.gitignore
new file mode 100644
index 00..ee088604d1
--- /dev/null
+++ b/scripts/ci/setup/.gitignore
@@ -0,0 +1,2 @@
+inventory
+
diff --git a/scripts/ci/setup/build-environment.yml 
b/scripts/ci/setup/build-environment.yml
new file mode 100644
index 00..581c1c75d1
--- /dev/null
+++ b/scripts/ci/setup/build-environment.yml
@@ -0,0 +1,116 @@
+# Copyright (c) 2021 Red Hat, Inc.
+#
+# Author:
+#  Cleber Rosa 
+#
+# This work is licensed under the terms of the GNU GPL, version 2 or
+# later.  See the COPYING file in the top-level directory.
+#
+# This is an ansible playbook file.  Run it to set up systems with the
+# environment needed to build QEMU.
+---
+- name: Installation of basic packages to build QEMU
+  hosts: all
+  tasks:
+- name: Check for suitable ansible version
+  delegate_to: localhost
+  assert:
+that:
+  - '((ansible_version.major == 2) and (ansible_version.minor >= 8)) or 
(ansible_version.major >= 3)'
+msg: "Unsuitable ansible version, please use version 2.8.0 or later"
+
+- name: Update apt cache / upgrade packages via apt
+  apt:
+update_cache: yes
+upgrade: yes
+  when:
+- ansible_facts['distribution'] == 'Ubuntu'
+
+- name: Install basic packages to build QEMU on Ubuntu 18.04/20.04
+  package:
+name:
+# Originally from tests/docker/dockerfiles/ubuntu1804.docker
+  - ccache
+  - gcc
+  - gettext
+  - git
+  - glusterfs-common
+  - libaio-dev
+  - libattr1-dev
+  - libbrlapi-dev
+  - libbz2-dev
+  - libcacard-dev
+  - libcap-ng-dev
+  - libcurl4-gnutls-dev
+  - libdrm-dev
+  - libepoxy-dev
+  - libfdt-dev
+  - libgbm-dev
+  - libgtk-3-dev
+  - libibverbs-dev
+  - libiscsi-dev
+  - libjemalloc-dev
+  - libjpeg-turbo8-dev
+  - liblzo2-dev
+  - libncurses5-dev
+

Re: [PATCH] python: Configure tox to skip missing interpreters

2021-06-30 Thread Willian Rampazzo
On Wed, Jun 30, 2021 at 3:46 PM Wainer dos Santos Moschetta
 wrote:
>
> Currently tox tests against all installed interpreters; if any
> supported interpreter is absent, the run fails. It seems unreasonable
> to expect developers to have all supported interpreters installed on
> their systems. Luckily, tox can be configured to skip missing
> interpreters.
>
> This changes the tox setup so that missing interpreters are skipped by
> default. On CI, however, we still want to enforce testing against all
> supported interpreters, so there the
> --skip-missing-interpreters=false option is passed to tox.
>
> Signed-off-by: Wainer dos Santos Moschetta 
> ---
> Tested locally with `make check-tox` on a system where I had only
> Python 3.6 and 3.9 installed.
> Tested on CI: https://gitlab.com/wainersm/qemu/-/jobs/1390010988
> Still on CI, but I deliberately removed Python 3.8: 
> https://gitlab.com/wainersm/qemu/-/jobs/1390046531
>
>  .gitlab-ci.d/static_checks.yml | 1 +
>  python/Makefile| 5 -
>  python/setup.cfg   | 1 +
>  3 files changed, 6 insertions(+), 1 deletion(-)
>

Seems reasonable.

Reviewed-by: Willian Rampazzo 




Re: [PATCH 1/3] build: validate that system capstone works before using it

2021-06-30 Thread Willian Rampazzo
On Fri, Jun 25, 2021 at 2:22 PM Daniel P. Berrangé  wrote:
>
> Some versions of capstone have shipped a broken pkg-config file which
> puts the -I path without the trailing '/capstone' suffix. This breaks
> the ability to "#include ". Upstream and most distros have
> fixed this, but a few stragglers remain, notably FreeBSD.
>
> Signed-off-by: Daniel P. Berrangé 
> ---
>  meson.build | 13 +
>  1 file changed, 13 insertions(+)
>

Reviewed-by: Willian Rampazzo 




Re: [PATCH 3/3] cirrus: delete FreeBSD and macOS jobs

2021-06-30 Thread Willian Rampazzo
On Fri, Jun 25, 2021 at 2:22 PM Daniel P. Berrangé  wrote:
>
> The builds for these two platforms can now be performed from GitLab CI
> using cirrus-run.
>
> Signed-off-by: Daniel P. Berrangé 
> ---
>  .cirrus.yml | 55 -
>  1 file changed, 55 deletions(-)
>

Reviewed-by: Willian Rampazzo 




Re: [PATCH 2/3] gitlab: support for FreeBSD 12, 13 and macOS 11 via cirrus-run

2021-06-30 Thread Daniel P . Berrangé
On Wed, Jun 30, 2021 at 03:58:57PM -0300, Wainer dos Santos Moschetta wrote:
> Hi,
> 
> On 6/25/21 2:22 PM, Daniel P. Berrangé wrote:
> > This adds support for running 4 jobs via Cirrus CI runners:
> > 
> >   * FreeBSD 12
> >   * FreeBSD 13
> >   * macOS 11 with default XCode
> >   * macOS 11 with latest XCode
> > 
> > The gitlab job uses a container published by the libvirt-ci
> > project (https://gitlab.com/libvirt/libvirt-ci) that contains
> > the 'cirrus-run' command. This accepts a short yaml file that
> > describes a single Cirrus CI job, runs it using the Cirrus CI
> > REST API, and reports any output to the console.
> > 
> > In this way Cirrus CI is effectively working as an indirect
> > custom runner for GitLab CI pipelines. The key benefit is that
> > Cirrus CI job results affect the GitLab CI pipeline result and
> > so the user only has look at one CI dashboard.
> > 
> > Signed-off-by: Daniel P. Berrangé 
> > ---
> >   .gitlab-ci.d/cirrus.yml | 103 
> >   .gitlab-ci.d/cirrus/README.rst  |  54 +++
> >   .gitlab-ci.d/cirrus/build.yml   |  35 ++
> >   .gitlab-ci.d/cirrus/freebsd-12.vars |  13 
> >   .gitlab-ci.d/cirrus/freebsd-13.vars |  13 
> >   .gitlab-ci.d/cirrus/macos-11.vars   |  15 
> >   .gitlab-ci.d/qemu-project.yml   |   1 +
> >   7 files changed, 234 insertions(+)
> >   create mode 100644 .gitlab-ci.d/cirrus.yml
> >   create mode 100644 .gitlab-ci.d/cirrus/README.rst
> >   create mode 100644 .gitlab-ci.d/cirrus/build.yml
> >   create mode 100644 .gitlab-ci.d/cirrus/freebsd-12.vars
> >   create mode 100644 .gitlab-ci.d/cirrus/freebsd-13.vars
> >   create mode 100644 .gitlab-ci.d/cirrus/macos-11.vars
> > 
> > diff --git a/.gitlab-ci.d/cirrus.yml b/.gitlab-ci.d/cirrus.yml
> > new file mode 100644
> > index 00..d7b4cce79b
> > --- /dev/null
> > +++ b/.gitlab-ci.d/cirrus.yml
> > @@ -0,0 +1,103 @@
> > +# Jobs that we delegate to Cirrus CI because they require an operating
> > +# system other than Linux. These jobs will only run if the required
> > +# setup has been performed on the GitLab account.
> > +#
> > +# The Cirrus CI configuration is generated by replacing target-specific
> > +# variables in a generic template: some of these variables are provided
> > +# when the GitLab CI job is defined, others are taken from a shell
> > +# snippet generated using lcitool.
> > +#
> > +# Note that the $PATH environment variable has to be treated with
> > +# special care, because we can't just override it at the GitLab CI job
> > +# definition level or we risk breaking it completely.
> > +.cirrus_build_job:
> > +  stage: build
> > +  image: registry.gitlab.com/libvirt/libvirt-ci/cirrus-run:master
> > +  needs: []
> > +  script:
> > +- source .gitlab-ci.d/cirrus/$NAME.vars
> > +- sed -e "s|[@]CI_REPOSITORY_URL@|$CI_REPOSITORY_URL|g"
> > +  -e "s|[@]CI_COMMIT_REF_NAME@|$CI_COMMIT_REF_NAME|g"
> > +  -e "s|[@]CI_COMMIT_SHA@|$CI_COMMIT_SHA|g"
> > +  -e "s|[@]CIRRUS_VM_INSTANCE_TYPE@|$CIRRUS_VM_INSTANCE_TYPE|g"
> > +  -e "s|[@]CIRRUS_VM_IMAGE_SELECTOR@|$CIRRUS_VM_IMAGE_SELECTOR|g"
> > +  -e "s|[@]CIRRUS_VM_IMAGE_NAME@|$CIRRUS_VM_IMAGE_NAME|g"
> > +  -e "s|[@]CIRRUS_VM_CPUS@|$CIRRUS_VM_CPUS|g"
> > +  -e "s|[@]CIRRUS_VM_RAM@|$CIRRUS_VM_RAM|g"
> > +  -e "s|[@]UPDATE_COMMAND@|$UPDATE_COMMAND|g"
> > +  -e "s|[@]INSTALL_COMMAND@|$INSTALL_COMMAND|g"
> > +  -e "s|[@]PATH@|$PATH_EXTRA${PATH_EXTRA:+:}\$PATH|g"
> > +  -e "s|[@]PKG_CONFIG_PATH@|$PKG_CONFIG_PATH|g"
> > +  -e "s|[@]PKGS@|$PKGS|g"
> > +  -e "s|[@]MAKE@|$MAKE|g"
> > +  -e "s|[@]PYTHON@|$PYTHON|g"
> > +  -e "s|[@]PIP3@|$PIP3|g"
> > +  -e "s|[@]PYPI_PKGS@|$PYPI_PKGS|g"
> > +  -e "s|[@]CONFIGURE_ARGS@|$CONFIGURE_ARGS|g"
> > +  -e "s|[@]TEST_TARGETSS@|$TEST_TARGETSS|g"
> > +  <.gitlab-ci.d/cirrus/build.yml >.gitlab-ci.d/cirrus/$NAME.yml
> > +- cat .gitlab-ci.d/cirrus/$NAME.yml
> > +- cirrus-run -v --show-build-log always .gitlab-ci.d/cirrus/$NAME.yml
> > +  rules:
> > +- if: "$TEMPORARILY_DISABLED"
> 
> Reading 'TEMPORARILY_DISABLED' I immediately think the job is malfunctioning
> or under maintenance.

Actually this is cruft that I mistakenly copied from libvirt's rules.

> But since the plan is to keep it running as 'non-gate' until it proves
> reliable, maybe you could rename the variable to 'NON_GATE' or
> 'STAGING_JOB' (i.e. some words that better express the intent).

We can just remove the 'if $TEMPORARILY_DISABLED' bit and
have only the 'allow_failure: true' bit


Regards,
Daniel
-- 
|: https://berrange.com          -o-  https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org           -o-          https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-  https://www.instagram.com/dberrange :|




Re: [PULL v2 0/4] Hexagon (target/hexagon) bug fixes

2021-06-30 Thread Peter Maydell
On Tue, 29 Jun 2021 at 18:14, Taylor Simpson  wrote:
>
> The following changes since commit 13d5f87cc3b94bfccc501142df4a7b12fee3a6e7:
>
>   Merge remote-tracking branch 'remotes/rth-gitlab/tags/pull-axp-20210628' 
> into staging (2021-06-29 10:02:42 +0100)
>
> are available in the git repository at:
>
>   https://github.com/quic/qemu tags/pull-hex-20210629
>
> for you to fetch changes up to fb858fb76b1b2dfdf64f82669df1270c0c19a033:
>
>   Hexagon (target/hexagon) remove unused TCG variables (2021-06-29 11:32:50 
> -0500)
>
> 
> Fixes for bugs found by inspection and internal testing
> Tests added to tests/tcg/hexagon/misc.c
>

Applied, thanks.

Please update the changelog at https://wiki.qemu.org/ChangeLog/6.1
for any user-visible changes.

-- PMM



[PATCH] migration: Move bitmap_mutex out of migration_bitmap_clear_dirty()

2021-06-30 Thread Peter Xu
Taking the mutex once per dirty bit to clear is too slow, especially since
we take/release it even when the dirty bit is already clear.  So far the lock
is only used to synchronize the special case of qemu_guest_free_page_hint()
against the migration thread, nothing really that serious yet.  Let's move
the lock up a level.

There're two callers of migration_bitmap_clear_dirty().

For migration, move it into ram_save_iterate().  With the help of the MAX_WAIT
logic, ram_save_iterate() runs for no more than roughly 50ms at a time, so we
take the lock once at the entry.  It also means any callers of
qemu_guest_free_page_hint() can be delayed; but that should be very rare, only
during migration, and I don't see a problem with it.

For COLO, move it up to colo_flush_ram_cache().  I think COLO forgot to take
that lock even when calling ramblock_sync_dirty_bitmap(); by contrast,
migration_bitmap_sync() takes it correctly.  So let the mutex cover both the
ramblock_sync_dirty_bitmap() and migration_bitmap_clear_dirty() calls.

It's even possible to drop the lock entirely by using atomic operations on
rb->bmap and the variable migration_dirty_pages.  I didn't do that, both to
stay safe and because it's unpredictable whether the frequent atomic ops
would bring overhead of their own, e.g. on huge VMs where this happens very
often.  When that really becomes a problem, we can keep a local counter and
periodically apply it with atomic ops.  Keep it simple for now.

Cc: Wei Wang 
Cc: David Hildenbrand 
Cc: Hailiang Zhang 
Cc: Dr. David Alan Gilbert 
Cc: Juan Quintela 
Cc: Leonardo Bras Soares Passos 
Signed-off-by: Peter Xu 
---
 migration/ram.c | 13 +++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 723af67c2e..9f2965675d 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -795,8 +795,6 @@ static inline bool migration_bitmap_clear_dirty(RAMState 
*rs,
 {
 bool ret;
 
-QEMU_LOCK_GUARD(&rs->bitmap_mutex);
-
 /*
  * Clear dirty bitmap if needed.  This _must_ be called before we
  * send any of the page in the chunk because we need to make sure
@@ -2834,6 +2832,14 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
 goto out;
 }
 
+/*
+ * We'll hold this lock a bit longer than usual, but it's okay for two
+ * reasons.  Firstly, the only other thread that could take it is the one
+ * calling qemu_guest_free_page_hint(), which should be rare; secondly, see
+ * MAX_WAIT (if curious, further see commit 4508bd9ed8053ce) below, which
+ * guarantees that we'll at least release it on a regular basis.
+ */
+qemu_mutex_lock(&rs->bitmap_mutex);
 WITH_RCU_READ_LOCK_GUARD() {
 if (ram_list.version != rs->last_version) {
 ram_state_reset(rs);
@@ -2893,6 +2899,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
 i++;
 }
 }
+qemu_mutex_unlock(&rs->bitmap_mutex);
 
 /*
  * Must occur before EOS (or any QEMUFile operation)
@@ -3682,6 +3689,7 @@ void colo_flush_ram_cache(void)
 unsigned long offset = 0;
 
 memory_global_dirty_log_sync();
+qemu_mutex_lock(&ram_state->bitmap_mutex);
 WITH_RCU_READ_LOCK_GUARD() {
 RAMBLOCK_FOREACH_NOT_IGNORED(block) {
 ramblock_sync_dirty_bitmap(ram_state, block);
@@ -3710,6 +3718,7 @@ void colo_flush_ram_cache(void)
 }
 }
 trace_colo_flush_ram_cache_end();
+qemu_mutex_unlock(&ram_state->bitmap_mutex);
 }
 
 /**
-- 
2.31.1




Re: [PATCH v5 02/10] ACPI ERST: specification for ERST support

2021-06-30 Thread Eric DeVolder
Oops, at the end of the 4th paragraph, I meant to state that "Linux does not 
support the NVRAM mode."
rather than "non-NVRAM mode", which contradicts everything I stated prior.
Eric.

From: Eric DeVolder 
Sent: Wednesday, June 30, 2021 2:07 PM
To: qemu-devel@nongnu.org 
Cc: m...@redhat.com ; imamm...@redhat.com 
; marcel.apfelb...@gmail.com ; 
pbonz...@redhat.com ; r...@twiddle.net ; 
ehabk...@redhat.com ; Konrad Wilk 
; Boris Ostrovsky 
Subject: [PATCH v5 02/10] ACPI ERST: specification for ERST support

Information on the implementation of the ACPI ERST support.

Signed-off-by: Eric DeVolder 
---
 docs/specs/acpi_erst.txt | 152 +++
 1 file changed, 152 insertions(+)
 create mode 100644 docs/specs/acpi_erst.txt

diff --git a/docs/specs/acpi_erst.txt b/docs/specs/acpi_erst.txt
new file mode 100644
index 000..79f8eb9
--- /dev/null
+++ b/docs/specs/acpi_erst.txt
@@ -0,0 +1,152 @@
+ACPI ERST DEVICE
+
+
+The ACPI ERST device is utilized to support the ACPI Error Record
+Serialization Table, ERST, functionality. The functionality is
+designed for storing error records in persistent storage for
+future reference/debugging.
+
+The ACPI specification[1], in Chapter "ACPI Platform Error Interfaces
+(APEI)", and specifically subsection "Error Serialization", outlines
+a method for storing error records into persistent storage.
+
+The format of error records is described in the UEFI specification[2],
+in Appendix N "Common Platform Error Record".
+
+While the ACPI specification allows for an NVRAM "mode" (see
+GET_ERROR_LOG_ADDRESS_RANGE_ATTRIBUTES) where non-volatile RAM is
+directly exposed for direct access by the OS/guest, this implements
+the non-NVRAM "mode". This non-NVRAM "mode" is what is implemented
+by most BIOS (since flash memory requires programming operations
+in order to update its contents). Furthermore, as of the time of this
+writing, Linux does not support the non-NVRAM "mode".
+
+
+Background/Motivation
+-
+Linux uses the persistent storage filesystem, pstore, to record
+information (eg. dmesg tail) upon panics and shutdowns.  Pstore is
+independent of, and runs before, kdump.  In certain scenarios (ie.
+hosts/guests with root filesystems on NFS/iSCSI where networking
+software and/or hardware fails), pstore may contain the only
+information available for post-mortem debugging.
+
+Two common storage backends for the pstore filesystem are ACPI ERST
+and UEFI. Most BIOS implement ACPI ERST.  UEFI is not utilized in
+all guests. With QEMU supporting ACPI ERST, it becomes a viable
+pstore storage backend for virtual machines (as it is now for
+bare metal machines).
+
+Enabling support for ACPI ERST facilitates a consistent method to
+capture kernel panic information in a wide range of guests: from
+resource-constrained microvms to very large guests, and in
+particular, in direct-boot environments (which would lack UEFI
+run-time services).
+
+Note that Microsoft Windows also utilizes the ACPI ERST for certain
+crash information, if available.
+
+
+Invocation
+--
+
+To utilize ACPI ERST, a memory-backend-file object and acpi-erst
+device must be created, for example:
+
+ qemu ...
+ -object memory-backend-file,id=erstnvram,mem-path=acpi-erst.backing,
+  size=0x1,share=on
+ -device acpi-erst,memdev=erstnvram
+
+For proper operation, the ACPI ERST device needs a memory-backend-file
+object with the following parameters:
+
+ - id: The id of the memory-backend-file object is used to associate
+   this memory with the acpi-erst device.
+ - size: The size of the ACPI ERST backing storage. This parameter is
+   required.
+ - mem-path: The location of the ACPI ERST backing storage file. This
+   parameter is also required.
+ - share: The share=on parameter is required so that updates to the
+   ERST back store are written to the file immediately as well. Without
+   it, updates to the backing file are unpredictable and may not
+   properly persist (eg. if qemu should crash).
+
+The ACPI ERST device is a simple PCI device, and requires this one
+parameter:
+
+ - memdev: Is the object id of the memory-backend-file.
+
+
+PCI Interface
+-
+
+The ERST device is a PCI device with two BARs, one for accessing
+the programming registers, and the other for accessing the
+record exchange buffer.
+
+BAR0 contains the programming interface consisting of just two
+64-bit registers. The two registers are an ACTION (cmd) and a
+VALUE (data). All ERST actions/operations/side effects happen
+on the write to the ACTION, by design. Thus any data needed
+by the action must be placed into VALUE prior to writing
+ACTION. Reading the VALUE simply returns the register contents,
+which can be updated by a previous ACTION. This behavior is
+encoded in the ACPI ERST table generated by QEMU.
+
+BAR1 contains the record exchange buffer, and the size of this
+buffer sets the maximum record size. This record exchange
+buffer size is 8KiB.

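From the guest's side, the ACTION/VALUE protocol described above reduces to "stage data in VALUE, write ACTION, read VALUE back". A minimal sketch follows; the regs[] array and the mmio_* helpers are hypothetical stand-ins (no real MMIO or device side effects are modelled):

```c
#include <stdint.h>

/*
 * Sketch of the BAR0 ACTION/VALUE handshake described in the spec text:
 * two consecutive 64-bit registers, with all side effects triggered by
 * the write to ACTION.  regs[] and mmio_* are illustrative stand-ins.
 */
enum { ERST_CSR_ACTION = 0x00, ERST_CSR_VALUE = 0x08 };

static uint64_t regs[2]; /* stand-in for the device's BAR0 registers */

static void mmio_write64(uint64_t off, uint64_t val) { regs[off / 8] = val; }
static uint64_t mmio_read64(uint64_t off) { return regs[off / 8]; }

/* Issue one ERST action: stage the data, trigger, read the result. */
static uint64_t erst_do_action(uint64_t action, uint64_t value)
{
    mmio_write64(ERST_CSR_VALUE, value);   /* data first ...            */
    mmio_write64(ERST_CSR_ACTION, action); /* ... side effects here     */
    return mmio_read64(ERST_CSR_VALUE);    /* result is left in VALUE   */
}
```

The ordering matters: because the device acts on the ACTION write, any argument must already be in VALUE, which is exactly what the generated ACPI ERST table encodes for the OS.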
Re: [PATCH] target/i386: Fix cpuid level for AMD

2021-06-30 Thread Michael Roth
Quoting Dr. David Alan Gilbert (2021-06-29 09:06:02)
> * zhenwei pi (pizhen...@bytedance.com) wrote:
> > An AMD server typically has cpuid level 0x10 (tested on Rome/Milan); it
> > should not be changed to 0x1f in the multi-die case.
> > 
> > Fixes: a94e1428991 (target/i386: Add CPUID.1F generation support
> > for multi-dies PCMachine)
> > Signed-off-by: zhenwei pi 
> 
> (Copying in Babu)
> 
> Hmm I think you're right.  I've cc'd in Babu and Wei.
> 
> Eduardo: What do we need to do about compatibility, do we need to wire
> this to machine type or CPU version?

FWIW, there are some other CPUID entries like leaves 2 and 4 that are
also Intel-specific. With SEV-SNP CPUID enforcement, advertising them to
guests will result in failures when host SNP firmware checks the
hypervisor-provided CPUID values against the host-supported ones.

To address this we've been planning to add an 'amd-cpuid-only' property
to suppress them:

  https://github.com/mdroth/qemu/commit/28d0553fe748d30a8af09e5e58a7da3eff03e21b

My thinking is this property should be off by default, and only defined
either via explicit command-line option, or via new CPU types. We're also
planning to add new CPU versions for EPYC* CPU types that set this
'amd-cpuid-only' property by default:

  https://github.com/mdroth/qemu/commits/new-cpu-types-upstream

So in general I think maybe this change should be similarly controlled by
this proposed 'amd-cpuid-only' property. Maybe for this particular case it's
okay to do it unconditionally, but it sounds bad to switch up the valid CPUID
range after a guest has already booted (which might happen with old->new
migration for instance), since it might continue treating values in the range
as valid afterward (but again, not sure that's the case here or not).

There's some other changes with the new CPU types that we're still
considering/testing internally, but should be able to post them in some form
next week.

-Mike

> 
> Dave
> 
> > ---
> >  target/i386/cpu.c | 8 ++--
> >  1 file changed, 6 insertions(+), 2 deletions(-)
> > 
> > diff --git a/target/i386/cpu.c b/target/i386/cpu.c
> > index a9fe1662d3..3934c559e4 100644
> > --- a/target/i386/cpu.c
> > +++ b/target/i386/cpu.c
> > @@ -5961,8 +5961,12 @@ void x86_cpu_expand_features(X86CPU *cpu, Error 
> > **errp)
> >  }
> >  }
> >  
> > -/* CPU topology with multi-dies support requires CPUID[0x1F] */
> > -if (env->nr_dies > 1) {
> > +/*
> > + * Intel CPU topology with multi-dies support requires CPUID[0x1F].
> > + * For AMD Rome/Milan, cpuid level is 0x10, and guest OS should 
> > detect
> > + * extended toplogy by leaf 0xB. Only adjust it for Intel CPU.
> > + */
> > +if ((env->nr_dies > 1) && IS_INTEL_CPU(env)) {
> >  x86_cpu_adjust_level(cpu, &env->cpuid_min_level, 0x1F);
> >  }
> >  
> > -- 
> > 2.25.1
> > 
> > 
> -- 
> Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK
> 
>



[PATCH v5 08/10] ACPI ERST: create ACPI ERST table for pc/x86 machines.

2021-06-30 Thread Eric DeVolder
This change exposes ACPI ERST support for x86 guests.

Signed-off-by: Eric DeVolder 
---
 hw/i386/acpi-build.c   | 9 +
 hw/i386/acpi-microvm.c | 9 +
 2 files changed, 18 insertions(+)

diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index de98750..d2026cc 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -43,6 +43,7 @@
 #include "sysemu/tpm.h"
 #include "hw/acpi/tpm.h"
 #include "hw/acpi/vmgenid.h"
+#include "hw/acpi/erst.h"
 #include "hw/boards.h"
 #include "sysemu/tpm_backend.h"
 #include "hw/rtc/mc146818rtc_regs.h"
@@ -2327,6 +2328,7 @@ void acpi_build(AcpiBuildTables *tables, MachineState 
*machine)
 GArray *tables_blob = tables->table_data;
 AcpiSlicOem slic_oem = { .id = NULL, .table_id = NULL };
 Object *vmgenid_dev;
+Object *erst_dev;
 char *oem_id;
 char *oem_table_id;
 
@@ -2388,6 +2390,13 @@ void acpi_build(AcpiBuildTables *tables, MachineState 
*machine)
 ACPI_DEVICE_IF(x86ms->acpi_dev), x86ms->oem_id,
 x86ms->oem_table_id);
 
+erst_dev = find_erst_dev();
+if (erst_dev) {
+acpi_add_table(table_offsets, tables_blob);
+build_erst(tables_blob, tables->linker, erst_dev,
+   x86ms->oem_id, x86ms->oem_table_id);
+}
+
 vmgenid_dev = find_vmgenid_dev();
 if (vmgenid_dev) {
 acpi_add_table(table_offsets, tables_blob);
diff --git a/hw/i386/acpi-microvm.c b/hw/i386/acpi-microvm.c
index ccd3303..0099b13 100644
--- a/hw/i386/acpi-microvm.c
+++ b/hw/i386/acpi-microvm.c
@@ -30,6 +30,7 @@
 #include "hw/acpi/bios-linker-loader.h"
 #include "hw/acpi/generic_event_device.h"
 #include "hw/acpi/utils.h"
+#include "hw/acpi/erst.h"
 #include "hw/boards.h"
 #include "hw/i386/fw_cfg.h"
 #include "hw/i386/microvm.h"
@@ -160,6 +161,7 @@ static void acpi_build_microvm(AcpiBuildTables *tables,
 X86MachineState *x86ms = X86_MACHINE(mms);
 GArray *table_offsets;
 GArray *tables_blob = tables->table_data;
+Object *erst_dev;
 unsigned dsdt, xsdt;
 AcpiFadtData pmfadt = {
 /* ACPI 5.0: 4.1 Hardware-Reduced ACPI */
@@ -209,6 +211,13 @@ static void acpi_build_microvm(AcpiBuildTables *tables,
 ACPI_DEVICE_IF(x86ms->acpi_dev), x86ms->oem_id,
 x86ms->oem_table_id);
 
+erst_dev = find_erst_dev();
+if (erst_dev) {
+acpi_add_table(table_offsets, tables_blob);
+build_erst(tables_blob, tables->linker, erst_dev,
+   x86ms->oem_id, x86ms->oem_table_id);
+}
+
 xsdt = tables_blob->len;
 build_xsdt(tables_blob, tables->linker, table_offsets, x86ms->oem_id,
x86ms->oem_table_id);
-- 
1.8.3.1




[PATCH v5 07/10] ACPI ERST: trace support

2021-06-30 Thread Eric DeVolder
Provide the definitions needed to support tracing in ACPI ERST.

Signed-off-by: Eric DeVolder 
---
 hw/acpi/trace-events | 14 ++
 1 file changed, 14 insertions(+)

diff --git a/hw/acpi/trace-events b/hw/acpi/trace-events
index dcc1438..a5c2755 100644
--- a/hw/acpi/trace-events
+++ b/hw/acpi/trace-events
@@ -55,3 +55,17 @@ piix4_gpe_writeb(uint64_t addr, unsigned width, uint64_t 
val) "addr: 0x%" PRIx64
 # tco.c
 tco_timer_reload(int ticks, int msec) "ticks=%d (%d ms)"
 tco_timer_expired(int timeouts_no, bool strap, bool no_reboot) "timeouts_no=%d 
no_reboot=%d/%d"
+
+# erst.c
+acpi_erst_reg_write(uint64_t addr, uint64_t val, unsigned size) "addr: 0x%04" 
PRIx64 " <== 0x%016" PRIx64 " (size: %u)"
+acpi_erst_reg_read(uint64_t addr, uint64_t val, unsigned size) " addr: 0x%04" 
PRIx64 " ==> 0x%016" PRIx64 " (size: %u)"
+acpi_erst_mem_write(uint64_t addr, uint64_t val, unsigned size) "addr: 0x%06" 
PRIx64 " <== 0x%016" PRIx64 " (size: %u)"
+acpi_erst_mem_read(uint64_t addr, uint64_t val, unsigned size) " addr: 0x%06" 
PRIx64 " ==> 0x%016" PRIx64 " (size: %u)"
+acpi_erst_pci_bar_0(uint64_t addr) "BAR0: 0x%016" PRIx64
+acpi_erst_pci_bar_1(uint64_t addr) "BAR1: 0x%016" PRIx64
+acpi_erst_realizefn_in(void)
+acpi_erst_realizefn_out(unsigned size) "total nvram size %u bytes"
+acpi_erst_reset_in(unsigned record_count) "record_count %u"
+acpi_erst_reset_out(unsigned record_count) "record_count %u"
+acpi_erst_class_init_in(void)
+acpi_erst_class_init_out(void)
-- 
1.8.3.1




[PATCH v5 05/10] ACPI ERST: support for ACPI ERST feature

2021-06-30 Thread Eric DeVolder
This change implements support for the ACPI ERST feature.

This implements a PCI device for ACPI ERST. This implements the
non-NVRAM "mode" of operation for ERST.

This change also includes erst.c in the build of general ACPI support.

Signed-off-by: Eric DeVolder 
---
 hw/acpi/erst.c  | 704 
 hw/acpi/meson.build |   1 +
 2 files changed, 705 insertions(+)
 create mode 100644 hw/acpi/erst.c

diff --git a/hw/acpi/erst.c b/hw/acpi/erst.c
new file mode 100644
index 000..6e9bd2e
--- /dev/null
+++ b/hw/acpi/erst.c
@@ -0,0 +1,704 @@
+/*
+ * ACPI Error Record Serialization Table, ERST, Implementation
+ *
+ * Copyright (c) 2021 Oracle and/or its affiliates.
+ *
+ * ACPI ERST introduced in ACPI 4.0, June 16, 2009.
+ * ACPI Platform Error Interfaces : Error Serialization
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation;
+ * version 2 of the License.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see 
+ */
+
+#include 
+#include 
+#include 
+
+#include "qemu/osdep.h"
+#include "qapi/error.h"
+#include "hw/qdev-core.h"
+#include "exec/memory.h"
+#include "qom/object.h"
+#include "hw/pci/pci.h"
+#include "qom/object_interfaces.h"
+#include "qemu/error-report.h"
+#include "migration/vmstate.h"
+#include "hw/qdev-properties.h"
+#include "hw/acpi/acpi.h"
+#include "hw/acpi/acpi-defs.h"
+#include "hw/acpi/aml-build.h"
+#include "hw/acpi/bios-linker-loader.h"
+#include "exec/address-spaces.h"
+#include "sysemu/hostmem.h"
+#include "hw/acpi/erst.h"
+#include "trace.h"
+
+/* UEFI 2.1: Append N Common Platform Error Record */
+#define UEFI_CPER_RECORD_MIN_SIZE 128U
+#define UEFI_CPER_RECORD_LENGTH_OFFSET 20U
+#define UEFI_CPER_RECORD_ID_OFFSET 96U
+#define IS_UEFI_CPER_RECORD(ptr) \
+(((ptr)[0] == 'C') && \
+ ((ptr)[1] == 'P') && \
+ ((ptr)[2] == 'E') && \
+ ((ptr)[3] == 'R'))
+#define THE_UEFI_CPER_RECORD_ID(ptr) \
+(*(uint64_t *)(&(ptr)[UEFI_CPER_RECORD_ID_OFFSET]))
+
+/*
+ * This implementation is an ACTION (cmd) and VALUE (data)
+ * interface consisting of just two 64-bit registers.
+ */
+#define ERST_REG_SIZE (2UL * sizeof(uint64_t))
+#define ERST_CSR_ACTION (0UL << 3) /* action (cmd) */
+#define ERST_CSR_VALUE  (1UL << 3) /* argument/value (data) */
+
+/*
+ * ERST_RECORD_SIZE is the buffer size for exchanging ERST
+ * record contents. Thus, it defines the maximum record size.
+ * As this is mapped through a PCI BAR, it must be a power of
+ * two, and should be at least PAGE_SIZE.
+ * Records are stored in the backing file in a simple fashion.
+ * The backing file is essentially divided into fixed size
+ * "slots", ERST_RECORD_SIZE in length, with each "slot"
+ * storing a single record. No attempt at optimizing storage
+ * through compression, compaction, etc is attempted.
+ * NOTE that any change to this value will make any pre-
+ * existing backing files, not of the same ERST_RECORD_SIZE,
+ * unusable to the guest.
+ */
+/* 8KiB records, not too small, not too big */
+#define ERST_RECORD_SIZE (2UL * 4096)
+
+#define ERST_INVALID_RECORD_ID (~0UL)
+#define ERST_EXECUTE_OPERATION_MAGIC 0x9CUL
+
+/*
+ * Object cast macro
+ */
+#define ACPIERST(obj) \
+OBJECT_CHECK(ERSTDeviceState, (obj), TYPE_ACPI_ERST)
+
+/*
+ * Main ERST device state structure
+ */
+typedef struct {
+PCIDevice parent_obj;
+
+HostMemoryBackend *hostmem;
+MemoryRegion *hostmem_mr;
+
+MemoryRegion iomem; /* programming registers */
+MemoryRegion nvmem; /* exchange buffer */
+uint32_t prop_size;
+hwaddr bar0; /* programming registers */
+hwaddr bar1; /* exchange buffer */
+
+uint8_t operation;
+uint8_t busy_status;
+uint8_t command_status;
+uint32_t record_offset;
+uint32_t record_count;
+uint64_t reg_action;
+uint64_t reg_value;
+uint64_t record_identifier;
+
+unsigned next_record_index;
+uint8_t record[ERST_RECORD_SIZE]; /* read/written directly by guest */
+uint8_t tmp_record[ERST_RECORD_SIZE]; /* intermediate manipulation buffer */
+
+} ERSTDeviceState;
+
+/***/
+/***/
+
+static unsigned copy_from_nvram_by_index(ERSTDeviceState *s, unsigned index)
+{
+/* Read an nvram entry into tmp_record */
+unsigned rc = ACPI_ERST_STATUS_FAILED;
+off_t offset = (index * ERST_RECORD_SIZE);
+
+if ((offset + ERST_RECORD_SIZE) <= s->prop_size) {
+if (s->hostmem_mr) {
+ 

[PATCH v5 03/10] ACPI ERST: PCI device_id for ERST

2021-06-30 Thread Eric DeVolder
This change declares the PCI device_id for the new ACPI ERST
device.

Signed-off-by: Eric DeVolder 
---
 include/hw/pci/pci.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h
index 6be4e0c..eef3ef4 100644
--- a/include/hw/pci/pci.h
+++ b/include/hw/pci/pci.h
@@ -108,6 +108,7 @@ extern bool pci_available;
 #define PCI_DEVICE_ID_REDHAT_MDPY0x000f
 #define PCI_DEVICE_ID_REDHAT_NVME0x0010
 #define PCI_DEVICE_ID_REDHAT_PVPANIC 0x0011
+#define PCI_DEVICE_ID_REDHAT_ACPI_ERST   0x0012
 #define PCI_DEVICE_ID_REDHAT_QXL 0x0100
 
 #define FMT_PCIBUS  PRIx64
-- 
1.8.3.1




[PATCH v5 02/10] ACPI ERST: specification for ERST support

2021-06-30 Thread Eric DeVolder
Information on the implementation of the ACPI ERST support.

Signed-off-by: Eric DeVolder 
---
 docs/specs/acpi_erst.txt | 152 +++
 1 file changed, 152 insertions(+)
 create mode 100644 docs/specs/acpi_erst.txt

diff --git a/docs/specs/acpi_erst.txt b/docs/specs/acpi_erst.txt
new file mode 100644
index 000..79f8eb9
--- /dev/null
+++ b/docs/specs/acpi_erst.txt
@@ -0,0 +1,152 @@
+ACPI ERST DEVICE
+
+
+The ACPI ERST device is utilized to support the ACPI Error Record
+Serialization Table, ERST, functionality. The functionality is
+designed for storing error records in persistent storage for
+future reference/debugging.
+
+The ACPI specification[1], in Chapter "ACPI Platform Error Interfaces
+(APEI)", and specifically subsection "Error Serialization", outlines
+a method for storing error records into persistent storage.
+
+The format of error records is described in the UEFI specification[2],
+in Appendix N "Common Platform Error Record".
+
+While the ACPI specification allows for an NVRAM "mode" (see
+GET_ERROR_LOG_ADDRESS_RANGE_ATTRIBUTES) where non-volatile RAM is
+directly exposed for direct access by the OS/guest, this implements
+the non-NVRAM "mode". This non-NVRAM "mode" is what is implemented
+by most BIOS (since flash memory requires programming operations
+in order to update its contents). Furthermore, as of the time of this
+writing, Linux does not support the non-NVRAM "mode".
+
+
+Background/Motivation
+-
+Linux uses the persistent storage filesystem, pstore, to record
+information (eg. dmesg tail) upon panics and shutdowns.  Pstore is
+independent of, and runs before, kdump.  In certain scenarios (ie.
+hosts/guests with root filesystems on NFS/iSCSI where networking
+software and/or hardware fails), pstore may contain the only
+information available for post-mortem debugging.
+
+Two common storage backends for the pstore filesystem are ACPI ERST
+and UEFI. Most BIOS implement ACPI ERST.  UEFI is not utilized in
+all guests. With QEMU supporting ACPI ERST, it becomes a viable
+pstore storage backend for virtual machines (as it is now for
+bare metal machines).
+
+Enabling support for ACPI ERST facilitates a consistent method to
+capture kernel panic information in a wide range of guests: from
+resource-constrained microvms to very large guests, and in
+particular, in direct-boot environments (which would lack UEFI
+run-time services).
+
+Note that Microsoft Windows also utilizes the ACPI ERST for certain
+crash information, if available.
+
+
+Invocation
+--
+
+To utilize ACPI ERST, a memory-backend-file object and acpi-erst
+device must be created, for example:
+
+ qemu ...
+ -object memory-backend-file,id=erstnvram,mem-path=acpi-erst.backing,
+  size=0x1,share=on
+ -device acpi-erst,memdev=erstnvram
+
+For proper operation, the ACPI ERST device needs a memory-backend-file
+object with the following parameters:
+
+ - id: The id of the memory-backend-file object is used to associate
+   this memory with the acpi-erst device.
+ - size: The size of the ACPI ERST backing storage. This parameter is
+   required.
+ - mem-path: The location of the ACPI ERST backing storage file. This
+   parameter is also required.
+ - share: The share=on parameter is required so that updates to the
+   ERST back store are written to the file immediately as well. Without
+   it, updates to the backing file are unpredictable and may not
+   properly persist (eg. if qemu should crash).
+
+The ACPI ERST device is a simple PCI device, and requires this one
+parameter:
+
+ - memdev: Is the object id of the memory-backend-file.
+
+
+PCI Interface
+-
+
+The ERST device is a PCI device with two BARs, one for accessing
+the programming registers, and the other for accessing the
+record exchange buffer.
+
+BAR0 contains the programming interface consisting of just two
+64-bit registers. The two registers are an ACTION (cmd) and a
+VALUE (data). All ERST actions/operations/side effects happen
+on the write to the ACTION, by design. Thus any data needed
+by the action must be placed into VALUE prior to writing
+ACTION. Reading the VALUE simply returns the register contents,
+which can be updated by a previous ACTION. This behavior is
+encoded in the ACPI ERST table generated by QEMU.
+
+BAR1 contains the record exchange buffer, and the size of this
+buffer sets the maximum record size. This record exchange
+buffer size is 8KiB.
+
+Backing File
+
+
+The ACPI ERST persistent storage is contained within a single backing
+file. The size and location of the backing file is specified upon
+QEMU startup of the ACPI ERST device.
+
+Records are stored in the backing file in a simple fashion.
+The backing file is essentially divided into fixed size
+"slots", ERST_RECORD_SIZE in length, with each "slot"
+storing a single record. No attempt at optimizing storage
+through compression, compaction, etc is attempted.
+NOTE that any change to this value will make any pre-existing
+backing files, not of the same ERST_RECORD_SIZE, unusable to the
+guest.
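The slot layout described above makes the record-to-offset mapping simple arithmetic plus a bounds check. A sketch follows; slot_offset and backing_size are assumed names for illustration, mirroring but not copied from the implementation:

```c
#include <stddef.h>

/*
 * Sketch of the fixed-size slot layout described above: slot N of the
 * backing file starts at byte offset N * ERST_RECORD_SIZE.  backing_size
 * stands in for the size of the memory-backend-file object.
 */
#define ERST_RECORD_SIZE (2UL * 4096) /* 8KiB, matching the exchange buffer */

static long slot_offset(unsigned index, size_t backing_size)
{
    size_t off = (size_t)index * ERST_RECORD_SIZE;

    if (off + ERST_RECORD_SIZE > backing_size) {
        return -1; /* slot would lie beyond the backing store */
    }
    return (long)off;
}
```

With a 0x10000-byte (64KiB) backing file as in the invocation example, this yields exactly eight usable slots (indices 0 through 7).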

[PATCH v5 00/10] acpi: Error Record Serialization Table, ERST, support for QEMU

2021-06-30 Thread Eric DeVolder
=
I believe I have addressed all feedback on v4; responses to
certain feedback follow below.

In patch 1/6, Igor asks:
"you are adding empty template files here
but the later matching bios-tables-test is nowhere to be found
Was testcase lost somewhere along the way?

also it seems you add ERST only to pc/q35,
so why tests/data/acpi/microvm/ERST is here?"

I did miss setting up microvm. That has been corrected.

As for the question about lost test cases, if you are referring
to the new binary blobs for pc,q35, those were in patch
6/6. There is a qtest in patch 5/6. If I don't understand the
question, please indicate as such.


In patch 3/6, Igor asks:
"Also spec (ERST) is rather (maybe intentionally) vague on specifics,
so it would be better that before a patch that implements hw part
were a doc patch describing concrete implementation. As model
you can use docs/specs/acpi_hest_ghes.rst or other docs/specs/acpi_* files.
I'd start posting/discussing that spec within these thread
to avoid spamming list until doc is settled up."

I'm thinking that this cover letter is the bulk of the spec? But as
you say, to avoid spamming the group, we can use this thread to make
suggested changes to this cover letter which I will then convert
into a spec, for v6.


In patch 3/6, in many places Igor mentions utilizing the hostmem
mapped directly in the guest in order to avoid need-less copying.

It is true that the ERST has an "NVRAM" mode that would allow for
all the simplifications Igor points out, however, Linux does not
support this mode. This mode puts the burden of managing the NVRAM
space on the OS. So this implementation, like BIOS, is the non-NVRAM
mode.

I did go ahead and separate the registers from the exchange buffer,
which would facilitate the support of NVRAM mode.

 linux/drivers/acpi/apei/erst.c:
 /* NVRAM ERST Error Log Address Range is not supported yet */
 static void pr_unimpl_nvram(void)
 {
if (printk_ratelimit())
pr_warn("NVRAM ERST Log Address Range not implemented yet.\n");
 }

 static int __erst_write_to_nvram(const struct cper_record_header *record)
 {
/* do not print message, because printk is not safe for NMI */
return -ENOSYS;
 }

 static int __erst_read_to_erange_from_nvram(u64 record_id, u64 *offset)
 {
pr_unimpl_nvram();
return -ENOSYS;
 }

 static int __erst_clear_from_nvram(u64 record_id)
 {
pr_unimpl_nvram();
return -ENOSYS;
 }

=

This patchset introduces support for the ACPI Error Record
Serialization Table, ERST.

For background and implementation information, please see
docs/specs/acpi_erst.txt, which is patch 2/10.

Suggested-by: Konrad Wilk 
Signed-off-by: Eric DeVolder 

---
v5: 30jun2021
 - Create docs/specs/acpi_erst.txt, per Igor
 - Separate PCI BARs for registers and memory, per Igor
 - Convert debugging to use trace infrastructure, per Igor
 - Various other fixups, per Igor

v4: 11jun2021
 - Converted to a PCI device, per Igor.
 - Updated qtest.
 - Rearranged patches, per Igor.

v3: 28may2021
 - Converted to using a TYPE_MEMORY_BACKEND_FILE object rather than
   internal array with explicit file operations, per Igor.
 - Changed the way the qdev and base address are handled, allowing
   ERST to be disabled at run-time. Also aligns better with other
   existing code.

v2: 8feb2021
 - Added qtest/smoke test per Paolo Bonzini
 - Split patch into smaller chunks, per Igor Mammedov
 - Did away with use of ACPI packed structures, per Igor Mammedov

v1: 26oct2020
 - initial post

---

Eric DeVolder (10):
  ACPI ERST: bios-tables-test.c steps 1 and 2
  ACPI ERST: specification for ERST support
  ACPI ERST: PCI device_id for ERST
  ACPI ERST: header file for ERST
  ACPI ERST: support for ACPI ERST feature
  ACPI ERST: build the ACPI ERST table
  ACPI ERST: trace support
  ACPI ERST: create ACPI ERST table for pc/x86 machines.
  ACPI ERST: qtest for ERST
  ACPI ERST: step 6 of bios-tables-test.c

 docs/specs/acpi_erst.txt | 152 +++
 hw/acpi/erst.c   | 918 +++
 hw/acpi/meson.build  |   1 +
 hw/acpi/trace-events |  14 +
 hw/i386/acpi-build.c |   9 +
 hw/i386/acpi-microvm.c   |   9 +
 include/hw/acpi/erst.h   |  84 
 include/hw/pci/pci.h |   1 +
 tests/data/acpi/microvm/ERST | Bin 0 -> 976 bytes
 tests/data/acpi/pc/ERST  | Bin 0 -> 976 bytes
 tests/data/acpi/q35/ERST | Bin 0 -> 976 bytes
 tests/qtest/erst-test.c  | 129 ++
 tests/qtest/meson.build  |   2 +
 13 files changed, 1319 insertions(+)
 create mode 100644 docs/specs/acpi_erst.txt
 create mode 100644 hw/acpi/erst.c
 create mode 100644 include/hw/acpi/erst.h
 create mode 100644 tests/data/acpi/microvm/ERST
 create mode 100644 tests/data/acpi/pc/ERST
 create mode 100644 tests/data/acpi/q35/ERST
 create mode 100644 tests/qtest/erst-test.c

-- 
1.8.3.1




[PATCH v5 10/10] ACPI ERST: step 6 of bios-tables-test.c

2021-06-30 Thread Eric DeVolder
Following the guidelines in tests/qtest/bios-tables-test.c, this
is step 6, the re-generated ACPI tables binary blobs.

Signed-off-by: Eric DeVolder 
---
 tests/data/acpi/microvm/ERST| Bin 0 -> 976 bytes
 tests/data/acpi/pc/ERST | Bin 0 -> 976 bytes
 tests/data/acpi/q35/ERST| Bin 0 -> 976 bytes
 tests/qtest/bios-tables-test-allowed-diff.h |   4 
 4 files changed, 4 deletions(-)

diff --git a/tests/data/acpi/microvm/ERST b/tests/data/acpi/microvm/ERST
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..db2adaa8d9b45e295f9976d6bb5a07a813214f52 100644
GIT binary patch
literal 976
zcmaKqTMmLS5Jd+l50TdfOjv?(1qNf{pGN#}aW2XoVQ=kJawAMa;r8^<4tlN$k#r!KbMbUbUyg_
zK(;pcerufGzoGM$#7pML|ITms2HKLp#iT7ge?`3d;=pU-HFM;Z{u=Td@?Bo=v9u+>
bCEw+h{yXwJ@?BooAHQFxe`xQi@1uMGuJKX<

literal 0
HcmV?d1

diff --git a/tests/data/acpi/pc/ERST b/tests/data/acpi/pc/ERST
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..7236018951f9d111d8cacaa93ee07a8dc3294f18 100644
GIT binary patch
literal 976
zcmaKqSq_3Q6h#Y^dE9^rOK=GWV{BUvZ#VzQD#NN_})srw9ZqJfYH@yl5T!0*@ExA}PsmTmxA~y7^f
z`=C;Mzb#Jlr!;>?JY$ah@BPi%Ksot29-5NhSucQ
c)srw9ZqJfYH@yl5T!0*@ExA}PsmTmxA~y7^f
z`=C;Mzb#Jlr!;>?JY$ah@BPi%Ksot29-5NhSucQ
c

[PATCH v5 09/10] ACPI ERST: qtest for ERST

2021-06-30 Thread Eric DeVolder
This change provides a qtest that locates and then does a simple
interrogation of the ERST feature within the guest.

Signed-off-by: Eric DeVolder 
---
 tests/qtest/erst-test.c | 129 
 tests/qtest/meson.build |   2 +
 2 files changed, 131 insertions(+)
 create mode 100644 tests/qtest/erst-test.c

diff --git a/tests/qtest/erst-test.c b/tests/qtest/erst-test.c
new file mode 100644
index 000..ce014c1
--- /dev/null
+++ b/tests/qtest/erst-test.c
@@ -0,0 +1,129 @@
+/*
+ * QTest testcase for ACPI ERST
+ *
+ * Copyright (c) 2021 Oracle
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/bitmap.h"
+#include "qemu/uuid.h"
+#include "hw/acpi/acpi-defs.h"
+#include "boot-sector.h"
+#include "acpi-utils.h"
+#include "libqos/libqtest.h"
+#include "qapi/qmp/qdict.h"
+
+#define RSDP_ADDR_INVALID 0x10 /* RSDP must be below this address */
+
+static uint64_t acpi_find_erst(QTestState *qts)
+{
+uint32_t rsdp_offset;
+uint8_t rsdp_table[36 /* ACPI 2.0+ RSDP size */];
+uint32_t rsdt_len, table_length;
+uint8_t *rsdt, *ent;
+uint64_t base = 0;
+
+/* Wait for guest firmware to finish and start the payload. */
+boot_sector_test(qts);
+
+/* Tables should be initialized now. */
+rsdp_offset = acpi_find_rsdp_address(qts);
+
+g_assert_cmphex(rsdp_offset, <, RSDP_ADDR_INVALID);
+
+acpi_fetch_rsdp_table(qts, rsdp_offset, rsdp_table);
+acpi_fetch_table(qts, &rsdt, &rsdt_len, &rsdp_table[16 /* RsdtAddress */],
+ 4, "RSDT", true);
+
+ACPI_FOREACH_RSDT_ENTRY(rsdt, rsdt_len, ent, 4 /* Entry size */) {
+uint8_t *table_aml;
+acpi_fetch_table(qts, &table_aml, &table_length, ent, 4, NULL, true);
+if (!memcmp(table_aml + 0 /* Header Signature */, "ERST", 4)) {
+/*
+ * Picking up ERST base address from the Register Region
+ * specified as part of the first Serialization Instruction
+ * Action (which is a Begin Write Operation).
+ */
+memcpy(&base, &table_aml[56], sizeof(base));
+g_free(table_aml);
+break;
+}
+g_free(table_aml);
+}
+g_free(rsdt);
+return base;
+}
+
+static char disk[] = "tests/erst-test-disk-XX";
+
+#define ERST_CMD()  \
+"-accel kvm -accel tcg "\
+"-object memory-backend-file," \
+  "id=erstnvram,mem-path=tests/acpi-erst-XX,size=0x1,share=on " \
+"-device acpi-erst,memdev=erstnvram " \
+"-drive id=hd0,if=none,file=%s,format=raw " \
+"-device ide-hd,drive=hd0 ", disk
+
+static void erst_get_error_log_address_range(void)
+{
+QTestState *qts;
+uint64_t log_address_range = 0;
+unsigned log_address_length = 0;
+unsigned log_address_attr = 0;
+
+qts = qtest_initf(ERST_CMD());
+
+uint64_t base = acpi_find_erst(qts);
+g_assert(base != 0);
+
+/* Issue GET_ERROR_LOG_ADDRESS_RANGE command */
+qtest_writel(qts, base + 0, 0xD);
+/* Read GET_ERROR_LOG_ADDRESS_RANGE result */
+log_address_range = qtest_readq(qts, base + 8);
+
+/* Issue GET_ERROR_LOG_ADDRESS_RANGE_LENGTH command */
+qtest_writel(qts, base + 0, 0xE);
+/* Read GET_ERROR_LOG_ADDRESS_RANGE_LENGTH result */
+log_address_length = qtest_readq(qts, base + 8);
+
+/* Issue GET_ERROR_LOG_ADDRESS_RANGE_ATTRIBUTES command */
+qtest_writel(qts, base + 0, 0xF);
+/* Read GET_ERROR_LOG_ADDRESS_RANGE_ATTRIBUTES result */
+log_address_attr = qtest_readq(qts, base + 8);
+
+/* Check log_address_range is not 0,~0 or base */
+g_assert(log_address_range != base);
+g_assert(log_address_range != 0);
+g_assert(log_address_range != ~0UL);
+
+/* Check log_address_length is ERST_RECORD_SIZE */
+g_assert(log_address_length == (8 * 1024));
+
+/* Check log_address_attr is 0 */
+g_assert(log_address_attr == 0);
+
+qtest_quit(qts);
+}
+
+int main(int argc, char **argv)
+{
+int ret;
+
+ret = boot_sector_init(disk);
+if (ret) {
+return ret;
+}
+
+g_test_init(&argc, &argv, NULL);
+
+qtest_add_func("/erst/get-error-log-address-range",
+   erst_get_error_log_address_range);
+
+ret = g_test_run();
+boot_sector_cleanup(disk);
+
+return ret;
+}
diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
index 0c76738..deae443 100644
--- a/tests/qtest/meson.build
+++ b/tests/qtest/meson.build
@@ -66,6 +66,7 @@ qtests_i386 = \
  (config_all_devices.has_key('CONFIG_RTL8139_PCI') ? ['rtl8139-test'] : []) +  \
  (config_all_devices.has_key('CONFIG_E1000E_PCI_EXPRESS') ? ['fuzz-e1000e-test'] : []) +  \
  (config_all_devices.has_key('CONFIG_ESP_PCI') ? ['am53c974-test'] : []) +  \
+  (config_all_devices.has_key('CONFIG_ACPI') ? ['erst-test'] : []) +  \
  qtests_pci +

[PATCH v5 06/10] ACPI ERST: build the ACPI ERST table

2021-06-30 Thread Eric DeVolder
This code is called from the machine code (if ACPI supported)
to generate the ACPI ERST table.

Signed-off-by: Eric DeVolder 
---
 hw/acpi/erst.c | 214 +
 1 file changed, 214 insertions(+)

diff --git a/hw/acpi/erst.c b/hw/acpi/erst.c
index 6e9bd2e..1f1dbbc 100644
--- a/hw/acpi/erst.c
+++ b/hw/acpi/erst.c
@@ -555,6 +555,220 @@ static const MemoryRegionOps erst_mem_ops = {
 /***/
 /***/
 
+/* ACPI 4.0: 17.4.1.2 Serialization Instruction Entries */
+static void build_serialization_instruction_entry(GArray *table_data,
+uint8_t serialization_action,
+uint8_t instruction,
+uint8_t flags,
+uint8_t register_bit_width,
+uint64_t register_address,
+uint64_t value,
+uint64_t mask)
+{
+/* ACPI 4.0: Table 17-18 Serialization Instruction Entry */
+struct AcpiGenericAddress gas;
+
+/* Serialization Action */
+build_append_int_noprefix(table_data, serialization_action, 1);
+/* Instruction */
+build_append_int_noprefix(table_data, instruction , 1);
+/* Flags */
+build_append_int_noprefix(table_data, flags   , 1);
+/* Reserved */
+build_append_int_noprefix(table_data, 0   , 1);
+/* Register Region */
+gas.space_id = AML_SYSTEM_MEMORY;
+gas.bit_width = register_bit_width;
+gas.bit_offset = 0;
+switch (register_bit_width) {
+case 8:
+gas.access_width = 1;
+break;
+case 16:
+gas.access_width = 2;
+break;
+case 32:
+gas.access_width = 3;
+break;
+case 64:
+gas.access_width = 4;
+break;
+default:
+gas.access_width = 0;
+break;
+}
+gas.address = register_address;
+build_append_gas_from_struct(table_data, &gas);
+/* Value */
+build_append_int_noprefix(table_data, value  , 8);
+/* Mask */
+build_append_int_noprefix(table_data, mask   , 8);
+}
+
+/* ACPI 4.0: 17.4.1 Serialization Action Table */
+void build_erst(GArray *table_data, BIOSLinker *linker, Object *erst_dev,
+const char *oem_id, const char *oem_table_id)
+{
+ERSTDeviceState *s = ACPIERST(erst_dev);
+unsigned action;
+unsigned erst_start = table_data->len;
+
+s->bar0 = (hwaddr)pci_get_bar_addr(PCI_DEVICE(erst_dev), 0);
+trace_acpi_erst_pci_bar_0(s->bar0);
+s->bar1 = (hwaddr)pci_get_bar_addr(PCI_DEVICE(erst_dev), 1);
+trace_acpi_erst_pci_bar_1(s->bar1);
+
+acpi_data_push(table_data, sizeof(AcpiTableHeader));
+/* serialization_header_length */
+build_append_int_noprefix(table_data, 48, 4);
+/* reserved */
+build_append_int_noprefix(table_data,  0, 4);
+/*
+ * instruction_entry_count - changes to the number of serialization
+ * instructions in the ACTIONs below must be reflected in this
+ * pre-computed value.
+ */
+build_append_int_noprefix(table_data, 29, 4);
+
+#define MASK8  0x00FFUL
+#define MASK16 0xUL
+#define MASK32 0xUL
+#define MASK64 0xUL
+
+for (action = 0; action < ACPI_ERST_MAX_ACTIONS; ++action) {
+switch (action) {
+case ACPI_ERST_ACTION_BEGIN_WRITE_OPERATION:
+build_serialization_instruction_entry(table_data, action,
+ACPI_ERST_INST_WRITE_REGISTER_VALUE, 0, 32,
+s->bar0 + ERST_CSR_ACTION, action, MASK8);
+break;
+case ACPI_ERST_ACTION_BEGIN_READ_OPERATION:
+build_serialization_instruction_entry(table_data, action,
+ACPI_ERST_INST_WRITE_REGISTER_VALUE, 0, 32,
+s->bar0 + ERST_CSR_ACTION, action, MASK8);
+break;
+case ACPI_ERST_ACTION_BEGIN_CLEAR_OPERATION:
+build_serialization_instruction_entry(table_data, action,
+ACPI_ERST_INST_WRITE_REGISTER_VALUE, 0, 32,
+s->bar0 + ERST_CSR_ACTION, action, MASK8);
+break;
+case ACPI_ERST_ACTION_END_OPERATION:
+build_serialization_instruction_entry(table_data, action,
+ACPI_ERST_INST_WRITE_REGISTER_VALUE, 0, 32,
+s->bar0 + ERST_CSR_ACTION, action, MASK8);
+break;
+case ACPI_ERST_ACTION_SET_RECORD_OFFSET:
+build_serialization_instruction_entry(table_data, action,
+ACPI_ERST_INST_WRITE_REGISTER  , 0, 32,
+s->bar0 + ERST_CSR_VALUE , 0, MASK32);
+build_serialization_instruction_entry(table_data, action,
+ACPI_ERST_INST_WRITE_REGISTER_VALUE, 0, 32,
+s->bar0 + ERST_CSR_ACTION, action, MASK8);
+break;
+case ACPI_ERST_ACTION_EXECUTE_OPERATION:
+build_serialization_instruction_entry(table_data, action,
+ACPI_ERST_INST_WRITE_REGISTER_VALUE, 0, 32,
+

[PATCH v5 04/10] ACPI ERST: header file for ERST

2021-06-30 Thread Eric DeVolder
This change introduces the definitions for ACPI ERST.

Signed-off-by: Eric DeVolder 
---
 include/hw/acpi/erst.h | 84 ++
 1 file changed, 84 insertions(+)
 create mode 100644 include/hw/acpi/erst.h

diff --git a/include/hw/acpi/erst.h b/include/hw/acpi/erst.h
new file mode 100644
index 000..07a3fa5
--- /dev/null
+++ b/include/hw/acpi/erst.h
@@ -0,0 +1,84 @@
+/*
+ * ACPI Error Record Serialization Table, ERST, Implementation
+ *
+ * Copyright (c) 2021 Oracle and/or its affiliates.
+ *
+ * ACPI ERST introduced in ACPI 4.0, June 16, 2009.
+ * ACPI Platform Error Interfaces : Error Serialization
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation;
+ * version 2 of the License.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>
+ */
+#ifndef HW_ACPI_ERST_H
+#define HW_ACPI_ERST_H
+
+void build_erst(GArray *table_data, BIOSLinker *linker, Object *erst_dev,
+const char *oem_id, const char *oem_table_id);
+
+#define TYPE_ACPI_ERST "acpi-erst"
+#define ACPI_ERST_MEMDEV_PROP "memdev"
+
+#define ACPI_ERST_ACTION_BEGIN_WRITE_OPERATION 0x0
+#define ACPI_ERST_ACTION_BEGIN_READ_OPERATION  0x1
+#define ACPI_ERST_ACTION_BEGIN_CLEAR_OPERATION 0x2
+#define ACPI_ERST_ACTION_END_OPERATION 0x3
+#define ACPI_ERST_ACTION_SET_RECORD_OFFSET 0x4
+#define ACPI_ERST_ACTION_EXECUTE_OPERATION 0x5
+#define ACPI_ERST_ACTION_CHECK_BUSY_STATUS 0x6
+#define ACPI_ERST_ACTION_GET_COMMAND_STATUS0x7
+#define ACPI_ERST_ACTION_GET_RECORD_IDENTIFIER 0x8
+#define ACPI_ERST_ACTION_SET_RECORD_IDENTIFIER 0x9
+#define ACPI_ERST_ACTION_GET_RECORD_COUNT  0xA
+#define ACPI_ERST_ACTION_BEGIN_DUMMY_WRITE_OPERATION   0xB
+#define ACPI_ERST_ACTION_RESERVED  0xC
+#define ACPI_ERST_ACTION_GET_ERROR_LOG_ADDRESS_RANGE   0xD
+#define ACPI_ERST_ACTION_GET_ERROR_LOG_ADDRESS_LENGTH  0xE
+#define ACPI_ERST_ACTION_GET_ERROR_LOG_ADDRESS_RANGE_ATTRIBUTES 0xF
+#define ACPI_ERST_ACTION_GET_EXECUTE_OPERATION_TIMINGS 0x10
+#define ACPI_ERST_MAX_ACTIONS \
+(ACPI_ERST_ACTION_GET_EXECUTE_OPERATION_TIMINGS + 1)
+
+#define ACPI_ERST_STATUS_SUCCESS0x00
+#define ACPI_ERST_STATUS_NOT_ENOUGH_SPACE   0x01
+#define ACPI_ERST_STATUS_HARDWARE_NOT_AVAILABLE 0x02
+#define ACPI_ERST_STATUS_FAILED 0x03
+#define ACPI_ERST_STATUS_RECORD_STORE_EMPTY 0x04
+#define ACPI_ERST_STATUS_RECORD_NOT_FOUND   0x05
+
+#define ACPI_ERST_INST_READ_REGISTER 0x00
+#define ACPI_ERST_INST_READ_REGISTER_VALUE   0x01
+#define ACPI_ERST_INST_WRITE_REGISTER0x02
+#define ACPI_ERST_INST_WRITE_REGISTER_VALUE  0x03
+#define ACPI_ERST_INST_NOOP  0x04
+#define ACPI_ERST_INST_LOAD_VAR1 0x05
+#define ACPI_ERST_INST_LOAD_VAR2 0x06
+#define ACPI_ERST_INST_STORE_VAR10x07
+#define ACPI_ERST_INST_ADD   0x08
+#define ACPI_ERST_INST_SUBTRACT  0x09
+#define ACPI_ERST_INST_ADD_VALUE 0x0A
+#define ACPI_ERST_INST_SUBTRACT_VALUE0x0B
+#define ACPI_ERST_INST_STALL 0x0C
+#define ACPI_ERST_INST_STALL_WHILE_TRUE  0x0D
+#define ACPI_ERST_INST_SKIP_NEXT_INSTRUCTION_IF_TRUE 0x0E
+#define ACPI_ERST_INST_GOTO  0x0F
+#define ACPI_ERST_INST_SET_SRC_ADDRESS_BASE  0x10
+#define ACPI_ERST_INST_SET_DST_ADDRESS_BASE  0x11
+#define ACPI_ERST_INST_MOVE_DATA 0x12
+
+/* returns NULL unless there is exactly one device */
+static inline Object *find_erst_dev(void)
+{
+return object_resolve_path_type("", TYPE_ACPI_ERST, NULL);
+}
+#endif
+
-- 
1.8.3.1




[PATCH v5 01/10] ACPI ERST: bios-tables-test.c steps 1 and 2

2021-06-30 Thread Eric DeVolder
Following the guidelines in tests/qtest/bios-tables-test.c, this
change adds empty placeholder files per step 1 for the new ERST
table, and excludes resulting changed files in bios-tables-test-allowed-diff.h
per step 2.

Signed-off-by: Eric DeVolder 
---
 tests/data/acpi/microvm/ERST| 0
 tests/data/acpi/pc/ERST | 0
 tests/data/acpi/q35/ERST| 0
 tests/qtest/bios-tables-test-allowed-diff.h | 4 
 4 files changed, 4 insertions(+)
 create mode 100644 tests/data/acpi/microvm/ERST
 create mode 100644 tests/data/acpi/pc/ERST
 create mode 100644 tests/data/acpi/q35/ERST

diff --git a/tests/data/acpi/microvm/ERST b/tests/data/acpi/microvm/ERST
new file mode 100644
index 000..e69de29
diff --git a/tests/data/acpi/pc/ERST b/tests/data/acpi/pc/ERST
new file mode 100644
index 000..e69de29
diff --git a/tests/data/acpi/q35/ERST b/tests/data/acpi/q35/ERST
new file mode 100644
index 000..e69de29
diff --git a/tests/qtest/bios-tables-test-allowed-diff.h b/tests/qtest/bios-tables-test-allowed-diff.h
index dfb8523..e004c71 100644
--- a/tests/qtest/bios-tables-test-allowed-diff.h
+++ b/tests/qtest/bios-tables-test-allowed-diff.h
@@ -1 +1,5 @@
 /* List of comma-separated changed AML files to ignore */
+"tests/data/acpi/pc/ERST",
+"tests/data/acpi/q35/ERST",
+"tests/data/acpi/microvm/ERST",
+
-- 
1.8.3.1




[PATCH v2 28/28] target/xtensa: Use translator_use_goto_tb

2021-06-30 Thread Richard Henderson
Reviewed-by: Max Filippov 
Signed-off-by: Richard Henderson 
---
 target/xtensa/translate.c | 6 +-
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/target/xtensa/translate.c b/target/xtensa/translate.c
index 14028d307d..ac42f5efdc 100644
--- a/target/xtensa/translate.c
+++ b/target/xtensa/translate.c
@@ -406,11 +406,7 @@ static void gen_jump(DisasContext *dc, TCGv dest)
 
 static int adjust_jump_slot(DisasContext *dc, uint32_t dest, int slot)
 {
-if (((dc->base.pc_first ^ dest) & TARGET_PAGE_MASK) != 0) {
-return -1;
-} else {
-return slot;
-}
+return translator_use_goto_tb(&dc->base, dest) ? slot : -1;
 }
 
 static void gen_jumpi(DisasContext *dc, uint32_t dest, int slot)
-- 
2.25.1




Re: [PATCH 3/3] cirrus: delete FreeBSD and macOS jobs

2021-06-30 Thread Wainer dos Santos Moschetta



On 6/25/21 2:22 PM, Daniel P. Berrangé wrote:

The builds for these two platforms can now be performed from GitLab CI
using cirrus-run.

Signed-off-by: Daniel P. Berrangé 
---
  .cirrus.yml | 55 -
  1 file changed, 55 deletions(-)


Reviewed-by: Wainer dos Santos Moschetta 



diff --git a/.cirrus.yml b/.cirrus.yml
index f4bf49b704..02c43a074a 100644
--- a/.cirrus.yml
+++ b/.cirrus.yml
@@ -1,61 +1,6 @@
  env:
CIRRUS_CLONE_DEPTH: 1
  
-freebsd_12_task:

-  freebsd_instance:
-image_family: freebsd-12-2
-cpu: 8
-memory: 8G
-  install_script:
-- ASSUME_ALWAYS_YES=yes pkg bootstrap -f ;
-- pkg install -y bash curl cyrus-sasl git glib gmake gnutls gsed
-  nettle perl5 pixman pkgconf png usbredir ninja
-  script:
-- mkdir build
-- cd build
-# TODO: Enable gnutls again once FreeBSD's libtasn1 got fixed
-# See: https://gitlab.com/gnutls/libtasn1/-/merge_requests/71
-- ../configure --enable-werror --disable-gnutls
-  || { cat config.log meson-logs/meson-log.txt; exit 1; }
-- gmake -j$(sysctl -n hw.ncpu)
-- gmake -j$(sysctl -n hw.ncpu) check V=1
-
-macos_task:
-  osx_instance:
-image: catalina-base
-  install_script:
-- brew install pkg-config python gnu-sed glib pixman make sdl2 bash ninja
-  script:
-- mkdir build
-- cd build
-- ../configure --python=/usr/local/bin/python3 --enable-werror
-   --extra-cflags='-Wno-error=deprecated-declarations'
-   || { cat config.log meson-logs/meson-log.txt; exit 1; }
-- gmake -j$(sysctl -n hw.ncpu)
-- gmake check-unit V=1
-- gmake check-block V=1
-- gmake check-qapi-schema V=1
-- gmake check-softfloat V=1
-- gmake check-qtest-x86_64 V=1
-
-macos_xcode_task:
-  osx_instance:
-# this is an alias for the latest Xcode
-image: catalina-xcode
-  install_script:
-- brew install pkg-config gnu-sed glib pixman make sdl2 bash ninja
-  script:
-- mkdir build
-- cd build
-- ../configure --extra-cflags='-Wno-error=deprecated-declarations' --enable-modules
-   --enable-werror --cc=clang || { cat config.log meson-logs/meson-log.txt; exit 1; }
-- gmake -j$(sysctl -n hw.ncpu)
-- gmake check-unit V=1
-- gmake check-block V=1
-- gmake check-qapi-schema V=1
-- gmake check-softfloat V=1
-- gmake check-qtest-x86_64 V=1
-
  windows_msys2_task:
timeout_in: 90m
windows_container:





[PATCH v2 25/28] target/sparc: Use translator_use_goto_tb

2021-06-30 Thread Richard Henderson
Reviewed-by: Mark Cave-Ayland 
Signed-off-by: Richard Henderson 
---
 target/sparc/translate.c | 19 +--
 1 file changed, 5 insertions(+), 14 deletions(-)

diff --git a/target/sparc/translate.c b/target/sparc/translate.c
index 4bfa3179f8..fb0c242606 100644
--- a/target/sparc/translate.c
+++ b/target/sparc/translate.c
@@ -339,23 +339,14 @@ static inline TCGv gen_dest_gpr(DisasContext *dc, int reg)
 }
 }
 
-static inline bool use_goto_tb(DisasContext *s, target_ulong pc,
-   target_ulong npc)
+static bool use_goto_tb(DisasContext *s, target_ulong pc, target_ulong npc)
 {
-if (unlikely(s->base.singlestep_enabled || singlestep)) {
-return false;
-}
-
-#ifndef CONFIG_USER_ONLY
-return (pc & TARGET_PAGE_MASK) == (s->base.tb->pc & TARGET_PAGE_MASK) &&
-   (npc & TARGET_PAGE_MASK) == (s->base.tb->pc & TARGET_PAGE_MASK);
-#else
-return true;
-#endif
+return translator_use_goto_tb(&s->base, pc) &&
+   translator_use_goto_tb(&s->base, npc);
 }
 
-static inline void gen_goto_tb(DisasContext *s, int tb_num,
-   target_ulong pc, target_ulong npc)
+static void gen_goto_tb(DisasContext *s, int tb_num,
+target_ulong pc, target_ulong npc)
 {
 if (use_goto_tb(s, pc, npc))  {
 /* jump to same page: we can use a direct jump */
-- 
2.25.1




Re: [PATCH v7 0/7] virtiofsd: Add support to enable/disable posix acls

2021-06-30 Thread Dr. David Alan Gilbert
* Vivek Goyal (vgo...@redhat.com) wrote:
> Hi,
> 
> This is V7 of the patches.
> 
> Changes since V6.
> 
> - Dropped kernel header update patch as somebody else did it.
> - Fixed coding style issues.
> 
> Currently posix ACL support does not work well with virtiofs and bunch
> of tests fail when I run xfstests "./check -g acl".
> 
> This patches series fixes the issues with virtiofs posix acl support
> and provides options to enable/disable posix acl (-o posix_acl/no_posix_acl).
> By default posix_acls are disabled.
> 
> With this patch series applied and virtiofsd running with "-o posix_acl",
> xfstests "./check -g acl" passes.
> 
> Thanks
> Vivek

Queued

> 
> 
> Vivek Goyal (7):
>   virtiofsd: Fix fuse setxattr() API change issue
>   virtiofsd: Fix xattr operations overwriting errno
>   virtiofsd: Add support for extended setxattr
>   virtiofsd: Add umask to seccom allow list
>   virtiofsd: Add capability to change/restore umask
>   virtiofsd: Switch creds, drop FSETID for system.posix_acl_access xattr
>   virtiofsd: Add an option to enable/disable posix acls
> 
>  docs/tools/virtiofsd.rst  |   3 +
>  tools/virtiofsd/fuse_common.h |  10 ++
>  tools/virtiofsd/fuse_lowlevel.c   |  18 +-
>  tools/virtiofsd/fuse_lowlevel.h   |   3 +-
>  tools/virtiofsd/helper.c  |   1 +
>  tools/virtiofsd/passthrough_ll.c  | 229 --
>  tools/virtiofsd/passthrough_seccomp.c |   1 +
>  7 files changed, 249 insertions(+), 16 deletions(-)
> 
> -- 
> 2.25.4
> 
> 
-- 
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK




[PATCH v2 20/28] target/riscv: Use translator_use_goto_tb

2021-06-30 Thread Richard Henderson
Just use translator_use_goto_tb directly at the one call site,
rather than maintaining a local wrapper.

Reviewed-by: Alistair Francis 
Signed-off-by: Richard Henderson 
---
 target/riscv/translate.c | 20 +---
 1 file changed, 1 insertion(+), 19 deletions(-)

diff --git a/target/riscv/translate.c b/target/riscv/translate.c
index 62a7d7e4c7..deda0c8a44 100644
--- a/target/riscv/translate.c
+++ b/target/riscv/translate.c
@@ -168,29 +168,11 @@ static void gen_exception_inst_addr_mis(DisasContext *ctx)
 generate_exception_mtval(ctx, RISCV_EXCP_INST_ADDR_MIS);
 }
 
-static inline bool use_goto_tb(DisasContext *ctx, target_ulong dest)
-{
-if (unlikely(ctx->base.singlestep_enabled)) {
-return false;
-}
-
-#ifndef CONFIG_USER_ONLY
-return (ctx->base.tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK);
-#else
-return true;
-#endif
-}
-
 static void gen_goto_tb(DisasContext *ctx, int n, target_ulong dest)
 {
-if (use_goto_tb(ctx, dest)) {
-/* chaining is only allowed when the jump is to the same page */
+if (translator_use_goto_tb(&ctx->base, dest)) {
 tcg_gen_goto_tb(n);
 tcg_gen_movi_tl(cpu_pc, dest);
-
-/* No need to check for single stepping here as use_goto_tb() will
- * return false in case of single stepping.
- */
 tcg_gen_exit_tb(ctx->base.tb, n);
 } else {
 tcg_gen_movi_tl(cpu_pc, dest);
-- 
2.25.1




[PATCH v2 18/28] target/openrisc: Use translator_use_goto_tb

2021-06-30 Thread Richard Henderson
Reorder the cases in openrisc_tr_tb_stop to make this easier to read.

Cc: Stafford Horne 
Signed-off-by: Richard Henderson 
---
 target/openrisc/translate.c | 15 ---
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/target/openrisc/translate.c b/target/openrisc/translate.c
index a9c81f8bd5..2d142d8577 100644
--- a/target/openrisc/translate.c
+++ b/target/openrisc/translate.c
@@ -1720,16 +1720,17 @@ static void openrisc_tr_tb_stop(DisasContextBase *dcbase, CPUState *cs)
 /* fallthru */
 
 case DISAS_TOO_MANY:
-if (unlikely(dc->base.singlestep_enabled)) {
-tcg_gen_movi_tl(cpu_pc, jmp_dest);
-gen_exception(dc, EXCP_DEBUG);
-} else if ((dc->base.pc_first ^ jmp_dest) & TARGET_PAGE_MASK) {
-tcg_gen_movi_tl(cpu_pc, jmp_dest);
-tcg_gen_lookup_and_goto_ptr();
-} else {
+if (translator_use_goto_tb(&dc->base, jmp_dest)) {
 tcg_gen_goto_tb(0);
 tcg_gen_movi_tl(cpu_pc, jmp_dest);
 tcg_gen_exit_tb(dc->base.tb, 0);
+break;
+}
+tcg_gen_movi_tl(cpu_pc, jmp_dest);
+if (unlikely(dc->base.singlestep_enabled)) {
+gen_exception(dc, EXCP_DEBUG);
+} else {
+tcg_gen_lookup_and_goto_ptr();
 }
 break;
 
-- 
2.25.1




Re: [PATCH 2/3] gitlab: support for FreeBSD 12, 13 and macOS 11 via cirrus-run

2021-06-30 Thread Wainer dos Santos Moschetta

Hi,

On 6/25/21 2:22 PM, Daniel P. Berrangé wrote:

This adds support for running 4 jobs via Cirrus CI runners:

  * FreeBSD 12
  * FreeBSD 13
  * macOS 11 with default XCode
  * macOS 11 with latest XCode

The gitlab job uses a container published by the libvirt-ci
project (https://gitlab.com/libvirt/libvirt-ci) that contains
the 'cirrus-run' command. This accepts a short yaml file that
describes a single Cirrus CI job, runs it using the Cirrus CI
REST API, and reports any output to the console.

In this way Cirrus CI is effectively working as an indirect
custom runner for GitLab CI pipelines. The key benefit is that
Cirrus CI job results affect the GitLab CI pipeline result and
so the user only has look at one CI dashboard.

Signed-off-by: Daniel P. Berrangé 
---
  .gitlab-ci.d/cirrus.yml | 103 
  .gitlab-ci.d/cirrus/README.rst  |  54 +++
  .gitlab-ci.d/cirrus/build.yml   |  35 ++
  .gitlab-ci.d/cirrus/freebsd-12.vars |  13 
  .gitlab-ci.d/cirrus/freebsd-13.vars |  13 
  .gitlab-ci.d/cirrus/macos-11.vars   |  15 
  .gitlab-ci.d/qemu-project.yml   |   1 +
  7 files changed, 234 insertions(+)
  create mode 100644 .gitlab-ci.d/cirrus.yml
  create mode 100644 .gitlab-ci.d/cirrus/README.rst
  create mode 100644 .gitlab-ci.d/cirrus/build.yml
  create mode 100644 .gitlab-ci.d/cirrus/freebsd-12.vars
  create mode 100644 .gitlab-ci.d/cirrus/freebsd-13.vars
  create mode 100644 .gitlab-ci.d/cirrus/macos-11.vars

diff --git a/.gitlab-ci.d/cirrus.yml b/.gitlab-ci.d/cirrus.yml
new file mode 100644
index 00..d7b4cce79b
--- /dev/null
+++ b/.gitlab-ci.d/cirrus.yml
@@ -0,0 +1,103 @@
+# Jobs that we delegate to Cirrus CI because they require an operating
+# system other than Linux. These jobs will only run if the required
+# setup has been performed on the GitLab account.
+#
+# The Cirrus CI configuration is generated by replacing target-specific
+# variables in a generic template: some of these variables are provided
+# when the GitLab CI job is defined, others are taken from a shell
+# snippet generated using lcitool.
+#
+# Note that the $PATH environment variable has to be treated with
+# special care, because we can't just override it at the GitLab CI job
+# definition level or we risk breaking it completely.
+.cirrus_build_job:
+  stage: build
+  image: registry.gitlab.com/libvirt/libvirt-ci/cirrus-run:master
+  needs: []
+  script:
+- source .gitlab-ci.d/cirrus/$NAME.vars
+- sed -e "s|[@]CI_REPOSITORY_URL@|$CI_REPOSITORY_URL|g"
+  -e "s|[@]CI_COMMIT_REF_NAME@|$CI_COMMIT_REF_NAME|g"
+  -e "s|[@]CI_COMMIT_SHA@|$CI_COMMIT_SHA|g"
+  -e "s|[@]CIRRUS_VM_INSTANCE_TYPE@|$CIRRUS_VM_INSTANCE_TYPE|g"
+  -e "s|[@]CIRRUS_VM_IMAGE_SELECTOR@|$CIRRUS_VM_IMAGE_SELECTOR|g"
+  -e "s|[@]CIRRUS_VM_IMAGE_NAME@|$CIRRUS_VM_IMAGE_NAME|g"
+  -e "s|[@]CIRRUS_VM_CPUS@|$CIRRUS_VM_CPUS|g"
+  -e "s|[@]CIRRUS_VM_RAM@|$CIRRUS_VM_RAM|g"
+  -e "s|[@]UPDATE_COMMAND@|$UPDATE_COMMAND|g"
+  -e "s|[@]INSTALL_COMMAND@|$INSTALL_COMMAND|g"
+  -e "s|[@]PATH@|$PATH_EXTRA${PATH_EXTRA:+:}\$PATH|g"
+  -e "s|[@]PKG_CONFIG_PATH@|$PKG_CONFIG_PATH|g"
+  -e "s|[@]PKGS@|$PKGS|g"
+  -e "s|[@]MAKE@|$MAKE|g"
+  -e "s|[@]PYTHON@|$PYTHON|g"
+  -e "s|[@]PIP3@|$PIP3|g"
+  -e "s|[@]PYPI_PKGS@|$PYPI_PKGS|g"
+  -e "s|[@]CONFIGURE_ARGS@|$CONFIGURE_ARGS|g"
+  -e "s|[@]TEST_TARGETS@|$TEST_TARGETS|g"
+  <.gitlab-ci.d/cirrus/build.yml >.gitlab-ci.d/cirrus/$NAME.yml
+- cat .gitlab-ci.d/cirrus/$NAME.yml
+- cirrus-run -v --show-build-log always .gitlab-ci.d/cirrus/$NAME.yml
+  rules:
+- if: "$TEMPORARILY_DISABLED"


Reading 'TEMPORARILY_DISABLED' I immediately think the job is 
malfunctioning or under maintenance.


But since the plan is to keep it running as 'non-gate' until it proves 
reliable, so maybe you could rename the variable to 'NON_GATE' or 
'STAGING_JOB' (i.e. some words to better express the intent).


Thanks!

- Wainer


+  allow_failure: true
+- if: "$CIRRUS_GITHUB_REPO && $CIRRUS_API_TOKEN"
+
+x64-freebsd-12-build:
+  extends: .cirrus_build_job
+  variables:
+NAME: freebsd-12
+CIRRUS_VM_INSTANCE_TYPE: freebsd_instance
+CIRRUS_VM_IMAGE_SELECTOR: image_family
+CIRRUS_VM_IMAGE_NAME: freebsd-12-2
+CIRRUS_VM_CPUS: 8
+CIRRUS_VM_RAM: 8G
+UPDATE_COMMAND: pkg update
+INSTALL_COMMAND: pkg install -y
+# TODO: Enable gnutls again once FreeBSD's libtasn1 got fixed
+# See: https://gitlab.com/gnutls/libtasn1/-/merge_requests/71
+CONFIGURE_ARGS: --disable-gnutls
+TEST_TARGETS: check
+
+x64-freebsd-13-build:
+  extends: .cirrus_build_job
+  variables:
+NAME: freebsd-13
+CIRRUS_VM_INSTANCE_TYPE: freebsd_instance
+CIRRUS_VM_IMAGE_SELECTOR: image_family
+CIRRUS_VM_IMAGE_NAME: freebsd-13-0
+CIRRUS_VM_CPUS: 8
+CIRRUS_VM_RAM: 8G
+UPDATE_COMMAND: pkg 

Re: [PATCH v3] target/s390x: Fix CC set by CONVERT TO FIXED/LOGICAL

2021-06-30 Thread Richard Henderson

On 6/30/21 3:50 AM, Ulrich Weigand wrote:

The FP-to-integer conversion instructions need to set CC 3 whenever
a "special case" occurs; this is the case whenever the instruction
also signals the IEEE invalid exception.  (See e.g. figure 19-18
in the Principles of Operation.)

However, qemu currently will set CC 3 only in the case where the
input was a NaN.  This is indeed one of the special cases, but
there are others, most notably the case where the input is out
of range of the target data type.

This patch fixes the problem by switching these instructions to
the "static" CC method and computing the correct result directly
in the helper.  (It cannot be re-computed later as the information
about the invalid exception is no longer available.)

This fixes a bug observed when running the wasmtime test suite
under the s390x-linux-user target.

Signed-off-by: Ulrich Weigand
---
  target/s390x/fpu_helper.c | 63 ---
  target/s390x/helper.h | 24 +-
  target/s390x/translate.c  | 39 +
  3 files changed, 83 insertions(+), 43 deletions(-)


Reviewed-by: Richard Henderson 

r~



[PATCH] python: Configure tox to skip missing interpreters

2021-06-30 Thread Wainer dos Santos Moschetta
Currently tox tests against the installed interpreters; if any supported
interpreter is absent, the run fails. It is not reasonable to expect
developers to have all supported interpreters installed on their
systems. Luckily, tox can be configured to skip missing interpreters.

This changes the tox setup so that missing interpreters are skipped by
default. On CI, however, we still want to enforce testing against all
supported versions, so the --skip-missing-interpreters=false option is
passed to tox there.

Signed-off-by: Wainer dos Santos Moschetta 
---
Tested locally with `make check-tox` where I have only Python 3.6 and 3.9
installed.
Tested on CI: https://gitlab.com/wainersm/qemu/-/jobs/1390010988
Still on CI, but I deliberately removed Python 3.8: 
https://gitlab.com/wainersm/qemu/-/jobs/1390046531

 .gitlab-ci.d/static_checks.yml | 1 +
 python/Makefile| 5 -
 python/setup.cfg   | 1 +
 3 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/.gitlab-ci.d/static_checks.yml b/.gitlab-ci.d/static_checks.yml
index b01f6ec231..96dbd9e310 100644
--- a/.gitlab-ci.d/static_checks.yml
+++ b/.gitlab-ci.d/static_checks.yml
@@ -43,6 +43,7 @@ check-python-tox:
 - make -C python check-tox
   variables:
 GIT_DEPTH: 1
+QEMU_TOX_EXTRA_ARGS: --skip-missing-interpreters=false
   needs:
 job: python-container
   allow_failure: true
diff --git a/python/Makefile b/python/Makefile
index ac46ae33e7..fe27a3e12e 100644
--- a/python/Makefile
+++ b/python/Makefile
@@ -1,4 +1,5 @@
 QEMU_VENV_DIR=.dev-venv
+QEMU_TOX_EXTRA_ARGS ?=
 
 .PHONY: help
 help:
@@ -15,6 +16,8 @@ help:
@echo "These tests use the newest dependencies."
@echo "Requires: Python 3.6 - 3.10, and tox."
@echo "Hint (Fedora): 'sudo dnf install python3-tox python3.10'"
+   @echo "The variable QEMU_TOX_EXTRA_ARGS can be used to pass extra"
+   @echo "arguments to tox."
@echo ""
@echo "make check-dev:"
@echo "Run tests in a venv against your default python3 version."
@@ -87,7 +90,7 @@ check:
 
 .PHONY: check-tox
 check-tox:
-   @tox
+   @tox $(QEMU_TOX_EXTRA_ARGS)
 
 .PHONY: clean
 clean:
diff --git a/python/setup.cfg b/python/setup.cfg
index 11f71d5312..14bab90288 100644
--- a/python/setup.cfg
+++ b/python/setup.cfg
@@ -121,6 +121,7 @@ multi_line_output=3
 
 [tox:tox]
 envlist = py36, py37, py38, py39, py310
+skip_missing_interpreters = true
 
 [testenv]
 allowlist_externals = make
-- 
2.31.1
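[Editorial note, not part of the original thread.] Assuming stock tox semantics, the two settings in this patch interact as follows: the value in setup.cfg is the default for local runs, and a command-line flag overrides the file, which is how CI re-enables strict checking. A sketch of the resulting local configuration:

```ini
; setup.cfg -- local default: absent interpreters are skipped, not failed
[tox:tox]
envlist = py36, py37, py38, py39, py310
skip_missing_interpreters = true
```

On CI the Makefile then runs `tox --skip-missing-interpreters=false`, and the command-line value wins over the ini file, so a missing interpreter in the container is still a hard failure there.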




[PATCH v2 16/28] target/mips: Fix missing else in gen_goto_tb

2021-06-30 Thread Richard Henderson
Do not emit dead code for the singlestep_enabled case,
after having exited the TB with a debug exception.

Reviewed-by: Philippe Mathieu-Daudé 
Signed-off-by: Richard Henderson 
---
 target/mips/tcg/translate.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/target/mips/tcg/translate.c b/target/mips/tcg/translate.c
index 52ae88b777..17e79f3de3 100644
--- a/target/mips/tcg/translate.c
+++ b/target/mips/tcg/translate.c
@@ -5030,8 +5030,9 @@ static void gen_goto_tb(DisasContext *ctx, int n, 
target_ulong dest)
 if (ctx->base.singlestep_enabled) {
 save_cpu_state(ctx, 0);
 gen_helper_raise_exception_debug(cpu_env);
+} else {
+tcg_gen_lookup_and_goto_ptr();
 }
-tcg_gen_lookup_and_goto_ptr();
 }
 }
 
-- 
2.25.1




Re: [PATCH v2 07/12] python: update help text for check-tox

2021-06-30 Thread Wainer dos Santos Moschetta


On 6/29/21 6:27 PM, John Snow wrote:



On Tue, Jun 29, 2021 at 4:25 PM Wainer dos Santos Moschetta 
<waine...@redhat.com> wrote:


Hi John,

On 6/29/21 1:42 PM, John Snow wrote:
> Move it up near the check-pipenv help text, and update it to
suggest parity.
>
> (At the time I first added it, I wasn't sure if I would be
keeping it,
> but I've come to appreciate it as it has actually helped uncover
bugs I
> would not have noticed without it. It should stay.)
>
> Signed-off-by: John Snow <js...@redhat.com>
> Reviewed-by: Willian Rampazzo <willi...@redhat.com>
> ---
>   python/Makefile | 8 ++--
>   1 file changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/python/Makefile b/python/Makefile
> index 07ad73ccd0..d2cfa6ad8f 100644
> --- a/python/Makefile
> +++ b/python/Makefile
> @@ -9,13 +9,17 @@ help:
>       @echo "    Requires: Python 3.6 and pipenv."
>       @echo "    Hint (Fedora): 'sudo dnf install python3.6 pipenv'"
>       @echo ""
> +     @echo "make check-tox:"
> +     @echo "    Run tests against multiple python versions."
> +     @echo "    These tests use the newest dependencies."
> +     @echo "    Requires: Python 3.6 - 3.10, and tox."
> +     @echo "    Hint (Fedora): 'sudo dnf install python3-tox
python3.10'"
> +     @echo ""

Somewhat related... in my system I don't have all supported python
versions installed, thus check-tox fails.

Instead, maybe, you could configure tox (as below) to test to
whatever
supported versions the developer have installed in the system; and on
absence of some versions it won't fail the tests entirely.

diff --git a/python/setup.cfg b/python/setup.cfg
index e730f208d3..1db8aaf340 100644
--- a/python/setup.cfg
+++ b/python/setup.cfg
@@ -123,6 +123,7 @@ multi_line_output=3

  [tox:tox]
  envlist = py36, py37, py38, py39, py310
+skip_missing_interpreters=true


Didn't know this was an option, to be honest ... I wonder if it can be 
toggled on/off easily? I like the idea that it will fail if we don't 
set up the CI environment correctly instead of succeeding quietly.


Though, you're right, some is better than none. Send a patch if you want?



I just sent a patch. Message-Id: 
<20210630184546.456582-1-waine...@redhat.com>


- Wainer



--js


[PATCH v2 19/28] target/ppc: Use translator_use_goto_tb

2021-06-30 Thread Richard Henderson
Reviewed-by: Luis Pires 
Signed-off-by: Richard Henderson 
---
 target/ppc/translate.c | 10 +-
 1 file changed, 1 insertion(+), 9 deletions(-)

diff --git a/target/ppc/translate.c b/target/ppc/translate.c
index f65d1e81ea..0fb09f2301 100644
--- a/target/ppc/translate.c
+++ b/target/ppc/translate.c
@@ -4302,15 +4302,7 @@ static inline void gen_update_cfar(DisasContext *ctx, 
target_ulong nip)
 
 static inline bool use_goto_tb(DisasContext *ctx, target_ulong dest)
 {
-if (unlikely(ctx->singlestep_enabled)) {
-return false;
-}
-
-#ifndef CONFIG_USER_ONLY
-return (ctx->base.tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK);
-#else
-return true;
-#endif
+return translator_use_goto_tb(&ctx->base, dest);
 }
 
 static void gen_lookup_and_goto_ptr(DisasContext *ctx)
-- 
2.25.1




Re: [PATCH v2] virtiofsd: Don't allow file creation with FUSE_OPEN

2021-06-30 Thread Dr. David Alan Gilbert
* Greg Kurz (gr...@kaod.org) wrote:
> A well behaved FUSE client uses FUSE_CREATE to create files. It isn't
> supposed to pass O_CREAT along a FUSE_OPEN request, as documented in
> the "fuse_lowlevel.h" header :
> 
> /**
>  * Open a file
>  *
>  * Open flags are available in fi->flags. The following rules
>  * apply.
>  *
>  *  - Creation (O_CREAT, O_EXCL, O_NOCTTY) flags will be
>  *filtered out / handled by the kernel.
> 
> But if the client happens to do it anyway, the server ends up passing
> this flag to open() without the mandatory mode_t 4th argument. Since
> open() is a variadic function, glibc will happily pass whatever it
> finds on the stack to the syscall. If this file is compiled with
> -D_FORTIFY_SOURCE=2, glibc will even detect that and abort:
> 
> *** invalid openat64 call: O_CREAT or O_TMPFILE without mode ***: terminated
> 
> Specifying O_CREAT with FUSE_OPEN is a protocol violation. Check this
> in do_open(), print out a message and return an error to the client,
> EINVAL like we already do when fuse_mbuf_iter_advance() fails.
> 
> The FUSE filesystem doesn't currently support O_TMPFILE, but the very
> same would happen if O_TMPFILE was passed in a FUSE_OPEN request. Check
> that as well.
> 
> Signed-off-by: Greg Kurz 

Queued

> ---
> 
> v2:
>  - do the check in core FUSE code instead of passthrough_ll (libfuse folks)
> 
> 
>  tools/virtiofsd/fuse_lowlevel.c | 6 ++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/tools/virtiofsd/fuse_lowlevel.c b/tools/virtiofsd/fuse_lowlevel.c
> index 7fe2cef1eb3b..3d725bcba2ca 100644
> --- a/tools/virtiofsd/fuse_lowlevel.c
> +++ b/tools/virtiofsd/fuse_lowlevel.c
> @@ -1084,6 +1084,12 @@ static void do_open(fuse_req_t req, fuse_ino_t nodeid,
>  return;
>  }
>  
> +/* File creation is handled by do_create() or do_mknod() */
> +if (arg->flags & (O_CREAT | O_TMPFILE)) {
> +fuse_reply_err(req, EINVAL);
> +return;
> +}
> +
> +memset(&fi, 0, sizeof(fi));
>  fi.flags = arg->flags;
>  fi.kill_priv = arg->open_flags & FUSE_OPEN_KILL_SUIDGID;
> -- 
> 2.31.1
> 
> 
-- 
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK




[PATCH v2 24/28] target/sh4: Use translator_use_goto_tb

2021-06-30 Thread Richard Henderson
Cc: Yoshinori Sato 
Signed-off-by: Richard Henderson 
---
 target/sh4/translate.c | 11 +++
 1 file changed, 3 insertions(+), 8 deletions(-)

diff --git a/target/sh4/translate.c b/target/sh4/translate.c
index 4dcfff81f6..db09a0bce3 100644
--- a/target/sh4/translate.c
+++ b/target/sh4/translate.c
@@ -225,17 +225,12 @@ static inline bool use_exit_tb(DisasContext *ctx)
 return (ctx->tbflags & GUSA_EXCLUSIVE) != 0;
 }
 
-static inline bool use_goto_tb(DisasContext *ctx, target_ulong dest)
+static bool use_goto_tb(DisasContext *ctx, target_ulong dest)
 {
-/* Use a direct jump if in same page and singlestep not enabled */
-if (unlikely(ctx->base.singlestep_enabled || use_exit_tb(ctx))) {
+if (use_exit_tb(ctx)) {
 return false;
 }
-#ifndef CONFIG_USER_ONLY
-return (ctx->base.tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK);
-#else
-return true;
-#endif
+return translator_use_goto_tb(&ctx->base, dest);
 }
 
 static void gen_goto_tb(DisasContext *ctx, int n, target_ulong dest)
-- 
2.25.1




[PATCH v2 27/28] target/tricore: Use tcg_gen_lookup_and_goto_ptr

2021-06-30 Thread Richard Henderson
The non-single-step case of gen_goto_tb may use
tcg_gen_lookup_and_goto_ptr to indirectly chain.

Reviewed-by: Bastian Koppelmann 
Signed-off-by: Richard Henderson 
---
 target/tricore/translate.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/target/tricore/translate.c b/target/tricore/translate.c
index 09465ea013..865020754d 100644
--- a/target/tricore/translate.c
+++ b/target/tricore/translate.c
@@ -3243,8 +3243,9 @@ static void gen_goto_tb(DisasContext *ctx, int n, 
target_ulong dest)
 gen_save_pc(dest);
 if (ctx->base.singlestep_enabled) {
 generate_qemu_excp(ctx, EXCP_DEBUG);
+} else {
+tcg_gen_lookup_and_goto_ptr();
 }
-tcg_gen_exit_tb(NULL, 0);
 }
 }
 
-- 
2.25.1



