Re: [PATCH qemu v20] spapr: Implement Open Firmware client interface

2021-06-08 Thread Alexey Kardashevskiy




On 6/8/21 08:54, BALATON Zoltan wrote:

On Mon, 7 Jun 2021, David Gibson wrote:

On Fri, Jun 04, 2021 at 03:59:22PM +0200, BALATON Zoltan wrote:

On Fri, 4 Jun 2021, David Gibson wrote:

On Wed, Jun 02, 2021 at 02:29:29PM +0200, BALATON Zoltan wrote:

On Wed, 2 Jun 2021, David Gibson wrote:

On Thu, May 27, 2021 at 02:42:39PM +0200, BALATON Zoltan wrote:

On Thu, 27 May 2021, David Gibson wrote:

On Tue, May 25, 2021 at 12:08:45PM +0200, BALATON Zoltan wrote:

On Tue, 25 May 2021, David Gibson wrote:

On Mon, May 24, 2021 at 12:55:07PM +0200, BALATON Zoltan wrote:

On Mon, 24 May 2021, David Gibson wrote:
What's ePAPR then and how is it different from PAPR? I mean the acronym,
not the hypercall method; the latter is explained in that doc, but what
ePAPR stands for and why the method is called that is not clear to me.


Ok, history lesson time.

For a long time PAPR has been the document that described the OS
environment for IBM POWER based server hardware.  Before it was called
PAPR (POWER Architecture Platform Requirements) it was called the
"RPA" (Requirements for the POWER Architecture, I think?).  You might
see the old name in a few places.

Requiring a full Open Firmware and a bunch of other fairly heavyweight
stuff, PAPR really wasn't suitable for embedded ppc chips and boards.
The situation with those used to be a complete mess, with basically
every board variant having its own different firmware with its own
different way of presenting some fragments of vital data to the OS.

ePAPR - Embedded Power Architecture Platform Requirements - was
created as a standard to try to unify how this stuff was handled on
embedded ppc chips.  I was one of the authors on early versions of
it.  It's mostly based around giving the OS a flattened device tree,
with some deliberately minimal requirements on firmware initialization
and entry state.  Here's a link to one of those early versions:

http://elinux.org/images/c/cf/Power_ePAPR_APPROVED_v1.1.pdf

I thought there were later versions, but I couldn't seem to find any.
It's possible the process of refining later versions just petered out
as the embedded ppc world mostly died and the flattened device tree
development mostly moved to ARM.

Since some of the embedded chips from Freescale had hypervisor
capabilities, a hypercall model was added to ePAPR - but that wasn't
something I was greatly involved in, so I don't know much about it.

ePAPR is the reason that the original PAPR is sometimes referred to as
"sPAPR" to disambiguate.


Ah, thanks, that really puts it in context. I've heard about PReP and 
CHRP in connection with the boards I've tried to emulate but don't know 
much about PAPR and server POWER systems.



The ePAPR (1.) seems to be preferred by KVM, with
MOL OSI supported for compatibility.


That document looks pretty out of date.  Most of it is only discussing
KVM PR, which is now barely maintained.  KVM HV only works with PAPR
hypercalls.


The link says it's the latest kernel docs, so maybe an update needs to be
sent to KVM?


I guess, but the chances of me finding time to do it are approximately
zero.


So if we need something else instead of
2. PAPR hypercalls, there seem to be two options: ePAPR and MOL OSI, which
should work with KVM, but then I'm not sure how to handle those on TCG.


[...]
I've tested that the missing rtas is not the reason for getting no output
via serial though, as even when disabling rtas on pegasos2.rom it boots and
I still get serial output; just some PCI devices are not detected (such as
USB, the video card and the not-emulated ethernet port), but these are not
fatal, so it might even work as a first try without rtas. Just to boot a
Linux kernel for testing, it would be enough if I can fix the serial output.
I still don't know why it's not finding serial, but I think it may be some
missing or wrong info in the device tree I generate. I'll try to focus on
this for now and leave the above rtas question for later.


Oh.. another thought on that.  You have an ISA serial port on Pegasos,
I believe.  I wonder if the PCI->ISA bridge needs some configuration /
initialization that the firmware is expected to do.  If so you'll need
to mimic that setup in qemu for the VOF case.


That's what I'm beginning to think, because I've added everything to the
device tree that I thought could be needed and I still don't get it
working, so it may need some config from the firmware. But how do I
access device registers from board code? I've tried adding a machine
reset method and writing to memory-mapped device registers, but all my
attempts failed. I've tried cpu_stl_le_data and even
memory_region_dispatch_write, but these did not get to the device.
What's the way to access guest mmio regs from QEMU?


That's odd, cpu_stl() and memory_region_dispatch_write() should work
from board code (after the relevant memory regions are configured, of
course).  As an ISA serial port, it's probably accessed through IO
space, not memory space.

Re: [RESEND] Multiple SMMUv3 instances on PCI Bus and PCI Host Bridge

2021-06-08 Thread Nicolin Chen
Hi Eric,

Thanks for the reply!

On Mon, Jun 07, 2021 at 11:19:39AM +0200, Eric Auger wrote:

> > So I started to have questions in my mind:
> > (1) Can the PCI host bridge (PCIE.128) be added to a different vSMMU
> > without following PCIE.0's SMMU setup?
> Changes need to be made in hw/arm/virt.c.
> create_smmu() is passed the primary bus the iommu is attached to.
> Currently arm virt only supports one smmu instance. So playing with qemu
> options is not sufficient.

Yes. I had my local change to do that.

> Besides that, effectively there are IORT changes needed because you need
> to route RCs to the different SMMU instances, i.e. some RIDs need to
> reach SMMU#0 and others SMMU#1.
> You can get inspired by "[PATCH v4 6/8] hw/arm/virt-acpi-build: Add
> explicit IORT idmap for smmuv3 node" for this kind of change.

I see! I tried some changes to my IORT table following the approach
from this patch, and it seems to work now. Thank you!



Re: [PATCH 4/4] aspeed: sonorapass: enable pca954x muxes

2021-06-08 Thread Cédric Le Goater
On 6/9/21 3:58 AM, Joel Stanley wrote:
> On Tue, 8 Jun 2021 at 19:56, Patrick Venture  wrote:
>>
>> On Wed, May 19, 2021 at 10:18 AM Patrick Venture  wrote:
>>>
>>> On Tue, May 18, 2021 at 4:27 PM Joel Stanley  wrote:

 On Tue, 18 May 2021 at 19:41, Patrick Venture  wrote:
>
> Enables the pca954x muxes in the bmc board configuration.
>
> Signed-off-by: Patrick Venture 
> Reviewed-by: Hao Wu 

 Not sure about this one, there's no device tree for it in Linux.
>>>
>>> Yeah, this was just a pick-up from grepping other BMC boards.  I added
>>> these going off the comment alone.  I'd be okay with dropping this in
>>> the series.
>>
>> In this case, the number of patches changed within a version change --
>> should I start a fresh series or just bump the version and drop the
>> last patch?
> 
> I wasn't saying we shouldn't include this change - it's good. I just
> didn't have any information to say whether it was correct or not.
> 
> I see you chose to resend without this one, let's get Cedric to merge
> those patches.

I took these patches in the aspeed-6.1 branch:

  hw/arm: add quanta-gbs-bmc machine
  hw/arm: quanta-gbs-bmc add i2c comments
  hw/arm: gsj add i2c comments
  hw/arm: gsj add pca9548
  hw/arm: quanta-q71l add pca954x muxes
  aspeed: sonorapass: enable pca954x muxes

Peter,

I can include them in an aspeed PR.

Thanks,

C.



Re: [PATCH v2 3/3] hw/arm: quanta-q71l add pca954x muxes

2021-06-08 Thread Cédric Le Goater
On 6/8/21 10:25 PM, Patrick Venture wrote:
> Adds the pca954x muxes expected.
> 
> Tested: Booted quanta-q71l image to userspace.
> Signed-off-by: Patrick Venture 
> Reviewed-by: Hao Wu 
> Reviewed-by: Joel Stanley 

Reviewed-by: Cédric Le Goater 

I guess this patchset can go through the arm tree directly.

Thanks,

C.

> ---
>  hw/arm/Kconfig  |  1 +
>  hw/arm/aspeed.c | 11 ---
>  2 files changed, 9 insertions(+), 3 deletions(-)
> 
> diff --git a/hw/arm/Kconfig b/hw/arm/Kconfig
> index 9d1c2a6f7b..4a033e81ef 100644
> --- a/hw/arm/Kconfig
> +++ b/hw/arm/Kconfig
> @@ -413,6 +413,7 @@ config ASPEED_SOC
>  select PCA9552
>  select SERIAL
>  select SMBUS_EEPROM
> +select PCA954X
>  select SSI
>  select SSI_M25P80
>  select TMP105
> diff --git a/hw/arm/aspeed.c b/hw/arm/aspeed.c
> index 3fe6c55744..35a28b0e8b 100644
> --- a/hw/arm/aspeed.c
> +++ b/hw/arm/aspeed.c
> @@ -14,6 +14,7 @@
>  #include "hw/arm/boot.h"
>  #include "hw/arm/aspeed.h"
>  #include "hw/arm/aspeed_soc.h"
> +#include "hw/i2c/i2c_mux_pca954x.h"
>  #include "hw/i2c/smbus_eeprom.h"
>  #include "hw/misc/pca9552.h"
>  #include "hw/misc/tmp105.h"
> @@ -461,14 +462,18 @@ static void quanta_q71l_bmc_i2c_init(AspeedMachineState *bmc)
>  /* TODO: i2c-1: Add Frontpanel FRU eeprom@57 24c64 */
>  /* TODO: Add Memory Riser i2c mux and eeproms. */
>  
> -/* TODO: i2c-2: pca9546@74 */
> -/* TODO: i2c-2: pca9548@77 */
> +i2c_slave_create_simple(aspeed_i2c_get_bus(&bmc->soc.i2c, 2), "pca9546", 0x74);
> +i2c_slave_create_simple(aspeed_i2c_get_bus(&bmc->soc.i2c, 2), "pca9548", 0x77);
> +
>  /* TODO: i2c-3: Add BIOS FRU eeprom@56 24c64 */
> -/* TODO: i2c-7: Add pca9546@70 */
> +
> +/* i2c-7 */
> +i2c_slave_create_simple(aspeed_i2c_get_bus(&bmc->soc.i2c, 7), "pca9546", 0x70);
>  /*- i2c@0: pmbus@59 */
>  /*- i2c@1: pmbus@58 */
>  /*- i2c@2: pmbus@58 */
>  /*- i2c@3: pmbus@59 */
> +
>  /* TODO: i2c-7: Add PDB FRU eeprom@52 */
>  /* TODO: i2c-8: Add BMC FRU eeprom@50 */
>  }
> 




Re: [PATCH v1 2/5] ui: Add a helper to wait on a dmabuf sync

2021-06-08 Thread Dongwon Kim
Hi Gerd,

Our goal is to block the virtio-gpu driver running on the guest from writing
to a buffer that hasn't been completely blitted yet. If we do
graphic_hw_gl_block, it will block the next commands from being processed but
won't stop the guest from filling the scanout buffers and sending commands, I think.




Re: [PATCH 4/4] aspeed: sonorapass: enable pca954x muxes

2021-06-08 Thread Joel Stanley
On Tue, 8 Jun 2021 at 19:56, Patrick Venture  wrote:
>
> On Wed, May 19, 2021 at 10:18 AM Patrick Venture  wrote:
> >
> > On Tue, May 18, 2021 at 4:27 PM Joel Stanley  wrote:
> > >
> > > On Tue, 18 May 2021 at 19:41, Patrick Venture  wrote:
> > > >
> > > > Enables the pca954x muxes in the bmc board configuration.
> > > >
> > > > Signed-off-by: Patrick Venture 
> > > > Reviewed-by: Hao Wu 
> > >
> > > Not sure about this one, there's no device tree for it in Linux.
> >
> > Yeah, this was just a pick-up from grepping other BMC boards.  I added
> > these going off the comment alone.  I'd be okay with dropping this in
> > the series.
>
> In this case, the number of patches changed within a version change --
> should I start a fresh series or just bump the version and drop the
> last patch?

I wasn't saying we shouldn't include this change - it's good. I just
didn't have any information to say whether it was correct or not.

I see you chose to resend without this one, let's get Cedric to merge
those patches.

Cheers,

Joel



Re: [PATCH v2 2/3] hw/timer: Initial commit of Ibex Timer

2021-06-08 Thread Bin Meng
On Wed, Jun 9, 2021 at 7:49 AM Alistair Francis
 wrote:
>
> Add support for the Ibex timer. This is used with the RISC-V
> mtime/mtimecmp similar to the SiFive CLINT.
>
> We currently don't support changing the prescale or the timervalue.
>
> Signed-off-by: Alistair Francis 
> ---
>  include/hw/timer/ibex_timer.h |  52 ++
>  hw/timer/ibex_timer.c | 305 ++
>  MAINTAINERS   |   6 +-
>  hw/timer/meson.build  |   1 +
>  4 files changed, 360 insertions(+), 4 deletions(-)
>  create mode 100644 include/hw/timer/ibex_timer.h
>  create mode 100644 hw/timer/ibex_timer.c
>
> diff --git a/include/hw/timer/ibex_timer.h b/include/hw/timer/ibex_timer.h
> new file mode 100644
> index 00..6a43537003
> --- /dev/null
> +++ b/include/hw/timer/ibex_timer.h
> @@ -0,0 +1,52 @@
> +/*
> + * QEMU lowRISC Ibex Timer device
> + *
> + * Copyright (c) 2021 Western Digital
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a copy
> + * of this software and associated documentation files (the "Software"), to deal
> + * in the Software without restriction, including without limitation the rights
> + * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
> + * copies of the Software, and to permit persons to whom the Software is
> + * furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
> + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
> + * THE SOFTWARE.
> + */
> +
> +#ifndef HW_IBEX_TIMER_H
> +#define HW_IBEX_TIMER_H
> +
> +#include "hw/sysbus.h"
> +
> +#define TYPE_IBEX_TIMER "ibex-timer"
> +OBJECT_DECLARE_SIMPLE_TYPE(IbexTimerState, IBEX_TIMER)
> +
> +struct IbexTimerState {
> +/* <private> */
> +SysBusDevice parent_obj;
> +
> +/* <public> */
> +MemoryRegion mmio;
> +
> +uint32_t timer_ctrl;
> +uint32_t timer_cfg0;
> +uint32_t timer_compare_lower0;
> +uint32_t timer_compare_upper0;
> +uint32_t timer_intr_enable;
> +uint32_t timer_intr_state;
> +uint32_t timer_intr_test;
> +
> +uint32_t timebase_freq;
> +
> +qemu_irq irq;
> +};
> +#endif /* HW_IBEX_TIMER_H */
> diff --git a/hw/timer/ibex_timer.c b/hw/timer/ibex_timer.c
> new file mode 100644
> index 00..4d55eb5088
> --- /dev/null
> +++ b/hw/timer/ibex_timer.c
> @@ -0,0 +1,305 @@
> +/*
> + * QEMU lowRISC Ibex Timer device
> + *
> + * Copyright (c) 2021 Western Digital
> + *
> + * For details check the documentation here:
> + *https://docs.opentitan.org/hw/ip/rv_timer/doc/
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a copy
> + * of this software and associated documentation files (the "Software"), to deal
> + * in the Software without restriction, including without limitation the rights
> + * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
> + * copies of the Software, and to permit persons to whom the Software is
> + * furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
> + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
> + * THE SOFTWARE.
> + */
> +
> +#include "qemu/osdep.h"
> +#include "qemu/log.h"
> +#include "qemu/timer.h"
> +#include "hw/timer/ibex_timer.h"
> +#include "hw/irq.h"
> +#include "hw/qdev-properties.h"
> +#include "target/riscv/cpu.h"
> +#include "migration/vmstate.h"
> +
> +REG32(CTRL, 0x00)
> +FIELD(CTRL, ACTIVE, 0, 1)
> +REG32(CFG0, 0x100)
> +FIELD(CFG0, PRESCALE, 0, 12)
> +FIELD(CFG0, STEP, 16, 8)
> +REG32(LOWER0, 0x104)
> +REG32(UPPER0, 0x108)
> +REG32(COMPARE_LOWER0, 0x10C)
> +REG32(COMPARE_UPPER0, 0x110)
> +REG32(INTR_ENABLE, 0x114)
> +FIELD(INTR_ENABLE, IE_0, 0, 1)
> +REG32(INTR_STATE, 0x118)
> +FIELD(INTR_STATE, IS_0, 0, 1)
> +REG32(INTR_TEST, 0x11C)
> +FIELD(INTR_TEST, T_0, 0, 1)
> +
> +static uint64_t 

Re: [PATCH v2 3/3] hw/riscv: OpenTitan: Connect the mtime and mtimecmp timer

2021-06-08 Thread Bin Meng
On Wed, Jun 9, 2021 at 7:49 AM Alistair Francis
 wrote:
>
> Connect the Ibex timer to the OpenTitan machine. The timer can trigger
> the RISC-V MIE interrupt as well as a custom device interrupt.
>
> Signed-off-by: Alistair Francis 
> ---
>  include/hw/riscv/opentitan.h |  5 -
>  hw/riscv/opentitan.c | 14 +++---
>  2 files changed, 15 insertions(+), 4 deletions(-)
>

Reviewed-by: Bin Meng 



[PATCH] KVM: Fix dirty ring mmap incorrect size due to renaming accident

2021-06-08 Thread Peter Xu
Found this when I wanted to try the per-vcpu dirty rate series out, then I
found that it's not really working and it can quickly hang a guest to death.  I
found that strange errors (e.g. guest crash after migration) happen even without
the per-vcpu dirty rate series.

When merging dirty ring, probably no one noticed that the trivial renaming diff
[1] missed two existing references of kvm_dirty_ring_size; they do matter
since otherwise we'll mmap() a shorter range of memory after the renaming.

I think it didn't SIGBUS for me easily simply because some other stuff within
qemu mmap()ed right after the dirty rings (e.g. when testing 4096 slots, it
aligned with one small page on x86), so when we access the rings we've been
reading/writing to random memory elsewhere in qemu.

Fix the two sizes when map/unmap the shared dirty gfn memory.
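The arithmetic behind the fix can be sketched outside QEMU: the ring size field counts *entries*, while mmap()/munmap() need a *byte* length. This is a minimal illustration (it assumes the 16-byte struct kvm_dirty_gfn layout from the Linux UAPI — __u32 flags, __u32 slot, __u64 offset — and the helper name is mine, not QEMU's):

```python
import struct

# One struct kvm_dirty_gfn: __u32 flags; __u32 slot; __u64 offset;
DIRTY_GFN_FMT = "=IIQ"
DIRTY_GFN_BYTES = struct.calcsize(DIRTY_GFN_FMT)  # 16 bytes

def dirty_ring_bytes(ring_size_entries: int) -> int:
    """Byte length that must be passed to mmap()/munmap()."""
    return ring_size_entries * DIRTY_GFN_BYTES

# With a 4096-entry ring, mapping only ring_size bytes (4 KiB) instead of
# ring_bytes (64 KiB) leaves most of the ring unmapped -- accesses past the
# first page then hit whatever happens to be mapped after it.
```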

[1] 
https://lore.kernel.org/qemu-devel/dac5f0c6-1bca-3daf-e5d2-6451dbbac...@redhat.com/

Cc: Hyman Huang 
Cc: Paolo Bonzini 
Cc: Dr. David Alan Gilbert 
Signed-off-by: Peter Xu 
---
 accel/kvm/kvm-all.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index c7ec5388500..e5b10dd129c 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -411,7 +411,7 @@ static int do_kvm_destroy_vcpu(CPUState *cpu)
 }
 
 if (cpu->kvm_dirty_gfns) {
-ret = munmap(cpu->kvm_dirty_gfns, s->kvm_dirty_ring_size);
+ret = munmap(cpu->kvm_dirty_gfns, s->kvm_dirty_ring_bytes);
 if (ret < 0) {
 goto err;
 }
@@ -495,7 +495,7 @@ int kvm_init_vcpu(CPUState *cpu, Error **errp)
 
 if (s->kvm_dirty_ring_size) {
 /* Use MAP_SHARED to share pages with the kernel */
-cpu->kvm_dirty_gfns = mmap(NULL, s->kvm_dirty_ring_size,
+cpu->kvm_dirty_gfns = mmap(NULL, s->kvm_dirty_ring_bytes,
PROT_READ | PROT_WRITE, MAP_SHARED,
cpu->kvm_fd,
PAGE_SIZE * KVM_DIRTY_LOG_PAGE_OFFSET);
-- 
2.31.1




Re: [PATCH 32/55] target/arm: Implement MVE VRMLALDAVH, VRMLSLDAVH

2021-06-08 Thread Richard Henderson

On 6/7/21 9:57 AM, Peter Maydell wrote:

+#define DO_LDAVH(OP, ESIZE, TYPE, H, XCHG, EVENACC, ODDACC, TO128)  \
+uint64_t HELPER(glue(mve_, OP))(CPUARMState *env, void *vn, \
+void *vm, uint64_t a)   \
+{   \
+uint16_t mask = mve_element_mask(env);  \
+unsigned e; \
+TYPE *n = vn, *m = vm;  \
+Int128 acc = TO128(a);  \


This seems to miss the << 8.

Which suggests that the whole thing can be done without Int128:


+for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) {  \
+if (mask & 1) { \
+if (e & 1) {\
+acc = ODDACC(acc, TO128(n[H(e - 1 * XCHG)] * m[H(e)])); \


  tmp = n * m;
  tmp = (tmp >> 8) + ((tmp >> 7) & 1);
  acc ODDACC tmp;
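The per-element step suggested above — an arithmetic shift right by 8 that rounds to nearest by adding back bit 7 of the discarded byte — can be sketched standalone. This is only an illustration of the rounding trick, not the actual MVE helper; the function names are mine, and ODDACC is shown as += (the subtracting variants would use -=):

```python
def round_shr8(x: int) -> int:
    # Arithmetic shift right by 8, round to nearest:
    # bit 7 of the discarded low byte becomes the round bit.
    return (x >> 8) + ((x >> 7) & 1)

def accumulate_step(acc: int, n: int, m: int) -> int:
    # tmp = n * m; tmp = (tmp >> 8) + ((tmp >> 7) & 1); acc += tmp
    return acc + round_shr8(n * m)

# 384/256 = 1.5 rounds up to 2; 383/256 ~ 1.496 rounds down to 1.
```

Python's `>>` is an arithmetic (sign-extending) shift, so negative products round the same way as the two's-complement C snippet.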


+static bool trans_VRMLALDAVH_S(DisasContext *s, arg_vmlaldav *a)
+{
+MVEGenDualAccOpFn *fns[] = {


static const, etc.


r~



Re: [PATCH 31/55] include/qemu/int128.h: Add function to create Int128 from int64_t

2021-06-08 Thread Richard Henderson

On 6/7/21 9:57 AM, Peter Maydell wrote:

int128_make64() creates an Int128 from an unsigned 64 bit value; add
a function int128_makes64() creating an Int128 from a signed 64 bit
value.

Signed-off-by: Peter Maydell
---
  include/qemu/int128.h | 10 ++
  1 file changed, 10 insertions(+)


Reviewed-by: Richard Henderson 

r~



Re: [PATCH 30/55] target/arm: Implement MVE VMLSLDAV

2021-06-08 Thread Richard Henderson

On 6/7/21 9:57 AM, Peter Maydell wrote:

+static bool trans_VMLSLDAV(DisasContext *s, arg_vmlaldav *a)
+{
+MVEGenDualAccOpFn *fns[4][2] = {


static const, otherwise,
Reviewed-by: Richard Henderson 

r~



Re: [PATCH 29/55] target/arm: Implement MVE VMLALDAV

2021-06-08 Thread Richard Henderson

On 6/7/21 9:57 AM, Peter Maydell wrote:

+static bool trans_VMLALDAV_S(DisasContext *s, arg_vmlaldav *a)
+{
+MVEGenDualAccOpFn *fns[4][2] = {


static const, otherwise,
Reviewed-by: Richard Henderson 

r~



Re: [PATCH 29/55] target/arm: Implement MVE VMLALDAV

2021-06-08 Thread Richard Henderson

On 6/7/21 9:57 AM, Peter Maydell wrote:

Implement the MVE VMLALDAV insn, which multiplies pairs of integer
elements, accumulating them into a 64-bit result in a pair of
general-purpose registers.

Signed-off-by: Peter Maydell
---
  target/arm/helper-mve.h|   8 +++
  target/arm/translate.h |  10 
  target/arm/mve.decode  |  15 ++
  target/arm/mve_helper.c|  32 
  target/arm/translate-mve.c | 100 +
  5 files changed, 165 insertions(+)


Reviewed-by: Richard Henderson 

r~



Re: A bug of Monitor Chardev ?

2021-06-08 Thread Longpeng (Mike, Cloud Infrastructure Service Product Dept.)



在 2021/6/8 23:37, Daniel P. Berrangé 写道:
> On Tue, Jun 08, 2021 at 04:07:30PM +0200, Markus Armbruster wrote:
>> "Longpeng (Mike, Cloud Infrastructure Service Product Dept.)"
>>  writes:
>>
>>> We found a race during QEMU startup, which can cause the QEMU process to
>>> coredump.
>>>
>>>  |
>>> |
>>> [1] create MON chardev  |
>>> qemu_create_early_backends  |
>>>   chardev_init_func |
>>> |
>>> [2] create MON iothread |
>>> qemu_create_late_backends   |
>>>   mon_init_func |
>>> aio_bh_schedule---> monitor_qmp_setup_handlers_bh
>>> [3] enter main loop |tcp_chr_update_read_handler
>>> (* A client come in, e.g. Libvirt *)|  update_ioc_handlers
>>> tcp_chr_new_client  |
>>>   update_ioc_handlers   |
>>> |
>>> [4] create new hup_source   |
>>> s->hup_source = *PTR1*  |
>>>   g_source_attach(s->hup_source)|
>>> |[5] remove_hup_source(*PTR1*)
>>> |(create new hup_source)
>>> | s->hup_source = *PTR2*
>>> [6] g_source_attach_unlocked|
>>>   *PTR1* is freed by [5]|
>>> 
>>> Do you have any suggestion to fix this bug ? Thanks!
>>
>> Do we?  We talked, but I'm not sure we reached a conclusion.
> 
> Seems like we ended up with two options.
> 
>   1. A workaround for the current specific problem by rearranging
>  the initialization code in the monitor a little.
> 
>   2. A design fix of splitting the chardev creation into two
>  parts, one creation, and one activation.
> 
> The latter is significantly more work, but is a better long term bet IMHO.
> But what we really need is someone motivated to actually implement one of
> the two options.
> 
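Option 2's creation/activation split could be sketched like this — a minimal illustration of the two-phase pattern only, not QEMU's chardev API (all names here are invented): handlers attached before activation cannot race with a half-initialized backend because nothing is dispatched until activation.

```python
class Chardev:
    """Two-phase init: construct state first, attach handlers later."""

    def __init__(self, name: str):
        self.name = name
        self.active = False
        self._handlers = []

    def add_handler(self, fn):
        # Safe at any point before activation: nothing runs yet.
        self._handlers.append(fn)

    def activate(self):
        # Only now do handlers start receiving events, so an early
        # client cannot observe a half-initialized monitor.
        self.active = True

    def dispatch(self, event):
        if not self.active:
            return None  # clients connecting early are deferred
        return [fn(event) for fn in self._handlers]
```

The race in the diagram above disappears because the hup_source-style state is only wired up once, inside activate(), instead of once at creation and again when the backend installs its handlers.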

How about the following implementation of option 1? We've tested it for several
weeks and it works fine.

diff --git a/chardev/char-socket.c b/chardev/char-socket.c
index a484641..ecb3db9 100644
--- a/chardev/char-socket.c
+++ b/chardev/char-socket.c
@@ -722,6 +722,19 @@ static void tcp_chr_update_read_handler(Chardev *chr)
 update_ioc_handlers(s);
 }

+static void tcp_chr_disable_handler(Chardev *chr)
+{
+SocketChardev *s = SOCKET_CHARDEV(chr);
+
+if (s->listener && s->state == TCP_CHARDEV_STATE_DISCONNECTED) {
+qio_net_listener_set_client_func_full(s->listener, NULL, NULL,
+  NULL, chr->gcontext);
+}
+
+remove_fd_in_watch(chr);
+remove_hup_source(s);
+}
+
 static bool tcp_chr_is_connected(Chardev *chr)
 {
 SocketChardev *s = SOCKET_CHARDEV(chr);
@@ -1703,6 +1716,7 @@ static void char_socket_class_init(ObjectClass *oc, void
*data)
 cc->chr_add_watch = tcp_chr_add_watch;
 cc->chr_set_reconnect_time = tcp_chr_set_reconnect_time;
 cc->chr_update_read_handler = tcp_chr_update_read_handler;
+cc->chr_disable_handler = tcp_chr_disable_handler;
 cc->chr_is_connected = tcp_chr_is_connected;
 cc->chr_get_connect_id = tcp_chr_get_connect_id;

diff --git a/chardev/char.c b/chardev/char.c
index ff0a3cf..990fe4f 100644
--- a/chardev/char.c
+++ b/chardev/char.c
@@ -238,6 +238,15 @@ void qemu_chr_be_update_read_handlers(Chardev *s,
 }
 }

+void qemu_chr_be_disable_handlers(Chardev *s)
+{
+ChardevClass *cc = CHARDEV_GET_CLASS(s);
+
+if (cc->chr_disable_handler) {
+cc->chr_disable_handler(s);
+}
+}
+
 int qemu_chr_add_client(Chardev *s, int fd)
 {
 return CHARDEV_GET_CLASS(s)->chr_add_client ?
diff --git a/include/chardev/char.h b/include/chardev/char.h
index d1ec628..7a8c740 100644
--- a/include/chardev/char.h
+++ b/include/chardev/char.h
@@ -212,6 +212,8 @@ void qemu_chr_be_write_impl(Chardev *s, uint8_t *buf, int len);
 void qemu_chr_be_update_read_handlers(Chardev *s,
   GMainContext *context);

+void qemu_chr_be_disable_handlers(Chardev *s);
+
 /**
  * qemu_chr_be_event:
  * @event: the event to send
@@ -282,6 +284,7 @@ typedef struct ChardevClass {
 int (*chr_sync_read)(Chardev *s, const uint8_t *buf, int len);
 GSource *(*chr_add_watch)(Chardev *s, GIOCondition cond);
 void (*chr_update_read_handler)(Chardev *s);
+void (*chr_disable_handler)(Chardev *s);
 int (*chr_ioctl)(Chardev *s, int cmd, void *arg);
 int (*get_msgfds)(Chardev *s, int* fds, int num);
 int (*set_msgfds)(Chardev *s, int *fds, int num);
diff --git a/monitor/qmp.c b/monitor/qmp.c
index 9a69ae4..2c2248c 100644
--- a/monitor/qmp.c
+++ b/monitor/qmp.c
@@ -413,11 +413,13 @@ void monitor_init_qmp(Chardev *chr, bool pretty)
  * e.g. the chardev is 

Re: [PATCH 2/4] Python QEMU utils: introduce a generic feature list

2021-06-08 Thread Cleber Rosa Junior
On Tue, Jun 8, 2021 at 5:42 PM Wainer dos Santos Moschetta <
waine...@redhat.com> wrote:

> Hi,
>
> On 6/8/21 11:09 AM, Cleber Rosa wrote:
> > Which can be used to check for any "feature" that is available as a
> > QEMU command line option, and that will return its list of available
> > options.
> >
> > This is a generalization of the list_accel() utility function, which
> > is itself re-implemented in terms of the more generic feature.
> >
> > Signed-off-by: Cleber Rosa 
> > ---
> >   python/qemu/utils/__init__.py |  2 ++
> >   python/qemu/utils/accel.py| 15 ++--
> >   python/qemu/utils/feature.py  | 44 +++
> >   3 files changed, 48 insertions(+), 13 deletions(-)
> >   create mode 100644 python/qemu/utils/feature.py
> >
> > diff --git a/python/qemu/utils/__init__.py b/python/qemu/utils/__init__.py
> > index 7f1a5138c4..1d0789eaa2 100644
> > --- a/python/qemu/utils/__init__.py
> > +++ b/python/qemu/utils/__init__.py
> > @@ -20,12 +20,14 @@
> >
> >   # pylint: disable=import-error
> >   from .accel import kvm_available, list_accel, tcg_available
> > +from .feature import list_feature
> >
> >
> >   __all__ = (
> >   'get_info_usernet_hostfwd_port',
> >   'kvm_available',
> >   'list_accel',
> > +'list_feature',
> >   'tcg_available',
> >   )
> >
> > diff --git a/python/qemu/utils/accel.py b/python/qemu/utils/accel.py
> > index 297933df2a..b5bb80c6d3 100644
> > --- a/python/qemu/utils/accel.py
> > +++ b/python/qemu/utils/accel.py
> > @@ -14,13 +14,11 @@
> >   # the COPYING file in the top-level directory.
> >   #
> >
> > -import logging
> >   import os
> > -import subprocess
> >   from typing import List, Optional
> >
> > +from qemu.utils.feature import list_feature
> >
> > -LOG = logging.getLogger(__name__)
> >
> >   # Mapping host architecture to any additional architectures it can
> >   # support which often includes its 32 bit cousin.
> > @@ -39,16 +37,7 @@ def list_accel(qemu_bin: str) -> List[str]:
> >   @raise Exception: if failed to run `qemu -accel help`
> >   @return a list of accelerator names.
> >   """
> > -if not qemu_bin:
> > -return []
> > -try:
> > -out = subprocess.check_output([qemu_bin, '-accel', 'help'],
> > -  universal_newlines=True)
> > -except:
> > -LOG.debug("Failed to get the list of accelerators in %s", qemu_bin)
> > -raise
> > -# Skip the first line which is the header.
> > -return [acc.strip() for acc in out.splitlines()[1:]]
> > +return list_feature(qemu_bin, 'accel')
> >
> >
> >   def kvm_available(target_arch: Optional[str] = None,
> > diff --git a/python/qemu/utils/feature.py b/python/qemu/utils/feature.py
> > new file mode 100644
> > index 00..b4a5f929ab
> > --- /dev/null
> > +++ b/python/qemu/utils/feature.py
> > @@ -0,0 +1,44 @@
> > +"""
> > +QEMU feature module:
> > +
> > +This module provides a utility for discovering the availability of
> > +generic features.
> > +"""
> > +# Copyright (C) 2022 Red Hat Inc.
> Cleber, please, tell me what the future is like! :)
>

I grabbed a sports almanac.  That's all I can say. :)

Now seriously, thanks for spotting the typo.


> > +#
> > +# Authors:
> > +#  Cleber Rosa 
> > +#
> > +# This work is licensed under the terms of the GNU GPL, version 2.  See
> > +# the COPYING file in the top-level directory.
> > +#
> > +
> > +import logging
> > +import subprocess
> > +from typing import List
> > +
> > +
> > +LOG = logging.getLogger(__name__)
> > +
> > +
> > +def list_feature(qemu_bin: str, feature: str) -> List[str]:
> > +"""
> > +List available options of the QEMU binary for a given feature type.
> > +
> > +By calling "qemu -$feature help" and parsing its output.
>
> I understand we need a mean to easily cancel the test if given feature
> is not present. However, I'm unsure this generic list_feature() is what
> we need.
>
> The `-accel help` returns a simple list of strings (besides the header,
> which is dismissed). Whereas `-machine help` returns what could be
> parsed as a tuple of (name, description).
>
> Another example is `-cpu help` which will print a similar list as
> `-machine`, plus a section with CPUID flags.
>
>
I made sure it worked with both "accel" and "machine", but you're right
about many other "-$feature help" outputs that won't conform to the mapping
("-chardev help" is probably the only other one that should work).  What I
thought about was to keep the same list_feature(), but make its parsing of
items flexible.  Then it could be reused for other list_$feature()-like
methods.  At the same time, it could be an opportunity to standardize a bit
more of the "help" outputs.
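A flexible parser along those lines could be sketched as follows — a rough illustration only, with the helper name and the sample "-machine help"-style output invented for the example, not taken from qemu.utils:

```python
from typing import List, Tuple, Union

def parse_feature_help(output: str, with_description: bool = False
                       ) -> Union[List[str], List[Tuple[str, str]]]:
    """Parse 'qemu -$feature help'-style output.

    Skips the header line and stops at the first blank line, so trailing
    sections (e.g. the CPUID flags of '-cpu help') would be ignored.
    """
    items = []
    for line in output.splitlines()[1:]:
        line = line.strip()
        if not line:
            break
        name, _, desc = line.partition(" ")
        items.append((name, desc.strip()) if with_description else name)
    return items

# Hypothetical '-machine help'-style output for illustration:
SAMPLE = ("Supported machines are:\n"
          "virt  QEMU ARM Virtual Machine\n"
          "none  empty machine\n")
```

A bare-name list covers the "-accel help" case, while the (name, description) pairs cover the "-machine help" shape.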

For instance, I think it would make sense for "cpu" to keep showing the
amount of information it shows, but:

1) The first item could be the name of the relevant "option" (the cpu
model) for that feature (cpu), instead of, say, "x86". Basically reversing
the order of first 

Re: [PATCH 28/55] target/arm: Implement MVE VMULL

2021-06-08 Thread Richard Henderson

On 6/7/21 9:57 AM, Peter Maydell wrote:

Implement the MVE VMULL insn, which multiplies two single
width integer elements to produce a double width result.

Signed-off-by: Peter Maydell
---
  target/arm/helper-mve.h| 14 ++
  target/arm/mve.decode  |  5 +
  target/arm/mve_helper.c| 35 +++
  target/arm/translate-mve.c |  4 
  4 files changed, 58 insertions(+)


Reviewed-by: Richard Henderson 

r~



[PATCH v2 2/3] hw/timer: Initial commit of Ibex Timer

2021-06-08 Thread Alistair Francis
Add support for the Ibex timer. This is used with the RISC-V
mtime/mtimecmp similar to the SiFive CLINT.

We currently don't support changing the prescale or the timervalue.

Signed-off-by: Alistair Francis 
---
 include/hw/timer/ibex_timer.h |  52 ++
 hw/timer/ibex_timer.c | 305 ++
 MAINTAINERS   |   6 +-
 hw/timer/meson.build  |   1 +
 4 files changed, 360 insertions(+), 4 deletions(-)
 create mode 100644 include/hw/timer/ibex_timer.h
 create mode 100644 hw/timer/ibex_timer.c

diff --git a/include/hw/timer/ibex_timer.h b/include/hw/timer/ibex_timer.h
new file mode 100644
index 00..6a43537003
--- /dev/null
+++ b/include/hw/timer/ibex_timer.h
@@ -0,0 +1,52 @@
+/*
+ * QEMU lowRISC Ibex Timer device
+ *
+ * Copyright (c) 2021 Western Digital
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+
+#ifndef HW_IBEX_TIMER_H
+#define HW_IBEX_TIMER_H
+
+#include "hw/sysbus.h"
+
+#define TYPE_IBEX_TIMER "ibex-timer"
+OBJECT_DECLARE_SIMPLE_TYPE(IbexTimerState, IBEX_TIMER)
+
+struct IbexTimerState {
+/*< private >*/
+SysBusDevice parent_obj;
+
+/*< public >*/
+MemoryRegion mmio;
+
+uint32_t timer_ctrl;
+uint32_t timer_cfg0;
+uint32_t timer_compare_lower0;
+uint32_t timer_compare_upper0;
+uint32_t timer_intr_enable;
+uint32_t timer_intr_state;
+uint32_t timer_intr_test;
+
+uint32_t timebase_freq;
+
+qemu_irq irq;
+};
+#endif /* HW_IBEX_TIMER_H */
diff --git a/hw/timer/ibex_timer.c b/hw/timer/ibex_timer.c
new file mode 100644
index 00..4d55eb5088
--- /dev/null
+++ b/hw/timer/ibex_timer.c
@@ -0,0 +1,305 @@
+/*
+ * QEMU lowRISC Ibex Timer device
+ *
+ * Copyright (c) 2021 Western Digital
+ *
+ * For details check the documentation here:
+ * https://docs.opentitan.org/hw/ip/rv_timer/doc/
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/log.h"
+#include "qemu/timer.h"
+#include "hw/timer/ibex_timer.h"
+#include "hw/irq.h"
+#include "hw/qdev-properties.h"
+#include "target/riscv/cpu.h"
+#include "migration/vmstate.h"
+
+REG32(CTRL, 0x00)
+FIELD(CTRL, ACTIVE, 0, 1)
+REG32(CFG0, 0x100)
+FIELD(CFG0, PRESCALE, 0, 12)
+FIELD(CFG0, STEP, 16, 8)
+REG32(LOWER0, 0x104)
+REG32(UPPER0, 0x108)
+REG32(COMPARE_LOWER0, 0x10C)
+REG32(COMPARE_UPPER0, 0x110)
+REG32(INTR_ENABLE, 0x114)
+FIELD(INTR_ENABLE, IE_0, 0, 1)
+REG32(INTR_STATE, 0x118)
+FIELD(INTR_STATE, IS_0, 0, 1)
+REG32(INTR_TEST, 0x11C)
+FIELD(INTR_TEST, T_0, 0, 1)
+
+static uint64_t cpu_riscv_read_rtc(uint32_t timebase_freq)
+{
+return muldiv64(qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL),
+timebase_freq, NANOSECONDS_PER_SECOND);
+}
+
+static void ibex_timer_update_irqs(IbexTimerState *s)
+{
+CPUState *cs = qemu_get_cpu(0);
+RISCVCPU *cpu = RISCV_CPU(cs);
+uint64_t value = s->timer_compare_lower0 |
+  

[PATCH v2 3/3] hw/riscv: OpenTitan: Connect the mtime and mtimecmp timer

2021-06-08 Thread Alistair Francis
Connect the Ibex timer to the OpenTitan machine. The timer can trigger
the RISC-V MIE interrupt as well as a custom device interrupt.

Signed-off-by: Alistair Francis 
---
 include/hw/riscv/opentitan.h |  5 -
 hw/riscv/opentitan.c | 14 +++---
 2 files changed, 15 insertions(+), 4 deletions(-)

diff --git a/include/hw/riscv/opentitan.h b/include/hw/riscv/opentitan.h
index aab9bc9245..86cceef698 100644
--- a/include/hw/riscv/opentitan.h
+++ b/include/hw/riscv/opentitan.h
@@ -22,6 +22,7 @@
 #include "hw/riscv/riscv_hart.h"
 #include "hw/intc/ibex_plic.h"
 #include "hw/char/ibex_uart.h"
+#include "hw/timer/ibex_timer.h"
 #include "qom/object.h"
 
 #define TYPE_RISCV_IBEX_SOC "riscv.lowrisc.ibex.soc"
@@ -35,6 +36,7 @@ struct LowRISCIbexSoCState {
 RISCVHartArrayState cpus;
 IbexPlicState plic;
 IbexUartState uart;
+IbexTimerState timer;
 
 MemoryRegion flash_mem;
 MemoryRegion rom;
@@ -57,7 +59,7 @@ enum {
 IBEX_DEV_SPI,
 IBEX_DEV_I2C,
 IBEX_DEV_PATTGEN,
-IBEX_DEV_RV_TIMER,
+IBEX_DEV_TIMER,
 IBEX_DEV_SENSOR_CTRL,
 IBEX_DEV_OTP_CTRL,
 IBEX_DEV_PWRMGR,
@@ -82,6 +84,7 @@ enum {
 };
 
 enum {
+IBEX_TIMER_TIMEREXPIRED0_0 = 125,
 IBEX_UART0_RX_PARITY_ERR_IRQ = 8,
 IBEX_UART0_RX_TIMEOUT_IRQ = 7,
 IBEX_UART0_RX_BREAK_ERR_IRQ = 6,
diff --git a/hw/riscv/opentitan.c b/hw/riscv/opentitan.c
index 7545dcda9c..c5a7e3bacb 100644
--- a/hw/riscv/opentitan.c
+++ b/hw/riscv/opentitan.c
@@ -36,7 +36,7 @@ static const MemMapEntry ibex_memmap[] = {
 [IBEX_DEV_SPI] ={  0x4005,  0x1000  },
 [IBEX_DEV_I2C] ={  0x4008,  0x1000  },
 [IBEX_DEV_PATTGEN] ={  0x400e,  0x1000  },
-[IBEX_DEV_RV_TIMER] =   {  0x4010,  0x1000  },
+[IBEX_DEV_TIMER] =  {  0x4010,  0x1000  },
 [IBEX_DEV_SENSOR_CTRL] ={  0x4011,  0x1000  },
 [IBEX_DEV_OTP_CTRL] =   {  0x4013,  0x4000  },
 [IBEX_DEV_PWRMGR] = {  0x4040,  0x1000  },
@@ -106,6 +106,8 @@ static void lowrisc_ibex_soc_init(Object *obj)
object_initialize_child(obj, "plic", &s->plic, TYPE_IBEX_PLIC);

object_initialize_child(obj, "uart", &s->uart, TYPE_IBEX_UART);
+
+object_initialize_child(obj, "timer", &s->timer, TYPE_IBEX_TIMER);
 }
 
 static void lowrisc_ibex_soc_realize(DeviceState *dev_soc, Error **errp)
@@ -159,6 +161,14 @@ static void lowrisc_ibex_soc_realize(DeviceState *dev_soc, Error **errp)
   3, qdev_get_gpio_in(DEVICE(&s->plic),
   IBEX_UART0_RX_OVERFLOW_IRQ));

+if (!sysbus_realize(SYS_BUS_DEVICE(&s->timer), errp)) {
+return;
+}
+sysbus_mmio_map(SYS_BUS_DEVICE(&s->timer), 0, memmap[IBEX_DEV_TIMER].base);
+sysbus_connect_irq(SYS_BUS_DEVICE(&s->timer),
+   0, qdev_get_gpio_in(DEVICE(&s->plic),
+   IBEX_TIMER_TIMEREXPIRED0_0));
+
 create_unimplemented_device("riscv.lowrisc.ibex.gpio",
 memmap[IBEX_DEV_GPIO].base, memmap[IBEX_DEV_GPIO].size);
 create_unimplemented_device("riscv.lowrisc.ibex.spi",
@@ -167,8 +177,6 @@ static void lowrisc_ibex_soc_realize(DeviceState *dev_soc, Error **errp)
 memmap[IBEX_DEV_I2C].base, memmap[IBEX_DEV_I2C].size);
 create_unimplemented_device("riscv.lowrisc.ibex.pattgen",
 memmap[IBEX_DEV_PATTGEN].base, memmap[IBEX_DEV_PATTGEN].size);
-create_unimplemented_device("riscv.lowrisc.ibex.rv_timer",
-memmap[IBEX_DEV_RV_TIMER].base, memmap[IBEX_DEV_RV_TIMER].size);
 create_unimplemented_device("riscv.lowrisc.ibex.sensor_ctrl",
 memmap[IBEX_DEV_SENSOR_CTRL].base, memmap[IBEX_DEV_SENSOR_CTRL].size);
 create_unimplemented_device("riscv.lowrisc.ibex.otp_ctrl",
-- 
2.31.1




[PATCH v2 1/3] hw/char/ibex_uart: Make the register layout private

2021-06-08 Thread Alistair Francis
We don't need to expose the register layout in the public header, so
don't.

Signed-off-by: Alistair Francis 
Reviewed-by: Bin Meng 
---
 include/hw/char/ibex_uart.h | 37 -
 hw/char/ibex_uart.c | 37 +
 2 files changed, 37 insertions(+), 37 deletions(-)

diff --git a/include/hw/char/ibex_uart.h b/include/hw/char/ibex_uart.h
index 546f958eb8..a39985516a 100644
--- a/include/hw/char/ibex_uart.h
+++ b/include/hw/char/ibex_uart.h
@@ -31,43 +31,6 @@
 #include "qemu/timer.h"
 #include "qom/object.h"
 
-REG32(INTR_STATE, 0x00)
-FIELD(INTR_STATE, TX_WATERMARK, 0, 1)
-FIELD(INTR_STATE, RX_WATERMARK, 1, 1)
-FIELD(INTR_STATE, TX_EMPTY, 2, 1)
-FIELD(INTR_STATE, RX_OVERFLOW, 3, 1)
-REG32(INTR_ENABLE, 0x04)
-REG32(INTR_TEST, 0x08)
-REG32(CTRL, 0x0C)
-FIELD(CTRL, TX_ENABLE, 0, 1)
-FIELD(CTRL, RX_ENABLE, 1, 1)
-FIELD(CTRL, NF, 2, 1)
-FIELD(CTRL, SLPBK, 4, 1)
-FIELD(CTRL, LLPBK, 5, 1)
-FIELD(CTRL, PARITY_EN, 6, 1)
-FIELD(CTRL, PARITY_ODD, 7, 1)
-FIELD(CTRL, RXBLVL, 8, 2)
-FIELD(CTRL, NCO, 16, 16)
-REG32(STATUS, 0x10)
-FIELD(STATUS, TXFULL, 0, 1)
-FIELD(STATUS, RXFULL, 1, 1)
-FIELD(STATUS, TXEMPTY, 2, 1)
-FIELD(STATUS, RXIDLE, 4, 1)
-FIELD(STATUS, RXEMPTY, 5, 1)
-REG32(RDATA, 0x14)
-REG32(WDATA, 0x18)
-REG32(FIFO_CTRL, 0x1c)
-FIELD(FIFO_CTRL, RXRST, 0, 1)
-FIELD(FIFO_CTRL, TXRST, 1, 1)
-FIELD(FIFO_CTRL, RXILVL, 2, 3)
-FIELD(FIFO_CTRL, TXILVL, 5, 2)
-REG32(FIFO_STATUS, 0x20)
-FIELD(FIFO_STATUS, TXLVL, 0, 5)
-FIELD(FIFO_STATUS, RXLVL, 16, 5)
-REG32(OVRD, 0x24)
-REG32(VAL, 0x28)
-REG32(TIMEOUT_CTRL, 0x2c)
-
 #define IBEX_UART_TX_FIFO_SIZE 16
 #define IBEX_UART_CLOCK 50000000 /* 50MHz clock */
 
diff --git a/hw/char/ibex_uart.c b/hw/char/ibex_uart.c
index 73b8f2e45b..fe4b6c3c9e 100644
--- a/hw/char/ibex_uart.c
+++ b/hw/char/ibex_uart.c
@@ -35,6 +35,43 @@
 #include "qemu/log.h"
 #include "qemu/module.h"
 
+REG32(INTR_STATE, 0x00)
+FIELD(INTR_STATE, TX_WATERMARK, 0, 1)
+FIELD(INTR_STATE, RX_WATERMARK, 1, 1)
+FIELD(INTR_STATE, TX_EMPTY, 2, 1)
+FIELD(INTR_STATE, RX_OVERFLOW, 3, 1)
+REG32(INTR_ENABLE, 0x04)
+REG32(INTR_TEST, 0x08)
+REG32(CTRL, 0x0C)
+FIELD(CTRL, TX_ENABLE, 0, 1)
+FIELD(CTRL, RX_ENABLE, 1, 1)
+FIELD(CTRL, NF, 2, 1)
+FIELD(CTRL, SLPBK, 4, 1)
+FIELD(CTRL, LLPBK, 5, 1)
+FIELD(CTRL, PARITY_EN, 6, 1)
+FIELD(CTRL, PARITY_ODD, 7, 1)
+FIELD(CTRL, RXBLVL, 8, 2)
+FIELD(CTRL, NCO, 16, 16)
+REG32(STATUS, 0x10)
+FIELD(STATUS, TXFULL, 0, 1)
+FIELD(STATUS, RXFULL, 1, 1)
+FIELD(STATUS, TXEMPTY, 2, 1)
+FIELD(STATUS, RXIDLE, 4, 1)
+FIELD(STATUS, RXEMPTY, 5, 1)
+REG32(RDATA, 0x14)
+REG32(WDATA, 0x18)
+REG32(FIFO_CTRL, 0x1c)
+FIELD(FIFO_CTRL, RXRST, 0, 1)
+FIELD(FIFO_CTRL, TXRST, 1, 1)
+FIELD(FIFO_CTRL, RXILVL, 2, 3)
+FIELD(FIFO_CTRL, TXILVL, 5, 2)
+REG32(FIFO_STATUS, 0x20)
+FIELD(FIFO_STATUS, TXLVL, 0, 5)
+FIELD(FIFO_STATUS, RXLVL, 16, 5)
+REG32(OVRD, 0x24)
+REG32(VAL, 0x28)
+REG32(TIMEOUT_CTRL, 0x2c)
+
 static void ibex_uart_update_irqs(IbexUartState *s)
 {
 if (s->uart_intr_state & s->uart_intr_enable & R_INTR_STATE_TX_WATERMARK_MASK) {
-- 
2.31.1




[PATCH v2 0/3] hw/riscv: OpenTitan: Add support for the RISC-V timer

2021-06-08 Thread Alistair Francis
v2:
 - Address review comments


Alistair Francis (3):
  hw/char/ibex_uart: Make the register layout private
  hw/timer: Initial commit of Ibex Timer
  hw/riscv: OpenTitan: Connect the mtime and mtimecmp timer

 include/hw/char/ibex_uart.h   |  37 -
 include/hw/riscv/opentitan.h  |   5 +-
 include/hw/timer/ibex_timer.h |  52 ++
 hw/char/ibex_uart.c   |  37 +
 hw/riscv/opentitan.c  |  14 +-
 hw/timer/ibex_timer.c | 305 ++
 MAINTAINERS   |   6 +-
 hw/timer/meson.build  |   1 +
 8 files changed, 412 insertions(+), 45 deletions(-)
 create mode 100644 include/hw/timer/ibex_timer.h
 create mode 100644 hw/timer/ibex_timer.c

-- 
2.31.1




Re: [PATCH 27/55] target/arm: Implement MVE VHADD, VHSUB

2021-06-08 Thread Richard Henderson

On 6/7/21 9:57 AM, Peter Maydell wrote:

Implement MVE VHADD and VHSUB insns, which perform an addition
or subtraction and then halve the result.

Signed-off-by: Peter Maydell 
---
  target/arm/helper-mve.h| 14 ++
  target/arm/mve.decode  |  5 +
  target/arm/mve_helper.c| 25 +
  target/arm/translate-mve.c |  4 
  4 files changed, 48 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index bfe2057592f..7b22990c3ba 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -118,3 +118,17 @@ DEF_HELPER_FLAGS_4(mve_vabdsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
  DEF_HELPER_FLAGS_4(mve_vabdub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
  DEF_HELPER_FLAGS_4(mve_vabduh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
  DEF_HELPER_FLAGS_4(mve_vabduw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vhaddsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vhaddsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vhaddsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vhaddub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vhadduh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vhadduw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vhsubsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vhsubsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vhsubsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vhsubub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vhsubuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vhsubuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index 087d3db2a31..241d1c44c19 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
VMIN_U   111 1  0 . .. ... 0 ... 0 0110 . 1 . 1 ... 0 @2op
  VABD_S   111 0  0 . .. ... 0 ... 0 0111 . 1 . 0 ... 0 @2op
  VABD_U   111 1  0 . .. ... 0 ... 0 0111 . 1 . 0 ... 0 @2op
  
+VHADD_S  111 0  0 . .. ... 0 ... 0  . 1 . 0 ... 0 @2op
+VHADD_U  111 1  0 . .. ... 0 ... 0  . 1 . 0 ... 0 @2op
+VHSUB_S  111 0  0 . .. ... 0 ... 0 0010 . 1 . 0 ... 0 @2op
+VHSUB_U  111 1  0 . .. ... 0 ... 0 0010 . 1 . 0 ... 0 @2op
+
  # Vector miscellaneous
  
  VCLS   1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op

diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index f026a9969d6..5982f6bf5eb 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -415,3 +415,28 @@ DO_2OP_U(vminu, DO_MIN)
  
  DO_2OP_S(vabds, DO_ABD)

  DO_2OP_U(vabdu, DO_ABD)
+
+static inline uint32_t do_vhadd_u(uint32_t n, uint32_t m)
+{
+return ((uint64_t)n + m) >> 1;
+}
+
+static inline int32_t do_vhadd_s(int32_t n, int32_t m)
+{
+return ((int64_t)n + m) >> 1;
+}
+
+static inline uint32_t do_vhsub_u(uint32_t n, uint32_t m)
+{
+return ((uint64_t)n - m) >> 1;
+}
+
+static inline int32_t do_vhsub_s(int32_t n, int32_t m)
+{
+return ((int64_t)n - m) >> 1;
+}


Use 64-bit inputs and you don't need to replicate these for signed/unsigned. But either way,


Reviewed-by: Richard Henderson 

r~



Re: [PATCH 26/55] target/arm: Implement MVE VABD

2021-06-08 Thread Richard Henderson

On 6/7/21 9:57 AM, Peter Maydell wrote:

Implement the MVE VABD insn.

Signed-off-by: Peter Maydell
---
  target/arm/helper-mve.h| 7 +++
  target/arm/mve.decode  | 3 +++
  target/arm/mve_helper.c| 5 +
  target/arm/translate-mve.c | 2 ++
  4 files changed, 17 insertions(+)


Reviewed-by: Richard Henderson 

r~



Re: [PATCH 25/55] target/arm: Implement MVE VMAX, VMIN

2021-06-08 Thread Richard Henderson

On 6/7/21 9:57 AM, Peter Maydell wrote:

Implement the MVE VMAX and VMIN insns.

Signed-off-by: Peter Maydell
---
  target/arm/helper-mve.h| 14 ++
  target/arm/mve.decode  |  5 +
  target/arm/mve_helper.c| 14 ++
  target/arm/translate-mve.c |  4 
  4 files changed, 37 insertions(+)


Reviewed-by: Richard Henderson 

r~



Re: [PATCH 24/55] target/arm: Implement MVE VRMULH

2021-06-08 Thread Richard Henderson

On 6/7/21 9:57 AM, Peter Maydell wrote:

Implement the MVE VRMULH insn, which performs a rounding multiply
and then returns the high half.

Signed-off-by: Peter Maydell
---
  target/arm/helper-mve.h|  7 +++
  target/arm/mve.decode  |  3 +++
  target/arm/mve_helper.c| 22 ++
  target/arm/translate-mve.c |  2 ++
  4 files changed, 34 insertions(+)


Reviewed-by: Richard Henderson 

r~



Re: [PATCH 23/55] target/arm: Implement MVE VMULH

2021-06-08 Thread Richard Henderson

On 6/7/21 9:57 AM, Peter Maydell wrote:

Implement the MVE VMULH insn, which performs a vector
multiply and returns the high half of the result.

Signed-off-by: Peter Maydell
---
  target/arm/helper-mve.h|  7 +++
  target/arm/mve.decode  |  3 +++
  target/arm/mve_helper.c| 26 ++
  target/arm/translate-mve.c |  2 ++
  4 files changed, 38 insertions(+)


Reviewed-by: Richard Henderson 

r~



Re: [PATCH 22/55] target/arm: Implement MVE VADD, VSUB, VMUL

2021-06-08 Thread Richard Henderson

On 6/7/21 9:57 AM, Peter Maydell wrote:

+#define DO_2OP(INSN, FN) \
+static bool trans_##INSN(DisasContext *s, arg_2op *a)   \
+{   \
+MVEGenTwoOpFn *fns[] = {\


static const, otherwise,
Reviewed-by: Richard Henderson 

r~



Re: [PATCH 21/55] target/arm: Implement MVE VAND, VBIC, VORR, VORN, VEOR

2021-06-08 Thread Richard Henderson

On 6/7/21 9:57 AM, Peter Maydell wrote:

+DO_2OP(vand, 1, uint8_t, H1, DO_AND)
+DO_2OP(vbic, 1, uint8_t, H1, DO_BIC)
+DO_2OP(vorr, 1, uint8_t, H1, DO_ORR)
+DO_2OP(vorn, 1, uint8_t, H1, DO_ORN)
+DO_2OP(veor, 1, uint8_t, H1, DO_EOR)


Again, logicals should use uint64_t.

Otherwise,
Reviewed-by: Richard Henderson 


r~



Re: [PATCH 20/55] target/arm: Implement MVE VDUP

2021-06-08 Thread Richard Henderson

On 6/7/21 9:57 AM, Peter Maydell wrote:

+#define DO_VDUP(OP, ESIZE, TYPE, H) \
+void HELPER(mve_##OP)(CPUARMState *env, void *vd, uint32_t val) \
+{   \
+TYPE *d = vd;   \
+uint16_t mask = mve_element_mask(env);  \
+unsigned e; \
+for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) {  \
+uint64_t bytemask = mask_to_bytemask##ESIZE(mask);  \
+d[H(e)] &= ~bytemask;   \
+d[H(e)] |= (val & bytemask);\
+}   \
+mve_advance_vpt(env);   \
+}
+
+DO_VDUP(vdupb, 1, uint8_t, H1)
+DO_VDUP(vduph, 2, uint16_t, H2)
+DO_VDUP(vdupw, 4, uint32_t, H4)


Hmm.  I think the masking should be done at either uint32_t or uint64_t.  Doing 
it byte-by-byte is wasteful.


Whether you want to do the replication in tcg (I can export gen_dup_i32 from 
tcg-op-gvec.c) and have one helper, or do the replication here e.g.


static void do_vdup(CPUARMState *env, void *vd, uint64_t val);
void helper(mve_vdupb)(CPUARMState *env, void *vd, uint32_t val)
{
do_vdup(env, vd, dup_const(MO_8, val));
}


r~



[Bug 1323758] Re: Mouse stops working when connected usb-storage-device

2021-06-08 Thread Kendrick
I have used rel 7.x 8.x ubuntu 18.04 and see this happening in all of
them the original user had it only in ubuntu.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1323758

Title:
  Mouse stops working when connected usb-storage-device

Status in QEMU:
  Expired
Status in qemu package in Ubuntu:
  Confirmed

Bug description:
  I'm running a guest that has Windows 8 Pro (x64) installed. Every time
  I pass through a usb storage device from the host to the guest, the
  mouse stops working in the vnc client. When I remove the usb-device
  the mouse works again.

  The mouse only stops working when I pass through a usb storage device
  and then make the vlc viewer (client) inactive by clicking on another
  program on the local computer (where I'm running the vnc viewer
  (client)). As long as I keep the vnc viewer active, the mouse works
  without any problems. But as soon as I make the vnc viewer inactive
  and then active again, the mouse will no longer work. I have to reboot
  the guest or remove the usb storage device.

  I can't find any related problems on the internet, so it may be just
  me?

  I hope someone can help me with this.

  EDIT: I posted the extra/new information in comments. But as I know
  see it might be wrong and maybe I should've posted them in this bug
  description container (by editing)? Please tell me if I did it wrong
  and I will change it. Sorry for this misunderstanding.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1323758/+subscriptions



Re: [PATCH 19/55] target/arm: Implement MVE VNEG

2021-06-08 Thread Richard Henderson

On 6/7/21 9:57 AM, Peter Maydell wrote:

+#define DO_NEG(N)(-(N))
+#define DO_FNEG(N)((N) ^ ~((__typeof(N))-1 >> 1))
+
+DO_1OP(vnegb, 1, int8_t, H1, DO_NEG)
+DO_1OP(vnegh, 2, int16_t, H2, DO_NEG)
+DO_1OP(vnegw, 4, int32_t, H4, DO_NEG)
+
+DO_1OP(vfnegh, 2, uint16_t, H2, DO_FNEG)
+DO_1OP(vfnegs, 4, uint32_t, H4, DO_FNEG)


Similar comments to abs.  Otherwise,
Reviewed-by: Richard Henderson 


r~



Re: [PATCH 18/55] target/arm: Implement MVE VABS

2021-06-08 Thread Richard Henderson

On 6/7/21 9:57 AM, Peter Maydell wrote:

+DO_1OP(vfabsh, 2, uint16_t, H2, DO_FABS)
+DO_1OP(vfabss, 4, uint32_t, H4, DO_FABS)


Could just as plausibly be done on uint64_t.

#define DO_FABSH(N)  ((N) & dup_const(MO_16, 0x7fff))
#define DO_FABSS(N)  ((N) & dup_const(MO_32, 0x7fff))


+MVEGenOneOpFn *fns[] = {


static const


r~



Re: [PATCH 17/55] target/arm: Implement MVE VMVN (register)

2021-06-08 Thread Richard Henderson

On 6/7/21 9:57 AM, Peter Maydell wrote:

+DO_1OP(vmvn, 1, uint8_t, H1, DO_NOT)


This is a logical operation; you might as well perform in uint64_t.

Otherwise,
Reviewed-by: Richard Henderson 


r~



Re: [PATCH 16/55] target/arm: Implement MVE VREV16, VREV32, VREV64

2021-06-08 Thread Richard Henderson

On 6/7/21 9:57 AM, Peter Maydell wrote:

+static uint64_t mask_to_bytemask8(uint16_t mask)
+{
+return mask_to_bytemask4(mask) |
+((uint64_t)mask_to_bytemask4(mask >> 4) << 32);
+}


Again, suggest to share the array from expand_pred_b.


+DO_1OP(vrev16b, 2, uint16_t, H2, bswap16)
+DO_1OP(vrev32b, 4, uint32_t, H4, bswap32)
+DO_1OP(vrev32h, 4, uint32_t, H4, hswap32)
+DO_1OP(vrev64b, 8, uint64_t, , bswap64)
+DO_1OP(vrev64h, 8, uint64_t, , hswap64)
+DO_1OP(vrev64w, 8, uint64_t, , wswap64)


I've started to wonder if we shouldn't add a no-op H8, just so we don't have 
the empty argument for checkpatch to complain about.


And in this particular case I suppose we could H##ESIZE, which would then 
negate my earlier suggestion for using sizeof.



+MVEGenOneOpFn *fns[] = {


static const, etc.


r~



Re: [PATCH 15/55] bitops.h: Provide hswap32(), hswap64(), wswap64() swapping operations

2021-06-08 Thread Richard Henderson

On 6/7/21 9:57 AM, Peter Maydell wrote:

Currently the ARM SVE helper code defines locally some utility
functions for swapping 16-bit halfwords within 32-bit or 64-bit
values and for swapping 32-bit words within 64-bit values,
parallel to the byte-swapping bswap16/32/64 functions.

We want these also for the ARM MVE code, and they're potentially
generally useful for other targets, so move them to bitops.h.
(We don't put them in bswap.h with the bswap* functions because
they are implemented in terms of the rotate operations also
defined in bitops.h, and including bitops.h from bswap.h seems
better avoided.)

Signed-off-by: Peter Maydell
---
  include/qemu/bitops.h   | 29 +
  target/arm/sve_helper.c | 20 
  2 files changed, 29 insertions(+), 20 deletions(-)


Reviewed-by: Richard Henderson 

r~



Re: [PATCH 14/55] target/arm: Implement MVE VCLS

2021-06-08 Thread Richard Henderson

On 6/7/21 9:57 AM, Peter Maydell wrote:

Implement the MVE VCLS insn.

Signed-off-by: Peter Maydell
---
  target/arm/helper-mve.h| 4 
  target/arm/mve.decode  | 1 +
  target/arm/mve_helper.c| 7 +++
  target/arm/translate-mve.c | 1 +
  4 files changed, 13 insertions(+)


Reviewed-by: Richard Henderson 

r~



Re: [PATCH 13/55] target/arm: Implement MVE VCLZ

2021-06-08 Thread Richard Henderson

On 6/7/21 9:57 AM, Peter Maydell wrote:

Implement the MVE VCLZ insn (and the necessary machinery
for MVE 1-input vector ops).

Note that for non-load instructions predication is always performed
at a byte level granularity regardless of element size (R_ZLSJ),
and so the masking logic here differs from that used in the VLDR
and VSTR helpers.

Signed-off-by: Peter Maydell 
---
  target/arm/helper-mve.h|  4 
  target/arm/mve.decode  |  8 +++
  target/arm/mve_helper.c| 48 ++
  target/arm/translate-mve.c | 43 ++
  4 files changed, 103 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index e47d4164ae7..c5c1315b161 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -32,3 +32,7 @@ DEF_HELPER_FLAGS_3(mve_vldrh_uw, TCG_CALL_NO_WG, void, env, ptr, i32)
  DEF_HELPER_FLAGS_3(mve_vstrb_h, TCG_CALL_NO_WG, void, env, ptr, i32)
  DEF_HELPER_FLAGS_3(mve_vstrb_w, TCG_CALL_NO_WG, void, env, ptr, i32)
  DEF_HELPER_FLAGS_3(mve_vstrh_w, TCG_CALL_NO_WG, void, env, ptr, i32)
+
+DEF_HELPER_FLAGS_3(mve_vclzb, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vclzh, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vclzw, TCG_CALL_NO_WG, void, env, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index 3bc5f034531..24999bf703e 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -20,13 +20,17 @@
  #
  
  %qd 22:1 13:3

+%qm 5:1 1:3
  
&vldr_vstr rn qd imm p a w size l u

+&1op qd qm size
  
@vldr_vstr ... . . . . l:1 rn:4 ... .. imm:7 &vldr_vstr qd=%qd u=0

  # Note that both Rn and Qd are 3 bits only (no D bit)
@vldst_wn ... u:1 ... . . . . l:1 . rn:3 qd:3 . ... .. imm:7 &vldr_vstr
  
+@1op    size:2 ..     &1op qd=%qd qm=%qm

+
  # Vector loads and stores
  
  # Widening loads and narrowing stores:

@@ -61,3 +65,7 @@ VLDR_VSTR1110110 1 a:1 . w:1 .  ... 01 ...   @vldr_vstr \
 size=1 p=1
  VLDR_VSTR1110110 1 a:1 . w:1 .  ... 10 ...   @vldr_vstr \
   size=2 p=1
+
+# Vector miscellaneous
+
+VCLZ   1 . 11 .. 00 ... 0 0100 11 . 0 ... 0 @1op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index 6a2fc1c37cd..b7c44f57c09 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -196,3 +196,51 @@ DO_VSTR(vstrh_w, 4, stw, int32_t, H4)
  
  #undef DO_VLDR

  #undef DO_VSTR
+
+/*
+ * Take the bottom bits of mask (which is 1 bit per lane) and
+ * convert to a mask which has 1s in each byte which is predicated.
+ */
+static uint8_t mask_to_bytemask1(uint16_t mask)
+{
+return (mask & 1) ? 0xff : 0;
+}
+
+static uint16_t mask_to_bytemask2(uint16_t mask)
+{
+static const uint16_t masks[] = { 0x0000, 0x00ff, 0xff00, 0xffff };
+return masks[mask & 3];
+}
+
+static uint32_t mask_to_bytemask4(uint16_t mask)
+{
+static const uint32_t masks[] = {
+0x00000000, 0x000000ff, 0x0000ff00, 0x0000ffff,
+0x00ff0000, 0x00ff00ff, 0x00ffff00, 0x00ffffff,
+0xff000000, 0xff0000ff, 0xff00ff00, 0xff00ffff,
+0xffff0000, 0xffff00ff, 0xffffff00, 0xffffffff,
+};


I'll note that

(1) the values for the mask_to_bytemask2 array overlap the first 4 values of 
the mask_to_bytemask4 array, and


(2) both of these overlap with the larger

static inline uint64_t expand_pred_b(uint8_t byte)

from SVE.  It'd be nice to share the storage, whatever the actual functional 
interface into the array.



+#define DO_1OP(OP, ESIZE, TYPE, H, FN)  \
+void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm) \
+{   \
+TYPE *d = vd, *m = vm;  \
+uint16_t mask = mve_element_mask(env);  \
+unsigned e; \
+for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) {  \
+TYPE r = FN(m[H(e)]);   \
+uint64_t bytemask = mask_to_bytemask##ESIZE(mask);  \


Why uint64_t and not TYPE?  Or uint32_t?


+if (!mve_eci_check(s)) {
+return true;
+}
+
+if (!vfp_access_check(s)) {
+return true;
+}


Not the first instance, but is it worth saving 4 lines per and combining these 
into one IF?



+#define DO_1OP(INSN, FN)\
+static bool trans_##INSN(DisasContext *s, arg_1op *a)   \
+{   \
+MVEGenOneOpFn *fns[] = {\


static const.


r~



Re: [PATCH 12/55] target/arm: Implement widening/narrowing MVE VLDR/VSTR insns

2021-06-08 Thread Richard Henderson

On 6/7/21 9:57 AM, Peter Maydell wrote:

+#define DO_VLDST_WIDE_NARROW(OP, SLD, ULD, ST)  \
+static bool trans_##OP(DisasContext *s, arg_VLDR_VSTR *a)   \
+{   \
+MVEGenLdStFn *ldfns[] = {   \
+gen_helper_mve_##SLD,   \
+gen_helper_mve_##ULD,   \
+};  \
+MVEGenLdStFn *stfns[] = {   \
+gen_helper_mve_##ST,\
+NULL,   \
+};  \
+return do_ldst(s, a, a->l ? ldfns[a->u] : stfns[a->u]); \
+}


static const on the arrays, or array, as before.

Otherwise,
Reviewed-by: Richard Henderson 


r~



Re: [PATCH 11/55] target/arm: Implement MVE VLDR/VSTR (non-widening forms)

2021-06-08 Thread Richard Henderson

On 6/8/21 2:33 PM, Richard Henderson wrote:



+static bool trans_VLDR_VSTR(DisasContext *s, arg_VLDR_VSTR *a)
+{
+    MVEGenLdStFn *ldfns[] = {


static MVEGenLdStFn * const ldfns


+    MVEGenLdStFn *stfns[] = {


Likewise, though...


+    return do_ldst(s, a, a->l ? ldfns[a->size] : stfns[a->size]);


... just put em together into a two-dimensional array, with a->l as the second 
index?


... or separate VLDR from VSTR.

r~



Re: [PATCH 2/4] Python QEMU utils: introduce a generic feature list

2021-06-08 Thread Wainer dos Santos Moschetta

Hi,

On 6/8/21 11:09 AM, Cleber Rosa wrote:

Which can be used to check for any "feature" that is available as a
QEMU command line option, and that will return its list of available
options.

This is a generalization of the list_accel() utility function, which
is itself re-implemented in terms of the more generic feature.

Signed-off-by: Cleber Rosa 
---
  python/qemu/utils/__init__.py |  2 ++
  python/qemu/utils/accel.py| 15 ++--
  python/qemu/utils/feature.py  | 44 +++
  3 files changed, 48 insertions(+), 13 deletions(-)
  create mode 100644 python/qemu/utils/feature.py

diff --git a/python/qemu/utils/__init__.py b/python/qemu/utils/__init__.py
index 7f1a5138c4..1d0789eaa2 100644
--- a/python/qemu/utils/__init__.py
+++ b/python/qemu/utils/__init__.py
@@ -20,12 +20,14 @@
  
  # pylint: disable=import-error

  from .accel import kvm_available, list_accel, tcg_available
+from .feature import list_feature
  
  
  __all__ = (

  'get_info_usernet_hostfwd_port',
  'kvm_available',
  'list_accel',
+'list_feature',
  'tcg_available',
  )
  
diff --git a/python/qemu/utils/accel.py b/python/qemu/utils/accel.py

index 297933df2a..b5bb80c6d3 100644
--- a/python/qemu/utils/accel.py
+++ b/python/qemu/utils/accel.py
@@ -14,13 +14,11 @@
  # the COPYING file in the top-level directory.
  #
  
-import logging

  import os
-import subprocess
  from typing import List, Optional
  
+from qemu.utils.feature import list_feature
  
-LOG = logging.getLogger(__name__)
  
  # Mapping host architecture to any additional architectures it can

  # support which often includes its 32 bit cousin.
@@ -39,16 +37,7 @@ def list_accel(qemu_bin: str) -> List[str]:
  @raise Exception: if failed to run `qemu -accel help`
  @return a list of accelerator names.
  """
-if not qemu_bin:
-return []
-try:
-out = subprocess.check_output([qemu_bin, '-accel', 'help'],
-  universal_newlines=True)
-except:
-LOG.debug("Failed to get the list of accelerators in %s", qemu_bin)
-raise
-# Skip the first line which is the header.
-return [acc.strip() for acc in out.splitlines()[1:]]
+return list_feature(qemu_bin, 'accel')
  
  
  def kvm_available(target_arch: Optional[str] = None,

diff --git a/python/qemu/utils/feature.py b/python/qemu/utils/feature.py
new file mode 100644
index 00..b4a5f929ab
--- /dev/null
+++ b/python/qemu/utils/feature.py
@@ -0,0 +1,44 @@
+"""
+QEMU feature module:
+
+This module provides a utility for discovering the availability of
+generic features.
+"""
+# Copyright (C) 2022 Red Hat Inc.

Cleber, please tell me what the future is like! :)

+#
+# Authors:
+#  Cleber Rosa 
+#
+# This work is licensed under the terms of the GNU GPL, version 2.  See
+# the COPYING file in the top-level directory.
+#
+
+import logging
+import subprocess
+from typing import List
+
+
+LOG = logging.getLogger(__name__)
+
+
+def list_feature(qemu_bin: str, feature: str) -> List[str]:
+"""
+List available options in the QEMU binary for a given feature type.
+
+By calling "qemu -$feature help" and parsing its output.


I understand we need a mean to easily cancel the test if given feature 
is not present. However, I'm unsure this generic list_feature() is what 
we need.


The `-accel help` returns a simple list of strings (besides the header, 
which is dismissed). Whereas `-machine help` returns what could be 
parsed as a tuple of (name, description).


Another example is `-cpu help` which will print a similar list as 
`-machine`, plus a section with CPUID flags.


I confess I still don't have a better idea, although I feel it will
require an OO design.


Thanks!

- Wainer


+
+@param qemu_bin (str): path to the QEMU binary.
+@param feature (str): feature name, matching the command line option.
+@raise Exception: if failed to run `qemu -feature help`
+@return a list of available options for the given feature.
+"""
+if not qemu_bin:
+return []
+try:
+out = subprocess.check_output([qemu_bin, '-%s' % feature, 'help'],
+  universal_newlines=True)
+except:
+LOG.debug("Failed to get the list of %s(s) in %s", feature, qemu_bin)
+raise
+# Skip the first line which is the header.
+return [item.split(' ', 1)[0] for item in out.splitlines()[1:]]
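Picking up Wainer's point that `-machine help` output carries descriptions while `-accel help` does not, one possible generalisation is to return (name, description) tuples rather than bare names. The sketch below is a hypothetical alternative, not part of the patch: the function name `parse_feature_help` and the sample output are invented, and it splits on the first run of whitespace rather than the single space the patch uses.

```python
from typing import List, Tuple


def parse_feature_help(output: str) -> List[Tuple[str, str]]:
    """Parse `qemu -<feature> help`-style output into (name, description).

    The first line is assumed to be a header and is skipped.  Each
    remaining line is split on the first run of whitespace, so options
    without a description (as in `-accel help`) get an empty string.
    """
    result = []
    for line in output.splitlines()[1:]:
        if not line.strip():
            continue
        parts = line.split(None, 1)
        result.append((parts[0], parts[1].strip() if len(parts) > 1 else ''))
    return result
```

A caller that only wants names, as list_feature() does today, can then take the first element of each tuple.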





RE: [PATCH v1 3/5] ui/egl: Add egl helpers to help with synchronization

2021-06-08 Thread Kasireddy, Vivek
Hi Gerd,
 
> > +epoxy_has_egl_extension(qemu_egl_display,
> > +"EGL_ANDROID_native_fence_sync")) {
> 
> What about non-android?  Is the name there just for historical reasons?
> Or do we actually need something else for desktop systems?
[Kasireddy, Vivek] It is not specific to Android:
https://www.khronos.org/registry/EGL/extensions/ANDROID/EGL_ANDROID_native_fence_sync.txt

I have been using Linux (Fedora 33 for both Guest and Host) as my
test platform.

> 
> > +void egl_dmabuf_wait_sync(QemuDmaBuf *dmabuf)
> 
> See other mail on blocking wait.  Otherwise looks sane.
> 
> > +static void gd_gl_wait_dmabuf(DisplayChangeListener *dcl,
> > +  QemuDmaBuf *dmabuf)
> 
> separate patch for the gtk bits please.
[Kasireddy, Vivek] Ok, will do.

Thanks,
Vivek

> 
> thanks,
>   Gerd




Re: [PATCH 11/55] target/arm: Implement MVE VLDR/VSTR (non-widening forms)

2021-06-08 Thread Richard Henderson

On 6/7/21 9:57 AM, Peter Maydell wrote:

+static uint16_t mve_element_mask(CPUARMState *env)
+{
+/*
+ * Return the mask of which elements in the MVE vector should be
+ * updated. This is a combination of multiple things:
+ *  (1) by default, we update every lane in the vector
+ *  (2) VPT predication stores its state in the VPR register;
+ *  (3) low-overhead-branch tail predication will mask out part
+ *  of the vector on the final iteration of the loop
+ *  (4) if EPSR.ECI is set then we must execute only some beats
+ *  of the insn
+ * We combine all these into a 16-bit result with the same semantics
+ * as VPR.P0: 0 to mask the lane, 1 if it is active.
+ * 8-bit vector ops will look at all bits of the result;
+ * 16-bit ops will look at bits 0, 2, 4, ...;
+ * 32-bit ops will look at bits 0, 4, 8 and 12.
+ * Compare pseudocode GetCurInstrBeat(), though that only returns
+ * the 4-bit slice of the mask corresponding to a single beat.
+ */
+uint16_t mask = extract32(env->v7m.vpr, R_V7M_VPR_P0_SHIFT,
+  R_V7M_VPR_P0_LENGTH);


Any reason you're not using FIELD_EX32 and and FIELD_DP32 so far in this file?
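The lane numbering in the quoted comment (8-bit ops look at every bit, 16-bit ops at bits 0, 2, 4, ..., 32-bit ops at bits 0, 4, 8, 12) can be cross-checked with a small model. This Python sketch only illustrates the mask semantics; the helper name `active_lanes` is invented here and nothing like it exists in the patch:

```python
def active_lanes(mask: int, esize: int) -> list:
    """Return the indices of active lanes in a 16-byte MVE vector.

    mask has VPR.P0 semantics: one bit per byte, 0 = masked, 1 = active.
    esize is the element size in bytes (1, 2 or 4); lane i is controlled
    by bit i * esize of the mask, matching the comment in
    mve_element_mask().
    """
    nlanes = 16 // esize
    return [i for i in range(nlanes) if mask & (1 << (i * esize))]


# 32-bit ops only consult bits 0, 4, 8 and 12 of the mask:
# active_lanes(0x1011, 4) -> lanes [0, 1, 3]
```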


+#define DO_VLDR(OP, ESIZE, LDTYPE, TYPE, H) \
+void HELPER(mve_##OP)(CPUARMState *env, void *vd, uint32_t addr)\
+{   \
+TYPE *d = vd;   \
+uint16_t mask = mve_element_mask(env);  \
+unsigned b, e;  \


esize is redundant with sizeof(type); perhaps just make it a local variable?


diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index c54d5cb7305..e8bb2372ad9 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -1,6 +1,6 @@
  /*
   *  ARM translation: M-profile MVE instructions
-
+ *
   *  Copyright (c) 2021 Linaro, Ltd.


Is this just diff silliness?  I see that it has decided that helper-mve.h is a 
rename from translate-mve.c...



+static bool do_ldst(DisasContext *s, arg_VLDR_VSTR *a, MVEGenLdStFn *fn)
+{
+TCGv_i32 addr;
+uint32_t offset;
+TCGv_ptr qreg;
+
+if (!dc_isar_feature(aa32_mve, s)) {
+return false;
+}
+
+if (a->qd > 7 || !fn) {
+return false;
+}


It's a funny old decode,

  if D then UNDEFINED.
  d = D:Qd,

Is the spec forward looking to more than 7 Q registers?
It's tempting to just drop the D:Qd from the decode...


+static bool trans_VLDR_VSTR(DisasContext *s, arg_VLDR_VSTR *a)
+{
+MVEGenLdStFn *ldfns[] = {


static MVEGenLdStFn * const ldfns


+MVEGenLdStFn *stfns[] = {


Likewise, though...


+return do_ldst(s, a, a->l ? ldfns[a->size] : stfns[a->size]);


... just put em together into a two-dimensional array, with a->l as the second 
index?



r~



Re: [PATCH] Add Connor Kuehl as reviewer for AMD SEV

2021-06-08 Thread Connor Kuehl
On 6/8/21 3:45 PM, Daniel P. Berrangé wrote:
>> Right, I am just worried that if I am the only person that shows up in
>> the get_maintainer.pl output, the submitter will have to know some other
>> way who a relevant maintainer is that can take the patches otherwise
>> they won't be CC'd. Or we'll have to hope a relevant maintainer sees
>> them on the list. Or I'll have to chase down a maintainer myself
>> assuming the reviews all check out. :-)
> 
> Well there's no real guarantee that any of the previous committers will
> take the patch even if they are listed by get_maintainer. This is typical
> with anything lacking a maintainer assigned. We typically hope that
> whoever runs the "misc" queue sees the patch and picks it up, but often
> it requires pings to remind someone to pick it up.
> 
> The only real right answer here is to actually get someone as the
> nominated maintainer. Every other scenario is just a band-aid and
> is not a good experience for contributors. A nominated reviewer is
> usually hoped to be a stepping stone to someone becoming maintainer
> in future, so in that sense the fact that only you will be cc'd is
> sort of intentional :-)

That makes perfect sense. I'll forge on ahead, then :-)

Thanks!

Connor




Re: [PATCH] Add Connor Kuehl as reviewer for AMD SEV

2021-06-08 Thread Connor Kuehl
On 6/8/21 2:34 PM, Dr. David Alan Gilbert wrote:
>> Note: because there's no maintainer entry, when running
>> ./scripts/get_maintainers.pl on target/i386/sev.c, my name and the qemu
>> mailing list is the only thing that shows up... it doesn't even show
>> previous committers (as it would before applying this patch). Which is
>> probably not great considering I do not make pull requests to QEMU.
>>
>> Is the way forward to get someone to sign up as a maintainer before
>> applying a patch like this?
> 
> If you wanted to do a submaintainer for it and send it to one of the x86
> maintainers rather than having to do full pulls?

I'm not opposed to this. I think I have a few of the right people on CC,
so let's see if they weigh in on this. Unless it means I have to manage
a GPG key again... (just kidding, kind of...)

Connor




Re: [PATCH] Add Connor Kuehl as reviewer for AMD SEV

2021-06-08 Thread Daniel P . Berrangé
On Tue, Jun 08, 2021 at 03:32:54PM -0500, Connor Kuehl wrote:
> On 6/8/21 3:10 PM, Daniel P. Berrangé wrote:
> > On Tue, Jun 08, 2021 at 02:25:37PM -0500, Connor Kuehl wrote:
> >> It may not be appropriate for me to take over as a maintainer at this time,
> >> but I would consider myself familiar with AMD SEV and what this code is
> >> meant to be doing as part of a VMM for launching SEV-protected guests.
> >>
> >> To that end, I would be happy to volunteer as a reviewer for SEV-related
> >> changes so that I am CC'd on them and can help share the review burden with
> >> whoever does maintain this code.
> >>
> >> Signed-off-by: Connor Kuehl 
> >> ---
> >> Note: because there's no maintainer entry, when running
> >> ./scripts/get_maintainers.pl on target/i386/sev.c, my name and the qemu
> >> mailing list is the only thing that shows up... it doesn't even show
> >> previous committers (as it would before applying this patch). Which is
> >> probably not great considering I do not make pull requests to QEMU.
> >>
> >> Is the way forward to get someone to sign up as a maintainer before
> >> applying a patch like this?
> > 
> > There's no requirement to have a maintainer before having a reviewer.
> > If any of the existing committers shown do send pull requests, it is
> > probably co-incidental since they're not listed as official maintainers,
> > and being listed as Reviewer doesn't commit you to doing pull requests.
> > 
> > That said if you're the only nominated reviewer and actually do useful
> > reviews, you will probably quickly find yourself the defacto maintainer
> > in 12 months time and end up doing pull requests... 
> 
> Right, I am just worried that if I am the only person that shows up in
> the get_maintainer.pl output, the submitter will have to know some other
> way who a relevant maintainer is that can take the patches otherwise
> they won't be CC'd. Or we'll have to hope a relevant maintainer sees
> them on the list. Or I'll have to chase down a maintainer myself
> assuming the reviews all check out. :-)

Well there's no real guarantee that any of the previous committers will
take the patch even if they are listed by get_maintainer. This is typical
with anything lacking a maintainer assigned. We typically hope that
whoever runs the "misc" queue sees the patch and picks it up, but often
it requires pings to remind someone to pick it up.

The only real right answer here is to actually get someone as the
nominated maintainer. Every other scenario is just a band-aid and
is not a good experience for contributors. A nominated reviewer is
usually hoped to be a stepping stone to someone becoming maintainer
in future, so in that sense the fact that only you will be cc'd is
sort of intentional :-)

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|




Re: [PATCH] qemu-{img,nbd}: Don't report zeroed cluster as a hole

2021-06-08 Thread Nir Soffer
On Tue, Jun 8, 2021 at 9:46 PM Eric Blake  wrote:
>
> On Tue, Jun 08, 2021 at 07:38:10PM +0300, Nir Soffer wrote:
> > On Tue, Jun 8, 2021 at 12:22 AM Eric Blake  wrote:
> > >
> > > On Mon, Jun 07, 2021 at 11:22:04PM +0300, Nir Soffer wrote:
> > > > When zeroing a cluster in an image with backing file, qemu-img and
> > > > qemu-nbd reported the area as a hole. This does not affect the guest
> > > > since the area is read as zero, but breaks code trying to reconstruct
> > > > the image chain based on qemu-img map or qemu-nbd block status response.
> > >
> > > Trying to reconstruct the image chain based on qemu-nbd block status
> > > should not be attempted on just base:allocation data, but should also
> > > take into account qemu:allocation-depth.
> >
> > This is correct when looking at the entire chain, but when we reconstruct
> > image data, we copy each image in the layer *without* the backing chain.
> >
> > The example I provided was not detailed enough, what we actually do is:
> >
> > qemu-nbd .. 'json:{"driver": "qcow2", "backing": null, "file":
> > {"driver": "file", "filename": "top.qcow2"}}'
> >
> > So there is no backing chain and allocation depth is not relevant.
> > - Allocated areas should be reported with flags 0
> > - Zero areas which are not holes should be reported as NBD_STATE_ZERO
> > - Zero areas which are holes (not allocated in this image) should be
> > reported as NBD_STATE_HOLE
>
> Again, what you WANT is qemu:allocation-depth.
>
> $ ./qemu-nbd -r -t -f qcow2 -A 'json:{"driver":"qcow2", "backing":null, \
>   "file":{"driver":"file", "filename":"top.qcow2"}}'
> $ nbdinfo --map=qemu:allocation-depth nbd://localhost
>  0   655360  unallocated
>  65536  1310721  local
> 196608   655360  unallocated
>
> $ nbdinfo --map nbd://localhost
>  0   655363  hole,zero
>  65536   655360  allocated
> 131072  1310723  hole,zero
>
> You don't care whether the information reads as zero or not, but
> whether top.qcow2 is responsible for the data at that cluster.
> base:allocation does not answer that question.  But
> qemu:allocation-depth answers it perfectly.
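Eric's suggestion can be made concrete: once a client has the (offset, length, depth) triples from qemu:allocation-depth, picking out the clusters owned by the top image is a simple filter. The helper below is a hedged sketch, assuming depth 0 means unallocated and depth 1 means allocated locally in the top image, as in the nbdinfo output above; the function name is invented for illustration:

```python
from typing import List, Tuple

Extent = Tuple[int, int, int]  # (offset, length, depth)


def extents_from_top(extents: List[Extent]) -> List[Tuple[int, int]]:
    """Return (offset, length) ranges whose data lives in the top image.

    Depth semantics assumed from qemu:allocation-depth: 0 = unallocated,
    1 = allocated in the top (local) image, n > 1 = supplied by the n-th
    image down the backing chain.
    """
    return [(off, length) for off, length, depth in extents if depth == 1]
```

Fed with the map from the example above, only the middle extent survives.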
>
> >
> > > From the perspective of the
> > > core NBD protocol, there is no backing file, so trying to guess what
> > > the backing file contains without using qemu extensions is unlikely to
> > > be correct, as shown in your example.  The fact that you could abuse
> > > it with qemu 5.2 but it broke in 6.0
> >
> > I'm not abusing anything, I'm only using public APIs. qemu-nbd behavior
> > should not change without good reason, and we did not have any good
> > reason to change the behavior for qcow2 images.
>
> Ah, but we did.  Exposing BDRV_BLOCK_ALLOCATED as server, but
> consuming it as BDRV_BLOCK_DATA as client, was inconsistent.  It was a
> bug that we ever used BLOCK_ALLOCATED in the first place, when it has
> _always_ been that the NBD semantics were supposed to be modeled on
> our definition of BLOCK_DATA.  That it took us a couple of years to
> notice our bug is unfortunate, but we DO have a good reason for the
> change - we were fixing an actual bug where we were reporting
> incorrect information compared to what the NBD spec was documenting.
>
> >
> > >  is not necessarily the sign of a
> > > regression in 6.0, but rather could be evidence that you have been
> > > trying to use an undocumented implementation quirk rather than a
> > > stable interface.
> >
> > I'm pretty convinced that this is a regression in qemu-nbd 6.0 since I
> > created this regression :-)
>
> I understand that you were surprised by the ramifications of your
> patch causing more changes than what you expected, but I still argue
> that your patch was correct and that the decision to incorporate it
> was intentional because it was the right thing to do.  Papering over
> the fallout for the sake of clients that should be using
> qemu:allocation-depth instead does not seem like it is worth the
> maintenance nightmare to me.
>
> >
> > Since we started using qemu-nbd in 2018, qemu-nbd has always reported
> > holes in qcow2 images, but not in raw files. We discussed this several times,
> > and you explained that we have allocation information from qcow2, but not
> > from raw format.
> >
> > My attempt to fix hole reporting in raw images has failed; reporting holes in
> > raw images is nice to have, but it broke the behavior of qemu-nbd with qcow2
> > images, which is a critical issue for ovirt.
>
> Rather, ovirt had been relying on buggy behavior, and now that the bug
> has been fixed, we are scrambling to figure out how to make ovirt
> still play nicely.  But my answer to that is to use
> qemu:allocation-depth.  It was introduced in 5.2, so it predates the
> point where base:allocation behavior was fixed, and it provides the
> answer to the question you are really asking (which parts of my image
> came from the image directly, rather than a backing file), rather than
> 

Re: [PATCH] tests/unit/test-char.c: Fix error handling issues

2021-06-08 Thread Daniel P . Berrangé
On Tue, Jun 08, 2021 at 11:51:35PM +0400, Marc-André Lureau wrote:
> Hi
> 
> On Tue, Jun 8, 2021 at 9:06 PM Peter Maydell 
> wrote:
> 
> > Coverity spots some minor error-handling issues in this test code.
> > These are mostly due to the test code assuming that the glib test
> > macros g_assert_cmpint() and friends will always abort on failure.
> > This is not the case: if the test case chooses to call
> > g_test_set_nonfatal_assertions() then they will mark the test case as
> > a failure and continue.  (This is different to g_assert(),
> > g_assert_not_reached(), and assert(), which really do all always kill
> > the process.) The idea is that you use g_assert() for things
> > which are really assertions, as you would in normal QEMU code,
> > and g_assert_cmpint() and friends for "this check is the thing
> > this test case is testing" checks.
> >
> > In fact this test case does not currently set assertions to be
> > nonfatal, but we should ideally be aiming to get to a point where we
> > can set that more generally, because the test harness gives much
> > better error reporting (including minor details like "what was the
> > name of the test case that actually failed") than a raw assert/abort
> > does.  So we mostly fix the Coverity nits by making the error-exit
> > path return early if necessary, rather than by converting the
> > g_assert_cmpint()s to g_assert()s.
> >
> > Fixes: Coverity CID 1432505, 1432514, 1432600, 1451384
> > Signed-off-by: Peter Maydell 
> > ---
> > We had some previous not-very-conclusive discussion about
> > g_assert_foo vs g_assert in this thread:
> >
> > https://lore.kernel.org/qemu-devel/cafeaca9juochqrh5orybjqwpqsyez5z3dvmy7fjx0dw4nbg...@mail.gmail.com/
> > This patch is in some sense me asserting my opinion about the
> > right thing to do. You might disagree...
> >
> > I think that improving the quality of the failure reporting
> > in 'make check' is useful, and that we should probably turn
> > on g_test_set_nonfatal_assertions() everywhere. (The worst that
> > can happen is that instead of crashing on the assert we proceed
> > and crash a bit later, I think.) Awkwardly we don't have a single
> > place where we could put that call, so I guess it's a coccinelle
> > script to add it to every test's main() function.
> >
> >
> I don't have any strong opinion on this. But I don't see much sense in
> having extra code for things that should never happen. I would teach
coverity instead that those asserts are always fatal. Aborting right where
the assert is reached is easier for the developer, as you get a direct
> backtrace. Given that tests are usually grouped in domains, it doesn't help
> much to keep running the rest of the tests in that group anyway.
> 
> Fwiw, none of the tests in glib or gtk seem to use
> g_test_set_nonfatal_assertions(), probably for similar considerations.

The method was introduced relatively recently (recent in glib terms,
this was still 2013).

commit a6a87506877939fee54bdc7eca70d47fc7d893d4
Author: Matthias Clasen 
Date:   Sat Aug 17 15:18:29 2013 -0400

Add a way to make assertions non-fatal

When using test harnesses other than gtester (e.g. using TAP),
it can be suboptimal to have the very first failed assertion
abort the test suite.

This commit adds a g_test_set_nonfatal_assertions() that can
be called in a test binary to change the behaviour of most
assert macros to just call g_test_fail() and continue. We
don't change the behavior of g_assert() and g_assert_not_reached(),
since these two assertion macros are older than GTest, are
widely used outside of testsuites, and will cause compiler
warnings if they lose their noreturn annotation.

https://bugzilla.gnome.org/show_bug.cgi?id=692125


This makes sense as a rationale so I'm surprised that they
never then used it in glib tests.


Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|




Re: [PULL 0/2] Libslirp patches

2021-06-08 Thread Marc-André Lureau
Hi

On Tue, Jun 8, 2021 at 8:55 PM Peter Maydell 
wrote:

> On Tue, 8 Jun 2021 at 16:55, Marc-André Lureau
>  wrote:
> >
> > Hi
> >
> > On Mon, Jun 7, 2021 at 4:17 PM Peter Maydell 
> wrote:
>
> >> >> clang sanitizer build: link failure:
> >> >> subprojects/libslirp/libslirp.so.0.3.0.p/src_arp_table.c.o: In
> >> >> function `arp_table_add':
> >> >>
> /home/petmay01/linaro/qemu-for-merges/build/clang/../../subprojects/libslirp/src/arp_table.c:51:
> >> >> undefined reference to `__ubsan_handle_type_mismatch_v1'
> >> >>
> /home/petmay01/linaro/qemu-for-merges/build/clang/../../subprojects/libslirp/src/arp_table.c:51:
> >> >> undefined reference to `__ubsan_handle_type_mismatch_v1'
> >> >>
> /home/petmay01/linaro/qemu-for-merges/build/clang/../../subprojects/libslirp/src/arp_table.c:51:
> >> >> undefined reference to `__ubsan_handle_type_mismatch_v1'
> >> >>
> /home/petmay01/linaro/qemu-for-merges/build/clang/../../subprojects/libslirp/src/arp_table.c:34:
> >> >> undefined reference to `__ubsan_handle_type_mismatch_v1'
> >> >>
> /home/petmay01/linaro/qemu-for-merges/build/clang/../../subprojects/libslirp/src/arp_table.c:34:
> >> >> undefined reference to `__ubsan_handle_type_mismatch_v1'
> >> >> (and lots more similar)
> >>
> >> > I don't get this  when running make vm-build-netbsd. What else am I
> missing?
> >>
> >>
> >> This isn't NetBSD related, it's just a clang sanitizer build on Linux.
> >
> >
> >
> > I am running configure with '--enable-sanitizers' --cc=clang
> --cxx=clang++ --host-cc=clang, I can't reproduce.
> >
> > What's your distro? (or meson + clang versions)
>
> Ubuntu 18.04.5 LTS (bionic); configure arguments
> '--cc=clang' '--cxx=clang++' '--enable-gtk'
> '--extra-cflags=-fsanitize=undefined  -fno-sanitize=shift-base
> -Werror'
> clang version 6.0.0-1ubuntu2


Per-subproject `default_library` was added in 0.54, and we require 0.55.3.
Why is it trying to build libslirp.so?

I tried to make vm-build-ubuntu.i386 with the following changes:

 diff --git a/tests/vm/ubuntu.i386 b/tests/vm/ubuntu.i386
index 47681b6f87..21d0b64eb1 100755
--- a/tests/vm/ubuntu.i386
+++ b/tests/vm/ubuntu.i386
@@ -18,7 +18,7 @@ import ubuntuvm
 DEFAULT_CONFIG = {
 'install_cmds' : "apt-get update,"\
  "apt-get build-dep -y qemu,"\
- "apt-get install -y libfdt-dev language-pack-en ninja-build",
+ "apt-get install -y libfdt-dev language-pack-en ninja-build clang",
 }

 class UbuntuX86VM(ubuntuvm.UbuntuVM):
@@ -32,7 +32,7 @@ class UbuntuX86VM(ubuntuvm.UbuntuVM):
 cd $(mktemp -d);
 sudo chmod a+r /dev/vdb;
 tar -xf /dev/vdb;
-./configure {configure_opts};
+./configure {configure_opts} --cc=clang --cxx=clang++ --host-cc=clang --extra-cflags='-fsanitize=undefined -fno-sanitize=shift-base -Werror';
 make --output-sync {target} -j{jobs} {verbose};
 """

(or with EXTRA_CONFIGURE_OPTS)

And it failed with:

[2363/9207] Linking target qemu-system-aarch64
FAILED: qemu-system-aarch64
clang++ @qemu-system-aarch64.rsp
libqemu-aarch64-softmmu.fa.p/accel_tcg_cputlb.c.o: In function
`helper_atomic_cmpxchgq_le_mmu':
/tmp/tmp.VkWONZ62bA/build/../accel/tcg/atomic_template.h:86: undefined
reference to `__atomic_compare_exchange_8'
libqemu-aarch64-softmmu.fa.p/accel_tcg_cputlb.c.o: In function
`helper_atomic_xchgq_le_mmu':
/tmp/tmp.VkWONZ62bA/build/../accel/tcg/atomic_template.h:134: undefined
reference to `__atomic_exchange_8'
libqemu-aarch64-softmmu.fa.p/accel_tcg_cputlb.c.o: In function
`helper_atomic_fetch_addq_le_mmu':

Any idea what I am missing?

thanks

-- 
Marc-André Lureau


Re: [PATCH] tests/unit/test-char.c: Fix error handling issues

2021-06-08 Thread Daniel P . Berrangé
On Tue, Jun 08, 2021 at 06:06:06PM +0100, Peter Maydell wrote:
> Coverity spots some minor error-handling issues in this test code.
> These are mostly due to the test code assuming that the glib test
> macros g_assert_cmpint() and friends will always abort on failure.
> This is not the case: if the test case chooses to call
> g_test_set_nonfatal_assertions() then they will mark the test case as
> a failure and continue.  (This is different to g_assert(),
> g_assert_not_reached(), and assert(), which really do all always kill
> the process.) The idea is that you use g_assert() for things
> which are really assertions, as you would in normal QEMU code,
> and g_assert_cmpint() and friends for "this check is the thing
> this test case is testing" checks.
> 
> In fact this test case does not currently set assertions to be
> nonfatal, but we should ideally be aiming to get to a point where we
> can set that more generally, because the test harness gives much
> better error reporting (including minor details like "what was the
> name of the test case that actually failed") than a raw assert/abort
> does.  So we mostly fix the Coverity nits by making the error-exit
> path return early if necessary, rather than by converting the
> g_assert_cmpint()s to g_assert()s.
> 
> Fixes: Coverity CID 1432505, 1432514, 1432600, 1451384
> Signed-off-by: Peter Maydell 
> ---
> We had some previous not-very-conclusive discussion about
> g_assert_foo vs g_assert in this thread:
> https://lore.kernel.org/qemu-devel/cafeaca9juochqrh5orybjqwpqsyez5z3dvmy7fjx0dw4nbg...@mail.gmail.com/
> This patch is in some sense me asserting my opinion about the
> right thing to do. You might disagree...

In that thread you show a difference in the TAP output when
g_test_set_nonfatal_assertions is enabled. Instead of it
reporting an abort, it reports an error against the test
and carries on running.

> I think that improving the quality of the failure reporting
> in 'make check' is useful, and that we should probably turn
> on g_test_set_nonfatal_assertions() everywhere. (The worst that
> can happen is that instead of crashing on the assert we proceed
> and crash a bit later, I think.) Awkwardly we don't have a single
> place where we could put that call, so I guess it's a coccinelle
> script to add it to every test's main() function.

Yes, it is a bit of a philosophical question which behaviour
is better - immediate exit, vs report & carry on.  In the
Perl world the normal is to report & carry on so you get
full results for the entire suite. In python / C world it
has been more common to immediately exit.

The report & carry on obviously results in cascading errors
unless you take extra steps to skip stuff you know is going
to cascade. You did some examples of that here with the extra
'goto fail' jumps.

The flipside is that if you have a test that fails 6
different scenarios it is nice to see all 6 failures upfront,
instead of having to play whack-a-mole fixing one and then
discovering the next failure, then fixing that and discovering
the next failure, etc.


When we discussed this last on IRC, I suggested that we
introduce a 'q_test_init' that wraps around g_test_init.
This q_test_init could set g_test_set_nonfatal_assertions
and call 'g_test_init'.

This would avoid need for coccinelle script, as a sed
s/g_test_init/q_test_init/ would suffice. We can stuff
other logic into q_test_Init if we wanted to. Perhaps
a private TMPDIR for example.

>  tests/unit/test-char.c | 36 ++--
>  1 file changed, 34 insertions(+), 2 deletions(-)
> 
> diff --git a/tests/unit/test-char.c b/tests/unit/test-char.c
> index 5b3b48ebacd..43630ab57f8 100644
> --- a/tests/unit/test-char.c
> +++ b/tests/unit/test-char.c
> @@ -214,6 +214,10 @@ static void char_mux_test(void)
>  qemu_chr_fe_take_focus(_be2);
>  
>  base = qemu_chr_find("mux-label-base");
> +g_assert_nonnull(base);
> +if (base == 0) {
> +goto fail;
> +}
>  g_assert_cmpint(qemu_chr_be_can_write(base), !=, 0);
>  
>  qemu_chr_be_write(base, (void *)"hello", 6);
> @@ -333,6 +337,7 @@ static void char_mux_test(void)
>  g_assert_cmpint(strlen(data), !=, 0);
>  g_free(data);
>  
> +fail:
>  qemu_chr_fe_deinit(_be1, false);
>  qemu_chr_fe_deinit(_be2, true);
>  }
> @@ -486,6 +491,9 @@ static void char_pipe_test(void)
>  chr = qemu_chr_new("pipe", tmp, NULL);
>  g_assert_nonnull(chr);
>  g_free(tmp);
> +if (!chr) {
> +goto fail;
> +}
>  
>  qemu_chr_fe_init(, chr, _abort);
>  
> @@ -493,12 +501,20 @@ static void char_pipe_test(void)
>  g_assert_cmpint(ret, ==, 9);
>  
>  fd = open(out, O_RDWR);
> +g_assert_cmpint(fd, >=, 0);
> +if (fd < 0) {
> +goto fail;
> +}
>  ret = read(fd, buf, sizeof(buf));
>  g_assert_cmpint(ret, ==, 9);
>  g_assert_cmpstr(buf, ==, "pipe-out");
>  close(fd);
>  
>  fd = open(in, O_WRONLY);
> +g_assert_cmpint(fd, >=, 0);
> +

Re: [PATCH] Add Connor Kuehl as reviewer for AMD SEV

2021-06-08 Thread Connor Kuehl
On 6/8/21 3:10 PM, Daniel P. Berrangé wrote:
> On Tue, Jun 08, 2021 at 02:25:37PM -0500, Connor Kuehl wrote:
>> It may not be appropriate for me to take over as a maintainer at this time,
>> but I would consider myself familiar with AMD SEV and what this code is
>> meant to be doing as part of a VMM for launching SEV-protected guests.
>>
>> To that end, I would be happy to volunteer as a reviewer for SEV-related
>> changes so that I am CC'd on them and can help share the review burden with
>> whoever does maintain this code.
>>
>> Signed-off-by: Connor Kuehl 
>> ---
>> Note: because there's no maintainer entry, when running
>> ./scripts/get_maintainers.pl on target/i386/sev.c, my name and the qemu
>> mailing list is the only thing that shows up... it doesn't even show
>> previous committers (as it would before applying this patch). Which is
>> probably not great considering I do not make pull requests to QEMU.
>>
>> Is the way forward to get someone to sign up as a maintainer before
>> applying a patch like this?
> 
> There's no requirement to have a maintainer before having a reviewer.
> If any of the existing committers shown do send pull requests, it is
> probably co-incidental since they're not listed as official maintainers,
> and being listed as Reviewer doesn't commit you to doing pull requests.
> 
> That said if you're the only nominated reviewer and actually do useful
> reviews, you will probably quickly find yourself the defacto maintainer
> in 12 months time and end up doing pull requests... 

Right, I am just worried that if I am the only person that shows up in
the get_maintainer.pl output, the submitter will have to know some other
way who a relevant maintainer is that can take the patches otherwise
they won't be CC'd. Or we'll have to hope a relevant maintainer sees
them on the list. Or I'll have to chase down a maintainer myself
assuming the reviews all check out. :-)

Connor




[PATCH v2 3/3] hw/arm: quanta-q71l add pca954x muxes

2021-06-08 Thread Patrick Venture
Adds the expected pca954x muxes.

Tested: Booted quanta-q71l image to userspace.
Signed-off-by: Patrick Venture 
Reviewed-by: Hao Wu 
Reviewed-by: Joel Stanley 
---
 hw/arm/Kconfig  |  1 +
 hw/arm/aspeed.c | 11 ---
 2 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/hw/arm/Kconfig b/hw/arm/Kconfig
index 9d1c2a6f7b..4a033e81ef 100644
--- a/hw/arm/Kconfig
+++ b/hw/arm/Kconfig
@@ -413,6 +413,7 @@ config ASPEED_SOC
 select PCA9552
 select SERIAL
 select SMBUS_EEPROM
+select PCA954X
 select SSI
 select SSI_M25P80
 select TMP105
diff --git a/hw/arm/aspeed.c b/hw/arm/aspeed.c
index 3fe6c55744..35a28b0e8b 100644
--- a/hw/arm/aspeed.c
+++ b/hw/arm/aspeed.c
@@ -14,6 +14,7 @@
 #include "hw/arm/boot.h"
 #include "hw/arm/aspeed.h"
 #include "hw/arm/aspeed_soc.h"
+#include "hw/i2c/i2c_mux_pca954x.h"
 #include "hw/i2c/smbus_eeprom.h"
 #include "hw/misc/pca9552.h"
 #include "hw/misc/tmp105.h"
@@ -461,14 +462,18 @@ static void quanta_q71l_bmc_i2c_init(AspeedMachineState 
*bmc)
 /* TODO: i2c-1: Add Frontpanel FRU eeprom@57 24c64 */
 /* TODO: Add Memory Riser i2c mux and eeproms. */
 
-/* TODO: i2c-2: pca9546@74 */
-/* TODO: i2c-2: pca9548@77 */
+i2c_slave_create_simple(aspeed_i2c_get_bus(>i2c, 2), "pca9546", 0x74);
+i2c_slave_create_simple(aspeed_i2c_get_bus(>i2c, 2), "pca9548", 0x77);
+
 /* TODO: i2c-3: Add BIOS FRU eeprom@56 24c64 */
-/* TODO: i2c-7: Add pca9546@70 */
+
+/* i2c-7 */
+i2c_slave_create_simple(aspeed_i2c_get_bus(>i2c, 7), "pca9546", 0x70);
 /*- i2c@0: pmbus@59 */
 /*- i2c@1: pmbus@58 */
 /*- i2c@2: pmbus@58 */
 /*- i2c@3: pmbus@59 */
+
 /* TODO: i2c-7: Add PDB FRU eeprom@52 */
 /* TODO: i2c-8: Add BMC FRU eeprom@50 */
 }
-- 
2.32.0.rc1.229.g3e70b5a671-goog




[PATCH v2 2/3] hw/arm: gsj add pca9548

2021-06-08 Thread Patrick Venture
Tested: Quanta-gsj firmware booted.

i2c /dev entries driver
I2C init bus 1 freq 10
I2C init bus 2 freq 10
I2C init bus 3 freq 10
I2C init bus 4 freq 10
I2C init bus 8 freq 10
I2C init bus 9 freq 10
at24 9-0055: 8192 byte 24c64 EEPROM, writable, 1 bytes/write
I2C init bus 10 freq 10
at24 10-0055: 8192 byte 24c64 EEPROM, writable, 1 bytes/write
I2C init bus 12 freq 10
I2C init bus 15 freq 10
i2c i2c-15: Added multiplexed i2c bus 16
i2c i2c-15: Added multiplexed i2c bus 17
i2c i2c-15: Added multiplexed i2c bus 18
i2c i2c-15: Added multiplexed i2c bus 19
i2c i2c-15: Added multiplexed i2c bus 20
i2c i2c-15: Added multiplexed i2c bus 21
i2c i2c-15: Added multiplexed i2c bus 22
i2c i2c-15: Added multiplexed i2c bus 23
pca954x 15-0075: registered 8 multiplexed busses for I2C switch pca9548

Signed-off-by: Patrick Venture 
Reviewed-by: Hao Wu 
Reviewed-by: Joel Stanley 
---
 hw/arm/Kconfig  | 1 +
 hw/arm/npcm7xx_boards.c | 6 ++
 2 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/hw/arm/Kconfig b/hw/arm/Kconfig
index b887f6a5b1..9d1c2a6f7b 100644
--- a/hw/arm/Kconfig
+++ b/hw/arm/Kconfig
@@ -377,6 +377,7 @@ config NPCM7XX
 select SERIAL
 select SSI
 select UNIMP
+select PCA954X
 
 config FSL_IMX25
 bool
diff --git a/hw/arm/npcm7xx_boards.c b/hw/arm/npcm7xx_boards.c
index 9b7a7cd201..f0a96564e2 100644
--- a/hw/arm/npcm7xx_boards.c
+++ b/hw/arm/npcm7xx_boards.c
@@ -18,6 +18,7 @@
 
 #include "hw/arm/npcm7xx.h"
 #include "hw/core/cpu.h"
+#include "hw/i2c/i2c_mux_pca954x.h"
 #include "hw/i2c/smbus_eeprom.h"
 #include "hw/loader.h"
 #include "hw/qdev-core.h"
@@ -231,10 +232,7 @@ static void quanta_gsj_i2c_init(NPCM7xxState *soc)
  * - ucd90160@6b
  */
 
-/*
- * i2c-15:
- * - pca9548@75
- */
+i2c_slave_create_simple(npcm7xx_i2c_get_bus(soc, 15), "pca9548", 0x75);
 }
 
 static void quanta_gsj_fan_init(NPCM7xxMachine *machine, NPCM7xxState *soc)
-- 
2.32.0.rc1.229.g3e70b5a671-goog




[PATCH v2 1/3] hw/arm: gsj add i2c comments

2021-06-08 Thread Patrick Venture
Adds comments to the board init to identify missing i2c devices.

Signed-off-by: Patrick Venture 
Reviewed-by: Hao Wu 
Reviewed-by: Joel Stanley 
---
 hw/arm/npcm7xx_boards.c | 16 +++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/hw/arm/npcm7xx_boards.c b/hw/arm/npcm7xx_boards.c
index d4553e3786..9b7a7cd201 100644
--- a/hw/arm/npcm7xx_boards.c
+++ b/hw/arm/npcm7xx_boards.c
@@ -220,7 +220,21 @@ static void quanta_gsj_i2c_init(NPCM7xxState *soc)
 at24c_eeprom_init(soc, 9, 0x55, 8192);
 at24c_eeprom_init(soc, 10, 0x55, 8192);
 
-/* TODO: Add additional i2c devices. */
+/*
+ * i2c-11:
+ * - power-brick@36: delta,dps800
+ * - hotswap@15: ti,lm5066i
+ */
+
+/*
+ * i2c-12:
+ * - ucd90160@6b
+ */
+
+/*
+ * i2c-15:
+ * - pca9548@75
+ */
 }
 
 static void quanta_gsj_fan_init(NPCM7xxMachine *machine, NPCM7xxState *soc)
-- 
2.32.0.rc1.229.g3e70b5a671-goog




[PATCH v2 0/3] With the pca954x i2c mux available, enable it on aspeed and nuvoton BMC boards.

2021-06-08 Thread Patrick Venture
v2:
- Dropped sonorapass patch.

Patrick Venture (3):
  hw/arm: gsj add i2c comments
  hw/arm: gsj add pca9548
  hw/arm: quanta-q71l add pca954x muxes

 hw/arm/Kconfig  |  2 ++
 hw/arm/aspeed.c | 11 ---
 hw/arm/npcm7xx_boards.c | 14 +-
 3 files changed, 23 insertions(+), 4 deletions(-)

-- 
2.32.0.rc1.229.g3e70b5a671-goog




Re: [PATCH 05/26] configure, meson: convert pam detection to meson

2021-06-08 Thread Daniel P . Berrangé
On Tue, Jun 08, 2021 at 12:45:51PM -0700, Richard Henderson wrote:
> On 6/8/21 4:22 AM, Paolo Bonzini wrote:
> > +pam = not_found
> > +if not get_option('auth_pam').auto() or have_system
> > +  pam = cc.find_library('pam', has_headers: ['security/pam_appl.h'],
> 
> The condition doesn't look right.
> Why are we looking for pam if --disable-pam-auth?
> 
> Surely
> 
>   if not get_option('auth_pam').disabled() and have_system

This isn't entirely obvious at first glance, but the line after
the one you quote with the 'required' param makes it "do the
right thing (tm)".

The 'auth_pam' option is a tri-state taking 'enabled', 'disabled'
and 'auto', with 'auto' being the default state. When a tri-state
value is passed as the value of the 'required' parameter, then

   required==enabled   is interpreted as 'required=true'
   required==auto  is interpreted as 'required=false'
   required==disabled  means the entire call is a no-op

So this logic:

 if not get_option('auth_pam').auto() or have_system
pam = cc.find_library('pam', has_headers: ['security/pam_appl.h'],
  required: get_option('auth_pam'),
  ...)

Means

  => If 'auto' is set, then only look for the library if we're
 building system emulators. In this case 'required:' will
 evaluate to 'false', and so we'll gracefully degrade
 if the library is missing.


  => If 'enabled' is set, then we'll look for the library
 and if it is missing then it is a fatal error as
 'required' will evaluate to 'true'.

  => If 'disabled' is set, then the 'find_library' call
 will not look for anything, immediately return a
 'not found' result and let the caller carry on.


Regards,
Daniel
-- 
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|




Re: [PATCH] tests/unit/test-char.c: Fix error handling issues

2021-06-08 Thread Peter Maydell
On Tue, 8 Jun 2021 at 20:51, Marc-André Lureau
 wrote:
>
> Hi
>
> On Tue, Jun 8, 2021 at 9:06 PM Peter Maydell  wrote:
>> I think that improving the quality of the failure reporting
>> in 'make check' is useful, and that we should probably turn
>> on g_test_set_nonfatal_assertions() everywhere. (The worst that
>> can happen is that instead of crashing on the assert we proceed
>> and crash a bit later, I think.) Awkwardly we don't have a single
>> place where we could put that call, so I guess it's a coccinelle
>> script to add it to every test's main() function.
>>
>
> I don't have any strong opinion on this. But I don't see much sense in
> having extra code for things that should never happen.

The point is that I want to make them happen, though...

> I would teach coverity instead that those asserts are always fatal.

If you want an assert that's always fatal, that's g_assert().
These ones are documented as not always fatal.

> Fwiw, none of the tests in glib or gtk seem to use
> g_test_set_nonfatal_assertions(), probably for similar considerations.

That's interesting. I did wonder about these APIs, and if glib
themselves aren't using them that seems like a reason why they're
so awkward.

thanks
-- PMM



Re: [PATCH] Add Connor Kuehl as reviewer for AMD SEV

2021-06-08 Thread Daniel P . Berrangé
On Tue, Jun 08, 2021 at 02:25:37PM -0500, Connor Kuehl wrote:
> It may not be appropriate for me to take over as a maintainer at this time,
> but I would consider myself familiar with AMD SEV and what this code is
> meant to be doing as part of a VMM for launching SEV-protected guests.
> 
> To that end, I would be happy to volunteer as a reviewer for SEV-related
> changes so that I am CC'd on them and can help share the review burden with
> whoever does maintain this code.
> 
> Signed-off-by: Connor Kuehl 
> ---
> Note: because there's no maintainer entry, when running
> ./scripts/get_maintainers.pl on target/i386/sev.c, my name and the qemu
> mailing list is the only thing that shows up... it doesn't even show
> previous committers (as it would before applying this patch). Which is
> probably not great considering I do not make pull requests to QEMU.
> 
> Is the way forward to get someone to sign up as a maintainer before
> applying a patch like this?

There's no requirement to have a maintainer before having a reviewer.
If any of the existing committers shown do send pull requests, it is
probably co-incidental since they're not listed as official maintainers,
and being listed as Reviewer doesn't commit you to doing pull requests.

That said, if you're the only nominated reviewer and actually do useful
reviews, you will probably quickly find yourself the de facto maintainer
in 12 months' time and end up doing pull requests... 

>  MAINTAINERS | 4 
>  1 file changed, 4 insertions(+)
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 7d9cd29042..3eb7ce8fc6 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -2938,6 +2938,10 @@ F: hw/core/clock-vmstate.c
>  F: hw/core/qdev-clock.c
>  F: docs/devel/clocks.rst
>  
> +AMD Secure Encrypted Virtualization
> +R: Connor Kuehl 
> +F: target/i386/sev.c

Regards,
Daniel
-- 
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|




Re: [PATCH 4/4] aspeed: sonorapass: enable pca954x muxes

2021-06-08 Thread Patrick Venture
On Wed, May 19, 2021 at 10:18 AM Patrick Venture  wrote:
>
> On Tue, May 18, 2021 at 4:27 PM Joel Stanley  wrote:
> >
> > On Tue, 18 May 2021 at 19:41, Patrick Venture  wrote:
> > >
> > > Enables the pca954x muxes in the bmc board configuration.
> > >
> > > Signed-off-by: Patrick Venture 
> > > Reviewed-by: Hao Wu 
> >
> > Not sure about this one, there's no device tree for it in Linux.
>
> Yeah, this was just a pick-up from grepping other BMC boards.  I added
> these going off the comment alone.  I'd be okay with dropping this in
> the series.

In this case, the number of patches changed within a version change --
should I start a fresh series or just bump the version and drop the
last patch?

>
> >
> > > ---
> > >  hw/arm/aspeed.c | 22 +++---
> > >  1 file changed, 11 insertions(+), 11 deletions(-)
> > >
> > > diff --git a/hw/arm/aspeed.c b/hw/arm/aspeed.c
> > > index 35a28b0e8b..27fd51980c 100644
> > > --- a/hw/arm/aspeed.c
> > > +++ b/hw/arm/aspeed.c
> > > @@ -541,14 +541,16 @@ static void swift_bmc_i2c_init(AspeedMachineState 
> > > *bmc)
> > >
> > >  static void sonorapass_bmc_i2c_init(AspeedMachineState *bmc)
> > >  {
> > > +I2CSlave *i2c_mux;
> > >  AspeedSoCState *soc = &bmc->soc;
> > >
> > >  /* bus 2 : */
> > >  i2c_slave_create_simple(aspeed_i2c_get_bus(&soc->i2c, 2), "tmp105", 
> > > 0x48);
> > >  i2c_slave_create_simple(aspeed_i2c_get_bus(&soc->i2c, 2), "tmp105", 
> > > 0x49);
> > > -/* bus 2 : pca9546 @ 0x73 */
> > > +i2c_slave_create_simple(aspeed_i2c_get_bus(&soc->i2c, 2), "pca9546", 
> > > 0x73);
> > >
> > > -/* bus 3 : pca9548 @ 0x70 */
> > > +/* bus 3 : */
> > > +i2c_slave_create_simple(aspeed_i2c_get_bus(&soc->i2c, 3), "pca9548", 
> > > 0x70);
> > >
> > >  /* bus 4 : */
> > >  uint8_t *eeprom4_54 = g_malloc0(8 * 1024);
> > > @@ -562,7 +564,7 @@ static void 
> > > sonorapass_bmc_i2c_init(AspeedMachineState *bmc)
> > >  /* bus 6 : */
> > >  i2c_slave_create_simple(aspeed_i2c_get_bus(&soc->i2c, 6), "tmp105", 
> > > 0x48);
> > >  i2c_slave_create_simple(aspeed_i2c_get_bus(&soc->i2c, 6), "tmp105", 
> > > 0x49);
> > > -/* bus 6 : pca9546 @ 0x73 */
> > > +i2c_slave_create_simple(aspeed_i2c_get_bus(&soc->i2c, 6), "pca9546", 
> > > 0x73);
> > >
> > >  /* bus 8 : */
> > >  uint8_t *eeprom8_56 = g_malloc0(8 * 1024);
> > > @@ -573,14 +575,12 @@ static void 
> > > sonorapass_bmc_i2c_init(AspeedMachineState *bmc)
> > >  /* bus 8 : adc128d818 @ 0x1d */
> > >  /* bus 8 : adc128d818 @ 0x1f */
> > >
> > > -/*
> > > - * bus 13 : pca9548 @ 0x71
> > > - *  - channel 3:
> > > - *  - tmm421 @ 0x4c
> > > - *  - tmp421 @ 0x4e
> > > - *  - tmp421 @ 0x4f
> > > - */
> > > -
> > > +/* bus 13 : */
> > > +i2c_mux = i2c_slave_create_simple(aspeed_i2c_get_bus(&soc->i2c, 13),
> > > +  "pca9548", 0x71);
> > > +i2c_slave_create_simple(pca954x_i2c_get_bus(i2c_mux, 3), "tmp421", 
> > > 0x4c);
> > > +i2c_slave_create_simple(pca954x_i2c_get_bus(i2c_mux, 3), "tmp421", 
> > > 0x4e);
> > > +i2c_slave_create_simple(pca954x_i2c_get_bus(i2c_mux, 3), "tmp421", 
> > > 0x4f);
> > >  }
> > >
> > >  static void witherspoon_bmc_i2c_init(AspeedMachineState *bmc)
> > > --
> > > 2.31.1.751.gd2f1c929bd-goog
> > >



Re: [PATCH] tests/unit/test-char.c: Fix error handling issues

2021-06-08 Thread Marc-André Lureau
Hi

On Tue, Jun 8, 2021 at 9:06 PM Peter Maydell 
wrote:

> Coverity spots some minor error-handling issues in this test code.
> These are mostly due to the test code assuming that the glib test
> macros g_assert_cmpint() and friends will always abort on failure.
> This is not the case: if the test case chooses to call
> g_test_set_nonfatal_assertions() then they will mark the test case as
> a failure and continue.  (This is different to g_assert(),
> g_assert_not_reached(), and assert(), which really do all always kill
> the process.) The idea is that you use g_assert() for things
> which are really assertions, as you would in normal QEMU code,
> and g_assert_cmpint() and friends for "this check is the thing
> this test case is testing" checks.
>
> In fact this test case does not currently set assertions to be
> nonfatal, but we should ideally be aiming to get to a point where we
> can set that more generally, because the test harness gives much
> better error reporting (including minor details like "what was the
> name of the test case that actually failed") than a raw assert/abort
> does.  So we mostly fix the Coverity nits by making the error-exit
> path return early if necessary, rather than by converting the
> g_assert_cmpint()s to g_assert()s.
>
> Fixes: Coverity CID 1432505, 1432514, 1432600, 1451384
> Signed-off-by: Peter Maydell 
> ---
> We had some previous not-very-conclusive discussion about
> g_assert_foo vs g_assert in this thread:
>
> https://lore.kernel.org/qemu-devel/cafeaca9juochqrh5orybjqwpqsyez5z3dvmy7fjx0dw4nbg...@mail.gmail.com/
> This patch is in some sense me asserting my opinion about the
> right thing to do. You might disagree...
>
> I think that improving the quality of the failure reporting
> in 'make check' is useful, and that we should probably turn
> on g_test_set_nonfatal_assertions() everywhere. (The worst that
> can happen is that instead of crashing on the assert we proceed
> and crash a bit later, I think.) Awkwardly we don't have a single
> place where we could put that call, so I guess it's a coccinelle
> script to add it to every test's main() function.
>
>
I don't have any strong opinion on this. But I don't see much sense in
having extra code for things that should never happen. I would teach
coverity instead that those asserts are always fatal. aborting right where
the assert is reached is easier for the developer, as you get a direct
backtrace. Given that tests are usually grouped in domains, it doesn't help
much to keep running the rest of the tests in that group anyway.

Fwiw, none of the tests in glib or gtk seem to use
g_test_set_nonfatal_assertions(), probably for similar considerations.

 tests/unit/test-char.c | 36 ++--
>  1 file changed, 34 insertions(+), 2 deletions(-)
>
> diff --git a/tests/unit/test-char.c b/tests/unit/test-char.c
> index 5b3b48ebacd..43630ab57f8 100644
> --- a/tests/unit/test-char.c
> +++ b/tests/unit/test-char.c
> @@ -214,6 +214,10 @@ static void char_mux_test(void)
>  qemu_chr_fe_take_focus(&chr_be2);
>
>  base = qemu_chr_find("mux-label-base");
> +g_assert_nonnull(base);
> +if (base == 0) {
> +goto fail;
> +}
>  g_assert_cmpint(qemu_chr_be_can_write(base), !=, 0);
>
>  qemu_chr_be_write(base, (void *)"hello", 6);
> @@ -333,6 +337,7 @@ static void char_mux_test(void)
>  g_assert_cmpint(strlen(data), !=, 0);
>  g_free(data);
>
> +fail:
>  qemu_chr_fe_deinit(&chr_be1, false);
>  qemu_chr_fe_deinit(&chr_be2, true);
>  }
> @@ -486,6 +491,9 @@ static void char_pipe_test(void)
>  chr = qemu_chr_new("pipe", tmp, NULL);
>  g_assert_nonnull(chr);
>  g_free(tmp);
> +if (!chr) {
> +goto fail;
> +}
>
>  qemu_chr_fe_init(&be, chr, &error_abort);
>
> @@ -493,12 +501,20 @@ static void char_pipe_test(void)
>  g_assert_cmpint(ret, ==, 9);
>
>  fd = open(out, O_RDWR);
> +g_assert_cmpint(fd, >=, 0);
> +if (fd < 0) {
> +goto fail;
> +}
>  ret = read(fd, buf, sizeof(buf));
>  g_assert_cmpint(ret, ==, 9);
>  g_assert_cmpstr(buf, ==, "pipe-out");
>  close(fd);
>
>  fd = open(in, O_WRONLY);
> +g_assert_cmpint(fd, >=, 0);
> +if (fd < 0) {
> +goto fail;
> +}
>  ret = write(fd, "pipe-in", 8);
>  g_assert_cmpint(ret, ==, 8);
>  close(fd);
> @@ -518,6 +534,7 @@ static void char_pipe_test(void)
>
>  qemu_chr_fe_deinit(&be, true);
>
> +fail:
>  g_assert(g_unlink(in) == 0);
>  g_assert(g_unlink(out) == 0);
>  g_assert(g_rmdir(tmp_path) == 0);
> @@ -556,7 +573,10 @@ static int make_udp_socket(int *port)
>  socklen_t alen = sizeof(addr);
>  int ret, sock = qemu_socket(PF_INET, SOCK_DGRAM, 0);
>
> -g_assert_cmpint(sock, >, 0);
> +g_assert_cmpint(sock, >=, 0);
> +if (sock < 0) {
> +return sock;
> +}
>  addr.sin_family = AF_INET ;
>  addr.sin_addr.s_addr = htonl(INADDR_ANY);
>  addr.sin_port = 0;
> @@ -586,6 +606,9 @@ static void 

Re: [PATCH 05/26] configure, meson: convert pam detection to meson

2021-06-08 Thread Richard Henderson

On 6/8/21 12:45 PM, Richard Henderson wrote:

On 6/8/21 4:22 AM, Paolo Bonzini wrote:

+pam = not_found
+if not get_option('auth_pam').auto() or have_system
+  pam = cc.find_library('pam', has_headers: ['security/pam_appl.h'],


The condition doesn't look right.
Why are we looking for pam if --disable-pam-auth?

Surely

   if not get_option('auth_pam').disabled() and have_system

?


Actually, same question vs crypto in patch 3.

r~




Re: [PATCH 05/26] configure, meson: convert pam detection to meson

2021-06-08 Thread Richard Henderson

On 6/8/21 4:22 AM, Paolo Bonzini wrote:

+pam = not_found
+if not get_option('auth_pam').auto() or have_system
+  pam = cc.find_library('pam', has_headers: ['security/pam_appl.h'],


The condition doesn't look right.
Why are we looking for pam if --disable-pam-auth?

Surely

  if not get_option('auth_pam').disabled() and have_system

?


r~



[PATCH v2 2/2] hw/arm: quanta-gbs-bmc add i2c comments

2021-06-08 Thread Patrick Venture
Add a comment and i2c method that describes the board layout.

Tested: firmware booted to userspace.
Signed-off-by: Patrick Venture 
Reviewed-by: Brandon Kim 
Reviewed-by: Hao Wu 
---
 hw/arm/npcm7xx_boards.c | 60 +
 1 file changed, 60 insertions(+)

diff --git a/hw/arm/npcm7xx_boards.c b/hw/arm/npcm7xx_boards.c
index 34a214fe79..d9de375826 100644
--- a/hw/arm/npcm7xx_boards.c
+++ b/hw/arm/npcm7xx_boards.c
@@ -238,6 +238,65 @@ static void quanta_gsj_fan_init(NPCM7xxMachine *machine, 
NPCM7xxState *soc)
 npcm7xx_connect_pwm_fan(soc, [2], 0x05, 1);
 }
 
+static void quanta_gbs_i2c_init(NPCM7xxState *soc)
+{
+/*
+ * i2c-0:
+ * pca9546@71
+ *
+ * i2c-1:
+ * pca9535@24
+ * pca9535@20
+ * pca9535@21
+ * pca9535@22
+ * pca9535@23
+ * pca9535@25
+ * pca9535@26
+ *
+ * i2c-2:
+ * sbtsi@4c
+ *
+ * i2c-5:
+ * atmel,24c64@50 mb_fru
+ * pca9546@71
+ * - channel 0: max31725@54
+ * - channel 1: max31725@55
+ * - channel 2: max31725@5d
+ *  atmel,24c64@51 fan_fru
+ * - channel 3: atmel,24c64@52 hsbp_fru
+ *
+ * i2c-6:
+ * pca9545@73
+ *
+ * i2c-7:
+ * pca9545@72
+ *
+ * i2c-8:
+ * adi,adm1272@10
+ *
+ * i2c-9:
+ * pca9546@71
+ * - channel 0: isil,isl68137@60
+ * - channel 1: isil,isl68137@61
+ * - channel 2: isil,isl68137@63
+ * - channel 3: isil,isl68137@45
+ *
+ * i2c-10:
+ * pca9545@71
+ *
+ * i2c-11:
+ * pca9545@76
+ *
+ * i2c-12:
+ * maxim,max34451@4e
+ * isil,isl68137@5d
+ * isil,isl68137@5e
+ *
+ * i2c-14:
+ * pca9545@70
+ */
+}
+
 static void npcm750_evb_init(MachineState *machine)
 {
 NPCM7xxState *soc;
@@ -282,6 +341,7 @@ static void quanta_gbs_init(MachineState *machine)
npcm7xx_connect_flash(&soc->fiu[0], 0, "mx66u51235f",
   drive_get(IF_MTD, 0, 0));
 
+quanta_gbs_i2c_init(soc);
 npcm7xx_load_kernel(machine, soc);
 }
 
-- 
2.31.1.751.gd2f1c929bd-goog




[PATCH v2 1/2] hw/arm: add quanta-gbs-bmc machine

2021-06-08 Thread Patrick Venture
Adds initial quanta-gbs-bmc machine support.

Tested: Boots to userspace.
Signed-off-by: Patrick Venture 
Reviewed-by: Brandon Kim 
Reviewed-by: Hao Wu 
---
 hw/arm/npcm7xx_boards.c | 33 +
 1 file changed, 33 insertions(+)

diff --git a/hw/arm/npcm7xx_boards.c b/hw/arm/npcm7xx_boards.c
index d4553e3786..34a214fe79 100644
--- a/hw/arm/npcm7xx_boards.c
+++ b/hw/arm/npcm7xx_boards.c
@@ -29,6 +29,7 @@
 
 #define NPCM750_EVB_POWER_ON_STRAPS 0x1ff7
 #define QUANTA_GSJ_POWER_ON_STRAPS 0x1fff
+#define QUANTA_GBS_POWER_ON_STRAPS 0x17ff
 
 static const char npcm7xx_default_bootrom[] = "npcm7xx_bootrom.bin";
 
@@ -268,6 +269,22 @@ static void quanta_gsj_init(MachineState *machine)
 npcm7xx_load_kernel(machine, soc);
 }
 
+static void quanta_gbs_init(MachineState *machine)
+{
+NPCM7xxState *soc;
+
+soc = npcm7xx_create_soc(machine, QUANTA_GBS_POWER_ON_STRAPS);
+npcm7xx_connect_dram(soc, machine->ram);
+qdev_realize(DEVICE(soc), NULL, &error_fatal);
+
+npcm7xx_load_bootrom(machine, soc);
+
+npcm7xx_connect_flash(&soc->fiu[0], 0, "mx66u51235f",
+  drive_get(IF_MTD, 0, 0));
+
+npcm7xx_load_kernel(machine, soc);
+}
+
 static void npcm7xx_set_soc_type(NPCM7xxMachineClass *nmc, const char *type)
 {
 NPCM7xxClass *sc = NPCM7XX_CLASS(object_class_by_name(type));
@@ -316,6 +333,18 @@ static void gsj_machine_class_init(ObjectClass *oc, void 
*data)
 mc->default_ram_size = 512 * MiB;
 };
 
+static void gbs_bmc_machine_class_init(ObjectClass *oc, void *data)
+{
+NPCM7xxMachineClass *nmc = NPCM7XX_MACHINE_CLASS(oc);
+MachineClass *mc = MACHINE_CLASS(oc);
+
+npcm7xx_set_soc_type(nmc, TYPE_NPCM730);
+
+mc->desc = "Quanta GBS (Cortex-A9)";
+mc->init = quanta_gbs_init;
+mc->default_ram_size = 1 * GiB;
+}
+
 static const TypeInfo npcm7xx_machine_types[] = {
 {
 .name   = TYPE_NPCM7XX_MACHINE,
@@ -332,6 +361,10 @@ static const TypeInfo npcm7xx_machine_types[] = {
 .name   = MACHINE_TYPE_NAME("quanta-gsj"),
 .parent = TYPE_NPCM7XX_MACHINE,
 .class_init = gsj_machine_class_init,
+}, {
+.name   = MACHINE_TYPE_NAME("quanta-gbs-bmc"),
+.parent = TYPE_NPCM7XX_MACHINE,
+.class_init = gbs_bmc_machine_class_init,
 },
 };
 
-- 
2.31.1.751.gd2f1c929bd-goog




Re: [PATCH] hmp: Add "calc_dirty_rate" and "info dirty_rate" cmds

2021-06-08 Thread Dr. David Alan Gilbert
* Peter Xu (pet...@redhat.com) wrote:
> On Tue, Jun 08, 2021 at 07:49:56PM +0100, Dr. David Alan Gilbert wrote:
> > * Peter Xu (pet...@redhat.com) wrote:
> > > These two commands are missing when adding the QMP sister commands.  Add 
> > > them,
> > > so developers can play with them easier.
> > > 
> > > Cc: Dr. David Alan Gilbert 
> > > Cc: Juan Quintela 
> > > Cc: Leonardo Bras Soares Passos 
> > > Cc: Chuan Zheng 
> > > Cc: huang...@chinatelecom.cn
> > > Signed-off-by: Peter Xu 
> > 
> > Reviewed-by: Dr. David Alan Gilbert 
> > 
> > > ---
> > > PS: I really doubt whether this is working as expected... I ran one 
> > > 200MB/s
> > > workload inside, what I measured is 20MB/s with current algorithm...  
> > > Sampling
> > > 512 pages out of 1G mem is not wise enough I guess, especially that 
> > > assumes
> > > dirty workload is spread across the memories while it's normally not the 
> > > case..
> > 
> > What size of address space did you dirty - was it 20MB?
> 
> IIRC it was either 200M or 500M, based on a 1G small VM.

What was your sample time ?

Dave

> -- 
> Peter Xu
> 
-- 
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK




[PATCH v2 0/2] Adds quanta-gbs-bmc machine to nuvoton boards.

2021-06-08 Thread Patrick Venture
This is a board supported by OpenBmc.

v2:
- Fixed missing hyphen in Cortex name and dropped TODO on hardware
strap value.

Patrick Venture (2):
  hw/arm: add quanta-gbs-bmc machine
  hw/arm: quanta-gbs-bmc add i2c comments

 hw/arm/npcm7xx_boards.c | 93 +
 1 file changed, 93 insertions(+)

-- 
2.31.1.751.gd2f1c929bd-goog




Re: [PATCH] Add Connor Kuehl as reviewer for AMD SEV

2021-06-08 Thread Dr. David Alan Gilbert
* Connor Kuehl (cku...@redhat.com) wrote:
> It may not be appropriate for me to take over as a maintainer at this time,
> but I would consider myself familiar with AMD SEV and what this code is
> meant to be doing as part of a VMM for launching SEV-protected guests.
> 
> To that end, I would be happy to volunteer as a reviewer for SEV-related
> changes so that I am CC'd on them and can help share the review burden with
> whoever does maintain this code.
> 
> Signed-off-by: Connor Kuehl 

Ooh yes please, we could do with someone to be a reviewer;

> ---
> Note: because there's no maintainer entry, when running
> ./scripts/get_maintainers.pl on target/i386/sev.c, my name and the qemu
> mailing list is the only thing that shows up... it doesn't even show
> previous committers (as it would before applying this patch). Which is
> probably not great considering I do not make pull requests to QEMU.
> 
> Is the way forward to get someone to sign up as a maintainer before
> applying a patch like this?

If you wanted to do a submaintainer for it and send it to one of the x86
maintainers rather than having to do full pulls?

Dave

>  MAINTAINERS | 4 
>  1 file changed, 4 insertions(+)
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 7d9cd29042..3eb7ce8fc6 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -2938,6 +2938,10 @@ F: hw/core/clock-vmstate.c
>  F: hw/core/qdev-clock.c
>  F: docs/devel/clocks.rst
>  
> +AMD Secure Encrypted Virtualization
> +R: Connor Kuehl 
> +F: target/i386/sev.c
> +
>  Usermode Emulation
>  --
>  Overall usermode emulation
> -- 
> 2.31.1
> 
-- 
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK




Re: [PATCH 04/26] configure, meson: convert libtasn1 detection to meson

2021-06-08 Thread Richard Henderson

On 6/8/21 4:22 AM, Paolo Bonzini wrote:

Make it depend on gnutls too, since it is only used as part of gnutls
tests.

Signed-off-by: Paolo Bonzini
---
  configure  | 19 ---
  meson.build|  9 +
  tests/unit/meson.build |  2 +-
  3 files changed, 6 insertions(+), 24 deletions(-)


Reviewed-by: Richard Henderson 

r~



Re: [PATCH] Add Connor Kuehl as reviewer for AMD SEV

2021-06-08 Thread Connor Kuehl
On 6/8/21 2:25 PM, Connor Kuehl wrote:
> It may not be appropriate for me to take over as a maintainer at this time,
> but I would consider myself familiar with AMD SEV and what this code is
> meant to be doing as part of a VMM for launching SEV-protected guests.
> 
> To that end, I would be happy to volunteer as a reviewer for SEV-related
> changes so that I am CC'd on them and can help share the review burden with
> whoever does maintain this code.
> 
> Signed-off-by: Connor Kuehl 
> ---
> Note: because there's no maintainer entry, when running
> ./scripts/get_maintainers.pl on target/i386/sev.c, my name and the qemu
> mailing list is the only thing that shows up... it doesn't even show
> previous committers (as it would before applying this patch). Which is
> probably not great considering I do not make pull requests to QEMU.
> 
> Is the way forward to get someone to sign up as a maintainer before
> applying a patch like this?

I need to resend this patch since I realized I forgot to add
target/i386/sev_i386.h, and target/i386/sev-stub.c, but I still am
wondering about the answer to the question above.

Connor




[PATCH] Add Connor Kuehl as reviewer for AMD SEV

2021-06-08 Thread Connor Kuehl
It may not be appropriate for me to take over as a maintainer at this time,
but I would consider myself familiar with AMD SEV and what this code is
meant to be doing as part of a VMM for launching SEV-protected guests.

To that end, I would be happy to volunteer as a reviewer for SEV-related
changes so that I am CC'd on them and can help share the review burden with
whoever does maintain this code.

Signed-off-by: Connor Kuehl 
---
Note: because there's no maintainer entry, when running
./scripts/get_maintainers.pl on target/i386/sev.c, my name and the qemu
mailing list is the only thing that shows up... it doesn't even show
previous committers (as it would before applying this patch). Which is
probably not great considering I do not make pull requests to QEMU.

Is the way forward to get someone to sign up as a maintainer before
applying a patch like this?

 MAINTAINERS | 4 
 1 file changed, 4 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 7d9cd29042..3eb7ce8fc6 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -2938,6 +2938,10 @@ F: hw/core/clock-vmstate.c
 F: hw/core/qdev-clock.c
 F: docs/devel/clocks.rst
 
+AMD Secure Encrypted Virtualization
+R: Connor Kuehl 
+F: target/i386/sev.c
+
 Usermode Emulation
 --
 Overall usermode emulation
-- 
2.31.1




[PATCH v1 1/1] vfio/migration: Correct device state from vmstate change for savevm case.

2021-06-08 Thread Kirti Wankhede
Set _SAVING flag for device state from vmstate change handler when it gets
called from savevm.

Currently State transition savevm/suspend is seen as:
_RUNNING -> _STOP -> Stop-and-copy -> _STOP

State transition savevm/suspend should be:
_RUNNING -> Stop-and-copy -> _STOP

State transition from _RUNNING to _STOP occurs from vfio_vmstate_change()
where when vmstate changes from running to !running, _RUNNING flag is reset
but at the same time when vfio_vmstate_change() is called for
RUN_STATE_SAVE_VM, _SAVING bit should be set.

Reported-by: Yishai Hadas 
Signed-off-by: Kirti Wankhede 
---
 hw/vfio/migration.c | 11 ++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
index 384576cfc051..33242b2313b9 100644
--- a/hw/vfio/migration.c
+++ b/hw/vfio/migration.c
@@ -725,7 +725,16 @@ static void vfio_vmstate_change(void *opaque, bool 
running, RunState state)
  * _RUNNING bit
  */
 mask = ~VFIO_DEVICE_STATE_RUNNING;
-value = 0;
+
+/*
+ * When VM state transition to stop for savevm command, device should
+ * start saving data.
+ */
+if (state == RUN_STATE_SAVE_VM) {
+value = VFIO_DEVICE_STATE_SAVING;
+} else {
+value = 0;
+}
 }
 
 ret = vfio_migration_set_state(vbasedev, mask, value);
-- 
2.7.0




Re: [PATCH v3 16/33] nbd/client-connection: add possibility of negotiation

2021-06-08 Thread Vladimir Sementsov-Ogievskiy

12.05.2021 09:42, Vladimir Sementsov-Ogievskiy wrote:

11.05.2021 13:45, Roman Kagan wrote:

On Fri, Apr 16, 2021 at 11:08:54AM +0300, Vladimir Sementsov-Ogievskiy wrote:

Add arguments and logic to support nbd negotiation in the same thread
after successful connection.

Signed-off-by: Vladimir Sementsov-Ogievskiy 
---
  include/block/nbd.h |   9 +++-
  block/nbd.c |   4 +-
  nbd/client-connection.c | 105 ++--
  3 files changed, 109 insertions(+), 9 deletions(-)

diff --git a/include/block/nbd.h b/include/block/nbd.h
index 57381be76f..5d86e6a393 100644
--- a/include/block/nbd.h
+++ b/include/block/nbd.h
@@ -409,11 +409,16 @@ const char *nbd_err_lookup(int err);
  /* nbd/client-connection.c */
  typedef struct NBDClientConnection NBDClientConnection;
-NBDClientConnection *nbd_client_connection_new(const SocketAddress *saddr);
+NBDClientConnection *nbd_client_connection_new(const SocketAddress *saddr,
+   bool do_negotiation,
+   const char *export_name,
+   const char *x_dirty_bitmap,
+   QCryptoTLSCreds *tlscreds);
  void nbd_client_connection_release(NBDClientConnection *conn);
  QIOChannelSocket *coroutine_fn
-nbd_co_establish_connection(NBDClientConnection *conn, Error **errp);
+nbd_co_establish_connection(NBDClientConnection *conn, NBDExportInfo *info,
+    QIOChannel **ioc, Error **errp);
  void coroutine_fn nbd_co_establish_connection_cancel(NBDClientConnection 
*conn);
diff --git a/block/nbd.c b/block/nbd.c
index 9bd68dcf10..5e63caaf4b 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -361,7 +361,7 @@ static coroutine_fn void nbd_reconnect_attempt(BDRVNBDState 
*s)
  s->ioc = NULL;
  }
-    s->sioc = nbd_co_establish_connection(s->conn, NULL);
+    s->sioc = nbd_co_establish_connection(s->conn, NULL, NULL, NULL);
  if (!s->sioc) {
  ret = -ECONNREFUSED;
  goto out;
@@ -2033,7 +2033,7 @@ static int nbd_open(BlockDriverState *bs, QDict *options, 
int flags,
  goto fail;
  }
-    s->conn = nbd_client_connection_new(s->saddr);
+    s->conn = nbd_client_connection_new(s->saddr, false, NULL, NULL, NULL);
  /*
   * establish TCP connection, return error if it fails
diff --git a/nbd/client-connection.c b/nbd/client-connection.c
index b45a0bd5f6..ae4a77f826 100644
--- a/nbd/client-connection.c
+++ b/nbd/client-connection.c
@@ -30,14 +30,19 @@
  #include "qapi/clone-visitor.h"
  struct NBDClientConnection {
-    /* Initialization constants */
+    /* Initialization constants, never change */
  SocketAddress *saddr; /* address to connect to */
+    QCryptoTLSCreds *tlscreds;
+    NBDExportInfo initial_info;
+    bool do_negotiation;
  /*
   * Result of last attempt. Valid in FAIL and SUCCESS states.
   * If you want to steal error, don't forget to set pointer to NULL.
   */
+    NBDExportInfo updated_info;
  QIOChannelSocket *sioc;
+    QIOChannel *ioc;
  Error *err;
  QemuMutex mutex;
@@ -47,12 +52,25 @@ struct NBDClientConnection {
  Coroutine *wait_co; /* nbd_co_establish_connection() wait in yield() */
  };
-NBDClientConnection *nbd_client_connection_new(const SocketAddress *saddr)
+NBDClientConnection *nbd_client_connection_new(const SocketAddress *saddr,
+   bool do_negotiation,
+   const char *export_name,
+   const char *x_dirty_bitmap,
+   QCryptoTLSCreds *tlscreds)
  {
  NBDClientConnection *conn = g_new(NBDClientConnection, 1);
+    object_ref(OBJECT(tlscreds));
  *conn = (NBDClientConnection) {
  .saddr = QAPI_CLONE(SocketAddress, saddr),
+    .tlscreds = tlscreds,
+    .do_negotiation = do_negotiation,
+
+    .initial_info.request_sizes = true,
+    .initial_info.structured_reply = true,
+    .initial_info.base_allocation = true,
+    .initial_info.x_dirty_bitmap = g_strdup(x_dirty_bitmap),
+    .initial_info.name = g_strdup(export_name ?: "")
  };
  qemu_mutex_init(&conn->mutex);
@@ -68,9 +86,59 @@ static void nbd_client_connection_do_free(NBDClientConnection *conn)
  }
  error_free(conn->err);
  qapi_free_SocketAddress(conn->saddr);
+    object_unref(OBJECT(conn->tlscreds));
+    g_free(conn->initial_info.x_dirty_bitmap);
+    g_free(conn->initial_info.name);
  g_free(conn);
  }
+/*
+ * Connect to @addr and do NBD negotiation if @info is not null. If @tlscreds
+ * given @outioc is provided. @outioc is provided only on success.  The call may


s/given/are given/
s/provided/returned/g


+ * be cancelled in parallel by simply qio_channel_shutdown(sioc).


I assume by "in parallel" you mean "from another thread", I'd suggest to
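Stepping back from the diff, the ownership rule the patch establishes — the connection takes its own reference on the TLS creds in nbd_client_connection_new() and drops it in nbd_client_connection_do_free(), so the caller may release its reference right after construction — can be sketched with a toy refcount (all type and function names below are illustrative, not QEMU's QOM API):

```c
#include <assert.h>
#include <stdlib.h>

/* Toy refcounted credentials object standing in for QCryptoTLSCreds. */
struct creds { int refs; };

static void creds_ref(struct creds *c)   { if (c) c->refs++; }
static void creds_unref(struct creds *c) { if (c) c->refs--; }

struct conn { struct creds *tlscreds; };

static struct conn *conn_new(struct creds *tls)
{
    struct conn *c = malloc(sizeof(*c));
    creds_ref(tls);            /* mirrors object_ref(OBJECT(tlscreds)) */
    c->tlscreds = tls;
    return c;
}

static void conn_free(struct conn *c)
{
    creds_unref(c->tlscreds);  /* mirrors object_unref() in do_free() */
    free(c);
}
```

With this contract the connection keeps the creds alive for its whole lifetime regardless of what the caller does with its own reference.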

Re: [PATCH] hmp: Add "calc_dirty_rate" and "info dirty_rate" cmds

2021-06-08 Thread Peter Xu
On Tue, Jun 08, 2021 at 07:49:56PM +0100, Dr. David Alan Gilbert wrote:
> * Peter Xu (pet...@redhat.com) wrote:
> > These two commands are missing when adding the QMP sister commands.  Add them,
> > so developers can play with them easier.
> > 
> > Cc: Dr. David Alan Gilbert 
> > Cc: Juan Quintela 
> > Cc: Leonardo Bras Soares Passos 
> > Cc: Chuan Zheng 
> > Cc: huang...@chinatelecom.cn
> > Signed-off-by: Peter Xu 
> 
> Reviewed-by: Dr. David Alan Gilbert 
> 
> > ---
> > PS: I really doubt whether this is working as expected... I ran one 200MB/s
> > workload inside, what I measured is 20MB/s with current algorithm...  Sampling
> > 512 pages out of 1G mem is not wise enough I guess, especially that assumes
> > dirty workload is spread across the memories while it's normally not the case..
> 
> What size of address space did you dirty - was it 20MB?

IIRC it was either 200M or 500M, based on a 1G small VM.

-- 
Peter Xu
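For what it's worth, the undercount described above falls out of a simple model: page-hash sampling can only observe whether a page changed at all during the period, so rewrites of the same page are invisible and the reported rate saturates at working-set-size divided by the period. A toy calculation of that ceiling (illustrative only, not the actual QEMU sampling algorithm):

```c
#include <assert.h>

/* Each dirtied page is counted at most once per measurement period, so
 * the observable rate is capped by the size of the dirty working set,
 * however fast the guest rewrites it. */
static long observed_rate_mb(long true_rate_mb, long working_set_mb,
                             long period_sec)
{
    long written_mb = true_rate_mb * period_sec;
    long distinct_mb = written_mb < working_set_mb ? written_mb
                                                   : working_set_mb;
    return distinct_mb / period_sec;
}
```

Under this model a 200MB/s writer confined to a 20MB region reports roughly 20MB/s, which is why the size of the dirtied address space is the key question here.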




Re: [PATCH 03/26] configure, meson: convert crypto detection to meson

2021-06-08 Thread Richard Henderson

On 6/8/21 4:22 AM, Paolo Bonzini wrote:

Signed-off-by: Paolo Bonzini
---
  configure  | 188 +++--
  crypto/meson.build |  41 +++--
  meson.build|  81 +-
  meson_options.txt  |   6 ++
  tests/unit/meson.build |   6 +-
  5 files changed, 90 insertions(+), 232 deletions(-)


Reviewed-by: Richard Henderson 

r~



Re: [PATCH v4 1/7] file-posix: fix max_iov for /dev/sg devices

2021-06-08 Thread Vladimir Sementsov-Ogievskiy

08.06.2021 16:16, Paolo Bonzini wrote:

Even though it was only called for devices that have bs->sg set (which
must be character devices), sg_get_max_segments looked at /sys/dev/block
which only works for block devices.

On Linux the sg driver has its own way to provide the maximum number of
iovecs in a scatter/gather list, so add support for it.  The block device
path is kept because it will be reinstated in the next patches.

Signed-off-by: Paolo Bonzini 
---
  block/file-posix.c | 11 +++
  1 file changed, 11 insertions(+)

diff --git a/block/file-posix.c b/block/file-posix.c
index f37dfc10b3..536998a1d6 100644
--- a/block/file-posix.c
+++ b/block/file-posix.c
@@ -1180,6 +1180,17 @@ static int sg_get_max_segments(int fd)
  goto out;
  }
  
+if (S_ISCHR(st.st_mode)) {


Why not check "if (bs->sg) {" instead? It seems more consistent with
issuing the SG_ ioctl. Or what am I missing?


+if (ioctl(fd, SG_GET_SG_TABLESIZE, &ret) == 0) {
+return ret;
+}
+return -ENOTSUP;
+}
+
+if (!S_ISBLK(st.st_mode)) {
+return -ENOTSUP;
+}
+
  sysfspath = g_strdup_printf("/sys/dev/block/%u:%u/queue/max_segments",
  major(st.st_rdev), minor(st.st_rdev));
  sysfd = open(sysfspath, O_RDONLY);




--
Best regards,
Vladimir



RE: [PATCH v3] docs/devel: Explain in more detail the TB chaining mechanisms

2021-06-08 Thread Luis Fernando Fujita Pires
From: Luis Pires 
> Signed-off-by: Luis Pires 
> ---
> v3:
>  - Dropped "most common" from the sentence introducing the chaining
> mechanisms
>  - Changed wording about using the TB address returned by exit_tb
> 
> v2:
>  - s/outer execution loop/main loop
>  - Mention re-evaluation of cpu_exec_interrupt()
>  - Changed wording on lookup_and_goto_ptr()
>  - Added more details to step 2 of goto+tb + exit_tb
>  - Added details about when goto_tb + exit_tb cannot be used
> 
>  docs/devel/tcg.rst | 103 +++--
>  1 file changed, 91 insertions(+), 12 deletions(-)
> 
> diff --git a/docs/devel/tcg.rst b/docs/devel/tcg.rst index
> 4ebde44b9d..a65fb7b1c4 100644
> --- a/docs/devel/tcg.rst
> +++ b/docs/devel/tcg.rst
> @@ -11,13 +11,14 @@ performances.
>  QEMU's dynamic translation backend is called TCG, for "Tiny Code  Generator".
> For more information, please take a look at ``tcg/README``.
> 
> -Some notable features of QEMU's dynamic translator are:
> +The following sections outline some notable features and implementation
> +details of QEMU's dynamic translator.
> 
>  CPU state optimisations
>  ---
> 
> -The target CPUs have many internal states which change the way it
> -evaluates instructions. In order to achieve a good speed, the
> +The target CPUs have many internal states which change the way they
> +evaluate instructions. In order to achieve a good speed, the
>  translation phase considers that some state information of the virtual
>  CPU cannot change in it. The state is recorded in the Translation
>  Block (TB). If the state changes (e.g. privilege level), a new TB will
> @@ -31,17 +32,95 @@ Direct block chaining
>  ---------------------
> 
>  After each translated basic block is executed, QEMU uses the simulated
> -Program Counter (PC) and other cpu state information (such as the CS
> +Program Counter (PC) and other CPU state information (such as the CS
>  segment base value) to find the next basic block.
> 
> -In order to accelerate the most common cases where the new simulated PC
> -is known, QEMU can patch a basic block so that it jumps directly to the
> -next one.
> -
> -The most portable code uses an indirect jump. An indirect jump makes
> -it easier to make the jump target modification atomic. On some host
> -architectures (such as x86 or PowerPC), the ``JUMP`` opcode is
> -directly patched so that the block chaining has no overhead.
> +In its simplest, less optimized form, this is done by exiting from the
> +current TB, going through the TB epilogue, and then back to the main
> +loop. That’s where QEMU looks for the next TB to execute, translating
> +it from the guest architecture if it isn’t already available in memory.
> +Then QEMU proceeds to execute this next TB, starting at the prologue
> +and then moving on to the translated instructions.
> +
> +Exiting from the TB this way will cause the ``cpu_exec_interrupt()``
> +callback to be re-evaluated before executing additional instructions.
> +It is mandatory to exit this way after any CPU state changes that may
> +unmask interrupts.
> +
> +In order to accelerate the cases where the TB for the new simulated PC
> +is already available, QEMU has mechanisms that allow multiple TBs to be
> +chained directly, without having to go back to the main loop as
> +described above. These mechanisms are:
> +
> +``lookup_and_goto_ptr``
> +^^^
> +
> +Calling ``tcg_gen_lookup_and_goto_ptr()`` will emit a call to
> +``helper_lookup_tb_ptr``. This helper will look for an existing TB that
> +matches the current CPU state. If the destination TB is available its
> +code address is returned, otherwise the address of the JIT epilogue is
> +returned. The call to the helper is always followed by the tcg
> +``goto_ptr`` opcode, which branches to the returned address. In this
> +way, we either branch to the next TB or return to the main loop.
> +
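The lookup step described in that subsection can be modeled minimally as: either return the host code address of a TB matching the current guest state, or the epilogue address, so the following goto_ptr branch chains directly or falls back to the main loop. A toy sketch (structure and function names are illustrative, not QEMU's actual TB hash lookup):

```c
#include <assert.h>

/* Illustrative stand-in for a TranslationBlock: guest PC plus the
 * address of its translated host code. */
struct tb { unsigned long pc; void *code_ptr; };

/* Return the host code of a matching TB, else the epilogue address. */
static void *lookup_tb_ptr(const struct tb *tbs, int n,
                           unsigned long pc, void *epilogue)
{
    for (int i = 0; i < n; i++) {
        if (tbs[i].pc == pc) {
            return tbs[i].code_ptr;   /* hit: chain to the next TB */
        }
    }
    return epilogue;                  /* miss: back to the main loop */
}
```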
> +``goto_tb + exit_tb``
> +^
> +
> +The translation code usually implements branching by performing the
> +following steps:
> +
> +1. Call ``tcg_gen_goto_tb()`` passing a jump slot index (either 0 or 1)
> +   as a parameter.
> +
> +2. Emit TCG instructions to update the CPU state with any information
> +   that has been assumed constant and is required by the main loop to
> +   correctly locate and execute the next TB. For most guests, this is
> +   just the PC of the branch destination, but others may store additional
> +   data. The information updated in this step must be inferable from both
> +   ``cpu_get_tb_cpu_state()`` and ``cpu_restore_state()``.
> +
> +3. Call ``tcg_gen_exit_tb()`` passing the address of the current TB and
> +   the jump slot index again.
> +
> +Step 1, ``tcg_gen_goto_tb()``, will emit a ``goto_tb`` TCG instruction
> +that later on gets translated to a jump to an address associated with
> +the specified jump slot. Initially, this is the address of step 2's
> +instructions, which update the CPU state 

Re: [PATCH] blockdev: fix drive-backup transaction endless drained section

2021-06-08 Thread Eric Blake
On Tue, Jun 08, 2021 at 08:18:52PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> drive_backup_prepare() does bdrv_drained_begin() in hope that
> bdrv_drained_end() will be called in drive_backup_clean(). Still we
> need to set state->bs for this to work. That's done too late: a lot of
> failure paths in drive_backup_prepare() miss setting state->bs. Fix
> that.
> 
> Fixes: 2288ccfac96281c316db942d10e3f921c1373064
> Fixes: https://gitlab.com/qemu-project/qemu/-/issues/399
> Signed-off-by: Vladimir Sementsov-Ogievskiy 
> ---
>  blockdev.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)

Reviewed-by: Eric Blake 

> 
> diff --git a/blockdev.c b/blockdev.c
> index f08192deda..094c085962 100644
> --- a/blockdev.c
> +++ b/blockdev.c
> @@ -1714,6 +1714,7 @@ static void drive_backup_prepare(BlkActionState *common, Error **errp)
>  aio_context = bdrv_get_aio_context(bs);
>  aio_context_acquire(aio_context);
>  
> +state->bs = bs;
>  /* Paired with .clean() */
>  bdrv_drained_begin(bs);

Commit 2288ccfac9 had these two lines in the opposite order, but that
doesn't matter, the important part is that we do indeed need to set
state->bs regardless of any later failure detections, to get .clean to
do the matching drained_end.
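The invariant at stake — state->bs must be recorded before (or together with) bdrv_drained_begin(), so that .clean() can always pair the drained_end even when .prepare() bails out early — can be modeled with a counter (names are illustrative, not the blockdev.c code):

```c
#include <assert.h>
#include <stddef.h>

static int drain_depth;          /* stands in for drained-section nesting */

struct state { void *bs; };      /* stands in for DriveBackupState */

static void prepare(struct state *st, void *bs, int fail_early)
{
    st->bs = bs;                 /* recorded before any failure path... */
    drain_depth++;               /* ...so clean() can pair this begin */
    if (fail_early) {
        return;                  /* error return: clean() still runs */
    }
    /* the rest of the prepare work would go here */
}

static void clean(struct state *st)
{
    if (st->bs) {
        drain_depth--;           /* matching drained_end */
    }
}
```

If prepare() fails before setting st->bs (the bug being fixed), clean() sees a NULL bs and the drained section is never ended; setting it first keeps begin/end balanced on every path.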

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.   +1-919-301-3266
Virtualization:  qemu.org | libvirt.org




Re: [PATCH v6 4/4] Jobs based on custom runners: add job definitions for QEMU's machines

2021-06-08 Thread Wainer dos Santos Moschetta

Hi,

On 6/8/21 10:36 AM, Cleber Rosa Junior wrote:



On Tue, Jun 8, 2021 at 2:30 AM Philippe Mathieu-Daudé wrote:


Hi Alex, Stefan,

On 6/8/21 5:14 AM, Cleber Rosa wrote:
> The QEMU project has two machines (aarch64 and s390x) that can
be used
> for jobs that do build and run tests.

AFAIK there is more hardware available to the project, so I'm
wondering
what happened to the rest, is it a deliberate choice to start small?


Hi Phil,

Yes, this series was deliberately focused on the first two machines 
owned and available to QEMU.


What will happen with the rest, since we are wasting resources?


The plan is to allow all machines (currently available and to-be 
available) to be connected as custom runners. This hopefully gets that 
started.


About "more hardware available to the project", there's one VM from 
fosshost which was made available not long ago, and which I set up 
even more recently, which could be used as a gitlab runner too.  But, 
even though some new hardware resource is available (and wasted?), the 
human resources to maintain them have not been properly determined, so 
I believe it's a good decision to start with the machines that have 
been operational for long, and that already have to the best of my 
knowledge, people maintaining them.


I also see a "Debian unstable mips64el (Debian) @ cipunited.cn" registered
as a runner, but I don't have extra information about it.


Besides that, I'll send another series shortly, that builds upon this 
series, and adds a Red Hat focused job, on a Red Hat managed machine.  
This should be what other entities should be capable of doing and 
allowed to do.


Who has access to what and should do what (setup)? How is this list of
hw managed btw? Should there be some public visibility (i.e. Wiki)?


These are good questions, and I believe Alex can answer them about 
those two machines.  Even though I hooked them up to GitLab, AFAICT he 
is the ultimate admin (maybe Peter too?).


About hardware management, it has been suggested to use either the 
Wiki or a MAINTAINERS entry.  This is still unresolved and feedback 
would be appreciated.  For me to propose a MAINTAINERS entry, say, on 
a v7, I'd need the information on who is managing them.



Is there a document explaining what are the steps to follow for an
entity to donate / sponsor hardware? Where would it be stored, should
this hw be shipped somewhere? What documentation should be
provided for
its system administration?

In case an entity manages hosting and maintenance, can the QEMU
project
share the power bill? Up to which amount?
Similar question if a sysadmin has to be paid.

If the QEMU project can't spend money on CI, what is expected from
resource donators? Simply hardware + power (electricity) + network
traffic? Also sysadmining and monitoring? Do we expect some kind of
reliability on the data stored here or can we assume disposable /
transient runners?
Should donators also provide storage? Do we have a minimum storage
requirement?

Should we provide some guideline explaining any code might be run by
our contributors on these runners and some security measures have to
be taken / considered?

Sponsors usually expect some advertising to thanks them, and often
regular reports on how their resources are used, else they might not
renew their offer. Who should care to keep the relationship with
sponsors?

Where is defined what belong to the project, who is responsible,
who can
request access to this hardware, what resource can be used?


You obviously directed the question towards Alex and Stefan 
(rightfully so), so I won't attempt to answer these ones at this point.


More generically, what is the process for a private / corporate entity
to register a runner to the project? (how did it work for this aarch64
and s390x one?)


The steps are listed on the documentation.  Basically anyone with 
knowledge of the "registration token" can add new machines to GitLab 
as runners.  For the two aarch64 and s390x, it was a matter of 
following the documentation steps which basically involve:


1) providing the hostname(s) in the inventory file
2) provide the "registration token" in the vars.yml file
3) running the playbooks


What else am I missing?


I think you're missing the answers to all your good questions :).

And I understand that are a lot of them (from everyone, including 
myself).  The dilemma here is: should we activate the machines already 
available, and learn in practice, what's missing?  I honestly believe 
we should.



IMHO we should merge the minimum possible to start using the existing 
machines, then address the questions (good questions, btw) raised by 
Philippe as needed.


Thanks!

- Wainer



Thanks,
- Clr.

Thanks,

Phil.

> This introduces those jobs,

Re: [PATCH v3 2/7] hmp: Add "calc_dirty_rate" and "info dirty_rate" cmds

2021-06-08 Thread Dr. David Alan Gilbert
* huang...@chinatelecom.cn (huang...@chinatelecom.cn) wrote:
> From: Peter Xu 
> 
> These two commands are missing when adding the QMP sister commands.
> Add them, so developers can play with them easier.
> 
> Signed-off-by: Peter Xu 
> Signed-off-by: Hyman Huang(黄勇) 

I've queued 1 and 2 (with a line wrap on 1);  we can take the
rest after Peter is happy with the other stuff.

Dave

> ---
>  hmp-commands-info.hx  | 13 
>  hmp-commands.hx   | 14 +
>  include/monitor/hmp.h |  2 ++
>  migration/dirtyrate.c | 47 +++
>  4 files changed, 76 insertions(+)
> 
> diff --git a/hmp-commands-info.hx b/hmp-commands-info.hx
> index b2347a6aea..fb59c27200 100644
> --- a/hmp-commands-info.hx
> +++ b/hmp-commands-info.hx
> @@ -867,3 +867,16 @@ SRST
>``info replay``
>  Display the record/replay information: mode and the current icount.
>  ERST
> +
> +{
> +.name   = "dirty_rate",
> +.args_type  = "",
> +.params = "",
> +.help   = "show dirty rate information",
> +.cmd= hmp_info_dirty_rate,
> +},
> +
> +SRST
> +  ``info dirty_rate``
> +Display the vcpu dirty rate information.
> +ERST
> diff --git a/hmp-commands.hx b/hmp-commands.hx
> index 2d21fe5ad4..84dcc3aae6 100644
> --- a/hmp-commands.hx
> +++ b/hmp-commands.hx
> @@ -1727,3 +1727,17 @@ ERST
>  .flags  = "p",
>  },
>  
> +SRST
> +``calc_dirty_rate`` *second*
> +  Start a round of dirty rate measurement with the period specified in *second*.
> +  The result of the dirty rate measurement may be observed with ``info
> +  dirty_rate`` command.
> +ERST
> +
> +{
> +.name   = "calc_dirty_rate",
> +.args_type  = "second:l,sample_pages_per_GB:l?",
> +.params = "second [sample_pages_per_GB]",
> +.help   = "start a round of guest dirty rate measurement",
> +.cmd= hmp_calc_dirty_rate,
> +},
> diff --git a/include/monitor/hmp.h b/include/monitor/hmp.h
> index 605d57287a..3baa1058e2 100644
> --- a/include/monitor/hmp.h
> +++ b/include/monitor/hmp.h
> @@ -129,5 +129,7 @@ void hmp_info_replay(Monitor *mon, const QDict *qdict);
>  void hmp_replay_break(Monitor *mon, const QDict *qdict);
>  void hmp_replay_delete_break(Monitor *mon, const QDict *qdict);
>  void hmp_replay_seek(Monitor *mon, const QDict *qdict);
> +void hmp_info_dirty_rate(Monitor *mon, const QDict *qdict);
> +void hmp_calc_dirty_rate(Monitor *mon, const QDict *qdict);
>  
>  #endif
> diff --git a/migration/dirtyrate.c b/migration/dirtyrate.c
> index 2ee3890721..320c56ba2c 100644
> --- a/migration/dirtyrate.c
> +++ b/migration/dirtyrate.c
> @@ -20,6 +20,9 @@
>  #include "ram.h"
>  #include "trace.h"
>  #include "dirtyrate.h"
> +#include "monitor/hmp.h"
> +#include "monitor/monitor.h"
> +#include "qapi/qmp/qdict.h"
>  
>  static int CalculatingState = DIRTY_RATE_STATUS_UNSTARTED;
>  static struct DirtyRateStat DirtyStat;
> @@ -447,3 +450,47 @@ struct DirtyRateInfo *qmp_query_dirty_rate(Error **errp)
>  {
>  return query_dirty_rate_info();
>  }
> +
> +void hmp_info_dirty_rate(Monitor *mon, const QDict *qdict)
> +{
> +DirtyRateInfo *info = query_dirty_rate_info();
> +
> +monitor_printf(mon, "Status: %s\n",
> +   DirtyRateStatus_str(info->status));
> +monitor_printf(mon, "Start Time: %"PRIi64" (ms)\n",
> +   info->start_time);
> +monitor_printf(mon, "Sample Pages: %"PRIu64" (per GB)\n",
> +   info->sample_pages);
> +monitor_printf(mon, "Period: %"PRIi64" (sec)\n",
> +   info->calc_time);
> +monitor_printf(mon, "Dirty rate: ");
> +if (info->has_dirty_rate) {
> +monitor_printf(mon, "%"PRIi64" (MB/s)\n", info->dirty_rate);
> +} else {
> +monitor_printf(mon, "(not ready)\n");
> +}
> +g_free(info);
> +}
> +
> +void hmp_calc_dirty_rate(Monitor *mon, const QDict *qdict)
> +{
> +int64_t sec = qdict_get_try_int(qdict, "second", 0);
> +int64_t sample_pages = qdict_get_try_int(qdict, "sample_pages_per_GB", -1);
> +bool has_sample_pages = (sample_pages != -1);
> +Error *err = NULL;
> +
> +if (!sec) {
> +monitor_printf(mon, "Incorrect period length specified!\n");
> +return;
> +}
> +
> +qmp_calc_dirty_rate(sec, has_sample_pages, sample_pages, &err);
> +if (err) {
> +hmp_handle_error(mon, err);
> +return;
> +}
> +
> +monitor_printf(mon, "Starting dirty rate measurement with period %"PRIi64
> +   " seconds\n", sec);
> +monitor_printf(mon, "[Please use 'info dirty_rate' to check results]\n");
> +}
> -- 
> 2.18.2
> 
-- 
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK




Re: [PATCH v6 3/4] Jobs based on custom runners: docs and gitlab-runner setup playbook

2021-06-08 Thread Wainer dos Santos Moschetta

Hi,

On 6/8/21 12:14 AM, Cleber Rosa wrote:

To have the jobs dispatched to custom runners, gitlab-runner must
be installed, active as a service and properly configured.  The
variables file and playbook introduced here should help with those
steps.

The playbook introduced here covers the Linux distributions and
has been primarily tested on OS/machines that the QEMU project
has available to act as runners, namely:

  * Ubuntu 20.04 on aarch64
  * Ubuntu 18.04 on s390x

But, it should work on all other Linux distributions.  Earlier
versions were tested on FreeBSD too, so chances of success are
high.

Signed-off-by: Cleber Rosa 
---
  docs/devel/ci.rst  | 57 
  scripts/ci/setup/.gitignore|  1 +
  scripts/ci/setup/gitlab-runner.yml | 61 ++
  scripts/ci/setup/vars.yml.template | 12 ++
  4 files changed, 131 insertions(+)
  create mode 100644 scripts/ci/setup/.gitignore
  create mode 100644 scripts/ci/setup/gitlab-runner.yml
  create mode 100644 scripts/ci/setup/vars.yml.template

diff --git a/docs/devel/ci.rst b/docs/devel/ci.rst
index 35c6b5e269..bbd89e54d7 100644
--- a/docs/devel/ci.rst
+++ b/docs/devel/ci.rst
@@ -56,3 +56,60 @@ To run the playbook, execute::
  
cd scripts/ci/setup

ansible-playbook -i inventory build-environment.yml
+
+gitlab-runner setup and registration
+
+
+The gitlab-runner agent needs to be installed on each machine that
+will run jobs.  The association between a machine and a GitLab project
+happens with a registration token.  To find the registration token for
+your repository/project, navigate on GitLab's web UI to:
+
+ * Settings (the gears like icon), then
+ * CI/CD, then
+ * Runners, and click on the "Expand" button, then
+ * Under "Set up a specific Runner manually", look for the value under
+   "Use the following registration token during setup"
+
+Copy the ``scripts/ci/setup/vars.yml.template`` file to
+``scripts/ci/setup/vars.yml``.  Then, set the
+``gitlab_runner_registration_token`` variable to the value obtained
+earlier.
+
+.. note:: gitlab-runner is not available from the standard location
+  for all OS and architectures combinations.  For some systems,
+  a custom build may be necessary.  Some builds are available
+  at https://cleber.fedorapeople.org/gitlab-runner/ and this
+  URI may be used as a value on ``vars.yml``
I think you can remove the information about gitlab-runner not being
available for some systems.

+
+To run the playbook, execute::
+
+  cd scripts/ci/setup
+  ansible-playbook -i inventory gitlab-runner.yml
+
+Following the registration, it's necessary to configure the runner tags,
+and optionally other configurations on the GitLab UI.  Navigate to:
+
+ * Settings (the gears like icon), then
+ * CI/CD, then
+ * Runners, and click on the "Expand" button, then
+ * "Runners activated for this project", then
+ * Click on the "Edit" icon (next to the "Lock" Icon)
+
+Under tags, add values matching the jobs a runner should run.  For a
+Ubuntu 20.04 aarch64 system, the tags should be set as::
+
+  ubuntu_20.04,aarch64


Also, users no longer need to manually create the tags.

The rest of the file looks good to me.


+
+Because the job definition at ``.gitlab-ci.d/custom-runners.yml``
+would contain::
+
+  ubuntu-20.04-aarch64-all:
+   tags:
+   - ubuntu_20.04
+   - aarch64
+
+It's also recommended to:
+
+ * increase the "Maximum job timeout" to something like ``2h``
+ * give it a better Description
diff --git a/scripts/ci/setup/.gitignore b/scripts/ci/setup/.gitignore
new file mode 100644
index 00..f112d05dd0
--- /dev/null
+++ b/scripts/ci/setup/.gitignore
@@ -0,0 +1 @@
+vars.yml
\ No newline at end of file
diff --git a/scripts/ci/setup/gitlab-runner.yml b/scripts/ci/setup/gitlab-runner.yml
new file mode 100644
index 00..98dab92bb5
--- /dev/null
+++ b/scripts/ci/setup/gitlab-runner.yml
@@ -0,0 +1,61 @@
+---
+- name: Installation of gitlab-runner
+  hosts: all
+  vars_files:
+- vars.yml
+  tasks:
+- debug:
+msg: 'Checking for a valid GitLab registration token'
+  failed_when: "gitlab_runner_registration_token == 'PLEASE_PROVIDE_A_VALID_TOKEN'"
+
+- name: Create a group for the gitlab-runner service
+  group:
+name: gitlab-runner
+
+- name: Create a user for the gitlab-runner service
+  user:
+user: gitlab-runner
+group: gitlab-runner
+comment: GitLab Runner
+home: /home/gitlab-runner
+shell: /bin/bash
+
+- name: Remove the .bash_logout file when on Ubuntu systems
+  file:
+path: /home/gitlab-runner/.bash_logout
+state: absent
+  when: "ansible_facts['distribution'] == 'Ubuntu'"
+
+- name: Set the Operating System for gitlab-runner
+  set_fact:
+gitlab_runner_os: "{{ ansible_facts[\"system\"]|lower }}"
+- debug:
+msg: gitlab-runner OS is {{ 

Re: [PATCH v3 2/7] hmp: Add "calc_dirty_rate" and "info dirty_rate" cmds

2021-06-08 Thread Dr. David Alan Gilbert
* huang...@chinatelecom.cn (huang...@chinatelecom.cn) wrote:
> From: Peter Xu 
> 
> These two commands are missing when adding the QMP sister commands.
> Add them, so developers can play with them easier.
> 
> Signed-off-by: Peter Xu 
> Signed-off-by: Hyman Huang(黄勇) 


Reviewed-by: Dr. David Alan Gilbert 

> ---
>  hmp-commands-info.hx  | 13 
>  hmp-commands.hx   | 14 +
>  include/monitor/hmp.h |  2 ++
>  migration/dirtyrate.c | 47 +++
>  4 files changed, 76 insertions(+)
> 
> diff --git a/hmp-commands-info.hx b/hmp-commands-info.hx
> index b2347a6aea..fb59c27200 100644
> --- a/hmp-commands-info.hx
> +++ b/hmp-commands-info.hx
> @@ -867,3 +867,16 @@ SRST
>``info replay``
>  Display the record/replay information: mode and the current icount.
>  ERST
> +
> +{
> +.name   = "dirty_rate",
> +.args_type  = "",
> +.params = "",
> +.help   = "show dirty rate information",
> +.cmd= hmp_info_dirty_rate,
> +},
> +
> +SRST
> +  ``info dirty_rate``
> +Display the vcpu dirty rate information.
> +ERST
> diff --git a/hmp-commands.hx b/hmp-commands.hx
> index 2d21fe5ad4..84dcc3aae6 100644
> --- a/hmp-commands.hx
> +++ b/hmp-commands.hx
> @@ -1727,3 +1727,17 @@ ERST
>  .flags  = "p",
>  },
>  
> +SRST
> +``calc_dirty_rate`` *second*
> +  Start a round of dirty rate measurement with the period specified in *second*.
> +  The result of the dirty rate measurement may be observed with ``info
> +  dirty_rate`` command.
> +ERST
> +
> +{
> +.name   = "calc_dirty_rate",
> +.args_type  = "second:l,sample_pages_per_GB:l?",
> +.params = "second [sample_pages_per_GB]",
> +.help   = "start a round of guest dirty rate measurement",
> +.cmd= hmp_calc_dirty_rate,
> +},
> diff --git a/include/monitor/hmp.h b/include/monitor/hmp.h
> index 605d57287a..3baa1058e2 100644
> --- a/include/monitor/hmp.h
> +++ b/include/monitor/hmp.h
> @@ -129,5 +129,7 @@ void hmp_info_replay(Monitor *mon, const QDict *qdict);
>  void hmp_replay_break(Monitor *mon, const QDict *qdict);
>  void hmp_replay_delete_break(Monitor *mon, const QDict *qdict);
>  void hmp_replay_seek(Monitor *mon, const QDict *qdict);
> +void hmp_info_dirty_rate(Monitor *mon, const QDict *qdict);
> +void hmp_calc_dirty_rate(Monitor *mon, const QDict *qdict);
>  
>  #endif
> diff --git a/migration/dirtyrate.c b/migration/dirtyrate.c
> index 2ee3890721..320c56ba2c 100644
> --- a/migration/dirtyrate.c
> +++ b/migration/dirtyrate.c
> @@ -20,6 +20,9 @@
>  #include "ram.h"
>  #include "trace.h"
>  #include "dirtyrate.h"
> +#include "monitor/hmp.h"
> +#include "monitor/monitor.h"
> +#include "qapi/qmp/qdict.h"
>  
>  static int CalculatingState = DIRTY_RATE_STATUS_UNSTARTED;
>  static struct DirtyRateStat DirtyStat;
> @@ -447,3 +450,47 @@ struct DirtyRateInfo *qmp_query_dirty_rate(Error **errp)
>  {
>  return query_dirty_rate_info();
>  }
> +
> +void hmp_info_dirty_rate(Monitor *mon, const QDict *qdict)
> +{
> +DirtyRateInfo *info = query_dirty_rate_info();
> +
> +monitor_printf(mon, "Status: %s\n",
> +   DirtyRateStatus_str(info->status));
> +monitor_printf(mon, "Start Time: %"PRIi64" (ms)\n",
> +   info->start_time);
> +monitor_printf(mon, "Sample Pages: %"PRIu64" (per GB)\n",
> +   info->sample_pages);
> +monitor_printf(mon, "Period: %"PRIi64" (sec)\n",
> +   info->calc_time);
> +monitor_printf(mon, "Dirty rate: ");
> +if (info->has_dirty_rate) {
> +monitor_printf(mon, "%"PRIi64" (MB/s)\n", info->dirty_rate);
> +} else {
> +monitor_printf(mon, "(not ready)\n");
> +}
> +g_free(info);
> +}
> +
> +void hmp_calc_dirty_rate(Monitor *mon, const QDict *qdict)
> +{
> +int64_t sec = qdict_get_try_int(qdict, "second", 0);
> +int64_t sample_pages = qdict_get_try_int(qdict, "sample_pages_per_GB", -1);
> +bool has_sample_pages = (sample_pages != -1);
> +Error *err = NULL;
> +
> +if (!sec) {
> +monitor_printf(mon, "Incorrect period length specified!\n");
> +return;
> +}
> +
> +qmp_calc_dirty_rate(sec, has_sample_pages, sample_pages, &err);
> +if (err) {
> +hmp_handle_error(mon, err);
> +return;
> +}
> +
> +monitor_printf(mon, "Starting dirty rate measurement with period %"PRIi64
> +   " seconds\n", sec);
> +monitor_printf(mon, "[Please use 'info dirty_rate' to check results]\n");
> +}
> -- 
> 2.18.2
> 
-- 
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK




Re: [PATCH v3 1/7] migration/dirtyrate: make sample page count configurable

2021-06-08 Thread Dr. David Alan Gilbert
* Eric Blake (ebl...@redhat.com) wrote:
> On Mon, Jun 07, 2021 at 09:11:34AM +0800, huang...@chinatelecom.cn wrote:
> > From: Hyman Huang(黄勇) 
> > 
> > introduce optional sample-pages argument in calc-dirty-rate,
> > making sample page count per GB configurable so that more
> > accurate dirtyrate can be calculated.
> > 
> > Signed-off-by: Hyman Huang(黄勇) 
> > ---
> 
> > +++ b/qapi/migration.json
> >  # Example:
> > -#   {"command": "calc-dirty-rate", "data": {"calc-time": 1} }
> > +#   {"command": "calc-dirty-rate", "data": {"calc-time": 1, 'sample-pages': 512} }
> >  #
> >  ##
> > -{ 'command': 'calc-dirty-rate', 'data': {'calc-time': 'int64'} }
> > +{ 'command': 'calc-dirty-rate', 'data': {'calc-time': 'int64', '*sample-pages': 'int'} }
> 
> Long line. Please wrap at 80 columns.

I can fix up that.

> -- 
> Eric Blake, Principal Software Engineer
> Red Hat, Inc.   +1-919-301-3266
> Virtualization:  qemu.org | libvirt.org
-- 
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK




Re: [PATCH] hmp: Add "calc_dirty_rate" and "info dirty_rate" cmds

2021-06-08 Thread Dr. David Alan Gilbert
* Peter Xu (pet...@redhat.com) wrote:
> These two commands are missing when adding the QMP sister commands.  Add them,
> so developers can play with them easier.
> 
> Cc: Dr. David Alan Gilbert 
> Cc: Juan Quintela 
> Cc: Leonardo Bras Soares Passos 
> Cc: Chuan Zheng 
> Cc: huang...@chinatelecom.cn
> Signed-off-by: Peter Xu 

Reviewed-by: Dr. David Alan Gilbert 

> ---
> PS: I really doubt whether this is working as expected... I ran one 200MB/s
> workload inside, what I measured is 20MB/s with current algorithm...  Sampling
> 512 pages out of 1G mem is not wise enough I guess, especially that assumes
> dirty workload is spread across the memories while it's normally not the case..

What size of address space did you dirty - was it 20MB?

Dave

> ---
>  hmp-commands-info.hx  | 13 +
>  hmp-commands.hx   | 14 ++
>  include/monitor/hmp.h |  2 ++
>  migration/dirtyrate.c | 43 +++
>  4 files changed, 72 insertions(+)
> 
> diff --git a/hmp-commands-info.hx b/hmp-commands-info.hx
> index ab0c7aa5eea..f8a9141dd8a 100644
> --- a/hmp-commands-info.hx
> +++ b/hmp-commands-info.hx
> @@ -880,3 +880,16 @@ SRST
>``info replay``
>  Display the record/replay information: mode and the current icount.
>  ERST
> +
> +{
> +.name   = "dirty_rate",
> +.args_type  = "",
> +.params = "",
> +.help   = "show dirty rate information",
> +.cmd= hmp_info_dirty_rate,
> +},
> +
> +SRST
> +  ``info dirty_rate``
> +Display the vcpu dirty rate information.
> +ERST
> diff --git a/hmp-commands.hx b/hmp-commands.hx
> index 2d21fe5ad42..4c27fb91f7d 100644
> --- a/hmp-commands.hx
> +++ b/hmp-commands.hx
> @@ -1727,3 +1727,17 @@ ERST
>  .flags  = "p",
>  },
>  
> +SRST
> +``calc_dirty_rate`` *second*
> +  Start a round of dirty rate measurement with the period specified in 
> *second*.
> +  The result of the dirty rate measurement may be observed with ``info
> +  dirty_rate`` command.
> +ERST
> +
> +{
> +.name   = "calc_dirty_rate",
> +.args_type  = "second:l",
> +.params = "second",
> +.help   = "start a round of guest dirty rate measurement",
> +.cmd= hmp_calc_dirty_rate,
> +},
> diff --git a/include/monitor/hmp.h b/include/monitor/hmp.h
> index 605d57287ae..3baa1058e2c 100644
> --- a/include/monitor/hmp.h
> +++ b/include/monitor/hmp.h
> @@ -129,5 +129,7 @@ void hmp_info_replay(Monitor *mon, const QDict *qdict);
>  void hmp_replay_break(Monitor *mon, const QDict *qdict);
>  void hmp_replay_delete_break(Monitor *mon, const QDict *qdict);
>  void hmp_replay_seek(Monitor *mon, const QDict *qdict);
> +void hmp_info_dirty_rate(Monitor *mon, const QDict *qdict);
> +void hmp_calc_dirty_rate(Monitor *mon, const QDict *qdict);
>  
>  #endif
> diff --git a/migration/dirtyrate.c b/migration/dirtyrate.c
> index ccb98147e89..382287d2912 100644
> --- a/migration/dirtyrate.c
> +++ b/migration/dirtyrate.c
> @@ -20,6 +20,9 @@
>  #include "ram.h"
>  #include "trace.h"
>  #include "dirtyrate.h"
> +#include "monitor/hmp.h"
> +#include "monitor/monitor.h"
> +#include "qapi/qmp/qdict.h"
>  
>  static int CalculatingState = DIRTY_RATE_STATUS_UNSTARTED;
>  static struct DirtyRateStat DirtyStat;
> @@ -424,3 +427,43 @@ struct DirtyRateInfo *qmp_query_dirty_rate(Error **errp)
>  {
>  return query_dirty_rate_info();
>  }
> +
> +void hmp_info_dirty_rate(Monitor *mon, const QDict *qdict)
> +{
> +DirtyRateInfo *info = query_dirty_rate_info();
> +
> +monitor_printf(mon, "Status: %s\n",
> +   DirtyRateStatus_str(info->status));
> +monitor_printf(mon, "Start Time: %"PRIi64" (ms)\n",
> +   info->start_time);
> +monitor_printf(mon, "Period: %"PRIi64" (sec)\n",
> +   info->calc_time);
> +monitor_printf(mon, "Dirty rate: ");
> +if (info->has_dirty_rate) {
> +monitor_printf(mon, "%"PRIi64" (MB/s)\n", info->dirty_rate);
> +} else {
> +monitor_printf(mon, "(not ready)\n");
> +}
> +g_free(info);
> +}
> +
> +void hmp_calc_dirty_rate(Monitor *mon, const QDict *qdict)
> +{
> +int64_t sec = qdict_get_try_int(qdict, "second", 0);
> +Error *err = NULL;
> +
> +if (!sec) {
> +monitor_printf(mon, "Incorrect period length specified!\n");
> +return;
> +}
> +
> +qmp_calc_dirty_rate(sec, &err);
> +if (err) {
> +hmp_handle_error(mon, err);
> +return;
> +}
> +
> +monitor_printf(mon, "Starting dirty rate measurement with period %"PRIi64
> +   " seconds\n", sec);
> +monitor_printf(mon, "[Please use 'info dirty_rate' to check results]\n");
> +}
> -- 
> 2.31.1
> 
-- 
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK




Re: [PATCH v6 2/4] Jobs based on custom runners: build environment docs and playbook

2021-06-08 Thread Wainer dos Santos Moschetta

Hi,

On 6/8/21 12:14 AM, Cleber Rosa wrote:

To run basic jobs on custom runners, the environment needs to be
properly set up.  The most common requirement is having the right
packages installed.

The playbook introduced here covers the QEMU's project s390x and
aarch64 machines.  At the time this is being proposed, those machines
have already had this playbook applied to them.

Signed-off-by: Cleber Rosa 
---
  docs/devel/ci.rst  | 30 
  scripts/ci/setup/build-environment.yml | 98 ++
  scripts/ci/setup/inventory.template|  1 +
  3 files changed, 129 insertions(+)
  create mode 100644 scripts/ci/setup/build-environment.yml
  create mode 100644 scripts/ci/setup/inventory.template

diff --git a/docs/devel/ci.rst b/docs/devel/ci.rst
index 585b7bf4b8..35c6b5e269 100644
--- a/docs/devel/ci.rst
+++ b/docs/devel/ci.rst
@@ -26,3 +26,33 @@ gitlab-runner, is called a "custom runner".
  The GitLab CI job definitions for the custom runners are located under::
  
.gitlab-ci.d/custom-runners.yml

+
+Machine Setup Howto
+-------------------
+
+For all Linux based systems, the setup can be mostly automated by the
+execution of two Ansible playbooks.  Create an ``inventory`` file
+under ``scripts/ci/setup``, such as this::

This is missing a mention of the template file.

+
+  fully.qualified.domain
+  other.machine.hostname
+
+You may need to set some variables in the inventory file itself.  One
+very common need is to tell Ansible to use a Python 3 interpreter on
+those hosts.  This would look like::
+
+  fully.qualified.domain ansible_python_interpreter=/usr/bin/python3
+  other.machine.hostname ansible_python_interpreter=/usr/bin/python3
+
+Build environment
+~~~~~~~~~~~~~~~~~
+
+The ``scripts/ci/setup/build-environment.yml`` Ansible playbook will
+set up machines with the environment needed to perform builds and run
+QEMU tests.  It covers a number of different Linux distributions and
+FreeBSD.
+
+To run the playbook, execute::
+
+  cd scripts/ci/setup
+  ansible-playbook -i inventory build-environment.yml
diff --git a/scripts/ci/setup/build-environment.yml 
b/scripts/ci/setup/build-environment.yml
new file mode 100644
index 00..664f2f0519
--- /dev/null
+++ b/scripts/ci/setup/build-environment.yml
@@ -0,0 +1,98 @@
+---
+- name: Installation of basic packages to build QEMU
+  hosts: all


You will need to "become: yes" if the login user is not privileged, right?

Or mention in the documentation how the user should configure the
connection for privileged login.


About privilege escalation with Ansible: 
https://docs.ansible.com/ansible/latest/user_guide/become.html



+  tasks:


Just a tip: you can put all those tasks under a block
(https://docs.ansible.com/ansible/latest/user_guide/playbooks_blocks.html)
so that the "ansible_facts['distribution'] == 'Ubuntu'" check is done only
once.


I reviewed the remainder of the playbook; it looks good to me.


+- name: Update apt cache
+  apt:
+update_cache: yes
+  when:
+- ansible_facts['distribution'] == 'Ubuntu'
+
+- name: Install basic packages to build QEMU on Ubuntu 18.04/20.04
+  package:
+name:
+# Originally from tests/docker/dockerfiles/ubuntu1804.docker
+  - ccache
+  - gcc
+  - gettext
+  - git
+  - glusterfs-common
+  - libaio-dev
+  - libattr1-dev
+  - libbrlapi-dev
+  - libbz2-dev
+  - libcacard-dev
+  - libcap-ng-dev
+  - libcurl4-gnutls-dev
+  - libdrm-dev
+  - libepoxy-dev
+  - libfdt-dev
+  - libgbm-dev
+  - libgtk-3-dev
+  - libibverbs-dev
+  - libiscsi-dev
+  - libjemalloc-dev
+  - libjpeg-turbo8-dev
+  - liblzo2-dev
+  - libncurses5-dev
+  - libncursesw5-dev
+  - libnfs-dev
+  - libnss3-dev
+  - libnuma-dev
+  - libpixman-1-dev
+  - librados-dev
+  - librbd-dev
+  - librdmacm-dev
+  - libsasl2-dev
+  - libsdl2-dev
+  - libseccomp-dev
+  - libsnappy-dev
+  - libspice-protocol-dev
+  - libssh-dev
+  - libusb-1.0-0-dev
+  - libusbredirhost-dev
+  - libvdeplug-dev
+  - libvte-2.91-dev
+  - libzstd-dev
+  - make
+  - python3-yaml
+  - python3-sphinx
+  - python3-sphinx-rtd-theme
+  - ninja-build
+  - sparse
+  - xfslibs-dev
+state: present
+  when:
+- ansible_facts['distribution'] == 'Ubuntu'
+
+- name: Install packages to build QEMU on Ubuntu 18.04/20.04 on non-s390x
+  package:
+name:
+  - libspice-server-dev
+  - libxen-dev
+state: present
+  when:
+- ansible_facts['distribution'] == 'Ubuntu'
+- ansible_facts['architecture'] != 's390x'
+
+- name: Install basic packages to build QEMU on Ubuntu 18.04

Re: [PATCH] qemu-{img,nbd}: Don't report zeroed cluster as a hole

2021-06-08 Thread Eric Blake
On Tue, Jun 08, 2021 at 07:38:10PM +0300, Nir Soffer wrote:
> On Tue, Jun 8, 2021 at 12:22 AM Eric Blake  wrote:
> >
> > On Mon, Jun 07, 2021 at 11:22:04PM +0300, Nir Soffer wrote:
> > > When zeroing a cluster in an image with backing file, qemu-img and
> > > qemu-nbd reported the area as a hole. This does not affect the guest
> > > since the area is read as zero, but breaks code trying to reconstruct
> > > the image chain based on qemu-img map or qemu-nbd block status response.
> >
> > Trying to reconstruct the image chain based on qemu-nbd block status
> > should not be attempted on just base:allocation data, but should also
> > take into account qemu:allocation-depth.
> 
> This is correct when looking at the entire chain, but when we reconstruct
> image data, we copy each image in the layer *without* the backing chain.
> 
> The example I provided was not detailed enough, what we actually do is:
> 
> qemu-nbd .. 'json:{"driver": "qcow2", "backing": null, "file":
> {"driver": "file", "filename": "top.qcow2"}}'
> 
> So there is no backing chain and allocation depth is not relevant.
> - Allocated areas should be reported with flags 0
> - Zero areas which are not holes should be reported as NBD_STATE_ZERO
> - Zero areas which are holes (not allocated in this image) should be
> reported as NBD_STATE_HOLE

Again, what you WANT is qemu:allocation-depth.

$ ./qemu-nbd -r -t -f qcow2 -A 'json:{"driver":"qcow2", "backing":null, \
  "file":{"driver":"file", "filename":"top.qcow2"}}'
$ nbdinfo --map=qemu:allocation-depth nbd://localhost
 0   655360  unallocated
 65536  1310721  local
196608   655360  unallocated

$ nbdinfo --map nbd://localhost
 0   655363  hole,zero
 65536   655360  allocated
131072  1310723  hole,zero

You don't care whether the information reads as zero or not, but
whether top.qcow2 is responsible for the data at that cluster.
base:allocation does not answer that question.  But
qemu:allocation-depth answers it perfectly.

> 
> > From the perspective of the
> > core NBD protocol, there is no backing file, so trying to guess what
> > the backing file contains without using qemu extensions is unlikely to
> > be correct, as shown in your example.  The fact that you could abuse
> > it with qemu 5.2 but it broke in 6.0
> 
> I'm not abusing anything, I'm only using public APIs. qemu-nbd behavior
> should not change without good reason, and we did not have any good
> reason to change the behavior for qcow2 images.

Ah, but we did.  Exposing BDRV_BLOCK_ALLOCATED as server, but
consuming it as BDRV_BLOCK_DATA as client, was inconsistent.  It was a
bug that we ever used BLOCK_ALLOCATED in the first place, when it has
_always_ been that the NBD semantics were supposed to be modeled on
our definition of BLOCK_DATA.  That it took us a couple of years to
notice our bug is unfortunate, but we DO have a good reason for the
change - we were fixing an actual bug where we were reporting
incorrect information compared to what the NBD spec was documenting.

> 
> >  is not necessarily the sign of a
> > regression in 6.0, but rather could be evidence that you have been
> > trying to use an undocumented implementation quirk rather than a
> > stable interface.
> 
> I'm pretty convinced that this is a regression in qemu-nbd 6.0 since I created
> this regression :-)

I understand that you were surprised by the ramifications of your
patch causing more changes than what you expected, but I still argue
that your patch was correct and that the decision to incorporate it
was intentional because it was the right thing to do.  Papering over
the fallout for the sake of clients that should be using
qemu:allocation-depth instead does not seem like it is worth the
maintenance nightmare to me.

> 
> Since we started using qemu-nbd in 2018, qemu-nbd has always reported
> holes in qcow2 images, but not in raw files. We discussed this several times,
> and you explained that we have allocation information from qcow2, but not
> from raw format.
> 
> My attempt to fix hole reporting in raw images has failed; reporting holes in
> raw images is nice to have, but it broke the behavior of qemu-nbd with qcow2
> images, which is a critical issue for ovirt.

Rather, ovirt had been relying on buggy behavior, and now that the bug
has been fixed, we are scrambling to figure out how to make ovirt
still play nicely.  But my answer to that is to use
qemu:allocation-depth.  It was introduced in 5.2, so it predates the
point where base:allocation behavior was fixed, and it provides the
answer to the question you are really asking (which parts of my image
came from the image directly, rather than a backing file), rather than
merely an indirect answer (how can I abuse the determination of which
parts of the image are allocated or sparse to imply that those same
portions must come from a backing image).  There is nothing
semantically wrong with a sparse cluster in the top 

Re: [PATCH v3 06/33] util/async: aio_co_schedule(): support reschedule in same ctx

2021-06-08 Thread Vladimir Sementsov-Ogievskiy

14.05.2021 00:04, Paolo Bonzini wrote:

On 12/05/21 09:15, Vladimir Sementsov-Ogievskiy wrote:




I don't understand.  Why doesn't aio_co_enter go through the ctx != 
qemu_get_current_aio_context() branch and just do aio_co_schedule? That was at 
least the idea behind aio_co_wake and aio_co_enter.



Because ctx is exactly qemu_get_current_aio_context(), as we are not in an
iothread but in the nbd connection thread. So, qemu_get_current_aio_context()
returns qemu_aio_context.


So the problem is that threads other than the main thread and
the I/O thread should not return qemu_aio_context.  The vCPU thread
may need to return it when running with BQL taken, though.

Something like this (untested):

diff --git a/include/block/aio.h b/include/block/aio.h
index 5f342267d5..10fcae1515 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -691,10 +691,13 @@ void aio_co_enter(AioContext *ctx, struct Coroutine *co);
   * Return the AioContext whose event loop runs in the current thread.
   *
   * If called from an IOThread this will be the IOThread's AioContext.  If
- * called from another thread it will be the main loop AioContext.
+ * called from the main thread or with the "big QEMU lock" taken it
+ * will be the main loop AioContext.
   */
  AioContext *qemu_get_current_aio_context(void);

+void qemu_set_current_aio_context(AioContext *ctx);
+
  /**
   * aio_context_setup:
   * @ctx: the aio context
diff --git a/iothread.c b/iothread.c
index 7f086387be..22b967e77c 100644
--- a/iothread.c
+++ b/iothread.c
@@ -39,11 +39,23 @@ DECLARE_CLASS_CHECKERS(IOThreadClass, IOTHREAD,
  #define IOTHREAD_POLL_MAX_NS_DEFAULT 0ULL
  #endif

-static __thread IOThread *my_iothread;
+static __thread AioContext *my_aiocontext;
+
+void qemu_set_current_aio_context(AioContext *ctx)
+{
+    assert(!my_aiocontext);
+    my_aiocontext = ctx;
+}

  AioContext *qemu_get_current_aio_context(void)
  {
-    return my_iothread ? my_iothread->ctx : qemu_get_aio_context();
+    if (my_aiocontext) {
+    return my_aiocontext;
+    }
+    if (qemu_mutex_iothread_locked()) {
+    return qemu_get_aio_context();
+    }
+    return NULL;
  }

  static void *iothread_run(void *opaque)
@@ -56,7 +68,7 @@ static void *iothread_run(void *opaque)
   * in this new thread uses glib.
   */
  g_main_context_push_thread_default(iothread->worker_context);
-    my_iothread = iothread;
+    qemu_set_current_aio_context(iothread->ctx);
  iothread->thread_id = qemu_get_thread_id();
   qemu_sem_post(&iothread->init_done_sem);

diff --git a/stubs/iothread.c b/stubs/iothread.c
index 8cc9e28c55..25ff398894 100644
--- a/stubs/iothread.c
+++ b/stubs/iothread.c
@@ -6,3 +6,7 @@ AioContext *qemu_get_current_aio_context(void)
  {
  return qemu_get_aio_context();
  }
+
+void qemu_set_current_aio_context(AioContext *ctx)
+{
+}
diff --git a/tests/unit/iothread.c b/tests/unit/iothread.c
index afde12b4ef..cab38b3da8 100644
--- a/tests/unit/iothread.c
+++ b/tests/unit/iothread.c
@@ -30,13 +30,26 @@ struct IOThread {
  bool stopping;
  };

-static __thread IOThread *my_iothread;
+static __thread AioContext *my_aiocontext;
+
+void qemu_set_current_aio_context(AioContext *ctx)
+{
+    assert(!my_aiocontext);
+    my_aiocontext = ctx;
+}

  AioContext *qemu_get_current_aio_context(void)
  {
-    return my_iothread ? my_iothread->ctx : qemu_get_aio_context();
+    if (my_aiocontext) {
+    return my_aiocontext;
+    }
+    if (qemu_mutex_iothread_locked()) {
+    return qemu_get_aio_context();
+    }
+    return NULL;
  }

+
  static void iothread_init_gcontext(IOThread *iothread)
  {
  GSource *source;
@@ -54,7 +67,7 @@ static void *iothread_run(void *opaque)

  rcu_register_thread();

-    my_iothread = iothread;
+    qemu_set_current_aio_context(iothread->ctx);
   qemu_mutex_lock(&iothread->init_done_lock);
   iothread->ctx = aio_context_new(&error_abort);

diff --git a/util/main-loop.c b/util/main-loop.c
index d9c55df6f5..4ae5b23e99 100644
--- a/util/main-loop.c
+++ b/util/main-loop.c
@@ -170,6 +170,7 @@ int qemu_init_main_loop(Error **errp)
  if (!qemu_aio_context) {
  return -EMFILE;
  }
+    qemu_set_current_aio_context(qemu_aio_context);
  qemu_notify_bh = qemu_bh_new(notify_event_cb, NULL);
  gpollfds = g_array_new(FALSE, FALSE, sizeof(GPollFD));
  src = aio_get_g_source(qemu_aio_context);

If it works for you, I can post it as a formal patch.



This doesn't work for iotests... qemu-io goes through the version in the stub.
It works if I add:

diff --git a/stubs/iothread.c b/stubs/iothread.c
index 8cc9e28c55..967a01c4f0 100644
--- a/stubs/iothread.c
+++ b/stubs/iothread.c
@@ -2,7 +2,18 @@
 #include "block/aio.h"
 #include "qemu/main-loop.h"
 
+static __thread AioContext *my_aiocontext;

+
+void qemu_set_current_aio_context(AioContext *ctx)
+{
+assert(!my_aiocontext);
+my_aiocontext = ctx;
+}
+
 AioContext *qemu_get_current_aio_context(void)
 {
-return qemu_get_aio_context();
+if (my_aiocontext) {
+

Re: [RFC PATCH] target/ppc: fix address translation bug for hash table mmus

2021-06-08 Thread Bruno Piazera Larsen


On 08/06/2021 13:37, Bruno Piazera Larsen wrote:



On 08/06/2021 12:35, Richard Henderson wrote:

On 6/8/21 7:39 AM, Bruno Piazera Larsen wrote:
That's odd.  We already have more arguments than the number of 
argument registers...  A 5x slowdown is distinctly odd.
I did some more digging and the problem is not with 
ppc_radix64_check_prot, the problem is ppc_radix64_xlate, which 
currently has 7 arguments and we're increasing to 8. 7 feels like 
the correct number, but I couldn't find docs supporting it, so I 
could be wrong.


According to tcg/ppc/tcg-target.c.inc, there are 8 argument registers 
for ppc hosts.  But now I see you didn't actually say on which host 
you observed the problem...  It's 6 argument registers for x86_64 host.


Oh, yes, sorry. I'm experiencing it on a POWER9 machine (ppc64le
architecture). According to tcg this shouldn't be the issue, then, so I
don't know if that's the real reason or not. All I know is that it happens
as soon as gcc can't optimize an argument away (fprintf in radix64_xlate,
using one of the mmuidx_* functions, defining those as macros).


I'll test it in my x86_64 machine and see if such a slowdown happens. 
It's not conclusive evidence, but the function is too complex for me 
to follow the disassembly if I can avoid it...


The test has been done: the slowdown also happens on the x86_64 machine
(but without the change it's already 360s, so I don't know if the slowdown
is that dramatic), so it's _probably_ not going over the argument register
count. I have no clue what it could be. Still working on the struct version
to see if anything changes.


--
Bruno Piazera Larsen
Instituto de Pesquisas ELDORADO
Departamento Computação Embarcada
Analista de Software Trainee

Re: [PATCH v2 0/5] mptcp support

2021-06-08 Thread Dr. David Alan Gilbert
* Dr. David Alan Gilbert (git) (dgilb...@redhat.com) wrote:
> From: "Dr. David Alan Gilbert" 
> 
> Hi,
>   This set adds support for multipath TCP (mptcp), and has
> been tested for migration and (lightly) for NBD.
> 
>   Multipath-tcp is a bit like bonding, but at L3; you can use
> it to handle failure, but can also use it to split traffic across
> multiple interfaces.
> 
>   Using a pair of 10Gb interfaces, I've managed to get 19Gbps
> (with the only tuning being using huge pages and turning the MTU up).
> 
>   It needs a bleeding-edge Linux kernel (in some older ones you get
> false accept messages for the subflows), and a C lib that has the
> constants defined (as current glibc does).
> 
>   To use it you just need to append ,mptcp to an address; for migration:
> 
>   -incoming tcp:0:,mptcp
>   migrate -d tcp:192.168.11.20:,mptcp
> 
> For nbd:
> 
>   (qemu) nbd_server_start 0.0.0.0:,mptcp=on
> 
>   -blockdev 
> driver=nbd,server.type=inet,server.host=192.168.11.20,server.port=,server.mptcp=on,node-name=nbddisk,export=mydisk
>  \
>   -device virtio-blk,drive=nbddisk,id=disk0
> 
> (Many of the other NBD address parsers/forms would need extra work)
> 
>   All comments welcome.
> 
> Dave

Queued

> 
> v2
>   Use of if defined(...) in the json file based on feedback
>   A few missing ifdef's (from a bsd build test)
>   Added nbd example.
> 
> 
> Dr. David Alan Gilbert (5):
>   channel-socket: Only set CLOEXEC if we have space for fds
>   io/net-listener: Call the notifier during finalize
>   migration: Add cleanup hook for inwards migration
>   migration/socket: Close the listener at the end
>   sockets: Support multipath TCP
> 
>  io/channel-socket.c   |  8 
>  io/dns-resolver.c |  4 
>  io/net-listener.c |  3 +++
>  migration/migration.c |  3 +++
>  migration/migration.h |  4 
>  migration/multifd.c   |  5 +
>  migration/socket.c| 24 ++--
>  qapi/sockets.json |  5 -
>  util/qemu-sockets.c   | 23 +++
>  9 files changed, 68 insertions(+), 11 deletions(-)
> 
> -- 
> 2.31.1
> 
> 
-- 
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK




Re: [PATCH v6 1/4] Jobs based on custom runners: documentation and configuration placeholder

2021-06-08 Thread Wainer dos Santos Moschetta



On 6/8/21 12:14 AM, Cleber Rosa wrote:

As described in the included documentation, the "custom runner" jobs
extend the GitLab CI jobs already in place.  One of their primary
goals is catching and preventing regressions on a wider range of host
systems than the ones provided by GitLab's shared runners.

This sets the stage in which other community members can add their own
machine configuration documentation/scripts, and accompanying job
definitions.  As a general rule, those newly added contributed jobs
should run as "non-gating", until their reliability is verified (AKA
"allow_failure: true").

Signed-off-by: Cleber Rosa 
---
  .gitlab-ci.d/custom-runners.yml | 14 ++
  .gitlab-ci.d/qemu-project.yml   |  1 +
  docs/devel/ci.rst   | 28 
  docs/devel/index.rst|  1 +
  4 files changed, 44 insertions(+)
  create mode 100644 .gitlab-ci.d/custom-runners.yml
  create mode 100644 docs/devel/ci.rst



Reviewed-by: Wainer dos Santos Moschetta 




diff --git a/.gitlab-ci.d/custom-runners.yml b/.gitlab-ci.d/custom-runners.yml
new file mode 100644
index 00..a07b27384c
--- /dev/null
+++ b/.gitlab-ci.d/custom-runners.yml
@@ -0,0 +1,14 @@
+# The CI jobs defined here require GitLab runners installed and
+# registered on machines that match their operating system names,
+# versions and architectures.  This is in contrast to the other CI
+# jobs that are intended to run on GitLab's "shared" runners.
+
+# Different than the default approach on "shared" runners, based on
+# containers, the custom runners have no such *requirement*, as those
+# jobs should be capable of running on operating systems with no
+# compatible container implementation, or no support from
+# gitlab-runner.  To avoid problems that gitlab-runner can cause while
+# reusing the GIT repository, let's enable the clone strategy, which
+# guarantees a fresh repository on each job run.
+variables:
+  GIT_STRATEGY: clone
diff --git a/.gitlab-ci.d/qemu-project.yml b/.gitlab-ci.d/qemu-project.yml
index 64cb2ba1da..dde8270301 100644
--- a/.gitlab-ci.d/qemu-project.yml
+++ b/.gitlab-ci.d/qemu-project.yml
@@ -9,3 +9,4 @@ include:
- local: '/.gitlab-ci.d/crossbuilds.yml'
- local: '/.gitlab-ci.d/buildtest.yml'
- local: '/.gitlab-ci.d/static_checks.yml'
+  - local: '/.gitlab-ci.d/custom-runners.yml'
diff --git a/docs/devel/ci.rst b/docs/devel/ci.rst
new file mode 100644
index 00..585b7bf4b8
--- /dev/null
+++ b/docs/devel/ci.rst
@@ -0,0 +1,28 @@
+==
+CI
+==
+
+QEMU has configurations enabled for a number of different CI services.
+The most up to date information about them and their status can be
+found at::
+
+   https://wiki.qemu.org/Testing/CI
+
+Jobs on Custom Runners
+======================
+
+Besides the jobs run under the various CI systems listed before, there
+are a number of additional jobs that will run before an actual merge.
+These use the same GitLab CI's service/framework already used for all
+other GitLab based CI jobs, but rely on additional systems, not the
+ones provided by GitLab as "shared runners".
+
+The architecture of GitLab's CI service allows different machines to
+be set up with GitLab's "agent", called gitlab-runner, which will take
+care of running jobs created by events such as a push to a branch.
+Here, the combination of a machine, properly configured with GitLab's
+gitlab-runner, is called a "custom runner".
+
+The GitLab CI job definitions for the custom runners are located under::
+
+  .gitlab-ci.d/custom-runners.yml
diff --git a/docs/devel/index.rst b/docs/devel/index.rst
index 791925dcda..c9a02e786e 100644
--- a/docs/devel/index.rst
+++ b/docs/devel/index.rst
@@ -27,6 +27,7 @@ Contents:
 migration
 atomics
 stable-process
+   ci
 qtest
 decodetree
 secure-coding-practices





Re: [PATCH v6 4/4] Jobs based on custom runners: add job definitions for QEMU's machines

2021-06-08 Thread Wainer dos Santos Moschetta

Hi,

On 6/8/21 12:14 AM, Cleber Rosa wrote:

The QEMU project has two machines (aarch64 and s390x) that can be used
for jobs that do build and run tests.  This introduces those jobs,
which are a mapping of custom scripts used for the same purpose.

Signed-off-by: Cleber Rosa 
---
  .gitlab-ci.d/custom-runners.yml | 208 
  1 file changed, 208 insertions(+)

diff --git a/.gitlab-ci.d/custom-runners.yml b/.gitlab-ci.d/custom-runners.yml
index a07b27384c..061d3cdfed 100644
--- a/.gitlab-ci.d/custom-runners.yml
+++ b/.gitlab-ci.d/custom-runners.yml
@@ -12,3 +12,211 @@
  # guarantees a fresh repository on each job run.
  variables:
GIT_STRATEGY: clone
+
+# All ubuntu-18.04 jobs should run successfully in an environment
+# setup by the scripts/ci/setup/build-environment.yml task
+# "Install basic packages to build QEMU on Ubuntu 18.04/20.04"
+ubuntu-18.04-s390x-all-linux-static:
+ allow_failure: true
+ needs: []
+ stage: build
+ tags:
+ - ubuntu_18.04
+ - s390x
+ rules:
+ - if: '$CI_COMMIT_BRANCH =~ /^staging/'


Should it restrict the job to pushes to the qemu-project namespace only? If
yes, then it probably needs the statement:


'$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'

If you change that here, you will end up changing it all over the jobs. In
general, there is a lot of boilerplate in this file. I'm OK with merging it
as is, as long as it is followed by another series to refactor the code.



+ script:
+ # --disable-libssh is needed because of 
https://bugs.launchpad.net/qemu/+bug/1838763
+ # --disable-glusterfs is needed because there's no static version of those 
libs in distro supplied packages
+ - mkdir build
+ - cd build
+ - ../configure --enable-debug --static --disable-system --disable-glusterfs 
--disable-libssh
+ - make --output-sync -j`nproc`
+ - make --output-sync -j`nproc` check V=1
+ - make --output-sync -j`nproc` check-tcg V=1
+
+ubuntu-18.04-s390x-all:
+ allow_failure: true
+ needs: []
+ stage: build
+ tags:
+ - ubuntu_18.04
+ - s390x
+ rules:
+ - if: '$CI_COMMIT_BRANCH =~ /^staging/'
+ script:
+ - mkdir build
+ - cd build
+ - ../configure --disable-libssh
+ - make --output-sync -j`nproc`
+ - make --output-sync -j`nproc` check V=1
+
+ubuntu-18.04-s390x-alldbg:
Maybe we don't need both ubuntu-18.04-s390x-all and 
ubuntu-18.04-s390x-alldbg jobs.

+ allow_failure: true
+ needs: []
+ stage: build
+ tags:
+ - ubuntu_18.04
+ - s390x
+ rules:
+ - if: '$CI_COMMIT_BRANCH =~ /^staging/'
+ script:
+ - mkdir build
+ - cd build
+ - ../configure --enable-debug --disable-libssh
+ - make clean
+ - make --output-sync -j`nproc`
+ - make --output-sync -j`nproc` check V=1
+ubuntu-18.04-s390x-clang:
+ allow_failure: true
+ needs: []
+ stage: build
+ tags:
+ - ubuntu_18.04
+ - s390x
+ rules:
+ - if: '$CI_COMMIT_BRANCH =~ /^staging/'
+   when: manual
+ script:
+ - mkdir build
+ - cd build
+ - ../configure --disable-libssh --cc=clang --cxx=clang++ --enable-sanitizers
+ - make --output-sync -j`nproc`
+ - make --output-sync -j`nproc` check V=1
+
+ubuntu-18.04-s390x-tci:
+ allow_failure: true
+ needs: []
+ stage: build
+ tags:
+ - ubuntu_18.04
+ - s390x
+ rules:
+ - if: '$CI_COMMIT_BRANCH =~ /^staging/'
+ script:
+ - mkdir build
+ - cd build
+ - ../configure --disable-libssh --enable-tcg-interpreter
+ - make --output-sync -j`nproc`
I think it needs to run `make check-tcg` at least. See "build-tci" in
`.gitlab-ci.d/buildtest.yml` for other tests being executed on shared
runners.

+
+ubuntu-18.04-s390x-notcg:
The "build-tcg-disabled" in `.gitlab-ci.d/buildtest.yml` could be 
mimic-ed here too.

+ allow_failure: true
+ needs: []
+ stage: build
+ tags:
+ - ubuntu_18.04
+ - s390x
+ rules:
+ - if: '$CI_COMMIT_BRANCH =~ /^staging/'
+   when: manual
+ script:
+ - mkdir build
+ - cd build
+ - ../configure --disable-libssh --disable-tcg
+ - make --output-sync -j`nproc`
+ - make --output-sync -j`nproc` check V=1
+
+# All ubuntu-20.04 jobs should run successfully in an environment
+# setup by the scripts/ci/setup/qemu/build-environment.yml task
+# "Install basic packages to build QEMU on Ubuntu 18.04/20.04"
+ubuntu-20.04-aarch64-all-linux-static:
+ allow_failure: true
+ needs: []
+ stage: build
+ tags:
+ - ubuntu_20.04
+ - aarch64
+ rules:
+ - if: '$CI_COMMIT_BRANCH =~ /^staging/'
+ script:
+ # --disable-libssh is needed because of 
https://bugs.launchpad.net/qemu/+bug/1838763
+ # --disable-glusterfs is needed because there's no static version of those 
libs in distro supplied packages
+ - mkdir build
+ - cd build
+ - ../configure --enable-debug --static --disable-system --disable-glusterfs 
--disable-libssh
+ - make --output-sync -j`nproc`
+ - make --output-sync -j`nproc` check V=1
+ - make --output-sync -j`nproc` check-tcg V=1
+
+ubuntu-20.04-aarch64-all:
+ allow_failure: true
+ needs: []
+ stage: build
+ tags:
+ - ubuntu_20.04
+ - aarch64
+ rules:
+ - if: '$CI_COMMIT_BRANCH =~ /^staging/'
+ script:
+ - mkdir build
+ - cd build
+ - ../configure --disable-libssh
+ - make 

Re: [PATCH] migration/rdma: Fix cm event use after free

2021-06-08 Thread Dr. David Alan Gilbert
* Dr. David Alan Gilbert (dgilb...@redhat.com) wrote:
> * Li Zhijian (lizhij...@cn.fujitsu.com) wrote:
> > Signed-off-by: Li Zhijian 
> 
> Thanks! I don't think I understood that 'ack' actually meant 'free'!
> 
> Reviewed-by: Dr. David Alan Gilbert 

Queued

> 
> > ---
> >  migration/rdma.c | 11 ---
> >  1 file changed, 8 insertions(+), 3 deletions(-)
> > 
> > diff --git a/migration/rdma.c b/migration/rdma.c
> > index 1cdb4561f32..d90b29a4b51 100644
> > --- a/migration/rdma.c
> > +++ b/migration/rdma.c
> > @@ -1539,16 +1539,20 @@ static int qemu_rdma_wait_comp_channel(RDMAContext 
> > *rdma)
> >  
> >  if (pfds[1].revents) {
> >  ret = rdma_get_cm_event(rdma->channel, _event);
> > -if (!ret) {
> > -rdma_ack_cm_event(cm_event);
> > +if (ret) {
> > +error_report("failed to get cm event while wait "
> > + "completion channel");
> > +return -EPIPE;
> >  }
> >  
> >  error_report("receive cm event while wait comp 
> > channel,"
> >   "cm event is %d", cm_event->event);
> >  if (cm_event->event == RDMA_CM_EVENT_DISCONNECTED ||
> >  cm_event->event == RDMA_CM_EVENT_DEVICE_REMOVAL) {
> > +rdma_ack_cm_event(cm_event);
> >  return -EPIPE;
> >  }
> > +rdma_ack_cm_event(cm_event);
> >  }
> >  break;
> >  
> > @@ -3285,7 +3289,6 @@ static void rdma_cm_poll_handler(void *opaque)
> >  error_report("get_cm_event failed %d", errno);
> >  return;
> >  }
> > -rdma_ack_cm_event(cm_event);
> >  
> >  if (cm_event->event == RDMA_CM_EVENT_DISCONNECTED ||
> >  cm_event->event == RDMA_CM_EVENT_DEVICE_REMOVAL) {
> > @@ -3298,12 +3301,14 @@ static void rdma_cm_poll_handler(void *opaque)
> >  rdma->return_path->error_state = -EPIPE;
> >  }
> >  }
> > +rdma_ack_cm_event(cm_event);
> >  
> >  if (mis->migration_incoming_co) {
> >  qemu_coroutine_enter(mis->migration_incoming_co);
> >  }
> >  return;
> >  }
> > +rdma_ack_cm_event(cm_event);
> >  }
> >  
> >  static int qemu_rdma_accept(RDMAContext *rdma)
> > -- 
> > 2.30.2
> > 
> > 
> > 
> -- 
> Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK
> 
> 
-- 
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK




Re: [PATCH v4 4/7] file-posix: try BLKSECTGET on block devices too, do not round to power of 2

2021-06-08 Thread Eric Blake
On Tue, Jun 08, 2021 at 03:16:31PM +0200, Paolo Bonzini wrote:
> bs->sg is only true for character devices, but block devices can also
> be used with scsi-block and scsi-generic.  Unfortunately BLKSECTGET
> returns bytes in an int for /dev/sgN devices, and sectors in a short
> for block devices, so account for that in the code.

Gotta love inconsistent and poorly-documented kernel interfaces! (on my
system, 'man -k BLKSECTGET' had no hits)

> 
> The maximum transfer also need not be a power of 2 (for example I have
> seen disks with 1280 KiB maximum transfer) so there's no need to pass
> the result through pow2floor.
> 
> Signed-off-by: Paolo Bonzini 
> ---
>  block/file-posix.c | 44 
>  1 file changed, 24 insertions(+), 20 deletions(-)
>

Reviewed-by: Eric Blake 

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.   +1-919-301-3266
Virtualization:  qemu.org | libvirt.org




Re: [PATCH v3 1/1] yank: Unregister function when using TLS migration

2021-06-08 Thread Dr. David Alan Gilbert
* Leonardo Bras (leobra...@gmail.com) wrote:
> After yank feature was introduced in migration, whenever migration
> is started using TLS, the following error happens in both source and
> destination hosts:
> 
> (qemu) qemu-kvm: ../util/yank.c:107: yank_unregister_instance:
> > Assertion `QLIST_EMPTY(&entry->yankfns)' failed.
> 
> This happens because of a missing yank_unregister_function() when using
> qio-channel-tls.
> 
> Fix this by also allowing TYPE_QIO_CHANNEL_TLS object type to perform
> yank_unregister_function() in channel_close() and multifd_load_cleanup().
> 
> Also, inside migration_channel_connect() and
> migration_channel_process_incoming() move yank_register_function() so
> it only runs once on a TLS migration.
> 
> Fixes: b5eea99ec2f ("migration: Add yank feature", 2021-01-13)
> Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1964326
> Signed-off-by: Leonardo Bras 

Queued; thank you!

> --
> Changes since v2:
> - Dropped all references to ioc->master
> - yank_register_function() and yank_unregister_function() now only run
>   once in a TLS migration.
> 
> Changes since v1:
> - Cast p->c to QIOChannelTLS into multifd_load_cleanup()
> ---
>  migration/channel.c   | 26 ++
>  migration/multifd.c   |  3 ++-
>  migration/qemu-file-channel.c |  4 +++-
>  3 files changed, 19 insertions(+), 14 deletions(-)
> 
> diff --git a/migration/channel.c b/migration/channel.c
> index c9ee902021..01275a9162 100644
> --- a/migration/channel.c
> +++ b/migration/channel.c
> @@ -38,18 +38,19 @@ void migration_channel_process_incoming(QIOChannel *ioc)
>  trace_migration_set_incoming_channel(
>  ioc, object_get_typename(OBJECT(ioc)));
>  
> -if (object_dynamic_cast(OBJECT(ioc), TYPE_QIO_CHANNEL_SOCKET)) {
> -yank_register_function(MIGRATION_YANK_INSTANCE,
> -   migration_yank_iochannel,
> -   QIO_CHANNEL(ioc));
> -}
> -
>  if (s->parameters.tls_creds &&
>  *s->parameters.tls_creds &&
>  !object_dynamic_cast(OBJECT(ioc),
>   TYPE_QIO_CHANNEL_TLS)) {
> >  migration_tls_channel_process_incoming(s, ioc, &local_err);
>  } else {
> +if (object_dynamic_cast(OBJECT(ioc), TYPE_QIO_CHANNEL_SOCKET) ||
> +object_dynamic_cast(OBJECT(ioc), TYPE_QIO_CHANNEL_TLS)) {
> +yank_register_function(MIGRATION_YANK_INSTANCE,
> +   migration_yank_iochannel,
> +   QIO_CHANNEL(ioc));
> +}
> +
> >  migration_ioc_process_incoming(ioc, &local_err);
>  }
>  
> @@ -76,12 +77,6 @@ void migration_channel_connect(MigrationState *s,
>  ioc, object_get_typename(OBJECT(ioc)), hostname, error);
>  
>  if (!error) {
> -if (object_dynamic_cast(OBJECT(ioc), TYPE_QIO_CHANNEL_SOCKET)) {
> -yank_register_function(MIGRATION_YANK_INSTANCE,
> -   migration_yank_iochannel,
> -   QIO_CHANNEL(ioc));
> -}
> -
>  if (s->parameters.tls_creds &&
>  *s->parameters.tls_creds &&
>  !object_dynamic_cast(OBJECT(ioc),
> @@ -99,6 +94,13 @@ void migration_channel_connect(MigrationState *s,
>  } else {
>  QEMUFile *f = qemu_fopen_channel_output(ioc);
>  
> +if (object_dynamic_cast(OBJECT(ioc), TYPE_QIO_CHANNEL_SOCKET) ||
> +object_dynamic_cast(OBJECT(ioc), TYPE_QIO_CHANNEL_TLS)) {
> +yank_register_function(MIGRATION_YANK_INSTANCE,
> +   migration_yank_iochannel,
> +   QIO_CHANNEL(ioc));
> +}
> +
> >  qemu_mutex_lock(&s->qemu_file_lock);
> >  s->to_dst_file = f;
> >  qemu_mutex_unlock(&s->qemu_file_lock);
> diff --git a/migration/multifd.c b/migration/multifd.c
> index 0a4803cfcc..2e8f001bc0 100644
> --- a/migration/multifd.c
> +++ b/migration/multifd.c
> @@ -987,7 +987,8 @@ int multifd_load_cleanup(Error **errp)
>  for (i = 0; i < migrate_multifd_channels(); i++) {
> >  MultiFDRecvParams *p = &multifd_recv_state->params[i];
>  
> -if (object_dynamic_cast(OBJECT(p->c), TYPE_QIO_CHANNEL_SOCKET)
> +if ((object_dynamic_cast(OBJECT(p->c), TYPE_QIO_CHANNEL_SOCKET) ||
> + object_dynamic_cast(OBJECT(p->c), TYPE_QIO_CHANNEL_TLS))
>  && OBJECT(p->c)->ref == 1) {
>  yank_unregister_function(MIGRATION_YANK_INSTANCE,
>   migration_yank_iochannel,
> diff --git a/migration/qemu-file-channel.c b/migration/qemu-file-channel.c
> index 876d05a540..fad340ea7a 100644
> --- a/migration/qemu-file-channel.c
> +++ b/migration/qemu-file-channel.c
> @@ -26,6 +26,7 @@
>  #include "qemu-file-channel.h"
>  #include "qemu-file.h"
>  #include "io/channel-socket.h"
> +#include "io/channel-tls.h"
>  #include "qemu/iov.h"
>  #include "qemu/yank.h"

Re: [PATCH v3 0/4] target/i386/cpu: introduce new CPU models for x86-64 ABI levels

2021-06-08 Thread Daniel P . Berrangé
On Mon, Jun 07, 2021 at 06:33:10PM -0300, Eduardo Habkost wrote:
> On Mon, Jun 07, 2021 at 02:58:39PM +0100, Daniel P. Berrangé wrote:
> > This series is motivated by this blog that describes how RHEL-9
> > will recommend use of the x86-64-v2 microarchitectural ABI level:
> > 
> >   
> > https://developers.redhat.com/blog/2021/01/05/building-red-hat-enterprise-linux-9-for-the-x86-64-v2-microarchitecture-level/
> > 
> > The implication of compiling code with -march=x86-64-v2 is that
> > this code will no longer be guaranteed to be runnable on a
> > number of the CPU models exposed by the x86_64 target emulator,
> > most notably qemu64 which is the default.
> > 
> > This series is not proposing to change the QEMU default CPU model
> > for x86_64 target. I show how this is can trivially be done in
> > patch 3, but not suggesting that we actually do that, as upstream
> > is quite conservative in dropping support for old host hardware.
> > 
> > New CPU models
> > ==
> > 
> > It is instead exploring the possibilities of defining new CPU
> > models in QEMU to closely match the x86-64 uarch ABI levels in
> > a relatively vendor agnostic manner. This could be used by
> > downstream vendors who wish to target specific uarch ABI levels
> > in custom machine types.
> > 
> > One of the nice things about "qemu64" is that its naming presents
> > it as effectively being a vendor-neutral model (if we ignore that
> > vendor=AMD is in fact reported in CPUID).
> > 
> > If we look at the feature set of the x86-64-v2 ABI, we see that the
> > QEMU "Nehalem" model is the closest match. It also happens
> > to be runnable on AMD Opteron G4/G5 and EPYC hosts. None the less,
> > the use of an Intel specific CPU model name on an AMD host feels
> > uncomfortable.
> > 
> > Vendor neutral naming
> > =
> > 
> > The idea behind this series is thus to introduce new CPU model
> > names with vendor neutral naming, to more directly correlate
> > with defined x86-64 uarch ABI levels. We don't want to just
> > invent CPUs with a completely arbitrary set of CPU features as
> > history has shown that brings its own problems, e.g. a guest
> > uses features A and B, but only does CPUID checks for the existence
> > of feature B, assuming that B implies A.
> > 
> > The specification for x86-64 ABI levels uses x86-64-vNN naming
> > but this clashes with QEMU's use of "vNN" for versioning. I
> > felt it would be confusing to end up with CPU model names
> > like  "x86-64-v1-v1". Thus I've used an "-abiNNN" suffix
> > instead. Also note I have an "-abi1" suffix here for the
> > baseline. Arguably we could just drop the suffix entirely for
> > the baseline.
> > 
> > A further note is that we could have used "qemu64" as the
> > naming prefix, e.g. qemu64-abi2, qemu64-abi3, etc. A lot of
> > people hold negative opinions of the qemu64 model in general
> > though, so I felt a clean break with the past might be
> > desirable, even though the x86-64-abi1 CPU  model is effectively
> > identical to qemu64.
> > 
> > Runnability of new models
> > =
> > 
> > The goal of the x86-64-abiNNN CPU models is that they should be
> > runnable on any physical host which supports the CPUIDs features
> > for that uarch ABI level. It is hard to figure out what exact
> > set of CPUID features we should report. The uarch ABI document
> > only specifies the minimum expectation, but we can't define a
> > CPU in QEMU using only the minimum set as that would omit
> > countless important features.
> > 
> 
> Do you have a list of features that were not in the ABI document
> but were included in the CPU models you've added?  What exactly
> make them important enough for us, but not important enough for
> the ABI level specification writers?

The ABI specification isn't trying to define a real CPU, so it
doesn't define a full set of features. It is merely setting a
minimum bar, against which vendor specific CPUs are evaluated
for compliance. IOW, there are multiple AMD and Intel CPUs that
> > satisfy x86-64-abi1. They will all have a distinct set of
features, but share a common core. The ABI spec is fairly
focused on the various SIMD matrix instructions in particular.

In this QEMU patch meanwhile, we are attempting to define a
real CPU, so need to spec more than just the core featureset.

> In patch 2/3 you wrote:
> 
> | Historically we've found that defining CPU models with an arbitrary
> | combination of CPU features can be problematic, as some guest OS
> | will not check all features they use, and instead assume that if
> | they see feature "XX", then "YY" will always exist. This is reasonable
> | in bare metal, but subject to breakage in virtualization.
> 
> Do we know how likely that is?  Any examples where this has
> happened?

I don't have specific bugs off hand. I just know that the traditional
way libvirt implemented host-model by taking a base model and turning
on/off arbitrary features has been a source of bugs.

> What if we 

Re: [PATCH v4 3/7] block: add max_hw_transfer to BlockLimits

2021-06-08 Thread Eric Blake
On Tue, Jun 08, 2021 at 03:16:30PM +0200, Paolo Bonzini wrote:
> For block host devices, I/O can happen through either the kernel file
> descriptor I/O system calls (preadv/pwritev, io_submit, io_uring)
> or the SCSI passthrough ioctl SG_IO.
> 
> In the latter case, the size of each transfer can be limited by the
> HBA, while for file descriptor I/O the kernel is able to split and
> merge I/O in smaller pieces as needed.  Applying the HBA limits to
> file descriptor I/O results in more system calls and suboptimal
> performance, so this patch splits the max_transfer limit in two:
> max_transfer remains valid and is used in general, while max_hw_transfer
> is limited to the maximum hardware size.  max_hw_transfer can then be
> included by the scsi-generic driver in the block limits page, to ensure
> that the stricter hardware limit is used.
> 
> Signed-off-by: Paolo Bonzini 
> ---
>  block/block-backend.c  | 12 
>  block/file-posix.c |  2 +-
>  block/io.c |  1 +
>  hw/scsi/scsi-generic.c |  2 +-
>  include/block/block_int.h  |  7 +++
>  include/sysemu/block-backend.h |  1 +

[you can use git's orderfile option to put .h changes first]

>  6 files changed, 23 insertions(+), 2 deletions(-)
> 
> diff --git a/block/block-backend.c b/block/block-backend.c
> index 15f1ea4288..2ea1412a54 100644
> --- a/block/block-backend.c
> +++ b/block/block-backend.c
> @@ -1953,6 +1953,18 @@ uint32_t blk_get_request_alignment(BlockBackend *blk)
>  return bs ? bs->bl.request_alignment : BDRV_SECTOR_SIZE;
>  }
>  
> +/* Returns the maximum hardware transfer length, in bytes; guaranteed 
> nonzero */
> +uint64_t blk_get_max_hw_transfer(BlockBackend *blk)

Since we have declared (elsewhere) that the maximum block device is
signed, would this be better as int64_t?  (Our reasoning is that off_t
is also signed, and we are unlikely to need to handle anything bigger
than what off_t can handle; plus it leaves room for returning errors,
although this particular function is not giving errors; see also
Vladimir's work on making the block layer 64-bit clean).  I'm not
opposed to unsigned here to represent lack of errors, but consistency
with the rest of the block layer may argue for signed.

> +++ b/include/block/block_int.h
> @@ -695,6 +695,13 @@ typedef struct BlockLimits {
>   * clamped down. */
>  uint32_t max_transfer;
>  
> +/* Maximal hardware transfer length in bytes.  Applies whenever
> + * transfers to the device bypass the kernel I/O scheduler, for
> + * example with SG_IO.  If larger than max_transfer or if zero,
> + * blk_get_max_hw_transfer will fall back to max_transfer.
> + */
> +uint64_t max_hw_transfer;
> +

Reviewed-by: Eric Blake 

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.   +1-919-301-3266
Virtualization:  qemu.org | libvirt.org




Re: [PATCH v4 2/7] scsi-generic: pass max_segments via max_iov field in BlockLimits

2021-06-08 Thread Eric Blake
On Tue, Jun 08, 2021 at 03:16:29PM +0200, Paolo Bonzini wrote:
> I/O to a disk via read/write is not limited by the number of segments allowed
> by the host adapter; the kernel can split requests if needed, and the limit
> imposed by the host adapter can be very low (256k or so) to avoid that SG_IO
> returns EINVAL if memory is heavily fragmented.

to avoid SG_IO returning EINVAL

> 
> Since this value is only interesting for SG_IO-based I/O, do not include
> it in the max_transfer and only take it into account when patching the
> block limits VPD page in the scsi-generic device.
> 
> Signed-off-by: Paolo Bonzini 
> ---
>  block/file-posix.c | 3 +--
>  hw/scsi/scsi-generic.c | 6 --
>  2 files changed, 5 insertions(+), 4 deletions(-)
>

Reviewed-by: Eric Blake 

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.   +1-919-301-3266
Virtualization:  qemu.org | libvirt.org




RE: [RFC] Adding the A64FX's HPC functions.

2021-06-08 Thread ishii.shuuic...@fujitsu.com
Hi, peter.

Thank you for your comment.

> I think it would be worth scoping out how much work the a64fx CPU would
> require (ie what else does it need beyond these extensions and whatever
> features we currently implement?). If that's not a lot of work it might be 
> simpler
> to just add it (possibly even add it but with one or two of its features as
> not-yet-implemented.)

I don't think it will take much effort if we just implement the A64FX extended 
function registers. 
As you pointed out, we are investigating the possibility of also adding the 
function processing behind them, 
but in that case, we think it will take some time to verify, including the 
creation of test tools. 

Also, by proceeding with the implementation of the "-cpu max" option as the 
first step, 
we expect to receive useful comments from the community. 
If there are no problems, we would like to start by implementing the "-cpu max" 
option.
What do you think?

Best regards.

> -Original Message-
> From: Peter Maydell 
> Sent: Thursday, June 3, 2021 4:11 AM
> To: Richard Henderson 
> Cc: Ishii, Shuuichirou/石井 周一郎 ;
> qemu-...@nongnu.org; qemu-devel@nongnu.org
> Subject: Re: [RFC] Adding the A64FX's HPC functions.
> 
> On Wed, 2 Jun 2021 at 20:02, Richard Henderson
>  wrote:
> > On 6/1/21 8:21 AM, Peter Maydell wrote:
> > >>> 2) Is it OK to specify the option to set the HPC extension of
> > >>> A64FX as follows, for example?
> > >>>
> > >>> -M virt -cpu max,a64fx-hpc-sec=on (*sector cache function) -M virt
> > >>> -cpu max,a64fx-hpc-hwpf=on (*hardware prefetch assist function) -M
> > >>> virt -cpu max,a64fx-hpc-hwb=on (*hardware barrier function)
> > >>>
> > >>> It is also possible to implement something like -cpu a64fx, but
> > >>> since we don't know if we can implement it immediately, we assume
> > >>> that we will use the -cpu max option first.
> >
> > My first thought is that -cpu max can simply enable the extensions,
> > without extra flags.  The max cpu has all of the features that we can
> > enable, and as I see it this is just one more.
> 
> I dunno, because it's not an architectural feature, it's an implementation
> feature. We can rely on architectural features not to step on each others'
> toes, but there's no guarantee that some other impdef feature might not clash
> with these a64fx ones.
> 
> Also, how does the guest OS typically detect the presence of these features? 
> If
> it does it by looking for MIDR etc values that say "this is an A64FX" then 
> -cpu
> max won't trigger that.
> 
> > I would like to add -cpu a64fx at some point.  But as you say, that
> > need not happen right away.
> 
> I think it would be worth scoping out how much work the a64fx CPU would
> require (ie what else does it need beyond these extensions and whatever
> features we currently implement?). If that's not a lot of work it might be 
> simpler
> to just add it (possibly even add it but with one or two of its features as
> not-yet-implemented.)
> 
> thanks
> -- PMM




RE: [RFC] Adding the A64FX's HPC functions.

2021-06-08 Thread ishii.shuuic...@fujitsu.com
Hi, Richard.

> Well, Peter disagreed with having them enabled by default in -cpu max, so we
> might need at least one extra property.  I see no reason to have three
> properties -- one property a64fx-hpc should be sufficient.  But we might not
> want any command-line properties, see below...

I understood.

> For comparison, in the Arm Cortex-A76 manual,
>https://developer.arm.com/documentation/100798/0301/
> section B2.4 "AArch64 registers by functional group", there is a concise
> listing of all of the system registers and their reset values.

Thank you for the information.

> The most important of these for QEMU to create '-cpu a64fx' are the
> ID_AA64{ISAR,MMFR,PFR} and MIDR values.  These values determine all of
> the
> standard architectural features,

The values of ID_AA64{ISAR,MMFR,PFR} and MIDR are not listed in the 
specifications published at this time. 
Of course, they are listed in the A64FX specification document managed within 
Fujitsu,
but we cannot tell how far these setting values can be disclosed 
without checking with the A64FX specification staff within Fujitsu.

In order to make the "-cpu a64fx" option, the above problem needs to be solved.
When the necessary register specifications are released,
it will be possible to implement the "-cpu a64fx" option,
but I thought it would be better to implement the "-cpu max" option as a first 
step,
partly because I don't know when it will be possible to solve this problem.

However, the MIDR fields appear in the "CPU implementer" and "CPU part" lines
of /proc/cpuinfo, and the CPU features are partially displayed in kernel boot
messages.
It is true that there are some values that are publicly known, in a sense, from 
Linux on A64FX-equipped machines,
even if they are not listed in the existing public A64FX specification.

To what extent ID_AA64{ISAR,MMFR,PFR} can be made public needs to be confirmed 
with the A64FX specification staff. As for the MIDR register values,
I think they can be implemented in QEMU as publicly known information that can 
be recognized by the OS.

When considering implementation with the "-cpu a64fx" option,
is there any problem to define only the value of MIDR,
using a temporary value for now until the information of ID_AA64{ISAR,MMFR,PFR} 
can be disclosed?

> Peter is suggesting that if full support for -cpu a64fx apart from the hpc
> extensions is close, then we shouldn't implementing a property for -cpu max,
> but only implement -cpu a64fx.  (Because how does the OS detect the hpc
> feature, apart from the MIDR value?)

The HPC extensions are implemented as impdef registers, as a feature unique to 
HPC in the A64FX processor.
The existence of the HPC extensions would be determined from the fact that 
MIDR identifies an A64FX (Fujitsu implementer 0x46).

Best regards.

> -Original Message-
> From: Richard Henderson 
> Sent: Friday, June 4, 2021 5:08 AM
> To: Ishii, Shuuichirou/石井 周一郎 ; Peter
> Maydell 
> Cc: qemu-...@nongnu.org; qemu-devel@nongnu.org
> Subject: Re: [RFC] Adding the A64FX's HPC functions.
> 
> On 6/3/21 1:17 AM, ishii.shuuic...@fujitsu.com wrote:
> > Hi, Richard.
> >
> > Thank you for your comment.
> >
> >> My first thought is that -cpu max can simply enable the extensions,
> >> without extra flags.  The max cpu has all of the features that we can
> >> enable, and as I see it this is just one more.
> >
> > Let me confirm a few things about the above comment.
> > Does it mean that I don't need to explicitly enable individual
> > extensions such as a64fx-hpc-sec, a64fx-hpc-hwpf, and a64fx-hpc-hwb,
> > since all extensions can be enabled by specifying -cpu max?
> 
> Well, Peter disagreed with having them enabled by default in -cpu max, so we
> might need at least one extra property.  I see no reason to have three
> properties -- one property a64fx-hpc should be sufficient.  But we might not
> want any command-line properties, see below...
> 
> >
> >> The microarchitectural document provided does not list all of the system
> >> register reset values for the A64FX, and I would be surprised if there were
> an
> >> architectural id register that specified a non-standard extension like 
> >> this.
> >> Thus I would expect to add ARM_FEATURE_A64FX with which to enable
> these
> >> extensions in helper.c.
> >
> > As you said,
> > some of the published specifications do not describe the reset values of the
> registers.
> > I would like to implement this in QEMU by referring to a real machine with
> A64FX.
> 
> I presume there exists some documentation for this somewhere, though
> possibly
> only internal to Fujitsu so far.
> 
> For comparison, in the Arm Cortex-A76 manual,
>https://developer.arm.com/documentation/100798/0301/
> section B2.4 "AArch64 registers by functional group", there is a concise
> listing of all of the system registers and their reset values.
> 
> The most important of these for QEMU to create '-cpu a64fx' are the
> ID_AA64{ISAR,MMFR,PFR} and MIDR values.  These values determine all of
> the
> standard architectural features, and 

Re: [PATCH 02/26] configure: drop unused variables for xts

2021-06-08 Thread Richard Henderson

On 6/8/21 4:22 AM, Paolo Bonzini wrote:

All XTS configuration uses qemu_private_xts.  Drop other variables as
they have only ever been used to generate the summary (which has since
been moved to meson.build).

Signed-off-by: Paolo Bonzini
---
  configure | 4 
  1 file changed, 4 deletions(-)


Reviewed-by: Richard Henderson 

r~



RE: [RFC] Adding the A64FX's HPC functions.

2021-06-08 Thread ishii.shuuic...@fujitsu.com
Hi, Richard.

Thank you for your comment.

> My first thought is that -cpu max can simply enable the extensions, without
> extra flags.  The max cpu has all of the features that we can enable, and as I
> see it this is just one more.

Let me confirm a few things about the above comment.
Does it mean that I don't need to explicitly enable individual extensions
such as a64fx-hpc-sec, a64fx-hpc-hwpf, and a64fx-hpc-hwb,
since all extensions can be enabled by specifying -cpu max?

> The microarchitectural document provided does not list all of the system
> register reset values for the A64FX, and I would be surprised if there were an
> architectural id register that specified a non-standard extension like this.
> Thus I would expect to add ARM_FEATURE_A64FX with which to enable these
> extensions in helper.c.

As you said,
some of the published specifications do not describe the reset values of the 
registers.
I would like to implement this in QEMU by referring to a real machine with 
A64FX.

> I can certainly help you with this.

I am not familiar with this, and I apologize for any inconvenience this may 
cause,
but I appreciate your cooperation.

Best regards.

> -Original Message-
> From: Richard Henderson 
> Sent: Thursday, June 3, 2021 4:02 AM
> To: Peter Maydell ; Ishii, Shuuichirou/石井 周一郎
> 
> Cc: qemu-...@nongnu.org; qemu-devel@nongnu.org
> Subject: Re: [RFC] Adding the A64FX's HPC functions.
> 
> On 6/1/21 8:21 AM, Peter Maydell wrote:
> >>> I'm thinking of implementing A64FX HPC extension in qemu.
> >>> A64FX [1] is a CPU developed by Fujitsu that implements armv8+SVE.
> >>>
> >>> [1]
> >>>
> https://github.com/fujitsu/A64FX/blob/master/doc/A64FX_Microarchitec
> >>> ture
> >>> _Manual_en_1.4.pdf
> >>>
> >>> A64FX is a CPU developed for HPC (High Performance Computing), and
> >>> HPC extensions [2] are implemented to improve the performance of user
> programs.
> >>>
> >>> [2]
> >>>
> https://github.com/fujitsu/A64FX/blob/master/doc/A64FX_Specification
> >>> _HP
> >>> C_Extension_v1_EN.pdf
> >>>
> >>> The details of each function are described in [2], and the HPC
> >>> extensions include
> >>> 1) Tag address override
> >>> 2) Sector cache
> >>> 3) Hardware barrier
> >>> 4) Hardware prefetch assist
> >>> are implemented.
> 
> Thanks for the pointers.  It looks to me like it'll be easy to implement 
> these in
> qemu.  We'll need to implement the registers, so that the OS can read back the
> values, but we do not need to actually do anything with them.
> 
> >>> 1) Is target/arm/helper.c enough to implement the register
> >>> (ARMCPRegInfo
> >>> structure) of HPC extension function of A64FX?
> 
> Yes.
> 
> >>> 2) Is it OK to specify the option to set the HPC extension of A64FX
> >>> as follows, for example?
> >>>
> >>> -M virt -cpu max,a64fx-hpc-sec=on (*sector cache function) -M virt
> >>> -cpu max,a64fx-hpc-hwpf=on (*hardware prefetch assist function) -M
> >>> virt -cpu max,a64fx-hpc-hwb=on (*hardware barrier function)
> >>>
> >>> It is also possible to implement something like -cpu a64fx, but
> >>> since we don't know if we can implement it immediately, we assume
> >>> that we will use the -cpu max option first.
> 
> My first thought is that -cpu max can simply enable the extensions, without
> extra flags.  The max cpu has all of the features that we can enable, and as I
> see it this is just one more.
> 
> I would like to add -cpu a64fx at some point.  But as you say, that need not
> happen right away.
> 
> >>> Since there is no example of A64FX function implemented in QEMU, we
> >>> would appreciate your comments before we post a patch.
> 
> We endeavor to enable features by the architectural id registers when 
> possible.
>   Thus the cpu_isar_feature() checks in helper.c.
> 
> The microarchitectural document provided does not list all of the system
> register reset values for the A64FX, and I would be surprised if there were an
> architectural id register that specified a non-standard extension like this.
> Thus I would expect to add ARM_FEATURE_A64FX with which to enable these
> extensions in helper.c.
> 
> I can certainly help you with this.
> 
> 
> r~




Re: [PATCH 01/26] meson: drop unused CONFIG_GCRYPT_HMAC

2021-06-08 Thread Richard Henderson

On 6/8/21 4:22 AM, Paolo Bonzini wrote:

CONFIG_GCRYPT_HMAC has been removed now that all supported distros have it.

Signed-off-by: Paolo Bonzini
---
  meson.build | 1 -
  1 file changed, 1 deletion(-)


Reviewed-by: Richard Henderson 

r~


