[PATCH 2/2] virtio-net: also delete the control queue when TX/RX are deleted

2019-12-25 Thread Yuri Benditovich
https://bugzilla.redhat.com/show_bug.cgi?id=1708480
If the control queue is not deleted together with TX/RX, it will
later be skipped when cache resources are freed, and hot unplug
will not complete.

Signed-off-by: Yuri Benditovich 
---
 hw/net/virtio-net.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index db3d7c38e6..f325440d01 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -3101,7 +3101,8 @@ static void virtio_net_device_unrealize(DeviceState *dev, Error **errp)
     for (i = 0; i < max_queues; i++) {
         virtio_net_del_queue(n, i);
     }
-
+    /* also delete the control vq */
+    virtio_del_queue(vdev, max_queues * 2);
     qemu_announce_timer_del(&n->announce_timer, false);
     g_free(n->vqs);
     qemu_del_nic(n->nic);
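For reference, the virtqueue index layout this hunk relies on, where queue pair i uses indices 2*i and 2*i + 1 and the control queue follows all pairs, can be sketched like this (the function names are illustrative, not QEMU API):

```c
#include <assert.h>

/* Illustrative sketch (not QEMU code): virtio-net places RX/TX queue
 * pair i at virtqueue indices 2*i and 2*i + 1, so with max_queues
 * pairs the control virtqueue lands at index 2 * max_queues --
 * the index passed to virtio_del_queue() in the patch above. */
static int rx_vq_index(int pair)         { return 2 * pair; }
static int tx_vq_index(int pair)         { return 2 * pair + 1; }
static int ctrl_vq_index(int max_queues) { return 2 * max_queues; }
```

So for a 3-pair device the control virtqueue is index 6, matching the `max_queues * 2` argument in the hunk.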
-- 
2.17.1




[PATCH 1/2] virtio: reset region cache on queue deletion

2019-12-25 Thread Yuri Benditovich
https://bugzilla.redhat.com/show_bug.cgi?id=1708480
Fix a leak of a region reference that prevents complete
device deletion on hot unplug.

Signed-off-by: Yuri Benditovich 
---
 hw/virtio/virtio.c | 5 +
 1 file changed, 5 insertions(+)

diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 04716b5f6c..baadec8abc 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -2340,6 +2340,11 @@ void virtio_del_queue(VirtIODevice *vdev, int n)
     vdev->vq[n].vring.num_default = 0;
     vdev->vq[n].handle_output = NULL;
     vdev->vq[n].handle_aio_output = NULL;
+    /*
+     * With vring.num == 0 the queue will be ignored
+     * in later region cache reset loops.
+     */
+    virtio_virtqueue_reset_region_cache(&vdev->vq[n]);
     g_free(vdev->vq[n].used_elems);
 }
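A toy model may help show why the cache must be reset at deletion time; none of these types are QEMU's, they only mimic the "skip queues with vring.num == 0" behavior the comment describes:

```c
#include <assert.h>

/* Toy model (not QEMU's real types) of the leak this patch fixes:
 * later reset loops skip queues whose vring.num is 0, so a cache
 * left behind at deletion time would never be dropped. */
struct toy_vq { unsigned num; int cache_valid; };

static void toy_reset_caches(struct toy_vq *vqs, int n)
{
    for (int i = 0; i < n; i++) {
        if (vqs[i].num == 0) {
            continue;           /* deleted queues are ignored here */
        }
        vqs[i].cache_valid = 0;
    }
}

static void toy_del_queue(struct toy_vq *vq)
{
    vq->num = 0;
    vq->cache_valid = 0;        /* the fix: drop the cache right now */
}
```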
 
-- 
2.17.1




[PATCH 0/2] Solve problem of hot removal of virtio-net-pci

2019-12-25 Thread Yuri Benditovich
There is a regression in hot unplug of virtio-net-pci on q35,
introduced long ago in 3.0.0 with commit:
 48564041a73adbbff52834f9edbe3806fceefab7
 exec: reintroduce MemoryRegion caching

virtio-net does not destroy some of its caching resources on
hot unplug, and a subsequent hot plug of a device with the same
name then fails.
These 2 patches solve the problem.

Yuri Benditovich (2):
  virtio: reset region cache on queue deletion
  virtio-net: also delete the control queue when TX/RX are deleted

 hw/net/virtio-net.c | 3 ++-
 hw/virtio/virtio.c  | 5 +
 2 files changed, 7 insertions(+), 1 deletion(-)

-- 
2.17.1




Re: [PATCH] target/ppc: fix memory dump endianness in QEMU monitor

2019-12-25 Thread David Gibson
On Tue, Dec 24, 2019 at 01:10:34PM -0300, Fabiano Rosas wrote:
> David Gibson  writes:
> 
> >> It looks like the hflags MSR_LE is being updated correctly with TCG. But
> >> with KVM we only touch it on system_reset
> >
> > Ah.. right.  I think to fix that we'd want an hreg_compute_hflags() at
> > the end of sucking the state out of KVM.
> >
> 
> Hm.. hflags is a TCG thing that does not get used with KVM at all,
> except for that one bit in the monitor disas function. I'd rather keep
> it completely out of kvm_enabled code.

That's a fair point.

It doesn't look like there's any reason ppc_tr_init_disas_context()
couldn't initialize ctx->le_mode directly from the MSR rather than
hflags anyway.  If we do that, we should again be able to remove the
LE bit from hflags entirely.  I think that's the way to go.

> 
> Couldn't we perhaps make it conditional on the acceleration type?
> Using kvm_enabled() or some ifdef.
> 
> Thanks
> 
> >> (and possibly h_cede? I don't
> >> know if it is QEMU who handles it).
> >
> > It's KVM.  If we used the qemu one it would add an awful lot of
> > latency to cedes.
> >> 
> >> So I would let hflags be.
> >> 
> >> 
> >> ... Actually, I don't really know the purpose of hflags. It comes from:
> >> 
> >>   commit 3f3373166227b13e762e20d2fb51eadfa6a2d653
> >>   Author: Fabrice Bellard 
> >>   Date:   Wed Aug 20 23:02:09 2003 +
> >>   
> >>   pop ss, mov ss, x and sti disable irqs for the next instruction -
> >>   began dispatch optimization by adding new x86 cpu 'hidden' flags
> >>   
> >>   
> >>   git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@372 
> >> c046a42c-6fe2-441c-8c8c-71466251a162
> >> 
> >> Could any one clarify that?
> >
> > Not really.  It's really, really old, in the cruft bits of TCG I don't
> > much understand.
> 

-- 
David Gibson| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson


signature.asc
Description: PGP signature


Re: [PATCH] block: nbd: Fix dirty bitmap context name

2019-12-25 Thread Nir Soffer
On Thu, Dec 19, 2019 at 5:55 PM Nir Soffer  wrote:
>
> On Thu, Dec 19, 2019 at 5:17 PM Vladimir Sementsov-Ogievskiy
>  wrote:
...
> > Note also, that client may use NBD_OPT_LIST_META_CONTEXT with query
> > "qemu:dirty-bitmap:", to get list of all exported bitmaps.
>
> This is another option; I did not try it yet. We can use the single
> exported bitmap and fail if we get more than one. This is probably better
> than changing the entire stack to support bitmap names.

I went with this option for now, getting the single bitmap published by qemu.
https://gerrit.ovirt.org/c/105861/2/common/ovirt_imageio_common/nbd.py

This way I can work with current libvirt, and I don't care about the name of
the bitmap.

Practically this will break if libvirt configures another dirty
bitmap, but that is fine.
An API that promises a list with an unknown bitmap name is effectively the
same as an API returning only one bitmap. Libvirt cannot change it now since
that would break existing users :-)




Re: [PATCH] qapi/block: fix nbd-server-add spec

2019-12-25 Thread Vladimir Sementsov-Ogievskiy
21.12.2019 1:04, Eric Blake wrote:
> On 12/19/19 10:14 AM, Nir Soffer wrote:
> 
> 1. Using the disk name as a bitmap name is bad behavior, as they are
> completely different concepts. Especially keeping in mind that the user
> already knows the disk name anyway, there is no reason to write this
> export name inside the metadata context of this export.

 The different concept is expressed by the "qemu:dirty-bitmap:" prefix.
 "qemu:dirty-bitmap:export-name" means the dirty bitmap for this export.
>>>
>>> Why do you think so? Did you read NBD specification?
>>
>> Yes - the name of the bitmap does not have any meaning.
>> But for nbd_server_add we allow only single bitmap for export.
> 
> Just because qemu is currently limited to only exposing one bitmap at the 
> moment does not mean that a future version can't expose multiple bitmaps. It 
> may very well be that we have reason to expose both "qemu:dirty-bitmap:timeA" 
> and "qemu:dirty-bitmap:timeB" on the same export, for exposing two bitmaps at 
> once.  To get to that point, we'd have to refactor the QAPI command to allow 
> attaching more than one bitmap at the time of creating the NBD export, but 
> it's not impossible.
> 
>>
>>> Metadata context is always owned by some export.
>>
>> Of course.
>>
>>> Do you mean that there will be metadata contexts
>>>
>>> qemu:dirty-bitmap:export-A
>>> qemu:dirty-bitmap:export-B
>>>
>>> both defined for export-A?
>>
>> It does not make sense, but it is valid.
> 
> If an image has multiple bitmaps, exposing all of those as separate contexts 
> at the same time for a single export can indeed make sense.
> 
>>
> 2. It's not directly documented. You assume that NAME == @name. I
> understand that it may be assumed. But it's not documented.

 But NAME is likely to be understood as the name argument, and unlikely
 to be the bitmap name.
>>>
>>> Yes, likely. But it's still a bad specification, which should be fixed.
>>
>> If we cannot change the current behavior since it will break current users,
>> I agree fixing the spec to describe the current behavior is a good idea.
> 
> We need the doc fix. 

Do you like the patch starting this thread? It's the fix we are talking about.

> Whether we also want an additional fix adding an optional parameter allowing 
> user control over the export name is also under debate (the fact that the old 
> x-nbd-server-add-bitmap supported it shows that it may be useful, but it is 
> not minimal, and as I pointed out at the time of removing x-, libvirt can 
> always control what name is exposed by creating a temporary bitmap and 
> merging from other bitmaps into the temporary).
> 
> We also have to think about a future of parallel backup jobs: libvirt can 
> create a single temporary bitmap to expose whatever name it wants under one 
> job, but if libvirt wants to expose the SAME user-visible name to two 
> parallel jobs, it cannot create two bitmaps with the same name, so having a 
> way to set the user-visible name of an arbitrary bitmap when producing the 
> NBD export makes sense on that front.
> 
> 

> 3. It has never worked the way you describe. So if we change the behavior,
> we'll break existing users.

 Do we have existing users? Isn't this a new feature in 4.2?
>>>
>>> No, it's since 4.0
> 
> As long as altering the exported name is controlled by a new optional 
> parameter, it does not hurt older 4.0 clients that do not use the new 
> parameter.
> 


-- 
Best regards,
Vladimir


[Bug 1838390] Re: vmx_write_mem: mmu_gva_to_gpa failed when using hvf

2019-12-25 Thread Jimmy
Ubuntu 18.04 VM crashes after login. It works when the RAM is 4 GB. Posting
this here as the error message looked very similar.

qemu-system-x86_64 -m 6G -vga virtio -show-cursor -usb \
    -device usb-tablet -enable-kvm \
    -drive file=~/QEMU/ubuntu-desktop-18.04.qcow2,if=virtio \
    -accel hvf -cpu host

It crashed with the message below:
vmx_write_mem: mmu_gva_to_gpa 97ccb8dbe000 failed
Abort trap: 6

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1838390

Title:
  vmx_write_mem: mmu_gva_to_gpa failed when using hvf

Status in QEMU:
  New

Bug description:
  Installed qemu 4.0.0 via Homebrew and used the commands below:

  1. qemu-img create -f raw arch-vm.img 100G
  2. qemu-system-x86_64 -show-cursor -only-migratable -nodefaults \
     -boot order=d -cdrom archlinux-2019.07.01-x86_64.iso -cpu host \
     -device virtio-keyboard -device virtio-mouse -device virtio-tablet \
     -drive file=arch-vm.img,format=raw,if=virtio -m 4096 \
     -machine q35,accel=hvf,vmport=off -nic user,ipv6=off,model=virtio \
     -smp 4,sockets=1,cores=2,threads=2 -soundhw hda -vga virtio

  The bootloader menu displayed successfully; after selecting "Boot Arch
  Linux" it crashed with the message: vmx_write_mem: mmu_gva_to_gpa
  91953b54 failed.

  Using the TCG accelerator works, but it is very slow.

  See attachment for full crash report.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1838390/+subscriptions



[Bug 1838390] Re: vmx_write_mem: mmu_gva_to_gpa failed when using hvf

2019-12-25 Thread Jimmy
Crash report

** Attachment added: "mac_qemu_crash_report.txt"
   
https://bugs.launchpad.net/qemu/+bug/1838390/+attachment/5315306/+files/mac_qemu_crash_report.txt




Re: [Qemu-devel] [PATCH] RISCV: support riscv vector extension 0.7.1

2019-12-25 Thread LIU Zhiwei


On 2019/12/20 4:38, Richard Henderson wrote:

On 12/18/19 11:11 PM, LIU Zhiwei wrote:

I'm sorry, it's really hard for me to absorb your opinion. I don't know why
clang will fail when an index runs beyond the end of vreg[n] into vreg[n+1].

I thought for sure one of the address sanitizer checks would detect an
array bounds overrun. But it becomes irrelevant.


As Chih-Min Chao said in another part of the PATCH V2 thread, VLEN will be a
property which can be specified from the command line. So the sub-struct may
be defined as

struct {
    union {
        uint64_t *u64;
        int64_t  *s64;
        uint32_t *u32;
        int32_t  *s32;
        uint16_t *u16;
        int16_t  *s16;
        uint8_t  *u8;
        int8_t   *s8;
    } mem;
    target_ulong vxrm;
    target_ulong vxsat;
    target_ulong vl;
    target_ulong vstart;
    target_ulong vtype;
} vext;

Will that be OK?

Pointers have consequences.  It can be done, but I don't think it is ideal.


The (ill, lmul, sew) fields of vtype will be placed within tb_flags, along
with the bit for (VSTART == 0 && VL == VLMAX).

So the vector extension will take at least 8 bits of tb_flags.

Good.

However, I have one problem supporting both a command-line VLEN and vreg_ofs.

As in SVE, vreg_ofs is the offset from cpu_env. If the structure of the
vector extension (to support a command-line VLEN) is

struct {
    union {
        uint64_t *u64;
        int64_t  *s64;
        uint32_t *u32;
        int32_t  *s32;
        uint16_t *u16;
        int16_t  *s16;
        uint8_t  *u8;
        int8_t   *s8;
    } mem;
    target_ulong vxrm;
    target_ulong vxsat;
    target_ulong vl;
    target_ulong vstart;
    target_ulong vtype;
} vext

I can't find a way to get the direct offset of vreg from cpu_env.

Maybe I should specify a maximum VLEN, the way SVE does?

I think a maximum vlen is best.  A command-line option to adjust vlen is all
well and good, but there's no reason to have to support vlen=(1<<29).

Oh, and you probably need a minimum vlen of 16 bytes as well, otherwise you
will run afoul of the assert in tcg-op-gvec.c that requires gvec operations to
be aligned mod 16.

I think that all you need is

 uint64_t vreg[32 * MAX_VLEN / 8] QEMU_ALIGNED(16);

which gives us

uint32_t vreg_ofs(DisasContext *ctx, int reg)
{
 return offsetof(CPURISCVState, vreg) + reg * ctx->vlen;
}


struct {
    uint64_t vreg[32 * RV_VLEN_MAX / 64] QEMU_ALIGNED(16);
    target_ulong vxrm;
    target_ulong vxsat;
    target_ulong vl;
    target_ulong vstart;
    target_ulong vtype;
} vext;

Is it OK?
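For concreteness, a minimal sketch of that fixed-maximum layout and the offset computation (the RV_VLEN_MAX value, struct, and function names here are illustrative assumptions, not the actual QEMU definitions):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative only: a fixed maximum VLEN (in bits) lets vreg live
 * inline in the CPU state, so a register's offset is a compile-time
 * base plus reg * (per-register size in bytes). */
#define RV_VLEN_MAX 256

struct toy_cpu_state {
    uint64_t toy_vreg[32 * RV_VLEN_MAX / 64] /* QEMU_ALIGNED(16) */;
    uint64_t vtype;
};

/* vlen_bytes plays the role of ctx->vlen in the snippet above. */
static uint32_t toy_vreg_ofs(int reg, int vlen_bytes)
{
    return offsetof(struct toy_cpu_state, toy_vreg) + reg * vlen_bytes;
}
```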


I don't see the point of a union for vreg.  I don't think you'll find that you
actually use it at all.


I think I can move most of the execution checks to translate time, as SVE
does now. However, there are still some differences from SVE.


1) cpu_env must be used as a parameter for the helper functions.

    The helpers need to use env->vext.vl and env->vext.vstart. Thus it
will be difficult to use the out-of-line tcg_gen_gvec_2_ool.


    void tcg_gen_gvec_2_ool(uint32_t dofs, uint32_t aofs,

    uint32_t oprsz, uint32_t maxsz, int32_t data,
    gen_helper_gvec_2 *fn)
    {
        ..
    fn(a0, a1, desc);
 ..
 }
    Maybe I have to write something similar to tcg_gen_gvec_2_ool in
trans_rvv.inc.c, but it would be redundant.


2) simd_desc is not appropriate.

    I also need to pass some members of DisasContext to the helpers.

    (Data, Vlmax, Mlen) is my current choice. Vlmax is the number of
elements in this operation, so it will be defined as
ctx->lmul * ctx->vlen / ctx->sew.

Data is reserved for expansion. Mlen is the mask length for one element,
so it will be defined as ctx->sew / ctx->lmul. With Mlen, an active
element will be selected by

   static inline int vext_elem_mask(void *v0, int mlen, int index)
   {
    uint8_t *mask = v0;   /* can't index a void *, so take a byte view */
    int idx = (index * mlen) / 8;
    int pos = (index * mlen) % 8;

    return (mask[idx] >> pos) & 0x1;
   }

    So I may have to implement vext_desc instead of using simd_desc,
which will be more redundancy. Is there a better way to mask elements?
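As a standalone sketch of that masking scheme (a byte-pointer variant, since the helper above has to index through void *):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the mask lookup discussed above: with mlen mask bits per
 * element, element `index` occupies bit (index * mlen) of the mask
 * register, addressed here through a plain byte pointer. */
static int demo_elem_mask(const uint8_t *v0, int mlen, int index)
{
    int idx = (index * mlen) / 8;   /* which byte */
    int pos = (index * mlen) % 8;   /* which bit in that byte */

    return (v0[idx] >> pos) & 0x1;
}
```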



You do need to document the element ordering that you're going to use for
vreg, i.e. the mapping between the architectural vector register state and
the emulation state. You have two choices:

(1) all bytes in host endianness (e.g. target/ppc)
(2) bytes within each uint64_t in host endianness,
 but each uint64_t is little-endian (e.g. target/arm).

Both require some fixup when running on a big-endian host.


Yes, I will take (2).


Best Regards,

Zhiwei



r~


Re: [PATCH v2] target/m68k: only change valid bits in CACR

2019-12-25 Thread Thomas Huth
Am Fri, 20 Dec 2019 18:24:15 +0100
schrieb Laurent Vivier :

> This is used by NetBSD (and the MacOS ROM) to detect the MMU type
> 
> Signed-off-by: Laurent Vivier 
> ---
> 
> Notes:
> v2: change accordingly to Thomas' comments
>   - Replace MMU feature id by a CPU feature id
>   - fix 68030 mask
>   - add 68060 mask
>   - only mask in m68k_movec_to() function
> 
>  target/m68k/cpu.c| 27 +--
>  target/m68k/cpu.h|  5 -
>  target/m68k/helper.c | 10 +-
>  3 files changed, 34 insertions(+), 8 deletions(-)
> 
> diff --git a/target/m68k/cpu.c b/target/m68k/cpu.c
> index e6596de29c..f6a46bf2fb 100644
> --- a/target/m68k/cpu.c
> +++ b/target/m68k/cpu.c
> @@ -114,11 +114,8 @@ static void m68000_cpu_initfn(Object *obj)
>  m68k_set_feature(env, M68K_FEATURE_MOVEP);
>  }
>  
> -static void m68020_cpu_initfn(Object *obj)
> +static void m680x0_cpu_common(CPUM68KState *env)

Maybe add a comment in front of the function that this is for the 68020 -
68040, so that it is clear that it is not used for the 68000 and 68060?

>  {
> -M68kCPU *cpu = M68K_CPU(obj);
> -CPUM68KState *env = &cpu->env;
> -
>  m68k_set_feature(env, M68K_FEATURE_M68000);

In the long run, we should maybe rename that flag to M68K_FEATURE_M680X0
or M68K_FEATURE_M68K to be able to distinguish between plain 68000 and
generic 680x0 features ... but let's do that in a later patch...

>  m68k_set_feature(env, M68K_FEATURE_USP);
>  m68k_set_feature(env, M68K_FEATURE_WORD_INDEX);
> @@ -136,14 +133,31 @@ static void m68020_cpu_initfn(Object *obj)
>  m68k_set_feature(env, M68K_FEATURE_CHK2);
>  m68k_set_feature(env, M68K_FEATURE_MOVEP);
>  }
> -#define m68030_cpu_initfn m68020_cpu_initfn
> +
> +static void m68020_cpu_initfn(Object *obj)
> +{
> +M68kCPU *cpu = M68K_CPU(obj);
> +CPUM68KState *env = &cpu->env;
> +
> +m680x0_cpu_common(env);
> +m68k_set_feature(env, M68K_FEATURE_M68020);
> +}
> +
> +static void m68030_cpu_initfn(Object *obj)
> +{
> +M68kCPU *cpu = M68K_CPU(obj);
> +CPUM68KState *env = &cpu->env;
> +
> +m680x0_cpu_common(env);
> +m68k_set_feature(env, M68K_FEATURE_M68030);
> +}
>  
>  static void m68040_cpu_initfn(Object *obj)
>  {
>  M68kCPU *cpu = M68K_CPU(obj);
>  CPUM68KState *env = &cpu->env;
>  
> -m68020_cpu_initfn(obj);
> +m680x0_cpu_common(env);
>  m68k_set_feature(env, M68K_FEATURE_M68040);
>  }
>  
> @@ -166,6 +180,7 @@ static void m68060_cpu_initfn(Object *obj)
>  m68k_set_feature(env, M68K_FEATURE_BKPT);
>  m68k_set_feature(env, M68K_FEATURE_RTD);
>  m68k_set_feature(env, M68K_FEATURE_CHK2);
> +m68k_set_feature(env, M68K_FEATURE_M68060);
>  }
>  
>  static void m5208_cpu_initfn(Object *obj)
> diff --git a/target/m68k/cpu.h b/target/m68k/cpu.h
> index 20de3c379a..11c71fa962 100644
> --- a/target/m68k/cpu.h
> +++ b/target/m68k/cpu.h
> @@ -460,6 +460,10 @@ void do_m68k_semihosting(CPUM68KState *env, int
> nr); 
>  enum m68k_features {
>  M68K_FEATURE_M68000,
> +M68K_FEATURE_M68020,
> +M68K_FEATURE_M68030,
> +M68K_FEATURE_M68040,
> +M68K_FEATURE_M68060,
>  M68K_FEATURE_CF_ISA_A,
>  M68K_FEATURE_CF_ISA_B, /* (ISA B or C).  */
>  M68K_FEATURE_CF_ISA_APLUSC, /* BIT/BITREV, FF1, STRLDSR (ISA A+
> or C).  */ @@ -481,7 +485,6 @@ enum m68k_features {
>  M68K_FEATURE_BKPT,
>  M68K_FEATURE_RTD,
>  M68K_FEATURE_CHK2,
> -M68K_FEATURE_M68040, /* instructions specific to MC68040 */
>  M68K_FEATURE_MOVEP,
>  };
>  
> diff --git a/target/m68k/helper.c b/target/m68k/helper.c
> index ae766a6cb0..4aa13b34ed 100644
> --- a/target/m68k/helper.c
> +++ b/target/m68k/helper.c
> @@ -205,7 +205,15 @@ void HELPER(m68k_movec_to)(CPUM68KState *env,
> uint32_t reg, uint32_t val) return;
>  /* MC680[234]0 */
>  case M68K_CR_CACR:
> -env->cacr = val;
> +if (m68k_feature(env, M68K_FEATURE_M68020)) {
> +env->cacr = val & 0x000f;
> +} else if (m68k_feature(env, M68K_FEATURE_M68030)) {
> +env->cacr = val & 0x3f1f;
> +} else if (m68k_feature(env, M68K_FEATURE_M68040)) {
> +env->cacr = val & 0x80008000;
> +} else if (m68k_feature(env, M68K_FEATURE_M68060)) {
> +env->cacr = val & 0xf8e0e000;
> +}
>  m68k_switch_sp(env);
>  return;
>  /* MC680[34]0 */
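For what it's worth, the per-model masking can be read as a small pure function; the mask constants below are taken from the patch, while the enum and function names are hypothetical illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Illustration of the patch's CACR write masking: each CPU model
 * implements only some cache-control bits, and a write keeps only
 * the bits that exist on that model. */
enum toy_m68k_model { TOY_M68020, TOY_M68030, TOY_M68040, TOY_M68060 };

static uint32_t toy_cacr_write(enum toy_m68k_model model, uint32_t val)
{
    switch (model) {
    case TOY_M68020: return val & 0x0000000f;
    case TOY_M68030: return val & 0x00003f1f;
    case TOY_M68040: return val & 0x80008000;
    case TOY_M68060: return val & 0xf8e0e000;
    }
    return 0;
}
```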

Reviewed-by: Thomas Huth