Re: [Qemu-devel] [PATCH] monitor: avoid potential dead-lock when cleaning up

2018-08-19 Thread Markus Armbruster
Marc-André Lureau  writes:

> Hi
> On Wed, Aug 1, 2018 at 5:09 PM Markus Armbruster  wrote:
>>
>> Marc-André Lureau  writes:
>>
>> > Hi
>> >
>> > On Wed, Aug 1, 2018 at 3:19 PM, Markus Armbruster  
>> > wrote:
>> >> Marc-André Lureau  writes:
>> >>
>> >>> When a monitor is connected to a Spice chardev, the monitor cleanup
>> >>> can dead-lock:
>> >>>
>> >>>  #0  0x7f43446637fd in __lll_lock_wait () at /lib64/libpthread.so.0
>> >>>  #1  0x7f434465ccf4 in pthread_mutex_lock () at 
>> >>> /lib64/libpthread.so.0
>> >>>  #2  0x556dd79f22ba in qemu_mutex_lock_impl (mutex=0x556dd81c9220 
>> >>> , file=0x556dd7ae3648 "/home/elmarco/src/qq/monitor.c", 
>> >>> line=645) at /home/elmarco/src/qq/util/qemu-thread-posix.c:66
>> >>>  #3  0x556dd7431bd5 in monitor_qapi_event_queue 
>> >>> (event=QAPI_EVENT_SPICE_DISCONNECTED, qdict=0x556dd9abc850, 
>> >>> errp=0x7fffb7bbddd8) at /home/elmarco/src/qq/monitor.c:645
>> >>>  #4  0x556dd79d476b in qapi_event_send_spice_disconnected 
>> >>> (server=0x556dd98ee760, client=0x556ddaaa8560, errp=0x556dd82180d0 
>> >>> ) at qapi/qapi-events-ui.c:149
>> >>>  #5  0x556dd7870fc1 in channel_event (event=3, info=0x556ddad1b590) 
>> >>> at /home/elmarco/src/qq/ui/spice-core.c:235
>> >>>  #6  0x7f434560a6bb in reds_handle_channel_event (reds=<optimized out>, event=3, info=0x556ddad1b590) at reds.c:316
>> >>>  #7  0x7f43455f393b in main_dispatcher_self_handle_channel_event 
>> >>> (info=0x556ddad1b590, event=3, self=0x556dd9a7d8c0) at 
>> >>> main-dispatcher.c:197
>> >>>  #8  0x7f43455f393b in main_dispatcher_channel_event 
>> >>> (self=0x556dd9a7d8c0, event=event@entry=3, info=0x556ddad1b590) at 
>> >>> main-dispatcher.c:197
>> >>>  #9  0x7f4345612833 in red_stream_push_channel_event 
>> >>> (s=s@entry=0x556ddae2ef40, event=event@entry=3) at red-stream.c:414
>> >>>  #10 0x7f434561286b in red_stream_free (s=0x556ddae2ef40) at 
>> >>> red-stream.c:388
>> >>>  #11 0x7f43455f9ddc in red_channel_client_finalize 
>> >>> (object=0x556dd9bb21a0) at red-channel-client.c:347
>> >>>  #12 0x7f434b5f9fb9 in g_object_unref () at 
>> >>> /lib64/libgobject-2.0.so.0
>> >>>  #13 0x7f43455fc212 in red_channel_client_push (rcc=0x556dd9bb21a0) 
>> >>> at red-channel-client.c:1341
>> >>>  #14 0x556dd76081ba in spice_port_set_fe_open (chr=0x556dd9925e20, 
>> >>> fe_open=0) at /home/elmarco/src/qq/chardev/spice.c:241
>> >>>  #15 0x556dd796d74a in qemu_chr_fe_set_open (be=0x556dd9a37c00, 
>> >>> fe_open=0) at /home/elmarco/src/qq/chardev/char-fe.c:340
>> >>>  #16 0x556dd796d4d9 in qemu_chr_fe_set_handlers (b=0x556dd9a37c00, 
>> >>> fd_can_read=0x0, fd_read=0x0, fd_event=0x0, be_change=0x0, opaque=0x0, 
>> >>> context=0x0, set_open=true) at /home/elmarco/src/qq/chardev/char-fe.c:280
>> >>>  #17 0x556dd796d359 in qemu_chr_fe_deinit (b=0x556dd9a37c00, 
>> >>> del=false) at /home/elmarco/src/qq/chardev/char-fe.c:233
>> >>>  #18 0x556dd7432240 in monitor_data_destroy (mon=0x556dd9a37c00) at 
>> >>> /home/elmarco/src/qq/monitor.c:786
>> >>>  #19 0x556dd743b968 in monitor_cleanup () at 
>> >>> /home/elmarco/src/qq/monitor.c:4683
>> >>>  #20 0x556dd75ce776 in main (argc=3, argv=0x7fffb7bbe458, 
>> >>> envp=0x7fffb7bbe478) at /home/elmarco/src/qq/vl.c:4660
>> >>>
>> >>> This deadlocks because the Spice code tries to emit a "disconnected" event
>> >>> on the monitors while the monitor lock is already held. Fix this by
>> >>> narrowing the locked region to just the monitor list removal.
>> >>>
>> >>> Signed-off-by: Marc-André Lureau 
>> >>
>> >> Do you think this should go into 3.0?
>> >>
>> >>> ---
>> >>>  monitor.c | 22 +++---
>> >>>  1 file changed, 15 insertions(+), 7 deletions(-)
>> >>>
>> >>> diff --git a/monitor.c b/monitor.c
>> >>> index 0fa0910a2a..a16a6c5311 100644
>> >>> --- a/monitor.c
>> >>> +++ b/monitor.c
>> >>> @@ -4702,8 +4702,6 @@ void monitor_init(Chardev *chr, int flags)
>> >>>
>> >>>  void monitor_cleanup(void)
>> >>>  {
>> >>> -Monitor *mon, *next;
>> >>> -
>> >>>  /*
>> >>>   * We need to explicitly stop the I/O thread (but not destroy it),
>> >>>   * clean up the monitor resources, then destroy the I/O thread since
>> >>> @@ -4719,14 +4717,24 @@ void monitor_cleanup(void)
>> >>>  monitor_qmp_bh_responder(NULL);
>> >>>
>> >>>  /* Flush output buffers and destroy monitors */
>> >>> -qemu_mutex_lock(&monitor_lock);
>> >>> -QTAILQ_FOREACH_SAFE(mon, &mon_list, entry, next) {
>> >>> -QTAILQ_REMOVE(&mon_list, mon, entry);
>> >>> +do {
>> >>
>> >> for (;;), please.
>> >>
>> >>> +Monitor *mon;
>> >>> +
>> >>> +qemu_mutex_lock(&monitor_lock);
>> >>> +mon = QTAILQ_FIRST(&mon_list);
>> >>> +if (mon) {
>> >>> +QTAILQ_REMOVE(&mon_list, mon, entry);
>> >>> +}
>> >>> +qemu_mutex_unlock(&monitor_lock);
>> >>> +
>> >>> +if (!mon) {
>> >>> +break;
>> >>> +}
>> >>> +
>> >>>  monitor_flush(mon);
>> >>>  monitor_data_destroy(mon);
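
For reference, a minimal sketch of the same cleanup loop using the for (;;) form Markus asks for above; this merely rearranges the quoted hunk and is not the final patch (the tail of the loop body is elided here, as in the quote):

    /* Flush output buffers and destroy monitors */
    for (;;) {
        Monitor *mon;

        qemu_mutex_lock(&monitor_lock);
        mon = QTAILQ_FIRST(&mon_list);
        if (mon) {
            QTAILQ_REMOVE(&mon_list, mon, entry);
        }
        qemu_mutex_unlock(&monitor_lock);

        if (!mon) {
            break;
        }

        monitor_flush(mon);
        monitor_data_destroy(mon);
        /* ... remainder of the loop body as in the original ... */
    }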

Re: [Qemu-devel] [RFC PATCH] rbd: Don't convert keypairs to JSON and back

2018-08-19 Thread Markus Armbruster
Max Reitz  writes:

> On 2018-08-16 08:40, Markus Armbruster wrote:
>> Max Reitz  writes:
>> 
>>> On 2018-08-15 10:12, Markus Armbruster wrote:
 Max Reitz  writes:
>>>
>>> [...]
>>>
> To me personally the issue is that if you can specify a plain filename,
> bdrv_refresh_filename() should give you that plain filename back.  So
> rbd's implementation of that is lacking.  Well, it just doesn't exist.

 I'm not even sure I understand what you're talking about.
>>>
>>> We have this bdrv_refresh_filename() thing which can do this:
>>>
>>> $ qemu-img info \
>>> "json:{'driver':'raw',
>>>'file':{'driver':'nbd','host':'localhost'}}"
>>> image: nbd://localhost:10809
>>> [...]
>>>
>>> So it can reconstruct a plain filename even if you specify it as options
>>> instead of just using a plain filename.
>>>
>>>
>>> Now here's my fault: I thought it might be necessary for a driver to
>>> implement that function (which rbd doesn't) so that you'd get a nice
>>> filename back (instead of just json:{} garbled things).   But you don't.
>>>  For protocol drivers, you'll just get the initial filename back.  (So
>>> my comment was just wrong.)
>>>
>>> So what I was thinking about was some case where you specified a normal
>>> plain filename and qemu would give you back json:{}.  (If rbd
>>> implemented bdrv_refresh_filename(), that wouldn't happen, because it
>>> would reconstruct a nice normal filename.)  It turns out, I don't think
>>> that can happen so easily.  You'll just get your filename back.
>>>
>>> Because here's what I'm thinking: If someone uses an option that is
>>> undocumented and starts with =, well, too bad.  If someone uses a normal
>>> filename, but gets back a json:{} filename...  Then they are free to use
>>> that anywhere, and their use of "=" is legitimized.
>>>
>>>
>>> Now that issue kind of reappears when you open an RBD volume, and then
>>> e.g. take a blockdev-snapshot.  Then your overlay has an overridden
>>> backing file (one that may be different from what its image header says)
>>> and its filename may well become a json:{} one (to arrange for the
>>> overridden backing file options).  Of course, if you opened the RBD
>>> volume with a filename with some of the options warranting
>>> =keyvalue-pairs, then your json:{} filename will contain those options
>>> under =keyvalue-pairs.
>>>
>>> So...  I'm not quite sure what I want to say?  I think there are edge
>>> cases where the user may not have put any weird option into qemu, but
>>> they do get a json:{} filename with =keyvalue-pairs out of it.  And I
>>> think users are free to use json:{} filenames qemu spews at them, and we
>>> can't blame them for it.
>> 
>> Makes sense.
>> 
>> More reason to deprecate key-value pairs in pseudo-filenames.
>> 
>> The only alternative would be to also provide them in QAPI
>> BlockdevOptionsRbd.  I find that as distasteful as ever, but if the
>> block maintainers decide we need it, I'll hold my nose.
>> 
>> If so, and we are comfortable changing the output the way this patch does
>> (technically altering ABI anyway), we might as well go all the way and
>> filter it out completely.  That would be preferable to cleaning up the 
>> json
>> output of the internal key/value pairs, IMO.
>
> Well, this filtering at least is done by my "Fix some filename
> generation issues" series.

 Likewise.
>>>
>>> The series overhauls quite a bit of the bdrv_refresh_filename()
>>> infrastructure.  That function is also responsible for generating
>>> json:{} filenames.
>>>
>>> One thing it introduces is a BlockDriver field where a driver can
>>> specify which of the runtime options are actually important.  The rest
>>> is omitted from the generated json:{} filename.
>>>
>>> I may have taken the liberty not to include =keyvalue-pairs in RBD's
>>> "strong runtime options" list.
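
For illustration only — the actual field name, type, and entries in Max's series may differ — such a per-driver "strong runtime options" list could look roughly like this:

    /* Hypothetical sketch: options listed here are "strong", i.e. they
     * survive into a generated json:{} filename; anything else would be
     * omitted.  Entry names are illustrative. */
    static const char *const qemu_rbd_strong_runtime_opts[] = {
        "pool",
        "image",
        "server.",
        "user",
        NULL
    };

    static BlockDriver bdrv_rbd = {
        .format_name          = "rbd",
        /* ... */
        .strong_runtime_opts  = qemu_rbd_strong_runtime_opts,
    };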
>> 
>> I see.
>> 
>> Permit me to digress a bit.
>> 
>> I understand one application for generating a json: filename is for
>> putting it into an image file header (say as a COW's backing image).
>
> Yes.
>
> (And it's not completely pointless, as there are options you may want to
> specify, but cannot do so in a plain filename.  Like host-key-check for
> https.)

Understood.

> (And technically you need a string filename to point to when doing
> block-commit (Kevin sent patches to remedy this, though), so that could
> be called an application as well.)
>
>> Having image file headers point to other images is not as simple as it
>> may look at first glance.  The basic case of image format plus plain
>> filename (not containing '/') is straightforward enough.  But if I make
>> the filename absolute (with a leading '/'), the image becomes less easy
>> to move to another machine.
>
> That assumes that we correctly implement relative backing file names.
> Believe me, we don't.
>
> For example, say you did this:
>
> $ qemu-img create -f qcow2 foo/bot.qcow2 1M
> $ qemu-img create -f qcow2 -b 

Re: [Qemu-devel] [PULL 0/6] Linux user for 3.1 patches

2018-08-19 Thread no-reply
Hi,

This series seems to have some coding style problems. See output below for
more information:

Type: series
Message-id: 20180819221707.20693-1-laur...@vivier.eu
Subject: [Qemu-devel] [PULL 0/6] Linux user for 3.1 patches

=== TEST SCRIPT BEGIN ===
#!/bin/bash

BASE=base
n=1
total=$(git log --oneline $BASE.. | wc -l)
failed=0

git config --local diff.renamelimit 0
git config --local diff.renames True
git config --local diff.algorithm histogram

commits="$(git log --format=%H --reverse $BASE..)"
for c in $commits; do
echo "Checking PATCH $n/$total: $(git log -n 1 --format=%s $c)..."
if ! git show $c --format=email | ./scripts/checkpatch.pl --mailback -; then
failed=1
echo
fi
n=$((n+1))
done

exit $failed
=== TEST SCRIPT END ===

Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
From https://github.com/patchew-project/qemu
 * [new tag]   patchew/20180819221707.20693-1-laur...@vivier.eu -> 
patchew/20180819221707.20693-1-laur...@vivier.eu
Switched to a new branch 'test'
7978cedd2d linux-user: add QEMU_IFLA_INFO_KIND nested type for tun
627fe9cda1 linux-user: update netlink route types
221a2f81a7 linux-user: fix recvmsg()/recvfrom() with netlink and MSG_TRUNC
47ed6a6db6 sh4: fix use_icount with linux-user
7d2dc89f58 linux-user: fix 32bit g2h()/h2g()
fce4efcc3d qemu-binfmt-conf.sh: add x86_64 target

=== OUTPUT BEGIN ===
Checking PATCH 1/6: qemu-binfmt-conf.sh: add x86_64 target...
WARNING: line over 80 characters
#29: FILE: scripts/qemu-binfmt-conf.sh:17:
+x86_64_magic='\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x3e\x00'

ERROR: line over 90 characters
#30: FILE: scripts/qemu-binfmt-conf.sh:18:
+x86_64_mask='\xff\xff\xff\xff\xff\xfe\xfe\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff'

total: 1 errors, 1 warnings, 18 lines checked

Your patch has style problems, please review.  If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.

Checking PATCH 2/6: linux-user: fix 32bit g2h()/h2g()...
Checking PATCH 3/6: sh4: fix use_icount with linux-user...
Checking PATCH 4/6: linux-user: fix recvmsg()/recvfrom() with netlink and 
MSG_TRUNC...
Checking PATCH 5/6: linux-user: update netlink route types...
Checking PATCH 6/6: linux-user: add QEMU_IFLA_INFO_KIND nested type for tun...
=== OUTPUT END ===

Test command exited with code: 1


---
Email generated automatically by Patchew [http://patchew.org/].
Please send your feedback to patchew-de...@redhat.com

Re: [Qemu-devel] [PATCH] tests/boot-serial-test: Bump timeout to 6 minutes

2018-08-19 Thread Thomas Huth
On 2018-08-18 12:10, Peter Maydell wrote:
> On 18 August 2018 at 10:07, Thomas Huth  wrote:
>> 6 minutes is really a lot already. I guess most users will hit CTRL-C
>> before waiting that long if there is a real problem here ... If the
>> current test just takes a little bit more than 1 minute on the Sparc
>> machine, maybe 2 or 3 minutes would be sufficient, too?
> 
> Or maybe not. I don't like tests that run close to their timeout
> limits, because that tends to mean that my automated test runs
> are flaky when the machine happens to be heavily loaded when
> a test is running. I think the purpose of a timeout is to prevent
> the test run hanging indefinitely; as you say, console users can
> hit ctrl-c if they get bored anyway.

... or maybe somebody tries to run TCI on a Sparc host one day ... ok,
you've convinced me, let's go with the 360 seconds:

Reviewed-by: Thomas Huth 

Paolo, could you please queue up this patch (since I don't have anything
else pending for assembling a PULL request)?

 Thomas



Re: [Qemu-devel] [PATCH v2 1/3] hw/pci: factor PCI reserve resources to a separate structure

2018-08-19 Thread Liu, Jing2

Hi Marcel,

On 8/17/2018 11:49 PM, Marcel Apfelbaum wrote:

Hi Jing,


[...]

+/*
+ * additional resources to reserve on firmware init
+ */
+typedef struct PCIResReserve {
+    uint32_t bus_reserve;
+    uint64_t io_reserve;
+    uint64_t mem_reserve;


The patch looks good to me, I noticed you renamed
'mem_no_pref_reserve' to 'mem_reserve'.
I remember we had a lot of discussions about the naming, so that the names would
be clear and consistent with the firmware counterpart.


OK, will change 'mem_no_pref_reserve' to 'mem_no_pref' and also for
others.

Please add a least a comment in the PCIResReserve.

Will add a comment to the structure definition, and where it's called.


Also, since you encapsulated the fields into a new struct,
you could remove the  "_reserve" suffix so we
remain with clear "bus", "io", "mem" ...


Got it.

Thanks,
Jing


Thanks,
Marcel


+    uint64_t pref32_reserve;
+    uint64_t pref64_reserve;
+} PCIResReserve;
+
  int pci_bridge_qemu_reserve_cap_init(PCIDevice *dev, int cap_offset,
-  uint32_t bus_reserve, uint64_t io_reserve,
-  uint64_t mem_non_pref_reserve,
-  uint64_t mem_pref_32_reserve,
-  uint64_t mem_pref_64_reserve,
-  Error **errp);
+   PCIResReserve res_reserve, Error **errp);
  #endif /* QEMU_PCI_BRIDGE_H */
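
Picking up the suggestion above to drop the "_reserve" suffix, the structure would then read roughly as follows (a sketch of the discussed rename, not the posted patch; the final field names may differ):

    /*
     * additional resources to reserve on firmware init
     */
    typedef struct PCIResReserve {
        uint32_t bus;
        uint64_t io;
        uint64_t mem_non_pref;
        uint64_t mem_pref_32;
        uint64_t mem_pref_64;
    } PCIResReserve;

    int pci_bridge_qemu_reserve_cap_init(PCIDevice *dev, int cap_offset,
                                         PCIResReserve res_reserve, Error **errp);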






Re: [Qemu-devel] [PATCH] hw/input/ps2.c: fix erratic mouse behavior for Windows 3.1

2018-08-19 Thread Gerd Hoffmann
On Sun, Aug 19, 2018 at 12:35:09AM -0400, John Arbuckle wrote:
> When the user moves the mouse and moves the scroll wheel at the same
> time, the mouse cursor's movement becomes erratic in Windows 3.1. With
> this patch if the mouse is in ps/2 mode and the scroll wheel is used,
> the command queue is reset. This does not fix the erratic mouse
> problem in Windows NT 4.0.

I don't think we need to reset the queue.  Just ignoring the z axis
value for type 0 should do.  Can you try this?

--- a/hw/input/ps2.c
+++ b/hw/input/ps2.c
@@ -661,6 +661,8 @@ static int ps2_mouse_send_packet(PS2MouseState *s)
 /* extra byte for IMPS/2 or IMEX */
 switch(s->mouse_type) {
 default:
+s->mouse_dz = 0;
+dz1 = 0;
 break;
 case 3:
 if (dz1 > 127)

cheers,
  Gerd




Re: [Qemu-devel] [PATCH] usb-host: insert usb device into hostdevs to be scaned

2018-08-19 Thread gerd hoffmann
  Hi,

> > We don't copy any state.  It is rather pointless, even for save/resume on 
> > the
> > same machine we don't know what state the usb device has and whether it is
> > still in the state the guest has left it.
> I have tried stop/cont on a VM with a host-usb device while copying large
> files (which keeps the USB device busy), but the USB device was not
> detached/attached and the copying task completed properly. It seems that
> we can restart it where we left off.

stop/cont isn't a problem.

save/resume means "virsh save" + "virsh resume".  This sends the live
migration stream to a file (using "migrate exec:cat>file" monitor
command for save, then 'qemu -incoming "exec:cat file"' for resume).

cheers,
  Gerd




[Qemu-devel] [Bug 1470481] Re: qemu-img converts large vhd files into only approx. 127GB raw file causing the VM to crash

2018-08-19 Thread Launchpad Bug Tracker
[Expired for QEMU because there has been no activity for 60 days.]

** Changed in: qemu
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1470481

Title:
  qemu-img converts large vhd files into only approx. 127GB raw file
  causing the VM to crash

Status in QEMU:
  Expired

Bug description:
  I have a VHD file for Windows 2014 server OS. I use the following
  command to convert VHD file (20GB) to a RAW file for KVM.

  qemu-img convert -f vpc -O raw WIN-SNRGCQV6O3O.VHD disk.img

  The output file is about 127GB. When I install the VM and boot it up,
  the OS crashes with a STOP error after the initial screen. I found on the
  internet that the file limit of 127GB is an existing bug. Kindly fix
  the problem. The workaround to use a Hyper-V to convert to fixed disk
  is not a feasible solution.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1470481/+subscriptions



[Qemu-devel] [PATCH] tests/tcg/xtensa: add test for failed memory transactions

2018-08-19 Thread Max Filippov
Failed memory transactions should raise exceptions 14 (for fetch) or 15
(for load/store) with XEA2.

Signed-off-by: Max Filippov 
---
 tests/tcg/xtensa/Makefile|  1 +
 tests/tcg/xtensa/test_phys_mem.S | 68 
 2 files changed, 69 insertions(+)
 create mode 100644 tests/tcg/xtensa/test_phys_mem.S

diff --git a/tests/tcg/xtensa/Makefile b/tests/tcg/xtensa/Makefile
index 091518c05583..2f5691f75b09 100644
--- a/tests/tcg/xtensa/Makefile
+++ b/tests/tcg/xtensa/Makefile
@@ -44,6 +44,7 @@ TESTCASES += test_mmu.tst
 TESTCASES += test_mul16.tst
 TESTCASES += test_mul32.tst
 TESTCASES += test_nsa.tst
+TESTCASES += test_phys_mem.tst
 ifdef XT
 TESTCASES += test_pipeline.tst
 endif
diff --git a/tests/tcg/xtensa/test_phys_mem.S b/tests/tcg/xtensa/test_phys_mem.S
new file mode 100644
index ..5db17ce506e8
--- /dev/null
+++ b/tests/tcg/xtensa/test_phys_mem.S
@@ -0,0 +1,68 @@
+#include "macros.inc"
+
+test_suite phys_mem
+
+.purgem test_init
+
+.macro test_init
+movi    a2, 0xc003 /* PPN */
+movi    a3, 0xc004 /* VPN */
+wdtlb   a2, a3
+witlb   a2, a3
+.endm
+
+test inst_fetch_no_phys
+set_vector kernel, 1f
+
+movi    a2, 0xc000
+jx  a2
+1:
+movi    a2, 0xc000
+rsr a3, excvaddr
+assert  eq, a2, a3
+rsr a3, epc1
+assert  eq, a2, a3
+rsr a3, exccause
+movi    a2, 14
+assert  eq, a2, a3
+test_end
+
+test read_no_phys
+set_vector kernel, 2f
+
+movi    a2, 0xc000
+1:
+l32i    a3, a2, 0
+test_fail
+2:
+movi    a2, 0xc000
+rsr a3, excvaddr
+assert  eq, a2, a3
+movi    a2, 1b
+rsr a3, epc1
+assert  eq, a2, a3
+rsr a3, exccause
+movi    a2, 15
+assert  eq, a2, a3
+test_end
+
+test write_no_phys
+set_vector kernel, 2f
+
+movi    a2, 0xc000
+1:
+s32i    a3, a2, 0
+test_fail
+2:
+movi    a2, 0xc000
+rsr a3, excvaddr
+assert  eq, a2, a3
+movi    a2, 1b
+rsr a3, epc1
+assert  eq, a2, a3
+rsr a3, exccause
+movi    a2, 15
+assert  eq, a2, a3
+test_end
+
+test_suite_end
-- 
2.11.0




Re: [Qemu-devel] [PATCH] hw/ppc: on 40p machine, change default firmware to OpenBIOS

2018-08-19 Thread David Gibson
On Fri, Aug 10, 2018 at 04:41:54PM +1000, David Gibson wrote:
> On Fri, Aug 10, 2018 at 07:37:12AM +0200, Hervé Poussineau wrote:
> > OpenBIOS gained 40p support in 5b20e4cacecb62fb2bdc6867c11d44cddd77c4ff
> > Use it, instead of relying on an unmaintained and very limited firmware.
> > 
> > Signed-off-by: Hervé Poussineau 
> 
> Applied to ppc-for-3.1, thanks.

But I'm afraid I've had to pull it out.  This breaks boot-serial-test,
because the new firmware prints different messages than the old one
did and the testcase isn't looking for them.

I tried fixing it, but ran into complications I didn't have time to
debug.  So I'm afraid this will miss my coming-soon pull request.

You should be able to reproduce the problem with "make check
SPEED=slow", can you fix this up and resubmit.

> 
> > ---
> >  hw/ppc/prep.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/hw/ppc/prep.c b/hw/ppc/prep.c
> > index 3401570d98..1558855247 100644
> > --- a/hw/ppc/prep.c
> > +++ b/hw/ppc/prep.c
> > @@ -736,7 +736,7 @@ static void ibm_40p_init(MachineState *machine)
> >  /* PCI host */
> >  dev = qdev_create(NULL, "raven-pcihost");
> >  if (!bios_name) {
> > -bios_name = BIOS_FILENAME;
> > +bios_name = "openbios-ppc";
> >  }
> >  qdev_prop_set_string(dev, "bios-name", bios_name);
> >  qdev_prop_set_uint32(dev, "elf-machine", PPC_ELF_MACHINE);
> 



-- 
David Gibson| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson




Re: [Qemu-devel] [PATCH v2 2/3] hw/pci: add teardown function for PCI resource reserve capability

2018-08-19 Thread Liu, Jing2

Hi Marcel,

On 8/18/2018 12:10 AM, Marcel Apfelbaum wrote:

Hi Jing,

On 08/16/2018 12:28 PM, Jing Liu wrote:

Clean up the PCI config space of resource reserve capability.

Signed-off-by: Jing Liu 
---
  hw/pci/pci_bridge.c | 9 +
  include/hw/pci/pci_bridge.h | 1 +
  2 files changed, 10 insertions(+)

diff --git a/hw/pci/pci_bridge.c b/hw/pci/pci_bridge.c
index 15b055e..dbcee90 100644
--- a/hw/pci/pci_bridge.c
+++ b/hw/pci/pci_bridge.c
@@ -465,6 +465,15 @@ int pci_bridge_qemu_reserve_cap_init(PCIDevice 
*dev, int cap_offset,

  return 0;
  }
+void pci_bridge_qemu_reserve_cap_uninit(PCIDevice *dev)
+{
+    uint8_t pos = pci_find_capability(dev, PCI_CAP_ID_VNDR);
+
+    pci_del_capability(dev, PCI_CAP_ID_VNDR, sizeof(PCIBridgeQemuCap));


I think that you only need to call pci_del_capability,


+    memset(dev->config + pos + PCI_CAP_FLAGS, 0,
+   sizeof(PCIBridgeQemuCap) - PCI_CAP_FLAGS);
+}


... no need for the above line. The reason is pci_del_capability
will "unlink" the capability, and even if the data remains in
the configuration space array, it will not be used.


I think I got it: pci_del_capability "unlinks" it by clearing the flag:
pdev->config[PCI_STATUS] &= ~PCI_STATUS_CAP_LIST;
so that pdev->config will not be used, right?


Do you agree? If yes, just call pci_del_capability and you don't need
this patch.


Yup, I agree with you. And let me remove this patch in next version.

Thanks,
Jing



Thanks,
Marcel



[...]
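
On Marcel's point above: once the memset is dropped, the whole teardown arguably reduces to a single call at the device cleanup site, which is why a separate wrapper patch is no longer needed. A minimal sketch, assuming the capability was added with sizeof(PCIBridgeQemuCap) (the function name below is illustrative):

    /* Hypothetical teardown hook; the name is illustrative. */
    static void bridge_dev_exitfn(PCIDevice *dev)
    {
        /* Unlinking the vendor-specific capability is enough; any stale
         * bytes left behind in dev->config are unreachable once the
         * capability is off the capability list. */
        pci_del_capability(dev, PCI_CAP_ID_VNDR, sizeof(PCIBridgeQemuCap));
    }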



Re: [Qemu-devel] [PATCH 0/4] Fix socket chardev regression

2018-08-19 Thread Peter Xu
On Fri, Aug 17, 2018 at 03:52:20PM +0200, Marc-André Lureau wrote:
> Hi,
> 
> In commit 25679e5d58e "chardev: tcp: postpone async connection setup"
> (and its follow up 99f2f54174a59), Peter moved chardev socket
> connection to machine_done event. However, chardev created later will
> no longer attempt to connect, and chardev created in tests do not have
> machine_done event (breaking some of vhost-user-test).
> 
> The goal was to move the "connect" source to the chardev frontend
> context (the monitor thread context in his case). chr->gcontext is set
> with qemu_chr_fe_set_handlers(). But there is no guarantee that the
> function will be called in general,

Could you hint a case where we didn't use qemu_chr_fe_set_handlers()
upon a chardev backend?  I thought it was always used in chardev
frontends, and what the backend could do if without a frontend?

[1]

> so we can't delay connection until
> then: the chardev should still attempt to connect during open(), using
> the main context.
> 
> An alternative would be to specify the iothread during chardev
> creation. Setting up monitor OOB would be quite different too, it
> would take the same iothread as argument.
> 
> 99f2f54174a595e is also a bit problematic, since it will behave
> differently before and after machine_done (the first case gives a
> chance to use a different context reliably, the second looks racy)
> 
> In the end, I am not sure this is all necessary, as chardev callbacks
> are called after qemu_chr_fe_set_handlers(), at which point the
> context of sources are updated. In "char-socket: update all ioc
> handlers when changing context", I moved also the hup handler to the
> updated context. So unless the main thread is already stuck, we can
> setup a different context for the chardev at that time. Or not?

IMHO the two patches that you reverted are special-cases for reasons.

The TLS handshake is carried out with an TLS internal GSource which is
not owned by the chardev code, so the qemu_chr_fe_set_handlers() won't
update that GSource (please refer to qio_channel_tls_handshake_task).

The async connection is carried out in a standalone thread that calls
connect().  IMHO we'd better not update the gcontext bound to the
async task since otherwise there'll be a race (IIRC I proposed
something before using a mutex to update the gcontext, but Dan would
prefer not to, and I followed with the suggestion which makes sense to
me).

Could we just postpone these machine done tasks into
qemu_chr_fe_set_handlers() (or say, chr_update_read_handler() hook,
just like what I mentioned in the other thread)?  Though we'll need to be sure
qemu_chr_fe_set_handlers() will be called for all chardev backends,
hence I asked question [1] above.

Regards,

-- 
Peter Xu



Re: [Qemu-devel] [PATCH v2 0/3] hw/pci: PCI resource reserve capability

2018-08-19 Thread Liu, Jing2

Hi Marcel,

On 8/18/2018 12:18 AM, Marcel Apfelbaum wrote:

Hi Jing,

On 08/16/2018 12:28 PM, Jing Liu wrote:

This patch series is about the PCI resource reserve capability.

The first patch refactors the resource reserve fields of the GenPCIERootPort
structure out into a new structure called "PCIResReserve", and modifies
the parameter list of pci_bridge_qemu_reserve_cap_init() accordingly.

Then we add the teardown function called 
pci_bridge_qemu_reserve_cap_uninit().


Last, we enable the resource reserve capability for the legacy PCI bridge
so that firmware can reserve additional resources for the bridge.


The series looks good to me, please see some minor comments
in the patches.


Thanks very much for your reviewing. I will improve the
codes and send new version then.


Can you please point me to the SeaBIOS / OVMF counterpart?


Sure. The SeaBIOS patch series is here:
https://patchew.org/Seabios/1534386737-8131-1-git-send-email-jing2@linux.intel.com/

Thanks,
Jing


Thanks,
Marcel



Change Log:
v2 -> v1
* add refactoring patch
* add teardown function
* some other fixes

Jing Liu (3):
   hw/pci: factor PCI reserve resources to a separate structure
   hw/pci: add teardown function for PCI resource reserve capability
   hw/pci: add PCI resource reserve capability to legacy PCI bridge

  hw/pci-bridge/gen_pcie_root_port.c | 32 +-
  hw/pci-bridge/pci_bridge_dev.c | 25 
  hw/pci/pci_bridge.c    | 47 +-
  include/hw/pci/pci_bridge.h    | 18 +++
  4 files changed, 80 insertions(+), 42 deletions(-)







[Qemu-devel] [PATCH] target/xtensa: convert to do_transaction_failed

2018-08-19 Thread Max Filippov
Signed-off-by: Max Filippov 
---
 target/xtensa/cpu.c   |  2 +-
 target/xtensa/cpu.h   |  7 ---
 target/xtensa/op_helper.c | 12 +++-
 3 files changed, 12 insertions(+), 9 deletions(-)

diff --git a/target/xtensa/cpu.c b/target/xtensa/cpu.c
index 590813d4f7b9..a54dbe42602d 100644
--- a/target/xtensa/cpu.c
+++ b/target/xtensa/cpu.c
@@ -186,7 +186,7 @@ static void xtensa_cpu_class_init(ObjectClass *oc, void 
*data)
 #else
 cc->do_unaligned_access = xtensa_cpu_do_unaligned_access;
 cc->get_phys_page_debug = xtensa_cpu_get_phys_page_debug;
-cc->do_unassigned_access = xtensa_cpu_do_unassigned_access;
+cc->do_transaction_failed = xtensa_cpu_do_transaction_failed;
 #endif
 cc->debug_excp_handler = xtensa_breakpoint_handler;
 cc->disas_set_info = xtensa_cpu_disas_set_info;
diff --git a/target/xtensa/cpu.h b/target/xtensa/cpu.h
index 7472cf3ca32a..1362772617ea 100644
--- a/target/xtensa/cpu.h
+++ b/target/xtensa/cpu.h
@@ -497,9 +497,10 @@ int xtensa_cpu_handle_mmu_fault(CPUState *cs, vaddr 
address, int rw, int size,
 int mmu_idx);
 void xtensa_cpu_do_interrupt(CPUState *cpu);
 bool xtensa_cpu_exec_interrupt(CPUState *cpu, int interrupt_request);
-void xtensa_cpu_do_unassigned_access(CPUState *cpu, hwaddr addr,
- bool is_write, bool is_exec, int opaque,
- unsigned size);
+void xtensa_cpu_do_transaction_failed(CPUState *cs, hwaddr physaddr, vaddr 
addr,
+  unsigned size, MMUAccessType access_type,
+  int mmu_idx, MemTxAttrs attrs,
+  MemTxResult response, uintptr_t retaddr);
 void xtensa_cpu_dump_state(CPUState *cpu, FILE *f,
fprintf_function cpu_fprintf, int flags);
 hwaddr xtensa_cpu_get_phys_page_debug(CPUState *cpu, vaddr addr);
diff --git a/target/xtensa/op_helper.c b/target/xtensa/op_helper.c
index d4c942d87980..06fe346f02ff 100644
--- a/target/xtensa/op_helper.c
+++ b/target/xtensa/op_helper.c
@@ -78,18 +78,20 @@ void tlb_fill(CPUState *cs, target_ulong vaddr, int size,
 }
 }
 
-void xtensa_cpu_do_unassigned_access(CPUState *cs, hwaddr addr,
- bool is_write, bool is_exec, int opaque,
- unsigned size)
+void xtensa_cpu_do_transaction_failed(CPUState *cs, hwaddr physaddr, vaddr 
addr,
+  unsigned size, MMUAccessType access_type,
+  int mmu_idx, MemTxAttrs attrs,
+  MemTxResult response, uintptr_t retaddr)
 {
 XtensaCPU *cpu = XTENSA_CPU(cs);
 CPUXtensaState *env = &cpu->env;
 
+cpu_restore_state(cs, retaddr, true);
 HELPER(exception_cause_vaddr)(env, env->pc,
-  is_exec ?
+  access_type == MMU_INST_FETCH ?
   INSTR_PIF_ADDR_ERROR_CAUSE :
   LOAD_STORE_PIF_ADDR_ERROR_CAUSE,
-  is_exec ? addr : cs->mem_io_vaddr);
+  addr);
 }
 
 static void tb_invalidate_virtual_addr(CPUXtensaState *env, uint32_t vaddr)
-- 
2.11.0




Re: [Qemu-devel] [PATCH] usb-host: insert usb device into hostdevs to be scaned

2018-08-19 Thread linzhecheng



> -Original Message-
> From: gerd hoffmann [mailto:kra...@redhat.com]
> Sent: Friday, August 17, 2018 2:08 PM
> To: CheneyLin 
> Cc: linzhecheng ; wangxin (U)
> ; qemu-devel@nongnu.org
> Subject: Re: [Qemu-devel] [PATCH] usb-host: insert usb device into hostdevs to
> be scaned
> 
> > > Why live-migrate to localhost?  That is rather tricky due to both
> > > source and target qemu accessing host resources at the same time,
> > > and I guess this is the reason you are seeing the problems here.
> > Thanks for your reply, but I'm still confused about these issues:
> > 1. If local live migration is not supported for host-usb device, why
> > we have to detach it and rescan it in usb_host_post_load_bh? If a
> > device's property *needs_autoscan* is false, how can we scan it(because we
> have detached it)?
> 
> autoscan is for hotplug.  You can un-plug and re-plug a usb device on the
> physical host, and the guest will see these un-plugs and re-plugs.
> 
> autoscan is turned off in case the usb device is specified using
> hostbus+hostaddr attributes, because hostaddr is a moving target.
> Each time the device is plugged in it gets a different address, for hotplug to
> work properly you must specify the device using attributes which don't change.
> You can use hostport instead of hostaddr for example.
> 
> > 2. Normal live migration between two remote hostes makes a vm access
> > differnt host usb devices, how can we copy states and data of them?
> 
> We don't copy any state.  It is rather pointless, even for save/resume on the
> same machine we don't know what state the usb device has and whether it is
> still in the state the guest has left it.
I have tried stop/cont on a VM with a host-usb device while copying large
files (which keeps the USB device busy), but the USB device was not
detached/attached and the copying task completed properly. It seems that
we can restart it where we left off.

> 
> So instead we virtually un-plug and re-plug the device, to make the guest 
> driver
> re-initialize the device.
> 
> HTH,
>   Gerd




Re: [Qemu-devel] [PATCH 2/2] qemu-img: Add dd seek= option

2018-08-19 Thread Fam Zheng
On Thu, 08/16 04:20, Max Reitz wrote:
> No, the real issue is that dd is still not implemented just as a
> frontend to convert.  Which it should be.  I'm not sure dd was a very
> good idea from the start, and now it should ideally be a frontend to
> convert.
> 
> (My full opinion on the matter: dd has a horrible interface.  I don't
> quite see why we replicated that inside qemu-img.  Also, if you want to
> use dd, why not use qemu-nbd + Linux nbd device + real dd?)

The intention is that dd is a familiar interface and allows for operating on
portions of images. It is much more convenient than "qemu-nbd + Linux nbd + dd"
and a bit more convenient than "booting a Linux VM, attaching the image as a
virtual disk, then use dd in the guest". More so when writing tests.

Fam



[Qemu-devel] [PATCH v2] target/xtensa: clean up gdbstub register handling

2018-08-19 Thread Max Filippov
- move register counting to xtensa/gdbstub.c
- add symbolic names for register types and flags from GDB and use them
  in register counting and access functions.

Signed-off-by: Max Filippov 
---
Changes v1->v2:
- add missing dereference to the n_regs/n_core_regs in xtensa_count_regs

 target/xtensa/cpu.h |  2 ++
 target/xtensa/gdbstub.c | 60 +++--
 target/xtensa/helper.c  | 12 +-
 3 files changed, 51 insertions(+), 23 deletions(-)

diff --git a/target/xtensa/cpu.h b/target/xtensa/cpu.h
index 51b455146494..7472cf3ca32a 100644
--- a/target/xtensa/cpu.h
+++ b/target/xtensa/cpu.h
@@ -503,6 +503,8 @@ void xtensa_cpu_do_unassigned_access(CPUState *cpu, hwaddr 
addr,
 void xtensa_cpu_dump_state(CPUState *cpu, FILE *f,
fprintf_function cpu_fprintf, int flags);
 hwaddr xtensa_cpu_get_phys_page_debug(CPUState *cpu, vaddr addr);
+void xtensa_count_regs(const XtensaConfig *config,
+   unsigned *n_regs, unsigned *n_core_regs);
 int xtensa_cpu_gdb_read_register(CPUState *cpu, uint8_t *buf, int reg);
 int xtensa_cpu_gdb_write_register(CPUState *cpu, uint8_t *buf, int reg);
 void xtensa_cpu_do_unaligned_access(CPUState *cpu, vaddr addr,
diff --git a/target/xtensa/gdbstub.c b/target/xtensa/gdbstub.c
index a8ea98d03fb8..c9450914c72d 100644
--- a/target/xtensa/gdbstub.c
+++ b/target/xtensa/gdbstub.c
@@ -23,6 +23,42 @@
 #include "exec/gdbstub.h"
 #include "qemu/log.h"
 
+enum {
+  xtRegisterTypeArRegfile = 1,  /* Register File ar0..arXX.  */
+  xtRegisterTypeSpecialReg, /* CPU states, such as PS, Booleans, (rsr).  */
+  xtRegisterTypeUserReg,/* User defined registers (rur).  */
+  xtRegisterTypeTieRegfile, /* User define register files.  */
+  xtRegisterTypeTieState,   /* TIE States (mapped on user regs).  */
+  xtRegisterTypeMapped, /* Mapped on Special Registers.  */
+  xtRegisterTypeUnmapped,   /* Special case of masked registers.  */
+  xtRegisterTypeWindow, /* Live window registers (a0..a15).  */
+  xtRegisterTypeVirtual,/* PC, FP.  */
+  xtRegisterTypeUnknown
+};
+
+#define XTENSA_REGISTER_FLAGS_PRIVILEGED    0x0001
+#define XTENSA_REGISTER_FLAGS_READABLE  0x0002
+#define XTENSA_REGISTER_FLAGS_WRITABLE  0x0004
+#define XTENSA_REGISTER_FLAGS_VOLATILE  0x0008
+
+void xtensa_count_regs(const XtensaConfig *config,
+   unsigned *n_regs, unsigned *n_core_regs)
+{
+unsigned i;
+
+for (i = 0; config->gdb_regmap.reg[i].targno >= 0; ++i) {
+if (config->gdb_regmap.reg[i].type != xtRegisterTypeTieState &&
+config->gdb_regmap.reg[i].type != xtRegisterTypeMapped &&
+config->gdb_regmap.reg[i].type != xtRegisterTypeUnmapped) {
+++*n_regs;
+if ((config->gdb_regmap.reg[i].flags &
+ XTENSA_REGISTER_FLAGS_PRIVILEGED) == 0) {
+++*n_core_regs;
+}
+}
+}
+}
+
 int xtensa_cpu_gdb_read_register(CPUState *cs, uint8_t *mem_buf, int n)
 {
 XtensaCPU *cpu = XTENSA_CPU(cs);
@@ -40,21 +76,21 @@ int xtensa_cpu_gdb_read_register(CPUState *cs, uint8_t 
*mem_buf, int n)
 }
 
 switch (reg->type) {
-case 9: /*pc*/
+case xtRegisterTypeVirtual: /*pc*/
 return gdb_get_reg32(mem_buf, env->pc);
 
-case 1: /*ar*/
+case xtRegisterTypeArRegfile: /*ar*/
 xtensa_sync_phys_from_window(env);
 return gdb_get_reg32(mem_buf, env->phys_regs[(reg->targno & 0xff)
  % env->config->nareg]);
 
-case 2: /*SR*/
+case xtRegisterTypeSpecialReg: /*SR*/
 return gdb_get_reg32(mem_buf, env->sregs[reg->targno & 0xff]);
 
-case 3: /*UR*/
+case xtRegisterTypeUserReg: /*UR*/
 return gdb_get_reg32(mem_buf, env->uregs[reg->targno & 0xff]);
 
-case 4: /*f*/
+case xtRegisterTypeTieRegfile: /*f*/
 i = reg->targno & 0x0f;
 switch (reg->size) {
 case 4:
@@ -69,7 +105,7 @@ int xtensa_cpu_gdb_read_register(CPUState *cs, uint8_t 
*mem_buf, int n)
 return reg->size;
 }
 
-case 8: /*a*/
+case xtRegisterTypeWindow: /*a*/
 return gdb_get_reg32(mem_buf, env->regs[reg->targno & 0x0f]);
 
 default:
@@ -99,24 +135,24 @@ int xtensa_cpu_gdb_write_register(CPUState *cs, uint8_t 
*mem_buf, int n)
 tmp = ldl_p(mem_buf);
 
 switch (reg->type) {
-case 9: /*pc*/
+case xtRegisterTypeVirtual: /*pc*/
 env->pc = tmp;
 break;
 
-case 1: /*ar*/
+case xtRegisterTypeArRegfile: /*ar*/
 env->phys_regs[(reg->targno & 0xff) % env->config->nareg] = tmp;
 xtensa_sync_window_from_phys(env);
 break;
 
-case 2: /*SR*/
+case xtRegisterTypeSpecialReg: /*SR*/
 env->sregs[reg->targno & 0xff] = tmp;
 break;
 
-case 3: /*UR*/
+case xtRegisterTypeUserReg: /*UR*/
 env->uregs[reg->targno & 0xff] = tmp;
 break;
 

[Qemu-devel] [PULL 6/6] linux-user: add QEMU_IFLA_INFO_KIND nested type for tun

2018-08-19 Thread Laurent Vivier
Signed-off-by: Laurent Vivier 
Message-Id: <20180814161904.12216-4-laur...@vivier.eu>
---
 linux-user/syscall.c | 48 
 1 file changed, 48 insertions(+)

diff --git a/linux-user/syscall.c b/linux-user/syscall.c
index da1fcaf4ca..424296d1a1 100644
--- a/linux-user/syscall.c
+++ b/linux-user/syscall.c
@@ -501,6 +501,20 @@ enum {
 QEMU___IFLA_BRPORT_MAX
 };
 
+enum {
+QEMU_IFLA_TUN_UNSPEC,
+QEMU_IFLA_TUN_OWNER,
+QEMU_IFLA_TUN_GROUP,
+QEMU_IFLA_TUN_TYPE,
+QEMU_IFLA_TUN_PI,
+QEMU_IFLA_TUN_VNET_HDR,
+QEMU_IFLA_TUN_PERSIST,
+QEMU_IFLA_TUN_MULTI_QUEUE,
+QEMU_IFLA_TUN_NUM_QUEUES,
+QEMU_IFLA_TUN_NUM_DISABLED_QUEUES,
+QEMU___IFLA_TUN_MAX,
+};
+
 enum {
 QEMU_IFLA_INFO_UNSPEC,
 QEMU_IFLA_INFO_KIND,
@@ -2315,6 +2329,34 @@ static abi_long 
host_to_target_slave_data_bridge_nlattr(struct nlattr *nlattr,
 return 0;
 }
 
+static abi_long host_to_target_data_tun_nlattr(struct nlattr *nlattr,
+  void *context)
+{
+uint32_t *u32;
+
+switch (nlattr->nla_type) {
+/* uint8_t */
+case QEMU_IFLA_TUN_TYPE:
+case QEMU_IFLA_TUN_PI:
+case QEMU_IFLA_TUN_VNET_HDR:
+case QEMU_IFLA_TUN_PERSIST:
+case QEMU_IFLA_TUN_MULTI_QUEUE:
+break;
+/* uint32_t */
+case QEMU_IFLA_TUN_NUM_QUEUES:
+case QEMU_IFLA_TUN_NUM_DISABLED_QUEUES:
+case QEMU_IFLA_TUN_OWNER:
+case QEMU_IFLA_TUN_GROUP:
+u32 = NLA_DATA(nlattr);
+*u32 = tswap32(*u32);
+break;
+default:
+gemu_log("Unknown QEMU_IFLA_TUN type %d\n", nlattr->nla_type);
+break;
+}
+return 0;
+}
+
 struct linkinfo_context {
 int len;
 char *name;
@@ -2349,6 +2391,12 @@ static abi_long 
host_to_target_data_linkinfo_nlattr(struct nlattr *nlattr,
   nlattr->nla_len,
   NULL,
  
host_to_target_data_bridge_nlattr);
+} else if (strncmp(li_context->name, "tun",
+li_context->len) == 0) {
+return host_to_target_for_each_nlattr(NLA_DATA(nlattr),
+  nlattr->nla_len,
+  NULL,
+
host_to_target_data_tun_nlattr);
 } else {
 gemu_log("Unknown QEMU_IFLA_INFO_KIND %s\n", li_context->name);
 }
-- 
2.17.1




[Qemu-devel] [PULL 3/6] sh4: fix use_icount with linux-user

2018-08-19 Thread Laurent Vivier
This fixes java in a linux-user chroot:
  $ java --version
  qemu-sh4: .../accel/tcg/cpu-exec.c:634: cpu_loop_exec_tb: Assertion 
`use_icount' failed.
  qemu: uncaught target signal 6 (Aborted) - core dumped
  Aborted (core dumped)

In gen_conditional_jump() in the GUSA_EXCLUSIVE part, we must reset
base.is_jmp to DISAS_NEXT after the gen_goto_tb() as it is done in
gen_delayed_conditional_jump() after the gen_jump().

Bug: https://bugs.launchpad.net/qemu/+bug/1768246
Fixes: 4834871bc95b67343248100e2a75ae0d287bc08b
   ("target/sh4: Convert to DisasJumpType")
Reported-by: John Paul Adrian Glaubitz 
Signed-off-by: Laurent Vivier 
Reviewed-by: Richard Henderson 
Reviewed-by: Aurelien Jarno 
Message-Id: <20180811082328.11268-1-laur...@vivier.eu>
---
 target/sh4/translate.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/target/sh4/translate.c b/target/sh4/translate.c
index 1b9a201d6d..ab254b0e8d 100644
--- a/target/sh4/translate.c
+++ b/target/sh4/translate.c
@@ -293,6 +293,7 @@ static void gen_conditional_jump(DisasContext *ctx, 
target_ulong dest,
disallow it in use_goto_tb, but it handles exit + singlestep.  */
 gen_goto_tb(ctx, 0, dest);
 gen_set_label(l1);
+ctx->base.is_jmp = DISAS_NEXT;
 return;
 }
 
-- 
2.17.1




[Qemu-devel] [PULL 4/6] linux-user: fix recvmsg()/recvfrom() with netlink and MSG_TRUNC

2018-08-19 Thread Laurent Vivier
If recvmsg()/recvfrom() are used with the MSG_TRUNC flag, they return the
real length even if it was longer than the passed buffer.
So when we translate the buffer we must check we don't go beyond the
end of the buffer.

Bug: https://github.com/vivier/qemu-m68k/issues/33
Reported-by: John Paul Adrian Glaubitz 
Signed-off-by: Laurent Vivier 
Reviewed-by: Peter Maydell 
Message-Id: <20180806211806.29845-1-laur...@vivier.eu>
---
 linux-user/syscall.c | 9 +++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/linux-user/syscall.c b/linux-user/syscall.c
index 1806b33b02..e66faf1c62 100644
--- a/linux-user/syscall.c
+++ b/linux-user/syscall.c
@@ -3892,7 +3892,7 @@ static abi_long do_sendrecvmsg_locked(int fd, struct 
target_msghdr *msgp,
 len = ret;
 if (fd_trans_host_to_target_data(fd)) {
 ret = fd_trans_host_to_target_data(fd)(msg.msg_iov->iov_base,
-   len);
+   MIN(msg.msg_iov->iov_len, len));
 } else {
 ret = host_to_target_cmsg(msgp, &msg);
 }
@@ -4169,7 +4169,12 @@ static abi_long do_recvfrom(int fd, abi_ulong msg, 
size_t len, int flags,
 }
 if (!is_error(ret)) {
 if (fd_trans_host_to_target_data(fd)) {
-ret = fd_trans_host_to_target_data(fd)(host_msg, ret);
+abi_long trans;
+trans = fd_trans_host_to_target_data(fd)(host_msg, MIN(ret, len));
+if (is_error(trans)) {
+ret = trans;
+goto fail;
+}
 }
 if (target_addr) {
 host_to_target_sockaddr(target_addr, addr, addrlen);
-- 
2.17.1




[Qemu-devel] [PULL 0/6] Linux user for 3.1 patches

2018-08-19 Thread Laurent Vivier
The following changes since commit 0abaa41d936becd914a16ee1fe2a981d96d19428:

  Merge remote-tracking branch 'remotes/ehabkost/tags/x86-next-pull-request' 
into staging (2018-08-17 09:46:00 +0100)

are available in the Git repository at:

  git://github.com/vivier/qemu.git tags/linux-user-for-3.1-pull-request

for you to fetch changes up to c072a24c11e713f14ab969799433218d8ceebe19:

  linux-user: add QEMU_IFLA_INFO_KIND nested type for tun (2018-08-20 00:11:06 
+0200)


linux-user fixes:
- netlink fixes (add missing types, fix MSG_TRUNC)
- sh4 fix (tcg state)
- sparc32plus fix (truncate address space to 32bit)
- add x86_64 binfmt data



Laurent Vivier (6):
  qemu-binfmt-conf.sh: add x86_64 target
  linux-user: fix 32bit g2h()/h2g()
  sh4: fix use_icount with linux-user
  linux-user: fix recvmsg()/recvfrom() with netlink and MSG_TRUNC
  linux-user: update netlink route types
  linux-user: add QEMU_IFLA_INFO_KIND nested type for tun

 include/exec/cpu_ldst.h   | 23 +--
 include/exec/cpu_ldst_useronly_template.h | 12 ++--
 linux-user/syscall.c  | 78 ++-
 scripts/qemu-binfmt-conf.sh   |  6 +-
 target/sh4/translate.c|  1 +
 5 files changed, 105 insertions(+), 15 deletions(-)

-- 
2.17.1




[Qemu-devel] [PULL 1/6] qemu-binfmt-conf.sh: add x86_64 target

2018-08-19 Thread Laurent Vivier
Signed-off-by: Laurent Vivier 
Reviewed-by: Richard Henderson 
Reviewed-by: Thomas Huth 
Message-Id: <20180801102944.23457-1-laur...@vivier.eu>
---
 scripts/qemu-binfmt-conf.sh | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/scripts/qemu-binfmt-conf.sh b/scripts/qemu-binfmt-conf.sh
index b0dc8a714a..b5a16742a1 100755
--- a/scripts/qemu-binfmt-conf.sh
+++ b/scripts/qemu-binfmt-conf.sh
@@ -4,7 +4,7 @@
 qemu_target_list="i386 i486 alpha arm armeb sparc32plus ppc ppc64 ppc64le m68k 
\
 mips mipsel mipsn32 mipsn32el mips64 mips64el \
 sh4 sh4eb s390x aarch64 aarch64_be hppa riscv32 riscv64 xtensa xtensaeb \
-microblaze microblazeel or1k"
+microblaze microblazeel or1k x86_64"
 
 
i386_magic='\x7fELF\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x03\x00'
 
i386_mask='\xff\xff\xff\xff\xff\xfe\xfe\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff'
@@ -14,6 +14,10 @@ 
i486_magic='\x7fELF\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x06\
 
i486_mask='\xff\xff\xff\xff\xff\xfe\xfe\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff'
 i486_family=i386
 
+x86_64_magic='\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x3e\x00'
+x86_64_mask='\xff\xff\xff\xff\xff\xfe\xfe\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff'
+x86_64_family=i386
+
 
alpha_magic='\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x26\x90'
 
alpha_mask='\xff\xff\xff\xff\xff\xfe\xfe\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff'
 alpha_family=alpha
-- 
2.17.1




[Qemu-devel] [PULL 5/6] linux-user: update netlink route types

2018-08-19 Thread Laurent Vivier
Add RTA_PREF and RTA_CACHEINFO.

Fix following errors when we start gedit:

  Unknown host RTA type: 12
  Unknown host RTA type: 20

Signed-off-by: Laurent Vivier 
Message-Id: <20180814161904.12216-3-laur...@vivier.eu>
---
 linux-user/syscall.c | 19 +++
 1 file changed, 19 insertions(+)

diff --git a/linux-user/syscall.c b/linux-user/syscall.c
index e66faf1c62..da1fcaf4ca 100644
--- a/linux-user/syscall.c
+++ b/linux-user/syscall.c
@@ -2659,12 +2659,17 @@ static abi_long host_to_target_data_addr_rtattr(struct 
rtattr *rtattr)
 static abi_long host_to_target_data_route_rtattr(struct rtattr *rtattr)
 {
 uint32_t *u32;
+struct rta_cacheinfo *ci;
+
 switch (rtattr->rta_type) {
 /* binary: depends on family type */
 case RTA_GATEWAY:
 case RTA_DST:
 case RTA_PREFSRC:
 break;
+/* u8 */
+case RTA_PREF:
+break;
 /* u32 */
 case RTA_PRIORITY:
 case RTA_TABLE:
@@ -2672,6 +2677,20 @@ static abi_long host_to_target_data_route_rtattr(struct 
rtattr *rtattr)
 u32 = RTA_DATA(rtattr);
 *u32 = tswap32(*u32);
 break;
+/* struct rta_cacheinfo */
+case RTA_CACHEINFO:
+ci = RTA_DATA(rtattr);
+ci->rta_clntref = tswap32(ci->rta_clntref);
+ci->rta_lastuse = tswap32(ci->rta_lastuse);
+ci->rta_expires = tswap32(ci->rta_expires);
+ci->rta_error = tswap32(ci->rta_error);
+ci->rta_used = tswap32(ci->rta_used);
+#if defined(RTNETLINK_HAVE_PEERINFO)
+ci->rta_id = tswap32(ci->rta_id);
+ci->rta_ts = tswap32(ci->rta_ts);
+ci->rta_tsage = tswap32(ci->rta_tsage);
+#endif
+break;
 default:
 gemu_log("Unknown host RTA type: %d\n", rtattr->rta_type);
 break;
-- 
2.17.1




[Qemu-devel] [PULL 2/6] linux-user: fix 32bit g2h()/h2g()

2018-08-19 Thread Laurent Vivier
sparc32plus has 64bit long type but only 32bit virtual address space.

For instance, "apt-get upgrade" failed because of a mmap()/msync()
sequence.

mmap() returned 0xff252000 but msync() used g2h(0xffffffffff252000)
to find the host address. The "(target_ulong)" in g2h() doesn't fix the
address because it is 64bit long.

This patch introduces an "abi_ptr" that is set to uint32_t
if the virtual address space is addressed using 32bit in the linux-user
case. It stays set to target_ulong with softmmu case.

Signed-off-by: Laurent Vivier 
Message-Id: <20180814171217.14680-1-laur...@vivier.eu>
Reviewed-by: Richard Henderson 
[lv: added "%" in TARGET_ABI_FMT_ptr "%"PRIx64]
---
 include/exec/cpu_ldst.h   | 23 ++-
 include/exec/cpu_ldst_useronly_template.h | 12 ++--
 linux-user/syscall.c  |  2 +-
 3 files changed, 25 insertions(+), 12 deletions(-)

diff --git a/include/exec/cpu_ldst.h b/include/exec/cpu_ldst.h
index 0f2cb717b1..41ed0526e2 100644
--- a/include/exec/cpu_ldst.h
+++ b/include/exec/cpu_ldst.h
@@ -48,8 +48,19 @@
 #define CPU_LDST_H
 
 #if defined(CONFIG_USER_ONLY)
+/* sparc32plus has 64bit long but 32bit space address
+ * this can make bad result with g2h() and h2g()
+ */
+#if TARGET_VIRT_ADDR_SPACE_BITS <= 32
+typedef uint32_t abi_ptr;
+#define TARGET_ABI_FMT_ptr "%x"
+#else
+typedef uint64_t abi_ptr;
+#define TARGET_ABI_FMT_ptr "%"PRIx64
+#endif
+
 /* All direct uses of g2h and h2g need to go away for usermode softmmu.  */
-#define g2h(x) ((void *)((unsigned long)(target_ulong)(x) + guest_base))
+#define g2h(x) ((void *)((unsigned long)(abi_ptr)(x) + guest_base))
 
 #define guest_addr_valid(x) ((x) <= GUEST_ADDR_MAX)
 #define h2g_valid(x) guest_addr_valid((unsigned long)(x) - guest_base)
@@ -61,7 +72,7 @@ static inline int guest_range_valid(unsigned long start, 
unsigned long len)
 
 #define h2g_nocheck(x) ({ \
 unsigned long __ret = (unsigned long)(x) - guest_base; \
-(abi_ulong)__ret; \
+(abi_ptr)__ret; \
 })
 
 #define h2g(x) ({ \
@@ -69,7 +80,9 @@ static inline int guest_range_valid(unsigned long start, 
unsigned long len)
 assert(h2g_valid(x)); \
 h2g_nocheck(x); \
 })
-
+#else
+typedef target_ulong abi_ptr;
+#define TARGET_ABI_FMT_ptr TARGET_ABI_FMT_lx
 #endif
 
 #if defined(CONFIG_USER_ONLY)
@@ -397,7 +410,7 @@ extern __thread uintptr_t helper_retaddr;
  * This is the equivalent of the initial fast-path code used by
  * TCG backends for guest load and store accesses.
  */
-static inline void *tlb_vaddr_to_host(CPUArchState *env, target_ulong addr,
+static inline void *tlb_vaddr_to_host(CPUArchState *env, abi_ptr addr,
   int access_type, int mmu_idx)
 {
 #if defined(CONFIG_USER_ONLY)
@@ -405,7 +418,7 @@ static inline void *tlb_vaddr_to_host(CPUArchState *env, 
target_ulong addr,
 #else
 int index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
 CPUTLBEntry *tlbentry = &env->tlb_table[mmu_idx][index];
-target_ulong tlb_addr;
+abi_ptr tlb_addr;
 uintptr_t haddr;
 
 switch (access_type) {
diff --git a/include/exec/cpu_ldst_useronly_template.h 
b/include/exec/cpu_ldst_useronly_template.h
index e30e58ed4a..0fd6019af0 100644
--- a/include/exec/cpu_ldst_useronly_template.h
+++ b/include/exec/cpu_ldst_useronly_template.h
@@ -62,7 +62,7 @@
 #endif
 
 static inline RES_TYPE
-glue(glue(cpu_ld, USUFFIX), MEMSUFFIX)(CPUArchState *env, target_ulong ptr)
+glue(glue(cpu_ld, USUFFIX), MEMSUFFIX)(CPUArchState *env, abi_ptr ptr)
 {
 #if !defined(CODE_ACCESS)
 trace_guest_mem_before_exec(
@@ -74,7 +74,7 @@ glue(glue(cpu_ld, USUFFIX), MEMSUFFIX)(CPUArchState *env, 
target_ulong ptr)
 
 static inline RES_TYPE
 glue(glue(glue(cpu_ld, USUFFIX), MEMSUFFIX), _ra)(CPUArchState *env,
-  target_ulong ptr,
+  abi_ptr ptr,
   uintptr_t retaddr)
 {
 RES_TYPE ret;
@@ -86,7 +86,7 @@ glue(glue(glue(cpu_ld, USUFFIX), MEMSUFFIX), 
_ra)(CPUArchState *env,
 
 #if DATA_SIZE <= 2
 static inline int
-glue(glue(cpu_lds, SUFFIX), MEMSUFFIX)(CPUArchState *env, target_ulong ptr)
+glue(glue(cpu_lds, SUFFIX), MEMSUFFIX)(CPUArchState *env, abi_ptr ptr)
 {
 #if !defined(CODE_ACCESS)
 trace_guest_mem_before_exec(
@@ -98,7 +98,7 @@ glue(glue(cpu_lds, SUFFIX), MEMSUFFIX)(CPUArchState *env, 
target_ulong ptr)
 
 static inline int
 glue(glue(glue(cpu_lds, SUFFIX), MEMSUFFIX), _ra)(CPUArchState *env,
-  target_ulong ptr,
+  abi_ptr ptr,
   uintptr_t retaddr)
 {
 int ret;
@@ -111,7 +111,7 @@ glue(glue(glue(cpu_lds, SUFFIX), MEMSUFFIX), 
_ra)(CPUArchState *env,
 
 #ifndef CODE_ACCESS
 static inline void
-glue(glue(cpu_st, SUFFIX), MEMSUFFIX)(CPUArchState *env, target_ulong ptr,
+glue(glue(cpu_st, SUFFIX), MEM

Re: [Qemu-devel] [PATCH v2] sh4: fix use_icount with linux-user

2018-08-19 Thread Aurelien Jarno
On 2018-08-16 20:58, Laurent Vivier wrote:
> Le 11/08/2018 à 17:26, Richard Henderson a écrit :
> > On 08/11/2018 01:23 AM, Laurent Vivier wrote:
> >> This fixes java in a linux-user chroot:
> >>   $ java --version
> >>   qemu-sh4: .../accel/tcg/cpu-exec.c:634: cpu_loop_exec_tb: Assertion 
> >> `use_icount' failed.
> >>   qemu: uncaught target signal 6 (Aborted) - core dumped
> >>   Aborted (core dumped)
> >>
> >> In gen_conditional_jump() in the GUSA_EXCLUSIVE part, we must reset
> >> base.is_jmp to DISAS_NEXT after the gen_goto_tb() as it is done in
> >> gen_delayed_conditional_jump() after the gen_jump().
> >>
> >> Bug: https://bugs.launchpad.net/qemu/+bug/1768246
> >> Fixes: 4834871bc95b67343248100e2a75ae0d287bc08b
> >>("target/sh4: Convert to DisasJumpType")
> >> Reported-by: John Paul Adrian Glaubitz 
> >> Signed-off-by: Laurent Vivier 
> >> ---
> >>
> >> Notes:
> >> v2:
> >>   don't revert the part of the original patch,
> >>   but fixes the state problem in gen_conditional_jump()
> > 
> > Reviewed-by: Richard Henderson 

Reviewed-by: Aurelien Jarno 

> Aurélien,
> 
> do you agree if I push this patch through a linux-user pull request?

Yes, that's fine with me.

Thanks,
Aurelien

-- 
Aurelien Jarno  GPG: 4096R/1DDD8C9B
aurel...@aurel32.net http://www.aurel32.net



Re: [Qemu-devel] any suggestions for how to handle guests which expect to be executing out of icache?

2018-08-19 Thread Peter Maydell
On 19 August 2018 at 18:44, Richard Henderson
 wrote:
> On 08/19/2018 03:19 AM, Peter Maydell wrote:
>> Hi; I've been playing around this weekend with writing a QEMU
>> model for a music player I have (an XDuoo X3). This has a MIPS
>> SoC, and its boot process is that the SoC's boot rom loads the
>> guest binary into the CPU's icache and dcache (by playing tricks
>> with the cache tag bits so that it appears to be precached content
>> for a particular physaddr range). The guest binary then runs
>> purely out of cache, until it can initialise the real SDRAM and
>> relocate itself into that.
>>
>> Unfortunately this causes problems for QEMU, because the guest
>> binary expects that while it is running out of the icache at
>> addresses 0x80000000-0x80004000 it can happily write data to the
>> SDRAM at that address without overwriting its own code. Since
>> QEMU isn't modelling caches at all, the writes cause the guest
>> to corrupt its own code and it falls over.
>>
>> Does anybody have any suggestions for how we could model this
>> kind of thing?
>
> I assume there are different virtual addresses, or different physical windows
> by which SDRAM is written while the relevant cache lines are pinned?  If so, 
> it
> should be possible to create little ram segments that are mapped into the
> physical+virtual space somewhere with the cache pinning.

MIPS has the thing where vaddr 0x80000000..0x9fffffff is the
"unmapped cached" segment giving a view of the 0x00000000..0x1fffffff
physaddr space, and 0xa0000000..0xbfffffff is the "unmapped uncached"
segment view onto the same physaddrs. So it executes out of 0x80000000
and writes direct to the SDRAM (and to MMIO devices) via 0xa0000000.

I have a hack where the target/mips code maps in a RAM
memory region and then marks it enabled/disabled, and
also special-cases the virt-to-phys lookup so a lookup
of a vaddr at 0x80000000 resolves to a physaddr of
0x80000000 rather than 0 the way it usually would.
(It's hacky because I'm special-casing assuming the
whole 16K of cache is mapped this way or none is, and
also I've borrowed an IMPDEF 'cache' insn to signal
enabling the mapping and haven't yet figured out when
I should turn it off...but it's good enough to get me
past the bit of guest code that was a problem.)

thanks
-- PMM
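
A rough sketch of the board-level half of the hack Peter describes: a small RAM region overlaid at the locked-cache physaddr range, which the machine can switch off once the guest has relocated itself into real SDRAM. Everything here (function names, the 0x80000000 base, the 16 KiB size) is illustrative, and the target/mips virt-to-phys special-casing he mentions is not shown:

    #include "qemu/osdep.h"
    #include "qapi/error.h"
    #include "exec/memory.h"

    #define CACHE_WINDOW_BASE 0x80000000ULL   /* assumed physaddr of the window */
    #define CACHE_WINDOW_SIZE (16 * 1024)     /* assumed i+d cache footprint */

    static MemoryRegion cache_window;

    /* Map a tiny RAM region that stands in for "running out of locked cache". */
    static void xduoo_x3_map_cache_window(MemoryRegion *sysmem)
    {
        memory_region_init_ram(&cache_window, NULL, "cache-window",
                               CACHE_WINDOW_SIZE, &error_fatal);
        /* Higher priority so it shadows whatever else claims this range. */
        memory_region_add_subregion_overlap(sysmem, CACHE_WINDOW_BASE,
                                            &cache_window, 1);
    }

    /* Called once the guest signals it has moved itself into real SDRAM. */
    static void xduoo_x3_unmap_cache_window(void)
    {
        memory_region_set_enabled(&cache_window, false);
    }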



[Qemu-devel] [PATCH v2] target-i386: Fix lcall/ljmp to call gate in IA-32e mode

2018-08-19 Thread andrew
From: Andrew Oates 

Currently call gates are always treated as 32-bit gates.  In IA-32e mode
(either compatibility or 64-bit submode), system segment descriptors are
always 64-bit.  Treating them as 32-bit has the expected unfortunate
effect: only the lower 32 bits of the offset are loaded, the stack
pointer is truncated, a bad new stack pointer is loaded from the TSS (if
switching privilege levels), etc.

This change adds support for 64-bit call gate to the lcall and ljmp
instructions.  Additionally, there should be a check for non-canonical
stack pointers, but I've omitted that since there doesn't seem to be
checks for non-canonical addresses in this code elsewhere.

I've left the raise_exception_err_ra lines unwapped at 80 columns to
match the style in the rest of the file.

Signed-off-by: Andrew Oates 
---
v2: fix ljmp as well, and generate #GP if ljmp/lcall'ing to a task gate
or TSS segment.

 target/i386/seg_helper.c | 192 +++
 1 file changed, 152 insertions(+), 40 deletions(-)

diff --git a/target/i386/seg_helper.c b/target/i386/seg_helper.c
index 00301a0c04..b2adddcd7f 100644
--- a/target/i386/seg_helper.c
+++ b/target/i386/seg_helper.c
@@ -518,6 +518,11 @@ static void switch_tss(CPUX86State *env, int tss_selector,
 
 static inline unsigned int get_sp_mask(unsigned int e2)
 {
+#ifdef TARGET_X86_64
+if (e2 & DESC_L_MASK) {
+return 0;
+} else
+#endif
 if (e2 & DESC_B_MASK) {
return 0xffffffff;
 } else {
@@ -1640,6 +1645,14 @@ void helper_ljmp_protected(CPUX86State *env, int new_cs, target_ulong new_eip,
 rpl = new_cs & 3;
 cpl = env->hflags & HF_CPL_MASK;
 type = (e2 >> DESC_TYPE_SHIFT) & 0xf;
+
+#ifdef TARGET_X86_64
+if (env->efer & MSR_EFER_LMA) {
+if (type != 12) {
+raise_exception_err_ra(env, EXCP0D_GPF, new_cs & 0xfffc, GETPC());
+}
+}
+#endif
 switch (type) {
 case 1: /* 286 TSS */
 case 9: /* 386 TSS */
@@ -1662,6 +1675,23 @@ void helper_ljmp_protected(CPUX86State *env, int new_cs, target_ulong new_eip,
 if (type == 12) {
new_eip |= (e2 & 0xffff0000);
 }
+
+#ifdef TARGET_X86_64
+if (env->efer & MSR_EFER_LMA) {
+/* load the upper 8 bytes of the 64-bit call gate */
+if (load_segment_ra(env, &e1, &e2, new_cs + 8, GETPC())) {
+raise_exception_err_ra(env, EXCP0D_GPF, new_cs & 0xfffc,
+   GETPC());
+}
+type = (e2 >> DESC_TYPE_SHIFT) & 0x1f;
+if (type != 0) {
+raise_exception_err_ra(env, EXCP0D_GPF, new_cs & 0xfffc,
+   GETPC());
+}
+new_eip |= ((target_ulong)e1) << 32;
+}
+#endif
+
 if (load_segment_ra(env, &e1, &e2, gate_cs, GETPC()) != 0) {
raise_exception_err_ra(env, EXCP0D_GPF, gate_cs & 0xfffc, GETPC());
 }
@@ -1675,11 +1705,22 @@ void helper_ljmp_protected(CPUX86State *env, int new_cs, target_ulong new_eip,
 (!(e2 & DESC_C_MASK) && (dpl != cpl))) {
raise_exception_err_ra(env, EXCP0D_GPF, gate_cs & 0xfffc, GETPC());
 }
+#ifdef TARGET_X86_64
+if (env->efer & MSR_EFER_LMA) {
+if (!(e2 & DESC_L_MASK)) {
+raise_exception_err_ra(env, EXCP0D_GPF, gate_cs & 0xfffc, GETPC());
+}
+if (e2 & DESC_B_MASK) {
+raise_exception_err_ra(env, EXCP0D_GPF, gate_cs & 0xfffc, GETPC());
+}
+}
+#endif
 if (!(e2 & DESC_P_MASK)) {
raise_exception_err_ra(env, EXCP0D_GPF, gate_cs & 0xfffc, GETPC());
 }
 limit = get_seg_limit(e1, e2);
-if (new_eip > limit) {
+if (new_eip > limit &&
+(!(env->hflags & HF_LMA_MASK) || !(e2 & DESC_L_MASK))) {
 raise_exception_err_ra(env, EXCP0D_GPF, 0, GETPC());
 }
 cpu_x86_load_seg_cache(env, R_CS, (gate_cs & 0xfffc) | cpl,
@@ -1724,12 +1765,12 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip,
 int shift, target_ulong next_eip)
 {
 int new_stack, i;
-uint32_t e1, e2, cpl, dpl, rpl, selector, offset, param_count;
-uint32_t ss = 0, ss_e1 = 0, ss_e2 = 0, sp, type, ss_dpl, sp_mask;
+uint32_t e1, e2, cpl, dpl, rpl, selector, param_count;
+uint32_t ss = 0, ss_e1 = 0, ss_e2 = 0, type, ss_dpl, sp_mask;
 uint32_t val, limit, old_sp_mask;
-target_ulong ssp, old_ssp;
+target_ulong ssp, old_ssp, offset, sp;
 
-LOG_PCALL("lcall %04x:%08x s=%d\n", new_cs, (uint32_t)new_eip, shift);
+LOG_PCALL("lcall %04x:" TARGET_FMT_lx " s=%d\n", new_cs, new_eip, shift);
 LOG_PCALL_STATE(CPU(x86_env_get_
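
For readers unfamiliar with the descriptor layout the patch relies on: in IA-32e
mode a call gate is 16 bytes, and the 64-bit target offset is split across the
two 8-byte halves. A small illustrative sketch follows; the function and
parameter names are made up and are not part of the patch:

#include <stdint.h>

/* lo_w0/lo_w1: the two 32-bit words of the gate's low 8 bytes;
 * hi_w0: the first 32-bit word of its high 8 bytes. */
static uint64_t callgate64_offset(uint32_t lo_w0, uint32_t lo_w1, uint32_t hi_w0)
{
    uint64_t off;

    off  = lo_w0 & 0xffff;          /* offset[15:0]  */
    off |= lo_w1 & 0xffff0000;      /* offset[31:16] */
    off |= (uint64_t)hi_w0 << 32;   /* offset[63:32] */
    return off;
}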

Re: [Qemu-devel] any suggestions for how to handle guests which expect to be executing out of icache?

2018-08-19 Thread Richard Henderson
On 08/19/2018 03:19 AM, Peter Maydell wrote:
> Hi; I've been playing around this weekend with writing a QEMU
> model for a music player I have (an XDuoo X3). This has a MIPS
> SoC, and its boot process is that the SoC's boot rom loads the
> guest binary into the CPU's icache and dcache (by playing tricks
> with the cache tag bits so that it appears to be precached content
> for a particular physaddr range). The guest binary then runs
> purely out of cache, until it can initialise the real SDRAM and
> relocate itself into that.
> 
> Unfortunately this causes problems for QEMU, because the guest
> binary expects that while it is running out of the icache at
> addresses 0x80000000-0x80004000 it can happily write data to the
> SDRAM at that address without overwriting its own code. Since
> QEMU isn't modelling caches at all, the writes cause the guest
> to corrupt its own code and it falls over.
> 
> Does anybody have any suggestions for how we could model this
> kind of thing?

I assume there are different virtual addresses, or different physical windows
by which SDRAM is written while the relevant cache lines are pinned?  If so, it
should be possible to create little ram segments that are mapped into the
physical+virtual space somewhere with the cache pinning.


r~



[Qemu-devel] Mainstone II

2018-08-19 Thread Макс Волошин
Hello, could you tell me how to run Linux on the QEMU Mainstone II machine? I
took the kernel from here:
http://ftp.arm.linux.org.uk/pub/armlinux/people/xscale/mainstone/, but I do
not know what to do with the rootfs (it is jffs2). How do I add it into a memory
bank so that it is visible to Linux inside QEMU? For now I get stuck on the SCSI
driver (2.4) or the AC97 codecs (2.6). P.S. Do you know which Linux distros can
be run on Mainstone?


Re: [Qemu-devel] [PATCH 0/2] Improve qemu-img dd

2018-08-19 Thread Eric Blake

On 08/14/2018 09:56 PM, Eric Blake wrote:

I was trying to test NBD fleecing by copying subsets of one
file to another, and had the idea to use:

$ export NBD drive to be fleeced on port 10809
$ qemu-img create -f qcow2 copy $size
$ qemu-nbd -f qcow2 -p 10810 copy
$ qemu-img dd -f raw -O raw if=nbd://localhost:10809 of=nbd://localhost:10810 \
 skip=$offset seek=$offset count=$((len/cluster)) bs=$cluster

except that seek= wasn't implemented. And in implementing that,
I learned that skip= is broken when combined with count=.

[In the meantime, I had to use:

$ export NBD drive to be fleeced on port 10809
$ modprobe nbd
$ qemu-nbd -c /dev/nbd0 -f raw nbd://localhost:10809
$ qemu-nbd -c /dev/nbd1 -f qcow2 copy
$ dd if=/dev/nbd0 of/dev/nbd1 \


Oops, left out one = on that line.


 skip=$offset seek=$offset count=$((len/cluster)) bs=$cluster


And this needs to be skip=$((offset/cluster)) seek=$((offset/cluster))
instead, unless iflag=skip_bytes oflag=seek_bytes is also in use.


What's more, it's essential to use conv=fdatasync when scripting this;
otherwise, the dd process can end while the data is still in buffers,
and if 'qemu-nbd -d /dev/nbd1' follows too closely, those buffers are
lost instead of flushed.  (I lost the better part of a day figuring out
why things worked when I did it by hand but not when I scripted it,
until finally figuring out that the final flush is mandatory to avoid
data loss.)




to get the behavior I needed (basically, create an empty qcow2
destination file, then plug in the guest-visible data based on
the subsets of the disk of my choosing, by reading the block
status/dirty bitmap over NBD).  But bouncing through three
NBD client/server pairs just so I can use plain 'dd' instead
of just two pairs with 'qemu-img dd' feels dirty.
]

Eric Blake (2):
   qemu-img: Fix dd with skip= and count=
   qemu-img: Add dd seek= option

  qemu-img.c |  76 ++
  tests/qemu-iotests/160 |  15 +-
  tests/qemu-iotests/160.out | 344 -
  3 files changed, 397 insertions(+), 38 deletions(-)



--
Eric Blake, Principal Software Engineer
Red Hat, Inc.   +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



Re: [Qemu-devel] Simulating a composite machine

2018-08-19 Thread Peter Maydell
On 19 August 2018 at 13:54, Martin Schroeder via Qemu-devel
 wrote:
> Is it possible to instantiate multiple CPUs of different architectures
> and simulate them with different images at the same time? Some examples
> include ARM SoCs with an M3/M4 coprocessor core but also boards with
> multiple processors where it is desirable to connect the chips over
> for example virtual SPI or UART and then simulate the composite system
> as a single machine where each of the cores runs a separate firmware.

Not currently, no. There's some out of tree stuff various people
have done involving connecting up separate QEMU processes.

> Is something like this easy to implement given current processor
> objects or does this require substantial changes to how qemu works?
> One area I do not fully understand is the native code generator and
> whether it would be able to cope with two cores of *different*
> architectures at the same time.

At the moment some bits of our core code assume all the CPUs
in the system are basically identical (shared code cache, etc).
I'm planning to do some work to fix the simpler parts of this,
so you can have two different CPUs of the same architecture
in a system (eg a Cortex-M4 with an FPU plus one without an FPU,
or an M3 and an A-class core). Multiple completely different
architectures (eg Microblaze + ARM, or ARM + PPC) is rather
harder, as at the moment we build entirely separate
qemu-system-* binaries for each architecture and there are
some compile-time assumptions made. I'd like to see us work
towards making that possible, but there's potentially quite
a bit of effort required.

thanks
-- PMM



[Qemu-devel] Simulating a composite machine

2018-08-19 Thread Martin Schroeder via Qemu-devel
Is it possible to instantiate multiple CPUs of different architectures
and simulate them with different images at the same time? Some examples
include ARM SoCs with an M3/M4 coprocessor core but also boards with
multiple processors where it is desirable to connect the chips over
for example virtual SPI or UART and then simulate the composite system
as a single machine where each of the cores runs a separate firmware.

Is something like this easy to implement given current processor
objects or does this require substantial changes to how qemu works?
One area I do not fully understand is the native code generator and
whether it would be able to cope with two cores of *different*
architectures at the same time.



[Qemu-devel] any suggestions for how to handle guests which expect to be executing out of icache?

2018-08-19 Thread Peter Maydell
Hi; I've been playing around this weekend with writing a QEMU
model for a music player I have (an XDuoo X3). This has a MIPS
SoC, and its boot process is that the SoC's boot rom loads the
guest binary into the CPU's icache and dcache (by playing tricks
with the cache tag bits so that it appears to be precached content
for a particular physaddr range). The guest binary then runs
purely out of cache, until it can initialise the real SDRAM and
relocate itself into that.

Unfortunately this causes problems for QEMU, because the guest
binary expects that while it is running out of the icache at
addresses 0x80000000-0x80004000 it can happily write data to the
SDRAM at that address without overwriting its own code. Since
QEMU isn't modelling caches at all, the writes cause the guest
to corrupt its own code and it falls over.

Does anybody have any suggestions for how we could model this
kind of thing?

thanks
-- PMM



[Qemu-devel] [PATCH v2 00/11] convert CPU list to RCU

2018-08-19 Thread Emilio G. Cota
v1: https://lists.gnu.org/archive/html/qemu-devel/2018-08/msg02179.html

Changes since v1:

- Rebase on master
- Add David's Acked-by tag to the spapr patch
- Add 2 patches on QLIST_{EMPTY,REMOVE}_RCU
- Add some fixes for test-rcu-list. I wanted to be able to get no
  races with ThreadSanitizer, but it still warns about two races.
  I'm appending the report just in case, but I think tsan is getting
  confused.
- Add RCU QSIMPLEQ and RCU QTAILQ, piggy-backing their testing
  on test-rcu-list.
- Use an RCU QTAILQ instead of an RCU QLIST for the CPU list.
  - Drop the patch that added the CPUState.in_list field,
since with the QTAILQ it's not necessary.
- Convert a caller in target/s390x that I missed in v1.

You can fetch this series from:
  https://github.com/cota/qemu/tree/rcu-cpulist-v2

Thanks,

Emilio
---
The aforementioned TSan report:

$ make -j 12 tests/test-rcu-simpleq tests/test-rcu-list && tests/test-rcu-list 1 1
  CC  tests/test-rcu-simpleq.o
  CC  tests/test-rcu-list.o
  LINKtests/test-rcu-list
  LINKtests/test-rcu-simpleq
==
WARNING: ThreadSanitizer: data race (pid=15248)
  Atomic read of size 8 at 0x7b085600 by thread T2:
#0 __tsan_atomic64_load 
../../../../gcc-8.1.0/libsanitizer/tsan/tsan_interface_atomic.cc:538 
(libtsan.so.0+0x6080e)
#1 rcu_q_reader /data/src/qemu/tests/test-rcu-list.c:166 
(test-rcu-list+0x9294)
#2 qemu_thread_start /data/src/qemu/util/qemu-thread-posix.c:504 
(test-rcu-list+0x9af8)

  Previous write of size 8 at 0x7b085600 by thread T3:
#0 malloc ../../../../gcc-8.1.0/libsanitizer/tsan/tsan_interceptors.cc:606 
(libtsan.so.0+0x2a2f3)
#1 g_malloc  (libglib-2.0.so.0+0x51858)
#2 qemu_thread_start /data/src/qemu/util/qemu-thread-posix.c:504 
(test-rcu-list+0x9af8)

  As if synchronized via sleep:
#0 nanosleep 
../../../../gcc-8.1.0/libsanitizer/tsan/tsan_interceptors.cc:366 
(libtsan.so.0+0x48d20)
#1 g_usleep  (libglib-2.0.so.0+0x754de)
#2 qemu_thread_start /data/src/qemu/util/qemu-thread-posix.c:504 
(test-rcu-list+0x9af8)

  Location is heap block of size 32 at 0x7b085600 allocated by thread T3:
#0 malloc ../../../../gcc-8.1.0/libsanitizer/tsan/tsan_interceptors.cc:606 
(libtsan.so.0+0x2a2f3)
#1 g_malloc  (libglib-2.0.so.0+0x51858)
#2 qemu_thread_start /data/src/qemu/util/qemu-thread-posix.c:504 
(test-rcu-list+0x9af8)

  Thread T2 (tid=15251, running) created by main thread at:
#0 pthread_create 
../../../../gcc-8.1.0/libsanitizer/tsan/tsan_interceptors.cc:915 
(libtsan.so.0+0x2af6b)
#1 qemu_thread_create /data/src/qemu/util/qemu-thread-posix.c:534 
(test-rcu-list+0xadc8)
#2 create_thread /data/src/qemu/tests/test-rcu-list.c:70 
(test-rcu-list+0x944f)
#3 rcu_qtest /data/src/qemu/tests/test-rcu-list.c:278 (test-rcu-list+0x95ea)
#4 main /data/src/qemu/tests/test-rcu-list.c:357 (test-rcu-list+0x893f)

  Thread T3 (tid=15252, running) created by main thread at:
#0 pthread_create 
../../../../gcc-8.1.0/libsanitizer/tsan/tsan_interceptors.cc:915 
(libtsan.so.0+0x2af6b)
#1 qemu_thread_create /data/src/qemu/util/qemu-thread-posix.c:534 
(test-rcu-list+0xadc8)
#2 create_thread /data/src/qemu/tests/test-rcu-list.c:70 
(test-rcu-list+0x944f)
#3 rcu_qtest /data/src/qemu/tests/test-rcu-list.c:280 (test-rcu-list+0x9606)
#4 main /data/src/qemu/tests/test-rcu-list.c:357 (test-rcu-list+0x893f)

SUMMARY: ThreadSanitizer: data race /data/src/qemu/tests/test-rcu-list.c:166 in 
rcu_q_reader
==
==
WARNING: ThreadSanitizer: data race (pid=15248)
  Write of size 8 at 0x7b08e880 by thread T1:
#0 free ../../../../gcc-8.1.0/libsanitizer/tsan/tsan_interceptors.cc:649 
(libtsan.so.0+0x2a5ba)
#1 reclaim_list_el /data/src/qemu/tests/test-rcu-list.c:105 
(test-rcu-list+0x8e66)
#2 call_rcu_thread /data/src/qemu/util/rcu.c:284 (test-rcu-list+0xbb57)
#3 qemu_thread_start /data/src/qemu/util/qemu-thread-posix.c:504 
(test-rcu-list+0x9af8)

  Previous atomic read of size 8 at 0x7b08e880 by thread T2:
#0 __tsan_atomic64_load 
../../../../gcc-8.1.0/libsanitizer/tsan/tsan_interface_atomic.cc:538 
(libtsan.so.0+0x6080e)
#1 rcu_q_reader /data/src/qemu/tests/test-rcu-list.c:166 
(test-rcu-list+0x9294)
#2 qemu_thread_start /data/src/qemu/util/qemu-thread-posix.c:504 
(test-rcu-list+0x9af8)

  Thread T1 (tid=15250, running) created by main thread at:
#0 pthread_create 
../../../../gcc-8.1.0/libsanitizer/tsan/tsan_interceptors.cc:915 
(libtsan.so.0+0x2af6b)
#1 qemu_thread_create /data/src/qemu/util/qemu-thread-posix.c:534 
(test-rcu-list+0xadc8)
#2 rcu_init_complete /data/src/qemu/util/rcu.c:327 (test-rcu-list+0xb9f2)
#3 rcu_init /data/src/qemu/util/rcu.c:383 (test-rcu-list+0x89fc)
#4 __libc_csu_init  (test-rcu-list+0x35f9c)

  Thread T2 (tid=15251, running) created by main thread at:
#0 pthread_create 
../../../../gcc-8.1.0/libsanitizer/tsan/tsan_interceptors.cc:915 
(libtsan

[Qemu-devel] [PATCH v2 09/11] tests: add test-rcu-tailq

2018-08-19 Thread Emilio G. Cota
Signed-off-by: Emilio G. Cota 
---
 tests/test-rcu-list.c  | 15 +++
 tests/test-rcu-tailq.c |  2 ++
 tests/Makefile.include |  4 
 3 files changed, 21 insertions(+)
 create mode 100644 tests/test-rcu-tailq.c

diff --git a/tests/test-rcu-list.c b/tests/test-rcu-list.c
index c8bdf49743..8434700746 100644
--- a/tests/test-rcu-list.c
+++ b/tests/test-rcu-list.c
@@ -91,6 +91,8 @@ struct list_element {
 QLIST_ENTRY(list_element) entry;
 #elif TEST_LIST_TYPE == 2
 QSIMPLEQ_ENTRY(list_element) entry;
+#elif TEST_LIST_TYPE == 3
+QTAILQ_ENTRY(list_element) entry;
 #else
 #error Invalid TEST_LIST_TYPE
 #endif
@@ -129,6 +131,19 @@ static QSIMPLEQ_HEAD(, list_element) Q_list_head =
 #define TEST_LIST_INSERT_HEAD_RCU   QSIMPLEQ_INSERT_HEAD_RCU
 #define TEST_LIST_FOREACH_RCU   QSIMPLEQ_FOREACH_RCU
 #define TEST_LIST_FOREACH_SAFE_RCU  QSIMPLEQ_FOREACH_SAFE_RCU
+
+#elif TEST_LIST_TYPE == 3
+static QTAILQ_HEAD(, list_element) Q_list_head;
+
+#define TEST_NAME "qtailq"
+#define TEST_LIST_REMOVE_RCU(el, f) QTAILQ_REMOVE_RCU(&Q_list_head, el, f)
+
+#define TEST_LIST_INSERT_AFTER_RCU(list_el, el, f)   \
+   QTAILQ_INSERT_AFTER_RCU(&Q_list_head, list_el, el, f)
+
+#define TEST_LIST_INSERT_HEAD_RCU   QTAILQ_INSERT_HEAD_RCU
+#define TEST_LIST_FOREACH_RCU   QTAILQ_FOREACH_RCU
+#define TEST_LIST_FOREACH_SAFE_RCU  QTAILQ_FOREACH_SAFE_RCU
 #else
 #error Invalid TEST_LIST_TYPE
 #endif
diff --git a/tests/test-rcu-tailq.c b/tests/test-rcu-tailq.c
new file mode 100644
index 00..8d487e0ee0
--- /dev/null
+++ b/tests/test-rcu-tailq.c
@@ -0,0 +1,2 @@
+#define TEST_LIST_TYPE 3
+#include "test-rcu-list.c"
diff --git a/tests/Makefile.include b/tests/Makefile.include
index 997c27421a..5fe32fcfd0 100644
--- a/tests/Makefile.include
+++ b/tests/Makefile.include
@@ -118,6 +118,8 @@ check-unit-y += tests/test-rcu-list$(EXESUF)
 gcov-files-test-rcu-list-y = util/rcu.c
 check-unit-y += tests/test-rcu-simpleq$(EXESUF)
 gcov-files-test-rcu-simpleq-y = util/rcu.c
+check-unit-y += tests/test-rcu-tailq$(EXESUF)
+gcov-files-test-rcu-tailq-y = util/rcu.c
 check-unit-y += tests/test-qdist$(EXESUF)
 gcov-files-test-qdist-y = util/qdist.c
 check-unit-y += tests/test-qht$(EXESUF)
@@ -603,6 +605,7 @@ test-obj-y = tests/check-qnum.o tests/check-qstring.o tests/check-qdict.o \
tests/test-opts-visitor.o tests/test-qmp-event.o \
tests/rcutorture.o tests/test-rcu-list.o \
tests/test-rcu-simpleq.o \
+   tests/test-rcu-tailq.o \
tests/test-qdist.o tests/test-shift128.o \
tests/test-qht.o tests/qht-bench.o tests/test-qht-par.o \
tests/atomic_add-bench.o
@@ -653,6 +656,7 @@ tests/test-int128$(EXESUF): tests/test-int128.o
 tests/rcutorture$(EXESUF): tests/rcutorture.o $(test-util-obj-y)
 tests/test-rcu-list$(EXESUF): tests/test-rcu-list.o $(test-util-obj-y)
 tests/test-rcu-simpleq$(EXESUF): tests/test-rcu-simpleq.o $(test-util-obj-y)
+tests/test-rcu-tailq$(EXESUF): tests/test-rcu-tailq.o $(test-util-obj-y)
 tests/test-qdist$(EXESUF): tests/test-qdist.o $(test-util-obj-y)
 tests/test-qht$(EXESUF): tests/test-qht.o $(test-util-obj-y)
tests/test-qht-par$(EXESUF): tests/test-qht-par.o tests/qht-bench$(EXESUF) $(test-util-obj-y)
-- 
2.17.1




[Qemu-devel] [PATCH v2 04/11] rcu_queue: add RCU QTAILQ

2018-08-19 Thread Emilio G. Cota
Signed-off-by: Emilio G. Cota 
---
 include/qemu/rcu_queue.h | 66 
 1 file changed, 66 insertions(+)

diff --git a/include/qemu/rcu_queue.h b/include/qemu/rcu_queue.h
index e0395c989a..904b3372dc 100644
--- a/include/qemu/rcu_queue.h
+++ b/include/qemu/rcu_queue.h
@@ -193,6 +193,72 @@ extern "C" {
  (var) && ((next) = atomic_rcu_read(&(var)->field.sqe_next), 1); \
  (var) = (next))
 
+/*
+ * RCU tail queue
+ */
+
+/* Tail queue access methods */
+#define QTAILQ_EMPTY_RCU(head)  (atomic_read(&(head)->tqh_first) == NULL)
+#define QTAILQ_FIRST_RCU(head)   atomic_rcu_read(&(head)->tqh_first)
+#define QTAILQ_NEXT_RCU(elm, field)  atomic_rcu_read(&(elm)->field.tqe_next)
+
+/* Tail queue functions */
+#define QTAILQ_INSERT_HEAD_RCU(head, elm, field) do {   \
+(elm)->field.tqe_next = (head)->tqh_first;  \
+if ((elm)->field.tqe_next != NULL) {\
+(head)->tqh_first->field.tqe_prev = &(elm)->field.tqe_next; \
+} else {\
+(head)->tqh_last = &(elm)->field.tqe_next;  \
+}   \
+atomic_rcu_set(&(head)->tqh_first, (elm));  \
+(elm)->field.tqe_prev = &(head)->tqh_first; \
+} while (/*CONSTCOND*/0)
+
+#define QTAILQ_INSERT_TAIL_RCU(head, elm, field) do {   \
+(elm)->field.tqe_next = NULL;   \
+(elm)->field.tqe_prev = (head)->tqh_last;   \
+atomic_rcu_set((head)->tqh_last, (elm));\
+(head)->tqh_last = &(elm)->field.tqe_next;  \
+} while (/*CONSTCOND*/0)
+
+#define QTAILQ_INSERT_AFTER_RCU(head, listelm, elm, field) do { \
+(elm)->field.tqe_next = (listelm)->field.tqe_next;  \
+if ((elm)->field.tqe_next != NULL) {\
+(elm)->field.tqe_next->field.tqe_prev = &(elm)->field.tqe_next; \
+} else {\
+(head)->tqh_last = &(elm)->field.tqe_next;  \
+}   \
+atomic_rcu_set(&(listelm)->field.tqe_next, (elm));  \
+(elm)->field.tqe_prev = &(listelm)->field.tqe_next; \
+} while (/*CONSTCOND*/0)
+
+#define QTAILQ_INSERT_BEFORE_RCU(listelm, elm, field) do {  \
+(elm)->field.tqe_prev = (listelm)->field.tqe_prev;  \
+(elm)->field.tqe_next = (listelm);  \
+atomic_rcu_set((listelm)->field.tqe_prev, (elm));   \
+(listelm)->field.tqe_prev = &(elm)->field.tqe_next; \
+} while (/*CONSTCOND*/0)
+
+#define QTAILQ_REMOVE_RCU(head, elm, field) do {\
+if (((elm)->field.tqe_next) != NULL) {  \
+(elm)->field.tqe_next->field.tqe_prev = (elm)->field.tqe_prev;  \
+} else {\
+(head)->tqh_last = (elm)->field.tqe_prev;   \
+}   \
+atomic_set((elm)->field.tqe_prev, (elm)->field.tqe_next);   \
+(elm)->field.tqe_prev = NULL;   \
+} while (/*CONSTCOND*/0)
+
+#define QTAILQ_FOREACH_RCU(var, head, field)\
+for ((var) = atomic_rcu_read(&(head)->tqh_first);   \
+ (var); \
+ (var) = atomic_rcu_read(&(var)->field.tqe_next))
+
+#define QTAILQ_FOREACH_SAFE_RCU(var, head, field, next)  \
+for ((var) = atomic_rcu_read(&(head)->tqh_first);\
+ (var) && ((next) = atomic_rcu_read(&(var)->field.tqe_next), 1); \
+ (var) = (next))
+
 #ifdef __cplusplus
 }
 #endif
-- 
2.17.1
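
A minimal usage sketch for these macros, assuming the usual QEMU RCU
conventions (writers serialize with a mutex, readers traverse inside an RCU
read-side critical section, and freeing is deferred elsewhere via call_rcu);
the Node type and list names are invented for illustration:

#include "qemu/osdep.h"
#include "qemu/rcu.h"
#include "qemu/rcu_queue.h"
#include "qemu/thread.h"

typedef struct Node {
    int value;
    QTAILQ_ENTRY(Node) link;
} Node;

static QTAILQ_HEAD(, Node) node_list = QTAILQ_HEAD_INITIALIZER(node_list);
static QemuMutex node_list_lock;    /* qemu_mutex_init() once at startup */

static void node_add(Node *n)
{
    qemu_mutex_lock(&node_list_lock);       /* writers exclude each other */
    QTAILQ_INSERT_TAIL_RCU(&node_list, n, link);
    qemu_mutex_unlock(&node_list_lock);
}

static int node_sum(void)
{
    Node *n;
    int sum = 0;

    rcu_read_lock();                        /* readers never block writers */
    QTAILQ_FOREACH_RCU(n, &node_list, link) {
        sum += n->value;
    }
    rcu_read_unlock();
    return sum;
}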




[Qemu-devel] [PATCH v2 08/11] tests: add test-list-simpleq

2018-08-19 Thread Emilio G. Cota
Signed-off-by: Emilio G. Cota 
---
 tests/test-rcu-list.c| 17 +
 tests/test-rcu-simpleq.c |  2 ++
 tests/Makefile.include   |  4 
 3 files changed, 23 insertions(+)
 create mode 100644 tests/test-rcu-simpleq.c

diff --git a/tests/test-rcu-list.c b/tests/test-rcu-list.c
index 9bd11367a0..c8bdf49743 100644
--- a/tests/test-rcu-list.c
+++ b/tests/test-rcu-list.c
@@ -89,6 +89,8 @@ static void wait_all_threads(void)
 struct list_element {
 #if TEST_LIST_TYPE == 1
 QLIST_ENTRY(list_element) entry;
+#elif TEST_LIST_TYPE == 2
+QSIMPLEQ_ENTRY(list_element) entry;
 #else
 #error Invalid TEST_LIST_TYPE
 #endif
@@ -112,6 +114,21 @@ static QLIST_HEAD(q_list_head, list_element) Q_list_head;
 #define TEST_LIST_INSERT_HEAD_RCU   QLIST_INSERT_HEAD_RCU
 #define TEST_LIST_FOREACH_RCU   QLIST_FOREACH_RCU
 #define TEST_LIST_FOREACH_SAFE_RCU  QLIST_FOREACH_SAFE_RCU
+
+#elif TEST_LIST_TYPE == 2
+static QSIMPLEQ_HEAD(, list_element) Q_list_head =
+QSIMPLEQ_HEAD_INITIALIZER(Q_list_head);
+
+#define TEST_NAME "qsimpleq"
+#define TEST_LIST_REMOVE_RCU(el, f) \
+ QSIMPLEQ_REMOVE_RCU(&Q_list_head, el, list_element, f)
+
+#define TEST_LIST_INSERT_AFTER_RCU(list_el, el, f)   \
+ QSIMPLEQ_INSERT_AFTER_RCU(&Q_list_head, list_el, el, f)
+
+#define TEST_LIST_INSERT_HEAD_RCU   QSIMPLEQ_INSERT_HEAD_RCU
+#define TEST_LIST_FOREACH_RCU   QSIMPLEQ_FOREACH_RCU
+#define TEST_LIST_FOREACH_SAFE_RCU  QSIMPLEQ_FOREACH_SAFE_RCU
 #else
 #error Invalid TEST_LIST_TYPE
 #endif
diff --git a/tests/test-rcu-simpleq.c b/tests/test-rcu-simpleq.c
new file mode 100644
index 00..057f7d33f7
--- /dev/null
+++ b/tests/test-rcu-simpleq.c
@@ -0,0 +1,2 @@
+#define TEST_LIST_TYPE 2
+#include "test-rcu-list.c"
diff --git a/tests/Makefile.include b/tests/Makefile.include
index 760a0f18b6..997c27421a 100644
--- a/tests/Makefile.include
+++ b/tests/Makefile.include
@@ -116,6 +116,8 @@ check-unit-y += tests/rcutorture$(EXESUF)
 gcov-files-rcutorture-y = util/rcu.c
 check-unit-y += tests/test-rcu-list$(EXESUF)
 gcov-files-test-rcu-list-y = util/rcu.c
+check-unit-y += tests/test-rcu-simpleq$(EXESUF)
+gcov-files-test-rcu-simpleq-y = util/rcu.c
 check-unit-y += tests/test-qdist$(EXESUF)
 gcov-files-test-qdist-y = util/qdist.c
 check-unit-y += tests/test-qht$(EXESUF)
@@ -600,6 +602,7 @@ test-obj-y = tests/check-qnum.o tests/check-qstring.o tests/check-qdict.o \
tests/test-x86-cpuid.o tests/test-mul64.o tests/test-int128.o \
tests/test-opts-visitor.o tests/test-qmp-event.o \
tests/rcutorture.o tests/test-rcu-list.o \
+   tests/test-rcu-simpleq.o \
tests/test-qdist.o tests/test-shift128.o \
tests/test-qht.o tests/qht-bench.o tests/test-qht-par.o \
tests/atomic_add-bench.o
@@ -649,6 +652,7 @@ tests/test-cutils$(EXESUF): tests/test-cutils.o util/cutils.o $(test-util-obj-y)
 tests/test-int128$(EXESUF): tests/test-int128.o
 tests/rcutorture$(EXESUF): tests/rcutorture.o $(test-util-obj-y)
 tests/test-rcu-list$(EXESUF): tests/test-rcu-list.o $(test-util-obj-y)
+tests/test-rcu-simpleq$(EXESUF): tests/test-rcu-simpleq.o $(test-util-obj-y)
 tests/test-qdist$(EXESUF): tests/test-qdist.o $(test-util-obj-y)
 tests/test-qht$(EXESUF): tests/test-qht.o $(test-util-obj-y)
tests/test-qht-par$(EXESUF): tests/test-qht-par.o tests/qht-bench$(EXESUF) $(test-util-obj-y)
-- 
2.17.1




[Qemu-devel] [PATCH v2 07/11] test-rcu-list: abstract the list implementation

2018-08-19 Thread Emilio G. Cota
So that we can test other implementations.

Signed-off-by: Emilio G. Cota 
---
 tests/test-rcu-list.c | 42 ++
 1 file changed, 30 insertions(+), 12 deletions(-)

diff --git a/tests/test-rcu-list.c b/tests/test-rcu-list.c
index dc58091996..9bd11367a0 100644
--- a/tests/test-rcu-list.c
+++ b/tests/test-rcu-list.c
@@ -82,9 +82,16 @@ static void wait_all_threads(void)
 n_threads = 0;
 }
 
+#ifndef TEST_LIST_TYPE
+#define TEST_LIST_TYPE 1
+#endif
 
 struct list_element {
+#if TEST_LIST_TYPE == 1
 QLIST_ENTRY(list_element) entry;
+#else
+#error Invalid TEST_LIST_TYPE
+#endif
 struct rcu_head rcu;
 };
 
@@ -96,8 +103,19 @@ static void reclaim_list_el(struct rcu_head *prcu)
 atomic_set(&n_reclaims, n_reclaims + 1);
 }
 
+#if TEST_LIST_TYPE == 1
 static QLIST_HEAD(q_list_head, list_element) Q_list_head;
 
+#define TEST_NAME "qlist"
+#define TEST_LIST_REMOVE_RCUQLIST_REMOVE_RCU
+#define TEST_LIST_INSERT_AFTER_RCU  QLIST_INSERT_AFTER_RCU
+#define TEST_LIST_INSERT_HEAD_RCU   QLIST_INSERT_HEAD_RCU
+#define TEST_LIST_FOREACH_RCU   QLIST_FOREACH_RCU
+#define TEST_LIST_FOREACH_SAFE_RCU  QLIST_FOREACH_SAFE_RCU
+#else
+#error Invalid TEST_LIST_TYPE
+#endif
+
 static void *rcu_q_reader(void *arg)
 {
 long long n_reads_local = 0;
@@ -113,7 +131,7 @@ static void *rcu_q_reader(void *arg)
 
 while (atomic_read(&goflag) == GOFLAG_RUN) {
 rcu_read_lock();
-QLIST_FOREACH_RCU(el, &Q_list_head, entry) {
+TEST_LIST_FOREACH_RCU(el, &Q_list_head, entry) {
 n_reads_local++;
 if (atomic_read(&goflag) == GOFLAG_STOP) {
 break;
@@ -150,10 +168,10 @@ static void *rcu_q_updater(void *arg)
 target_el = select_random_el(RCU_Q_LEN);
 j = 0;
 /* FOREACH_RCU could work here but let's use both macros */
-QLIST_FOREACH_SAFE_RCU(prev_el, &Q_list_head, entry, el) {
+TEST_LIST_FOREACH_SAFE_RCU(prev_el, &Q_list_head, entry, el) {
 j++;
 if (target_el == j) {
-QLIST_REMOVE_RCU(prev_el, entry);
+TEST_LIST_REMOVE_RCU(prev_el, entry);
 /* may be more than one updater in the future */
 call_rcu1(&prev_el->rcu, reclaim_list_el);
 n_removed_local++;
@@ -165,12 +183,12 @@ static void *rcu_q_updater(void *arg)
 }
 target_el = select_random_el(RCU_Q_LEN);
 j = 0;
-QLIST_FOREACH_RCU(el, &Q_list_head, entry) {
+TEST_LIST_FOREACH_RCU(el, &Q_list_head, entry) {
 j++;
 if (target_el == j) {
-prev_el = g_new(struct list_element, 1);
+struct list_element *new_el = g_new(struct list_element, 1);
 n_nodes += n_nodes_local;
-QLIST_INSERT_BEFORE_RCU(el, prev_el, entry);
+TEST_LIST_INSERT_AFTER_RCU(el, new_el, entry);
 break;
 }
 }
@@ -195,7 +213,7 @@ static void rcu_qtest_init(void)
 srand(time(0));
 for (i = 0; i < RCU_Q_LEN; i++) {
 new_el = g_new(struct list_element, 1);
-QLIST_INSERT_HEAD_RCU(&Q_list_head, new_el, entry);
+TEST_LIST_INSERT_HEAD_RCU(&Q_list_head, new_el, entry);
 }
 qemu_mutex_lock(&counts_mutex);
 n_nodes += RCU_Q_LEN;
@@ -230,8 +248,8 @@ static void rcu_qtest(const char *test, int duration, int nreaders)
 create_thread(rcu_q_updater);
 rcu_qtest_run(duration, nreaders);
 
-QLIST_FOREACH_SAFE_RCU(prev_el, &Q_list_head, entry, el) {
-QLIST_REMOVE_RCU(prev_el, entry);
+TEST_LIST_FOREACH_SAFE_RCU(prev_el, &Q_list_head, entry, el) {
+TEST_LIST_REMOVE_RCU(prev_el, entry);
 call_rcu1(&prev_el->rcu, reclaim_list_el);
 n_removed_local++;
 }
@@ -292,9 +310,9 @@ int main(int argc, char *argv[])
 } else {
 gtest_seconds = 20;
 }
-g_test_add_func("/rcu/qlist/single-threaded", gtest_rcuq_one);
-g_test_add_func("/rcu/qlist/short-few", gtest_rcuq_few);
-g_test_add_func("/rcu/qlist/long-many", gtest_rcuq_many);
+g_test_add_func("/rcu/"TEST_NAME"/single-threaded", 
gtest_rcuq_one);
+g_test_add_func("/rcu/"TEST_NAME"/short-few", gtest_rcuq_few);
+g_test_add_func("/rcu/"TEST_NAME"/long-many", gtest_rcuq_many);
 g_test_in_charge = 1;
 return g_test_run();
 }
-- 
2.17.1




[Qemu-devel] [PATCH v2 05/11] test-rcu-list: access goflag with atomics

2018-08-19 Thread Emilio G. Cota
Instead of declaring it volatile.

Signed-off-by: Emilio G. Cota 
---
 tests/test-rcu-list.c | 18 +-
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/tests/test-rcu-list.c b/tests/test-rcu-list.c
index 1514d7ec97..b4ed130081 100644
--- a/tests/test-rcu-list.c
+++ b/tests/test-rcu-list.c
@@ -44,7 +44,7 @@ static int nthreadsrunning;
 #define GOFLAG_RUN  1
 #define GOFLAG_STOP 2
 
-static volatile int goflag = GOFLAG_INIT;
+static int goflag = GOFLAG_INIT;
 
 #define RCU_READ_RUN 1000
 #define RCU_UPDATE_RUN 10
@@ -107,15 +107,15 @@ static void *rcu_q_reader(void *arg)
 
 *(struct rcu_reader_data **)arg = &rcu_reader;
 atomic_inc(&nthreadsrunning);
-while (goflag == GOFLAG_INIT) {
+while (atomic_read(&goflag) == GOFLAG_INIT) {
 g_usleep(1000);
 }
 
-while (goflag == GOFLAG_RUN) {
+while (atomic_read(&goflag) == GOFLAG_RUN) {
 rcu_read_lock();
 QLIST_FOREACH_RCU(el, &Q_list_head, entry) {
 n_reads_local++;
-if (goflag == GOFLAG_STOP) {
+if (atomic_read(&goflag) == GOFLAG_STOP) {
 break;
 }
 }
@@ -142,11 +142,11 @@ static void *rcu_q_updater(void *arg)
 
 *(struct rcu_reader_data **)arg = &rcu_reader;
 atomic_inc(&nthreadsrunning);
-while (goflag == GOFLAG_INIT) {
+while (atomic_read(&goflag) == GOFLAG_INIT) {
 g_usleep(1000);
 }
 
-while (goflag == GOFLAG_RUN) {
+while (atomic_read(&goflag) == GOFLAG_RUN) {
 target_el = select_random_el(RCU_Q_LEN);
 j = 0;
 /* FOREACH_RCU could work here but let's use both macros */
@@ -160,7 +160,7 @@ static void *rcu_q_updater(void *arg)
 break;
 }
 }
-if (goflag == GOFLAG_STOP) {
+if (atomic_read(&goflag) == GOFLAG_STOP) {
 break;
 }
 target_el = select_random_el(RCU_Q_LEN);
@@ -209,9 +209,9 @@ static void rcu_qtest_run(int duration, int nreaders)
 g_usleep(1000);
 }
 
-goflag = GOFLAG_RUN;
+atomic_set(&goflag, GOFLAG_RUN);
 sleep(duration);
-goflag = GOFLAG_STOP;
+atomic_set(&goflag, GOFLAG_STOP);
 wait_all_threads();
 }
 
-- 
2.17.1




[Qemu-devel] [PATCH v2 06/11] test-rcu-list: access counters with atomics

2018-08-19 Thread Emilio G. Cota
Signed-off-by: Emilio G. Cota 
---
 tests/test-rcu-list.c | 12 +++-
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/tests/test-rcu-list.c b/tests/test-rcu-list.c
index b4ed130081..dc58091996 100644
--- a/tests/test-rcu-list.c
+++ b/tests/test-rcu-list.c
@@ -93,7 +93,7 @@ static void reclaim_list_el(struct rcu_head *prcu)
 struct list_element *el = container_of(prcu, struct list_element, rcu);
 g_free(el);
 /* Accessed only from call_rcu thread.  */
-n_reclaims++;
+atomic_set(&n_reclaims, n_reclaims + 1);
 }
 
 static QLIST_HEAD(q_list_head, list_element) Q_list_head;
@@ -182,7 +182,7 @@ static void *rcu_q_updater(void *arg)
 qemu_mutex_lock(&counts_mutex);
 n_nodes += n_nodes_local;
 n_updates += n_updates_local;
-n_nodes_removed += n_removed_local;
+atomic_set(&n_nodes_removed, n_nodes_removed + n_removed_local);
 qemu_mutex_unlock(&counts_mutex);
 return NULL;
 }
@@ -239,16 +239,18 @@ static void rcu_qtest(const char *test, int duration, int nreaders)
 n_nodes_removed += n_removed_local;
 qemu_mutex_unlock(&counts_mutex);
 synchronize_rcu();
-while (n_nodes_removed > n_reclaims) {
+while (atomic_read(&n_nodes_removed) > atomic_read(&n_reclaims)) {
 g_usleep(100);
 synchronize_rcu();
 }
 if (g_test_in_charge) {
-g_assert_cmpint(n_nodes_removed, ==, n_reclaims);
+g_assert_cmpint(atomic_read(&n_nodes_removed), ==,
+atomic_read(&n_reclaims));
 } else {
 printf("%s: %d readers; 1 updater; nodes read: "  \
"%lld, nodes removed: %lld; nodes reclaimed: %lld\n",
-   test, nthreadsrunning - 1, n_reads, n_nodes_removed, n_reclaims);
+   test, nthreadsrunning - 1, n_reads,
+   atomic_read(&n_nodes_removed), atomic_read(&n_reclaims));
 exit(0);
 }
 }
-- 
2.17.1




[Qemu-devel] [PATCH v2 10/11] spapr: do not use CPU_FOREACH_REVERSE

2018-08-19 Thread Emilio G. Cota
This paves the way for implementing the CPU list with an RCU list,
which cannot be traversed in reverse order.

Note that this is the only caller of CPU_FOREACH_REVERSE.

Acked-by: David Gibson 
Signed-off-by: Emilio G. Cota 
---
 hw/ppc/spapr.c | 16 +++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index 421b2dd09b..2ef5be2790 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -622,9 +622,12 @@ static void spapr_populate_cpu_dt(CPUState *cs, void *fdt, int offset,
 
 static void spapr_populate_cpus_dt_node(void *fdt, sPAPRMachineState *spapr)
 {
+CPUState **rev;
 CPUState *cs;
+int n_cpus;
 int cpus_offset;
 char *nodename;
+int i;
 
 cpus_offset = fdt_add_subnode(fdt, 0, "cpus");
 _FDT(cpus_offset);
@@ -635,8 +638,19 @@ static void spapr_populate_cpus_dt_node(void *fdt, sPAPRMachineState *spapr)
  * We walk the CPUs in reverse order to ensure that CPU DT nodes
  * created by fdt_add_subnode() end up in the right order in FDT
  * for the guest kernel the enumerate the CPUs correctly.
+ *
+ * The CPU list cannot be traversed in reverse order, so we need
+ * to do extra work.
  */
-CPU_FOREACH_REVERSE(cs) {
+n_cpus = 0;
+rev = NULL;
+CPU_FOREACH(cs) {
+rev = g_renew(CPUState *, rev, n_cpus + 1);
+rev[n_cpus++] = cs;
+}
+
+for (i = n_cpus - 1; i >= 0; i--) {
+CPUState *cs = rev[i];
 PowerPCCPU *cpu = POWERPC_CPU(cs);
 int index = spapr_get_vcpu_id(cpu);
 DeviceClass *dc = DEVICE_GET_CLASS(cs);
-- 
2.17.1




[Qemu-devel] [PATCH v2 03/11] rcu_queue: add RCU QSIMPLEQ

2018-08-19 Thread Emilio G. Cota
Signed-off-by: Emilio G. Cota 
---
 include/qemu/rcu_queue.h | 65 
 1 file changed, 65 insertions(+)

diff --git a/include/qemu/rcu_queue.h b/include/qemu/rcu_queue.h
index 6881ea5274..e0395c989a 100644
--- a/include/qemu/rcu_queue.h
+++ b/include/qemu/rcu_queue.h
@@ -128,6 +128,71 @@ extern "C" {
   ((next_var) = atomic_rcu_read(&(var)->field.le_next), 1);  \
(var) = (next_var))
 
+/*
+ * RCU simple queue
+ */
+
+/* Simple queue access methods */
+#define QSIMPLEQ_EMPTY_RCU(head)  (atomic_read(&(head)->sqh_first) == NULL)
+#define QSIMPLEQ_FIRST_RCU(head)   atomic_rcu_read(&(head)->sqh_first)
+#define QSIMPLEQ_NEXT_RCU(elm, field)  atomic_rcu_read(&(elm)->field.sqe_next)
+
+/* Simple queue functions */
+#define QSIMPLEQ_INSERT_HEAD_RCU(head, elm, field) do { \
+(elm)->field.sqe_next = (head)->sqh_first;  \
+if ((elm)->field.sqe_next == NULL) {\
+(head)->sqh_last = &(elm)->field.sqe_next;  \
+}   \
+atomic_rcu_set(&(head)->sqh_first, (elm));  \
+} while (/*CONSTCOND*/0)
+
+#define QSIMPLEQ_INSERT_TAIL_RCU(head, elm, field) do {\
+(elm)->field.sqe_next = NULL;  \
+atomic_rcu_set((head)->sqh_last, (elm));   \
+(head)->sqh_last = &(elm)->field.sqe_next; \
+} while (/*CONSTCOND*/0)
+
+#define QSIMPLEQ_INSERT_AFTER_RCU(head, listelm, elm, field) do {   \
+(elm)->field.sqe_next = (listelm)->field.sqe_next;  \
+if ((elm)->field.sqe_next == NULL) {\
+(head)->sqh_last = &(elm)->field.sqe_next;  \
+}   \
+atomic_rcu_set(&(listelm)->field.sqe_next, (elm));  \
+} while (/*CONSTCOND*/0)
+
+#define QSIMPLEQ_REMOVE_HEAD_RCU(head, field) do { \
+atomic_set(&(head)->sqh_first, (head)->sqh_first->field.sqe_next); \
+if ((head)->sqh_first == NULL) {   \
+(head)->sqh_last = &(head)->sqh_first; \
+}  \
+} while (/*CONSTCOND*/0)
+
+#define QSIMPLEQ_REMOVE_RCU(head, elm, type, field) do {\
+if ((head)->sqh_first == (elm)) {   \
+QSIMPLEQ_REMOVE_HEAD_RCU((head), field);\
+} else {\
+struct type *curr = (head)->sqh_first;  \
+while (curr->field.sqe_next != (elm)) { \
+curr = curr->field.sqe_next;\
+}   \
+atomic_set(&curr->field.sqe_next,   \
+   curr->field.sqe_next->field.sqe_next);   \
+if (curr->field.sqe_next == NULL) { \
+(head)->sqh_last = &(curr)->field.sqe_next; \
+}   \
+}   \
+} while (/*CONSTCOND*/0)
+
+#define QSIMPLEQ_FOREACH_RCU(var, head, field)  \
+for ((var) = atomic_rcu_read(&(head)->sqh_first);   \
+ (var); \
+ (var) = atomic_rcu_read(&(var)->field.sqe_next))
+
+#define QSIMPLEQ_FOREACH_SAFE_RCU(var, head, field, next)\
+for ((var) = atomic_rcu_read(&(head)->sqh_first);\
+ (var) && ((next) = atomic_rcu_read(&(var)->field.sqe_next), 1); \
+ (var) = (next))
+
 #ifdef __cplusplus
 }
 #endif
-- 
2.17.1




[Qemu-devel] [PATCH v2 02/11] rcu_queue: remove barrier from QLIST_EMPTY_RCU

2018-08-19 Thread Emilio G. Cota
It's unnecessary because the pointer isn't dereferenced.

Signed-off-by: Emilio G. Cota 
---
 include/qemu/rcu_queue.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/qemu/rcu_queue.h b/include/qemu/rcu_queue.h
index dd7b3be043..6881ea5274 100644
--- a/include/qemu/rcu_queue.h
+++ b/include/qemu/rcu_queue.h
@@ -36,7 +36,7 @@ extern "C" {
 /*
  * List access methods.
  */
-#define QLIST_EMPTY_RCU(head) (atomic_rcu_read(&(head)->lh_first) == NULL)
+#define QLIST_EMPTY_RCU(head) (atomic_read(&(head)->lh_first) == NULL)
 #define QLIST_FIRST_RCU(head) (atomic_rcu_read(&(head)->lh_first))
 #define QLIST_NEXT_RCU(elm, field) (atomic_rcu_read(&(elm)->field.le_next))
 
-- 
2.17.1




[Qemu-devel] [PATCH v2 01/11] rcu_queue: use atomic_set in QLIST_REMOVE_RCU

2018-08-19 Thread Emilio G. Cota
To avoid undefined behaviour.

Signed-off-by: Emilio G. Cota 
---
 include/qemu/rcu_queue.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/qemu/rcu_queue.h b/include/qemu/rcu_queue.h
index 01be77407b..dd7b3be043 100644
--- a/include/qemu/rcu_queue.h
+++ b/include/qemu/rcu_queue.h
@@ -112,7 +112,7 @@ extern "C" {
(elm)->field.le_next->field.le_prev =\
 (elm)->field.le_prev;   \
 }   \
-*(elm)->field.le_prev =  (elm)->field.le_next;  \
+atomic_set((elm)->field.le_prev, (elm)->field.le_next); \
 } while (/*CONSTCOND*/0)
 
 /* List traversal must occur within an RCU critical section.  */
-- 
2.17.1




[Qemu-devel] [PATCH v2 11/11] qom: convert the CPU list to RCU

2018-08-19 Thread Emilio G. Cota
Iterating over the list without using atomics is undefined behaviour,
since the list can be modified concurrently by other threads (e.g.
every time a new thread is created in user-mode).

Fix it by implementing the CPU list as an RCU QTAILQ. This requires
a little bit of extra work to traverse list in reverse order (see
previous patch), but other than that the conversion is trivial.

Signed-off-by: Emilio G. Cota 
---
 include/qom/cpu.h | 11 +--
 cpus-common.c |  4 ++--
 cpus.c|  2 +-
 linux-user/main.c |  2 +-
 linux-user/syscall.c  |  2 +-
 target/s390x/cpu_models.c |  2 +-
 6 files changed, 11 insertions(+), 12 deletions(-)

diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index ecf6ed556a..dc130cd307 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -26,6 +26,7 @@
 #include "exec/memattrs.h"
 #include "qapi/qapi-types-run-state.h"
 #include "qemu/bitmap.h"
+#include "qemu/rcu_queue.h"
 #include "qemu/queue.h"
 #include "qemu/thread.h"
 
@@ -442,13 +443,11 @@ struct CPUState {
 
 QTAILQ_HEAD(CPUTailQ, CPUState);
 extern struct CPUTailQ cpus;
-#define CPU_NEXT(cpu) QTAILQ_NEXT(cpu, node)
-#define CPU_FOREACH(cpu) QTAILQ_FOREACH(cpu, &cpus, node)
+#define first_cpuQTAILQ_FIRST_RCU(&cpus)
+#define CPU_NEXT(cpu)QTAILQ_NEXT_RCU(cpu, node)
+#define CPU_FOREACH(cpu) QTAILQ_FOREACH_RCU(cpu, &cpus, node)
 #define CPU_FOREACH_SAFE(cpu, next_cpu) \
-QTAILQ_FOREACH_SAFE(cpu, &cpus, node, next_cpu)
-#define CPU_FOREACH_REVERSE(cpu) \
-QTAILQ_FOREACH_REVERSE(cpu, &cpus, CPUTailQ, node)
-#define first_cpu QTAILQ_FIRST(&cpus)
+QTAILQ_FOREACH_SAFE_RCU(cpu, &cpus, node, next_cpu)
 
 extern __thread CPUState *current_cpu;
 
diff --git a/cpus-common.c b/cpus-common.c
index 59f751ecf9..98dd8c6ff1 100644
--- a/cpus-common.c
+++ b/cpus-common.c
@@ -84,7 +84,7 @@ void cpu_list_add(CPUState *cpu)
 } else {
 assert(!cpu_index_auto_assigned);
 }
-QTAILQ_INSERT_TAIL(&cpus, cpu, node);
+QTAILQ_INSERT_TAIL_RCU(&cpus, cpu, node);
 qemu_mutex_unlock(&qemu_cpu_list_lock);
 
 finish_safe_work(cpu);
@@ -101,7 +101,7 @@ void cpu_list_remove(CPUState *cpu)
 
 assert(!(cpu_index_auto_assigned && cpu != QTAILQ_LAST(&cpus, CPUTailQ)));
 
-QTAILQ_REMOVE(&cpus, cpu, node);
+QTAILQ_REMOVE_RCU(&cpus, cpu, node);
 cpu->cpu_index = UNASSIGNED_CPU_INDEX;
 qemu_mutex_unlock(&qemu_cpu_list_lock);
 }
diff --git a/cpus.c b/cpus.c
index b5844b7103..d60f8603fd 100644
--- a/cpus.c
+++ b/cpus.c
@@ -1491,7 +1491,7 @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg)
 atomic_mb_set(&cpu->exit_request, 0);
 }
 
-qemu_tcg_rr_wait_io_event(cpu ? cpu : QTAILQ_FIRST(&cpus));
+qemu_tcg_rr_wait_io_event(cpu ? cpu : first_cpu);
 deal_with_unplugged_cpus();
 }
 
diff --git a/linux-user/main.c b/linux-user/main.c
index ea00dd9057..923cbb753a 100644
--- a/linux-user/main.c
+++ b/linux-user/main.c
@@ -126,7 +126,7 @@ void fork_end(int child)
Discard information about the parent threads.  */
 CPU_FOREACH_SAFE(cpu, next_cpu) {
 if (cpu != thread_cpu) {
-QTAILQ_REMOVE(&cpus, cpu, node);
+QTAILQ_REMOVE_RCU(&cpus, cpu, node);
 }
 }
 qemu_init_cpu_list();
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
index bb42a225eb..95ac0102bf 100644
--- a/linux-user/syscall.c
+++ b/linux-user/syscall.c
@@ -8051,7 +8051,7 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
 TaskState *ts;
 
 /* Remove the CPU from the list.  */
-QTAILQ_REMOVE(&cpus, cpu, node);
+QTAILQ_REMOVE_RCU(&cpus, cpu, node);
 
 cpu_list_unlock();
 
diff --git a/target/s390x/cpu_models.c b/target/s390x/cpu_models.c
index 604898a882..d1a45bd8c0 100644
--- a/target/s390x/cpu_models.c
+++ b/target/s390x/cpu_models.c
@@ -1110,7 +1110,7 @@ void s390_set_qemu_cpu_model(uint16_t type, uint8_t gen, uint8_t ec_ga,
 const S390CPUDef *def = s390_find_cpu_def(type, gen, ec_ga, NULL);
 
 g_assert(def);
-g_assert(QTAILQ_EMPTY(&cpus));
+g_assert(QTAILQ_EMPTY_RCU(&cpus));
 
 /* TCG emulates some features that can usually not be enabled with
  * the emulated machine generation. Make sure they can be enabled
-- 
2.17.1
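
A sketch of how a reader might traverse the converted list; assuming the caller
is not otherwise protected, it wraps the walk in an RCU read-side critical
section. This example is illustrative only and is not part of the patch:

#include "qemu/osdep.h"
#include "qemu/rcu.h"
#include "qom/cpu.h"

/* Walk the CPU list without holding qemu_cpu_list_lock: the RCU read-side
 * critical section keeps the list safe to traverse while CPUs are being
 * added or removed concurrently. */
static int count_cpus(void)
{
    CPUState *cpu;
    int n = 0;

    rcu_read_lock();
    CPU_FOREACH(cpu) {
        n++;
    }
    rcu_read_unlock();
    return n;
}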




Re: [Qemu-devel] [PATCH 3/3] qom: implement CPU list with an RCU QLIST

2018-08-19 Thread Emilio G. Cota
On Fri, Aug 17, 2018 at 19:53:40 +0200, Paolo Bonzini wrote:
> On 15/08/2018 02:34, Emilio G. Cota wrote:
> > On Tue, Aug 14, 2018 at 08:26:54 +0200, Paolo Bonzini wrote:
> >> On 13/08/2018 18:38, Emilio G. Cota wrote:
> >>> Fix it by implementing the CPU list as an RCU QLIST. This requires
> >>> a little bit of extra work to insert CPUs at the tail of
> >>> the list and to iterate over the list in reverse order (see previous 
> >>> patch).
> >>>
> >>> One might be tempted to just insert new CPUs at the head of the list.
> >>> However, I think this might lead to hard-to-debug issues, since it is
> >>> possible that callers are assuming that CPUs are inserted at the tail
> >>> (just like spapr code did in the previous patch). So instead of auditing
> >>> all callers, this patch simply keeps the old behaviour.
> >>
> >> Why not add an RCU_QSIMPLEQ
> > 
> > Because we can't atomically update both head.last and item.next.
> 
> Why do you need that?  Updates are protected by a mutex in RCU-protected
> lists, it is not necessary to make them atomic.  Also, feel free to
> implement a subset of the write-side macros, for example only
> INSERT_{HEAD,TAIL,AFTER} and REMOVE_HEAD.

Yes I got confused, was thinking you wanted to support the
reverse traversal (simpleq doesn't even have the reverse pointers,
so I don't know how I reached that conclusion).

v2 incoming.

Thanks,

E.



Re: [Qemu-devel] Bugs when cross-compiling qemu for Windows with mingw 8.1, executable doesn't run

2018-08-19 Thread Stefan Weil
Am 18.08.2018 um 22:51 schrieb Howard Spoelstra:
> On Sat, Aug 18, 2018 at 9:09 PM, Stefan Weil  wrote:
>> Am 17.08.2018 um 09:32 schrieb David Hildenbrand:
>>> Not being a win32/mingw expert, Stefan, any idea?
>>
>>
>> I'd try a debug build (configure [...] --enable-debug).
>>
>> My installers (https://qemu.weilnetz.de/w64/) were built with
>> x86_64-w64-mingw32-gcc (GCC) 6.3.0 20170516 (from Debian Stretch).
>> Howard, perhaps you can try whether they show the same runtime SIGSEGV.
>> When I run your command line with a dummy disk image, OpenBIOS boots fine.
>>
>> Kind regards,
>> Stefan
> 
> The error I reported already came from a debug build.
> Other builds with less recent mingw (7.3 in Fedora 28) do not SIGSEGV,
> neither do Stefan's.
> I can confirm the strncpy warnings are gone using Philippe's patches.
> 
> Best,
> Howard


I can now reproduce the runtime problem (although I get a different error):

Debian experimental provides x86_64-w64-mingw32-gcc (GCC) 8.2-win32
20180726. I now used that compiler for my build. In addition to the
compiler errors reported by Howard, I also get similar errors for
hw/acpi/core.c and hw/acpi/aml-build.c.

The resulting binary starts running OpenBIOS, but then it fails (tested
with wine):

./configure --cross-prefix=x86_64-w64-mingw32- && make

dd if=/dev/zero of=9.2.qcow2 bs=1M count=32

wine ppc-softmmu/qemu-system-ppc.exe -L pc-bios -boot c -m 256 -M
"mac99,via=pmu" -prom-env "boot-args=-v" -prom-env "auto-boot?=true"
-prom-env "vga-ndrv?=true" -hda 9.2.qcow2 -netdev "user,id=network01"
-device "sungem,netdev=network01" -d int

*** stack smashing detected ***:  terminated
wine: Unhandled illegal instruction at address 0x68ac2fe0 (thread 002c),

QEMU for Linux with gcc 8.2 works fine.

Stefan





Re: [Qemu-devel] vTPM 2.0 is recognized as vTPM 1.2 on the Win 10 virtual machine

2018-08-19 Thread 汤福
I tried it according to your method, but I have some problems. My host is
centos 7.2 with TPM 2.0 hardware and qemu v2.10.2. The driver for the TPM
2.0 hardware is the crb device. Running lsmod shows the tpm 2.0 driver
information as follows:
[root@localhost BUILD]# lsmod | grep tpm
tpm_crb                12972  0 

I downloaded the OVMF-20182028-5.noarch.src.rpm package from the rpm search
website and rebuilt it with -DTPM2_ENABLE and -DSECURE_BOOT_ENABLE. Everything
rebuilt fine and generated the OVMF.fd and OVMF_ARGS.fd files, so I copied
OVMF.fd to my qemu-kvm project and started qemu to install a Windows 10 virtual
machine.

I first created a blank img file named win10.img, and installed the win10
virtual machine as follows:
[root@localhost BUILD]#qemu-system-x86_64 -display sdl -enable-kvm  -m 4096 
-boot d  -cdrom win10.iso -bios OVMF.fd  -net none  -boot menu=on -tpmdev 
cuse-tpm,id=tpm0,cancel-path=/dev/null,type=passthrough,path=/dev/tpm0  -device 
tpm-tis,tpmdev=tpm0 win10.img

The installation process is very, very slow, and the system automatically
restarts after the installation is complete. But it seems it can't enter the
desktop: the system restarts cyclically, so it looks like there is a problem
with the BIOS boot. Recalling what you said, that Windows TPM 2 support needs
the TPM CRB device, I started qemu with -device tpm-crb, but it didn't work.
It prints the following error message:
[root@localhost BUILD]#qemu-system-x86_64 -display sdl -enable-kvm  -m 4096 
-boot d  -bios OVMF.fd  -net none  -boot menu=on -tpmdev 
cuse-tpm,id=tpm0,cancel-path=/dev/null,type=passthrough,path=/dev/tpm0  -device 
tpm-crb,tpmdev=tpm0 win10.img
[root@localhost BUILD]#qemu-system-x86_64: -device tpm-crb,tpmdev=tpm0: 
'tpm-crb' is not a valid device model name

I don't know where the problem is, and I need some help. Thank you
very much!


> -----Original Message-----
> From: "Marc-André Lureau" 
> Sent: 2018-08-16 16:56:52 (Thursday)
> To: tan...@gohighsec.com
> Cc: QEMU 
> Subject: Re: [Qemu-devel] vTPM 2.0 is recognized as vTPM 1.2 on the Win 10 virtual machine
> 
> Hi
> On Thu, Aug 16, 2018 at 3:29 AM 汤福  wrote:
> >
> > Hi,
> >
> > I want to use the vTPM in a qemu Windows image. Unfortunately, it didn't 
> > work.
> > First, the equipment:
> > TPM 2.0 hardware
> > CentOS 7.2
> > Qemu v2.10.2
> > SeaBIOS 1.11.0
> > libtpm and so on
> >
> > My host is centos 7.2 with the TPM 2.0 hardware and qemu v2.10.2.
> > I make the libtpm and seabios with ./configure, make and so on. I checked 
> > seabios with make menuconfig the TPM setting. It is enabled tpm by default.
> > Eventually, all works without errors.
> >
> > I start the Widnows 10 image with:
> > qemu-system-x86_64 -display sdl -enable-kvm -m 2048 -boot d -bios bios.bin 
> > -boot menu=on -tpmdev 
> > cuse-tpm,id=tpm0,cancel-path=/dev/null,type=passthrough,path=/dev/tpm0  
> > -device tpm-tis,tpmdev=tpm0 win10.img
> >
> >
> > First it looks all fine. Windows 10 booted up but the vTPM was recognized 
> > as TPM 1.2 instead of TPM 2.0 in device manager. I open the tpm Manager 
> > with tpm.msc but get error with No compatible TPM found.
> > If I use vTPM in a qemu linux image, everything goes well. I think of what 
> > you said 
> >
> >
> > So, what could be the problem?
> 
> You need to build libtpms & swtpm from Stefan's tpm2-preview branches.
> (Alternatively, there is now an experimental fedora copr repository:
> https://copr.fedorainfracloud.org/coprs/stefanberger/swtpm/)
> 
> I suggest to setup the VM with libvirt upstream, which will do the
> preliminary swtpm_setup for you, or follow
> https://github.com/stefanberger/swtpm/wiki/Certificiates-created-by-swtpm_setup
> 
> For Windows TPM 2 support, you will need the TPM CRB device, and
> upstream OVMF compiled with  -D TPM2_ENABLE (TIS & Bios are 1.2 only
> for Windows, even if seabios does have some 2.0 support with them)
> 
> Furthermore, to pass the WLK tests, you need PPI & MOR interface,
> which are still pending merge ([PATCH v9 0/6] Add support for TPM
> Physical Presence interface)
> 
> 
> 
> 
> -- 
> Marc-André Lureau