Re: [PATCH 1/2] Revert "target/arm: Merge regime_is_secure into get_phys_addr"

2023-02-21 Thread Philippe Mathieu-Daudé

On 21/2/23 04:41, Richard Henderson wrote:

This reverts commit 03bea66e7fa3af42976ceafb20512c59abf2e699,
but restores the function into ptw.c instead of internals.h.

Signed-off-by: Richard Henderson 
---
  target/arm/ptw.c | 28 ++--
  1 file changed, 14 insertions(+), 14 deletions(-)


Reviewed-by: Philippe Mathieu-Daudé 




Re: [PATCH 2/2] target/arm: Fix arm_cpu_get_phys_page_attrs_debug for m-profile

2023-02-21 Thread Philippe Mathieu-Daudé

On 21/2/23 04:41, Richard Henderson wrote:

M-profile is not supported by arm_is_secure, so using it as
a replacement when bypassing get_phys_addr was incorrect.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1421
Fixes: 4a35855682ce ("target/arm: Plumb debug into S1Translate")
Signed-off-by: Richard Henderson 
---
  target/arm/ptw.c | 5 +++--
  1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index cb073ac477..057cc9f641 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -2974,9 +2974,10 @@ hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, 
vaddr addr,
  {
  ARMCPU *cpu = ARM_CPU(cs);
  CPUARMState *env = &cpu->env;
+ARMMMUIdx mmu_idx = arm_mmu_idx(env);
  S1Translate ptw = {
-.in_mmu_idx = arm_mmu_idx(env),
-.in_secure = arm_is_secure(env),
+.in_mmu_idx = mmu_idx,
+.in_secure = regime_is_secure(env, mmu_idx),
  .in_debug = true,
  };
  GetPhysAddrResult res = {};


Reviewed-by: Philippe Mathieu-Daudé 




Re: [PATCH 1/2] qga/win32: Remove change action from MSI installer

2023-02-21 Thread Philippe Mathieu-Daudé

On 20/2/23 18:41, Konstantin Kostiuk wrote:

resolves: rhbz#2167436


"You are not authorized to access bug #2167436."


fixes: CVE-2023-0664


This commit description is rather scarce...

I understand you are trying to fix a CVE, but we shouldn't play
the "security by obscurity" card. How can the community and
distributions know this security fix is enough with the bare
"Remove change action from MSI installer" justification?
Can't we do better?


Signed-off-by: Konstantin Kostiuk 
---
  qga/installer/qemu-ga.wxs | 1 +
  1 file changed, 1 insertion(+)

diff --git a/qga/installer/qemu-ga.wxs b/qga/installer/qemu-ga.wxs
index 51340f7ecc..feb629ec47 100644
--- a/qga/installer/qemu-ga.wxs
+++ b/qga/installer/qemu-ga.wxs
@@ -31,6 +31,7 @@
/>
  
  1
+
  
--
2.25.1







Re: [PATCH v5 27/29] MAINTAINERS: Add Akihiko Odaki as a e1000e reviewer

2023-02-21 Thread Philippe Mathieu-Daudé

On 1/2/23 04:35, Akihiko Odaki wrote:

I want to be notified when there is a new change for e1000e,
as e1000e is similar to igb and such a change may also be applicable
to igb.

Signed-off-by: Akihiko Odaki 
---
  MAINTAINERS | 2 ++
  1 file changed, 2 insertions(+)


Reviewed-by: Philippe Mathieu-Daudé 




Re: [PATCH v5 28/29] MAINTAINERS: Add e1000e test files

2023-02-21 Thread Philippe Mathieu-Daudé

On 1/2/23 04:35, Akihiko Odaki wrote:

Signed-off-by: Akihiko Odaki 
Acked-by: Thomas Huth 
---
  MAINTAINERS | 2 ++
  1 file changed, 2 insertions(+)


Reviewed-by: Philippe Mathieu-Daudé 




Re: [PATCH 3/4] win32: stop mixing SOCKET and file descriptor space

2023-02-21 Thread Paolo Bonzini

On 2/20/23 16:29, Marc-André Lureau wrote:

7. A Windows SOCKET is also a HANDLE.  Maybe.  I guess.  Docs are
confusing.


Kind of, but not really. I think a HANDLE is a kind of void*. You need to
be careful to use it with the right functions. Sometimes a HANDLE can work
with generic functions, like ReadFile, but you should not use CloseHandle
on a SOCKET or a registry key.


A Windows SOCKET *is* a file HANDLE except it's always in overlapped 
mode so Windows provides send()/recv() in case you don't want to deal 
with overlapped mode.  But you can use it with ReadFile too (Windows API 
documentation says that is only true sometimes, but Winsock API is 30 
years old and right now you pretty much always can).


However, sockets also have some extra information on the side, so you 
need to close them with closesocket(); CloseHandle() is not enough.


The problem is that close() of something opened with _open_osfhandle() 
*does* do that CloseHandle(), so basically you are closing the handle 
twice.  IIRC there used to be undocumented functions _alloc_osfhnd() and 
similar, but they don't exist anymore (even Wine does not have them), so 
we're stuck; unfortunately this is the reason why QEMU is not already 
doing something like what you have in this patch.


Is this a real bug or is it theoretical?  Do file descriptor and socket 
spaces overlap in practice?


Paolo
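The overlap question can be made concrete on POSIX, where, unlike Windows, sockets and files share a single descriptor namespace. A small illustrative sketch (my example, not QEMU code):

```python
import os
import socket

# On POSIX, files and sockets draw descriptors from one integer space,
# so a socket's fileno() can never collide with an open file's fd.
# On Windows, SOCKET values and CRT file descriptors are separate
# spaces, which is exactly the mixing this patch series untangles.
r, w = os.pipe()
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

fds = {r, w, s.fileno()}
assert len(fds) == 3  # three distinct descriptors, one shared namespace

s.close()
os.close(r)
os.close(w)
```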




Re: [PATCH v5 29/29] e1000e: Combine rx traces

2023-02-21 Thread Philippe Mathieu-Daudé

On 1/2/23 04:35, Akihiko Odaki wrote:

Whether a packet will be written back to the guest depends on the
remaining space of the queue. Therefore, e1000e_rx_written_to_guest and
e1000e_rx_not_written_to_guest should log the index of the queue instead
of generated interrupts. This also removes the need of
e1000e_rx_rss_dispatched_to_queue, which logs the queue index.

Signed-off-by: Akihiko Odaki 
---
  hw/net/e1000e_core.c | 6 ++
  hw/net/trace-events  | 5 ++---
  2 files changed, 4 insertions(+), 7 deletions(-)


Reviewed-by: Philippe Mathieu-Daudé 




Re: [PATCH v5 06/29] e1000e: Mask registers when writing

2023-02-21 Thread Philippe Mathieu-Daudé

On 1/2/23 04:35, Akihiko Odaki wrote:

When a register has fewer effective bits than its width, the old code
masked inconsistently when writing or reading. Make the code consistent
by always masking when writing, and remove some code duplication.

Signed-off-by: Akihiko Odaki 
---
  hw/net/e1000e_core.c | 94 +++-
  1 file changed, 40 insertions(+), 54 deletions(-)

diff --git a/hw/net/e1000e_core.c b/hw/net/e1000e_core.c
index 181c1e0c2a..e6fc85ea51 100644
--- a/hw/net/e1000e_core.c
+++ b/hw/net/e1000e_core.c
@@ -2440,17 +2440,19 @@ e1000e_set_fcrtl(E1000ECore *core, int index, uint32_t 
val)
  core->mac[FCRTL] = val & 0x8000FFF8;
  }
  
-static inline void

-e1000e_set_16bit(E1000ECore *core, int index, uint32_t val)
-{
-core->mac[index] = val & 0xffff;
-}
+#define E1000E_LOW_BITS_SET_FUNC(num)\
+static void  \
+e1000e_set_##num##bit(E1000ECore *core, int index, uint32_t val) \
+{\
+core->mac[index] = val & (BIT(num) - 1); \
+}
  
-static void

-e1000e_set_12bit(E1000ECore *core, int index, uint32_t val)
-{
-core->mac[index] = val & 0xfff;
-}
+E1000E_LOW_BITS_SET_FUNC(4)
+E1000E_LOW_BITS_SET_FUNC(6)
+E1000E_LOW_BITS_SET_FUNC(11)
+E1000E_LOW_BITS_SET_FUNC(12)
+E1000E_LOW_BITS_SET_FUNC(13)
+E1000E_LOW_BITS_SET_FUNC(16)


This looks correct, but it is hard to be sure: too many changes at once
(for my taste at least).

Split suggestions:
- move macros and 6/16-bit masks
- move 4/11 masks
- move 13-bit mask

Or:
- move macros and 13-bit masks
- move the rest

Unless Jason is OK to merge it as is.
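For reference, the macro's mask expression BIT(num) - 1 selects the low num bits. A quick Python check (my sketch) that it matches the literal masks the old open-coded setters used:

```python
def low_bits_mask(num: int) -> int:
    """Equivalent of the C expression BIT(num) - 1: a mask of the low num bits."""
    return (1 << num) - 1

# The generated masks match the literals of the old e1000e_set_16bit
# and e1000e_set_12bit helpers the macro replaces.
assert low_bits_mask(16) == 0xffff
assert low_bits_mask(12) == 0xfff

# Masking behaves as the setters do: keep only the low bits of the value.
assert 0x12345 & low_bits_mask(16) == 0x2345
```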



Re: [PATCH] hw/acpi: Set memory regions to native endian as a work around

2023-02-21 Thread Paolo Bonzini

On 2/21/23 00:25, BALATON Zoltan wrote:

I think fundamentally you need to check for the condition
size < mr->ops->impl.min_access_size in memory_region_dispatch_write,
and then make a read, combine the result with
the value, and make a write.


I neither know that part nor feel confident enough breaking such low-level 
stuff, so I think setting the affected regions to NATIVE_ENDIAN for now, 
until somebody takes care of this, is safer and not likely to break 
anything (or if it does, much less widely, and I'm more likely to be able 
to fix that than your proposed changes). So I'd rather let you do that, 
but I'd like this fixed one way or another at last.


Sorry about not replying.

The case of impl.min_access_size < valid.min_access_size is not
supported in the memory core.  Until that is done, the correct fix is to
fix acpi_pm_evt_ops to have impl.min_access_size == 1, something like
this:

diff --git a/hw/acpi/core.c b/hw/acpi/core.c
index 6da275c599c6..96eb88fa7e27 100644
--- a/hw/acpi/core.c
+++ b/hw/acpi/core.c
@@ -429,20 +429,35 @@ void acpi_pm1_evt_reset(ACPIREGS *ar)
 static uint64_t acpi_pm_evt_read(void *opaque, hwaddr addr, unsigned width)
 {
 ACPIREGS *ar = opaque;
+uint16_t val;
-switch (addr) {
+switch (addr & ~1) {
 case 0:
-return acpi_pm1_evt_get_sts(ar);
+val = acpi_pm1_evt_get_sts(ar);
+break;
 case 2:
-return ar->pm1.evt.en;
+val = ar->pm1.evt.en;
+break;
 default:
 return 0;
 }
+
+if (width == 1) {
+int bitofs = (addr & 1) * 8;
+val >>= bitofs;
+}
+return val;
 }
 
 static void acpi_pm_evt_write(void *opaque, hwaddr addr, uint64_t val,

   unsigned width)
 {
 ACPIREGS *ar = opaque;
+if (width == 1) {
+int bitofs = (addr & 1) * 8;
+uint16_t old_val = acpi_pm_evt_read(ar, addr & ~1, 2);
+uint16_t mask = 0xFF << bitofs;
+val = (old_val & ~mask) | (val << bitofs);
+addr &= ~1;
+}
+
 switch (addr) {
 case 0:
 acpi_pm1_evt_write_sts(ar, val);
@@ -458,7 +473,7 @@ static void acpi_pm_evt_write(void *opaque, hwaddr addr, 
uint64_t val,
 static const MemoryRegionOps acpi_pm_evt_ops = {
 .read = acpi_pm_evt_read,
 .write = acpi_pm_evt_write,
-.impl.min_access_size = 2,
+.impl.min_access_size = 1,
 .valid.min_access_size = 1,
 .valid.max_access_size = 2,
 .endianness = DEVICE_LITTLE_ENDIAN,


This assumes that the bus is little-endian, i.e. reading the byte at PM_EVT 
returns
bits 0..7 and reading the byte at PM_EVT+1 returns bits 8..15.

If this is incorrect, endianness needs to be changed as well.

Paolo
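Paolo's read-modify-write decomposition can be modeled in Python. This is an illustration under the little-endian assumption stated above, not QEMU code:

```python
def read_byte(reg16: int, addr: int) -> int:
    """Byte read from a little-endian 16-bit register: addr & 1 selects the byte."""
    return (reg16 >> ((addr & 1) * 8)) & 0xff

def write_byte(reg16: int, addr: int, val: int) -> int:
    """Byte write as a read-modify-write of the whole 16-bit register,
    mirroring the width == 1 path in the acpi_pm_evt_write sketch."""
    bitofs = (addr & 1) * 8
    mask = 0xff << bitofs
    return (reg16 & ~mask) | ((val & 0xff) << bitofs)

reg = 0xBEEF
assert read_byte(reg, 0) == 0xEF   # bits 0..7 at the even address
assert read_byte(reg, 1) == 0xBE   # bits 8..15 at the odd address
assert write_byte(reg, 1, 0x12) == 0x12EF  # only the high byte changes
```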




[PATCH] hw/arm/virt: Prevent CPUs in one socket from spanning multiple NUMA nodes

2023-02-21 Thread Gavin Shan
The Linux guest kernel reports a warning when two CPUs in one socket have
been associated with different NUMA nodes, using the following command
lines.

  -smp 6,maxcpus=6,sockets=2,clusters=1,cores=3,threads=1 \
  -numa node,nodeid=0,cpus=0-1,memdev=ram0\
  -numa node,nodeid=1,cpus=2-3,memdev=ram1\
  -numa node,nodeid=2,cpus=4-5,memdev=ram2\

  [ cut here ]
  WARNING: CPU: 0 PID: 1 at kernel/sched/topology.c:2271 
build_sched_domains+0x284/0x910
  Modules linked in:
  CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.14.0-268.el9.aarch64 #1
  pstate: 0045 (nzcv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
  pc : build_sched_domains+0x284/0x910
  lr : build_sched_domains+0x184/0x910
  sp : 8804bd50
  x29: 8804bd50 x28: 0002 x27: 
  x26: 89cf9a80 x25:  x24: 89cbf840
  x23: 80325000 x22: 005df800 x21: 8a4ce508
  x20:  x19: 80324440 x18: 0014
  x17: 388925c0 x16: 5386a066 x15: 9c10cc2e
  x14: 01c0 x13: 0001 x12: 7fffb1a0
  x11: 7fffb180 x10: 8a4ce508 x9 : 0041
  x8 : 8a4ce500 x7 : 8a4cf920 x6 : 0001
  x5 : 0001 x4 : 0007 x3 : 0002
  x2 : 1000 x1 : 8a4cf928 x0 : 0001
  Call trace:
   build_sched_domains+0x284/0x910
   sched_init_domains+0xac/0xe0
   sched_init_smp+0x48/0xc8
   kernel_init_freeable+0x140/0x1ac
   kernel_init+0x28/0x140
   ret_from_fork+0x10/0x20

Fix it by preventing multiple CPUs in one socket from being associated
with different NUMA nodes.

Reported-by: Yihuang Yu 
Signed-off-by: Gavin Shan 
---
 hw/arm/virt.c | 37 +
 1 file changed, 37 insertions(+)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index ac626b3bef..e0af267c77 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -230,6 +230,39 @@ static bool cpu_type_valid(const char *cpu)
 return false;
 }
 
+static bool numa_state_valid(MachineState *ms)
+{
+MachineClass *mc = MACHINE_GET_CLASS(ms);
+NumaState *state = ms->numa_state;
+const CPUArchIdList *possible_cpus = mc->possible_cpu_arch_ids(ms);
+const CPUArchId *cpus = possible_cpus->cpus;
+int len = possible_cpus->len, i, j;
+
+if (!state || state->num_nodes <= 1 || len <= 1) {
+return true;
+}
+
+for (i = 0; i < len; i++) {
+for (j = i + 1; j < len; j++) {
+if (cpus[i].props.has_socket_id &&
+cpus[i].props.has_node_id &&
+cpus[j].props.has_socket_id &&
+cpus[j].props.has_node_id &&
+cpus[i].props.socket_id == cpus[j].props.socket_id &&
+cpus[i].props.node_id != cpus[j].props.node_id) {
+error_report("CPU-%d and CPU-%d in socket-%ld have been "
+ "associated with node-%ld and node-%ld",
+ i, j, cpus[i].props.socket_id,
+ cpus[i].props.node_id,
+ cpus[j].props.node_id);
+return false;
+}
+}
+}
+
+return true;
+}
+
 static void create_randomness(MachineState *ms, const char *node)
 {
 struct {
@@ -2040,6 +2073,10 @@ static void machvirt_init(MachineState *machine)
 exit(1);
 }
 
+if (!numa_state_valid(machine)) {
+exit(1);
+}
+
 possible_cpus = mc->possible_cpu_arch_ids(machine);
 
 /*
-- 
2.23.0
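The validation rule in the patch above can be sketched in Python. Here a dict keyed by socket replaces the patch's pairwise O(n^2) scan (same result, one pass), and (socket_id, node_id) tuples stand in for the CPUArchId properties:

```python
def numa_state_valid(cpus):
    """Check that no socket spans two NUMA nodes.

    cpus is a list of (socket_id, node_id) tuples; the first node seen
    for each socket is recorded, and any later mismatch is an error.
    """
    node_of_socket = {}
    for socket_id, node_id in cpus:
        prev = node_of_socket.setdefault(socket_id, node_id)
        if prev != node_id:
            return False
    return True

# The command line from the commit message: 2 sockets x 3 cores,
# but 3 NUMA nodes of 2 CPUs each, so both sockets span two nodes.
bad = [(0, 0), (0, 0), (0, 1), (1, 1), (1, 2), (1, 2)]
good = [(0, 0), (0, 0), (0, 0), (1, 1), (1, 1), (1, 1)]
assert not numa_state_valid(bad)
assert numa_state_valid(good)
```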




Re: [RFC v5 0/3] migration: reduce time of loading non-iterable vmstate

2023-02-21 Thread Chuang Xu

Hi, Peter

This email is a supplement to the previous one.

On 2023/2/21 11:38 AM, Chuang Xu wrote:


I think we need a memory_region_transaction_commit_force() to force-commit
some transactions when loading vmstate. This function is designed like this:

/*
 * memory_region_transaction_commit_force() is designed to
 * force the mr transaction to be committed in the process
 * of loading vmstate.
 */
void memory_region_transaction_commit_force(void)
{
    AddressSpace *as;
    unsigned int memory_region_transaction_depth_copy =
        memory_region_transaction_depth;

    /*
     * Temporarily replace memory_region_transaction_depth with 0 to prevent
     * memory_region_transaction_commit_force() and address_space_to_flatview()
     * from calling each other recursively.
     */
    memory_region_transaction_depth = 0;

    assert(qemu_mutex_iothread_locked());

    if (memory_region_update_pending) {
        flatviews_reset();

        MEMORY_LISTENER_CALL_GLOBAL(begin, Forward);

        QTAILQ_FOREACH(as, &address_spaces, address_spaces_link) {
            address_space_set_flatview(as);
            address_space_update_ioeventfds(as);
        }
        memory_region_update_pending = false;
        ioeventfd_update_pending = false;
        MEMORY_LISTENER_CALL_GLOBAL(commit, Forward);
    } else if (ioeventfd_update_pending) {
        QTAILQ_FOREACH(as, &address_spaces, address_spaces_link) {
            address_space_update_ioeventfds(as);
        }
        ioeventfd_update_pending = false;
    }

    /* recover memory_region_transaction_depth */
    memory_region_transaction_depth = memory_region_transaction_depth_copy;
}

Now there are two options for using this function:

1. call it in address_space_to_flatview():

static inline FlatView *address_space_to_flatview(AddressSpace *as)
{
    /*
     * Before using any flatview, check whether we're during a memory
     * region transaction. If so, force-commit the memory region
     * transaction.
     */
    if (memory_region_transaction_in_progress()) {
        memory_region_transaction_commit_force();
    }
    return qatomic_rcu_read(&as->current_map);
}

Here we need to add the condition that the BQL is held, or threads running
here without the BQL held will trigger the assertion in
memory_region_transaction_commit_force().

I'm not sure whether this condition is sufficient; at least for the mr
accesses in the load thread it is (because the load thread holds the BQL
when accessing mr). But for other cases, it seems we would return to
our discussion on the sanity check..

Another point I worry about is whether the number of mr transaction commits
increases in some other scenarios because of this force commit, although so
far I haven't seen a simple virtual machine lifecycle trigger it.
I did a little test: replace commit_force() with abort() and run qtest.
Almost all errors I can see are related to migration..

2. call it before each post_load()

I prefer the former; it is more general than the latter.
And with this function, the sanity check is no longer necessary.
Although we may inevitably call memory_region_transaction_commit_force()
several times, in my actual tests the number of calls to
memory_region_transaction_commit_force() is still insignificant compared
with the number of calls to memory_region_transaction_commit() before the
optimization. As a result, this code won't affect the optimization effect,
but it can ensure reliability.


Looking forward to your opinion. Thanks!
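The interplay of depth counting and the proposed force commit can be modeled with a toy Python class. This is my sketch with simplified names and behavior, not the QEMU API:

```python
class MemoryTransactions:
    """Toy model of memory_region_transaction_{begin,commit} nesting.

    Pending updates flush only when depth returns to zero; force()
    mimics the proposed commit_force(): flush immediately, while
    temporarily zeroing the depth so the flush cannot re-enter itself,
    then restore the saved nesting level.
    """
    def __init__(self):
        self.depth = 0
        self.update_pending = False
        self.commits = 0

    def begin(self):
        self.depth += 1

    def commit(self):
        assert self.depth > 0
        self.depth -= 1
        if self.depth == 0 and self.update_pending:
            self._flush()

    def force(self):
        saved, self.depth = self.depth, 0   # prevent recursive re-entry
        if self.update_pending:
            self._flush()
        self.depth = saved                  # recover the nesting level

    def _flush(self):
        self.update_pending = False
        self.commits += 1

tx = MemoryTransactions()
tx.begin(); tx.update_pending = True
tx.force()                 # a flatview is needed mid-transaction: flush early
assert tx.commits == 1 and tx.depth == 1
tx.commit()                # the outer commit finds nothing left to flush
assert tx.commits == 1 and tx.depth == 0
```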




[PATCH 2/9] simpletrace: Annotate magic constants from QEMU code

2023-02-21 Thread Mads Ynddal
From: Mads Ynddal 

It wasn't clear where the constants and structs came from, so I added
comments to help.

Signed-off-by: Mads Ynddal 
---
 scripts/simpletrace.py | 14 +++---
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/scripts/simpletrace.py b/scripts/simpletrace.py
index 9211caaec1..7ba805443d 100755
--- a/scripts/simpletrace.py
+++ b/scripts/simpletrace.py
@@ -15,15 +15,15 @@
 from tracetool import read_events, Event
 from tracetool.backend.simple import is_string
 
-header_event_id = 0xffffffffffffffff
-header_magic= 0xf2b177cb0aa429b4
-dropped_event_id = 0xfffffffffffffffe
+header_event_id = 0xffffffffffffffff # trace/simple.c::HEADER_EVENT_ID
+header_magic= 0xf2b177cb0aa429b4 # trace/simple.c::HEADER_MAGIC
+dropped_event_id = 0xfffffffffffffffe # trace/simple.c::DROPPED_EVENT_ID
 
-record_type_mapping = 0
-record_type_event = 1
+record_type_mapping = 0 # trace/simple.c::TRACE_RECORD_TYPE_MAPPING
+record_type_event = 1 # trace/simple.c::TRACE_RECORD_TYPE_EVENT
 
-log_header_fmt = '=QQQ'
-rec_header_fmt = '=QQII'
+log_header_fmt = '=QQQ' # trace/simple.c::TraceLogHeader
+rec_header_fmt = '=QQII' # trace/simple.c::TraceRecord
 
 def read_header(fobj, hfmt):
 '''Read a trace record header'''
-- 
2.38.1
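The annotated formats can be exercised directly with the struct module. The field names below follow the TraceRecord annotation above (my reading of trace/simple.c, stated as an assumption):

```python
import struct

rec_header_fmt = '=QQII'  # assumed fields: event_id, timestamp_ns, length, pid

# '=' means native byte order with standard sizes, so the header is a
# fixed 8 + 8 + 4 + 4 = 24 bytes regardless of platform padding rules.
header = struct.pack(rec_header_fmt, 42, 1_000_000, 24, 1234)
assert len(header) == struct.calcsize(rec_header_fmt) == 24

event_id, timestamp_ns, length, pid = struct.unpack(rec_header_fmt, header)
assert (event_id, timestamp_ns, length, pid) == (42, 1_000_000, 24, 1234)
```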




[PATCH 5/9] simpletrace: Changed Analyzer class to become context-manager

2023-02-21 Thread Mads Ynddal
From: Mads Ynddal 

Instead of explicitly calling `begin` and `end`, we can change the class
to use the context-manager paradigm. This is mostly a styling choice,
used in modern Python code. But it also allows for more advanced analyzers
to handle exceptions gracefully in the `__exit__` method (not
demonstrated here).

Signed-off-by: Mads Ynddal 
---
 scripts/simpletrace.py | 36 ++--
 1 file changed, 18 insertions(+), 18 deletions(-)

diff --git a/scripts/simpletrace.py b/scripts/simpletrace.py
index 7444a6e090..01bd47a130 100755
--- a/scripts/simpletrace.py
+++ b/scripts/simpletrace.py
@@ -121,12 +121,12 @@ def read_trace_records(event_mapping, event_id_to_name, 
fobj):
 
 yield rec
 
-class Analyzer(object):
+class Analyzer:
 """A trace file analyzer which processes trace records.
 
-An analyzer can be passed to run() or process().  The begin() method is
-invoked, then each trace record is processed, and finally the end() method
-is invoked.
+An analyzer can be passed to run() or process().  The __enter__() method is
+invoked when opening the analyzer using the `with` statement, then each 
trace
+record is processed, and finally the __exit__() method is invoked.
 
 If a method matching a trace event name exists, it is invoked to process
 that trace record.  Otherwise the catchall() method is invoked.
@@ -152,19 +152,19 @@ def runstate_set(self, timestamp, pid, new_state):
   ...
 """
 
-def begin(self):
+def __enter__(self):
 """Called at the start of the trace."""
-pass
+return self
 
 def catchall(self, event, rec):
 """Called if no specific method for processing a trace event has been 
found."""
 pass
 
-def end(self):
+def __exit__(self, _type, value, traceback):
 """Called at the end of the trace."""
 pass
 
-def process(events, log, analyzer, read_header=True):
+def process(events, log, analyzer_class, read_header=True):
 """Invoke an analyzer on each event in a log."""
 if read_header:
 read_trace_header(log)
@@ -203,15 +203,15 @@ def build_fn(analyzer, event):
 # Just arguments, no timestamp or pid
 return lambda _, rec: fn(*rec[3:3 + event_argcount])
 
-analyzer.begin()
-fn_cache = {}
-for rec in read_trace_records(event_mapping, event_id_to_name, log):
-event_num = rec[0]
-event = event_mapping[event_num]
-if event_num not in fn_cache:
-fn_cache[event_num] = build_fn(analyzer, event)
-fn_cache[event_num](event, rec)
-analyzer.end()
+with analyzer_class() as analyzer:
+fn_cache = {}
+for rec in read_trace_records(event_mapping, event_id_to_name, log):
+event_num = rec[0]
+event = event_mapping[event_num]
+if event_num not in fn_cache:
+fn_cache[event_num] = build_fn(analyzer, event)
+fn_cache[event_num](event, rec)
+
 
 def run(analyzer):
 """Execute an analyzer on a trace file given on the command-line.
@@ -254,4 +254,4 @@ def catchall(self, event, rec):
 i += 1
 print(' '.join(fields))
 
-run(Formatter())
+run(Formatter)
-- 
2.38.1
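The context-manager protocol the patch adopts can be sketched with a minimal analyzer (my illustration, not the QEMU class):

```python
class CountingAnalyzer:
    """Minimal analyzer in the new style: setup in __enter__, teardown in __exit__."""
    def __init__(self):
        self.count = 0
        self.closed = False

    def __enter__(self):
        # Replaces the old explicit begin(); must return the analyzer itself.
        return self

    def catchall(self, event, rec):
        self.count += 1

    def __exit__(self, exc_type, value, traceback):
        # Replaces the old explicit end(); runs even if processing raised,
        # which is the graceful-exception benefit the commit message notes.
        self.closed = True

with CountingAnalyzer() as analyzer:
    for rec in [('foo', 1, 100), ('bar', 2, 100)]:
        analyzer.catchall(rec[0], rec)

assert analyzer.count == 2
assert analyzer.closed
```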




[PATCH 8/9] simpletrace: define exception and add handling

2023-02-21 Thread Mads Ynddal
From: Mads Ynddal 

Define `SimpleException` to differentiate our exceptions from generic
exceptions (IOError, etc.). Adapted simpletrace to support this and
output to stderr.

Signed-off-by: Mads Ynddal 
---
 scripts/simpletrace.py | 23 +++
 1 file changed, 15 insertions(+), 8 deletions(-)

diff --git a/scripts/simpletrace.py b/scripts/simpletrace.py
index 61af68618f..35d2a757c4 100755
--- a/scripts/simpletrace.py
+++ b/scripts/simpletrace.py
@@ -25,12 +25,15 @@
 log_header_fmt = '=QQQ' # trace/simple.c::TraceLogHeader
 rec_header_fmt = '=QQII' # trace/simple.c::TraceRecord
 
+class SimpleException(Exception):
+pass
+
 def read_header(fobj, hfmt):
 '''Read a trace record header'''
 hlen = struct.calcsize(hfmt)
 hdr = fobj.read(hlen)
 if len(hdr) != hlen:
-raise ValueError('Error reading header. Wrong filetype provided?')
+raise SimpleException('Error reading header. Wrong filetype provided?')
 return struct.unpack(hfmt, hdr)
 
 def get_record(event_mapping, event_id_to_name, rechdr, fobj):
@@ -42,10 +45,10 @@ def get_record(event_mapping, event_id_to_name, rechdr, 
fobj):
 try:
 event = event_mapping[name]
 except KeyError as e:
-sys.stderr.write(f'{e} event is logged but is not declared ' \
- 'in the trace events file, try using ' \
- 'trace-events-all instead.\n')
-sys.exit(1)
+raise SimpleException(
+f'{e} event is logged but is not declared in the trace events '
+'file, try using trace-events-all instead.'
+)
 
 rec = (name, timestamp_ns, pid)
 for type, name in event.args:
@@ -201,8 +204,7 @@ def run(analyzer):
 *no_header, trace_event_path, trace_file_path = sys.argv[1:]
 assert no_header == [] or no_header == ['--no-header'], 'Invalid 
no-header argument'
 except (AssertionError, ValueError):
-sys.stderr.write(f'usage: {sys.argv[0]} [--no-header] <trace-events> <trace-file>\n')
-sys.exit(1)
+raise SimpleException(f'usage: {sys.argv[0]} [--no-header] <trace-events> <trace-file>\n')
 
 with open(trace_event_path, 'r') as events_fobj, open(trace_file_path, 
'rb') as log_fobj:
 events = read_events(events_fobj, trace_event_path)
@@ -225,4 +227,9 @@ def catchall(self, *rec_args, event, timestamp_ns, pid, 
event_id):
 ]
 print(f'{event.name} {delta_ns / 1000:0.3f} {pid=} ' + ' 
'.join(fields))
 
-run(Formatter)
+try:
+run(Formatter)
+except SimpleException as e:
+sys.stderr.write(str(e) + "\n")
+sys.exit(1)
+
-- 
2.38.1
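The pattern in this patch, a dedicated exception type plus one top-level handler, in a minimal sketch (`run` here is a stand-in for the script body, not the real simpletrace function):

```python
import sys

class SimpleException(Exception):
    """Marks errors the script raises itself, as opposed to generic IOError etc."""
    pass

def run(argv):
    # Stand-in for the script body: raise SimpleException on usage errors
    # instead of writing to stderr and exiting at the point of failure.
    if len(argv) != 3:
        raise SimpleException(f'usage: {argv[0]} [--no-header] <trace-events> <trace-file>')
    return argv[1], argv[2]

try:
    paths = run(['simpletrace.py', 'trace-events-all', 'trace-file'])
except SimpleException as e:
    # A single handler converts our own errors into a clean stderr message
    # and exit status; note str(e), since write() needs a string.
    sys.stderr.write(str(e) + '\n')
    sys.exit(1)

assert paths == ('trace-events-all', 'trace-file')
```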




[PATCH 3/9] simpletrace: changed naming of edict and idtoname to improve readability

2023-02-21 Thread Mads Ynddal
From: Mads Ynddal 

Readability is subjective, but I've expanded the naming of the variables
and arguments, to help with understanding for new eyes on the code.

Signed-off-by: Mads Ynddal 
---
 scripts/simpletrace.py | 34 +-
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/scripts/simpletrace.py b/scripts/simpletrace.py
index 7ba805443d..9981699630 100755
--- a/scripts/simpletrace.py
+++ b/scripts/simpletrace.py
@@ -33,17 +33,17 @@ def read_header(fobj, hfmt):
 return None
 return struct.unpack(hfmt, hdr)
 
-def get_record(edict, idtoname, rechdr, fobj):
+def get_record(event_mapping, event_id_to_name, rechdr, fobj):
 """Deserialize a trace record from a file into a tuple
(name, timestamp, pid, arg1, ..., arg6)."""
 if rechdr is None:
 return None
 if rechdr[0] != dropped_event_id:
 event_id = rechdr[0]
-name = idtoname[event_id]
+name = event_id_to_name[event_id]
 rec = (name, rechdr[1], rechdr[3])
 try:
-event = edict[name]
+event = event_mapping[name]
 except KeyError as e:
 sys.stderr.write('%s event is logged but is not declared ' \
  'in the trace events file, try using ' \
@@ -72,10 +72,10 @@ def get_mapping(fobj):
 
 return (event_id, name)
 
-def read_record(edict, idtoname, fobj):
+def read_record(event_mapping, event_id_to_name, fobj):
 """Deserialize a trace record from a file into a tuple (event_num, 
timestamp, pid, arg1, ..., arg6)."""
 rechdr = read_header(fobj, rec_header_fmt)
-return get_record(edict, idtoname, rechdr, fobj)
+return get_record(event_mapping, event_id_to_name, rechdr, fobj)
 
 def read_trace_header(fobj):
 """Read and verify trace file header"""
@@ -96,14 +96,14 @@ def read_trace_header(fobj):
 raise ValueError('Log format %d not supported with this QEMU release!'
  % log_version)
 
-def read_trace_records(edict, idtoname, fobj):
+def read_trace_records(event_mapping, event_id_to_name, fobj):
 """Deserialize trace records from a file, yielding record tuples 
(event_num, timestamp, pid, arg1, ..., arg6).
 
-Note that `idtoname` is modified if the file contains mapping records.
+Note that `event_id_to_name` is modified if the file contains mapping 
records.
 
 Args:
-edict (str -> Event): events dict, indexed by name
-idtoname (int -> str): event names dict, indexed by event ID
+event_mapping (str -> Event): events dict, indexed by name
+event_id_to_name (int -> str): event names dict, indexed by event ID
 fobj (file): input file
 
 """
@@ -115,9 +115,9 @@ def read_trace_records(edict, idtoname, fobj):
 (rectype, ) = struct.unpack('=Q', t)
 if rectype == record_type_mapping:
 event_id, name = get_mapping(fobj)
-idtoname[event_id] = name
+event_id_to_name[event_id] = name
 else:
-rec = read_record(edict, idtoname, fobj)
+rec = read_record(event_mapping, event_id_to_name, fobj)
 
 yield rec
 
@@ -172,16 +172,16 @@ def process(events, log, analyzer, read_header=True):
 frameinfo = inspect.getframeinfo(inspect.currentframe())
 dropped_event = Event.build("Dropped_Event(uint64_t num_events_dropped)",
 frameinfo.lineno + 1, frameinfo.filename)
-edict = {"dropped": dropped_event}
-idtoname = {dropped_event_id: "dropped"}
+event_mapping = {"dropped": dropped_event}
+event_id_to_name = {dropped_event_id: "dropped"}
 
 for event in events:
-edict[event.name] = event
+event_mapping[event.name] = event
 
 # If there is no header assume event ID mapping matches events list
 if not read_header:
 for event_id, event in enumerate(events):
-idtoname[event_id] = event.name
+event_id_to_name[event_id] = event.name
 
 def build_fn(analyzer, event):
 if isinstance(event, str):
@@ -205,9 +205,9 @@ def build_fn(analyzer, event):
 
 analyzer.begin()
 fn_cache = {}
-for rec in read_trace_records(edict, idtoname, log):
+for rec in read_trace_records(event_mapping, event_id_to_name, log):
 event_num = rec[0]
-event = edict[event_num]
+event = event_mapping[event_num]
 if event_num not in fn_cache:
 fn_cache[event_num] = build_fn(analyzer, event)
 fn_cache[event_num](event, rec)
-- 
2.38.1




[PATCH 4/9] simpletrace: update code for Python 3.11

2023-02-21 Thread Mads Ynddal
From: Mads Ynddal 

The call to `getargspec` was deprecated and has been removed in
Python 3.11 in favor of `getfullargspec`.

Signed-off-by: Mads Ynddal 
---
 scripts/simpletrace.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/scripts/simpletrace.py b/scripts/simpletrace.py
index 9981699630..7444a6e090 100755
--- a/scripts/simpletrace.py
+++ b/scripts/simpletrace.py
@@ -192,7 +192,7 @@ def build_fn(analyzer, event):
 return analyzer.catchall
 
 event_argcount = len(event.args)
-fn_argcount = len(inspect.getargspec(fn)[0]) - 1
+fn_argcount = len(inspect.getfullargspec(fn)[0]) - 1
 if fn_argcount == event_argcount + 1:
 # Include timestamp as first argument
 return lambda _, rec: fn(*(rec[1:2] + rec[3:3 + event_argcount]))
-- 
2.38.1
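For the positional-argument counting simpletrace does, the renamed API behaves identically. A small check (the `runstate_set` signature is the hypothetical example from the module's docstrings, not real QEMU code):

```python
import inspect

def runstate_set(self, timestamp, new_state):
    pass

# getargspec() was removed in Python 3.11; getfullargspec() still returns
# the positional argument names as its first field, so [0] is a drop-in.
args = inspect.getfullargspec(runstate_set)[0]
assert args == ['self', 'timestamp', 'new_state']

# simpletrace subtracts `self` to decide how to invoke the method.
fn_argcount = len(args) - 1
assert fn_argcount == 2
```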




[PATCH 6/9] simpletrace: Simplify construction of tracing methods

2023-02-21 Thread Mads Ynddal
From: Mads Ynddal 

By moving the dynamic argument construction to keyword arguments,
we can remove all of the specialized handling and streamline it.
If a tracing method wants to access these, it can define the
kwargs, or ignore them by placing `**kwargs` at the end of the
function's argument list.

Signed-off-by: Mads Ynddal 
---
 scripts/simpletrace.py | 84 --
 1 file changed, 32 insertions(+), 52 deletions(-)

diff --git a/scripts/simpletrace.py b/scripts/simpletrace.py
index 01bd47a130..df52b09dbc 100755
--- a/scripts/simpletrace.py
+++ b/scripts/simpletrace.py
@@ -131,16 +131,25 @@ class Analyzer:
 If a method matching a trace event name exists, it is invoked to process
 that trace record.  Otherwise the catchall() method is invoked.
 
+The methods are called with a set of keyword-arguments. These can be 
ignored
+using `**kwargs` or defined like any keyword-argument.
+
+The following keyword-arguments are available:
+event: Event object of current trace
+event_id: The id of the event in the current trace file
+timestamp_ns: The timestamp in nanoseconds of the trace
+pid: The process id recorded for the given trace
+
 Example:
 The following method handles the runstate_set(int new_state) trace event::
 
-  def runstate_set(self, new_state):
+  def runstate_set(self, new_state, **kwargs):
   ...
 
-The method can also take a timestamp argument before the trace event
-arguments::
+The method can also explicitly take a timestamp keyword-argument with the
+trace event arguments::
 
-  def runstate_set(self, timestamp, new_state):
+  def runstate_set(self, new_state, *, timestamp, **kwargs):
   ...
 
 Timestamps have the uint64_t type and are in nanoseconds.
@@ -148,7 +157,7 @@ def runstate_set(self, timestamp, new_state):
 The pid can be included in addition to the timestamp and is useful when
 dealing with traces from multiple processes::
 
-  def runstate_set(self, timestamp, pid, new_state):
+  def runstate_set(self, new_state, *, timestamp, pid, **kwargs):
   ...
 """
 
@@ -156,7 +165,7 @@ def __enter__(self):
 """Called at the start of the trace."""
 return self
 
-def catchall(self, event, rec):
+def catchall(self, *rec_args, event, timestamp_ns, pid, event_id):
 """Called if no specific method for processing a trace event has been 
found."""
 pass
 
@@ -183,34 +192,11 @@ def process(events, log, analyzer_class, 
read_header=True):
 for event_id, event in enumerate(events):
 event_id_to_name[event_id] = event.name
 
-def build_fn(analyzer, event):
-if isinstance(event, str):
-return analyzer.catchall
-
-fn = getattr(analyzer, event.name, None)
-if fn is None:
-return analyzer.catchall
-
-event_argcount = len(event.args)
-fn_argcount = len(inspect.getfullargspec(fn)[0]) - 1
-if fn_argcount == event_argcount + 1:
-# Include timestamp as first argument
-return lambda _, rec: fn(*(rec[1:2] + rec[3:3 + event_argcount]))
-elif fn_argcount == event_argcount + 2:
-# Include timestamp and pid
-return lambda _, rec: fn(*rec[1:3 + event_argcount])
-else:
-# Just arguments, no timestamp or pid
-return lambda _, rec: fn(*rec[3:3 + event_argcount])
-
 with analyzer_class() as analyzer:
-fn_cache = {}
-for rec in read_trace_records(event_mapping, event_id_to_name, log):
-event_num = rec[0]
-event = event_mapping[event_num]
-if event_num not in fn_cache:
-fn_cache[event_num] = build_fn(analyzer, event)
-fn_cache[event_num](event, rec)
+for event_id, timestamp_ns, record_pid, *rec_args in 
read_trace_records(event_mapping, event_id_to_name, log):
+event = event_mapping[event_id]
+fn = getattr(analyzer, event.name, analyzer.catchall)
+fn(*rec_args, event=event, event_id=event_id, 
timestamp_ns=timestamp_ns, pid=record_pid)
 
 
 def run(analyzer):
@@ -234,24 +220,18 @@ def run(analyzer):
 if __name__ == '__main__':
 class Formatter(Analyzer):
 def __init__(self):
-self.last_timestamp = None
-
-def catchall(self, event, rec):
-timestamp = rec[1]
-if self.last_timestamp is None:
-self.last_timestamp = timestamp
-delta_ns = timestamp - self.last_timestamp
-self.last_timestamp = timestamp
-
-fields = [event.name, '%0.3f' % (delta_ns / 1000.0),
-  'pid=%d' % rec[2]]
-i = 3
-for type, name in event.args:
-if is_string(type):
-fields.append('%s=%s' % (name, rec[i]))
-else:
-fields

[PATCH 1/9] simpletrace: Improve parsing of sys.argv; fix files never closed.

2023-02-21 Thread Mads Ynddal
From: Mads Ynddal 

The arguments extracted from `sys.argv` are named and unpacked to make it
clear what the arguments are and what they're used for.

The two input files were opened, but never explicitly closed. File usage is
changed to use a `with` statement to take care of this. At the same time,
ownership of the file object is moved up to the `run` function. The secondary
`open` inside `process` is removed so there's only one place to handle `open`.

Signed-off-by: Mads Ynddal 
---
 scripts/simpletrace.py | 28 +++-
 1 file changed, 11 insertions(+), 17 deletions(-)

diff --git a/scripts/simpletrace.py b/scripts/simpletrace.py
index 1f6d1ae1f3..9211caaec1 100755
--- a/scripts/simpletrace.py
+++ b/scripts/simpletrace.py
@@ -9,6 +9,7 @@
 #
 # For help see docs/devel/tracing.rst
 
+import sys
 import struct
 import inspect
 from tracetool import read_events, Event
@@ -44,7 +45,6 @@ def get_record(edict, idtoname, rechdr, fobj):
 try:
 event = edict[name]
 except KeyError as e:
-import sys
 sys.stderr.write('%s event is logged but is not declared ' \
  'in the trace events file, try using ' \
  'trace-events-all instead.\n' % str(e))
@@ -166,11 +166,6 @@ def end(self):
 
 def process(events, log, analyzer, read_header=True):
 """Invoke an analyzer on each event in a log."""
-if isinstance(events, str):
-events = read_events(open(events, 'r'), events)
-if isinstance(log, str):
-log = open(log, 'rb')
-
 if read_header:
 read_trace_header(log)
 
@@ -223,19 +218,18 @@ def run(analyzer):
 
 This function is useful as a driver for simple analysis scripts.  More
 advanced scripts will want to call process() instead."""
-import sys
-
-read_header = True
-if len(sys.argv) == 4 and sys.argv[1] == '--no-header':
-read_header = False
-del sys.argv[1]
-elif len(sys.argv) != 3:
-sys.stderr.write('usage: %s [--no-header]  ' \
- '\n' % sys.argv[0])
+
+try:
+# NOTE: See built-in `argparse` module for a more robust cli interface
+*no_header, trace_event_path, trace_file_path = sys.argv[1:]
+assert no_header == [] or no_header == ['--no-header'], 'Invalid 
no-header argument'
+except (AssertionError, ValueError):
+sys.stderr.write(f'usage: {sys.argv[0]} [--no-header]  
\n')
 sys.exit(1)
 
-events = read_events(open(sys.argv[1], 'r'), sys.argv[1])
-process(events, sys.argv[2], analyzer, read_header=read_header)
+with open(trace_event_path, 'r') as events_fobj, open(trace_file_path, 
'rb') as log_fobj:
+events = read_events(events_fobj, trace_event_path)
+process(events, log_fobj, analyzer, read_header=not no_header)
 
 if __name__ == '__main__':
 class Formatter(Analyzer):
-- 
2.38.1
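The starred-unpacking idiom this patch uses to parse the optional flag can be sketched in isolation. The names below are illustrative only (they mirror the patch, but this is not QEMU code):

```python
def parse_args(argv):
    # *no_header captures zero or one leading '--no-header' flag; the last
    # two positional arguments are the trace-events and trace-file paths.
    try:
        *no_header, trace_event_path, trace_file_path = argv[1:]
        assert no_header in ([], ['--no-header'])
    except (AssertionError, ValueError):
        raise SystemExit(f'usage: {argv[0]} [--no-header] <trace-events> <trace-file>')
    return bool(no_header), trace_event_path, trace_file_path

print(parse_args(['prog', '--no-header', 'trace-events-all', 'trace.bin']))
# (True, 'trace-events-all', 'trace.bin')
```

Too few arguments raise `ValueError` from the unpacking itself, so one `except` clause covers both the flag check and the arity check.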




[PATCH 9/9] simpletrace: Refactor to separate responsibilities

2023-02-21 Thread Mads Ynddal
From: Mads Ynddal 

Moved event_mapping and event_id_to_name down one level in the function
call-stack to keep variable instantiation and usage closer (`process`
and `run` have no use for the variables; `read_trace_records` does).

Instead of passing event_mapping and event_id_to_name to the bottom of
the call-stack, we move their use to `read_trace_records`. This
separates responsibility and ownership of the information.

`read_record` now just reads the arguments from the file object by
knowing the total number of bytes. Parsing the payload into specific
arguments is moved up to `read_trace_records`.

Special handling of dropped events removed, as they can be handled
by the general code.

Signed-off-by: Mads Ynddal 
---
 scripts/simpletrace.py | 121 +++--
 1 file changed, 55 insertions(+), 66 deletions(-)

diff --git a/scripts/simpletrace.py b/scripts/simpletrace.py
index 35d2a757c4..465fcac609 100755
--- a/scripts/simpletrace.py
+++ b/scripts/simpletrace.py
@@ -24,6 +24,7 @@
 
 log_header_fmt = '=QQQ' # trace/simple.c::TraceLogHeader
 rec_header_fmt = '=QQII' # trace/simple.c::TraceRecord
+rec_header_fmt_len = struct.calcsize(rec_header_fmt)
 
 class SimpleException(Exception):
 pass
@@ -36,35 +37,6 @@ def read_header(fobj, hfmt):
 raise SimpleException('Error reading header. Wrong filetype provided?')
 return struct.unpack(hfmt, hdr)
 
-def get_record(event_mapping, event_id_to_name, rechdr, fobj):
-"""Deserialize a trace record from a file into a tuple
-   (name, timestamp, pid, arg1, ..., arg6)."""
-event_id, timestamp_ns, length, pid = rechdr
-if event_id != dropped_event_id:
-name = event_id_to_name[event_id]
-try:
-event = event_mapping[name]
-except KeyError as e:
-raise SimpleException(
-f'{e} event is logged but is not declared in the trace events'
-'file, try using trace-events-all instead.'
-)
-
-rec = (name, timestamp_ns, pid)
-for type, name in event.args:
-if is_string(type):
-l = fobj.read(4)
-(len,) = struct.unpack('=L', l)
-s = fobj.read(len)
-rec = rec + (s,)
-else:
-(value,) = struct.unpack('=Q', fobj.read(8))
-rec = rec + (value,)
-else:
-(dropped_count,) = struct.unpack('=Q', fobj.read(8))
-rec = ("dropped", timestamp_ns, pid, dropped_count)
-return rec
-
 def get_mapping(fobj):
 (event_id, ) = struct.unpack('=Q', fobj.read(8))
 (len, ) = struct.unpack('=L', fobj.read(4))
@@ -72,10 +44,11 @@ def get_mapping(fobj):
 
 return (event_id, name)
 
-def read_record(event_mapping, event_id_to_name, fobj):
-"""Deserialize a trace record from a file into a tuple (event_num, 
timestamp, pid, arg1, ..., arg6)."""
-rechdr = read_header(fobj, rec_header_fmt)
-return get_record(event_mapping, event_id_to_name, rechdr, fobj)
+def read_record(fobj):
+"""Deserialize a trace record from a file into a tuple (event_num, 
timestamp, pid, args)."""
+event_id, timestamp_ns, record_length, record_pid = read_header(fobj, 
rec_header_fmt)
+args_payload = fobj.read(record_length - rec_header_fmt_len)
+return (event_id, timestamp_ns, record_pid, args_payload)
 
 def read_trace_header(fobj):
 """Read and verify trace file header"""
@@ -90,30 +63,60 @@ def read_trace_header(fobj):
 if log_version != 4:
 raise ValueError(f'Log format {log_version} not supported with this 
QEMU release!')
 
-def read_trace_records(event_mapping, event_id_to_name, fobj):
-"""Deserialize trace records from a file, yielding record tuples 
(event_num, timestamp, pid, arg1, ..., arg6).
-
-Note that `event_id_to_name` is modified if the file contains mapping 
records.
+def read_trace_records(events, fobj, read_header):
+"""Deserialize trace records from a file, yielding record tuples (event, 
event_num, timestamp, pid, arg1, ..., arg6).
 
 Args:
 event_mapping (str -> Event): events dict, indexed by name
-event_id_to_name (int -> str): event names dict, indexed by event ID
 fobj (file): input file
+read_header (bool): whether headers were read from fobj
 
 """
-while True:
-t = fobj.read(8)
-if len(t) == 0:
-break
+frameinfo = inspect.getframeinfo(inspect.currentframe())
+dropped_event = Event.build("Dropped_Event(uint64_t num_events_dropped)",
+frameinfo.lineno + 1, frameinfo.filename)
+
+event_mapping = {e.name: e for e in events}
+event_mapping["dropped"] = dropped_event
+event_id_to_name = {dropped_event_id: "dropped"}
+
+# If there is no header assume event ID mapping matches events list
+if not read_header:
+for event_id, event in enumerate(events):
+event_id_to_name[event_id] = event.name
 
+while t := fo

[PATCH 0/9] simpletrace: refactor and general improvements

2023-02-21 Thread Mads Ynddal
From: Mads Ynddal 

I wanted to use simpletrace.py for an internal project, so I tried to update
and polish the code. Some of the commits resolve specific issues, while some
are more subjective.

I've tried to divide it into commits so we can discuss the
individual changes, and I'm ready to pull things out, if it isn't needed.

Mads Ynddal (9):
  simpletrace: Improve parsing of sys.argv; fix files never closed.
  simpletrace: Annotate magic constants from QEMU code
  simpletrace: changed naming of edict and idtoname to improve
readability
  simpletrace: update code for Python 3.11
  simpletrace: Changed Analyzer class to become context-manager
  simpletrace: Simplify construction of tracing methods
  simpletrace: Improved error handling on struct unpack
  simpletrace: define exception and add handling
  simpletrace: Refactor to separate responsibilities

 scripts/simpletrace.py | 293 ++---
 1 file changed, 127 insertions(+), 166 deletions(-)

-- 
2.38.1




[PATCH 7/9] simpletrace: Improved error handling on struct unpack

2023-02-21 Thread Mads Ynddal
From: Mads Ynddal 

A failed call to `read_header` wouldn't be handled the same way by the two
different code paths (one path would try to use `None` as a list).
Changed to raise an exception to be handled centrally. This also allows for
easier unpacking, as errors have been filtered out.

Signed-off-by: Mads Ynddal 
---
 scripts/simpletrace.py | 41 -
 1 file changed, 16 insertions(+), 25 deletions(-)

diff --git a/scripts/simpletrace.py b/scripts/simpletrace.py
index df52b09dbc..61af68618f 100755
--- a/scripts/simpletrace.py
+++ b/scripts/simpletrace.py
@@ -30,26 +30,24 @@ def read_header(fobj, hfmt):
 hlen = struct.calcsize(hfmt)
 hdr = fobj.read(hlen)
 if len(hdr) != hlen:
-return None
+raise ValueError('Error reading header. Wrong filetype provided?')
 return struct.unpack(hfmt, hdr)
 
 def get_record(event_mapping, event_id_to_name, rechdr, fobj):
 """Deserialize a trace record from a file into a tuple
(name, timestamp, pid, arg1, ..., arg6)."""
-if rechdr is None:
-return None
-if rechdr[0] != dropped_event_id:
-event_id = rechdr[0]
+event_id, timestamp_ns, length, pid = rechdr
+if event_id != dropped_event_id:
 name = event_id_to_name[event_id]
-rec = (name, rechdr[1], rechdr[3])
 try:
 event = event_mapping[name]
 except KeyError as e:
-sys.stderr.write('%s event is logged but is not declared ' \
+sys.stderr.write(f'{e} event is logged but is not declared ' \
  'in the trace events file, try using ' \
- 'trace-events-all instead.\n' % str(e))
+ 'trace-events-all instead.\n')
 sys.exit(1)
 
+rec = (name, timestamp_ns, pid)
 for type, name in event.args:
 if is_string(type):
 l = fobj.read(4)
@@ -60,9 +58,8 @@ def get_record(event_mapping, event_id_to_name, rechdr, fobj):
 (value,) = struct.unpack('=Q', fobj.read(8))
 rec = rec + (value,)
 else:
-rec = ("dropped", rechdr[1], rechdr[3])
-(value,) = struct.unpack('=Q', fobj.read(8))
-rec = rec + (value,)
+(dropped_count,) = struct.unpack('=Q', fobj.read(8))
+rec = ("dropped", timestamp_ns, pid, dropped_count)
 return rec
 
 def get_mapping(fobj):
@@ -79,22 +76,16 @@ def read_record(event_mapping, event_id_to_name, fobj):
 
 def read_trace_header(fobj):
 """Read and verify trace file header"""
-header = read_header(fobj, log_header_fmt)
-if header is None:
-raise ValueError('Not a valid trace file!')
-if header[0] != header_event_id:
-raise ValueError('Not a valid trace file, header id %d != %d' %
- (header[0], header_event_id))
-if header[1] != header_magic:
-raise ValueError('Not a valid trace file, header magic %d != %d' %
- (header[1], header_magic))
-
-log_version = header[2]
+_header_event_id, _header_magic, log_version = read_header(fobj, 
log_header_fmt)
+if _header_event_id != header_event_id:
+raise ValueError(f'Not a valid trace file, header id 
{_header_event_id} != {header_event_id}')
+if _header_magic != header_magic:
+raise ValueError(f'Not a valid trace file, header magic 
{_header_magic} != {header_magic}')
+
 if log_version not in [0, 2, 3, 4]:
-raise ValueError('Unknown version of tracelog format!')
+raise ValueError(f'Unknown version {log_version} of tracelog format!')
 if log_version != 4:
-raise ValueError('Log format %d not supported with this QEMU release!'
- % log_version)
+raise ValueError(f'Log format {log_version} not supported with this 
QEMU release!')
 
 def read_trace_records(event_mapping, event_id_to_name, fobj):
 """Deserialize trace records from a file, yielding record tuples 
(event_num, timestamp, pid, arg1, ..., arg6).
-- 
2.38.1
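The raise-instead-of-return-None behaviour of `read_header` can be exercised on its own with an in-memory file. This is a sketch following the patch above, not the full simpletrace module:

```python
import io
import struct

def read_header(fobj, hfmt):
    # Read exactly struct.calcsize(hfmt) bytes and unpack them; a short
    # read raises instead of returning None, so callers can unpack the
    # result unconditionally.
    hlen = struct.calcsize(hfmt)
    hdr = fobj.read(hlen)
    if len(hdr) != hlen:
        raise ValueError('Error reading header. Wrong filetype provided?')
    return struct.unpack(hfmt, hdr)

rec_header_fmt = '=QQII'  # event_id, timestamp_ns, length, pid
payload = struct.pack(rec_header_fmt, 1, 123456, 24, 4242)
print(read_header(io.BytesIO(payload), rec_header_fmt))
# (1, 123456, 24, 4242)
```

A truncated input (fewer than `calcsize` bytes) now fails loudly at the read site rather than as a `TypeError` somewhere downstream.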




Re: [PATCH 1/2] qga/win32: Remove change action from MSI installer

2023-02-21 Thread Mauro Matteo Cascella
Hi Philippe,

On Tue, Feb 21, 2023 at 9:15 AM Philippe Mathieu-Daudé
 wrote:
>
> On 20/2/23 18:41, Konstantin Kostiuk wrote:
> > resolves: rhbz#2167436
>
> "You are not authorized to access bug #2167436."

Please refer to https://bugzilla.redhat.com/show_bug.cgi?id=2167423.
It should now be accessible.

> > fixes: CVE-2023-0664
>
> This commit description is rather scarce...
>
> I understand you are trying to fix a CVE, but we shouldn't play
> the "security by obscurity" card. How can the community and
> distributions know this security fix is enough with the bare
> "Remove change action from MSI installer" justification?
> Can't we do better?

CCing Brian Wiltse, who originally found and reported this issue.

Reported-by: Brian Wiltse 

> > Signed-off-by: Konstantin Kostiuk 
> > ---
> >   qga/installer/qemu-ga.wxs | 1 +
> >   1 file changed, 1 insertion(+)
> >
> > diff --git a/qga/installer/qemu-ga.wxs b/qga/installer/qemu-ga.wxs
> > index 51340f7ecc..feb629ec47 100644
> > --- a/qga/installer/qemu-ga.wxs
> > +++ b/qga/installer/qemu-ga.wxs
> > @@ -31,6 +31,7 @@
> > />
> >> EmbedCab="yes" />
> >   1
> > +
> >> DowngradeErrorMessage="Error: A newer version of QEMU guest agent 
> > is already installed."
> > />
> > --
> > 2.25.1
> >
> >
>

-- 
Mauro Matteo Cascella
Red Hat Product Security
PGP-Key ID: BB3410B0




Re: [PATCH 3/9] simpletrace: changed naming of edict and idtoname to improve readability

2023-02-21 Thread Philippe Mathieu-Daudé

On 21/2/23 10:00, Mads Ynddal wrote:

From: Mads Ynddal 

Readability is subjective, but I've expanded the naming of the variables
and arguments, to help with understanding for new eyes on the code.

Signed-off-by: Mads Ynddal 
---
  scripts/simpletrace.py | 34 +-
  1 file changed, 17 insertions(+), 17 deletions(-)


Reviewed-by: Philippe Mathieu-Daudé 




[PATCH] target/riscv: Add support for Zicond extension

2023-02-21 Thread Weiwei Li
The spec can be found in https://github.com/riscv/riscv-zicond.
Two instructions are added:
 - czero.eqz: Moves zero to a register rd, if the condition rs2 is
   equal to zero, otherwise moves rs1 to rd.
 - czero.nez: Moves zero to a register rd, if the condition rs2 is
   nonzero, otherwise moves rs1 to rd.
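A minimal Python model of the two instructions' semantics, matching the description above (illustrative only; the actual QEMU implementation is the `tcg_gen_movcond_tl` calls in the patch's trans functions):

```python
MASK64 = (1 << 64) - 1  # model 64-bit registers

def czero_eqz(rs1, rs2):
    # rd = 0 if rs2 == 0, else rs1
    return 0 if (rs2 & MASK64) == 0 else rs1 & MASK64

def czero_nez(rs1, rs2):
    # rd = 0 if rs2 != 0, else rs1
    return 0 if (rs2 & MASK64) != 0 else rs1 & MASK64

print(czero_eqz(0x1234, 0), czero_eqz(0x1234, 1))
# 0 4660
```

Together the pair lets a compiler build branchless conditional selects: one instruction zeroes on a false condition, the other on a true one.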

Signed-off-by: Weiwei Li 
Signed-off-by: Junqiang Wang 
---
 target/riscv/cpu.c   |  2 +
 target/riscv/cpu.h   |  1 +
 target/riscv/insn32.decode   |  4 ++
 target/riscv/insn_trans/trans_rvzicond.c.inc | 49 
 target/riscv/translate.c |  1 +
 5 files changed, 57 insertions(+)
 create mode 100644 target/riscv/insn_trans/trans_rvzicond.c.inc

diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index 0dd2f0c753..80b92930ae 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -74,6 +74,7 @@ struct isa_ext_data {
 static const struct isa_ext_data isa_edata_arr[] = {
 ISA_EXT_DATA_ENTRY(h, false, PRIV_VERSION_1_12_0, ext_h),
 ISA_EXT_DATA_ENTRY(v, false, PRIV_VERSION_1_12_0, ext_v),
+ISA_EXT_DATA_ENTRY(zicond, true, PRIV_VERSION_1_12_0, ext_zicond),
 ISA_EXT_DATA_ENTRY(zicsr, true, PRIV_VERSION_1_10_0, ext_icsr),
 ISA_EXT_DATA_ENTRY(zifencei, true, PRIV_VERSION_1_10_0, ext_ifencei),
 ISA_EXT_DATA_ENTRY(zihintpause, true, PRIV_VERSION_1_10_0, 
ext_zihintpause),
@@ -1143,6 +1144,7 @@ static Property riscv_cpu_extensions[] = {
 DEFINE_PROP_BOOL("xventanacondops", RISCVCPU, cfg.ext_XVentanaCondOps, 
false),
 
 /* These are experimental so mark with 'x-' */
+DEFINE_PROP_BOOL("x-zicond", RISCVCPU, cfg.ext_zicond, false),
 DEFINE_PROP_BOOL("x-j", RISCVCPU, cfg.ext_j, false),
 /* ePMP 0.9.3 */
 DEFINE_PROP_BOOL("x-epmp", RISCVCPU, cfg.epmp, false),
diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
index 7128438d8e..81b7c92e7a 100644
--- a/target/riscv/cpu.h
+++ b/target/riscv/cpu.h
@@ -447,6 +447,7 @@ struct RISCVCPUConfig {
 bool ext_zkt;
 bool ext_ifencei;
 bool ext_icsr;
+bool ext_zicond;
 bool ext_zihintpause;
 bool ext_smstateen;
 bool ext_sstc;
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
index b7e7613ea2..fb537e922e 100644
--- a/target/riscv/insn32.decode
+++ b/target/riscv/insn32.decode
@@ -890,3 +890,7 @@ sm3p1   00 01000 01001 . 001 . 0010011 @r2
 # *** RV32 Zksed Standard Extension ***
 sm4ed   .. 11000 . . 000 . 0110011 @k_aes
 sm4ks   .. 11010 . . 000 . 0110011 @k_aes
+
+# *** RV32 Zicond Standard Extension ***
+czero_eqz   111  . . 101 . 0110011 @r
+czero_nez   111  . . 111 . 0110011 @r
diff --git a/target/riscv/insn_trans/trans_rvzicond.c.inc 
b/target/riscv/insn_trans/trans_rvzicond.c.inc
new file mode 100644
index 00..645260164e
--- /dev/null
+++ b/target/riscv/insn_trans/trans_rvzicond.c.inc
@@ -0,0 +1,49 @@
+/*
+ * RISC-V translation routines for the Zicond Standard Extension.
+ *
+ * Copyright (c) 2020-2023 PLCT Lab
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2 or later, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program.  If not, see .
+ */
+
+#define REQUIRE_ZICOND(ctx) do {  \
+if (!ctx->cfg_ptr->ext_zicond) {  \
+return false; \
+} \
+} while (0)
+
+static bool trans_czero_eqz(DisasContext *ctx, arg_czero_eqz *a)
+{
+REQUIRE_ZICOND(ctx);
+
+TCGv dest = dest_gpr(ctx, a->rd);
+TCGv src1 = get_gpr(ctx, a->rs1, EXT_NONE);
+TCGv src2 = get_gpr(ctx, a->rs2, EXT_NONE);
+
+tcg_gen_movcond_tl(TCG_COND_EQ, dest, src2, ctx->zero, ctx->zero, src1);
+gen_set_gpr(ctx, a->rd, dest);
+return true;
+}
+
+static bool trans_czero_nez(DisasContext *ctx, arg_czero_nez *a)
+{
+REQUIRE_ZICOND(ctx);
+
+TCGv dest = dest_gpr(ctx, a->rd);
+TCGv src1 = get_gpr(ctx, a->rs1, EXT_NONE);
+TCGv src2 = get_gpr(ctx, a->rs2, EXT_NONE);
+
+tcg_gen_movcond_tl(TCG_COND_NE, dest, src2, ctx->zero, ctx->zero, src1);
+gen_set_gpr(ctx, a->rd, dest);
+return true;
+}
diff --git a/target/riscv/translate.c b/target/riscv/translate.c
index 772f9d7973..6e65c6afca 100644
--- a/target/riscv/translate.c
+++ b/target/riscv/translate.c
@@ -1103,6 +1103,7 @@ static uint32_t opcode_at(DisasContextBase *dcbase, 
target_ulong pc)
 #include "insn_trans/trans_rvh.c.inc"
 #include "insn_trans/trans_rvv.c.inc"
 #include "insn_trans/trans_

Re: [PATCH 8/9] simpletrace: define exception and add handling

2023-02-21 Thread Philippe Mathieu-Daudé

On 21/2/23 10:01, Mads Ynddal wrote:

From: Mads Ynddal 

Define `SimpleException` to differentiate our exceptions from generic
exceptions (IOError, etc.). Adapted simpletrace to support this and
output to stderr.

Signed-off-by: Mads Ynddal 
---
  scripts/simpletrace.py | 23 +++
  1 file changed, 15 insertions(+), 8 deletions(-)


Reviewed-by: Philippe Mathieu-Daudé 




Re: [PATCH 3/4] win32: stop mixing SOCKET and file descriptor space

2023-02-21 Thread Marc-André Lureau
Hi

On Tue, Feb 21, 2023 at 12:18 PM Paolo Bonzini  wrote:

> On 2/20/23 16:29, Marc-André Lureau wrote:
> >> 7. A Windows SOCKET is also a HANDLE.  Maybe.  I guess.  Docs are
> >> confusing.
> >>
> > Kind of, but not really. I think a HANDLE is a kind of void*. You need to
> > be careful using it appropriately with the right functions. Sometime, a
> > HANDLE can work with generic functions, like ReadFile, but you should not
> > use a CloseHandle on SOCKET, or registry key..
>
> A Windows SOCKET *is* a file HANDLE except it's always in overlapped
> mode so Windows provides send()/recv() in case you don't want to deal
> with overlapped mode.  But you can use it with ReadFile too (Windows API
> documentation says that is only true sometimes, but Winsock API is 30
> years old and right now you pretty much always can).
>
> However, sockets also has some extra information on the side, so you
> need to close them with closesocket() and CloseHandle() is not enough.
>

Yeah, the question is "is it safe to call CloseHandle() on a SOCKET, before
closesocket()". Testing/error checking seems to say it's okay.. I wouldn't
be surprised if internally the CloseHandle() function does something to
check if the given handle is a SOCKET and skip it. I wish they would
document it..


> The problem is that close() of something opened with _open_osfhandle()
> *does* do that CloseHandle(), so basically you are closing the handle
> twice.  IIRC there used to be undocumented functions _alloc_osfhnd() and
> similar, but they don't exist anymore (even Wine does not have them), so
> we're stuck; unfortunately this is the reason why QEMU is not already
> doing something like what you have in this patch.
>
> Is this a real bug or is it theoretical?  Do file descriptor and socket
> spaces overlap in practice?
>
>
Yes, it likely can: the first SOCKET value starts at 92 in a simple test. It
looks like it may depend on the number of sockets the system has opened.

I think the second big issue is that we have many places where we assume an
fd is a regular fd, and we simply call close() (which would result in
CloseHandle() but miss closesocket()).

sigh, if the CRT would allow us to steal the handle back..
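As an aside, the double-close hazard being discussed has a POSIX analogue that is easy to demonstrate. This sketch does not model the Windows SOCKET/HANDLE split itself, only the generic problem of two owners both believing they must close the same descriptor:

```python
import os

# Create a pipe, then close the read end twice, as would happen when a
# wrapper object and the underlying layer each try to release it.
r, w = os.pipe()
os.close(r)
try:
    os.close(r)          # second close of the same descriptor
    print('second close succeeded (unexpected)')
except OSError as e:
    # On POSIX this raises EBADF; worse, if another thread reused the
    # descriptor number in between, this would close an unrelated file.
    print('second close failed with errno', e.errno)
os.close(w)
```

The descriptor-reuse case in the comment is the dangerous one: a double close is not merely an error, it can silently hit a newly opened file or socket.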


Re: [PATCH] hw/arm/virt: Prevent CPUs in one socket to span multiple NUMA nodes

2023-02-21 Thread Philippe Mathieu-Daudé

On 21/2/23 09:53, Gavin Shan wrote:

The Linux guest kernel reports a warning when two CPUs in one socket have
been associated with different NUMA nodes, using the following command
lines.

   -smp 6,maxcpus=6,sockets=2,clusters=1,cores=3,threads=1 \
   -numa node,nodeid=0,cpus=0-1,memdev=ram0\
   -numa node,nodeid=1,cpus=2-3,memdev=ram1\
   -numa node,nodeid=2,cpus=4-5,memdev=ram2\

   [ cut here ]
   WARNING: CPU: 0 PID: 1 at kernel/sched/topology.c:2271 
build_sched_domains+0x284/0x910
   Modules linked in:
   CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.14.0-268.el9.aarch64 #1
   pstate: 0045 (nzcv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
   pc : build_sched_domains+0x284/0x910
   lr : build_sched_domains+0x184/0x910
   sp : 8804bd50
   x29: 8804bd50 x28: 0002 x27: 
   x26: 89cf9a80 x25:  x24: 89cbf840
   x23: 80325000 x22: 005df800 x21: 8a4ce508
   x20:  x19: 80324440 x18: 0014
   x17: 388925c0 x16: 5386a066 x15: 9c10cc2e
   x14: 01c0 x13: 0001 x12: 7fffb1a0
   x11: 7fffb180 x10: 8a4ce508 x9 : 0041
   x8 : 8a4ce500 x7 : 8a4cf920 x6 : 0001
   x5 : 0001 x4 : 0007 x3 : 0002
   x2 : 1000 x1 : 8a4cf928 x0 : 0001
   Call trace:
build_sched_domains+0x284/0x910
sched_init_domains+0xac/0xe0
sched_init_smp+0x48/0xc8
kernel_init_freeable+0x140/0x1ac
kernel_init+0x28/0x140
ret_from_fork+0x10/0x20

Fix it by preventing multiple CPUs in one socket from being associated
with different NUMA nodes.

Reported-by: Yihuang Yu 
Signed-off-by: Gavin Shan 
---
  hw/arm/virt.c | 37 +
  1 file changed, 37 insertions(+)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index ac626b3bef..e0af267c77 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -230,6 +230,39 @@ static bool cpu_type_valid(const char *cpu)
  return false;
  }
  
+static bool numa_state_valid(MachineState *ms)
+{
+MachineClass *mc = MACHINE_GET_CLASS(ms);
+NumaState *state = ms->numa_state;
+const CPUArchIdList *possible_cpus = mc->possible_cpu_arch_ids(ms);
+const CPUArchId *cpus = possible_cpus->cpus;
+int len = possible_cpus->len, i, j;
+
+if (!state || state->num_nodes <= 1 || len <= 1) {
+return true;
+}
+
+for (i = 0; i < len; i++) {
+for (j = i + 1; j < len; j++) {
+if (cpus[i].props.has_socket_id &&
+cpus[i].props.has_node_id &&
+cpus[j].props.has_socket_id &&
+cpus[j].props.has_node_id &&
+cpus[i].props.socket_id == cpus[j].props.socket_id &&
+cpus[i].props.node_id != cpus[j].props.node_id) {
+error_report("CPU-%d and CPU-%d in socket-%ld have been "
+ "associated with node-%ld and node-%ld",
+ i, j, cpus[i].props.socket_id,
+ cpus[i].props.node_id,
+ cpus[j].props.node_id);
+return false;
+}
+}
+}
+
+return true;
+}
+
  static void create_randomness(MachineState *ms, const char *node)
  {
  struct {
@@ -2040,6 +2073,10 @@ static void machvirt_init(MachineState *machine)
  exit(1);
  }
  
+if (!numa_state_valid(machine)) {
+exit(1);
+}


Why restrict to the virt machine?
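The validity rule the patch enforces (each socket maps to exactly one NUMA node) can be sketched without the pairwise O(n^2) scan by grouping per socket. Names and data layout here are illustrative, not QEMU API:

```python
def numa_state_valid(cpus):
    """cpus: iterable of (socket_id, node_id); valid iff every socket
    is associated with a single node."""
    socket_to_node = {}
    for socket_id, node_id in cpus:
        first = socket_to_node.setdefault(socket_id, node_id)
        if first != node_id:
            return False
    return True

# The failing layout from the report: sockets=2, cores=3, with CPUs 0-1
# on node 0, CPUs 2-3 on node 1, CPUs 4-5 on node 2.
layout = [(0, 0), (0, 0), (0, 1), (1, 1), (1, 2), (1, 2)]
print(numa_state_valid(layout))
# False (socket 0 spans nodes 0 and 1)
```

In the reported command line, socket 0 holds CPUs 0-2 but nodes 0 and 1 split them at CPU 2, which is exactly what trips the guest scheduler's topology check.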



Re: [PATCH v2 11/14] target/arm: Implement gdbstub pauth extension

2023-02-21 Thread Luis Machado

Hi,

On 2/21/23 02:19, Richard Henderson wrote:

The extension is primarily defined by the Linux kernel NT_ARM_PAC_MASK
ptrace register set.

The original gdb feature consists of two masks, data and code, which are
used to mask out the authentication code within a pointer.  Following
discussion with Luis Machado, add two more masks in order to support
pointers within the high half of the address space (i.e. TTBR1 vs TTBR0).

Cc: Luis Machado 
Cc: Thiago Jung Bauermann 
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1105
Signed-off-by: Richard Henderson 
---
  configs/targets/aarch64-linux-user.mak|  2 +-
  configs/targets/aarch64-softmmu.mak   |  2 +-
  configs/targets/aarch64_be-linux-user.mak |  2 +-
  target/arm/internals.h|  2 ++
  target/arm/gdbstub.c  |  5 
  target/arm/gdbstub64.c| 34 +++
  gdb-xml/aarch64-pauth.xml | 15 ++
  7 files changed, 59 insertions(+), 3 deletions(-)
  create mode 100644 gdb-xml/aarch64-pauth.xml

diff --git a/configs/targets/aarch64-linux-user.mak 
b/configs/targets/aarch64-linux-user.mak
index db552f1839..ba8bc5fe3f 100644
--- a/configs/targets/aarch64-linux-user.mak
+++ b/configs/targets/aarch64-linux-user.mak
@@ -1,6 +1,6 @@
  TARGET_ARCH=aarch64
  TARGET_BASE_ARCH=arm
-TARGET_XML_FILES= gdb-xml/aarch64-core.xml gdb-xml/aarch64-fpu.xml
+TARGET_XML_FILES= gdb-xml/aarch64-core.xml gdb-xml/aarch64-fpu.xml 
gdb-xml/aarch64-pauth.xml
  TARGET_HAS_BFLT=y
  CONFIG_SEMIHOSTING=y
  CONFIG_ARM_COMPATIBLE_SEMIHOSTING=y
diff --git a/configs/targets/aarch64-softmmu.mak 
b/configs/targets/aarch64-softmmu.mak
index d489e6da83..b4338e9568 100644
--- a/configs/targets/aarch64-softmmu.mak
+++ b/configs/targets/aarch64-softmmu.mak
@@ -1,5 +1,5 @@
  TARGET_ARCH=aarch64
  TARGET_BASE_ARCH=arm
  TARGET_SUPPORTS_MTTCG=y
-TARGET_XML_FILES= gdb-xml/aarch64-core.xml gdb-xml/aarch64-fpu.xml 
gdb-xml/arm-core.xml gdb-xml/arm-vfp.xml gdb-xml/arm-vfp3.xml 
gdb-xml/arm-vfp-sysregs.xml gdb-xml/arm-neon.xml gdb-xml/arm-m-profile.xml 
gdb-xml/arm-m-profile-mve.xml
+TARGET_XML_FILES= gdb-xml/aarch64-core.xml gdb-xml/aarch64-fpu.xml 
gdb-xml/arm-core.xml gdb-xml/arm-vfp.xml gdb-xml/arm-vfp3.xml 
gdb-xml/arm-vfp-sysregs.xml gdb-xml/arm-neon.xml gdb-xml/arm-m-profile.xml 
gdb-xml/arm-m-profile-mve.xml gdb-xml/aarch64-pauth.xml
  TARGET_NEED_FDT=y
diff --git a/configs/targets/aarch64_be-linux-user.mak 
b/configs/targets/aarch64_be-linux-user.mak
index dc78044fb1..acb5620cdb 100644
--- a/configs/targets/aarch64_be-linux-user.mak
+++ b/configs/targets/aarch64_be-linux-user.mak
@@ -1,7 +1,7 @@
  TARGET_ARCH=aarch64
  TARGET_BASE_ARCH=arm
  TARGET_BIG_ENDIAN=y
-TARGET_XML_FILES= gdb-xml/aarch64-core.xml gdb-xml/aarch64-fpu.xml
+TARGET_XML_FILES= gdb-xml/aarch64-core.xml gdb-xml/aarch64-fpu.xml 
gdb-xml/aarch64-pauth.xml
  TARGET_HAS_BFLT=y
  CONFIG_SEMIHOSTING=y
  CONFIG_ARM_COMPATIBLE_SEMIHOSTING=y
diff --git a/target/arm/internals.h b/target/arm/internals.h
index 370655061e..fb88b16579 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -1331,6 +1331,8 @@ int aarch64_gdb_get_sve_reg(CPUARMState *env, GByteArray 
*buf, int reg);
  int aarch64_gdb_set_sve_reg(CPUARMState *env, uint8_t *buf, int reg);
  int aarch64_gdb_get_fpu_reg(CPUARMState *env, GByteArray *buf, int reg);
  int aarch64_gdb_set_fpu_reg(CPUARMState *env, uint8_t *buf, int reg);
+int aarch64_gdb_get_pauth_reg(CPUARMState *env, GByteArray *buf, int reg);
+int aarch64_gdb_set_pauth_reg(CPUARMState *env, uint8_t *buf, int reg);
  void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp);
  void arm_cpu_sme_finalize(ARMCPU *cpu, Error **errp);
  void arm_cpu_pauth_finalize(ARMCPU *cpu, Error **errp);
diff --git a/target/arm/gdbstub.c b/target/arm/gdbstub.c
index bf8aff7824..062c8d447a 100644
--- a/target/arm/gdbstub.c
+++ b/target/arm/gdbstub.c
@@ -355,6 +355,11 @@ void arm_cpu_register_gdb_regs_for_features(ARMCPU *cpu)
   aarch64_gdb_set_fpu_reg,
   34, "aarch64-fpu.xml", 0);
  }
+if (isar_feature_aa64_pauth(&cpu->isar)) {
+gdb_register_coprocessor(cs, aarch64_gdb_get_pauth_reg,
+ aarch64_gdb_set_pauth_reg,
+ 4, "aarch64-pauth.xml", 0);
+}
  #endif
  } else {
  if (arm_feature(env, ARM_FEATURE_NEON)) {
diff --git a/target/arm/gdbstub64.c b/target/arm/gdbstub64.c
index 3d9e9e97c8..3bee892fb7 100644
--- a/target/arm/gdbstub64.c
+++ b/target/arm/gdbstub64.c
@@ -210,6 +210,40 @@ int aarch64_gdb_set_sve_reg(CPUARMState *env, uint8_t 
*buf, int reg)
  return 0;
  }

+int aarch64_gdb_get_pauth_reg(CPUARMState *env, GByteArray *buf, int reg)
+{
+switch (reg) {
+case 0: /* pauth_dmask */
+case 1: /* pauth_cmask */
+case 2: /* pauth_dmask_high */
+case 3: /* pauth_cmask_high */
+/*
+ * Note that o

Re: QAPI unions as branches / unifying struct and union types

2023-02-21 Thread Het Gala


On 17/02/23 5:25 pm, Daniel P. Berrangé wrote:

On Fri, Feb 17, 2023 at 04:48:59PM +0530, Het Gala wrote:

On 14/02/23 3:46 pm, Markus Armbruster wrote:

Het Gala  writes:


On 10/02/23 12:54 pm, Markus Armbruster wrote:

Daniel P. Berrangé  writes:

[...]


+##
+# @MigrateAddress:
+#
+# The options available for communication transport mechanisms for migration
+#
+# Since 8.0
+##
+{ 'union' : 'MigrateAddress',
+  'base' : { 'transport' : 'MigrateTransport'},
+  'discriminator' : 'transport',
+  'data' : {
+'socket' : 'MigrateSocketAddr',
+'exec' : 'MigrateExecAddr',
+'rdma': 'MigrateRdmaAddr' } }

Ideally this would be

  'data' : {
'socket' : 'SocketAddress',
'exec' : 'MigrateCommand',
'rdma': 'InetSocketAddress' } }

though the first SocketAddress isn't possible unless it is easy to
lift the QAPI limitation.

Context: SocketAddress is a QAPI union, and "the QAPI limitation" is

   scripts/qapi-gen.py: In file included from ../qapi/qapi-schema.json:79:
   ../qapi/migration.json: In union 'MigrateAddress':
   ../qapi/migration.json:1505: branch 'socket' cannot use union type 
'SocketAddress'

Emitted by schema.py like this:

   if (not isinstance(v.type, QAPISchemaObjectType)
   or v.type.variants):
   raise QAPISemError(
   self.info,
   "%s cannot use %s"
   % (v.describe(self.info), v.type.describe()))

This enforces docs/devel/qapi-code-gen.rst's clause

   The BRANCH's value defines the branch's properties, in particular its
   type.  The type must a struct type.  [...]

Next paragraph:

   In the Client JSON Protocol, a union is represented by an object with
   the common members (from the base type) and the selected branch's
   members.  The two sets of member names must be disjoint.

So, we're splicing in the members of the branch's JSON object.  For that
to even make sense, the branch type needs to map to a JSON object.  This
is fundamental.  It's the first part of the condition in the code
snippet above.

We have two kinds of QAPI types that map to a JSON object: struct and
union.  The second part of the condition restricts to struct.  Unless
I'm missing something (imperfect memory...), this is *not* fundamental,
just a matter of implementing it.  But I'd have to try to be sure.


Instead of simply allowing unions in addition to structs here, I'd like
to go one step further, and fuse the two into "objects".  Let me
explain.

If we abstract from syntax, structs have become almost a special kind of
union.  Unions have a set of common members and sets of variant members,
and a special common member (the tag) selects the set of variant
members.  Structs are unions with zero variants and no tag.

The generator code actually represents both structs and unions as a
common QAPISchemaObjectType already.  QAPI/QMP introspection does the
same: it uses a single meta type 'object' for both.


There is another spot where only structs are allowed: a struct or
union's base type.  That restriction will be awkward to lift, as I made
the mistake of baking the assumption "object type has at most one tag
member" into QAPI/QMP introspection.

Hi Markus, thank you for explaining in such detail. I tried to understand what 
you explained.

So IIUC, you mentioned the QAPI generator treats both structs and unions the 
same, but the checks in schema.py are where it tries to distinguish between 
the two? And because docs/devel/qapi-code-gen.rst states that a union's 
branches must be of 'struct' type, that's the reason it gives an error?

Permit me a brief digression into history.

The initial QAPI design language provided product types (structs) and
sum types (unions containing exactly one of several types, and a tag
member that tells which one).  The two are orthogonal.

These unions turned out rather awkward.

The unions we have today are more general.  They have common members,
and one of them is the tag member, of enumeration type.  For each tag
value, they have variant members.  Both the common members and each tag
value's variant members are given as struct types.

What if the tag's enumeration type is empty, i.e. has no values?  We get
a union with no variant members, only common ones.  Isn't that a struct?

Not quite.  To get a struct, we also have to drop the tag member.  It
has no possible values anyway.

You see, struct types are almost a special case of today's union types.
To overcome "almost", we can introduce the notion of "object type":

* An object type has common members, one of them can be a tag member, of
enumeration type, not empty.  For each tag value, it additionally has
variant members.

* A union type is an object type with a tag member and variant members.

* A struct type is an object type without tag member and variant
members.
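The three definitions can be sketched as a tiny model (hypothetical Python — not the real schema.py classes):

```python
# Sketch: a single "object type" notion covering both structs and unions.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ObjectType:
    members: dict                                  # common members: name -> type
    tag: Optional[str] = None                      # tag member name, if any
    variants: dict = field(default_factory=dict)   # tag value -> variant members

    @property
    def is_struct(self) -> bool:
        return self.tag is None and not self.variants

    @property
    def is_union(self) -> bool:
        return self.tag is not None

point = ObjectType(members={"x": "int", "y": "int"})
addr = ObjectType(members={"transport": "str"}, tag="transport",
                  variants={"socket": {"host": "str", "port": "str"}})
assert point.is_struct and not point.is_union
assert addr.is_union and not addr.is_struct
```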

The QAPI generator code already represents both structs and unions as a
common QAPISchemaObjectType.

Re: [PATCH] hw/arm/virt: Prevent CPUs in one socket to span multiple NUMA nodes

2023-02-21 Thread Gavin Shan

On 2/21/23 8:15 PM, Philippe Mathieu-Daudé wrote:

On 21/2/23 09:53, Gavin Shan wrote:

A Linux kernel guest reports a warning when two CPUs in one socket have
been associated with different NUMA nodes, using the following command
lines.

   -smp 6,maxcpus=6,sockets=2,clusters=1,cores=3,threads=1 \
   -numa node,nodeid=0,cpus=0-1,memdev=ram0    \
   -numa node,nodeid=1,cpus=2-3,memdev=ram1    \
   -numa node,nodeid=2,cpus=4-5,memdev=ram2    \

   [ cut here ]
   WARNING: CPU: 0 PID: 1 at kernel/sched/topology.c:2271 
build_sched_domains+0x284/0x910
   Modules linked in:
   CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.14.0-268.el9.aarch64 #1
   pstate: 0045 (nzcv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
   pc : build_sched_domains+0x284/0x910
   lr : build_sched_domains+0x184/0x910
   sp : 8804bd50
   x29: 8804bd50 x28: 0002 x27: 
   x26: 89cf9a80 x25:  x24: 89cbf840
   x23: 80325000 x22: 005df800 x21: 8a4ce508
   x20:  x19: 80324440 x18: 0014
   x17: 388925c0 x16: 5386a066 x15: 9c10cc2e
   x14: 01c0 x13: 0001 x12: 7fffb1a0
   x11: 7fffb180 x10: 8a4ce508 x9 : 0041
   x8 : 8a4ce500 x7 : 8a4cf920 x6 : 0001
   x5 : 0001 x4 : 0007 x3 : 0002
   x2 : 1000 x1 : 8a4cf928 x0 : 0001
   Call trace:
    build_sched_domains+0x284/0x910
    sched_init_domains+0xac/0xe0
    sched_init_smp+0x48/0xc8
    kernel_init_freeable+0x140/0x1ac
    kernel_init+0x28/0x140
    ret_from_fork+0x10/0x20

Fix it by preventing multiple CPUs in one socket from being associated
with different NUMA nodes.

Reported-by: Yihuang Yu 
Signed-off-by: Gavin Shan 
---
  hw/arm/virt.c | 37 +
  1 file changed, 37 insertions(+)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index ac626b3bef..e0af267c77 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -230,6 +230,39 @@ static bool cpu_type_valid(const char *cpu)
  return false;
  }
+static bool numa_state_valid(MachineState *ms)
+{
+    MachineClass *mc = MACHINE_GET_CLASS(ms);
+    NumaState *state = ms->numa_state;
+    const CPUArchIdList *possible_cpus = mc->possible_cpu_arch_ids(ms);
+    const CPUArchId *cpus = possible_cpus->cpus;
+    int len = possible_cpus->len, i, j;
+
+    if (!state || state->num_nodes <= 1 || len <= 1) {
+    return true;
+    }
+
+    for (i = 0; i < len; i++) {
+    for (j = i + 1; j < len; j++) {
+    if (cpus[i].props.has_socket_id &&
+    cpus[i].props.has_node_id &&
+    cpus[j].props.has_socket_id &&
+    cpus[j].props.has_node_id &&
+    cpus[i].props.socket_id == cpus[j].props.socket_id &&
+    cpus[i].props.node_id != cpus[j].props.node_id) {
+    error_report("CPU-%d and CPU-%d in socket-%ld have been "
+ "associated with node-%ld and node-%ld",
+ i, j, cpus[i].props.socket_id,
+ cpus[i].props.node_id,
+ cpus[j].props.node_id);
+    return false;
+    }
+    }
+    }
+
+    return true;
+}
+
  static void create_randomness(MachineState *ms, const char *node)
  {
  struct {
@@ -2040,6 +2073,10 @@ static void machvirt_init(MachineState *machine)
  exit(1);
  }
+    if (!numa_state_valid(machine)) {
+    exit(1);
+    }


Why restrict to the virt machine?



We tried x86 machines and the virt machine, but the issue isn't reproducible
on x86 machines. So I think it's a machine- or architecture-specific issue.
However, I believe RISC-V should have a similar issue because
linux/drivers/base/arch_topology.c is shared by ARM64 and RISC-V. x86 doesn't
use the driver to populate its CPU topology.

Thanks,
Gavin
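For illustration, the patch's consistency check can be rendered in Python (hypothetical helper mirroring the C logic; the CPU lists are made up):

```python
# Sketch: CPUs that share a socket must not be spread across NUMA nodes.
def numa_state_valid(cpus):
    """cpus: list of dicts that may carry 'socket_id' and 'node_id'."""
    for i, a in enumerate(cpus):
        for b in cpus[i + 1:]:
            if ("socket_id" in a and "node_id" in a and
                    "socket_id" in b and "node_id" in b and
                    a["socket_id"] == b["socket_id"] and
                    a["node_id"] != b["node_id"]):
                return False
    return True

# The failing layout from the report: 2 sockets x 3 cores, but 3 nodes of
# 2 CPUs each, so socket 0 spans nodes 0 and 1.
bad = [{"socket_id": c // 3, "node_id": c // 2} for c in range(6)]
assert not numa_state_valid(bad)

# Aligning nodes with sockets passes the check.
good = [{"socket_id": c // 3, "node_id": c // 3} for c in range(6)]
assert numa_state_valid(good)
```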




Re: [PATCH 1/2] qga/win32: Remove change action from MSI installer

2023-02-21 Thread Konstantin Kostiuk
On Tue, Feb 21, 2023 at 10:15 AM Philippe Mathieu-Daudé 
wrote:

> On 20/2/23 18:41, Konstantin Kostiuk wrote:
> > resolves: rhbz#2167436
>
> "You are not authorized to access bug #2167436."
>
> > fixes: CVE-2023-0664
>
> This commit description is rather scarce...
>
> I understand you are trying to fix a CVE, but we shouldn't play
> the "security by obscurity" card. How can the community and
> distributions know this security fix is enough with the bare
> "Remove change action from MSI installer" justification?
> Can't we do better?
>

This patch is part of the fix. I remove the 'change' button because
the installer has no components to choose from and the installer
always installs everything.

The second patch removes the interactive command shell.


>
> > Signed-off-by: Konstantin Kostiuk 
> > ---
> >   qga/installer/qemu-ga.wxs | 1 +
> >   1 file changed, 1 insertion(+)
> >
> > diff --git a/qga/installer/qemu-ga.wxs b/qga/installer/qemu-ga.wxs
> > index 51340f7ecc..feb629ec47 100644
> > --- a/qga/installer/qemu-ga.wxs
> > +++ b/qga/installer/qemu-ga.wxs
> > @@ -31,6 +31,7 @@
> > />
> >EmbedCab="yes" />
> >   1
> > +
> >> DowngradeErrorMessage="Error: A newer version of QEMU guest
> agent is already installed."
> > />
> > --
> > 2.25.1
> >
> >
>
>


Re: [PATCH v3 6/8] target/i386/intel-pt: Enable host pass through of Intel PT

2023-02-21 Thread Xiaoyao Li

On 2/21/2023 1:14 PM, Wang, Lei wrote:


On 12/8/2022 2:25 PM, Xiaoyao Li wrote:

commit e37a5c7fa459 ("i386: Add Intel Processor Trace feature support")
added support for Intel PT by making the ICX feature set the fixed
CPUID[14] PT feature set for any CPU model on any host. This truly breaks
PT exposure on the Intel SPR platform, because SPR supports a smaller
bitmap in CPUID(0x14,1):EBX[15:0] than ICX.

To fix the problem, enable pass-through of the host's PT capabilities for
the "-cpu host/max" cases, so that they no longer use the default fixed PT
feature set of ICX but expand automatically based on get_supported_cpuid
as reported by the host. Meanwhile, named CPU models keep their fixed PT
feature set so as not to break the live migration case of
"-cpu named_cpu_model,+intel-pt".

Introduce the env->use_default_intel_pt flag.
  - True means it's an old CPU model that uses the fixed PT feature set of ICX.
  - False means the named CPU model has its own PT feature set.

Besides, to keep the same behavior, old CPU models validate their PT
feature set against the default fixed PT feature set of ICX, in addition
to validating it against the host's capabilities (via get_supported_cpuid)
in x86_cpu_filter_features().

In the future, a new named CPU model, e.g., Sapphire Rapids, can define
its own PT feature set by setting @has_specific_intel_pt_feature_set to
true and defining its own FEAT_14_0_EBX, FEAT_14_0_ECX, FEAT_14_1_EAX
and FEAT_14_1_EBX.

Signed-off-by: Xiaoyao Li 
---
  target/i386/cpu.c | 71 ++-
  target/i386/cpu.h |  1 +
  2 files changed, 40 insertions(+), 32 deletions(-)

diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index e302cbbebfc5..24f3c7b06698 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -5194,6 +5194,21 @@ static void x86_cpu_load_model(X86CPU *cpu, X86CPUModel 
*model)
  env->features[w] = def->features[w];
  }
  
+/*

+ * All (old) named CPU models have the same default values for INTEL_PT_*
+ *
+ * Assign the default value here since we don't want to manually copy/paste
+ * it to all entries in builtin_x86_defs.
+ */
+if (!env->features[FEAT_14_0_EBX] && !env->features[FEAT_14_0_ECX] &&
+!env->features[FEAT_14_1_EAX] && !env->features[FEAT_14_1_EBX]) {
+env->use_default_intel_pt = true;
+env->features[FEAT_14_0_EBX] = INTEL_PT_DEFAULT_0_EBX;
+env->features[FEAT_14_0_ECX] = INTEL_PT_DEFAULT_0_ECX;
+env->features[FEAT_14_1_EAX] = INTEL_PT_DEFAULT_1_EAX;
+env->features[FEAT_14_1_EBX] = INTEL_PT_DEFAULT_1_EBX;
+}
+
  /* legacy-cache defaults to 'off' if CPU model provides cache info */
  cpu->legacy_cache = !def->cache_info;
  
@@ -5716,14 +5731,11 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count,
  
  if (count == 0) {

  *eax = INTEL_PT_MAX_SUBLEAF;
-*ebx = INTEL_PT_DEFAULT_0_EBX;
-*ecx = INTEL_PT_DEFAULT_0_ECX;
-if (env->features[FEAT_14_0_ECX] & CPUID_14_0_ECX_LIP) {
-*ecx |= CPUID_14_0_ECX_LIP;
-}
+*ebx = env->features[FEAT_14_0_EBX];
+*ecx = env->features[FEAT_14_0_ECX];
  } else if (count == 1) {
-*eax = INTEL_PT_DEFAULT_1_EAX;
-*ebx = INTEL_PT_DEFAULT_1_EBX;
+*eax = env->features[FEAT_14_1_EAX];
+*ebx = env->features[FEAT_14_1_EBX];
  }
  break;
  }
@@ -6425,6 +6437,7 @@ static void x86_cpu_filter_features(X86CPU *cpu, bool 
verbose)
  CPUX86State *env = &cpu->env;
  FeatureWord w;
  const char *prefix = NULL;
+uint64_t host_feat;
  
  if (verbose) {

  prefix = accel_uses_host_cpuid()
@@ -6433,8 +6446,7 @@ static void x86_cpu_filter_features(X86CPU *cpu, bool 
verbose)
  }
  
  for (w = 0; w < FEATURE_WORDS; w++) {

-uint64_t host_feat =
-x86_cpu_get_supported_feature_word(w, false);
+host_feat = x86_cpu_get_supported_feature_word(w, false);
  uint64_t requested_features = env->features[w];
  uint64_t unavailable_features;
  
@@ -6458,31 +6470,26 @@ static void x86_cpu_filter_features(X86CPU *cpu, bool verbose)

  mark_unavailable_features(cpu, w, unavailable_features, prefix);
  }
  
-if ((env->features[FEAT_7_0_EBX] & CPUID_7_0_EBX_INTEL_PT) &&

-kvm_enabled()) {
-KVMState *s = CPU(cpu)->kvm_state;
-uint32_t eax_0 = kvm_arch_get_supported_cpuid(s, 0x14, 0, R_EAX);
-uint32_t ebx_0 = kvm_arch_get_supported_cpuid(s, 0x14, 0, R_EBX);
-uint32_t ecx_0 = kvm_arch_get_supported_cpuid(s, 0x14, 0, R_ECX);
-uint32_t eax_1 = kvm_arch_get_supported_cpuid(s, 0x14, 1, R_EAX);
-uint32_t ebx_1 = kvm_arch_get_supported_cpuid(s, 0x14, 1, R_EBX);
-
-if (!eax_0 ||
-   ((ebx_0 & INTEL_PT_DEFAULT_0_EBX) != INTEL_PT_DEFAULT_0_EBX) ||
-   ((ecx_0 & INTEL_PT_DEFAULT_0_ECX) != INTEL_PT_DEFAULT_0_ECX) ||
-  

Re: [PATCH v2 3/4] qga/vss-win32: fix warning for clang++-15

2023-02-21 Thread Pierrick Bouvier

Done!

On 2/20/23 20:30, Philippe Mathieu-Daudé wrote:

Fixes: 917ebcb170 ("qga-win: Fix QGA VSS Provider service stop failure")
Reviewed-by: Philippe Mathieu-Daudé


[PATCH v2 00/14] testing/next: docker, avocado, unit, gitlab

2023-02-21 Thread Alex Bennée
Since the last posting:

  - included Anton's curl fix (to keep CI green)
  - split socat MacOS changes across two patches
  - dropped ninja verbosity patch
  - reduced chattiness of fp-test
- see 
https://gitlab.com/qemu-project/berkeley-testfloat-3/-/merge_requests/1
  - usual cleanups below the ---

The following patches still need review:

tests: ensure we export job results for some cross builds
tests: add tuxrun baseline test to avocado
tests: skip the nios2 replay_kernel test
tests: don't run benchmarks for the tsan build
gitlab: extend custom runners with base_job_template
tests: make fp-test less chatty when running from test suite
tests: be a bit more strict cleaning up fifos

Alex Bennée (10):
  tests: don't run socat tests on MacOS as well
  tests: add socat dependency for tests
  tests: be a bit more strict cleaning up fifos
  tests: make fp-test less chatty when running from test suite
  gitlab: extend custom runners with base_job_template
  tests: don't run benchmarks for the tsan build
  testing: update ubuntu2004 to ubuntu2204
  tests: skip the nios2 replay_kernel test
  tests: add tuxrun baseline test to avocado
  tests: ensure we export job results for some cross builds

Anton Johansson (1):
  block: Handle curl 7.55.0, 7.85.0 version changes

Bastian Koppelmann (1):
  tests/docker: Use binaries for debian-tricore-cross

Thomas Huth (2):
  gitlab-ci: Use artifacts instead of dumping logs in the Cirrus-CI jobs
  cirrus.yml: Improve the windows_msys2_task

 MAINTAINERS   |   1 +
 docs/devel/testing.rst|   4 +-
 tests/tcg/tricore/macros.h|   2 +-
 block/curl.c  |  44 +-
 tests/fp/fp-test.c|  19 +-
 tests/unit/test-io-channel-command.c  |  10 +-
 .cirrus.yml   |   8 +-
 .gitlab-ci.d/buildtest.yml|  23 +-
 .gitlab-ci.d/cirrus/build.yml |   7 +-
 .gitlab-ci.d/cirrus/freebsd-12.vars   |   2 +-
 .gitlab-ci.d/cirrus/freebsd-13.vars   |   2 +-
 .gitlab-ci.d/cirrus/macos-12.vars |   2 +-
 .gitlab-ci.d/containers.yml   |   4 +-
 .gitlab-ci.d/crossbuild-template.yml  |  11 +
 .gitlab-ci.d/crossbuilds.yml  |  12 +-
 .gitlab-ci.d/custom-runners.yml   |   3 +-
 .../custom-runners/ubuntu-20.04-s390x.yml |  10 +-
 .../custom-runners/ubuntu-22.04-aarch32.yml   |   2 +-
 .../custom-runners/ubuntu-22.04-aarch64.yml   |  10 +-
 tests/avocado/replay_kernel.py|   1 +
 tests/avocado/tuxrun_baselines.py | 423 ++
 tests/docker/dockerfiles/alpine.docker|   1 +
 tests/docker/dockerfiles/centos8.docker   |   1 +
 .../dockerfiles/debian-amd64-cross.docker |   1 +
 tests/docker/dockerfiles/debian-amd64.docker  |   1 +
 .../dockerfiles/debian-arm64-cross.docker |   1 +
 .../dockerfiles/debian-armel-cross.docker |   1 +
 .../dockerfiles/debian-armhf-cross.docker |   1 +
 .../dockerfiles/debian-mips64el-cross.docker  |   1 +
 .../dockerfiles/debian-mipsel-cross.docker|   1 +
 .../dockerfiles/debian-ppc64el-cross.docker   |   1 +
 .../dockerfiles/debian-s390x-cross.docker |   1 +
 .../dockerfiles/debian-tricore-cross.docker   |  10 +-
 .../dockerfiles/fedora-win32-cross.docker |   1 +
 .../dockerfiles/fedora-win64-cross.docker |   1 +
 tests/docker/dockerfiles/fedora.docker|   1 +
 tests/docker/dockerfiles/opensuse-leap.docker |   1 +
 tests/docker/dockerfiles/ubuntu2004.docker|   4 +-
 tests/docker/dockerfiles/ubuntu2204.docker| 147 ++
 tests/docker/test-tsan|   2 +-
 tests/fp/berkeley-testfloat-3 |   2 +-
 tests/fp/meson.build  |   2 +-
 tests/lcitool/projects/qemu.yml   |   1 +
 tests/lcitool/refresh |  11 +-
 tests/tcg/tricore/Makefile.softmmu-target |   6 +-
 45 files changed, 720 insertions(+), 80 deletions(-)
 create mode 100644 tests/avocado/tuxrun_baselines.py
 create mode 100644 tests/docker/dockerfiles/ubuntu2204.docker

-- 
2.39.1




[PATCH v2 09/14] testing: update ubuntu2004 to ubuntu2204

2023-02-21 Thread Alex Bennée
The 22.04 LTS release has been out for almost a year now, so it's time
to update all the remaining images to the current LTS. We can also
drop some hacks we need for older clang TSAN support.

We will keep the ubuntu2004 container around for those who wish to
test builds on the currently still supported baseline.

Signed-off-by: Alex Bennée 
Reviewed-by: John Snow 


v2
  - keep ubuntu2004 about
---
 docs/devel/testing.rst |   4 +-
 .gitlab-ci.d/buildtest.yml |  22 +--
 .gitlab-ci.d/containers.yml|   4 +-
 tests/docker/dockerfiles/ubuntu2004.docker |   3 -
 tests/docker/dockerfiles/ubuntu2204.docker | 147 +
 tests/docker/test-tsan |   2 +-
 tests/lcitool/refresh  |  11 +-
 7 files changed, 165 insertions(+), 28 deletions(-)
 create mode 100644 tests/docker/dockerfiles/ubuntu2204.docker

diff --git a/docs/devel/testing.rst b/docs/devel/testing.rst
index e10c47b5a7..309a575abe 100644
--- a/docs/devel/testing.rst
+++ b/docs/devel/testing.rst
@@ -574,13 +574,13 @@ 
https://github.com/google/sanitizers/wiki/ThreadSanitizerCppManual
 
 Thread Sanitizer in Docker
 ~~
-TSan is currently supported in the ubuntu2004 docker.
+TSan is currently supported in the ubuntu2204 docker.
 
 The test-tsan test will build using TSan and then run make check.
 
 .. code::
 
-  make docker-test-tsan@ubuntu2004
+  make docker-test-tsan@ubuntu2204
 
 TSan warnings under docker are placed in files located at build/tsan/.
 
diff --git a/.gitlab-ci.d/buildtest.yml b/.gitlab-ci.d/buildtest.yml
index e9a67e0191..1686558d80 100644
--- a/.gitlab-ci.d/buildtest.yml
+++ b/.gitlab-ci.d/buildtest.yml
@@ -38,9 +38,9 @@ avocado-system-alpine:
 build-system-ubuntu:
   extends: .native_build_job_template
   needs:
-job: amd64-ubuntu2004-container
+job: amd64-ubuntu2204-container
   variables:
-IMAGE: ubuntu2004
+IMAGE: ubuntu2204
 CONFIGURE_ARGS: --enable-docs --enable-fdt=system --enable-capstone
 TARGETS: aarch64-softmmu alpha-softmmu cris-softmmu hppa-softmmu
   microblazeel-softmmu mips64el-softmmu
@@ -56,7 +56,7 @@ check-system-ubuntu:
 - job: build-system-ubuntu
   artifacts: true
   variables:
-IMAGE: ubuntu2004
+IMAGE: ubuntu2204
 MAKE_CHECK_ARGS: check
 
 avocado-system-ubuntu:
@@ -65,7 +65,7 @@ avocado-system-ubuntu:
 - job: build-system-ubuntu
   artifacts: true
   variables:
-IMAGE: ubuntu2004
+IMAGE: ubuntu2204
 MAKE_CHECK_ARGS: check-avocado
 
 build-system-debian:
@@ -459,10 +459,10 @@ avocado-cfi-x86_64:
 tsan-build:
   extends: .native_build_job_template
   needs:
-job: amd64-ubuntu2004-container
+job: amd64-ubuntu2204-container
   variables:
-IMAGE: ubuntu2004
-CONFIGURE_ARGS: --enable-tsan --cc=clang-10 --cxx=clang++-10
+IMAGE: ubuntu2204
+CONFIGURE_ARGS: --enable-tsan --cc=clang --cxx=clang++
   --enable-trace-backends=ust --enable-fdt=system --disable-slirp
 TARGETS: x86_64-softmmu ppc64-softmmu riscv64-softmmu x86_64-linux-user
 
@@ -470,10 +470,10 @@ tsan-build:
 gcov:
   extends: .native_build_job_template
   needs:
-job: amd64-ubuntu2004-container
+job: amd64-ubuntu2204-container
   timeout: 80m
   variables:
-IMAGE: ubuntu2004
+IMAGE: ubuntu2204
 CONFIGURE_ARGS: --enable-gcov
 TARGETS: aarch64-softmmu ppc64-softmmu s390x-softmmu x86_64-softmmu
 MAKE_CHECK_ARGS: check
@@ -538,9 +538,9 @@ build-tci:
 build-coroutine-sigaltstack:
   extends: .native_build_job_template
   needs:
-job: amd64-ubuntu2004-container
+job: amd64-ubuntu2204-container
   variables:
-IMAGE: ubuntu2004
+IMAGE: ubuntu2204
 CONFIGURE_ARGS: --with-coroutine=sigaltstack --disable-tcg
 --enable-trace-backends=ftrace
 MAKE_CHECK_ARGS: check-unit
diff --git a/.gitlab-ci.d/containers.yml b/.gitlab-ci.d/containers.yml
index 96d2a3b58b..8637a13d86 100644
--- a/.gitlab-ci.d/containers.yml
+++ b/.gitlab-ci.d/containers.yml
@@ -13,10 +13,10 @@ amd64-debian-container:
   variables:
 NAME: debian-amd64
 
-amd64-ubuntu2004-container:
+amd64-ubuntu2204-container:
   extends: .container_job_template
   variables:
-NAME: ubuntu2004
+NAME: ubuntu2204
 
 amd64-opensuse-leap-container:
   extends: .container_job_template
diff --git a/tests/docker/dockerfiles/ubuntu2004.docker 
b/tests/docker/dockerfiles/ubuntu2004.docker
index f34d88d33d..75233064de 100644
--- a/tests/docker/dockerfiles/ubuntu2004.docker
+++ b/tests/docker/dockerfiles/ubuntu2004.docker
@@ -146,6 +146,3 @@ ENV LANG "en_US.UTF-8"
 ENV MAKE "/usr/bin/make"
 ENV NINJA "/usr/bin/ninja"
 ENV PYTHON "/usr/bin/python3"
-# Apply patch https://reviews.llvm.org/D75820
-# This is required for TSan in clang-10 to compile with QEMU.
-RUN sed -i 's/^const/static const/g' 
/usr/lib/llvm-10/lib/clang/10.0.0/include/sanitizer/tsan_interface.h
diff --git a/tests/docker/dockerfiles/ubuntu2204.docker 
b/test

[PATCH v2 13/14] tests: ensure we export job results for some cross builds

2023-02-21 Thread Alex Bennée
We do run tests on some cross builds. Provide a template to ensure we
export the testlog to the build artefacts and report the test results
via the junit report.

Signed-off-by: Alex Bennée 
Reported-by: Peter Maydell 

---
v2
  - properly format extends
---
 .gitlab-ci.d/crossbuild-template.yml | 11 +++
 .gitlab-ci.d/crossbuilds.yml | 12 +---
 2 files changed, 20 insertions(+), 3 deletions(-)

diff --git a/.gitlab-ci.d/crossbuild-template.yml 
b/.gitlab-ci.d/crossbuild-template.yml
index 6d709628f1..24a41a7b21 100644
--- a/.gitlab-ci.d/crossbuild-template.yml
+++ b/.gitlab-ci.d/crossbuild-template.yml
@@ -48,3 +48,14 @@
   nios2-linux-user or1k-linux-user ppc-linux-user sparc-linux-user
   xtensa-linux-user $CROSS_SKIP_TARGETS"
 - make -j$(expr $(nproc) + 1) all check-build $MAKE_CHECK_ARGS
+
+# We can still run some tests on some of our cross build jobs. They can add 
this
+# template to their extends to save the build logs and test results
+.cross_test_artifacts:
+  artifacts:
+name: "$CI_JOB_NAME-$CI_COMMIT_REF_SLUG"
+expire_in: 7 days
+paths:
+  - build/meson-logs/testlog.txt
+reports:
+  junit: build/meson-logs/testlog.junit.xml
diff --git a/.gitlab-ci.d/crossbuilds.yml b/.gitlab-ci.d/crossbuilds.yml
index 74d6259b90..bbc013725c 100644
--- a/.gitlab-ci.d/crossbuilds.yml
+++ b/.gitlab-ci.d/crossbuilds.yml
@@ -44,7 +44,9 @@ cross-arm64-user:
 IMAGE: debian-arm64-cross
 
 cross-i386-system:
-  extends: .cross_system_build_job
+  extends:
+- .cross_system_build_job
+- .cross_test_artifacts
   needs:
 job: i386-fedora-cross-container
   variables:
@@ -52,7 +54,9 @@ cross-i386-system:
 MAKE_CHECK_ARGS: check-qtest
 
 cross-i386-user:
-  extends: .cross_user_build_job
+  extends:
+- .cross_user_build_job
+- .cross_test_artifacts
   needs:
 job: i386-fedora-cross-container
   variables:
@@ -60,7 +64,9 @@ cross-i386-user:
 MAKE_CHECK_ARGS: check
 
 cross-i386-tci:
-  extends: .cross_accel_build_job
+  extends:
+- .cross_accel_build_job
+- .cross_test_artifacts
   timeout: 60m
   needs:
 job: i386-fedora-cross-container
-- 
2.39.1




[PATCH v2 12/14] tests/docker: Use binaries for debian-tricore-cross

2023-02-21 Thread Alex Bennée
From: Bastian Koppelmann 

since binutils is pretty old, it fails our CI repeatedly during the
compilation of tricore-binutils. We created a precompiled version using
the debian docker image and download it instead of building it ourself.

We also updated the package to include a newer version of binutils, gcc,
and newlib. The default TriCore ISA version used by tricore-as changed
from the old version, so we have to specify it now. If we don't,
'test_fadd' fails with 'unknown opcode'.

The new assembler also picks a new encoding in ld.h which fails the
'test_ld_h' test. We fix that by using the newest TriCore CPU for QEMU.

The old assembler accepted an extra ')' in 'test_imask'. The new one
does not, so lets remove it.

Signed-off-by: Bastian Koppelmann 
Message-Id: <20230209145812.46730-1-kbast...@mail.uni-paderborn.de>
Signed-off-by: Alex Bennée 
---
 tests/tcg/tricore/macros.h   |  2 +-
 tests/docker/dockerfiles/debian-tricore-cross.docker | 10 +++---
 tests/tcg/tricore/Makefile.softmmu-target|  6 +++---
 3 files changed, 7 insertions(+), 11 deletions(-)

diff --git a/tests/tcg/tricore/macros.h b/tests/tcg/tricore/macros.h
index ec4f5bff52..3df2e0de82 100644
--- a/tests/tcg/tricore/macros.h
+++ b/tests/tcg/tricore/macros.h
@@ -174,7 +174,7 @@ test_ ## num:   
 \
 TEST_CASE_E(num, res_lo, res_hi,   \
 LI(DREG_RS1, rs1); \
 rstv;  \
-insn EREG_CALC_RESULT, imm1, DREG_RS1, imm2);  \
+insn EREG_CALC_RESULT, imm1, DREG_RS1, imm2;   \
 )
 
 
diff --git a/tests/docker/dockerfiles/debian-tricore-cross.docker 
b/tests/docker/dockerfiles/debian-tricore-cross.docker
index 5ae58efa09..82e4576485 100644
--- a/tests/docker/dockerfiles/debian-tricore-cross.docker
+++ b/tests/docker/dockerfiles/debian-tricore-cross.docker
@@ -20,6 +20,7 @@ RUN apt update && \
bzip2 \
ca-certificates \
ccache \
+   curl \
flex \
g++ \
gcc \
@@ -34,13 +35,8 @@ RUN apt update && \
python3-setuptools \
python3-wheel
 
-RUN git clone --single-branch \
-https://github.com/bkoppelmann/tricore-binutils.git \
-/usr/src/binutils && \
-cd /usr/src/binutils && chmod +x missing && \
-CFLAGS=-w ./configure --prefix=/usr/local --disable-nls --target=tricore 
&& \
-make && make install && \
-rm -rf /usr/src/binutils
+RUN curl -#SL 
https://github.com/bkoppelmann/package_940/releases/download/tricore-toolchain-9.40/tricore-toolchain-9.4.0.tar.gz
 \
+| tar -xzC /usr/local/
 
 # This image can only build a very minimal QEMU as well as the tests
 ENV DEF_TARGET_LIST tricore-softmmu
diff --git a/tests/tcg/tricore/Makefile.softmmu-target 
b/tests/tcg/tricore/Makefile.softmmu-target
index d2446af8b4..b3cd56fffc 100644
--- a/tests/tcg/tricore/Makefile.softmmu-target
+++ b/tests/tcg/tricore/Makefile.softmmu-target
@@ -1,7 +1,7 @@
 TESTS_PATH = $(SRC_PATH)/tests/tcg/tricore
 
-LDFLAGS = -T$(TESTS_PATH)/link.ld
-ASFLAGS =
+LDFLAGS = -T$(TESTS_PATH)/link.ld --mcpu=tc162
+ASFLAGS = -mtc162
 
 TESTS += test_abs.tst
 TESTS += test_bmerge.tst
@@ -19,7 +19,7 @@ TESTS += test_madd.tst
 TESTS += test_msub.tst
 TESTS += test_muls.tst
 
-QEMU_OPTS += -M tricore_testboard -nographic -kernel
+QEMU_OPTS += -M tricore_testboard -cpu tc27x -nographic -kernel
 
 %.pS: $(TESTS_PATH)/%.S
$(HOST_CC) -E -o $@ $<
-- 
2.39.1




[PATCH v2 10/14] tests: skip the nios2 replay_kernel test

2023-02-21 Thread Alex Bennée
It is buggy and keeps failing.

Suggested-by: Peter Maydell 
Signed-off-by: Alex Bennée 
---
 tests/avocado/replay_kernel.py | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tests/avocado/replay_kernel.py b/tests/avocado/replay_kernel.py
index 00a26e4a0c..f13456e1ec 100644
--- a/tests/avocado/replay_kernel.py
+++ b/tests/avocado/replay_kernel.py
@@ -349,6 +349,7 @@ def test_or1k_sim(self):
 file_path = self.fetch_asset(tar_url, asset_hash=tar_hash)
 self.do_test_advcal_2018(file_path, 'vmlinux')
 
+@skip("nios2 emulation is buggy under record/replay")
 def test_nios2_10m50(self):
 """
 :avocado: tags=arch:nios2
-- 
2.39.1




Re: [PATCH v3 2/4] sysemu/os-win32: fix setjmp/longjmp on windows-arm64

2023-02-21 Thread Pierrick Bouvier
@Philippe Mathieu-Daudé, is this version satisfactory to you, regarding 
your initial comments about setjmp/longjmp detection in meson?


I can integrate more changes if needed.

Thanks,
Pierrick

On 2/20/23 16:16, Pierrick Bouvier wrote:

Acked-by: Richard Henderson 

On 2/20/23 12:12, Pierrick Bouvier wrote:

The Windows implementation of setjmp/longjmp lives in
C:/WINDOWS/system32/ucrtbase.dll. Alas, on arm64, it seems to *always*
perform stack unwinding, which crashes when jumping from generated code.

By using the alternative implementation built into mingw, we avoid the
stack unwinding, and this fixes the crash when calling longjmp.

Signed-off-by: Pierrick Bouvier 
---
   include/sysemu/os-win32.h | 25 +++--
   meson.build   | 24 
   2 files changed, 47 insertions(+), 2 deletions(-)

diff --git a/include/sysemu/os-win32.h b/include/sysemu/os-win32.h
index 5b38c7bd04..1f6c141d39 100644
--- a/include/sysemu/os-win32.h
+++ b/include/sysemu/os-win32.h
@@ -51,14 +51,35 @@ typedef struct sockaddr_un {
   extern "C" {
   #endif
   
-#if defined(_WIN64)

+#if defined(__aarch64__)
+#ifndef CONFIG_MINGW64_HAS_SETJMP_LONGJMP
+#error mingw must provide setjmp/longjmp for windows-arm64
+#endif
+/*
+ * On windows-arm64, setjmp is available in only one variant, and longjmp 
always
+ * does stack unwinding. This crash with generated code.
+ * Thus, we use another implementation of setjmp (not windows one), coming from
+ * mingw, which never performs stack unwinding.
+ */
+#undef setjmp
+#undef longjmp
+/*
+ * These functions are not declared in setjmp.h because __aarch64__ defines
+ * setjmp to _setjmpex instead. However, they are still defined in 
libmingwex.a,
+ * which gets linked automatically.
+ */
+extern int __mingw_setjmp(jmp_buf);
+extern void __attribute__((noreturn)) __mingw_longjmp(jmp_buf, int);
+#define setjmp(env) __mingw_setjmp(env)
+#define longjmp(env, val) __mingw_longjmp(env, val)
+#elif defined(_WIN64)
   /* On w64, setjmp is implemented by _setjmp which needs a second parameter.
* If this parameter is NULL, longjump does no stack unwinding.
* That is what we need for QEMU. Passing the value of register rsp (default)
* lets longjmp try a stack unwinding which will crash with generated code. 
*/
   # undef setjmp
   # define setjmp(env) _setjmp(env, NULL)
-#endif
+#endif /* __aarch64__ */
   /* QEMU uses sigsetjmp()/siglongjmp() as the portable way to specify
* "longjmp and don't touch the signal masks". Since we know that the
* savemask parameter will always be zero we can safely define these
diff --git a/meson.build b/meson.build
index 4ba3bf3431..e968ed9e7a 100644
--- a/meson.build
+++ b/meson.build
@@ -2450,6 +2450,30 @@ if targetos == 'windows'
   }''', name: '_lock_file and _unlock_file'))
   endif
   
+if targetos == 'windows'

+  mingw_has_setjmp_longjmp = cc.links('''
+#include 
+int main(void) {
+  /*
+   * These functions are not available in setjmp header, but may be
+   * available at link time, from libmingwex.a.
+   */
+  extern int __mingw_setjmp(jmp_buf);
+  extern void __attribute__((noreturn)) __mingw_longjmp(jmp_buf, int);
+  jmp_buf env;
+  __mingw_setjmp(env);
+  __mingw_longjmp(env, 0);
+}
+  ''', name: 'mingw setjmp and longjmp')
+
+  config_host_data.set('CONFIG_MINGW64_HAS_SETJMP_LONGJMP',
+   mingw_has_setjmp_longjmp)
+
+  if cpu == 'aarch64' and not mingw_has_setjmp_longjmp
+error('mingw must provide setjmp/longjmp for windows-arm64')
+  endif
+endif
+
   
   # Target configuration #
   


[PATCH v2 08/14] tests: don't run benchmarks for the tsan build

2023-02-21 Thread Alex Bennée
All we are really doing here is checking that TSAN builds compile and are
therefore a tool available to developers. The benchmarks are not
representative of QEMU's actual threading behaviour and they burn
precious CI time. Indeed switching to check-unit reveals many
unaddressed issues which have been logged at:

  https://gitlab.com/qemu-project/qemu/-/issues/1496

So for now, disable the make check and make this a build-only
test.

Signed-off-by: Alex Bennée 
---
 .gitlab-ci.d/buildtest.yml | 1 -
 1 file changed, 1 deletion(-)

diff --git a/.gitlab-ci.d/buildtest.yml b/.gitlab-ci.d/buildtest.yml
index 8f332fc36f..e9a67e0191 100644
--- a/.gitlab-ci.d/buildtest.yml
+++ b/.gitlab-ci.d/buildtest.yml
@@ -465,7 +465,6 @@ tsan-build:
 CONFIGURE_ARGS: --enable-tsan --cc=clang-10 --cxx=clang++-10
   --enable-trace-backends=ust --enable-fdt=system --disable-slirp
 TARGETS: x86_64-softmmu ppc64-softmmu riscv64-softmmu x86_64-linux-user
-MAKE_CHECK_ARGS: bench V=1
 
 # gcov is a GCC features
 gcov:
-- 
2.39.1




[PATCH v2 14/14] cirrus.yml: Improve the windows_msys2_task

2023-02-21 Thread Alex Bennée
From: Thomas Huth 

There's no need to run a full-blown bash just to create a directory.
And we can skip the "cd build" each time by doing it once at the
beginning.

Additionally, let's exclude some targets (that we already compile-test
with MinGW in the gitlab jobs) from the build, since the build time of
this task is very long already (between 80 and 90 minutes).

Signed-off-by: Thomas Huth 
Message-Id: <20230208103046.618154-1-th...@redhat.com>
Signed-off-by: Alex Bennée 
---
 .cirrus.yml | 8 +---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/.cirrus.yml b/.cirrus.yml
index 4895987da4..5fb00da73d 100644
--- a/.cirrus.yml
+++ b/.cirrus.yml
@@ -100,9 +100,11 @@ windows_msys2_task:
   tar xf C:\tools\archive\msys64.tar
   Write-Output "Extract msys2 time taken: 
$((Get-Date).Subtract($start_time))"
   script:
-- C:\tools\msys64\usr\bin\bash.exe -lc "mkdir build"
-- C:\tools\msys64\usr\bin\bash.exe -lc "cd build && ../configure 
--python=python3"
-- C:\tools\msys64\usr\bin\bash.exe -lc "cd build && make -j8"
+- mkdir build
+- cd build
+- C:\tools\msys64\usr\bin\bash.exe -lc "../configure --python=python3
+
--target-list-exclude=i386-softmmu,ppc64-softmmu,aarch64-softmmu,mips64-softmmu,mipsel-softmmu,sh4-softmmu"
+- C:\tools\msys64\usr\bin\bash.exe -lc "make -j8"
 - exit $LastExitCode
   test_script:
 - C:\tools\msys64\usr\bin\bash.exe -lc "cd build && make V=1 check"
-- 
2.39.1




[PATCH v2 07/14] gitlab: extend custom runners with base_job_template

2023-02-21 Thread Alex Bennée
The base job template is responsible for controlling how we kick off
testing on our various branches. Rename the custom_artifacts_template
to custom_runner_template and extend it so we can take advantage of all
that control.

Signed-off-by: Alex Bennée 
---
 .gitlab-ci.d/custom-runners.yml  |  3 ++-
 .gitlab-ci.d/custom-runners/ubuntu-20.04-s390x.yml   | 10 +-
 .gitlab-ci.d/custom-runners/ubuntu-22.04-aarch32.yml |  2 +-
 .gitlab-ci.d/custom-runners/ubuntu-22.04-aarch64.yml | 10 +-
 4 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/.gitlab-ci.d/custom-runners.yml b/.gitlab-ci.d/custom-runners.yml
index 9fdc476c48..34a1e6f327 100644
--- a/.gitlab-ci.d/custom-runners.yml
+++ b/.gitlab-ci.d/custom-runners.yml
@@ -15,7 +15,8 @@ variables:
 
 # All custom runners can extend this template to upload the testlog
 # data as an artifact and also feed the junit report
-.custom_artifacts_template:
+.custom_runner_template:
+  extends: .base_job_template
   artifacts:
 name: "$CI_JOB_NAME-$CI_COMMIT_REF_SLUG"
 expire_in: 7 days
diff --git a/.gitlab-ci.d/custom-runners/ubuntu-20.04-s390x.yml 
b/.gitlab-ci.d/custom-runners/ubuntu-20.04-s390x.yml
index f512eaeaa3..cdae6c5212 100644
--- a/.gitlab-ci.d/custom-runners/ubuntu-20.04-s390x.yml
+++ b/.gitlab-ci.d/custom-runners/ubuntu-20.04-s390x.yml
@@ -3,7 +3,7 @@
 # "Install basic packages to build QEMU on Ubuntu 20.04/20.04"
 
 ubuntu-20.04-s390x-all-linux-static:
- extends: .custom_artifacts_template
+ extends: .custom_runner_template
  needs: []
  stage: build
  tags:
@@ -24,7 +24,7 @@ ubuntu-20.04-s390x-all-linux-static:
  - make --output-sync -j`nproc` check
 
 ubuntu-20.04-s390x-all:
- extends: .custom_artifacts_template
+ extends: .custom_runner_template
  needs: []
  stage: build
  tags:
@@ -43,7 +43,7 @@ ubuntu-20.04-s390x-all:
  - make --output-sync -j`nproc` check
 
 ubuntu-20.04-s390x-alldbg:
- extends: .custom_artifacts_template
+ extends: .custom_runner_template
  needs: []
  stage: build
  tags:
@@ -66,7 +66,7 @@ ubuntu-20.04-s390x-alldbg:
  - make --output-sync -j`nproc` check
 
 ubuntu-20.04-s390x-clang:
- extends: .custom_artifacts_template
+ extends: .custom_runner_template
  needs: []
  stage: build
  tags:
@@ -108,7 +108,7 @@ ubuntu-20.04-s390x-tci:
  - make --output-sync -j`nproc`
 
 ubuntu-20.04-s390x-notcg:
- extends: .custom_artifacts_template
+ extends: .custom_runner_template
  needs: []
  stage: build
  tags:
diff --git a/.gitlab-ci.d/custom-runners/ubuntu-22.04-aarch32.yml 
b/.gitlab-ci.d/custom-runners/ubuntu-22.04-aarch32.yml
index 42137aaf2a..50e5646a44 100644
--- a/.gitlab-ci.d/custom-runners/ubuntu-22.04-aarch32.yml
+++ b/.gitlab-ci.d/custom-runners/ubuntu-22.04-aarch32.yml
@@ -3,7 +3,7 @@
 # "Install basic packages to build QEMU on Ubuntu 20.04"
 
 ubuntu-22.04-aarch32-all:
- extends: .custom_artifacts_template
+ extends: .custom_runner_template
  needs: []
  stage: build
  tags:
diff --git a/.gitlab-ci.d/custom-runners/ubuntu-22.04-aarch64.yml 
b/.gitlab-ci.d/custom-runners/ubuntu-22.04-aarch64.yml
index 8ba85be440..13e14a0f87 100644
--- a/.gitlab-ci.d/custom-runners/ubuntu-22.04-aarch64.yml
+++ b/.gitlab-ci.d/custom-runners/ubuntu-22.04-aarch64.yml
@@ -3,7 +3,7 @@
 # "Install basic packages to build QEMU on Ubuntu 20.04"
 
 ubuntu-22.04-aarch64-all-linux-static:
- extends: .custom_artifacts_template
+ extends: .custom_runner_template
  needs: []
  stage: build
  tags:
@@ -24,7 +24,7 @@ ubuntu-22.04-aarch64-all-linux-static:
  - make --output-sync -j`nproc --ignore=40` check
 
 ubuntu-22.04-aarch64-all:
- extends: .custom_artifacts_template
+ extends: .custom_runner_template
  needs: []
  stage: build
  tags:
@@ -46,7 +46,7 @@ ubuntu-22.04-aarch64-all:
  - make --output-sync -j`nproc --ignore=40` check
 
 ubuntu-22.04-aarch64-alldbg:
- extends: .custom_artifacts_template
+ extends: .custom_runner_template
  needs: []
  stage: build
  tags:
@@ -65,7 +65,7 @@ ubuntu-22.04-aarch64-alldbg:
  - make --output-sync -j`nproc --ignore=40` check
 
 ubuntu-22.04-aarch64-clang:
- extends: .custom_artifacts_template
+ extends: .custom_runner_template
  needs: []
  stage: build
  tags:
@@ -107,7 +107,7 @@ ubuntu-22.04-aarch64-tci:
  - make --output-sync -j`nproc --ignore=40`
 
 ubuntu-22.04-aarch64-notcg:
- extends: .custom_artifacts_template
+ extends: .custom_runner_template
  needs: []
  stage: build
  tags:
-- 
2.39.1




[PATCH v2 03/14] tests: add socat dependency for tests

2023-02-21 Thread Alex Bennée
We only use it for test-io-channel-command at the moment.
Unfortunately, bringing socat into CI exposed an existing bug in the
test-io-channel-command unit test, so we disabled it for macOS in the
previous patch.

Signed-off-by: Alex Bennée 
Reviewed-by: Philippe Mathieu-Daudé 
Tested-by: Philippe Mathieu-Daudé 
Cc: Marc-André Lureau 
---
 .gitlab-ci.d/cirrus/freebsd-12.vars   | 2 +-
 .gitlab-ci.d/cirrus/freebsd-13.vars   | 2 +-
 .gitlab-ci.d/cirrus/macos-12.vars | 2 +-
 tests/docker/dockerfiles/alpine.docker| 1 +
 tests/docker/dockerfiles/centos8.docker   | 1 +
 tests/docker/dockerfiles/debian-amd64-cross.docker| 1 +
 tests/docker/dockerfiles/debian-amd64.docker  | 1 +
 tests/docker/dockerfiles/debian-arm64-cross.docker| 1 +
 tests/docker/dockerfiles/debian-armel-cross.docker| 1 +
 tests/docker/dockerfiles/debian-armhf-cross.docker| 1 +
 tests/docker/dockerfiles/debian-mips64el-cross.docker | 1 +
 tests/docker/dockerfiles/debian-mipsel-cross.docker   | 1 +
 tests/docker/dockerfiles/debian-ppc64el-cross.docker  | 1 +
 tests/docker/dockerfiles/debian-s390x-cross.docker| 1 +
 tests/docker/dockerfiles/fedora-win32-cross.docker| 1 +
 tests/docker/dockerfiles/fedora-win64-cross.docker| 1 +
 tests/docker/dockerfiles/fedora.docker| 1 +
 tests/docker/dockerfiles/opensuse-leap.docker | 1 +
 tests/docker/dockerfiles/ubuntu2004.docker| 1 +
 tests/lcitool/projects/qemu.yml   | 1 +
 20 files changed, 20 insertions(+), 3 deletions(-)

diff --git a/.gitlab-ci.d/cirrus/freebsd-12.vars 
b/.gitlab-ci.d/cirrus/freebsd-12.vars
index 8934e5d57f..44d8a2a511 100644
--- a/.gitlab-ci.d/cirrus/freebsd-12.vars
+++ b/.gitlab-ci.d/cirrus/freebsd-12.vars
@@ -11,6 +11,6 @@ MAKE='/usr/local/bin/gmake'
 NINJA='/usr/local/bin/ninja'
 PACKAGING_COMMAND='pkg'
 PIP3='/usr/local/bin/pip-3.8'
-PKGS='alsa-lib bash bison bzip2 ca_root_nss capstone4 ccache 
cdrkit-genisoimage cmocka ctags curl cyrus-sasl dbus diffutils dtc flex 
fusefs-libs3 gettext git glib gmake gnutls gsed gtk3 json-c libepoxy libffi 
libgcrypt libjpeg-turbo libnfs libslirp libspice-server libssh libtasn1 llvm 
lzo2 meson ncurses nettle ninja opencv pixman pkgconf png py39-numpy 
py39-pillow py39-pip py39-sphinx py39-sphinx_rtd_theme py39-yaml python3 
rpm2cpio sdl2 sdl2_image snappy sndio spice-protocol tesseract usbredir 
virglrenderer vte3 zstd'
+PKGS='alsa-lib bash bison bzip2 ca_root_nss capstone4 ccache 
cdrkit-genisoimage cmocka ctags curl cyrus-sasl dbus diffutils dtc flex 
fusefs-libs3 gettext git glib gmake gnutls gsed gtk3 json-c libepoxy libffi 
libgcrypt libjpeg-turbo libnfs libslirp libspice-server libssh libtasn1 llvm 
lzo2 meson ncurses nettle ninja opencv pixman pkgconf png py39-numpy 
py39-pillow py39-pip py39-sphinx py39-sphinx_rtd_theme py39-yaml python3 
rpm2cpio sdl2 sdl2_image snappy sndio socat spice-protocol tesseract usbredir 
virglrenderer vte3 zstd'
 PYPI_PKGS=''
 PYTHON='/usr/local/bin/python3'
diff --git a/.gitlab-ci.d/cirrus/freebsd-13.vars 
b/.gitlab-ci.d/cirrus/freebsd-13.vars
index 65ce456c48..7622c849b2 100644
--- a/.gitlab-ci.d/cirrus/freebsd-13.vars
+++ b/.gitlab-ci.d/cirrus/freebsd-13.vars
@@ -11,6 +11,6 @@ MAKE='/usr/local/bin/gmake'
 NINJA='/usr/local/bin/ninja'
 PACKAGING_COMMAND='pkg'
 PIP3='/usr/local/bin/pip-3.8'
-PKGS='alsa-lib bash bison bzip2 ca_root_nss capstone4 ccache 
cdrkit-genisoimage cmocka ctags curl cyrus-sasl dbus diffutils dtc flex 
fusefs-libs3 gettext git glib gmake gnutls gsed gtk3 json-c libepoxy libffi 
libgcrypt libjpeg-turbo libnfs libslirp libspice-server libssh libtasn1 llvm 
lzo2 meson ncurses nettle ninja opencv pixman pkgconf png py39-numpy 
py39-pillow py39-pip py39-sphinx py39-sphinx_rtd_theme py39-yaml python3 
rpm2cpio sdl2 sdl2_image snappy sndio spice-protocol tesseract usbredir 
virglrenderer vte3 zstd'
+PKGS='alsa-lib bash bison bzip2 ca_root_nss capstone4 ccache 
cdrkit-genisoimage cmocka ctags curl cyrus-sasl dbus diffutils dtc flex 
fusefs-libs3 gettext git glib gmake gnutls gsed gtk3 json-c libepoxy libffi 
libgcrypt libjpeg-turbo libnfs libslirp libspice-server libssh libtasn1 llvm 
lzo2 meson ncurses nettle ninja opencv pixman pkgconf png py39-numpy 
py39-pillow py39-pip py39-sphinx py39-sphinx_rtd_theme py39-yaml python3 
rpm2cpio sdl2 sdl2_image snappy sndio socat spice-protocol tesseract usbredir 
virglrenderer vte3 zstd'
 PYPI_PKGS=''
 PYTHON='/usr/local/bin/python3'
diff --git a/.gitlab-ci.d/cirrus/macos-12.vars 
b/.gitlab-ci.d/cirrus/macos-12.vars
index 65b78fa08f..da6aa6469b 100644
--- a/.gitlab-ci.d/cirrus/macos-12.vars
+++ b/.gitlab-ci.d/cirrus/macos-12.vars
@@ -11,6 +11,6 @@ MAKE='/opt/homebrew/bin/gmake'
 NINJA='/opt/homebrew/bin/ninja'
 PACKAGING_COMMAND='brew'
 PIP3='/opt/homebrew/bin/pip3'
-PKGS='bash bc bison bzip2 capstone ccache cmocka ctags curl dbus diffutils dtc 
flex gcovr gettext git glib gnu-sed gnutls gtk

[PATCH v2 05/14] tests: make fp-test less chatty when running from test suite

2023-02-21 Thread Alex Bennée
As we like to run tests under CI with V=1 flags, the softfloat tests
can add up to a fair amount of extra log lines. With an update to the
testfloat library we can now call fp-test with the -q flag and reduce
the output to a terse one line per function tested.

  make check-softfloat V=1 | wc -l
  759

Signed-off-by: Alex Bennée 
---
 tests/fp/fp-test.c| 19 ++-
 tests/fp/berkeley-testfloat-3 |  2 +-
 tests/fp/meson.build  |  2 +-
 3 files changed, 16 insertions(+), 7 deletions(-)

diff --git a/tests/fp/fp-test.c b/tests/fp/fp-test.c
index 35829ad5f7..36b5712cda 100644
--- a/tests/fp/fp-test.c
+++ b/tests/fp/fp-test.c
@@ -106,7 +106,8 @@ static const char commands_string[] =
 " -l = thoroughness level (1 (default), 2)\n"
 " -r = rounding mode (even (default), zero, down, up, tieaway, odd)\n"
 "  Set to 'all' to test all rounding modes, if applicable\n"
-" -s = stop when a test fails";
+" -s = stop when a test fails\n"
+" -q = minimise noise when testing, just show each function being tested";
 
 static void usage_complete(int argc, char *argv[])
 {
@@ -190,9 +191,11 @@ static void do_testfloat(int op, int rmode, bool exact)
 ab_f128M_z_bool true_ab_f128M_z_bool;
 ab_f128M_z_bool subj_ab_f128M_z_bool;
 
-fputs(">> Testing ", stderr);
-verCases_writeFunctionName(stderr);
-fputs("\n", stderr);
+if (verCases_verbosity) {
+fputs(">> Testing ", stderr);
+verCases_writeFunctionName(stderr);
+fputs("\n", stderr);
+}
 
 if (!is_allowed(op, rmode)) {
 not_implemented();
@@ -837,7 +840,7 @@ static void parse_args(int argc, char *argv[])
 int c;
 
 for (;;) {
-c = getopt(argc, argv, "he:f:l:r:s");
+c = getopt(argc, argv, "he:f:l:r:sq");
 if (c < 0) {
 break;
 }
@@ -874,9 +877,15 @@ static void parse_args(int argc, char *argv[])
 }
 }
 break;
+/*
+ * The following flags are declared in 
testfloat/source/verCases_common.c
+ */
 case 's':
 verCases_errorStop = true;
 break;
+case 'q':
+verCases_verbosity = 0;
+break;
 case '?':
 /* invalid option or missing argument; getopt prints error info */
 exit(EXIT_FAILURE);
diff --git a/tests/fp/berkeley-testfloat-3 b/tests/fp/berkeley-testfloat-3
index 5a59dcec19..40619cbb3b 160000
--- a/tests/fp/berkeley-testfloat-3
+++ b/tests/fp/berkeley-testfloat-3
@@ -1 +1 @@
-Subproject commit 5a59dcec19327396a011a17fd924aed4fec416b3
+Subproject commit 40619cbb3bf32872df8c53cc457039229428a263
diff --git a/tests/fp/meson.build b/tests/fp/meson.build
index 312a4d301f..f9ca6a93b4 100644
--- a/tests/fp/meson.build
+++ b/tests/fp/meson.build
@@ -609,7 +609,7 @@ softfloat_tests = {
 # The full test suite can take a bit of time, default to a quick run
 # "-l 2 -r all" can take more than a day for some operations and is best
 # run manually
-fptest_args = ['-s', '-l', '1']
+fptest_args = ['-q', '-s', '-l', '1']
 fptest_rounding_args = ['-r', 'all']
 
 # Conversion Routines:
-- 
2.39.1




[PATCH v2 02/14] tests: don't run socat tests on MacOS as well

2023-02-21 Thread Alex Bennée
In preparation for the next patch, where we enable socat for our CI
images, we need to disable this part of the test for macOS. The bug has
been raised here:

  https://gitlab.com/qemu-project/qemu/-/issues/1495

Once that is fixed we should re-enable the test.

Signed-off-by: Alex Bennée 
Reviewed-by: Philippe Mathieu-Daudé 
Tested-by: Philippe Mathieu-Daudé 
---
 tests/unit/test-io-channel-command.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tests/unit/test-io-channel-command.c 
b/tests/unit/test-io-channel-command.c
index 425e2f5594..04b75ab3b4 100644
--- a/tests/unit/test-io-channel-command.c
+++ b/tests/unit/test-io-channel-command.c
@@ -31,7 +31,7 @@
 
 static char *socat = NULL;
 
-#ifndef _WIN32
+#if !defined(_WIN32) && !defined(CONFIG_DARWIN)
 static void test_io_channel_command_fifo(bool async)
 {
 g_autofree gchar *tmpdir = g_dir_make_tmp("qemu-test-io-channel.XX", 
NULL);
@@ -128,7 +128,7 @@ int main(int argc, char **argv)
 
 socat = g_find_program_in_path("socat");
 
-#ifndef _WIN32
+#if !defined(_WIN32) && !defined(CONFIG_DARWIN)
 g_test_add_func("/io/channel/command/fifo/sync",
 test_io_channel_command_fifo_sync);
 g_test_add_func("/io/channel/command/fifo/async",
-- 
2.39.1




[PATCH v2 04/14] tests: be a bit more strict cleaning up fifos

2023-02-21 Thread Alex Bennée
When we refactored we dropped the unlink() step, which turns out to be
required for rmdir() to succeed. If we had been checking the return
values we would have noticed, so let's do that with this fix.

Fixes: 68406d1085 (tests/unit: cleanups for test-io-channel-command)
Signed-off-by: Alex Bennée 
Suggested-by: Philippe Mathieu-Daudé 
---
 tests/unit/test-io-channel-command.c | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/tests/unit/test-io-channel-command.c 
b/tests/unit/test-io-channel-command.c
index 04b75ab3b4..c6e66a8c33 100644
--- a/tests/unit/test-io-channel-command.c
+++ b/tests/unit/test-io-channel-command.c
@@ -42,6 +42,7 @@ static void test_io_channel_command_fifo(bool async)
 g_auto(GStrv) dstargv = g_strsplit(dstargs, " ", -1);
 QIOChannel *src, *dst;
 QIOChannelTest *test;
+int err;
 
 if (mkfifo(fifo, 0600)) {
 g_error("mkfifo: %s", strerror(errno));
@@ -61,7 +62,10 @@ static void test_io_channel_command_fifo(bool async)
 object_unref(OBJECT(src));
 object_unref(OBJECT(dst));
 
-g_rmdir(tmpdir);
+err = g_unlink(fifo);
+g_assert(err == 0);
+err = g_rmdir(tmpdir);
+g_assert(err == 0);
 }
 
 static void test_io_channel_command_fifo_async(void)
-- 
2.39.1




[PATCH v2 06/14] gitlab-ci: Use artifacts instead of dumping logs in the Cirrus-CI jobs

2023-02-21 Thread Alex Bennée
From: Thomas Huth 

The meson log files can get very big, especially if running the tests in
verbose mode. So dumping those logs to the console was a bad idea, since
gitlab truncates the output if it is getting too big. Let's publish the
logs as artifacts instead. This has the disadvantage that you have to
look up the logs on cirrus-ci.com now instead, but that's still better
than not having the important part of the log at all since it got
truncated.

Fixes: 998f334722 ("gitlab: show testlog.txt contents ...")
Signed-off-by: Thomas Huth 
Reviewed-by: Daniel P. Berrangé 
Message-Id: <20230215142503.90660-1-th...@redhat.com>
Signed-off-by: Alex Bennée 
---
 .gitlab-ci.d/cirrus/build.yml | 7 +--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/.gitlab-ci.d/cirrus/build.yml b/.gitlab-ci.d/cirrus/build.yml
index 7ef6af8d33..a9444902ec 100644
--- a/.gitlab-ci.d/cirrus/build.yml
+++ b/.gitlab-ci.d/cirrus/build.yml
@@ -32,6 +32,9 @@ build_task:
 - $MAKE -j$(sysctl -n hw.ncpu)
 - for TARGET in $TEST_TARGETS ;
   do
-$MAKE -j$(sysctl -n hw.ncpu) $TARGET V=1
-|| { cat meson-logs/testlog.txt; exit 1; } ;
+$MAKE -j$(sysctl -n hw.ncpu) $TARGET V=1 ;
   done
+  always:
+build_result_artifacts:
+  path: build/meson-logs/*log.txt
+  type: text/plain
-- 
2.39.1




[PATCH v2 01/14] block: Handle curl 7.55.0, 7.85.0 version changes

2023-02-21 Thread Alex Bennée
From: Anton Johansson 

* 7.55.0 deprecates CURLINFO_CONTENT_LENGTH_DOWNLOAD in favour of a *_T
  version, which returns curl_off_t instead of a double.
* 7.85.0 deprecates CURLOPT_PROTOCOLS and CURLOPT_REDIR_PROTOCOLS in
  favour of *_STR variants, specifying the desired protocols via a
  string.

Signed-off-by: Anton Johansson 
Reviewed-by: Philippe Mathieu-Daudé 
Message-Id: <20230123201431.23118-1-a...@rev.ng>
[AJB: fixed author]
Signed-off-by: Alex Bennée 
---
 block/curl.c | 44 +---
 1 file changed, 37 insertions(+), 7 deletions(-)

diff --git a/block/curl.c b/block/curl.c
index cbada22e9e..8efd97b349 100644
--- a/block/curl.c
+++ b/block/curl.c
@@ -38,8 +38,15 @@
 
 // #define DEBUG_VERBOSE
 
+/* CURL 7.85.0 switches to a string based API for specifying
+ * the desired protocols.
+ */
+#if LIBCURL_VERSION_NUM >= 0x075500
+#define PROTOCOLS "HTTP,HTTPS,FTP,FTPS"
+#else
 #define PROTOCOLS (CURLPROTO_HTTP | CURLPROTO_HTTPS | \
CURLPROTO_FTP | CURLPROTO_FTPS)
+#endif
 
 #define CURL_NUM_STATES 8
 #define CURL_NUM_ACB8
@@ -510,9 +517,18 @@ static int curl_init_state(BDRVCURLState *s, CURLState 
*state)
  * obscure protocols.  For example, do not allow POP3/SMTP/IMAP see
  * CVE-2013-0249.
  *
- * Restricting protocols is only supported from 7.19.4 upwards.
+ * Restricting protocols is only supported from 7.19.4 upwards. Note:
+ * version 7.85.0 deprecates CURLOPT_*PROTOCOLS in favour of a string
+ * based CURLOPT_*PROTOCOLS_STR API.
  */
-#if LIBCURL_VERSION_NUM >= 0x071304
+#if LIBCURL_VERSION_NUM >= 0x075500
+if (curl_easy_setopt(state->curl,
+ CURLOPT_PROTOCOLS_STR, PROTOCOLS) ||
+curl_easy_setopt(state->curl,
+ CURLOPT_REDIR_PROTOCOLS_STR, PROTOCOLS)) {
+goto err;
+}
+#elif LIBCURL_VERSION_NUM >= 0x071304
 if (curl_easy_setopt(state->curl, CURLOPT_PROTOCOLS, PROTOCOLS) ||
 curl_easy_setopt(state->curl, CURLOPT_REDIR_PROTOCOLS, PROTOCOLS)) 
{
 goto err;
@@ -670,7 +686,12 @@ static int curl_open(BlockDriverState *bs, QDict *options, 
int flags,
 const char *file;
 const char *cookie;
 const char *cookie_secret;
-double d;
+/* CURL >= 7.55.0 uses curl_off_t for content length instead of a double */
+#if LIBCURL_VERSION_NUM >= 0x073700
+curl_off_t cl;
+#else
+double cl;
+#endif
 const char *secretid;
 const char *protocol_delimiter;
 int ret;
@@ -797,27 +818,36 @@ static int curl_open(BlockDriverState *bs, QDict 
*options, int flags,
 }
 if (curl_easy_perform(state->curl))
 goto out;
-if (curl_easy_getinfo(state->curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD, &d)) {
+/* CURL 7.55.0 deprecates CURLINFO_CONTENT_LENGTH_DOWNLOAD in favour of
+ * the *_T version which returns a more sensible type for content length.
+ */
+#if LIBCURL_VERSION_NUM >= 0x073700
+if (curl_easy_getinfo(state->curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD_T, 
&cl)) {
+goto out;
+}
+#else
+if (curl_easy_getinfo(state->curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD, &cl)) 
{
 goto out;
 }
+#endif
 /* Prior CURL 7.19.4 return value of 0 could mean that the file size is not
  * know or the size is zero. From 7.19.4 CURL returns -1 if size is not
  * known and zero if it is really zero-length file. */
 #if LIBCURL_VERSION_NUM >= 0x071304
-if (d < 0) {
+if (cl < 0) {
 pstrcpy(state->errmsg, CURL_ERROR_SIZE,
 "Server didn't report file size.");
 goto out;
 }
 #else
-if (d <= 0) {
+if (cl <= 0) {
 pstrcpy(state->errmsg, CURL_ERROR_SIZE,
 "Unknown file size or zero-length file.");
 goto out;
 }
 #endif
 
-s->len = d;
+s->len = cl;
 
 if ((!strncasecmp(s->url, "http://";, strlen("http://";))
 || !strncasecmp(s->url, "https://";, strlen("https://";)))
-- 
2.39.1




[RFC QEMU] docs: vhost-user: Add custom memory mapping support

2023-02-21 Thread Viresh Kumar
The current model of memory mapping at the back-end works fine with
QEMU, where a standard call to mmap() for the respective file
descriptor, passed from the front-end, is generally all we need to do
before the front-end can start accessing the guest memory.

There are other, more complex cases though, where we need more
information at the back-end and need to do more than just an mmap()
call. For example, Xen, a type-1 hypervisor, currently supports memory
mapping via two different methods, foreign-mapping (via /dev/privcmd)
and grant-dev (via /dev/gntdev). In both these cases, the back-end
needs to call mmap() followed by an ioctl() (or vice-versa), and needs
to pass extra information via the ioctl(), like the Xen domain-id of
the guest whose memory we are trying to map.

Add a new protocol feature, 'VHOST_USER_PROTOCOL_F_CUSTOM_MMAP', which
lets the back-end know about the additional memory mapping requirements.
When this feature is negotiated, the front-end can send the
'VHOST_USER_CUSTOM_MMAP' message type to provide the additional
information to the back-end.

Signed-off-by: Viresh Kumar 
---
 docs/interop/vhost-user.rst | 32 
 1 file changed, 32 insertions(+)

diff --git a/docs/interop/vhost-user.rst b/docs/interop/vhost-user.rst
index 3f18ab424eb0..f2b1d705593a 100644
--- a/docs/interop/vhost-user.rst
+++ b/docs/interop/vhost-user.rst
@@ -258,6 +258,23 @@ Inflight description
 
 :queue size: a 16-bit size of virtqueues
 
+Custom mmap description
+^^^
+
++---+---+
+| flags | value |
++---+---+
+
+:flags: 64-bit bit field
+
+- Bit 0 is Xen foreign memory access flag - needs Xen foreign memory mapping.
+- Bit 1 is Xen grant memory access flag - needs Xen grant memory mapping.
+
+:value: a 64-bit hypervisor specific value.
+
+- For Xen foreign or grant memory access, this is set with guest's xen domain
+  id.
+
 C structure
 ---
 
@@ -867,6 +884,7 @@ Protocol features
   #define VHOST_USER_PROTOCOL_F_INBAND_NOTIFICATIONS 14
   #define VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS  15
   #define VHOST_USER_PROTOCOL_F_STATUS   16
+  #define VHOST_USER_PROTOCOL_F_CUSTOM_MMAP  17
 
 Front-end message types
 ---
@@ -1422,6 +1440,20 @@ Front-end message types
   query the back-end for its device status as defined in the Virtio
   specification.
 
+``VHOST_USER_CUSTOM_MMAP``
+  :id: 41
+  :equivalent ioctl: N/A
+  :request payload: Custom mmap description
+  :reply payload: N/A
+
+  When the ``VHOST_USER_PROTOCOL_F_CUSTOM_MMAP`` protocol feature has been
+  successfully negotiated, this message is submitted by the front-end to
+  notify the back-end of the special memory mapping requirements, that the
+  back-end needs to take care of, while mapping any memory regions sent
+  over by the front-end. The front-end must send this message before
+  any memory-regions are sent to the back-end via ``VHOST_USER_SET_MEM_TABLE``
+  or ``VHOST_USER_ADD_MEM_REG`` message types.
+
 
 Back-end message types
 --
-- 
2.31.1.272.g89b43f80a514




[PATCH v2 11/14] tests: add tuxrun baseline test to avocado

2023-02-21 Thread Alex Bennée
The TuxRun project (www.tuxrun.org) uses QEMU to run tests on a wide
variety of kernel configurations on a wide range of our emulated
platforms. They publish a known-good set of images at:

  https://storage.tuxboot.com/

to help with bisecting regressions in either the kernel, firmware or
QEMU itself. The tests are pretty lightweight as they contain just a
kernel with a minimal rootfs which boots a lot faster than most of the
distros. In time they might be persuaded to version their known-good
baselines and we can then enable proper checksums.

For a couple of tests we currently skip:

  - mips64, a regression against the previous stable release
  - sh4, very unstable with intermittent oops

Total run time: 340s (default) -> 890s (debug)

Overall coverage rate (tested targets + disabled tests):
  lines..: 16.1% (126894 of 789848 lines)
  functions..: 20.6% (15954 of 77489 functions)
  branches...: 9.3% (40727 of 439365 branches)

Signed-off-by: Alex Bennée 
Cc: Anders Roxell 

---
v2
  - renamed to tuxrun_baselines, update commit message
  - add remaining targets
  - add more metadata tags for the differences
  - refactor tag code
  - skip mips64 and sh4 tests in CI
  - slightly increase delay for login
  - include in MAINTAINERS
---
 MAINTAINERS   |   1 +
 tests/avocado/tuxrun_baselines.py | 423 ++
 2 files changed, 424 insertions(+)
 create mode 100644 tests/avocado/tuxrun_baselines.py

diff --git a/MAINTAINERS b/MAINTAINERS
index 21595f0aad..d264f1d982 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3760,6 +3760,7 @@ F: scripts/ci/
 F: tests/docker/
 F: tests/vm/
 F: tests/lcitool/
+F: tests/avocado/tuxrun_baselines.py
 F: scripts/archive-source.sh
 F: docs/devel/testing.rst
 W: https://gitlab.com/qemu-project/qemu/pipelines
diff --git a/tests/avocado/tuxrun_baselines.py 
b/tests/avocado/tuxrun_baselines.py
new file mode 100644
index 0000000000..30aaefc1d3
--- /dev/null
+++ b/tests/avocado/tuxrun_baselines.py
@@ -0,0 +1,423 @@
+# Functional test that boots known good tuxboot images the same way
+# that tuxrun (www.tuxrun.org) does. This tool is used by things like
+# the LKFT project to run regression tests on kernels.
+#
+# Copyright (c) 2023 Linaro Ltd.
+#
+# Author:
+#  Alex Bennée 
+#
+# SPDX-License-Identifier: GPL-2.0-or-later
+
+import os
+import time
+
+from avocado import skip, skipIf
+from avocado_qemu import QemuSystemTest
+from avocado_qemu import exec_command, exec_command_and_wait_for_pattern
+from avocado_qemu import wait_for_console_pattern
+from avocado.utils import process
+from avocado.utils.path import find_command
+
+class TuxRunBaselineTest(QemuSystemTest):
+"""
+:avocado: tags=accel:tcg
+"""
+
+KERNEL_COMMON_COMMAND_LINE = 'printk.time=0'
+# Tests are ~10-40s, allow for --debug/--enable-gcov overhead
+timeout = 100
+
+def get_tag(self, tagname, default=None):
+"""
+Get the metadata tag or return the default.
+"""
+utag = self._get_unique_tag_val(tagname)
+print(f"{tagname}/{default} -> {utag}")
+if utag:
+return utag
+
+return default
+
+def setUp(self):
+super().setUp()
+
+# We need zstd for all the tuxrun tests
+# See https://github.com/avocado-framework/avocado/issues/5609
+zstd = find_command('zstd', False)
+if zstd is False:
+self.cancel('Could not find "zstd", which is required to '
+'decompress rootfs')
+self.zstd = zstd
+
+# Process the TuxRun specific tags, most machines work with
+# reasonable defaults but we sometimes need to tweak the
+# config. To avoid open coding everything we store all these
+# details in the metadata for each test.
+
+# The tuxboot tag matches the root directory
+self.tuxboot = self.get_tag('tuxboot')
+
+# Most Linux's use ttyS0 for their serial port
+self.console = self.get_tag('console', "ttyS0")
+
+# Does the machine shutdown QEMU nicely on "halt"
+self.shutdown = self.get_tag('shutdown')
+
+# The name of the kernel Image file
+self.image = self.get_tag('image', "Image")
+
+# The block device drive type
+self.drive = self.get_tag('drive', "virtio-blk-device")
+
+self.root = self.get_tag('root', "vda")
+
+# Occasionally we need extra devices to hook things up
+self.extradev = self.get_tag('extradev')
+
+def wait_for_console_pattern(self, success_message, vm=None):
+wait_for_console_pattern(self, success_message,
+ failure_message='Kernel panic - not syncing',
+ vm=vm)
+
+def fetch_tuxrun_assets(self, dt=None):
+"""
+Fetch the TuxBoot assets. They are stored in a standard way so we
+use the per-test tags to fetch details.
+"""
+base_url = f"https://storage.tuxboot.com/{

Re: [PATCH v2 07/21] include/exec/memattrs: Add two bits of space to MemTxAttrs

2023-02-21 Thread Richard Henderson

On 2/20/23 21:56, Philippe Mathieu-Daudé wrote:

'secure' & 'user' seem mutually exclusive. If we get short in bits,
they could be shared.


They are not.  ARM has Secure EL0, or secure user mode.


r~



Re: [PATCH v2 05/14] tests: make fp-test less chatty when running from test suite

2023-02-21 Thread Thomas Huth

On 21/02/2023 10.45, Alex Bennée wrote:

As we like to run tests under CI with V=1 flags the softfloat tests
can add up to a fair amount of extra log lines. With an update to the
testfloat library we can now call fp-test with the -q flag and reduce
the output to a terse one line per function tested.

   make check-softfloat V=1 | wc -l
   759

Signed-off-by: Alex Bennée 
---
  tests/fp/fp-test.c| 19 ++-
  tests/fp/berkeley-testfloat-3 |  2 +-
  tests/fp/meson.build  |  2 +-
  3 files changed, 16 insertions(+), 7 deletions(-)


FWIW, the testfloat counterpart can be seen here:

https://gitlab.com/qemu-project/berkeley-testfloat-3/-/commit/40619cbb3bf32872df8c53cc457039229428a263

Thanks for tackling this!

Reviewed-by: Thomas Huth 




Re: Detecting qemu from guest on arm/hvf (apple arm)

2023-02-21 Thread Daniel P . Berrangé
On Wed, Feb 15, 2023 at 03:48:46PM -0800, John-Mark Gurney wrote:
> Hello,
> 
> I was wondering what the best way to detect that FreeBSD is running
> under qemu/hvf on Apple ARM?  FreeBSD doesn't see the ACPI FADT table,
> so I'm wondering if keying off of something like the QEMU0002 device
> in ACPI is the best way?  Or is there another option?

Aside from Peter's suggestion to check for actual features that
matter, if you genuinely need to detect a specific platform,
the virt-what tool is generally what I point people to. It has
loads of checks in it. It can detect QEMU on aarch64, but doesn't
distinguish KVM unless SMBIOS is present too. The 'virt' machine
type will include SMBIOS, so I expect the KVM check will probably
trigger for HVF too.

  
http://git.annexia.org/?p=virt-what.git;a=blob;f=virt-what.in;h=01e9acaf689416d9bff6eaca1b849dc4e798a0af;hb=HEAD#l340

This is wrong from virt-what's POV, but I presume for your needs
you'd be happy to detect *any* hypervisor, as you'll want to tweak
the clock frequency for both HVF and KVM VMs.

If you do come up with any tweaks to it, patches would be welcome.

With regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|




Re: [PATCH v2 04/14] tests: be a bit more strict cleaning up fifos

2023-02-21 Thread Thomas Huth

On 21/02/2023 10.45, Alex Bennée wrote:

When we refactored we dropped the unlink() step, which turns out to be
required for rmdir() to succeed. If we had been checking the return
values we would have noticed, so let's do that with this fix.

Fixes: 68406d1085 (tests/unit: cleanups for test-io-channel-command)
Signed-off-by: Alex Bennée 
Suggested-by: Philippe Mathieu-Daudé 
---
  tests/unit/test-io-channel-command.c | 6 +-
  1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/tests/unit/test-io-channel-command.c 
b/tests/unit/test-io-channel-command.c
index 04b75ab3b4..c6e66a8c33 100644
--- a/tests/unit/test-io-channel-command.c
+++ b/tests/unit/test-io-channel-command.c
@@ -42,6 +42,7 @@ static void test_io_channel_command_fifo(bool async)
  g_auto(GStrv) dstargv = g_strsplit(dstargs, " ", -1);
  QIOChannel *src, *dst;
  QIOChannelTest *test;
+int err;
  
  if (mkfifo(fifo, 0600)) {

  g_error("mkfifo: %s", strerror(errno));
@@ -61,7 +62,10 @@ static void test_io_channel_command_fifo(bool async)
  object_unref(OBJECT(src));
  object_unref(OBJECT(dst));
  
-g_rmdir(tmpdir);

+err = g_unlink(fifo);
+g_assert(err == 0);
+err = g_rmdir(tmpdir);
+g_assert(err == 0);
  }
  
  static void test_io_channel_command_fifo_async(void)


Reviewed-by: Thomas Huth 

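The failure mode fixed above — g_rmdir() failing because the fifo was never unlinked — is easy to demonstrate outside QEMU with plain POSIX calls (a standalone sketch, not QEMU code):

```c
#include <assert.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

/* Remove a directory known to contain one fifo: unlink() first, then
 * rmdir(), checking both return values as the fix does. */
static int cleanup_fifo_dir(const char *dir, const char *fifo)
{
    /* rmdir() on a non-empty directory must fail... */
    if (rmdir(dir) == 0) {
        return -1;
    }
    if (errno != ENOTEMPTY && errno != EEXIST) {
        return -1;  /* POSIX permits either errno here */
    }
    /* ...so drop the fifo first, then the directory succeeds. */
    if (unlink(fifo) != 0) {
        return -1;
    }
    return rmdir(dir);
}
```

Skipping the unlink() leaves rmdir() returning -1/ENOTEMPTY — which went unnoticed in the original test precisely because the return value was ignored.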



Re: [PATCH v2 13/14] tests: ensure we export job results for some cross builds

2023-02-21 Thread Thomas Huth

On 21/02/2023 10.45, Alex Bennée wrote:

We do run tests on some cross builds. Provide a template to ensure we
export the testlog to the build artefacts and report the test results
via the junit.

Signed-off-by: Alex Bennée 
Reported-by: Peter Maydell 



Reviewed-by: Thomas Huth 




Re: [PATCH 1/2] qga/win32: Remove change action from MSI installer

2023-02-21 Thread Daniel P . Berrangé
On Tue, Feb 21, 2023 at 09:15:15AM +0100, Philippe Mathieu-Daudé wrote:
> On 20/2/23 18:41, Konstantin Kostiuk wrote:
> > resolves: rhbz#2167436
> 
> "You are not authorized to access bug #2167436."
> 
> > fixes: CVE-2023-0664
> 
> This commit description is rather scarce...
> 
> I understand you are trying to fix a CVE, but we shouldn't play
> the "security by obscurity" card. How can the community and
> distributions know this security fix is enough with the bare
> "Remove change action from MSI installer" justification?
> Can't we do better?

Yes, commit messages should always describe the problem being
solved directly. Bug trackers usually make people wade through
piles of irrelevant comments & potentially misleading blind
alleys during the back & forth of triage. The important info
needs to be distilled down and put in the commit message,
concisely describing the problem faced. Bug tracker links have
been known to bit-rot too.

The commit message needs to focus on /why/ the change was made,
much more than describing /what/ was changed.

> > Signed-off-by: Konstantin Kostiuk 
> > ---
> >   qga/installer/qemu-ga.wxs | 1 +
> >   1 file changed, 1 insertion(+)
> > 
> > diff --git a/qga/installer/qemu-ga.wxs b/qga/installer/qemu-ga.wxs
> > index 51340f7ecc..feb629ec47 100644
> > --- a/qga/installer/qemu-ga.wxs
> > +++ b/qga/installer/qemu-ga.wxs
> > @@ -31,6 +31,7 @@
> > />
> >> EmbedCab="yes" />
> >   1
> > +
> >> DowngradeErrorMessage="Error: A newer version of QEMU guest agent 
> > is already installed."
> > />

With regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|




Re: [PATCH v2 11/14] tests: add tuxrun baseline test to avocado

2023-02-21 Thread Thomas Huth

On 21/02/2023 10.45, Alex Bennée wrote:

The TuxRun project (www.tuxrun.org) uses QEMU to run tests on a wide
variety of kernel configurations on a wide range of our emulated
platforms. They publish a known good set of images at:

   https://storage.tuxboot.com/

to help with bisecting regressions in either the kernel, firmware or
QEMU itself. The tests are pretty lightweight as they contain just a
kernel with a minimal rootfs which boots a lot faster than most of the
distros. In time they might be persuaded to version there known good


"*their* known good baselines" ?


baselines and we can then enable proper checksums.

For a couple of tests we currently skip:

   - mips64, a regression against previous stable release
   - sh4, very unstable with intermittent oops

Total run time: 340s (default) -> 890s (debug)

Overall coverage rate (tested targets + disabled tests):
   lines..: 16.1% (126894 of 789848 lines)
   functions..: 20.6% (15954 of 77489 functions)
   branches...: 9.3% (40727 of 439365 branches)


Acked-by: Thomas Huth 




Re: [PATCH v2 01/15] util: drop qemu_fork()

2023-02-21 Thread Daniel P . Berrangé
On Tue, Feb 21, 2023 at 01:48:45AM +0400, marcandre.lur...@redhat.com wrote:
> From: Marc-André Lureau 
> 
> Fortunately, qemu_fork() is no longer used since commit
> a95570e3e4d6 ("io/command: use glib GSpawn, instead of open-coding
> fork/exec"). (GSpawn uses posix_spawn() whenever possible instead)
> 
> Signed-off-by: Marc-André Lureau 
> ---
>  include/qemu/osdep.h | 14 -
>  util/oslib-posix.c   | 70 
>  util/oslib-win32.c   |  9 --
>  3 files changed, 93 deletions(-)

Reviewed-by: Daniel P. Berrangé 


With regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|




Re: [PATCH] hw/arm/virt: Prevent CPUs in one socket to span mutiple NUMA nodes

2023-02-21 Thread Philippe Mathieu-Daudé

On 21/2/23 10:21, Gavin Shan wrote:

On 2/21/23 8:15 PM, Philippe Mathieu-Daudé wrote:

On 21/2/23 09:53, Gavin Shan wrote:

The Linux guest kernel reports a warning when two CPUs in one socket have
been associated with different NUMA nodes, using the following command
line:

   -smp 6,maxcpus=6,sockets=2,clusters=1,cores=3,threads=1 \
   -numa node,nodeid=0,cpus=0-1,memdev=ram0    \
   -numa node,nodeid=1,cpus=2-3,memdev=ram1    \
   -numa node,nodeid=2,cpus=4-5,memdev=ram2    \

   [ cut here ]
   WARNING: CPU: 0 PID: 1 at kernel/sched/topology.c:2271 
build_sched_domains+0x284/0x910

   Modules linked in:
   CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.14.0-268.el9.aarch64 #1
   pstate: 0045 (nzcv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
   pc : build_sched_domains+0x284/0x910
   lr : build_sched_domains+0x184/0x910
   sp : 8804bd50
   x29: 8804bd50 x28: 0002 x27: 
   x26: 89cf9a80 x25:  x24: 89cbf840
   x23: 80325000 x22: 005df800 x21: 8a4ce508
   x20:  x19: 80324440 x18: 0014
   x17: 388925c0 x16: 5386a066 x15: 9c10cc2e
   x14: 01c0 x13: 0001 x12: 7fffb1a0
   x11: 7fffb180 x10: 8a4ce508 x9 : 0041
   x8 : 8a4ce500 x7 : 8a4cf920 x6 : 0001
   x5 : 0001 x4 : 0007 x3 : 0002
   x2 : 1000 x1 : 8a4cf928 x0 : 0001
   Call trace:
    build_sched_domains+0x284/0x910
    sched_init_domains+0xac/0xe0
    sched_init_smp+0x48/0xc8
    kernel_init_freeable+0x140/0x1ac
    kernel_init+0x28/0x140
    ret_from_fork+0x10/0x20

Fix it by preventing multiple CPUs in one socket from being associated
with different NUMA nodes.

Reported-by: Yihuang Yu 
Signed-off-by: Gavin Shan 
---
  hw/arm/virt.c | 37 +
  1 file changed, 37 insertions(+)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index ac626b3bef..e0af267c77 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -230,6 +230,39 @@ static bool cpu_type_valid(const char *cpu)
  return false;
  }
+static bool numa_state_valid(MachineState *ms)
+{
+    MachineClass *mc = MACHINE_GET_CLASS(ms);
+    NumaState *state = ms->numa_state;
+    const CPUArchIdList *possible_cpus = mc->possible_cpu_arch_ids(ms);
+    const CPUArchId *cpus = possible_cpus->cpus;
+    int len = possible_cpus->len, i, j;
+
+    if (!state || state->num_nodes <= 1 || len <= 1) {
+    return true;
+    }
+
+    for (i = 0; i < len; i++) {
+    for (j = i + 1; j < len; j++) {
+    if (cpus[i].props.has_socket_id &&
+    cpus[i].props.has_node_id &&
+    cpus[j].props.has_socket_id &&
+    cpus[j].props.has_node_id &&
+    cpus[i].props.socket_id == cpus[j].props.socket_id &&
+    cpus[i].props.node_id != cpus[j].props.node_id) {
+    error_report("CPU-%d and CPU-%d in socket-%ld have been "
+                 "associated with node-%ld and node-%ld",
+                 i, j, cpus[i].props.socket_id,
+                 cpus[i].props.node_id,
+                 cpus[j].props.node_id);
+    return false;
+    }
+    }
+    }
+
+    return true;
+}
+
  static void create_randomness(MachineState *ms, const char *node)
  {
  struct {
@@ -2040,6 +2073,10 @@ static void machvirt_init(MachineState *machine)
  exit(1);
  }
+    if (!numa_state_valid(machine)) {
+    exit(1);
+    }


Why restrict to the virt machine?



We tried x86 machines and the virt machine, but the issue isn't
reproducible on x86 machines. So I think it's a machine- or
architecture-specific issue. However, I believe RISC-V should have a
similar issue because linux/drivers/base/arch_topology.c is shared by
ARM64 and RISC-V.

x86 doesn't use the driver to populate its CPU topology.


Oh, I haven't thought about the other archs, I meant this seems a generic
issue which affects all (ARM) machines, so why restrict to the (ARM)
virt machine?

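Incidentally, the pairwise loop in the patch above is O(n²) over possible CPUs. Since the invariant is just "each socket maps to exactly one node", the same validation can be sketched in O(n) with a socket-to-node map (simplified, hypothetical code: dense integer socket ids, none of the has_* flags):

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_SOCKETS 64

/* Return true iff no socket contains CPUs from two different NUMA nodes. */
static bool numa_mapping_valid(const int *socket_id, const int *node_id,
                               int num_cpus)
{
    int socket_node[MAX_SOCKETS];

    for (int s = 0; s < MAX_SOCKETS; s++) {
        socket_node[s] = -1;            /* socket not seen yet */
    }
    for (int i = 0; i < num_cpus; i++) {
        int s = socket_id[i];

        if (s < 0 || s >= MAX_SOCKETS) {
            return false;               /* out of range for this sketch */
        }
        if (socket_node[s] == -1) {
            socket_node[s] = node_id[i];    /* first CPU of this socket */
        } else if (socket_node[s] != node_id[i]) {
            return false;               /* split socket: the warning case */
        }
    }
    return true;
}
```

The layout from the commit message (2 sockets of 3 CPUs split across 3 nodes of 2 CPUs) fails this check: CPUs 1 and 2 share socket 0 but sit in nodes 0 and 1.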



Re: [PATCH v3 3/4] qga/vss-win32: fix warning for clang++-15

2023-02-21 Thread Philippe Mathieu-Daudé

On 20/2/23 12:12, Pierrick Bouvier wrote:

Reported when compiling with clang-windows-arm64.

../qga/vss-win32/install.cpp:537:9: error: variable 'hr' is used uninitialized 
whenever 'if' condition is false [-Werror,-Wsometimes-uninitialized]
 if (!(ControlService(service, SERVICE_CONTROL_STOP, NULL))) {
 ^~
../qga/vss-win32/install.cpp:545:12: note: uninitialized use occurs here
 return hr;
^~
Signed-off-by: Pierrick Bouvier 
---
  qga/vss-win32/install.cpp | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/qga/vss-win32/install.cpp b/qga/vss-win32/install.cpp
index b57508fbe0..b8087e5baa 100644
--- a/qga/vss-win32/install.cpp
+++ b/qga/vss-win32/install.cpp
@@ -518,7 +518,7 @@ namespace _com_util
  /* Stop QGA VSS provider service using Winsvc API  */
  STDAPI StopService(void)
  {
-HRESULT hr;
+HRESULT hr = S_OK;
  SC_HANDLE manager = OpenSCManager(NULL, NULL, SC_MANAGER_ALL_ACCESS);
  SC_HANDLE service = NULL;
  


Fixes: 917ebcb170 ("qga-win: Fix QGA VSS Provider service stop failure")
Reviewed-by: Philippe Mathieu-Daudé 



Re: [PATCH v2 06/21] gdbstub: move GDBState to shared internals header

2023-02-21 Thread Alex Bennée


Richard Henderson  writes:

> On 1/5/23 08:43, Alex Bennée wrote:
>> We are about to split softmmu and user mode helpers into different
>> files. To facilitate this we will need to share access to the GDBState
>> between those files.
>> To keep building we have to temporarily define CONFIG_USER_ONLY just
>> before we include internals.h for the user-mode side of things. This
>> will get removed once the state is fully moved.
>
> You don't have to have this hack if you don't ...
>
>> +typedef struct GDBState {
>> +bool init;   /* have we been initialised? */
>> +CPUState *c_cpu; /* current CPU for step/continue ops */
>> +CPUState *g_cpu; /* current CPU for other ops */
>> +CPUState *query_cpu; /* for q{f|s}ThreadInfo */
>> +enum RSState state; /* parsing state */
>> +char line_buf[MAX_PACKET_LENGTH];
>> +int line_buf_index;
>> +int line_sum; /* running checksum */
>> +int line_csum; /* checksum at the end of the packet */
>> +GByteArray *last_packet;
>> +int signal;
>> +#ifdef CONFIG_USER_ONLY
>> +GDBUserState user;
>> +#else
>> +GDBSystemState system;
>> +#endif
>
> ... nest these.  What's the point?

Well you end up having to ensure the chardev definitions are then
available in all files that include internals.h and I'm not sure that is
better:

--8<---cut here---start->8---
modified   gdbstub/internals.h
@@ -33,18 +33,16 @@ enum RSState {
 };
 
 /* Temporary home */
-#ifdef CONFIG_USER_ONLY
 typedef struct {
 int fd;
 char *socket_path;
 int running_state;
 } GDBUserState;
-#else
+
 typedef struct {
 CharBackend chr;
 Chardev *mon_chr;
 } GDBSystemState;
-#endif
 
 typedef struct GDBState {
 bool init;   /* have we been initialised? */
@@ -58,11 +56,8 @@ typedef struct GDBState {
 int line_csum; /* checksum at the end of the packet */
 GByteArray *last_packet;
 int signal;
-#ifdef CONFIG_USER_ONLY
 GDBUserState user;
-#else
 GDBSystemState system;
-#endif
 bool multiprocess;
 GDBProcess *processes;
 int process_num;
modified   gdbstub/gdbstub.c
@@ -48,6 +48,8 @@
 #include "exec/exec-all.h"
 #include "exec/hwaddr.h"
 #include "sysemu/replay.h"
+#include "chardev/char.h"
+#include "chardev/char-fe.h"
 
 #include "internals.h"
 
modified   gdbstub/user.c
@@ -13,8 +13,8 @@
 #include "exec/hwaddr.h"
 #include "exec/gdbstub.h"
 #include "hw/core/cpu.h"
-/* temp hack */
-#define CONFIG_USER_ONLY 1
+#include "chardev/char.h"
+#include "chardev/char-fe.h"
 #include "internals.h"
 
 bool gdb_supports_guest_debug(void)
--8<---cut here---end--->8---


>
>
> r~


-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro



Re: [PATCH v2 14/15] os-posix: remove useless ioctlsocket() define

2023-02-21 Thread Daniel P . Berrangé
On Tue, Feb 21, 2023 at 01:48:58AM +0400, marcandre.lur...@redhat.com wrote:
> From: Marc-André Lureau 
> 
> The API is specific to win32.
> 
> Signed-off-by: Marc-André Lureau 
> ---
>  include/sysemu/os-posix.h | 1 -
>  1 file changed, 1 deletion(-)

Reviewed-by: Daniel P. Berrangé 


With regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|




Re: [PATCH v3 2/4] sysemu/os-win32: fix setjmp/longjmp on windows-arm64

2023-02-21 Thread Philippe Mathieu-Daudé

On 21/2/23 10:47, Pierrick Bouvier wrote:
@Philippe Mathieu-Daudé, is that version satisfying for you, regarding 
your initial comments about setjmp/longjmp detection in meson?


Yes, the meson check is what I had in mind.


I can integrate more changes if needed.

Thanks,
Pierrick

On 2/20/23 16:16, Pierrick Bouvier wrote:

Acked-by: Richard Henderson 

On 2/20/23 12:12, Pierrick Bouvier wrote:

Windows implementation of setjmp/longjmp is done in
C:/WINDOWS/system32/ucrtbase.dll. Alas, on arm64, it seems to *always*
perform stack unwinding, which crashes from generated code.

By using the alternative implementation built into mingw, we avoid doing
stack unwinding, and this fixes the crash when calling longjmp.

Signed-off-by: Pierrick Bouvier 
---
   include/sysemu/os-win32.h | 25 +++--
   meson.build   | 24 
   2 files changed, 47 insertions(+), 2 deletions(-)

diff --git a/include/sysemu/os-win32.h b/include/sysemu/os-win32.h
index 5b38c7bd04..1f6c141d39 100644
--- a/include/sysemu/os-win32.h
+++ b/include/sysemu/os-win32.h
@@ -51,14 +51,35 @@ typedef struct sockaddr_un {
   extern "C" {
   #endif
-#if defined(_WIN64)
+#if defined(__aarch64__)
+#ifndef CONFIG_MINGW64_HAS_SETJMP_LONGJMP
+#error mingw must provide setjmp/longjmp for windows-arm64


Per the meson error [*], this now seems impossible, thus we can
simply drop the CONFIG_MINGW64_HAS_SETJMP_LONGJMP definition?


+#endif
+/*
+ * On windows-arm64, setjmp is available in only one variant, and longjmp
+ * always does stack unwinding. This crashes with generated code.
+ * Thus, we use another implementation of setjmp (not the windows one),
+ * coming from mingw, which never performs stack unwinding.
+ */
+#undef setjmp
+#undef longjmp
+/*
+ * These functions are not declared in setjmp.h because __aarch64__ defines
+ * setjmp to _setjmpex instead. However, they are still defined in
+ * libmingwex.a, which gets linked automatically.
+ */
+extern int __mingw_setjmp(jmp_buf);
+extern void __attribute__((noreturn)) __mingw_longjmp(jmp_buf, int);
+#define setjmp(env) __mingw_setjmp(env)
+#define longjmp(env, val) __mingw_longjmp(env, val)
+#elif defined(_WIN64)
   /* On w64, setjmp is implemented by _setjmp which needs a second parameter.
    * If this parameter is NULL, longjump does no stack unwinding.
    * That is what we need for QEMU. Passing the value of register rsp (default)
    * lets longjmp try a stack unwinding which will crash with generated code. */

   # undef setjmp
   # define setjmp(env) _setjmp(env, NULL)
-#endif
+#endif /* __aarch64__ */


This comment doesn't seem accurate. Maybe "64-bit"?


   /* QEMU uses sigsetjmp()/siglongjmp() as the portable way to specify
    * "longjmp and don't touch the signal masks". Since we know that the
    * savemask parameter will always be zero we can safely define these
diff --git a/meson.build b/meson.build
index 4ba3bf3431..e968ed9e7a 100644
--- a/meson.build
+++ b/meson.build
@@ -2450,6 +2450,30 @@ if targetos == 'windows'
   }''', name: '_lock_file and _unlock_file'))
   endif
+if targetos == 'windows'
+  mingw_has_setjmp_longjmp = cc.links('''
+    #include <setjmp.h>
+    int main(void) {
+  /*
+   * These functions are not available in setjmp header, but may be
+   * available at link time, from libmingwex.a.
+   */
+  extern int __mingw_setjmp(jmp_buf);
+  extern int __mingw_setjmp(jmp_buf);
+  extern void __attribute__((noreturn)) __mingw_longjmp(jmp_buf, int);
+  jmp_buf env;
+  __mingw_setjmp(env);
+  __mingw_longjmp(env, 0);
+    }
+  ''', name: 'mingw setjmp and longjmp')
+
+  config_host_data.set('CONFIG_MINGW64_HAS_SETJMP_LONGJMP',
+   mingw_has_setjmp_longjmp)
+
+  if cpu == 'aarch64' and not mingw_has_setjmp_longjmp
+    error('mingw must provide setjmp/longjmp for windows-arm64')


(This is the [*] error I mentioned earlier).


+  endif
+endif
+
   
   # Target configuration #
   


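For context on why unwinding matters in this thread: QEMU's execution loop relies on longjmp() transferring control straight back to the setjmp() point, skipping over frames created by generated (TCG) code that no unwinder can describe. A portable C sketch of that control-flow shape (toy names, no QEMU internals):

```c
#include <assert.h>
#include <setjmp.h>

static jmp_buf exec_env;

/* Stands in for a translation block that raises an exception: control
 * must return to the loop without an unwinder walking this frame. */
static void generated_code_stub(void)
{
    longjmp(exec_env, 1);
}

static int run_loop(void)
{
    volatile int exits = 0;   /* volatile: value survives the longjmp */

    if (setjmp(exec_env) != 0) {
        exits++;              /* arrived here via longjmp */
    }
    if (exits == 0) {
        generated_code_stub();
    }
    return exits;
}
```

An unwinding longjmp (the ucrtbase behaviour on windows-arm64 described above) would try to interpret the stub's frame metadata and crash; the __mingw_* pair simply restores the saved registers.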



Re: [PATCH v2 06/21] gdbstub: move GDBState to shared internals header

2023-02-21 Thread Richard Henderson

On 2/21/23 00:24, Alex Bennée wrote:


Richard Henderson  writes:


On 1/5/23 08:43, Alex Bennée wrote:

We are about to split softmmu and user mode helpers into different
files. To facilitate this we will need to share access to the GDBState
between those files.
To keep building we have to temporarily define CONFIG_USER_ONLY just
before we include internals.h for the user-mode side of things. This
will get removed once the state is fully moved.


You don't have to have this hack if you don't ...


+typedef struct GDBState {
+bool init;   /* have we been initialised? */
+CPUState *c_cpu; /* current CPU for step/continue ops */
+CPUState *g_cpu; /* current CPU for other ops */
+CPUState *query_cpu; /* for q{f|s}ThreadInfo */
+enum RSState state; /* parsing state */
+char line_buf[MAX_PACKET_LENGTH];
+int line_buf_index;
+int line_sum; /* running checksum */
+int line_csum; /* checksum at the end of the packet */
+GByteArray *last_packet;
+int signal;
+#ifdef CONFIG_USER_ONLY
+GDBUserState user;
+#else
+GDBSystemState system;
+#endif


... nest these.  What's the point?


Well you end up having to ensure the chardev definitions are then
available in all files that include internals.h and I'm not sure that is
better:

--8<---cut here---start->8---
modified   gdbstub/internals.h
@@ -33,18 +33,16 @@ enum RSState {
  };
  
  /* Temporary home */

-#ifdef CONFIG_USER_ONLY
  typedef struct {
  int fd;
  char *socket_path;
  int running_state;
  } GDBUserState;
-#else
+
  typedef struct {
  CharBackend chr;
  Chardev *mon_chr;
  } GDBSystemState;
-#endif


No, these typedefs and their data can be completely private to the two 
implementations.


r~



Re: [PATCH v2 04/14] tests: be a bit more strict cleaning up fifos

2023-02-21 Thread Philippe Mathieu-Daudé

On 21/2/23 10:45, Alex Bennée wrote:

When we re-factored we dropped the unlink() step which turns out to be
required for rmdir to do its thing. If we had been checking the return
value we would have noticed, so let's do that with this fix.

Fixes: 68406d1085 (tests/unit: cleanups for test-io-channel-command)
Signed-off-by: Alex Bennée 
Suggested-by: Philippe Mathieu-Daudé 
---
  tests/unit/test-io-channel-command.c | 6 +-
  1 file changed, 5 insertions(+), 1 deletion(-)


Per v1
https://lore.kernel.org/qemu-devel/20230215192530.299263-5-alex.ben...@linaro.org/

Reviewed-by: Philippe Mathieu-Daudé 
Reviewed-by: Richard Henderson 
Reviewed-by: Thomas Huth 




Re: [PATCH 3/4] win32: stop mixing SOCKET and file descriptor space

2023-02-21 Thread Marc-André Lureau
Hi

On Tue, Feb 21, 2023 at 1:13 PM Marc-André Lureau
 wrote:
>
> Hi
>
> On Tue, Feb 21, 2023 at 12:18 PM Paolo Bonzini  wrote:
>>
>> On 2/20/23 16:29, Marc-André Lureau wrote:
>> >> 7. A Windows SOCKET is also a HANDLE.  Maybe.  I guess.  Docs are
>> >> confusing.
>> >>
>> > Kind of, but not really. I think a HANDLE is a kind of void*. You need to
>> > be careful using it appropriately with the right functions. Sometimes, a
>> > HANDLE can work with generic functions, like ReadFile, but you should not
>> > use CloseHandle on a SOCKET, or a registry key.
>>
>> A Windows SOCKET *is* a file HANDLE except it's always in overlapped
>> mode so Windows provides send()/recv() in case you don't want to deal
>> with overlapped mode.  But you can use it with ReadFile too (Windows API
>> documentation says that is only true sometimes, but Winsock API is 30
>> years old and right now you pretty much always can).
>>
>> However, sockets also has some extra information on the side, so you
>> need to close them with closesocket() and CloseHandle() is not enough.
>
>
> Yeah, the question is "is it safe to call CloseHandle() on a SOCKET, before 
> closesocket()". Testing/error checking seems to say it's okay.. I wouldn't be 
> surprised if internally the CloseHandle() function does something to check if 
> the given handle is a SOCKET and skip it. I wish they would document it..
>
>>
>> The problem is that close() of something opened with _open_osfhandle()
>> *does* do that CloseHandle(), so basically you are closing the handle
>> twice.  IIRC there used to be undocumented functions _alloc_osfhnd() and
>> similar, but they don't exist anymore (even Wine does not have them), so
>> we're stuck; unfortunately this is the reason why QEMU is not already
>> doing something like what you have in this patch.
>>
>> Is this a real bug or is it theoretical?  Do file descriptor and socket
>> spaces overlap in practice?
>>
>
> Yes it likely can, the first SOCKET value starts at 92 in a simple test. It 
> looks like it may depend on the system number of opened sockets.
>
> I think the second big issue is that we have many places where we assume a fd 
> is a fd, and we simply call close() (which would result in CloseHandle, but 
> missing closesocket).
>
> sigh, if the CRT would allow us to steal the handle back..
>

I found an interesting option here, using HANDLE_FLAG_PROTECT_FROM_CLOSE:
https://github.com/ksmurph1/VulkanConfigurator/blob/986992a8b963a6b271785a77d5efd349b6e6ea4f/src/folly/src/net/detail/SocketFileDescriptorMap.cpp#L36


And also an old KB:
https://jeffpar.github.io/kbarchive/kb/185/Q185727/
When _open_osfhandle is used on a socket descriptor, both _close() and
closesocket() should be called before exiting. _close() only closes the file
handle. closesocket() has to be called as well to close the socket descriptor
and clean up the underlying socket object.

I'll work on a v3 of the patch/series with the flag trick.




-- 
Marc-André Lureau

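The "flag trick" from the folly link, as I understand it, looks roughly like this (a hedged sketch of Windows-only semantics, not a tested implementation; the non-Windows stub exists only so the file compiles anywhere): protect the underlying HANDLE so the CloseHandle() buried inside the CRT's _close() becomes a harmless failure, then closesocket() it properly.

```c
#include <assert.h>

#ifdef _WIN32
#include <winsock2.h>
#include <windows.h>
#include <io.h>

/* Close a CRT fd created with _open_osfhandle() on a SOCKET without
 * double-closing the handle (assumption: folly's trick applies as-is). */
static int close_socket_fd(int fd)
{
    SOCKET s = (SOCKET)_get_osfhandle(fd);

    /* Make the CloseHandle() inside _close() fail harmlessly. */
    SetHandleInformation((HANDLE)s, HANDLE_FLAG_PROTECT_FROM_CLOSE,
                         HANDLE_FLAG_PROTECT_FROM_CLOSE);
    _close(fd);                        /* frees only the fd slot */
    SetHandleInformation((HANDLE)s, HANDLE_FLAG_PROTECT_FROM_CLOSE, 0);
    return closesocket(s);             /* real close + socket cleanup */
}
#else
static int close_socket_fd(int fd)
{
    (void)fd;   /* POSIX has no fd/SOCKET split; nothing to demonstrate */
    return 0;
}
#endif
```

On Windows this needs -lws2_32, and per the KB article quoted above both _close() and closesocket() must run before exit; the flag only controls which of them actually tears the handle down.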


Re: [PATCH v2 07/21] include/exec/memattrs: Add two bits of space to MemTxAttrs

2023-02-21 Thread Philippe Mathieu-Daudé

On 21/2/23 11:01, Richard Henderson wrote:

On 2/20/23 21:56, Philippe Mathieu-Daudé wrote:

'secure' & 'user' seem mutually exclusive. If we get short in bits,
they could be shared.


They are not.  ARM has Secure EL0, or secure user mode.


Oh, I misunderstood this field with user-emulation then (I tried
commenting it and my TCG/HVF build succeeded).




Re: [PATCH v2 07/21] include/exec/memattrs: Add two bits of space to MemTxAttrs

2023-02-21 Thread Richard Henderson

On 2/21/23 00:42, Philippe Mathieu-Daudé wrote:

On 21/2/23 11:01, Richard Henderson wrote:

On 2/20/23 21:56, Philippe Mathieu-Daudé wrote:

'secure' & 'user' seem mutually exclusive. If we get short in bits,
they could be shared.


They are not.  ARM has Secure EL0, or secure user mode.


Oh, I misunderstood this field with user-emulation then (I tried
commenting it and my TCG/HVF build succeeded).



target/arm/ptw.c:2853:result->f.attrs.user = regime_is_user(env, mmu_idx);

So... it shouldn't have built?

r~



Re: [PATCH v2 07/14] gitlab: extend custom runners with base_job_template

2023-02-21 Thread Thomas Huth

On 21/02/2023 10.45, Alex Bennée wrote:

The base job template is responsible for controlling how we kick off
testing on our various branches. Rename and extend the
custom_runner_template so we can take advantage of all that control.

Signed-off-by: Alex Bennée 
---
  .gitlab-ci.d/custom-runners.yml  |  3 ++-
  .gitlab-ci.d/custom-runners/ubuntu-20.04-s390x.yml   | 10 +-
  .gitlab-ci.d/custom-runners/ubuntu-22.04-aarch32.yml |  2 +-
  .gitlab-ci.d/custom-runners/ubuntu-22.04-aarch64.yml | 10 +-
  4 files changed, 13 insertions(+), 12 deletions(-)


From v1:
Reviewed-by: Thomas Huth 




Re: [PATCH 3/4] win32: stop mixing SOCKET and file descriptor space

2023-02-21 Thread Paolo Bonzini

On 2/21/23 11:40, Marc-André Lureau wrote:

Yes it likely can, the first SOCKET value starts at 92 in a simple
test. It looks like it may depend on the system number of opened
sockets.

I think the second big issue is that we have many places where we
assume a fd is a fd, and we simply call close() (which would result
in CloseHandle, but missing closesocket).

sigh, if the CRT would allow us to steal the handle back..

I found an interesting option here, using HANDLE_FLAG_PROTECT_FROM_CLOSE: 
https://github.com/ksmurph1/VulkanConfigurator/blob/986992a8b963a6b271785a77d5efd349b6e6ea4f/src/folly/src/net/detail/SocketFileDescriptorMap.cpp#L36

Wow, that's the ugliest thing ever but it seems to be made for this. :)

Paolo




Re: [PATCH v2 08/14] tests: don't run benchmarks for the tsan build

2023-02-21 Thread Thomas Huth

On 21/02/2023 10.45, Alex Bennée wrote:

All we are really doing here is checking that TSAN builds compile and are
therefore a tool available to developers. The benchmarks are not
representative of QEMU's actual threading behaviour and they burn
precious CI time. Indeed switching to check-unit reveals many
unaddressed issues which have been logged at:

   https://gitlab.com/qemu-project/qemu/-/issues/1496

So for now disable the make check and make this a build only
test.

Signed-off-by: Alex Bennée 
---
  .gitlab-ci.d/buildtest.yml | 1 -
  1 file changed, 1 deletion(-)

diff --git a/.gitlab-ci.d/buildtest.yml b/.gitlab-ci.d/buildtest.yml
index 8f332fc36f..e9a67e0191 100644
--- a/.gitlab-ci.d/buildtest.yml
+++ b/.gitlab-ci.d/buildtest.yml
@@ -465,7 +465,6 @@ tsan-build:
  CONFIGURE_ARGS: --enable-tsan --cc=clang-10 --cxx=clang++-10
--enable-trace-backends=ust --enable-fdt=system --disable-slirp
  TARGETS: x86_64-softmmu ppc64-softmmu riscv64-softmmu x86_64-linux-user
-MAKE_CHECK_ARGS: bench V=1
  
  # gcov is a GCC features

  gcov:


Reviewed-by: Thomas Huth 




Re: [PATCH v2 09/14] testing: update ubuntu2004 to ubuntu2204

2023-02-21 Thread Thomas Huth

On 21/02/2023 10.45, Alex Bennée wrote:

The 22.04 LTS release has been out for almost a year now, so it's time
to update all the remaining images to the current LTS. We can also
drop some hacks we need for older clang TSAN support.

We will keep the ubuntu2004 container around for those who wish to
test builds on the currently still supported baseline.

Signed-off-by: Alex Bennée 
Reviewed-by: John Snow 


Reviewed-by: Thomas Huth 




Re: [PATCH] block/mirror: add 'write-blocking-after-ready' copy mode

2023-02-21 Thread Fiona Ebner
Am 14.02.23 um 17:19 schrieb Vladimir Sementsov-Ogievskiy:
> On 02.02.23 16:27, Fiona Ebner wrote:
>> Am 02.02.23 um 12:34 schrieb Kevin Wolf:
>>> But having to switch the mirror job to sync mode just to avoid doing I/O
>>> on an inactive device sounds wrong to me. It doesn't fix the root cause
>>> of that problem, but just papers over it.
>>
>> If you say the root cause is "the job not being completed before
>> switchover", then yes. But if the root cause is "switchover happening
>> while the drive is not actively synced", then a way to switch modes can
>> fix the root cause :)
>>
>>>
>>> Why does your management tool not complete the mirror job before it
>>> does the migration switchover that inactivates images?
>>
>> I did talk with my team leader about the possibility, but we decided to
>> not go for it, because it requires doing the migration in two steps with
>> pause-before-switchover and has the potential to increase guest downtime
>> quite a bit. So I went for this approach instead.
>>
> 
> 
> Interesting point. Maybe we need a way to automatically complete all the
> jobs before switchover?  There seems to be no reason to break the jobs if
> the user didn't cancel them (and of course no reason to allow a code path
> leading to an assertion).
> 

Wouldn't that be a bit unexpected? There could be jobs unrelated to
migration or jobs at early stages. But sure, being able to trigger the
assertion is not nice.

Potential alternatives could be pausing the jobs or failing migration
with a clean error?

For us, the former is still best done in combination with a way to
switch to active (i.e. write-blocking) mode for drive-mirror.

The latter would force us to complete the drive-mirror job before
switchover even with active (i.e. write-blocking) mode, breaking our
usage of drive-mirror+migration that worked (in almost all cases, but it
would have been all cases if we had used active mode ;)) for many years now.

Maybe adding an option for how the jobs should behave upon switchover
(e.g. complete/pause/cancel/cancel-migration) could help? Either as a
job-specific option (more flexible) or a migration option?

CC-ing migration maintainers

Best Regards,
Fiona




Re: [PATCH v2 01/14] block: Handle curl 7.55.0, 7.85.0 version changes

2023-02-21 Thread Kevin Wolf
Am 21.02.2023 um 10:45 hat Alex Bennée geschrieben:
> From: Anton Johansson 
> 
> * 7.55.0 deprecates CURLINFO_CONTENT_LENGTH_DOWNLOAD in favour of a *_T
>   version, which returns curl_off_t instead of a double.
> * 7.85.0 deprecates CURLOPT_PROTOCOLS and CURLOPT_REDIR_PROTOCOLS in
>   favour of *_STR variants, specifying the desired protocols via a
>   string.
> 
> Signed-off-by: Anton Johansson 
> Reviewed-by: Philippe Mathieu-Daudé 
> Message-Id: <20230123201431.23118-1-a...@rev.ng>
> [AJB: fixed author]
> Signed-off-by: Alex Bennée 

I assume you only CCed me for this patch? It has arrived in git master
meanwhile, so when you rebase, it should just be dropped.

Kevin




Re: [PATCH v2 06/21] gdbstub: move GDBState to shared internals header

2023-02-21 Thread Alex Bennée


Richard Henderson  writes:

> On 2/21/23 00:24, Alex Bennée wrote:
>> Richard Henderson  writes:
>> 
>>> On 1/5/23 08:43, Alex Bennée wrote:
 We are about to split softmmu and user mode helpers into different
 files. To facilitate this we will need to share access to the GDBState
 between those files.
 To keep building we have to temporarily define CONFIG_USER_ONLY just
 before we include internals.h for the user-mode side of things. This
 will get removed once the state is fully moved.
>>>
>>> You don't have to have this hack if you don't ...
>>>
 +typedef struct GDBState {
 +bool init;   /* have we been initialised? */
 +CPUState *c_cpu; /* current CPU for step/continue ops */
 +CPUState *g_cpu; /* current CPU for other ops */
 +CPUState *query_cpu; /* for q{f|s}ThreadInfo */
 +enum RSState state; /* parsing state */
 +char line_buf[MAX_PACKET_LENGTH];
 +int line_buf_index;
 +int line_sum; /* running checksum */
 +int line_csum; /* checksum at the end of the packet */
 +GByteArray *last_packet;
 +int signal;
 +#ifdef CONFIG_USER_ONLY
 +GDBUserState user;
 +#else
 +GDBSystemState system;
 +#endif
>>>
>>> ... nest these.  What's the point?
>> Well you end up having to ensure the chardev definitions are then
>> available in all files that include internals.h and I'm not sure that is
>> better:
>> --8<---cut here---start->8---
>> modified   gdbstub/internals.h
>> @@ -33,18 +33,16 @@ enum RSState {
>>   };
>> /* Temporary home */
>> -#ifdef CONFIG_USER_ONLY
>>   typedef struct {
>>   int fd;
>>   char *socket_path;
>>   int running_state;
>>   } GDBUserState;
>> -#else
>> +
>>   typedef struct {
>>   CharBackend chr;
>>   Chardev *mon_chr;
>>   } GDBSystemState;
>> -#endif
>
> No, these typedefs and their data can be completely private to the two
> implementations.

Ahh right, so I've split that up now in:

  gdbstub: define separate user/system structures

and its made the subsequent patches much cleaner

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro



Re: [PATCH v3 2/6] configure: Add courtesy hint to Python version failure message

2023-02-21 Thread Paolo Bonzini

On 2/21/23 02:24, John Snow wrote:

If we begin requiring Python 3.7+, a few platforms are going to need to
install an additional Python interpreter package.

As a courtesy to the user, suggest the optional package they might need
to install. This will hopefully minimize any downtime caused by the
change in Python dependency.

Signed-off-by: John Snow
---
  configure | 5 -
  1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/configure b/configure
index 6abf5a72078..0d0cca53f09 100755
--- a/configure
+++ b/configure
@@ -1059,7 +1059,10 @@ fi
  
  if ! check_py_version "$python"; then

error_exit "Cannot use '$python', Python >= 3.6 is required." \
-  "Use --python=/path/to/python to specify a supported Python."
+ "Use --python=/path/to/python to specify a supported Python." \
+ "Maybe try:" \
+ "  openSUSE Leap 15.3+: zypper install python39" \
+ "  CentOS 8: dnf install python38"
  fi
  
  # Suppress writing compiled files

-- 
2.39.0


Let's delay this past the actual patch to introduce 3.7.

Paolo




Re: [PATCH v3 1/6] configure: Look for auxiliary Python installations

2023-02-21 Thread Paolo Bonzini

On 2/21/23 02:24, John Snow wrote:

At the moment, we look for just "python3" and "python", which is good
enough almost all of the time. But ... if you are on a platform that
uses an older Python by default and only offers a newer Python as an
option, you'll have to specify --python=/usr/bin/foo every time.

As a courtesy, we can make a cursory attempt to locate a suitable Python
binary ourselves, looking for the remaining well-known binaries. This
also has the added benefit of making configure "just work" more often
on various BSD distributions that do not have the concept of a
"platform default python".

This configure loop will prefer, in order:

1. Whatever is specified in $PYTHON
2. python3
3. python (which is usually 2.x, but might be 3.x on some platforms)
4. python3.11 down through python3.6

Notes:

- Python virtual environments provide binaries for "python3", "python",
   and whichever version you used to create the venv,
   e.g. "python3.8". If configure is invoked from inside of a venv, this
   configure loop will not "break out" of that venv unless that venv is
   created using an explicitly non-suitable version of Python that we
   cannot use.

- In the event that no suitable python is found, the first python found
   is the version used to generate the human-readable error message.

- The error message isn't printed right away to allow later
   configuration code to pick up an explicitly configured python.

Signed-off-by: John Snow 
---
  configure | 34 ++
  1 file changed, 26 insertions(+), 8 deletions(-)

diff --git a/configure b/configure
index cf6db3d5518..6abf5a72078 100755
--- a/configure
+++ b/configure
@@ -592,20 +592,40 @@ esac
  
  : ${make=${MAKE-make}}
  
-# We prefer python 3.x. A bare 'python' is traditionally

-# python 2.x, but some distros have it as python 3.x, so
-# we check that too
+
+check_py_version() {
+# We require python >= 3.6.
+# NB: a True python conditional creates a non-zero return code (Failure)
+"$1" -c 'import sys; sys.exit(sys.version_info < (3,6))'
+}
+
  python=
+first_python=
  explicit_python=no
-for binary in "${PYTHON-python3}" python
+# Check for $PYTHON, python3, python, then explicitly-versioned interpreters.
+for binary in "${PYTHON-python3}" ${PYTHON:+python3} python \
+  python3.11 python3.10 python3.9 \
+  python3.8 python3.7 python3.6


I think if PYTHON is set we shouldn't look at anything else.

Paolo


  do
  if has "$binary"
  then
  python=$(command -v "$binary")
-break
+if test -z "$first_python"; then
+   first_python=$python
+fi
+if check_py_version "$python"; then
+# This one is good.
+first_python=
+break
+fi
  fi
  done
  
+# If first_python is set, we didn't find a suitable binary.

+# Use this one for possible future error messages.
+if test -n "$first_python"; then
+python="$first_python"
+fi
  
  # Check for ancillary tools used in testing

  genisoimage=
@@ -1037,9 +1057,7 @@ then
  error_exit "GNU make ($make) not found"
  fi
  
-# Note that if the Python conditional here evaluates True we will exit

-# with status 1 which is a shell 'false' value.
-if ! $python -c 'import sys; sys.exit(sys.version_info < (3,6))'; then
+if ! check_py_version "$python"; then
error_exit "Cannot use '$python', Python >= 3.6 is required." \
"Use --python=/path/to/python to specify a supported Python."
  fi
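(The search-order and fallback semantics of the configure loop above can be modeled in a few lines of Python. The helper names `pick_python` and `version_of` are illustrative only; configure itself does all of this in shell:)

```python
def pick_python(candidates, version_of, minimum=(3, 6)):
    """Mirror configure's loop: try each candidate in order, remember the
    first interpreter that exists, and return the first one whose version
    meets the minimum.  If none is suitable, fall back to the first one
    found, so the error message can name a concrete interpreter."""
    first_found = None
    for name in candidates:
        version = version_of(name)
        if version is None:          # binary not installed ("has" fails)
            continue
        if first_found is None:
            first_found = name
        if not (version < minimum):  # same comparison check_py_version makes
            return name, True
    return first_found, False

# A host with only an old default python and a versioned newer one:
versions = {"python": (2, 7), "python3.8": (3, 8)}
candidates = ["python3", "python", "python3.11", "python3.10", "python3.9",
              "python3.8", "python3.7", "python3.6"]
print(pick_python(candidates, versions.get))  # ('python3.8', True)
```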





[PATCH v4] configure: Add 'mkdir build' check

2023-02-21 Thread Dinah Baum
The QEMU configure script goes into an infinite error-printing loop
when run in a read-only directory, because the 'build' dir is never created.

Checking if 'mkdir dir' succeeds prevents this error.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/321
Signed-off-by: Dinah Baum 
Reviewed-by: Peter Maydell 
---
 configure | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/configure b/configure
index cf6db3d551..1ef3e7f77d 100755
--- a/configure
+++ b/configure
@@ -31,7 +31,12 @@ then
 fi
 fi
 
-mkdir build
+if ! mkdir build || ! touch $MARKER
+then
+echo "ERROR: Could not create ./build directory. Check the permissions on"
+echo "your source directory, or try doing an out-of-tree build."
+exit 1
+fi
 touch $MARKER
 
 cat > GNUmakefile <<'EOF'
-- 
2.30.2




Re: [PATCH v2 01/14] block: Handle curl 7.55.0, 7.85.0 version changes

2023-02-21 Thread Alex Bennée


Kevin Wolf  writes:

> On 21.02.2023 at 10:45, Alex Bennée wrote:
>> From: Anton Johansson 
>> 
>> * 7.55.0 deprecates CURLINFO_CONTENT_LENGTH_DOWNLOAD in favour of a *_T
>>   version, which returns curl_off_t instead of a double.
>> * 7.85.0 deprecates CURLOPT_PROTOCOLS and CURLOPT_REDIR_PROTOCOLS in
>>   favour of *_STR variants, specifying the desired protocols via a
>>   string.
>> 
>> Signed-off-by: Anton Johansson 
>> Reviewed-by: Philippe Mathieu-Daudé 
>> Message-Id: <20230123201431.23118-1-a...@rev.ng>
>> [AJB: fixed author]
>> Signed-off-by: Alex Bennée 
>
> I assume you only CCed me for this patch? It has arrived in git master
> meanwhile, so when you rebase, it should just be dropped.

Well git-publish did but yeah. I've just re-based and it skipped
cleanly.

>
> Kevin


-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro



Re: [PULL 00/22] Block layer patches

2023-02-21 Thread Peter Maydell
On Mon, 20 Feb 2023 at 19:02, Philippe Mathieu-Daudé  wrote:
>
> On 20/2/23 18:01, Peter Maydell wrote:
> > On Fri, 17 Feb 2023 at 15:08, Kevin Wolf  wrote:
> >>
> >> The following changes since commit 
> >> 6dffbe36af79e26a4d23f94a9a1c1201de99c261:
> >>
> >>Merge tag 'migration-20230215-pull-request' of 
> >> https://gitlab.com/juan.quintela/qemu into staging (2023-02-16 13:09:51 
> >> +0000)
> >>
> >> are available in the Git repository at:
> >>
> >>https://repo.or.cz/qemu/kevin.git tags/for-upstream
> >>
> >> for you to fetch changes up to a4d5224c2cb650b5a401d626d3f36e42e6987aa7:
> >>
> >>hbitmap: fix hbitmap_status() return value for first dirty bit case 
> >> (2023-02-17 14:34:24 +0100)
> >>
> >> 
> >> Block layer patches
> >>
> >> - configure: Enable -Wthread-safety if present
> >> - no_co_wrapper to fix bdrv_open*() calls from coroutine context
> >> - curl fixes, including enablement of newer libcurl versions
> >> - MAINTAINERS: drop Vladimir from parallels block driver
> >> - hbitmap: fix hbitmap_status() return value for first dirty bit case
> >> - file-posix: Fix assertion failure in write_zeroes after moving
> >>bdrv_getlength() to co_wrapper
> >>
> >
> > Reviewed-by: Peter Maydell 
>
> Applied as commit 2d89cb1fe5 ;)

Whoops, yes, wrong canned-email. Try this one:


Applied, thanks.

Please update the changelog at https://wiki.qemu.org/ChangeLog/8.0
for any user-visible changes.

-- PMM



Re: [PATCH 17/22] target/arm: Use get_phys_addr_with_struct for stage2

2023-02-21 Thread Peter Maydell
On Mon, 20 Feb 2023 at 22:15, Richard Henderson
 wrote:
>
> On 2/10/23 03:28, Peter Maydell wrote:
> > On Tue, 24 Jan 2023 at 00:01, Richard Henderson
> >  wrote:
> >>
> >> This fixes a bug in which we failed to initialize
> >> the result attributes properly after the memset.
> >>
> >> Signed-off-by: Richard Henderson 
> >> ---
> >>   target/arm/ptw.c | 13 +
> >>   1 file changed, 1 insertion(+), 12 deletions(-)
> >>
> >> diff --git a/target/arm/ptw.c b/target/arm/ptw.c
> >> index eaa47f6b62..3205339957 100644
> >> --- a/target/arm/ptw.c
> >> +++ b/target/arm/ptw.c
> >> @@ -32,12 +32,6 @@ typedef struct S1Translate {
> >>   void *out_host;
> >>   } S1Translate;
> >>
> >> -static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
> >> -   uint64_t address,
> >> -   MMUAccessType access_type,
> >> -   GetPhysAddrResult *result, ARMMMUFaultInfo 
> >> *fi)
> >> -__attribute__((nonnull));
> >
> > The definition of the function doesn't have the __attribute__,
> > so if we drop this forward declaration we need to move the attribute.
>
> Eh.  It was useful as an intermediary during one of the ptw reorgs, but now 
> that we've
> eliminated the use case in which NULL had been passed, it can go away.  I 
> assume you'd
> prefer that as a separate patch?

Yes, if we want to deliberately drop the attribute we should
do that separately with justification for why it's not needed.

thanks
-- PMM



Re: [PATCH v4] configure: Add 'mkdir build' check

2023-02-21 Thread Thomas Huth

On 21/02/2023 12.06, Dinah Baum wrote:

QEMU configure script goes into an infinite error printing loop
when in read only directory due to 'build' dir never being created.

Checking if 'mkdir dir' succeeds prevents this error.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/321
Signed-off-by: Dinah Baum 
Reviewed-by: Peter Maydell 
---
  configure | 7 ++-
  1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/configure b/configure
index cf6db3d551..1ef3e7f77d 100755
--- a/configure
+++ b/configure
@@ -31,7 +31,12 @@ then
  fi
  fi
  
-mkdir build

+if ! mkdir build || ! touch $MARKER
+then
+echo "ERROR: Could not create ./build directory. Check the permissions on"
+echo "your source directory, or try doing an out-of-tree build."
+exit 1
+fi
  touch $MARKER


Nit: I think the final "touch $MARKER" could now be removed, too, since the 
code either exits above, or runs the "|| ! touch $MARKER" part there already.


Anyway, it's just a nit, and maybe could also be fixed while picking up the 
patch, thus:


Reviewed-by: Thomas Huth 




[PATCH v2 0/2] QGA installer fixes

2023-02-21 Thread Konstantin Kostiuk
resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2167423
fixes: CVE-2023-0664

CVE Technical details: The cached installer for QEMU Guest Agent in 
c:\windows\installer
(https://github.com/qemu/qemu/blob/master/qga/installer/qemu-ga.wxs),
can be leveraged to begin a repair of the installation without validation
that the repair is being performed by an administrative user. The MSI repair
custom action "RegisterCom" and "UnregisterCom" is not set for impersonation
which allows for the actions to occur as the SYSTEM account
(LINE 137 AND 145 of qemu-ga.wxs). The custom action also leverages cmd.exe
to run qemu-ga.exe in line 134 and 142 which causes an interactive command
shell to spawn even though the MSI is set to be non-interactive on line 53.

v1: https://lists.nongnu.org/archive/html/qemu-devel/2023-02/msg05661.html
v1 -> v2:
  Add explanation into commit messages

Konstantin Kostiuk (2):
  qga/win32: Remove change action from MSI installer
  qga/win32: Use rundll for VSS installation

 qga/installer/qemu-ga.wxs | 11 ++-
 qga/vss-win32/install.cpp |  9 +
 qga/vss-win32/qga-vss.def |  2 ++
 3 files changed, 17 insertions(+), 5 deletions(-)

--
2.25.1




[PATCH v2 1/2] qga/win32: Remove change action from MSI installer

2023-02-21 Thread Konstantin Kostiuk
Remove the 'change' button from "Programs and Features" because it does
not check whether the user is an admin. The installer has no components
to choose from and always installs everything, so the 'change' button is
not really needed, but it can create a security issue.

resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2167423
fixes: CVE-2023-0664

Signed-off-by: Konstantin Kostiuk 
Reviewed-by: Yan Vugenfirer 
---
 qga/installer/qemu-ga.wxs | 1 +
 1 file changed, 1 insertion(+)

diff --git a/qga/installer/qemu-ga.wxs b/qga/installer/qemu-ga.wxs
index 51340f7ecc..feb629ec47 100644
--- a/qga/installer/qemu-ga.wxs
+++ b/qga/installer/qemu-ga.wxs
@@ -31,6 +31,7 @@
   />
 
 1
+
 
--
2.25.1
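(Editorial note: the archive stripped the XML elements from the hunk above. Disabling the 'change' button is conventionally done with the standard Windows Installer ARPNOMODIFY property; in WiX syntax that is a one-line element along these lines. This is a reconstruction of the likely content, not necessarily the exact line of the patch:)

```xml
<Property Id="ARPNOMODIFY" Value="yes" />
```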




[PATCH v2 2/2] qga/win32: Use rundll for VSS installation

2023-02-21 Thread Konstantin Kostiuk
The custom action uses cmd.exe to run the VSS service installation
and removal, which causes an interactive command shell to spawn.
This shell can be used to execute arbitrary commands as the SYSTEM user.
Even if qemu-ga.exe is called directly, the interactive command shell
is still spawned, as qemu-ga.exe is a console application and is used
by users from the console as well as a service.

As VSS Service runs from DLL which contains the installer and
uninstaller code, it can be run directly by rundll32.exe without
any interactive command shell.

Add specific entry points for rundll32, which are just wrappers
for the COMRegister/COMUnregister functions with proper arguments.
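(For illustration, the custom action then amounts to an invocation along these lines; the DLL path shown is an assumption that depends on the install directory:)

```
rundll32.exe "C:\Program Files\Qemu-ga\qga-vss.dll",DLLCOMRegister
```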

resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2167423
fixes: CVE-2023-0664

Signed-off-by: Konstantin Kostiuk 
Reviewed-by: Yan Vugenfirer 
---
 qga/installer/qemu-ga.wxs | 10 +-
 qga/vss-win32/install.cpp |  9 +
 qga/vss-win32/qga-vss.def |  2 ++
 3 files changed, 16 insertions(+), 5 deletions(-)

diff --git a/qga/installer/qemu-ga.wxs b/qga/installer/qemu-ga.wxs
index feb629ec47..46ae9e7a13 100644
--- a/qga/installer/qemu-ga.wxs
+++ b/qga/installer/qemu-ga.wxs
@@ -127,22 +127,22 @@
   
 

-
+
 

 
 
 
 
diff --git a/qga/vss-win32/install.cpp b/qga/vss-win32/install.cpp
index b57508fbe0..68662a6dfc 100644
--- a/qga/vss-win32/install.cpp
+++ b/qga/vss-win32/install.cpp
@@ -357,6 +357,15 @@ out:
 return hr;
 }

+STDAPI_(void) CALLBACK DLLCOMRegister(HWND, HINSTANCE, LPSTR, int)
+{
+COMRegister();
+}
+
+STDAPI_(void) CALLBACK DLLCOMUnregister(HWND, HINSTANCE, LPSTR, int)
+{
+COMUnregister();
+}

 static BOOL CreateRegistryKey(LPCTSTR key, LPCTSTR value, LPCTSTR data)
 {
diff --git a/qga/vss-win32/qga-vss.def b/qga/vss-win32/qga-vss.def
index 927782c31b..ee97a81427 100644
--- a/qga/vss-win32/qga-vss.def
+++ b/qga/vss-win32/qga-vss.def
@@ -1,6 +1,8 @@
 LIBRARY  "QGA-PROVIDER.DLL"

 EXPORTS
+   DLLCOMRegister
+   DLLCOMUnregister
COMRegister PRIVATE
COMUnregister   PRIVATE
DllCanUnloadNow PRIVATE
--
2.25.1




Re: [PULL 00/12] VFIO updates 2023-02-16

2023-02-21 Thread Peter Maydell
On Thu, 16 Feb 2023 at 23:03, Alex Williamson
 wrote:
>
> The following changes since commit 6dffbe36af79e26a4d23f94a9a1c1201de99c261:
>
>   Merge tag 'migration-20230215-pull-request' of 
> https://gitlab.com/juan.quintela/qemu into staging (2023-02-16 13:09:51 +0000)
>
> are available in the Git repository at:
>
>   https://gitlab.com/alex.williamson/qemu.git tags/vfio-updates-20230216.0
>
> for you to fetch changes up to 57edb7e44489ec4d85075acba47223127ecf1521:
>
>   MAINTAINERS: Add myself as VFIO reviewer (2023-02-16 12:14:40 -0700)
>
> 
> VFIO updates 2023-02-16
>
>  * Initial v2 migration support for vfio (Avihai Horon)
>
>  * Add Cédric as vfio reviewer (Cédric Le Goater)


Applied, thanks.

Please update the changelog at https://wiki.qemu.org/ChangeLog/8.0
for any user-visible changes.

-- PMM



Re: [PULL 0/4] virtiofs queue

2023-02-21 Thread Peter Maydell
On Thu, 16 Feb 2023 at 18:27, Dr. David Alan Gilbert (git)
 wrote:
>
> From: "Dr. David Alan Gilbert" 
>
> The following changes since commit 6dffbe36af79e26a4d23f94a9a1c1201de99c261:
>
>   Merge tag 'migration-20230215-pull-request' of 
> https://gitlab.com/juan.quintela/qemu into staging (2023-02-16 13:09:51 +0000)
>
> are available in the Git repository at:
>
>   https://gitlab.com/dagrh/qemu.git tags/pull-virtiofs-20230216b
>
> for you to fetch changes up to a6bfdaed4a735a2cf59f265e6955fe2adcc99637:
>
>   virtiofsd: Swing deprecated message to removed-features (2023-02-16 
> 18:15:08 +0000)
>
> 
> Remove C virtiofsd
>
> We deprecated the C virtiofsd in commit 34deee7b6a1418f3d62a
> in v7.0 in favour of the Rust implementation at
>
>   https://gitlab.com/virtio-fs/virtiofsd
>
> since then, the Rust version has had more development and
> has held up well.  It's time to say goodbye to the C version
> that got us going.
>
> Signed-off-by: Dr. David Alan Gilbert 
>
> 


Applied, thanks.

Please update the changelog at https://wiki.qemu.org/ChangeLog/8.0
for any user-visible changes.

-- PMM



Re: [PATCH] block/mirror: add 'write-blocking-after-ready' copy mode

2023-02-21 Thread Kevin Wolf
On 21.02.2023 at 11:57, Fiona Ebner wrote:
> On 14.02.23 at 17:19, Vladimir Sementsov-Ogievskiy wrote:
> > On 02.02.23 16:27, Fiona Ebner wrote:
> >> On 02.02.23 at 12:34, Kevin Wolf wrote:
> >>> But having to switch the mirror job to sync mode just to avoid doing I/O
> >>> on an inactive device sounds wrong to me. It doesn't fix the root cause
> >>> of that problem, but just papers over it.
> >>
> >> If you say the root cause is "the job not being completed before
> >> switchover", then yes. But if the root cause is "switchover happening
> >> while the drive is not actively synced", then a way to switch modes can
> >> fix the root cause :)
> >>
> >>>
> >>> Why does your management tool not complete the mirror job before it
> >>> does the migration switchover that inactivates images?
> >>
> >> I did talk with my team leader about the possibility, but we decided to
> >> not go for it, because it requires doing the migration in two steps with
> >> pause-before-switchover and has the potential to increase guest downtime
> >> quite a bit. So I went for this approach instead.
> >>
> > 
> > 
> > Interesting point. Maybe we need a way to automatically complete all the
> > jobs before switchover? There seems to be no reason to break the jobs if
> > the user didn't cancel them (and of course no reason to allow a code path
> > leading to an assertion).
> > 
> 
> Wouldn't that be a bit unexpected? There could be jobs unrelated to
> migration or jobs at early stages. But sure, being able to trigger the
> assertion is not nice.
> 
> Potential alternatives could be pausing the jobs or failing migration
> with a clean error?

I wonder if the latter is what we would get if child_job.inactivate()
just returned an error if the job isn't completed yet?

It would potentially result in a mix of active and inactive BDSes,
though, which is a painful state to be in. If you don't 'cont'
afterwards, you're likely to hit another assertion once you try to do
something with an inactive BDS.

Pausing the job feels a bit dangerous, because it means that you can
resume it as a user. We'd have to add code to check that the image has
actually been activated again before we allow to resume the job.

> For us, the former is still best done in combination with a way to
> switch to active (i.e. write-blocking) mode for drive-mirror.

Switching between these modes is a useful thing to have either way.

> The latter would force us to complete the drive-mirror job before
> switchover even with active (i.e. write-blocking) mode, breaking our
> usage of drive-mirror+migration that worked (in almost all cases, but it
> would have been all cases if we had used active mode ;)) for many years now.
> 
> Maybe adding an option for how the jobs should behave upon switchover
> (e.g. complete/pause/cancel/cancel-migration) could help? Either as a
> job-specific option (more flexible) or a migration option?

Kevin




Re: [PATCH] hw/riscv: Skip re-generating DT nodes for a given DTB

2023-02-21 Thread Daniel Henrique Barboza




On 2/21/23 03:12, Bin Meng wrote:

Launching qemu-system-riscv64 with a given DTB on the 'sifive_u' and 'virt'
machines, QEMU complains:

   qemu_fdt_add_subnode: Failed to create subnode /soc: FDT_ERR_EXISTS

The whole DT generation logic should be skipped when a given DTB is
present.

Fixes: b1f19f238cae ("hw/riscv: write bootargs 'chosen' FDT after 
riscv_load_kernel()")


Thanks for cleaning my mess :)

I was wondering whether we should move the ms->dtb verification/load bits 
outside of
create_fdt(), and put it explicitly in sifive_u_machine_init() and 
virt_machine_init().
Like this:

/* load/create device tree*/
if (ms->dtb) {
ms->fdt = load_device_tree(ms->dtb, &s->fdt_size);
if (!ms->fdt) {
error_report("load_device_tree() failed");
exit(1);
}
} else {
create_fdt(s, memmap);
}


This looks clearer to me because create_fdt() will then actually create an FDT,
rather than load or create one. create_fdt() from spike works this way.

I'll leave to your discretion. The patch is already good enough as is.


Reviewed-by: Daniel Henrique Barboza 


Signed-off-by: Bin Meng 
---

  hw/riscv/sifive_u.c | 1 +
  hw/riscv/virt.c | 1 +
  2 files changed, 2 insertions(+)

diff --git a/hw/riscv/sifive_u.c b/hw/riscv/sifive_u.c
index ad3bb35b34..76db5ed3dd 100644
--- a/hw/riscv/sifive_u.c
+++ b/hw/riscv/sifive_u.c
@@ -118,6 +118,7 @@ static void create_fdt(SiFiveUState *s, const MemMapEntry 
*memmap,
  error_report("load_device_tree() failed");
  exit(1);
  }
+return;
  } else {
  fdt = ms->fdt = create_device_tree(&fdt_size);
  if (!fdt) {
diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c
index 86c4adc0c9..0c7b4a1e46 100644
--- a/hw/riscv/virt.c
+++ b/hw/riscv/virt.c
@@ -1014,6 +1014,7 @@ static void create_fdt(RISCVVirtState *s, const 
MemMapEntry *memmap)
  error_report("load_device_tree() failed");
  exit(1);
  }
+return;
  } else {
  ms->fdt = create_device_tree(&s->fdt_size);
  if (!ms->fdt) {




Re: [PATCH v3 5/6] meson: prefer 'sphinx-build' to 'sphinx-build-3'

2023-02-21 Thread Paolo Bonzini

On 2/21/23 02:24, John Snow wrote:

Once upon a time, "sphinx-build" on certain RPM platforms invoked
specifically a Python 2.x version, while "sphinx-build-3" was a distro
shim for the Python 3.x version.

These days, none of our supported platforms utilize a 2.x version, so it
should be safe to search for 'sphinx-build' prior to 'sphinx-build-3',
which will prefer pip/venv installed versions of sphinx if they're
available.

This adds an extremely convenient ability to test document building
ability in QEMU across multiple versions of Sphinx for the purposes of
compatibility testing.


Can we just use "$PYTHON -m sphinx.cmd.build" instead, to ensure that we don't
escape the virtual environment?  Or even better, we could have a simple script
like this:

#! /usr/bin/env python3

import sys

from pkg_resources import load_entry_point

if __name__ == '__main__':
    if sys.argv[1] == '--check':
        try:
            load_entry_point(sys.argv[2], 'console_scripts', sys.argv[3])
            sys.exit(0)
        except ImportError:
            sys.exit(1)
    else:
        entry_point = load_entry_point(sys.argv[1], 'console_scripts',
                                       sys.argv[2])
        # The second argument to python-run.py becomes sys.argv[0]
        del sys.argv[0:2]
        sys.exit(entry_point())

then docs/meson.build can do this:

python_run = find_program('scripts/python-run.py')
build_docs = false
if get_option('docs') \
  .require(run_command(python_run, '--check', 'sphinx', 'sphinx-build',
                       check: false).returncode() == 0,
           error: 'Could not find sphinx installation') \
  .allowed()
  # The sphinx module is installed
  SPHINX_ARGS = ['env', 'CONFDIR=' + qemu_confdir,
                 python_run, 'sphinx', 'sphinx-build', '-q']
  ...
  build_docs = (sphinx_build_test_out.returncode() == 0)
  ...
endif

This again ensures that sphinx-build will not escape the virtual environment
if there is one.  configure can also use the script to run meson, though that
can come later.

Paolo




Re: [PATCH] hw/ide/ahci: trace ncq write command as write instead of read

2023-02-21 Thread Kevin Wolf
On 17.02.2023 at 11:31, Fiona Ebner wrote:
> Fixes: e4baa9f00b ("AHCI: Replace DPRINTF with trace-events")
> Signed-off-by: Fiona Ebner 

Reviewed-by: Kevin Wolf 




Re: [PATCH v3 0/6] Python: Drop support for Python 3.6

2023-02-21 Thread Paolo Bonzini

On 2/21/23 08:00, Markus Armbruster wrote:

John Snow  writes:


CI: https://gitlab.com/jsnow/qemu/-/pipelines/783612696
 [Updated for v3, still all green.]
GL: https://gitlab.com/jsnow/qemu/-/commits/python-require-37

Hi, discussion about this series is ongoing. This series (v3) is not
meant to address all of that discussion, but rather is an updated
baseline for what we are capable of right now, today, without much
additional engineering. It's meant to serve as a reference for further
discussion.


Misses the RFC tag then :)


To my knowledge, the inconveniences caused by this patchset as currently
written are:

(1) Users of CentOS 8 and OpenSUSE 15.4 would need to install an
 additional python package that will exist side-by-side with their
 base platform's Python 3.6 package.

 "zypper install python39" or "dnf install python38" is enough;
 configure will do the rest of the work.

 It's my understanding that this is largely a non-issue.

(2) Due to our Sphinx plugin that imports QAPI code from the tree,


I can read this as "Due to our Sphinx plugin (which by the way imports
some QAPI code)" or as "Due to our Sphinx plugin importing QAPI code".
The former is more accurate.  We need a newer Sphinx because we use a
plugin, the plugin is written in Python, so our new Python requirement
applies.  Fine print: the code the plugin imports from QAPI is going to
break first.


 distro-provided versions of Sphinx that are installed and tied to
 Python 3.6 will no longer be suitable. Users may forego building
 docs or install a suitable sphinx using "pip".

 It's my understanding that this one is "kind of a bummer".

I feel that the inconvenience caused by (1) is minimized as is possible;
the inconvenience caused by (2) is slightly worse and I concede the
workaround has some complexities that I would otherwise seek to avoid.

As far as I am aware, the way forward is to work with Paolo to implement
a proper venv solution for the build tree that will help mitigate the
fallout from (2) by automating the use of a pip-provided Sphinx in the
cases where the distro-provided version is insufficient.


So, your current plan is to rebase this series less its DO-NOT-MERGE
parts, on top of Paolo's.  Correct?


Yes, I will post a non-RFC v4 once all the feedback is gathered.

Paolo




Re: [PATCH v2 07/21] include/exec/memattrs: Add two bits of space to MemTxAttrs

2023-02-21 Thread Philippe Mathieu-Daudé

On 21/2/23 11:50, Richard Henderson wrote:

On 2/21/23 00:42, Philippe Mathieu-Daudé wrote:

On 21/2/23 11:01, Richard Henderson wrote:

On 2/20/23 21:56, Philippe Mathieu-Daudé wrote:

'secure' & 'user' seem mutually exclusive. If we get short in bits,
they could be shared.


They are not.  ARM has Secure EL0, or secure user mode.


Oh, I mixed this field up with user-emulation then (I tried
commenting it out and my TCG/HVF build succeeded).



target/arm/ptw.c:2853:    result->f.attrs.user = regime_is_user(env, 
mmu_idx);


So... it shouldn't have built?


Eh correct... I guess I wasn't sitting in a directory with ARM target
selected when I tried that 🤦

../../hw/misc/armv7m_ras.c:19:15: error: no member named 'user' in 
'struct MemTxAttrs'

if (attrs.user) {
~ ^





Re: [PATCH v2 0/2] QGA installer fixes

2023-02-21 Thread Philippe Mathieu-Daudé

On 21/2/23 12:21, Konstantin Kostiuk wrote:

resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2167423
fixes: CVE-2023-0664

CVE Technical details: The cached installer for QEMU Guest Agent in 
c:\windows\installer
(https://github.com/qemu/qemu/blob/master/qga/installer/qemu-ga.wxs),
can be leveraged to begin a repair of the installation without validation
that the repair is being performed by an administrative user. The MSI repair
custom action "RegisterCom" and "UnregisterCom" is not set for impersonation
which allows for the actions to occur as the SYSTEM account
(LINE 137 AND 145 of qemu-ga.wxs). The custom action also leverages cmd.exe
to run qemu-ga.exe in line 134 and 142 which causes an interactive command
shell to spawn even though the MSI is set to be non-interactive on line 53.

v1: https://lists.nongnu.org/archive/html/qemu-devel/2023-02/msg05661.html


Per 
https://lore.kernel.org/qemu-devel/caa8xkjuqfbvgdvj059fvgosjkv+kz5jb1gfmnz+ao-twh7f...@mail.gmail.com/:


Reported-by: Brian Wiltse 


v1 -> v2:
   Add explanation into commit messages


Thanks, much appreciated!


Konstantin Kostiuk (2):
   qga/win32: Remove change action from MSI installer
   qga/win32: Use rundll for VSS installation

  qga/installer/qemu-ga.wxs | 11 ++-
  qga/vss-win32/install.cpp |  9 +
  qga/vss-win32/qga-vss.def |  2 ++
  3 files changed, 17 insertions(+), 5 deletions(-)

--
2.25.1






Re: [PATCH v4] configure: Add 'mkdir build' check

2023-02-21 Thread Philippe Mathieu-Daudé

On 21/2/23 12:14, Thomas Huth wrote:

On 21/02/2023 12.06, Dinah Baum wrote:

QEMU configure script goes into an infinite error printing loop
when in read only directory due to 'build' dir never being created.

Checking if 'mkdir dir' succeeds prevents this error.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/321
Signed-off-by: Dinah Baum 
Reviewed-by: Peter Maydell 
---
  configure | 7 ++-
  1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/configure b/configure
index cf6db3d551..1ef3e7f77d 100755
--- a/configure
+++ b/configure
@@ -31,7 +31,12 @@ then
  fi
  fi
-    mkdir build
+    if ! mkdir build || ! touch $MARKER
+    then
+    echo "ERROR: Could not create ./build directory. Check the permissions on"
+    echo "your source directory, or try doing an out-of-tree build."
+    exit 1
+    fi
  touch $MARKER


Nit: I think the final "touch $MARKER" could now be removed, too, since 
the code either exits above, or runs the "|| ! touch $MARKER" part there 
already.


Anyway, it's just a nit, and maybe could also be fixed while picking up 
the patch,


Yes please :)

Reviewed-by: Philippe Mathieu-Daudé 


thus:

Reviewed-by: Thomas Huth 







Re: [PATCH v2 07/18] hw/ide/piix: Ensure IDE output IRQs are wired at realization

2023-02-21 Thread Daniel P . Berrangé
On Sun, Feb 19, 2023 at 10:54:34PM +0100, Philippe Mathieu-Daudé wrote:
> +Daniel, Igor, Marcel & libvirt
> 
> On 16/2/23 16:33, Philippe Mathieu-Daudé wrote:
> > On 16/2/23 15:43, Bernhard Beschow wrote:
> > > 
> > > 
> > > On Wed, Feb 15, 2023 at 5:20 PM Philippe Mathieu-Daudé wrote:
> > > 
> > >     Ensure both IDE output IRQ lines are wired.
> > > 
> > >     We can remove the last use of isa_get_irq(NULL).
> > > 
> > >     Signed-off-by: Philippe Mathieu-Daudé 
> > >     ---
> > >   hw/ide/piix.c | 13 -
> > >   1 file changed, 8 insertions(+), 5 deletions(-)
> > > 
> > >     diff --git a/hw/ide/piix.c b/hw/ide/piix.c
> > >     index 9d876dd4a7..b75a4ddcca 100644
> > >     --- a/hw/ide/piix.c
> > >     +++ b/hw/ide/piix.c
> > >     @@ -133,14 +133,17 @@ static bool pci_piix_init_bus(PCIIDEState *d,
> > >     unsigned i, Error **errp)
> > >       static const struct {
> > >           int iobase;
> > >           int iobase2;
> > >     -        int isairq;
> > >       } port_info[] = {
> > >     -        {0x1f0, 0x3f6, 14},
> > >     -        {0x170, 0x376, 15},
> > >     +        {0x1f0, 0x3f6},
> > >     +        {0x170, 0x376},
> > >       };
> > >       int ret;
> > > 
> > >     -    qemu_irq irq_out = d->irq[i] ? : isa_get_irq(NULL,
> > >     port_info[i].isairq);
> > >     +    if (!d->irq[i]) {
> > >     +        error_setg(errp, "output IDE IRQ %u not connected", i);
> > >     +        return false;
> > >     +    }
> > >     +
> > >       ide_bus_init(&d->bus[i], sizeof(d->bus[i]), DEVICE(d), i, 2);
> > >       ret = ide_init_ioport(&d->bus[i], NULL, port_info[i].iobase,
> > >                             port_info[i].iobase2);
> > >     @@ -149,7 +152,7 @@ static bool pci_piix_init_bus(PCIIDEState *d,
> > >     unsigned i, Error **errp)
> > >                            object_get_typename(OBJECT(d)), i);
> > >           return false;
> > >       }
> > >     -    ide_bus_init_output_irq(&d->bus[i], irq_out);
> > >     +    ide_bus_init_output_irq(&d->bus[i], d->irq[i]);
> > > 
> > >       bmdma_init(&d->bus[i], &d->bmdma[i], d);
> > >       d->bmdma[i].bus = &d->bus[i];
> > >     --     2.38.1
> > > 
> > > 
> > > This now breaks user-created  piix3-ide:
> > > 
> > >    qemu-system-x86_64 -M q35 -device piix3-ide
> > > 
> > > Results in:
> > > 
> > >    qemu-system-x86_64: -device piix3-ide: output IDE IRQ 0 not connected
> > 
> > Thank you for this real-life-impossible-but-exists-in-QEMU test-case!
> 
> Do we really want to maintain this Frankenstein use case?
> 
> 1/ Q35 comes with a PCIe bus on which sits a ICH chipset, which already
>contains a AHCI function exposing multiple IDE buses.
>What is the point on using an older tech?
> 
> 2/ Why can we plug a PCI function on a PCIe bridge without using a
>pcie-pci-bridge?

FYI, this scenario is explicitly permitted by QEMU's PCIE docs,
as without any bus= attr, the -device piix3-ide will end up
attached to the PCI-E root complex. It thus becomes an integrated
endpoint and does not support hotplug.

If you want hotpluggable PCI-only device, you need to add a
pcie-pci-bridge and a pci-bridge, attaching the endpoint to
the latter

PCI-E endpoints should always be in a pcie-root-port (or
switch downstream port)
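For reference, the bridge chain described above would look something like this on the command line (the device types pcie-pci-bridge and pci-bridge are real, but the IDs and the choice of e1000 as the hotpluggable endpoint are illustrative only, and this exact invocation is a sketch, not tested here):

```
$ qemu-system-x86_64 -M q35 \
    -device pcie-pci-bridge,id=pcie2pci \
    -device pci-bridge,bus=pcie2pci,id=conv-bridge,chassis_nr=1 \
    -device e1000,bus=conv-bridge,id=hotpluggable-nic
```

The pcie-pci-bridge hangs off the root complex and provides a conventional PCI bus; the pci-bridge below it is what actually supports hotplug of PCI-only devices.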

> 3/ Chipsets come as a whole. Software drivers might expect all PCI
>functions working (Linux considering each function individually
>is not the norm)


> But the hardware really looks Frankenstein now:
> 
> (qemu) info qom-tree
> /machine (pc-q35-8.0-machine)
>   /peripheral-anon (container)
> /device[0] (piix3-ide)
>   /bmdma[0] (memory-region)
>   /bmdma[1] (memory-region)
>   /bus master container[0] (memory-region)
>   /bus master[0] (memory-region)
>   /ide.6 (IDE)
>   /ide.7 (IDE)
>   /ide[0] (memory-region)
>   /ide[1] (memory-region)
>   /ide[2] (memory-region)
>   /ide[3] (memory-region)
>   /piix-bmdma-container[0] (memory-region)
>   /piix-bmdma[0] (memory-region)
>   /piix-bmdma[1] (memory-region)
>   /q35 (q35-pcihost)
>   /unattached (container)
> /device[17] (ich9-ahci)
>   /ahci-idp[0] (memory-region)
>   /ahci[0] (memory-region)
>   /bus master container[0] (memory-region)
>   /bus master[0] (memory-region)
>   /ide.0 (IDE)
>   /ide.1 (IDE)
>   /ide.2 (IDE)
>   /ide.3 (IDE)
>   /ide.4 (IDE)
>   /ide.5 (IDE)
> /device[2] (ICH9-LPC)
>   /bus master container[0] (memory-region)
>   /bus master[0] (memory-region)
>   /ich9-pm[0] (memory-region)
>   /isa.0 (ISA)
> 
> I guess the diff [*] is acceptable as a kludge, but we might consider
> deprecating such use.
> 
> Paolo, Michael & libvirt folks, what do you think?

AFAICT, libvirt will never use '-device piix3-ide'. I vaguely recall
that we discussed allowing it to enable use of > 4 IDE units, but
decid

Re: [PATCH 2/4] virtio-input: add a virtio-mulitouch device

2023-02-21 Thread Pavel Dovgalyuk

replay/replay-input.c part:
Reviewed-by: Pavel Dovgalyuk 

On 18.02.2023 19:22, Sergio Lopez wrote:

Add a virtio-multitouch device to the family of devices emulated by
virtio-input implementing the Multi-touch protocol as described here:

https://www.kernel.org/doc/html/latest/input/multi-touch-protocol.html?highlight=multi+touch

This patch just add the device itself, without connecting it to any
backends. The following patches will add helpers in "ui" and will enable
the GTK3 backend to transpose multi-touch events from the host to the
guest.

Signed-off-by: Sergio Lopez 
---
  hw/input/virtio-input-hid.c  | 123 ++-
  hw/virtio/virtio-input-pci.c |  25 ++-
  include/hw/virtio/virtio-input.h |   9 ++-
  include/ui/input.h   |   3 +
  qapi/ui.json |  45 ++-
  replay/replay-input.c|  18 +
  ui/input.c   |   6 ++
  ui/trace-events  |   1 +
  8 files changed, 216 insertions(+), 14 deletions(-)

diff --git a/hw/input/virtio-input-hid.c b/hw/input/virtio-input-hid.c
index d28dab69ba..34109873ac 100644
--- a/hw/input/virtio-input-hid.c
+++ b/hw/input/virtio-input-hid.c
@@ -16,9 +16,11 @@
  
  #include "standard-headers/linux/input.h"
  
-#define VIRTIO_ID_NAME_KEYBOARD "QEMU Virtio Keyboard"

-#define VIRTIO_ID_NAME_MOUSE"QEMU Virtio Mouse"
-#define VIRTIO_ID_NAME_TABLET   "QEMU Virtio Tablet"
+#define VIRTIO_ID_NAME_KEYBOARD "QEMU Virtio Keyboard"
+#define VIRTIO_ID_NAME_MOUSE"QEMU Virtio Mouse"
+#define VIRTIO_ID_NAME_TABLET   "QEMU Virtio Tablet"
+#define VIRTIO_ID_NAME_MULTITOUCH   "QEMU Virtio Multitouch"
+#define VIRTIO_ID_SERIAL_MULTITOUCH "virtio-touchscreen-0"
  
  /* - */
  
@@ -30,6 +32,7 @@ static const unsigned short keymap_button[INPUT_BUTTON__MAX] = {

  [INPUT_BUTTON_WHEEL_DOWN]= BTN_GEAR_DOWN,
  [INPUT_BUTTON_SIDE]  = BTN_SIDE,
  [INPUT_BUTTON_EXTRA] = BTN_EXTRA,
+[INPUT_BUTTON_TOUCH] = BTN_TOUCH,
  };
  
  static const unsigned short axismap_rel[INPUT_AXIS__MAX] = {

@@ -42,6 +45,11 @@ static const unsigned short axismap_abs[INPUT_AXIS__MAX] = {
  [INPUT_AXIS_Y]   = ABS_Y,
  };
  
+static const unsigned short axismap_tch[INPUT_AXIS__MAX] = {

+[INPUT_AXIS_X]   = ABS_MT_POSITION_X,
+[INPUT_AXIS_Y]   = ABS_MT_POSITION_Y,
+};
+
  /* - */
  
  static void virtio_input_extend_config(VirtIOInput *vinput,

@@ -81,6 +89,7 @@ static void virtio_input_handle_event(DeviceState *dev, 
QemuConsole *src,
  InputKeyEvent *key;
  InputMoveEvent *move;
  InputBtnEvent *btn;
+InputMultitouchEvent *mtt;
  
  switch (evt->type) {

  case INPUT_EVENT_KIND_KEY:
@@ -137,6 +146,24 @@ static void virtio_input_handle_event(DeviceState *dev, 
QemuConsole *src,
  event.value = cpu_to_le32(move->value);
  virtio_input_send(vinput, &event);
  break;
+case INPUT_EVENT_KIND_MTT:
+mtt = evt->u.mtt.data;
+if (mtt->type == INPUT_MULTITOUCH_TYPE_DATA) {
+event.type  = cpu_to_le16(EV_ABS);
+event.code  = cpu_to_le16(axismap_tch[mtt->axis]);
+event.value = cpu_to_le32(mtt->value);
+virtio_input_send(vinput, &event);
+} else {
+event.type  = cpu_to_le16(EV_ABS);
+event.code  = cpu_to_le16(ABS_MT_SLOT);
+event.value = cpu_to_le32(mtt->slot);
+virtio_input_send(vinput, &event);
+event.type  = cpu_to_le16(EV_ABS);
+event.code  = cpu_to_le16(ABS_MT_TRACKING_ID);
+event.value = cpu_to_le32(mtt->tracking_id);
+virtio_input_send(vinput, &event);
+}
+break;
  default:
  /* keep gcc happy */
  break;
@@ -515,12 +542,102 @@ static const TypeInfo virtio_tablet_info = {
  
  /* - */
  
+static QemuInputHandler virtio_multitouch_handler = {

+.name  = VIRTIO_ID_NAME_MULTITOUCH,
+.mask  = INPUT_EVENT_MASK_BTN | INPUT_EVENT_MASK_MTT,
+.event = virtio_input_handle_event,
+.sync  = virtio_input_handle_sync,
+};
+
+static struct virtio_input_config virtio_multitouch_config[] = {
+{
+.select= VIRTIO_INPUT_CFG_ID_NAME,
+.size  = sizeof(VIRTIO_ID_NAME_MULTITOUCH),
+.u.string  = VIRTIO_ID_NAME_MULTITOUCH,
+},{
+.select= VIRTIO_INPUT_CFG_ID_SERIAL,
+.size  = sizeof(VIRTIO_ID_SERIAL_MULTITOUCH),
+.u.string  = VIRTIO_ID_SERIAL_MULTITOUCH,
+},{
+.select= VIRTIO_INPUT_CFG_ID_DEVIDS,
+.size  = sizeof(struct virtio_input_devids),
+.u.ids = {
+.bustype = const_le16(BUS_VIRTUAL),
+.vendor  = const_le16(0x

Re: [PATCH v2 0/7] Python: Drop support for Python 3.6

2023-02-21 Thread Thomas Huth

On 20/02/2023 20.56, John Snow wrote:

On Mon, Feb 20, 2023 at 1:16 AM Thomas Huth  wrote:


On 17/02/2023 21.46, John Snow wrote:

On Thu, Feb 16, 2023 at 5:58 AM Thomas Huth  wrote:


On 15/02/2023 20.05, Markus Armbruster wrote:

The discussion under PATCH 6 makes me think there's a bit of confusion
about the actual impact of dropping support for Python 3.6.  Possibly
because it's spelled out in the commit message of PATCH 7.  Let me
summarize it in one sentence:

   *** All supported host systems continue to work ***

Evidence: CI remains green.


The CI remains green since one of the patches disabled the building of the
docs on CentOS 8. That's not how I'd describe "continue to work", at least
not in the same extent as before.


On some supported host systems, different packages need to be installed.
On CentOS 8, for instance, we need to install Python 3.8.13 or 3.9.16
instead of 3.6.8.  Let me stress again: same repository, different
package.  No downsides I can see.

The *one* exception is Sphinx on CentOS 8.  CentOS 8 does not ship a
version of Sphinx that works with Python 3.7 or newer.  This series
proposes to simply stop building the docs there, unless the user
provides a suitable version of Sphinx (which is easy enough with pip).


I think we've all understood that. The thing that you obviously did not
understand: This breaks our support statement.
I'm pretty sure that you could also build the whole QEMU suite successfully
on an ancient CentOS 7 or Ubuntu 18.04 system if you manually install a
newer version of GCC and some of the required libraries first. But that's
not how we understand our support statement.

Sure, you can argue that you can use "pip install" to get a newer version of
Sphinx on RHEL 8 / CentOS 8 to continue building the docs there - but is
that really that much different from installing a newer version of GCC and
libraries on an ancient distro that we do not officially support anymore?
I'd say no. You also have to consider that not every build host has access
to the internet, maybe some companies only have an internal mirror of the
distro packages in their intranet (I remember some discussion about such a
case in the past) - so while you were perfectly fine to build the whole of
QEMU on a CentOS 8 there before this change, you could now not build parts
of QEMU anymore there due to the missing possibility to run "pip install"
without full internet connection.


There are good points elsewhere in this thread and I am taking notes,
but this critique caught my eye as something I was not specifically
planning around, so I wanted to get an elaboration here if I may.

Do we have a support statement for this? I find this critique somewhat
surprising -- If we don't have internet, how did we get the other 20
to 30 dependencies needed to build QEMU? To what extent are we
*required* to preserve a build that works without internet access?


It's not written in stone, but I saw it this way: If I have a complete
mirror of a distro repository in my intrAnet, I can use that mirror to set
up a QEMU build host system that has no access to the internet. Or maybe
think of a DVD image(s) with all distro packages that you use to install a
host without network access (and you copy the QEMU tarball there via USB
stick). I think it's not that uncommon to have such scenarios out there.

For example, do you remember that SDL 1.2 discussion a some years ago? See:

   https://www.mail-archive.com/qemu-devel@nongnu.org/msg631628.html

It was not exactly the same situation, since those folks were even unable to
install a SDL2-devel package on their pre-installed hosts, though it was
theoretically available as an update in their distro, but I think it gives
an impression of what people are using and expecting out there... (and no,
I'm not happy with this, I'd also rather love if we could move faster in the
QEMU project sometimes).

   Thomas


Well, in this case I believe our support policy generally is written
to require a fully up-to-date version of the LTS distros, e.g. we
don't really test against "release day" 16.04, in the same way we
don't offer support for RHEL 8.0, just the latest point release.


Yes, sure, that's what I meant with "not exactly the same situation" ... it 
was just an example of people trying to compile QEMU offline.



I think really all we need is the ability to know a priori what we
need to build QEMU before going offline without any last second
surprises during configure, make, or make check. Right?


I think it should be OK with the patch that Paolo suggested for the support 
policy and maybe a note somewhere that you have to make sure to install a 
newer Sphinx with pip in case you still want to build the docs on older 
enterprise distros...
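For what it's worth, the pip step itself does not require internet access on the build host: the wheels can be fetched on a connected machine and installed from a local directory. A sketch (package set and paths are illustrative):

```
# On a machine with internet access: fetch Sphinx and its
# dependencies as wheel files into a local directory.
$ python3 -m pip download sphinx -d ./sphinx-wheels

# Copy ./sphinx-wheels to the offline build host, then install
# without contacting any package index:
$ python3 -m pip install --no-index --find-links ./sphinx-wheels sphinx
```

This still assumes internet access *somewhere*, of course, so it does not fully answer the intranet-mirror scenario above.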


 Thomas




Re: [PULL 03/11] block/iscsi: fix double-free on BUSY or similar statuses

2023-02-21 Thread Fiona Ebner
Am 08.02.23 um 18:19 schrieb Paolo Bonzini:
> Commit 8c460269aa77 ("iscsi: base all handling of check condition on
> scsi_sense_to_errno", 2019-07-15) removed a "goto out" so that the
> same coroutine is re-entered twice; once from iscsi_co_generic_cb,
> once from the timer callback iscsi_retry_timer_expired.  This can
> cause a crash.
> 
> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1378
> Reported-by: Grzegorz Zdanowski 
> Signed-off-by: Paolo Bonzini 
> ---
>  block/iscsi.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/block/iscsi.c b/block/iscsi.c
> index b3e10f40b649..3aacd0709f93 100644
> --- a/block/iscsi.c
> +++ b/block/iscsi.c
> @@ -269,6 +269,7 @@ iscsi_co_generic_cb(struct iscsi_context *iscsi, int 
> status,
>  timer_mod(&iTask->retry_timer,
>qemu_clock_get_ms(QEMU_CLOCK_REALTIME) + 
> retry_time);
>  iTask->do_retry = 1;
> +return;
>  } else if (status == SCSI_STATUS_CHECK_CONDITION) {
>  int error = iscsi_translate_sense(&task->sense);
>  if (error == EAGAIN) {

Thank you for the fix! CC-ing stable, because this is a regression fix.
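The failure mode here is the classic "continuation resumed twice" bug: the completion callback both arms a retry timer and falls through to the normal resume path, so the coroutine is re-entered once by each. A toy model (illustrative Python, not the actual QEMU code):

```python
# Toy model of the iscsi_co_generic_cb bug: on a BUSY status the
# callback arms a retry "timer" AND (without the fixed 'return')
# also falls through to the normal resume path, so the waiting
# task is resumed twice for a single I/O completion.

class Task:
    def __init__(self):
        self.resumed = 0

    def resume(self):
        self.resumed += 1

pending_timers = []

def completion_cb(task, status, fixed):
    if status == "BUSY":
        pending_timers.append(task)  # arm the retry timer
        if fixed:
            return                   # the 'return' restored by the patch
    task.resume()                    # normal completion path

# Buggy version: falls through after arming the timer.
t = Task()
completion_cb(t, "BUSY", fixed=False)
for timer_task in pending_timers:
    timer_task.resume()
assert t.resumed == 2                # double re-entry -> crash in real code

# Fixed version: the early return leaves the timer as the only resumer.
pending_timers.clear()
t2 = Task()
completion_cb(t2, "BUSY", fixed=True)
for timer_task in pending_timers:
    timer_task.resume()
assert t2.resumed == 1
```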




Re: [PATCH v10 0/9] KVM: mm: fd-based approach for supporting KVM

2023-02-21 Thread Chao Peng
> Hi Sean,
> 
> We've rebased the SEV+SNP support onto your updated UPM base support
> tree and things seem to be working okay, but we needed some fixups on
> top of the base support get things working, along with 1 workaround
> for an issue that hasn't been root-caused yet:
> 
>   https://github.com/mdroth/linux/commits/upmv10b-host-snp-v8-wip
> 
>   *stash (upm_base_support): mm: restrictedmem: Kirill's pinning 
> implementation
>   *workaround (use_base_support): mm: restrictedmem: loosen exclusivity check

What I'm seeing is Slot#3 gets added first and then deleted. When it
gets added, Slot#0 already has the same range bound to restrictedmem, so it
triggers the exclusivity check, which is exactly what the current code is for.

>   *fixup (upm_base_support): KVM: use inclusive ranges for restrictedmem 
> binding/unbinding
>   *fixup (upm_base_support): mm: restrictedmem: use inclusive ranges for 
> issuing invalidations

As many kernel APIs treat 'end' as exclusive, I would rather keep using
exclusive 'end' for these APIs(restrictedmem_bind/restrictedmem_unbind
and notifier callbacks) but fix it internally in the restrictedmem. E.g.
all the places where xarray API needs a 'last'/'max' we use 'end - 1'.
See below for the change.

>   *fixup (upm_base_support): KVM: fix restrictedmem GFN range calculations

Subtracting slot->restrictedmem.index for start/end in
restrictedmem_get_gfn_range() is the correct fix.

>   *fixup (upm_base_support): KVM: selftests: CoCo compilation fixes
> 
> We plan to post an updated RFC for v8 soon, but also wanted to share
> the staging tree in case you end up looking at the UPM integration aspects
> before then.
> 
> -Mike

This is the restrictedmem fix to solve 'end' being stored and checked in xarray:

--- a/mm/restrictedmem.c
+++ b/mm/restrictedmem.c
@@ -46,12 +46,12 @@ static long restrictedmem_punch_hole(struct restrictedmem 
*rm, int mode,
 */
down_read(&rm->lock);
 
-   xa_for_each_range(&rm->bindings, index, notifier, start, end)
+   xa_for_each_range(&rm->bindings, index, notifier, start, end - 1)
notifier->ops->invalidate_start(notifier, start, end);
 
ret = memfd->f_op->fallocate(memfd, mode, offset, len);
 
-   xa_for_each_range(&rm->bindings, index, notifier, start, end)
+   xa_for_each_range(&rm->bindings, index, notifier, start, end - 1)
notifier->ops->invalidate_end(notifier, start, end);
 
up_read(&rm->lock);
@@ -224,7 +224,7 @@ static int restricted_error_remove_page(struct 
address_space *mapping,
}
spin_unlock(&inode->i_lock);
 
-   xa_for_each_range(&rm->bindings, index, notifier, start, end)
+   xa_for_each_range(&rm->bindings, index, notifier, start, end - 
1)
notifier->ops->error(notifier, start, end);
break;
}
@@ -301,11 +301,12 @@ int restrictedmem_bind(struct file *file, pgoff_t start, 
pgoff_t end,
if (exclusive != rm->exclusive)
goto out_unlock;
 
-   if (exclusive && xa_find(&rm->bindings, &start, end, 
XA_PRESENT))
+   if (exclusive &&
+   xa_find(&rm->bindings, &start, end - 1, XA_PRESENT))
goto out_unlock;
}
 
-   xa_store_range(&rm->bindings, start, end, notifier, GFP_KERNEL);
+   xa_store_range(&rm->bindings, start, end - 1, notifier, GFP_KERNEL);
rm->exclusive = exclusive;
ret = 0;
 out_unlock:
@@ -320,7 +321,7 @@ void restrictedmem_unbind(struct file *file, pgoff_t start, 
pgoff_t end,
struct restrictedmem *rm = file->f_mapping->private_data;
 
down_write(&rm->lock);
-   xa_store_range(&rm->bindings, start, end, NULL, GFP_KERNEL);
+   xa_store_range(&rm->bindings, start, end - 1, NULL, GFP_KERNEL);
synchronize_rcu();
up_write(&rm->lock);
 }
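The off-by-one being fixed above is easy to see in a toy model of a range store that, like the kernel's xarray, treats both of its bounds as inclusive, while the callers pass an exclusive 'end' (illustrative Python, not kernel code):

```python
class IntervalStore:
    """Toy model of xa_store_range(): both bounds are INCLUSIVE."""
    def __init__(self):
        self.slots = {}                        # index -> value

    def store_range(self, first, last, value):
        for i in range(first, last + 1):       # inclusive 'last'
            self.slots[i] = value

    def find(self, first, last):
        return any(i in self.slots for i in range(first, last + 1))

store = IntervalStore()

# Bind pages [0, 16): callers pass an EXCLUSIVE 'end' of 16, so
# internally 'end - 1' must be used as the inclusive last index.
start, end = 0, 16
store.store_range(start, end - 1, "slot0")

# A neighbouring binding starting exactly at the old 'end' must
# NOT be reported as overlapping:
assert not store.find(16, 31)

# Passing the exclusive 'end' straight through claims index 16 too,
# making the adjacent range appear to overlap (the bug fixed above):
buggy = IntervalStore()
buggy.store_range(start, end, "slot0")         # forgot the '- 1'
assert buggy.find(16, 31)                      # spurious overlap
```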


