[Qemu-devel] [PATCH] vl.c: print error message if load fw_cfg file failed

2018-10-06 Thread Li Qiang
It makes sense to print the error message when reading the
file fails.

Signed-off-by: Li Qiang 
---
 vl.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/vl.c b/vl.c
index cc55fe04a2..3db410e771 100644
--- a/vl.c
+++ b/vl.c
@@ -2207,8 +2207,9 @@ static int parse_fw_cfg(void *opaque, QemuOpts *opts, Error **errp)
 size = strlen(str); /* NUL terminator NOT included in fw_cfg blob */
 buf = g_memdup(str, size);
 } else {
-if (!g_file_get_contents(file, &buf, &size, NULL)) {
-error_report("can't load %s", file);
+GError *error = NULL;
+if (!g_file_get_contents(file, &buf, &size, &error)) {
+error_report("can't load %s, %s", file, error->message);
 return -1;
 }
 }
-- 
2.17.1
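
For reference, a minimal standalone sketch of the g_file_get_contents()/GError
pattern used in the hunk above, with a hypothetical file path. One detail worth
noting: a GError handed back by GLib is owned by the caller, so freeing it with
g_error_free() after reporting would avoid a small leak on the failure path.

    #include <glib.h>
    #include <stdio.h>

    int main(void)
    {
        gchar *buf = NULL;
        gsize size = 0;
        GError *error = NULL;
        const char *file = "/nonexistent/fw_cfg.bin";   /* hypothetical path */

        if (!g_file_get_contents(file, &buf, &size, &error)) {
            /* report the GLib error message, then release the GError */
            g_printerr("can't load %s: %s\n", file, error->message);
            g_error_free(error);
            return 1;
        }
        printf("loaded %" G_GSIZE_FORMAT " bytes\n", size);
        g_free(buf);
        return 0;
    }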





[Qemu-devel] [Bug 1483070] Re: VIRTIO Sequential Write IOPS limits not working

2018-10-06 Thread Launchpad Bug Tracker
[Expired for QEMU because there has been no activity for 60 days.]

** Changed in: qemu
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1483070

Title:
  VIRTIO Sequential Write IOPS limits not working

Status in QEMU:
  Expired

Bug description:
  Root Problem:
  IOPS limit does not work for VIRTIO devices if the disk workload is a 
sequential write.

  To confirm:
  IDE disk devices - the IOPS limit works fine. Disk transfer speed limit works 
fine.
  VIRTIO disk devices - the IOPS limit works fine for random IO (write/read) 
and sequential read, but not for sequential write. Disk transfer speed limits 
work fine.

  Tested on Windows 7, 10 and 2k12 (Fedora drivers used, and here is the twist):
  virtio-win-0.1.96 (stable) or older won't limit write IO if workload is 
sequential.
  virtio-win-0.1.105 (latest) or newer will limit but I have had two test 
machines crash when under high workload using IOPS limit.

  For Linux:
  The issue is also apparent, tested on Ubuntu 14.04

  On the hypervisor (using KVM) machine I have tried with Qemu 2.1.2
  (3.16.0-4-amd64 - Debian 8) and Qemu 2.3.0 (3.19.8-1-pve - Proxmox 3.4
  and 4) using multiple machines but all are 64bit intel.

  Even though the latest VIRTIO guest drivers fix the problem, the guest
  drivers shouldn't be able to ignore the limits the host puts in place,
  or am I missing something?

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1483070/+subscriptions



Re: [Qemu-devel] [RFC PATCH 04/21] trace: enable the exec_tb trace events

2018-10-06 Thread Emilio G. Cota
On Fri, Oct 05, 2018 at 16:48:53 +0100, Alex Bennée wrote:
> Our performance isn't so critical that we can't spare a simple flag
> check when we exec a TB considering everything else we check in the
> outer loop.

[I know this is just done to illustrate how function names
in plugins can bind to tracing calls, but someone might
get confused by expecting more from "exec_tb" than it
actually does.]

This flag check costs nothing because "exec_tb" is
almost never called. The way it works right now, we
need -d nochain for "exec_tb" to actually generate
an event every time a TB executes.

IMO an eventual plugin API should let plugins decide whether
to subscribe to the execution of a particular TB, when
said TB is being translated, instead of providing
an all-or-nothing switch.
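
As a purely hypothetical illustration of that idea (none of the names below
exist in QEMU), the translation-time hook could act as a filter so that only
the TBs a plugin cares about ever get an exec-time callback; a toy model:

    /* Toy, self-contained model: "translation" calls the plugin's hook, which
     * decides per TB whether to subscribe to "execution" of that TB. */
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct ToyTB {
        uint64_t pc;                     /* guest PC of the TB            */
        void (*exec_cb)(uint64_t pc);    /* set only if plugin subscribed */
    } ToyTB;

    static void my_exec_cb(uint64_t pc)
    {
        printf("TB at 0x%" PRIx64 " executed\n", pc);
    }

    /* plugin's translation-time hook: subscribe only to a region of interest */
    static void my_tb_trans(ToyTB *tb)
    {
        if (tb->pc >= 0x400000 && tb->pc < 0x500000) {   /* arbitrary filter */
            tb->exec_cb = my_exec_cb;
        }
    }

    int main(void)
    {
        ToyTB tbs[] = { { .pc = 0x3ff000 }, { .pc = 0x401000 } };

        for (int i = 0; i < 2; i++) {
            my_tb_trans(&tbs[i]);           /* "translation time"          */
            if (tbs[i].exec_cb) {           /* "execution time": no call   */
                tbs[i].exec_cb(tbs[i].pc);  /* unless subscribed           */
            }
        }
        return 0;
    }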

Thanks,

E.



Re: [Qemu-devel] [RFC PATCH 14/21] trace: add support for plugin infrastructure

2018-10-06 Thread Emilio G. Cota
On Fri, Oct 05, 2018 at 16:49:03 +0100, Alex Bennée wrote:
(snip)
> +static int bind_to_tracepoints(GModule *g_module, GPtrArray *events)
> +{
> +int count = 0;
> +TraceEventIter iter;
> +TraceEvent *ev;
> +
> +trace_event_iter_init(&iter, "*");
> +while ((ev = trace_event_iter_next(&iter)) != NULL) {
> +const char *name = trace_event_get_name(ev);
> +gpointer fn;
> +
> +if (g_module_symbol(g_module, name, &fn)) {
> +ev->plugin = (uintptr_t) fn;
> +trace_event_set_state_dynamic(ev, true);
> +count++;
> +}
> +}

I'd rather have subscription functions exposed to the
plugins via an API, so that
- Plugins can turn on and off subscriptions to callbacks
  as they see fit, instead of "being called from
  the very beginning, and then disable forever"
- We can have compile-time failures when doing something
  wrong with callback names :-)
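
A rough sketch of what such subscription functions could look like (all names
below are hypothetical, not an existing QEMU API): one typed registration
function per event means a callback with the wrong prototype is rejected by
the compiler, unlike binding by symbol name through g_module_symbol().

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* one callback type per event, fixed at compile time */
    typedef void (*guest_mem_cb)(uint64_t vaddr, uint32_t size);

    static guest_mem_cb registered_mem_cb;   /* loader-side storage (toy) */

    /* loader side: typed registration, one function per event */
    static void plugin_subscribe_guest_mem(guest_mem_cb cb)
    {
        registered_mem_cb = cb;
    }

    static void plugin_unsubscribe_guest_mem(void)
    {
        registered_mem_cb = NULL;            /* plugins can come and go   */
    }

    /* plugin side: passing a callback with a different signature here
     * would fail at compile time rather than misbehaving at runtime */
    static void my_mem_cb(uint64_t vaddr, uint32_t size)
    {
        printf("%u-byte access at 0x%" PRIx64 "\n", size, vaddr);
    }

    int main(void)
    {
        plugin_subscribe_guest_mem(my_mem_cb);
        if (registered_mem_cb) {
            registered_mem_cb(0x1000, 4);
        }
        plugin_unsubscribe_guest_mem();
        return 0;
    }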

Thanks,

E.



Re: [Qemu-devel] [RFC PATCH 00/21] Trace updates and plugin RFC

2018-10-06 Thread Emilio G. Cota
On Fri, Oct 05, 2018 at 16:48:49 +0100, Alex Bennée wrote:
(snip)
> ==Known Limitations==
> 
> Currently there is only one hook allowed per trace event. We could
> make this more flexible or simply just error out if two plugins try
> and hook to the same point. What are the expectations of running
> multiple plugins hooking into the same point in QEMU?

It's very common. All popular instrumentation tools (e.g. PANDA,
DynamoRIO, Pin) support multiple plugins.

> ==TCG Hooks==
> 
> Thanks to Lluís' work the trace API already splits up TCG events into
> translation time and exec time events and provides the machinery for
> hooking a trace helper into the translation stream. Currently that
> helper is unconditionally added although perhaps we could expand the
> call convention a little for TCG events to allow the translation time
> event to filter out planting the execution time helper?

A TCG helper is suboptimal for these kinds of events, e.g. instruction/TB/
mem callbacks, because (1) these events happen *very* often, and
(2) a helper then has to iterate over a list of callbacks (assuming
we support multiple plugins). That is, one TCG helper call,
plus cache misses for the callback pointers, plus function calls
to call the callbacks. That adds up to 2x average slowdown
for SPEC06int, instead of 1.5x slowdown when embedding the
callbacks directly into the generated code. Yes, you have to
flush the code when unsubscribing from the event, but that cost
is amortized by the savings you get when the callbacks occur,
which are way more frequent.

Besides performance, to provide a pleasant plugin experience we need
something better than the current tracing callbacks.

> ===Instruction Tracing===
> 
> Pavel's series had a specific hook for instrumenting individual
> instructions. I have not yet added it to this series but I think it can be
> done in a slightly cleaner way now we have the ability to insert TCG
> ops into the instruction stream.

I thought Peter explicitly disallowed TCG generation from plugins.
Also, IIRC others also mentioned that exposing QEMU internals
(e.g. "struct TranslationBlock", or "struct CPUState") to plugins
was not on the table.

> If we add a tracepoint for post-instruction generation which passes a
> buffer with the translated instruction and a method to insert a helper
> before or after the instruction, this would avoid exposing the cpu_ldst
> macros to the plugins.

Again, for performance you'd avoid the tracepoint (i.e. calling
a helper to call another function) and embed directly the
callback from TCG. Same thing applies to TB's.

> So what do people think? Could this be a viable way to extend QEMU
> with plugins?

For frequent events such as the ones mentioned above, I am
not sure plugins can be efficiently implemented under
tracing. For others (e.g. cpu_init events), sure, they could.
But still, unlike tracers, plugins can come and go at any time,
so I am not convinced that merging the two features
is a good idea.

Thanks,

Emilio



[Qemu-devel] [RFC 0/6] Dynamic TLB sizing

2018-10-06 Thread Emilio G. Cota
After reading this paper [1], I wondered how far one could
push the idea of dynamic TLB resizing. We discussed
it briefly in this thread:

 https://lists.gnu.org/archive/html/qemu-devel/2018-09/msg02340.html

Since then, (1) rth helped me (thanks!) with TCG backend code,
and (2) I've abandoned the idea of substituting malloc
for memset, and instead focused on dynamically resizing the
TLBs. The rationale is that if a process touches a lot of
memory, having a large TLB will pay off, since the perf
gains will dwarf the increased cost of flushing via memset.

This series shows that the indirection necessary to do this
does not cause a perf decrease, at least for x86_64 hosts.

This series is incomplete, since it only implements changes
to the i386 backend, and it probably only works on x86_64.
But the whole point is to (1) see whether the performance gains
are worth it, and (2) discuss how crazy this approach is. I was
looking for things to break badly, but so far I've found no obvious
issues. But there might be some assumptions about the TLB size
baked into the code that I might have missed, so please point those
out if they exist.

Performance numbers are in the last patch.

You can fetch this series from:
  https://github.com/cota/qemu/tree/tlb-dyn

Note that it applies on top of my tlb-lock-v3 series:
  https://lists.gnu.org/archive/html/qemu-devel/2018-10/msg01087.html

Thanks,

Emilio

[1] "Optimizing Memory Translation Emulation in Full System Emulators",
Tong et al, TACO'15 https://dl.acm.org/citation.cfm?id=2686034





[Qemu-devel] [RFC 6/6] cputlb: dynamically resize TLBs based on use rate

2018-10-06 Thread Emilio G. Cota
Perform the resizing only on flushes, otherwise we'd
have to take a perf hit by either rehashing the array
or unnecessarily flushing it.

We grow the array aggressively, and reduce the size more
slowly. This accommodates mixed workloads, where some
processes might be memory-heavy while others are not.
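
To make the grow-fast/shrink-slow policy concrete, here is a small standalone
toy model of the idea; the thresholds and the requirement of two consecutive
low-use flushes before shrinking are illustrative assumptions, not the exact
values used by this patch.

    #include <stdio.h>

    typedef struct {
        size_t size;        /* current number of TLB entries           */
        size_t used;        /* entries written since the last flush    */
        int low_streak;     /* consecutive flushes with low use rate   */
    } ToyTLBDesc;

    static void resize_on_flush(ToyTLBDesc *d)
    {
        double rate = (double)d->used / d->size;

        if (rate > 0.70) {
            d->size *= 2;                       /* grow aggressively       */
            d->low_streak = 0;
        } else if (rate < 0.30) {
            if (++d->low_streak >= 2 && d->size > 256) {
                d->size /= 2;                   /* shrink more slowly      */
                d->low_streak = 0;
            }
        } else {
            d->low_streak = 0;
        }
        d->used = 0;                            /* the flush empties it    */
    }

    int main(void)
    {
        ToyTLBDesc d = { .size = 256, .used = 0, .low_streak = 0 };
        size_t touched[] = { 250, 400, 900, 100, 50, 40 };

        for (size_t i = 0; i < sizeof(touched) / sizeof(touched[0]); i++) {
            d.used = touched[i] < d.size ? touched[i] : d.size;
            resize_on_flush(&d);
            printf("flush %zu: size is now %zu entries\n", i, d.size);
        }
        return 0;
    }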

As the following experiments show, this is a net perf gain,
particularly for memory-heavy workloads. Experiments
are run on an Intel i7-6700K CPU @ 4.00GHz.

1. System boot + shutdown, debian aarch64:

- Before (tb-lock-v3):
 Performance counter stats for 'taskset -c 0 ../img/aarch64/die.sh' (10 runs):

        7469.363393  task-clock (msec)  #    0.998 CPUs utilized     ( +- 0.07% )
     31,507,707,190  cycles             #    4.218 GHz               ( +- 0.07% )
     57,101,577,452  instructions       #    1.81  insns per cycle   ( +- 0.08% )
     10,265,531,804  branches           # 1374.352 M/sec             ( +- 0.07% )
        173,020,681  branch-misses      #    1.69% of all branches   ( +- 0.10% )

        7.483359063 seconds time elapsed                             ( +- 0.08% )

- After:
 Performance counter stats for 'taskset -c 0 ../img/aarch64/die.sh' (10 runs):

        7185.036730  task-clock (msec)  #    0.999 CPUs utilized     ( +- 0.11% )
     30,303,501,143  cycles             #    4.218 GHz               ( +- 0.11% )
     54,198,386,487  instructions       #    1.79  insns per cycle   ( +- 0.08% )
      9,726,518,945  branches           # 1353.719 M/sec             ( +- 0.08% )
        167,082,307  branch-misses      #    1.72% of all branches   ( +- 0.08% )

        7.195597842 seconds time elapsed                             ( +- 0.11% )

That is, a 3.8% improvement.

2. System boot + shutdown, ubuntu 18.04 x86_64:

- Before (tb-lock-v3):
 Performance counter stats for 'taskset -c 0 ../img/x86_64/ubuntu-die.sh -nographic' (2 runs):

       49971.036482  task-clock (msec)  #    0.999 CPUs utilized     ( +- 1.62% )
    210,766,077,140  cycles             #    4.218 GHz               ( +- 1.63% )
    428,829,830,790  instructions       #    2.03  insns per cycle   ( +- 0.75% )
     77,313,384,038  branches           # 1547.164 M/sec             ( +- 0.54% )
        835,610,706  branch-misses      #    1.08% of all branches   ( +- 2.97% )

       50.003855102 seconds time elapsed                             ( +- 1.61% )

- After:
 Performance counter stats for 'taskset -c 0 ../img/x86_64/ubuntu-die.sh -nographic' (2 runs):

       50118.124477  task-clock (msec)  #    0.999 CPUs utilized     ( +-   4.30% )
            132,396  context-switches   #    0.003 M/sec             ( +-   1.20% )
                  0  cpu-migrations     #    0.000 K/sec             ( +- 100.00% )
            167,754  page-faults        #    0.003 M/sec             ( +-   0.06% )
    211,414,701,601  cycles             #    4.218 GHz               ( +-   4.30% )
                     stalled-cycles-frontend
                     stalled-cycles-backend
    431,618,818,597  instructions       #    2.04  insns per cycle   ( +-   6.40% )
     80,197,256,524  branches           # 1600.165 M/sec             ( +-   8.59% )
        794,830,352  branch-misses      #    0.99% of all branches   ( +-   2.05% )

       50.177077175 seconds time elapsed                             ( +-   4.23% )

No improvement (within noise range).

3. x86_64 SPEC06int:
  [ASCII bar chart, truncated in the archive: SPEC06int (test set);
   Y axis: speedup over master; series: tlb-lock-v3, +indirection, +resizing]

[Qemu-devel] [RFC 5/6] cpu-defs: define MIN_CPU_TLB_SIZE

2018-10-06 Thread Emilio G. Cota
Signed-off-by: Emilio G. Cota 
---
 include/exec/cpu-defs.h   | 6 +++---
 accel/tcg/cputlb.c| 2 +-
 tcg/i386/tcg-target.inc.c | 3 ++-
 3 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/include/exec/cpu-defs.h b/include/exec/cpu-defs.h
index af9fe04b0b..27b9433976 100644
--- a/include/exec/cpu-defs.h
+++ b/include/exec/cpu-defs.h
@@ -67,7 +67,7 @@ typedef uint64_t target_ulong;
 #define CPU_TLB_ENTRY_BITS 5
 #endif
 
-/* TCG_TARGET_TLB_DISPLACEMENT_BITS is used in CPU_TLB_BITS to ensure that
+/* TCG_TARGET_TLB_DISPLACEMENT_BITS is used in MIN_CPU_TLB_BITS to ensure that
  * the TLB is not unnecessarily small, but still small enough for the
  * TLB lookup instruction sequence used by the TCG target.
  *
@@ -89,7 +89,7 @@ typedef uint64_t target_ulong;
  * 0x18 (the offset of the addend field in each TLB entry) plus the offset
  * of tlb_table inside env (which is non-trivial but not huge).
  */
-#define CPU_TLB_BITS \
+#define MIN_CPU_TLB_BITS \
 MIN(8,   \
 TCG_TARGET_TLB_DISPLACEMENT_BITS - CPU_TLB_ENTRY_BITS -  \
 (NB_MMU_MODES <= 1 ? 0 : \
@@ -97,7 +97,7 @@ typedef uint64_t target_ulong;
  NB_MMU_MODES <= 4 ? 2 : \
  NB_MMU_MODES <= 8 ? 3 : 4))
 
-#define CPU_TLB_SIZE (1 << CPU_TLB_BITS)
+#define MIN_CPU_TLB_SIZE (1 << MIN_CPU_TLB_BITS)
 
 typedef struct CPUTLBEntry {
 /* bit TARGET_LONG_BITS to TARGET_PAGE_BITS : virtual address
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index ed19ac0e40..1ca71ecfc4 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -82,7 +82,7 @@ void tlb_init(CPUState *cpu)
 for (i = 0; i < NB_MMU_MODES; i++) {
 CPUTLBDesc *desc = &env->tlb_desc[i];
 
-desc->size = CPU_TLB_SIZE;
+desc->size = MIN_CPU_TLB_SIZE;
 desc->mask = (desc->size - 1) << CPU_TLB_ENTRY_BITS;
 desc->used = 0;
 env->tlb_table[i] = g_new(CPUTLBEntry, desc->size);
diff --git a/tcg/i386/tcg-target.inc.c b/tcg/i386/tcg-target.inc.c
index fce6a94e22..60d8ed5264 100644
--- a/tcg/i386/tcg-target.inc.c
+++ b/tcg/i386/tcg-target.inc.c
@@ -1626,7 +1626,8 @@ static inline void tcg_out_tlb_load(TCGContext *s, TCGReg addrlo, TCGReg addrhi,
 }
 if (TCG_TYPE_PTR == TCG_TYPE_I64) {
 hrexw = P_REXW;
-if (TARGET_PAGE_BITS + CPU_TLB_BITS > 32) {
+/* XXX the size here is variable */
+if (TARGET_PAGE_BITS + MIN_CPU_TLB_BITS > 32) {
 tlbtype = TCG_TYPE_I64;
 tlbrexw = P_REXW;
 }
-- 
2.17.1




[Qemu-devel] [RFC 4/6] tcg: define TCG_TARGET_TLB_MAX_INDEX_BITS

2018-10-06 Thread Emilio G. Cota
From: Pranith Kumar 

This paves the way for implementing a dynamically-sized softmmu.

Signed-off-by: Pranith Kumar 
Signed-off-by: Emilio G. Cota 
---
 tcg/aarch64/tcg-target.h | 1 +
 tcg/arm/tcg-target.h | 1 +
 tcg/i386/tcg-target.h| 2 ++
 tcg/mips/tcg-target.h| 2 ++
 tcg/ppc/tcg-target.h | 1 +
 tcg/s390/tcg-target.h| 1 +
 tcg/sparc/tcg-target.h   | 1 +
 tcg/tci/tcg-target.h | 1 +
 8 files changed, 10 insertions(+)

diff --git a/tcg/aarch64/tcg-target.h b/tcg/aarch64/tcg-target.h
index 9aea1d1771..55af43d55f 100644
--- a/tcg/aarch64/tcg-target.h
+++ b/tcg/aarch64/tcg-target.h
@@ -15,6 +15,7 @@
 
 #define TCG_TARGET_INSN_UNIT_SIZE  4
 #define TCG_TARGET_TLB_DISPLACEMENT_BITS 24
+#define TCG_TARGET_TLB_MAX_INDEX_BITS 32
 #undef TCG_TARGET_STACK_GROWSUP
 
 typedef enum {
diff --git a/tcg/arm/tcg-target.h b/tcg/arm/tcg-target.h
index 94b3578c55..0cd07906b3 100644
--- a/tcg/arm/tcg-target.h
+++ b/tcg/arm/tcg-target.h
@@ -60,6 +60,7 @@ extern int arm_arch;
 #undef TCG_TARGET_STACK_GROWSUP
 #define TCG_TARGET_INSN_UNIT_SIZE 4
 #define TCG_TARGET_TLB_DISPLACEMENT_BITS 16
+#define TCG_TARGET_TLB_MAX_INDEX_BITS 8
 
 typedef enum {
 TCG_REG_R0 = 0,
diff --git a/tcg/i386/tcg-target.h b/tcg/i386/tcg-target.h
index 9fdf37f23c..4e79e0a550 100644
--- a/tcg/i386/tcg-target.h
+++ b/tcg/i386/tcg-target.h
@@ -200,6 +200,8 @@ extern bool have_avx2;
 # define TCG_AREG0 TCG_REG_EBP
 #endif
 
+#define TCG_TARGET_TLB_MAX_INDEX_BITS (32 - CPU_TLB_ENTRY_BITS)
+
 static inline void flush_icache_range(uintptr_t start, uintptr_t stop)
 {
 }
diff --git a/tcg/mips/tcg-target.h b/tcg/mips/tcg-target.h
index a8222476f0..b791e2b4cd 100644
--- a/tcg/mips/tcg-target.h
+++ b/tcg/mips/tcg-target.h
@@ -39,6 +39,8 @@
 #define TCG_TARGET_TLB_DISPLACEMENT_BITS 16
 #define TCG_TARGET_NB_REGS 32
 
+#define TCG_TARGET_TLB_MAX_INDEX_BITS (16 - CPU_TLB_ENTRY_BITS)
+
 typedef enum {
 TCG_REG_ZERO = 0,
 TCG_REG_AT,
diff --git a/tcg/ppc/tcg-target.h b/tcg/ppc/tcg-target.h
index be52ad1d2e..e0ad7c122d 100644
--- a/tcg/ppc/tcg-target.h
+++ b/tcg/ppc/tcg-target.h
@@ -34,6 +34,7 @@
 #define TCG_TARGET_NB_REGS 32
 #define TCG_TARGET_INSN_UNIT_SIZE 4
 #define TCG_TARGET_TLB_DISPLACEMENT_BITS 16
+#define TCG_TARGET_TLB_MAX_INDEX_BITS 32
 
 typedef enum {
 TCG_REG_R0,  TCG_REG_R1,  TCG_REG_R2,  TCG_REG_R3,
diff --git a/tcg/s390/tcg-target.h b/tcg/s390/tcg-target.h
index 6f2b06a7d1..a1e25e13b3 100644
--- a/tcg/s390/tcg-target.h
+++ b/tcg/s390/tcg-target.h
@@ -27,6 +27,7 @@
 
 #define TCG_TARGET_INSN_UNIT_SIZE 2
 #define TCG_TARGET_TLB_DISPLACEMENT_BITS 19
+#define TCG_TARGET_TLB_MAX_INDEX_BITS 32
 
 typedef enum TCGReg {
 TCG_REG_R0 = 0,
diff --git a/tcg/sparc/tcg-target.h b/tcg/sparc/tcg-target.h
index d8339bf010..72ace760d5 100644
--- a/tcg/sparc/tcg-target.h
+++ b/tcg/sparc/tcg-target.h
@@ -29,6 +29,7 @@
 
 #define TCG_TARGET_INSN_UNIT_SIZE 4
 #define TCG_TARGET_TLB_DISPLACEMENT_BITS 32
+#define TCG_TARGET_TLB_MAX_INDEX_BITS 12
 #define TCG_TARGET_NB_REGS 32
 
 typedef enum {
diff --git a/tcg/tci/tcg-target.h b/tcg/tci/tcg-target.h
index 26140d78cb..3f28219afc 100644
--- a/tcg/tci/tcg-target.h
+++ b/tcg/tci/tcg-target.h
@@ -43,6 +43,7 @@
 #define TCG_TARGET_INTERPRETER 1
 #define TCG_TARGET_INSN_UNIT_SIZE 1
 #define TCG_TARGET_TLB_DISPLACEMENT_BITS 32
+#define TCG_TARGET_TLB_MAX_INDEX_BITS 32
 
 #if UINTPTR_MAX == UINT32_MAX
 # define TCG_TARGET_REG_BITS 32
-- 
2.17.1




[Qemu-devel] [RFC 2/6] cputlb: do not evict invalid entries to the vtlb

2018-10-06 Thread Emilio G. Cota
Currently we evict an entry to the victim TLB when it doesn't match
the current address. But it could be that there's no match because
the current entry is invalid. Do not evict the entry to the vtlb
in that case.

This change will help us keep track of the TLB's use rate.

Signed-off-by: Emilio G. Cota 
---
 include/exec/cpu-all.h | 14 ++
 accel/tcg/cputlb.c |  2 +-
 2 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
index 117d2fbbca..d938dedafc 100644
--- a/include/exec/cpu-all.h
+++ b/include/exec/cpu-all.h
@@ -362,6 +362,20 @@ static inline bool tlb_hit(target_ulong tlb_addr, target_ulong addr)
 return tlb_hit_page(tlb_addr, addr & TARGET_PAGE_MASK);
 }
 
+/**
+ * tlb_is_valid - return true if at least one of the addresses is valid
+ * @te: pointer to CPUTLBEntry
+ *
+ * This is useful when we don't have a particular address to compare against,
+ * and we just want to know whether any entry holds valid data.
+ */
+static inline bool tlb_is_valid(const CPUTLBEntry *te)
+{
+return !(te->addr_read & TLB_INVALID_MASK) ||
+   !(te->addr_write & TLB_INVALID_MASK) ||
+   !(te->addr_code & TLB_INVALID_MASK);
+}
+
 void dump_exec_info(FILE *f, fprintf_function cpu_fprintf);
 void dump_opcount_info(FILE *f, fprintf_function cpu_fprintf);
 #endif /* !CONFIG_USER_ONLY */
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 0b51efc374..0e2c149d6b 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -695,7 +695,7 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
  * Only evict the old entry to the victim tlb if it's for a
  * different page; otherwise just overwrite the stale data.
  */
-if (!tlb_hit_page_anyprot(te, vaddr_page)) {
+if (!tlb_hit_page_anyprot(te, vaddr_page) && tlb_is_valid(te)) {
 unsigned vidx = env->vtlb_index++ % CPU_VTLB_SIZE;
 CPUTLBEntry *tv = &env->tlb_v_table[mmu_idx][vidx];
 
-- 
2.17.1




[Qemu-devel] [RFC 3/6] cputlb: track TLB use rates

2018-10-06 Thread Emilio G. Cota
This paves the way for implementing a dynamically-sized softmmu.

Signed-off-by: Emilio G. Cota 
---
 include/exec/cpu-defs.h |  1 +
 accel/tcg/cputlb.c  | 17 ++---
 2 files changed, 15 insertions(+), 3 deletions(-)

diff --git a/include/exec/cpu-defs.h b/include/exec/cpu-defs.h
index fa95a4257e..af9fe04b0b 100644
--- a/include/exec/cpu-defs.h
+++ b/include/exec/cpu-defs.h
@@ -144,6 +144,7 @@ typedef struct CPUIOTLBEntry {
 typedef struct CPUTLBDesc {
 size_t size;
 size_t mask; /* (.size - 1) << CPU_TLB_ENTRY_BITS for TLB fast path */
+size_t used;
 } CPUTLBDesc;
 
 #define CPU_COMMON_TLB  \
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 0e2c149d6b..ed19ac0e40 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -84,6 +84,7 @@ void tlb_init(CPUState *cpu)
 
 desc->size = CPU_TLB_SIZE;
 desc->mask = (desc->size - 1) << CPU_TLB_ENTRY_BITS;
+desc->used = 0;
 env->tlb_table[i] = g_new(CPUTLBEntry, desc->size);
 env->iotlb[i] = g_new0(CPUIOTLBEntry, desc->size);
 }
@@ -152,6 +153,7 @@ static void tlb_flush_nocheck(CPUState *cpu)
 for (i = 0; i < NB_MMU_MODES; i++) {
 memset(env->tlb_table[i], -1,
env->tlb_desc[i].size * sizeof(CPUTLBEntry));
+env->tlb_desc[i].used = 0;
 }
 memset(env->tlb_v_table, -1, sizeof(env->tlb_v_table));
 qemu_spin_unlock(&env->tlb_lock);
@@ -216,6 +218,7 @@ static void tlb_flush_by_mmuidx_async_work(CPUState *cpu, run_on_cpu_data data)
 memset(env->tlb_table[mmu_idx], -1,
env->tlb_desc[mmu_idx].size * sizeof(CPUTLBEntry));
 memset(env->tlb_v_table[mmu_idx], -1, sizeof(env->tlb_v_table[0]));
+env->tlb_desc[mmu_idx].used = 0;
 }
 }
 qemu_spin_unlock(&env->tlb_lock);
@@ -276,12 +279,14 @@ static inline bool tlb_hit_page_anyprot(CPUTLBEntry *tlb_entry,
 }
 
 /* Called with tlb_lock held */
-static inline void tlb_flush_entry_locked(CPUTLBEntry *tlb_entry,
+static inline bool tlb_flush_entry_locked(CPUTLBEntry *tlb_entry,
   target_ulong page)
 {
 if (tlb_hit_page_anyprot(tlb_entry, page)) {
 memset(tlb_entry, -1, sizeof(*tlb_entry));
+return true;
 }
+return false;
 }
 
 /* Called with tlb_lock held */
@@ -321,7 +326,9 @@ static void tlb_flush_page_async_work(CPUState *cpu, run_on_cpu_data data)
 for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
 int i = (addr >> TARGET_PAGE_BITS) & (env->tlb_desc[mmu_idx].size - 1);
 
-tlb_flush_entry_locked(&env->tlb_table[mmu_idx][i], addr);
+if (tlb_flush_entry_locked(&env->tlb_table[mmu_idx][i], addr)) {
+env->tlb_desc[mmu_idx].used--;
+}
 tlb_flush_vtlb_page_locked(env, mmu_idx, addr);
 }
 qemu_spin_unlock(&env->tlb_lock);
@@ -365,7 +372,9 @@ static void tlb_flush_page_by_mmuidx_async_work(CPUState *cpu,
 
 page = (addr >> TARGET_PAGE_BITS) & (env->tlb_desc[mmu_idx].size - 1);
 if (test_bit(mmu_idx, &mmu_idx_bitmap)) {
-tlb_flush_entry_locked(&env->tlb_table[mmu_idx][page], addr);
+if (tlb_flush_entry_locked(&env->tlb_table[mmu_idx][page], addr)) {
+env->tlb_desc[mmu_idx].used--;
+}
 tlb_flush_vtlb_page_locked(env, mmu_idx, addr);
 }
 }
@@ -702,6 +711,7 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
 /* Evict the old entry into the victim tlb.  */
 copy_tlb_helper_locked(tv, te);
 env->iotlb_v[mmu_idx][vidx] = env->iotlb[mmu_idx][index];
+env->tlb_desc[mmu_idx].used--;
 }
 
 /* refill the tlb */
@@ -753,6 +763,7 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
 }
 
 copy_tlb_helper_locked(te, &tn);
+env->tlb_desc[mmu_idx].used++;
 qemu_spin_unlock(&env->tlb_lock);
 }
 
-- 
2.17.1




[Qemu-devel] [RFC 1/6] (XXX) cputlb: separate MMU allocation + run-time sizing

2018-10-06 Thread Emilio G. Cota
No dynamic sizing yet, but the indirection is there.

XXX:
- convert other TCG backends

Signed-off-by: Emilio G. Cota 
---
 accel/tcg/softmmu_template.h | 14 +
 include/exec/cpu-defs.h  | 14 ++---
 include/exec/cpu_ldst.h  |  2 +-
 include/exec/cpu_ldst_template.h |  6 ++--
 accel/tcg/cputlb.c   | 49 +---
 tcg/i386/tcg-target.inc.c| 26 -
 6 files changed, 68 insertions(+), 43 deletions(-)

diff --git a/accel/tcg/softmmu_template.h b/accel/tcg/softmmu_template.h
index 1e50263871..3f5a0d4017 100644
--- a/accel/tcg/softmmu_template.h
+++ b/accel/tcg/softmmu_template.h
@@ -112,7 +112,7 @@ WORD_TYPE helper_le_ld_name(CPUArchState *env, target_ulong addr,
 TCGMemOpIdx oi, uintptr_t retaddr)
 {
 unsigned mmu_idx = get_mmuidx(oi);
-int index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
+int index = (addr >> TARGET_PAGE_BITS) & (env->tlb_desc[mmu_idx].size - 1);
 target_ulong tlb_addr = env->tlb_table[mmu_idx][index].ADDR_READ;
 unsigned a_bits = get_alignment_bits(get_memop(oi));
 uintptr_t haddr;
@@ -180,7 +180,7 @@ WORD_TYPE helper_be_ld_name(CPUArchState *env, target_ulong addr,
 TCGMemOpIdx oi, uintptr_t retaddr)
 {
 unsigned mmu_idx = get_mmuidx(oi);
-int index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
+int index = (addr >> TARGET_PAGE_BITS) & (env->tlb_desc[mmu_idx].size - 1);
 target_ulong tlb_addr = env->tlb_table[mmu_idx][index].ADDR_READ;
 unsigned a_bits = get_alignment_bits(get_memop(oi));
 uintptr_t haddr;
@@ -276,7 +276,7 @@ void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
TCGMemOpIdx oi, uintptr_t retaddr)
 {
 unsigned mmu_idx = get_mmuidx(oi);
-int index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
+int index = (addr >> TARGET_PAGE_BITS) & (env->tlb_desc[mmu_idx].size - 1);
 target_ulong tlb_addr =
 atomic_read(&env->tlb_table[mmu_idx][index].addr_write);
 unsigned a_bits = get_alignment_bits(get_memop(oi));
@@ -322,7 +322,8 @@ void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
is already guaranteed to be filled, and that the second page
cannot evict the first.  */
 page2 = (addr + DATA_SIZE) & TARGET_PAGE_MASK;
-index2 = (page2 >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
+index2 = (page2 >> TARGET_PAGE_BITS) &
+(env->tlb_desc[mmu_idx].size - 1);
 tlb_addr2 = atomic_read(&env->tlb_table[mmu_idx][index2].addr_write);
 if (!tlb_hit_page(tlb_addr2, page2)
 && !VICTIM_TLB_HIT(addr_write, page2)) {
@@ -355,7 +356,7 @@ void helper_be_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
TCGMemOpIdx oi, uintptr_t retaddr)
 {
 unsigned mmu_idx = get_mmuidx(oi);
-int index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
+int index = (addr >> TARGET_PAGE_BITS) & (env->tlb_desc[mmu_idx].size - 1);
 target_ulong tlb_addr =
 atomic_read(&env->tlb_table[mmu_idx][index].addr_write);
 unsigned a_bits = get_alignment_bits(get_memop(oi));
@@ -401,7 +402,8 @@ void helper_be_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
is already guaranteed to be filled, and that the second page
cannot evict the first.  */
 page2 = (addr + DATA_SIZE) & TARGET_PAGE_MASK;
-index2 = (page2 >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
+index2 = (page2 >> TARGET_PAGE_BITS) &
+(env->tlb_desc[mmu_idx].size - 1);
 tlb_addr2 = atomic_read(&env->tlb_table[mmu_idx][index2].addr_write);
 if (!tlb_hit_page(tlb_addr2, page2)
 && !VICTIM_TLB_HIT(addr_write, page2)) {
diff --git a/include/exec/cpu-defs.h b/include/exec/cpu-defs.h
index 4ff62f32bf..fa95a4257e 100644
--- a/include/exec/cpu-defs.h
+++ b/include/exec/cpu-defs.h
@@ -141,13 +141,19 @@ typedef struct CPUIOTLBEntry {
 MemTxAttrs attrs;
 } CPUIOTLBEntry;
 
-#define CPU_COMMON_TLB \
+typedef struct CPUTLBDesc {
+size_t size;
+size_t mask; /* (.size - 1) << CPU_TLB_ENTRY_BITS for TLB fast path */
+} CPUTLBDesc;
+
+#define CPU_COMMON_TLB  \
 /* The meaning of the MMU modes is defined in the target code. */   \
-/* tlb_lock serializes updates to tlb_table and tlb_v_table */  \
+/* tlb_lock serializes updates to tlb_desc, tlb_table and tlb_v_table */ \
 QemuSpin tlb_lock;  \
-CPUTLBEntry tlb_table[NB_MMU_MODES][CPU_TLB_SIZE];  \
+CPUTLBDesc tlb_desc[NB_MMU_MODES];  \
+CPUTLBEntry *tlb_table[NB_MMU_MODES];   \
 CPUTLBEntry tlb_v_table[NB_MMU_MODES][CPU_VTLB_SIZE];   \
-CPUIOTLBEntry iotlb[NB_MMU_MODES][CPU_TLB_SIZE];\
+   

[Qemu-devel] [Bug 1796520] [NEW] autogen crashes on qemu-sh4-user after 61dedf2af7

2018-10-06 Thread John Paul Adrian Glaubitz
Public bug reported:

Running "autogen --help" crashes on qemu-sh4-user with:

(sid-sh4-sbuild)root@nofan:/# autogen --help
Unhandled trap: 0x180
pc=0xf64dd2de sr=0x pr=0xf63b9c74 fpscr=0x0008
spc=0x ssr=0x gbr=0xf61102a8 vbr=0x
sgr=0x dbr=0x delayed_pc=0xf64dd2a0 fpul=0x0003
r0=0xf6fc1320 r1=0x r2=0x5dc4 r3=0xf67bfb50
r4=0xf6fc1230 r5=0xf6fc141c r6=0x03ff r7=0x
r8=0x0004 r9=0xf63e20bc r10=0xf6fc141c r11=0xf63e28f0
r12=0xf63e2258 r13=0xf63eae1c r14=0x0804 r15=0xf6fc1220
r16=0x r17=0x r18=0x r19=0x
r20=0x r21=0x r22=0x r23=0x
(sid-sh4-sbuild)root@nofan:/#

Bisecting found this commit to be the culprit:

61dedf2af79fb5866dc7a0f972093682f2185e17 is the first bad commit
commit 61dedf2af79fb5866dc7a0f972093682f2185e17
Author: Richard Henderson 
Date:   Tue Jul 18 10:02:50 2017 -1000

target/sh4: Add missing FPSCR.PR == 0 checks

Both frchg and fschg require PR == 0, otherwise undefined_operation.

Reviewed-by: Aurelien Jarno 
Signed-off-by: Richard Henderson 
Message-Id: <20170718200255.31647-26-...@twiddle.net>
Signed-off-by: Aurelien Jarno 

:04 04 980d79b69ae712f23a1e4c56983e97a843153b4a
1024c109f506c7ad57367c63bc8bbbc8a7a36cd7 M  target

Reverting 61dedf2af79fb5866dc7a0f972093682f2185e17 fixes the problem for
me.

** Affects: qemu
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1796520

Title:
  autogen crashes on qemu-sh4-user after 61dedf2af7

Status in QEMU:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1796520/+subscriptions



[Qemu-devel] vhost: add virtio-vhost-user transport

2018-10-06 Thread Nikos Dragazis
Hi everyone,

In response to a previous email of mine here:

https://lists.01.org/pipermail/spdk/2018-September/002488.html

I would like to share that I have added support for a new
virtio-vhost-user transport to SPDK, and have a working demo of the SPDK
vhost-scsi target over this transport. I have tested it successfully
with Malloc bdev, NVMe bdev and virtio-scsi bdev.

My code currently lives here:

https://github.com/ndragazis/spdk

I'd love to get your feedback and have it merged eventually. I see there
is a relevant conversation on this topic here:

https://lists.01.org/pipermail/spdk/2018-March/001557.html

Is there anyone in this community currently working on this? What would
my next step in contributing this be?

Looking forward to your feedback,
Nikos

--
Nikos Dragazis
Undergraduate Student
School of Electrical and Computer Engineering
National Technical University of Athens



Re: [Qemu-devel] [SPDK] virtio-vhost-user with virtio-scsi: end-to-end setup

2018-10-06 Thread Nikos Dragazis
Hi Pawel,

Thank you for your quick reply. I appreciate your help.

I’m sorry for the late response. I am glad to tell you that I have a
working demo at last. I have managed to solve my problem.

You were right about the IO channels. Function
spdk_scsi_dev_allocate_io_channels() fails to allocate the IO channel
for the virtio-scsi bdev target and function spdk_vhost_scsi_start()
fails to verify its return value. My actual segfault was due to a race
on the unique virtio-scsi bdev request queue between the creation and
the destruction of the IO channel in the vhost device backend. This led
to the IO channel pointer lun->io_channel being NULL after the
vhost-user negotiation, and the bdev layer segfaulted when accessing it
in response to an IO request.

After discovering this, and spending quite some time debugging it, I
searched the bug tracker and the commit history in case I had missed
something. It seems this was a recently discovered bug, which has
fortunately already been solved:

https://github.com/spdk/spdk/commit/9ddf6438310cc97b346d805a5969af7507e84fde#diff-d361b53e911663e8c6c5890fb046a79b

I hadn't pulled from the official repo for a while, so I missed
the patch. It works just fine after pulling the newest changes.

So, I’ll make sure to work on the latest commits next time :)

Thanks again,
Nikos


On 21/09/2018 10:31 πμ, Wodkowski, PawelX wrote:
> Hi Nikos,
>
> About SPDK backtrace you got. There is something wrong with IO channel
> allocation.
> SPDK vhost-scsi should check the result of 
> spdk_scsi_dev_allocate_io_channels() in
> spdk_vhost_scsi_dev_add_tgt(). But this result is not checked :(
> You can add some check or assert there.
>
> Paweł



Re: [Qemu-devel] [RFC PATCH 10/21] qom/cpu: add a cpu_exit trace event

2018-10-06 Thread Richard Henderson
On 10/5/18 8:48 AM, Alex Bennée wrote:
> This is useful for tracing cpu_exit events where we signal the CPU to
> come back to the main loop.
> 
> Signed-off-by: Alex Bennée 
> ---
>  qom/cpu.c| 3 +++
>  qom/trace-events | 4 
>  2 files changed, 7 insertions(+)

Reviewed-by: Richard Henderson 


r~





[Qemu-devel] [PATCH] qemu-io-cmds: Fix two format strings

2018-10-06 Thread Stefan Weil
Use %zu instead of %zd for unsigned numbers.

This fixes two error messages from the LGTM static code analyzer:

This argument should be of type 'ssize_t' but is of type 'unsigned long'

Signed-off-by: Stefan Weil 
---
 qemu-io-cmds.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/qemu-io-cmds.c b/qemu-io-cmds.c
index db0b3ee5ef..5363482213 100644
--- a/qemu-io-cmds.c
+++ b/qemu-io-cmds.c
@@ -907,7 +907,7 @@ static int readv_f(BlockBackend *blk, int argc, char **argv)
 memset(cmp_buf, pattern, qiov.size);
 if (memcmp(buf, cmp_buf, qiov.size)) {
 printf("Pattern verification failed at offset %"
-   PRId64 ", %zd bytes\n", offset, qiov.size);
+   PRId64 ", %zu bytes\n", offset, qiov.size);
 ret = -EINVAL;
 }
 g_free(cmp_buf);
@@ -1294,7 +1294,7 @@ static void aio_read_done(void *opaque, int ret)
 memset(cmp_buf, ctx->pattern, ctx->qiov.size);
 if (memcmp(ctx->buf, cmp_buf, ctx->qiov.size)) {
 printf("Pattern verification failed at offset %"
-   PRId64 ", %zd bytes\n", ctx->offset, ctx->qiov.size);
+   PRId64 ", %zu bytes\n", ctx->offset, ctx->qiov.size);
 }
 g_free(cmp_buf);
 }
-- 
2.11.0
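
For context, a minimal standalone illustration of the conversion-specifier
distinction the fix relies on: %zu matches an unsigned size (size_t, which is
what qiov.size is here), while %zd matches the signed ssize_t.

    #include <stdio.h>
    #include <sys/types.h>

    int main(void)
    {
        size_t n = 42;      /* unsigned size: print with %zu */
        ssize_t m = -1;     /* signed size:   print with %zd */

        printf("%zu bytes\n", n);
        printf("%zd returned\n", m);
        return 0;
    }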




Re: [Qemu-devel] [RFC PATCH 06/21] trace: show trace point counts in the monitor

2018-10-06 Thread Richard Henderson
On 10/5/18 8:48 AM, Alex Bennée wrote:
> Now we have counts for each trace point we can expose them in the
> monitor when the user queries what trace points are available.
> 
> Signed-off-by: Alex Bennée 
> ---
>  monitor.c   | 5 +++--
>  qapi/trace.json | 3 ++-
>  trace/qmp.c | 1 +
>  3 files changed, 6 insertions(+), 3 deletions(-)

I would have merged this with previous, but whatever.
Reviewed-by: Richard Henderson 


r~



Re: [Qemu-devel] [RFC PATCH 05/21] trace: keep a count of trace-point hits

2018-10-06 Thread Richard Henderson
On 10/5/18 8:48 AM, Alex Bennée wrote:
> @@ -81,6 +81,8 @@ def generate_c(event, group):
>  cond = "trace_event_get_state(%s)" % event_id
>  
>  out('',
> +'%(event_obj)s.count++;',
> +'',
>  'if (!%(cond)s) {',
>  'return;',
>  '}',

Is it really "hit" if the condition isn't true?  And you'd want to document the
non-atomicity in a comment, lest it be "fixed" later.


r~



Re: [Qemu-devel] [RFC PATCH 03/21] linux-user: add -dfilter progtext shortcut

2018-10-06 Thread Richard Henderson
On 10/5/18 8:48 AM, Alex Bennée wrote:
> When debugging you often don't care about the libraries but just the
> code in the testcase. Rather than make the user build this by hand
> offer a shortcut.
> 
> Signed-off-by: Alex Bennée 
> Reviewed-by: Laurent Vivier 
> ---
>  linux-user/main.c | 16 +++-
>  1 file changed, 15 insertions(+), 1 deletion(-)

Reviewed-by: Richard Henderson 


r~




Re: [Qemu-devel] [RFC PATCH 04/21] trace: enable the exec_tb trace events

2018-10-06 Thread Richard Henderson
On 10/5/18 8:48 AM, Alex Bennée wrote:
> Our performance isn't so critical that we can't spare a simple flag
> check when we exec a TB considering everything else we check in the
> outer loop.
> 
> Signed-off-by: Alex Bennée 
> ---
>  accel/tcg/trace-events | 9 +
>  1 file changed, 5 insertions(+), 4 deletions(-)

Reviewed-by: Richard Henderson 


r~




Re: [Qemu-devel] [RFC PATCH 02/21] util/log: add qemu_dfilter_append_range()

2018-10-06 Thread Richard Henderson
On 10/5/18 8:48 AM, Alex Bennée wrote:
> This allows us to add to the dfilter range as we go.
> 
> Signed-off-by: Alex Bennée 
> ---
>  include/qemu/log.h | 1 +
>  util/log.c | 6 ++
>  2 files changed, 7 insertions(+)

Reviewed-by: Richard Henderson 


r~




Re: [Qemu-devel] [RFC PATCH 01/21] util/log: allow -dfilter to stack

2018-10-06 Thread Richard Henderson
On 10/5/18 8:48 AM, Alex Bennée wrote:
> The original dfilter was patched to avoid a leak in the case of
> multiple -dfilter ranges. There is no reason not to allow the user to
> stack several dfilter options rather than push them all into one mega
> line. We avoid the leak by simply only allocating the first time
> around. As we are using a g_array it will automatically re-size as
> needed.
> 
> The allocation is pushed to a helper as future patches will offer
> additional ways to add to the dfilter.
> 
> We also add a helper qemu_reset_dfilter_ranges() so we can be explicit
> in our unit tests.
> 
> Signed-off-by: Alex Bennée 
> ---
>  include/qemu/log.h   |  1 +
>  tests/test-logging.c | 14 ++
>  util/log.c   | 23 +--
>  3 files changed, 32 insertions(+), 6 deletions(-)

Reviewed-by: Richard Henderson 


r~



[Qemu-devel] [PATCH] target/i386: kvm: just return after migrate_add_blocker failed

2018-10-06 Thread Li Qiang
When migrate_add_blocker() fails, invtsc_mig_blocker has not been
added, so there is no need to remove it. This also saves a few instructions.

Signed-off-by: Li Qiang 
---
 target/i386/kvm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/i386/kvm.c b/target/i386/kvm.c
index 0b2a07d3a4..6ba84a39f3 100644
--- a/target/i386/kvm.c
+++ b/target/i386/kvm.c
@@ -1153,7 +1153,7 @@ int kvm_arch_init_vcpu(CPUState *cs)
 if (local_err) {
 error_report_err(local_err);
 error_free(invtsc_mig_blocker);
-goto fail;
+return r;
 }
 /* for savevm */
 vmstate_x86_cpu.unmigratable = 1;
-- 
2.17.1