On 10/17/2018 04:14 PM, Paolo Bonzini wrote:
> On 16/10/2018 18:40, Emilio G. Cota wrote:
>>> +#define SMP_CACHE_BYTES 64
>>> +#define cacheline_aligned_in_smp \
>>> +__attribute__((__aligned__(SMP_CACHE_BYTES)))
>> You could use QEMU_ALIGNED() here.

Yes, you are right.

>>> +
>>> +#define WRIT
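
For reference, a minimal sketch of that change, assuming the existing
QEMU_ALIGNED() macro from include/qemu/compiler.h and keeping the
patch's cacheline_aligned_in_smp name:

    #include "qemu/compiler.h"   /* provides QEMU_ALIGNED() */

    #define SMP_CACHE_BYTES 64

    /* align a variable or struct field to its own cache line */
    #define cacheline_aligned_in_smp QEMU_ALIGNED(SMP_CACHE_BYTES)
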
On 16/10/2018 18:40, Emilio G. Cota wrote:
>> +#define SMP_CACHE_BYTES 64
>> +#define cacheline_aligned_in_smp \
>> +__attribute__((__aligned__(SMP_CACHE_BYTES)))
> You could use QEMU_ALIGNED() here.
>
>> +
>> +#define WRITE_ONCE(ptr, val) \
>> +(*((volatile typeof(ptr) *)(&(p
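
The quoted definition is cut off above; presumably it continues with the
usual volatile-cast form, roughly like this sketch (not the exact patch
text):

    /*
     * Force single, untorn loads/stores: the volatile cast keeps the
     * compiler from caching, tearing or duplicating the access.  These
     * provide no memory ordering by themselves.
     */
    #define WRITE_ONCE(ptr, val) \
        (*((volatile typeof(ptr) *)(&(ptr))) = (val))

    #define READ_ONCE(ptr) \
        (*((volatile typeof(ptr) *)(&(ptr))))
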
On Tue, Oct 16, 2018 at 19:10:03 +0800, guangrong.x...@gmail.com wrote:
(snip)
> diff --git a/include/qemu/ptr_ring.h b/include/qemu/ptr_ring.h
> new file mode 100644
> index 00..d8266d45f6
> --- /dev/null
> +++ b/include/qemu/ptr_ring.h
> @@ -0,0 +1,235 @@
(snip)
> +#define SMP_CACHE_BYTES
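
To illustrate what SMP_CACHE_BYTES and cacheline_aligned_in_smp are for:
ptr_ring keeps producer-side and consumer-side state on separate cache
lines so the two threads do not bounce the same line between their
caches (false sharing). A simplified sketch, with illustrative field
names rather than the exact layout from the patch:

    typedef struct PtrRing {
        /* written mostly by the producer thread */
        int producer cacheline_aligned_in_smp;

        /* written mostly by the consumer thread */
        int consumer cacheline_aligned_in_smp;

        /* ring storage, shared by both sides */
        void **queue cacheline_aligned_in_smp;
        int size;
    } PtrRing;
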
From: Xiao Guangrong
ptr_ring is good at minimizing cache contention and has a simple
memory-barrier model; it will be used by the lockless threads model to
pass requests between the main migration thread and the compression
threads.

Some changes are made:
1) drop unnecessary APIs, e.g., the _irq and _bh APIs
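
To make the intended hand-off concrete, a rough usage sketch follows.
The ring type and entry points (Ptr_Ring, ptr_ring_produce(),
ptr_ring_consume()) follow the Linux ptr_ring naming and may not match
this port exactly; CompressReq and do_compress() are purely
illustrative:

    #include "qemu/ptr_ring.h"               /* header added by this patch */

    typedef struct CompressReq CompressReq;  /* illustrative request type */
    void do_compress(CompressReq *req);      /* illustrative worker routine */

    /* main migration thread: producer side */
    static void queue_compress_request(Ptr_Ring *ring, CompressReq *req)
    {
        /* produce fails (returns non-zero) only while the ring is full */
        while (ptr_ring_produce(ring, req)) {
            /* ring full: let a compression thread catch up */
        }
    }

    /* compression thread: consumer side */
    static void compress_thread_run(Ptr_Ring *ring)
    {
        CompressReq *req;

        /* consume returns NULL once the ring is empty */
        while ((req = ptr_ring_consume(ring))) {
            do_compress(req);
        }
    }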