Marc-André Lureau writes:
> Signed-off-by: Marc-André Lureau
> ---
> qobject/json-streamer.c | 4 +++-
> qobject/qjson.c | 5 -
> tests/check-qjson.c | 8
> 3 files changed, 11 insertions(+), 6 deletions(-)
>
> diff --git a/qobject/json-streamer.c b/qobject/json-streame
Marc-André Lureau writes:
> An unterminated string will make the parser emit an error (tokens ==
> NULL). Let's report it.
>
> Signed-off-by: Marc-André Lureau
> ---
> qobject/qjson.c | 3 +++
> tests/check-qjson.c | 6 +++---
> 2 files changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/
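The behavior being fixed can be illustrated outside QEMU. This Python sketch is not QEMU's parser (names are illustrative); it only shows the failure mode: an unterminated string should be reported as an error rather than silently yielding nothing.

```python
import json

def parse_json(text):
    """Return the parsed object, or None on a parse error, mirroring
    the 'tokens == NULL' case described above for input such as an
    unterminated string."""
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        return None
```
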
Hi all,
Fam has just updated Patchew to run the latest version. Changes include:
- a new, more compact look for series visualization
- each message in the series is shown in a separate page
- links to other messages in the series show whether the message has had
replies or not
- the interdiff
On Thu, Jul 19, 2018 at 08:15:20PM +0800, guangrong.x...@gmail.com wrote:
> From: Xiao Guangrong
>
> flush_compressed_data() needs to wait for all compression threads to
> finish their work; after that, all threads are free until the
> migration feeds new requests to them. Reducing its calls can improve
On Thu, Jul 19, 2018 at 08:15:19PM +0800, guangrong.x...@gmail.com wrote:
> From: Xiao Guangrong
>
> Try to hold src_page_req_mutex only if the queue is not
> empty
>
> Reviewed-by: Dr. David Alan Gilbert
> Signed-off-by: Xiao Guangrong
Reviewed-by: Peter Xu
--
Peter Xu
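The optimization in the patch above, taking the mutex only when the queue looks non-empty, can be sketched generically. This is an illustrative Python model, not QEMU's code; the racy unlocked peek is safe because a request enqueued just after the check is simply picked up on the next call.

```python
import threading

class PageRequestQueue:
    """Illustrative sketch of the check-before-lock pattern."""
    def __init__(self):
        self.lock = threading.Lock()
        self.requests = []

    def pop_request(self):
        # Unlocked peek: avoids lock contention on the hot path
        # where the queue is usually empty.
        if not self.requests:
            return None
        with self.lock:
            return self.requests.pop(0) if self.requests else None
```
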
Marc-André Lureau writes:
> Hi
>
> On Fri, Jul 20, 2018 at 10:49 AM, Markus Armbruster wrote:
>> Marc-André Lureau writes:
>>
>>> qobject_from_jsonv() returns a single object. Let's make sure that
>>> during parsing we don't leak an intermediary object. Instead of
>>> returning the last object,
On Thu, Jul 19, 2018 at 08:15:18PM +0800, guangrong.x...@gmail.com wrote:
[...]
> @@ -1950,12 +1971,16 @@ retry:
> set_compress_params(&comp_param[idx], block, offset);
> qemu_cond_signal(&comp_param[idx].cond);
> qemu_mutex_unlock(&comp_param[idx].mutex);
>
Introduce a slave message to allow the slave to share its
VFIO group fd with the master and do the IOMMU programming
based on the virtio device's DMA address space for this
group in QEMU.
For the vhost backends which support vDPA, they could
leverage this message to ask the master to do the IOMMU
programming in QEM
This patch set introduces a slave message in vhost-user to
allow the slave to share its VFIO group fd with the master and do the
IOMMU programming based on the virtio device's DMA address space
for this group in QEMU.
For the vhost-user backends which support vDPA, they could
leverage this message to ask the master
This patch introduces an API to support getting
VFIOGroup from groupfd. This is useful when the
groupfd is opened and shared by another process
via UNIX socket.
Signed-off-by: Tiwei Bie
---
hw/vfio/common.c | 44 +++
include/hw/vfio/vfio-common.h | 1
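The "groupfd opened and shared by another process via UNIX socket" part relies on SCM_RIGHTS fd passing. A minimal Python sketch of that mechanism, with both ends in one process for illustration (requires Python 3.9+ for `socket.send_fds`/`recv_fds`; this is not QEMU's code):

```python
import os
import socket

def share_fd(fd):
    """Hand an open fd to a peer over a UNIX socket using SCM_RIGHTS.
    The receiver gets a new fd referring to the same open file."""
    parent, child = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
    socket.send_fds(parent, [b"x"], [fd])               # sender side
    msg, fds, flags, addr = socket.recv_fds(child, 1, 1)  # receiver side
    parent.close()
    child.close()
    return fds[0]
```
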
This patch splits vfio_get_group() into small functions.
It makes it easier to implement other vfio_get_group*()
functions in the future.
Signed-off-by: Tiwei Bie
---
hw/vfio/common.c | 83
1 file changed, 55 insertions(+), 28 deletions(-)
diff -
On Thu, Jul 19, 2018 at 08:15:17PM +0800, guangrong.x...@gmail.com wrote:
> From: Xiao Guangrong
>
> It is not used; removing it cleans the code up a little
>
> Signed-off-by: Xiao Guangrong
Reviewed-by: Peter Xu
--
Peter Xu
On Thu, Jul 19, 2018 at 08:15:16PM +0800, guangrong.x...@gmail.com wrote:
> From: Xiao Guangrong
>
> It will be used by the compression threads
>
> Signed-off-by: Xiao Guangrong
Reviewed-by: Peter Xu
--
Peter Xu
On Thu, Jul 19, 2018 at 08:15:15PM +0800, guangrong.x...@gmail.com wrote:
> @@ -1597,6 +1608,24 @@ static void migration_update_rates(RAMState *rs,
> int64_t end_time)
> rs->xbzrle_cache_miss_prev) / iter_count;
> rs->xbzrle_cache_miss_prev = xbzrle_counters.cache_miss;
>
[Expired for QEMU because there has been no activity for 60 days.]
** Changed in: qemu
Status: Incomplete => Expired
--
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1257352
Title:
kvm hangs o
[Expired for QEMU because there has been no activity for 60 days.]
** Changed in: qemu
Status: Incomplete => Expired
--
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1353947
Title:
Hypervisor
[Expired for qemu (Ubuntu) because there has been no activity for 60
days.]
** Changed in: qemu (Ubuntu)
Status: Incomplete => Expired
--
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1257352
Ti
[Expired for linux (Ubuntu) because there has been no activity for 60
days.]
** Changed in: linux (Ubuntu)
Status: Incomplete => Expired
--
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1353947
[Expired for QEMU because there has been no activity for 60 days.]
** Changed in: qemu
Status: Incomplete => Expired
--
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1425597
Title:
moving wind
On Wed, Jul 18, 2018 at 06:03:29PM +1000, Benjamin Herrenschmidt wrote:
> On Wed, 2018-07-18 at 16:12 +1000, David Gibson wrote:
> > On Thu, Jun 28, 2018 at 10:36:32AM +0200, Cédric Le Goater wrote:
> > > From: Benjamin Herrenschmidt
>
> Can you trim your replies ? It's really hard to find your c
On Thu, Jul 19, 2018 at 08:15:14PM +0800, guangrong.x...@gmail.com wrote:
> From: Xiao Guangrong
>
> The compressed page is not normal page
>
> Signed-off-by: Xiao Guangrong
I think it'll depend on how we are defining the "normal" page. AFAIU
it's the count of raw pages, then I think it's cor
On Thu, Jul 19, 2018 at 08:15:13PM +0800, guangrong.x...@gmail.com wrote:
> @@ -3113,6 +3132,8 @@ static Property migration_properties[] = {
> DEFINE_PROP_UINT8("x-compress-threads", MigrationState,
>parameters.compress_threads,
>DEFAULT_MIGRATE_
On Wed, Jun 20, 2018 at 07:10:12PM +1000, Alexey Kardashevskiy wrote:
> At the moment the PPC64/pseries guest only supports 4K/64K/16M IOMMU
> pages and POWER8 CPU supports the exact same set of page size so
> so far things worked fine.
>
> However POWER9 supports different set of sizes - 4K/64K/2
On Mon, 07/23 03:42, Andrew Randrianasulu wrote:
> It was crashing and crashing, so I tried to debug it a bit ...
>
>
> valgrind --leak-check=yes /dev/shm/qemu/x86_64-softmmu/qemu-system-x86_64
> -display
> sdl,gl=on -M q35 -soundhw
> hda -cdrom /home/guest/Downloads/ISO/slax-English-US-7.0
On Sun, Jul 22, 2018 at 10:06 PM Max Reitz wrote:
>
> On 2018-07-22 04:37, Fam Zheng wrote:
> > On Sun, Jul 22, 2018 at 5:08 AM Max Reitz wrote:
> >>
> >> On 2018-07-19 05:41, Fam Zheng wrote:
> >>> On my Fedora 28, /dev/null is locked by some other process (couldn't
> >>> inspect it due to the c
On 07/22/2018 02:31 PM, Richard Henderson wrote:
> On 07/22/2018 01:47 PM, Jason A. Donenfeld wrote:
>> Hello,
>>
>> Gcc 7.3 compiles bash's array_flush's dual assignment using:
>>
>> STP X20, X20, [X20,#0x10]
>>
>> But gcc 8.1 compiles it as:
>>
>> STR Q0, [X20,#0x10]
>>
>>
When host vector registers and operations were introduced, I failed
to mark the registers call clobbered as required by the ABI.
Fixes: 770c2fc7bb7
Cc: qemu-sta...@nongnu.org
Reported-by: Jason A. Donenfeld
Signed-off-by: Richard Henderson
---
tcg/i386/tcg-target.inc.c | 2 +-
1 file changed, 1
It was crashing and crashing, so I tried to debug it a bit ...
valgrind --leak-check=yes /dev/shm/qemu/x86_64-softmmu/qemu-system-x86_64
-display
sdl,gl=on -M q35 -soundhw
hda -cdrom /home/guest/Downloads/ISO/slax-English-US-7.0.8-x86_64.iso -m
1G -enable-kvm -d trace:e1000e* shows some
Hello!
Currently I'm trying pre-releases of QEMU, to avoid the situation where a release
is too buggy (2.12, for my taste: qemu-system-alpha was broken,
qemu-system-x86_64 -M q35 was broken ...)
using
qemu-system-ppc --version
QEMU emulator version 2.12.91 (v3.0.0-rc1-17-g5b3ecd3d94-dirty)
Cop
On 07/22/2018 01:47 PM, Jason A. Donenfeld wrote:
> Hello,
>
> Gcc 7.3 compiles bash's array_flush's dual assignment using:
>
> STP X20, X20, [X20,#0x10]
>
> But gcc 8.1 compiles it as:
>
> STR Q0, [X20,#0x10]
>
> Real processors seem okay, and qemu 2.11 seems okay. But
Could this affect virtio-scsi? I'm not so sure since it's not perfectly
reliable to reproduce, but v2.12.0 was hanging for me for a few minutes
at a time with virtio-scsi cache=writeback showing 100% disk util%. I
never had issues booting up, and didn't try SATA. v2.11.1 was fine.
My first attempt
Hello,
Gcc 7.3 compiles bash's array_flush's dual assignment using:
STP X20, X20, [X20,#0x10]
But gcc 8.1 compiles it as:
STR Q0, [X20,#0x10]
Real processors seem okay, and qemu 2.11 seems okay. But qemu 2.12
results in a segfaulting process. I'm pretty sure this is a T
From: zhanghailiang
Notify all net filters about the checkpoint and failover event.
Signed-off-by: zhanghailiang
Reviewed-by: Dr. David Alan Gilbert
---
migration/colo.c | 15 +++
1 file changed, 15 insertions(+)
diff --git a/migration/colo.c b/migration/colo.c
index 688d6f40b2..
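The notification pattern used here is a simple broadcast to every registered filter. An illustrative sketch with assumed names, not QEMU's filter API:

```python
class FilterNotifier:
    """The COLO frame broadcasts checkpoint/failover events to every
    registered net filter callback."""
    def __init__(self):
        self._filters = []

    def add(self, callback):
        self._filters.append(callback)

    def notify(self, event):
        # Deliver the event to each filter in registration order.
        for cb in self._filters:
            cb(event)
```
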
From: zhanghailiang
We don't need to flush all of the VM's RAM from the cache; only
flush the dirty pages since the last checkpoint.
Signed-off-by: Li Zhijian
Signed-off-by: Zhang Chen
Signed-off-by: zhanghailiang
Reviewed-by: Dr. David Alan Gilbert
---
migration/ram.c | 10 ++
1 file changed, 10 inse
Libvirt or other high-level software can use this command to query COLO status.
You can test this command like this:
{'execute':'query-colo-status'}
Signed-off-by: Zhang Chen
---
migration/colo.c| 21 +
qapi/migration.json | 32
2 files chang
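For illustration, the wire form of such a QMP command can be built generically; this helper is a sketch for constructing the JSON shown above, not part of QEMU or libvirt:

```python
import json

def qmp_command(name, **args):
    """Build the JSON wire form of a QMP command, e.g.
    {'execute': 'query-colo-status'}."""
    cmd = {"execute": name}
    if args:
        cmd["arguments"] = args
    return json.dumps(cmd)
```
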
From: Zhang Chen
This diagram helps users better understand COLO.
Suggested by Markus Armbruster.
Signed-off-by: Zhang Chen
---
docs/COLO-FT.txt | 34 ++
1 file changed, 34 insertions(+)
diff --git a/docs/COLO-FT.txt b/docs/COLO-FT.txt
index d7c7dcda8f..d5007895d
We record the addresses of the dirty pages that are received;
this helps flush the pages cached in the SVM.
As a trick, we record dirty pages by re-using the migration
dirty bitmap. In a later patch, we will start the dirty log
for the SVM, just like migration; in this way, we can record both
the
After one round of checkpoint, the states of the PVM and SVM
become consistent, so it is unnecessary to adjust the sequence
of net packets for old connections; besides, when failover
happens, filter-rewriter will go into failover mode, which doesn't
need to handle new TCP connections.
Signed-off-by: zhang
Filters need to process checkpoint/failover and other
events passed by the COLO frame.
Signed-off-by: zhanghailiang
---
include/net/filter.h | 5 +
net/filter.c | 17 +
net/net.c| 28
3 files changed, 50 insertions(+)
From: zhanghailiang
The COLO thread may sleep at qemu_sem_wait(&s->colo_checkpoint_sem);
when failover begins, it's better to wake it up to speed up
the process.
Signed-off-by: zhanghailiang
Reviewed-by: Dr. David Alan Gilbert
---
migration/colo.c | 8
1 file changed, 8 insertions(+)
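The wakeup idea can be modeled with a plain semaphore: posting it unblocks the sleeping thread immediately instead of waiting out the checkpoint delay. An illustrative Python sketch, not QEMU's code:

```python
import threading

checkpoint_sem = threading.Semaphore(0)
events = []

def colo_thread():
    # Would normally sleep here until the next checkpoint delay expires.
    checkpoint_sem.acquire()
    events.append("woken")

def failover():
    # Wake the COLO thread right away rather than waiting out the delay.
    checkpoint_sem.release()
```
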
We should not load the PVM's state directly into the SVM, because errors
may happen while the SVM is receiving data, which would break the SVM.
We need to ensure all data is received before loading the state into the SVM. We use
extra memory to cache this data (the PVM's RAM). The RAM cache on the secondary side
is i
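The cache-then-commit idea can be sketched abstractly: stage incoming pages in a cache and apply them only once the checkpoint stream completes, so a broken transfer leaves the SVM untouched. Illustrative Python, not QEMU's code:

```python
def receive_checkpoint(device_state, incoming):
    """Stage every incoming (addr, data) page in a cache; commit to the
    running state only when the whole checkpoint arrived intact."""
    cache = {}
    for item in incoming:
        if item == "EOF":
            device_state.update(cache)  # commit atomically
            return True
        addr, data = item
        cache[addr] = data
    return False  # incomplete stream: SVM state unchanged
```
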
There are several stages during the loadvm/savevm process. In different stages,
migration incoming processes different types of sections.
We want to control these stages more accurately; it will benefit COLO
performance, and we don't have to save the type of QEMU_VM_SECTION_START
sections every time while doing check
From: Zhang Chen
As suggested by Markus Armbruster, rename COLO "unknown" mode to "none" mode.
Signed-off-by: Zhang Chen
Reviewed-by: Eric Blake
Reviewed-by: Markus Armbruster
---
migration/colo-failover.c | 2 +-
migration/colo.c | 2 +-
qapi/migration.json | 10 +-
3 files
From: Zhang Chen
We add is_colo_support_client_type() to check the net client type for
COLO-compare. Currently we only support TAP.
Suggested by Jason.
Signed-off-by: Zhang Chen
---
include/net/net.h | 1 +
net/colo-compare.c | 5 +
net/net.c | 14 ++
3 files change
The incoming side needs to know whether migration is going into COLO
state before normal migration starts.
Instead of using the VMStateDescription to send colo_state
from the source side to the destination side, we use MIG_CMD_ENABLE_COLO
to indicate whether COLO is enabled.
Signed-off-by: zhanghailia
Make sure the master starts block replication after the slave's block
replication has started.
Besides, we need to activate the VM's blocks before it goes into
COLO state.
Signed-off-by: zhanghailiang
Signed-off-by: Li Zhijian
Signed-off-by: Zhang Chen
---
migration/colo.c | 43 +
While the VM is running, the PVM may dirty some pages; we will transfer
the PVM's dirty pages to the SVM and store them into the SVM's RAM cache at the next
checkpoint. So the content of the SVM's RAM cache will always be the same as the PVM's
memory after a checkpoint.
Instead of flushing all content of PVM's RAM
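Flushing only the bitmap-marked pages, rather than the whole cache, can be sketched as follows (illustrative model, not QEMU's code; the bitmap is modeled as a set of page indices):

```python
def flush_cache(svm_ram, ram_cache, dirty_bitmap):
    """Copy only the pages whose bit is set in the dirty bitmap from
    the RAM cache into the SVM's RAM, then clear those bits."""
    flushed = 0
    for page in list(dirty_bitmap):
        svm_ram[page] = ram_cache[page]
        dirty_bitmap.discard(page)
        flushed += 1
    return flushed
```
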
From: zhanghailiang
If errors happen during the VM's COLO FT stage, it's important to
notify the users of this event. Together with 'x-colo-lost-heartbeat',
users can intervene in COLO's failover work immediately.
If users don't want to get involved in COLO's failover verdict,
it is still necess
While doing a checkpoint, we need to flush all the unhandled packets.
By using the filter notifier mechanism, we can easily notify
every compare object to do this, which runs inside
the compare threads as a coroutine.
Signed-off-by: zhanghailiang
Signed-off-by: Zhang Chen
---
include/migra
It's a good idea to use a notifier to notify the COLO frame of
inconsistencies found while comparing packets.
Signed-off-by: Zhang Chen
Signed-off-by: zhanghailiang
---
net/colo-compare.c | 37 ++---
net/colo-compare.h | 2 ++
2 files changed, 28 insertions(+), 11 deletions(-)
diff
For COLO FT, both the PVM and SVM run at the same time,
and the state is only synced when needed.
So here, let the SVM run while not doing a checkpoint, and change
DEFAULT_MIGRATE_X_CHECKPOINT_DELAY to 200*100.
Besides, we forgot to release colo_checkpoint_sem and
colo_delay_timer; fix them here.
Signed-off-b
We add an almost-full TCP state machine in filter-rewriter, except
TCPS_LISTEN and some simplifications in the VM's active-close FIN states.
After a net connection was closed, we didn't clear its related resources
in connection_track_table, which led to a memory leak.
Let's track the state of each net connection,
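The leak fix amounts to dropping a connection's table entry once its teardown completes. A much-simplified Python sketch (real TCP teardown has more states than modeled here):

```python
# Map of connection key -> simplified TCP state.
connection_track_table = {}

def on_segment(conn_key, flags):
    """Advance a tracked connection's state; once both sides have sent
    FIN, delete the table entry so its resources are freed."""
    state = connection_track_table.setdefault(conn_key, "ESTABLISHED")
    if "FIN" in flags:
        state = "FIN_WAIT" if state == "ESTABLISHED" else "CLOSED"
    connection_track_table[conn_key] = state
    if state == "CLOSED":
        del connection_track_table[conn_key]  # the leak fix
```
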
COLO frame, block replication and the COLO proxy (colo-compare, filter-mirror,
filter-redirector, filter-rewriter) have existed in QEMU
for a long time; it's time to integrate these three parts to make COLO really
work.
In this series, we have some optimizations for the COLO frame, including separating
t
Le 19/07/2018 à 14:52, Stefan Markovic a écrit :
> From: Aleksandar Markovic
>
> Synchronize content of linux-user/mips/syscall_nr.h and
> linux-user/mips64/syscall_nr.h with Linux kernel 4.18 headers.
> This adds 7 new syscall numbers, the last being NR_statx.
>
> Signed-off-by: Aleksandar Mark
Le 18/07/2018 à 22:06, Richard Henderson a écrit :
> This allows the tests generated by debian-powerpc-user-cross
> to function properly, especially tests/test-coroutine.
>
> Technically this syscall is available to both ppc32 and ppc64,
> but only ppc32 glibc actually uses it. Thus the ppc64 pat
On Wed, Jul 18, 2018 at 04:46:21PM +0800, Xiao Guangrong wrote:
>
>
> On 07/17/2018 02:58 AM, Dr. David Alan Gilbert wrote:
> > * Xiao Guangrong (guangrong.x...@gmail.com) wrote:
> > >
> > >
> > > On 06/29/2018 05:42 PM, Dr. David Alan Gilbert wrote:
> > > > * Xiao Guangrong (guangrong.x...@gma
On Fri, Jul 20, 2018 at 04:39:32PM +0100, Peter Maydell wrote:
> In kill_qemu() we have an assert that checks that the QEMU process
> didn't dump core:
> assert(!WCOREDUMP(wstatus));
>
> Unfortunately the WCOREDUMP macro here means the resulting message
> is not very easy to comprehend
On 2018-07-22 04:37, Fam Zheng wrote:
> On Sun, Jul 22, 2018 at 5:08 AM Max Reitz wrote:
>>
>> On 2018-07-19 05:41, Fam Zheng wrote:
>>> On my Fedora 28, /dev/null is locked by some other process (couldn't
>>> inspect it due to the current lslocks limitation), so iotests 226 fails
>>> with some un