On 11/16/22 07:56, Markus Armbruster wrote:
Cédric Le Goater writes:
Currently, when a block backend is attached to a m25p80 device and the
associated file size does not match the flash model, QEMU complains
with the error message "failed to read the initial flash content".
This is confusing f
Cédric Le Goater writes:
> Currently, when a block backend is attached to a m25p80 device and the
> associated file size does not match the flash model, QEMU complains
> with the error message "failed to read the initial flash content".
> This is confusing for the user.
>
> Use blk_check_size_and
On Wed, Nov 16, 2022 at 10:59 AM Tobias Fiebig wrote:
>
> Heho,
> I just tested around with the patch;
> Good news: Certainly my builds are being executed. Also, if I patch the old
> code to have a MAX_MTU <= the max MTU on my path, throughput is ok.
>
> Bad news: Something is wrong with getting
On Tue, Nov 15, 2022 at 22:36:07 +, Alex Bennée wrote:
> This is exactly the sort of thing rr is great for. Can you trigger it in
> that?
>
> https://rr-project.org/
The sanitizers should also help.
For TLB flush tracing, defining DEBUG_TLB at the top of cputlb.c
might be useful.
Peter Maydell writes:
> On Tue, 8 Nov 2022 at 15:50, Schspa Shi wrote:
>>
>>
>> Peter Maydell writes:
>>
>> > On Tue, 8 Nov 2022 at 13:54, Peter Maydell
>> > wrote:
>> >>
>> >> On Tue, 8 Nov 2022 at 12:52, Schspa Shi wrote:
>> >> > I think this lowmem does not mean below 4GB. and it is to
On Mon, Nov 14, 2022 at 11:43:37AM +, Alex Bennée wrote:
>
> Chao Peng writes:
>
>
> > Introduction
> >
> > KVM userspace being able to crash the host is horrible. Under current
> > KVM architecture, all guest memory is inherently accessible from KVM
> > userspace and is expose
On Wed, Nov 16, 2022 at 7:43 AM Tobias Fiebig wrote:
>
> Heho,
> Just to keep you in the loop; Just applied the patch, but things didn't
> really get better; I am currently doing a 'make clean; make' for good measure
> (had built head first), and will also double-check that there is no
> accide
On Wed, Nov 16, 2022 at 12:37 AM Stefan Hajnoczi wrote:
>
> The Large-Send Task Offload Tx Descriptor (9.2.1 Transmit) has a
> Large-Send MSS value where the driver specifies the MSS. See the
> datasheet here:
> http://realtek.info/pdf/rtl8139cp.pdf
>
> The code ignores this value and uses a hardc
On Wed, Nov 16, 2022 at 4:17 AM Alex Bennée wrote:
>
>
> John Snow writes:
>
> > Instead of using a hardcoded timeout, just rely on Avocado's built-in
> > test case timeout. This helps avoid timeout issues on machines where 60
> > seconds is not sufficient.
> >
> > Signed-off-by: John Snow
> > -
Fix setprop_sized method in fdt rtc node.
Signed-off-by: Xiaojuan Yang
---
hw/loongarch/virt.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/hw/loongarch/virt.c b/hw/loongarch/virt.c
index b9c18ee517..958be74fa1 100644
--- a/hw/loongarch/virt.c
+++ b/hw/loongarch/virt.c
@@
Hello QEMU/KVM folks
I’m hitting a racy SMP boot issue with edk2/OVMF secure boot. It is caused by
the race when a SIPI is issued while the CPU is in SMM.
For details, please refer to the edk2/OVMF bugzilla ticket:
https://bugzilla.tianocore.org/show_bug.cgi?id=4132
I’d like to know whether there is
On Tue, Nov 15, 2022 at 7:25 PM Eugenio Perez Martin
wrote:
>
> On Tue, Nov 15, 2022 at 4:04 AM Jason Wang wrote:
> >
> > On Tue, Nov 15, 2022 at 12:31 AM Eugenio Perez Martin
> > wrote:
> > >
> > > On Mon, Nov 14, 2022 at 5:30 AM Jason Wang wrote:
> > > >
> > > >
> > > > On 2022/11/11 21:12, Eu
On Wed, Nov 16, 2022 at 2:58 AM John Snow wrote:
>
> Instead of using a hardcoded timeout, just rely on Avocado's built-in
> test case timeout. This helps avoid timeout issues on machines where 60
> seconds is not sufficient.
>
> Signed-off-by: John Snow
Reviewed-by: Ani Sinha
> ---
> tests/a
On Tue, Nov 15, 2022 at 04:56:12PM +, Alex Bennée wrote:
>
> Chao Peng writes:
>
> > This new KVM exit allows userspace to handle memory-related errors. It
> > indicates an error happens in KVM at guest memory range [gpa, gpa+size).
> > The flags includes additional information for userspace
On Wed, Nov 16, 2022 at 3:22 AM John Snow wrote:
>
> On Tue, Nov 15, 2022 at 1:47 PM John Snow wrote:
> >
> > On Tue, Nov 15, 2022 at 9:31 AM Ani Sinha wrote:
> > >
> > > On Tue, Nov 15, 2022 at 3:36 PM Ani Sinha wrote:
> > > >
> > > > On Tue, Nov 15, 2022 at 9:07 AM Ani Sinha wrote:
> > > > >
On Wed, Nov 16, 2022 at 12:18 AM John Snow wrote:
>
> On Tue, Nov 15, 2022 at 9:31 AM Ani Sinha wrote:
> >
> > On Tue, Nov 15, 2022 at 3:36 PM Ani Sinha wrote:
> > >
> > > On Tue, Nov 15, 2022 at 9:07 AM Ani Sinha wrote:
> > > >
> > > > On Tue, Nov 15, 2022 at 5:13 AM John Snow wrote:
> > > >
Heho,
I just tested around with the patch;
Good news: Certainly my builds are being executed. Also, if I patch the old
code to have a MAX_MTU <= the max MTU on my path, throughput is ok.
Bad news: Something is wrong with getting the MSS in the patch you shared. When
enabling DPRINT, values are o
On Tue, Nov 15, 2022 at 10:12 AM Frédéric Pétrot
wrote:
>
> Commit 40244040a7a changed the way the S irqs are numbered. This breaks when
> using numa configuration, e.g.:
> ./qemu-system-riscv64 -nographic -machine virt,dumpdtb=numa-tree.dtb \
> -m 2G -smp cpus=16 \
>
Hi, Lei
Dr. David Alan Gilbert has already reviewed the hmp part; could you
please review the cryptodev/virtio-crypto part?
I volunteer to co-maintain the cryptodev part and would like to add myself as
cryptodev maintainer in the next version. Do you have any suggestions?
On 11/11/22 14:45, zhenw
On Wed, Nov 16, 2022 at 1:13 AM Cédric Le Goater wrote:
>
> Currently, when a block backend is attached to a m25p80 device and the
> associated file size does not match the flash model, QEMU complains
> with the error message "failed to read the initial flash content".
> This is confusing for the
Heho,
Just to keep you in the loop; Just applied the patch, but things didn't really
get better; I am currently doing a 'make clean; make' for good measure (had
built head first), and will also double-check that there is no accidental use
of system-qemu libs.
If that still doesn't show an effe
On 2022/11/15 21:44, Richard Henderson wrote:
On 11/13/22 12:32, Weiwei Li wrote:
{
sq 101 ... ... .. ... 10 @c_sqsp
c_fsd 101 .. . 10 @c_sdsp
+
+ # *** RV64 and RV32 Zcmp Extension ***
+ cm_push 101 11000 .. 10 @zcmp
+ cm_pop
On Thu, 3 Nov 2022 18:16:13 +0200
Avihai Horon wrote:
> Move vfio_dev_get_region_info() logic from vfio_migration_probe() to
> vfio_migration_init(). This logic is specific to v1 protocol and moving
> it will make it easier to add the v2 protocol implementation later.
> No functional changes inte
Applied, thanks.
Please update the changelog at https://wiki.qemu.org/ChangeLog/7.2 for any
user-visible changes.
This pull request causes the following CI failure:
https://gitlab.com/qemu-project/qemu/-/jobs/3328449477
I haven't figured out the root cause of the failure. Maybe the pull
request just exposes a latent failure. Please take a look and we can
try again for -rc2.
Thanks,
Stefan
On Thu, 3 Nov 2022 18:16:10 +0200
Avihai Horon wrote:
> Currently, if IOMMU of a VFIO container doesn't support dirty page
> tracking, migration is blocked. This is because a DMA-able VFIO device
> can dirty RAM pages without updating QEMU about it, thus breaking the
> migration.
>
> However, th
On Tue, Nov 15, 2022, 5:48 PM Alex Bennée wrote:
>
> John Snow writes:
>
> > Instead of using a hardcoded timeout, just rely on Avocado's built-in
> > test case timeout. This helps avoid timeout issues on machines where 60
> > seconds is not sufficient.
> >
> > Signed-off-by: John Snow
> > ---
John Snow writes:
> Instead of using a hardcoded timeout, just rely on Avocado's built-in
> test case timeout. This helps avoid timeout issues on machines where 60
> seconds is not sufficient.
>
> Signed-off-by: John Snow
> ---
> tests/avocado/acpi-bits.py | 10 ++
> 1 file changed, 2
Aaron Lindsay writes:
> Hello,
>
> I have been wrestling with what might be a bug in the plugin memory
> callbacks. The immediate error is that I hit the
> `g_assert_not_reached()` in the 'default:' case in
> qemu_plugin_vcpu_mem_cb, indicating the callback type was invalid. When
> breaking on
Hello,
I have been wrestling with what might be a bug in the plugin memory
callbacks. The immediate error is that I hit the
`g_assert_not_reached()` in the 'default:' case in
qemu_plugin_vcpu_mem_cb, indicating the callback type was invalid. When
breaking on this assertion in gdb, the contents of
On Tue, Nov 15, 2022 at 1:47 PM John Snow wrote:
>
> On Tue, Nov 15, 2022 at 9:31 AM Ani Sinha wrote:
> >
> > On Tue, Nov 15, 2022 at 3:36 PM Ani Sinha wrote:
> > >
> > > On Tue, Nov 15, 2022 at 9:07 AM Ani Sinha wrote:
> > > >
> > > > On Tue, Nov 15, 2022 at 5:13 AM John Snow wrote:
> > > > >
Instead of using a hardcoded timeout, just rely on Avocado's built-in
test case timeout. This helps avoid timeout issues on machines where 60
seconds is not sufficient.
Signed-off-by: John Snow
---
tests/avocado/acpi-bits.py | 10 ++
1 file changed, 2 insertions(+), 8 deletions(-)
diff
On Tue, Nov 15, 2022 at 04:50:11PM +0100, Philippe Mathieu-Daudé wrote:
> On 15/11/22 16:10, Cédric Le Goater wrote:
> > Currently, when a block backend is attached to a m25p80 device and the
> > associated file size does not match the flash model, QEMU complains
> > with the error message "failed
On Tue, Nov 15, 2022 at 11:29:13PM +0530, manish.mishra wrote:
> > > + while (bytes < nbytes) {
> > > + bytes = klass->io_read_peek(ioc,
> > > + buf,
> > > + nbytes,
> > > + errp);
> > IIUC
On Tue, Nov 15, 2022 at 02:06:57PM +0100, Cédric Le Goater wrote:
> Hello Peter,
>
> On 11/14/22 20:08, Peter Delevoryas wrote:
> > I've been using this patch for a long time so that I don't have to use
> > dd to zero-extend stuff all the time. It's just doing what people are
> > doing already, ri
On Tue, Nov 15, 2022 at 5:02 PM Peter Maydell wrote:
>
> On Mon, 14 Nov 2022 at 19:22, Strahinja Jankovic
> wrote:
> > Ok, I will start preparing that separate patch for error logging for sun4i.
> >
> > Since this is my first time submitting a patch, is there anything else
> > I need to do with t
Please only include bug fixes for 7.2 in pull requests during QEMU
hard freeze. The AVX2 support has issues (see my other email) and
anything else that isn't a bug fix should be dropped too.
Stefan
On Tue, 15 Nov 2022 at 10:40, Juan Quintela wrote:
>
> The following changes since commit 98f10f0e2613ba1ac2ad3f57a5174014f6dcb03d:
>
> Merge tag 'pull-target-arm-20221114' of
> https://git.linaro.org/people/pmaydell/qemu-arm into staging (2022-11-14
> 13:31:17 -0500)
>
> are available in the
In the next patch, ZFS TRIM support for FreeBSD will be added. Move the
Linux-specific TRIM code to the commands-linux.c file.
Signed-off-by: Alexander Ivanov
---
qga/commands-linux.c | 73
qga/commands-posix.c | 72 ---
On Tue, Nov 15, 2022 at 06:11:30PM +, Daniel P. Berrangé wrote:
> On Mon, Nov 07, 2022 at 04:51:59PM +, manish.mishra wrote:
> > Current logic assumes that channel connections on the destination side are
> > always established in the same order as the source and the first one will
> > alway
On Tue, Nov 15, 2022 at 9:31 AM Ani Sinha wrote:
>
> On Tue, Nov 15, 2022 at 3:36 PM Ani Sinha wrote:
> >
> > On Tue, Nov 15, 2022 at 9:07 AM Ani Sinha wrote:
> > >
> > > On Tue, Nov 15, 2022 at 5:13 AM John Snow wrote:
> > > >
> > > > On Thu, Nov 10, 2022 at 11:22 PM Ani Sinha wrote:
> > > >
Use the zpool tool for trimming ZFS pools on FreeBSD.
Signed-off-by: Alexander Ivanov
---
qga/commands-bsd.c| 109 ++
qga/commands-common.h | 1 +
2 files changed, 110 insertions(+)
diff --git a/qga/commands-bsd.c b/qga/commands-bsd.c
index 15cade2d4c..
Move the Linux-specific FS TRIM code to commands-linux.c and add support for
ZFS TRIM on FreeBSD.
Alexander Ivanov (2):
qga: Move FS TRIM code to commands-linux.c
qga: Add ZFS TRIM support for FreeBSD
qga/commands-bsd.c| 109 ++
qga/commands-common.h |
On Tue, Nov 15, 2022, 12:44 Taylor Simpson wrote:
>
>
> > -Original Message-
> > From: Stefan Hajnoczi
> > Sent: Tuesday, November 15, 2022 10:52 AM
> > To: Taylor Simpson
> > Cc: qemu-devel@nongnu.org; richard.hender...@linaro.org;
> > phi...@linaro.org; peter.mayd...@linaro.org; Brian
On Tue, Nov 15, 2022 at 10:48:40AM +, Peter Maydell wrote:
> On Mon, 14 Nov 2022 at 19:08, Peter Delevoryas wrote:
> >
> > I've been using this patch for a long time so that I don't have to use
> > dd to zero-extend stuff all the time. It's just doing what people are
> > doing already, right?
On Mon, Nov 07, 2022 at 04:51:59PM +, manish.mishra wrote:
> Current logic assumes that channel connections on the destination side are
> always established in the same order as the source and the first one will
> always be the main channel followed by the multifid or post-copy
> preemption cha
Please don't merge this PULL request,
It contains changes to the "io" subsystem in patch 3 that I
have not reviewed nor acked yet, and which should have been
split out as a separate patch from the migration changes.
With regards,
Daniel
On Tue, Nov 15, 2022 at 04:34:44PM +0100, Juan Quintela wrote
On 15/11/22 11:06 pm, Peter Xu wrote:
Hi, Manish,
On Mon, Nov 07, 2022 at 04:51:59PM +, manish.mishra wrote:
Current logic assumes that channel connections on the destination side are
always established in the same order as the source and the first one will
always be the main channel foll
On Tue, Nov 15, 2022 at 06:25:27AM -0600, Or Ozeri wrote:
> Starting from ceph Reef, RBD has built-in support for layered encryption,
> where each ancestor image (in a cloned image setting) can be possibly
> encrypted using a unique passphrase.
>
> A new function, rbd_encryption_load2, was added t
On 221115 1119, Peter Xu wrote:
> On Fri, Oct 28, 2022 at 03:16:42PM -0400, Alexander Bulekov wrote:
> > +/* Do not allow more than one simultaneous access to a device's IO
> > Regions */
> > +if (mr->owner &&
> > +!mr->ram_device && !mr->ram && !mr->rom_device &&
> > !mr->read
> -Original Message-
> From: Stefan Hajnoczi
> Sent: Tuesday, November 15, 2022 10:52 AM
> To: Taylor Simpson
> Cc: qemu-devel@nongnu.org; richard.hender...@linaro.org;
> phi...@linaro.org; peter.mayd...@linaro.org; Brian Cain
> ; Matheus Bernardino (QUIC)
> ; stefa...@redhat.com
> Subj
Hi, Manish,
On Mon, Nov 07, 2022 at 04:51:59PM +, manish.mishra wrote:
> Current logic assumes that channel connections on the destination side are
> always established in the same order as the source and the first one will
> always be the main channel followed by the multifid or post-copy
> p
On Tue, Nov 15, 2022 at 1:25 PM Or Ozeri wrote:
>
> Starting from ceph Reef, RBD has built-in support for layered encryption,
> where each ancestor image (in a cloned image setting) can be possibly
> encrypted using a unique passphrase.
>
> A new function, rbd_encryption_load2, was added to librbd
On Tue, 15 Nov 2022 at 16:17, Alex Bennée wrote:
>
> Hi Peter,
>
> These are the 2 GICv2 patches as you suggested in the last review -
> this time with an updated commit message for the second patch. I don't
> know if they qualify for 7.2 but here they are if you want them.
>
> Alex Bennée (2):
>
Chao Peng writes:
> This new KVM exit allows userspace to handle memory-related errors. It
> indicates an error happens in KVM at guest memory range [gpa, gpa+size).
> The flags includes additional information for userspace to handle the
> error. Currently bit 0 is defined as 'private memory' w
On Tue, 15 Nov 2022 at 11:16, Taylor Simpson wrote:
>
> OK. I wasn't sure if performance improvements would be considered new
> features or not.
No problem! If there is a performance regression in the upcoming
release, then fixes will be accepted. For example, if QEMU 7.1 was
fast but the upcom
On Tue, 15 Nov 2022 at 16:20, Peter Xu wrote:
>
> On Fri, Oct 28, 2022 at 03:16:42PM -0400, Alexander Bulekov wrote:
> > +/* Do not allow more than one simultaneous access to a device's IO
> > Regions */
> > +if (mr->owner &&
> > +!mr->ram_device && !mr->ram && !mr->rom_device
Am 15.11.22 um 17:40 schrieb Christian Borntraeger:
Am 15.11.22 um 17:05 schrieb Alex Bennée:
Christian Borntraeger writes:
Am 15.11.22 um 15:31 schrieb Alex Bennée:
"Michael S. Tsirkin" writes:
On Mon, Nov 14, 2022 at 06:15:30PM +0100, Christian Borntraeger wrote:
Am 14.11.22 um
On Tue, 15 Nov 2022 at 16:29, Philippe Mathieu-Daudé wrote:
> Possible future cleanup, define JEP106_ID_ARM:
>
> $ git grep 0x43b
> hw/intc/arm_gic.c:1671:*data = (s->revision << 16) | 0x43b;
> hw/intc/gicv3_internal.h:743:return 0x43b;
> hw/misc/armv7m_ras.c:26:*data = 0x4
Am 15.11.22 um 17:05 schrieb Alex Bennée:
Christian Borntraeger writes:
Am 15.11.22 um 15:31 schrieb Alex Bennée:
"Michael S. Tsirkin" writes:
On Mon, Nov 14, 2022 at 06:15:30PM +0100, Christian Borntraeger wrote:
Am 14.11.22 um 18:10 schrieb Michael S. Tsirkin:
On Mon, Nov 14, 202
The Large-Send Task Offload Tx Descriptor (9.2.1 Transmit) has a
Large-Send MSS value where the driver specifies the MSS. See the
datasheet here:
http://realtek.info/pdf/rtl8139cp.pdf
The code ignores this value and uses a hardcoded MSS of 1500 bytes
instead. When the MTU is less than 1500 bytes t
On 15/11/22 17:17, Alex Bennée wrote:
a66a24585f (hw/intc/arm_gic: Implement read of GICC_IIDR) implemented
this for the CPU interface register. The fact we don't implement it
shows up when running Xen with -d guest_error which is definitely
wrong because the guest is perfectly entitled to read i
On Fri, Oct 28, 2022 at 03:16:42PM -0400, Alexander Bulekov wrote:
> +/* Do not allow more than one simultaneous access to a device's IO
> Regions */
> +if (mr->owner &&
> +!mr->ram_device && !mr->ram && !mr->rom_device && !mr->readonly)
> {
> +dev = (DeviceState *) obj
a66a24585f (hw/intc/arm_gic: Implement read of GICC_IIDR) implemented
this for the CPU interface register. The fact we don't implement it
shows up when running Xen with -d guest_error which is definitely
wrong because the guest is perfectly entitled to read it.
Signed-off-by: Alex Bennée
---
v2
Hi Peter,
These are the 2 GICv2 patches as you suggested in the last review -
this time with an updated commit message for the second patch. I don't
know if they qualify for 7.2 but here they are if you want them.
Alex Bennée (2):
hw/intc: clean-up access to GIC multi-byte registers
hw/intc:
gic_dist_readb was returning a word value which just happened to work
as a result of the way we OR the data together. Let's fix it so only
the explicit byte is returned for each part of GICD_TYPER. I've
changed the return type to uint8_t although the overflow is only
detected with an explicit -Wconv
OK. I wasn't sure if performance improvements would be considered new features
or not.
Taylor
> -Original Message-
> From: Stefan Hajnoczi
> Sent: Thursday, November 10, 2022 7:07 PM
> To: Taylor Simpson
> Cc: qemu-devel@nongnu.org; richard.hender...@linaro.org;
> phi...@linaro.org;
Christian Borntraeger writes:
> Am 15.11.22 um 15:31 schrieb Alex Bennée:
>> "Michael S. Tsirkin" writes:
>>
>>> On Mon, Nov 14, 2022 at 06:15:30PM +0100, Christian Borntraeger wrote:
Am 14.11.22 um 18:10 schrieb Michael S. Tsirkin:
> On Mon, Nov 14, 2022 at 05:55:09PM +010
On Mon, 14 Nov 2022 at 19:22, Strahinja Jankovic
wrote:
> Ok, I will start preparing that separate patch for error logging for sun4i.
>
> Since this is my first time submitting a patch, is there anything else
> I need to do with this one? Thanks!
No, I'll take the patch from here. Since this is a
From: Peter Xu
The major change is to replace "!save_page_use_compression()" with
"xbzrle_enabled" to make it clear.
Reasonings:
(1) When compression enabled, "!save_page_use_compression()" is exactly the
same as checking "xbzrle_enabled".
(2) When compression disabled, "!save_page_use_com
On 15/11/22 16:10, Cédric Le Goater wrote:
Currently, when a block backend is attached to a m25p80 device and the
associated file size does not match the flash model, QEMU complains
with the error message "failed to read the initial flash content".
This is confusing for the user.
Use blk_check_s
From: Peter Xu
Since we use PageSearchStatus to represent a channel, it makes perfect
sense to keep last_sent_block (aka, leverage RAM_SAVE_FLAG_CONTINUE) to be
per-channel rather than global because each channel can be sending
different pages on ramblocks.
Hence move it from RAMState into PageS
Signed-off-by: Juan Quintela
Reviewed-by: Dr. David Alan Gilbert
Reviewed-by: David Edmondson
Reviewed-by: Leonardo Bras
---
migration/ram.h | 2 ++
migration/ram.c | 2 +-
2 files changed, 3 insertions(+), 1 deletion(-)
diff --git a/migration/ram.h b/migration/ram.h
index c7af65ac74..e844966
We were recalculating it left and right. We plan to change those
values in later patches.
Signed-off-by: Juan Quintela
Reviewed-by: Leonardo Bras
---
migration/multifd.h | 4
migration/multifd.c | 7 ---
2 files changed, 8 insertions(+), 3 deletions(-)
diff --git a/migration/multifd.h
From: Peter Xu
Since we already have bitmap_mutex to protect either the dirty bitmap or
the clear log bitmap, we don't need atomic operations to set/clear/test on
the clear log bitmap. Switch all ops from atomic to non-atomic
versions, and meanwhile touch up the comments to show which lock is in
From: Peter Xu
When starting ram saving procedure (especially at the completion phase),
always set last_seen_block to non-NULL to make sure we can always correctly
detect the case where "we've migrated all the dirty pages".
Then we'll guarantee both last_seen_block and pss.block will be valid
al
From: "manish.mishra"
Current logic assumes that channel connections on the destination side are
always established in the same order as the source and the first one will
always be the main channel followed by the multifid or post-copy
preemption channel. This may not be always true, as even if a
And it appears that what is wrong is the code: during the bulk stage we
need to make sure that some block is dirty, with no games with
max_size at all.
Signed-off-by: Juan Quintela
Reviewed-by: Stefan Hajnoczi
---
migration/block.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --gi
We were calling qemu_target_page_size() left and right.
Signed-off-by: Juan Quintela
Reviewed-by: Leonardo Bras
---
migration/multifd.h | 4
migration/multifd-zlib.c | 14 ++
migration/multifd-zstd.c | 12 +---
migration/multifd.c | 18 --
4 f
From: Peter Xu
Helper to init PSS structures.
Reviewed-by: Dr. David Alan Gilbert
Signed-off-by: Peter Xu
Reviewed-by: Juan Quintela
Signed-off-by: Juan Quintela
---
migration/ram.c | 12 +---
1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/migration/ram.c b/migration/
From: Peter Xu
We used to allocate PSS structure on the stack for precopy when sending
pages. Make it static, so as to describe per-channel ram migration status.
Here we declared RAM_CHANNEL_MAX instances, preparing for postcopy to use
it, even though this patch has not yet started using the 2
From: Peter Xu
The multifd thread model does not work for compression, so explicitly disable it.
Note that previously, even when both were enabled, nothing went wrong,
because the compression code has higher priority and the multifd feature
was simply ignored. Now we'll fail even earlier at config
From: Peter Xu
Migration code has a lot to do with host pages. Teaching the PSS core about
the idea of a host page helps a lot and makes the code cleaner. Meanwhile,
this prepares for the future changes that can leverage the new PSS helpers
that this patch introduces to send host page in another thread
From: Peter Xu
The 2nd check on RAM_SAVE_FLAG_CONTINUE is a bit redundant. Use a boolean
to be clearer.
Reviewed-by: Dr. David Alan Gilbert
Signed-off-by: Peter Xu
Reviewed-by: Juan Quintela
Signed-off-by: Juan Quintela
---
migration/ram.c | 5 +++--
1 file changed, 3 insertions(+), 2 dele
From: Peter Xu
Introduce pss_channel for PageSearchStatus, define it as "the migration
channel to be used to transfer this host page".
We used to have rs->f, which is a mirror to MigrationState.to_dst_file.
After postcopy preempt initial version, rs->f can be dynamically changed
depending on wh
From: Peter Xu
With the new code sending pages in the rp-return thread, there's little point
in keeping the old code that maintains the preempt state in the migration
thread, because the new way should always be faster.
Then if we'll always send pages in the rp-return thread anyway, we don't
need t
From: Peter Xu
Don't take the bitmap mutex when sending pages, or when being throttled by
migration_rate_limit() (which is a bit tricky to call here in the ram code,
but still seems helpful).
It prepares for the possibility of concurrently sending pages in >1 threads
using the function ram_save_h
From: Peter Xu
In qemu_file_shutdown(), there's a possible race with the current order of
operations. There are two major things to do:
(1) Do real shutdown() (e.g. shutdown() syscall on socket)
(2) Update qemufile's last_error
We must do (2) before (1) otherwise there can be a race condition
From: Peter Xu
With all the facilities ready, send the requested page directly in the
rp-return thread rather than queuing it in the request queue, if and only
if postcopy preempt is enabled. It can do so because it uses a separate
channel for sending urgent pages. The only shared data is bi
From: ling xu
Unit test code is in test-xbzrle.c, and benchmark code is in xbzrle-bench.c
for performance benchmarking.
Signed-off-by: ling xu
Co-authored-by: Zhou Zhao
Co-authored-by: Jun Jin
Reviewed-by: Juan Quintela
Signed-off-by: Juan Quintela
---
tests/bench/xbzrle-bench.c | 465
From: Peter Xu
To prepare for thread-safety on page accounting, at least the counters below
need to be accessed atomically; they are:
ram_counters.transferred
ram_counters.duplicate
ram_counters.normal
ram_counters.postcopy_bytes
There are a lot of other counte
Signed-off-by: Juan Quintela
Reviewed-by: Leonardo Bras
---
migration/ram.h | 1 +
migration/ram.c | 2 +-
2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/migration/ram.h b/migration/ram.h
index e844966f69..038d52f49f 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -66,6 +66,
To sum up what was discussed in this series: I don't really see any
strong objection against these patches, so I will soon send v3, which is
pretty much the same except for patch 1, which will be removed.
I think these patches are useful and will be even more meaningful to the
reviewer when in th
From: Fiona Ebner
in the error case. The documentation in include/io/channel.h states
that -1 or QIO_CHANNEL_ERR_BLOCK should be returned upon error. Simply
passing along the return value from the bdrv functions has the
potential to confuse the call sites. Non-blocking mode is not
implemented cur
From: Peter Xu
Add a helper to show that postcopy preempt is enabled and currently active.
Reviewed-by: Dr. David Alan Gilbert
Signed-off-by: Peter Xu
Reviewed-by: Juan Quintela
Signed-off-by: Juan Quintela
---
migration/ram.c | 9 +++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff
From: Peter Xu
Now with rs->pss we can already cache channels in pss->pss_channels. That
pss_channel contains more information than rs->f because it's per-channel.
So rs->f could be replaced by rs->pss[RAM_CHANNEL_PRECOPY].pss_channel,
while rs->f itself is a bit vague now.
Note that vanilla p
The following changes since commit 98f10f0e2613ba1ac2ad3f57a5174014f6dcb03d:
Merge tag 'pull-target-arm-20221114' of
https://git.linaro.org/people/pmaydell/qemu-arm into staging (2022-11-14
13:31:17 -0500)
are available in the Git repository at:
https://gitlab.com/juan.quintela/qemu.git ta
From: Leonardo Bras
Move flushing code from multifd_send_sync_main() to a new helper, and call
it in multifd_send_sync_main().
Signed-off-by: Leonardo Bras
Reviewed-by: Juan Quintela
Signed-off-by: Juan Quintela
---
migration/multifd.c | 30 +++---
1 file changed, 19
From: Peter Xu
Removing referencing to RAMState.f in compress_page_with_multi_thread() and
flush_compressed_data().
Compression code by default isn't compatible with having more than one
channel (it won't currently know which channel to flush the compressed data
to), so to make it simple we always flush o
From: ling xu
This commit updates the AVX-512 support code for the xbzrle_encode_buffer
function to accelerate xbzrle encoding. A runtime check for AVX-512
support and a benchmark for this feature are added. Compared with the C
version of xbzrle_encode_buffer, the AVX-512 version can achieve
50%-70% pe