On Thursday, July 11, 2024 10:14 PM, Daniel P. Berrangé wrote:
> On Thu, Jul 11, 2024 at 02:13:31PM +0000, Wang, Wei W wrote:
> > On Thursday, July 11, 2024 8:25 PM, Daniel P. Berrangé wrote:
> > > On Thu, Jul 11, 2024 at 12:10:34PM +0000, Wang, Wei W wrote:
> > > > O
On Thursday, July 11, 2024 8:25 PM, Daniel P. Berrangé wrote:
> On Thu, Jul 11, 2024 at 12:10:34PM +0000, Wang, Wei W wrote:
> > On Thursday, July 11, 2024 7:48 PM, Daniel P. Berrangé wrote:
> > > On Wed, Jul 03, 2024 at 10:49:12PM +0800, Wei Wang wrote:
> > AFAIK,
On Thursday, July 11, 2024 8:25 PM, Daniel P. Berrangé wrote:
> On Thu, Jul 11, 2024 at 12:10:34PM +0000, Wang, Wei W wrote:
> > On Thursday, July 11, 2024 7:48 PM, Daniel P. Berrangé wrote:
> > > On Wed, Jul 03, 2024 at 10:49:12PM +0800, Wei Wang wrote:
> > > > When
On Thursday, July 11, 2024 7:48 PM, Daniel P. Berrangé wrote:
> On Wed, Jul 03, 2024 at 10:49:12PM +0800, Wei Wang wrote:
> > When enforce_cpuid is set to false, the guest is launched with a
> > filtered set of features, meaning that features unsupported by the
> > host are removed from the
On Friday, July 5, 2024 9:34 PM, Peter Xu wrote:
> On Fri, Jul 05, 2024 at 10:22:23AM +0000, Wang, Wei W wrote:
> > On Thursday, July 4, 2024 11:59 PM, Peter Xu wrote:
> > > On Thu, Jul 04, 2024 at 03:10:27PM +0000, Wang, Wei W wrote:
> > > > > > diff --git a/t
On Thursday, July 4, 2024 11:59 PM, Peter Xu wrote:
> On Thu, Jul 04, 2024 at 03:10:27PM +0000, Wang, Wei W wrote:
> > > > diff --git a/target/i386/cpu.c b/target/i386/cpu.c index
> > > > 4c2e6f3a71..7db4fe4ead 100644
> > > > --- a/target/i38
On Thursday, July 4, 2024 2:04 AM, Peter Xu wrote:
> On Wed, Jul 03, 2024 at 10:49:12PM +0800, Wei Wang wrote:
> > When enforce_cpuid is set to false, the guest is launched with a
> > filtered set of features, meaning that features unsupported by the
> > host are removed from the guest's vCPU
On Saturday, April 6, 2024 5:53 AM, Peter Xu wrote:
> On Fri, Apr 05, 2024 at 11:40:56AM +0800, Wei Wang wrote:
> > Before loading the guest states, ensure that the preempt channel is
> > ready to use, as some of the states (e.g. via virtio_load) might
> > trigger page faults that will be
On Friday, April 5, 2024 11:41 AM, Wang, Wei W wrote:
>
> Before loading the guest states, ensure that the preempt channel is ready
> to use, as some of the states (e.g. via virtio_load) might trigger page
> faults that will be handled through the preempt channel. So yield
On Friday, April 5, 2024 10:33 AM, Peter Xu wrote:
> On Fri, Apr 05, 2024 at 01:38:31AM +0000, Wang, Wei W wrote:
> > On Friday, April 5, 2024 4:57 AM, Peter Xu wrote:
> > > On Fri, Apr 05, 2024 at 12:48:15AM +0800, Wang, Lei wrote:
> > > > On 4/5/2024 0:25, Wang, We
On Friday, April 5, 2024 4:57 AM, Peter Xu wrote:
> On Fri, Apr 05, 2024 at 12:48:15AM +0800, Wang, Lei wrote:
> > On 4/5/2024 0:25, Wang, Wei W wrote:
> > > On Thursday, April 4, 2024 10:12 PM, Peter Xu wrote:
> > >> On Thu, Apr 04, 2024 at 06:05:50PM +0800, Wei Wang
On Thursday, April 4, 2024 10:12 PM, Peter Xu wrote:
> On Thu, Apr 04, 2024 at 06:05:50PM +0800, Wei Wang wrote:
> > Before loading the guest states, ensure that the preempt channel is
> > ready to use, as some of the states (e.g. via virtio_load) might
> > trigger page faults that will be
On Thursday, April 4, 2024 12:34 AM, Peter Xu wrote:
> On Wed, Apr 03, 2024 at 04:04:21PM +0000, Wang, Wei W wrote:
> > On Wednesday, April 3, 2024 10:42 PM, Peter Xu wrote:
> > > On Wed, Apr 03, 2024 at 04:35:35PM +0800, Wang, Lei wrote:
> > > > We should
On Wednesday, April 3, 2024 10:42 PM, Peter Xu wrote:
> On Wed, Apr 03, 2024 at 04:35:35PM +0800, Wang, Lei wrote:
> > We should change the following line from
> >
> > while (!qemu_sem_timedwait(&mis->postcopy_qemufile_dst_done, 100)) {
> >
> > to
> >
> > while
On Tuesday, April 2, 2024 2:56 PM, Wang, Lei4 wrote:
> On 4/2/2024 0:13, Peter Xu wrote:
> > On Fri, Mar 29, 2024 at 08:54:07AM +0000, Wang, Wei W wrote:
> >> On Friday, March 29, 2024 11:32 AM, Wang, Lei4 wrote:
> >>> When using the post-copy preemption feature t
On Tuesday, April 2, 2024 12:13 AM, Peter Xu wrote:
> On Fri, Mar 29, 2024 at 08:54:07AM +0000, Wang, Wei W wrote:
> > On Friday, March 29, 2024 11:32 AM, Wang, Lei4 wrote:
> > > When using the post-copy preemption feature to perform post-copy
> > > live migration, th
On Friday, March 29, 2024 11:32 AM, Wang, Lei4 wrote:
> When using the post-copy preemption feature to perform post-copy live
> migration, the below scenario could lead to a deadlock and the migration will
> never finish:
>
> - Source connect() the preemption channel in postcopy_start().
> -
On Wednesday, January 10, 2024 12:32 AM, Li, Xiaoyao wrote:
> On 1/9/2024 10:53 PM, Wang, Wei W wrote:
> > On Tuesday, January 9, 2024 1:47 PM, Li, Xiaoyao wrote:
> >> On 12/21/2023 9:47 PM, Wang, Wei W wrote:
> >>> On Thursday, December 21, 2023 7:54 PM, Li, Xiaoyao
On Tuesday, January 9, 2024 1:47 PM, Li, Xiaoyao wrote:
> On 12/21/2023 9:47 PM, Wang, Wei W wrote:
> > On Thursday, December 21, 2023 7:54 PM, Li, Xiaoyao wrote:
> >> On 12/21/2023 6:36 PM, Wang, Wei W wrote:
> >>> No need to specifically check for KVM_MEMORY_ATTRIB
On Thursday, December 21, 2023 7:54 PM, Li, Xiaoyao wrote:
> On 12/21/2023 6:36 PM, Wang, Wei W wrote:
> > No need to specifically check for KVM_MEMORY_ATTRIBUTE_PRIVATE there.
> > I'm suggesting below:
> >
> > diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c index
On Thursday, December 21, 2023 2:11 PM, Li, Xiaoyao wrote:
> On 12/12/2023 9:56 PM, Wang, Wei W wrote:
> > On Wednesday, November 15, 2023 3:14 PM, Xiaoyao Li wrote:
> >> Introduce the helper functions to set the attributes of a range of
> >>
On Wednesday, November 15, 2023 3:14 PM, Xiaoyao Li wrote:
> Introduce the helper functions to set the attributes of a range of memory to
> private or shared.
>
> This is necessary to notify KVM of the private/shared attribute of each gpa
> range.
> KVM needs the information to decide the GPA needs
On Wednesday, October 11, 2023 8:41 PM, Juan Quintela wrote:
> Wei Wang wrote:
> > Current migration_completion function is a bit long. Refactor the long
> > implementation into different subfunctions:
> > - migration_completion_precopy: completion code related to precopy
> > -
On Thursday, October 12, 2023 4:32 AM, Juan Quintela wrote:
> > Yeah, this generates a nicer diff, thanks.
> > I'll rebase and resend it.
>
> Already on the pull request.
>
> I have to fix the conflict, but it has the same changes that yours as far as
> I can
> see.
Yes, just need to remove
On Wednesday, October 11, 2023 8:41 PM, Juan Quintela wrote:
> Wei Wang wrote:
> > Current migration_completion function is a bit long. Refactor the long
> > implementation into different subfunctions:
> > - migration_completion_precopy: completion code related to precopy
> > -
On Friday, August 4, 2023 9:37 PM, Peter Xu wrote:
> On Fri, Aug 04, 2023 at 05:30:53PM +0800, Wei Wang wrote:
> > Current migration_completion function is a bit long. Refactor the long
> > implementation into different subfunctions:
> > - migration_completion_precopy: completion code related to
On Thursday, July 27, 2023 1:10 AM, Peter Xu wrote:
> On Fri, Jul 21, 2023 at 11:14:55AM +0000, Wang, Wei W wrote:
> > On Friday, July 21, 2023 4:38 AM, Peter Xu wrote:
> > > Looks good to me, after addressing Isaku's comments.
> > >
> > > The current_act
On Friday, July 21, 2023 4:38 AM, Peter Xu wrote:
> Looks good to me, after addressing Isaku's comments.
>
> The current_active_state is very unfortunate, along with most of the calls to
> migrate_set_state() - I bet most of the code will definitely go wrong if that
> cmpxchg didn't succeed
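The point above is that migrate_set_state() is a compare-and-swap: the transition only takes effect if the state still equals the expected old value, and callers that ignore a failed cmpxchg can go wrong. A sketch of that pattern with illustrative names (not QEMU's actual state enum or helper):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Illustrative migration states; QEMU's real enum differs. */
enum { MIG_NONE, MIG_ACTIVE, MIG_COMPLETED, MIG_FAILED };

/* Like migrate_set_state(): transition old_state -> new_state only if
 * the state still holds old_state.  Returns false when a concurrent
 * transition (e.g. to a failure state) already won the race, in which
 * case the caller must not assume the new state is in effect. */
static bool set_state(_Atomic int *state, int old_state, int new_state)
{
    return atomic_compare_exchange_strong(state, &old_state, new_state);
}
```

A caller that proceeds as if the transition happened, without checking the return value, is exactly the fragility being flagged.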
On Tuesday, July 18, 2023 1:44 PM, Isaku Yamahata wrote:
> > --- a/migration/migration.c
> > +++ b/migration/migration.c
> > @@ -2058,6 +2058,21 @@ static int
> await_return_path_close_on_source(MigrationState *ms)
> > return ms->rp_state.error;
> > }
> >
> > +static int
On Wednesday, May 31, 2023 8:58 PM, Peter Xu wrote:
> > > Hmm.. so we used to do socket_start_incoming_migration_internal()
> > > before setting the right num for the preempt test, then I'm curious
> > > why it wasn't failing before this patch when trying to connect with the
> preempt channel..
>
On Tuesday, May 30, 2023 10:41 PM, Peter Xu wrote:
> On Tue, May 30, 2023 at 05:02:59PM +0800, Wei Wang wrote:
> > The Postcopy preempt capability needs to be set before incoming
> > starts, so change the postcopy tests to start with deferred incoming
> > and call migrate-incoming after the cap
On Monday, May 29, 2023 10:58 PM, Peter Xu wrote:
> >
> > #1 migrate_params_test_apply(params, &tmp);
> >
> > #2 if (!migrate_params_check(&tmp, errp)) {
> >        /* Invalid parameter */
> >        return;
> >    }
> > #3 migrate_params_apply(params, errp);
> >
> > #2 tries to do params check
On Saturday, May 27, 2023 5:49 AM, Peter Xu wrote:
> On Wed, May 24, 2023 at 04:01:57PM +0800, Wei Wang wrote:
> > qmp_migrate_set_parameters expects to use tmp for parameters check, so
> > migrate_params_test_apply is expected to copy the related fields from
> > params to tmp. So fix
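The check-then-apply flow described above (copy the caller's fields into a scratch copy, validate the scratch copy, and only then touch the live state) can be sketched as follows; the struct and helper names are simplified stand-ins for QEMU's MigrationParameters machinery, not the real code:

```c
#include <stdbool.h>

/* Hypothetical, simplified parameter set. */
typedef struct {
    bool has_multifd_channels;
    int multifd_channels;
} Params;

/* Copy only the fields the caller actually set into the scratch copy,
 * so the check sees the full would-be new state. */
static void params_test_apply(const Params *params, Params *tmp)
{
    if (params->has_multifd_channels) {
        tmp->has_multifd_channels = true;
        tmp->multifd_channels = params->multifd_channels;
    }
}

static bool params_check(const Params *p)
{
    /* Reject nonsensical channel counts. */
    return !p->has_multifd_channels ||
           (p->multifd_channels >= 1 && p->multifd_channels <= 255);
}

/* Validate on the scratch copy first; mutate the live state only when
 * the check passes, so a bad request leaves it untouched. */
static bool set_params(Params *live, const Params *params)
{
    Params tmp = *live;            /* start from the current state */
    params_test_apply(params, &tmp);
    if (!params_check(&tmp)) {
        return false;              /* invalid parameter: no change */
    }
    *live = tmp;
    return true;
}
```

The bug class being fixed in the thread is a test_apply step that forgets to copy a field, so the check silently validates stale state.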
On Tuesday, May 23, 2023 10:50 PM, Peter Xu wrote:
> On Tue, May 23, 2023 at 02:30:25PM +0000, Wang, Wei W wrote:
> > > It's about whether we want to protect e.g. below steps:
> > >
> > > 1. start dest qemu with -incoming defer 2.
> > > "migrate-set
On Tuesday, May 23, 2023 9:41 PM, Peter Xu wrote:
> On Tue, May 23, 2023 at 01:44:03AM +0000, Wang, Wei W wrote:
> > On Tuesday, May 23, 2023 7:36 AM, Peter Xu wrote:
> > > > > We may also want to trap the channel setups on num:
> > > > &
On Tuesday, May 23, 2023 7:36 AM, Peter Xu wrote:
> > > We may also want to trap the channel setups on num:
> > >
> > > migrate_params_test_apply():
> > >
> > > if (params->has_multifd_channels) {
> > > dest->multifd_channels = params->multifd_channels;
> > > }
> >
> > Didn’t get
On Friday, May 19, 2023 11:34 PM, Peter Xu wrote:
> > Ah yes indeed it keeps working, because we apply -global bits before
> > setup sockets. Then it's fine by me since that's the only thing I
> > would still like to keep it working. :)
> >
> > If so, can we reword the error message a bit?
On Friday, May 19, 2023 10:52 AM, Wang, Lei4 wrote:
> > We can change it to uint16_t or uint32_t, but need to see if listening
> > on a larger value is OK to everyone.
>
> Is there any use case to use >256 migration channels? If not, then I suppose
> there's no need to increase it.
People can
On Friday, May 19, 2023 9:31 AM, Wang, Lei4 wrote:
> On 5/18/2023 17:16, Juan Quintela wrote:
> > Lei Wang wrote:
> >> When destination VM is launched, the "backlog" parameter for listen()
> >> is set to 1 as default in socket_start_incoming_migration_internal(),
> >> which will lead to socket
On Friday, May 19, 2023 3:20 AM, Peter Xu wrote:
> On Fri, May 19, 2023 at 12:00:26AM +0800, Wei Wang wrote:
> > qemu_start_incoming_migration needs to check the number of multifd
> > channels or postcopy ram channels to configure the backlog parameter (i.e.
> > the maximum length to which the
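The backlog being discussed is listen()'s second argument: the maximum length of the pending-connection queue. If it stays at the default of 1 while the source opens one connection per multifd channel (plus any postcopy preempt channel) at once, later connection attempts can be refused. A sketch of sizing it from the channel counts, with an illustrative helper name (not QEMU's actual code):

```c
/* Compute a listen() backlog large enough for every connection the
 * migration source may attempt concurrently: the main channel, one
 * per multifd channel, and any postcopy preempt channels.  The result
 * is what would be passed as listen(fd, backlog). */
static int incoming_backlog(int multifd_channels, int preempt_channels)
{
    return 1 + multifd_channels + preempt_channels;
}
```

With deferred incoming, the capabilities (and hence the channel counts) are known before the listening socket is created, which is why the thread below recommends `-incoming defer`.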
On Thursday, May 18, 2023 8:43 PM, Juan Quintela wrote:
>
>
> Are you using -incoming defer?
>
> No? right.
>
> With multifd, you should use -incoming defer.
Yes, just confirmed that it works well with deferred incoming.
I think we should enforce this kind of requirement in the code.
I'll
On Thursday, May 18, 2023 4:52 PM, Wang, Lei4 wrote:
> When destination VM is launched, the "backlog" parameter for listen() is set
> to 1 as default in socket_start_incoming_migration_internal(), which will
> lead to socket connection error (the queue of pending connections is full)
> when
On Thursday, February 16, 2023 10:36 PM, Wang, Wei W wrote:
> > On Thursday, February 16, 2023 9:57 PM, Juan Quintela wrote:
> > > Just to see what we are having now:
> > >
> > > - single qemu binary moved to next slot (moved to next week?)
> > > Philli
On Thursday, February 16, 2023 9:57 PM, Juan Quintela wrote:
> Just to see what we are having now:
>
> - single qemu binary moved to next slot (moved to next week?)
> Philippe's proposal
> - TDX migration: we have the slides, but no code
> So I guess we can move it to the following slot, when
On Monday, January 30, 2023 1:26 PM, Ackerley Tng wrote:
>
> > +static int restrictedmem_getattr(struct user_namespace *mnt_userns,
> > +const struct path *path, struct kstat *stat,
> > +u32 request_mask, unsigned int query_flags)
> {
> > +
On Tuesday, January 3, 2023 9:40 AM, Chao Peng wrote:
> > Because guest memory defaults to private, and now this patch stores
> > the attributes with KVM_MEMORY_ATTRIBUTE_PRIVATE instead of
> _SHARED,
> > it would bring more KVM_EXIT_MEMORY_FAULT exits at the beginning of
> > boot time. Maybe it
On Thursday, September 15, 2022 10:29 PM, Chao Peng wrote:
> +int inaccessible_get_pfn(struct file *file, pgoff_t offset, pfn_t *pfn,
> + int *order)
Better to remove "order" from this interface?
Some callers only need to get pfn, and no need to bother with
defining and
On Wednesday, January 12, 2022 10:51 AM, Zeng, Guang wrote:
> Subject: Re: [RFC PATCH 6/7] x86: Use new XSAVE ioctls handling
>
On Friday, November 26, 2021 10:31 AM, Jason Wang wrote:
>
> I've tested the code with migration before sending the patches, I see the hint
> works fine.
>
That's great (I assume you saw a great reduction in the migration time as well).
Reviewed-by: Wei Wang
Thanks,
Wei
On Friday, November 26, 2021 12:11 AM, David Hildenbrand wrote:
> On 25.11.21 17:09, Michael S. Tsirkin wrote:
> > On Thu, Nov 25, 2021 at 09:28:59AM +0100, David Hildenbrand wrote:
> >> On 25.11.21 03:20, Jason Wang wrote:
> >>> We only process the first in sg which may lead to the bitmap of the
On Friday, September 10, 2021 5:14 PM, Ashish Kalra wrote:
> > It seems this is enabled/disabled by the guest, which means that the guest
> can always refuse to be migrated?
> >
>
> Yes.
>
> Are there any specific concerns/issues with that ?
It's kind of weird if everybody refuses to migrate
On Friday, September 10, 2021 4:48 PM, Ashish Kalra wrote:
> On Fri, Sep 10, 2021 at 07:54:10AM +0000, Wang, Wei W wrote:
> There has been a long discussion on this implementation on KVM mailing list.
> Tracking shared memory via a list of ranges instead of using bitmap is more
> o
On Wednesday, August 4, 2021 8:00 PM, Ashish Kalra wrote:
> +/*
> + * Currently this exit is only used by SEV guests for
> + * MSR_KVM_MIGRATION_CONTROL to indicate if the guest
> + * is ready for migration.
> + */
> +static int kvm_handle_x86_msr(X86CPU *cpu, struct kvm_run *run) {
> +static
> From: Brijesh Singh
>
> When memory encryption is enabled, the hypervisor maintains a shared
> regions list which is referred by hypervisor during migration to check if
> page is
> private or shared. This list is built during the VM bootup and must be
> migrated
> to the target host so that
On Friday, July 23, 2021 4:17 PM, David Hildenbrand wrote:
> > On Friday, July 23, 2021 3:50 PM, David Hildenbrand wrote:
> >>
> >> Migration of an 8 GiB VM
> >> * within the same host
> >> * after Linux is up and idle
> >> * free page hinting enabled
> >> * after dirtying most VM memory using
On Friday, July 23, 2021 3:50 PM, David Hildenbrand wrote:
>
> Migration of an 8 GiB VM
> * within the same host
> * after Linux is up and idle
> * free page hinting enabled
> * after dirtying most VM memory using memhog
Thanks for the tests!
I think it would be better to test using idle guests
On Thursday, July 22, 2021 5:48 PM, David Hildenbrand wrote:
> On 22.07.21 10:30, Wei Wang wrote:
> > When skipping free pages to send, their corresponding dirty bits in
> > the memory region dirty bitmap need to be cleared. Otherwise the
> > skipped pages will be sent in the next round after the
On Friday, July 16, 2021 4:26 PM, David Hildenbrand wrote:
> >>> +/*
> >>> + * CLEAR_BITMAP_SHIFT_MIN should always guarantee this... this
> >>> + * can make things easier sometimes since then start address
> >>> + * of the small chunk will always be 64 pages aligned so the
> >>> +
On Thursday, July 15, 2021 5:29 PM, David Hildenbrand wrote:
> On 15.07.21 09:53, Wei Wang wrote:
> > When skipping free pages to send, their corresponding dirty bits in
> > the memory region dirty bitmap need to be cleared. Otherwise the
> > skipped pages will be sent in the next round after the
On Wednesday, July 14, 2021 6:30 PM, David Hildenbrand wrote:
>
> On 14.07.21 12:27, Michael S. Tsirkin wrote:
> > On Wed, Jul 14, 2021 at 03:51:04AM -0400, Wei Wang wrote:
> >> When skipping free pages, their corresponding dirty bits in the
> >> memory region dirty bitmap need to be cleared.
On Tuesday, July 13, 2021 11:59 PM, Peter Xu wrote:
> On Tue, Jul 13, 2021 at 08:40:21AM +0000, Wang, Wei W wrote:
>
> Didn't get a chance to document it as it's in a pull now; but as long as
> you're okay
> with no-per-page lock (which I still don't agree with), I can
On Tuesday, July 13, 2021 6:22 PM, David Hildenbrand wrote:
> Can you send an official patch for the free page hinting clean_bmap handling I
> reported?
>
> I can then give both tests in combination a quick test (before/after this
> patch
> here).
>
Yes, I'll send, thanks!
Best,
Wei
On Thursday, July 1, 2021 4:08 AM, Peter Xu wrote:
> Taking the mutex every time for each dirty bit to clear is too slow,
> especially we'll
> take/release even if the dirty bit is cleared. So far it's only used to sync
> with
> special cases with qemu_guest_free_page_hint() against migration
On Friday, July 9, 2021 10:48 PM, Peter Xu wrote:
> On Fri, Jul 09, 2021 at 08:58:08AM +0000, Wang, Wei W wrote:
> > On Friday, July 9, 2021 2:31 AM, Peter Xu wrote:
> > > > > Yes I think this is the place I didn't make myself clear. It's
> > > > > not about
On Friday, July 9, 2021 2:31 AM, Peter Xu wrote:
> > > Yes I think this is the place I didn't make myself clear. It's not
> > > about sleeping, it's about the cmpxchg being expensive already when the vm
> is huge.
> >
> > OK.
> > How did you root cause that it's caused by cmpxchg, instead of lock
On Thursday, July 8, 2021 12:55 AM, Peter Xu wrote:
> On Wed, Jul 07, 2021 at 08:34:50AM +0000, Wang, Wei W wrote:
> > On Wednesday, July 7, 2021 1:47 AM, Peter Xu wrote:
> > > > On Sat, Jul 03, 2021 at 02:53:27AM +0000, Wang, Wei W wrote:
> > > > + do {
>
On Thursday, July 8, 2021 12:44 AM, Peter Xu wrote:
> > > Not to mention the hard migration issues are mostly with non-idle
> > > guest, in that case having the balloon in the guest will be
> > > disastrous from this pov since it'll start to take mutex for each
> > > page, while balloon would
On Thursday, July 8, 2021 12:45 AM, Peter Xu wrote:
> On Wed, Jul 07, 2021 at 12:45:32PM +0000, Wang, Wei W wrote:
> > Btw, what would you think if we change mutex to QemuSpin? It will also
> > reduce
> the overhead, I think.
>
> As I replied at the other place, the b
On Wednesday, July 7, 2021 1:40 AM, Peter Xu wrote:
> On Tue, Jul 06, 2021 at 12:05:49PM +0200, David Hildenbrand wrote:
> > On 06.07.21 11:41, Wang, Wei W wrote:
> > > On Monday, July 5, 2021 9:42 PM, David Hildenbrand wrote:
> > > > On 03.07.21 04:53, Wang, Wei W wr
On Wednesday, July 7, 2021 2:00 AM, Peter Xu wrote:
> On Fri, Jul 02, 2021 at 02:29:41AM +0000, Wang, Wei W wrote:
> > With that, if free page opt is off, the mutex is skipped, isn't it?
>
> Yes, but when free page is on, it'll check once per page. As I mentioned I
> still
On Wednesday, July 7, 2021 1:47 AM, Peter Xu wrote:
> On Sat, Jul 03, 2021 at 02:53:27AM +0000, Wang, Wei W wrote:
> > + do {
> > +page_to_clear = start + (i++ << block->clear_bmap_shift);
>
> Why "i" needs to be shifted?
Just move to t
On Monday, July 5, 2021 9:42 PM, David Hildenbrand wrote:
> On 03.07.21 04:53, Wang, Wei W wrote:
> > On Friday, July 2, 2021 3:07 PM, David Hildenbrand wrote:
> >> On 02.07.21 04:48, Wang, Wei W wrote:
> >>> On Thursday, July 1, 2021 10:22 PM, David Hildenbrand w
On Friday, July 2, 2021 3:07 PM, David Hildenbrand wrote:
> On 02.07.21 04:48, Wang, Wei W wrote:
> > On Thursday, July 1, 2021 10:22 PM, David Hildenbrand wrote:
> >> On 01.07.21 14:51, Peter Xu wrote:
>
> I think that clearly shows the issue.
>
> My theory I did
On Thursday, July 1, 2021 10:22 PM, David Hildenbrand wrote:
> On 01.07.21 14:51, Peter Xu wrote:
> Spoiler alert: the introduction of clean bitmaps partially broke free page
> hinting
> already (as clearing happens deferred -- and might never happen if we don't
> migrate *any* page within a
On Thursday, July 1, 2021 8:51 PM, Peter Xu wrote:
> On Thu, Jul 01, 2021 at 04:42:38AM +0000, Wang, Wei W wrote:
> > On Thursday, July 1, 2021 4:08 AM, Peter Xu wrote:
> > > Taking the mutex every time for each dirty bit to clear is too slow,
> > > especially we'll tak
On Thursday, July 1, 2021 4:08 AM, Peter Xu wrote:
> Taking the mutex every time for each dirty bit to clear is too slow,
> especially we'll
> take/release even if the dirty bit is cleared. So far it's only used to sync
> with
> special cases with qemu_guest_free_page_hint() against migration
On Tuesday, October 20, 2020 4:01 PM, Kevin Wolf wrote:
> On 20.10.2020 at 03:31, Wang, Wei W wrote:
> > Hi,
> >
> > Does anyone know the reason why raw-format.c doesn't have compression
> > support (but qcow has the support added)? For example, raw ima
Hi,
Does anyone know the reason why raw-format.c doesn't have compression support
(but qcow has the support added)?
For example, raw image backup with compression, "qemu-img convert -c -O raw
origin.img dist.img", doesn't work.
Thanks,
Wei
On Friday, December 14, 2018 7:17 PM, Dr. David Alan Gilbert wrote:
> > On 12/14/2018 05:56 PM, Dr. David Alan Gilbert wrote:
> > > * Wei Wang (wei.w.w...@intel.com) wrote:
> > > > On 12/13/2018 11:45 PM, Dr. David Alan Gilbert wrote:
> > > > > * Wei Wang (wei.w.w...@intel.com) wrote:
> > > > > >
On Saturday, November 17, 2018 12:48 AM, Paolo Bonzini wrote:
> Subject: [Qemu-devel] [PATCH] migration: savevm: consult migration
> blockers
>
> There is really no difference between live migration and savevm, except that
> savevm does not require
On Tuesday, March 13, 2018 3:58 PM, Xiao Guangrong wrote:
>
> As compression is a heavy work, do not do it in migration thread, instead, we
> post it out as a normal page
>
> Signed-off-by: Xiao Guangrong
Hi Guangrong,
Dave asked me to help review your patch, so I
On Monday, March 26, 2018 11:04 PM, Daniel P. Berrangé wrote:
> On Mon, Mar 26, 2018 at 02:54:45PM +0000, Wang, Wei W wrote:
> > On Monday, March 26, 2018 7:09 PM, Daniel P. Berrangé wrote:
> > >
> > > As far as libvirt is concerned there are three sets of threads it
On Monday, March 26, 2018 7:09 PM, Daniel P. Berrangé wrote:
>
> As far as libvirt is concerned there are three sets of threads it provides
> control over
>
> - vCPUs - each VCPU in KVM has a thread. Libvirt provides per-thread
>tunable control
>
> - IOThreads - each named I/O thread can
On Monday, February 26, 2018 1:07 PM, Wei Wang wrote:
> On 02/09/2018 07:50 PM, Dr. David Alan Gilbert wrote:
> > * Wei Wang (wei.w.w...@intel.com) wrote:
> >> Use the free page reporting feature from the balloon device to clear
> >> the bits corresponding to guest free pages from the dirty
On Tuesday, February 6, 2018 5:32 PM, Stefan Hajnoczi wrote:
> On Tue, Feb 06, 2018 at 01:28:25AM +0000, Wang, Wei W wrote:
> > On Tuesday, February 6, 2018 12:26 AM, Stefan Hajnoczi wrote:
> > > On Fri, Feb 02, 2018 at 09:08:44PM +0800, Wei Wang wrote:
> > > > On
On Tuesday, February 6, 2018 12:26 AM, Stefan Hajnoczi wrote:
> On Fri, Feb 02, 2018 at 09:08:44PM +0800, Wei Wang wrote:
> > On 02/02/2018 01:08 AM, Michael S. Tsirkin wrote:
> > > On Tue, Jan 30, 2018 at 08:09:19PM +0800, Wei Wang wrote:
> > > > Issues:
> > > > Suppose we have both the vhost and
On Friday, February 2, 2018 11:26 PM, Stefan Hajnoczi wrote:
> On Tue, Jan 30, 2018 at 08:09:19PM +0800, Wei Wang wrote:
> > Background:
> > The vhost-user negotiation is split into 2 phases currently. The 1st
> > phase happens when the connection is established, and we can find
> > what's done in
On Wednesday, January 17, 2018 7:41 PM, Juan Quintela wrote:
> Wei Wang wrote:
> > +void skip_free_pages_from_dirty_bitmap(RAMBlock *block, ram_addr_t
> offset,
> > + size_t len) {
> > +long start = offset >> TARGET_PAGE_BITS,
> > +
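The quoted patch clears the dirty-bitmap bits that cover a range of guest memory, so pages skipped as free are not sent in a later round. A self-contained sketch of that operation, with PAGE_BITS standing in for TARGET_PAGE_BITS and a plain unsigned-long-array bitmap (one bit per page), as an assumption about the layout rather than QEMU's exact code:

```c
#include <limits.h>

#define PAGE_BITS 12  /* 4 KiB pages; stands in for TARGET_PAGE_BITS */
#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

/* Clear the dirty bits covering [offset, offset + len), where offset
 * and len are byte quantities aligned to the page size.  Without this,
 * the skipped free pages would still look dirty and be migrated in
 * the next round. */
static void clear_dirty_range(unsigned long *bmap,
                              unsigned long offset, unsigned long len)
{
    unsigned long start = offset >> PAGE_BITS;
    unsigned long npages = len >> PAGE_BITS;
    unsigned long i;

    for (i = start; i < start + npages; i++) {
        bmap[i / BITS_PER_LONG] &= ~(1UL << (i % BITS_PER_LONG));
    }
}
```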
On Friday, January 12, 2018 6:38 PM, Stefan Hajnoczi wrote:
> On Fri, Jan 12, 2018 at 02:44:00PM +0800, Wei Wang wrote:
> > On 01/11/2018 05:56 PM, Stefan Hajnoczi wrote:
> > > On Thu, Jan 11, 2018 at 6:31 AM, Wei Wang
> wrote:
> > > > On 01/11/2018 12:14 AM, Stefan Hajnoczi
On Wednesday, December 20, 2017 8:26 PM, Matthew Wilcox wrote:
> On Wed, Dec 20, 2017 at 06:34:36PM +0800, Wei Wang wrote:
> > On 12/19/2017 10:05 PM, Tetsuo Handa wrote:
> > > I think xb_find_set() has a bug in !node path.
> >
> > I think we can probably remove the "!node" path for now. It would
On Saturday, December 16, 2017 3:22 AM, Matthew Wilcox wrote:
> On Fri, Dec 15, 2017 at 10:49:15AM -0800, Matthew Wilcox wrote:
> > Here's the API I'm looking at right now. The user need take no lock;
> > the locking (spinlock) is handled internally to the implementation.
Another place I saw
On Monday, December 11, 2017 7:12 PM, Stefan Hajnoczi wrote:
> On Sat, Dec 09, 2017 at 04:23:17PM +0000, Wang, Wei W wrote:
> > On Friday, December 8, 2017 4:34 PM, Stefan Hajnoczi wrote:
> > > On Fri, Dec 8, 2017 at 6:43 AM, Wei Wang <wei.w.w...@intel.com>
> wrote:
>
On Friday, December 8, 2017 4:34 PM, Stefan Hajnoczi wrote:
> On Fri, Dec 8, 2017 at 6:43 AM, Wei Wang wrote:
> > On 12/08/2017 07:54 AM, Michael S. Tsirkin wrote:
> >>
> >> On Thu, Dec 07, 2017 at 06:28:19PM +, Stefan Hajnoczi wrote:
> >>>
> >>> On Thu, Dec 7, 2017 at
On Friday, December 8, 2017 10:28 PM, Michael S. Tsirkin wrote:
> On Fri, Dec 08, 2017 at 06:08:05AM +, Stefan Hajnoczi wrote:
> > On Thu, Dec 7, 2017 at 11:54 PM, Michael S. Tsirkin
> wrote:
> > > On Thu, Dec 07, 2017 at 06:28:19PM +, Stefan Hajnoczi wrote:
> > >> On
On Wednesday, December 6, 2017 9:50 PM, Stefan Hajnoczi wrote:
> On Tue, Dec 05, 2017 at 11:33:09AM +0800, Wei Wang wrote:
> > Vhost-pci is a point-to-point based inter-VM communication solution.
> > This patch series implements the vhost-pci-net device setup and
> > emulation. The device is
On Friday, December 1, 2017 9:02 PM, Tetsuo Handa wrote:
> Wei Wang wrote:
> > On 11/30/2017 06:34 PM, Tetsuo Handa wrote:
> > > Wei Wang wrote:
> > >> + * @start: the start of the bit range, inclusive
> > >> + * @end: the end of the bit range, inclusive
> > >> + *
> > >> + * This function is used
On Thursday, November 30, 2017 6:36 PM, Tetsuo Handa wrote:
> Wei Wang wrote:
> > +static inline int xb_set_page(struct virtio_balloon *vb,
> > + struct page *page,
> > + unsigned long *pfn_min,
> > + unsigned long
1 - 100 of 143 matches