Hi Mingwei,
On Tue, Sep 27, 2022 at 12:27:15AM +, Mingwei Zhang wrote:
> Clean up __get_fault_info() by taking out the code that checks HPFAR. The
> conditions in __get_fault_info() that check whether HPFAR contains a valid IPA
> are slightly messy in that several conditions are written within one IF
On 9/27/22 12:51 AM, Marc Zyngier wrote:
[Same distribution list as Gavin's dirty-ring on arm64 series]
This is an update on the initial series posted as [0].
As Gavin started posting patches enabling the dirty-ring infrastructure
on arm64 [1], it quickly became apparent that the API was never
There are two states which need to be cleared before the next mode
is executed. Otherwise, we hit failures as the following messages
indicate.
- The variable 'dirty_ring_vcpu_ring_full' is shared by the main and
  vcpu threads. It indicates whether the vcpu exited due to a full ring
  buffer. The value can be
In the dirty ring case, we rely on the vcpu exiting due to a full dirty
ring state. On an ARM64 system, there are 4096 host pages when the host
page size is 64KB. In this case, the vcpu never exits due to the
full dirty ring state. A similar case is a 4KB page size on the host
and a 64KB page size in the guest. The vcpu
In vcpu_map_dirty_ring(), the guest's page size is used to figure out
the offset in the virtual area. It works fine when the host and guest
have the same page size. However, it fails when the page sizes on the
host and guest differ on arm64, as the error messages below indicate.
# ./dirty_log_t
Enable ring-based dirty memory tracking on arm64 by selecting
CONFIG_HAVE_KVM_DIRTY_RING_ACQ_REL and providing the ring buffer's
physical page offset (KVM_DIRTY_LOG_PAGE_OFFSET).
Signed-off-by: Gavin Shan
---
Documentation/virt/kvm/api.rst | 2 +-
arch/arm64/include/uapi/asm/kvm.h | 1 +
arch
Not all architectures (ARM64, for example) need to override the function.
Move its declaration to kvm_dirty_ring.h to avoid the following compile
warning on ARM64 when the feature is enabled.
arch/arm64/kvm/../../../virt/kvm/dirty_ring.c:14:12:\
warning: no previous prototype for 'kvm_cpu_dirt
This adds KVM_REQ_RING_SOFT_FULL, which is raised when the dirty
ring of the specific VCPU becomes softly full in kvm_dirty_ring_push().
The VCPU is forced to exit when the request is raised and its
dirty ring is softly full on its entrance.
The event is checked and handled in the newly introduc
This series enables the ring-based dirty memory tracking for ARM64.
The feature has been available and enabled on x86 for a while. It
is beneficial when the number of dirty pages is small in a checkpointing
system or live migration scenario. More details can be found in commit
fb04a1eddb1a ("KVM: X86: I
Clean up __get_fault_info() by taking out the code that checks HPFAR. The
conditions in __get_fault_info() that check whether HPFAR contains a valid IPA
are slightly messy in that several conditions are written within one IF
statement across multiple lines and are connected with different logical
operat
Presently stage2_apply_range() works on a batch of memory addressed by a
stage 2 root table entry for the VM. Depending on the IPA limit of the
VM and PAGE_SIZE of the host, this could address a massive range of
memory. Some examples:
4 level, 4K paging -> 512 GB batch size
3 level, 64K pagin
On Thu, Sep 22, 2022 at 07:32:42PM +, Sean Christopherson wrote:
> On Thu, Sep 22, 2022, Ricardo Koller wrote:
> > +/* Returns true to continue the test, and false if it should be skipped. */
> > +static bool punch_hole_in_memslot(struct kvm_vm *vm,
>
> This is a very misleading name, and IMO
Hi,
On Tue, Sep 20, 2022 at 04:59:52PM +0200, Andrew Jones wrote:
> On Tue, Sep 20, 2022 at 02:20:48PM +0100, Alexandru Elisei wrote:
> > Hi,
> >
> > On Tue, Sep 20, 2022 at 10:45:53AM +0200, Andrew Jones wrote:
> > > On Tue, Aug 09, 2022 at 10:15:44AM +0100, Alexandru Elisei wrote:
> > > > With
In order to differentiate between architectures that require no extra
synchronisation when accessing the dirty ring and those that do,
add a new capability (KVM_CAP_DIRTY_LOG_RING_ACQ_REL) that identifies
the latter sort. TSO architectures can obviously advertise both, while
relaxed architectures must
Pick KVM_CAP_DIRTY_LOG_RING_ACQ_REL if exposed by the kernel.
Signed-off-by: Marc Zyngier
---
tools/testing/selftests/kvm/dirty_log_test.c | 3 ++-
tools/testing/selftests/kvm/lib/kvm_util.c | 5 -
2 files changed, 6 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests/kvm/d
Since x86 is TSO (give or take), allow it to advertise the new
ACQ_REL version of the dirty ring capability. No other change is
required for it.
Signed-off-by: Marc Zyngier
---
arch/x86/kvm/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
in
Now that the kernel can expose to userspace that its dirty ring
management relies on explicit ordering, document these new requirements
for VMMs to do the right thing.
Signed-off-by: Marc Zyngier
---
Documentation/virt/kvm/api.rst | 17 +++--
1 file changed, 15 insertions(+), 2 delet
The current implementation of the dirty ring has an implicit requirement
that stores to the dirty ring from userspace must be:
- ordered with one another
- visible from another CPU executing a ring reset
While these implicit requirements work well for x86 (and any other
TSO-like architecture)
[Same distribution list as Gavin's dirty-ring on arm64 series]
This is an update on the initial series posted as [0].
As Gavin started posting patches enabling the dirty-ring infrastructure
on arm64 [1], it quickly became apparent that the API was never intended
to work on relaxed memory ordering
In order to preserve ordering, make sure that the flag accesses
in the dirty log are done using acquire/release accessors.
Signed-off-by: Marc Zyngier
---
tools/testing/selftests/kvm/dirty_log_test.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests
Hi,
On Tue, Sep 20, 2022 at 11:39:56AM +0200, Andrew Jones wrote:
>
> I guess this should be squashed into one of the early patches in this
> series since we don't have this issue with the current code.
Will do, thanks for the suggestion!
Alex
>
> Thanks,
> drew
>
>
> On Tue, Aug 09, 2022 a
Hi,
On Tue, Sep 20, 2022 at 10:58:15AM +0200, Andrew Jones wrote:
> On Tue, Aug 09, 2022 at 10:15:46AM +0100, Alexandru Elisei wrote:
> > phys_end was used to cap the linearly mapped memory to 3G to allow 1G of
> > room for the vmalloc area to grow down. This was made useless in commit
> > c1cd1a
On Tue, 20 Sep 2022 12:06:58 -0700, Elliot Berman wrote:
> Ignore kvm-arm.mode if !is_hyp_mode_available(). Specifically, we want
> to avoid switching kvm_mode to KVM_MODE_PROTECTED if hypervisor mode is
> not available. This prevents "Protected KVM" cpu capability being
> reported when Linux is bo
On Fri, 23 Sep 2022 14:54:47 +0800, Gavin Shan wrote:
> The ITS collection is guaranteed to be !NULL when update_affinity_collection()
> is called, so we needn't check the ITE's collection against NULL because
> that check is subsumed by the later one.
>
> Remove the duplicate check in update_affinit
On Mon, 26 Sep 2022 00:21:08 +0100,
Gavin Shan wrote:
>
> Hi Marc,
>
> On 9/24/22 9:56 PM, Marc Zyngier wrote:
> > Side note: please make sure you always Cc all the KVM/arm64 reviewers
> > when sending patches (now added).
> >
>
> Sure. The reason, why I didn't run './scripts/get_maintainer.pl