Re: [PATCH v1 1/5] KVM: arm64: Enable ring-based dirty memory tracking

2022-09-01 Thread Paolo Bonzini

On 8/30/22 16:42, Peter Xu wrote:

Marc,

I thought we won't hit this as long as we properly take care of the other
orderings of (a) gfn push and (b) gfn collect, but on second thought
I think it's indeed logically possible that with a reversed ordering here
we can be reading some garbage gfn before (a) happens but also read the
valid flag after (b).

It seems we must have all the barriers correctly applied always.  If that's
correct, do you perhaps mean something like this to just add the last piece
of barrier?


Okay, so I thought about it some more and it's quite tricky.

Strictly speaking, the synchronization is just between userspace and the
kernel. The fact that the actual producer of dirty pages is on another
CPU is a red herring, because reset only cares about harvested pages.


In other words, the dirty page ring is essentially two ring buffers in 
one and we only care about the "harvested ring", not the "produced ring".


On the other hand, it may happen that userspace has set more RESET flags 
while the ioctl is ongoing:



CPU0                      CPU1                   CPU2
                                                 fill gfn0
                                                 store-rel flags for gfn0
                                                 fill gfn1
                                                 store-rel flags for gfn1
load-acq flags for gfn0
set RESET for gfn0
load-acq flags for gfn1
set RESET for gfn1
do ioctl! --->
                          ioctl(RESET_RINGS)
                                                 fill gfn2
                                                 store-rel flags for gfn2
load-acq flags for gfn2
set RESET for gfn2
                          process gfn0
                          process gfn1
                          process gfn2
do ioctl!
etc.

The three load-acquire in CPU0 synchronize with the three store-release 
in CPU2, but CPU0 and CPU1 are only synchronized up to gfn1 and CPU1 may 
miss gfn2's fields other than flags.


The kernel must be able to cope with invalid values of the fields, and 
userspace will invoke the ioctl once more.  However, once the RESET flag 
is cleared on gfn2, it is lost forever, therefore in the above scenario 
CPU1 must read the correct value of gfn2's fields.


Therefore RESET must be set with a store-release, that will synchronize 
with a load-acquire in CPU1 as you suggested.


Paolo


diff --git a/virt/kvm/dirty_ring.c b/virt/kvm/dirty_ring.c
index f4c2a6eb1666..ea620bfb012d 100644
--- a/virt/kvm/dirty_ring.c
+++ b/virt/kvm/dirty_ring.c
@@ -84,7 +84,7 @@ static inline void kvm_dirty_gfn_set_dirtied(struct kvm_dirty_gfn *gfn)
 
 static inline bool kvm_dirty_gfn_harvested(struct kvm_dirty_gfn *gfn)
 {
-	return gfn->flags & KVM_DIRTY_GFN_F_RESET;
+	return smp_load_acquire(&gfn->flags) & KVM_DIRTY_GFN_F_RESET;
 }
 
 int kvm_dirty_ring_reset(struct kvm *kvm, struct kvm_dirty_ring *ring)
===8<===

Thanks,

--
Peter Xu



___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


Re: [PATCH v3 3/7] mm: Add PG_arch_3 page flag

2022-09-01 Thread Catalin Marinas
On Thu, Aug 11, 2022 at 03:16:08PM +0800, kernel test robot wrote:
> Thank you for the patch! Perhaps something to improve:
> 
> [auto build test WARNING on arm64/for-next/core]
> [also build test WARNING on linus/master next-20220811]
> [cannot apply to kvmarm/next arm/for-next soc/for-next xilinx-xlnx/master 
> v5.19]
> [If your patch is applied to the wrong git tree, kindly drop us a note.
> And when submitting patch, we suggest to use '--base' as documented in
> https://git-scm.com/docs/git-format-patch#_base_tree_information]
> 
> url:
> https://github.com/intel-lab-lkp/linux/commits/Peter-Collingbourne/KVM-arm64-permit-MAP_SHARED-mappings-with-MTE-enabled/20220811-033310
> base:   https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git 
> for-next/core
> config: loongarch-defconfig 
> (https://download.01.org/0day-ci/archive/20220811/202208111500.62e0bl2l-...@intel.com/config)
> compiler: loongarch64-linux-gcc (GCC) 12.1.0
> reproduce (this is a W=1 build):
> wget 
> https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O 
> ~/bin/make.cross
> chmod +x ~/bin/make.cross
> # 
> https://github.com/intel-lab-lkp/linux/commit/1a400517d8428df0ec9f86f8d303b2227ee9702f
> git remote add linux-review https://github.com/intel-lab-lkp/linux
> git fetch --no-tags linux-review 
> Peter-Collingbourne/KVM-arm64-permit-MAP_SHARED-mappings-with-MTE-enabled/20220811-033310
> git checkout 1a400517d8428df0ec9f86f8d303b2227ee9702f
> # save the config file
> mkdir build_dir && cp config build_dir/.config
> COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 
> O=build_dir ARCH=loongarch SHELL=/bin/bash
> 
> If you fix the issue, kindly add following tag where applicable
> Reported-by: kernel test robot 
> 
> All warnings (new ones prefixed by >>):
> 
> >> mm/memory.c:92:2: warning: #warning Unfortunate NUMA and NUMA Balancing 
> >> config, growing page-frame for last_cpupid. [-Wcpp]
>   92 | #warning Unfortunate NUMA and NUMA Balancing config, growing 
> page-frame for last_cpupid.
>  |  ^~~
> 
> 
> vim +92 mm/memory.c
> 
> 42b7772812d15b Jan Beulich2008-07-23  90  
> af27d9403f5b80 Arnd Bergmann  2018-02-16  91  #if 
> defined(LAST_CPUPID_NOT_IN_PAGE_FLAGS) && !defined(CONFIG_COMPILE_TEST)
> 90572890d20252 Peter Zijlstra 2013-10-07 @92  #warning Unfortunate NUMA and 
> NUMA Balancing config, growing page-frame for last_cpupid.
> 75980e97daccfc Peter Zijlstra 2013-02-22  93  #endif
> 75980e97daccfc Peter Zijlstra 2013-02-22  94  

It looks like with CONFIG_NUMA_BALANCING=y on loongarch we run out of
spare bits in page->flags to fit last_cpupid. The reason we don't see it
on arm64 is that we select SPARSEMEM_VMEMMAP and SECTIONS_WIDTH becomes
0. On loongarch, SECTIONS_WIDTH takes 19 bits (48 - 29) in page->flags.
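
The kernel's check in page-flags-layout.h is roughly "section + zone +
node + last_cpupid widths must fit in BITS_PER_LONG minus NR_PAGEFLAGS",
otherwise LAST_CPUPID_NOT_IN_PAGE_FLAGS is defined and the warning fires.
A quick arithmetic sketch (all widths other than SECTIONS are assumed
values for illustration, not read from a real loongarch config):

```python
# Rough page->flags bit budget; only SECTIONS_WIDTH comes from the
# report above, the rest are assumed example values.
BITS_PER_LONG = 64
NR_PAGEFLAGS = 22            # assumed: core flags incl. PG_arch_{2,3}
SECTIONS_WIDTH = 48 - 29     # 19 on loongarch without SPARSEMEM_VMEMMAP
ZONES_WIDTH = 2              # assumed
NODES_SHIFT = 6              # assumed CONFIG_NODES_SHIFT
LAST_CPUPID_SHIFT = 8 + 8    # assumed LAST__PID_SHIFT + NR_CPUS bits

room = BITS_PER_LONG - NR_PAGEFLAGS - (SECTIONS_WIDTH + ZONES_WIDTH + NODES_SHIFT)
print("room left:", room, "needed:", LAST_CPUPID_SHIFT)
print("last_cpupid fits in page->flags:", LAST_CPUPID_SHIFT <= room)
```

With these numbers last_cpupid misses by one bit, which is why shrinking
NR_PAGEFLAGS (by making PG_arch_{2,3} opt-in) is enough to help.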

I think instead of always defining PG_arch_{2,3} if CONFIG_64BIT, we
could add a CONFIG_ARCH_WANTS_PG_ARCH_23 option and only select it on
arm64 for the time being.
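
A minimal sketch of that opt-in (symbol name taken from the suggestion
above; exact placement and naming would be decided in the actual patch):

```kconfig
# mm/Kconfig
config ARCH_WANTS_PG_ARCH_23
	bool

# arch/arm64/Kconfig
config ARM64
	select ARCH_WANTS_PG_ARCH_23

# include/linux/page-flags.h would then guard the extra flags:
#	#ifdef CONFIG_ARCH_WANTS_PG_ARCH_23
#		PG_arch_2,
#		PG_arch_3,
#	#endif
```

Architectures that don't select it keep the two page->flags bits free for
last_cpupid and friends.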

-- 
Catalin


Re: [PATCH v3 1/7] arm64: mte: Fix/clarify the PG_mte_tagged semantics

2022-09-01 Thread Catalin Marinas
On Wed, Aug 10, 2022 at 12:30:27PM -0700, Peter Collingbourne wrote:
> From: Catalin Marinas 
> 
> Currently the PG_mte_tagged page flag mostly means the page contains
> valid tags and it should be set after the tags have been cleared or
> restored. However, in mte_sync_tags() it is set before setting the tags
> to avoid, in theory, a race with concurrent mprotect(PROT_MTE) for
> shared pages. However, a concurrent mprotect(PROT_MTE) with a copy on
> write in another thread can cause the new page to have stale tags.
> Similarly, tag reading via ptrace() can read stale tags if the
> PG_mte_tagged flag is set before actually clearing/restoring the tags.
> 
> Fix the PG_mte_tagged semantics so that it is only set after the tags
> have been cleared or restored. This is safe for swap restoring into a
> MAP_SHARED or CoW page since the core code takes the page lock. Add two
> functions to test and set the PG_mte_tagged flag with acquire and
> release semantics. The downside is that concurrent mprotect(PROT_MTE) on
> a MAP_SHARED page may cause tag loss. This is already the case for KVM
> guests if a VMM changes the page protection while the guest triggers a
> user_mem_abort().
> 
> Signed-off-by: Catalin Marinas 
> Cc: Will Deacon 
> Cc: Marc Zyngier 
> Cc: Steven Price 
> Cc: Peter Collingbourne 
> ---
> v3:
> - fix build with CONFIG_ARM64_MTE disabled

When you post someone else's patches (thanks for updating them BTW),
please add your Signed-off-by line. You should also add a note in the
SoB block about the changes you made, so something like:

[p...@google.com: fix build with CONFIG_ARM64_MTE disabled]
Signed-off-by: your name/address

-- 
Catalin