================
@@ -443,8 +443,8 @@ static const MemoryMapParams Linux_I386_MemoryMapParams = {
 static const MemoryMapParams Linux_X86_64_MemoryMapParams = {
     0,              // AndMask (not used)
     0x500000000000, // XorMask
-    0,              // ShadowBase (not used)
-    0x100000000000, // OriginBase
+    0x200000,       // ShadowBase (== kShadowOffset)
----------------
Camsyn wrote:

I don’t think this can be resolved simply by using a closed 
end (i.e. `mem2shadow(end - 1)`).

If we directly embed kShadowOffset into the XOR mask, as follows:
```cpp
#  define MEM_TO_SHADOW(mem) (((uptr)(mem)) ^ (0x500000000000ULL + kShadowOffset))
```

Currently, kShadowOffset is 2 MiB, so this change would swap the lower 2 MiB 
and the upper 2 MiB within each 4 MiB shadow region.

This may already be sufficient to mitigate the linear-mapping issue (the 
mapping is linear within each 2 MiB region), but the overall mapping is 
still NOT linear.

This kind of non-linearity would require special handling for every contiguous 
copy, which is quite tricky.
For example, with `memcpy(dst, src, n)`, we would no longer be able to apply 
the same operation directly to the shadow as `memcpy(mem2shadow(dst), 
mem2shadow(src), n)`, because the shadow of `src` may not be the start of the 
shadow range for the entire `[src, src + n)` region.

https://github.com/llvm/llvm-project/pull/171993
_______________________________________________
cfe-commits mailing list
[email protected]
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
