Hi Carlo,

On 29/01/2024 18:17, Carlo Nonato wrote:
> 
> Shared caches in multi-core CPU architectures are a problem for the
> predictability of memory access latency. This jeopardizes the
> applicability of many Arm platforms in real-time critical and
> mixed-criticality scenarios. We introduce support for cache partitioning
> with page coloring, a transparent software technique that enables
> isolation between domains (and between domains and Xen itself), and thus
> avoids cache interference.
> 
> When creating a domain, a simple syntax (e.g. `0-3` or `4-11`) allows
> the user to assign cache partition IDs, called colors, where assigning
> different colors guarantees that no mutual cache eviction can ever
> happen. This instructs the Xen memory allocator to provide the i-th
> color assignee only with pages that map to color i, i.e. that are
> indexed in the i-th cache partition.
> 
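As an aside, for anyone new to the technique: the color of a page is
determined by the physical-address bits that select LLC sets above the
page offset. A minimal standalone sketch of the usual math (hypothetical
names, not this series' actual code; assumes a power-of-two number of
colors):

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12                 /* 4 KiB pages */

    /*
     * The number of colors is the LLC way size divided by the page size,
     * e.g. a 16-way 1 MiB LLC has 64 KiB ways and hence 16 colors.
     */
    static unsigned int nr_colors(uint64_t llc_way_size)
    {
        return llc_way_size >> PAGE_SHIFT;
    }

    /*
     * Pages whose frame numbers map to different colors index different
     * cache partitions and therefore never evict each other.
     */
    static unsigned int page_color(uint64_t paddr, unsigned int colors)
    {
        return (paddr >> PAGE_SHIFT) & (colors - 1); /* power-of-2 colors */
    }

    int main(void)
    {
        unsigned int colors = nr_colors(64 << 10);   /* 64 KiB LLC way */
        printf("colors: %u, color of 0x40005000: %u\n",
               colors, page_color(0x40005000ULL, colors));
        return 0;
    }
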
> The proposed implementation supports the dom0less feature, but not the
> static-mem feature. The solution has been tested in several scenarios,
> including on Xilinx Zynq MPSoCs.
> 
> Open points:
> - Michal found a problem here
> https://patchew.org/Xen/20230123154735.74832-1-carlo.non...@minervasys.tech/20230123154735.74832-4-carlo.non...@minervasys.tech/#a7a06a26-ae79-402c-96a4-a1ebfe8b5...@amd.com
>   but I haven't fully understood it. In the meantime I want to move
>   forward with v6, so I hope we can continue the discussion here.
The problem is that when LLC coloring is enabled, you use allocate_memory()
for hwdom, just like for any other domain, so it will get assigned an
address range from the typical Xen guest memory map (i.e.
GUEST_RAM{0,1}_{BASE,SIZE}). This can result in memory conflicts, given
that the HW resources are mapped 1:1 to it (MMIO, reserved memory regions).
Instead, for hwdom we should use the host memory layout to prevent these
conflicts. A good example is find_unallocated_memory().
You need to (see the sketch below):
 - fetch the available RAM,
 - remove the reserved-memory regions,
 - report the resulting ranges (aligning the base and skipping banks that
   are not reasonably big).
This will give you a list of memory regions that you can then pass to
allocate_bank_memory(). The problem, as always, is to determine the size
of the first region so that it is sufficiently large to keep
kernel+dtb+initrd in relatively close proximity.
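
To make the three steps above concrete, here is a self-contained sketch of
the RAM-minus-reserved computation (hypothetical types and hardcoded
example banks, not actual Xen code; the real find_unallocated_memory()
builds the same list with rangesets):

    #include <stdint.h>
    #include <stdio.h>

    #define MB(x)       ((uint64_t)(x) << 20)
    #define ALIGN_2M(a) (((a) + MB(2) - 1) & ~(MB(2) - 1))
    #define MIN_BANK    MB(32)  /* skip banks that are not reasonably big */
    #define MAX_REGIONS 16

    struct region { uint64_t start, end; };          /* [start, end) */

    /*
     * Subtract reserved range 'res' from every region, splitting a region
     * in two when the hole falls in the middle (no overflow checks here,
     * sketch only).
     */
    static unsigned int remove_range(struct region *r, unsigned int n,
                                     struct region res)
    {
        struct region out[MAX_REGIONS];
        unsigned int i, m = 0;

        for ( i = 0; i < n; i++ )
        {
            if ( res.start > r[i].start )    /* piece below the hole */
                out[m++] = (struct region){
                    r[i].start, res.start < r[i].end ? res.start : r[i].end };
            if ( res.end < r[i].end )        /* piece above the hole */
                out[m++] = (struct region){
                    res.end > r[i].start ? res.end : r[i].start, r[i].end };
        }

        for ( i = 0; i < m; i++ )
            r[i] = out[i];

        return m;
    }

    int main(void)
    {
        /* 1) fetch the available RAM (hardcoded host banks here) */
        struct region ram[MAX_REGIONS] = { { MB(0),   MB(512)  },
                                           { MB(768), MB(1024) } };
        unsigned int i, n = 2;

        /* 2) remove the reserved-memory regions */
        struct region reserved[] = { { MB(16),  MB(20)  },
                                     { MB(800), MB(801) } };
        for ( i = 0; i < sizeof(reserved) / sizeof(reserved[0]); i++ )
            n = remove_range(ram, n, reserved[i]);

        /* 3) report ranges, aligning the base and skipping small banks */
        for ( i = 0; i < n; i++ )
        {
            uint64_t base = ALIGN_2M(ram[i].start);

            if ( base >= ram[i].end || ram[i].end - base < MIN_BANK )
                continue;
            printf("usable bank: %#llx-%#llx\n",
                   (unsigned long long)base, (unsigned long long)ram[i].end);
        }

        return 0;
    }

The first reported bank would then be the natural place to try to fit
kernel+dtb+initrd, which is where the sizing question above comes in.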

~Michal

