On 22.04.20 09:22, Marco Solieri wrote:
On Wed, Apr 22, 2020 at 08:42:32AM +0200, Jan Kiszka wrote:
On 27.03.19 13:18, Marco Solieri wrote:
Predictability of memory access latency is severely threatened on
multi-core architectures where the last level cache (LLC) is
shared, jeopardizing the applicability of many Arm platforms in
real-time critical and mixed-criticality scenarios. Support for
cache coloring is introduced, a transparent software technique that
allows partitioning the LLC to avoid mutual interference between
inmates.
[...]

Thanks for updating this! I will refresh my caches on the topic and
provide feedback soon (I already have some questions and remarks but
I'd like to double-check them).

Looking forward to hearing from you.
As you likely read, there are better chances in sight to also address
the root cell issue by booting Jailhouse from a loader.

I share the same view.

On the other hand, it ties cache colouring to the Linux-independent
boot.  This is not ideal from a quality perspective, because it
introduces a dependency between otherwise unrelated features, one of
which is definitely optional (as long as Jailhouse stays a
"Linux-based hypervisor").  Also, from a process perspective, it forces
the colouring-related activities and deliveries to be postponed until
a somewhat stable architecture for the independent loader is reached
(colouring pages is a loader matter).

The other option is the hot-remapping of the root-cell memory, which we
already wrote and tested on an older version of Jailhouse extended with
SMMU support.  From a quality perspective, it looks comparable, and it
does not introduce constraints on the development process.


As pointed out back then, there are still open questions regarding the reliability of such a hot-remapping approach, besides its complexity.

Anyway, we now do have SMMU support in Jailhouse (first issue to report against your series, patch 9 ;) ), we could look into that systematically.


That would then leave us only with the question of how to handle the
hypervisor itself w.r.t. coloring.

Correct.


Provided that can buy us worthwhile improvements.

We have already experimentally shown on two other hypervisors (Xen and
Bao) that the interrupt response time depends hugely on the cache
performance of the hypervisor's interrupt-injection routines.  Cache
partitioning is therefore mandatory for predictability.


What measures did you apply to the hypervisors? Replicate the code into memory that has the "right" local color? Ensure that core- and guest-local data resides in colored memory? How did you handle inherently shared r/w data structures?

Jan

--
Siemens AG, Corporate Technology, CT RDA IOT SES-DE
Corporate Competence Center Embedded Linux
