On 20.07.20 18:26, 'Marco Solieri' via Jailhouse wrote:
On Wed, Jun 17, 2020 at 10:49:55AM +0200, Jan Kiszka wrote:
On 15.06.20 10:11, Marco Solieri wrote:
On Wed, May 27, 2020 at 05:20:05PM +0200, Jan Kiszka wrote:
On 26.05.20 15:24, Marco Solieri wrote:
On Mon, May 04, 2020 at 08:54:32PM +0200, Jan Kiszka wrote:
On 22.04.20 10:51, Jan Kiszka wrote:
On 22.04.20 09:22, Marco Solieri wrote:
On Wed, Apr 22, 2020 at 08:42:32AM +0200, Jan Kiszka wrote:
On 27.03.19 13:18, Marco Solieri wrote:
Predictability of memory access latency is severely threatened by multi-core architectures in which the last level of cache (LLC) is shared, jeopardizing the applicability of many Arm platforms in real-time critical and mixed-criticality scenarios. Support for cache coloring is introduced, a transparent software technique that allows partitioning the LLC to avoid mutual interference between inmates. [...]

Thanks for updating this! I will refresh my caches on
the topic and provide feedback soon (I already have
some questions and remarks but I'd like to double-check them).

Looking forward to hearing from you.


Done with the deeper review. Overall, the series looks fairly good. I see just two bigger open issues:

- inmate loading interface
- more architectural independence

But I think those should be solvable.

The major point you raise is that the impact on the hypervisor code size should be minimised -- the inmate loading interface. We took a while to consider and weigh the
various alternative designs.

First of all, let us consider the optimal solution in this sense. That would be placing the whole colouring logic outside the hypervisor, in the Linux driver or in the userspace tools. No matter how it is implemented, this solution would sooner or later require passing the hypervisor a list of memory regions, one for each memory segment to be mapped. Such a list grows unacceptably quickly, wasting a lot of memory to store it. Take for instance a Linux inmate, and suppose its memory reservation requirement is 128 MiB. Now assume that each contiguous fragment is the shortest possible, i.e. a 4 KiB page. This means we need 32 Ki elements, each 16 B in size, which is 512 KiB in total.
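For concreteness, here is a back-of-the-envelope check of that figure (the constants are our assumptions: a 4 KiB worst-case fragment and 16 B per region descriptor):

#include <stdio.h>

#define KiB	1024UL
#define MiB	(1024UL * KiB)

int main(void)
{
	unsigned long inmate_mem = 128 * MiB;	/* assumed reservation */
	unsigned long frag_size = 4 * KiB;	/* worst case: one page */
	unsigned long desc_size = 16;		/* assumed bytes per element */
	unsigned long n = inmate_mem / frag_size;

	/* prints: 32768 elements, 512 KiB of list */
	printf("%lu elements, %lu KiB of list\n", n, n * desc_size / KiB);
	return 0;
}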

This brings us to a design conclusion. The mere colouring logic -- i.e. the algorithm that conceptually expands the colour selection within a memory area into the lengthy list of contiguously-mapped segments (next_col) -- must be placed together with the mapping function (paging_create).
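To make the expansion step concrete, here is a minimal sketch of what next_col conceptually does (names and constants are hypothetical, not the code from the series; it assumes 4 KiB pages, that the number of available colours is the LLC way size divided by the page size, and that at least one valid colour is selected):

#include <stdint.h>

#define PAGE_SIZE	0x1000UL

/* colour of a physical page: its page index modulo the number of colours */
static inline unsigned int addr_color(uint64_t phys, uint64_t way_size)
{
	return (phys / PAGE_SIZE) % (way_size / PAGE_SIZE);
}

/*
 * Starting the search at *next, find the following run of pages whose
 * colours are all selected in color_mask. Reports the run's start via
 * *start and returns its length; *next ends up pointing past the run.
 */
static uint64_t next_colored_segment(uint64_t *next, uint64_t *start,
				     uint64_t way_size, uint64_t color_mask)
{
	uint64_t len = 0;

	/* skip pages of unselected colours */
	while (!(color_mask & (1UL << addr_color(*next, way_size))))
		*next += PAGE_SIZE;
	*start = *next;

	/* accumulate consecutive pages of selected colours */
	while (color_mask & (1UL << addr_color(*next, way_size))) {
		*next += PAGE_SIZE;
		len += PAGE_SIZE;
	}
	return len;
}

Iterating this while mapping reproduces, segment by segment, the long list described above without ever materialising it.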

We believe we can leave everything else outside the hypervisor without much effort. We can move into the driver:
- the cache probe function (get_llc_waysize)
- the initialisation routines (coloring_paging_init and coloring_cell_init)

We believe this is the best compromise.

In this case, a minor issue is also worth discussing. The cell load function requires an IPA-contiguous mapping for the memcpy to be efficient. This in turn requires such a mapping to be performed by the driver (we don't want to add a hypercall, right? ;-)), thus including a second copy of the colouring logic (next_col). It would be nice, perhaps, to have a 'common' section in which to place code shared between the hypervisor and the driver.
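To illustrate the duplication: the driver-side load path would iterate exactly the same segments, roughly along these lines (a hypothetical sketch reusing the next_colored_segment() helper sketched above; phys_to_driver_virt() stands in for however the driver maps physical memory):

#include <string.h>

static void striped_copy(const void *image, uint64_t size, uint64_t phys,
			 uint64_t way_size, uint64_t color_mask)
{
	uint64_t copied = 0, start, seg;

	while (copied < size) {
		/* next contiguous run of pages with selected colours */
		seg = next_colored_segment(&phys, &start, way_size,
					   color_mask);
		if (seg > size - copied)
			seg = size - copied;
		memcpy(phys_to_driver_virt(start),
		       (const char *)image + copied, seg);
		copied += seg;
	}
}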

Thanks for the explanations. My current feeling is that I need to look closer into the implementation so that I can argue here on an equal footing. Will try to schedule that soon and come back to you!

Any news about it? We have time available to follow up for the next month or so.

Not yet. Started to look into it but got distracted again. As it is
more complex than I thought, I need to find some hours of continuous work on that. Should be doable before July, though.

We are designing some extensions to the cache colouring feature, namely dynamic colouring of the root cell and SMMU support. Since we intend to implement them in the next months, it would be quite valuable for us to have some feedback and agreement on this initial series.


Sorry, I'm juggling tasks, and I had to drop and pick up this one multiple times:

I've hacked up striped copying into cell memory in the driver, also factoring out a tiny helper (header) to calculate the virtual-to-colored offset into a region. It should be reusable for mapping as well. However, this work is not yet tested. Let me see what I can do tomorrow morning so that I can at least share it and you can possibly pick something up.
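Roughly, such a helper could look like this (a sketch under the same assumptions as above -- 4 KiB pages and a way-size-aligned colored region -- with hypothetical names, not the actual code from my branch):

/*
 * Translate an offset in the cell's contiguous (virtual/IPA) view of a
 * region into the offset within the striped colored physical region.
 */
static uint64_t virt_to_colored_offset(uint64_t virt_off, uint64_t way_size,
				       uint64_t color_mask)
{
	unsigned int colors = way_size / PAGE_SIZE;
	unsigned int selected =
		__builtin_popcountl(color_mask & ((1UL << colors) - 1));
	uint64_t page = virt_off / PAGE_SIZE;
	uint64_t stride = page / selected;	/* full way-size strides */
	unsigned int nth = page % selected, c, seen = 0;

	/* locate the nth selected colour, in ascending order */
	for (c = 0; c < colors; c++)
		if ((color_mask & (1UL << c)) && seen++ == nth)
			break;

	return stride * way_size + c * PAGE_SIZE + virt_off % PAGE_SIZE;
}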

Regarding dynamic coloring, I can only repeat what I have stated before, multiple times: I'm extremely pessimistic that you can turn on or reconfigure an IOMMU while transactions affected by that change may still be in flight. How do you pick up the pieces when you do not know whether a transaction has finished and which address it hit, the one from before or the one from after the change? That is exactly the scenario you face when trying to move a root cell from uncolored to colored memory. IOW: you may implement this, but you cannot make it robust.

A more promising path is a pre-Linux Jailhouse boot, maybe even without a root cell after that at all (needed anyway for shrinking the runtime code further).

More important to me would be coloring of the runtime paths of the hypervisor itself. Here the question is whether the simplistic approach taken, e.g., by Xen -- just assigning a single color set to the hypervisor, shared by all cells -- is enough, or whether we rather want per-cell coloring of the hypervisor, using the colors of the cell it is serving. The latter is more complex, I know, but definitely more partitioning-friendly (read: deterministic). Before deciding which way to take, it would be good to have some numbers.


One piece of feedback I can already provide: any kind of runtime validation of the colored config, like color_root_cell_management, has to be moved into jailhouse-config-check.

Good idea. We will look into the script soon (it hasn't been merged yet, has it?).


It's in master. And that reminds me that I still need to review some related series...

Jan

--
Siemens AG, Corporate Technology, CT RDA IOT SES-DE
Corporate Competence Center Embedded Linux
