On 12/5/22 10:50, Peter Maydell wrote:
@@ -2639,6 +2640,14 @@ static bool get_phys_addr_twostage(CPUARMState *env, S1Translate *ptw,
          return ret;
      }

+    /*
+     * Use the maximum of the S1 & S2 page size, so that invalidation
+     * of pages > TARGET_PAGE_SIZE works correctly.
+     */
+    if (result->f.lg_page_size < s1_lgpgsz) {
+        result->f.lg_page_size = s1_lgpgsz;
+    }
+
      /* Combine the S1 and S2 cache attributes. */
      hcr = arm_hcr_el2_eff_secstate(env, is_secure);
      if (hcr & HCR_DC) {

Firstly, what if the lg_page_size is < TARGET_PAGE_SIZE?
I think this can't happen for VMSA, but for PMSA it will
when the region (in either S1 or S2) is less than the page size
(in which case lg_page_size is 0). Presumably in this case we
want to set the result's lg_page_size to also be 0 to
preserve the "don't put this in the TLB" effect.

Hmm, I hadn't considered that -- probably because I assumed A-profile. You're right that we should preserve the "don't cache the result" behaviour.
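Something along these lines, perhaps -- just a sketch, untested, with TARGET_PAGE_BITS being log2(TARGET_PAGE_SIZE):

    /*
     * Sketch: if either stage returned a result smaller than
     * TARGET_PAGE_SIZE, that means "don't put this in the TLB",
     * so force lg_page_size to 0 to preserve it.  Otherwise use
     * the maximum of the S1 & S2 page sizes, so that invalidation
     * of pages > TARGET_PAGE_SIZE works correctly.
     */
    if (result->f.lg_page_size < TARGET_PAGE_BITS ||
        s1_lgpgsz < TARGET_PAGE_BITS) {
        result->f.lg_page_size = 0;
    } else if (result->f.lg_page_size < s1_lgpgsz) {
        result->f.lg_page_size = s1_lgpgsz;
    }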


Secondly, how does this work for VMSA? Suppose that stage 1
is using 4K pages and stage 2 is using 64K pages. We will then
claim here that the result lg_page_size is 64K, but the
attributes and mapping in the result are only valid for
the 4K page that we looked up in stage 1 -- the surrounding
4K pages could have entirely different permissions/mapping.

This works only because the middle-end registers a single page, at TARGET_PAGE_SIZE.

But we need to record this as a large page, so that a flush of the (64K) stage 2 page address affects all of the (4K) stage 1 page entries that it covers.
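Roughly the mechanism, as a simplified toy model (this is not the actual cputlb.c code, just an illustration of why recording the larger size is enough): the TLB installs only TARGET_PAGE_SIZE entries, but remembers the range covered by any larger page, and a flush of an address inside that range flushes the whole TLB.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define TARGET_PAGE_BITS 12                 /* 4K */
    #define TLB_SIZE 256

    typedef struct {
        uint64_t page[TLB_SIZE];                /* page address | 1, or 0 */
        uint64_t large_page_addr;               /* -1 if none recorded */
        uint64_t large_page_mask;
    } ToyTLB;

    static size_t tlb_index(uint64_t addr)
    {
        return (addr >> TARGET_PAGE_BITS) % TLB_SIZE;
    }

    static void tlb_flush_all(ToyTLB *tlb)
    {
        memset(tlb->page, 0, sizeof(tlb->page));
        tlb->large_page_addr = (uint64_t)-1;
        tlb->large_page_mask = 0;
    }

    /* Install a translation whose reported size is 1 << lg_page_size. */
    static void tlb_set_page(ToyTLB *tlb, uint64_t addr, int lg_page_size)
    {
        if (lg_page_size > TARGET_PAGE_BITS) {
            uint64_t mask = (uint64_t)-1 << lg_page_size;
            if (tlb->large_page_addr != (uint64_t)-1) {
                /* Widen the recorded range to cover both pages. */
                mask &= tlb->large_page_mask;
                while (((tlb->large_page_addr ^ addr) & mask) != 0) {
                    mask <<= 1;
                }
            }
            tlb->large_page_addr = addr & mask;
            tlb->large_page_mask = mask;
        }
        /* Only a single TARGET_PAGE_SIZE entry is installed. */
        tlb->page[tlb_index(addr)] =
            (addr & ((uint64_t)-1 << TARGET_PAGE_BITS)) | 1;
    }

    static void tlb_flush_page(ToyTLB *tlb, uint64_t addr)
    {
        if ((addr & tlb->large_page_mask) == tlb->large_page_addr) {
            /* addr may lie inside a large page: flush everything. */
            tlb_flush_all(tlb);
        } else {
            tlb->page[tlb_index(addr)] = 0;
        }
    }

    int main(void)
    {
        ToyTLB tlb;
        tlb_flush_all(&tlb);

        /* Two 4K S1 pages inside one 64K S2 page: record size 2^16. */
        tlb_set_page(&tlb, 0x40001000, 16);
        tlb_set_page(&tlb, 0x40002000, 16);

        /* Flushing any address in the 64K region drops both entries. */
        tlb_flush_page(&tlb, 0x4000f000);
        printf("still cached: %d %d\n",
               tlb.page[tlb_index(0x40001000)] != 0,
               tlb.page[tlb_index(0x40002000)] != 0);
        return 0;
    }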

Perhaps it would be less confusing across the N target/ implementations if we had two lg_page_size structure members and handled the unioning in the middle-end? Soliciting suggestions for what to name such a beast (considering that RME adds a stage 3 lookup, with associated page/granule sizes), and for how to signal that it is or isn't used (presumably 0, meaning that stage N+1 can't have a "don't record" setting).
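For concreteness, something like this, with the names invented purely for illustration:

    /* Hypothetical sketch -- names invented for illustration only. */
    typedef struct CPUTLBEntryFullSizes {
        uint8_t lg_page_size;        /* final (stage 1) page size, log2 */
        /*
         * Union over later stages (S2, and S3 for RME), or 0 when
         * there is no later stage; the middle-end would register
         * the page using the maximum of the two.
         */
        uint8_t lg_page_size_union;
    } CPUTLBEntryFullSizes;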


r~

