Re: [PATCH 5/5] target/riscv: Add pointer mask support for instruction fetch

2023-03-27 Thread liweiwei



On 2023/3/28 11:31, Richard Henderson wrote:

On 3/27/23 18:55, liweiwei wrote:


On 2023/3/28 02:04, Richard Henderson wrote:

On 3/27/23 03:00, Weiwei Li wrote:
@@ -1248,6 +1265,10 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
     qemu_log_mask(CPU_LOG_MMU, "%s ad %" VADDR_PRIx " rw %d mmu_idx %d\n",
                   __func__, address, access_type, mmu_idx);
 
+    if (access_type == MMU_INST_FETCH) {
+        address = adjust_pc_address(env, address);
+    }


Why do you want to do this so late, as opposed to earlier in 
cpu_get_tb_cpu_state?


In this way, the pc recorded for the tb may differ from the pc register, and
the pc register will then be wrong if synced from the tb.


Hmm, true.

But you certainly cannot adjust the address in tlb_fill, as you'll be 
producing different results for read/write and exec.  You could 
plausibly use a separate mmu_idx, but that's not ideal either.


The best solution might be to implement pc-relative translation 
(CF_PCREL).  At which point cpu_pc always has the correct results and 
we make relative adjustments to that.


I'm not very familiar with how CF_PCREL currently works. I'll try that 
approach later.


Regards,

Weiwei Li




r~





Re: [PATCH 5/5] target/riscv: Add pointer mask support for instruction fetch

2023-03-27 Thread Richard Henderson

On 3/27/23 18:55, liweiwei wrote:


On 2023/3/28 02:04, Richard Henderson wrote:

On 3/27/23 03:00, Weiwei Li wrote:

@@ -1248,6 +1265,10 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
     qemu_log_mask(CPU_LOG_MMU, "%s ad %" VADDR_PRIx " rw %d mmu_idx %d\n",
                   __func__, address, access_type, mmu_idx);
 
+    if (access_type == MMU_INST_FETCH) {
+        address = adjust_pc_address(env, address);
+    }


Why do you want to do this so late, as opposed to earlier in 
cpu_get_tb_cpu_state?


In this way, the pc recorded for the tb may differ from the pc register, and the pc register 
will then be wrong if synced from the tb.


Hmm, true.

But you certainly cannot adjust the address in tlb_fill, as you'll be producing different 
results for read/write and exec.  You could plausibly use a separate mmu_idx, but that's 
not ideal either.


The best solution might be to implement pc-relative translation (CF_PCREL).  At which 
point cpu_pc always has the correct results and we make relative adjustments to that.



r~



Re: [PATCH 5/5] target/riscv: Add pointer mask support for instruction fetch

2023-03-27 Thread liweiwei


On 2023/3/28 10:31, LIU Zhiwei wrote:


On 2023/3/28 9:55, liweiwei wrote:


On 2023/3/28 02:04, Richard Henderson wrote:

On 3/27/23 03:00, Weiwei Li wrote:
@@ -1248,6 +1265,10 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
     qemu_log_mask(CPU_LOG_MMU, "%s ad %" VADDR_PRIx " rw %d mmu_idx %d\n",
                   __func__, address, access_type, mmu_idx);
 
+    if (access_type == MMU_INST_FETCH) {
+        address = adjust_pc_address(env, address);
+    }


Why do you want to do this so late, as opposed to earlier in 
cpu_get_tb_cpu_state?


In this way, the pc recorded for the tb may differ from the pc register. 

I don't understand.

Then the pc register will be wrong if synced from the tb.


I think you should explain here why it is wrong.

Zhiwei


Assume the pc is 0x1fff and pmmask is 0xf000. If we adjust the pc in 
cpu_get_tb_cpu_state,

then tb->pc will be 0x0fff.

If we then sync the pc from the tb via riscv_cpu_synchronize_from_tb(),

the pc will be updated to 0x0fff in this case, which differs from the 
original value.


I have skipped many internal steps in the case above. Is there any critical 
condition I missed, or anything I misunderstood?


Regards,

Weiwei Li










r~

Re: [PATCH 5/5] target/riscv: Add pointer mask support for instruction fetch

2023-03-27 Thread LIU Zhiwei



On 2023/3/28 9:55, liweiwei wrote:


On 2023/3/28 02:04, Richard Henderson wrote:

On 3/27/23 03:00, Weiwei Li wrote:
@@ -1248,6 +1265,10 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
     qemu_log_mask(CPU_LOG_MMU, "%s ad %" VADDR_PRIx " rw %d mmu_idx %d\n",
                   __func__, address, access_type, mmu_idx);
 
+    if (access_type == MMU_INST_FETCH) {
+        address = adjust_pc_address(env, address);
+    }


Why do you want to do this so late, as opposed to earlier in 
cpu_get_tb_cpu_state?


In this way, the pc recorded for the tb may differ from the pc register. 

I don't understand.

Then the pc register will be wrong if synced from the tb.


I think you should explain here why it is wrong.

Zhiwei



Regards,

Weiwei Li




r~




Re: [PATCH 5/5] target/riscv: Add pointer mask support for instruction fetch

2023-03-27 Thread liweiwei



On 2023/3/28 02:04, Richard Henderson wrote:

On 3/27/23 03:00, Weiwei Li wrote:
@@ -1248,6 +1265,10 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
     qemu_log_mask(CPU_LOG_MMU, "%s ad %" VADDR_PRIx " rw %d mmu_idx %d\n",
                   __func__, address, access_type, mmu_idx);
 
+    if (access_type == MMU_INST_FETCH) {
+        address = adjust_pc_address(env, address);
+    }


Why do you want to do this so late, as opposed to earlier in 
cpu_get_tb_cpu_state?


In this way, the pc recorded for the tb may differ from the pc register, and the pc 
register will then be wrong if synced from the tb.


Regards,

Weiwei Li




r~





Re: [PATCH 5/5] target/riscv: Add pointer mask support for instruction fetch

2023-03-27 Thread Richard Henderson

On 3/27/23 03:00, Weiwei Li wrote:

@@ -1248,6 +1265,10 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
     qemu_log_mask(CPU_LOG_MMU, "%s ad %" VADDR_PRIx " rw %d mmu_idx %d\n",
                   __func__, address, access_type, mmu_idx);
 
+    if (access_type == MMU_INST_FETCH) {
+        address = adjust_pc_address(env, address);
+    }


Why do you want to do this so late, as opposed to earlier in 
cpu_get_tb_cpu_state?


r~



Re: [PATCH 5/5] target/riscv: Add pointer mask support for instruction fetch

2023-03-27 Thread Daniel Henrique Barboza




On 3/27/23 07:00, Weiwei Li wrote:

Transform the fetch address before page walk when pointer mask is
enabled for instruction fetch.

Signed-off-by: Weiwei Li 
Signed-off-by: Junqiang Wang 
---


Reviewed-by: Daniel Henrique Barboza 


  target/riscv/cpu.h|  1 +
  target/riscv/cpu_helper.c | 25 +++--
  target/riscv/csr.c|  2 --
  3 files changed, 24 insertions(+), 4 deletions(-)

diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
index 638e47c75a..57bd9c3279 100644
--- a/target/riscv/cpu.h
+++ b/target/riscv/cpu.h
@@ -368,6 +368,7 @@ struct CPUArchState {
  #endif
  target_ulong cur_pmmask;
  target_ulong cur_pmbase;
+bool cur_pminsn;
  
  /* Fields from here on are preserved across CPU reset. */

  QEMUTimer *stimer; /* Internal timer for S-mode interrupt */
diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
index f88c503cf4..77132a3e0c 100644
--- a/target/riscv/cpu_helper.c
+++ b/target/riscv/cpu_helper.c
@@ -124,6 +124,7 @@ void cpu_get_tb_cpu_state(CPURISCVState *env, target_ulong *pc,
  void riscv_cpu_update_mask(CPURISCVState *env)
  {
  target_ulong mask = -1, base = 0;
+bool insn = false;
  /*
   * TODO: Current RVJ spec does not specify
   * how the extension interacts with XLEN.
@@ -135,18 +136,21 @@ void riscv_cpu_update_mask(CPURISCVState *env)
  if (env->mmte & M_PM_ENABLE) {
  mask = env->mpmmask;
  base = env->mpmbase;
+insn = env->mmte & MMTE_M_PM_INSN;
  }
  break;
  case PRV_S:
  if (env->mmte & S_PM_ENABLE) {
  mask = env->spmmask;
  base = env->spmbase;
+insn = env->mmte & MMTE_S_PM_INSN;
  }
  break;
  case PRV_U:
  if (env->mmte & U_PM_ENABLE) {
  mask = env->upmmask;
  base = env->upmbase;
+insn = env->mmte & MMTE_U_PM_INSN;
  }
  break;
  default:
@@ -161,6 +165,7 @@ void riscv_cpu_update_mask(CPURISCVState *env)
  env->cur_pmmask = mask;
  env->cur_pmbase = base;
  }
+env->cur_pminsn = insn;
  }
  
  #ifndef CONFIG_USER_ONLY

@@ -1225,6 +1230,17 @@ static void pmu_tlb_fill_incr_ctr(RISCVCPU *cpu, MMUAccessType access_type)
  riscv_pmu_incr_ctr(cpu, pmu_event_type);
  }
  
+static target_ulong adjust_pc_address(CPURISCVState *env, target_ulong pc)
+{
+    target_ulong adjust_pc = pc;
+
+    if (env->cur_pminsn) {
+        adjust_pc = (adjust_pc & ~env->cur_pmmask) | env->cur_pmbase;
+    }
+
+    return adjust_pc;
+}
+
  bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
  MMUAccessType access_type, int mmu_idx,
  bool probe, uintptr_t retaddr)
@@ -1232,6 +1248,7 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
  RISCVCPU *cpu = RISCV_CPU(cs);
  CPURISCVState *env = &cpu->env;
  vaddr im_address;
+vaddr orig_address = address;
  hwaddr pa = 0;
  int prot, prot2, prot_pmp;
  bool pmp_violation = false;
@@ -1248,6 +1265,10 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
     qemu_log_mask(CPU_LOG_MMU, "%s ad %" VADDR_PRIx " rw %d mmu_idx %d\n",
                   __func__, address, access_type, mmu_idx);
 
+    if (access_type == MMU_INST_FETCH) {
+        address = adjust_pc_address(env, address);
+    }
+
  /* MPRV does not affect the virtual-machine load/store
 instructions, HLV, HLVX, and HSV. */
  if (riscv_cpu_two_stage_lookup(mmu_idx)) {
@@ -1351,13 +1372,13 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
  }
  
  if (ret == TRANSLATE_SUCCESS) {

-tlb_set_page(cs, address & ~(tlb_size - 1), pa & ~(tlb_size - 1),
+tlb_set_page(cs, orig_address & ~(tlb_size - 1), pa & ~(tlb_size - 1),
   prot, mmu_idx, tlb_size);
  return true;
  } else if (probe) {
  return false;
  } else {
-raise_mmu_exception(env, address, access_type, pmp_violation,
+raise_mmu_exception(env, orig_address, access_type, pmp_violation,
  first_stage_error,
  riscv_cpu_virt_enabled(env) ||
  riscv_cpu_two_stage_lookup(mmu_idx),
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index d522efc0b6..4544c9d934 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -3511,8 +3511,6 @@ static RISCVException write_mmte(CPURISCVState *env, int csrno,
  /* for machine mode pm.current is hardwired to 1 */
  wpri_val |= MMTE_M_PM_CURRENT;
  
-/* hardwiring pm.instruction bit to 0, since it's not supported yet */

-wpri_val &= ~(MMTE_M_PM_INSN | MMTE_S_PM_INSN | MMTE_U_PM_INSN);
  env->mmte = wpri_val | PM_EXT_DIRTY;
  riscv_cpu_update_mask(env);
  



