On Wed, 3 Apr 2019 at 05:05, Richard Henderson
<richard.hender...@linaro.org> wrote:
>
> Most of the existing users would continue around a loop which
> would fault the tlb entry in via a normal load/store.  But for
> SVE we have a true non-faulting case which requires the new
> probing form of tlb_fill.
So am I right in thinking that this fixes a bug where we previously
would mark a load as faulted if the memory happened not to be in the
TLB, whereas now we will correctly pull in the TLB entry and do the
load? (Since guest code ought to be handling the "non-first-load
faulted" case by looping round or otherwise arranging to retry,
nothing in practice would have noticed this bug, right?)

> Signed-off-by: Richard Henderson <richard.hender...@linaro.org>
> ---
>  include/exec/cpu_ldst.h | 40 ++++--------------------
>  accel/tcg/cputlb.c      | 69 ++++++++++++++++++++++++++++++++++++-----
>  target/arm/sve_helper.c |  6 +---
>  3 files changed, 68 insertions(+), 47 deletions(-)
>
> diff --git a/include/exec/cpu_ldst.h b/include/exec/cpu_ldst.h
> index d78041d7a0..be8c3f4da2 100644
> --- a/include/exec/cpu_ldst.h
> +++ b/include/exec/cpu_ldst.h
> @@ -440,43 +440,15 @@ static inline CPUTLBEntry *tlb_entry(CPUArchState *env, uintptr_t mmu_idx,
>   * This is the equivalent of the initial fast-path code used by
>   * TCG backends for guest load and store accesses.
>   */

The doc comment which this is the last two lines of needs updating,
I think -- with the changed implementation it's no longer just the
equivalent of the fast-path bit of code, and it doesn't return NULL
on a TLB miss any more.

Otherwise

Reviewed-by: Peter Maydell <peter.mayd...@linaro.org>

thanks
-- PMM