Hi Catalin,
On 2020/7/10 1:36, Catalin Marinas wrote:
> On Thu, Jul 09, 2020 at 05:10:54PM +0800, Zhenyu Ye wrote:
>> #define __tlbi_level(op, addr, level) do { \
>> 	u64 arg = addr; \
>>
On Thu, Jul 09, 2020 at 05:10:54PM +0800, Zhenyu Ye wrote:
> Add __TLBI_VADDR_RANGE macro and rewrite __flush_tlb_range().
>
> Signed-off-by: Zhenyu Ye
> ---
>  arch/arm64/include/asm/tlbflush.h | 156 +++++++++++++++++++++++++++------
>  1 file changed, 126 insertions(+), 30 deletions(-)
>
> diff -
On 2020/7/9 17:10, Zhenyu Ye wrote:
> +	/*
> +	 * When the CPU does not support the TLBI RANGE feature, we flush
> +	 * the TLB entries one by one at the granularity of 'stride'.
> +	 * When the CPU supports the TLBI RANGE feature, then:
> +	 * 1. If 'pages' is odd, flush the first page through non-range
> +	 *    operations;
Add __TLBI_VADDR_RANGE macro and rewrite __flush_tlb_range().
Signed-off-by: Zhenyu Ye
---
 arch/arm64/include/asm/tlbflush.h | 156 +++++++++++++++++++++++++++------
1 file changed, 126 insertions(+), 30 deletions(-)
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h