On 10/27/18 1:16 AM, Emilio G. Cota wrote:
> On Tue, Oct 23, 2018 at 08:02:47 +0100, Richard Henderson wrote:
>> +static void tlb_flush_page_locked(CPUArchState *env, int midx,
>> +                                  target_ulong addr)
>> +{
>> +    target_ulong lp_addr = env->tlb_d[midx].large_page_addr;
>> +    target_ulong lp_mask = env->tlb_d[midx].large_page_mask;
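The hunk is cut off above. For reference, a minimal sketch of how the function presumably continues, going by the description below; the helper names here (tlb_flush_one_mmuidx_locked, tlb_flush_entry_locked, tlb_entry, tlb_flush_vtlb_page_locked) are my assumptions, not text quoted from the patch:

    /* Check if we need to flush due to large pages.  */
    if ((addr & lp_mask) == lp_addr) {
        /* A large page covering addr was installed for this mmu_idx;
           a single entry cannot be evicted out of it, so drop the
           whole per-mmu_idx table.  */
        tlb_flush_one_mmuidx_locked(env, midx);
    } else {
        /* No large page in range: evict just this entry, plus any
           matching victim-tlb entry.  */
        tlb_flush_entry_locked(tlb_entry(env, midx, addr), addr);
        tlb_flush_vtlb_page_locked(env, midx, addr);
    }
}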
The set of large pages in the kernel is probably not the same
as the set of large pages in the application. Forcing one
range to cover both will flush more often than necessary.
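To see where the extra flushes come from, here is a minimal sketch of single-range tracking in the pre-patch style (the tlb_flush_addr/tlb_flush_mask fields match the old code; the exact widening loop is my reconstruction, not a quote from this series):

/* Pre-patch style: one (addr, mask) pair for the entire TLB.  */
static void tlb_add_large_page(CPUArchState *env, target_ulong vaddr,
                               target_ulong size)
{
    target_ulong mask = ~(size - 1);

    if (env->tlb_flush_addr == (target_ulong)-1) {
        /* First large page: record its range exactly.  */
        env->tlb_flush_addr = vaddr & mask;
        env->tlb_flush_mask = mask;
        return;
    }
    /* Widen the recorded range until it also covers the new page.
       A kernel large page and an application large page that are
       far apart get merged into one huge range.  */
    mask &= env->tlb_flush_mask;
    while (((env->tlb_flush_addr ^ vaddr) & mask) != 0) {
        mask <<= 1;
    }
    env->tlb_flush_addr &= mask;
    env->tlb_flush_mask = mask;
}

Once the two ranges have been merged, any page flush landing inside the widened range has to be treated as hitting a large page and degenerates into a full flush; keeping a separate (addr, mask) pair in env->tlb_d[midx] avoids that cross-contamination.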
This allows tlb_flush_page_async_work to flush just the one
mmu_idx implicated, which in turn allows us to remove
tlb_c