On 12/15/18 4:42 AM, Emilio G. Cota wrote:
> +#if TCG_TARGET_IMPLEMENTS_DYN_TLB
> +#define CPU_TLB_DYN_MIN_BITS 6
> +#define CPU_TLB_DYN_DEFAULT_BITS 8
> +/*
> + * Assuming TARGET_PAGE_BITS==12, with 2**22 entries we can cover 2**(22+12) ==
> + * 2**34 == 16G of address space. This is roughly what one would expect a
> + * TLB to cover in a modern (as of 2018) x86_64 CPU. For instance, Intel
> + * Skylake's Level-2 STLB has 16 1G entries.
> + */
> +#define CPU_TLB_DYN_MAX_BITS 22
For 32-bit hosts, we need to limit this to (32 - TARGET_PAGE_BITS) so that
we do not require a double-word shift when implementing the tlb load.  We
probably want to restrict it even further, because a 32-bit host may well
not have 2**27 bytes (128MB) of memory available for the tlb.

For 64-bit hosts, we should limit this to
(TARGET_VIRT_ADDR_SPACE_BITS - TARGET_PAGE_BITS) so that we do not grow
the tlb past the guest's address space.

> +    env->tlb_table[mmu_idx] = g_new(CPUTLBEntry, new_size);
> +    env->iotlb[mmu_idx] = g_new(CPUIOTLBEntry, new_size);

For 32-bit hosts, we should probably be prepared for this allocation to
fail; see above re large tlb sizes.  If it does fail, we should be able
to fall back to the previous tlb size, since we just freed that memory.

r~