At the moment, the VMID algorithm will send an SGI to all the CPUs to
force an exit and then broadcast a full TLB flush and I-Cache
invalidation.
This patch re-uses the new ASID allocator. The benefits are:
- CPUs are not forced to exit at roll-over. Instead, the VMID will be
marked
A follow-up patch will replace the KVM VMID allocator with the arm64 ASID
allocator.
To avoid duplication as much as possible, the arm KVM code will directly
compile arch/arm64/lib/asid.c. The header is a verbatim copy, to
avoid breaking the assumption that each architecture port is self-contained.
We will want to re-use the ASID allocator in a separate context (e.g.
allocating VMIDs), so move the code to a new file.
The function asid_check_context has been moved to the header as a static
inline function, because we want to avoid adding a branch when checking
whether the ASID is still valid.
Some users of the ASID allocator (e.g. VMID) may need to free any
resources if the initialization fails. So introduce a function that
frees any memory allocated by the ASID allocator.
Signed-off-by: Julien Grall
---
Changes in v3:
- Patch added
---
At the moment, the function kvm_get_vmid_bits() looks up the
sanitized value of ID_AA64MMFR1_EL1 and extracts the number of VMID
bits supported.
This is fine as the function is mainly used during VMID roll-over. New
use in a follow-up patch will require the
Currently, the number of ASIDs allocated per context is determined at
compile time. As the algorithm is becoming generic, the user may
want to instantiate the ASID allocator multiple times with different
numbers of ASIDs allocated.
Add a field in asid_info to track the number of ASIDs allocated per context.
Move out the common initialization of the ASID allocator in a separate
function.
Signed-off-by: Julien Grall
---
Changes in v3:
- Allow bisection (asid_allocator_init() returns 0 on success, not an
error!).
---
arch/arm64/mm/context.c | 43
Flushing the local context will vary depending on the actual user of the ASID
allocator. Introduce a new callback to flush the local context, and move
the call that flushes the local TLB into it.
Signed-off-by: Julien Grall
---
arch/arm64/mm/context.c | 16 +---
1 file changed, 13
The function check_and_switch_context is used to:
1) Check whether the ASID is still valid
2) Generate a new one if it is not valid
3) Switch the context
While the latter is specific to the MM subsystem, the rest could be part
of the generic ASID allocator.
After this patch, the
The variable bits holds information for a given ASID allocator, so move
it to the asid_info structure.
Because most of the macros were relying on bits, they now take an
extra parameter: a pointer to the asid_info structure.
Signed-off-by: Julien Grall
---
arch/arm64/mm/context.c |
At the moment, ASID_FIRST_VERSION is used to know the number of ASIDs
supported. As we are going to move the ASID allocator to a separate file, it
would be better to use a different name for external users.
This patch adds NUM_ASIDS and implements ASID_FIRST_VERSION using it.
Signed-off-by: Julien
The variables lock and tlb_flush_pending hold information for a given
ASID allocator, so move them to the asid_info structure.
Signed-off-by: Julien Grall
---
arch/arm64/mm/context.c | 17 +
1 file changed, 9 insertions(+), 8 deletions(-)
diff --git a/arch/arm64/mm/context.c
The variables active_asids and reserved_asids hold information for a
given ASID allocator, so move them to the structure asid_info.
At the same time, introduce wrappers to access the active and reserved
ASIDs to make the code clearer.
Signed-off-by: Julien Grall
---
arch/arm64/mm/context.c |
The function new_context will be part of a generic ASID allocator. At
the moment, the MM structure is only used to fetch the ASID.
To remove the dependency on MM, it is possible to just pass a pointer to
the current ASID.
Signed-off-by: Julien Grall
---
arch/arm64/mm/context.c | 6 +++---
1
Hi all,
This patch series moves the ASID allocator out to a separate file in order
to re-use it for the VMID. The benefits are:
- CPUs are not forced to exit on a roll-over.
- Context invalidation is now per-CPU rather than
broadcast.
There is no performance regression on
In an attempt to make the ASID allocator generic, create a new structure
asid_info to store all the information necessary for the allocator.
For now, move the variables asid_generation and asid_map to the new structure
asid_info. Follow-up patches will move more variables.
Note to avoid more
On 24.07.2019 14:39, Marc Zyngier wrote:
On 24/07/2019 11:25, Tomasz Nowicki wrote:
On 21.06.2019 11:38, Marc Zyngier wrote:
From: Jintack Lim
When supporting nested virtualization a guest hypervisor executing AT
instructions must be trapped and emulated by the host hypervisor,
because
On 24/07/2019 11:25, Tomasz Nowicki wrote:
> On 21.06.2019 11:38, Marc Zyngier wrote:
>> From: Jintack Lim
>>
>> When supporting nested virtualization a guest hypervisor executing AT
>> instructions must be trapped and emulated by the host hypervisor,
>> because untrapped AT instructions
On 24/07/2019 10:04, Xiangyou Xie wrote:
> During the halt polling process, vgic_cpu->ap_list_lock is frequently
> obtained and released (kvm_vcpu_check_block->kvm_arch_vcpu_runnable->
> kvm_vgic_vcpu_pending_irq). This action affects the performance of virq
> interrupt injection, because
On 24/07/2019 10:04, Xiangyou Xie wrote:
> It is not necessary to invalidate the lpi translation cache when the
> virtual machine executes the movi instruction to adjust the affinity of
> the interrupt. Irqbalance will adjust the interrupt affinity in a short
> period of time to achieve the
Hi Xiangyou,
On 24/07/2019 10:04, Xiangyou Xie wrote:
> Because dist->lpi_list_lock is a per-VM lock, when a virtual machine
> is configured with multiple virtual NIC devices and receives
> network packets at the same time, dist->lpi_list_lock will become
> a performance bottleneck.
I'm sorry,
Hi KarimAllah,
On 12/07/2019 09:22, KarimAllah Ahmed wrote:
> Valid RAM can live outside kernel control (e.g. using "mem=" command-line
> parameter). This memory can still be used as valid guest memory for KVM. So
> ensure that we validate that this memory is definitely not "RAM" before
>
On 21.06.2019 11:38, Marc Zyngier wrote:
From: Jintack Lim
When supporting nested virtualization a guest hypervisor executing AT
instructions must be trapped and emulated by the host hypervisor,
because untrapped AT instructions operating on S1E1 will use the wrong
translation regime (the one
During the halt polling process, vgic_cpu->ap_list_lock is frequently
obtained and released (kvm_vcpu_check_block->kvm_arch_vcpu_runnable->
kvm_vgic_vcpu_pending_irq). This action affects the performance of virq
interrupt injection, because vgic_queue_irq_unlock also attempts to get
Because dist->lpi_list_lock is a per-VM lock, when a virtual machine
is configured with multiple virtual NIC devices and receives
network packets at the same time, dist->lpi_list_lock will become
a performance bottleneck.
This patch increases the number of lpi_translation_cache instances to eight,
hashes
It is not necessary to invalidate the lpi translation cache when the
virtual machine executes the movi instruction to adjust the affinity of
the interrupt. Irqbalance will adjust the interrupt affinity in a short
period of time to achieve the purpose of interrupting load balancing,
but this does
Hello,
These patches are based on Marc Zyngier's branch
https://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms.git/log/?h=kvm-arm64/its-translation-cache
As follows:
(1) Introduce multiple LPI translation caches to reduce concurrency
(2) Removed the unnecessary
Julien,
On 23/07/2019 18:58, Julien Grall wrote:
> Hi all,
>
> I have been playing with the latest branch of Linux RT (5.2-rt1) and notice
> the
> following splat when starting a KVM guest.
>
> [ 122.336254] 003: BUG: sleeping function called from invalid context at
>
Hi Marc,
On 7/23/19 5:45 PM, Marc Zyngier wrote:
> Hi Eric,
>
> On 23/07/2019 16:10, Auger Eric wrote:
>> Hi Marc,
>>
>> On 6/11/19 7:03 PM, Marc Zyngier wrote:
>>> When performing an MSI injection, let's first check if the translation
>>> is already in the cache. If so, let's inject it quickly