VFIO IOMMU type1 currently unmaps IOVA pages synchronously, which requires
an IOTLB flush for every unmapping. This results in large IOTLB flushing
overhead when a pass-through device has a large number of mapped
IOVAs (e.g. GPUs). It can also cause IOTLB invalidation time-out issues
on AM
Hi Joerg,
Do you have any feedback regarding this patch for AMD IOMMU? I'm re-sending
patch 1/2 separately per Alex's suggestion.
Thanks,
Suravee
On 12/27/17 4:20 PM, Suravee Suthikulpanit wrote:
Implement the newly added IOTLB flushing interface for AMD IOMMU.
Signed-off-by: Suravee Suthikulpanit
On 01/18/2018 10:25 PM, JeffyChen wrote:
Hi Robin,
On 01/18/2018 08:27 PM, Robin Murphy wrote:
Is it worth using the clk_bulk_*() APIs for this? At a glance, most of
the code being added here appears to duplicate what those functions
already do (but I'm no clk API expert, for sure).
right,
Hi Randy,
On 01/22/2018 09:18 AM, Randy Li wrote:
Also the power domain driver could manage the clocks as well, I would
suggest to use pm_runtime_*.
actually the clocks required by the pm domain may not be the same as what we
want to control here; there might be some clocks that are only needed when
On Sun, 2018-01-21 at 08:19 +0100, Jörg Rödel wrote:
> On Sat, Jan 20, 2018 at 05:37:52PM -0800, Joe Perches wrote:
> > While Markus' commit messages are nearly universally terrible,
> > is there really any significant value in knowing when any
> > particular OOM condition occurs other than the subs
On Sat, 2018-01-20 at 20:40 +0100, Jörg Rödel wrote:
> On Sat, Jan 20, 2018 at 03:55:37PM +0100, SF Markus Elfring wrote:
> > Do you need any more background information for this general
> > transformation pattern?
>
> No.
>
> > Do you find the Linux allocation failure report insufficient for thi
Several functions in this driver are called from atomic context,
and thus raw locks must be used in order to be safe on PREEMPT_RT.
This includes paths that must wait for command completion, which is
a potential PREEMPT_RT latency concern but not easily avoidable.
Signed-off-by: Scott Wood
---
get_irq_table() acquires amd_iommu_devtable_lock which is not a raw lock,
and thus cannot be acquired from atomic context on PREEMPT_RT. Many
calls to modify_irte*() come from atomic context due to the IRQ
desc->lock, as does amd_iommu_update_ga() due to the preemption disabling
in vcpu_load/put().
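The reason raw locks matter here: on PREEMPT_RT, an ordinary spinlock_t is converted into a sleeping lock, so taking one with IRQs or preemption disabled is invalid; raw_spinlock_t keeps the non-sleeping busy-wait semantics in both configurations. A kernel-style sketch of the conversion (identifiers taken from the commit message above; the lookup body is abridged and this is not a compilable fragment):

```c
/* Kernel-style sketch, not a standalone program. */
static DEFINE_RAW_SPINLOCK(amd_iommu_devtable_lock);

static struct irq_remap_table *get_irq_table(u16 devid, bool ioapic)
{
	struct irq_remap_table *table;
	unsigned long flags;

	/*
	 * Safe from atomic context (e.g. under the IRQ desc->lock or
	 * with preemption disabled in vcpu_load/put()) even on
	 * PREEMPT_RT: a raw spinlock never sleeps, unlike spinlock_t
	 * under RT.
	 */
	raw_spin_lock_irqsave(&amd_iommu_devtable_lock, flags);
	table = ...;   /* look up or allocate the remap table */
	raw_spin_unlock_irqrestore(&amd_iommu_devtable_lock, flags);

	return table;
}
```

The trade-off noted above follows directly: any path that spins on command completion while holding a raw lock cannot be preempted, which is where the RT latency concern comes from.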