Hi Baolu,
On 12/8/21 3:44 AM, Lu Baolu wrote:
> Hi Eric,
>
> On 12/7/21 6:22 PM, Eric Auger wrote:
>> On 12/6/21 11:48 AM, Joerg Roedel wrote:
>>> On Wed, Oct 27, 2021 at 12:44:20PM +0200, Eric Auger wrote:
Signed-off-by: Jean-Philippe Brucker
Signed-off-by: Liu, Yi L
On 07-12-21, 16:27, Dave Jiang wrote:
>
> On 12/7/2021 6:47 AM, Jacob Pan wrote:
> > In-kernel DMA should be managed by the DMA mapping API. The existing kernel
> > PASID support is based on the SVA machinery in SVA lib that is intended
> > for user process SVA. The binding between a kernel PASID and
Hi Eric,
On 12/7/21 6:22 PM, Eric Auger wrote:
On 12/6/21 11:48 AM, Joerg Roedel wrote:
On Wed, Oct 27, 2021 at 12:44:20PM +0200, Eric Auger wrote:
Signed-off-by: Jean-Philippe Brucker
Signed-off-by: Liu, Yi L
Signed-off-by: Ashok Raj
Signed-off-by: Jacob Pan
Signed-off-by: Eric Auger
This
On Tue, 2021-12-07 at 13:16 +0100, AngeloGioacchino Del Regno wrote:
> Il 07/12/21 13:10, Yong Wu ha scritto:
> > On Tue, 2021-12-07 at 09:56 +0100, AngeloGioacchino Del Regno
> > wrote:
> > > Il 07/12/21 07:24, Yong Wu ha scritto:
> > > > Hi AngeloGioacchino,
> > > >
> > > > Thanks for your
Hi Jacob,
On 12/7/21 9:47 PM, Jacob Pan wrote:
DMA mapping API is the de facto standard for in-kernel DMA. It operates
on a per-device/RID basis, which is not PASID-aware.
For some modern devices such as the Intel Data Streaming Accelerator, a
PASID is required for certain work submissions. To allow such
Drop the useless NULL check on kvm_x86_ops.check_apicv_inhibit_reasons
when handling an APICv update, both VMX and SVM unconditionally implement
the helper and leave it non-NULL even if APICv is disabled at the module
level. The latter is a moot point now that __kvm_request_apicv_update()
is
Unexport __kvm_request_apicv_update(), it's not used by vendor code and
should never be used by vendor code. The only reason it's exposed at all
is because Hyper-V's SynIC needs to track how many auto-EOIs are in use,
and it's convenient to use apicv_update_lock to guard that tracking.
No
Bail from the APICv update paths _before_ taking apicv_update_lock if
APICv is disabled at the module level. kvm_request_apicv_update() in
particular is invoked from multiple paths that can be reached without
APICv being enabled, e.g. svm_enable_irq_window(), and taking the
rw_sem for write when
Nullify svm_x86_ops.vcpu_(un)blocking if AVIC/APICv is disabled as the
hooks are necessary only to clear the vCPU's IsRunning entry in the
Physical APIC and to update IRTE entries if the VM has a pass-through
device attached.
Opportunistically rename the helpers to clarify their AVIC
Move svm_hardware_setup() below svm_x86_ops so that KVM can modify ops
during setup, e.g. the vcpu_(un)blocking hooks can be nullified if AVIC
is disabled or unsupported.
No functional change intended.
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/svm/svm.c | 466
Drop avic_set_running() in favor of calling avic_vcpu_{load,put}()
directly, and modify the block+put path to use preempt_disable/enable()
instead of get/put_cpu(), as it doesn't actually care about the current
pCPU associated with the vCPU. Opportunistically add lockdep assertions
as being
When waking vCPUs in the posted interrupt wakeup handling, do exactly
that and no more. There is no need to kick the vCPU as the wakeup
handler just needs to get the vCPU task running, and if it's in the guest
then it's definitely running.
Signed-off-by: Sean Christopherson
Reviewed-by: Maxim
Move the fallback "wake_up" path into the posted interrupt trigger
helper now that the nested and non-nested paths are identical.
No functional change intended.
Signed-off-by: Sean Christopherson
Reviewed-by: Maxim Levitsky
---
arch/x86/kvm/vmx/vmx.c | 18 ++
1 file
Refactor the posted interrupt helper to take the desired notification
vector instead of a bool so that the callers are self-documenting.
No functional change intended.
Signed-off-by: Sean Christopherson
Reviewed-by: Maxim Levitsky
---
arch/x86/kvm/vmx/vmx.c | 8 +++-
1 file changed, 3
Drop a check that guards triggering a posted interrupt on the currently
running vCPU, and more importantly guards waking the target vCPU if
triggering a posted interrupt fails because the vCPU isn't IN_GUEST_MODE.
The "do nothing" logic when "vcpu == running_vcpu" works only because KVM
doesn't
Replace the full "kick" with just the "wake" in the fallback path when
triggering a virtual interrupt via a posted interrupt fails because the
guest is not IN_GUEST_MODE. If the guest transitions into guest mode
between the check and the kick, then it's guaranteed to see the pending
interrupt as
Now that the one and only caller of amd_iommu_update_ga() passes in
"is_run == (cpu >= 0)" in all paths, infer IRT.vAPIC.IsRun from @cpu
instead of having the caller pass redundant information.
No functional change intended.
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/svm/avic.c | 8
Don't bother updating the Physical APIC table or IRTE when loading a vCPU
that is blocking, i.e. won't be marked IsRun{ning}=1, as the pCPU is
queried if and only if IsRunning is '1'. If the vCPU was migrated, the
new pCPU will be picked up when avic_vcpu_load() is called by
Remove handling of KVM_REQ_APICV_UPDATE from svm_vcpu_unblocking(), it's
no longer needed as it was made obsolete by commit df7e4827c549 ("KVM:
SVM: call avic_vcpu_load/avic_vcpu_put when enabling/disabling AVIC").
Prior to that commit, the manual check was necessary to ensure the AVIC
stuff was
Use kvm_vcpu_is_blocking() to determine whether or not the vCPU should be
marked running during avic_vcpu_load(). Drop avic_is_running, which
really should have been named "vcpu_is_not_blocking", as it tracked if
the vCPU was blocking, not if it was actually running, e.g. it was set
during
Drop the avic_vcpu_is_running() check when waking vCPUs in response to a
VM-Exit due to incomplete IPI delivery. The check isn't wrong per se, but
it's not 100% accurate in the sense that it doesn't guarantee that the vCPU
was one of the vCPUs that didn't receive the IPI.
The check isn't
Signal the AVIC doorbell iff the vCPU is running in the guest. If the vCPU
is not IN_GUEST_MODE, it's guaranteed to pick up any pending IRQs on the
next VMRUN, which unconditionally processes the vIRR.
Add comments to document the logic.
Signed-off-by: Sean Christopherson
---
Drop kvm_x86_ops' pre/post_block() now that all implementations are nops.
No functional change intended.
Signed-off-by: Sean Christopherson
Reviewed-by: Maxim Levitsky
---
arch/x86/include/asm/kvm-x86-ops.h | 2 --
arch/x86/include/asm/kvm_host.h | 12
arch/x86/kvm/vmx/vmx.c
Unexport switch_to_{hv,sw}_timer() now that common x86 handles the
transitions.
No functional change intended.
Signed-off-by: Sean Christopherson
Reviewed-by: Maxim Levitsky
---
arch/x86/kvm/lapic.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/arch/x86/kvm/lapic.c
Handle the switch to/from the hypervisor/software timer when a vCPU is
blocking in common x86 instead of in VMX. Even though VMX is the only
user of a hypervisor timer, the logic and all functions involved are
generic x86 (unless future CPUs do something completely different and
implement a
Move the seemingly generic block_vcpu_list from kvm_vcpu to vcpu_vmx, and
rename the list and all associated variables to clarify that it tracks
the set of vCPUs that need to be poked on a posted interrupt to the wakeup
vector. The list is not used to track _all_ vCPUs that are blocking, and
the
Remove kvm_vcpu.pre_pcpu as it no longer has any users. No functional
change intended.
Signed-off-by: Sean Christopherson
Reviewed-by: Maxim Levitsky
---
include/linux/kvm_host.h | 1 -
virt/kvm/kvm_main.c | 1 -
2 files changed, 2 deletions(-)
diff --git a/include/linux/kvm_host.h
Move the posted interrupt pre/post_block logic into vcpu_put/load
respectively, using kvm_vcpu_is_blocking() to determine whether or
not the wakeup handler needs to be set (and unset). This avoids updating
the PI descriptor if halt-polling is successful, reduces the number of
touchpoints
Move the WARN sanity checks out of the PI descriptor update loop so as
not to spam the kernel log if the condition is violated and the update
takes multiple attempts due to another writer. This also eliminates a
few extra uops from the retry path.
Technically not checking every attempt could
Add a memory barrier between writing vcpu->requests and reading
vcpu->guest_mode to ensure the read is ordered after the write when
(potentially) delivering an IRQ to L2 via nested posted interrupt. If
the request were to be completed after reading vcpu->mode, it would be
possible for the target
From: Paolo Bonzini
avic_set_running() passes the current CPU to avic_vcpu_load(), albeit
via vcpu->cpu rather than smp_processor_id(). If the thread is migrated
while avic_set_running runs, the call to avic_vcpu_load() can use a stale
value for the processor id. Avoid this by blocking
Overhaul and cleanup APIC virtualization (Posted Interrupts on Intel VMX,
AVIC on AMD SVM) to streamline things as much as possible, remove a bunch
of cruft, and document the lurking gotchas along the way.
Patch 01 is a fix from Paolo that's already been merged but hasn't made
its way to
On 12/7/2021 6:47 AM, Jacob Pan wrote:
In-kernel DMA should be managed by the DMA mapping API. The existing kernel
PASID support is based on the SVA machinery in SVA lib that is intended
for user process SVA. The binding between a kernel PASID and kernel
mapping has many flaws. See discussions in
On Mon, Dec 6, 2021 at 7:04 AM Jason Gunthorpe wrote:
>
> On Mon, Dec 06, 2021 at 06:47:45AM -0800, Christoph Hellwig wrote:
> > On Mon, Dec 06, 2021 at 10:45:35AM -0400, Jason Gunthorpe via iommu wrote:
> > > IIRC the only thing this function does is touch ACPI and OF stuff?
> > > Isn't that
DMA mapping API is the de facto standard for in-kernel DMA. It operates
on a per-device/RID basis, which is not PASID-aware.
For some modern devices such as the Intel Data Streaming Accelerator, a
PASID is required for certain work submissions. To allow such devices to
use the DMA mapping API, we need the
In-kernel DMA is managed by the DMA mapping APIs, which support a
per-device addressing mode for legacy DMA requests. With the
introduction of the Process Address Space ID (PASID), device DMA can now
target a finer granularity: per PASID + Requester ID (RID).
However, for in-kernel DMA there is no need
Modern accelerators such as Intel's Data Streaming Accelerator (DSA) can
perform DMA requests with PASID, which is a finer granularity than the
device's requester ID (RID). In fact, work submissions on DSA shared work
queues require PASID.
DMA mapping API is the de facto standard for in-kernel
DMA mapping APIs are used indiscriminately on a device for DMA requests
both with and without PASID (legacy). Therefore, we should always match
the addressing mode of legacy DMA when enabling a kernel PASID.
This patch adds support for the VT-d driver, where the kernel PASID is
programmed to match
In-kernel DMA should be managed by the DMA mapping API. The existing kernel
PASID support is based on the SVA machinery in SVA lib that is intended
for user process SVA. The binding between a kernel PASID and kernel
mapping has many flaws. See discussions in the link below.
This patch utilizes
Cedric,
On Tue, Dec 07 2021 at 18:42, Cédric Le Goater wrote:
>
> This is breaking nvme on pseries but it's probably one of the previous
> patches. I haven't figured out what's wrong yet. Here is the oops FYI.
Hrm.
> [ 32.494562] WARNING: CPU: 26 PID: 658 at kernel/irq/chip.c:210
>
On Mon, Dec 06, 2021 at 11:39:41PM +0100, Thomas Gleixner wrote:
> Use msi_get_vector() and handle the return value to be compatible.
>
> No functional change intended.
>
> Signed-off-by: Thomas Gleixner
Acked-by: Bjorn Helgaas
> ---
> V2: Handle the INTx case directly instead of trying to
On Mon, Dec 06, 2021 at 11:39:36PM +0100, Thomas Gleixner wrote:
> Provide a domain info flag which makes the core code check for a contiguous
> MSI-X index on allocation. That's simpler than checking it at some other
> domain callback in architecture code.
>
> Signed-off-by: Thomas Gleixner
>
On Mon, Dec 06, 2021 at 11:39:26PM +0100, Thomas Gleixner wrote:
> Store the properties which are interesting for various places so the MSI
> descriptor fiddling can be removed.
>
> Signed-off-by: Thomas Gleixner
Acked-by: Bjorn Helgaas
> ---
> V2: Use the setter function
> ---
>
On Mon, Dec 06, 2021 at 11:39:23PM +0100, Thomas Gleixner wrote:
> The usage of msi_desc::pci::entry_nr is confusing at best. It's the index
> into the MSI[X] descriptor table.
>
> Use msi_desc::msi_index which is shared between all MSI incarnations
> instead of having a PCI specific storage for
On Mon, Dec 06, 2021 at 11:39:09PM +0100, Thomas Gleixner wrote:
> Set the domain info flag which makes the core code handle sysfs groups and
> put an explicit invocation into the legacy code.
>
> Signed-off-by: Thomas Gleixner
> Reviewed-by: Greg Kroah-Hartman
> Reviewed-by: Jason Gunthorpe
On Mon, Dec 06, 2021 at 11:39:00PM +0100, Thomas Gleixner wrote:
> Allocate MSI device data on first use, i.e. when a PCI driver invokes one
> of the PCI/MSI enablement functions.
>
> Signed-off-by: Thomas Gleixner
> Reviewed-by: Greg Kroah-Hartman
> Reviewed-by: Jason Gunthorpe
Acked-by:
Thomas,
On 12/6/21 23:39, Thomas Gleixner wrote:
Replace open coded MSI descriptor chasing and use the proper accessor
functions instead.
Signed-off-by: Thomas Gleixner
Reviewed-by: Greg Kroah-Hartman
Reviewed-by: Jason Gunthorpe
---
drivers/pci/msi/msi.c | 26 ++
On 2021-11-29 08:22, Yicong Yang via iommu wrote:
On 2021/11/25 23:49, Robin Murphy wrote:
On 2021-11-18 09:01, Yicong Yang via iommu wrote:
Hi Robin,
On 2021/11/16 19:37, Yicong Yang wrote:
On 2021/11/16 18:56, Robin Murphy wrote:
On 2021-11-16 09:06, Yicong Yang via iommu wrote:
[...]
On 07/12/2021 09:41, Zhen Lei via iommu wrote:
Although the parameter 'cmd' is always passed via a local array variable,
and only this function modifies it, the compiler does not know this. Every
time the 'cmd' variable is updated, a memory write operation is generated.
This generates many
On Tue, Dec 07, 2021 at 02:00:35PM +, John Garry wrote:
> On 07/12/2021 13:59, Leo Yan wrote:
> > > Whether other implementers might retroactively define "equivalent" IIDR
> > > values for their existing implementations in a way we could potentially
> > > quirk in the driver is an orthogonal
On 07/12/2021 13:59, Leo Yan wrote:
Whether other implementers might retroactively define "equivalent" IIDR
values for their existing implementations in a way we could potentially
quirk in the driver is an orthogonal question.
Agreed, it makes sense to support the standard IP modules in
the
On Tue, Dec 07, 2021 at 01:46:49PM +, Robin Murphy wrote:
[...]
> >[ 28.854767] arm-smmu-v3-pmcg arm-smmu-v3-pmcg.15.auto: iidr=0x0
> >
> > Please confirm if this is expected or not? I think this might
> > introduce difficulty for John for the PMU event alias patches, which
> > is
On 2021-12-07 13:20, Leo Yan wrote:
On Tue, Dec 07, 2021 at 12:48:13PM +, Robin Murphy wrote:
On 2021-12-07 12:28, John Garry via iommu wrote:
On 07/12/2021 12:04, Robin Murphy wrote:
So is there some userspace part to go with this now?
FWIW I've not looked into it - is it just a case
On Tue, Dec 07, 2021 at 12:48:13PM +, Robin Murphy wrote:
> On 2021-12-07 12:28, John Garry via iommu wrote:
> > On 07/12/2021 12:04, Robin Murphy wrote:
> > > > >
> > > > So is there some userspace part to go with this now?
> > >
> > > FWIW I've not looked into it - is it just a case of
On Tue, Dec 07, 2021 at 05:25:04AM -0800, Christoph Hellwig wrote:
> On Tue, Dec 07, 2021 at 09:16:27AM -0400, Jason Gunthorpe wrote:
> > Yes, the suggestion was to put everything that 'if' inside a function
> > and then of course a matching undo function.
>
> Can't we simplify things even more?
On Tue, Dec 07, 2021 at 09:16:27AM -0400, Jason Gunthorpe wrote:
> Yes, the suggestion was to put everything that 'if' inside a function
> and then of course a matching undo function.
Can't we simplify things even more? Do away with the DMA API owner
entirely, and instead in
On Tue, Dec 07, 2021 at 10:57:25AM +0800, Lu Baolu wrote:
> On 12/6/21 11:06 PM, Jason Gunthorpe wrote:
> > On Mon, Dec 06, 2021 at 06:36:27AM -0800, Christoph Hellwig wrote:
> > > I really hate the amount of boilerplate code that having this in each
> > > bus type causes.
> > +1
> >
> > I liked
On Tue, Dec 07 2021 at 13:47, Thomas Gleixner wrote:
> On Tue, Dec 07 2021 at 10:04, Cédric Le Goater wrote:
>>> +/**
>>> + * msi_device_set_properties - Set device specific MSI properties
>>> + * @dev: Pointer to the device which is queried
>>> + * @prop: Properties to set
>>> + */
>>> +void
On 2021-12-07 12:28, John Garry via iommu wrote:
On 07/12/2021 12:04, Robin Murphy wrote:
So is there some userspace part to go with this now?
FWIW I've not looked into it - is it just a case of someone knocking
out some JSON from the MMU-600/700 TRMs, or is there still more to do?
That
On Tue, Dec 07 2021 at 10:04, Cédric Le Goater wrote:
>> +/**
>> + * msi_device_set_properties - Set device specific MSI properties
>> + * @dev: Pointer to the device which is queried
>> + * @prop: Properties to set
>> + */
>> +void msi_device_set_properties(struct device *dev, unsigned long
On 2021-12-07 11:49, Christoph Hellwig wrote:
On Mon, Dec 06, 2021 at 04:33:10PM +, Robin Murphy wrote:
On 2021-11-11 06:50, Christoph Hellwig wrote:
Add two local variables to track if we want to remap the returned
address using vmap or call dma_set_uncached and use that to simplify
the
On 07/12/2021 12:04, Robin Murphy wrote:
So is there some userspace part to go with this now?
FWIW I've not looked into it - is it just a case of someone knocking out
some JSON from the MMU-600/700 TRMs, or is there still more to do?
That should just be it.
I had
the impression that
Il 07/12/21 13:10, Yong Wu ha scritto:
On Tue, 2021-12-07 at 09:56 +0100, AngeloGioacchino Del Regno wrote:
Il 07/12/21 07:24, Yong Wu ha scritto:
Hi AngeloGioacchino,
Thanks for your review.
On Mon, 2021-12-06 at 16:08 +0100, AngeloGioacchino Del Regno
wrote:
Il 03/12/21 07:40, Yong Wu ha
On Tue, 2021-12-07 at 09:56 +0100, AngeloGioacchino Del Regno wrote:
> Il 07/12/21 07:24, Yong Wu ha scritto:
> > Hi AngeloGioacchino,
> >
> > Thanks for your review.
> >
> > On Mon, 2021-12-06 at 16:08 +0100, AngeloGioacchino Del Regno
> > wrote:
> > > Il 03/12/21 07:40, Yong Wu ha scritto:
> >
On 2021-12-07 09:14, John Garry wrote:
On 17/11/2021 14:48, Jean-Philippe Brucker wrote:
From: Robin Murphy
The SMMU_PMCG_IIDR register was not present in older revisions of the
Arm SMMUv3 spec. On Arm Ltd. implementations, the IIDR value consists of
fields from several PIDR registers,
On Mon, Dec 06, 2021 at 04:33:10PM +, Robin Murphy wrote:
> On 2021-11-11 06:50, Christoph Hellwig wrote:
>> Add two local variables to track if we want to remap the returned
>> address using vmap or call dma_set_uncached and use that to simplify
>> the code flow.
>
> I still wonder about the
On Mon, Dec 06, 2021 at 04:32:58PM +, Robin Murphy wrote:
> On 2021-11-11 06:50, Christoph Hellwig wrote:
>> We must never let unencrypted memory go back into the general page pool.
>> So if we fail to set it back to encrypted when freeing DMA memory, leak
>> the memory instead and warn the user.
>
On 2021-12-07 11:17, John Garry wrote:
It really is a property of the IOVA rcache code that we need to alloc a
power-of-2 size, so relocate the functionality to resize into
alloc_iova_fast(), rather than the callsites.
I'd still much prefer to resolve the issue that there shouldn't *be*
more
From: Yunfei Wang
In the __arm_v7s_alloc_table function:
the iommu calls kmem_cache_alloc to allocate the page table. This
allocation may fail; when kmem_cache_alloc fails to allocate the
table, calling virt_to_phys will misbehave and return an unexpected phys
and goto out_free, then call
On 2021-12-07 11:33, yf.w...@mediatek.com wrote:
From: Yunfei Wang
In the __arm_v7s_alloc_table function:
the iommu calls kmem_cache_alloc to allocate the page table. This
allocation may fail; when kmem_cache_alloc fails to allocate the
table, calling virt_to_phys will misbehave and return
It really is a property of the IOVA rcache code that we need to alloc a
power-of-2 size, so relocate the functionality to resize into
alloc_iova_fast(), rather than the callsites.
Signed-off-by: John Garry
Acked-by: Will Deacon
Reviewed-by: Xie Yongji
Acked-by: Jason Wang
Acked-by: Michael S.
Hi Borislav:
Thanks for your review.
On 12/7/2021 5:47 PM, Borislav Petkov wrote:
On Tue, Dec 07, 2021 at 02:55:58AM -0500, Tianyu Lan wrote:
From: Tianyu Lan
Hyper-V provides Isolation VM which has memory encrypt support. Add
hyperv_cc_platform_has() and return true for check of
Hi Zhangfei,
On 12/7/21 11:35 AM, Zhangfei Gao wrote:
>
>
> On 2021/12/7 下午6:27, Eric Auger wrote:
>> Hi Zhangfei,
>>
>> On 12/3/21 1:27 PM, Zhangfei Gao wrote:
>>> Hi, Eric
>>>
>>> On 2021/10/27 下午6:44, Eric Auger wrote:
This series brings the IOMMU part of HW nested paging support
in
On 2021/12/7 下午6:27, Eric Auger wrote:
Hi Zhangfei,
On 12/3/21 1:27 PM, Zhangfei Gao wrote:
Hi, Eric
On 2021/10/27 下午6:44, Eric Auger wrote:
This series brings the IOMMU part of HW nested paging support
in the SMMUv3.
The SMMUv3 driver is adapted to support 2 nested stages.
The IOMMU API
Hi Sumit,
On 12/3/21 2:13 PM, Sumit Gupta wrote:
> Hi Eric,
>
>> This series brings the IOMMU part of HW nested paging support
>> in the SMMUv3.
>>
>> The SMMUv3 driver is adapted to support 2 nested stages.
>>
>> The IOMMU API is extended to convey the guest stage 1
>> configuration and the hook
Hi Zhangfei,
On 12/3/21 1:27 PM, Zhangfei Gao wrote:
>
> Hi, Eric
>
> On 2021/10/27 下午6:44, Eric Auger wrote:
>> This series brings the IOMMU part of HW nested paging support
>> in the SMMUv3.
>>
>> The SMMUv3 driver is adapted to support 2 nested stages.
>>
>> The IOMMU API is extended to convey
Hi Joerg,
On 12/6/21 11:48 AM, Joerg Roedel wrote:
> On Wed, Oct 27, 2021 at 12:44:20PM +0200, Eric Auger wrote:
>> Signed-off-by: Jean-Philippe Brucker
>> Signed-off-by: Liu, Yi L
>> Signed-off-by: Ashok Raj
>> Signed-off-by: Jacob Pan
>> Signed-off-by: Eric Auger
> This Signed-off-by chain
On Tue, Dec 07, 2021 at 10:47:22AM +0800, yf.w...@mediatek.com wrote:
> From: Yunfei Wang
>
> In the __arm_v7s_alloc_table function:
> the iommu calls kmem_cache_alloc to allocate the page table. This
> allocation may fail; when kmem_cache_alloc fails to allocate the
> table, calling virt_to_phys will
On Tue, Dec 07, 2021 at 02:55:58AM -0500, Tianyu Lan wrote:
> From: Tianyu Lan
>
> Hyper-V provides Isolation VM which has memory encrypt support. Add
> hyperv_cc_platform_has() and return true for check of GUEST_MEM_ENCRYPT
> attribute.
You need to refresh on how to write commit messages -
v2 --> v3:
Discard 'register' modifier for local variable 'cmd'.
v1 --> v2:
1. Add patch 1, Properly handle the return value of arm_smmu_cmdq_build_cmd()
2. Remove arm_smmu_cmdq_copy_cmd(). In addition, when build command fails,
out_cmd is not filled.
[v2]
Although the parameter 'cmd' is always passed via a local array variable,
and only this function modifies it, the compiler does not know this. Every
time the 'cmd' variable is updated, a memory write operation is generated.
This generates many useless instruction operations.
To guide the compiler
Hello Thomas,
On 12/6/21 23:39, Thomas Gleixner wrote:
Add a properties field which allows core code to store information for easy
retrieval in order to replace MSI descriptor fiddling.
Signed-off-by: Thomas Gleixner
---
V2: Add a setter function to prepare for future changes
---
On 17/11/2021 14:48, Jean-Philippe Brucker wrote:
From: Robin Murphy
The SMMU_PMCG_IIDR register was not present in older revisions of the
Arm SMMUv3 spec. On Arm Ltd. implementations, the IIDR value consists of
fields from several PIDR registers, allowing us to present a
standardized
Il 07/12/21 07:24, Yong Wu ha scritto:
Hi AngeloGioacchino,
Thanks for your review.
On Mon, 2021-12-06 at 16:08 +0100, AngeloGioacchino Del Regno wrote:
Il 03/12/21 07:40, Yong Wu ha scritto:
sleep control means that when the larb goes to sleep, we should wait
a bit
until all the current
On 27.11.21 04:46, Yong Wu wrote:
Hi Dafna,
Sorry for reply late.
On Mon, 2021-11-22 at 12:43 +0200, Dafna Hirschfeld wrote:
From: Yong Wu
Prepare for 2 HWs that share a pgtable in different power-domains.
When there are 2 M4U HWs, there may be a problem in the flush_range in
which
we get
On Mon, Dec 06, 2021 at 11:39:25PM +0100, Thomas Gleixner wrote:
> Add a properties field which allows core code to store information for easy
> retrieval in order to replace MSI descriptor fiddling.
>
> Signed-off-by: Thomas Gleixner
Reviewed-by: Greg Kroah-Hartman