On Thu, Jul 15, 2021 at 10:45:44AM -0600, Logan Gunthorpe wrote:
> @@ -194,6 +194,8 @@ static int __dma_map_sg_attrs(struct device *dev, struct scatterlist *sg,
> else
> ents = ops->map_sg(dev, sg, nents, dir, attrs);
>
> + WARN_ON_ONCE(ents == 0);
Turns this into a ne
On Thu, Jul 15, 2021 at 10:45:42AM -0600, Logan Gunthorpe wrote:
> @@ -458,7 +460,7 @@ static int gart_map_sg(struct device *dev, struct scatterlist *sg, int nents,
> iommu_full(dev, pages << PAGE_SHIFT, dir);
> for_each_sg(sg, s, nents, i)
> s->dma_address = DMA_MAPPING_ERROR;
Careful here. What do all these errors from the low-level code mean
here? I think we need to clearly standardize on what we actually
return from ->map_sg and possibly document what the callers expect and
can do, and enforce that only those errors are reported.
On Thu, Jul 15, 2021 at 10:45:29AM -0600, Logan Gunthorpe wrote:
> + * dma_map_sgtable() will return the error code returned and convert
> + * a zero return (for legacy implementations) into -EINVAL.
> + *
> + * dma_map_sg() will always return zero on any negative or zero
> + * return value from the .map_sg operation.
On 7/14/21 6:09 PM, Nicholas Piggin wrote:
Excerpts from Nicholas Piggin's message of July 12, 2021 12:41 pm:
Excerpts from Athira Rajeev's message of July 10, 2021 12:50 pm:
On 22-Jun-2021, at 4:27 PM, Nicholas Piggin wrote:
KVM PMU management code looks for particular frozen/disabled bits
vcpu_put is not called if the user copy fails. This can result in preempt
notifier corruption and crashes, among other issues.
Reported-by: Alexey Kardashevskiy
Fixes: b3cebfe8c1ca ("KVM: PPC: Move vcpu_load/vcpu_put down to each ioctl case in kvm_arch_vcpu_ioctl")
Signed-off-by: Nicholas Piggin
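The shape of the fix, sketched (the ioctl body helper is hypothetical): keep the vcpu load held across the whole body so every return path, including a failed user copy, goes through vcpu_put():

        vcpu_load(vcpu);
        r = kvm_arch_vcpu_do_ioctl(vcpu, ioctl, argp); /* hypothetical body */
        vcpu_put(vcpu);                 /* now runs on error paths too */
        return r;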
When running CPU_FTR_P9_TM_HV_ASSIST, HFSCR[TM] is set for the guest
even if the host has CONFIG_TRANSACTIONAL_MEM=n, which causes it to be
unprepared to handle guest exits while transactional.
Normal guests don't have a problem because the HTM capability will not
be advertised, but a rogue or buggy guest could crash the host.
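A hedged sketch of the kind of gating that avoids this (the config symbol, field names and placement here are assumptions, not the actual fix):

        /* Don't offer TM to the guest if the host can't handle TM guest exits. */
        if (!IS_ENABLED(CONFIG_PPC_TRANSACTIONAL_MEM))
                vcpu->arch.hfscr &= ~HFSCR_TM;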
allyesconfig
powerpc allmodconfig
powerpc allnoconfig
i386 randconfig-a005-20210715
i386 randconfig-a006-20210715
i386 randconfig-a004-20210715
i386 randconfig-a001-20210715
On Tue, Jul 06, 2021 at 05:48:03PM +0200, Uwe Kleine-König wrote:
> The driver core ignores the return value of this callback because there
> is little it can do when a device disappears.
>
> This is the final bit of a long lasting cleanup quest where several
> buses were converted to also return void from their remove callback.
Pratik Sampat writes:
> Hello,
>
> On 12/07/21 9:13 pm, Fabiano Rosas wrote:
>> "Pratik R. Sampat" writes:
>>
>> Hi, have you seen Documentation/core-api/kobject.rst, particularly the
>> part that says:
>>
>> "When you see a sysfs directory full of other directories, generally each
>> of those directories corresponds to a kobject in the same kset."
With @logbuf_lock removed, the high level printk functions for
storing messages are lockless. Messages can be stored from any
context, so there is no need for the NMI and safe buffers anymore.
Remove the NMI and safe buffers.
Although the safe buffers are removed, the NMI and safe context
tracking is still in place.
All NMI contexts are handled the same as the safe context: store the
message and defer printing. There is no need to have special NMI
context tracking for this. Using in_nmi() is enough.
There are several parts of the kernel that are manually calling into
the printk NMI context tracking in order to cause general printk
deferred printing.
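A rough sketch of that simplification, with the store and console helpers as hypothetical stand-ins for the lockless ringbuffer and irq_work paths:

#include <linux/preempt.h>              /* in_nmi() */

void store_message(const char *text);   /* hypothetical lockless store */
void defer_console_printing(void);      /* hypothetical irq_work kick */
void attempt_console_printing(void);    /* hypothetical direct path */

static void emit(const char *text)
{
        /* The lockless store works from any context, including NMI. */
        store_message(text);

        if (in_nmi())
                defer_console_printing();
        else
                attempt_console_printing();
}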
Hi,
Here is v4 of a series to remove the safe buffers. v3 can be
found here [0]. The safe buffers are no longer needed because
messages can be stored directly into the log buffer from any
context.
However, the safe buffers also provided a form of recursion
protection. For that reason, explicit recursion protection is
implemented in this series.
From: Martin Oliveira
The .map_sg() op now expects an error code instead of zero on failure.
Propagate the error up if vio_dma_iommu_map_sg() fails.
ppc_iommu_map_sg() may fail either because of iommu_range_alloc() or
because of tbl->it_ops->set(). The former only supports returning an
error with DMA_MAPPING_ERROR.
From: Martin Oliveira
The .map_sg() op now expects an error code instead of zero on failure,
so propagate any errors that may happen all the way up.
Signed-off-by: Martin Oliveira
Signed-off-by: Logan Gunthorpe
Cc: Russell King
Cc: Thomas Bogendoerfer
---
arch/arm/mm/dma-mapping.c | 22
Convert to ssize_t return code so the return code from __iommu_map()
can be returned all the way down through dma_iommu_map_sg().
Signed-off-by: Logan Gunthorpe
Cc: Joerg Roedel
Cc: Will Deacon
---
drivers/iommu/iommu.c | 15 +++
include/linux/iommu.h | 22 +++---
2
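The shape of the conversion, sketched (names and details assumed, not the exact upstream diff): a ssize_t return carries either a mapped-byte count or a negative errno from __iommu_map():

static ssize_t iommu_map_sg_sketch(struct iommu_domain *domain,
                                   unsigned long iova, struct scatterlist *sg,
                                   unsigned int nents, int prot)
{
        struct scatterlist *s;
        size_t mapped = 0;
        unsigned int i;
        int ret;

        for_each_sg(sg, s, nents, i) {
                ret = __iommu_map(domain, iova + mapped, sg_phys(s),
                                  s->length, prot, GFP_KERNEL);
                if (ret) {
                        iommu_unmap(domain, iova, mapped);
                        return ret;     /* negative errno, passed up intact */
                }
                mapped += s->length;
        }
        return mapped;                  /* >= 0: total bytes mapped */
}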
From: Martin Oliveira
The .map_sg() op now expects an error code instead of zero on failure.
Propagate the return of dma_mapping_error() up, if it is an errno.
sba_coalesce_chunks() may only presently fail if sba_alloc_range()
fails, which in turn only fails if the iommu is out of mapping
resources.
From: Martin Oliveira
The .map_sg() op now expects an error code instead of zero on failure.
xen_swiotlb_map_sg() may only fail if xen_swiotlb_map_page() fails, but
xen_swiotlb_map_page() only supports returning errors as
DMA_MAPPING_ERROR. So coalesce all errors into -EINVAL.
Signed-off-by: Mar
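The coalescing pattern, sketched (a simplified take, details assumed): the page op can only express failure as DMA_MAPPING_ERROR, so the sg loop folds that single failure mode into one errno:

static int xen_swiotlb_map_sg_sketch(struct device *dev,
                                     struct scatterlist *sgl, int nents,
                                     enum dma_data_direction dir,
                                     unsigned long attrs)
{
        struct scatterlist *sg;
        int i;

        for_each_sg(sgl, sg, nents, i) {
                sg->dma_address = xen_swiotlb_map_page(dev, sg_page(sg),
                                sg->offset, sg->length, dir, attrs);
                if (sg->dma_address == DMA_MAPPING_ERROR)
                        goto out_unmap;
                sg_dma_len(sg) = sg->length;
        }
        return nents;

out_unmap:
        xen_swiotlb_unmap_sg(dev, sgl, i, dir, attrs | DMA_ATTR_SKIP_CPU_SYNC);
        return -EINVAL;         /* the only error the lower level can express */
}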
Hi,
This series is spun out and expanded from my work to add P2PDMA support
to DMA map operations[1].
The P2PDMA work requires distinguishing different error conditions in
a map_sg operation. dma_map_sgtable() already allows for returning an
error code (whereas dma_map_sg() is only allowed to return zero on failure).
From: Martin Oliveira
The .map_sg() op now expects an error code instead of zero on failure.
So make __dma_map_cont() return a valid errno (which is then propagated
to gart_map_sg() via dma_map_cont()) and return it in case of failure.
Also, return -EINVAL in case of invalid nents.
Signed-off-by: Martin Oliveira
From: Martin Oliveira
The .map_sg() op now expects an error code instead of zero on failure.
Signed-off-by: Martin Oliveira
Signed-off-by: Logan Gunthorpe
Cc: "James E.J. Bottomley"
Cc: Helge Deller
---
drivers/parisc/ccio-dma.c | 2 +-
drivers/parisc/sba_iommu.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
Now that the map_sg() op expects error codes instead of returning zero
on error, convert dma_direct_map_sg() to return an error code. The
only error to return presently is -EINVAL, if a page could not
be mapped.
Signed-off-by: Logan Gunthorpe
---
kernel/dma/direct.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
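Given the one-line diffstat, the change is plausibly just the failure return in the unmap-and-bail path; a hedged guess at the hunk:

 out_unmap:
 	dma_direct_unmap_sg(dev, sgl, i, dir, attrs | DMA_ATTR_SKIP_CPU_SYNC);
-	return 0;
+	return -EINVAL;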
From: Martin Oliveira
The .map_sg() op now expects an error code instead of zero on failure.
Returning an errno from __sbus_iommu_map_sg() results in
sbus_iommu_map_sg_gflush() and sbus_iommu_map_sg_pflush() returning an
errno, as those functions are wrappers around __sbus_iommu_map_sg().
Signed-off-by: Martin Oliveira
Allow dma_map_sgtable() to pass errors from the map_sg() ops. This
will be required for returning appropriate error codes when mapping
P2PDMA memory.
Introduce __dma_map_sg_attrs() which will return the raw error code
from the map_sg operation (whether it be negative or zero). Then add a
dma_map_sgtable() wrapper that passes negative error codes through and
converts a zero return into -EINVAL.
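A sketch of that split (wrapper and helper per the description above; bodies simplified and assumed):

static int __dma_map_sg_attrs(struct device *dev, struct scatterlist *sg,
                              int nents, enum dma_data_direction dir,
                              unsigned long attrs)
{
        const struct dma_map_ops *ops = get_dma_ops(dev);

        /* Raw result: > 0 nents, 0 legacy failure, < 0 errno. */
        if (!ops)
                return dma_direct_map_sg(dev, sg, nents, dir, attrs);
        return ops->map_sg(dev, sg, nents, dir, attrs);
}

int dma_map_sgtable(struct device *dev, struct sg_table *sgt,
                    enum dma_data_direction dir, unsigned long attrs)
{
        int nents;

        nents = __dma_map_sg_attrs(dev, sgt->sgl, sgt->orig_nents, dir, attrs);
        if (nents < 0)
                return nents;           /* pass the errno through */
        if (nents == 0)
                return -EINVAL;         /* legacy zero return */
        sgt->nents = nents;
        return 0;
}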
From: Martin Oliveira
The .map_sg() op now expects an error code instead of zero on failure.
pci_map_single_1() can fail for different reasons, but since the only
supported type of error return is DMA_MAPPING_ERROR, we coalesce those
errors into -EINVAL.
-ENOMEM is returned when no page tables can be allocated.
From: Martin Oliveira
The .map_sg() op now expects an error code instead of zero on failure.
So propagate the error from __s390_dma_map_sg() up.
Signed-off-by: Martin Oliveira
Signed-off-by: Logan Gunthorpe
Cc: Niklas Schnelle
Cc: Gerald Schaefer
Cc: Heiko Carstens
Cc: Vasily Gorbik
Cc: C
On 2021-07-15 10:53 a.m., Russell King (Oracle) wrote:
> On Thu, Jul 15, 2021 at 10:45:28AM -0600, Logan Gunthorpe wrote:
>> Hi,
>>
>> This series is spun out and expanded from my work to add P2PDMA support
>> to DMA map operations[1].
>>
>> The P2PDMA work requires distinguishing different error conditions in
>> a map_sg operation.
From: Martin Oliveira
The .map_sg() op now expects an error code instead of zero on failure.
vdma_alloc() may fail for different reasons, but since it only supports
indicating an error via a return of DMA_MAPPING_ERROR, we coalesce the
different reasons into -EINVAL.
Signed-off-by: Martin Oliveira
Pass through appropriate error codes from iommu_dma_map_sg() now that
the error code will be passed through dma_map_sgtable().
Signed-off-by: Logan Gunthorpe
Cc: Joerg Roedel
Cc: Will Deacon
---
drivers/iommu/dma-iommu.c | 20 +---
1 file changed, 13 insertions(+), 7 deletions(-)
Now that all the .map_sg operations have been converted to returning
proper error codes, drop the code that handles a zero return value,
add a warning if zero is ever returned, and update the comment for the
map_sg operation.
Signed-off-by: Logan Gunthorpe
---
include/linux/dma-map-ops.h | 8 +++-
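With every implementation converted, a zero return becomes a bug to flag rather than an error to translate; the end state is roughly:

        ents = ops->map_sg(dev, sg, nents, dir, attrs);
        /* All .map_sg ops now return nents > 0 or a negative errno;
         * zero indicates a broken implementation. */
        WARN_ON_ONCE(ents == 0);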
From: Martin Oliveira
The .map_sg() op now expects an error code instead of zero on failure.
The only errno to return is -ENODEV in the case when DMA is not
supported.
Signed-off-by: Martin Oliveira
Signed-off-by: Logan Gunthorpe
---
kernel/dma/dummy.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
On Thu, Jul 15, 2021 at 10:45:28AM -0600, Logan Gunthorpe wrote:
> Hi,
>
> This series is spun out and expanded from my work to add P2PDMA support
> to DMA map operations[1].
>
> The P2PDMA work requires distinguishing different error conditions in
> a map_sg operation. dma_map_sgtable() already allows for returning an
> error code (whereas dma_map_sg() is only allowed to return zero on failure).
On POWER9 and newer, rather than the complex HMI synchronisation and
subcore state, have each thread un-apply the guest TB offset before
calling into the early HMI handler.
This allows the subcore state to be avoided, including subcore enter
/ exit guest, which includes an expensive divide that shows up in profiles.
The P9 path uses vc->dpdes only for msgsndp / SMT emulation. This adds
an ordering requirement between vcpu->doorbell_request and vc->dpdes for
no real benefit. Use vcpu->doorbell_request directly.
XXX: verify msgsndp / DPDES emulation works properly.
Signed-off-by: Nicholas Piggin
---
arch/pow
This goes further toward removing vcores from the P9 path. Also avoid the
memset in favour of explicitly initialising all fields.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kvm/book3s_hv.c | 61 +---
1 file changed, 35 insertions(+), 26 deletions(-)
diff --git a
The P9 path always uses one vcpu per vcore, so none of the vcore,
locks, stolen time, blocking logic, shared waitq, etc., is required.
Remove most of it.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kvm/book3s_hv.c | 147 ---
1 file changed, 85 insertions(+), 62 deletions(-)
cpu_in_guest is set to determine if a CPU needs to be IPI'ed to exit
the guest and notice the need_tlb_flush bit.
This can be implemented as a global per-CPU pointer to the currently
running guest instead of per-guest cpumasks, saving 2 atomics per
entry/exit. P7/8 doesn't require cpu_in_guest, no
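A sketch of the idea (all names hypothetical): replace the per-guest cpumask, which costs atomic set/clear on every entry and exit, with plain this-CPU stores:

#include <linux/percpu.h>

static DEFINE_PER_CPU(struct kvm *, cpu_running_guest);

/* Entry/exit: plain per-CPU writes, no atomics. */
static inline void set_running_guest(struct kvm *kvm)
{
        __this_cpu_write(cpu_running_guest, kvm);
}

/* Does @cpu need an IPI to notice need_tlb_flush for @kvm? */
static inline bool cpu_in_guest_sketch(struct kvm *kvm, int cpu)
{
        return per_cpu(cpu_running_guest, cpu) == kvm;
}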
The mmu will almost always be ready.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kvm/book3s_hv.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 9d15bbafe333..27a7a856eed1 100644
--- a/arch/powerpc/kvm/book3s_hv.c
This goes on top of the previous speedup series. The previous series is
mostly involved with reducing the cost of SPR accesses. This one starts
to look beyond those, to atomics, barriers, and other logic that can be
reduced. After this series the P9 path uses very few things from the
vcore structure.
On Mon, 12 Jul 2021 11:36:50 +1000, Nicholas Piggin wrote:
> The conversion to C introduced several bugs in TM handling that can
> cause host crashes with TM bad thing interrupts. Mostly just simple
> typos or missed logic in the conversion that got through due to my
> not testing TM in the guest sufficiently.