Re: [next-20190530] Boot failure on PowerPC
On Fri, May 31, 2019 at 6:52 AM Michael Ellerman wrote:
>
> Sachin Sant writes:
> > Latest next fails to boot with a kernel panic on POWER9.
> >
> > [ 33.689332] Kernel panic - not syncing: stack-protector: Kernel stack is
> > corrupted in: write_irq_affinity.isra.5+0x15c/0x160
> > [ 33.689346] CPU: 35 PID: 4907 Comm: irqbalance Not tainted
> > 5.2.0-rc2-next-20190530-autotest-autotest #1
> > [ 33.689352] Call Trace:
> > [ 33.689356] [c018d974bab0] [c0b5328c] dump_stack+0xb0/0xf4
> > (unreliable)
> > [ 33.689364] [c018d974baf0] [c0120694] panic+0x16c/0x408
> > [ 33.689370] [c018d974bb80] [c012010c]
> > __stack_chk_fail+0x2c/0x30
> > [ 33.689376] [c018d974bbe0] [c01b859c]
> > write_irq_affinity.isra.5+0x15c/0x160
> > [ 33.689383] [c018d974bd30] [c04d6f30]
> > proc_reg_write+0x90/0x110
> > [ 33.689388] [c018d974bd60] [c041453c] __vfs_write+0x3c/0x70
> > [ 33.689394] [c018d974bd80] [c0418650] vfs_write+0xd0/0x250
> > [ 33.689399] [c018d974bdd0] [c0418a2c] ksys_write+0x7c/0x130
> > [ 33.689405] [c018d974be20] [c000b688] system_call+0x5c/0x70
> >
> > Machine boots till login prompt and then panics a few seconds later.
> >
> > Last known next build was May 24th. Will attempt a few builds till May 30
> > to narrow down this problem.
>
> My CI was fine with next-20190529 (9a15d2e3fd03e3).
>
> cheers

Hi Sachin,

It looks like this patch may fix the issue:
https://lkml.org/lkml/2019/5/30/1630 , but I'm not sure.

Thanks
-- Dexuan
[Bug 203517] WARNING: inconsistent lock state. inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} usage.
https://bugzilla.kernel.org/show_bug.cgi?id=203517

--- Comment #9 from Erhard F. (erhar...@mailbox.org) ---

Has it already landed in 5.1 stable? Have not seen it yet.

-- 
You are receiving this mail because:
You are watching the assignee of the bug.
[PATCH] scsi: ibmvscsi: Don't use rc uninitialized in ibmvscsi_do_work
clang warns:

drivers/scsi/ibmvscsi/ibmvscsi.c:2126:7: warning: variable 'rc' is used
uninitialized whenever switch case is taken [-Wsometimes-uninitialized]
        case IBMVSCSI_HOST_ACTION_NONE:
             ^
drivers/scsi/ibmvscsi/ibmvscsi.c:2151:6: note: uninitialized use occurs here
        if (rc) {
            ^~

Initialize rc to zero so that the atomic_set and dev_err statement don't
trigger for the cases that just break.

Fixes: 035a3c4046b5 ("scsi: ibmvscsi: redo driver work thread to use enum action states")
Link: https://github.com/ClangBuiltLinux/linux/issues/502
Signed-off-by: Nathan Chancellor
---
 drivers/scsi/ibmvscsi/ibmvscsi.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/scsi/ibmvscsi/ibmvscsi.c b/drivers/scsi/ibmvscsi/ibmvscsi.c
index 727c31dc11a0..6714d8043e62 100644
--- a/drivers/scsi/ibmvscsi/ibmvscsi.c
+++ b/drivers/scsi/ibmvscsi/ibmvscsi.c
@@ -2118,7 +2118,7 @@ static unsigned long ibmvscsi_get_desired_dma(struct vio_dev *vdev)
 static void ibmvscsi_do_work(struct ibmvscsi_host_data *hostdata)
 {
        unsigned long flags;
-       int rc;
+       int rc = 0;
        char *action = "reset";

        spin_lock_irqsave(hostdata->host->host_lock, flags);
-- 
2.22.0.rc2
Re: [RFC] mm: Generalize notify_page_fault()
On Fri, May 31, 2019 at 02:17:43PM +0530, Anshuman Khandual wrote:
> On 05/30/2019 07:09 PM, Matthew Wilcox wrote:
> > On Thu, May 30, 2019 at 05:31:15PM +0530, Anshuman Khandual wrote:
> >> On 05/30/2019 04:36 PM, Matthew Wilcox wrote:
> >>> The two handle preemption differently. Why is x86 wrong and this one
> >>> correct?
> >>
> >> Here it expects context to be already non-preemptible where as the proposed
> >> generic function makes it non-preemptible with a preempt_[disable|enable]()
> >> pair for the required code section, irrespective of its present state. Is
> >> not this better ?
> >
> > git log -p arch/x86/mm/fault.c
> >
> > search for 'kprobes'.
> >
> > tell me what you think.
>
> Are you referring to these following commits
>
> a980c0ef9f6d ("x86/kprobes: Refactor kprobes_fault() like
> kprobe_exceptions_notify()")
> b506a9d08bae ("x86: code clarification patch to Kprobes arch code")
>
> In particular the latter one (b506a9d08bae). It explains how the invoking
> context in itself should be non-preemptible for the kprobes processing
> context, irrespective of whether kprobe_running() or perhaps
> smp_processor_id() is safe or not. Hence it does not make much sense to
> continue when the original invoking context is preemptible. Instead just
> bail out earlier. This seems to make more sense than a preempt
> disable-enable pair. If there are no concerns about this change from other
> platforms, I will change the preemption behavior in the proposed generic
> function next time around.

Exactly. So, do any of the arch maintainers know of a reason they behave
differently from x86 in this regard? Or can Anshuman use the x86
implementation for all the architectures supporting kprobes?
Re: [PATCH v3 0/6] Prerequisites for NXP LS104xA SMMU enablement
On Fri, May 31, 2019 at 06:45:00PM +0100, Robin Murphy wrote:
> Bleh, I'm certainly not keen on formalising any kind of
> dma_to_phys()/dma_to_virt() interface for this. Or are you just proposing
> something like dma_unmap_sorry_sir_the_dog_ate_my_homework() for drivers
> which have 'lost' the original VA they mapped?

Yes, I guess we need that in some form. I've heard a report that the IBM
emac ethernet driver has the same issue, and on any SoC with it this
totally blows up dma-debug, as they just never properly unmap.
Re: [PATCH v3 0/6] Prerequisites for NXP LS104xA SMMU enablement
On 31/05/2019 18:08, Christoph Hellwig wrote:
> On Fri, May 31, 2019 at 06:03:30PM +0100, Robin Murphy wrote:
> >> The thing needs to be completely redone as it abuses parts of the
> >> iommu API in a completely unacceptable way.
> >
> > `git grep iommu_iova_to_phys drivers/{crypto,gpu,net}`
> >
> > :(
> >
> > I guess one alternative is for the offending drivers to maintain their own
> > lookup tables of mapped DMA addresses - I think at least some of these
> > things allow storing some kind of token in a descriptor, which even if it's
> > not big enough for a virtual address might be sufficient for an index.
>
> Well, we'll at least need DMA API wrappers that work on the dma addr
> only and hide this madness underneath. And then tell if a given device
> supports this and fail the probe otherwise.

Bleh, I'm certainly not keen on formalising any kind of
dma_to_phys()/dma_to_virt() interface for this. Or are you just proposing
something like dma_unmap_sorry_sir_the_dog_ate_my_homework() for drivers
which have 'lost' the original VA they mapped?

Robin.
RE: [PATCH v3 0/6] Prerequisites for NXP LS104xA SMMU enablement
> -----Original Message-----
> From: Andreas Färber
> Sent: Friday, May 31, 2019 8:04 PM
>
> Hello Laurentiu,
>
> On 31.05.19 18:46, Laurentiu Tudor wrote:
> >> -----Original Message-----
> >> From: Andreas Färber
> >> Sent: Friday, May 31, 2019 7:15 PM
> >>
> >> Hi Laurentiu,
> >>
> >> On 30.05.19 16:19, laurentiu.tu...@nxp.com wrote:
> >>> This patch series contains several fixes in preparation for SMMU
> >>> support on NXP LS1043A and LS1046A chips. Once these get picked up,
> >>> I'll submit the actual SMMU enablement patches consisting of the
> >>> required device tree changes.
> >>
> >> Have you thought through what will happen if this patch ordering is not
> >> preserved? In particular, a user installing a future U-Boot update with
> >> the DTB bits but booting a stable kernel without this patch series -
> >> wouldn't that regress dpaa then for our customers?
> >>
> >
> > These are fixes for issues that popped out after enabling SMMU.
> > I do not expect them to break anything.
>
> That was not my question! You're missing my point: All your patches are
> lacking a Fixes header in their commit message, for backporting them, to
> avoid _your DT patches_ breaking the driver on stable branches!

It does appear that I'm missing your point. For sure, the DT updates alone
will break the kernel without these fixes, but I'm not sure I understand
how this could happen. My plan was to share the kernel dts patches
sometime after this series makes it through.

---
Best Regards, Laurentiu
Re: [PATCH v3 0/6] Prerequisites for NXP LS104xA SMMU enablement
On Fri, May 31, 2019 at 06:03:30PM +0100, Robin Murphy wrote:
> > The thing needs to be completely redone as it abuses parts of the
> > iommu API in a completely unacceptable way.
>
> `git grep iommu_iova_to_phys drivers/{crypto,gpu,net}`
>
> :(
>
> I guess one alternative is for the offending drivers to maintain their own
> lookup tables of mapped DMA addresses - I think at least some of these
> things allow storing some kind of token in a descriptor, which even if it's
> not big enough for a virtual address might be sufficient for an index.

Well, we'll at least need DMA API wrappers that work on the dma addr
only and hide this madness underneath. And then tell if a given device
supports this and fail the probe otherwise.
Re: [PATCH v3 0/6] Prerequisites for NXP LS104xA SMMU enablement
Hello Laurentiu,

On 31.05.19 18:46, Laurentiu Tudor wrote:
>> -----Original Message-----
>> From: Andreas Färber
>> Sent: Friday, May 31, 2019 7:15 PM
>>
>> Hi Laurentiu,
>>
>> On 30.05.19 16:19, laurentiu.tu...@nxp.com wrote:
>>> This patch series contains several fixes in preparation for SMMU
>>> support on NXP LS1043A and LS1046A chips. Once these get picked up,
>>> I'll submit the actual SMMU enablement patches consisting of the
>>> required device tree changes.
>>
>> Have you thought through what will happen if this patch ordering is not
>> preserved? In particular, a user installing a future U-Boot update with
>> the DTB bits but booting a stable kernel without this patch series -
>> wouldn't that regress dpaa then for our customers?
>>
>
> These are fixes for issues that popped out after enabling SMMU.
> I do not expect them to break anything.

That was not my question! You're missing my point: All your patches are
lacking a Fixes header in their commit message, for backporting them, to
avoid _your DT patches_ breaking the driver on stable branches!

Regards,
Andreas

-- 
SUSE Linux GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah
HRB 21284 (AG Nürnberg)
Re: [PATCH v3 0/6] Prerequisites for NXP LS104xA SMMU enablement
On 31/05/2019 17:33, Christoph Hellwig wrote:
> On Thu, May 30, 2019 at 03:08:44PM -0700, David Miller wrote:
> > From: laurentiu.tu...@nxp.com
> > Date: Thu, 30 May 2019 17:19:45 +0300
> >
> > > Depends on this pull request:
> > >
> > > http://lists.infradead.org/pipermail/linux-arm-kernel/2019-May/653554.html
> >
> > I'm not sure how you want me to handle this.
>
> The thing needs to be completely redone as it abuses parts of the
> iommu API in a completely unacceptable way.

`git grep iommu_iova_to_phys drivers/{crypto,gpu,net}`

:(

I guess one alternative is for the offending drivers to maintain their own
lookup tables of mapped DMA addresses - I think at least some of these
things allow storing some kind of token in a descriptor, which even if it's
not big enough for a virtual address might be sufficient for an index.

Robin.
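Robin's alternative - a driver-private table keyed by a small token stored in the descriptor, so the driver never needs to translate an IOVA back - could look roughly like this. This is an illustrative userspace sketch under assumed names (`map_record`, `map_lookup_and_release`, a fixed-size slot table); it is not code from dpaa or any other driver:

```c
#include <stdint.h>
#include <stddef.h>

typedef uint64_t dma_addr_t;

#define MAP_SLOTS 64

/* One slot per in-flight mapping: the driver remembers the VA (and DMA
 * address) at map time, and the hardware descriptor only needs to carry
 * the small slot index as its token. */
struct dma_map_slot {
	void *vaddr;
	dma_addr_t dma;
	int in_use;
};

static struct dma_map_slot slots[MAP_SLOTS];

/* Record a mapping; returns the token to stash in the descriptor,
 * or -1 if the table is full. */
static int map_record(void *vaddr, dma_addr_t dma)
{
	for (int i = 0; i < MAP_SLOTS; i++) {
		if (!slots[i].in_use) {
			slots[i] = (struct dma_map_slot){ vaddr, dma, 1 };
			return i;
		}
	}
	return -1;
}

/* On completion: recover the VA from the token instead of asking the
 * IOMMU layer to translate the IOVA back; frees the slot for reuse. */
static void *map_lookup_and_release(int token)
{
	void *vaddr;

	if (token < 0 || token >= MAP_SLOTS || !slots[token].in_use)
		return NULL;
	vaddr = slots[token].vaddr;
	slots[token].in_use = 0;
	return vaddr;
}
```

With something like this, the completion path gets both the VA for processing and the DMA address for a proper dma_unmap call, so dma-debug stays consistent and no iommu_iova_to_phys() reverse translation is needed.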
RE: [PATCH v3 5/6] dpaa_eth: fix iova handling for contiguous frames
> -----Original Message-----
> From: Christoph Hellwig
> Sent: Friday, May 31, 2019 7:56 PM
>
> On Fri, May 31, 2019 at 04:53:16PM +0000, Laurentiu Tudor wrote:
> > Unfortunately due to our hardware particularities we do not have
> > alternatives. This is also the case for our next generation of ethernet
> > drivers [1]. I'll let my colleagues that work on the ethernet drivers
> > comment more on this.
>
> Then you need to enhance the DMA API to support your use case instead
> of using an API only supported for two specific IOMMU implementations.
>
> Remember in Linux you should improve core code and not hack around
> it in crappy ways making lots of assumptions in your drivers.

Alright, I'm ok with that. I'll try to come up with something, will keep
you in the loop.

---
Best Regards, Laurentiu
Re: [PATCH v3 5/6] dpaa_eth: fix iova handling for contiguous frames
On Fri, May 31, 2019 at 04:53:16PM +0000, Laurentiu Tudor wrote:
> Unfortunately due to our hardware particularities we do not have
> alternatives. This is also the case for our next generation of ethernet
> drivers [1]. I'll let my colleagues that work on the ethernet drivers
> comment more on this.

Then you need to enhance the DMA API to support your use case instead
of using an API only supported for two specific IOMMU implementations.

Remember in Linux you should improve core code and not hack around
it in crappy ways making lots of assumptions in your drivers.
RE: [PATCH v3 5/6] dpaa_eth: fix iova handling for contiguous frames
Hi Christoph,

> -----Original Message-----
> From: Christoph Hellwig
> Sent: Friday, May 31, 2019 7:32 PM
>
> On Thu, May 30, 2019 at 05:19:50PM +0300, laurentiu.tu...@nxp.com wrote:
> > +static phys_addr_t dpaa_iova_to_phys(const struct dpaa_priv *priv,
> > +                                    dma_addr_t addr)
> > +{
> > +       return priv->domain ? iommu_iova_to_phys(priv->domain, addr) : addr;
> > +}
>
> Again, a driver using the DMA API must not call iommu_* APIs.
>
> This change is not acceptable.

Unfortunately due to our hardware particularities we do not have
alternatives. This is also the case for our next generation of ethernet
drivers [1]. I'll let my colleagues that work on the ethernet drivers
comment more on this.

[1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c#n37

---
Best Regards, Laurentiu
RE: [PATCH v3 0/6] Prerequisites for NXP LS104xA SMMU enablement
Hello Andreas,

> -----Original Message-----
> From: Andreas Färber
> Sent: Friday, May 31, 2019 7:15 PM
>
> Hi Laurentiu,
>
> On 30.05.19 16:19, laurentiu.tu...@nxp.com wrote:
> > This patch series contains several fixes in preparation for SMMU
> > support on NXP LS1043A and LS1046A chips. Once these get picked up,
> > I'll submit the actual SMMU enablement patches consisting of the
> > required device tree changes.
>
> Have you thought through what will happen if this patch ordering is not
> preserved? In particular, a user installing a future U-Boot update with
> the DTB bits but booting a stable kernel without this patch series -
> wouldn't that regress dpaa then for our customers?
>

These are fixes for issues that popped out after enabling SMMU.
I do not expect them to break anything.

---
Best Regards, Laurentiu
Re: [PATCH v3 0/6] Prerequisites for NXP LS104xA SMMU enablement
On Thu, May 30, 2019 at 03:08:44PM -0700, David Miller wrote:
> From: laurentiu.tu...@nxp.com
> Date: Thu, 30 May 2019 17:19:45 +0300
>
> > Depends on this pull request:
> >
> > http://lists.infradead.org/pipermail/linux-arm-kernel/2019-May/653554.html
>
> I'm not sure how you want me to handle this.

The thing needs to be completely redone as it abuses parts of the
iommu API in a completely unacceptable way.
Re: [PATCH v3 5/6] dpaa_eth: fix iova handling for contiguous frames
On Thu, May 30, 2019 at 05:19:50PM +0300, laurentiu.tu...@nxp.com wrote:
> +static phys_addr_t dpaa_iova_to_phys(const struct dpaa_priv *priv,
> +                                    dma_addr_t addr)
> +{
> +       return priv->domain ? iommu_iova_to_phys(priv->domain, addr) : addr;
> +}

Again, a driver using the DMA API must not call iommu_* APIs.

This change is not acceptable.
Re: [PATCH v3 0/6] Prerequisites for NXP LS104xA SMMU enablement
Hi Laurentiu,

On 30.05.19 16:19, laurentiu.tu...@nxp.com wrote:
> This patch series contains several fixes in preparation for SMMU
> support on NXP LS1043A and LS1046A chips. Once these get picked up,
> I'll submit the actual SMMU enablement patches consisting of the
> required device tree changes.

Have you thought through what will happen if this patch ordering is not
preserved? In particular, a user installing a future U-Boot update with
the DTB bits but booting a stable kernel without this patch series -
wouldn't that regress dpaa then for our customers?

Regards,
Andreas

-- 
SUSE Linux GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah
HRB 21284 (AG Nürnberg)
Re: [PATCH] Documentation/stackprotector: powerpc supports stack protector
Jonathan Corbet writes:
> On Thu, 30 May 2019 18:37:46 +0530
> Bhupesh Sharma wrote:
>
>> > This should probably go via the documentation tree?
>> >
>> > Acked-by: Michael Ellerman
>>
>> Thanks for the review Michael.
>> I am ok with this going through the documentation tree as well.
>
> Works for me too, but I don't seem to find the actual patch anywhere I
> look. Can you send me a copy?

You can get it from lore:

https://lore.kernel.org/linuxppc-dev/1559212177-7072-1-git-send-email-bhsha...@redhat.com/raw

Or patchwork (automatically adds my ack):

https://patchwork.ozlabs.org/patch/1107706/mbox/

Or Bhupesh can send it to you :)

cheers
Re: [next-20190530] Boot failure on PowerPC
Sachin Sant writes:
> Latest next fails to boot with a kernel panic on POWER9.
>
> [ 33.689332] Kernel panic - not syncing: stack-protector: Kernel stack is
> corrupted in: write_irq_affinity.isra.5+0x15c/0x160
> [ 33.689346] CPU: 35 PID: 4907 Comm: irqbalance Not tainted
> 5.2.0-rc2-next-20190530-autotest-autotest #1
> [ 33.689352] Call Trace:
> [ 33.689356] [c018d974bab0] [c0b5328c] dump_stack+0xb0/0xf4
> (unreliable)
> [ 33.689364] [c018d974baf0] [c0120694] panic+0x16c/0x408
> [ 33.689370] [c018d974bb80] [c012010c]
> __stack_chk_fail+0x2c/0x30
> [ 33.689376] [c018d974bbe0] [c01b859c]
> write_irq_affinity.isra.5+0x15c/0x160
> [ 33.689383] [c018d974bd30] [c04d6f30] proc_reg_write+0x90/0x110
> [ 33.689388] [c018d974bd60] [c041453c] __vfs_write+0x3c/0x70
> [ 33.689394] [c018d974bd80] [c0418650] vfs_write+0xd0/0x250
> [ 33.689399] [c018d974bdd0] [c0418a2c] ksys_write+0x7c/0x130
> [ 33.689405] [c018d974be20] [c000b688] system_call+0x5c/0x70
>
> Machine boots till login prompt and then panics a few seconds later.
>
> Last known next build was May 24th. Will attempt a few builds till May 30
> to narrow down this problem.

My CI was fine with next-20190529 (9a15d2e3fd03e3).

cheers
RE: [PATCH v3 0/6] Prerequisites for NXP LS104xA SMMU enablement
Hello,

> -----Original Message-----
> From: David Miller
> Sent: Friday, May 31, 2019 1:09 AM
>
> From: laurentiu.tu...@nxp.com
> Date: Thu, 30 May 2019 17:19:45 +0300
>
> > Depends on this pull request:
> >
> > http://lists.infradead.org/pipermail/linux-arm-kernel/2019-May/653554.html
>
> I'm not sure how you want me to handle this.

Dave, would it make sense / be possible to also pick Leo's PR through
your tree?

---
Thanks & Best Regards, Laurentiu
[next-20190530] Boot failure on PowerPC
Latest next fails to boot with a kernel panic on POWER9.

[ 33.689332] Kernel panic - not syncing: stack-protector: Kernel stack is corrupted in: write_irq_affinity.isra.5+0x15c/0x160
[ 33.689346] CPU: 35 PID: 4907 Comm: irqbalance Not tainted 5.2.0-rc2-next-20190530-autotest-autotest #1
[ 33.689352] Call Trace:
[ 33.689356] [c018d974bab0] [c0b5328c] dump_stack+0xb0/0xf4 (unreliable)
[ 33.689364] [c018d974baf0] [c0120694] panic+0x16c/0x408
[ 33.689370] [c018d974bb80] [c012010c] __stack_chk_fail+0x2c/0x30
[ 33.689376] [c018d974bbe0] [c01b859c] write_irq_affinity.isra.5+0x15c/0x160
[ 33.689383] [c018d974bd30] [c04d6f30] proc_reg_write+0x90/0x110
[ 33.689388] [c018d974bd60] [c041453c] __vfs_write+0x3c/0x70
[ 33.689394] [c018d974bd80] [c0418650] vfs_write+0xd0/0x250
[ 33.689399] [c018d974bdd0] [c0418a2c] ksys_write+0x7c/0x130
[ 33.689405] [c018d974be20] [c000b688] system_call+0x5c/0x70

Machine boots till login prompt and then panics a few seconds later.

Last known next build was May 24th. Will attempt a few builds till May 30 to narrow down this problem.

Thanks
-Sachin
Re: [PATCH v8 1/7] iommu: enhance IOMMU default DMA mode build options
>>> -config IOMMU_DEFAULT_PASSTHROUGH
>>> -	bool "IOMMU passthrough by default"
>>> +choice
>>> +	prompt "IOMMU default DMA mode"
>>> 	depends on IOMMU_API
>>> -	help
>>> -	  Enable passthrough by default, removing the need to pass in
>>> -	  iommu.passthrough=on or iommu=pt through command line. If this
>>> -	  is enabled, you can still disable with iommu.passthrough=off
>>> -	  or iommu=nopt depending on the architecture.
>>> +	default IOMMU_DEFAULT_STRICT
>>> +	help
>>> +	  This option allows IOMMU DMA mode to be chose at build time, to
>>
>> As before:
>> /s/chose/chosen/, /s/allows IOMMU/allows an IOMMU/
>
> I'm sorry that the previous version was not modified.
>
>>> +	  override the default DMA mode of each ARCHs, removing the need to
>>
>> Again, as before:
>> ARCHs should be singular
>
> OK
>
>>> +	  pass in kernel parameters through command line. You can still use
>>> +	  ARCHs specific boot options to override this option again.

*

>>> +
>>> +config IOMMU_DEFAULT_PASSTHROUGH
>>> +	bool "passthrough"
>>> +	help
>>> +	  In this mode, the DMA access through IOMMU without any addresses
>>> +	  translation. That means, the wrong or illegal DMA access can not
>>> +	  be caught, no error information will be reported.
>>>
>>> 	  If unsure, say N here.
>>>
>>> +config IOMMU_DEFAULT_LAZY
>>> +	bool "lazy"
>>> +	help
>>> +	  Support lazy mode, where for every IOMMU DMA unmap operation, the
>>> +	  flush operation of IOTLB and the free operation of IOVA are deferred.
>>> +	  They are only guaranteed to be done before the related IOVA will be
>>> +	  reused.
>>
>> why no advisory on how to set if unsure?
>
> Because the LAZY and STRICT have their own advantages and disadvantages.
> Should I say: If unsure, keep the default.

Maybe. So you could put this in the help for the choice, * above, and
remove the advisory on IOMMU_DEFAULT_PASSTHROUGH. However the maintainer
may have a different view.

Thanks,
John

>>> +
>>> +config IOMMU_DEFAULT_STRICT
>>> +	bool "strict"
>>> +	help
>>> +	  For every IOMMU DMA unmap operation, the flush operation of IOTLB and
>>> +	  the free operation of IOVA are guaranteed to be done in the unmap
>>> +	  function.
>>> +
>>> +	  This mode is safer than the two above, but it maybe slower in some
>>> +	  high performace scenarios.
>>
>> and here?
Re: [PATCH v8 1/7] iommu: enhance IOMMU default DMA mode build options
On 2019/5/30 20:20, John Garry wrote:
> On 30/05/2019 04:48, Zhen Lei wrote:
>> First, add build option IOMMU_DEFAULT_{LAZY|STRICT}, so that we have the
>> opportunity to set {lazy|strict} mode as default at build time. Then put
>> the three config options in a choice, so that people can only choose one
>> of the three at a time.
>>
>
> Since this was not picked up, but modulo (sometimes same) comments below:
>
> Reviewed-by: John Garry
>
>> Signed-off-by: Zhen Lei
>> ---
>>  drivers/iommu/Kconfig | 42 +++---
>>  drivers/iommu/iommu.c |  3 ++-
>>  2 files changed, 37 insertions(+), 8 deletions(-)
>>
>> diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
>> index 83664db5221df02..d6a1a45f80ffbf5 100644
>> --- a/drivers/iommu/Kconfig
>> +++ b/drivers/iommu/Kconfig
>> @@ -75,17 +75,45 @@ config IOMMU_DEBUGFS
>>  	  debug/iommu directory, and then populate a subdirectory with
>>  	  entries as required.
>>
>> -config IOMMU_DEFAULT_PASSTHROUGH
>> -	bool "IOMMU passthrough by default"
>> +choice
>> +	prompt "IOMMU default DMA mode"
>>  	depends on IOMMU_API
>> -	help
>> -	  Enable passthrough by default, removing the need to pass in
>> -	  iommu.passthrough=on or iommu=pt through command line. If this
>> -	  is enabled, you can still disable with iommu.passthrough=off
>> -	  or iommu=nopt depending on the architecture.
>> +	default IOMMU_DEFAULT_STRICT
>> +	help
>> +	  This option allows IOMMU DMA mode to be chose at build time, to
>
> As before:
> /s/chose/chosen/, /s/allows IOMMU/allows an IOMMU/

I'm sorry that the previous version was not modified.

>
>> +	  override the default DMA mode of each ARCHs, removing the need to
>
> Again, as before:
> ARCHs should be singular

OK

>
>> +	  pass in kernel parameters through command line. You can still use
>> +	  ARCHs specific boot options to override this option again.
>> +
>> +config IOMMU_DEFAULT_PASSTHROUGH
>> +	bool "passthrough"
>> +	help
>> +	  In this mode, the DMA access through IOMMU without any addresses
>> +	  translation. That means, the wrong or illegal DMA access can not
>> +	  be caught, no error information will be reported.
>>
>>  	  If unsure, say N here.
>>
>> +config IOMMU_DEFAULT_LAZY
>> +	bool "lazy"
>> +	help
>> +	  Support lazy mode, where for every IOMMU DMA unmap operation, the
>> +	  flush operation of IOTLB and the free operation of IOVA are deferred.
>> +	  They are only guaranteed to be done before the related IOVA will be
>> +	  reused.
>
> why no advisory on how to set if unsure?

Because the LAZY and STRICT have their own advantages and disadvantages.
Should I say: If unsure, keep the default.

>
>> +
>> +config IOMMU_DEFAULT_STRICT
>> +	bool "strict"
>> +	help
>> +	  For every IOMMU DMA unmap operation, the flush operation of IOTLB and
>> +	  the free operation of IOVA are guaranteed to be done in the unmap
>> +	  function.
>> +
>> +	  This mode is safer than the two above, but it maybe slower in some
>> +	  high performace scenarios.
>
> and here?
>
>> +
>> +endchoice
>> +
>>  config OF_IOMMU
>>  	def_bool y
>>  	depends on OF && IOMMU_API
>> diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
>> index 67ee6623f9b2a4d..56bce221285b15f 100644
>> --- a/drivers/iommu/iommu.c
>> +++ b/drivers/iommu/iommu.c
>> @@ -43,7 +43,8 @@
>>  #else
>>  static unsigned int iommu_def_domain_type = IOMMU_DOMAIN_DMA;
>>  #endif
>> -static bool iommu_dma_strict __read_mostly = true;
>> +static bool iommu_dma_strict __read_mostly =
>> +	IS_ENABLED(CONFIG_IOMMU_DEFAULT_STRICT);
>>
>>  struct iommu_group {
>>  	struct kobject kobj;
>>
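Folding John's review comments into the text (chose → chosen, allows an IOMMU, ARCH singular, advisory moved up to the choice itself), the help could read as follows. This is a sketch of the wording only, not the merged patch:

```kconfig
choice
	prompt "IOMMU default DMA mode"
	depends on IOMMU_API
	default IOMMU_DEFAULT_STRICT
	help
	  This option allows an IOMMU DMA mode to be chosen at build time, to
	  override the default DMA mode of each ARCH, removing the need to
	  pass in kernel parameters through the command line. You can still
	  use ARCH-specific boot options to override this option again.

	  If unsure, keep the default.
```

Keeping the "if unsure" advisory on the choice rather than on one of its entries avoids implying that passthrough is the safe answer.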
Re: [RFC] mm: Generalize notify_page_fault()
On 05/30/2019 07:09 PM, Matthew Wilcox wrote:
> On Thu, May 30, 2019 at 05:31:15PM +0530, Anshuman Khandual wrote:
>> On 05/30/2019 04:36 PM, Matthew Wilcox wrote:
>>> The two handle preemption differently. Why is x86 wrong and this one
>>> correct?
>>
>> Here it expects context to be already non-preemptible where as the proposed
>> generic function makes it non-preemptible with a preempt_[disable|enable]()
>> pair for the required code section, irrespective of its present state. Is
>> not this better ?
>
> git log -p arch/x86/mm/fault.c
>
> search for 'kprobes'.
>
> tell me what you think.
>

Are you referring to these following commits

a980c0ef9f6d ("x86/kprobes: Refactor kprobes_fault() like
kprobe_exceptions_notify()")
b506a9d08bae ("x86: code clarification patch to Kprobes arch code")

In particular the latter one (b506a9d08bae). It explains how the invoking
context in itself should be non-preemptible for the kprobes processing
context, irrespective of whether kprobe_running() or perhaps
smp_processor_id() is safe or not. Hence it does not make much sense to
continue when the original invoking context is preemptible. Instead just
bail out earlier. This seems to make more sense than a preempt
disable-enable pair. If there are no concerns about this change from other
platforms, I will change the preemption behavior in the proposed generic
function next time around.
Re: [PATCH BACKPORT 4.19, 5.0, 5.1] crypto: vmx - ghash: do nosimd fallback manually
Daniel Axtens wrote:

Hi

I think you have to mention the upstream commit Id when submitting a patch
to stable, see
https://elixir.bootlin.com/linux/v5.2-rc1/source/Documentation/process/stable-kernel-rules.rst

Christophe

VMX ghash was using a fallback that did not support interleaving simd and
nosimd operations, leading to failures in the extended test suite.

If I understood correctly, Eric's suggestion was to use the same data
format that the generic code uses, allowing us to call into it with the
same contexts. I wasn't able to get that to work - I think there's a very
different key structure and data layout being used.

So instead steal the arm64 approach and perform the fallback operations
directly if required.

Fixes: cc333cd68dfa ("crypto: vmx - Adding GHASH routines for VMX module")
Cc: sta...@vger.kernel.org # v4.1+
Reported-by: Eric Biggers
Signed-off-by: Daniel Axtens
Acked-by: Ard Biesheuvel
Tested-by: Michael Ellerman
Signed-off-by: Herbert Xu
(backported from commit 357d065a44cdd77ed5ff35155a989f2a763e96ef)
Signed-off-by: Daniel Axtens
---
 drivers/crypto/vmx/ghash.c | 212 +++--
 1 file changed, 86 insertions(+), 126 deletions(-)

diff --git a/drivers/crypto/vmx/ghash.c b/drivers/crypto/vmx/ghash.c
index dd8b8716467a..2d1a8cd35509 100644
--- a/drivers/crypto/vmx/ghash.c
+++ b/drivers/crypto/vmx/ghash.c
@@ -1,22 +1,14 @@
+// SPDX-License-Identifier: GPL-2.0
 /**
  * GHASH routines supporting VMX instructions on the Power 8
  *
- * Copyright (C) 2015 International Business Machines Inc.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; version 2 only.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ * Copyright (C) 2015, 2019 International Business Machines Inc.
  *
  * Author: Marcelo Henrique Cerri
+ *
+ * Extended by Daniel Axtens to replace the fallback
+ * mechanism. The new approach is based on arm64 code, which is:
+ * Copyright (C) 2014 - 2018 Linaro Ltd.
  */

 #include

@@ -39,71 +31,25 @@ void gcm_ghash_p8(u64 Xi[2], const u128 htable[16],
 		  const u8 *in, size_t len);

 struct p8_ghash_ctx {
+	/* key used by vector asm */
 	u128 htable[16];
-	struct crypto_shash *fallback;
+	/* key used by software fallback */
+	be128 key;
 };

 struct p8_ghash_desc_ctx {
 	u64 shash[2];
 	u8 buffer[GHASH_DIGEST_SIZE];
 	int bytes;
-	struct shash_desc fallback_desc;
 };

-static int p8_ghash_init_tfm(struct crypto_tfm *tfm)
-{
-	const char *alg = "ghash-generic";
-	struct crypto_shash *fallback;
-	struct crypto_shash *shash_tfm = __crypto_shash_cast(tfm);
-	struct p8_ghash_ctx *ctx = crypto_tfm_ctx(tfm);
-
-	fallback = crypto_alloc_shash(alg, 0, CRYPTO_ALG_NEED_FALLBACK);
-	if (IS_ERR(fallback)) {
-		printk(KERN_ERR
-		       "Failed to allocate transformation for '%s': %ld\n",
-		       alg, PTR_ERR(fallback));
-		return PTR_ERR(fallback);
-	}
-
-	crypto_shash_set_flags(fallback,
-			       crypto_shash_get_flags((struct crypto_shash
-						       *) tfm));
-
-	/* Check if the descsize defined in the algorithm is still enough. */
-	if (shash_tfm->descsize < sizeof(struct p8_ghash_desc_ctx)
-	    + crypto_shash_descsize(fallback)) {
-		printk(KERN_ERR
-		       "Desc size of the fallback implementation (%s) does not match the expected value: %lu vs %u\n",
-		       alg,
-		       shash_tfm->descsize - sizeof(struct p8_ghash_desc_ctx),
-		       crypto_shash_descsize(fallback));
-		return -EINVAL;
-	}
-	ctx->fallback = fallback;
-
-	return 0;
-}
-
-static void p8_ghash_exit_tfm(struct crypto_tfm *tfm)
-{
-	struct p8_ghash_ctx *ctx = crypto_tfm_ctx(tfm);
-
-	if (ctx->fallback) {
-		crypto_free_shash(ctx->fallback);
-		ctx->fallback = NULL;
-	}
-}
-
 static int p8_ghash_init(struct shash_desc *desc)
 {
-	struct p8_ghash_ctx *ctx = crypto_tfm_ctx(crypto_shash_tfm(desc->tfm));
 	struct p8_ghash_desc_ctx *dctx = shash_desc_ctx(desc);

 	dctx->bytes = 0;
 	memset(dctx->shash, 0, GHASH_DIGEST_SIZE);
-	dctx->fallback_desc.tfm = ctx->fallback;
-	dctx->fallback_desc.flags = desc->flags;
-	return
[PATCH BACKPORT 4.4] crypto: vmx - ghash: do nosimd fallback manually
VMX ghash was using a fallback that did not support interleaving simd
and nosimd operations, leading to failures in the extended test suite.

If I understood correctly, Eric's suggestion was to use the same
data format that the generic code uses, allowing us to call into it
with the same contexts. I wasn't able to get that to work - I think
there's a very different key structure and data layout being used.

So instead steal the arm64 approach and perform the fallback
operations directly if required.

Fixes: cc333cd68dfa ("crypto: vmx - Adding GHASH routines for VMX module")
Cc: sta...@vger.kernel.org # v4.1+
Reported-by: Eric Biggers
Signed-off-by: Daniel Axtens
Acked-by: Ard Biesheuvel
Tested-by: Michael Ellerman
Signed-off-by: Herbert Xu
(backported from commit 357d065a44cdd77ed5ff35155a989f2a763e96ef)
Signed-off-by: Daniel Axtens
---
 drivers/crypto/vmx/ghash.c | 218 +++--
 1 file changed, 89 insertions(+), 129 deletions(-)

diff --git a/drivers/crypto/vmx/ghash.c b/drivers/crypto/vmx/ghash.c
index 84b9389bf1ed..d6b68cf7bba7 100644
--- a/drivers/crypto/vmx/ghash.c
+++ b/drivers/crypto/vmx/ghash.c
@@ -1,22 +1,14 @@
+// SPDX-License-Identifier: GPL-2.0
 /**
  * GHASH routines supporting VMX instructions on the Power 8
  *
- * Copyright (C) 2015 International Business Machines Inc.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; version 2 only.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ * Copyright (C) 2015, 2019 International Business Machines Inc.
  *
  * Author: Marcelo Henrique Cerri
+ *
+ * Extended by Daniel Axtens to replace the fallback
+ * mechanism. The new approach is based on arm64 code, which is:
+ * Copyright (C) 2014 - 2018 Linaro Ltd.
  */
 
 #include
@@ -39,71 +31,25 @@ void gcm_ghash_p8(u64 Xi[2], const u128 htable[16], const u8 *in, size_t len);
 
 struct p8_ghash_ctx {
+	/* key used by vector asm */
 	u128 htable[16];
-	struct crypto_shash *fallback;
+	/* key used by software fallback */
+	be128 key;
 };
 
 struct p8_ghash_desc_ctx {
 	u64 shash[2];
 	u8 buffer[GHASH_DIGEST_SIZE];
 	int bytes;
-	struct shash_desc fallback_desc;
 };
 
-static int p8_ghash_init_tfm(struct crypto_tfm *tfm)
-{
-	const char *alg = "ghash-generic";
-	struct crypto_shash *fallback;
-	struct crypto_shash *shash_tfm = __crypto_shash_cast(tfm);
-	struct p8_ghash_ctx *ctx = crypto_tfm_ctx(tfm);
-
-	fallback = crypto_alloc_shash(alg, 0, CRYPTO_ALG_NEED_FALLBACK);
-	if (IS_ERR(fallback)) {
-		printk(KERN_ERR
-		       "Failed to allocate transformation for '%s': %ld\n",
-		       alg, PTR_ERR(fallback));
-		return PTR_ERR(fallback);
-	}
-
-	crypto_shash_set_flags(fallback,
-			       crypto_shash_get_flags((struct crypto_shash
-						       *) tfm));
-
-	/* Check if the descsize defined in the algorithm is still enough.
-	 */
-	if (shash_tfm->descsize < sizeof(struct p8_ghash_desc_ctx)
-	    + crypto_shash_descsize(fallback)) {
-		printk(KERN_ERR
-		       "Desc size of the fallback implementation (%s) does not match the expected value: %lu vs %u\n",
-		       alg,
-		       shash_tfm->descsize - sizeof(struct p8_ghash_desc_ctx),
-		       crypto_shash_descsize(fallback));
-		return -EINVAL;
-	}
-	ctx->fallback = fallback;
-
-	return 0;
-}
-
-static void p8_ghash_exit_tfm(struct crypto_tfm *tfm)
-{
-	struct p8_ghash_ctx *ctx = crypto_tfm_ctx(tfm);
-
-	if (ctx->fallback) {
-		crypto_free_shash(ctx->fallback);
-		ctx->fallback = NULL;
-	}
-}
-
 static int p8_ghash_init(struct shash_desc *desc)
 {
-	struct p8_ghash_ctx *ctx = crypto_tfm_ctx(crypto_shash_tfm(desc->tfm));
 	struct p8_ghash_desc_ctx *dctx = shash_desc_ctx(desc);
 
 	dctx->bytes = 0;
 	memset(dctx->shash, 0, GHASH_DIGEST_SIZE);
-	dctx->fallback_desc.tfm = ctx->fallback;
-	dctx->fallback_desc.flags = desc->flags;
-	return crypto_shash_init(&dctx->fallback_desc);
+	return 0;
 }
 
 static int p8_ghash_setkey(struct crypto_shash *tfm, const u8 *key,
@@ -122,7 +68,53 @@ static int p8_ghash_setkey(struct crypto_shash *tfm, const u8 *key,
[PATCH BACKPORT 4.9, 4.14] crypto: vmx - ghash: do nosimd fallback manually
VMX ghash was using a fallback that did not support interleaving simd
and nosimd operations, leading to failures in the extended test suite.

If I understood correctly, Eric's suggestion was to use the same
data format that the generic code uses, allowing us to call into it
with the same contexts. I wasn't able to get that to work - I think
there's a very different key structure and data layout being used.

So instead steal the arm64 approach and perform the fallback
operations directly if required.

Fixes: cc333cd68dfa ("crypto: vmx - Adding GHASH routines for VMX module")
Cc: sta...@vger.kernel.org # v4.1+
Reported-by: Eric Biggers
Signed-off-by: Daniel Axtens
Acked-by: Ard Biesheuvel
Tested-by: Michael Ellerman
Signed-off-by: Herbert Xu
(backported from commit 357d065a44cdd77ed5ff35155a989f2a763e96ef)
Signed-off-by: Daniel Axtens
---
 drivers/crypto/vmx/ghash.c | 213 +++--
 1 file changed, 87 insertions(+), 126 deletions(-)

diff --git a/drivers/crypto/vmx/ghash.c b/drivers/crypto/vmx/ghash.c
index 1c4b5b889fba..1bfe867c0b7b 100644
--- a/drivers/crypto/vmx/ghash.c
+++ b/drivers/crypto/vmx/ghash.c
@@ -1,22 +1,14 @@
+// SPDX-License-Identifier: GPL-2.0
 /**
  * GHASH routines supporting VMX instructions on the Power 8
  *
- * Copyright (C) 2015 International Business Machines Inc.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; version 2 only.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ * Copyright (C) 2015, 2019 International Business Machines Inc.
  *
  * Author: Marcelo Henrique Cerri
+ *
+ * Extended by Daniel Axtens to replace the fallback
+ * mechanism. The new approach is based on arm64 code, which is:
+ * Copyright (C) 2014 - 2018 Linaro Ltd.
  */
 
 #include
@@ -39,71 +31,25 @@ void gcm_ghash_p8(u64 Xi[2], const u128 htable[16], const u8 *in, size_t len);
 
 struct p8_ghash_ctx {
+	/* key used by vector asm */
 	u128 htable[16];
-	struct crypto_shash *fallback;
+	/* key used by software fallback */
+	be128 key;
 };
 
 struct p8_ghash_desc_ctx {
 	u64 shash[2];
 	u8 buffer[GHASH_DIGEST_SIZE];
 	int bytes;
-	struct shash_desc fallback_desc;
 };
 
-static int p8_ghash_init_tfm(struct crypto_tfm *tfm)
-{
-	const char *alg = "ghash-generic";
-	struct crypto_shash *fallback;
-	struct crypto_shash *shash_tfm = __crypto_shash_cast(tfm);
-	struct p8_ghash_ctx *ctx = crypto_tfm_ctx(tfm);
-
-	fallback = crypto_alloc_shash(alg, 0, CRYPTO_ALG_NEED_FALLBACK);
-	if (IS_ERR(fallback)) {
-		printk(KERN_ERR
-		       "Failed to allocate transformation for '%s': %ld\n",
-		       alg, PTR_ERR(fallback));
-		return PTR_ERR(fallback);
-	}
-
-	crypto_shash_set_flags(fallback,
-			       crypto_shash_get_flags((struct crypto_shash
-						       *) tfm));
-
-	/* Check if the descsize defined in the algorithm is still enough.
-	 */
-	if (shash_tfm->descsize < sizeof(struct p8_ghash_desc_ctx)
-	    + crypto_shash_descsize(fallback)) {
-		printk(KERN_ERR
-		       "Desc size of the fallback implementation (%s) does not match the expected value: %lu vs %u\n",
-		       alg,
-		       shash_tfm->descsize - sizeof(struct p8_ghash_desc_ctx),
-		       crypto_shash_descsize(fallback));
-		return -EINVAL;
-	}
-	ctx->fallback = fallback;
-
-	return 0;
-}
-
-static void p8_ghash_exit_tfm(struct crypto_tfm *tfm)
-{
-	struct p8_ghash_ctx *ctx = crypto_tfm_ctx(tfm);
-
-	if (ctx->fallback) {
-		crypto_free_shash(ctx->fallback);
-		ctx->fallback = NULL;
-	}
-}
-
 static int p8_ghash_init(struct shash_desc *desc)
 {
-	struct p8_ghash_ctx *ctx = crypto_tfm_ctx(crypto_shash_tfm(desc->tfm));
	struct p8_ghash_desc_ctx *dctx = shash_desc_ctx(desc);
 
 	dctx->bytes = 0;
 	memset(dctx->shash, 0, GHASH_DIGEST_SIZE);
-	dctx->fallback_desc.tfm = ctx->fallback;
-	dctx->fallback_desc.flags = desc->flags;
-	return crypto_shash_init(&dctx->fallback_desc);
+	return 0;
 }
 
 static int p8_ghash_setkey(struct crypto_shash *tfm, const u8 *key,
@@ -121,7 +67,51 @@ static int p8_ghash_setkey(struct crypto_shash *tfm, const u8 *key,
[PATCH BACKPORT 4.19, 5.0, 5.1] crypto: vmx - ghash: do nosimd fallback manually
VMX ghash was using a fallback that did not support interleaving simd
and nosimd operations, leading to failures in the extended test suite.

If I understood correctly, Eric's suggestion was to use the same
data format that the generic code uses, allowing us to call into it
with the same contexts. I wasn't able to get that to work - I think
there's a very different key structure and data layout being used.

So instead steal the arm64 approach and perform the fallback
operations directly if required.

Fixes: cc333cd68dfa ("crypto: vmx - Adding GHASH routines for VMX module")
Cc: sta...@vger.kernel.org # v4.1+
Reported-by: Eric Biggers
Signed-off-by: Daniel Axtens
Acked-by: Ard Biesheuvel
Tested-by: Michael Ellerman
Signed-off-by: Herbert Xu
(backported from commit 357d065a44cdd77ed5ff35155a989f2a763e96ef)
Signed-off-by: Daniel Axtens
---
 drivers/crypto/vmx/ghash.c | 212 +++--
 1 file changed, 86 insertions(+), 126 deletions(-)

diff --git a/drivers/crypto/vmx/ghash.c b/drivers/crypto/vmx/ghash.c
index dd8b8716467a..2d1a8cd35509 100644
--- a/drivers/crypto/vmx/ghash.c
+++ b/drivers/crypto/vmx/ghash.c
@@ -1,22 +1,14 @@
+// SPDX-License-Identifier: GPL-2.0
 /**
  * GHASH routines supporting VMX instructions on the Power 8
  *
- * Copyright (C) 2015 International Business Machines Inc.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; version 2 only.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ * Copyright (C) 2015, 2019 International Business Machines Inc.
  *
  * Author: Marcelo Henrique Cerri
+ *
+ * Extended by Daniel Axtens to replace the fallback
+ * mechanism. The new approach is based on arm64 code, which is:
+ * Copyright (C) 2014 - 2018 Linaro Ltd.
  */
 
 #include
@@ -39,71 +31,25 @@ void gcm_ghash_p8(u64 Xi[2], const u128 htable[16], const u8 *in, size_t len);
 
 struct p8_ghash_ctx {
+	/* key used by vector asm */
 	u128 htable[16];
-	struct crypto_shash *fallback;
+	/* key used by software fallback */
+	be128 key;
 };
 
 struct p8_ghash_desc_ctx {
 	u64 shash[2];
 	u8 buffer[GHASH_DIGEST_SIZE];
 	int bytes;
-	struct shash_desc fallback_desc;
 };
 
-static int p8_ghash_init_tfm(struct crypto_tfm *tfm)
-{
-	const char *alg = "ghash-generic";
-	struct crypto_shash *fallback;
-	struct crypto_shash *shash_tfm = __crypto_shash_cast(tfm);
-	struct p8_ghash_ctx *ctx = crypto_tfm_ctx(tfm);
-
-	fallback = crypto_alloc_shash(alg, 0, CRYPTO_ALG_NEED_FALLBACK);
-	if (IS_ERR(fallback)) {
-		printk(KERN_ERR
-		       "Failed to allocate transformation for '%s': %ld\n",
-		       alg, PTR_ERR(fallback));
-		return PTR_ERR(fallback);
-	}
-
-	crypto_shash_set_flags(fallback,
-			       crypto_shash_get_flags((struct crypto_shash
-						       *) tfm));
-
-	/* Check if the descsize defined in the algorithm is still enough.
-	 */
-	if (shash_tfm->descsize < sizeof(struct p8_ghash_desc_ctx)
-	    + crypto_shash_descsize(fallback)) {
-		printk(KERN_ERR
-		       "Desc size of the fallback implementation (%s) does not match the expected value: %lu vs %u\n",
-		       alg,
-		       shash_tfm->descsize - sizeof(struct p8_ghash_desc_ctx),
-		       crypto_shash_descsize(fallback));
-		return -EINVAL;
-	}
-	ctx->fallback = fallback;
-
-	return 0;
-}
-
-static void p8_ghash_exit_tfm(struct crypto_tfm *tfm)
-{
-	struct p8_ghash_ctx *ctx = crypto_tfm_ctx(tfm);
-
-	if (ctx->fallback) {
-		crypto_free_shash(ctx->fallback);
-		ctx->fallback = NULL;
-	}
-}
-
 static int p8_ghash_init(struct shash_desc *desc)
 {
-	struct p8_ghash_ctx *ctx = crypto_tfm_ctx(crypto_shash_tfm(desc->tfm));
	struct p8_ghash_desc_ctx *dctx = shash_desc_ctx(desc);
 
 	dctx->bytes = 0;
 	memset(dctx->shash, 0, GHASH_DIGEST_SIZE);
-	dctx->fallback_desc.tfm = ctx->fallback;
-	dctx->fallback_desc.flags = desc->flags;
-	return crypto_shash_init(&dctx->fallback_desc);
+	return 0;
 }
 
 static int p8_ghash_setkey(struct crypto_shash *tfm, const u8 *key,
@@ -121,7 +67,51 @@ static int p8_ghash_setkey(struct crypto_shash *tfm, const u8 *key,