On Tue, Mar 21, 2017 at 02:59:46AM +1000, Nicholas Piggin wrote:
> On Mon, 20 Mar 2017 21:24:18 +0530
> "Gautham R. Shenoy" wrote:
>
> > From: "Gautham R. Shenoy"
> >
> > POWER9 DD1.0 hardware has an issue due to which the SPRs of a thread
> > waking up from stop 0,1,2 with ESL=1 can end up bein
Hi,
On Tue, Mar 21, 2017 at 02:39:34AM +1000, Nicholas Piggin wrote:
> > @@ -241,8 +240,9 @@ static DEVICE_ATTR(fastsleep_workaround_applyonce, 0600,
> > * The default stop state that will be used by ppc_md.power_save
> > * function on platforms that support stop instruction.
> > */
> > -u64
Hi Nick,
On Tue, Mar 21, 2017 at 02:35:17AM +1000, Nicholas Piggin wrote:
> On Mon, 20 Mar 2017 21:24:15 +0530
> "Gautham R. Shenoy" wrote:
>
> > From: "Gautham R. Shenoy"
> >
> > Move the piece of code in powernv/smp.c::pnv_smp_cpu_kill_self() which
> > transitions the CPU to the deepest avai
On 22/03/17 10:53, Matt Brown wrote:
The HDAT data area is consumed by skiboot and turned into a device-tree.
In some cases we would like to look directly at the HDAT, so this patch
adds a sysfs node to allow it to be viewed. This is not possible through
/dev/mem as it is reserved memory which i
It does not make much sense to have KVM in book3s-64 and
not to have IOMMU bits for PCI pass through support as it costs little
and allows VFIO to function on book3s KVM.
Having IOMMU_API always enabled makes it unnecessary to have a lot of
"#ifdef IOMMU_API" in arch/powerpc/kvm/book3s_64_vio*. Wi
So far iommu_table objects were only used in virtual mode and had
a single owner. We are going to change this by implementing in-kernel
acceleration of DMA mapping requests. The proposed acceleration
will handle requests in real mode and KVM will keep references to tables.
This adds a kref to iomm
At the moment iommu_table can be disposed by either calling
iommu_table_free() directly or it_ops::free(); the only implementation
of free() is in IODA2 - pnv_ioda2_table_free() - and it calls
iommu_table_free() anyway.
As we are going to have reference counting on tables, we need a unified
way o
In real mode, TCE tables are invalidated using special
cache-inhibited store instructions which are not available in
virtual mode.
This defines and implements exchange_rm() callback. This does not
define set_rm/clear_rm/flush_rm callbacks as there is no user for those -
exchange/exchange_rm are onl
This is my current queue of patches to add acceleration of TCE
updates in KVM.
This is based on sha1 093b995e3b55 Huang Ying "mm, swap: Remove WARN_ON_ONCE()
in free_swap_slot()".
Please comment. Thanks.
Changes:
v11:
* added rb:David to 04/10
* fixed reference leak in 10/10
v10:
* fixed bug
This allows the host kernel to handle H_PUT_TCE, H_PUT_TCE_INDIRECT
and H_STUFF_TCE requests targeted at an IOMMU TCE table used for VFIO
without passing them to user space which saves time on switching
to user space and back.
This adds H_PUT_TCE/H_PUT_TCE_INDIRECT/H_STUFF_TCE handlers to KVM.
KVM tr
This reworks the helpers for checking TCE update parameters so that they
can be used in KVM.
This should cause no behavioral change.
Signed-off-by: Alexey Kardashevskiy
Reviewed-by: David Gibson
---
Changes:
v6:
* s/tce/gpa/ as TCE without permission bits is a GPA and this is what is
passed everywhe
VFIO on sPAPR already implements guest memory pre-registration
when the entire guest RAM gets pinned. This can be used to translate
the physical address of a guest page containing the TCE list
from H_PUT_TCE_INDIRECT.
This makes use of the pre-registered memory API to access TCE list
pages in ord
The guest view TCE tables are per KVM anyway (not per VCPU) so pass kvm*
there. This will be used in the following patches where we will be
attaching VFIO containers to LIOBNs via ioctl() to KVM (rather than
to VCPU).
Signed-off-by: Alexey Kardashevskiy
Reviewed-by: David Gibson
---
arch/powerp
This adds a capability number for in-kernel support for VFIO on
SPAPR platform.
The capability will tell the user space whether in-kernel handlers of
H_PUT_TCE can handle VFIO-targeted requests or not. If not, the user space
must not attempt allocating a TCE table in the host kernel via
the KVM_CR
This makes mm_iommu_lookup() able to work in realmode by replacing
list_for_each_entry_rcu() (which can do debug checks that may fail in
real mode) with list_for_each_entry_lockless().
This adds realmode version of mm_iommu_ua_to_hpa() which adds
explicit vmalloc'd-to-linear address conversion.
Un
Nvlink2 supports address translation services (ATS) allowing devices
to request address translations from an mmu known as the nest MMU
which is setup to walk the CPU page tables.
To access this functionality certain firmware calls are required to
setup and manage hardware context tables in the nvl
The pnv_pci_get_{gpu|npu}_dev functions are used to find associations
between nvlink PCIe devices and standard PCIe devices. However they
lacked basic sanity checking, which results in NULL pointer
dereferencing if they are incorrectly called; this can be harder to
spot than an explicit WARN_ON.
Si
There is of_property_read_u32_index but no u64 variant. This patch
adds one similar to the u32 version for u64.
Signed-off-by: Alistair Popple
---
drivers/of/base.c | 31 +++
include/linux/of.h | 3 +++
2 files changed, 34 insertions(+)
diff --git a/drivers/of/base
As we start supporting a larger address space (>128TB), we want to give the
architecture control over the max task size of an application, which is different
from the TASK_SIZE. For example, ppc64 needs to track the base page size of a segment
and it is copied from mm_context_t to PACA on each context switch. If w
Now that we use all the available virtual address range, we need to make sure
we don't generate a VSID that overlaps with the reserved vsid range.
The reserved vsid range includes the virtual address range used by the adjunct
partition and also the VRMA virtual segment. We find the context value t
Not all user space applications are ready to handle wide addresses. It's known that
at least some JIT compilers use higher bits in pointers to encode their
information. It collides with valid pointers with 512TB addresses and
leads to crashes.
To mitigate this, we are not going to allocate virtual
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/mm/hugetlbpage-radix.c | 4 ++--
arch/powerpc/mm/mmap.c | 12 ++--
arch/powerpc/mm/slice.c | 6 +++---
arch/powerpc/mm/subpage-prot.c | 3 ++-
4 files changed, 13 insertions(+), 12 deletions(-)
diff --git a
We optimize the slice page size array copy to the paca by copying only the
range based on the task size. This requires us to not look at the page size
array beyond the task size in the PACA on an slb fault. To enable that, copy
the task size to the paca, where it will be used during the slb fault.
Signed-off-by: Aneesh Kumar K.V
---
In the followup patch, we will increase the slice array size to handle the 512TB
range, but will limit the task size to 128TB. Avoid doing unnecessary computation
and avoid doing slice mask related operations above task_size.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/mmu-ha
We update the hash linux page table layout such that we can support 512TB, but
we limit the TASK_SIZE to 128TB. We can switch to 128TB by default
unconditionally because that is the max virtual address supported by other
architectures. We will later add a mechanism to on-demand increase the
app
With the current kernel, we use the top 4 contexts for the kernel. Kernel VSIDs
are built using these top context values and the effective segment ID. In the
following patches, we want to increase the max effective address to 512TB. We
achieve that by increasing the effective segment IDs, thereby increas
This doesn't have any functional change, but it helps in avoiding mistakes
in case the shift bit changes.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/mmu-hash.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/include/asm/book3s/64/mmu-hash.
This is now used by the linear mapped region of the kernel. User space still
should not see a VSID 0, but having that VSID check confuses the reader.
Remove it and convert the error checking to be based on the addr value.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/mmu-has
We now get output like below which is much better.
[0.935306] good_mask low_slice: 0-15
[0.935360] good_mask high_slice: 0-511
Compared to
[0.953414] good_mask: - 1.
I also fixed an error with slice_dbg printing.
Signed-off-by: Aneesh Kumar K.
In order to support a large effective address range (512TB), we want to increase
the virtual address bits to 68. But we do have platforms like p4 and p5 that can
only do 65 bit VA. We support those platforms by limiting context bits on them
to 16.
The protovsid -> vsid conversion is verified to work
This structure definition need not be in a header since it is used only by the
slice.c file, so move it to slice.c. This also allows us to use SLICE_NUM_HIGH
instead of 64.
I also switch the low_slices type to u64 from u16. This doesn't have an impact
on size of struct due to padding added with u16 t
The check against VSID range is implied when we check task size against
hash and radix pgtable range[1], because we make sure page table range cannot
exceed vsid range.
[1] BUILD_BUG_ON(TASK_SIZE_USER64 > H_PGTABLE_RANGE);
BUILD_BUG_ON(TASK_SIZE_USER64 > RADIX_PGTABLE_RANGE);
The check for smalle
We also update the function arg to struct mm_struct. Move this so that the
function finds the definition of struct mm_struct. No functional change in this patch.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/paca.h | 18 +-
arch/powerpc/kernel/paca.c | 19
This avoids copying the slice_mask struct as a function return value.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/mm/slice.c | 62 ++---
1 file changed, 28 insertions(+), 34 deletions(-)
diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/slice.c
In a followup patch we want to increase the va range, which will result
in us requiring high_slices to have more than 64 bits. To enable this,
convert high_slices to a bitmap. We keep the number of bits the same in this
patch and later change it to a higher value.
Signed-off-by: Aneesh Kumar K.V
---
arch/powe
This patch series increases the effective virtual address range of
applications from 64TB to 128TB. We do that by supporting a
68 bit virtual address. On platforms that can only do a 65 bit virtual
address we limit the max contexts to a 16 bit value instead of 19.
The patch series also switches the pag
On Tue, 21 Mar 2017 02:59:46 +1000
Nicholas Piggin wrote:
> On Mon, 20 Mar 2017 21:24:18 +0530
> This is quite neat now you've moved it to its own function. Nice.
> It will be only a trivial clash with my patches now, I think.
>
> Reviewed-by: Nicholas Piggin
Hmm... This won't actually work f
On Fri, Mar 17, 2017 at 04:09:53PM +1100, Alexey Kardashevskiy wrote:
> So far iommu_table obejcts were only used in virtual mode and had
> a single owner. We are going to change this by implementing in-kernel
> acceleration of DMA mapping requests. The proposed acceleration
> will handle requests
The HDAT data area is consumed by skiboot and turned into a device-tree.
In some cases we would like to look directly at the HDAT, so this patch
adds a sysfs node to allow it to be viewed. This is not possible through
/dev/mem as it is reserved memory which is stopped by the /dev/mem filter.
Sign
The mpc52xx_gpt code currently implements an irq_chip for handling
interrupts; due to how irq_chip handling is done, it's necessary for the
irq_chip methods to be invoked from hardirq context, even on a
real-time kernel. Because the spinlock_t type becomes a "sleeping"
spinlock w/ RT kernels, it
On Tue, 2017-03-21 at 06:29 -0700, Matthew Wilcox wrote:
>
> Well, those are the generic versions in the first patch:
>
> http://git.infradead.org/users/willy/linux-dax.git/commitdiff/538b977
> 6ac925199969bd5af4e994da776d461e7
>
> so if those are good enough for you guys, there's no need for yo
On Tue, Mar 21, 2017 at 11:37:11AM +0100, Geert Uytterhoeven wrote:
> Hi Björn,
>
> On Mon, Mar 20, 2017 at 7:42 PM, Bjorn Helgaas wrote:
> > Several arches use __ioremap() to help implement the generic ioremap(),
> > ioremap_nocache(), and ioremap_wc() interfaces, but this usage is all
> > insid
On Fri, 17 Mar 2017 16:09:59 +1100
Alexey Kardashevskiy wrote:
> This allows the host kernel to handle H_PUT_TCE, H_PUT_TCE_INDIRECT
> and H_STUFF_TCE requests targeted an IOMMU TCE table used for VFIO
> without passing them to user space which saves time on switching
> to user space and back.
>
On Tue, Mar 21, 2017 at 06:29:10AM -0700, Matthew Wilcox wrote:
> > Unrolling the loop could help a bit on old powerpc32s that don't have branch
> > units, but on those processors the main driver is the time spent to do the
> > effective write to memory, and the operations necessary to unroll the l
We don't support the full 57 bits of physical address and hence can overload
the top bits of the RPN as hash-specific pte bits.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/hash.h| 17 +
arch/powerpc/include/asm/book3s/64/pgtable.h | 17 ++---
The max value supported by hardware is a 51 bit address. The radix page table
defines a slot of 57 bits for future expansion. We restrict the value supported
in the Linux kernel to 53 bits, so that we can use the bits between 57 and 53
for storing hash Linux page table bits. This is done in the next patch.
This will f
Reviewed-by: Paul Mackerras
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/hash-64k.h | 4 ++--
arch/powerpc/include/asm/book3s/64/pgtable.h | 2 ++
2 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/include/asm/book3s/64/hash-64k.h
b/arch/power
Conditional PTE bit definitions are confusing and result in coding errors.
Reviewed-by: Paul Mackerras
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 4
1 file changed, 4 deletions(-)
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h
b/arch/power
Without this, if firmware reports 1MB page size support we will crash
trying to use 1MB as hugetlb page size.
echo 300 > /sys/kernel/mm/hugepages/hugepages-1024kB/nr_hugepages
kernel BUG at ./arch/powerpc/include/asm/hugetlb.h:19!
.
[c000e2c27b30] c029dae8 .hugetlb_fault+0x638
With this we have on powernv and pseries /proc/cpuinfo reporting
timebase: 51200
platform: PowerNV
model : 8247-22L
machine : PowerNV 8247-22L
firmware: OPAL
MMU : Hash
Reviewed-by: Paul Mackerras
Signed-off-by: Aneesh Kumar K.V
---
arc
This bit is only used by radix, and it is nice to follow the naming style of
having bit names start with H_/R_ depending on which translation mode they are used in.
No functional change in this patch.
Reviewed-by: Paul Mackerras
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/6
Define everything based on bits present in pgtable.h. This will help in easily
identifying overlapping bits between hash/radix.
No functional change with this patch.
Reviewed-by: Paul Mackerras
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/hash-64k.h | 4
arch/po
For the low slice, the max addr should be less than 4G. Without limiting this correctly
we will end up with a low slice mask which has the 17th bit set. This is not
a problem with the current code because our low slice mask is of type u16. But
in later patch I am switching low slice mask to u64 type and having
BOOKE code is dead code as per the Kconfig details. So make it simpler
by enabling MM_SLICE only for book3s_64. The changes w.r.t. nohash are just
removing dead code. W.r.t. ppc64, 4k without hugetlb will now enable MM_SLICE.
But that is good, because we reduce one extra variant which probably is not
g
Le 10/03/2017 à 16:41, Segher Boessenkool a écrit :
On Fri, Mar 10, 2017 at 03:41:23PM +0100, Christophe LEROY wrote:
gpio_get() and gpio_set() are used extensively by some GPIO based
drivers like SPI, NAND, so it may be worth it as it doesn't impair
readability (if anyone prefers, we could wr
On Tue, Mar 21, 2017 at 01:23:36PM +0100, Christophe LEROY wrote:
> > It doesn't look free for you as you only store one register each time
> > around the loop in the 32-bit memset implementation:
> >
> > 1: stwur4,4(r6)
> > bdnz1b
> >
> > (wouldn't you get better performance
Hi Matthew
Le 20/03/2017 à 22:14, Matthew Wilcox a écrit :
I recently introduced memset32() / memset64(). I've done implementations
for x86 & ARM; akpm has agreed to take the patchset through his tree.
Do you fancy doing a powerpc version? Minchan Kim got a 7% performance
increase with zram fr
On Tue, 2017-03-14 at 12:36:43 UTC, Nicholas Piggin wrote:
> Print the faulting address of the machine check that may help with
> debugging. The effective address reported can be a target memory address
> rather than the faulting instruction address.
>
> Fix up a dangling bracket while here.
>
>
On Sun, 2017-03-12 at 13:17:00 UTC, Geert Uytterhoeven wrote:
> Submitters of device tree binding documentation may forget to CC
> the subsystem maintainer if this is missing.
>
> Signed-off-by: Geert Uytterhoeven
> Cc: Benjamin Herrenschmidt
> Cc: Paul Mackerras
> Cc: Michael Ellerman
> Cc: l
On Tue, 2017-03-07 at 09:32:42 UTC, tcharding wrote:
> struct hcall_stats is only used in hvCall_inst.c.
>
> Move struct hcall_stats to hvCall_inst.c
>
> Signed-off-by: Tobin C. Harding
Applied to powerpc next, thanks.
https://git.kernel.org/powerpc/c/017614a5d6c09ec9e0dc3fd46a5018
cheers
On Mon, 2017-03-06 at 08:49:46 UTC, "Tobin C. Harding" wrote:
> Sparse emits warning: symbol 'prepare_ftrace_return' was not
> declared. Should it be static? prepare_ftrace_return() is called
> from assembler and should not be static. Adding a header file
> declaring the function will fix the spars
On Mon, 2017-03-06 at 08:25:31 UTC, "Tobin C. Harding" wrote:
> Sparse emits two symbol not declared warnings. The two functions in
> question are declared already in a kernel header.
>
> Add include directive to include kernel header.
>
> Signed-off-by: Tobin C. Harding
Applied to powerpc next
On Fri, 2017-02-24 at 00:52:09 UTC, Hamish Martin wrote:
> Shift the logic for defining THREAD_SHIFT logic to Kconfig in order to
> allow override by users.
>
> Signed-off-by: Hamish Martin
> Reviewed-by: Chris Packham
Series applied to powerpc next, thanks.
https://git.kernel.org/powerpc/c/47
On Tue, 2017-02-21 at 02:40:20 UTC, Alexey Kardashevskiy wrote:
> PNV_IODA_PE_DEV is only used for NPU devices (emulated PCI bridges
> representing NVLink). These are added to IOMMU groups with corresponding
> NVIDIA devices after all non-NPU PEs are setup; a special helper -
> pnv_pci_ioda_setup_i
On Tue, 2017-02-21 at 02:38:54 UTC, Alexey Kardashevskiy wrote:
> The iommu_table_ops callbacks are declared CPU endian as they take and
> return "unsigned long"; underlying hardware tables are big-endian.
>
> However get() was missing be64_to_cpu(), this adds the missing conversion.
>
> The only
On Tue, 2017-02-14 at 16:45:10 UTC, Laurent Dufour wrote:
> Move mmap_sem releasing in the do_sigbus()'s unique caller : mm_fault_error()
>
> No functional changes.
>
> Signed-off-by: Laurent Dufour
Series applied to powerpc next, thanks.
https://git.kernel.org/powerpc/c/c2294e0ffe741c8b34c630
On Mon, 2017-02-06 at 10:13:27 UTC, Michael Ellerman wrote:
> Refactor the AUXV routines so they are more composable. In a future test
> we want to look for many AUXV entries and we don't want to have to read
> /proc/self/auxv each time.
>
> Signed-off-by: Michael Ellerman
Series applied to powe
On Fri, 2016-12-02 at 02:38:38 UTC, Ben Hutchings wrote:
> Add declarations for:
> - __mfdcr, __mtdcr (if CONFIG_PPC_DCR_NATIVE=y; through )
> - switch_mmu_context (if CONFIG_PPC_BOOK3S_64=n; through )
>
> Signed-off-by: Ben Hutchings
Applied to powerpc next, thanks.
https://git.kernel.org/powe
On Fri, 2016-12-02 at 02:35:52 UTC, Ben Hutchings wrote:
> The symbols exported for use by MOL aren't getting CRCs and I was
> about to fix that. But MOL is dead upstream, and the latest work on
> it was to make it use KVM instead of its own kernel module. So remove
> them instead.
>
> Signed-of
On Sat, 2016-09-10 at 10:01:30 UTC, Michael Ellerman wrote:
> We'd like to eventually remove NO_IRQ on powerpc, so remove usages of it
> from electra_cf.c which is a powerpc-only driver.
>
> Signed-off-by: Michael Ellerman
Applied to powerpc next.
https://git.kernel.org/powerpc/c/6c8343e82bec46
On Fri, 2017-03-17 at 05:13:20 UTC, Nicholas Piggin wrote:
> We concluded there may be a window where the idle wakeup code could
> get to pnv_wakeup_tb_loss (which clobbers non-volatile GPRs), but the
> hardware may set SRR1[46:47] to 01b (no state loss) which would
> result in the wakeup code fail
On Thu, 2017-02-23 at 03:27:26 UTC, Vaibhav Jain wrote:
> Fix a boundary condition where in some cases an eeh event with
> state == pci_channel_io_perm_failure wont be passed on to a driver
> attached to the virtual pci device associated with a slice. This will
> happen in case the slice just befor
Another thought about that patch. Now that we keep track of the mm
associated with a context, I think we can slightly simplify the function
_cxl_slbia() in main.c, where we look for the mm based on the pid. We
now have the information readily available.
Fred
Le 14/03/2017 à 12:08, Christophe
Le 14/03/2017 à 12:08, Christophe Lombard a écrit :
The new Coherent Accelerator Interface Architecture, level 2, for the
IBM POWER9 brings new content and features:
- POWER9 Service Layer
- Registers
- Radix mode
- Process element entry
- Dedicated-Shared Process Programming Model
- Translatio
Le 14/03/2017 à 12:08, Christophe Lombard a écrit :
The two fields pid and tid of the structure cxl_irq_info are only used
in the guest environment. To avoid confusion, it's not necessary
to fill the fields in the bare-metal environment.
The PSL Process and Thread Identification Register is onl
Hi Björn,
On Mon, Mar 20, 2017 at 7:42 PM, Bjorn Helgaas wrote:
> Several arches use __ioremap() to help implement the generic ioremap(),
> ioremap_nocache(), and ioremap_wc() interfaces, but this usage is all
> inside the arch/ directory.
>
> The only __ioremap() uses outside arch/ are in the Zo
On 21/03/2017 10:12, Aneesh Kumar K.V wrote:
> Laurent Dufour writes:
>
>> In do_page_fault() if handle_mm_fault() returns VM_FAULT_RETRY, retry
>> the page fault handling before anything else.
>>
>> This would simplify the handling of the mmap_sem lock in this part of
>> the code.
>>
>> Signed-o
Le 21/03/2017 à 03:47, Andrew Donnellan a écrit :
On 14/03/17 22:08, Christophe Lombard wrote:
The first 3 patches are mostly cleanup and fixes, separating the
psl8-specific code from the code which will also be used for psl9.
Patch 4 restructures existing code, to easily add the psl
implementa
Le 20/03/2017 à 17:26, Frederic Barrat a écrit :
Le 14/03/2017 à 12:08, Christophe Lombard a écrit :
Rename a few functions, changing the '_psl' suffix to '_psl8', to make
clear that the implementation is psl8 specific.
Those functions will have an equivalent implementation for the psl9 in
a l
Laurent Dufour writes:
> Since the fault retry is now handled earlier, we can release the
> mmap_sem lock earlier too and remove later unlocking previously done in
> mm_fault_error().
>
Reviewed-by: Aneesh Kumar K.V
> Signed-off-by: Laurent Dufour
> ---
> arch/powerpc/mm/fault.c | 19 -
Laurent Dufour writes:
> In do_page_fault() if handle_mm_fault() returns VM_FAULT_RETRY, retry
> the page fault handling before anything else.
>
> This would simplify the handling of the mmap_sem lock in this part of
> the code.
>
> Signed-off-by: Laurent Dufour
> ---
> arch/powerpc/mm/fault.c
Laurent Dufour writes:
> Move mmap_sem releasing in the do_sigbus()'s unique caller : mm_fault_error()
>
> No functional changes.
>
Reviewed-by: Aneesh Kumar K.V
> Signed-off-by: Laurent Dufour
> ---
> arch/powerpc/mm/fault.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
>
Kees Cook writes:
> On Mon, Mar 20, 2017 at 1:39 AM, Michael Ellerman wrote:
>> Andrew Donnellan writes:
>>
>>> Commit 65c059bcaa73 ("powerpc: Enable support for GCC plugins") enabled GCC
>>> plugins on powerpc, but neglected to update the architecture list in the
>>> docs. Rectify this.
>>>
>>