Presently we only support initiating a Service Processor dump from the host.
Hence update the sysfs message. Also update a couple of other error/info
messages.
Signed-off-by: Vasant Hegde
---
Ben,
This patch applies cleanly on top of the powerpc next branch.
Vasant
arch/powerpc/platforms/powernv/opal-dump.c
On Thu, 2014-08-14 at 16:16 +1000, Benjamin Herrenschmidt wrote:
> Another interesting one in the "OMG" category is the series from Michael
> adding memory barriers to spin_is_locked(). That's also the result of many
> days of debugging to figure out why the semaphore code would occasionally
> crash
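For context, a minimal sketch of the shape of such a fix, assuming the barrier
lands in powerpc's arch_spin_is_locked() (illustrative, not necessarily the
exact patch):

static inline int arch_spin_is_locked(arch_spinlock_t *lock)
{
	/*
	 * Without this barrier, a prior store (e.g. a semaphore count
	 * update) could be reordered past the lock-word load below,
	 * letting two CPUs each conclude the other holds the lock.
	 */
	smp_mb();
	return !arch_spin_value_unlocked(*lock);
}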
Hi Linus !
Here are some more powerpc bits for 3.17, essentially fixes.
The biggest series, also aimed at -stable, is from Aneesh and is the result
of weeks and weeks of debugging to find out why the heck our THP implementation
was occasionally triggering multi-hit errors in our level 1 TLB. It en
Alexey Kardashevskiy writes:
> fc95ca7284bc54953165cba76c3228bd2cdb9591 claims that there is no
> functional change but this is not true as it calls get_order() (which
> takes bytes) where it should have called ilog2() and the kernel stops
> on VM_BUG_ON().
>
> This replaces get_order() with order_base_2()
fc95ca7284bc54953165cba76c3228bd2cdb9591 claims that there is no
functional change but this is not true as it calls get_order() (which
takes bytes) where it should have called ilog2() and the kernel stops
on VM_BUG_ON().
This replaces get_order() with order_base_2() (round-up version of ilog2).
S
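To illustrate the difference with a hypothetical helper (not the patched
function itself): get_order() converts a size in bytes to a page allocation
order, while order_base_2() is a round-up ilog2(), so for a count of entries
the two diverge badly:

#include <linux/log2.h>
#include <asm/page.h>

/* Hypothetical helper, for illustration only. */
static unsigned long window_order(unsigned long nr_entries)
{
	/*
	 * Wrong: get_order() takes *bytes*. With 4K pages,
	 * get_order(2048) == 0, because 2048 bytes fit in one page.
	 */
	/* return get_order(nr_entries); */

	/*
	 * Right for a count: order_base_2(2048) == 11, and
	 * order_base_2(2049) == 12 since it rounds up, whereas
	 * ilog2(2049) == 11 rounds down.
	 */
	return order_base_2(nr_entries);
}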
MADV_FREE needs pmd_dirty and pmd_mkclean to detect a recent
overwrite of the contents, since the MADV_FREE syscall is called for
THP pages.
This patch adds pmd_dirty and pmd_mkclean for THP page MADV_FREE
support.
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: linuxppc-dev@lists.ozlabs.org
Cc:
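A minimal sketch of what such accessors look like, assuming the usual encoding
where the dirty state lives in a _PAGE_DIRTY bit (the actual patch may route
through the pte helpers instead):

static inline int pmd_dirty(pmd_t pmd)
{
	/* Report whether the huge PMD has been written to. */
	return pmd_val(pmd) & _PAGE_DIRTY;
}

static inline pmd_t pmd_mkclean(pmd_t pmd)
{
	/* Clear the dirty bit so a later write can be detected. */
	return __pmd(pmd_val(pmd) & ~(unsigned long)_PAGE_DIRTY);
}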
There is an issue currently where NUMA information is used on powerpc
(and possibly ia64) before it has been read from the device-tree, which
leads to large slab consumption with CONFIG_SLUB and memoryless nodes.
On NUMA powerpc, a non-boot CPU's cpu_to_node/cpu_to_mem is only accurate
after start_secondary
After discussions with Tejun, we don't want to spread the use of
cpu_to_mem() (and thus knowledge of allocators/NUMA topology details)
into callers, but would rather ensure the callees correctly handle
memoryless nodes. With the previous patches ("topology: add support for
node_to_mem_node() to det
From: Joonsoo Kim
Update the SLUB code to search for partial slabs on the nearest node
with memory in the presence of memoryless nodes. Additionally, do not
consider it to be an ALLOC_NODE_MISMATCH (and deactivate the slab) when
a memoryless-node specified allocation goes off-node.
Signed-off-by
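The core of the idea is to remap the search node before walking partial
lists, roughly as follows (a sketch; function names follow mm/slub.c of that
era, and node_to_mem_node() comes from the companion patch):

static void *get_partial(struct kmem_cache *s, gfp_t flags, int node,
			 struct kmem_cache_cpu *c)
{
	int searchnode = node;

	if (node == NUMA_NO_NODE)
		searchnode = numa_mem_id();	/* nearest node with memory */
	else if (!node_present_pages(node))
		searchnode = node_to_mem_node(node);	/* memoryless: remap */

	/* Search the remapped node's partial list instead of failing. */
	return get_partial_node(s, get_node(s, searchnode), c, flags);
}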
From: Joonsoo Kim
We need to determine the fallback node in the SLUB allocator if the
allocation target node is a memoryless node. Without it, SLUB wrongly
selects the node which has no memory and can't use a partial slab,
because of the node mismatch. The introduced function, node_to_mem_node(X),
will return
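Conceptually it resolves to the nearest node that has memory; a minimal
sketch of that idea (not the actual implementation, which caches the answer
per node):

#include <linux/gfp.h>
#include <linux/mmzone.h>

static int node_to_mem_node_sketch(int node)
{
	struct zoneref *z;
	struct zone *zone;

	/*
	 * The node's zonelist is already ordered by distance, so the
	 * first populated zone belongs to the nearest node with memory.
	 */
	for_each_zone_zonelist(zone, z, node_zonelist(node, GFP_KERNEL),
			       ZONE_NORMAL)
		if (populated_zone(zone))
			return zone_to_nid(zone);

	return node;
}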
Anton noticed (http://www.spinics.net/lists/linux-mm/msg67489.html) that
on ppc LPARs with memoryless nodes, a large amount of memory was
consumed by slabs and was marked unreclaimable. He tracked it down to
slab deactivations in the SLUB core when we allocate remotely, leading
to poor efficiency a
On Tue, 2014-08-12 at 16:29 +0200, Chris Enrique wrote:
> Hello,
>
> I am currently working on a GE PPC9A board which has a MPC8641D
> processor.
>
>
> My work on this board is currently based on the Yocto Project (where I
> also put this issue on the mailing list) but the issue affects mainly
>
On Fri, Aug 08, 2014 at 02:47:20PM +0800, Shengjiu Wang wrote:
> These patches refine ESAI for TDM support.
Applied both, thanks.
The 32-bit defconfig version has had these enabled
for years, so make the 64-bit defconfig have them too.
This patch only adds CONFIG_VIRT_DRIVERS,
CONFIG_FSL_HV_MANAGER and CONFIG_PPC_EPAPR_HV_BYTECHAN;
the other changes are "make savedefconfig" artifacts.
Signed-off-by: Laurentiu Tudor
---
based on git
The new MSI block in MPIC 4.3 added the MSIIR1 register,
with a different layout, in order to support 16 MSIR
registers. The msi binding was also updated so that
the "reg" reflects the newly introduced MSIIR1 register.
Virtual machines advertise these msi nodes by using the
compatible "fsl,vmpic-ms
If we know that the user address space has never executed on other CPUs,
we could use tlbiel.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/machdep.h| 2 +-
arch/powerpc/include/asm/tlbflush.h | 4 +-
arch/powerpc/mm/hash_native_64.c | 4 +-
arch/powerpc/mm/hash_utils_64.
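A sketch of the kind of check that decides between local and broadcast
invalidation (mm_is_cpu_local() is a hypothetical helper; the real series
threads the decision through the machdep hooks):

#include <linux/mm_types.h>
#include <linux/smp.h>

/* Hypothetical helper: true if only this CPU ever ran the mm. */
static int mm_is_cpu_local(struct mm_struct *mm)
{
	/* Caller must have preemption disabled. */
	return cpumask_equal(mm_cpumask(mm),
			     cpumask_of(smp_processor_id()));
}

/* If true, use tlbiel (local, no broadcast) instead of tlbie. */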
Alexey Kardashevskiy writes:
> fc95ca7284bc54953165cba76c3228bd2cdb9591 claims that there is no
> functional change but this is not true as it calls get_order() (which
> takes bytes) where it should have called ilog2() and the kernel stops
> on VM_BUG_ON().
>
> This replaces get_order() with ilog2
When a PE is passed to a guest and a guest EEH event occurs on this PE,
EEH_PE_ISOLATED may be set in the host.
This is a big issue when the PE is reused by the host: host EEH
will not work on this PE because it was unexpectedly left set to
EEH_PE_ISOLATED.
Signed-off-by: Mike Qiu
---
arch/powerpc/platforms/powernv/eeh-io
On Fri, Aug 08, 2014 at 06:41:19PM +0800, Nicolin Chen wrote:
> The previous patch (ASoC: fsl_sai: Add asynchronous mode support) added
> new Device Tree bindings for Asynchronous and Synchronous modes support.
> However, these two shall not be present at the same time.
Applied, thanks.
Hi Tomeu,
On Wed, 6 Aug 2014 15:57:48 +0200
Tomeu Vizoso wrote:
> In preparation to change the public API to return a per-user clk structure,
> remove any usage of this public API from the clock implementations.
>
> The reason for having this in a separate commit from the one that introduces
>
Continue is not needed at the bottom of a loop.
The Coccinelle semantic patch implementing this change is:
@@
@@

for (...;...;...) {
  ...
  if (...) {
    ...
-   continue;
  }
}
Signed-off-by: Himangi Saraogi
Acked-by: Julia Lawall
---
Not compile tested.
arch/powerpc/platforms/pseries/cmm
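For illustration, a hypothetical before/after of the transformation the
semantic patch above performs (all names are made up):

/* Before: the continue is the last statement the loop would run anyway. */
for (i = 0; i < nr_pages; i++) {
	if (page_is_loaned(pages[i])) {
		loaned++;
		continue;
	}
}

/* After: */
for (i = 0; i < nr_pages; i++) {
	if (page_is_loaned(pages[i]))
		loaned++;
}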
> -----Original Message-----
> From: Linuxppc-dev [mailto:linuxppc-dev-
> bounces+b21989=freescale@lists.ozlabs.org] On Behalf Of Scott Wood
> Sent: Friday, August 01, 2014 2:31 AM
> To: Medve Emilian-EMMEDVE1
> Cc: linuxppc-...@ozlabs.org
> Subject: Re: [PATCH v2 5/7] powerpc/corenet: Add MDIO
On 8/11/2014 8:46 AM, Benjamin Herrenschmidt wrote:
> On Tue, 2014-08-05 at 15:51 +0100, One Thousand Gnomes wrote:
>> On Tue, 05 Aug 2014 20:12:09 +0530
>> Vishal Mansur wrote:
>>
>>> EEH kernel services are inconsistently exported by the
>>> kernel. eeh_check_failure is exported for any use, bu
Add a tracepoint to track hugepage invalidation. This helps us
debug difficult-to-track bugs.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/mm/pgtable_64.c | 6 +++
arch/powerpc/mm/tlb_hash64.c | 4 ++
include/trace/events/thp.h | 88
3 files
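A sketch of a tracepoint of that shape in include/trace/events/thp.h (field
names are illustrative):

#undef TRACE_SYSTEM
#define TRACE_SYSTEM thp

#if !defined(_TRACE_THP_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_THP_H

#include <linux/tracepoint.h>

TRACE_EVENT(hugepage_invalidate,

	TP_PROTO(unsigned long addr, unsigned long pte),
	TP_ARGS(addr, pte),

	TP_STRUCT__entry(
		__field(unsigned long, addr)
		__field(unsigned long, pte)
	),

	TP_fast_assign(
		__entry->addr = addr;
		__entry->pte = pte;
	),

	/* Renders as: hugepage invalidate at <addr> and pte = <pte> */
	TP_printk("hugepage invalidate at %lx and pte = %lx",
		  __entry->addr, __entry->pte)
);

#endif /* _TRACE_THP_H */
#include <trace/define_trace.h>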
On ppc64 we support 4K hash ptes with a 64K page size. That requires
us to track the hash pte slot information on a per-4K basis. We do that
by storing the slot details in the second half of the pte page. The pte bit
_PAGE_COMBO is used to indicate whether the second half needs to be
looked at while building
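A sketch shaped like ppc64's __real_pte() for this configuration (details may
differ): the second half of the pte page is only consulted when _PAGE_COMBO
is set:

static inline real_pte_t __real_pte(pte_t pte, pte_t *ptep)
{
	real_pte_t rpte;

	rpte.pte = pte;
	rpte.hidx = 0;
	if (pte_val(pte) & _PAGE_COMBO) {
		/*
		 * Pair with the barrier on the store side so we never
		 * read stale slot info (see the read-barrier patch in
		 * the series described below).
		 */
		smp_rmb();
		rpte.hidx = pte_val(*(ptep + PTRS_PER_PTE));
	}
	return rpte;
}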
We would get wrong results if the compiler recomputed old_pmd. Avoid
that by using ACCESS_ONCE.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/mm/hugepage-hash64.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/mm/hugepage-hash64.c
b/arch/powerpc/mm/hugepage-ha
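The pattern, as a minimal sketch (hypothetical wrapper function):

static void example(pmd_t *pmdp)
{
	/*
	 * Snapshot the PMD exactly once. A plain *pmdp read may be
	 * re-issued by the compiler at each use, handing us two
	 * different "old" values within one sequence of checks.
	 */
	pmd_t old_pmd = ACCESS_ONCE(*pmdp);

	if (pmd_trans_huge(old_pmd)) {
		/* ... every later test reuses the same snapshot ... */
	}
}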
As per the ISA, for a 4K base page size we compare bits 14..65 of the VA
specified with the entry_VA in the TLB. That implies we need to make sure we
do a tlbie with all the possible 4K VAs we used to access the 16MB hugepage.
With a 64K base page size we compare bits 14..57 of the VA. Hence we cannot
ignore the lower
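Concretely, a 16MB hugepage spans 4096 possible 4K VAs; a sketch of the
resulting requirement (do_tlbie_4k() is a hypothetical stand-in for the
per-VA invalidation):

#include <linux/sizes.h>

static void flush_16mb_hugepage_4k(unsigned long vstart)
{
	unsigned long va;

	/* Every 4K VA inside the 16MB page may have its own TLB entry. */
	for (va = vstart; va < vstart + SZ_16M; va += SZ_4K)
		do_tlbie_4k(va);	/* hypothetical per-VA tlbie */
}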
If we change the base page size of the segment, either via sub_page_protect
or via remap_4k_pfn, we do a demote_segment, which doesn't flush the hash
table entries. We do a lazy hash page table flush for all mapped pages
in the demoted segment. This happens when we handle a hash page fault for
these page
With hugepages, we store the hpte valid information in the pte page
whose address is stored in the second half of the PMD. Use a
write barrier to make sure that clearing the PMD busy bit and updating
the hpte valid info are ordered properly.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/mm/hugepage-hash64
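Schematically, inside the hugepage hashing path (a sketch; variable names are
illustrative, following the hugepage hashing code):

	/* Publish which hpte slot is now valid for this 4K index. */
	mark_hpte_slot_valid(hpte_slot_array, index, slot);

	/*
	 * Order the slot-info store before clearing _PAGE_BUSY, so a
	 * CPU that sees the PMD as not-busy also sees valid slot info.
	 */
	smp_wmb();
	*pmdp = __pmd(new_pmd & ~_PAGE_BUSY);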
Hi,
This patch series fixes a machine check exception that we observed when using
transparent huge pages along with 4K hash ptes on the power bare metal platform.
The patch "powerpc: mm: Use read barrier when creating real_pte" is not really
related to THP, but was added to the series because it is fixing a
The segment identifier and segment size will remain the same in
the loop, so we can compute them outside it. We also change the
hugepage_invalidate interface so that we can use it in the later patch.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/machdep.h| 6 +++---
arch/powerpc/mm/ha