This adds special support for huge pages (16MB). The reference
counting cannot be easily done for such pages in real mode (when
MMU is off) so we added a list of huge pages. It is populated in
virtual mode and get_page is called just once per huge page.
Real mode handlers check if the
This series of follow-up patches makes EEH workable for the PowerNV
platform on a Juno-IOC-L machine. A couple of issues have been fixed
with help from Ben:
- Check PCIe link after PHB complete reset
- Restore config space for bridges
- The EEH address cache wasn't
We have two fields in struct pnv_phb to track the state. The patch
replaces those fields with one and introduces flags for it. The patch
doesn't change the logic.
Signed-off-by: Gavin Shan sha...@linux.vnet.ibm.com
---
arch/powerpc/platforms/powernv/eeh-ioda.c |8
When the PHB is fenced or dead, it's pointless to collect the data
from the PCI config space of subordinate PCI devices since it would
just return 0xFF's. The patch also fixes a buffer overrun while getting
PCI config data.
Signed-off-by: Gavin Shan sha...@linux.vnet.ibm.com
---
During recovery from EEH errors, the device driver requires an
explicit reset (in most cases). The EEH core doesn't do hotplug during
reset. However, there might be some device drivers that can't
support EEH. So the device can't be put into a quiet state during
the reset and possibly requesting PCI
On the PowerNV platform, the EEH address cache isn't built correctly
because we skipped EEH devices without a bound PE. The patch
fixes that.
Signed-off-by: Gavin Shan sha...@linux.vnet.ibm.com
---
arch/powerpc/kernel/eeh_cache.c |2 +-
arch/powerpc/platforms/powernv/pci-ioda.c
Currently, we're using the combo (PCI bus + devfn) in the PCI
config accessors, and the PCI config accessors in EEH depend on them.
However, it's not safe to refer to the PCI bus, which might have been
removed during hotplug. So we're using the device node in the PCI
config accessors and the corresponding
We don't need the whole backtrace, just a one-line message, in
the error reporting interrupt handler. For errors triggered by
accesses to PCI config space or MMIO, we replace WARN(1, ...) with
pr_err() and dump_stack(). The patch also adds more output messages
to indicate what the EEH core is doing.
The patch avoids the following build warnings:
The function .pnv_pci_ioda_fixup() references
the function __init .eeh_init().
This is often because .pnv_pci_ioda_fixup lacks a __init
The function .pnv_pci_ioda_fixup() references
the function __init .eeh_addr_cache_build().
After a reset (e.g. a complete reset) to bring the fenced PHB
back, the PCIe link might not be ready yet. The patch makes
sure the PCIe link is ready before accessing its subordinate
PCI devices. The patch also fixes wrong values being restored to
the PCI_COMMAND register for PCI
On Thu, Jun 27, 2013 at 01:46:41PM +0800, Gavin Shan wrote:
The subject line went wrong. I didn't get the format right, so here's
the correction:
[PATCH v4 00/8] Follow-up fixes for EEH on PowerNV
This series of follow-up patches makes EEH workable for the PowerNV
platform on a Juno-IOC-L
The VFIO IOMMU driver for sPAPR TCE locks the whole DMA window by setting
all ones in iommu_table.it_map. However, this was not protected by the
locks which other clients of iommu_table use.
The patch fixes this.
Signed-off-by: Alexey Kardashevskiy a...@ozlabs.ru
---
v1-v2:
* Fixed a potential warning
-Original Message-
From: Alex Williamson [mailto:alex.william...@redhat.com]
Sent: Tuesday, June 25, 2013 10:27 AM
To: Sethi Varun-B16395
Cc: j...@8bytes.org; io...@lists.linux-foundation.org; linuxppc-
d...@lists.ozlabs.org; linux-ker...@vger.kernel.org;
b...@kernel.crashing.org;
On 06/26/2013 01:42 PM, Bharat Bhushan wrote:
The ehpriv instruction is used by user space for setting software
breakpoints. This patch adds support to exit to user space
with run->debug carrying the relevant information.
As this is the first point where we use run->debug, we also define
the run->debug
Hi Andrew,
Today's linux-next merge of the akpm tree got a conflict in
arch/powerpc/kernel/ptrace.c between commit b0b0aa9c7faf
(powerpc/hw_brk: Fix setting of length for exact mode breakpoints) from
the powerpc tree and commit ptrace/powerpc: revert hw_breakpoints: Fix
racy access to ptrace
From: Sukadev Bhattiprolu suka...@linux.vnet.ibm.com
Date: Tue, 25 Jun 2013 15:50:18 -0700
Subject: [RFC][PATCH 2/3][v2] perf/Power7: Export MDTLB_SRC fields to userspace
Power7 saves the perf-event vector information in the mmcra register.
Included in this event vector is a marked-data-TLB
Il 25/06/2013 22:30, Srivatsa S. Bhat ha scritto:
-	cpu = get_cpu();
+	cpu = get_online_cpus_atomic();
 	vmx_vcpu_load(&vmx->vcpu, cpu);
 	vmx->vcpu.cpu = cpu;
 	err = vmx_vcpu_setup(vmx);
 	vmx_vcpu_put(&vmx->vcpu);
-	put_cpu();
+	put_online_cpus_atomic();
The
On 06/26/2013 01:16 PM, Paolo Bonzini wrote:
Il 25/06/2013 22:30, Srivatsa S. Bhat ha scritto:
-	cpu = get_cpu();
+	cpu = get_online_cpus_atomic();
 	vmx_vcpu_load(&vmx->vcpu, cpu);
 	vmx->vcpu.cpu = cpu;
 	err = vmx_vcpu_setup(vmx);
 	vmx_vcpu_put(&vmx->vcpu);
-	put_cpu();
On Wed, 2013-06-26 at 16:56 +1000, Stephen Rothwell wrote:
Today's linux-next merge of the akpm tree got a conflict in
arch/powerpc/kernel/ptrace.c between commit b0b0aa9c7faf
(powerpc/hw_brk: Fix setting of length for exact mode breakpoints) from
the powerpc tree and commit ptrace/powerpc:
On Tue, Jun 25, 2013 at 09:03:32AM -0700, Paul E. McKenney wrote:
On Tue, Jun 25, 2013 at 05:44:23PM +1000, Michael Ellerman wrote:
On Tue, Jun 25, 2013 at 05:19:14PM +1000, Michael Ellerman wrote:
Here's another trace from 3.10-rc7 plus a few local patches.
And here's another with
Il 25/06/2013 22:30, Srivatsa S. Bhat ha scritto:
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while
Il 26/06/2013 10:06, Srivatsa S. Bhat ha scritto:
On 06/26/2013 01:16 PM, Paolo Bonzini wrote:
Il 25/06/2013 22:30, Srivatsa S. Bhat ha scritto:
-	cpu = get_cpu();
+	cpu = get_online_cpus_atomic();
 	vmx_vcpu_load(&vmx->vcpu, cpu);
 	vmx->vcpu.cpu = cpu;
 	err = vmx_vcpu_setup(vmx);
On 06/24/2013 04:58 PM, Michael Ellerman wrote:
Add support for EBB (Event Based Branches) on 64-bit book3s. See the
included documentation for more details.
EBBs are a feature which allows the hardware to branch directly to a
specified user space address when a PMU event overflows. This can
-Original Message-
From: tiejun.chen [mailto:tiejun.c...@windriver.com]
Sent: Wednesday, June 26, 2013 12:25 PM
To: Bhushan Bharat-R65777
Cc: kvm-...@vger.kernel.org; k...@vger.kernel.org; ag...@suse.de; Wood Scott-
B07421; b...@kernel.crashing.org; linuxppc-dev@lists.ozlabs.org;
On 06/26/2013 01:53 PM, Paolo Bonzini wrote:
Il 26/06/2013 10:06, Srivatsa S. Bhat ha scritto:
On 06/26/2013 01:16 PM, Paolo Bonzini wrote:
Il 25/06/2013 22:30, Srivatsa S. Bhat ha scritto:
-	cpu = get_cpu();
+	cpu = get_online_cpus_atomic();
 	vmx_vcpu_load(&vmx->vcpu, cpu);
Il 26/06/2013 10:41, Srivatsa S. Bhat ha scritto:
On 06/26/2013 01:53 PM, Paolo Bonzini wrote:
Il 26/06/2013 10:06, Srivatsa S. Bhat ha scritto:
On 06/26/2013 01:16 PM, Paolo Bonzini wrote:
Il 25/06/2013 22:30, Srivatsa S. Bhat ha scritto:
-	cpu = get_cpu();
+	cpu = get_online_cpus_atomic();
We have two fields in struct pnv_phb to track the state. The patch
replaces those fields with one and introduces flags for it. The patch
doesn't change the logic.
What is the benefit of this change?
...
+
+#define PNV_EEH_STATE_ENABLED	(1 << 0)	/* EEH enabled */
+#define
On 06/26/2013 04:44 PM, Bhushan Bharat-R65777 wrote:
-Original Message-
From: tiejun.chen [mailto:tiejun.c...@windriver.com]
Sent: Wednesday, June 26, 2013 12:25 PM
To: Bhushan Bharat-R65777
Cc: kvm-...@vger.kernel.org; k...@vger.kernel.org; ag...@suse.de; Wood Scott-
B07421;
-Original Message-
From: tiejun.chen [mailto:tiejun.c...@windriver.com]
Sent: Wednesday, June 26, 2013 2:47 PM
To: Bhushan Bharat-R65777
Cc: kvm-...@vger.kernel.org; k...@vger.kernel.org; ag...@suse.de; Wood Scott-
B07421; b...@kernel.crashing.org; linuxppc-dev@lists.ozlabs.org;
@@ -117,6 +117,7 @@
	 (EVENT_UNIT_MASK     << EVENT_UNIT_SHIFT)    |	\
	 (EVENT_COMBINE_MASK  << EVENT_COMBINE_SHIFT) |	\
	 (EVENT_MARKED_MASK   << EVENT_MARKED_SHIFT)  |	\
+	 (1ull                << EVENT_CONFIG_EBB_SHIFT) |
On Wed, Jun 26, 2013 at 10:12:16AM +0100, David Laight wrote:
We have two fields in struct pnv_phb to track the state. The patch
replaces those fields with one and introduces flags for it. The patch
doesn't change the logic.
What is the benefit of this change?
There might be more flags
On 26.06.2013, at 11:27, Bhushan Bharat-R65777 wrote:
-Original Message-
From: tiejun.chen [mailto:tiejun.c...@windriver.com]
Sent: Wednesday, June 26, 2013 2:47 PM
To: Bhushan Bharat-R65777
Cc: kvm-...@vger.kernel.org; k...@vger.kernel.org; ag...@suse.de; Wood Scott-
B07421;
Ben, please ignore this.
Need some more code there.
On 06/26/2013 04:21 PM, Alexey Kardashevskiy wrote:
The VFIO IOMMU driver for sPAPR TCE locks the whole DMA window by setting
all ones in iommu_table.it_map. However, this was not protected by the
locks which other clients of iommu_table use.
The
On 06/26/2013 02:14 AM, Scott Wood wrote:
On Tue, Mar 05, 2013 at 05:52:36PM +0200, Laurentiu TUDOR wrote:
From: Tudor Laurentiulaurentiu.tu...@freescale.com
The ePAPR para-virtualization needs to happen very early,
otherwise the bytechannel-based console will silently
drop some of the early
On Wed, 26 Jun 2013 18:10:31 +1000 Benjamin Herrenschmidt
b...@kernel.crashing.org wrote:
On Wed, 2013-06-26 at 16:56 +1000, Stephen Rothwell wrote:
Today's linux-next merge of the akpm tree got a conflict in
arch/powerpc/kernel/ptrace.c between commit b0b0aa9c7faf
(powerpc/hw_brk: Fix
On Wed, Jun 26, 2013 at 02:02:57AM +0530, Srivatsa S. Bhat wrote:
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
On 06/26/2013 03:30 AM, Paul E. McKenney wrote:
On Wed, Jun 26, 2013 at 01:57:55AM +0530, Srivatsa S. Bhat wrote:
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
In RCU code,
On Wed, Jun 26, 2013 at 06:10:58PM +1000, Michael Ellerman wrote:
On Tue, Jun 25, 2013 at 09:03:32AM -0700, Paul E. McKenney wrote:
On Tue, Jun 25, 2013 at 05:44:23PM +1000, Michael Ellerman wrote:
On Tue, Jun 25, 2013 at 05:19:14PM +1000, Michael Ellerman wrote:
Here's another
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Could you use an rcu-like sequence so that disabling pre-emption
would be enough?
Something like rebuilding the cpu list, then forcing
On 06/26, Benjamin Herrenschmidt wrote:
On Wed, 2013-06-26 at 16:56 +1000, Stephen Rothwell wrote:
Today's linux-next merge of the akpm tree got a conflict in
arch/powerpc/kernel/ptrace.c between commit b0b0aa9c7faf
(powerpc/hw_brk: Fix setting of length for exact mode breakpoints) from
On Wed, Jun 26, 2013 at 03:29:40PM +0100, David Laight wrote:
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Could you use an rcu-like sequence so that disabling pre-emption
would
On Wed, Jun 26, 2013 at 07:39:40PM +0530, Srivatsa S. Bhat wrote:
On 06/26/2013 03:30 AM, Paul E. McKenney wrote:
On Wed, Jun 26, 2013 at 01:57:55AM +0530, Srivatsa S. Bhat wrote:
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to
On Wed, Jun 26, 2013 at 03:29:40PM +0100, David Laight wrote:
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Could you use an rcu-like sequence so that disabling pre-emption
would
On Wed, 2013-06-26 at 07:34 -0700, Paul E. McKenney wrote:
On Wed, Jun 26, 2013 at 03:29:40PM +0100, David Laight wrote:
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Could
On Wed, Jun 26, 2013 at 10:51:11AM -0400, Steven Rostedt wrote:
It would also increase the latency of CPU-hotunplug operations.
Is that a big deal?
I thought that was the whole deal with this patchset - making cpu
hotunplugs lighter and faster mostly for powersaving. That said, just
On Wed, 2013-06-26 at 08:21 -0700, Tejun Heo wrote:
On Wed, Jun 26, 2013 at 10:51:11AM -0400, Steven Rostedt wrote:
It would also increase the latency of CPU-hotunplug operations.
Is that a big deal?
I thought that was the whole deal with this patchset - making cpu
hotunplugs lighter
Hello,
On Wed, Jun 26, 2013 at 11:33:43AM -0400, Steven Rostedt wrote:
I thought the whole deal with this patchset was to remove stop_machine
from CPU hotplug. Why halt all CPUs just to remove one? stomp_machine()
is extremely intrusive for the entire system, where as one CPU making
sure all
On 06/26/2013 07:59 PM, David Laight wrote:
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Could you use an rcu-like sequence so that disabling pre-emption
would be enough?
On 06/26/2013 08:51 PM, Tejun Heo wrote:
On Wed, Jun 26, 2013 at 10:51:11AM -0400, Steven Rostedt wrote:
It would also increase the latency of CPU-hotunplug operations.
Is that a big deal?
I thought that was the whole deal with this patchset - making cpu
hotunplugs lighter and faster
On 06/26/2013 10:59 PM, Tejun Heo wrote:
Hello,
On Wed, Jun 26, 2013 at 11:33:43AM -0400, Steven Rostedt wrote:
I thought the whole deal with this patchset was to remove stop_machine
from CPU hotplug. Why halt all CPUs just to remove one? stomp_machine()
is extremely intrusive for the
On 06/26/2013 07:36:23 AM, Tudor Laurentiu wrote:
On 06/26/2013 02:14 AM, Scott Wood wrote:
This would require converting
the code to use the early device tree functions.
I see. Had a look at that API and it seems pretty limited.
I couldn't find a simple way of reading a property other than
On Wed, 2013-06-26 at 06:24 +, Sethi Varun-B16395 wrote:
-Original Message-
From: Alex Williamson [mailto:alex.william...@redhat.com]
Sent: Tuesday, June 25, 2013 10:27 AM
To: Sethi Varun-B16395
Cc: j...@8bytes.org; io...@lists.linux-foundation.org; linuxppc-
Hey,
On Wed, Jun 26, 2013 at 11:58:48PM +0530, Srivatsa S. Bhat wrote:
Yes, we were discussing hot-unplug latency for use-cases such as
suspend/resume. We didn't want to make those operations slower in the
process of removing stop_machine() from hotplug.
Can you please explain why tho? How
On 06/25/2013 01:00:23 AM, Joakim Tjernlund wrote:
Scott Wood scottw...@freescale.com wrote on 2013/06/25 02:51:00:
On Fri, Jul 20, 2012 at 10:37:17AM +0200, Joakim Tjernlund wrote:
Zang Roy-R61911 r61...@freescale.com wrote on 2012/07/20
10:27:52:
-Original Message-
On Wed, 2013-06-26 at 16:19 +0200, Oleg Nesterov wrote:
You were cc'ed every time ;)
Why didn't it go through the powerpc tree ?
Because this series needs to update any user of
ptrace_get/put_breakpoints
in arch/ (simply remove these calls), then change the core kernel
code, then
fix
Please keep subject lines limited to 60-70 characters, and prefix with
powerpc/83xx:.
On Mon, Mar 18, 2013 at 05:47:32PM +0400, Sergey Gerasimov wrote:
Signed-off-by: Sergey Gerasimov sergey.gerasi...@astrosoft-development.com
---
arch/powerpc/boot/dts/ib8315.dts | 490 +++
For an unknown relocation type, since the value of r4 is just the 8-bit
relocation type, the sum of r4 and r7 may yield an invalid memory
address. For example:
In normal case:
r4 = c00x
r7 = 4000
r4 + r7 = 000x
For an unknown relocation
Currently the fsl booke 32bit kernel is using the DYNAMIC_MEMSTART relocation
method. But the RELOCATABLE method is more flexible and has fewer
alignment restrictions. So enable this feature on this platform and use it by
default for the kdump kernel.
These patches have passed the kdump boot test
This is based on the code in head_44x.S. Since we always align to
256M before mapping PAGE_OFFSET for a relocatable kernel, we also
change the initial TLB map to 256M size.
Signed-off-by: Kevin Hao haoke...@gmail.com
---
arch/powerpc/Kconfig | 2 +-
For a relocatable kdump kernel, we may create a TLB map which is
beyond the real memory allocated to the kdump kernel. For example,
when the boot kernel reserves 32M of memory for the kdump kernel by
using 'crashkernel=32M@64M', we will have to create a 256M TLB
entry in the kdump kernel. So define
Hi Michael,
On Tue, 25 Jun 2013 17:47:56 +1000 Michael Ellerman mich...@ellerman.id.au
wrote:
-void tm_unavailable_exception(struct pt_regs *regs)
+void facility_unavailable_exception(struct pt_regs *regs)
 {
+	static char *facility_strings[] = {
+		"FPU",
+
Hi,
On Wed, 26 Jun 2013 11:12:23 +0530 Bharat Bhushan r65...@freescale.com wrote:
diff --git a/arch/powerpc/include/asm/switch_to.h
b/arch/powerpc/include/asm/switch_to.h
index 200d763..50b357f 100644
--- a/arch/powerpc/include/asm/switch_to.h
+++ b/arch/powerpc/include/asm/switch_to.h
@@
The locks in arch/powerpc/kernel/iommu.c were initially added to protect
iommu_table::it_map so the patch just makes things consistent.
Specifically, it does:
1. add missing locks for it_map access during iommu_take_ownership/
iommu_release_ownership execution where the entire it_map is marked
The changes are:
1. rebased on v3.10-rc7
2. removed spinlocks from real mode
3. added security checks between KVM and VFIO
More details are in the individual patch comments.
Alexey Kardashevskiy (8):
KVM: PPC: reserve a capability number for multitce support
KVM: PPC: reserve a capability and
Signed-off-by: Alexey Kardashevskiy a...@ozlabs.ru
---
include/uapi/linux/kvm.h |1 +
1 file changed, 1 insertion(+)
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index d88c8ee..970b1f5 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -666,6 +666,7
Signed-off-by: Alexey Kardashevskiy a...@ozlabs.ru
---
include/uapi/linux/kvm.h |2 ++
1 file changed, 2 insertions(+)
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 970b1f5..0865c01 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -667,6 +667,7
VFIO is designed to be used via ioctls on file descriptors
returned by VFIO.
However, in some situations support for an external user is required.
The first user is KVM on PPC64 (SPAPR TCE protocol) which is going to
use the existing VFIO groups for exclusive access in real/virtual mode
in the
This adds hash_for_each_possible_rcu_notrace() which is basically
a notrace clone of hash_for_each_possible_rcu() which cannot be
used in real mode due to its tracing/debugging capability.
Signed-off-by: Alexey Kardashevskiy a...@ozlabs.ru
---
include/linux/hashtable.h | 15 +++
1
The current VFIO-on-POWER implementation supports only user mode
driven mapping, i.e. QEMU is sending requests to map/unmap pages.
However, this approach is really slow, so we want to move that into KVM.
Since H_PUT_TCE can be extremely performance sensitive (especially with
network adapters where
This adds real mode handlers for the H_PUT_TCE_INDIRECT and
H_STUFF_TCE hypercalls for QEMU emulated devices such as IBMVIO
devices or emulated PCI. These calls allow adding multiple entries
(up to 512) into the TCE table in one call, which saves time on
transition to/from real mode.
This adds a
This allows the host kernel to handle H_PUT_TCE, H_PUT_TCE_INDIRECT
and H_STUFF_TCE requests without passing them to QEMU, which saves time
on switching to QEMU and back.
Both real and virtual modes are supported. First the kernel tries to
handle a TCE request in real mode; if that fails, it