On 06/27/2013 06:39 AM, Kevin Hao wrote:
For an unknown relocation type, since the value of r4 is just the 8-bit
relocation type, the sum of r4 and r7 may yield an invalid memory
address. For example:
In the normal case:
r4 = c00x
r7 = 4000
r4 +
Hi Alexey,
On Thu, 27 Jun 2013 15:02:31 +1000 Alexey Kardashevskiy a...@ozlabs.ru wrote:
index c488da5..54192b2 100644
--- a/drivers/vfio/vfio.c
+++ b/drivers/vfio/vfio.c
@@ -1370,6 +1370,59 @@ static const struct file_operations vfio_device_fops = {
};
/**
+ * External user API,
On 06/27/2013 03:04 AM, Tejun Heo wrote:
Hey,
On Wed, Jun 26, 2013 at 11:58:48PM +0530, Srivatsa S. Bhat wrote:
Yes, we were discussing hot-unplug latency for use-cases such as
suspend/resume. We didn't want to make those operations slower in the
process of removing stop_machine() from
Hi Alexey,
On Thu, 27 Jun 2013 15:02:31 +1000 Alexey Kardashevskiy a...@ozlabs.ru wrote:
+/* Allows an external user (for example, KVM) to unlock an IOMMU group */
+static void vfio_group_del_external_user(struct file *filep)
+{
+ struct vfio_group *group = filep->private_data;
+
+
On 06/26/2013 07:09 PM, Ralf Baechle wrote:
On Wed, Jun 26, 2013 at 02:02:57AM +0530, Srivatsa S. Bhat wrote:
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the
VFIO is designed to be used via ioctls on file descriptors
returned by VFIO.
However in some situations support for an external user is required.
The first user is KVM on PPC64 (SPAPR TCE protocol) which is going to
use the existing VFIO groups for exclusive access in real/virtual mode
in the
Hi Alexey,
Thanks for the changes.
On Thu, 27 Jun 2013 17:14:20 +1000 Alexey Kardashevskiy a...@ozlabs.ru wrote:
diff --git a/include/linux/vfio.h b/include/linux/vfio.h
index ac8d488..7ee6575 100644
--- a/include/linux/vfio.h
+++ b/include/linux/vfio.h
@@ -90,4 +90,11 @@ extern void
Changes from v1:
- Add header size argument in the pstore write callback
instead of a separate API to return header size.
The patchset takes care of compressing oops messages while writing to NVRAM,
so that more oops data can be captured in the given space.
nvram_compress() and zip_oops() are used by the nvram_pstore_write
API to compress oops messages, so re-organise the functions
accordingly to avoid forward declarations.
Signed-off-by: Aruna Balakrishnaiah ar...@linux.vnet.ibm.com
---
arch/powerpc/platforms/pseries/nvram.c | 104
Header size is needed to distinguish between header and the dump data.
Incorporate the addition of new argument (hsize) in the pstore write
callback.
Signed-off-by: Aruna Balakrishnaiah ar...@linux.vnet.ibm.com
---
arch/powerpc/platforms/pseries/nvram.c |4 +++-
drivers/acpi/apei/erst.c
The patch set supports compression of oops messages while writing to NVRAM;
this helps capture more oops data in lnx,oops-log. The pstore file
for oops messages will be in decompressed format, making it readable.
In case compression fails, the patch takes care of copying the header added
It would also increase the latency of CPU-hotunplug operations.
Is that a big deal?
I thought that was the whole deal with this patchset - making cpu
hotunplugs lighter and faster mostly for powersaving. That said, just
removing stop_machine call would be a pretty good deal and I
On Thu, 2013-06-27 at 14:53 +1000, Alexey Kardashevskiy wrote:
2. remove locks from functions being called by VFIO. The whole table
is given to the user space so it is responsible now for races.
Sure but you still need to be careful that userspace cannot cause things
that crash the kernel.
On Thu, 2013-06-27 at 16:59 +1000, Stephen Rothwell wrote:
+/* Allows an external user (for example, KVM) to unlock an IOMMU group */
+static void vfio_group_del_external_user(struct file *filep)
+{
+ struct vfio_group *group = filep->private_data;
+
+ BUG_ON(filep->f_op !=
Hi Tony,
On Tuesday 25 June 2013 09:32 PM, Luck, Tony wrote:
Introducing header size in the pstore_write() API would need changes at
multiple places where it's being called. The idea is to move the
compression support to pstore infrastructure so that other platforms
could also make use of it.
Any
On 06/27/2013 02:24 PM, David Laight wrote:
It would also increase the latency of CPU-hotunplug operations.
Is that a big deal?
I thought that was the whole deal with this patchset - making cpu
hotunplugs lighter and faster mostly for powersaving. That said, just
removing stop_machine call
On 06/27/2013 07:42 PM, Benjamin Herrenschmidt wrote:
On Thu, 2013-06-27 at 16:59 +1000, Stephen Rothwell wrote:
+/* Allows an external user (for example, KVM) to unlock an IOMMU group */
+static void vfio_group_del_external_user(struct file *filep)
+{
+ struct vfio_group *group =
On 06/24/2013 04:58 PM, Michael Ellerman wrote:
In power_pmu_enable() we can use the existing out label to reduce the
number of return paths.
Signed-off-by: Michael Ellerman mich...@ellerman.id.au
Reviewed-by: Anshuman Khandual khand...@linux.vnet.ibm.com
---
On Sun, Jun 23, 2013 at 10:41:24PM -0600, Alex Williamson wrote:
On Mon, 2013-06-24 at 13:52 +1000, David Gibson wrote:
On Sat, Jun 22, 2013 at 08:28:06AM -0600, Alex Williamson wrote:
On Sat, 2013-06-22 at 22:03 +1000, David Gibson wrote:
On Thu, Jun 20, 2013 at 08:55:13AM -0600, Alex
On Wed, 2013-06-26 at 15:28 +0530, Anshuman Khandual wrote:
@@ -117,6 +117,7 @@
(EVENT_UNIT_MASK << EVENT_UNIT_SHIFT) | \
(EVENT_COMBINE_MASK << EVENT_COMBINE_SHIFT) | \
(EVENT_MARKED_MASK << EVENT_MARKED_SHIFT) | \
+
On Wed, 2013-06-26 at 14:08 +0530, Anshuman Khandual wrote:
On 06/24/2013 04:58 PM, Michael Ellerman wrote:
Add support for EBB (Event Based Branches) on 64-bit book3s. See the
included documentation for more details.
..
+
+
+Terminology
+-----------
+
+Throughout this document
Hello,
On 6/24/2013 10:25 AM, Aneesh Kumar K.V wrote:
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
We want to use CMA for allocating the hash page table and real mode area for
PPC64. Hence move the DMA contiguous related changes into a separate config
so that ppc64 can enable CMA without
On Thu, Jun 27, 2013 at 02:05:39PM +1000, Stephen Rothwell wrote:
Hi Michael,
On Tue, 25 Jun 2013 17:47:56 +1000 Michael Ellerman mich...@ellerman.id.au
wrote:
-void tm_unavailable_exception(struct pt_regs *regs)
+void facility_unavailable_exception(struct pt_regs *regs)
{
+
On Tue, Jun 25, 2013 at 10:35:33PM +0800, Runzhen Wang wrote:
Power7 supports over 530 different perf events but only a small
subset of these can be specified by name, for the remaining
events, we must specify them by their raw code:
Hi Runzhen,
This is looking good. Sorry one last request
On Tue, Jun 25, 2013 at 10:35:32PM +0800, Runzhen Wang wrote:
In the Power7 PMU guide:
https://www.power.org/documentation/commonly-used-metrics-for-performance-analysis/
PM_BRU_MPRED is referred to as PM_BR_MPRED.
This fixes the typo by changing the name of the event in the kernel
and
On Thu, Jun 27, 2013 at 1:32 AM, Aruna Balakrishnaiah
ar...@linux.vnet.ibm.com wrote:
Changes from v1:
- Add header size argument in the pstore write callback
instead of a separate API to return header size.
The patchset takes care of compressing oops messages
On Thu, 2013-06-27 at 17:14 +1000, Alexey Kardashevskiy wrote:
VFIO is designed to be used via ioctls on file descriptors
returned by VFIO.
However in some situations support for an external user is required.
The first user is KVM on PPC64 (SPAPR TCE protocol) which is going to
use the
On 06/27/2013 12:02:36 AM, Alexey Kardashevskiy wrote:
+/*
+ * The KVM guest can be backed with 16MB pages.
+ * In this case, we cannot do page counting from the real mode
+ * as the compound pages are used - they are linked in a list
+ * with pointers as virtual addresses which are inaccessible
Hi,
This patchset is a first step towards removing stop_machine() from the
CPU hotplug offline path. It introduces a set of APIs (as a replacement to
preempt_disable()/preempt_enable()) to synchronize with CPU hotplug from
atomic contexts.
The motivation behind getting rid of stop_machine() is
The current CPU offline code uses stop_machine() internally. And disabling
preemption prevents stop_machine() from taking effect, thus also preventing
CPUs from going offline, as a side effect.
There are places where this side-effect of preempt_disable() (or equivalent)
is used to synchronize
We have quite a few APIs now which help synchronize with CPU hotplug.
Among them, get/put_online_cpus() is the oldest and the most well-known,
so no problems there. By extension, it's easy to comprehend the new
set: get/put_online_cpus_atomic().
But there is yet another set, which might appear
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
So add documentation to recommend using the new get/put_online_cpus_atomic()
APIs to prevent CPUs from going offline, while invoking from
Add a debugging infrastructure to warn if an atomic hotplug reader has not
invoked get_online_cpus_atomic() before traversing/accessing the
cpu_online_mask. Encapsulate these checks under a new debug config option
DEBUG_HOTPLUG_CPU.
This debugging infrastructure proves useful in the tree-wide
When bringing a secondary CPU online, the task running on the CPU coming up
sets itself in the cpu_online_mask. This is safe even though this task is not
the hotplug writer task.
But it is kinda hard to teach this to the CPU hotplug debug infrastructure,
and if we get it wrong, we risk making the
Now that we have a debug infrastructure in place to detect cases where
get/put_online_cpus_atomic() had to be used, add these checks at the
right spots to help catch places where we missed converting to the new
APIs.
Cc: Rusty Russell ru...@rustcorp.com.au
Cc: Alex Shi alex@intel.com
Cc:
Sometimes, we have situations where the synchronization design of a
particular subsystem handles CPU hotplug properly, but the details are
non-trivial, making it hard to teach this to the rudimentary hotplug
locking validator. In such cases, it would be useful to have a set of
_nocheck() variants
Now that we have all the pieces of the CPU hotplug debug infrastructure
in place, expose the feature by growing a new Kconfig option,
CONFIG_DEBUG_HOTPLUG_CPU.
Cc: Andrew Morton a...@linux-foundation.org
Cc: Paul E. McKenney paul.mcken...@linaro.org
Cc: Akinobu Mita akinobu.m...@gmail.com
Cc:
Convert the macros in the CPU hotplug code to static inline C functions.
Cc: Thomas Gleixner t...@linutronix.de
Cc: Andrew Morton a...@linux-foundation.org
Cc: Tejun Heo t...@kernel.org
Cc: Rafael J. Wysocki r...@sisk.pl
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
---
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Andrew Morton
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Ingo Molnar
We need not use the raw_spin_lock_irqsave/restore primitives because
all CPU_DYING notifiers run with interrupts disabled. So just use
raw_spin_lock/unlock.
Cc: Ingo Molnar mi...@redhat.com
Cc: Peter Zijlstra pet...@infradead.org
Signed-off-by: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Ingo Molnar
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Thomas Gleixner
On 06/26/2013 09:00:33 PM, Kevin Hao wrote:
This is based on the code in head_44x.S. Since we always align to
256M before mapping the PAGE_OFFSET for a relocatable kernel, we also
change the init tlb map to 256M size.
Why 256M?
This tightens the alignment requirement for dynamic
In RCU code, rcu_implicit_dynticks_qs() checks if a CPU is offline,
while being protected by a spinlock. At first, it appears as if we need to
use the get/put_online_cpus_atomic() APIs to properly synchronize with CPU
hotplug, once we get rid of stop_machine(). However, RCU has adequate
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Thomas Gleixner
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Frederic Weisbecker
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: John Stultz
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Thomas Gleixner
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: David S. Miller
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Jens Axboe
The percpu-counter-sum code does a for_each_online_cpu() protected
by a spinlock, which makes it look like it needs to use
get/put_online_cpus_atomic(), going forward. However, the code has
adequate synchronization with CPU hotplug, via a hotplug callback
and the fbc->lock.
So use
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Hoang-Nam Nguyen
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Ingo Molnar
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Robert Love
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Acked-by: David Daney
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Thomas Gleixner
The CPU_DYING notifier modifies the per-cpu pointer pmu->box, and this can
race with functions such as uncore_pmu_to_box() and uncore_pci_remove() when
we remove stop_machine() from the CPU offline path. So protect them using
get/put_online_cpus_atomic().
Cc: Peter Zijlstra a.p.zijls...@chello.nl
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Acked-by: Paolo Bonzini
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Konrad Rzeszutek Wilk
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Richard Henderson
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Mike Frysinger
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Acked-by: Jesper Nilsson
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Richard Kuo
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Tony Luck
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Tony Luck
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Hirokazu Takata
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Ralf Baechle
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: David Howells
The function migrate_irqs() is called with interrupts disabled
and hence it's not safe to do GFP_KERNEL allocations inside it,
because they can sleep. So change the gfp mask to GFP_ATOMIC.
Cc: Benjamin Herrenschmidt b...@kernel.crashing.org
Cc: Michael Ellerman mich...@ellerman.id.au
Cc: Paul
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Benjamin Herrenschmidt
Bringing a secondary CPU online is a special case in which accessing
the cpu_online_mask is safe, even though the task (which is running on the
CPU coming online) is not the hotplug writer.
It is a little hard to teach this to the debugging checks under
CONFIG_DEBUG_HOTPLUG_CPU. But luckily
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Paul Mundt
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: David S. Miller
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Chris Metcalf
On Fri, Jun 28, 2013 at 01:25:17AM +0530, Srivatsa S. Bhat wrote:
In RCU code, rcu_implicit_dynticks_qs() checks if a CPU is offline,
while being protected by a spinlock. At first, it appears as if we need to
use the get/put_online_cpus_atomic() APIs to properly synchronize with CPU
hotplug,
On 06/21/2013 12:20 PM, Santosh Shilimkar wrote:
On Friday 21 June 2013 05:04 AM, Sebastian Andrzej Siewior wrote:
On 06/21/2013 02:52 AM, Santosh Shilimkar wrote:
diff --git a/arch/microblaze/kernel/prom.c b/arch/microblaze/kernel/prom.c
index 0a2c68f..62e2e8f 100644
---
commit f8f7d63fd96ead101415a1302035137a866f8998 (powerpc/eeh: Trace eeh
device from I/O cache) broke EEH on pseries for devices that were
present during boot and have not been hotplugged/DLPARed.
eeh_check_failure will get the eeh_dev from the cache, and will get
NULL. eeh_addr_cache_build adds
On Thu, Mar 14, 2013 at 04:41:13PM +0800, Chunhe Lan wrote:
Adding pcie error interrupt edac support for mpc85xx, p3041, p4080,
and p5020. The mpc85xx uses the legacy interrupt report mechanism -
the error interrupts are reported directly to mpic. While, the p3041/
p4080/p5020 attaches the
On Tue, Mar 19, 2013 at 01:14:22AM -0400, Benjamin Collins wrote:
This isn't specifically needed in order to build the kernel. It's
stored in flash with firmware. However, keep it in the kernel for
reference (and to have an example for fsl_dpa device tree usage).
Signed-off-by: Ben Collins
On 06/28/2013 01:44 AM, Alex Williamson wrote:
On Thu, 2013-06-27 at 17:14 +1000, Alexey Kardashevskiy wrote:
VFIO is designed to be used via ioctls on file descriptors
returned by VFIO.
However in some situations support for an external user is required.
The first user is KVM on PPC64
On Tue, Mar 19, 2013 at 10:58:25AM +0100, Sergey Gerasimov wrote:
For MPC831x the bus probing function also needs the fixup to assign
addresses to the PCI devices as it was for MPC85xx and MPC86xx.
The fixup of the bridge vendor and device ID should be done early in
PCI probing. Else the
On Fri, 2013-06-28 at 08:57 +1000, Alexey Kardashevskiy wrote:
On 06/28/2013 01:44 AM, Alex Williamson wrote:
On Thu, 2013-06-27 at 17:14 +1000, Alexey Kardashevskiy wrote:
VFIO is designed to be used via ioctls on file descriptors
returned by VFIO.
However in some situations support
On Thu, Jun 27, 2013 at 02:58:34PM -0500, Scott Wood wrote:
On 06/26/2013 09:00:33 PM, Kevin Hao wrote:
This is based on the codes in the head_44x.S. Since we always align to
256M before mapping the PAGE_OFFSET for a relocatable kernel, we also
change the init tlb map to 256M size.
Why
On 06/28/2013 10:41 AM, Alex Williamson wrote:
On Fri, 2013-06-28 at 08:57 +1000, Alexey Kardashevskiy wrote:
On 06/28/2013 01:44 AM, Alex Williamson wrote:
On Thu, 2013-06-27 at 17:14 +1000, Alexey Kardashevskiy wrote:
VFIO is designed to be used via ioctls on file descriptors
returned by
On 06/27/2013 08:36:37 PM, Kevin Hao wrote:
On Thu, Jun 27, 2013 at 02:58:34PM -0500, Scott Wood wrote:
On 06/26/2013 09:00:33 PM, Kevin Hao wrote:
This is based on the codes in the head_44x.S. Since we always
align to
256M before mapping the PAGE_OFFSET for a relocatable kernel, we
also
On 06/27/2013 08:36:37 PM, Kevin Hao wrote:
On Thu, Jun 27, 2013 at 02:58:34PM -0500, Scott Wood wrote:
On 06/26/2013 09:00:33 PM, Kevin Hao wrote:
This is based on the codes in the head_44x.S. Since we always
align to
256M before mapping the PAGE_OFFSET for a relocatable kernel, we
also
Update MAINTAINERS to reflect recent changes.
Signed-off-by: Gavin Shan sha...@linux.vnet.ibm.com
---
MAINTAINERS |4
1 files changed, 4 insertions(+), 0 deletions(-)
diff --git a/MAINTAINERS b/MAINTAINERS
index 5be702c..b447392 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -6146,10
On Fri, 2013-06-28 at 09:59 +0800, Gavin Shan wrote:
Update MAINTAINERS to reflect recent changes.
Signed-off-by: Gavin Shan sha...@linux.vnet.ibm.com
---
MAINTAINERS |4
1 files changed, 4 insertions(+), 0 deletions(-)
diff --git a/MAINTAINERS b/MAINTAINERS
index
On Fri, Jun 28, 2013 at 12:11:29PM +1000, Benjamin Herrenschmidt wrote:
On Fri, 2013-06-28 at 09:59 +0800, Gavin Shan wrote:
Update MAINTAINERS to reflect recent changes.
Signed-off-by: Gavin Shan sha...@linux.vnet.ibm.com
---
MAINTAINERS |4
1 files changed, 4 insertions(+), 0
On 06/26/2013 09:00:34 PM, Kevin Hao wrote:
diff --git a/arch/powerpc/include/asm/mmu-book3e.h
b/arch/powerpc/include/asm/mmu-book3e.h
index 936db36..bf422db 100644
--- a/arch/powerpc/include/asm/mmu-book3e.h
+++ b/arch/powerpc/include/asm/mmu-book3e.h
@@ -214,6 +214,11 @@
#define
Update MAINTAINERS to reflect recent changes.
Signed-off-by: Gavin Shan sha...@linux.vnet.ibm.com
---
MAINTAINERS |7 +++
1 files changed, 7 insertions(+), 0 deletions(-)
diff --git a/MAINTAINERS b/MAINTAINERS
index 5be702c..c724a3a 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -6149,7
On Wed, Apr 03, 2013 at 10:03:18AM +0800, Hongtao Jia wrote:
The MPIC version 2.0 has a MSI errata (errata PIC1 of mpc8544), It causes
that neither MSI nor MSI-X can work fine. This is a workaround to allow
MSI-X to function properly.
Signed-off-by: Liu Shuo soniccat@gmail.com
On Fri, 2013-06-28 at 11:38 +1000, Alexey Kardashevskiy wrote:
On 06/28/2013 10:41 AM, Alex Williamson wrote:
On Fri, 2013-06-28 at 08:57 +1000, Alexey Kardashevskiy wrote:
On 06/28/2013 01:44 AM, Alex Williamson wrote:
On Thu, 2013-06-27 at 17:14 +1000, Alexey Kardashevskiy wrote:
VFIO
On 06/28/2013 12:37 PM, Alex Williamson wrote:
On Fri, 2013-06-28 at 11:38 +1000, Alexey Kardashevskiy wrote:
On 06/28/2013 10:41 AM, Alex Williamson wrote:
On Fri, 2013-06-28 at 08:57 +1000, Alexey Kardashevskiy wrote:
On 06/28/2013 01:44 AM, Alex Williamson wrote:
On Thu, 2013-06-27 at
On 06/27/2013 05:22 PM, Michael Ellerman wrote:
On Wed, 2013-06-26 at 15:28 +0530, Anshuman Khandual wrote:
@@ -117,6 +117,7 @@
(EVENT_UNIT_MASK << EVENT_UNIT_SHIFT) | \
(EVENT_COMBINE_MASK << EVENT_COMBINE_SHIFT) | \
(EVENT_MARKED_MASK
Hi Michael,
On Fri, 28 Jun 2013 00:16:31 +1000 Michael Ellerman mich...@ellerman.id.au
wrote:
On Thu, Jun 27, 2013 at 02:05:39PM +1000, Stephen Rothwell wrote:
On Tue, 25 Jun 2013 17:47:56 +1000 Michael Ellerman
mich...@ellerman.id.au wrote:
-void tm_unavailable_exception(struct
On Fri, 2013-06-28 at 10:25 +0800, Gavin Shan wrote:
Update MAINTAINERS to reflect recent changes.
Signed-off-by: Gavin Shan sha...@linux.vnet.ibm.com
---
MAINTAINERS |7 +++
1 files changed, 7 insertions(+), 0 deletions(-)
diff --git a/MAINTAINERS b/MAINTAINERS
index
On 06/24/2013 04:58 PM, Michael Ellerman wrote:
In power_pmu_enable() we still enable the PMU even if we have zero
events. This should have no effect but doesn't make much sense. Instead
just return after telling the hypervisor that we are not using the PMCs.
Signed-off-by: Michael Ellerman
On Fri, Jun 28, 2013 at 02:56:53PM +1000, Benjamin Herrenschmidt wrote:
On Fri, 2013-06-28 at 10:25 +0800, Gavin Shan wrote:
Update MAINTAINERS to reflect recent changes.
Signed-off-by: Gavin Shan sha...@linux.vnet.ibm.com
---
MAINTAINERS |7 +++
1 files changed, 7 insertions(+), 0