Even though 'compatability' has a dedicated entry in the Wiktionary,
it's listed as 'Misspelling of compatibility'. Fix it.
Signed-off-by: Laurent Pinchart laurent.pinch...@ideasonboard.com
---
arch/metag/include/asm/elf.h | 2 +-
arch/powerpc/kvm/book3s.c    | 2 +-
On Wed, May 27, 2015 at 03:05:42PM +0300, Laurent Pinchart wrote:
Even though 'compatability' has a dedicated entry in the Wiktionary,
it's listed as 'Misspelling of compatibility'. Fix it.
Signed-off-by: Laurent Pinchart laurent.pinch...@ideasonboard.com
---
arch/metag/include/asm/elf.h
From: Laurent Pinchart laurent.pinch...@ideasonboard.com
Date: Wed, 27 May 2015 15:05:42 +0300
Even though 'compatability' has a dedicated entry in the Wiktionary,
it's listed as 'Misspelling of compatibility'. Fix it.
Signed-off-by: Laurent Pinchart laurent.pinch...@ideasonboard.com
This patch series provides a way to use more of the capacity of each
processor core when running guests configured with threads=1, 2 or 4
on a POWER8 host with HV KVM, without having to change the static
micro-threading (the official name for split-core) mode for the whole
machine. The problem
When running a virtual core of a guest that is configured with fewer
threads per core than the physical cores have, the extra physical
threads are currently unused. This makes it possible to use them to
run one or more other virtual cores from the same guest when certain
conditions are met. This
This builds on the ability to run more than one vcore on a physical
core by using the micro-threading (split-core) modes of the POWER8
chip. Previously, only vcores from the same VM could be run together,
and (on POWER8) only if they had just one thread per core. With the
ability to split the
In 64 bit kernels, the Fixed Point Exception Register (XER) is a 64
bit field (e.g. in kvm_regs and kvm_vcpu_arch) and in most places it is
accessed as such.
This patch corrects places where it is accessed as a 32 bit field by a
64 bit kernel. In some cases this is via a 32 bit load or store
On 26.05.15 02:27, Sam Bobroff wrote:
In 64 bit kernels, the Fixed Point Exception Register (XER) is a 64
bit field (e.g. in kvm_regs and kvm_vcpu_arch) and in most places it is
accessed as such.
This patch corrects places where it is accessed as a 32 bit field by a
64 bit kernel. In
This was signaled by a static code analysis tool.
Signed-off-by: Laurentiu Tudor laurentiu.tu...@freescale.com
Reviewed-by: Scott Wood scottw...@freescale.com
---
arch/powerpc/kvm/e500_mmu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/kvm/e500_mmu.c
On 25.05.15 10:48, Laurentiu Tudor wrote:
This was signaled by a static code analysis tool.
Signed-off-by: Laurentiu Tudor laurentiu.tu...@freescale.com
Reviewed-by: Scott Wood scottw...@freescale.com
Thanks, applied to kvm-ppc-queue.
Alex
On 22.05.15 11:41, Thomas Huth wrote:
Since the PPC970 support has been removed from the kvm-hv kernel
module recently, we should also reflect this change in the help
text of the corresponding Kconfig option.
Signed-off-by: Thomas Huth th...@redhat.com
Thanks, applied to kvm-ppc-queue.
On 22.05.15 09:25, Thomas Huth wrote:
When compiling the KVM code for POWER with make C=1, sparse
complains about functions missing proper prototypes and a 64-bit
constant missing the ULL prefix. Let's fix this by making the
functions static or by including the proper header with the
On 21.05.15 21:37, Scott Wood wrote:
On Thu, 2015-05-21 at 16:26 +0300, Laurentiu Tudor wrote:
If passed a larger page size lookup_linux_ptep()
may fail, so add a check for that and bail out
if that's the case.
This was found with the help of a static
code analysis tool.
Signed-off-by:
On 26.05.15 02:14, Sam Bobroff wrote:
On Mon, May 25, 2015 at 11:08:08PM +0200, Alexander Graf wrote:
On 20.05.15 07:26, Sam Bobroff wrote:
In 64 bit kernels, the Fixed Point Exception Register (XER) is a 64
bit field (e.g. in kvm_regs and kvm_vcpu_arch) and in most places it is
accessed
On 18.05.15 14:44, Laurentiu Tudor wrote:
On this switch branch the regs initialization
doesn't happen so add it.
This was found with the help of a static
code analysis tool.
Signed-off-by: Laurentiu Tudor laurentiu.tu...@freescale.com
Cc: Scott Wood scottw...@freescale.com
Cc: Mihai
On Mon, May 25, 2015 at 11:08:08PM +0200, Alexander Graf wrote:
On 20.05.15 07:26, Sam Bobroff wrote:
In 64 bit kernels, the Fixed Point Exception Register (XER) is a 64
bit field (e.g. in kvm_regs and kvm_vcpu_arch) and in most places it is
accessed as such.
This patch corrects
In 64 bit kernels, the Fixed Point Exception Register (XER) is a 64
bit field (e.g. in kvm_regs and kvm_vcpu_arch) and in most places it is
accessed as such.
This patch corrects places where it is accessed as a 32 bit field by a
64 bit kernel. In some cases this is via a 32 bit load or store
When compiling the KVM code for POWER with make C=1, sparse
complains about functions missing proper prototypes and a 64-bit
constant missing the ULL prefix. Let's fix this by making the
functions static or by including the proper header with the
prototypes, and by appending a ULL prefix to the
Since the PPC970 support has been removed from the kvm-hv kernel
module recently, we should also reflect this change in the help
text of the corresponding Kconfig option.
Signed-off-by: Thomas Huth th...@redhat.com
---
arch/powerpc/kvm/Kconfig | 8
1 file changed, 4 insertions(+), 4
This was signaled by a static code analysis tool.
Signed-off-by: Laurentiu Tudor laurentiu.tu...@freescale.com
---
arch/powerpc/kvm/e500_mmu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/kvm/e500_mmu.c b/arch/powerpc/kvm/e500_mmu.c
index 50860e9..29911a0
On Fri, 2015-05-22 at 17:46 +0300, Laurentiu Tudor wrote:
This was signaled by a static code analysis tool.
Signed-off-by: Laurentiu Tudor laurentiu.tu...@freescale.com
---
arch/powerpc/kvm/e500_mmu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git
In guest_exit_cont we call kvmhv_commence_exit which expects the trap
number as the argument. However r3 doesn't contain the trap number at
this point and as a result we would be calling the function with a
spurious trap number.
Fix this by copying r12 into r3 before calling kvmhv_commence_exit
If passed a larger page size lookup_linux_ptep()
may fail, so add a check for that and bail out
if that's the case.
This was found with the help of a static
code analysis tool.
Signed-off-by: Mihai Caraman mihai.cara...@freescale.com
Signed-off-by: Laurentiu Tudor laurentiu.tu...@freescale.com
On Thu, 2015-05-21 at 16:26 +0300, Laurentiu Tudor wrote:
If passed a larger page size lookup_linux_ptep()
may fail, so add a check for that and bail out
if that's the case.
This was found with the help of a static
code analysis tool.
Signed-off-by: Mihai Caraman
On Wed, May 20, 2015 at 03:26:12PM +1000, Sam Bobroff wrote:
In 64 bit kernels, the Fixed Point Exception Register (XER) is a 64
bit field (e.g. in kvm_regs and kvm_vcpu_arch) and in most places it is
accessed as such.
This patch corrects places where it is accessed as a 32 bit field by a
On Wed, 2015-05-20 at 15:26 +1000, Sam Bobroff wrote:
In 64 bit kernels, the Fixed Point Exception Register (XER) is a 64
bit field (e.g. in kvm_regs and kvm_vcpu_arch) and in most places it is
accessed as such.
This patch corrects places where it is accessed as a 32 bit field by a
64 bit
On Wed, May 20, 2015 at 05:35:08PM -0500, Scott Wood wrote:
It's nominally a 64-bit register, but the upper 32 bits are reserved in
ISA 2.06. Do newer ISAs or certain implementations define things in the
upper 32 bits, or is this just about the asm accesses being wrong on
big-endian?
It's
powerpc provides hcall events that also provides insights into guest
behaviour. Enhance perf kvm to record and analyze hcall events.
- To trace hcall events :
perf kvm stat record
- To show the results :
perf kvm stat report --event=hcall
The result shows the number of hypervisor calls
To analyze the kvm exits with perf, we will need to map the exit codes
with the exit reasons. Such a mapping exists today in trace_book3s.h.
Currently its not exported to perf.
This patch moves these kvm exit reasons and their mapping from
arch/powerpc/kvm/trace_book3s.h to
For perf to analyze the KVM events like hcalls, we need the
hypervisor calls and their codes to be exported through uapi.
This patch moves most of the pSeries hcall codes from
arch/powerpc/include/asm/hvcall.h to
arch/powerpc/include/uapi/asm/pseries_hcalls.h.
It also moves the mapping
Hi Scott,
On 05/13/2015 08:52 AM, Scott Wood wrote:
On Tue, 2015-05-12 at 21:34 +0530, Hemant Kumar wrote:
Hi Scott,
On 05/12/2015 03:38 AM, Scott Wood wrote:
On Fri, 2015-05-08 at 06:37 +0530, Hemant Kumar wrote:
diff --git a/arch/powerpc/include/uapi/asm/kvm_perf.h
On 09/05/2015 21:50, Alexander Graf wrote:
Reviewed-by: Alexander Graf ag...@suse.de
Paolo, can you please take this patch into 4.1 directly?
Sure.
Paolo
* Hemant Kumar hem...@linux.vnet.ibm.com wrote:
# perf kvm stat report -p 60515
Analyze events for pid(s) 60515, all VCPUs:
     VM-EXIT          Samples  Samples%   Time%   Min Time   Max Time   Avg time

     H_DATA_STORAGE      5006    35.30%   0.13%     1.94us
Both functions are doing the same thing - looking up the struct
kvm_vcpu pointer for a given vCPU ID. So there's no need for the
kvmppc_find_vcpu() function, simply use the common function instead.
Signed-off-by: Thomas Huth th...@redhat.com
---
arch/powerpc/kvm/book3s_hv.c | 22
This rework allows to avoid some cycles by not disabling interrupts
twice.
Christian Borntraeger (2):
KVM: provide irq_unsafe kvm_guest_{enter|exit}
KVM: arm/mips/x86/power use __kvm_guest_{enter|exit}
arch/arm/kvm/arm.c | 4 ++--
arch/mips/kvm/mips.c | 4 ++--
On 30/04/2015 13:43, Christian Borntraeger wrote:
+/* must be called with irqs disabled */
+static inline void __kvm_guest_enter(void)
{
- unsigned long flags;
-
- BUG_ON(preemptible());
Please keep the BUG_ON() in kvm_guest_enter. Otherwise looks good, thanks!
Paolo
-
On 30.04.2015 14:02, Christian Borntraeger wrote:
On 30.04.2015 14:01, Christian Borntraeger wrote:
On 30.04.2015 13:50, Paolo Bonzini wrote:
On 30/04/2015 13:43, Christian Borntraeger wrote:
+/* must be called with irqs disabled */
+static inline void __kvm_guest_enter(void)
On 30.04.2015 13:50, Paolo Bonzini wrote:
On 30/04/2015 13:43, Christian Borntraeger wrote:
+/* must be called with irqs disabled */
+static inline void __kvm_guest_enter(void)
{
-unsigned long flags;
-
-BUG_ON(preemptible());
Please keep the BUG_ON() in kvm_guest_enter.
On 30.04.2015 14:01, Christian Borntraeger wrote:
On 30.04.2015 13:50, Paolo Bonzini wrote:
On 30/04/2015 13:43, Christian Borntraeger wrote:
+/* must be called with irqs disabled */
+static inline void __kvm_guest_enter(void)
{
- unsigned long flags;
-
-
On 30.04.2015 14:40, Paolo Bonzini wrote:
Signed-off-by: Paolo Bonzini pbonz...@redhat.com
Reviewed-by: Christian Borntraeger borntrae...@de.ibm.com
but no way to test it
---
arch/powerpc/kvm/booke.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git
On 28/04/2015 16:10, Christian Borntraeger wrote:
Alternatively, the irq-disabled versions could be called
__kvm_guest_{enter,exit}. Then you can use those directly when it makes
sense.
..having a special __kvm_guest_{enter,exit} without the WARN_ON might be even
the cheapest way. In
On 28.04.2015 13:37, Paolo Bonzini wrote:
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -891,7 +891,9 @@ int kvmppc_handle_exit_pr(struct kvm_run *run, struct
kvm_vcpu *vcpu,
/* We get here with MSR.EE=1 */
+local_irq_disable();
On 28/04/2015 12:32, Christian Borntraeger wrote:
Some architectures already have irq disabled when calling
kvm_guest_exit. Push down the disabling into the architectures
to avoid double disabling. This also allows to replace
irq_save with irq_disable which might be cheaper.
arm and mips
This fixes a regression introduced in commit 25fedfca94cf, "KVM: PPC:
Book3S HV: Move vcore preemption point up into kvmppc_run_vcpu", which
leads to a user-triggerable oops.
In the case where we try to run a vcore on a physical core that is
not in single-threaded mode, or the vcore has too many
I was able to get rid of some nanoseconds for a guest exit loop
on s390. I did my best to not break other architectures but
review and comments on the general approach is welcome.
Downside is that the existing irq_save things will just work
no matter what the callers have done, the new code must
Some architectures already have irq disabled when calling
kvm_guest_exit. Push down the disabling into the architectures
to avoid double disabling. This also allows to replace
irq_save with irq_disable which might be cheaper.
arm and mips already have interrupts disabled. s390/power/x86
need
local_irq_disable can be cheaper than local_irq_save, especially
when done only once instead of twice. We can push down the
local_irq_save (and replace it with local_irq_disable) to
save some cycles.
x86, mips and arm already disable the interrupts before calling
kvm_guest_enter. Here we save one
On Tue, Apr 28, 2015 at 10:36:52AM +0530, Aneesh Kumar K.V wrote:
Paul Mackerras pau...@samba.org writes:
The reference (R) and change (C) bits in a HPT entry can be set by
hardware at any time up until the HPTE is invalidated and the TLB
invalidation sequence has completed. This means
This fixes a bug in the tracking of pages that get modified by the
guest. If the guest creates a large-page HPTE, writes to memory
somewhere within the large page, and then removes the HPTE, we only
record the modified state for the first normal page within the large
page, when in fact the guest
The reference (R) and change (C) bits in a HPT entry can be set by
hardware at any time up until the HPTE is invalidated and the TLB
invalidation sequence has completed. This means that when removing
a HPTE, we need to read the HPTE after the invalidation sequence has
completed in order to obtain
This adds implementations for the H_CLEAR_REF (test and clear reference
bit) and H_CLEAR_MOD (test and clear changed bit) hypercalls.
When clearing the reference or change bit in the guest view of the HPTE,
we also have to clear it in the real HPTE so that we can detect future
references or
On Wed, Apr 15, 2015 at 10:16:41PM +0200, Alexander Graf wrote:
On 14.04.15 13:56, Paul Mackerras wrote:
Did you forget to push it out or something? Your kvm-ppc-queue branch
is still at 4.0-rc1 as far as I can see.
Oops, not sure how that happened. Does it show up correctly for you
On powerpc, kvm tracks both the guest steal time as well as the time
when guest was idle and this gets sent in to the guest through DTL. The
guest accounts these entries as either steal time or idle time based on
the last running task. Since the true guest idle status is not visible
to the host,
Report guest steal time in host task statistics. On x86, this is just
the scheduler run_delay.
Signed-off-by: Naveen N. Rao naveen.n@linux.vnet.ibm.com
---
arch/x86/kvm/x86.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 0ee725f..737b0e4
Steal time accounts the time duration during which a guest vcpu was ready to
run, but was not scheduled to run by the hypervisor. This is particularly
relevant in cloud environment where customers would want to use this as an
indicator that their guests are being throttled. However, as it stands
Introduce a field in /proc/pid/stat to expose guest steal time.
Signed-off-by: Naveen N. Rao naveen.n@linux.vnet.ibm.com
---
fs/proc/array.c | 6 ++
include/linux/sched.h | 7 +++
kernel/fork.c | 2 +-
3 files changed, 14 insertions(+), 1 deletion(-)
diff --git
On 2015/04/22 01:05PM, Christian Borntraeger wrote:
Am 22.04.2015 um 12:24 schrieb Naveen N. Rao:
Steal time accounts the time duration during which a guest vcpu was ready to
run, but was not scheduled to run by the hypervisor. This is particularly
relevant in cloud environment where
On Tue, 21 Apr 2015 10:41:51 +1000,
David Gibson da...@gibson.dropbear.id.au wrote:
On POWER, storage caching is usually configured via the MMU - attributes
such as cache-inhibited are stored in the TLB and the hashed page table.
This makes correctly performing cache inhibited IO accesses
On Tue, Apr 21, 2015 at 08:37:02AM +0200, Thomas Huth wrote:
On Tue, 21 Apr 2015 10:41:51 +1000,
David Gibson da...@gibson.dropbear.id.au wrote:
On POWER, storage caching is usually configured via the MMU - attributes
such as cache-inhibited are stored in the TLB and the hashed page
From: David Gibson da...@gibson.dropbear.id.au
On POWER, storage caching is usually configured via the MMU - attributes
such as cache-inhibited are stored in the TLB and the hashed page table.
This makes correctly performing cache inhibited IO accesses awkward when
the MMU is turned off (real
Hi Paolo / Marcelo,
This is my current patch queue for ppc. Please pull.
Alex
The following changes since commit b79013b2449c23f1f505bdf39c5a6c330338b244:
Merge tag 'staging-4.1-rc1' of
git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging (2015-04-13
17:37:33 -0700)
are
From: Suresh Warrier warr...@linux.vnet.ibm.com
Interrupt-based hypercalls return H_TOO_HARD to inform KVM that it needs
to switch to the host to complete the rest of hypercall function in
virtual mode. This patch ports the virtual mode ICS/ICP reject and resend
functions to be runnable in
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
This adds helper routines for locking and unlocking HPTEs, and uses
them in the rest of the code. We don't change any locking rules in
this patch.
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Signed-off-by: Paul Mackerras
From: Paul Mackerras pau...@samba.org
This replaces the assembler code for kvmhv_commence_exit() with C code
in book3s_hv_builtin.c. It also moves the IPI sending code that was
in book3s_hv_rm_xics.c into a new kvmhv_rm_send_ipi() function so it
can be used by kvmhv_commence_exit() as well as
From: Paul Mackerras pau...@samba.org
When running a multi-threaded guest and vcpu 0 in a virtual core
is not running in the guest (i.e. it is busy elsewhere in the host),
thread 0 of the physical core will switch the MMU to the guest and
then go to nap mode in the code at kvm_do_nap. If the
From: Paul Mackerras pau...@samba.org
Currently, the entry_exit_count field in the kvmppc_vcore struct
contains two 8-bit counts, one of the threads that have started entering
the guest, and one of the threads that have started exiting the guest.
This changes it to an entry_exit_map field which
From: Suresh Warrier warr...@linux.vnet.ibm.com
Add two counters to count how often we generate real-mode ICS resend
and reject events. The counters provide some performance statistics
that could be used in the future to consider if the real mode functions
need further optimizing. The counters
From: Suresh E. Warrier warr...@linux.vnet.ibm.com
Add counters to track number of times we switch from guest real mode
to host virtual mode during an interrupt-related hyper call because the
hypercall requires actions that cannot be completed in real mode. This
will help when making
From: Paul Mackerras pau...@samba.org
Previously, if kvmppc_run_core() was running a VCPU that needed a VPA
update (i.e. one of its 3 virtual processor areas needed to be pinned
in memory so the host real mode code can update it on guest entry and
exit), we would drop the vcore lock and do the
From: Paul Mackerras pau...@samba.org
* Remove unused kvmppc_vcore::n_busy field.
* Remove setting of RMOR, since it was only used on PPC970 and the
PPC970 KVM support has been removed.
* Don't use r1 or r2 in setting the runlatch since they are
conventionally reserved for other things; use
From: Michael Ellerman mich...@ellerman.id.au
Some PowerNV systems include a hardware random-number generator.
This HWRNG is present on POWER7+ and POWER8 chips and is capable of
generating one 64-bit random number every microsecond. The random
numbers are produced by sampling a set of 64
From: Paul Mackerras pau...@samba.org
We can tell when a secondary thread has finished running a guest by
the fact that it clears its kvm_hstate.kvm_vcpu pointer, so there
is no real need for the nap_count field in the kvmppc_vcore struct.
This changes kvmppc_wait_for_nap to poll the
From: Paul Mackerras pau...@samba.org
This reads the timebase at various points in the real-mode guest
entry/exit code and uses that to accumulate total, minimum and
maximum time spent in those parts of the code. Currently these
times are accumulated per vcpu in 5 parts of the code:
* rm_entry
From: Paul Mackerras pau...@samba.org
Rather than calling cond_resched() in kvmppc_run_core() before doing
the post-processing for the vcpus that we have just run (that is,
calling kvmppc_handle_exit_hv(), kvmppc_set_timer(), etc.), we now do
that post-processing before calling cond_resched(),
From: Suresh Warrier warr...@linux.vnet.ibm.com
Replaces the ICS mutex lock with a spin lock since we will be porting
these routines to real mode. Note that we need to disable interrupts
before we take the lock in anticipation of the fact that on the guest
side, we are running in the context of a
From: Paul Mackerras pau...@samba.org
On entry to the guest, secondary threads now wait for the primary to
switch the MMU after loading up most of their state, rather than before.
This means that the secondary threads get into the guest sooner, in the
common case where the secondary threads get
From: Paul Mackerras pau...@samba.org
This uses msgsnd where possible for signalling other threads within
the same core on POWER8 systems, rather than IPIs through the XICS
interrupt controller. This includes waking secondary threads to run
the guest, the interrupts generated by the virtual
From: Paul Mackerras pau...@samba.org
This arranges for threads that are napping due to their vcpu having
ceded or due to not having a vcpu to wake up at the end of the guest's
timeslice without having to be poked with an IPI. We do that by
arranging for the decrementer to contain a value no
From: Suresh E. Warrier warr...@linux.vnet.ibm.com
Export __spin_yield so that the arch_spin_unlock() function can
be invoked from a module. This will be required for modules where
we want to take a lock that is also is acquired in hypervisor
real mode. Because we want to avoid running any
From: Paul Mackerras pau...@samba.org
This creates a debugfs directory for each HV guest (assuming debugfs
is enabled in the kernel config), and within that directory, a file
by which the contents of the guest's HPT (hashed page table) can be
read. The directory is named vm, where is
On Tue, 21 Apr 2015 16:51:21 +1000,
David Gibson da...@gibson.dropbear.id.au wrote:
On Tue, Apr 21, 2015 at 08:37:02AM +0200, Thomas Huth wrote:
On Tue, 21 Apr 2015 10:41:51 +1000,
David Gibson da...@gibson.dropbear.id.au wrote:
On POWER, storage caching is usually configured via the
On POWER, storage caching is usually configured via the MMU - attributes
such as cache-inhibited are stored in the TLB and the hashed page table.
This makes correctly performing cache inhibited IO accesses awkward when
the MMU is turned off (real mode). Some CPU models provide special
registers
On 15/04/2015 22:19, Alexander Graf wrote:
Since you already did send out the first pull request, just let me know
when you pulled linus' tree back into kvm/next (or kvm/master) so that I
can fast-forward merge this in my kvm-ppc-next branch and then rebase my
queue on top, merge it into
On 14.04.15 13:56, Paul Mackerras wrote:
On Thu, Apr 09, 2015 at 12:57:58AM +0200, Alexander Graf wrote:
On 03/28/2015 04:21 AM, Paul Mackerras wrote:
This is the rest of my current patch queue for HV KVM on PPC. This
series is based on Alex Graf's kvm-ppc-queue branch. The only change
On 09.04.15 10:49, Paolo Bonzini wrote:
On 09/04/2015 00:57, Alexander Graf wrote:
The last patch in this series needs a definition of PPC_MSGCLR that is
added by the patch "powerpc/powernv: Fixes for hypervisor doorbell
handling", which has now gone upstream into Linus' tree as commit
On Sat, Apr 11, 2015 at 12:57:54PM -0700, Nathan Whitehorn wrote:
On 02/18/15 15:33, Nathan Whitehorn wrote:
On 02/18/15 14:00, Paul Mackerras wrote:
On Wed, Feb 18, 2015 at 09:34:54AM +0100, Alexander Graf wrote:
On 18.02.2015 07:12, Nathan Whitehorn
nwhiteh...@freebsd.org wrote:
On Thu, Apr 09, 2015 at 12:57:58AM +0200, Alexander Graf wrote:
On 03/28/2015 04:21 AM, Paul Mackerras wrote:
This is the rest of my current patch queue for HV KVM on PPC. This
series is based on Alex Graf's kvm-ppc-queue branch. The only change
from the previous version of this series is
On 02/18/15 15:33, Nathan Whitehorn wrote:
On 02/18/15 14:00, Paul Mackerras wrote:
On Wed, Feb 18, 2015 at 09:34:54AM +0100, Alexander Graf wrote:
On 18.02.2015 07:12, Nathan Whitehorn
nwhiteh...@freebsd.org wrote:
It seems like KVM doesn't implement the H_CLEAR_REF and H_CLEAR_MOD
On 09/04/2015 00:57, Alexander Graf wrote:
The last patch in this series needs a definition of PPC_MSGCLR that is
added by the patch "powerpc/powernv: Fixes for hypervisor doorbell
handling", which has now gone upstream into Linus' tree as commit
755563bc79c7 via the linuxppc-dev mailing
On 03/28/2015 04:21 AM, Paul Mackerras wrote:
This is the rest of my current patch queue for HV KVM on PPC. This
series is based on Alex Graf's kvm-ppc-queue branch. The only change
from the previous version of this series is that patch 2 has been
updated to take account of the timebase
Joe Perches (25):
arm: Use bool function return values of true/false not 1/0
arm64: Use bool function return values of true/false not 1/0
hexagon: Use bool function return values of true/false not 1/0
ia64: Use bool function return values of true/false not 1/0
mips: Use bool function
Use the normal return values for bool functions
Signed-off-by: Joe Perches j...@perches.com
---
arch/powerpc/include/asm/dcr-native.h    | 2 +-
arch/powerpc/include/asm/dma-mapping.h   | 4 ++--
arch/powerpc/include/asm/kvm_book3s_64.h | 4 ++--
arch/powerpc/sysdev/dcr.c                | 2 +-
On Tue, 2015-03-31 at 12:49 +1100, Benjamin Herrenschmidt wrote:
On Mon, 2015-03-30 at 16:46 -0700, Joe Perches wrote:
Use the normal return values for bool functions
Acked-by: Benjamin Herrenschmidt b...@kernel.crashing.org
Should we merge it or will you ?
Hey Ben.
I don't merge stuff.
On 3/30/2015 4:45 PM, Joe Perches wrote:
Joe Perches (25):
arm: Use bool function return values of true/false not 1/0
arm64: Use bool function return values of true/false not 1/0
hexagon: Use bool function return values of true/false not 1/0
ia64: Use bool function return values of
On Mon, 2015-03-30 at 17:07 -0700, Casey Schaufler wrote:
On 3/30/2015 4:45 PM, Joe Perches wrote:
Joe Perches (25):
arm: Use bool function return values of true/false not 1/0
[etc...]
Why, and why these in particular?
bool functions are probably better returning
bool values instead of
On Mon, 2015-03-30 at 16:46 -0700, Joe Perches wrote:
Use the normal return values for bool functions
Acked-by: Benjamin Herrenschmidt b...@kernel.crashing.org
Should we merge it or will you ?
Cheers,
Ben.
Signed-off-by: Joe Perches j...@perches.com
---
On Mon, 2015-03-30 at 10:39 +0530, Aneesh Kumar K.V wrote:
This patch remove helpers which we had used only once in the code.
Limiting page table walk variants help in ensuring that we won't
end up with code walking page table with wrong assumptions.
Signed-off-by: Aneesh Kumar K.V
pte can get updated from other CPUs as part of multiple activities
like THP split, huge page collapse, unmap. We need to make sure we
don't reload the pte value again and again for different checks.
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
Note:
This is posted
This patch remove helpers which we had used only once in the code.
Limiting page table walk variants help in ensuring that we won't
end up with code walking page table with wrong assumptions.
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/include/asm/pgtable.h