> >>> +	mutex_lock(&vb->balloon_lock);
> >>> +
> >>> +	for (order = MAX_ORDER - 1; order >= 0; order--) {
> >>
> >> I scratched my head for a bit on this one. Why are you walking over
> >> orders, *then* zones? I *think* you're doing it because you can
> >> efficiently fill the bitmaps at a
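A minimal sketch of the loop nesting being questioned above, assuming the free lists are walked under zone->lock and using a hypothetical set_page_bitmap() helper in place of the patch's actual bitmap fill: iterating orders outermost lets each free block be recorded as one contiguous run of bits instead of page by page.

static void fill_unused_page_bitmap(struct virtio_balloon *vb)
{
	int order, mt;
	struct zone *zone;
	struct page *page;

	for (order = MAX_ORDER - 1; order >= 0; order--) {
		for_each_populated_zone(zone) {
			/* the real code must hold zone->lock here */
			for (mt = 0; mt < MIGRATE_TYPES; mt++) {
				list_for_each_entry(page,
						&zone->free_area[order].free_list[mt],
						lru)
					/* hypothetical helper: mark 2^order pages */
					set_page_bitmap(vb, page_to_pfn(page), order);
			}
		}
	}
}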
On Mon, Dec 05, 2016 at 06:10:58PM -0800, Andy Lutomirski wrote:
> With CONFIG_VMAP_STACK=y, virtnet_set_mac_address() can be passed a
> pointer to the stack and it will OOPS. Copy the address to the heap
> to prevent the crash.
>
> Cc: Michael S. Tsirkin
> Cc: Jason Wang
> Cc: Laura Abbott
>
On 2016/12/06 10:10, Andy Lutomirski wrote:
With CONFIG_VMAP_STACK=y, virtnet_set_mac_address() can be passed a
pointer to the stack and it will OOPS. Copy the address to the heap
to prevent the crash.
Cc: Michael S. Tsirkin
Cc: Jason Wang
Cc: Laura Abbott
Reported-by: zbys...@in.waw.pl
S
On 2016/12/6 09:24, Pan Xinhui wrote:
On 2016/12/6 08:58, Boqun Feng wrote:
On Mon, Dec 05, 2016 at 10:19:22AM -0500, Pan Xinhui wrote:
pSeries/powerNV will use qspinlock from now on.
Signed-off-by: Pan Xinhui
---
arch/powerpc/platforms/pseries/Kconfig | 8
1 file changed, 8 insertions(+)
With CONFIG_VMAP_STACK=y, virtnet_set_mac_address() can be passed a
pointer to the stack and it will OOPS. Copy the address to the heap
to prevent the crash.
Cc: Michael S. Tsirkin
Cc: Jason Wang
Cc: Laura Abbott
Reported-by: zbys...@in.waw.pl
Signed-off-by: Andy Lutomirski
---
Very lightly
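A minimal sketch of the approach in the changelog above (the real virtnet_set_mac_address() also handles the VIRTIO_NET_F_CTRL_MAC_ADDR and VIRTIO_NET_F_MAC paths, elided here): the caller's sockaddr is duplicated into kmalloc'ed memory so that no scatterlist ever points at a (possibly vmalloc'ed) stack address under CONFIG_VMAP_STACK.

static int virtnet_set_mac_address(struct net_device *dev, void *p)
{
	struct sockaddr *addr;
	int ret;

	/* copy the caller's buffer off the stack before any virtio I/O */
	addr = kmemdup(p, sizeof(*addr), GFP_KERNEL);
	if (!addr)
		return -ENOMEM;

	ret = eth_prepare_mac_addr_change(dev, addr);
	if (ret)
		goto out;

	/* ... tell the device about 'addr', never about 'p' ... */

	eth_commit_mac_addr_change(dev, addr);
out:
	kfree(addr);
	return ret;
}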
On 2016/12/6 09:23, Boqun Feng wrote:
On Mon, Dec 05, 2016 at 10:19:23AM -0500, Pan Xinhui wrote:
Add two corresponding helper functions to support pv-qspinlock.
For normal use, __spin_yield_cpu will confer the current vcpu's slices to the
target vcpu (say, a lock holder). If the target vcpu is not specifie
On 2016/12/6 08:58, Boqun Feng wrote:
On Mon, Dec 05, 2016 at 10:19:22AM -0500, Pan Xinhui wrote:
pSeries/powerNV will use qspinlock from now on.
Signed-off-by: Pan Xinhui
---
arch/powerpc/platforms/pseries/Kconfig | 8
1 file changed, 8 insertions(+)
diff --git a/arch/powerpc/platfor
On Mon, Dec 05, 2016 at 10:19:23AM -0500, Pan Xinhui wrote:
> Add two corresponding helper functions to support pv-qspinlock.
>
> For normal use, __spin_yield_cpu will confer the current vcpu's slices to the
> target vcpu (say, a lock holder). If the target vcpu is not specified or it
> is in running state,
Correct Waiman's address.
On 2016/12/6 08:47, Boqun Feng wrote:
On Mon, Dec 05, 2016 at 10:19:21AM -0500, Pan Xinhui wrote:
This patch adds basic code to enable qspinlock on powerpc. qspinlock is
one kind of fair lock implementation, and we have seen some performance
improvement under some scenarios.
queued
On Mon, Dec 05, 2016 at 10:19:22AM -0500, Pan Xinhui wrote:
> pSeries/powerNV will use qspinlock from now on.
>
> Signed-off-by: Pan Xinhui
> ---
> arch/powerpc/platforms/pseries/Kconfig | 8
> 1 file changed, 8 insertions(+)
>
> diff --git a/arch/powerpc/platforms/pseries/Kconfig
> b
On Mon, Dec 05, 2016 at 10:19:21AM -0500, Pan Xinhui wrote:
> This patch adds basic code to enable qspinlock on powerpc. qspinlock is
> one kind of fair lock implementation, and we have seen some performance
> improvement under some scenarios.
>
> queued_spin_unlock() releases the lock by just one write of N
On 12/04/2016 05:13 AM, Li, Liang Z wrote:
>> On 11/30/2016 12:43 AM, Liang Li wrote:
>>> +static void send_unused_pages_info(struct virtio_balloon *vb,
>>> +				unsigned long req_id)
>>> +{
>>> +	struct scatterlist sg_in;
>>> +	unsigned long pos = 0;
>>> +	struct virtq
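A minimal sketch of the pattern the quoted function follows, with assumed field names (req_vq, resp_hdr) standing in for the patch's actual data structures: pack a response header into a scatterlist and hand it to the host over a virtqueue.

static void send_resp_header(struct virtio_balloon *vb, unsigned long req_id)
{
	struct scatterlist sg;
	struct virtqueue *vq = vb->req_vq;	/* assumed queue name */

	vb->resp_hdr.id = req_id;		/* assumed header layout */
	sg_init_one(&sg, &vb->resp_hdr, sizeof(vb->resp_hdr));

	/* queue the buffer and notify the host (error handling elided) */
	if (!virtqueue_add_outbuf(vq, &sg, 1, vb, GFP_KERNEL))
		virtqueue_kick(vq);
}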
The default pv-qspinlock uses qspinlock (the native version of pv-qspinlock).
pv_lock initialization should be done at boot time with irqs disabled.
And if we run as a guest with powerKVM/pHyp in shared_processor mode,
restore the pv_lock_ops callbacks to pv-qspinlock (the pv version), which makes
full use of virtual
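A minimal sketch of the boot-time selection described above; the init hook name and the pv_lock_ops layout are illustrative, not the patch's exact API. Only a shared-processor guest switches to the paravirt callbacks, everything else keeps the native qspinlock.

void __init pv_qspinlock_init(void)		/* illustrative name */
{
	/* runs early in boot, with interrupts still disabled */
	if (!lppaca_shared_proc(get_lppaca()))
		return;				/* stay on native qspinlock */

	/* switch the callbacks to the paravirt implementations */
	pv_lock_ops.lock   = __pv_queued_spin_lock_slowpath;
	pv_lock_ops.unlock = __pv_queued_spin_unlock;
}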
Avoid a function call in the native version of qspinlock. On powerNV,
before applying this patch, every unlock is expensive. This small
optimization enhances the performance.
We use a static_key with jump_label, which removes unnecessary loads of
the lppaca and related data.
Signed-off-by: Pan Xinhui
---
arch
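A minimal sketch of the static_key idea, with an illustrative key name and assuming a qspinlock layout with a named ->locked byte: the branch is patched at runtime, so on bare-metal powerNV the unlock never takes an indirect call or touches the lppaca.

DEFINE_STATIC_KEY_FALSE(pv_spinlock_enabled);	/* illustrative key name */

static inline void queued_spin_unlock(struct qspinlock *lock)
{
	if (static_branch_unlikely(&pv_spinlock_enabled)) {
		__pv_queued_spin_unlock(lock);	/* paravirt slow path */
		return;
	}
	/* native case: a single release store, no function call */
	smp_store_release(&lock->locked, 0);
}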
pSeries runs as a guest and might need pv-qspinlock.
Signed-off-by: Pan Xinhui
---
arch/powerpc/kernel/Makefile | 1 +
arch/powerpc/platforms/pseries/Kconfig | 8
2 files changed, 9 insertions(+)
diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index 19
Add two corresponding helper functions to support pv-qspinlock.
For normal use, __spin_yield_cpu will confer the current vcpu's slices to the
target vcpu (say, a lock holder). If the target vcpu is not specified or it
is in running state, whether such conferring to the lpar happens or not depends.
Because the hcall itself
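A minimal sketch of the helper described above, under the assumption that it follows the existing __spin_yield() pattern (lppaca yield_count check plus the H_CONFER hcall); the signature and the confer-to-lpar fallback are guesses based on the changelog, not the patch's exact code.

void __spin_yield_cpu(int cpu, int confer)
{
	unsigned int yield_count;

	if (cpu < 0)
		goto yield_to_lpar;

	/* an odd yield_count means the target vcpu is preempted */
	yield_count = be32_to_cpu(lppaca_of(cpu).yield_count);
	if (!(yield_count & 1))
		goto yield_to_lpar;	/* target is running */

	/* confer our slices to the preempted lock holder */
	plpar_hcall_norets(H_CONFER, get_hard_smp_processor_id(cpu),
			   yield_count);
	return;

yield_to_lpar:
	/* no usable target: optionally confer to the lpar as a whole */
	if (confer)
		plpar_hcall_norets(H_CONFER, -1, 0);
}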
pSeries/powerNV will use qspinlock from now on.
Signed-off-by: Pan Xinhui
---
arch/powerpc/platforms/pseries/Kconfig | 8
1 file changed, 8 insertions(+)
diff --git a/arch/powerpc/platforms/pseries/Kconfig
b/arch/powerpc/platforms/pseries/Kconfig
index bec90fb..8a87d06 100644
--- a/ar
This patch adds basic code to enable qspinlock on powerpc. qspinlock is
one kind of fair lock implementation, and we have seen some performance
improvement under some scenarios.
queued_spin_unlock() releases the lock by just one write of NULL to the
::locked field, which sits at different places in the two en
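A minimal sketch of the unlock described above, assuming a layout where the locked byte is only addressable by its offset inside the 32-bit lock word: the release is a single store of 0, and the byte's position differs between big- and little-endian builds, which is what the changelog refers to.

static inline u8 *qspinlock_locked_byte(struct qspinlock *lock)
{
	/* the least significant byte of the lock word holds ::locked */
#ifdef __BIG_ENDIAN
	return (u8 *)lock + 3;
#else
	return (u8 *)lock;
#endif
}

static inline void queued_spin_unlock(struct qspinlock *lock)
{
	/* release the lock with a single release store */
	smp_store_release(qspinlock_locked_byte(lock), 0);
}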
Hi All,
this is the fair lock patchset. You can apply them and build successfully.
The patches are based on linux-next.
qspinlock can avoid the waiter-starvation issue. It has about the same speed in
the single-thread case, and it can be much faster in high-contention situations,
especially when the spinlock is embedd