On Tue, Sep 15, 2015 at 11:32:14AM -0400, Waiman Long wrote:
> On 09/15/2015 04:38 AM, Peter Zijlstra wrote:
> > On Mon, Sep 14, 2015 at 03:37:32PM -0400, Waiman Long wrote:
> > > BTW, the queue head vCPU at pv_wait_head_and_lock() doesn't wait early,
> > > it will spin the full threshold as there is no way for it to figure out
> > > if the lock holder is running or not.
> >
> > We can know its cpu id, right? Surely we should t[...]
On 09/14/2015 10:10 AM, Peter Zijlstra wrote:
> On Fri, Sep 11, 2015 at 02:37:38PM -0400, Waiman Long wrote:
> > In an overcommitted guest where some vCPUs have to be halted to make
> > forward progress in other areas, it is highly likely that a vCPU later
> > in the spinlock queue will be spinning while the ones earlier in the
> > queue would have been halted. The spinning in the later vCPUs is then
> > just a waste of precious [...]