On Mon, Feb 13, 2017 at 05:24:36PM -0500, Waiman Long wrote:
> >> movslq %edi, %rax;
> >> movq __per_cpu_offset(,%rax,8), %rax;
> >> cmpb $0, %[offset](%rax);
> >> setne %al;
> I have thought of that too. However, the goal is to eliminate memory
> read/write from/to stack. Eliminating a register ...
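
Wired into the callee-save wrapper under discussion, hpa's stack-free sequence would come out roughly as the sketch below. This is a sketch, not any posted patch: the KVM_STEAL_TIME_preempted constant is assumed to be generated via asm-offsets as offsetof(struct kvm_steal_time, preempted), and steal_time is the existing KVM per-cpu variable.

    /* Sketch: hpa's four-instruction variant as the complete
     * callee-save body.  Only %rax, the return register, is
     * clobbered, so nothing touches the stack.
     */
    #include <linux/stringify.h>

    asm(
    ".pushsection .text;"
    ".global __raw_callee_save___kvm_vcpu_is_preempted;"
    ".type __raw_callee_save___kvm_vcpu_is_preempted, @function;"
    "__raw_callee_save___kvm_vcpu_is_preempted:"
    "movslq %edi, %rax;"                    /* sign-extend the int cpu argument */
    "movq __per_cpu_offset(,%rax,8), %rax;" /* base of that cpu's per-cpu area */
    "cmpb $0, " __stringify(KVM_STEAL_TIME_preempted) "+steal_time(%rax);"
    "setne %al;"                            /* return steal_time.preempted != 0 */
    "ret;"
    ".popsection");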
On Mon, Feb 13, 2017 at 12:06:44PM -0800, h...@zytor.com wrote:
> >Maybe:
> >
> >movslq %edi, %rax;
> >movq __per_cpu_offset(,%rax,8), %rax;
> >cmpb $0, %[offset](%rax);
> >setne %al;
> >
> >?
>
> We could kill the zero or sign extend by changing the calling
> interface to pass an unsigned long.
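
A hypothetical sketch of that interface change (no posted patch does this): with a prototype of bool __kvm_vcpu_is_preempted(unsigned long cpu), the caller hands over a fully extended 64-bit value in %rdi, so the body needs neither the movslq nor any scratch register besides %rax. KVM_STEAL_TIME_preempted is the same assumed asm-offsets constant as in the sketch above.

    /* Hypothetical: cpu arrives fully extended in %rdi, so the
     * body shrinks to three instructions and clobbers only %rax.
     */
    asm(
    ".pushsection .text;"
    "__raw_callee_save___kvm_vcpu_is_preempted:"
    "movq __per_cpu_offset(,%rdi,8), %rax;" /* %rdi used as-is, left unmodified */
    "cmpb $0, " __stringify(KVM_STEAL_TIME_preempted) "+steal_time(%rax);"
    "setne %al;"
    "ret;"
    ".popsection");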
On Fri, Feb 10, 2017 at 07:16:10PM +0200, Michael S. Tsirkin wrote:
> On Thu, Feb 09, 2017 at 06:31:18PM +0000, Will Deacon wrote:
> > On ARM (and other archs such as
> > Power), having a mismatch between a cacheable and a non-cacheable mapping
> > can result in a loss of coherency between the two mappings.
On Mon, Feb 13, 2017 at 11:47:16AM +0100, Peter Zijlstra wrote:
> That way we'd end up with something like:
>
> asm("
> push %rdi;
> movslq %edi, %rdi;
> movq __per_cpu_offset(,%rdi,8), %rax;
> cmpb $0, %[offset](%rax);
> setne %al;
> pop %rdi;
> " : : [offset] "i" (((unsigned long)_time) +
On Fri, Feb 10, 2017 at 12:00:43PM -0500, Waiman Long wrote:
> >> +asm(
> >> +".pushsection .text;"
> >> +".global __raw_callee_save___kvm_vcpu_is_preempted;"
> >> +".type __raw_callee_save___kvm_vcpu_is_preempted, @function;"
> >> +"__raw_callee_save___kvm_vcpu_is_preempted:"
> >> +FRAME_BEGIN
>
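
For context on why the asm body has to preserve everything except %rax: the function is registered as a callee-save paravirt op, roughly as in the sketch below (the exact hook-up site in the series may differ).

    /* Sketch: callers of the pv op may assume all registers except
     * %rax survive the call, which is what makes a hand-written,
     * register-disciplined asm body necessary.
     */
    pv_lock_ops.vcpu_is_preempted = PV_CALLEE_SAVE(__kvm_vcpu_is_preempted);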
On Mon, 13 Feb 2017, Dexuan Cui wrote:
> > From: Thomas Gleixner [mailto:t...@linutronix.de]
> > Sent: Saturday, February 11, 2017 02:02
> > ...
> > That's important if the stuff happens cross CPU. If the update happens on
> > the same CPU then this is a different story and as there are VMexits ...