On Fri, 10 Feb 2017, Stephen Hemminger wrote:
> Since the sequence count algorithm is done by the hypervisor, it is better
> not to reuse seqcount.
> Still concerned that the code is racy.
That's a different question and can only be answered by the hypervisor
folks. Dunno whether they have barrier requirements
On Thu, Feb 09, 2017 at 06:31:18PM +, Will Deacon wrote:
> On ARM (and other archs such as
> Power), having a mismatch between a cacheable and a non-cacheable mapping
> can result in a loss of coherency between the two (for example, if the
> non-cacheable guest accesses bypass the cache, but the
On 02/10/2017 11:35 AM, Waiman Long wrote:
> On 02/10/2017 11:19 AM, Peter Zijlstra wrote:
>> On Fri, Feb 10, 2017 at 10:43:09AM -0500, Waiman Long wrote:
>>> It was found that when running a fio sequential write test with an XFS
>>> ramdisk on a VM running on a 2-socket x86-64 system, the %CPU times as
>>> reported by perf were as follows:
On 02/10/2017 11:19 AM, Peter Zijlstra wrote:
> On Fri, Feb 10, 2017 at 10:43:09AM -0500, Waiman Long wrote:
>> It was found that when running a fio sequential write test with an XFS
>> ramdisk on a VM running on a 2-socket x86-64 system, the %CPU times as
>> reported by perf were as follows:
>>
>> 69.75% 0.59% fio [k] down_write
Since the sequence count algorithm is done by the hypervisor, it is better
not to reuse seqcount.
Still concerned that the code is racy.
-----Original Message-----
From: Thomas Gleixner [mailto:t...@linutronix.de]
Sent: Friday, February 10, 2017 4:28 AM
To: Vitaly Kuznetsov
Cc: Stephen Hemminger ; x...@kernel.org
On 10/02/2017 16:43, Waiman Long wrote:
> It was found that when running a fio sequential write test with an XFS
> ramdisk on a VM running on a 2-socket x86-64 system, the %CPU times as
> reported by perf were as follows:
>
> 69.75% 0.59% fio [k] down_write
> 69.15% 0.01% fio [k] call_rwsem_down_write_failed
On Fri, Feb 10, 2017 at 10:43:09AM -0500, Waiman Long wrote:
> It was found that when running a fio sequential write test with an XFS
> ramdisk on a VM running on a 2-socket x86-64 system, the %CPU times as
> reported by perf were as follows:
>
> 69.75% 0.59% fio [k] down_write
> 69.15% 0.01% fio [k] call_rwsem_down_write_failed
----- Original Message -----
> From: "Michael S. Tsirkin"
> To: "Paolo Bonzini"
> Cc: virtio-...@lists.oasis-open.org, virtualization@lists.linux-foundation.org
> Sent: Friday, February 10, 2017 4:20:17 PM
> Subject: Re: [virtio-dev] packed ring layout proposal v2
>
> On Fri, Feb 10, 2017 at 12:32:49PM +0100, Paolo Bonzini wrote:
It was found that when running a fio sequential write test with an XFS
ramdisk on a VM running on a 2-socket x86-64 system, the %CPU times as
reported by perf were as follows:
69.75% 0.59% fio [k] down_write
69.15% 0.01% fio [k] call_rwsem_down_write_failed
67.12% 1.12% fio [k] rwsem_down_write_failed
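As a rough way to reproduce this kind of down_write() contention, a fio job
along the following lines should do (a sketch only: the mount point, block
size, file size, and job count are assumptions, not taken from the report,
and the ramdisk must first be formatted with mkfs.xfs and mounted):

    # hypothetical job file: sequential buffered writes on an XFS ramdisk
    [global]
    directory=/mnt/ramdisk   ; assumed XFS mount, e.g. backed by /dev/ram0
    ioengine=sync
    rw=write                 ; sequential write
    bs=4k
    size=1g

    [seqwrite]
    numjobs=32               ; many concurrent writers piling up on the rwsem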
On Fri, Feb 10, 2017 at 12:32:49PM +0100, Paolo Bonzini wrote:
>
>
> On 09/02/2017 19:24, Michael S. Tsirkin wrote:
> >> I don't know. Power-of-2 ring sizes are pretty standard; I'd rather avoid
> >> the complication and the gratuitous difference with 1.0.
> >
> > I thought originally there was a reason 1.0 rings had to be powers of two
On Tue, Feb 07, 2017 at 12:32:12AM +, Stephen Hemminger wrote:
> The netvsc part is already in net-next. This patch is not needed.
> The part that removes the per-channel state can be in another patch.
I have no idea what that means here, nor what I need to do, so I'm
just deleting this
On Fri, 10 Feb 2017, Vitaly Kuznetsov wrote:
> Stephen Hemminger writes:
>
> > Why not use existing seqlocks?
> >
>
> To be honest I don't quite understand how we could use it -- the
> sequence locking here is done against the page updated by the
> hypervisor; we're not creating new structures
Stephen Hemminger writes:
> Why not use existing seqlocks?
>
To be honest I don't quite understand how we could use it -- the
sequence locking here is done against the page updated by the
hypervisor; we're not creating new structures (so I don't understand how
we could use struct seqcount which
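To make the mismatch concrete: seqlock_t/seqcount_t embed the sequence
counter in a structure the kernel defines and writes, while here the counter
is a plain u32 at a fixed offset in a page the guest may only read. A sketch
of the two shapes (the ms_hyperv_tsc_page fields follow the Hyper-V TLFS
layout; the first struct is purely illustrative):

    /* kernel-owned: the counter lives inside a structure we define,
     * and kernel code is the writer */
    struct kernel_owned {
            seqlock_t lock;         /* carries its own seqcount_t */
            u64 data;
    };

    /* hypervisor-owned: the layout is dictated by the Hyper-V TLFS and
     * only the hypervisor writes it, so there is no kernel-side writer
     * for a seqcount_t to serialize */
    struct ms_hyperv_tsc_page {
            volatile u32 tsc_sequence;      /* bumped by the hypervisor */
            u32 reserved1;
            volatile u64 tsc_scale;
            volatile s64 tsc_offset;
    };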
Andy Lutomirski writes:
> On Thu, Feb 9, 2017 at 12:45 PM, KY Srinivasan wrote:
>>
>>
>>> -----Original Message-----
>>> From: Thomas Gleixner [mailto:t...@linutronix.de]
>>> Sent: Thursday, February 9, 2017 9:08 AM
>>> To: Vitaly Kuznetsov
>>> Cc: x...@kernel.org; Andy Lutomirski ; Ingo Molnar
On 09/02/2017 19:24, Michael S. Tsirkin wrote:
>> I don't know. Power-of-2 ring sizes are pretty standard; I'd rather avoid
>> the complication and the gratuitous difference with 1.0.
>
> I thought originally there was a reason 1.0 rings had to be powers of two
> but now I don't see why. OK, we can
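For reference, the usual appeal of a power-of-2 ring size is that index
wrap-around becomes a single AND, because size - 1 is then an all-ones mask;
with an arbitrary size you need a modulo or a compare-and-reset. A minimal
sketch (names are illustrative, not from the proposal):

    #define RING_SIZE 256                   /* must be a power of two */
    #define RING_MASK (RING_SIZE - 1)

    /* advance a free-running ring index; the mask replaces '% RING_SIZE' */
    static inline unsigned int ring_next(unsigned int idx)
    {
            return (idx + 1) & RING_MASK;
    }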
On Fri, 10 Feb 2017, Vitaly Kuznetsov wrote:
> Thomas Gleixner writes:
>
> > On Thu, 9 Feb 2017, Vitaly Kuznetsov wrote:
> >> +#ifdef CONFIG_HYPERV_TSCPAGE
> >> +static notrace u64 vread_hvclock(int *mode)
> >> +{
> >> +	const struct ms_hyperv_tsc_page *tsc_pg =
> >> +		(const struct ms_hyperv_tsc_page *)&hvclock_page;
Thomas Gleixner writes:
> On Thu, 9 Feb 2017, Vitaly Kuznetsov wrote:
>> +#ifdef CONFIG_HYPERV_TSCPAGE
>> +static notrace u64 vread_hvclock(int *mode)
>> +{
>> +	const struct ms_hyperv_tsc_page *tsc_pg =
>> +		(const struct ms_hyperv_tsc_page *)&hvclock_page;
>> +	u64 sequence, sc
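For context, the read side being debated is a hand-rolled sequence loop
against the hypervisor-updated page, roughly of this shape (a sketch, not
the exact patch: it assumes kernel headers for READ_ONCE(), virt_rmb(),
rdtsc_ordered(), and mul_u64_u64_shr(), and treats a zero sequence as "TSC
page not valid"):

    static u64 read_hv_tsc_page(const struct ms_hyperv_tsc_page *tsc_pg)
    {
            u64 scale, offset, cur_tsc;
            u32 sequence;

            do {
                    sequence = READ_ONCE(tsc_pg->tsc_sequence);
                    if (!sequence)
                            return 0;  /* page invalid, caller falls back */
                    virt_rmb();        /* read fields after the sequence */

                    scale   = READ_ONCE(tsc_pg->tsc_scale);
                    offset  = READ_ONCE(tsc_pg->tsc_offset);
                    cur_tsc = rdtsc_ordered();

                    virt_rmb();        /* re-check sequence after the reads */
            } while (READ_ONCE(tsc_pg->tsc_sequence) != sequence);

            /* Hyper-V: reference time = ((tsc * scale) >> 64) + offset */
            return mul_u64_u64_shr(cur_tsc, scale, 64) + offset;
    }

The double read of tsc_sequence with a barrier on each side is exactly what
the seqcount_t read side does; the objection in this thread is only that the
counter lives in hypervisor-defined memory, so the kernel's seqcount
machinery cannot simply be wrapped around it.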