Re: [PATCH] virtio_ring: Shadow available ring flags & index

2015-11-23 Thread Xie, Huawei
On 11/21/2015 2:30 AM, Venkatesh Srinivas wrote:
>>> On Thu, Nov 19, 2015 at 04:15:48PM +0000, Xie, Huawei wrote:
>> On 11/18/2015 12:28 PM, Venkatesh Srinivas wrote:
>>> On Tue, Nov 17, 2015 at 08:08:18PM -0800, Venkatesh Srinivas wrote:
 On Mon, Nov 16, 2015 at 7:46 PM, Xie, Huawei  wrote:

> On 11/14/2015 7:41 AM, Venkatesh Srinivas wrote:
>> On Wed, Nov 11, 2015 at 02:34:33PM +0200, Michael S. Tsirkin wrote:
>>> On Tue, Nov 10, 2015 at 04:21:07PM -0800, Venkatesh Srinivas wrote:
 Improves cacheline transfer flow of available ring header.

 Virtqueues are implemented as a pair of rings, one producer->consumer
 avail ring and one consumer->producer used ring; preceding the
 avail ring in memory are two contiguous u16 fields -- avail->flags
 and avail->idx. A producer posts work by writing to avail->idx and
 a consumer reads avail->idx.

 The flags and idx fields only need to be written by a producer CPU
 and only read by a consumer CPU; when the producer and consumer are
 running on different CPUs and the virtio_ring code is structured to
 only have source writes/sink reads, we can continuously transfer the
 avail header cacheline between 'M' states between cores. This flow
 optimizes core -> core bandwidth on certain CPUs.

 (see: "Software Optimization Guide for AMD Family 15h Processors",
 Section 11.6; similar language appears in the 10h guide and should
 apply to CPUs w/ exclusive caches, using LLC as a transfer cache)

 Unfortunately the existing virtio_ring code issued reads to the
 avail->idx and read-modify-writes to avail->flags on the producer.

 This change shadows the flags and index fields in producer memory;
 the vring code now reads from the shadows and only ever writes to
 avail->flags and avail->idx, allowing the cacheline to transfer
 core -> core optimally.
>>> Sounds logical, I'll apply this after a bit of testing
>>> of my own, thanks!
>> Thanks!
> Venkatesh:
> Is it that your patch only applies to CPUs w/ exclusive caches?
 No --- it applies when the inter-cache coherence flow is optimized by
 'M' -> 'M' transfers and when producer reads might interfere w/
 consumer prefetchw/reads. The AMD Optimization guides have specific
 language on this subject, but other platforms may benefit.
 (see Intel #'s below)
>> For the core-to-core case (not an HT pair), after the consumer reads that M
>> cache line for avail_idx, is that line still in the producer core's L1 data
>> cache, with its state changing from M -> O?
> Textbook MOESI would not allow that state combination -- when the consumer
> gets the line in 'M' state, the producer cannot hold it in 'O' state.
Hi Venkatesh:
On the consumer core, you are using (prefetchw + load) to get the cache line
anyway, even if the consumer doesn't intend to write, right? That explains
your cache line transfer.
If using a plain load only, the cache line on the producer core should
transition from M -> O, meaning dirty sharing, and the consumer gets the
line in S state.

I might be missing something important in your case. Could you give a more
detailed description?
For the non-shadow case:
1) The producer updates flags or idx; the cache line is set to M state.
2) When the consumer reads idx or flags, the cache line is set to S state
on the consumer core, while the cache line on the producer is set to O state.
What is the problem with reading the avail idx/flags whose cache line is in
either M or O state on the producer core? What is the benefit with and
without prefetchw?

>
> On the AMD Piledriver, per the Optimization guide, I use PREFETCHW/Load to
> get the line in 'M' state on the consumer (invalidating it in the Producer's
> cache):
>
> "* Use PREFETCHW on the consumer side, even if the consumer will not modify
>the data"
>
> That, plus the "Optimizing Inter-Core Data Transfer" section imply that
> PREFETCHW + MOV will cause the consumer to load the line into 'M' state.
>
> PREFETCHW was not available on Intel CPUs pre-Broadwell; from the public
> documentation alone, I don't think we can tell what transition the producer's
> cacheline undergoes on these cores. For that matter, the latest documentation
> I can find (for Nehalem) indicates there was no 'O' state -- Nehalem
> implemented MESIF, not MOESI.
By O, I mean AMD MOESI, and I thought you were using only a plain load to
fetch the cache line on the consumer core. If you are using prefetchw +
load, the state transfer makes sense.
>
> HTH,
> -- vs;
>

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH] virtio_ring: Shadow available ring flags & index

2015-11-20 Thread Venkatesh Srinivas
On Thu, Nov 19, 2015 at 04:15:48PM +0000, Xie, Huawei wrote:
> On 11/18/2015 12:28 PM, Venkatesh Srinivas wrote:
> > On Tue, Nov 17, 2015 at 08:08:18PM -0800, Venkatesh Srinivas wrote:
> >> On Mon, Nov 16, 2015 at 7:46 PM, Xie, Huawei  wrote:
> >>
> >>> On 11/14/2015 7:41 AM, Venkatesh Srinivas wrote:
>  On Wed, Nov 11, 2015 at 02:34:33PM +0200, Michael S. Tsirkin wrote:
> > On Tue, Nov 10, 2015 at 04:21:07PM -0800, Venkatesh Srinivas wrote:
> >> Improves cacheline transfer flow of available ring header.
> >>
> >> Virtqueues are implemented as a pair of rings, one producer->consumer
> >> avail ring and one consumer->producer used ring; preceding the
> >> avail ring in memory are two contiguous u16 fields -- avail->flags
> >> and avail->idx. A producer posts work by writing to avail->idx and
> >> a consumer reads avail->idx.
> >>
> >> The flags and idx fields only need to be written by a producer CPU
> >> and only read by a consumer CPU; when the producer and consumer are
> >> running on different CPUs and the virtio_ring code is structured to
> >> only have source writes/sink reads, we can continuously transfer the
> >> avail header cacheline between 'M' states between cores. This flow
> >> optimizes core -> core bandwidth on certain CPUs.
> >>
> >> (see: "Software Optimization Guide for AMD Family 15h Processors",
> >> Section 11.6; similar language appears in the 10h guide and should
> >> apply to CPUs w/ exclusive caches, using LLC as a transfer cache)
> >>
> >> Unfortunately the existing virtio_ring code issued reads to the
> >> avail->idx and read-modify-writes to avail->flags on the producer.
> >>
> >> This change shadows the flags and index fields in producer memory;
> >> the vring code now reads from the shadows and only ever writes to
> >> avail->flags and avail->idx, allowing the cacheline to transfer
> >> core -> core optimally.
> > Sounds logical, I'll apply this after a bit of testing
> > of my own, thanks!
>  Thanks!
> >>> Venkatesh:
> >>> Is it that your patch only applies to CPUs w/ exclusive caches?
> >> No --- it applies when the inter-cache coherence flow is optimized by
> >> 'M' -> 'M' transfers and when producer reads might interfere w/
> >> consumer prefetchw/reads. The AMD Optimization guides have specific
> >> language on this subject, but other platforms may benefit.
> >> (see Intel #'s below)
> For the core-to-core case (not an HT pair), after the consumer reads that M
> cache line for avail_idx, is that line still in the producer core's L1 data
> cache, with its state changing from M -> O?

Textbook MOESI would not allow that state combination -- when the consumer
gets the line in 'M' state, the producer cannot hold it in 'O' state.

On the AMD Piledriver, per the Optimization guide, I use PREFETCHW/Load to
get the line in 'M' state on the consumer (invalidating it in the Producer's
cache):

"* Use PREFETCHW on the consumer side, even if the consumer will not modify
   the data"

That, plus the "Optimizing Inter-Core Data Transfer" section imply that
PREFETCHW + MOV will cause the consumer to load the line into 'M' state.
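
(Not from the patch -- a minimal sketch of the consumer-side idiom the AMD
guide describes, assuming a GCC-style compiler: `__builtin_prefetch` with a
write-intent hint of 1 emits PREFETCHW where the target supports it. The
helper name is illustrative.)

```c
#include <stdint.h>

/* Consumer-side read of a producer-written field: request the line with
 * write intent first, so it lands in 'M' state per the AMD guidance,
 * even though we only load from it. */
static inline uint16_t read_avail_idx(const volatile uint16_t *idx)
{
    __builtin_prefetch((const void *)idx, 1 /* write intent */, 3);
    return *idx;
}
```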

PREFETCHW was not available on Intel CPUs pre-Broadwell; from the public
documentation alone, I don't think we can tell what transition the producer's
cacheline undergoes on these cores. For that matter, the latest documentation
I can find (for Nehalem) indicates there was no 'O' state -- Nehalem
implemented MESIF, not MOESI.

HTH,
-- vs;


Re: [PATCH] virtio_ring: Shadow available ring flags & index

2015-11-19 Thread Xie, Huawei
On 11/18/2015 12:28 PM, Venkatesh Srinivas wrote:
> On Tue, Nov 17, 2015 at 08:08:18PM -0800, Venkatesh Srinivas wrote:
>> On Mon, Nov 16, 2015 at 7:46 PM, Xie, Huawei  wrote:
>>
>>> On 11/14/2015 7:41 AM, Venkatesh Srinivas wrote:
 On Wed, Nov 11, 2015 at 02:34:33PM +0200, Michael S. Tsirkin wrote:
> On Tue, Nov 10, 2015 at 04:21:07PM -0800, Venkatesh Srinivas wrote:
>> Improves cacheline transfer flow of available ring header.
>>
>> Virtqueues are implemented as a pair of rings, one producer->consumer
>> avail ring and one consumer->producer used ring; preceding the
>> avail ring in memory are two contiguous u16 fields -- avail->flags
>> and avail->idx. A producer posts work by writing to avail->idx and
>> a consumer reads avail->idx.
>>
>> The flags and idx fields only need to be written by a producer CPU
>> and only read by a consumer CPU; when the producer and consumer are
>> running on different CPUs and the virtio_ring code is structured to
>> only have source writes/sink reads, we can continuously transfer the
>> avail header cacheline between 'M' states between cores. This flow
>> optimizes core -> core bandwidth on certain CPUs.
>>
>> (see: "Software Optimization Guide for AMD Family 15h Processors",
>> Section 11.6; similar language appears in the 10h guide and should
>> apply to CPUs w/ exclusive caches, using LLC as a transfer cache)
>>
>> Unfortunately the existing virtio_ring code issued reads to the
>> avail->idx and read-modify-writes to avail->flags on the producer.
>>
>> This change shadows the flags and index fields in producer memory;
>> the vring code now reads from the shadows and only ever writes to
>> avail->flags and avail->idx, allowing the cacheline to transfer
>> core -> core optimally.
> Sounds logical, I'll apply this after a bit of testing
> of my own, thanks!
 Thanks!
>>> Venkatesh:
>>> Is it that your patch only applies to CPUs w/ exclusive caches?
>> No --- it applies when the inter-cache coherence flow is optimized by
>> 'M' -> 'M' transfers and when producer reads might interfere w/
>> consumer prefetchw/reads. The AMD Optimization guides have specific
>> language on this subject, but other platforms may benefit.
>> (see Intel #'s below)
For the core-to-core case (not an HT pair), after the consumer reads that M
cache line for avail_idx, is that line still in the producer core's L1 data
cache, with its state changing from M -> O?
>>
>>> Do you have perf data on Intel CPUs?
>> Good idea -- I ran some tests on a couple of Intel platforms:
>>
>> (these are perf data from sample runs; for each I ran many runs, the
>>  numbers were pretty stable except for Haswell-EP cross-socket)
>>
>> One-socket Intel Xeon W3690 ("Westmere"), 3.46 GHz; core turbo disabled
>> ===
>> (note -- w/ core turbo disabled, performance is _very_ stable; variance of
>>  < 0.5% run-to-run; figure of merit is "seconds elapsed" here)
>>
>> * Producer / consumer bound to Hyperthread pairs:
>>
>>  Performance counter stats for './vring_bench_noshadow 10':
>>
>>  343,425,166,916 L1-dcache-loads
>>   21,393,148 L1-dcache-load-misses #0.01% of all L1-dcache hits
>>   61,709,640,363 L1-dcache-stores
>>5,745,690 L1-dcache-store-misses
>>   10,186,932,553 L1-dcache-prefetches
>>1,491 L1-dcache-prefetch-misses
>>121.335699344 seconds time elapsed
>>
>>  Performance counter stats for './vring_bench_shadow 10':
>>
>>  334,766,413,861 L1-dcache-loads
>>   15,787,778 L1-dcache-load-misses #0.00% of all L1-dcache hits
>>   62,735,792,799 L1-dcache-stores
>>3,252,113 L1-dcache-store-misses
>>9,018,273,596 L1-dcache-prefetches
>>  819 L1-dcache-prefetch-misses
>>121.206339656 seconds time elapsed
>>
>> Effectively Performance-neutral.
>>
>> * Producer / consumer bound to separate cores, same socket:
>>
>>  Performance counter stats for './vring_bench_noshadow 10':
>>
>>399,943,384,509 L1-dcache-loads
>>  8,868,334,693 L1-dcache-load-misses #2.22% of all L1-dcache hits
>> 62,721,376,685 L1-dcache-stores
>>  2,786,806,982 L1-dcache-store-misses
>> 10,915,046,967 L1-dcache-prefetches
>>328,508 L1-dcache-prefetch-misses
>>  146.585969976 seconds time elapsed
>>
>>  Performance counter stats for './vring_bench_shadow 10':
>>
>>425,123,067,750 L1-dcache-loads 
>>  6,689,318,709 L1-dcache-load-misses #1.57% of all L1-dcache hits
>> 62,747,525,005 L1-dcache-stores 
>>  2,496,274,505 L1-dcache-store-misses
>>  8,627,873,397 L1-dcache-prefetches
>>146,729 L1-dcache-prefetch-misses
>>  142.657327765 seconds time elapsed
>>
>> 2.6% reduction in runtime; note that L1-dcache-load-misses reduced
>> dramatically, 2 Billion(!) L1d misses saved.
>>
Two-socket Intel Sandy Bridge(-EP) Xeon, 2.6 GHz; core turbo disabled

Re: [PATCH] virtio_ring: Shadow available ring flags & index

2015-11-17 Thread Venkatesh Srinivas
On Tue, Nov 17, 2015 at 08:08:18PM -0800, Venkatesh Srinivas wrote:
> On Mon, Nov 16, 2015 at 7:46 PM, Xie, Huawei  wrote:
> 
> > On 11/14/2015 7:41 AM, Venkatesh Srinivas wrote:
> > > On Wed, Nov 11, 2015 at 02:34:33PM +0200, Michael S. Tsirkin wrote:
> > >> On Tue, Nov 10, 2015 at 04:21:07PM -0800, Venkatesh Srinivas wrote:
> > >>> Improves cacheline transfer flow of available ring header.
> > >>>
> > >>> Virtqueues are implemented as a pair of rings, one producer->consumer
> > >>> avail ring and one consumer->producer used ring; preceding the
> > >>> avail ring in memory are two contiguous u16 fields -- avail->flags
> > >>> and avail->idx. A producer posts work by writing to avail->idx and
> > >>> a consumer reads avail->idx.
> > >>>
> > >>> The flags and idx fields only need to be written by a producer CPU
> > >>> and only read by a consumer CPU; when the producer and consumer are
> > >>> running on different CPUs and the virtio_ring code is structured to
> > >>> only have source writes/sink reads, we can continuously transfer the
> > >>> avail header cacheline between 'M' states between cores. This flow
> > >>> optimizes core -> core bandwidth on certain CPUs.
> > >>>
> > >>> (see: "Software Optimization Guide for AMD Family 15h Processors",
> > >>> Section 11.6; similar language appears in the 10h guide and should
> > >>> apply to CPUs w/ exclusive caches, using LLC as a transfer cache)
> > >>>
> > >>> Unfortunately the existing virtio_ring code issued reads to the
> > >>> avail->idx and read-modify-writes to avail->flags on the producer.
> > >>>
> > >>> This change shadows the flags and index fields in producer memory;
> > >>> the vring code now reads from the shadows and only ever writes to
> > >>> avail->flags and avail->idx, allowing the cacheline to transfer
> > >>> core -> core optimally.
> > >> Sounds logical, I'll apply this after a bit of testing
> > >> of my own, thanks!
> > > Thanks!
> >
> 
> > Venkatesh:
> > Is it that your patch only applies to CPUs w/ exclusive caches?
> 
> No --- it applies when the inter-cache coherence flow is optimized by
> 'M' -> 'M' transfers and when producer reads might interfere w/
> consumer prefetchw/reads. The AMD Optimization guides have specific
> language on this subject, but other platforms may benefit.
> (see Intel #'s below)
> 
> > Do you have perf data on Intel CPUs?
> 
> Good idea -- I ran some tests on a couple of Intel platforms:
> 
> (these are perf data from sample runs; for each I ran many runs, the
>  numbers were pretty stable except for Haswell-EP cross-socket)
> 
> One-socket Intel Xeon W3690 ("Westmere"), 3.46 GHz; core turbo disabled
> ===
> (note -- w/ core turbo disabled, performance is _very_ stable; variance of
>  < 0.5% run-to-run; figure of merit is "seconds elapsed" here)
> 
> * Producer / consumer bound to Hyperthread pairs:
> 
>  Performance counter stats for './vring_bench_noshadow 10':
> 
>  343,425,166,916 L1-dcache-loads
>   21,393,148 L1-dcache-load-misses #0.01% of all L1-dcache hits
>   61,709,640,363 L1-dcache-stores
>5,745,690 L1-dcache-store-misses
>   10,186,932,553 L1-dcache-prefetches
>1,491 L1-dcache-prefetch-misses
>121.335699344 seconds time elapsed
> 
>  Performance counter stats for './vring_bench_shadow 10':
> 
>  334,766,413,861 L1-dcache-loads
>   15,787,778 L1-dcache-load-misses #0.00% of all L1-dcache hits
>   62,735,792,799 L1-dcache-stores
>3,252,113 L1-dcache-store-misses
>9,018,273,596 L1-dcache-prefetches
>  819 L1-dcache-prefetch-misses
>121.206339656 seconds time elapsed
> 
> Effectively Performance-neutral.
> 
> * Producer / consumer bound to separate cores, same socket:
> 
>  Performance counter stats for './vring_bench_noshadow 10':
> 
>399,943,384,509 L1-dcache-loads
>  8,868,334,693 L1-dcache-load-misses #2.22% of all L1-dcache hits
> 62,721,376,685 L1-dcache-stores
>  2,786,806,982 L1-dcache-store-misses
> 10,915,046,967 L1-dcache-prefetches
>328,508 L1-dcache-prefetch-misses
>  146.585969976 seconds time elapsed
> 
>  Performance counter stats for './vring_bench_shadow 10':
> 
>425,123,067,750 L1-dcache-loads 
>  6,689,318,709 L1-dcache-load-misses #1.57% of all L1-dcache hits
> 62,747,525,005 L1-dcache-stores 
>  2,496,274,505 L1-dcache-store-misses
>  8,627,873,397 L1-dcache-prefetches
>146,729 L1-dcache-prefetch-misses
>  142.657327765 seconds time elapsed
> 
> 2.6% reduction in runtime; note that L1-dcache-load-misses reduced
> dramatically, 2 Billion(!) L1d misses saved.
> 
> Two-socket Intel Sandy Bridge(-EP) Xeon, 2.6 GHz; core turbo disabled
> =
> 
> * Producer / consumer bound to Hyperthread pairs:
> 
>  Performance counter stats for './vring_bench_noshadow 1

Re: [PATCH] virtio_ring: Shadow available ring flags & index

2015-11-16 Thread Xie, Huawei
On 11/14/2015 7:41 AM, Venkatesh Srinivas wrote:
> On Wed, Nov 11, 2015 at 02:34:33PM +0200, Michael S. Tsirkin wrote:
>> On Tue, Nov 10, 2015 at 04:21:07PM -0800, Venkatesh Srinivas wrote:
>>> Improves cacheline transfer flow of available ring header.
>>>
>>> Virtqueues are implemented as a pair of rings, one producer->consumer
>>> avail ring and one consumer->producer used ring; preceding the
>>> avail ring in memory are two contiguous u16 fields -- avail->flags
>>> and avail->idx. A producer posts work by writing to avail->idx and
>>> a consumer reads avail->idx.
>>>
>>> The flags and idx fields only need to be written by a producer CPU
>>> and only read by a consumer CPU; when the producer and consumer are
>>> running on different CPUs and the virtio_ring code is structured to
>>> only have source writes/sink reads, we can continuously transfer the
>>> avail header cacheline between 'M' states between cores. This flow
>>> optimizes core -> core bandwidth on certain CPUs.
>>>
>>> (see: "Software Optimization Guide for AMD Family 15h Processors",
>>> Section 11.6; similar language appears in the 10h guide and should
>>> apply to CPUs w/ exclusive caches, using LLC as a transfer cache)
>>>
>>> Unfortunately the existing virtio_ring code issued reads to the
>>> avail->idx and read-modify-writes to avail->flags on the producer.
>>>
>>> This change shadows the flags and index fields in producer memory;
>>> the vring code now reads from the shadows and only ever writes to
>>> avail->flags and avail->idx, allowing the cacheline to transfer
>>> core -> core optimally.
>> Sounds logical, I'll apply this after a bit of testing
>> of my own, thanks!
> Thanks!
Venkatesh:
Is it that your patch only applies to CPUs w/ exclusive caches? Do you
have perf data on Intel CPUs?
For the perf metric you provide, why not L1-dcache-load-misses, which is
more meaningful?
>
>>> In a concurrent version of vring_bench, the time required for
>>> 10,000,000 buffer checkout/returns was reduced by ~2% (average
>>> across many runs) on an AMD Piledriver (15h) CPU:
>>>
>>> (w/o shadowing):
>>>  Performance counter stats for './vring_bench':
>>>  5,451,082,016  L1-dcache-loads
>>>  ...
>>>2.221477739 seconds time elapsed
>>>
>>> (w/ shadowing):
>>>  Performance counter stats for './vring_bench':
>>>  5,405,701,361  L1-dcache-loads
>>>  ...
>>>2.168405376 seconds time elapsed
>> Could you supply the full command line you used
>> to test this?
> Yes --
>
> perf stat -e L1-dcache-loads,L1-dcache-load-misses,L1-dcache-stores \
>   ./vring_bench
>
> The standard version of vring_bench is single-threaded (posted on this list
> but never submitted); my tests were with a version that has a worker thread
> polling the VQ. How should I share it? Should I just attach it to an email
> here?
>
>>> The further away (in a NUMA sense) virtio producers and consumers are
>>> from each other, the more we expect to benefit. Physical implementations
>>> of virtio devices and implementations of virtio where the consumer polls
>>> vring avail indexes (vhost) should also benefit.
>>>
>>> Signed-off-by: Venkatesh Srinivas 
>> Here's a similar patch for the ring itself:
>> https://lkml.org/lkml/2015/9/10/111
>>
>> Does it help you as well?
> I tested your patch in our environment; our virtqueues do not support
> Indirect entries and your patch does not manage to elide many writes, so I
> do not see a performance difference. In an environment with Indirect, your
> patch will likely be a win.
>
> (My patch gets most of its win by eliminating reads on the producer; when
> the producer reads avail fields at the same time the consumer is polling,
> we see cacheline transfers that hurt performance. Your patch eliminates
> writes, which is nice, but our tests w/ polling are not as sensitive to
> writes from the producer.)
>
> I have two quick comments on your patch --
> 1) I think you need to kfree vq->avail when deleting the virtqueue.
>
> 2) Should we avoid allocating a cache for virtqueues that are not
>performance critical? (ex: virtio-scsi eventq/controlq, virtio-net
>controlq)
>
> Should I post comments in reply to the original patch email (given that it
> is ~2 months old)?
>
> Thanks!
> -- vs;
>



Re: [PATCH] virtio_ring: Shadow available ring flags & index

2015-11-13 Thread Venkatesh Srinivas
On Wed, Nov 11, 2015 at 02:34:33PM +0200, Michael S. Tsirkin wrote:
> On Tue, Nov 10, 2015 at 04:21:07PM -0800, Venkatesh Srinivas wrote:
> > Improves cacheline transfer flow of available ring header.
> >
> > Virtqueues are implemented as a pair of rings, one producer->consumer
> > avail ring and one consumer->producer used ring; preceding the
> > avail ring in memory are two contiguous u16 fields -- avail->flags
> > and avail->idx. A producer posts work by writing to avail->idx and
> > a consumer reads avail->idx.
> >
> > The flags and idx fields only need to be written by a producer CPU
> > and only read by a consumer CPU; when the producer and consumer are
> > running on different CPUs and the virtio_ring code is structured to
> > only have source writes/sink reads, we can continuously transfer the
> > avail header cacheline between 'M' states between cores. This flow
> > optimizes core -> core bandwidth on certain CPUs.
> >
> > (see: "Software Optimization Guide for AMD Family 15h Processors",
> > Section 11.6; similar language appears in the 10h guide and should
> > apply to CPUs w/ exclusive caches, using LLC as a transfer cache)
> >
> > Unfortunately the existing virtio_ring code issued reads to the
> > avail->idx and read-modify-writes to avail->flags on the producer.
> >
> > This change shadows the flags and index fields in producer memory;
> > the vring code now reads from the shadows and only ever writes to
> > avail->flags and avail->idx, allowing the cacheline to transfer
> > core -> core optimally.
>
> Sounds logical, I'll apply this after a bit of testing
> of my own, thanks!

Thanks!

> > In a concurrent version of vring_bench, the time required for
> > 10,000,000 buffer checkout/returns was reduced by ~2% (average
> > across many runs) on an AMD Piledriver (15h) CPU:
> >
> > (w/o shadowing):
> >  Performance counter stats for './vring_bench':
> >  5,451,082,016  L1-dcache-loads
> >  ...
> >2.221477739 seconds time elapsed
> >
> > (w/ shadowing):
> >  Performance counter stats for './vring_bench':
> >  5,405,701,361  L1-dcache-loads
> >  ...
> >2.168405376 seconds time elapsed
>
> Could you supply the full command line you used
> to test this?

Yes --

perf stat -e L1-dcache-loads,L1-dcache-load-misses,L1-dcache-stores \
./vring_bench

The standard version of vring_bench is single-threaded (posted on this list
but never submitted); my tests were with a version that has a worker thread
polling the VQ. How should I share it? Should I just attach it to an email
here?

> > The further away (in a NUMA sense) virtio producers and consumers are
> > from each other, the more we expect to benefit. Physical implementations
> > of virtio devices and implementations of virtio where the consumer polls
> > vring avail indexes (vhost) should also benefit.
> >
> > Signed-off-by: Venkatesh Srinivas 
>
> Here's a similar patch for the ring itself:
> https://lkml.org/lkml/2015/9/10/111
>
> Does it help you as well?

I tested your patch in our environment; our virtqueues do not support
Indirect entries and your patch does not manage to elide many writes, so I
do not see a performance difference. In an environment with Indirect, your
patch will likely be a win.

(My patch gets most of its win by eliminating reads on the producer; when
the producer reads avail fields at the same time the consumer is polling,
we see cacheline transfers that hurt performance. Your patch eliminates
writes, which is nice, but our tests w/ polling are not as sensitive to
writes from the producer.)
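
(A minimal userspace sketch of the discipline described above -- all producer
reads and read-modify-writes hit a private shadow, and the shared header only
ever sees stores. Struct and function names here are illustrative, not the
kernel's.)

```c
#include <stdint.h>

/* Producer keeps a private copy of avail->idx; the shared field is
 * store-only from this side, so the producer never pulls the avail
 * header cacheline back just to read or RMW it. */
struct producer {
    uint16_t avail_idx_shadow;     /* private: all reads/RMWs use this */
    volatile uint16_t *shared_idx; /* shared with consumer: store-only */
};

static void publish_one(struct producer *p)
{
    p->avail_idx_shadow++;                /* RMW on the private copy */
    *p->shared_idx = p->avail_idx_shadow; /* single store to shared line */
}
```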

I have two quick comments on your patch --
1) I think you need to kfree vq->avail when deleting the virtqueue.

2) Should we avoid allocating a cache for virtqueues that are not
   performance critical? (ex: virtio-scsi eventq/controlq, virtio-net
   controlq)

Should I post comments in reply to the original patch email (given that it
is ~2 months old)?

Thanks!
-- vs;


Re: [PATCH] virtio_ring: Shadow available ring flags & index

2015-11-11 Thread Michael S. Tsirkin
On Tue, Nov 10, 2015 at 04:21:07PM -0800, Venkatesh Srinivas wrote:
> Improves cacheline transfer flow of available ring header.
> 
> Virtqueues are implemented as a pair of rings, one producer->consumer
> avail ring and one consumer->producer used ring; preceding the
> avail ring in memory are two contiguous u16 fields -- avail->flags
> and avail->idx. A producer posts work by writing to avail->idx and
> a consumer reads avail->idx.
> 
> The flags and idx fields only need to be written by a producer CPU
> and only read by a consumer CPU; when the producer and consumer are
> running on different CPUs and the virtio_ring code is structured to
> only have source writes/sink reads, we can continuously transfer the
> avail header cacheline between 'M' states between cores. This flow
> optimizes core -> core bandwidth on certain CPUs.
> 
> (see: "Software Optimization Guide for AMD Family 15h Processors",
> Section 11.6; similar language appears in the 10h guide and should
> apply to CPUs w/ exclusive caches, using LLC as a transfer cache)
> 
> Unfortunately the existing virtio_ring code issued reads to the
> avail->idx and read-modify-writes to avail->flags on the producer.
> 
> This change shadows the flags and index fields in producer memory;
> the vring code now reads from the shadows and only ever writes to
> avail->flags and avail->idx, allowing the cacheline to transfer
> core -> core optimally.

Sounds logical, I'll apply this after a bit of testing
of my own, thanks!

> In a concurrent version of vring_bench, the time required for
> 10,000,000 buffer checkout/returns was reduced by ~2% (average
> across many runs) on an AMD Piledriver (15h) CPU:
> 
> (w/o shadowing):
>  Performance counter stats for './vring_bench':
>  5,451,082,016  L1-dcache-loads
>  ...
>2.221477739 seconds time elapsed
> 
> (w/ shadowing):
>  Performance counter stats for './vring_bench':
>  5,405,701,361  L1-dcache-loads
>  ...
>2.168405376 seconds time elapsed

Could you supply the full command line you used
to test this?

> The further away (in a NUMA sense) virtio producers and consumers are
> from each other, the more we expect to benefit. Physical implementations
> of virtio devices and implementations of virtio where the consumer polls
> vring avail indexes (vhost) should also benefit.
> 
> Signed-off-by: Venkatesh Srinivas 

Here's a similar patch for the ring itself:
https://lkml.org/lkml/2015/9/10/111

Does it help you as well?


> ---
>  drivers/virtio/virtio_ring.c | 46 
> 
>  1 file changed, 34 insertions(+), 12 deletions(-)
> 
> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> index 096b857..6262015 100644
> --- a/drivers/virtio/virtio_ring.c
> +++ b/drivers/virtio/virtio_ring.c
> @@ -80,6 +80,12 @@ struct vring_virtqueue {
>   /* Last used index we've seen. */
>   u16 last_used_idx;
>  
> + /* Last written value to avail->flags */
> + u16 avail_flags_shadow;
> +
> + /* Last written value to avail->idx in guest byte order */
> + u16 avail_idx_shadow;
> +
>   /* How to notify other side. FIXME: commonalize hcalls! */
>   bool (*notify)(struct virtqueue *vq);
>  
> @@ -235,13 +241,14 @@ static inline int virtqueue_add(struct virtqueue *_vq,
>  
>   /* Put entry in available array (but don't update avail->idx until they
>* do sync). */
> - avail = virtio16_to_cpu(_vq->vdev, vq->vring.avail->idx) & 
> (vq->vring.num - 1);
> + avail = vq->avail_idx_shadow & (vq->vring.num - 1);
>   vq->vring.avail->ring[avail] = cpu_to_virtio16(_vq->vdev, head);
>  
>   /* Descriptors and available array need to be set before we expose the
>* new available array entries. */
>   virtio_wmb(vq->weak_barriers);
> - vq->vring.avail->idx = cpu_to_virtio16(_vq->vdev, 
> virtio16_to_cpu(_vq->vdev, vq->vring.avail->idx) + 1);
> + vq->avail_idx_shadow++;
> + vq->vring.avail->idx = cpu_to_virtio16(_vq->vdev, vq->avail_idx_shadow);
>   vq->num_added++;
>  
>   pr_debug("Added buffer head %i to %p\n", head, vq);
> @@ -354,8 +361,8 @@ bool virtqueue_kick_prepare(struct virtqueue *_vq)
>* event. */
>   virtio_mb(vq->weak_barriers);
>  
> - old = virtio16_to_cpu(_vq->vdev, vq->vring.avail->idx) - vq->num_added;
> - new = virtio16_to_cpu(_vq->vdev, vq->vring.avail->idx);
> + old = vq->avail_idx_shadow - vq->num_added;
> + new = vq->avail_idx_shadow;
>   vq->num_added = 0;
>  
>  #ifdef DEBUG
> @@ -510,7 +517,7 @@ void *virtqueue_get_buf(struct virtqueue *_vq, unsigned 
> int *len)
>   /* If we expect an interrupt for the next entry, tell host
>* by writing event index and flush out the write before
>* the read in the next get_buf call. */
> - if (!(vq->vring.avail->flags & cpu_to_virtio16(_vq->vdev, 
> VRING_AVAIL_F_NO_INTERRUPT))) {
> + if (!(vq->avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT)) 

[PATCH] virtio_ring: Shadow available ring flags & index

2015-11-10 Thread Venkatesh Srinivas
Improves cacheline transfer flow of available ring header.

Virtqueues are implemented as a pair of rings, one producer->consumer
avail ring and one consumer->producer used ring; preceding the
avail ring in memory are two contiguous u16 fields -- avail->flags
and avail->idx. A producer posts work by writing to avail->idx and
a consumer reads avail->idx.
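
(For reference, the layout just described is the split-ring available ring
from the virtio spec; shown with plain uint16_t rather than __virtio16 for
brevity.)

```c
#include <stddef.h>
#include <stdint.h>

/* Available ring as laid out in guest memory: flags and idx are the two
 * contiguous u16 fields that precede the ring, so they share a cacheline
 * with the start of ring[]. */
struct vring_avail {
    uint16_t flags;  /* e.g. VRING_AVAIL_F_NO_INTERRUPT */
    uint16_t idx;    /* producer bumps this to post work */
    uint16_t ring[]; /* descriptor head indexes, vring.num entries */
};
```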

The flags and idx fields only need to be written by a producer CPU
and only read by a consumer CPU; when the producer and consumer are
running on different CPUs and the virtio_ring code is structured to
only have source writes/sink reads, we can continuously transfer the
avail header cacheline between cores in the 'M' (Modified) state. This
flow optimizes core -> core bandwidth on certain CPUs.

(see: "Software Optimization Guide for AMD Family 15h Processors",
Section 11.6; similar language appears in the 10h guide and should
apply to CPUs w/ exclusive caches, using LLC as a transfer cache)

Unfortunately, the existing virtio_ring code issued reads of
avail->idx and read-modify-writes of avail->flags on the producer side.

This change shadows the flags and index fields in producer memory;
the vring code now reads from the shadows and only ever writes to
avail->flags and avail->idx, allowing the cacheline to transfer
core -> core optimally.

In a concurrent version of vring_bench, the time required for
10,000,000 buffer checkout/returns was reduced by ~2% (average
across many runs) on an AMD Piledriver (15h) CPU:

(w/o shadowing):
 Performance counter stats for './vring_bench':
 5,451,082,016  L1-dcache-loads
 ...
   2.221477739 seconds time elapsed

(w/ shadowing):
 Performance counter stats for './vring_bench':
 5,405,701,361  L1-dcache-loads
 ...
   2.168405376 seconds time elapsed

The further away (in a NUMA sense) virtio producers and consumers are
from each other, the more we expect to benefit. Physical implementations
of virtio devices and implementations of virtio where the consumer polls
vring avail indexes (vhost) should also benefit.

Signed-off-by: Venkatesh Srinivas 
---
 drivers/virtio/virtio_ring.c | 46 
 1 file changed, 34 insertions(+), 12 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 096b857..6262015 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -80,6 +80,12 @@ struct vring_virtqueue {
/* Last used index we've seen. */
u16 last_used_idx;
 
+   /* Last written value to avail->flags */
+   u16 avail_flags_shadow;
+
+   /* Last written value to avail->idx in guest byte order */
+   u16 avail_idx_shadow;
+
/* How to notify other side. FIXME: commonalize hcalls! */
bool (*notify)(struct virtqueue *vq);
 
@@ -235,13 +241,14 @@ static inline int virtqueue_add(struct virtqueue *_vq,
 
/* Put entry in available array (but don't update avail->idx until they
 * do sync). */
-   avail = virtio16_to_cpu(_vq->vdev, vq->vring.avail->idx) & (vq->vring.num - 1);
+   avail = vq->avail_idx_shadow & (vq->vring.num - 1);
vq->vring.avail->ring[avail] = cpu_to_virtio16(_vq->vdev, head);
 
/* Descriptors and available array need to be set before we expose the
 * new available array entries. */
virtio_wmb(vq->weak_barriers);
-   vq->vring.avail->idx = cpu_to_virtio16(_vq->vdev, virtio16_to_cpu(_vq->vdev, vq->vring.avail->idx) + 1);
+   vq->avail_idx_shadow++;
+   vq->vring.avail->idx = cpu_to_virtio16(_vq->vdev, vq->avail_idx_shadow);
vq->num_added++;
 
pr_debug("Added buffer head %i to %p\n", head, vq);
@@ -354,8 +361,8 @@ bool virtqueue_kick_prepare(struct virtqueue *_vq)
 * event. */
virtio_mb(vq->weak_barriers);
 
-   old = virtio16_to_cpu(_vq->vdev, vq->vring.avail->idx) - vq->num_added;
-   new = virtio16_to_cpu(_vq->vdev, vq->vring.avail->idx);
+   old = vq->avail_idx_shadow - vq->num_added;
+   new = vq->avail_idx_shadow;
vq->num_added = 0;
 
 #ifdef DEBUG
@@ -510,7 +517,7 @@ void *virtqueue_get_buf(struct virtqueue *_vq, unsigned int *len)
/* If we expect an interrupt for the next entry, tell host
 * by writing event index and flush out the write before
 * the read in the next get_buf call. */
-   if (!(vq->vring.avail->flags & cpu_to_virtio16(_vq->vdev, VRING_AVAIL_F_NO_INTERRUPT))) {
+   if (!(vq->avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT)) {
vring_used_event(&vq->vring) = cpu_to_virtio16(_vq->vdev, vq->last_used_idx);
virtio_mb(vq->weak_barriers);
}
@@ -537,7 +544,11 @@ void virtqueue_disable_cb(struct virtqueue *_vq)
 {
struct vring_virtqueue *vq = to_vvq(_vq);
 
-   vq->vring.avail->flags |= cpu_to_virtio16(_vq->vdev, VRING_AVAIL_F_NO_INTERRUPT);
+   if (!(vq->avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT)) {
+   vq->avail_flags_shadow |= VRING_AVAIL_F_NO_INTERRUPT;
+   vq->vring.avail->flags = cpu_to_virtio16(_vq->vdev, vq->avail_flags_shadow);
+   }