Hi Jason,

On Mon, Oct 21, 2019 at 01:48:53PM +0800, Jason Wang wrote:
> > On 2019/10/19 4:58 AM, Will Deacon wrote:
> > Staring at the code some more, my best bet at the moment
> > is that the load of 'indirect->addr' is probably the one to worry about,
> > since it's part of the vring and can be updated concurrently.
> 
> I'm also confused about the barrier here, basically in driver side we did:
> 
> 1) allocate pages
> 
> 2) store pages in indirect->addr
> 
> 3) smp_wmb()
> 
> 4) increase the avail idx (effectively the tail pointer of the vring)
> 
> in vhost we did:
> 
> 1) read avail idx
> 
> 2) smp_rmb()
> 
> 3) read indirect->addr
> 
> 4) read from indirect->addr
> 
> It looks to me that even the data dependency barrier is not necessary,
> since we have an smp_rmb() which is sufficient for us to read the correct
> indirect->addr, and the driver is not expected to do any writing to
> indirect->addr after the avail idx is increased?

That makes sense to me: the synchronization is done via vq->avail_idx,
and so the subsequent accesses of the indirect pages don't need additional
barriers, even on Alpha. Thanks.

I'll write up a patch and adopt your explanation above.

Will
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization