> -----Original Message-----
> From: Stephen Hemminger [mailto:step...@networkplumber.org]
> Sent: Friday, January 27, 2017 10:36 AM
> To: KY Srinivasan <k...@microsoft.com>
> Cc: Haiyang Zhang <haiya...@microsoft.com>;
> de...@linuxdriverproject.org; Stephen Hemminger
> <sthem...@microsoft.com>
> Subject: Re: [PATCH 07/14] vmbus: remove conditional locking of
> vmbus_write
> 
> On Fri, 27 Jan 2017 18:08:35 +0000
> KY Srinivasan <k...@microsoft.com> wrote:
> 
> > > -----Original Message-----
> > > From: Stephen Hemminger [mailto:step...@networkplumber.org]
> > > Sent: Monday, January 23, 2017 5:40 PM
> > > To: KY Srinivasan <k...@microsoft.com>; Haiyang Zhang
> > > <haiya...@microsoft.com>
> > > Cc: de...@linuxdriverproject.org; Stephen Hemminger
> > > <sthem...@microsoft.com>
> > > Subject: [PATCH 07/14] vmbus: remove conditional locking of vmbus_write
> > >
> > > All current usage of vmbus write uses the acquire_lock flag, therefore
> > > having it be optional is unnecessary. This also fixes a sparse warning,
> > > since sparse doesn't like it when a function has conditional locking.
> >
> > In order to avoid cross-tree dependency, I got this functionality into
> > Greg's tree first. I plan to submit patches to netvsc that will use this
> > functionality. As you know, we hold a higher level lock in the protocol
> > stack when we send on a sub-channel. This will avoid an unnecessary spin
> > lock round trip in the data path.
> >
> > Regards,
> >
> > K. Y
> 
> Conditional locking is in general a bad idea because it breaks static
> analysis tools like sparse. If you need a non-locking code path, it is
> better to have locked and unlocked versions of the same function.

Agreed. That said, I think this pattern is used in the kernel in many places.
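
Just to make sure we are on the same page, below is a rough sketch of the
split you are describing, with a locked wrapper around an unlocked helper.
All of the names (foo_ring, __foo_ring_write, foo_ring_write) are made up
for illustration; this is not the actual vmbus ring-buffer interface.

#include <linux/spinlock.h>
#include <linux/types.h>
#include <linux/uio.h>

/* Hypothetical ring type, for illustration only. */
struct foo_ring {
        spinlock_t lock;
        /* ring buffer state would live here */
};

/* Unlocked variant: the caller is responsible for serialization. */
static int __foo_ring_write(struct foo_ring *ring,
                            const struct kvec *kv, u32 kv_count)
{
        /* copy the iovec into the ring buffer here */
        return 0;
}

/* Locked variant: takes the ring lock itself. */
static int foo_ring_write(struct foo_ring *ring,
                          const struct kvec *kv, u32 kv_count)
{
        unsigned long flags;
        int ret;

        spin_lock_irqsave(&ring->lock, flags);
        ret = __foo_ring_write(ring, kv, kv_count);
        spin_unlock_irqrestore(&ring->lock, flags);

        return ret;
}

Callers that cannot guarantee a single writer use foo_ring_write(); callers
that are already serialized call the __ variant directly, and sparse sees
unconditional locking in both cases.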
> 
> Also, in networking the receive path is not necessarily single threaded.
> With my per-channel tasklet (and later NAPI), each channel's receive path
> would be single threaded, but races are still possible in the shutdown
> logic (as written). Longer term, it would be best if all vmbus events were
> handled per-channel without a lock, i.e. networking is not a special case.

The receive path is single threaded since the channel is bound to a CPU and all
receive interrupts are delivered on the affinitized CPU. Indeed, on the read
side we don't acquire any lock on the ring buffer (see hv_ringbuffer_read).

The optimization I wanted to implement is on the write side. The sub-channel
send code in netvsc is already serialized by a lock acquired at the upper level,
and this is what I wanted to take advantage of.
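
Roughly, the send-side path I have in mind would look like the sketch below
(continuing the made-up names from the earlier sketch; which upper-level lock
does the serializing in netvsc is left out here, the point is only that the
upper layer already guarantees a single writer per sub-channel):

/*
 * Hypothetical sub-channel send: the caller is already serialized by a
 * higher-level lock, so the per-ring spin lock round trip is skipped by
 * calling the unlocked variant directly.
 */
static int foo_subchannel_send(struct foo_ring *ring,
                               const struct kvec *kv, u32 kv_count)
{
        /* Serialization is the caller's responsibility on this path. */
        return __foo_ring_write(ring, kv, kv_count);
}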

> 
> Probably best to figure out how to make all VMBUS ring access lockless, i.e.
> it is the caller's responsibility. Storage and networking are multi-queue
> already.

This is probably the best way forward.
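
If we do go fully lockless in vmbus, I would want the ownership of the lock to
be explicit in each driver. As an illustrative sketch only (hypothetical names
again, building on the earlier sketch), the spin lock round trip moves out of
the ring code and into whichever driver still needs one:

/* Hypothetical per-channel state owned by the driver, not by vmbus. */
struct foo_channel {
        spinlock_t tx_lock;     /* driver-owned serialization */
        struct foo_ring ring;
};

static int foo_driver_send(struct foo_channel *chan,
                           const struct kvec *kv, u32 kv_count)
{
        int ret;

        /* The ring write itself does no locking at all. */
        spin_lock_bh(&chan->tx_lock);
        ret = __foo_ring_write(&chan->ring, kv, kv_count);
        spin_unlock_bh(&chan->tx_lock);

        return ret;
}

Drivers that are single-writer by construction (one channel per CPU, or an
upper-layer lock) would skip the driver lock entirely.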

Regards,

K. Y