On Tue, Feb 27, 2018 at 10:29:26AM +0800, Jason Wang wrote:
> 
> 
> On 2018年02月27日 04:34, Michael S. Tsirkin wrote:
> > On Mon, Feb 26, 2018 at 11:15:42AM +0800, Jason Wang wrote:
> > > On 2018年02月26日 09:17, Michael S. Tsirkin wrote:
> > > > So pointer rings work fine, but they have a problem: make them too small
> > > > and not enough entries fit.  Make them too large and you start flushing
> > > > your cache and running out of memory.
> > > > 
> > > > This is a new idea of mine: a ring backed by a linked list. Once you run
> > > > out of ring entries, instead of a drop you fall back on a list with a
> > > > common lock.
> > > > 
> > > > Should work well for the case where the ring is typically sized
> > > > correctly, but will help address the fact that some users try to set e.g.
> > > > tx queue length to 1000000.
> > > > 
> > > > In other words, the idea is that if a user sets a really huge TX queue
> > > > length, we allocate a ptr_ring which is smaller, and use the backup
> > > > linked list when necessary to provide the requested TX queue length
> > > > legitimately.
> > > > 
> > > > My hope is that this will move us closer to a direction where e.g. fq codel can
> > > > use ptr rings without locking at all.  The API is still very rough, and
> > > > I really need to take a hard look at lock nesting.
> > > > 
> > > > Compiled only, sending for early feedback/flames.
> > > > 
> > > > Signed-off-by: Michael S. Tsirkin <m...@redhat.com>
> > > > ---
> > > > 
> > > > changes from v1:
> > > > - added clarifications by DaveM in the commit log
> > > > - build fixes
> > > > 
> > > >    include/linux/ptr_ring.h | 64 +++++++++++++++++++++++++++++++++++++++++++++---
> > > >    1 file changed, 61 insertions(+), 3 deletions(-)
> > > > 
> > > > diff --git a/include/linux/ptr_ring.h b/include/linux/ptr_ring.h
> > > > index d72b2e7..8aa8882 100644
> > > > --- a/include/linux/ptr_ring.h
> > > > +++ b/include/linux/ptr_ring.h
> > > > @@ -31,11 +31,18 @@
> > > >    #include <asm/errno.h>
> > > >    #endif
> > > > +/* entries must start with the following structure */
> > > > +struct plist {
> > > > +       struct plist *next;
> > > > +       struct plist *last; /* only valid in the 1st entry */
> > > > +};
> > > So I wonder whether or not it's better to do this in e.g. the skb_array
> > > implementation. Then it can use its own prev/next fields.
> > XDP uses ptr ring directly, doesn't it?
> > 
> 
> Well I believe the main user for this is qdisc, which uses skb array. And we
> cannot use what is implemented in this patch directly for sk_buff without some
> changes to the data structure.

Why not? skb has next and prev pointers as its first two fields:

struct sk_buff {
        union { 
                struct {
                        /* These two members must be first. */
                        struct sk_buff          *next;
                        struct sk_buff          *prev;
...
}

so it's just a question of casting to struct plist.
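
To make that concrete, here is a minimal sketch of the casting argument;
skb_to_plist is a hypothetical helper, not something in the patch:

static inline struct plist *skb_to_plist(struct sk_buff *skb)
{
	/* Valid only because next/prev are guaranteed to be the first two
	 * members of struct sk_buff, so they line up with plist->next and
	 * plist->last.
	 */
	BUILD_BUG_ON(offsetof(struct sk_buff, next) != 0);
	return (struct plist *)skb;
}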

Or we can add plist to a union:


struct sk_buff {
        union { 
                struct {
                        /* These two members must be first. */
                        struct sk_buff          *next;
                        struct sk_buff          *prev;
                        
                        union { 
                                struct net_device       *dev;
                                /* Some protocols might use this space to store
                                 * information, while device pointer would be NULL.
                                 * UDP receive path is one user.
                                 */
                                unsigned long           dev_scratch;
                        };
                };
                struct rb_node  rbnode; /* used in netem & tcp stack */
+               struct plist plist; /* For use with ptr_ring */
        };



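Either way, the produce path this is aiming at would look roughly like the
sketch below. Names and fields here (skb_ring_with_backlog, backlog,
skb_ring_produce) are hypothetical, not the patch's actual API:

#include <linux/ptr_ring.h>
#include <linux/skbuff.h>

/* Hypothetical wrapper, for illustration only: try the lockless ring
 * first, and fall back to a locked list when the ring is full, instead
 * of dropping.
 */
struct skb_ring_with_backlog {
	struct ptr_ring ring;		/* fast path, sized "correctly" */
	struct sk_buff_head backlog;	/* slow path, has its own lock */
};

static inline int skb_ring_produce(struct skb_ring_with_backlog *r,
				   struct sk_buff *skb)
{
	if (!ptr_ring_produce(&r->ring, skb))
		return 0;			/* ring had room */

	/* Ring full: queue on the backup list under its lock. */
	skb_queue_tail(&r->backlog, skb);
	return 0;
}

The consume side would presumably drain the ring first and refill it from
the backlog under its lock, which is where the lock nesting question from
the commit log comes in.
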
> For XDP, we need to embed plist in struct xdp_buff too,

Right - that's pretty straightforward, isn't it?
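
Sketching it with a simplified stand-in (the non-plist fields below are
illustrative, not the real xdp_buff layout):

/* Embed the plist as the first member, so XDP entries satisfy the
 * "entries must start with struct plist" rule the same way sk_buff does.
 */
struct xdp_buff_sketch {
	struct plist plist;		/* must be first */
	void *data;
	void *data_end;
	void *data_hard_start;
};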

> so it looks to me
> that the better approach is to have separate functions for ptr ring and skb
> array.
> 
> Thanks
