> On 21 Mar 2019, at 17:11, Nitin Saxena <nsax...@marvell.com> wrote:
> 
> Hi Damjan,
> 
> See my comments inline.
> 
> From: Damjan Marion <dmar...@me.com>
> Sent: Thursday, March 21, 2019 9:34 PM
> To: Nitin Saxena
> Cc: vpp-dev@lists.fd.io; Narayana Prasad Raju Athreya
> Subject: Re: [EXT] Re: [vpp-dev] 128 byte cache line support
>  
> 
> 
> > On 21 Mar 2019, at 16:36, Nitin Saxena <nsax...@marvell.com> wrote:
> > 
> > Hi Damjan,
> > 
> > From: Damjan Marion <dmar...@me.com>
> > Sent: Thursday, March 21, 2019 8:47 PM
> > To: Nitin Saxena
> > Cc: vpp-dev@lists.fd.io; Narayana Prasad Raju Athreya
> > Subject: Re: [EXT] Re: [vpp-dev] 128 byte cache line support
> >  
> > 
> > 
> >> On 21 Mar 2019, at 16:04, Nitin Saxena <nsax...@marvell.com> wrote:
> >> 
> >> Hi Damjan,
> >> 
> >> From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> on behalf of Damjan 
> >> Marion via Lists.Fd.Io <dmarion=me....@lists.fd.io>
> >> Sent: Thursday, March 21, 2019 6:03 PM
> >> To: Nitin Saxena
> >> Cc: vpp-dev@lists.fd.io
> >> Subject: Re: [EXT] Re: [vpp-dev] 128 byte cache line support
> >>  
> >> 
> >> 
> >> > On 21 Mar 2019, at 06:51, Nitin Saxena <nsax...@marvell.com> wrote:
> >> > 
> >> > Hi,
> >> > 
> >> > First of all, sorry for responding late to this mail chain. Please see 
> >> > my answers inline in blue.
> >> > 
> >> > Thanks,
> >> > Nitin
> >> > 
> >> > 
> >> > From: Damjan Marion <dmar...@me.com>
> >> > Sent: Monday, March 18, 2019 4:48 PM
> >> > To: Honnappa Nagarahalli
> >> > Cc: vpp-dev; Nitin Saxena
> >> > Subject: [EXT] Re: [vpp-dev] 128 byte cache line support
> >> >  
> >> > External Email
> >> > 
> >> > 
> >> >> On 15 Mar 2019, at 04:52, Honnappa Nagarahalli 
> >> >> <honnappa.nagaraha...@arm.com> wrote:
> >> >> 
> >> >>  
> >> >> 
> >> >> Related to change 18278[1], I was wondering if there is really a 
> >> >> benefit of dealing with 128-byte cachelines like we do today.
> >> >> Compiling VPP with cacheline size set to 128 will basically just add 64 
> >> >> bytes of unused space at the end of each cacheline so
> >> >> vlib_buffer_t for example will grow from 128 bytes to 256 bytes, but we 
> >> >> will still need to prefetch 2 cachelines like we do by default.
> >> >> 
> >> >> [Nitin]: This is the existing model. In the forwarding case, mainly 
> >> >> the first vlib cache line is used. We are utilising the existing hole 
> >> >> (in the first vlib cache line) by putting packet parsing info there 
> >> >> (size == 64B). This has many benefits; one of them is avoiding the 
> >> >> ipv4-input-no-chksum() software checks, which gives us ~20 cycles of 
> >> >> benefit on our platform, so I do not want to lose that gain.
> >> 
> >> That sounds like a terribly bad idea, and it likely will never be 
> >> upstreamed.
> >> vlib_buffer_t is a 128-byte data structure, and it is a perfect fit for 
> >> systems with a 128-byte cacheline. I don't see a point in growing it to 
> >> 256 bytes.
> >> If you need more space, you can always grow headroom space for additional 
> >> cacheline and store whatever you want there.
> >> [Nitin]: In the current upstream code, VLIB_BUFFER_HDR_SIZE is 256 bytes 
> >> for 128B cache line targets, not 128 bytes. So we haven't made any 
> >> changes to the upstream code except introducing a 
> >> CLIB_MIN_CACHE_LINE_BYTES == 64 macro and using it for the MARKERs across 
> >> vlib_buffer_t. We are therefore utilising the existing (unused) hole in 
> >> the first cache line of vlib_buffer_t.
> > 
> > I understood that; I'm proposing that we fix that so the header size is 
> > always 128, as it should be.
> > [Nitin]: At this point I am not in favour of changing the existing 
> > vlib_buffer_t structure, as I already mentioned that we gain ~20 cycles 
> > in the L3 forwarding case. I don't have data on how much it would help me 
> > in TCP or UDP handling, where we would touch the second cache line of the 
> > vlib header.
> 
> What you are doing in your closed repo cannot be an argument here.
> We need to do what is right for everybody, not something to support your 
> closed-repo hacks.
> [Nitin]: We plan to upstream that piece of code. We are not willing to keep 
> it closed, as it is in line with the upstream code. Currently we are in the 
> process of adding a hardware mempool to VPP and a native plugin for our 
> target. We plan to upstream those changes, but we cannot upstream code 
> while it is failing on our target.
> 
> If you plan to keep it closed source, you can do whatever you want with 
> that code anyway.
> If you want to upstream it, the right thing to do is to discuss such 
> changes before coding them.
> [Nitin]: Actually, I did discuss this piece of code: we implemented the 
> "ipv4-input-nochksum-nottl-nodata" node which you and Dave had suggested. 

Storing data in vlib_buffer_t as you described above is definitely not 
something we agreed on and is hardly something we will upstream.


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12607): https://lists.fd.io/g/vpp-dev/message/12607
Mute This Topic: https://lists.fd.io/mt/30699557/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-