[vpp-dev] Run failed on 1807 version vpp code on centos7.

2018-10-31 Thread 沈先捷
Hi,
 
 
make run didn't seem to work.

Logs:
[root@localhost vpp]# ls
build-data  build-root  docs  doxygen  dpdk  extras  gmod  LICENSE  MAINTAINERS 
 Makefile  README.md  RELEASE.md  src  test  virtualenv
[root@localhost vpp]# 
[root@localhost vpp]# make run
WARNING: STARTUP_CONF not defined or file doesn't exist.
         Running with minimal startup config:  unix { interactive cli-listen 
/run/vpp/cli.sock gid 0 } \n
vlib_plugin_early_init:361: plugin path 
/root/vpp/build-root/install-vpp_debug-native/vpp/lib/vpp_plugins:/root/vpp/build-root/install-vpp_debug-native/vpp/lib64/vpp_plugins
load_one_plugin:189: Loaded plugin: abf_plugin.so (ACL based Forwarding)
load_one_plugin:189: Loaded plugin: acl_plugin.so (Access Control Lists)
load_one_plugin:189: Loaded plugin: avf_plugin.so (Intel Adaptive Virtual 
Function (AVF) Device Plugin)
load_one_plugin:191: Loaded plugin: cdp_plugin.so
load_one_plugin:189: Loaded plugin: dpdk_plugin.so (Data Plane Development Kit 
(DPDK))
load_one_plugin:189: Loaded plugin: flowprobe_plugin.so (Flow per Packet)
load_one_plugin:189: Loaded plugin: gbp_plugin.so (Group Based Policy)
load_one_plugin:189: Loaded plugin: gtpu_plugin.so (GTPv1-U)
load_one_plugin:189: Loaded plugin: igmp_plugin.so (IGMP messaging)
load_one_plugin:189: Loaded plugin: ila_plugin.so (Identifier-locator 
addressing for IPv6)
load_one_plugin:189: Loaded plugin: ioam_plugin.so (Inbound OAM)
load_one_plugin:117: Plugin disabled (default): ixge_plugin.so
load_one_plugin:189: Loaded plugin: l2e_plugin.so (L2 Emulation)
load_one_plugin:189: Loaded plugin: lacp_plugin.so (Link Aggregation Control 
Protocol)
load_one_plugin:189: Loaded plugin: lb_plugin.so (Load Balancer)
load_one_plugin:189: Loaded plugin: mactime_plugin.so (Time-based MAC 
source-address filter)
load_one_plugin:189: Loaded plugin: map_plugin.so (Mapping of address and port 
(MAP))
load_one_plugin:189: Loaded plugin: memif_plugin.so (Packet Memory Interface 
(experimetal))
load_one_plugin:189: Loaded plugin: nat_plugin.so (Network Address Translation)
load_one_plugin:189: Loaded plugin: pppoe_plugin.so (PPPoE)
load_one_plugin:189: Loaded plugin: srv6ad_plugin.so (Dynamic SRv6 proxy)
load_one_plugin:189: Loaded plugin: srv6am_plugin.so (Masquerading SRv6 proxy)
load_one_plugin:189: Loaded plugin: srv6as_plugin.so (Static SRv6 proxy)
load_one_plugin:189: Loaded plugin: stn_plugin.so (VPP Steals the NIC for 
Container integration)
load_one_plugin:189: Loaded plugin: tlsopenssl_plugin.so (openssl based TLS 
Engine)
svm_map_region:748: region /global_vm mutex held by dead pid 10351, tag 2, 
force unlock
svm_map_region:756: recovery: attempt to re-lock region
/root/vpp/build-data/../src/vppinfra/vec.h:134 (_vec_resize_inline) assertion 
`clib_mem_is_heap_object (p)' fails
/root/vpp/build-data/../src/vppinfra/vec.h:134 (_vec_resize_inline) assertion 
`clib_mem_is_heap_object (p)' fails
/root/vpp/build-data/../src/vppinfra/vec.h:134 (_vec_resize_inline) assertion 
`clib_mem_is_heap_object (p)' fails
/root/vpp/build-data/../src/vppinfra/vec.h:134 (_vec_resize_inline) assertion 
`clib_mem_is_heap_object (p)' fails
/root/vpp/build-data/../src/vppinfra/vec.h:134 (_vec_resize_inline) assertion 
`clib_mem_is_heap_object (p)' fails
/root/vpp/build-data/../src/vppinfra/vec.h:134 (_vec_resize_inline) assertion 
`clib_mem_is_heap_object (p)' fails
/root/vpp/build-data/../src/vppinfra/vec.h:134 (_vec_resize_inline) assertion 
`clib_mem_is_heap_object (p)' fails
...
/root/vpp/build-data/../src/vppinfra/vec.h:134 (_vec_resize_inline) assertion 
`clib_mem_is_heap_object (p)' fails
/root/vpp/build-data/../src/vppinfra/vec.h:134 (_vec_resize_inline) assertion 
`clib_mem_is_heap_object (p)' fails
/root/vpp/build-data/../src/vppinfra/vec.h:134 (_vec_resize_inline) assertion 
`clib_mem_is_heap_object (p)' fails
/root/vpp/build-data/../src/vppinfra/vec.h:134 (_vec_resize_inline) assertion 
`clib_mem_is_heap_object (p)' fails
/bin/sh: line 1: 10448 Segmentation fault      sudo 
/root/vpp/build-root/install-vpp_debug-native/vpp/bin/vpp " unix { interactive 
cli-listen /run/vpp/cli.sock gid 0  }   "
make: *** [run] Error 139
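For context: the "mutex held by dead pid ... force unlock" line followed by the heap-object assertion cascade is a classic signature of stale shared-memory segments left behind by a previous vpp instance that crashed. A common remedy is to remove the stale segments before restarting. This is a sketch, assuming the default VPP shared-memory file names; adjust if startup.conf overrides them, and run as root on a real system:

```shell
# Sketch of a common cleanup for the failure above: a previous vpp run
# (pid 10351 here) died while holding the /global_vm mutex, and the stale
# shared-memory segments confuse the next debug run.  The file names below
# are the default VPP segment names; run as root on a real system.
rm -f /dev/shm/global_vm /dev/shm/vpe-api /dev/shm/db
echo "stale VPP shared-memory segments removed"
```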
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#11063): https://lists.fd.io/g/vpp-dev/message/11063
Mute This Topic: https://lists.fd.io/mt/27812693/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] hi

2018-10-31 Thread 沈先捷

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#11062): https://lists.fd.io/g/vpp-dev/message/11062
Mute This Topic: https://lists.fd.io/mt/27812346/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VPP and RCU?

2018-10-31 Thread Florin Coras


> On Oct 31, 2018, at 4:03 PM, Stephen Hemminger  
> wrote:
> 
> On Wed, 31 Oct 2018 00:24:36 -0700
> Florin Coras  wrote:
> 
>> No reader-writer locks are 100's of times slower.  In fact reader
>> write locks are slower than normal spin lock.
> 
> I guess you meant that in general, and I can see how for scenarios with
> multiple writers and readers performance can be bad. But from your original
> message I assumed you’re mostly interested in concurrent read performance
> with few writes. For such scenarios I would expect our current, simple, spin
> and rw lock implementations to not be that bad. If that’s provably not the
> case, we should definitely consider doing RCU.
> 
> Also worth noting that a common pattern in vpp is to have per thread data
> structures and then entirely avoid locking. For lookups we typically use the
> bihash and that is thread safe.
>>> When you say 'per thread data structures', does it mean the data structures
>>> will be duplicated for each data plane thread?
>> 
>> No, we don’t duplicate the data. Instead, we rely on RSS hashing to pin
>> flows to workers and then build per worker state.
>> 
>> For scenarios when that doesn’t work, we handoff flows between workers. 
> 
> Ok, the tradeoff is that having a single worker is a bottleneck, and if a
> packet arrives on one core and then is processed on another core there is a
> cache miss.

Handoffs are not the common case. That is, for things like l2, ip, tunneling 
protocols, and connection-oriented transports like tcp, as long as the interface 
supports RSS hashing, flows are uniformly distributed (statistically) between 
the workers. So, from a pure forwarding perspective, we do get horizontal 
scaling because we don’t need any inter-worker locking. 

On the other hand, if the forwarding state needs to be updated frequently by 
either a “control plane” or an “in-band” protocol, then you’re right to point 
out that in some cases RCU is probably superior to what we currently use. The 
bihash and the APIs that are mp-safe (e.g., ip route addition) would be some 
counterexamples. 

> Per the original discussion, a reader/writer lock even uncontended requires
> an atomic increment, and that increment is a locked instruction and a cache
> miss with multiple readers.
> 
> RCU has zero overhead for readers.  The problem is pushed to the writer to
> deal with. It works fine for data structures like lists or trees that can be
> updated with a specific access pattern such that the reader always sees
> valid data.

Agreed with both points. 

> 
> The case I was thinking of is things like flow and routing tables.

For the former, you may be able to use the bihash since it uses a more 
sophisticated rw-lock scheme than the simple rw-locks from vppinfra. As for the 
latter, I see your point if the goal is to program the table from any worker. 

Florin


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#11061): https://lists.fd.io/g/vpp-dev/message/11061
Mute This Topic: https://lists.fd.io/mt/27785182/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] [tsc] Project Proposal for Sweetcomb

2018-10-31 Thread Ni, Hongjun

Welcome ZTE to join!

Thanks,
Hongjun

From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of Ni, Hongjun
Sent: Thursday, October 25, 2018 9:22 AM
To: Edward Warnicke 
Cc: t...@lists.fd.io; vpp-dev@lists.fd.io; Wang, Drenfong 
; ??? ; 
chen...@huachentel.com; lizhuo...@cmhi.chinamobile.com; ??? 
; zhijl@chinatelecom.cn; changlin...@nxp.com; Wang 
Tianyi ; davidfgao(?? ; 
lixin...@huachentel.com; jingqing@alibaba-inc.com
Subject: Re: [vpp-dev] [tsc] Project Proposal for Sweetcomb

Hi all,

Some guys are asking for the original code in private.
Here is our answer:


We are working on reworking the original code, and doing internal legal review.

When it is done, we will submit the code to the FD.io community for IPR review 
in one or two weeks.

Thanks a lot,
Hongjun

From: vpp-dev@lists.fd.io 
[mailto:vpp-dev@lists.fd.io] On Behalf Of Edward Warnicke
Sent: Tuesday, October 23, 2018 9:47 PM
To: Ni, Hongjun mailto:hongjun...@intel.com>>
Cc: t...@lists.fd.io; 
vpp-dev@lists.fd.io; Wang, Drenfong 
mailto:drenfong.w...@intel.com>>; ??? 
mailto:wangchuan...@huachentel.com>>; 
chen...@huachentel.com; 
lizhuo...@cmhi.chinamobile.com; ??? 
mailto:lihf...@chinaunicom.cn>>; 
zhijl@chinatelecom.cn; 
changlin...@nxp.com; Wang Tianyi 
mailto:tianyi.w...@tieto.com>>; davidfgao(?? 
mailto:davidf...@tencent.com>>; 
lixin...@huachentel.com; 
jingqing@alibaba-inc.com
Subject: Re: [vpp-dev] [tsc] Project Proposal for Sweetcomb

I look forward to it :)

Ed

On Tue, Oct 23, 2018 at 8:42 AM Ni, Hongjun 
mailto:hongjun...@intel.com>> wrote:
Hi Ed,

OK. I or some project proposer will join the Nov 8 8am PT TSC meeting and 
present the proposal.

Thanks a lot,
Hongjun

From: t...@lists.fd.io 
[mailto:t...@lists.fd.io] On Behalf Of Edward Warnicke
Sent: Tuesday, October 23, 2018 9:15 PM
To: Ni, Hongjun mailto:hongjun...@intel.com>>
Cc: t...@lists.fd.io; 
vpp-dev@lists.fd.io; Wang, Drenfong 
mailto:drenfong.w...@intel.com>>; ??? 
mailto:wangchuan...@huachentel.com>>; 
chen...@huachentel.com; 
lizhuo...@cmhi.chinamobile.com; ??? 
mailto:lihf...@chinaunicom.cn>>; 
zhijl@chinatelecom.cn; 
changlin...@nxp.com; Wang Tianyi 
mailto:tianyi.w...@tieto.com>>; davidfgao(?? 
mailto:davidf...@tencent.com>>; 
lixin...@huachentel.com; 
jingqing@alibaba-inc.com
Subject: Re: [tsc] Project Proposal for Sweetcomb

Hongjun,

Thank you for the proposal :)  Per the FD.io technical charter, all proposals 
must be out for public review for two weeks prior to approval by the TSC.  I 
believe this makes Nov 8 the earliest TSC meeting where we could approve it.  
Would you like to schedule the project creation review for the Nov 8 8am PT 
TSC meeting?  Will you (or other proposers of the project) be able to make 
that time to present the proposal?

Ed

On Tue, Oct 23, 2018 at 7:01 AM Ni, Hongjun 
mailto:hongjun...@intel.com>> wrote:
Hello FD.io TSCs

Please accept this project proposal for Sweetcomb for consideration.
https://wiki.fd.io/view/Project_Proposals/Sweetcomb

This project has nine founding companies:
Intel, HuachenTel, China Mobile, China Unicom, China Telecom, NXP, Tieto, 
Tencent, Alibaba.

If possible, I would like to present this at the TSC meeting.

Thanks,
Hongjun

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#856): https://lists.fd.io/g/tsc/message/856
Mute This Topic: https://lists.fd.io/mt/27567539/464962
Group Owner: tsc+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/tsc/unsub  
[hagb...@gmail.com]
-=-=-=-=-=-=-=-=-=-=-=-
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#11060): https://lists.fd.io/g/vpp-dev/message/11060
Mute This Topic: https://lists.fd.io/mt/27568384/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VPP and RCU?

2018-10-31 Thread Stephen Hemminger
On Wed, 31 Oct 2018 00:24:36 -0700
Florin Coras  wrote:

>  No reader-writer locks are 100's of times slower.  In fact reader
>  write locks are slower than normal spin lock.
>    
> >>> 
> >>> I guess you meant that in general, and I can see how for scenarios with  
> >> multiple writers and readers performance can be bad. But from your original
> >> message I assumed you’re mostly interested in concurrent read performance
> >> with few writes. For such scenarios I would expect our current, simple, 
> >> spin
> >> and rw lock implementations to not be that bad. If that’s provably not the 
> >> case,
> >> we should definitely consider doing RCU.  
> >>> 
> >>> Also worth noting that a common pattern in vpp is to have per thread data 
> >>>  
> >> structures and then entirely avoid locking. For lookups we typically use 
> >> the
> >> bihash and that is thread safe.  
> > When you say 'per thread data structures', does it mean the data structures 
> > will be duplicated for each data plane thread?  
> 
> No, we don’t duplicate the data. Instead, we rely on RSS hashing to pin flows 
> to workers and then build per worker state. 
> 
> For scenarios when that doesn’t work, we handoff flows between workers. 

Ok, the tradeoff is that having a single worker is a bottleneck, and if a 
packet arrives on one core and then is processed on another core there is a 
cache miss.

Per the original discussion, a reader/writer lock even uncontended requires an 
atomic increment, and that increment is a locked instruction and a cache miss 
with multiple readers.

RCU has zero overhead for readers.  The problem is pushed to the writer to 
deal with. It works fine for data structures like lists or trees that can be 
updated with a specific access pattern such that the reader always sees valid 
data.

The case I was thinking of is things like flow and routing tables.
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#11059): https://lists.fd.io/g/vpp-dev/message/11059
Mute This Topic: https://lists.fd.io/mt/27785182/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Regarding buffer chains with vlib_buffer_t

2018-10-31 Thread Damjan Marion via Lists.Fd.Io


> On 31 Oct 2018, at 19:42, Prashant Upadhyaya  wrote:
> 
> Hi,
> 
> I have two buffer chains whose starting vlib_buffer_t's are --
> vlib_buffer_t* chainHead1;  (let's call this chain1)
> vlib_buffer_t* chainHead2; (let's call this chain2)
> The chain1, chain2 may have one or more buffers each.
> 
> Is there any convenience function which connects the last buffer of
> first chain1 to the first buffer of chain2, so that the entire bigger
> chain can be accessed via chainHead1 as the starting point.
> 
> So I need something like this --
> void vlib_buffer_cat(vlib_buffer_t* chain1, vlib_buffer_t* chain2)
> 
> I suppose I will have to chase the last buffer of chain1 and then
> connect it to the first of chain2 and then modify the chain1 first
> buffer contents suitably for the length, flags etc. not to forget the
> possible modifications in the first buffer of chain2.
> 
> If someone has this already, that will save me some rookie mistakes
> and hours of debugging when it goofs up my packet processing at my
> business logic level :)
> 

Should be something like:

void
vlib_buffer_join (vlib_main_t * vm, vlib_buffer_t * c1, vlib_buffer_t * c2)
{
  vlib_buffer_t *c1t = c1;

  /* find c1 tail */
  while (c1t->flags & VLIB_BUFFER_NEXT_PRESENT)
    c1t = vlib_get_buffer (vm, c1t->next_buffer);

  /* link c2 after the tail of c1 */
  c1t->flags |= VLIB_BUFFER_NEXT_PRESENT;
  c1t->next_buffer = vlib_get_buffer_index (vm, c2);

  if (PREDICT_TRUE (c2->flags & VLIB_BUFFER_TOTAL_LENGTH_VALID))
    c1->total_length_not_including_first_buffer +=
      c2->total_length_not_including_first_buffer + c2->current_length;
  else
    vlib_buffer_length_in_chain_slow_path (vm, c1);
}

Not tested, hope it will not cause hours of debugging...


-- 
Damjan


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#11058): https://lists.fd.io/g/vpp-dev/message/11058
Mute This Topic: https://lists.fd.io/mt/27808613/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] Regarding buffer chains with vlib_buffer_t

2018-10-31 Thread Prashant Upadhyaya
Hi,

I have two buffer chains whose starting vlib_buffer_t's are --
vlib_buffer_t* chainHead1;  (let's call this chain1)
vlib_buffer_t* chainHead2; (let's call this chain2)
The chain1, chain2 may have one or more buffers each.

Is there any convenience function which connects the last buffer of
first chain1 to the first buffer of chain2, so that the entire bigger
chain can be accessed via chainHead1 as the starting point.

So I need something like this --
void vlib_buffer_cat(vlib_buffer_t* chain1, vlib_buffer_t* chain2)

I suppose I will have to chase the last buffer of chain1 and then
connect it to the first of chain2 and then modify the chain1 first
buffer contents suitably for the length, flags etc. not to forget the
possible modifications in the first buffer of chain2.

If someone has this already, that will save me some rookie mistakes
and hours of debugging when it goofs up my packet processing at my
business logic level :)


Regards
-Prashant
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#11057): https://lists.fd.io/g/vpp-dev/message/11057
Mute This Topic: https://lists.fd.io/mt/27808613/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] TAP v1/v2 API questions

2018-10-31 Thread Coulson, Ken
Per emails on this list, tap v1 is being deprecated.
tap v1 has create/modify/delete; tap v2 is different in that it only has 
create/delete.

Some questions:
Is the intent to create a tap v2 modify API and/or to add capabilities to the 
set interface API?
The tap APIs presently have no MTU setting.  Is there any plan to add that 
capability?
Is there any plan to use DPDK tap PMD?

Ken Coulson

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#11056): https://lists.fd.io/g/vpp-dev/message/11056
Mute This Topic: https://lists.fd.io/mt/27808322/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] auto-abandon

2018-10-31 Thread Jim Thompson

It would seem to make it difficult to find a half-implemented feature.  “Has 
anyone tried this before?”

Case in point: L3SPAN (#9336), which seems abandoned, but is of interest.

Jim

> On Oct 26, 2018, at 3:01 PM, Damjan Marion via Lists.Fd.Io 
>  wrote:
> 
> 
> Folks,
> 
> Gerrit has this nice feature, and our list of open changes is growing:
> 
> https://gerrit-review.googlesource.com/Documentation/user-change-cleanup.html 
> 
> 
> Should we enable that? IMO everything without activity in the last 3 months 
> should be abandoned.
> 
> Thoughts? 
> 
> -- 
> Damjan
> 
> -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
> 
> View/Reply Online (#11004): https://lists.fd.io/g/vpp-dev/message/11004
> Mute This Topic: https://lists.fd.io/mt/27743376/675164
> Group Owner: vpp-dev+ow...@lists.fd.io
> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [j...@netgate.com]
> -=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#11055): https://lists.fd.io/g/vpp-dev/message/11055
Mute This Topic: https://lists.fd.io/mt/27743376/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] auto-abandon

2018-10-31 Thread Vratko Polak -X (vrpolak - PANTHEON TECHNOLOGIES at Cisco) via Lists.Fd.Io
> https://gerrit-review.googlesource.com/Documentation/user-change-cleanup.html

Pros are worth it.

Contributors can still Restore (and rebase) manually if needed,
when they receive an e-mail about the auto-abandon hitting their Change.

> Should we enable that?

+1.

Vratko.

From: vpp-dev@lists.fd.io  On Behalf Of Sirshak Das
Sent: Friday, 2018-October-26 22:12
To: dmar...@me.com; vpp-dev 
Subject: Re: [vpp-dev] auto-abandon

+1

From: vpp-dev@lists.fd.io 
mailto:vpp-dev@lists.fd.io>> On Behalf Of Damjan Marion 
via Lists.Fd.Io
Sent: Friday, October 26, 2018 3:01 PM
To: vpp-dev mailto:vpp-dev@lists.fd.io>>
Cc: vpp-dev@lists.fd.io
Subject: [vpp-dev] auto-abandon


Folks,

Gerrit has this nice feature, and our list of open changes is growing:

https://gerrit-review.googlesource.com/Documentation/user-change-cleanup.html

Should we enable that? IMO everything without activity in the last 3 months 
should be abandoned.

Thoughts?

--
Damjan

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#11054): https://lists.fd.io/g/vpp-dev/message/11054
Mute This Topic: https://lists.fd.io/mt/27743376/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] FD.io CSIT Project weekly meeting - UTC GMT time change, UK and EU no change

2018-10-31 Thread Maciek Konstantynowicz (mkonstan) via Lists.Fd.Io
Hi,

Just wanted to make everyone aware that today’s call is taking 
place at 15:00-16:00 UTC, as the UK and Europe moved away from Daylight 
Saving Time (BST and CEST respectively). Looking through the window, 
summer is definitely over, so it's time to change our regular habits :)

Updated details are in the usual place, on FD.io wiki:

https://wiki.fd.io/view/CSIT/Meeting

It is the LFN FD.io wiki that is used to coordinate FD.io sub-projects'
events, but as it’s neither automated nor intelligent, it doesn’t know
that human time changed in some earthly TZs but didn’t in others.

And as a critical mass of committers lives in England and Europe,
it’s logical and functional that the meeting time
follows those geo locations.

Cheers,
-Maciek

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#11053): https://lists.fd.io/g/vpp-dev/message/11053
Mute This Topic: https://lists.fd.io/mt/27805352/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev]ping local address

2018-10-31 Thread saint_sun 孙 via Lists.Fd.Io
OK, it is better, I will try, thanks!



saint_...@aliyun.com
 
From: Neale Ranns (nranns)
Date: 2018-10-31 17:01
To: saint_...@aliyun.com
CC: vpp-dev
Subject: Re: [vpp-dev]ping local address
Hi Saint,
 
With this change an attacker could send a packet with the source and 
destination both set to one of VPP’s own addresses. If you extend this new 
sub-condition to only accept locally generated packets, then we should be good 
(b->flags & VNET_BUFFER_F_LOCALLY_ORIGINATED).
 
Regards,
neale
 
De : "saint_...@aliyun.com" 
Date : mercredi 31 octobre 2018 à 08:49
À : "Neale Ranns (nranns)" 
Cc : vpp-dev 
Objet : Re: Re: [vpp-dev]ping local address
 
Hello Neale,
I found and modified a piece of code in ip4_forward.c, and now it is 
able to ping a local address, as follows:
 
I think the source check should only discard a packet which comes from an 
attacker (with a forged source address) and wants to attack another host, so I 
changed the judgement conditions. 
Can you help me check whether it is right or wrong?


The attachment is the modified file.


saint_...@aliyun.com
 
From: Neale Ranns (nranns)
Date: 2018-10-25 15:55
To: saint_...@aliyun.com; vpp-dev
Subject: Re: [vpp-dev]ping local address
 
It’s a known limitation. Contributions to fix it would be welcome.
 
/neale
 
 
De :  au nom de "saint_sun 孙 via Lists.Fd.Io" 

Répondre à : "saint_...@aliyun.com" 
Date : jeudi 25 octobre 2018 à 09:40
À : vpp-dev 
Cc : "vpp-dev@lists.fd.io" 
Objet : [vpp-dev]ping local address
 
Hello all:
A basic feature: pinging myself. When I configure an IP address for an 
interface and then ping the address from VPP, it fails. Why? Should I do any 
other settings?
 
DBGvpp# ping 10.0.0.1   

Aborted due to a keypress.
 
Statistics: 1 sent, 0 received, 100% packet loss
 
 
DBGvpp# show ip fib 
ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto ] 
locks:[src:default-route:1, ]
0.0.0.0/0
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:1 buckets:1 uRPF:0 to:[0:0]]
[0] [@0]: dpo-drop ip4
0.0.0.0/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:2 buckets:1 uRPF:1 to:[0:0]]
[0] [@0]: dpo-drop ip4
10.0.0.0/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:17 buckets:1 uRPF:21 to:[0:0]]
[0] [@0]: dpo-drop ip4
10.0.0.0/24
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:16 buckets:1 uRPF:27 to:[0:0]]
[0] [@4]: ipv4-glean: line1: mtu:9000 000e5e513c380806
10.0.0.1/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:19 buckets:1 uRPF:25 to:[0:0]]
[0] [@2]: dpo-receive: 10.0.0.1 on line1
10.0.0.255/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:18 buckets:1 uRPF:23 to:[0:0]]
[0] [@0]: dpo-drop ip4
224.0.0.0/4
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:4 buckets:1 uRPF:3 to:[0:0]]
[0] [@0]: dpo-drop ip4
240.0.0.0/4
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:3 buckets:1 uRPF:2 to:[0:0]]
[0] [@0]: dpo-drop ip4
255.255.255.255/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:5 buckets:1 uRPF:4 to:[0:0]]
[0] [@0]: dpo-drop ip4
 
 
 


saint_...@aliyun.com
 
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#11051): https://lists.fd.io/g/vpp-dev/message/11051
Mute This Topic: https://lists.fd.io/mt/27630267/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev]ping local address

2018-10-31 Thread Neale Ranns via Lists.Fd.Io
Hi Saint,

With this change an attacker could send a packet with the source and 
destination both set to one of VPP’s own addresses. If you extend this new 
sub-condition to only accept locally generated packets, then we should be good 
(b->flags & VNET_BUFFER_F_LOCALLY_ORIGINATED).

Regards,
neale

De : "saint_...@aliyun.com" 
Date : mercredi 31 octobre 2018 à 08:49
À : "Neale Ranns (nranns)" 
Cc : vpp-dev 
Objet : Re: Re: [vpp-dev]ping local address

Hello Neale,
I found and modified a piece of code in ip4_forward.c, and now it is 
able to ping a local address, as follows:

I think the source check should only discard a packet which comes from an 
attacker (with a forged source address) and wants to attack another host, so I 
changed the judgement conditions.
Can you help me check whether it is right or wrong?


The attachment is the modified file.

saint_...@aliyun.com

From: Neale Ranns (nranns)
Date: 2018-10-25 15:55
To: saint_...@aliyun.com; 
vpp-dev
Subject: Re: [vpp-dev]ping local address

It’s a known limitation. Contributions to fix it would be welcome.

/neale


De :  au nom de "saint_sun 孙 via Lists.Fd.Io" 

Répondre à : "saint_...@aliyun.com" 
Date : jeudi 25 octobre 2018 à 09:40
À : vpp-dev 
Cc : "vpp-dev@lists.fd.io" 
Objet : [vpp-dev]ping local address

Hello all:
A basic feature: pinging myself. When I configure an IP address for an 
interface and then ping the address from VPP, it fails. Why? Should I do any 
other settings?

DBGvpp# ping 10.0.0.1
Aborted due to a keypress.

Statistics: 1 sent, 0 received, 100% packet loss


DBGvpp# show ip fib
ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto ] 
locks:[src:default-route:1, ]
0.0.0.0/0
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:1 buckets:1 uRPF:0 to:[0:0]]
[0] [@0]: dpo-drop ip4
0.0.0.0/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:2 buckets:1 uRPF:1 to:[0:0]]
[0] [@0]: dpo-drop ip4
10.0.0.0/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:17 buckets:1 uRPF:21 to:[0:0]]
[0] [@0]: dpo-drop ip4
10.0.0.0/24
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:16 buckets:1 uRPF:27 to:[0:0]]
[0] [@4]: ipv4-glean: line1: mtu:9000 000e5e513c380806
10.0.0.1/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:19 buckets:1 uRPF:25 to:[0:0]]
[0] [@2]: dpo-receive: 10.0.0.1 on line1
10.0.0.255/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:18 buckets:1 uRPF:23 to:[0:0]]
[0] [@0]: dpo-drop ip4
224.0.0.0/4
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:4 buckets:1 uRPF:3 to:[0:0]]
[0] [@0]: dpo-drop ip4
240.0.0.0/4
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:3 buckets:1 uRPF:2 to:[0:0]]
[0] [@0]: dpo-drop ip4
255.255.255.255/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:5 buckets:1 uRPF:4 to:[0:0]]
[0] [@0]: dpo-drop ip4




saint_...@aliyun.com

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#11050): https://lists.fd.io/g/vpp-dev/message/11050
Mute This Topic: https://lists.fd.io/mt/27630267/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] 18.10 release artifacts

2018-10-31 Thread Marco Varlese
I am going to check with Vanessa...

On Tue, 2018-10-30 at 21:32 +, Michal Cmarada via Lists.Fd.Io
wrote:
jvpp-ioampot
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#11049): https://lists.fd.io/g/vpp-dev/message/11049
Mute This Topic: https://lists.fd.io/mt/27799255/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VPP and RCU?

2018-10-31 Thread Florin Coras


> On Oct 30, 2018, at 9:02 PM, Honnappa Nagarahalli 
>  wrote:
> 
> 
> 
 
> Hi Stephen,
> 
> No, we don’t support RCU. Wouldn’t rw-locks be enough to support your
>> usecases?
> 
> Florin
> 
>> On Oct 29, 2018, at 12:40 PM, Stephen Hemminger
>>  wrote:
>> 
>> Is it possible to do Read Copy Update with VPP? Either using
>> Userspace RCU (https://librcu.org) or manually. RCU is very
>> efficient way to handle read mostly tables and other dynamic cases such
>> as plugins.
>> 
>> The several things that are needed are non-preempt, atomic update
>> of a pointer and a mechanism to be sure all active threads have
>> gone through a quiescent period. I don't think VPP will preempt
>> one node for another so that is done. The atomic update is
>> relatively easy with basic barriers, either from FD.IO, DPDK, or native
>> compiler operations. But is there an API to have a quiescent period marker in
>> the main VPP vector scheduler?
>> 
>> Something like the QSBR model of Userspace RCU library.
>> (each thread calls rcu_queiscent_state() periodically) would be
>> ideal.
>> 
>> -=-=-=-=-=-=-=-=-=-=-=-
>> Links: You receive all messages sent to this group.
>> 
>> View/Reply Online (#11023):
>> https://lists.fd.io/g/vpp-dev/message/11023
>> Mute This Topic: https://lists.fd.io/mt/27785182/675152
>> Group Owner: vpp-dev+ow...@lists.fd.io
>> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub
>> [fcoras.li...@gmail.com]
>> -=-=-=-=-=-=-=-=-=-=-=-
> 
 
>>>> No reader-writer locks are 100's of times slower.  In fact reader
>>>> write locks are slower than normal spin lock.
>>>> 
>>> 
>>> I guess you meant that in general, and I can see how for scenarios with
>>> multiple writers and readers performance can be bad. But from your original
>>> message I assumed you’re mostly interested in concurrent read performance
>>> with few writes. For such scenarios I would expect our current, simple, spin
>>> and rw lock implementations to not be that bad. If that’s provably not the
>>> case, we should definitely consider doing RCU.
>>> 
>>> Also worth noting that a common pattern in vpp is to have per thread data
>>> structures and then entirely avoid locking. For lookups we typically use the
>>> bihash and that is thread safe.
> When you say 'per thread data structures', does it mean the data structures 
> will be duplicated for each data plane thread?

No, we don’t duplicate the data. Instead, we rely on RSS hashing to pin flows 
to workers and then build per worker state. 

For scenarios when that doesn’t work, we handoff flows between workers. 

Florin   

> 
>>> 
>>> Florin
>>> 
>>> 
>> 
>> https://www.researchgate.net/publication/247337469_RCU_vs_Locking_Performance_on_Different_CPUs
>> 
>> https://lwn.net/Articles/263130/ 
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#11047): https://lists.fd.io/g/vpp-dev/message/11047
Mute This Topic: https://lists.fd.io/mt/27785182/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-