Re: [vpp-dev] 17.07 Release

2017-08-07 Thread Алексей Болдырев
Please tell me, how many MPLS labels on the label stack does VPP support?

07.08.2017, 07:25, "Kinsella, Ray" :
> Thanks !
>
> On 31/07/2017 13:51, Neale Ranns (nranns) wrote:
>>  Hi Chris,
>>
>>  Thanks for fixing it!
>>  Release notes now available at:
>>    https://docs.fd.io/vpp/17.07/release_notes_1707.html
>>
>>  regards,
>>  neale
>>
>>  -Original Message-
>>  From: "Luke, Chris" 
>>  Date: Monday, 31 July 2017 at 18:01
>>  To: "Neale Ranns (nranns)" , "Kinsella, Ray" 
>> , "vpp-dev@lists.fd.io" 
>>  Subject: Re: [vpp-dev] 17.07 Release
>>
>>  The next merge job to run on each branch will trigger it; or at least I 
>> hope it does.
>>
>>  I’ll do a ‘remerge’ on the current HEAD commit to see if that will do 
>> the needful
>>
>>  Chris.
>>
>>  On 7/31/17, 10:34, "vpp-dev-boun...@lists.fd.io on behalf of Neale 
>> Ranns (nranns)"  
>> wrote:
>>
>>  Hi Ray,
>>
>>  The release notes will appear here eventually:
>>   https://docs.fd.io/vpp/17.07/release_notes.html
>>
>>  There was a breakage in the generation of the docs, which Chris fixed,
>> and the fix was recently merged:
>>    https://gerrit.fd.io/r/#/c/7818/
>>
>>  Hopefully the docs will be updated when the next patch is merged.
>>
>>  Regards,
>>  neale
>>
>>  -Original Message-
>>  From:  on behalf of "Kinsella, Ray" 
>> 
>>  Date: Monday, 31 July 2017 at 15:42
>>  To: "vpp-dev@lists.fd.io" 
>>  Subject: Re: [vpp-dev] 17.07 Release
>>
>>  Hi Neale,
>>
>>  Thanks for this - great work.
>>  Are there release notes archived anywhere?
>>
>>  Ray K
>>
>>  On 20/07/2017 16:56, Neale Ranns (nranns) wrote:
>>  >
>>  > Dear VPP community,
>>  >
>>  > The VPP 17.07 release is complete. The release artefacts are 
>> now available on the nexus server.
>>  >
>>  > I’d like to take this opportunity to thank you all for your 
>> continued support for VPP.
>>  >
>>  > Best regards,
>>  > Neale
>>  >
>>  >
>>  > ___
>>  > vpp-dev mailing list
>>  > vpp-dev@lists.fd.io
>>  > https://lists.fd.io/mailman/listinfo/vpp-dev
>>  >
>>  ___
>>  vpp-dev mailing list
>>  vpp-dev@lists.fd.io
>>  https://lists.fd.io/mailman/listinfo/vpp-dev
>>
>>  ___
>>  vpp-dev mailing list
>>  vpp-dev@lists.fd.io
>>  https://lists.fd.io/mailman/listinfo/vpp-dev
>
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] 17.07 Release

2017-08-07 Thread Neale Ranns (nranns)

In general, there is no limit to the number of MPLS labels. If you can be more 
specific about what you are referring to (e.g. how many labels can be pushed 
per LSP, or how many MPLS lookups/pops per packet), then I can give you a more 
definitive answer.
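For illustration, labels are imposed per-route/per-path via the FIB CLI, along 
these lines (a sketch only: the interface name, next-hop and label values are 
hypothetical, and the exact keyword, out-label vs out-labels, varies by release):

    ip route add 1.1.1.1/32 via 10.10.10.2 GigabitEthernet0/8/0 out-labels 100 200 300
    set interface mpls GigabitEthernet0/8/0 enable

The first line pushes a three-label stack on that path; the second enables MPLS 
lookup (swap/pop) for labelled packets received on that interface.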

/neale

-Original Message-
From: Алексей Болдырев 
Date: Monday, 7 August 2017 at 10:30
To: "Kinsella, Ray" , "Neale Ranns (nranns)" 
, "Luke, Chris" , 
"vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] 17.07 Release

Please tell me, how many MPLS labels on the label stack does VPP support?

07.08.2017, 07:25, "Kinsella, Ray" :
> Thanks !
>
> On 31/07/2017 13:51, Neale Ranns (nranns) wrote:
>>  Hi Chris,
>>
>>  Thanks for fixing it!
>>  Release notes now available at:
>>https://docs.fd.io/vpp/17.07/release_notes_1707.html
>>
>>  regards,
>>  neale
>>
>>  -Original Message-
>>  From: "Luke, Chris" 
>>  Date: Monday, 31 July 2017 at 18:01
>>  To: "Neale Ranns (nranns)" , "Kinsella, Ray" 
, "vpp-dev@lists.fd.io" 
>>  Subject: Re: [vpp-dev] 17.07 Release
>>
>>  The next merge job to run on each branch will trigger it; or at 
least I hope it does.
>>
>>  I’ll do a ‘remerge’ on the current HEAD commit to see if that will 
do the needful
>>
>>  Chris.
>>
>>  On 7/31/17, 10:34, "vpp-dev-boun...@lists.fd.io on behalf of Neale 
Ranns (nranns)"  
wrote:
>>
>>  Hi Ray,
>>
>>  The release notes will appear here eventually:
>>   https://docs.fd.io/vpp/17.07/release_notes.html
>>
>>  There was a breakage in the generation of the docs, which Chris fixed,
>>  and the fix was recently merged:
>>https://gerrit.fd.io/r/#/c/7818/
>>
>>  Hopefully the docs will be updated when the next patch is merged.
>>
>>  Regards,
>>  neale
>>
>>  -Original Message-
>>  From:  on behalf of "Kinsella, 
Ray" 
>>  Date: Monday, 31 July 2017 at 15:42
>>  To: "vpp-dev@lists.fd.io" 
>>  Subject: Re: [vpp-dev] 17.07 Release
>>
>>  Hi Neale,
>>
>>  Thanks for this - great work.
>>  Are there release notes archived anywhere?
>>
>>  Ray K
>>
>>  On 20/07/2017 16:56, Neale Ranns (nranns) wrote:
>>  >
>>  > Dear VPP community,
>>  >
>>  > The VPP 17.07 release is complete. The release artefacts 
are now available on the nexus server.
>>  >
>>  > I’d like to take this opportunity to thank you all for 
your continued support for VPP.
>>  >
>>  > Best regards,
>>  > Neale
>>  >
>>  >
>>  > ___
>>  > vpp-dev mailing list
>>  > vpp-dev@lists.fd.io
>>  > https://lists.fd.io/mailman/listinfo/vpp-dev
>>  >
>>  ___
>>  vpp-dev mailing list
>>  vpp-dev@lists.fd.io
>>  https://lists.fd.io/mailman/listinfo/vpp-dev
>>
>>  ___
>>  vpp-dev mailing list
>>  vpp-dev@lists.fd.io
>>  https://lists.fd.io/mailman/listinfo/vpp-dev
>
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] test command

2017-08-07 Thread 薛欣颖

Hi,

Can these two commands (test http server, test tcp server) be configured 
together?
When I configure the two commands at the same time, a "multiple registrations" 
error is shown below:

DBGvpp# test tcp server
DBGvpp# test http server
0: vl_msg_api_config:682: BUG: multiple registrations of 
'vl_api_memclnt_create_reply_t_handler'

What should I do to solve it?

Thanks,
Xyxue


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] 回复: vpp cpu usage utility

2017-08-07 Thread SAKTHIVEL ANAND S
Hi Dave, thanks for your inputs.
I have been experimenting based on them: when I push < 7 Gbps I don't see
any packet losses on the DPDK side and the vectors/call is still ~1.1. As you
mentioned, < 2.0 means there is a lot of room left.
I then increased the traffic to a bit rate of about 8 to 9 Gbps and started
seeing DPDK packet drops ("rx missed" in show hardware as well as "drops" in
show interfaces).

Can you tell me what I have missed here and why DPDK is dropping, even though
the system has plenty of CPU headroom? Also, please help me understand why
vectors/call is not increasing.

Here is the full output:
root@ubuntu:~# vppctl sh ha
              Name                Idx   Link  Hardware
TenGigabitEthernet3/0/05 up   TenGigabitEthernet3/0/0
  Ethernet address 14:02:ec:73:ea:d0
  Intel X710/XL710 Family
carrier up full duplex speed 1 mtu 9216

tx frames ok 159
tx bytes ok 9540
rx frames ok  3429247504
rx bytes ok4501655515148
rx missed   22942665
rx multicast frames ok   193
extended stats:
  rx good packets 3429247703
  tx good packets159
  rx good bytes4501655766820
  tx good bytes 9540
  rx unicast packets  3452190013
  rx multicast packets   193
  rx broadcast packets 3
  rx unknown protocol packets 3452190211
  tx unicast packets 159
  rx size 64 packets 161
  rx size 128 to 255 packets 193
  rx size 1024 to 1522 packets3452189888
  tx size 64 packets 159
TenGigabitEthernet3/0/16 up   TenGigabitEthernet3/0/1
  Ethernet address 14:02:ec:73:ea:d8
  Intel X710/XL710 Family
carrier up full duplex speed 1 mtu 9216

tx frames ok  3389859164
tx bytes ok4298341418744
rx frames ok 194
rx bytes ok36344
rx multicast frames ok   193
extended stats:
  rx good packets194
  tx good packets 3389859164
  rx good bytes36344
  tx good bytes4298341418744
  rx unicast packets   1
  rx multicast packets   193
  rx unknown protocol packets194
  tx unicast packets  3389859163
  tx broadcast packets 1
  rx size 64 packets   1
  rx size 128 to 255 packets 193
  tx size 64 packets   1
  tx size 1024 to 1522 packets3389859163
local0 0down  local0
  local
pg/stream-01down  pg/stream-0
  Packet generator
pg/stream-12down  pg/stream-1
  Packet generator
pg/stream-23down  pg/stream-2
  Packet generator
pg/stream-34down  pg/stream-3
  Packet generator
root@ubuntu:~# vppctl sh int
              Name               Idx   State  Counter          Count
TenGigabitEthernet3/0/0           5     up    rx packets       3430406350
                                              rx bytes         4473249467216
                                              tx packets       159
                                              tx bytes         9540
                                              drops            36931507
                                              punts            193
                                              ip4              3430405997
                                              rx-miss          22942665
TenGigabitEthernet3/0/1           6     up    rx packets       194
                                              rx bytes         36344
                                              tx packets       3393474491
                                              tx bytes         4302925653362
                                              drops            1
                                              punts            193
                                              tx-error         2460933
local0                            0    down
pg/stream-0                       1    down
pg/stream-1                       2    down
pg/stream-2                       3    down
pg/stream-3                       4    down
root@ubun

Re: [vpp-dev] query on L2 ACL for VLANs

2017-08-07 Thread Balaji Kn
Hi John,

The ACL feature is working after setting an IP address on the sub-interface.
Thanks for the help!
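For anyone hitting the same problem, the full working sequence now looks
roughly like this (the sub-interface address 172.27.30.5/24 is just an example
value picked for illustration; everything else is from the thread above):

set int ip address TenGigabitEthernet1/0/0 172.27.28.5/24
set interface state TenGigabitEthernet1/0/0 up
create sub-interfaces TenGigabitEthernet1/0/0 100
set interface state TenGigabitEthernet1/0/0.100 up
set int ip address TenGigabitEthernet1/0/0.100 172.27.30.5/24
classify table mask l2 tag1
classify session acl-hit-next deny opaque-index 0 table-index 0 match l2 tag1 100
set int input acl intfc TenGigabitEthernet1/0/0.100 ip4-table 0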

Regards,
Balaji

On Fri, Aug 4, 2017 at 10:24 PM, John Lo (loj)  wrote:

> Hi Balaji,
>
>
>
> I think the problem is that you did not configure an IP address on the
> sub-interface. Thus, IP4 forwarding is not enabled.   You can check state
> of various forwarding features on an interface or sub-interface using the
> command:
>
>   show int feat TenGigabitEthernet1/0/0.100
>
>
>
> If an interface does not have IP4 address configured, you will see the
> ip4-unicast feature listed as ip4-drop:
>
>ip4-unicast:
>
> ip4-drop
>
>
>
> Regards,
>
> John
>
>
>
> *From:* Balaji Kn [mailto:balaji.s...@gmail.com]
> *Sent:* Friday, August 04, 2017 7:28 AM
> *To:* John Lo (loj) 
> *Cc:* vpp-dev@lists.fd.io; l.s.abhil...@gmail.com
> *Subject:* Re: [vpp-dev] query on L2 ACL for VLANs
>
>
>
> Hi John,
>
>
>
> Thanks for the quick response.
>
> I tried, as you suggested, associating the input ACL with the IP-forwarding
> path for tagged packets. Ingress packets are not hitting the ACL node and are
> dropped. However, ACLs matching on src/dst IP, MAC address, and UDP port
> numbers work fine.
>
>
>
> *Following are the configuration steps followed.*
>
>
>
> set int ip address TenGigabitEthernet1/0/0 172.27.28.5/24
>
> set interface state  TenGigabitEthernet1/0/0 up
>
> set int ip address TenGigabitEthernet1/0/1 172.27.29.5/24
>
> set interface state  TenGigabitEthernet1/0/1 up
>
> create sub-interfaces TenGigabitEthernet1/0/0  100
>
> set interface state  TenGigabitEthernet1/0/0.100 up
>
>
>
> *ACL configuration*
>
> classify table mask l2 tag1
>
> classify session acl-hit-next deny opaque-index 0 table-index 0 match l2
> tag1 100
>
> set int input acl intfc TenGigabitEthernet1/0/0.100 *ip4-table* 0
>
>
>
> *Trace captured on VPP*
>
> 00:16:11:820587: dpdk-input
>
>   TenGigabitEthernet1/0/0 rx queue 0
>
>   buffer 0x4d40: current data 0, length 124, free-list 0, clone-count 0,
> totlen-nifb 0, trace 0x0
>
>   PKT MBUF: port 0, nb_segs 1, pkt_len 124
>
> buf_len 2176, data_len 124, ol_flags 0x180, data_off 128, phys_addr
> 0x6de35040
>
> packet_type 0x291
>
> Packet Offload Flags
>
>   PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
>
>   PKT_RX_L4_CKSUM_GOOD (0x0100) L4 cksum of RX pkt. is valid
>
> Packet Types
>
>   RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet
>
>   RTE_PTYPE_L3_IPV4_EXT_UNKNOWN (0x0090) IPv4 packet with or without
> extension headers
>
>   RTE_PTYPE_L4_UDP (0x0200) UDP packet
>
>   IP4: 00:10:94:00:00:01 -> 24:6e:96:32:7f:98 802.1q vlan 100
>
>   UDP: 172.27.28.6 -> 172.27.29.6
>
> tos 0x00, ttl 255, length 106, checksum 0x2a38
>
> fragment id 0x0008
>
>   UDP: 1024 -> 1024
>
> length 86, checksum 0x
>
> 00:16:11:820596: ethernet-input
>
>   IP4: 00:10:94:00:00:01 -> 24:6e:96:32:7f:98 802.1q vlan 100
>
> 00:16:11:820616: ip4-input
>
>   UDP: 172.27.28.6 -> 172.27.29.6
>
> tos 0x00, ttl 255, length 106, checksum 0x2a38
>
> fragment id 0x0008
>
>   UDP: 1024 -> 1024
>
> length 86, checksum 0x
>
> 00:16:11:820624: ip4-drop
>
> UDP: 172.27.28.6 -> 172.27.29.6
>
>   tos 0x00, ttl 255, length 106, checksum 0x2a38
>
>   fragment id 0x0008
>
> UDP: 1024 -> 1024
>
>   length 86, checksum 0x
>
> 00:16:11:820627: error-drop
>
>   ip4-input: ip4 adjacency drop
>
>
>
> I checked in the VPP code: the packet is dropped while searching the
> interface feature arc (looking for features enabled on the interface). I had
> assumed that associating the sub-interface with the ACL would enable the
> feature.
>
>
>
> Let me know if i missed anything.
>
>
>
> Regards,
>
> Balaji
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
> On Wed, Aug 2, 2017 at 9:26 PM, John Lo (loj)  wrote:
>
> Hi Balaji,
>
>
>
> In order to make input ACL work on the IPv4 forwarding path, you need to
> set it as ip4-table on the interface or sub-interface. For your case for
> packets with VLAN tags, it needs to be set on sub-interface:
>
> set int input acl intfc TenGigabitEthernet1/0/0.100 ip4-table 0
>
>
>
> The names in the CLI  [ip4-table|ip6-table|l2-table] indicate which
> forwarding path the ACL would be applied, not which packet header ACL will
> be matched. The match of the packet is specified with the table/session
> used in the ACL.
>
>
>
> Regards,
>
> John
>
>
>
> *From:* vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] *On
> Behalf Of *Balaji Kn
> *Sent:* Wednesday, August 02, 2017 9:41 AM
> *To:* vpp-dev@lists.fd.io
> *Cc:* l.s.abhil...@gmail.com
> *Subject:* [vpp-dev] query on L2 ACL for VLANs
>
>
>
> Hello,
>
>
>
> I am using VPP 17.07 release code (tag *v17.07*).
>
>
>
> DBGvpp# show int address
>
> TenGigabitEthernet1/0/0 (up):
>
>   172.27.28.5/24
>
> TenGigabitEthernet1/0/1 (up):
>
>   172.27.29.5/24
>
>
>
> My use case is to allow packets based on VLANs. I added an ACL rule in
> classify table as below.
>
>
>
> classify table mask l2 tag1
>
> classify session acl-hit-next permit opaque-inde

Re: [vpp-dev] query on L2 ACL for VLANs

2017-08-07 Thread Balaji Kn
Hi John,

Thanks for the quick response.
I tried, as you suggested, associating the input ACL with the IP-forwarding
path for tagged packets. Ingress packets are not hitting the ACL node and are
dropped. However, ACLs matching on src/dst IP, MAC address, and UDP port
numbers work fine.

*Following are the configuration steps followed.*

set int ip address TenGigabitEthernet1/0/0 172.27.28.5/24
set interface state  TenGigabitEthernet1/0/0 up
set int ip address TenGigabitEthernet1/0/1 172.27.29.5/24
set interface state  TenGigabitEthernet1/0/1 up
create sub-interfaces TenGigabitEthernet1/0/0  100
set interface state  TenGigabitEthernet1/0/0.100 up

*ACL configuration*
classify table mask l2 tag1
classify session acl-hit-next deny opaque-index 0 table-index 0 match l2
tag1 100
set int input acl intfc TenGigabitEthernet1/0/0.100 *ip4-table* 0

*Trace captured on VPP*
00:16:11:820587: dpdk-input
  TenGigabitEthernet1/0/0 rx queue 0
  buffer 0x4d40: current data 0, length 124, free-list 0, clone-count 0,
totlen-nifb 0, trace 0x0
  PKT MBUF: port 0, nb_segs 1, pkt_len 124
buf_len 2176, data_len 124, ol_flags 0x180, data_off 128, phys_addr
0x6de35040
packet_type 0x291
Packet Offload Flags
  PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
  PKT_RX_L4_CKSUM_GOOD (0x0100) L4 cksum of RX pkt. is valid
Packet Types
  RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet
  RTE_PTYPE_L3_IPV4_EXT_UNKNOWN (0x0090) IPv4 packet with or without
extension headers
  RTE_PTYPE_L4_UDP (0x0200) UDP packet
  IP4: 00:10:94:00:00:01 -> 24:6e:96:32:7f:98 802.1q vlan 100
  UDP: 172.27.28.6 -> 172.27.29.6
tos 0x00, ttl 255, length 106, checksum 0x2a38
fragment id 0x0008
  UDP: 1024 -> 1024
length 86, checksum 0x
00:16:11:820596: ethernet-input
  IP4: 00:10:94:00:00:01 -> 24:6e:96:32:7f:98 802.1q vlan 100
00:16:11:820616: ip4-input
  UDP: 172.27.28.6 -> 172.27.29.6
tos 0x00, ttl 255, length 106, checksum 0x2a38
fragment id 0x0008
  UDP: 1024 -> 1024
length 86, checksum 0x
00:16:11:820624: ip4-drop
UDP: 172.27.28.6 -> 172.27.29.6
  tos 0x00, ttl 255, length 106, checksum 0x2a38
  fragment id 0x0008
UDP: 1024 -> 1024
  length 86, checksum 0x
00:16:11:820627: error-drop
  ip4-input: ip4 adjacency drop

I checked in the VPP code: the packet is dropped while searching the
interface feature arc (looking for features enabled on the interface). I had
assumed that associating the sub-interface with the ACL would enable the
feature.

Let me know if i missed anything.

Regards,
Balaji









On Wed, Aug 2, 2017 at 9:26 PM, John Lo (loj)  wrote:

> Hi Balaji,
>
>
>
> In order to make input ACL work on the IPv4 forwarding path, you need to
> set it as ip4-table on the interface or sub-interface. For your case for
> packets with VLAN tags, it needs to be set on sub-interface:
>
> set int input acl intfc TenGigabitEthernet1/0/0.100 ip4-table 0
>
>
>
> The names in the CLI  [ip4-table|ip6-table|l2-table] indicate which
> forwarding path the ACL would be applied, not which packet header ACL will
> be matched. The match of the packet is specified with the table/session
> used in the ACL.
>
>
>
> Regards,
>
> John
>
>
>
> *From:* vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] *On
> Behalf Of *Balaji Kn
> *Sent:* Wednesday, August 02, 2017 9:41 AM
> *To:* vpp-dev@lists.fd.io
> *Cc:* l.s.abhil...@gmail.com
> *Subject:* [vpp-dev] query on L2 ACL for VLANs
>
>
>
> Hello,
>
>
>
> I am using VPP 17.07 release code (tag *v17.07*).
>
>
>
> DBGvpp# show int address
>
> TenGigabitEthernet1/0/0 (up):
>
>   172.27.28.5/24
>
> TenGigabitEthernet1/0/1 (up):
>
>   172.27.29.5/24
>
>
>
> My use case is to allow packets based on VLANs. I added an ACL rule in
> classify table as below.
>
>
>
> classify table mask l2 tag1
>
> classify session acl-hit-next permit opaque-index 0 table-index 0 match l2
> tag1 100
>
> set int input acl intfc TenGigabitEthernet1/0/0 l2-table 0
>
>
>
> Tagged packets were dropped in ethernet node.
>
>
>
> 00:08:39:270674: dpdk-input
>
>   TenGigabitEthernet1/0/0 rx queue 0
>
>   buffer 0x4d67: current data 0, length 124, free-list 0, clone-count 0,
> totlen-nifb 0, trace 0x1
>
>   PKT MBUF: port 0, nb_segs 1, pkt_len 124
>
> buf_len 2176, data_len 124, ol_flags 0x180, data_off 128, phys_addr
> 0x6de35a00
>
> packet_type 0x291
>
> Packet Offload Flags
>
>   PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
>
>   PKT_RX_L4_CKSUM_GOOD (0x0100) L4 cksum of RX pkt. is valid
>
> Packet Types
>
>   RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet
>
>   RTE_PTYPE_L3_IPV4_EXT_UNKNOWN (0x0090) IPv4 packet with or without
> extension headers
>
>   RTE_PTYPE_L4_UDP (0x0200) UDP packet
>
>   IP4: 00:10:94:00:00:01 -> 24:6e:96:32:7f:98 802.1q vlan 100
>
>   UDP: 172.27.28.6 -> 172.27.29.6
>
> tos 0x00, ttl 255, length 106, checksum 0x2a24
>
> fragment id 0x001c
>
>   UDP: 1024 -> 1024
>
> length 86, checksum 0x
>
> 00:08:39:270679: ethernet-input
>
>   IP4: 00:10:94:00

[vpp-dev] user space TCP stack

2017-08-07 Thread kheirabadi

​Hi,
I am trying to use mTCP in my application, but it is very primitive; for example,
it permits only one epoll instance and does not have an equivalent of the
shutdown API. Does VPP support a user-space TCP stack similar to mTCP? If so,
where is a sample application using it?
Thanks,
hamid

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] test command

2017-08-07 Thread Dave Barach (dbarach)
These are test codes. No warranty, express or implied. It would take a few 
minutes to unify the two API message handlers, but at some point in the near 
future, I’m going to clean up the API client registration nonsense involved.

As you might imagine, it’s easy to make direct calls from within vpp itself - 
instead of sending API messages...

Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of 薛欣颖
Sent: Monday, August 7, 2017 6:54 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] test command


Hi,

Can these two commands (test http server, test tcp server) be configured 
together?
When I configure the two commands at the same time, a "multiple registrations" 
error is shown below:

DBGvpp# test tcp server
DBGvpp# test http server
0: vl_msg_api_config:682: BUG: multiple registrations of 
'vl_api_memclnt_create_reply_t_handler'

What should I do to solve it?

Thanks,
Xyxue

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] MEMIF Throughput Problem

2017-08-07 Thread khers
Hi,
I am interested in connecting two instances of VPP via memif (one VPP is
running on the host and another VPP is running in an LXC container).
I have achieved the functionality goal with memif, but I have a problem with
the performance test.
I performed the following steps:
1. First of all, I installed LXC on my system.
2. Then I built and installed VPP in the container (I'll call it lxcvpp).
3. I installed VPP on my system (I'll call it hostvpp).
4. I created 2 memifs on hostvpp:
create memif socket /tmp/unix_socket.file1 master
create memif socket /tmp/unix_socket.file2 slave
5. I created 2 memifs on lxcvpp:
create memif socket /share/unix_socket.file1 slave
create memif socket /share/unix_socket.file2 master
6. I have two physical interfaces bound to hostvpp (I'll call them eth1 and
eth2). I bridged my input interface (eth1) with memif0 (bridge-domain 1) and
also bridged eth2 with memif1 (bridge-domain 2); see the sketch after this
list.
7. Moreover, I bridged memif0 and memif1 in lxcvpp.
8. I used TRex as the traffic generator. Traffic is transmitted from TRex to
hostvpp via eth1 and received back from the eth2 interface of hostvpp. The
packet flow of this scenario is shown below.

trex>eth1(hostvpp)>memif0(hostvpp)>memif0(lxcvpp)>memif1(lxcvpp)>memif1(hostvpp)>eth2(hostvpp)>trex
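For reference, the bridging in steps 6 and 7 was done with commands along
these lines (a sketch: the bridge-domain ids and the eth1/eth2 names follow
the naming above; the actual DPDK interface names on a real system will differ):

# on hostvpp
vppctl set interface l2 bridge eth1 1
vppctl set interface l2 bridge memif0 1
vppctl set interface l2 bridge eth2 2
vppctl set interface l2 bridge memif1 2
# on lxcvpp
vppctl set interface l2 bridge memif0 1
vppctl set interface l2 bridge memif1 1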

After running TRex, I got 4 Mpps with 64B packet size. Is this the maximum
throughput of memif in this scenario?
I expected much more than 4 Mpps. Is there any solution or tuning to obtain
more throughput? (I allocated one core to hostvpp and another core to lxcvpp.)

Cheers,
khers
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] MEMIF Throughput Problem

2017-08-07 Thread Damjan Marion
You are passing each packet twice through hostvpp, so effectively your
hostvpp performance is 8 Mpps per core.

There are several factors which can impact performance (CPU speed, NUMA
locality, memory channel utilisation), but you still cannot expect an order of
magnitude better numbers.

The best performance I have managed to get so far is 14 Mpps on a 3.2 GHz
Broadwell Xeon, but that was with a snake test setup (l2patch from physical
interface to memif). In your case you are using the full L2 path, which
includes two hash lookups per packet (learn and forward).
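For what it is worth, if the bridge-domains are only there to stitch interfaces
together, L2 cross-connects avoid the learn/forward lookups entirely; something
along these lines (a sketch, reusing the interface names from this thread):

set interface l2 xconnect eth1 memif0
set interface l2 xconnect memif0 eth1
set interface l2 xconnect eth2 memif1
set interface l2 xconnect memif1 eth2

plus the equivalent memif0 <-> memif1 cross-connects inside the container.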

Damjan

On 7 August 2017 at 15:29:18, khers (s3m2e1.6s...@gmail.com) wrote:

> Hi ,
> I am insterested in connecting two instance of vpp by memif (one vpp is
> running in host and another vpp is running in a lxc container).
> I have achieved functionality goal by memif but I have problem in
> performance test.
> I have done the following steps respectively:
> 1. first of all I installed lxc on my system
> 2. then, I made and installed vpp in lxc (I'll call it lxcvpp)
> 3. I installed vpp on my system (I'll call it hostvpp)
> 4. I created 2 memif on hostvpp
> create memif socket /tmp/unix_socket.file1 master
> create memif socket /tmp/unix_socket.file2 slave
> 5. I created 2 memif on lxcvpp
> create memif socket /share/unix_socket.file1 slave
> create memif socket /share/unix_socket.file2 master
> 6. I have two physical interface which are binded to hostvpp (I call two
> interfaces eth1 and eth2 respectively). so, I bridged my input interface
> (eth1) and memif0 (bridge-domain 1) and also bridged eth2 and memif1
> (bridge-domain 2).
>
> 7. moreover, I bridged memif0 and memif1 in lxcvpp.
>
> 8. I used trex as traffic generator. trafic is transmitted from trex to
> hostvpp by eth1 and it is recieved from eth2 interface of hostvpp. packets
> flow of this scenario is shown below.
>
> trex>eth0(hostvpp)>
> memif0(hostvpp)>memif0(lxcvpp)>memif1(lxcvpp)>memif1(hostvpp)>eth2(hostvpp)>trex
>
> After running trex, I got 4 MPPS with 64B packet size. Is it the maximum
> throughput of memif in this scenario?
> I expected much more throughput than 4 MPPS. Is there any solution and
> tuning to obtain more throughput? (I allocated one core to hostvpp and
> another core to lxcvpp)
>
> Cheers,
> khers
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev
>
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] API Change: Dedicated SW interface Event

2017-08-07 Thread Neale Ranns (nranns)

Hi All,

I would like to propose the addition of a dedicated SW interface event message 
type rather than overloading the set-flags request. The overloading of types 
causes problems for the automatic API generation tools.

https://gerrit.fd.io/r/#/c/7925/

regards,
neale


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] user space TCP stack

2017-08-07 Thread Florin Coras
Hi Hamid, 

Yes, we do have a userspace TCP stack but it is still under development. You 
can find examples of external apps here [1] and internal apps here [2, 3]. 

All of these use the binary api to interact with the session layer code. We’ll 
soon publish a wrapper library that should make interaction with the stack much 
easier. 

Regards, 
Florin

[1] https://git.fd.io/vpp/tree/src/uri/uri_tcp_test.c 

[2] https://git.fd.io/vpp/tree/src/vnet/tcp/builtin_server.c 

[3] https://git.fd.io/vpp/tree/src/vnet/tcp/builtin_client.c 


> On Aug 7, 2017, at 3:38 AM, kheirabadi  wrote:
> 
> ​Hi,
> I am trying to use mTCP in my application, but it is very primitive; for 
> example, it permits only one epoll instance and does not have an equivalent of 
> the shutdown API. Does VPP support a user-space TCP stack similar to mTCP? If 
> so, where is a sample application using it?
> Thanks,
> hamid
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io 
> https://lists.fd.io/mailman/listinfo/vpp-dev 
> 
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] [csit-dev] API Change Proposal: explicit FIB table create and delete

2017-08-07 Thread Neale Ranns (nranns)
Hi All,

In the absence of any objections I have done:
  https://gerrit.fd.io/r/#/c/7819/

I’ll have a crack at the necessary CSIT changes. Is this:
  https://wiki.fd.io/view/CSIT/Tutorials/Vagrant/Virtualbox/Ubuntu
still the recommended way to test CSIT code changes?

Thanks,
neale


From: Dave Wallace 
Date: Thursday, 3 August 2017 at 22:19
To: "Neale Ranns (nranns)" , "vpp-dev@lists.fd.io" 
, "csit-...@lists.fd.io" , 
"honeycomb-...@lists.fd.io" 
Subject: Re: [csit-dev] API Change Proposal: explicit FIB table create and 
delete

+1

Dave
On 8/3/17 3:56 AM, Neale Ranns (nranns) wrote:

Dear All,



I would like to propose the addition of a new API to explicitly create and 
delete FIB tables. At present the only ways to create FIB tables (e.g. for VRFs) 
are to:

1) Bind an interface to a new table index; ‘set int ip table Eth0 

2) Add a route in a new table and set the create_vrf_if_needed flag (CLI forms of both are sketched below)
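
For reference, the CLI forms of those two implicit-create paths look roughly like 
this (interface name, table id, prefix and next-hop are example values, and the 
exact behaviour of implicit table creation varies by release):

    set interface ip table GigabitEthernet0/0/0 10
    ip route add 10.1.1.0/24 table 10 via 192.168.1.1 GigabitEthernet0/0/0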



With the addition of an explicit create we have the possibility to set 
per-table properties, like the flow-hash and (potentially) the mtrie stride (to 
favour memory over performance for small VRFs). With an explicit delete VPP is 
aware when it is safe to delete the table.



An explicit API makes the management of FIB tables by the agent/client the same 
as managing any other table resource, like Bridge-Domains or classify tables.



Regards,

neale



___

csit-dev mailing list

csit-...@lists.fd.io

https://lists.fd.io/mailman/listinfo/csit-dev

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] FW: [openstack-dev] [neutron][networking-vpp]networking-vpp 17.07.1 for VPP 17.07 is available

2017-08-07 Thread Jerome Tollet (jtollet)
Dear FD.io-ers,
I know some of you may have missed this announcement on the OpenStack mailing list.
Regards,
Jerome

From: Ian Wells 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
Date: Monday, 31 July 2017 at 01:07
To: OpenStack Development Mailing List 
Subject: [openstack-dev] [neutron][networking-vpp]networking-vpp 17.07.1 for VPP 
17.07 is available

In conjunction with the release of VPP 17.07, I'd like to invite you all to try 
out networking-vpp 17.07.1 for VPP 17.07.  VPP is a fast userspace forwarder 
based on the DPDK toolkit, and uses vector packet processing algorithms to 
minimise the CPU time spent on each packet and maximise throughput.  
networking-vpp is a ML2 mechanism driver that controls VPP on your control and 
compute hosts to provide fast L2 forwarding under Neutron.
This version has a few additional enhancements, along with supporting the VPP 
17.07 API:
- remote security group IDs are now supported
- VXLAN GPE support now includes proxy ARP at the local forwarder

Along with this, there have been the usual bug fixes, code and test 
improvements.

The README [1] explains how you can try out VPP using devstack: the devstack 
plugin will deploy etcd, the mechanism driver and VPP itself and should give 
you a working system with a minimum of hassle.
We will continue development between now and VPP's 17.10 release in October.  
There are several features we're planning to work on (you'll find a list in our 
RFE bugs at [2]), and we welcome anyone who would like to come help us.

Everyone is welcome to join our biweekly IRC meetings, every other Monday (the 
next one is due in a week), 0900 PDT = 1600 GMT.
--
Ian.

[1] https://github.com/openstack/networking-vpp/blob/master/README.rst
[2] http://goo.gl/i3TzAt
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev