[vpp-dev] gtpu tunneling decap-next ip4 issue

2017-11-01 Thread Ryota Yushina
Hi, all

Let me ask about a GTPu issue.

I tried to overlay IPv4 packets with GTP-U, but it did not work on 17.10.
In fact, VPP silently restarted when I sent a ping.

Could someone help, or provide a sample GTP-U IPv4 configuration?

My situation:
In the diagram below, when I sent a ping from 10.9.0.3 to 11.9.0.4 on VPP#3,
VPP#7 rebooted (or crashed?).
I expected the ICMP echo request to be routed and encapsulated toward VPP#4 via
gtpu_tunnel0, but it wasn't.


+- VPP#3 -----------------------------------------------
|
| [TenGigabitEthernet82/0/1: 10.9.0.3]
+-------- | ---------------------------------------------
          |
+- VPP#7  | ---------------------------------------------
| [TenGigabitEthernet82/0/1: 10.9.0.1]
|
| [gtpu_tunnel0: 11.9.0.1]
|
| [TenGigabitEthernet82/0/0: 192.168.152.70] --> vrf:152
+-------- || --------------------------------------------
          ||
+-------- || --------------------------------------------
| [TenGigabitEthernet82/0/0: 192.168.152.40] --> vrf:152
|
| [loop0: 11.9.0.4]
+- VPP#4 ------------------------------------------------

My CLI configurations:
<VPP#3>
set interface ip address TenGigabitEthernet82/0/1 10.9.0.3/16
ip route 11.9.0.0/16 via 10.9.0.1
set interface state TenGigabitEthernet82/0/1 up

<VPP#7>
set interface ip address TenGigabitEthernet82/0/1 10.9.0.1/16

ip table add 152
set interface ip table TenGigabitEthernet82/0/0 152
set interface ip address TenGigabitEthernet82/0/0 192.168.152.70/24

create gtpu tunnel src 192.168.152.70 dst 192.168.152.40 teid  encap-vrf-id 152 decap-next ip4
set interface ip address gtpu_tunnel0 11.9.0.1/16

ip route 11.9.0.0/16 via gtpu_tunnel0
ip route 10.9.0.0/16 via TenGigabitEthernet82/0/1

set interface state TenGigabitEthernet82/0/0 up
set interface state TenGigabitEthernet82/0/1 up
set interface state loop0 up
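
For reference, the resulting tunnel and FIB state can be inspected with the
following vppctl commands (names from memory):

  show gtpu tunnel
  show ip fib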


Thanks.
---
Best Regards,

Ryota Yushina,




Re: [vpp-dev] C++ compilation uses a lot of memory

2017-11-01 Thread Burt Silverman
Thanks, Ole, misery loves company:-)

The only question I'd ask about your build change is whether
build-root/Makefile should be modified to accept command-line args, or
whether it should stay as is and you write the configuration options into a
locally modified copy of build-data/platforms/$(PLATFORM).mk. Maybe a
question for Dave. (I see an example of vpp_configure_args_vpp in
build-data/platforms/vpp.mk.) Personally, I am leaning towards using the
file: I am fairly sure it plays better with the ebuild mechanism, which
senses when you change the file and tells ebuild that configuration has to
be redone.
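
For what it's worth, a minimal sketch of the file-based approach (untested; the
variable is the one mentioned above and the flags are just a subset of the ones
from your command line quoted below):

  # build-data/platforms/vpp.mk, local uncommitted change
  vpp_configure_args_vpp = --disable-japi --disable-vapi --disable-vom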

Burt

On Wed, Nov 1, 2017 at 6:33 PM, Ole Troan  wrote:

> Burt et al,
>
> > I hadn't built new code in a few weeks. I have a machine with 8
> hyperthreads, and I have only 8GB memory minus what is used for hugepages.
> I ran out of virtual memory when doing a build, in the new C++ code. Just a
> heads up for anyone without huge memory per processor thread. My work
> around is going into build-root/Makefile and halving the number of
> MAKE_PARALLEL_JOBS. There is a number 2 that can be changed to 1 in case
> anybody is using a modest build machine.
>
> I ran into the same issue with my 2GB VPP-in-a-box E3845 system.
>
> I pushed patch https://gerrit.fd.io/r/#/c/9186/
> as a way of passing autoconf flags from the make command line.
> E.g.:
> make CONFIGURE_ARGS="--disable-pppoe-plugin --disable-nat-plugin
> --disable-lb-plugin --disable-japi --disable-vapi --disable-vom
> --disable-stn-plugin"  build
>
> I've -2'ed it for now, awaiting someone more clued up about the build system
> than me.
>
> Best regards,
> Ole
>
>

Re: [vpp-dev] Question about IP reassembly in 17.10 release

2017-11-01 Thread Yu, Michael A. (NSB - CN/Qingdao)
Hi Marion,

Thanks for this info, it’s clear to me now.

Best Regards
Michael

From: Damjan Marion [mailto:dmarion.li...@gmail.com]
Sent: 2017年11月1日 17:41
To: Yu, Michael A. (NSB - CN/Qingdao) 
Cc: vpp-dev 
Subject: Re: [vpp-dev] Question about IP reassembly in 17.10 release




On 30 Oct 2017, at 08:21, Yu, Michael A. (NSB - CN/Qingdao) 
> wrote:

Hi,

From https://docs.fd.io/vpp/17.10/release_notes_1710.html, I found that “IP 
reassembly” is claimed to be supported in the 17.10 release.
But after checking the latest 17.10 code, I can’t find any new code change 
related to this part, and fragmented packets are still treated as 
“experimental” in the function “ip4_local_inline” (ip4_forward.c):

   /* Treat IP frag packets as "experimental" protocol for now
  until support of IP frag reassembly is implemented */
   proto0 = ip4_is_fragment (ip0) ? 0xfe : ip0->protocol;
   proto1 = ip4_is_fragment (ip1) ? 0xfe : ip1->protocol;

I am confused by this; could anyone clarify whether “IP reassembly” is 
supported now? Thanks!

No, IP Reassembly will likely be in the next release.



[vpp-dev] VPP @ IETF 100

2017-11-01 Thread Ole Troan
Guys,

VPP is in a couple of projects at the IETF 100 Hackathon in Singapore.

https://www.ietf.org/registration/MeetingWiki/wiki/100hackathon

If you aren't able to join there but are in Singapore later in the week, would 
you be interested in a small VPP developer gathering? Lunch, dinner, or hiding 
away in a corner coding?
Please let me know directly, and I will summarize to the list.

Also if anyone has ideas for things to take to the IETF hackathon, please 
register and update the wiki.

Best regards,
Ole



[vpp-dev] C++ compilation uses a lot of memory

2017-11-01 Thread Burt Silverman
I hadn't built new code in a few weeks. I have a machine with 8
hyperthreads, and I have only 8GB memory minus what is used for hugepages.
I ran out of virtual memory when doing a build, in the new C++ code. Just a
heads up for anyone without huge memory per processor thread. My work
around is going into build-root/Makefile and halving the number of
MAKE_PARALLEL_JOBS. There is a number 2 that can be changed to 1 in case
anybody is using a modest build machine.

Burt

Re: [vpp-dev] vpp building problem

2017-11-01 Thread John Wei
What you have done is just the build; you need to do the installation and
configuration (if needed) before you can start the service.
Take a look at this link:

https://wiki.fd.io/view/VPP/Installing_VPP_binaries_from_packages

I am on CentOS; the build packages are under extras/rpm. If your build is
successful, you should have something under extras/.. to install.
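
In other words, roughly (a sketch from memory; target names and output paths may
differ slightly between releases):

  cd vpp/
  make pkg-rpm                      # builds the RPM packages
  # install the resulting .rpm files (look under build-root/ or extras/rpm/)
  sudo rpm -ivh path/to/vpp*.rpm
  sudo systemctl start vpp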

John


On Wed, Nov 1, 2017 at 9:50 AM, Holoo Gulakh  wrote:

> Hi all
> i'm trying to install vpp from source on Ubuntu 16.04 server (without
> virtualization). I obeyed the following this link:
>
> https://wiki.fd.io/view/VPP/Pulling,_Building,_Running,_
> Hacking_and_Pushing_VPP_Code#Pulling_anonymously_.28https.29
>
> So i did as such:
> sudo -s
> cd /
>
> git clone https://gerrit.fd.io/r/vpp
>
> cd vpp/
> make install-dep
> make bootstrap
> make build
>
> reboot
>
> after boot:
> sudo -s
> service vpp start
>
> But i got:
> * Failed to start vpp.service: Unit vpp.service not found.*
>
> What should i do?
>
> Thanks.
>
>
>
>

Re: [vpp-dev] vpp building problem

2017-11-01 Thread Luke, Chris
Hello,

If you want to run it as a service then I suggest you use the pre-built 
packages, or build the packages yourself and install those. Only the act of 
installing a package prepares the underlying system to run VPP as a system 
service.

The build instruction you gave leaves the built binaries in-situ in the 
build-root tree of the source directory; this is intended for development. You 
can run it in-situ with helper commands like “make debug” or “make run”. There 
are many such helpers; simply type “make” to see them.
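
For the Ubuntu 16.04 case you describe, the package route is roughly the
following (a sketch from memory; target names may differ slightly between
releases):

  cd vpp/
  make install-dep
  make bootstrap
  make pkg-deb                   # builds .deb packages (look under build-root/)
  sudo dpkg -i build-root/*.deb
  sudo service vpp start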

What you do depends on what you are trying to achieve.

Cheers,
Chris.

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Holoo Gulakh
Sent: Wednesday, November 01, 2017 12:50 PM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] vpp building problem

Hi all
i'm trying to install vpp from source on Ubuntu 16.04 server (without 
virtualization). I obeyed the following this link:

https://wiki.fd.io/view/VPP/Pulling,_Building,_Running,_Hacking_and_Pushing_VPP_Code#Pulling_anonymously_.28https.29

So i did as such:
sudo -s
cd /

git clone https://gerrit.fd.io/r/vpp

cd vpp/
make install-dep
make bootstrap
make build

reboot

after boot:
sudo -s
service vpp start

But i got:
 Failed to start vpp.service: Unit vpp.service not found.

What should i do?

Thanks.


Re: [vpp-dev] VPP low throughput

2017-11-01 Thread Steven Luong (sluong)
Avi,

You can tune a number of things to get higher throughput:

- Use VPP worker threads
- Increase the queue size to 1024 from the default 256 (done via the qemu launch command)
- Enable multi-queue (done via the qemu launch command)
- Increase the number of parallel streams when invoking iperf3, using -P

Nevertheless, tens of Mbps is unusually low. I don’t know what hardware you are 
using.
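
For illustration, the first three knobs look roughly like this (values and option
spellings from memory, so double-check them against your startup.conf and qemu
version):

  # /etc/vpp/startup.conf: dedicate worker cores to VPP
  cpu {
    main-core 1
    corelist-workers 2-3
  }

  # qemu vhost-user interface with multi-queue and 1024-entry rings
  # (rx_queue_size/tx_queue_size need a reasonably recent qemu)
  -chardev socket,id=chr0,path=/tmp/sock0 \
  -netdev type=vhost-user,id=net0,chardev=chr0,queues=2 \
  -device virtio-net-pci,netdev=net0,mq=on,rx_queue_size=1024,tx_queue_size=1024

  # iperf3 with 4 parallel streams
  iperf3 -c <server-ip> -P 4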

Steven

On 11/1/17, 8:53 AM, "vpp-dev-boun...@lists.fd.io on behalf of Avi Cohen" 
 wrote:

It is my 1st time running a VPP.
My setup is  2 VM's with a vpp in the middle (connected with vhost-user)   
as per  - 
https://wiki.fd.io/view/VPP/Use_VPP_to_connect_VMs_Using_Vhost-User_Interface
But get a very low throughput with iperf3  - 10s Mbps . any idea ?
VPP is always running with DPDK - correct ?

Thank you
Avi


[vpp-dev] vpp building problem

2017-11-01 Thread Holoo Gulakh
Hi all
I'm trying to install vpp from source on an Ubuntu 16.04 server (without
virtualization). I followed this link:

https://wiki.fd.io/view/VPP/Pulling,_Building,_Running,_Hacking_and_Pushing_VPP_Code#Pulling_anonymously_.28https.29

So I did the following:
sudo -s
cd /

git clone https://gerrit.fd.io/r/vpp

cd vpp/
make install-dep
make bootstrap
make build

reboot

after boot:
sudo -s
service vpp start

But I got:
Failed to start vpp.service: Unit vpp.service not found.

What should I do?

Thanks.

[vpp-dev] VPP low throughput

2017-11-01 Thread Avi Cohen
It is my first time running VPP.
My setup is 2 VMs with VPP in the middle (connected with vhost-user), as per
https://wiki.fd.io/view/VPP/Use_VPP_to_connect_VMs_Using_Vhost-User_Interface
But I get very low throughput with iperf3 (tens of Mbps). Any idea?
VPP is always running with DPDK - correct?

Thank you
Avi


Re: [vpp-dev] [SFC] Query regarding SFC classifier configuration for ip4 traffic

2017-11-01 Thread Phaneendra Manda
Hi Hongjun,

Thank you very much for your support.

I have tried this configuration for UDP packets; my configuration is below:

classify table mask l3 ip4 proto
classify session l2-input-hit-next input-node nsh-classifier table-index 0
match l3 ip4 proto 17 opaque-index 47615
set int l2 bridge GigabitEthernet0/9/0 1 1
set interface l2 input classify intfc GigabitEthernet0/9/0 ip4-table 0

When I send a UDP packet, it reaches "l2-input-classify" and then goes to the
"l2-learn" node. How can I direct the packet to the "nsh-classifier" node?

Trace output below:

00:49:42:211037: dpdk-input
  GigabitEthernet0/9/0 rx queue 0
  buffer 0x4d8e: current data 0, length 132, free-list 0, clone-count 0, totlen-nifb 0, trace 0x1
  PKT MBUF: port 0, nb_segs 1, pkt_len 132
    buf_len 2176, data_len 132, ol_flags 0x0, data_off 128, phys_addr 0x74d32280
    packet_type 0x0
  IP4: 08:00:27:aa:bb:21 -> 08:00:27:aa:bb:01 802.1ad vlan 100 802.1ad vlan 20
  UDP: 192.168.0.10 -> 192.0.0.1
    tos 0x00, ttl 255, length 110, checksum 0x4923
    fragment id 0xf1a7
  UDP: 1024 -> 1024
    length 90, checksum 0x
00:49:42:211084: ethernet-input
  IP4: 08:00:27:aa:bb:21 -> 08:00:27:aa:bb:01 802.1ad vlan 100 802.1ad vlan 20
00:49:42:211093: l2-input
  l2-input: sw_if_index 1 dst 08:00:27:aa:bb:01 src 08:00:27:aa:bb:21
00:49:42:211096: l2-input-classify
  l2-classify: sw_if_index 1, table -1, offset 0, next 12
00:49:42:211099: l2-learn
  l2-learn: sw_if_index 1 dst 08:00:27:aa:bb:01 src 08:00:27:aa:bb:21 bd_index 1
00:49:42:211102: l2-fwd
  l2-fwd:   sw_if_index 1 dst 08:00:27:aa:bb:01 src 08:00:27:aa:bb:21 bd_index 1
00:49:42:211104: l2-flood
  l2-flood: sw_if_index 1 dst 08:00:27:aa:bb:01 src 08:00:27:aa:bb:21 bd_index 1
00:49:42:211105: error-drop
  l2-flood: L2 replication complete
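
For reference, the classify tables and their sessions can be listed with the
following vppctl command (name from memory), in case that output is useful here:

  show classify tables verbose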


-- 
Thanks & regards,
Phaneendra Manda.


On Wed, Nov 1, 2017 at 8:27 AM, Ni, Hongjun  wrote:

> Hi Phaneendra,
>
>
>
> Please try below scripts:
>
>
>
> classify table mask l3 ip4 proto
>
> classify session l2-input-hit-next input-node nsh-classifier table-index 0
> match l3 ip4 proto 6 opaque-index 47615
>
> set int l2 bridge TenGigabitEthernet5/0/0 1 1
>
> set interface l2 input classify intfc TenGigabitEthernet5/0/0 ip4-table 0
>
>
>
> -Hongjun
>
>
>
> *From:* vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] *On
> Behalf Of *Phaneendra Manda
> *Sent:* Tuesday, October 31, 2017 8:11 PM
> *To:* vpp-dev@lists.fd.io
> *Subject:* [vpp-dev] [SFC] Query regarding SFC classifier configuration
> for ip4 traffic
>
>
>
> Hi All,
>
>
>
> I am trying out SFC with VPP for ip4 traffic on dpdk interface. I have few
> queries.
>
>
>
> 1. I would like to know what is the configuration for IP4 traffic to reach
> the nsh-classifier node in VPP using vppctl ?
>
>
>
>  I am trying with the following command for redirecting ip4 traffic to
> nsh-classifier node. But   the command throws error: "Table index
> required"
>
>
>
>  classify table mask l3 ip4 proto
>
> * classify session hit-next input-node nsh-classifier table-index 0
> match l3 ip4 proto 17   opaque-index 47615  -- This command throws
> error*
>
>
>
>
>
> 2. Do i need to associate interface with classifier table created?
>
>
>
> Thanks in advance :)
>
>
>
> --
>
> Thanks & regards,
>
> Phaneendra Manda.
>
>
>

Re: [vpp-dev] 50GE interface support on VPP

2017-11-01 Thread Bernier, Daniel
Hi,

I have had the same issue and am still working on a fix with Mellanox for this
one.
Damjan is right, it is just cosmetic (although annoying).

On the Linux kernel side, it implies moving to kernel 4.8 or above and a newer
version of ethtool.
On the VPP side, it just requires Mellanox to advertise the speed capability
correctly through DPDK, and that is still only half done, I suppose.

Thanks,

Daniel Bernier | Bell Canada


From: "Saxena, Nitin" 
Date: Wednesday, November 1, 2017 at 8:54 AM
To: "Damjan Marion (damarion)" 
Cc: "Bernier, Daniel" , "vpp-dev@lists.fd.io" 

Subject: Re: [vpp-dev] 50GE interface support on VPP


I havent ran testpmd but with VPP I am able to switch traffic between two 
ports. Both ports in VPP bridge. Seems fine right



-Nitin


From: Damjan Marion (damarion) 
Sent: Wednesday, November 1, 2017 5:50:57 PM
To: Saxena, Nitin
Cc: Bernier, Daniel; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] 50GE interface support on VPP


Currently it is just cosmetic…..

Does it work with testpmd?

—
Damjan


On 1 Nov 2017, at 13:14, Saxena, Nitin 
> wrote:

Ok Thanks I will debug where the problem lies.

However is this just a display issue or problem lies with data path as well 
because I am able to receive packets via this NIC to VPP from outside world? 
Any concern here?

-Nitin

From: Damjan Marion (damarion) >
Sent: Wednesday, November 1, 2017 5:39:24 PM
To: Saxena, Nitin
Cc: Bernier, Daniel; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] 50GE interface support on VPP


mlx5 dpdk driver is telling us that speed_capa = 0, so no much love here.

You should get at least ETH_LINK_SPEED_50G bit set by dpdk driver.

—
Damjan


On 1 Nov 2017, at 12:55, Saxena, Nitin 
> wrote:

Here is the detail

(gdb) p *dev_info
$1 = {pci_dev = 0x51c4e0, driver_name = 0x76eb38a8 "net_mlx5", if_index = 
8, min_rx_bufsize = 32, max_rx_pktlen = 65536, max_rx_queues = 65535,
  max_tx_queues = 65535, max_mac_addrs = 128, max_hash_mac_addrs = 0, max_vfs = 
0, max_vmdq_pools = 0, rx_offload_capa = 15, tx_offload_capa = 1679, reta_size 
= 512,
  hash_key_size = 40 '(', flow_type_rss_offloads = 0, default_rxconf = 
{rx_thresh = {pthresh = 0 '\000', hthresh = 0 '\000', wthresh = 0 '\000'}, 
rx_free_thresh = 0,
rx_drop_en = 0 '\000', rx_deferred_start = 0 '\000'}, default_txconf = 
{tx_thresh = {pthresh = 0 '\000', hthresh = 0 '\000', wthresh = 0 '\000'},
tx_rs_thresh = 0, tx_free_thresh = 0, txq_flags = 0, tx_deferred_start = 0 
'\000'}, vmdq_queue_base = 0, vmdq_queue_num = 0, vmdq_pool_base = 0, 
rx_desc_lim = {
nb_max = 65535, nb_min = 0, nb_align = 1, nb_seg_max = 0, nb_mtu_seg_max = 
0}, tx_desc_lim = {nb_max = 65535, nb_min = 0, nb_align = 1, nb_seg_max = 0, 
nb_mtu_seg_max = 0}, speed_capa = 0, nb_rx_queues = 0, nb_tx_queues = 0}

Thanks,
Nitin


From: Damjan Marion (damarion) >
Sent: Wednesday, November 1, 2017 5:17 PM
To: Saxena, Nitin
Cc: Bernier, Daniel; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] 50GE interface support on VPP


Can you put breakpoint to port_type_from_speed_capa and catpture dev_info.

I.e:

$ make build debug
(gdb) b port_type_from_speed_capa
(gdb) r

(gdb) p * dev_info

—
Damjan


On 1 Nov 2017, at 12:34, Saxena, Nitin 
> wrote:

Please find show pci output

DBGvpp# show pci
Address  Sock VID:PID Link Speed   Driver  Product Name 
   Vital Product Data
:0b:00.0   0  14e4:16a1   8.0 GT/s x8  bnx2x   OCP 10GbE Dual Port 
SFP+ Adapter
:32:00.1   0  15b3:1013   8.0 GT/s x16 mlx5_core   CX416A - ConnectX-4 
QSFP28
:13:00.1   0  8086:10c9   2.5 GT/s x4  igb
:0b:00.1   0  14e4:16a1   8.0 GT/s x8  bnx2x   OCP 10GbE Dual Port 
SFP+ Adapter
:32:00.0   0  15b3:1013   8.0 GT/s x16 mlx5_core   CX416A - ConnectX-4 
QSFP28
:13:00.0   0  8086:10c9   2.5 GT/s x4  igb

Just Fyi I am running VPP on aarch64.

Thanks,
Nitin


From: Damjan Marion (damarion) >
Sent: Wednesday, November 1, 2017 3:09 PM
To: Saxena, Nitin
Cc: Bernier, Daniel; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] 50GE interface support on VPP


Can you share “show pci” output from VPP?

—
Damjan


On 30 Oct 2017, at 14:22, Saxena, Nitin 
> wrote:

Hi Damjan,

I am still seeing UnkownEthernet32/0/0/0 interface with Mellanox Connect X-4 
NIC. I am using vpp v17.10 tag. I think the specified 

Re: [vpp-dev] 50GE interface support on VPP

2017-11-01 Thread Saxena, Nitin
I haven't run testpmd, but with VPP I am able to switch traffic between two
ports (both ports in a VPP bridge). Seems fine, right?


-Nitin


From: Damjan Marion (damarion) 
Sent: Wednesday, November 1, 2017 5:50:57 PM
To: Saxena, Nitin
Cc: Bernier, Daniel; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] 50GE interface support on VPP


Currently it is just cosmetic…..

Does it work with testpmd?

—
Damjan

On 1 Nov 2017, at 13:14, Saxena, Nitin 
> wrote:

Ok Thanks I will debug where the problem lies.

However is this just a display issue or problem lies with data path as well 
because I am able to receive packets via this NIC to VPP from outside world? 
Any concern here?

-Nitin

From: Damjan Marion (damarion) >
Sent: Wednesday, November 1, 2017 5:39:24 PM
To: Saxena, Nitin
Cc: Bernier, Daniel; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] 50GE interface support on VPP


mlx5 dpdk driver is telling us that speed_capa = 0, so no much love here.

You should get at least ETH_LINK_SPEED_50G bit set by dpdk driver.

—
Damjan

On 1 Nov 2017, at 12:55, Saxena, Nitin 
> wrote:

Here is the detail


(gdb) p *dev_info
$1 = {pci_dev = 0x51c4e0, driver_name = 0x76eb38a8 "net_mlx5", if_index = 
8, min_rx_bufsize = 32, max_rx_pktlen = 65536, max_rx_queues = 65535,
  max_tx_queues = 65535, max_mac_addrs = 128, max_hash_mac_addrs = 0, max_vfs = 
0, max_vmdq_pools = 0, rx_offload_capa = 15, tx_offload_capa = 1679, reta_size 
= 512,
  hash_key_size = 40 '(', flow_type_rss_offloads = 0, default_rxconf = 
{rx_thresh = {pthresh = 0 '\000', hthresh = 0 '\000', wthresh = 0 '\000'}, 
rx_free_thresh = 0,
rx_drop_en = 0 '\000', rx_deferred_start = 0 '\000'}, default_txconf = 
{tx_thresh = {pthresh = 0 '\000', hthresh = 0 '\000', wthresh = 0 '\000'},
tx_rs_thresh = 0, tx_free_thresh = 0, txq_flags = 0, tx_deferred_start = 0 
'\000'}, vmdq_queue_base = 0, vmdq_queue_num = 0, vmdq_pool_base = 0, 
rx_desc_lim = {
nb_max = 65535, nb_min = 0, nb_align = 1, nb_seg_max = 0, nb_mtu_seg_max = 
0}, tx_desc_lim = {nb_max = 65535, nb_min = 0, nb_align = 1, nb_seg_max = 0, 
nb_mtu_seg_max = 0}, speed_capa = 0, nb_rx_queues = 0, nb_tx_queues = 0}

Thanks,
Nitin



From: Damjan Marion (damarion) >
Sent: Wednesday, November 1, 2017 5:17 PM
To: Saxena, Nitin
Cc: Bernier, Daniel; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] 50GE interface support on VPP


Can you put breakpoint to port_type_from_speed_capa and catpture dev_info.

I.e:

$ make build debug
(gdb) b port_type_from_speed_capa
(gdb) r

(gdb) p * dev_info

—
Damjan

On 1 Nov 2017, at 12:34, Saxena, Nitin 
> wrote:

Please find show pci output


DBGvpp# show pci
Address  Sock VID:PID Link Speed   Driver  Product Name 
   Vital Product Data
:0b:00.0   0  14e4:16a1   8.0 GT/s x8  bnx2x   OCP 10GbE Dual Port 
SFP+ Adapter
:32:00.1   0  15b3:1013   8.0 GT/s x16 mlx5_core   CX416A - ConnectX-4 
QSFP28
:13:00.1   0  8086:10c9   2.5 GT/s x4  igb
:0b:00.1   0  14e4:16a1   8.0 GT/s x8  bnx2x   OCP 10GbE Dual Port 
SFP+ Adapter
:32:00.0   0  15b3:1013   8.0 GT/s x16 mlx5_core   CX416A - ConnectX-4 
QSFP28
:13:00.0   0  8086:10c9   2.5 GT/s x4  igb


Just Fyi I am running VPP on aarch64.

Thanks,
Nitin


From: Damjan Marion (damarion) >
Sent: Wednesday, November 1, 2017 3:09 PM
To: Saxena, Nitin
Cc: Bernier, Daniel; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] 50GE interface support on VPP


Can you share “show pci” output from VPP?

—
Damjan

On 30 Oct 2017, at 14:22, Saxena, Nitin 
> wrote:

Hi Damjan,

I am still seeing UnkownEthernet32/0/0/0 interface with Mellanox Connect X-4 
NIC. I am using vpp v17.10 tag. I think the specified gerrit patch in following 
mail is part of v17.10 release.

Attached logs.

Thanks,
Nitin



From: vpp-dev-boun...@lists.fd.io 
> on behalf of 
Damjan Marion (damarion) >
Sent: Wednesday, July 5, 2017 5:38 AM
To: Bernier, Daniel
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] 50GE interface support on VPP

Hi Daniel,

Can you try with this patch?

https://gerrit.fd.io/r/#/c/7418/

Regards,

Damjan

On 4 Jul 2017, at 22:14, Bernier, Daniel 
> wrote:

Hi,

I 

Re: [vpp-dev] 50GE interface support on VPP

2017-11-01 Thread Damjan Marion (damarion)

Currently it is just cosmetic…..

Does it work with testpmd?
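
(For a quick check, something along these lines should do - testpmd options from
memory, adjust the PCI addresses to the two ConnectX-4 ports:)

  testpmd -w 0000:32:00.0 -w 0000:32:00.1 -- -i --forward-mode=io
  testpmd> start
  testpmd> show port stats all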

—
Damjan

On 1 Nov 2017, at 13:14, Saxena, Nitin 
> wrote:

Ok Thanks I will debug where the problem lies.

However is this just a display issue or problem lies with data path as well 
because I am able to receive packets via this NIC to VPP from outside world? 
Any concern here?

-Nitin

From: Damjan Marion (damarion) >
Sent: Wednesday, November 1, 2017 5:39:24 PM
To: Saxena, Nitin
Cc: Bernier, Daniel; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] 50GE interface support on VPP


mlx5 dpdk driver is telling us that speed_capa = 0, so no much love here.

You should get at least ETH_LINK_SPEED_50G bit set by dpdk driver.

—
Damjan

On 1 Nov 2017, at 12:55, Saxena, Nitin 
> wrote:

Here is the detail


(gdb) p *dev_info
$1 = {pci_dev = 0x51c4e0, driver_name = 0x76eb38a8 "net_mlx5", if_index = 
8, min_rx_bufsize = 32, max_rx_pktlen = 65536, max_rx_queues = 65535,
  max_tx_queues = 65535, max_mac_addrs = 128, max_hash_mac_addrs = 0, max_vfs = 
0, max_vmdq_pools = 0, rx_offload_capa = 15, tx_offload_capa = 1679, reta_size 
= 512,
  hash_key_size = 40 '(', flow_type_rss_offloads = 0, default_rxconf = 
{rx_thresh = {pthresh = 0 '\000', hthresh = 0 '\000', wthresh = 0 '\000'}, 
rx_free_thresh = 0,
rx_drop_en = 0 '\000', rx_deferred_start = 0 '\000'}, default_txconf = 
{tx_thresh = {pthresh = 0 '\000', hthresh = 0 '\000', wthresh = 0 '\000'},
tx_rs_thresh = 0, tx_free_thresh = 0, txq_flags = 0, tx_deferred_start = 0 
'\000'}, vmdq_queue_base = 0, vmdq_queue_num = 0, vmdq_pool_base = 0, 
rx_desc_lim = {
nb_max = 65535, nb_min = 0, nb_align = 1, nb_seg_max = 0, nb_mtu_seg_max = 
0}, tx_desc_lim = {nb_max = 65535, nb_min = 0, nb_align = 1, nb_seg_max = 0, 
nb_mtu_seg_max = 0}, speed_capa = 0, nb_rx_queues = 0, nb_tx_queues = 0}

Thanks,
Nitin



From: Damjan Marion (damarion) >
Sent: Wednesday, November 1, 2017 5:17 PM
To: Saxena, Nitin
Cc: Bernier, Daniel; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] 50GE interface support on VPP


Can you put breakpoint to port_type_from_speed_capa and catpture dev_info.

I.e:

$ make build debug
(gdb) b port_type_from_speed_capa
(gdb) r

(gdb) p * dev_info

—
Damjan

On 1 Nov 2017, at 12:34, Saxena, Nitin 
> wrote:

Please find show pci output


DBGvpp# show pci
Address  Sock VID:PID Link Speed   Driver  Product Name 
   Vital Product Data
:0b:00.0   0  14e4:16a1   8.0 GT/s x8  bnx2x   OCP 10GbE Dual Port 
SFP+ Adapter
:32:00.1   0  15b3:1013   8.0 GT/s x16 mlx5_core   CX416A - ConnectX-4 
QSFP28
:13:00.1   0  8086:10c9   2.5 GT/s x4  igb
:0b:00.1   0  14e4:16a1   8.0 GT/s x8  bnx2x   OCP 10GbE Dual Port 
SFP+ Adapter
:32:00.0   0  15b3:1013   8.0 GT/s x16 mlx5_core   CX416A - ConnectX-4 
QSFP28
:13:00.0   0  8086:10c9   2.5 GT/s x4  igb


Just Fyi I am running VPP on aarch64.

Thanks,
Nitin


From: Damjan Marion (damarion) >
Sent: Wednesday, November 1, 2017 3:09 PM
To: Saxena, Nitin
Cc: Bernier, Daniel; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] 50GE interface support on VPP


Can you share “show pci” output from VPP?

—
Damjan

On 30 Oct 2017, at 14:22, Saxena, Nitin 
> wrote:

Hi Damjan,

I am still seeing UnkownEthernet32/0/0/0 interface with Mellanox Connect X-4 
NIC. I am using vpp v17.10 tag. I think the specified gerrit patch in following 
mail is part of v17.10 release.

Attached logs.

Thanks,
Nitin



From: vpp-dev-boun...@lists.fd.io 
> on behalf of 
Damjan Marion (damarion) >
Sent: Wednesday, July 5, 2017 5:38 AM
To: Bernier, Daniel
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] 50GE interface support on VPP

Hi Daniel,

Can you try with this patch?

https://gerrit.fd.io/r/#/c/7418/

Regards,

Damjan

On 4 Jul 2017, at 22:14, Bernier, Daniel 
> wrote:

Hi,

I have ConnectX-4 50GE interfaces running on VPP and for some reason, they 
appear as “Unknown” even when running as 40GE.

localadmin@sm981:~$ lspci | grep Mellanox
81:00.0 Ethernet controller: Mellanox Technologies MT27700 Family [ConnectX-4]
81:00.1 Ethernet controller: Mellanox Technologies MT27700 Family [ConnectX-4]

localadmin@sm981:~$ ethtool ens1f0
Settings for ens1f0:

Re: [vpp-dev] 50GE interface support on VPP

2017-11-01 Thread Saxena, Nitin
OK, thanks. I will debug where the problem lies.

However, is this just a display issue, or does the problem lie in the data path
as well? I am able to receive packets into VPP via this NIC from the outside
world. Any concern here?


-Nitin


From: Damjan Marion (damarion) 
Sent: Wednesday, November 1, 2017 5:39:24 PM
To: Saxena, Nitin
Cc: Bernier, Daniel; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] 50GE interface support on VPP


mlx5 dpdk driver is telling us that speed_capa = 0, so no much love here.

You should get at least ETH_LINK_SPEED_50G bit set by dpdk driver.

—
Damjan

On 1 Nov 2017, at 12:55, Saxena, Nitin 
> wrote:

Here is the detail


(gdb) p *dev_info
$1 = {pci_dev = 0x51c4e0, driver_name = 0x76eb38a8 "net_mlx5", if_index = 
8, min_rx_bufsize = 32, max_rx_pktlen = 65536, max_rx_queues = 65535,
  max_tx_queues = 65535, max_mac_addrs = 128, max_hash_mac_addrs = 0, max_vfs = 
0, max_vmdq_pools = 0, rx_offload_capa = 15, tx_offload_capa = 1679, reta_size 
= 512,
  hash_key_size = 40 '(', flow_type_rss_offloads = 0, default_rxconf = 
{rx_thresh = {pthresh = 0 '\000', hthresh = 0 '\000', wthresh = 0 '\000'}, 
rx_free_thresh = 0,
rx_drop_en = 0 '\000', rx_deferred_start = 0 '\000'}, default_txconf = 
{tx_thresh = {pthresh = 0 '\000', hthresh = 0 '\000', wthresh = 0 '\000'},
tx_rs_thresh = 0, tx_free_thresh = 0, txq_flags = 0, tx_deferred_start = 0 
'\000'}, vmdq_queue_base = 0, vmdq_queue_num = 0, vmdq_pool_base = 0, 
rx_desc_lim = {
nb_max = 65535, nb_min = 0, nb_align = 1, nb_seg_max = 0, nb_mtu_seg_max = 
0}, tx_desc_lim = {nb_max = 65535, nb_min = 0, nb_align = 1, nb_seg_max = 0, 
nb_mtu_seg_max = 0}, speed_capa = 0, nb_rx_queues = 0, nb_tx_queues = 0}

Thanks,
Nitin



From: Damjan Marion (damarion) >
Sent: Wednesday, November 1, 2017 5:17 PM
To: Saxena, Nitin
Cc: Bernier, Daniel; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] 50GE interface support on VPP


Can you put breakpoint to port_type_from_speed_capa and catpture dev_info.

I.e:

$ make build debug
(gdb) b port_type_from_speed_capa
(gdb) r

(gdb) p * dev_info

—
Damjan

On 1 Nov 2017, at 12:34, Saxena, Nitin 
> wrote:

Please find show pci output


DBGvpp# show pci
Address  Sock VID:PID Link Speed   Driver  Product Name 
   Vital Product Data
:0b:00.0   0  14e4:16a1   8.0 GT/s x8  bnx2x   OCP 10GbE Dual Port 
SFP+ Adapter
:32:00.1   0  15b3:1013   8.0 GT/s x16 mlx5_core   CX416A - ConnectX-4 
QSFP28
:13:00.1   0  8086:10c9   2.5 GT/s x4  igb
:0b:00.1   0  14e4:16a1   8.0 GT/s x8  bnx2x   OCP 10GbE Dual Port 
SFP+ Adapter
:32:00.0   0  15b3:1013   8.0 GT/s x16 mlx5_core   CX416A - ConnectX-4 
QSFP28
:13:00.0   0  8086:10c9   2.5 GT/s x4  igb


Just Fyi I am running VPP on aarch64.

Thanks,
Nitin


From: Damjan Marion (damarion) >
Sent: Wednesday, November 1, 2017 3:09 PM
To: Saxena, Nitin
Cc: Bernier, Daniel; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] 50GE interface support on VPP


Can you share “show pci” output from VPP?

—
Damjan

On 30 Oct 2017, at 14:22, Saxena, Nitin 
> wrote:

Hi Damjan,

I am still seeing UnkownEthernet32/0/0/0 interface with Mellanox Connect X-4 
NIC. I am using vpp v17.10 tag. I think the specified gerrit patch in following 
mail is part of v17.10 release.

Attached logs.

Thanks,
Nitin



From: vpp-dev-boun...@lists.fd.io 
> on behalf of 
Damjan Marion (damarion) >
Sent: Wednesday, July 5, 2017 5:38 AM
To: Bernier, Daniel
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] 50GE interface support on VPP

Hi Daniel,

Can you try with this patch?

https://gerrit.fd.io/r/#/c/7418/

Regards,

Damjan

On 4 Jul 2017, at 22:14, Bernier, Daniel 
> wrote:

Hi,

I have ConnectX-4 50GE interfaces running on VPP and for some reason, they 
appear as “Unknown” even when running as 40GE.

localadmin@sm981:~$ lspci | grep Mellanox
81:00.0 Ethernet controller: Mellanox Technologies MT27700 Family [ConnectX-4]
81:00.1 Ethernet controller: Mellanox Technologies MT27700 Family [ConnectX-4]

localadmin@sm981:~$ ethtool ens1f0
Settings for ens1f0:
Supported ports: [ FIBRE Backplane ]
Supported link modes:   1000baseKX/Full
1baseKR/Full
4baseKR4/Full
   

Re: [vpp-dev] 50GE interface support on VPP

2017-11-01 Thread Damjan Marion (damarion)

The mlx5 dpdk driver is telling us that speed_capa = 0, so not much love here.

You should get at least the ETH_LINK_SPEED_50G bit set by the dpdk driver.

—
Damjan

On 1 Nov 2017, at 12:55, Saxena, Nitin 
> wrote:

Here is the detail


(gdb) p *dev_info
$1 = {pci_dev = 0x51c4e0, driver_name = 0x76eb38a8 "net_mlx5", if_index = 
8, min_rx_bufsize = 32, max_rx_pktlen = 65536, max_rx_queues = 65535,
  max_tx_queues = 65535, max_mac_addrs = 128, max_hash_mac_addrs = 0, max_vfs = 
0, max_vmdq_pools = 0, rx_offload_capa = 15, tx_offload_capa = 1679, reta_size 
= 512,
  hash_key_size = 40 '(', flow_type_rss_offloads = 0, default_rxconf = 
{rx_thresh = {pthresh = 0 '\000', hthresh = 0 '\000', wthresh = 0 '\000'}, 
rx_free_thresh = 0,
rx_drop_en = 0 '\000', rx_deferred_start = 0 '\000'}, default_txconf = 
{tx_thresh = {pthresh = 0 '\000', hthresh = 0 '\000', wthresh = 0 '\000'},
tx_rs_thresh = 0, tx_free_thresh = 0, txq_flags = 0, tx_deferred_start = 0 
'\000'}, vmdq_queue_base = 0, vmdq_queue_num = 0, vmdq_pool_base = 0, 
rx_desc_lim = {
nb_max = 65535, nb_min = 0, nb_align = 1, nb_seg_max = 0, nb_mtu_seg_max = 
0}, tx_desc_lim = {nb_max = 65535, nb_min = 0, nb_align = 1, nb_seg_max = 0, 
nb_mtu_seg_max = 0}, speed_capa = 0, nb_rx_queues = 0, nb_tx_queues = 0}

Thanks,
Nitin



From: Damjan Marion (damarion) >
Sent: Wednesday, November 1, 2017 5:17 PM
To: Saxena, Nitin
Cc: Bernier, Daniel; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] 50GE interface support on VPP


Can you put breakpoint to port_type_from_speed_capa and catpture dev_info.

I.e:

$ make build debug
(gdb) b port_type_from_speed_capa
(gdb) r

(gdb) p * dev_info

—
Damjan

On 1 Nov 2017, at 12:34, Saxena, Nitin 
> wrote:

Please find show pci output


DBGvpp# show pci
Address  Sock VID:PID Link Speed   Driver  Product Name 
   Vital Product Data
:0b:00.0   0  14e4:16a1   8.0 GT/s x8  bnx2x   OCP 10GbE Dual Port 
SFP+ Adapter
:32:00.1   0  15b3:1013   8.0 GT/s x16 mlx5_core   CX416A - ConnectX-4 
QSFP28
:13:00.1   0  8086:10c9   2.5 GT/s x4  igb
:0b:00.1   0  14e4:16a1   8.0 GT/s x8  bnx2x   OCP 10GbE Dual Port 
SFP+ Adapter
:32:00.0   0  15b3:1013   8.0 GT/s x16 mlx5_core   CX416A - ConnectX-4 
QSFP28
:13:00.0   0  8086:10c9   2.5 GT/s x4  igb


Just Fyi I am running VPP on aarch64.

Thanks,
Nitin


From: Damjan Marion (damarion) >
Sent: Wednesday, November 1, 2017 3:09 PM
To: Saxena, Nitin
Cc: Bernier, Daniel; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] 50GE interface support on VPP


Can you share “show pci” output from VPP?

—
Damjan

On 30 Oct 2017, at 14:22, Saxena, Nitin 
> wrote:

Hi Damjan,

I am still seeing UnkownEthernet32/0/0/0 interface with Mellanox Connect X-4 
NIC. I am using vpp v17.10 tag. I think the specified gerrit patch in following 
mail is part of v17.10 release.

Attached logs.

Thanks,
Nitin



From: vpp-dev-boun...@lists.fd.io 
> on behalf of 
Damjan Marion (damarion) >
Sent: Wednesday, July 5, 2017 5:38 AM
To: Bernier, Daniel
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] 50GE interface support on VPP

Hi Daniel,

Can you try with this patch?

https://gerrit.fd.io/r/#/c/7418/

Regards,

Damjan

On 4 Jul 2017, at 22:14, Bernier, Daniel 
> wrote:

Hi,

I have ConnectX-4 50GE interfaces running on VPP and for some reason, they 
appear as “Unknown” even when running as 40GE.

localadmin@sm981:~$ lspci | grep Mellanox
81:00.0 Ethernet controller: Mellanox Technologies MT27700 Family [ConnectX-4]
81:00.1 Ethernet controller: Mellanox Technologies MT27700 Family [ConnectX-4]

localadmin@sm981:~$ ethtool ens1f0
Settings for ens1f0:
Supported ports: [ FIBRE Backplane ]
Supported link modes:   1000baseKX/Full
1baseKR/Full
4baseKR4/Full
4baseCR4/Full
4baseSR4/Full
4baseLR4/Full
Supported pause frame use: Symmetric Receive-only
Supports auto-negotiation: Yes
Advertised link modes:  1000baseKX/Full
1baseKR/Full
4baseKR4/Full

[vpp-dev] MACCHIATObin and VPP

2017-11-01 Thread Damjan Marion (damarion)

If people are interested, there is ongoing work[1] to bring VPP up
on the Marvell MACCHIATObin[2] board, an interesting ARM64 community board with
SFP+ ports.

[1] https://github.com/MarvellEmbeddedProcessors/vpp-marvell
[2] http://macchiatobin.net

— 
Damjan


Re: [vpp-dev] Question about IP reassembly in 17.10 release

2017-11-01 Thread Damjan Marion


> On 30 Oct 2017, at 08:21, Yu, Michael A. (NSB - CN/Qingdao) 
>  wrote:
> 
> Hi,
>  
> From https://docs.fd.io/vpp/17.10/release_notes_1710.html, I found that “IP 
> reassembly” is claimed to be supported in the 17.10 release.
> But after checking the latest 17.10 code, I can’t find any new code change 
> related to this part, and fragmented packets are still treated as 
> “experimental” in the function “ip4_local_inline” (ip4_forward.c):
>  
>/* Treat IP frag packets as "experimental" protocol for now
>   until support of IP frag reassembly is implemented */
>proto0 = ip4_is_fragment (ip0) ? 0xfe : ip0->protocol;
>proto1 = ip4_is_fragment (ip1) ? 0xfe : ip1->protocol;
>  
> I am confused by this; could anyone clarify whether “IP reassembly” is 
> supported now? Thanks!

No, IP Reassembly will likely be in the next release.



Re: [vpp-dev] 50GE interface support on VPP

2017-11-01 Thread Damjan Marion (damarion)

Can you share “show pci” output from VPP?

—
Damjan

On 30 Oct 2017, at 14:22, Saxena, Nitin 
> wrote:

Hi Damjan,

I am still seeing UnkownEthernet32/0/0/0 interface with Mellanox Connect X-4 
NIC. I am using vpp v17.10 tag. I think the specified gerrit patch in following 
mail is part of v17.10 release.

Attached logs.

Thanks,
Nitin



From: vpp-dev-boun...@lists.fd.io 
> on behalf of 
Damjan Marion (damarion) >
Sent: Wednesday, July 5, 2017 5:38 AM
To: Bernier, Daniel
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] 50GE interface support on VPP

Hi Daniel,

Can you try with this patch?

https://gerrit.fd.io/r/#/c/7418/

Regards,

Damjan

On 4 Jul 2017, at 22:14, Bernier, Daniel 
> wrote:

Hi,

I have ConnectX-4 50GE interfaces running on VPP and for some reason, they 
appear as “Unknown” even when running as 40GE.

localadmin@sm981:~$ lspci | grep Mellanox
81:00.0 Ethernet controller: Mellanox Technologies MT27700 Family [ConnectX-4]
81:00.1 Ethernet controller: Mellanox Technologies MT27700 Family [ConnectX-4]

localadmin@sm981:~$ ethtool ens1f0
Settings for ens1f0:
        Supported ports: [ FIBRE Backplane ]
        Supported link modes:   1000baseKX/Full
                                10000baseKR/Full
                                40000baseKR4/Full
                                40000baseCR4/Full
                                40000baseSR4/Full
                                40000baseLR4/Full
        Supported pause frame use: Symmetric Receive-only
        Supports auto-negotiation: Yes
        Advertised link modes:  1000baseKX/Full
                                10000baseKR/Full
                                40000baseKR4/Full
                                40000baseCR4/Full
                                40000baseSR4/Full
                                40000baseLR4/Full
        Advertised pause frame use: No
        Advertised auto-negotiation: Yes
        Speed: 40000Mb/s
        Duplex: Full
        Port: Direct Attach Copper
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: on
        Cannot get wake-on-lan settings: Operation not permitted
        Current message level: 0x00000004 (4)
                               link
        Link detected: yes

localadmin@sm981:~$ sudo vppctl show interface
              Name      Idx    State   Counter      Count
UnknownEthernet81/0/0    1      up     rx packets   723257
                                       rx bytes     68599505
                                       tx packets   39495
                                       tx bytes     2093235
                                       drops        723257
                                       ip4          48504
UnknownEthernet81/0/1    2      up     rx packets   723194
                                       rx bytes     68592678
                                       tx packets   39495
                                       tx bytes     2093235
                                       drops        723194
                                       ip4          48504
local0                   0     down


Any ideas where this could be fixed?

Thanks,

Daniel Bernier | Bell Canada


Re: [vpp-dev] dpdk output function

2017-11-01 Thread Neale Ranns (nranns)
Hi Yuliang,

It will call:
  vnet_interface_output_node()
from
 src/vnet/interface_output.c

There is also a TenGigabitEthernet5/0/1-tx node. Since you are using DPDK on 
this interface, that will call:
  dpdk_interface_tx()
from
  src/plugins/dpdk/device/device.c
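
If you want to watch those functions being hit, one quick way (with a debug
build, e.g. via the "make debug" helper mentioned elsewhere in this digest) is:

  $ make debug
  (gdb) b vnet_interface_output_node
  (gdb) b dpdk_interface_tx
  (gdb) r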

hth
neale

From:  on behalf of Yuliang Li 

Date: Wednesday, 1 November 2017 at 05:31
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] dpdk output function

Hi,

Some node is called "TenGigabitEthernet5/0/1-output". I am using DPDK on this
interface. Does anyone know which function this node calls?

Thanks,
--
Yuliang Li
PhD student
Department of Computer Science
Yale University