Re: [vpp-dev] Endless NAT Questions

2018-08-15 Thread Matus Fabian -X (matfabia - PANTHEON TECHNOLOGIES@Cisco) via Lists.Fd.Io
I think nat_show_config_reply should be augmented with some fields reflecting 
newer features.
You are correct that deterministic and endpoint-dependent modes are mutually exclusive.
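
For illustration only, a hypothetical sketch of what such an augmented reply
could look like in nat.api; the added field names below are assumptions for
discussion, not the shipped API:

   define nat_show_config_reply
   {
     u32 context;
     i32 retval;
     /* ... existing fields ... */
     /* hypothetical additions, names illustrative only: */
     u8 endpoint_dependent;    /* endpoint-dependent mode enabled */
     u8 out2in_dpo;            /* out2in DPO mode enabled */
     u8 dslite_ce;             /* DS-Lite CE mode enabled */
     u32 nat64_bib_buckets;    /* NAT64 BIB hash buckets */
     u32 nat64_st_buckets;     /* NAT64 session table hash buckets */
   };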

Matus


From: vpp-dev@lists.fd.io  On Behalf Of Jon Loeliger
Sent: Wednesday, August 15, 2018 10:39 PM
To: vpp-dev 
Subject: [vpp-dev] Endless NAT Questions

Matus,

Should the nat_show_config_reply structure be augmented
to have the (newer) fields added to it?

I'm thinking specifically about:
- the nat64_* values
- the dpo selection,
- the dslite values,
- the endpoint-dependent indication.

Also, it looks like "deterministic" and "endpoint-dependent" are
mutually exclusive.   Is that correct?

Thanks,
jdl

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10178): https://lists.fd.io/g/vpp-dev/message/10178
Mute This Topic: https://lists.fd.io/mt/24537787/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] NAT Fragment Reassembly

2018-08-15 Thread Matus Fabian -X (matfabia - PANTHEON TECHNOLOGIES@Cisco) via Lists.Fd.Io
The max_frag value is applied when fragments arrive out of order (non-initial 
fragments arrive before the first fragment, which contains the L4 header); the 
fragments are stored while waiting for the first fragment, and max_frag is the 
limit on the number of stored fragments. Fragments are dropped in the 
nat44-in2out-reass or nat44-out2in-reass node. Whether fragments are dropped 
depends on arrival order. All fragments should be dropped when max_frag is 1 
and 2 non-initial fragments are received before the first fragment. After a 
brief look into the code I see that this is not the current behaviour (only 
the second fragment is dropped), so I think some improvements should be made 
in the future.
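
For reference, these knobs are carried by the nat_set_reass request in
nat.api (as I recall the message layout; double-check against your tree),
with the semantics described above:

   autoreply define nat_set_reass
   {
     u32 client_index;
     u32 context;
     u32 timeout;      /* reassembly timeout */
     u16 max_reass;    /* max number of concurrent reassemblies */
     u8 max_frag;      /* max non-initial fragments stored per reassembly */
     u8 drop_frag;     /* if set, drop fragments instead of translating */
     u8 is_ip6;        /* 0 = IPv4 virtual reassembly, 1 = IPv6 */
   };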

Matus


From: Jon Loeliger 
Sent: Wednesday, August 15, 2018 4:06 PM
To: Matus Fabian -X (matfabia - PANTHEON TECHNOLOGIES at Cisco) 

Cc: vpp-dev 
Subject: Re: [vpp-dev] NAT Fragment Reassembly

On Wed, Aug 15, 2018 at 8:50 AM, Jon Loeliger 
<j...@netgate.com> wrote:
On Wed, Aug 15, 2018 at 12:49 AM, Matus Fabian -X (matfabia - PANTHEON 
TECHNOLOGIES at Cisco) <matfa...@cisco.com> wrote:
Hi Jon,

NAT plugin does virtual fragment reassembly: it makes it possible to translate 
non-initial fragments, which lack the L4 header and from which NAT is otherwise 
unable to gather port information. The packet is still broken into several 
fragments after NAT translation.

Matus

Thanks, Matus!

I'm still trying to understand how part of the NAT virtual reassembly works.
When and how does the drop_frag count come into play?  For example,
if an original packet was broken into 3 fragments, and drop_frag was 1 or 2,

Naturally, I meant the "max_frag" values here.

should all three fragments get dropped?  And are they dropped on ingress
or egress?

Is there a packet trace flow where I can see them being dropped?  I ask
because it looks to me like these fragments are only sometimes dropped
when the drop_frag value is exceeded, and it also requires the

And "max_frag" there too.

ip_reassembly_enable_disable to be "on" too.

I've been doing a "trace add dpdk-input 500", sending my example packets
that need fragmentation, NAT-ing them, and then filtering the trace buffer.
What is the right node to use in the "filter" here?
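
(For reference, vlib's trace filter can be pointed at the reassembly nodes
named earlier in this thread, e.g.:

   vpp# trace add dpdk-input 500
   vpp# trace filter include nat44-in2out-reass 50
   vpp# show trace

The 500/50 counts here are illustrative.)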

Thanks,
jdl


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10177): https://lists.fd.io/g/vpp-dev/message/10177
Mute This Topic: https://lists.fd.io/mt/24529319/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] Endless NAT Questions

2018-08-15 Thread Jon Loeliger
Matus,

Should the nat_show_config_reply structure be augmented
to have the (newer) fields added to it?

I'm thinking specifically about:
- the nat64_* values
- the dpo selection,
- the dslite values,
- the endpoint-dependent indication.

Also, it looks like "deterministic" and "endpoint-dependent" are
mutually exclusive.   Is that correct?

Thanks,
jdl
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10176): https://lists.fd.io/g/vpp-dev/message/10176
Mute This Topic: https://lists.fd.io/mt/24537787/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] :: vppctl fails to start in Container (Centos 7.5.1804)

2018-08-15 Thread Billy
So I would need to see how you are starting your container, but I think you
are not mapping in hugepages. I do need to do more work in this area.
Currently, I start containers running VPP with --privileged. SELinux is
still enabled. For example:
   docker run -it --privileged --device=/dev/hugepages:/dev/hugepages centos

I was able to reproduce what you saw by running:
   docker run -it centos


Once the container was up, I ran the following in the container:
   yum install centos-release-fdio
   yum install vpp*

Then I ran on the host:
   docker exec  /usr/bin/vpp -c /etc/vpp/startup.conf
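
Putting those pieces together, a sketch of the working sequence (the
container name vpp1 is illustrative):

   # host: start a privileged container with hugepages mapped in
   docker run -itd --name vpp1 --privileged \
       --device=/dev/hugepages:/dev/hugepages centos

   # inside the container: install VPP from the fd.io repo
   yum install -y centos-release-fdio
   yum install -y vpp*

   # host: launch VPP in the container
   docker exec vpp1 /usr/bin/vpp -c /etc/vpp/startup.conf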


Billy

On Wed, Aug 15, 2018 at 8:42 AM,  wrote:

> VPP is installed only in the container.
>
>
>
> On 2018-08-15 16:50, Billy wrote:
>
> I'll take a look. So is VPP only installed in the container, or is also
> installed on the host?
>
> Billy McFall
>
> On Wed, Aug 15, 2018 at 6:00 AM,  wrote:
>
>> Thanks Ed, in centos installing vpp automatically installs
>> vpp-selinux-policy with it. So enforcing selinux on the host machine make
>> vpp work.
>>
>> However, when I try installing VPP in centos container there vpp doesn't
>> start. Can't enforce selinux in container and vpp-selinux-policy is
>> installed with vpp due to dependency. When I run vpp in the container
>>
>> $docker exec  /usr/bin/vpp -c /etc/vpp/startup.conf
>>
>> I get following errors
>>
>> tls_openssl_init:650: failed to initialize TLS CA chain
>>
>>
>> dpdk_config: mount failed 1
>>
>> Seems like an selinux issue or what? Could I get any help there? :)
>>
>> Best Regards,
>>
>> Omer
>>
>>
>> On 2018-08-13 22:43, Edward Warnicke wrote:
>>
>> We do have an se linux package that should in principle let you keep
>> working with se linux enforce
>> try
>>
>> yum install vpp-selinux-policy
>>
>> and see if that helps :)
>>
>> Ed
>>
>> On August 13, 2018 at 12:41:37 PM, omer.maj...@sofioni.com (
>> omer.maj...@sofioni.com) wrote:
>>
>>
>>
>> Thank Ed.
>>
>> Changed SELINUX=enforcing in /etc/selinux/config
>>
>> Restarted the machine, and it worked.
>>
>> Best Regards,
>>
>> Omer
>>
>>
>>
>> On 2018-08-13 22:21, Edward Warnicke wrote:
>>
>> This feels like SE Linux may be involved...
>>
>> Ed
>>
>>
>>
>> On August 13, 2018 at 12:17:09 PM, omer.maj...@sofioni.com (
>> omer.maj...@sofioni.com) wrote:
>>
>>
>>
>> Hi,
>>
>> I've built VPP on centos 7.5.1804, took the RPM packages to another
>> machine to deploy VPP there.
>>
>> After installing RPM packages there when I run $vppctl it gives me
>> following error.
>>
>> *clib_socket_init: connect (fd 3, '/run/vpp/cli.sock'): No such file or
>> directory*
>>
>>
>> I thought there might be something wrong with the build or something so
>> took packages from following repository
>>
>> http://a.centos.org/centos/7.5.1804/nfv/x86_64/fdio/vpp/vpp-1804/
>>
>> I get the same error when running vppctl after installation. Could
>> someone help with the error?
>>
>> Best Regards,
>>
>> Omer
>> -=-=-=-=-=-=-=-=-=-=-=-
>> Links: You receive all messages sent to this group.
>>
>> View/Reply Online (#10126): https://lists.fd.io/g/vpp-dev/message/10126
>> Mute This Topic: https://lists.fd.io/mt/24505099/464962
>> Group Owner: vpp-dev+ow...@lists.fd.io
>> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [hagb...@gmail.com]
>> -=-=-=-=-=-=-=-=-=-=-=-
>>
>>
>> -=-=-=-=-=-=-=-=-=-=-=-
>> Links: You receive all messages sent to this group.
>>
>> View/Reply Online (#10127): https://lists.fd.io/g/vpp-dev/message/10127
>> Mute This Topic: https://lists.fd.io/mt/24505099/984664
>> Group Owner: vpp-dev+ow...@lists.fd.io
>> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [
>> omer.maj...@sofioni.com]
>> -=-=-=-=-=-=-=-=-=-=-=-
>>
>>
>> -=-=-=-=-=-=-=-=-=-=-=-
>> Links: You receive all messages sent to this group.
>>
>> View/Reply Online (#10129): https://lists.fd.io/g/vpp-dev/message/10129
>> Mute This Topic: https://lists.fd.io/mt/24505099/984664
>> Group Owner: vpp-dev+ow...@lists.fd.io
>> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [
>> omer.maj...@sofioni.com]
>> -=-=-=-=-=-=-=-=-=-=-=-
>>
>>
>> -=-=-=-=-=-=-=-=-=-=-=-
>> Links: You receive all messages sent to this group.
>>
>> View/Reply Online (#10159): https://lists.fd.io/g/vpp-dev/message/10159
>> Mute This Topic: https://lists.fd.io/mt/24532686/675237
>> Group Owner: vpp-dev+ow...@lists.fd.io
>> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [bmcf...@redhat.com]
>> -=-=-=-=-=-=-=-=-=-=-=-
>>
>>
>
>
> --
> *Billy McFall*
> Networking Group
> CTO Office
> *Red Hat*
>
> -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
>
> View/Reply Online (#10160): https://lists.fd.io/g/vpp-dev/message/10160
> Mute This Topic: https://lists.fd.io/mt/24532686/984664
> Group Owner: vpp-dev+ow...@lists.fd.io
> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [omer.maj...@sofioni.com
> ]
> -=-=-=-=-=-=-=-=-=-=-=-
>
>


-- 
*Billy McFall*
Networking Group
CTO Office

*Red Hat*
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#1

Re: [vpp-dev] LACP link bonding issue

2018-08-15 Thread steven luong via Lists.Fd.Io
This configuration is not supported in VPP.
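
(The bonding shown elsewhere in this thread is VPP-native instead; a hedged
sketch of that configuration path, whose CLI spelling may vary by release:

   vpp# create bond mode lacp load-balance l23
   vpp# bond add BondEthernet0 GigabitEtherneta/0/0
   vpp# bond add BondEthernet0 GigabitEtherneta/0/1
   vpp# set interface state BondEthernet0 up

rather than a dpdk-section vdev.)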

Steven

From:  on behalf of Aleksander Djuric 

Date: Wednesday, August 15, 2018 at 12:33 AM
To: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] LACP link bonding issue

In addition, I have tried to configure LACP in the dpdk section of vpp 
startup.conf, and I've got the same output:

startup.conf:
unix {
   nodaemon
   log /var/log/vpp/vpp.log
   full-coredump
   cli-listen /run/vpp/cli.sock
   gid vpp
}

api-trace {
   on
}

api-segment {
   gid vpp
}

socksvr {
   default
}

dpdk {
   socket-mem 2048
   num-mbufs 131072

   dev 0000:0a:00.0
   dev 0000:0a:00.1
   dev 0000:0a:00.2
   dev 0000:0a:00.3

   vdev eth_bond0,mode=4,slave=0000:0a:00.0,slave=0000:0a:00.1,xmit_policy=l23
}

plugins {
   path /usr/lib/vpp_plugins
}

vpp# sh int
              Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter     Count
BondEthernet0                     5     down   9000/0/0/0
GigabitEtherneta/0/0              1  bond-slave 9000/0/0/0
GigabitEtherneta/0/1              2  bond-slave 9000/0/0/0
GigabitEtherneta/0/2              3     down   9000/0/0/0
GigabitEtherneta/0/3              4     down   9000/0/0/0
local0                            0     down   0/0/0/0
vpp# set interface ip address BondEthernet0 10.0.0.2/24
vpp# set interface state BondEthernet0 up
vpp# clear hardware
vpp# clear error
vpp# show hardware
              Name               Idx   Link  Hardware
BondEthernet0  5 up   Slave-Idx: 1 2
 Ethernet address 00:0b:ab:f4:bd:84
 Ethernet Bonding
   carrier up full duplex speed 2000 auto mtu 9202
   flags: admin-up pmd maybe-multiseg
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

GigabitEtherneta/0/0              1    slave GigabitEtherneta/0/0
 Ethernet address 00:0b:ab:f4:bd:84
 Intel e1000
   carrier up full duplex speed 1000 auto mtu 9202  promisc
   flags: pmd maybe-multiseg bond-slave bond-slave-up tx-offload 
intel-phdr-cksum
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

GigabitEtherneta/0/1              2    slave GigabitEtherneta/0/1
 Ethernet address 00:0b:ab:f4:bd:84
 Intel e1000
   carrier up full duplex speed 1000 auto mtu 9202  promisc
   flags: pmd maybe-multiseg bond-slave bond-slave-up tx-offload 
intel-phdr-cksum
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

GigabitEtherneta/0/2              3    down  GigabitEtherneta/0/2
 Ethernet address 00:0b:ab:f4:bd:86
 Intel e1000
   carrier down
   flags: pmd maybe-multiseg tx-offload intel-phdr-cksum
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

GigabitEtherneta/0/3              4    down  GigabitEtherneta/0/3
 Ethernet address 00:0b:ab:f4:bd:87
 Intel e1000
   carrier down
   flags: pmd maybe-multiseg tx-offload intel-phdr-cksum
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

local0                            0    down  local0
 local
vpp# show error
   Count                  Node                  Reason
vpp# trace add dpdk-input 50
vpp# show trace
--- Start of thread 0 vpp_main ---
No packets in trace buffer
vpp# ping 10.0.0.1

Statistics: 5 sent, 0 received, 100% packet loss
vpp# show trace
--- Start of thread 0 vpp_main ---
No packets in trace buffer

Thanks in advance for any help..

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10174): https://lists.fd.io/g/vpp-dev/message/10174
Mute This Topic: https://lists.fd.io/mt/24525535/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] LACP link bonding issue

2018-08-15 Thread steven luong via Lists.Fd.Io
Aleksander,

The problem is that the LACP periodic timer is not running, as shown in your output. I 
wonder if lacp-process was launched properly or got stuck. Could you please do 
show run and check on the health of lacp-process?
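
For example, something like:

   vpp# show runtime

and look for the lacp-process line: the process should be present (typically
in an event- or timer-wait state) with its calls/suspends counters advancing
rather than stuck.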

 periodic timer: not running

Steven

From:  on behalf of Aleksander Djuric 

Date: Wednesday, August 15, 2018 at 12:11 AM
To: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] LACP link bonding issue

Hi Steven,

Thanks much for the answer. Yes, these 2 boxes’ interfaces are connected back 
to back.
Both sides show the same diagnostic results; here is the output:

vpp# sh int
              Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter     Count
BondEthernet0                     5      up    9000/0/0/0
GigabitEtherneta/0/0              1      up    9000/0/0/0     tx-error          1
GigabitEtherneta/0/1              2      up    9000/0/0/0     tx-error          1
GigabitEtherneta/0/2              3     down   9000/0/0/0
GigabitEtherneta/0/3              4     down   9000/0/0/0
local0                            0     down   0/0/0/0        drops             2
vpp# clear hardware
vpp# clear error
vpp# clear hardware
vpp# clear error
vpp# ping 10.0.0.1

Statistics: 5 sent, 0 received, 100% packet loss
vpp# show hardware
              Name               Idx   Link  Hardware
BondEthernet0  5 up   BondEthernet0
 Ethernet address 00:0b:ab:f4:bd:84
GigabitEtherneta/0/0   1 up   GigabitEtherneta/0/0
 Ethernet address 00:0b:ab:f4:bd:84
 Intel e1000
   carrier up full duplex speed 1000 auto mtu 9202
   flags: admin-up pmd maybe-multiseg tx-offload intel-phdr-cksum
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

GigabitEtherneta/0/1   2 up   GigabitEtherneta/0/1
 Ethernet address 00:0b:ab:f4:bd:84
 Intel e1000
   carrier up full duplex speed 1000 auto mtu 9202
   flags: admin-up pmd maybe-multiseg tx-offload intel-phdr-cksum
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

GigabitEtherneta/0/2              3    down  GigabitEtherneta/0/2
 Ethernet address 00:0b:ab:f4:bd:86
 Intel e1000
   carrier down
   flags: pmd maybe-multiseg tx-offload intel-phdr-cksum
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

GigabitEtherneta/0/3              4    down  GigabitEtherneta/0/3
 Ethernet address 00:0b:ab:f4:bd:87
 Intel e1000
   carrier down
   flags: pmd maybe-multiseg tx-offload intel-phdr-cksum
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

local0                            0    down  local0
 local
vpp# show error
   Count                  Node                  Reason
       5                ip4-glean               ARP requests sent
       5             BondEthernet0-tx           no slave
vpp# show lacp details
 GigabitEtherneta/0/0
   debug: 0
   loopback port: 0
   port moved: 0
   ready_n: 0
   ready: 0
   Actor
 system: 00:0b:ab:f4:bd:84
 system priority: 65535
 key: 5
 port priority: 255
 port number: 1
 state: 0x7
   LACP_STATE_LACP_ACTIVITY (0)
   LACP_STATE_LACP_TIMEOUT (1)
   LACP_STATE_AGGREGATION (2)
   Partner
 system: 00:00:00:00:00:00
 system priority: 65535
 key: 5
 port priority: 255
 port number: 1
 state: 0x1
   LACP_STATE_LACP_ACTIVITY (0)
 wait while timer: not running
 current while timer: not running
 periodic timer: not running
   RX-state: EXPIRED
   TX-state: TRANSMIT
   MUX-state: DETACHED
   PTX-state: PERIODIC_TX

 GigabitEtherneta/0/1
   debug: 0
   loopback port: 0
   port moved: 0
   ready_n: 0
   ready: 0
   Actor
 system: 00:0b:ab:f4:bd:84
 system priority: 65535
 key: 5
 port priority: 255
 port number: 2
 state: 0x7
   LACP_STATE_LACP_ACTIVITY (0)
   LACP_STATE_LACP_TIMEOUT (1)
   LACP_STATE_AGGREGATION (2)
   Partner
 system: 00:00:00:00:00:00
 system priority: 65535
 key: 5
 port priority: 255
 port number: 2
 state: 0x1
   LACP_STATE_LACP_ACTIVITY (0)
 wait while timer: not running
 current while timer: not running
 periodic timer: not running
   RX-state: EXPIRED
   TX-state: TRANSMIT
   MUX-state: DETACHED
   PTX-state: PERIODIC_TX

vpp# trace add dpdk-input 50
vpp# show trace
--- Start of thread 0 vpp_main ---
No packets in trace buffer
vpp# ping 10.0.0.1

Statistics: 5 sent, 0 received, 100% packet loss
vpp# show trace
--- Start of thread 0 vpp_main ---
No packets in trace buffer

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10173): https://lists.fd.io/g/vpp-dev/message/10173
Mute This Topic: https://lists.fd.io/mt/24525535/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-

Re: [vpp-dev] Where could I find the example of vcl.conf

2018-08-15 Thread Florin Coras
Hi Yalei, 

You definitely need api-socket-name /run/vpp-api.sock, don’t comment that out 
:-). If that’s not enabled, you can’t exchange file descriptors. 
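
Concretely, that means re-enabling the two commented-out lines in the configs
quoted below so the pair matches (same as in my earlier example further down
this thread):

   # startup.conf
   socksvr { socket-name /run/vpp-api.sock }
   session { evt_qs_memfd_seg }

   # vcl.conf
   vcl {
     api-socket-name /run/vpp-api.sock
     use-mq-eventfd
   }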

Florin

> On Aug 15, 2018, at 5:43 AM, 汪亚雷  wrote:
> 
> looks like  APP not attach to the segment vpp created like /dev/shm/$pid
> 
> I think maybe need add the related ssvm_segment_attach in L125 in 
> vl_api_application_attach_reply_t_handler, not sure, will test it.
> 
> wylandrea <wyland...@gmail.com> wrote on Wednesday, August 15, 2018 at 12:10 PM:
> Thanks, Florin!  I tried, but unfortunately got segment fault like below(pull 
> from master today):
> 
> The segfault caused by the mq=0x204005440, the addr could not be refered in 
> APP side,  looks like  the address is alloc in segment_manager_init L293, 
> 
> I used the example vcl.conf you provided, just comments the line 
> "api-socket-name /run/vpp-api.sock".
> 
> ==
> 
> VCL<23469>: configured VCL debug level (4) from VCL_DEBUG!
> VCL<23469>: allocated VCL heap = 0x7fffe010, size 268435456 (0x10000000)
> VCL<23469>: configured app_scope_local (1)
> VCL<23469>: configured app_scope_global (1)
> VCL<23469>: configured with mq with eventfd
> VCL<23469>: completed parsing vppcom config!
> vppcom_connect_to_vpp:803: VCL<23469>: app (ldp-23469-app) connecting to VPP 
> api (/vpe-api)...
> [New Thread 0x7fffd700 (LWP 23474)]
> vppcom_connect_to_vpp:819: VCL<23469>: app (ldp-23469-app) is connected to 
> VPP!
> [New Thread 0x7fffdf7fe700 (LWP 23475)]
> vppcom_app_create:714: VCL<23469>: sending session enable
> vppcom_app_create:724: VCL<23469>: sending app attach
> 
> Program received signal SIGSEGV, Segmentation fault.
> [Switching to Thread 0x7fffd700 (LWP 23474)]
> 0x7510a403 in svm_msg_q_set_consumer_eventfd (mq=0x204005440, fd=0) 
> at /home/wenjiang/vpp/build-data/../src/svm/message_queue.c:242
> 242   mq->q->consumer_evtfd = fd;
> Missing separate debuginfos, use: debuginfo-install dpdk-18.05-1.x86_64 
> libgcc-4.8.5-28.el7_5.1.x86_64 libstdc++-4.8.5-28.el7_5.1.x86_64 
> numactl-libs-2.0.9-7.el7.x86_64
> (gdb) bt
> #0  0x7510a403 in svm_msg_q_set_consumer_eventfd (mq=0x204005440, 
> fd=0) at /home/wenjiang/vpp/build-data/../src/svm/message_queue.c:242
> #1  0x74fede84 in vl_api_application_attach_reply_t_handler 
> (mp=0x30066c40) at /home/wenjiang/vpp/build-data/../src/vcl/vcl_bapi.c:119
> #2  0x75111bba in msg_handler_internal (am=0x75360880 , 
> the_msg=0x30066c40, trace_it=0, do_it=1, free_it=1) at 
> /home/wenjiang/vpp/build-data/../src/vlibapi/api_shared.c:425
> #3  0x75111e1a in vl_msg_api_handler (the_msg=0x30066c40) at 
> /home/wenjiang/vpp/build-data/../src/vlibapi/api_shared.c:551
> #4  0x75113344 in vl_msg_api_queue_handler (q=0x30207ec0) at 
> /home/wenjiang/vpp/build-data/../src/vlibapi/api_shared.c:762
> #5  0x75117f4e in rx_thread_fn (arg=0x0) at 
> /home/wenjiang/vpp/build-data/../src/vlibmemory/memory_client.c:94
> #6  0x7763ce25 in start_thread (arg=0x7fffd700) at 
> pthread_create.c:308
> #7  0x7715ebad in clone () at 
> ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
> 
> 
> 
> startup.conf
> root@192.168.122.252:/home/wenjiang/dmm/release/bin (master) $ cat 
> ~/startup.conf
> unix {
>   #nodaemon
>   interactive
>   log /var/log/vpp/vpp.log
>   cli-listen localhost:5002
>   full-coredump
> }
> 
> api-trace {
>   on
> }
> 
> 
> dpdk {
>   socket-mem 1024
>   dev 0000:00:09.0
> }
> 
> session { evt_qs_memfd_seg  }
> #socksvr { socket-name /run/vpp-api.sock }
> 
> ==
> root@192.168.122.252:/home/wenjiang/dmm/release/bin (master) $ cat 
> /etc/vpp/vcl.conf
> vcl {
>   #rx-fifo-size 400
>   #tx-fifo-size 400
>   app-scope-local
>   app-scope-global
>   #api-socket-name /run/vpp-api.sock
>   use-mq-eventfd
> }
> 
> 
> 
> 
> 
> 
> Florin Coras <fcoras.li...@gmail.com> wrote on Tuesday, August 14, 2018 at 11:15 PM:
> Hi Yalei, 
> 
> You have an example of how to write a vcl.conf file in vcl/vcl_test.conf. 
> It’s just an example, so if you want to try out eventfd, here’s what I’ve 
> been recently using:
> 
> vcl {
>   rx-fifo-size 400
>   tx-fifo-size 400
>   app-scope-local
>   app-scope-global
>   api-socket-name /run/vpp-api.sock
>   use-mq-eventfd
> }
> 
> For this to work, vpp must come up with the binary api socket transport 
> configured and the session layer event queues must be allocated in a memfd 
> segment. So, add the following to your vpp startup conf:
> 
> socksvr { socket-name /run/vpp-api.sock }
> session { evt_qs_memfd_seg  }
> 
> Also, to have vcl read your config file, remember to do something like: 
> "export VCL_CONFIG=/path/to/your/file”. Finally, this is still very much 
> ongoing work so if you hit any issues, do let me know :-)
> 
> Hope this helps, 
> Florin
> 
> > On Aug 14, 2018, at 5:38 AM, 汪亚雷  > > wrote:
> > 
> > Hi Florin,
> > 
> > vppcom_cfg_

Re: [vpp-dev] VPP crashes with bvi interface

2018-08-15 Thread Aleksander Djuric
Hi Dave,

Thanks for quick reply.

I have updated VPP to v18.10-rc0~174-g6bd197eb

It works.
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10171): https://lists.fd.io/g/vpp-dev/message/10171
Mute This Topic: https://lists.fd.io/mt/24533887/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] :: vppctl fails to start in Container (Centos 7.5.1804)

2018-08-15 Thread Thomas F Herbert

Billy and I are looking into it right now.


On 08/15/2018 07:51 AM, Ed Warnicke wrote:

Tom,

Do you perhaps have more insight here?

Ed

On August 15, 2018 at 5:00:32 AM, omer.maj...@sofioni.com 
 (omer.maj...@sofioni.com 
) wrote:


Thanks Ed, in centos installing vpp automatically installs 
vpp-selinux-policy with it. So enforcing selinux on the host machine 
make vpp work.


However, when I try installing VPP in centos container there vpp 
doesn't start. Can't enforce selinux in container and 
vpp-selinux-policy is installed with vpp due to dependency. When I 
run vpp in the container


$docker exec  /usr/bin/vpp -c /etc/vpp/startup.conf

I get following errors

tls_openssl_init:650: failed to initialize TLS CA chain

dpdk_config: mount failed 1

Seems like an selinux issue or what? Could I get any help there? :)

Best Regards,

Omer


On 2018-08-13 22:43, Edward Warnicke wrote:

We do have an se linux package that should in principle let you keep 
working with se linux enforce

try
yum install vpp-selinux-policy

and see if that helps :)
Ed

On August 13, 2018 at 12:41:37 PM, omer.maj...@sofioni.com 
 (omer.maj...@sofioni.com 
) wrote:


Thank Ed.

Changed SELINUX=enforcing in /etc/selinux/config

Restarted the machine, and it worked.

Best Regards,

Omer


On 2018-08-13 22:21, Edward Warnicke wrote:

This feels like SE Linux may be involved...
Ed


On August 13, 2018 at 12:17:09 PM, omer.maj...@sofioni.com
 (omer.maj...@sofioni.com
) wrote:

Hi,

I've built VPP on centos 7.5.1804, took the RPM packages
to another machine to deploy VPP there.

After installing RPM packages there when I run $vppctl
it gives me following error.

*clib_socket_init: connect (fd 3, '/run/vpp/cli.sock'):
No such file or directory*


I thought there might be something wrong with the build
or something so took packages from following repository

http://a.centos.org/centos/7.5.1804/nfv/x86_64/fdio/vpp/vpp-1804/

I get the same error when running vppctl after
installation. Could someone help with the error?

Best Regards,

Omer

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10126):
https://lists.fd.io/g/vpp-dev/message/10126
Mute This Topic: https://lists.fd.io/mt/24505099/464962
Group Owner: vpp-dev+ow...@lists.fd.io

Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub
[hagb...@gmail.com ]
-=-=-=-=-=-=-=-=-=-=-=-


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10127):
https://lists.fd.io/g/vpp-dev/message/10127
Mute This Topic: https://lists.fd.io/mt/24505099/984664
Group Owner: vpp-dev+ow...@lists.fd.io

Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub
 [omer.maj...@sofioni.com ]
-=-=-=-=-=-=-=-=-=-=-=-


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10129): https://lists.fd.io/g/vpp-dev/message/10129
Mute This Topic: https://lists.fd.io/mt/24505099/984664
Group Owner: vpp-dev+ow...@lists.fd.io 

Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub 
 [omer.maj...@sofioni.com ]

-=-=-=-=-=-=-=-=-=-=-=-


--
*Thomas F Herbert*
NFV and Fast Data Planes
Networking Group Office of the CTO
*Red Hat*
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10170): https://lists.fd.io/g/vpp-dev/message/10170
Mute This Topic: https://lists.fd.io/mt/24532686/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] NAT Fragment Reassembly

2018-08-15 Thread Jon Loeliger
On Wed, Aug 15, 2018 at 8:50 AM, Jon Loeliger  wrote:

> On Wed, Aug 15, 2018 at 12:49 AM, Matus Fabian -X (matfabia - PANTHEON
> TECHNOLOGIES at Cisco)  wrote:
>
>> Hi Jon,
>>
>>
>>
>> NAT plugin does virtual fragment reassembly – it enables to translate
>> non-initial fragments without L4 header otherwise NAT is unable to gather
>> port information from the non-initial fragment, packet is still broken into
>> several fragments after NAT translation.
>>
>>
>>
>> Matus
>>
>
> Thanks, Matus!
>
> I'm trying to understand how part of the NAT virtual reassembly works
> still.
> When and how does the drop_frag count come into play?  For example,
> if an original packet was broken into 3 fragments, and drop_frag was 1 or
> 2,
>

Naturally, I meant the "max_frag" values here.


> should all three fragments get dropped?  And are they dropped on ingress
> or egress?
>
> Is there a packet trace flow where I can see them being dropped?  I ask
> because it looks to me like these fragments are only sometimes dropped
> when the drop_frag value is exceeded, and it also requires the
>

And "max_frag" there too.


> ip_reassembly_enable_disable to be "on" too.
>
> I've been doing a "trace add dpdk-input 500", sending my example packets
> that need fragmentation, NAT-ing them, and then filtering the trace buffer.
> What is the right node to use in the "filter" here?
>
> Thanks,
> jdl
>
>
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10169): https://lists.fd.io/g/vpp-dev/message/10169
Mute This Topic: https://lists.fd.io/mt/24529319/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] NAT Fragment Reassembly

2018-08-15 Thread Jon Loeliger
On Wed, Aug 15, 2018 at 2:30 AM, Ole Troan  wrote:

> Jon,
>
> Thanks for bringing this up. In addition to Matus’ answer.
>

Hi Ole,

Thanks!


> There is a distinction to be made between forwarding and terminating
> traffic.
> And there is a nice grey middle ground between the two.
>
> Some features do forwarding based on the transport header, like NAT, MAP-E,
> MAP-T and so on; those do not require reassembling the fragment chain, and
> do forward fragments in flight, aka virtual reassembly.
>

Right.


> Tunnels with outer fragmentation require full reassembly (the packets are
> addressed to the node itself) before forwarding.
>
> But you could also argue that there are features like ACL, firewalls,
> legal intercept whatnot that would benefit from doing a full reassembly
> while forwarding.
>
> a) Virtual reassembly
> b) Full reassembly for terminating traffic (for-us / host)
> c) Full reassembly for forwarding traffic for specific features requiring
> that
>
> From a quick glance it seems like the current reassembly feature is doing
> c. And doing it without any level of granularity.
> Meaning that if you need outer reassembly for an IP in IP tunnel, you’d
> suddenly also reassemble all IP traffic.


And GRE?


> Which is unwanted and costly.
> That should be easy to fix. Klement?
>

I've not gotten to any of the IP-in-IP-like tunneling in my examples yet,
so that is a future problem. :-)  But hey, if we can fix it before we get
to it,
that always works! :-)



> So you are right if you combine the current reassembly (c) with NAT, NAT
> does not deal with the fragments.


So how does NAT's fragmentation handle the parameters of the
API_NAT_SET_REASS API call, specifically the max_frag value?
It has to track all the fragments of one (original) packet; does it then
drop all the fragments if their number exceeds the max_frag value?


> But at a much higher cost than virtual reassembly of course. I propose we
> move default reassembly to b instead of c, and that it’s only done in the c
> case for features that require it.
>
> Cheers,
> Ole


Thanks,
jdl
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10168): https://lists.fd.io/g/vpp-dev/message/10168
Mute This Topic: https://lists.fd.io/mt/24529319/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] NAT Fragment Reassembly

2018-08-15 Thread Jon Loeliger
On Wed, Aug 15, 2018 at 12:49 AM, Matus Fabian -X (matfabia - PANTHEON
TECHNOLOGIES at Cisco)  wrote:

> Hi Jon,
>
>
>
> NAT plugin does virtual fragment reassembly – it enables to translate
> non-initial fragments without L4 header otherwise NAT is unable to gather
> port information from the non-initial fragment, packet is still broken into
> several fragments after NAT translation.
>
>
>
> Matus
>

Thanks, Matus!

I'm still trying to understand how part of the NAT virtual reassembly works.
When and how does the drop_frag count come into play?  For example,
if an original packet was broken into 3 fragments, and drop_frag was 1 or 2,
should all three fragments get dropped?  And are they dropped on ingress
or egress?

Is there a packet trace flow where I can see them being dropped?  I ask
because it looks to me like these fragments are only sometimes dropped
when the drop_frag value is exceeded, and it also requires the
ip_reassembly_enable_disable to be "on" too.

I've been doing a "trace add dpdk-input 500", sending my example packets
that need fragmentation, NAT-ing them, and then filtering the trace buffer.
What is the right node to use in the "filter" here?

Thanks,
jdl
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10167): https://lists.fd.io/g/vpp-dev/message/10167
Mute This Topic: https://lists.fd.io/mt/24529319/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VPP crashes with bvi interface

2018-08-15 Thread Dave Barach via Lists.Fd.Io
Almost no matter what, vpp shouldn’t crash. Please at least share the backtrace. See 
https://wiki.fd.io/view/VPP/BugReports and also 
https://wiki.fd.io/view/VPP/VPP_Home_Gateway for a known-to-work similar BVI / 
IRB configuration.
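
(For reference, a minimal way to pull that backtrace from a core file,
assuming full-coredump is enabled as the BugReports page describes:

   $ sudo gdb /usr/bin/vpp /path/to/core
   (gdb) bt full

The binary and core paths are illustrative.)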

D.

From: vpp-dev@lists.fd.io  On Behalf Of Aleksander Djuric
Sent: Wednesday, August 15, 2018 9:18 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] VPP crashes with bvi interface

Hi all,

For test purposes I've installed VPP (v18.10-rc0) on the first host machine with 4 
lan adapters on it. I am trying to set up a BVI loopback interface with an ip address to 
emulate a network switch. The second host, with ip address 
192.168.0.2/24, is connected to GigabitEthernet1/0/0. 
Interface GigabitEthernet1/0/1 is connected to a router. My config for VPP is 
below:

set int state GigabitEthernet1/0/0 up
set int state GigabitEthernet1/0/1 up
set int l2 bridge GigabitEthernet1/0/0 1
set int l2 bridge GigabitEthernet1/0/1 1
loopback create
set int state loop0 up
set int l2 bridge loop0 1 bvi
set int ip address loop0 192.168.0.1/24

VPP crashes a few seconds after setting loop0 as bvi.

Is it a bug or is it a feature?
Please help me correct the configuration if I'm wrong.

Thanks in advance,
Aleksander

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10166): https://lists.fd.io/g/vpp-dev/message/10166
Mute This Topic: https://lists.fd.io/mt/24533887/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] VPP crashes with bvi interface

2018-08-15 Thread Aleksander Djuric
Hi all,

For test purposes I've installed VPP (v18.10-rc0) on the first host machine with
4 lan adapters on it. I am trying to set up a BVI loopback interface with an ip
address to emulate a network switch. The second host, with ip address 192.168.0.2/24,
is connected to GigabitEthernet1/0/0. Interface GigabitEthernet1/0/1 is
connected to a router. My config for VPP is below:

set int state GigabitEthernet1/0/0 up
set int state GigabitEthernet1/0/1 up
set int l2 bridge GigabitEthernet1/0/0 1
set int l2 bridge GigabitEthernet1/0/1 1
loopback create
set int state loop0 up
set int l2 bridge loop0 1 bvi
set int ip address loop0 192.168.0.1/24

VPP crashes a few seconds after setting loop0 as bvi.

Is it a bug or is it a feature?
Please help me correct the configuration if I'm wrong.

Thanks in advance,
Aleksander
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10165): https://lists.fd.io/g/vpp-dev/message/10165
Mute This Topic: https://lists.fd.io/mt/24533887/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VPP and mlx4 issue

2018-08-15 Thread Edward Warnicke
Brandon,
Looping in Amir from Mellanox who is trying to get to the bottom of these
kinds of issues.

Ed

On August 14, 2018 at 10:05:35 AM, Brandon Kopinski (bkopinsk...@gmail.com)
wrote:

Hi,

I have been trying to use VPP with the mlx4 pmd.  However, VPP is not
processing any packets.  My current setup includes two machines that are
directly connected.  One machine is running VPP with the mlx4 pmd.  The
other machine is using the Linux kernel for networking. I assigned an ip
address (10.0.1.1/24) to the VPP interface and brought the interface up. I
assigned (10.0.1.2/24) to the interface on the other machine and brought it
up. Pinging the VPP interface from the other machine is unsuccessful. Also,
the 'show int' command displays 'dpdk tx failure' after pinging from VPP.
I was able to verify with a debugger that the RX queues are being polled,
but there are no packets available to process.  I enabled tracing on the
dpdk input node, but no packets end up being traced.  I'm not sure if this
is just a configuration issue or some other problem.

Here are the VPP and DPDK versions I am currently using. Any help will be
greatly appreciated.

Version:  v18.10-rc0~43-g631de0d
Compiled by:  bkopinski
Compile host: localhost.localdomain
Compile date: Wed Aug  1 14:13:41 EDT 2018
Compile location: /home/bkopinski/sw/vpp
Compiler: GCC 4.8.5 20150623 (Red Hat 4.8.5-28)
Current PID:  7355

DPDK Version: DPDK 18.05.0
DPDK EAL init args:   -c 2 -n 4 --huge-dir /run/vpp/hugepages
--file-prefix vpp -w 0000:09:00.0 --master-lcore 1 --socket-mem 1024


Sincerely,
Brandon Kopinski


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10146): https://lists.fd.io/g/vpp-dev/message/10146
Mute This Topic: https://lists.fd.io/mt/24525868/464962
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [hagb...@gmail.com]
-=-=-=-=-=-=-=-=-=-=-=-
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10164): https://lists.fd.io/g/vpp-dev/message/10164
Mute This Topic: https://lists.fd.io/mt/24525868/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Where could I find the example of vcl.conf

2018-08-15 Thread wylandrea
It looks like the APP does not attach to the segment vpp created (like /dev/shm/$pid).

I think we may need to add the related ssvm_segment_attach at L125
in vl_api_application_attach_reply_t_handler; not sure, I will test it.

wylandrea wrote on Wednesday, August 15, 2018 at 12:10 PM:

> Thanks, Florin!  I tried, but unfortunately got a segmentation fault like
> below (pulled from master today):
>
> The segfault is caused by mq=0x204005440; the address cannot be referenced
> on the APP side. It looks like the address is allocated in segment_manager_init
> L293,
>
> I used the example vcl.conf you provided, just comments the line
> "api-socket-name /run/vpp-api.sock".
>
> ==
>
> VCL<23469>: configured VCL debug level (4) from VCL_DEBUG!
> VCL<23469>: allocated VCL heap = 0x7fffe010, size 268435456
> (0x10000000)
> VCL<23469>: configured app_scope_local (1)
> VCL<23469>: configured app_scope_global (1)
> VCL<23469>: configured with mq with eventfd
> VCL<23469>: completed parsing vppcom config!
> vppcom_connect_to_vpp:803: VCL<23469>: app (ldp-23469-app) connecting to
> VPP api (/vpe-api)...
> [New Thread 0x7fffd700 (LWP 23474)]
> vppcom_connect_to_vpp:819: VCL<23469>: app (ldp-23469-app) is connected to
> VPP!
> [New Thread 0x7fffdf7fe700 (LWP 23475)]
> vppcom_app_create:714: VCL<23469>: sending session enable
> vppcom_app_create:724: VCL<23469>: sending app attach
>
> Program received signal SIGSEGV, Segmentation fault.
> [Switching to Thread 0x7fffd700 (LWP 23474)]
> 0x7510a403 in svm_msg_q_set_consumer_eventfd (mq=0x204005440,
> fd=0) at /home/wenjiang/vpp/build-data/../src/svm/message_queue.c:242
> 242   mq->q->consumer_evtfd = fd;
> Missing separate debuginfos, use: debuginfo-install dpdk-18.05-1.x86_64
> libgcc-4.8.5-28.el7_5.1.x86_64 libstdc++-4.8.5-28.el7_5.1.x86_64
> numactl-libs-2.0.9-7.el7.x86_64
> (gdb) bt
> #0  0x7510a403 in svm_msg_q_set_consumer_eventfd (mq=0x204005440,
> fd=0) at /home/wenjiang/vpp/build-data/../src/svm/message_queue.c:242
> #1  0x74fede84 in vl_api_application_attach_reply_t_handler
> (mp=0x30066c40) at /home/wenjiang/vpp/build-data/../src/vcl/vcl_bapi.c:119
> #2  0x75111bba in msg_handler_internal (am=0x75360880
> , the_msg=0x30066c40, trace_it=0, do_it=1, free_it=1) at
> /home/wenjiang/vpp/build-data/../src/vlibapi/api_shared.c:425
> #3  0x75111e1a in vl_msg_api_handler (the_msg=0x30066c40) at
> /home/wenjiang/vpp/build-data/../src/vlibapi/api_shared.c:551
> #4  0x75113344 in vl_msg_api_queue_handler (q=0x30207ec0) at
> /home/wenjiang/vpp/build-data/../src/vlibapi/api_shared.c:762
> #5  0x75117f4e in rx_thread_fn (arg=0x0) at
> /home/wenjiang/vpp/build-data/../src/vlibmemory/memory_client.c:94
> #6  0x7763ce25 in start_thread (arg=0x7fffd700) at
> pthread_create.c:308
> #7  0x7715ebad in clone () at
> ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
>
> 
>
> startup.conf
> root@192.168.122.252:/home/wenjiang/dmm/release/bin (master) $ cat
> ~/startup.conf
> unix {
>   #nodaemon
>   interactive
>   log /var/log/vpp/vpp.log
>   cli-listen localhost:5002
>   full-coredump
> }
>
> api-trace {
>   on
> }
>
>
> dpdk {
>   socket-mem 1024
>   dev 0000:00:09.0
> }
>
> session { evt_qs_memfd_seg  }
> #socksvr { socket-name /run/vpp-api.sock }
>
> ==
> root@192.168.122.252:/home/wenjiang/dmm/release/bin (master) $ cat
> /etc/vpp/vcl.conf
> vcl {
>   #rx-fifo-size 400
>   #tx-fifo-size 400
>   app-scope-local
>   app-scope-global
>   #api-socket-name /run/vpp-api.sock
>   use-mq-eventfd
> }
>
>
>
>
>
>
Florin Coras wrote on Tuesday, August 14, 2018 at 11:15 PM:
>
>> Hi Yalei,
>>
>> You have an example of how to write a vcl.conf file in vcl/vcl_test.conf.
>> It’s just an example, so if you want to try out eventfd, here’s what I’ve
>> been recently using:
>>
>> vcl {
>>   rx-fifo-size 400
>>   tx-fifo-size 400
>>   app-scope-local
>>   app-scope-global
>>   api-socket-name /run/vpp-api.sock
>>   use-mq-eventfd
>> }
>>
>> For this to work, vpp must come up with the binary api socket transport
>> configured and the session layer event queues must be allocated in a memfd
>> segment. So, add the following to your vpp startup conf:
>>
>> socksvr { socket-name /run/vpp-api.sock }
>> session { evt_qs_memfd_seg  }
>>
>> Also, to have vcl read your config file, remember to do something like:
>> "export VCL_CONFIG=/path/to/your/file”. Finally, this is still very much
>> ongoing work so if you hit any issues, do let me know :-)
>>
>> Hope this helps,
>> Florin
>>
>> > On Aug 14, 2018, at 5:38 AM, 汪亚雷  wrote:
>> >
>> > Hi Florin,
>> >
>> > vppcom_cfg_read_file will try to parse the vcl.conf, but where could I
>> get the example?
>> >
>> > actually I want to have a try "use-mq-eventfd"
>> >
>> > Thanks!
>> >
>> > /yalei
>>
>> -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
>
> View/Reply Online (#10153): https://lists.fd.io/g/vpp-dev/message/10153
> Mu

Re: [vpp-dev] :: vppctl fails to start in Container (Centos 7.5.1804)

2018-08-15 Thread omer . majeed
VPP is installed only in the container.

On 2018-08-15 16:50, Billy wrote:

> I'll take a look. So is VPP only installed in the container, or is also 
> installed on the host? 
> 
> Billy McFall 
> 
> On Wed, Aug 15, 2018 at 6:00 AM,  wrote:
> 
> Thanks Ed, in centos installing vpp automatically installs vpp-selinux-policy 
> with it. So enforcing selinux on the host machine make vpp work. 
> 
> However, when I try installing VPP in centos container there vpp doesn't 
> start. Can't enforce selinux in container and vpp-selinux-policy is installed 
> with vpp due to dependency. When I run vpp in the container 
> 
> $docker exec  /usr/bin/vpp -c /etc/vpp/startup.conf 
> 
> I get following errors 
> 
> tls_openssl_init:650: failed to initialize TLS CA chain
> 
> dpdk_config: mount failed 1 
> 
> Seems like an selinux issue or what? Could I get any help there? :) 
> 
> Best Regards, 
> 
> Omer 
> 
> On 2018-08-13 22:43, Edward Warnicke wrote: 
> We do have an se linux package that should in principle let you keep working 
> with se linux enforce 
> try 
> 
> yum install vpp-selinux-policy 
> and see if that helps :) 
> 
> Ed 
> On August 13, 2018 at 12:41:37 PM, omer.maj...@sofioni.com 
> (omer.maj...@sofioni.com) wrote: 
> 
> Thank Ed. 
> 
> Changed SELINUX=enforcing in /etc/selinux/config 
> 
> Restarted the machine, and it worked. 
> 
> Best Regards, 
> 
> Omer
> 
> On 2018-08-13 22:21, Edward Warnicke wrote: 
> This feels like SE Linux may be involved...  
> 
> Ed 
> 
> On August 13, 2018 at 12:17:09 PM, omer.maj...@sofioni.com 
> (omer.maj...@sofioni.com) wrote: 
> 
> Hi, 
> 
> I've built VPP on centos 7.5.1804, took the RPM packages to another machine 
> to deploy VPP there. 
> 
> After installing RPM packages there when I run $vppctl it gives me following 
> error. 
> 
> clib_socket_init: connect (fd 3, '/run/vpp/cli.sock'): No such file or 
> directory 
> 
> I thought there might be something wrong with the build or something so took 
> packages from following repository 
> 
> http://a.centos.org/centos/7.5.1804/nfv/x86_64/fdio/vpp/vpp-1804/ [1] 
> 
> I get the same error when running vppctl after installation. Could someone 
> help with the error? 
> 
> Best Regards, 
> 
> Omer -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
> 
> View/Reply Online (#10126): https://lists.fd.io/g/vpp-dev/message/10126 [2]
> Mute This Topic: https://lists.fd.io/mt/24505099/464962 [3]
> Group Owner: vpp-dev+ow...@lists.fd.io
> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [4] [hagb...@gmail.com]
> -=-=-=-=-=-=-=-=-=-=-=- 
> -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
> 
> View/Reply Online (#10127): https://lists.fd.io/g/vpp-dev/message/10127 [5]
> Mute This Topic: https://lists.fd.io/mt/24505099/984664 [6]
> Group Owner: vpp-dev+ow...@lists.fd.io
> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [4]  
> [omer.maj...@sofioni.com]
> -=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10129): https://lists.fd.io/g/vpp-dev/message/10129
[7]
Mute This Topic: https://lists.fd.io/mt/24505099/984664 [6]
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [4] 
[omer.maj...@sofioni.com]
-=-=-=-=-=-=-=-=-=-=-=- 
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10159): https://lists.fd.io/g/vpp-dev/message/10159
[8]
Mute This Topic: https://lists.fd.io/mt/24532686/675237 [9]
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [4] 
[bmcf...@redhat.com]
-=-=-=-=-=-=-=-=-=-=-=-

  -- 

Billy McFall 
Networking Group 
CTO Office
Red Hat 
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10160): https://lists.fd.io/g/vpp-dev/message/10160
Mute This Topic: https://lists.fd.io/mt/24532686/984664
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub 
[omer.maj...@sofioni.com]
-=-=-=-=-=-=-=-=-=-=-=- 

Links:
--
[1] http://a.centos.org/centos/7.5.1804/nfv/x86_64/fdio/vpp/vpp-1804/
[2] https://lists.fd.io/g/vpp-dev/message/10126
[3] https://lists.fd.io/mt/24505099/464962
[4] https://lists.fd.io/g/vpp-dev/unsub
[5] https://lists.fd.io/g/vpp-dev/message/10127
[6] https://lists.fd.io/mt/24505099/984664
[7] https://lists.fd.io/g/vpp-dev/message/10129
[8] https://lists.fd.io/g/vpp-dev/message/10159
[9] https://lists.fd.io/mt/24532686/675237
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10162): https://lists.fd.io/g/vpp-dev/message/10162
Mute This Topic: https://lists.fd.io/mt/24532686/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] :: vppctl fails to start in Container (Centos 7.5.1804)

2018-08-15 Thread Edward Warnicke
Tom,

Do you perhaps have more insight here?

Ed

On August 15, 2018 at 5:00:32 AM, omer.maj...@sofioni.com (
omer.maj...@sofioni.com) wrote:

Thanks Ed, in centos installing vpp automatically installs
vpp-selinux-policy with it. So enforcing selinux on the host machine make
vpp work.

However, when I try installing VPP in centos container there vpp doesn't
start. Can't enforce selinux in container and vpp-selinux-policy is
installed with vpp due to dependency. When I run vpp in the container

$docker exec  /usr/bin/vpp -c /etc/vpp/startup.conf

I get following errors

tls_openssl_init:650: failed to initialize TLS CA chain


dpdk_config: mount failed 1

Seems like an selinux issue or what? Could I get any help there? :)

Best Regards,

Omer


On 2018-08-13 22:43, Edward Warnicke wrote:

We do have an se linux package that should in principle let you keep
working with se linux enforce
try

yum install vpp-selinux-policy

and see if that helps :)

Ed

On August 13, 2018 at 12:41:37 PM, omer.maj...@sofioni.com (
omer.maj...@sofioni.com) wrote:



Thank Ed.

Changed SELINUX=enforcing in /etc/selinux/config

Restarted the machine, and it worked.

Best Regards,

Omer



On 2018-08-13 22:21, Edward Warnicke wrote:

This feels like SE Linux may be involved...

Ed



On August 13, 2018 at 12:17:09 PM, omer.maj...@sofioni.com (
omer.maj...@sofioni.com) wrote:



Hi,

I've built VPP on centos 7.5.1804, took the RPM packages to another machine
to deploy VPP there.

After installing RPM packages there when I run $vppctl it gives me
following error.

*clib_socket_init: connect (fd 3, '/run/vpp/cli.sock'): No such file or
directory*


I thought there might be something wrong with the build or something so
took packages from following repository

http://a.centos.org/centos/7.5.1804/nfv/x86_64/fdio/vpp/vpp-1804/

I get the same error when running vppctl after installation. Could someone
help with the error?

Best Regards,

Omer
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10126): https://lists.fd.io/g/vpp-dev/message/10126
Mute This Topic: https://lists.fd.io/mt/24505099/464962
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [hagb...@gmail.com]
-=-=-=-=-=-=-=-=-=-=-=-


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10127): https://lists.fd.io/g/vpp-dev/message/10127
Mute This Topic: https://lists.fd.io/mt/24505099/984664
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [omer.maj...@sofioni.com]
-=-=-=-=-=-=-=-=-=-=-=-


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10129): https://lists.fd.io/g/vpp-dev/message/10129
Mute This Topic: https://lists.fd.io/mt/24505099/984664
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [omer.maj...@sofioni.com]
-=-=-=-=-=-=-=-=-=-=-=-
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10161): https://lists.fd.io/g/vpp-dev/message/10161
Mute This Topic: https://lists.fd.io/mt/24532686/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] :: vppctl fails to start in Container (Centos 7.5.1804)

2018-08-15 Thread Billy
I'll take a look. So is VPP only installed in the container, or is it also
installed on the host?

Billy McFall

On Wed, Aug 15, 2018 at 6:00 AM,  wrote:

> Thanks Ed, in centos installing vpp automatically installs
> vpp-selinux-policy with it. So enforcing selinux on the host machine make
> vpp work.
>
> However, when I try installing VPP in centos container there vpp doesn't
> start. Can't enforce selinux in container and vpp-selinux-policy is
> installed with vpp due to dependency. When I run vpp in the container
>
> $docker exec  /usr/bin/vpp -c /etc/vpp/startup.conf
>
> I get following errors
>
> tls_openssl_init:650: failed to initialize TLS CA chain
>
>
> dpdk_config: mount failed 1
>
> Seems like an selinux issue or what? Could I get any help there? :)
>
> Best Regards,
>
> Omer
>
>
> On 2018-08-13 22:43, Edward Warnicke wrote:
>
> We do have an se linux package that should in principle let you keep
> working with se linux enforce
> try
>
> yum install vpp-selinux-policy
>
> and see if that helps :)
>
> Ed
>
> On August 13, 2018 at 12:41:37 PM, omer.maj...@sofioni.com (
> omer.maj...@sofioni.com) wrote:
>
>
>
> Thank Ed.
>
> Changed SELINUX=enforcing in /etc/selinux/config
>
> Restarted the machine, and it worked.
>
> Best Regards,
>
> Omer
>
>
>
> On 2018-08-13 22:21, Edward Warnicke wrote:
>
> This feels like SE Linux may be involved...
>
> Ed
>
>
>
> On August 13, 2018 at 12:17:09 PM, omer.maj...@sofioni.com (
> omer.maj...@sofioni.com) wrote:
>
>
>
> Hi,
>
> I've built VPP on centos 7.5.1804, took the RPM packages to another
> machine to deploy VPP there.
>
> After installing RPM packages there when I run $vppctl it gives me
> following error.
>
> *clib_socket_init: connect (fd 3, '/run/vpp/cli.sock'): No such file or
> directory*
>
>
> I thought there might be something wrong with the build or something so
> took packages from following repository
>
> http://a.centos.org/centos/7.5.1804/nfv/x86_64/fdio/vpp/vpp-1804/
>
> I get the same error when running vppctl after installation. Could someone
> help with the error?
>
> Best Regards,
>
> Omer
> -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
>
> View/Reply Online (#10126): https://lists.fd.io/g/vpp-dev/message/10126
> Mute This Topic: https://lists.fd.io/mt/24505099/464962
> Group Owner: vpp-dev+ow...@lists.fd.io
> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [hagb...@gmail.com]
> -=-=-=-=-=-=-=-=-=-=-=-
>
>
> -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
>
> View/Reply Online (#10127): https://lists.fd.io/g/vpp-dev/message/10127
> Mute This Topic: https://lists.fd.io/mt/24505099/984664
> Group Owner: vpp-dev+ow...@lists.fd.io
> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [omer.maj...@sofioni.com
> ]
> -=-=-=-=-=-=-=-=-=-=-=-
>
>
> -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
>
> View/Reply Online (#10129): https://lists.fd.io/g/vpp-dev/message/10129
> Mute This Topic: https://lists.fd.io/mt/24505099/984664
> Group Owner: vpp-dev+ow...@lists.fd.io
> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [omer.maj...@sofioni.com
> ]
> -=-=-=-=-=-=-=-=-=-=-=-
>
>
> -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
>
> View/Reply Online (#10159): https://lists.fd.io/g/vpp-dev/message/10159
> Mute This Topic: https://lists.fd.io/mt/24532686/675237
> Group Owner: vpp-dev+ow...@lists.fd.io
> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [bmcf...@redhat.com]
> -=-=-=-=-=-=-=-=-=-=-=-
>
>


-- 
*Billy McFall*
Networking Group
CTO Office
*Red Hat*
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10160): https://lists.fd.io/g/vpp-dev/message/10160
Mute This Topic: https://lists.fd.io/mt/24532686/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] :: vppctl fails to start in Container (Centos 7.5.1804)

2018-08-15 Thread omer . majeed
Thanks Ed, on CentOS installing vpp automatically installs
vpp-selinux-policy with it. So enforcing selinux on the host machine
makes vpp work. 

However, when I try installing VPP in a CentOS container, vpp doesn't
start. SELinux can't be enforced in the container, and vpp-selinux-policy is
installed with vpp due to the dependency. When I run vpp in the container 

$docker exec  /usr/bin/vpp -c /etc/vpp/startup.conf 

I get following errors 

tls_openssl_init:650: failed to initialize TLS CA chain

dpdk_config: mount failed 1 

Seems like an SELinux issue, or something else? Could I get any help here? :) 

Best Regards, 

Omer 

On 2018-08-13 22:43, Edward Warnicke wrote:

> We do have an se linux package that should in principle let you keep working 
> with se linux enforce 
> try 
> 
> yum install vpp-selinux-policy 
> and see if that helps :) 
> 
> Ed 
> On August 13, 2018 at 12:41:37 PM, omer.maj...@sofioni.com 
> (omer.maj...@sofioni.com) wrote: 
> 
> Thank Ed. 
> 
> Changed SELINUX=enforcing in /etc/selinux/config 
> 
> Restarted the machine, and it worked. 
> 
> Best Regards, 
> 
> Omer
> 
> On 2018-08-13 22:21, Edward Warnicke wrote: 
> This feels like SE Linux may be involved...  
> 
> Ed 
> 
> On August 13, 2018 at 12:17:09 PM, omer.maj...@sofioni.com 
> (omer.maj...@sofioni.com) wrote: 
> 
> Hi, 
> 
> I've built VPP on centos 7.5.1804, took the RPM packages to another machine 
> to deploy VPP there. 
> 
> After installing RPM packages there when I run $vppctl it gives me following 
> error. 
> 
> clib_socket_init: connect (fd 3, '/run/vpp/cli.sock'): No such file or 
> directory 
> 
> I thought there might be something wrong with the build or something so took 
> packages from following repository 
> 
> http://a.centos.org/centos/7.5.1804/nfv/x86_64/fdio/vpp/vpp-1804/ 
> 
> I get the same error when running vppctl after installation. Could someone 
> help with the error? 
> 
> Best Regards, 
> 
> Omer

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10159): https://lists.fd.io/g/vpp-dev/message/10159
Mute This Topic: https://lists.fd.io/mt/24532686/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] how to get packet headers

2018-08-15 Thread Reza Mirzaei
Hi 

I want to know how I can get at the packet headers in VPP. I have checked
the src/vnet directory, but I could not find where these structures are
defined. Can anyone help me with this? 
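
To be concrete, this is roughly what I am trying to write inside a node
function (the type names and helpers below are my guesses from grepping the
tree, which is exactly the part I am unsure about):

#include <vlib/vlib.h>
#include <vnet/ip/ip4_packet.h>   /* ip4_header_t, ip4_next_header() */
#include <vnet/udp/udp_packet.h>  /* udp_header_t */

/* Inspect the IPv4 and UDP headers of a buffer in an ip4 feature node. */
static void
inspect (vlib_buffer_t * b)
{
  /* The buffer's current data pointer sits on the IPv4 header here. */
  ip4_header_t *ip = vlib_buffer_get_current (b);

  if (ip->protocol == IP_PROTOCOL_UDP)
    {
      /* Step over the IP header (including any options) to reach L4. */
      udp_header_t *udp = ip4_next_header (ip);
      clib_warning ("udp dst port %u",
                    clib_net_to_host_u16 (udp->dst_port));
    }
}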

Best regards 

Reza
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10158): https://lists.fd.io/g/vpp-dev/message/10158
Mute This Topic: https://lists.fd.io/mt/24532410/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] LACP link bonding issue

2018-08-15 Thread Aleksander Djuric
In addition, I have tried configuring LACP in the dpdk section of the VPP
startup.conf, and I get the same result:

startup.conf:
unix {
   nodaemon
   log /var/log/vpp/vpp.log
   full-coredump
   cli-listen /run/vpp/cli.sock
   gid vpp
}

api-trace {
   on
}

api-segment {
   gid vpp
}

socksvr {
   default
}

dpdk {
   socket-mem 2048
   num-mbufs 131072

   dev :0a:00.0
   dev :0a:00.1
   dev :0a:00.2
   dev :0a:00.3

   vdev eth_bond0,mode=4,slave=:0a:00.0,slave=:0a:00.1,xmit_policy=l23
}

plugins {
   path /usr/lib/vpp_plugins
}

vpp# sh int
 Name                  Idx   State        MTU (L3/IP4/IP6/MPLS)   Counter   Count
BondEthernet0           5   down         9000/0/0/0
GigabitEtherneta/0/0    1   bond-slave   9000/0/0/0
GigabitEtherneta/0/1    2   bond-slave   9000/0/0/0
GigabitEtherneta/0/2    3   down         9000/0/0/0
GigabitEtherneta/0/3    4   down         9000/0/0/0
local0                  0   down         0/0/0/0
*vpp# set interface ip address BondEthernet0 10.0.0.2/24*
*vpp# set interface state BondEthernet0 up*
*vpp# clear hardware*
*vpp# clear error*
*vpp# show hardware*
 Name    Idx   Link  Hardware
BondEthernet0  5 up   Slave-Idx: 1 2
 Ethernet address 00:0b:ab:f4:bd:84
 Ethernet Bonding
   carrier up full duplex speed 2000 auto mtu 9202  
   flags: admin-up pmd maybe-multiseg
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

GigabitEtherneta/0/0   1    slave GigabitEtherneta/0/0
 Ethernet address 00:0b:ab:f4:bd:84
 Intel e1000
   carrier up full duplex speed 1000 auto mtu 9202  promisc
   flags: pmd maybe-multiseg bond-slave bond-slave-up tx-offload 
intel-phdr-cksum
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

GigabitEtherneta/0/1   2    slave GigabitEtherneta/0/1
 Ethernet address 00:0b:ab:f4:bd:84
 Intel e1000
   carrier up full duplex speed 1000 auto mtu 9202  promisc
   flags: pmd maybe-multiseg bond-slave bond-slave-up tx-offload 
intel-phdr-cksum
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

GigabitEtherneta/0/2   3    down  GigabitEtherneta/0/2
 Ethernet address 00:0b:ab:f4:bd:86
 Intel e1000
   carrier down  
   flags: pmd maybe-multiseg tx-offload intel-phdr-cksum
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

GigabitEtherneta/0/3   4    down  GigabitEtherneta/0/3
 Ethernet address 00:0b:ab:f4:bd:87
 Intel e1000
   carrier down  
   flags: pmd maybe-multiseg tx-offload intel-phdr-cksum
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

local0 0    down  local0
 local
*vpp# show error*
  Count    Node  Reason
*vpp# trace add dpdk-input 50*
*vpp# show trace*
--- Start of thread 0 vpp_main ---
No packets in trace buffer
*vpp# ping 10.0.0.1*

Statistics: 5 sent, 0 received, 100% packet loss
*vpp# show trace*   
--- Start of thread 0 vpp_main ---
No packets in trace buffer
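
For reference, the equivalent setup with VPP's native bonding + LACP plugin
instead of the DPDK vdev would be roughly the following (18.07 CLI syntax as
I understand it; treat as a sketch):

vpp# create bond mode lacp load-balance l23
vpp# bond add BondEthernet0 GigabitEtherneta/0/0
vpp# bond add BondEthernet0 GigabitEtherneta/0/1
vpp# set interface state GigabitEtherneta/0/0 up
vpp# set interface state GigabitEtherneta/0/1 up
vpp# set interface state BondEthernet0 up
vpp# set interface ip address BondEthernet0 10.0.0.2/24
vpp# show lacp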

Thanks in advance for any help..
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10157): https://lists.fd.io/g/vpp-dev/message/10157
Mute This Topic: https://lists.fd.io/mt/24525535/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] NAT Fragment Reassembly

2018-08-15 Thread Ole Troan
Jon,

Thanks for bringing this up. In addition to Matus’ answer.

There is a distinction to be made between forwarding and terminating traffic.
And there is a nice grey middle ground between the two.

Some features do forwarding based on the transport header, like NAT, MAP-E,
MAP-T and so on; those do not require reassembling the fragment chain, and
forward fragments in flight, aka virtual reassembly.

Tunnels with outer fragmentation require full reassembly before forwarding
(the packets are addressed to the node itself).

But you could also argue that there are features like ACLs, firewalls, lawful
intercept and whatnot that would benefit from doing a full reassembly while
forwarding.

a) Virtual reassembly
b) Full reassembly for terminating traffic (for-us / host)
c) Full reassembly for forwarding traffic for specific features requiring that

From a quick glance it seems like the current reassembly feature is doing c. 
And doing it without any level of granularity.
Meaning that if you need outer reassembly for an IP in IP tunnel, you’d 
suddenly also reassemble all IP traffic. Which is unwanted and costly.
That should be easy to fix. Klement?

So you are right: if you combine the current reassembly (c) with NAT, NAT does
not have to deal with fragments at all, but at a much higher cost than virtual
reassembly, of course. I propose we move the default reassembly to b instead of
c, and that it is only done in the c case for features that require it.
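
Concretely (interface name just an example; the interface CLI is from the
commit Jon referenced, and the NAT knobs are from memory, so treat this as a
sketch):

vpp# set interface reassembly GigabitEthernet0/8/0 on
   (today: case c, full reassembly of all forwarded traffic on that interface)
vpp# set interface reassembly GigabitEthernet0/8/0 off
   (fragments forwarded as-is; NAT still does case a, virtual reassembly,
    tuned separately via the nat_set_reass API: timeout, max_reass, max_frag,
    drop_frag)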

Cheers,
Ole



> On 14 Aug 2018, at 23:52, Jon Loeliger  wrote:
> 
> VPPeople,
> 
> A few months ago, the vppctl command 'set interface reassembly' was
> added along with its API call, ip_reassembly_enable_disable (commit
> 4c53313cd7e9b866412ad3e04b2d91ac098c1398).
> 
> What is the relationship of this fragment reassembly and this
> enable/disable flag WRT to the NAT's fragment reassembly?
> 
> Specifically, should a NAT fragment reassembly be controlled by this flag?
> Empirically, the answer is 'yes'.
> 
> So it appears that one should interpret this enable/disable flag more like:
> 
>When you use `set interface reassembly  off`, the  fragments are 
> forwarded
>without any sort of reassembly.  The fragments flow through unmolested.  
> The NAT
>fragmentation limits are not respected as they aren't even involved.
> 
>When you use `set interface reassembly  on`, the fragments are 
> reassembled
>before being forwarded.  So the interface will process, and possibly 
> limit, fragment
>reassembly, even for NAT rewritten packets.
> 
> Does that sound right?
> 
> And should the reassembly be enabled/disabled on the ingress interface?
> Or are there different scenarios where one would want them reassembled on
> the egress interface?
> 
> Thanks,
> jdl

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10156): https://lists.fd.io/g/vpp-dev/message/10156
Mute This Topic: https://lists.fd.io/mt/24529319/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] LACP link bonding issue

2018-08-15 Thread Aleksander Djuric
Hi Steven,

Thanks much for the answer. Yes, these 2 boxes’ interfaces are connected back
to back. Both sides show the same diagnostic results; here is the output:

vpp# sh int
 Name                  Idx   State   MTU (L3/IP4/IP6/MPLS)   Counter    Count
BondEthernet0           5   up      9000/0/0/0
GigabitEtherneta/0/0    1   up      9000/0/0/0               tx-error       1
GigabitEtherneta/0/1    2   up      9000/0/0/0               tx-error       1
GigabitEtherneta/0/2    3   down    9000/0/0/0
GigabitEtherneta/0/3    4   down    9000/0/0/0
local0                  0   down    0/0/0/0                  drops          2
*vpp# clear hardware*
*vpp# clear error*
*vpp# clear hardware*
*vpp# clear error    *
*vpp# ping 10.0.0.1*

Statistics: 5 sent, 0 received, 100% packet loss
*vpp# show hardware*
 Name                   Idx   Link   Hardware
BondEthernet0            5    up    BondEthernet0
 Ethernet address 00:0b:ab:f4:bd:84
GigabitEtherneta/0/0     1    up    GigabitEtherneta/0/0
 Ethernet address 00:0b:ab:f4:bd:84
 Intel e1000
   carrier up full duplex speed 1000 auto mtu 9202
   flags: admin-up pmd maybe-multiseg tx-offload intel-phdr-cksum
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

GigabitEtherneta/0/1     2    up    GigabitEtherneta/0/1
 Ethernet address 00:0b:ab:f4:bd:84
 Intel e1000
   carrier up full duplex speed 1000 auto mtu 9202
   flags: admin-up pmd maybe-multiseg tx-offload intel-phdr-cksum
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

GigabitEtherneta/0/2     3   down   GigabitEtherneta/0/2
 Ethernet address 00:0b:ab:f4:bd:86
 Intel e1000
   carrier down
   flags: pmd maybe-multiseg tx-offload intel-phdr-cksum
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

GigabitEtherneta/0/3     4   down   GigabitEtherneta/0/3
 Ethernet address 00:0b:ab:f4:bd:87
 Intel e1000
   carrier down
   flags: pmd maybe-multiseg tx-offload intel-phdr-cksum
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024