Re: [vpp-dev] LACP link bonding issue

2018-08-20 Thread Aleksander Djuric
Hi Steven,

Thanks a lot for your help. It works!

Best wishes,
Aleksander

On Fri, Aug 17, 2018 at 07:08 PM, steven luong wrote:

> Aleksander,
>
> I found the CLI bug. You can easily work around it. Please set the
> physical interface state up first in your CLI sequence and it will work.
>
> create bond mode lacp load-balance l23
> bond add BondEthernet0 GigabitEtherneta/0/0
> bond add BondEthernet0 GigabitEtherneta/0/1
> set interface ip address BondEthernet0 10.0.0.1/24
> set interface state GigabitEtherneta/0/0 up   < move these two lines
> to the beginning, prior to create bond
> set interface state GigabitEtherneta/0/1 up
> set interface state BondEthernet0 up
>
> Steven


Re: [vpp-dev] LACP link bonding issue

2018-08-17 Thread steven luong via Lists.Fd.Io
Aleksander,

I found the CLI bug. You can easily work around it. Please set the physical 
interface state up first in your CLI sequence and it will work.

create bond mode lacp load-balance l23
bond add BondEthernet0 GigabitEtherneta/0/0
bond add BondEthernet0 GigabitEtherneta/0/1
set interface ip address BondEthernet0 10.0.0.1/24
set interface state GigabitEtherneta/0/0 up   < move these two lines to the 
beginning, prior to create bond
set interface state GigabitEtherneta/0/1 up
set interface state BondEthernet0 up
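
For reference, the full sequence with the workaround applied would look like 
this (the same commands, just reordered as described above so the physical 
interfaces are brought up before the bond is created):

set interface state GigabitEtherneta/0/0 up
set interface state GigabitEtherneta/0/1 up
create bond mode lacp load-balance l23
bond add BondEthernet0 GigabitEtherneta/0/0
bond add BondEthernet0 GigabitEtherneta/0/1
set interface ip address BondEthernet0 10.0.0.1/24
set interface state BondEthernet0 up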

Steven


Re: [vpp-dev] LACP link bonding issue

2018-08-17 Thread Aleksander Djuric
Hi Steven,

GDB shows that the vlib_process_get_events function always returns ~0, except 
once at start, and lacp_schedule_periodic_timer never runs after that. It 
looks the same on both sides.
I have added some debug info. Please look at the log:

### VPP1:9.58 #
 
Aug 17 10:54:45 vpp1 vnet[1588]: lacp_schedule_periodic_timer:75: BEGIN
Aug 17 10:54:45 vpp1 vnet[1588]: lacp_schedule_periodic_timer:88: 
partner.state: 0x1, actor.state: 0x7
Aug 17 10:54:45 vpp1 vnet[1588]: lacp_schedule_periodic_timer:89: 
LACP_START_SLOW_PERIODIC_TIMER
Aug 17 10:54:45 vpp1 vnet[1588]: lacp_schedule_periodic_timer:75: BEGIN
Aug 17 10:54:45 vpp1 vnet[1588]: lacp_schedule_periodic_timer:85: 
LACP_START_FAST_PERIODIC_TIMER
Aug 17 10:54:45 vpp1 vnet[1588]: lacp_schedule_periodic_timer:75: BEGIN
Aug 17 10:54:45 vpp1 vnet[1588]: lacp_schedule_periodic_timer:88: 
partner.state: 0x1, actor.state: 0x7
Aug 17 10:54:45 vpp1 vnet[1588]: lacp_schedule_periodic_timer:89: 
LACP_START_SLOW_PERIODIC_TIMER
Aug 17 10:54:45 vpp1 vnet[1588]: lacp_schedule_periodic_timer:75: BEGIN
Aug 17 10:54:45 vpp1 vnet[1588]: lacp_schedule_periodic_timer:85: 
LACP_START_FAST_PERIODIC_TIMER
Aug 17 10:54:45 vpp1 vnet[1588]: lacp_process:172: LACP_PROCESS_EVENT_START
Aug 17 10:54:45 vpp1 vnet[1588]: lacp_process:169: LACP_PROCESS_TIMEOUT
Aug 17 10:54:45 vpp1 vnet[1588]: lacp_process:169: LACP_PROCESS_TIMEOUT
Aug 17 10:54:45 vpp1 vnet[1588]: lacp_process:169: LACP_PROCESS_TIMEOUT
Aug 17 10:54:45 vpp1 vnet[1588]: lacp_process:169: LACP_PROCESS_TIMEOUT
Aug 17 10:54:45 vpp1 vnet[1588]: lacp_process:169: LACP_PROCESS_TIMEOUT
 
### VPP2:6.155 #
 
Aug 17 10:55:09 vpp2 vnet[1722]: lacp_schedule_periodic_timer:75: BEGIN
Aug 17 10:55:09 vpp2 vnet[1722]: lacp_schedule_periodic_timer:88: 
partner.state: 0x1, actor.state: 0x7
Aug 17 10:55:09 vpp2 vnet[1722]: lacp_schedule_periodic_timer:89: 
LACP_START_SLOW_PERIODIC_TIMER
Aug 17 10:55:09 vpp2 vnet[1722]: lacp_schedule_periodic_timer:75: BEGIN
Aug 17 10:55:09 vpp2 vnet[1722]: lacp_schedule_periodic_timer:85: 
LACP_START_FAST_PERIODIC_TIMER
Aug 17 10:55:09 vpp2 vnet[1722]: lacp_schedule_periodic_timer:75: BEGIN
Aug 17 10:55:09 vpp2 vnet[1722]: lacp_schedule_periodic_timer:88: 
partner.state: 0x1, actor.state: 0x7
Aug 17 10:55:09 vpp2 vnet[1722]: lacp_schedule_periodic_timer:89: 
LACP_START_SLOW_PERIODIC_TIMER
Aug 17 10:55:09 vpp2 vnet[1722]: lacp_schedule_periodic_timer:75: BEGIN
Aug 17 10:55:09 vpp2 vnet[1722]: lacp_schedule_periodic_timer:85: 
LACP_START_FAST_PERIODIC_TIMER
Aug 17 10:55:09 vpp2 vnet[1722]: lacp_process:172: LACP_PROCESS_EVENT_START
Aug 17 10:55:10 vpp2 vnet[1722]: lacp_process:169: LACP_PROCESS_TIMEOUT
Aug 17 10:55:10 vpp2 vnet[1722]: lacp_process:169: LACP_PROCESS_TIMEOUT
Aug 17 10:55:10 vpp2 vnet[1722]: lacp_process:169: LACP_PROCESS_TIMEOUT
Aug 17 10:55:10 vpp2 vnet[1722]: lacp_process:169: LACP_PROCESS_TIMEOUT
Aug 17 10:55:10 vpp2 vnet[1722]: lacp_process:169: LACP_PROCESS_TIMEOUT
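
For context, ~0 is what vlib_process_get_events() returns when the process 
node wakes up on its clock timeout rather than on a signaled event, which is 
why the log above keeps printing LACP_PROCESS_TIMEOUT. A rough, paraphrased 
sketch of that standard VPP process-node loop (not the actual lacp_process 
source; names other than the vlib_* calls and the two event labels from the 
log are illustrative) looks like this:

/* Paraphrased sketch of a VPP process-node event loop. */
while (1)
  {
    /* Sleep until an event is signaled or the timeout expires. */
    vlib_process_wait_for_event_or_clock (vm, timeout_in_seconds);

    /* Returns the pending event type, or ~0 if we woke on the clock only. */
    event_type = vlib_process_get_events (vm, &event_data);

    switch (event_type)
      {
      case ~0:
        /* No event: periodic timeout path (LACP_PROCESS_TIMEOUT in the log). */
        break;
      case LACP_PROCESS_EVENT_START:
        /* Start event: enable periodic processing (LACP_PROCESS_EVENT_START). */
        break;
      }
    vec_reset_length (event_data);
  }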

Both of the hosts are bare metal. System information:

# uname -a
Linux vpp1 4.9.0-6-amd64 #1 SMP Debian 4.9.88-1+deb9u1 (2018-05-07) x86_64 
GNU/Linux
# lscpu 
Architecture:  x86_64 
CPU op-mode(s):    32-bit, 64-bit 
Byte Order:    Little Endian 
CPU(s):    8 
On-line CPU(s) list:   0-7 
Thread(s) per core:    2 
Core(s) per socket:    4 
Socket(s): 1 
NUMA node(s):  1 
Vendor ID: GenuineIntel 
CPU family:    6 
Model: 60 
Model name:    Intel(R) Xeon(R) CPU E3-1275 v3 @ 3.50GHz 
Stepping:  3 
CPU MHz:   3500.000 
CPU max MHz:   3500. 
CPU min MHz:   800. 
BogoMIPS:  6999.94 
Virtualization:    VT-x 
L1d cache: 32K 
L1i cache: 32K 
L2 cache:  256K 
L3 cache:  8192K 
NUMA node0 CPU(s): 0-7 
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca 
cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx 
pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology 
nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx 
est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt 
tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm epb invpcid_single 
kaiser tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle 
avx2 smep bmi2 erms invpcid rtm xsaveopt dtherm arat pln pts

# hwinfo --short   
cpu: 
  Intel(R) Xeon(R) CPU E3-1275 v3 @ 3.50GHz, 3500 MHz
  Intel(R) Xeon(R) CPU E3-1275 v3 @ 3.50GHz, 3500 MHz
  Intel(R) Xeon(R) CPU E3-1275 v3 @ 3.50GHz, 3500 MHz
  Intel(R) Xeon(R) CPU E3-1275 v3 @ 3.50GHz, 3500 MHz
  Intel(R) Xeon(R) CPU 

Re: [vpp-dev] LACP link bonding issue

2018-08-16 Thread steven luong via Lists.Fd.Io
Aleksander,

This problem should be easy to figure out if you can gdb the code. When the 
very first slave interface is added to the bonding group via the command “bond 
add BondEthernet0 GigabitEtherneta/0/0/1”,

- The PTX machine schedules the interface with the periodic timer via 
lacp_schedule_periodic_timer().
- lacp-process is signaled with event_start to enable the periodic timer. 
lacp_process() only calls lacp_periodic() if “enabled” is set.

One of these two things is not happening in your platform/environment, and I 
cannot explain why by eye alone. GDB'ing those two places will solve the 
mystery. Of course, it works in my environment every time and I am not seeing 
the problem. What is your working environment? VM or bare metal? Which Linux 
distro and version? I am running VPP on Ubuntu 16.04 on bare metal.
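
Concretely, a minimal gdb session for that check could look like the following 
(this assumes the daemon runs as a process named vpp and that debug symbols 
are available; the breakpoints are the functions named above):

# attach to the running vpp daemon
sudo gdb -p $(pidof vpp)

(gdb) break lacp_schedule_periodic_timer
(gdb) break lacp_periodic
(gdb) continue

If the first breakpoint never fires when the first slave is added, the PTX 
machine never scheduled the periodic timer; if lacp_periodic never fires after 
that, lacp-process never saw the start event and “enabled” was never set.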

Steven




Re: [vpp-dev] LACP link bonding issue

2018-08-15 Thread steven luong via Lists.Fd.Io
This configuration is not supported in VPP.

Steven

From:  on behalf of Aleksander Djuric 

Date: Wednesday, August 15, 2018 at 12:33 AM
To: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] LACP link bonding issue

In addition, I have tried to configure LACP in the dpdk section of the VPP 
startup.conf, and I got the same output:

startup.conf:
unix {
   nodaemon
   log /var/log/vpp/vpp.log
   full-coredump
   cli-listen /run/vpp/cli.sock
   gid vpp
}

api-trace {
   on
}

api-segment {
   gid vpp
}

socksvr {
   default
}

dpdk {
   socket-mem 2048
   num-mbufs 131072

   dev :0a:00.0
   dev :0a:00.1
   dev :0a:00.2
   dev :0a:00.3

   vdev eth_bond0,mode=4,slave=:0a:00.0,slave=:0a:00.1,xmit_policy=l23
}

plugins {
   path /usr/lib/vpp_plugins
}

vpp# sh int
 Name   IdxState  MTU (L3/IP4/IP6/MPLS) Counter 
 Count
BondEthernet0 5 down 9000/0/0/0
GigabitEtherneta/0/0  1  bond-slave  9000/0/0/0
GigabitEtherneta/0/1  2  bond-slave  9000/0/0/0
GigabitEtherneta/0/2  3 down 9000/0/0/0
GigabitEtherneta/0/3  4 down 9000/0/0/0
local00 down  0/0/0/0
vpp# set interface ip address BondEthernet0 10.0.0.2/24
vpp# set interface state BondEthernet0 up
vpp# clear hardware
vpp# clear error
vpp# show hardware
 NameIdx   Link  Hardware
BondEthernet0  5 up   Slave-Idx: 1 2
 Ethernet address 00:0b:ab:f4:bd:84
 Ethernet Bonding
   carrier up full duplex speed 2000 auto mtu 9202
   flags: admin-up pmd maybe-multiseg
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

GigabitEtherneta/0/0   1slave GigabitEtherneta/0/0
 Ethernet address 00:0b:ab:f4:bd:84
 Intel e1000
   carrier up full duplex speed 1000 auto mtu 9202  promisc
   flags: pmd maybe-multiseg bond-slave bond-slave-up tx-offload 
intel-phdr-cksum
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

GigabitEtherneta/0/1   2slave GigabitEtherneta/0/1
 Ethernet address 00:0b:ab:f4:bd:84
 Intel e1000
   carrier up full duplex speed 1000 auto mtu 9202  promisc
   flags: pmd maybe-multiseg bond-slave bond-slave-up tx-offload 
intel-phdr-cksum
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

GigabitEtherneta/0/2   3down  GigabitEtherneta/0/2
 Ethernet address 00:0b:ab:f4:bd:86
 Intel e1000
   carrier down
   flags: pmd maybe-multiseg tx-offload intel-phdr-cksum
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

GigabitEtherneta/0/3   4down  GigabitEtherneta/0/3
 Ethernet address 00:0b:ab:f4:bd:87
 Intel e1000
   carrier down
   flags: pmd maybe-multiseg tx-offload intel-phdr-cksum
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

local0 0down  local0
 local
vpp# show error
  CountNode  Reason
vpp# trace add dpdk-input 50
vpp# show trace
--- Start of thread 0 vpp_main ---
No packets in trace buffer
vpp# ping 10.0.0.1

Statistics: 5 sent, 0 received, 100% packet loss
vpp# show trace
--- Start of thread 0 vpp_main ---
No packets in trace buffer

Thanks in advance for any help..



Re: [vpp-dev] LACP link bonding issue

2018-08-15 Thread steven luong via Lists.Fd.Io
Aleksander,

The problem is that the LACP periodic timer is not running, as shown in your 
output. I wonder whether lacp-process was launched properly or got stuck. Could 
you please do show run and check on the health of lacp-process?

 periodic timer: not running

Steven

From:  on behalf of Aleksander Djuric 

Date: Wednesday, August 15, 2018 at 12:11 AM
To: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] LACP link bonding issue

Hi Steven,

Thanks very much for the answer. Yes, these 2 boxes’ interfaces are connected 
back to back.
Both sides show the same diagnostic results; here is the output:

vpp# sh int
 Name   IdxState  MTU (L3/IP4/IP6/MPLS) Counter 
 Count
BondEthernet0 5  up  9000/0/0/0
GigabitEtherneta/0/0  1  up  9000/0/0/0 tx-error
   1
GigabitEtherneta/0/1  2  up  9000/0/0/0 tx-error
   1
GigabitEtherneta/0/2  3 down 9000/0/0/0
GigabitEtherneta/0/3  4 down 9000/0/0/0
local00 down  0/0/0/0   drops   
   2
vpp# clear hardware
vpp# clear error
vpp# clear hardware
vpp# clear error
vpp# ping 10.0.0.1

Statistics: 5 sent, 0 received, 100% packet loss
vpp# show hardware
 NameIdx   Link  Hardware
BondEthernet0  5 up   BondEthernet0
 Ethernet address 00:0b:ab:f4:bd:84
GigabitEtherneta/0/0   1 up   GigabitEtherneta/0/0
 Ethernet address 00:0b:ab:f4:bd:84
 Intel e1000
   carrier up full duplex speed 1000 auto mtu 9202
   flags: admin-up pmd maybe-multiseg tx-offload intel-phdr-cksum
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

GigabitEtherneta/0/1   2 up   GigabitEtherneta/0/1
 Ethernet address 00:0b:ab:f4:bd:84
 Intel e1000
   carrier up full duplex speed 1000 auto mtu 9202
   flags: admin-up pmd maybe-multiseg tx-offload intel-phdr-cksum
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

GigabitEtherneta/0/2   3down  GigabitEtherneta/0/2
 Ethernet address 00:0b:ab:f4:bd:86
 Intel e1000
   carrier down
   flags: pmd maybe-multiseg tx-offload intel-phdr-cksum
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

GigabitEtherneta/0/3   4down  GigabitEtherneta/0/3
 Ethernet address 00:0b:ab:f4:bd:87
 Intel e1000
   carrier down
   flags: pmd maybe-multiseg tx-offload intel-phdr-cksum
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

local0 0down  local0
 local
vpp# show error
  CountNode  Reason
5ip4-glean   ARP requests sent
5BondEthernet0-txno slave
vpp# show lacp details
 GigabitEtherneta/0/0
   debug: 0
   loopback port: 0
   port moved: 0
   ready_n: 0
   ready: 0
   Actor
 system: 00:0b:ab:f4:bd:84
 system priority: 65535
 key: 5
 port priority: 255
 port number: 1
 state: 0x7
   LACP_STATE_LACP_ACTIVITY (0)
   LACP_STATE_LACP_TIMEOUT (1)
   LACP_STATE_AGGREGATION (2)
   Partner
 system: 00:00:00:00:00:00
 system priority: 65535
 key: 5
 port priority: 255
 port number: 1
 state: 0x1
   LACP_STATE_LACP_ACTIVITY (0)
 wait while timer: not running
 current while timer: not running
 periodic timer: not running
   RX-state: EXPIRED
   TX-state: TRANSMIT
   MUX-state: DETACHED
   PTX-state: PERIODIC_TX

 GigabitEtherneta/0/1
   debug: 0
   loopback port: 0
   port moved: 0
   ready_n: 0
   ready: 0
   Actor
 system: 00:0b:ab:f4:bd:84
 system priority: 65535
 key: 5
 port priority: 255
 port number: 2
 state: 0x7
   LACP_STATE_LACP_ACTIVITY (0)
   LACP_STATE_LACP_TIMEOUT (1)
   LACP_STATE_AGGREGATION (2)
   Partner
 system: 00:00:00:00:00:00
 system priority: 65535
 key: 5
 port priority: 255
 port number: 2
 state: 0x1
   LACP_STATE_LACP_ACTIVITY (0)
 wait while timer: not running
 current while timer: not running
 periodic timer: not running
   RX-state: EXPIRED
   TX-state: TRANSMIT
   MUX-state: DETACHED
   PTX-state: PERIODIC_TX

vpp# trace add dpdk-input 50
vpp# show trace
--- Start of thread 0 vpp_main ---
No packets in trace buffer
vpp# ping 10.0.0.1

Statistics: 5 sent, 0 received, 100% packet loss
vpp# show trace
--- Start of thread 0 vpp_main ---
No packets in trace buffer



Re: [vpp-dev] LACP link bonding issue

2018-08-15 Thread Aleksander Djuric
In addition, I have tried to configure LACP in the dpdk section of the VPP 
startup.conf, and I got the same output:

startup.conf:
unix {
   nodaemon
   log /var/log/vpp/vpp.log
   full-coredump
   cli-listen /run/vpp/cli.sock
   gid vpp
}

api-trace {
   on
}

api-segment {
   gid vpp
}

socksvr {
   default
}

dpdk {
   socket-mem 2048
   num-mbufs 131072

   dev :0a:00.0
   dev :0a:00.1
   dev :0a:00.2
   dev :0a:00.3

   vdev eth_bond0,mode=4,slave=:0a:00.0,slave=:0a:00.1,xmit_policy=l23
}

plugins {
   path /usr/lib/vpp_plugins
}

vpp# sh int
 Name   Idx    State  MTU (L3/IP4/IP6/MPLS) Counter 
 Count  
BondEthernet0 5 down 9000/0/0/0  
GigabitEtherneta/0/0  1  bond-slave  9000/0/0/0  
GigabitEtherneta/0/1  2  bond-slave  9000/0/0/0  
GigabitEtherneta/0/2  3 down 9000/0/0/0  
GigabitEtherneta/0/3  4 down 9000/0/0/0  
local0    0 down  0/0/0/0    
vpp# set interface ip address BondEthernet0 10.0.0.2/24
vpp# set interface state BondEthernet0 up
vpp# clear hardware
vpp# clear error
vpp# show hardware
 Name    Idx   Link  Hardware
BondEthernet0  5 up   Slave-Idx: 1 2
 Ethernet address 00:0b:ab:f4:bd:84
 Ethernet Bonding
   carrier up full duplex speed 2000 auto mtu 9202  
   flags: admin-up pmd maybe-multiseg
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

GigabitEtherneta/0/0   1    slave GigabitEtherneta/0/0
 Ethernet address 00:0b:ab:f4:bd:84
 Intel e1000
   carrier up full duplex speed 1000 auto mtu 9202  promisc
   flags: pmd maybe-multiseg bond-slave bond-slave-up tx-offload 
intel-phdr-cksum
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

GigabitEtherneta/0/1   2    slave GigabitEtherneta/0/1
 Ethernet address 00:0b:ab:f4:bd:84
 Intel e1000
   carrier up full duplex speed 1000 auto mtu 9202  promisc
   flags: pmd maybe-multiseg bond-slave bond-slave-up tx-offload 
intel-phdr-cksum
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

GigabitEtherneta/0/2   3    down  GigabitEtherneta/0/2
 Ethernet address 00:0b:ab:f4:bd:86
 Intel e1000
   carrier down  
   flags: pmd maybe-multiseg tx-offload intel-phdr-cksum
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

GigabitEtherneta/0/3   4    down  GigabitEtherneta/0/3
 Ethernet address 00:0b:ab:f4:bd:87
 Intel e1000
   carrier down  
   flags: pmd maybe-multiseg tx-offload intel-phdr-cksum
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

local0 0    down  local0
 local
vpp# show error
  Count    Node  Reason
vpp# trace add dpdk-input 50
vpp# show trace
--- Start of thread 0 vpp_main ---
No packets in trace buffer
vpp# ping 10.0.0.1

Statistics: 5 sent, 0 received, 100% packet loss
vpp# show trace
--- Start of thread 0 vpp_main ---
No packets in trace buffer

Thanks in advance for any help..


Re: [vpp-dev] LACP link bonding issue

2018-08-15 Thread Aleksander Djuric
Hi Steven,

Thanks very much for the answer. Yes, these 2 boxes’ interfaces are connected 
back to back.
Both sides show the same diagnostic results; here is the output:

vpp# sh int
 Name   Idx    State  MTU (L3/IP4/IP6/MPLS) Counter 
 Count  
BondEthernet0 5  up  9000/0/0/0  
GigabitEtherneta/0/0  1  up  9000/0/0/0 tx-error    
   1
GigabitEtherneta/0/1  2  up  9000/0/0/0 tx-error    
   1
GigabitEtherneta/0/2  3 down 9000/0/0/0  
GigabitEtherneta/0/3  4 down 9000/0/0/0  
local0    0 down  0/0/0/0   drops   
   2
vpp# clear hardware
vpp# clear error
vpp# clear hardware
vpp# clear error
vpp# ping 10.0.0.1

Statistics: 5 sent, 0 received, 100% packet loss
vpp# show hardware
 Name    Idx   Link  Hardware
BondEthernet0  5 up   BondEthernet0
 Ethernet address 00:0b:ab:f4:bd:84
GigabitEtherneta/0/0   1 up   GigabitEtherneta/0/0
 Ethernet address 00:0b:ab:f4:bd:84
 Intel e1000
   carrier up full duplex speed 1000 auto mtu 9202
   flags: admin-up pmd maybe-multiseg tx-offload intel-phdr-cksum
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

GigabitEtherneta/0/1   2 up   GigabitEtherneta/0/1
 Ethernet address 00:0b:ab:f4:bd:84
 Intel e1000
   carrier up full duplex speed 1000 auto mtu 9202
   flags: admin-up pmd maybe-multiseg tx-offload intel-phdr-cksum
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

GigabitEtherneta/0/2   3    down  GigabitEtherneta/0/2
 Ethernet address 00:0b:ab:f4:bd:86
 Intel e1000
   carrier down
   flags: pmd maybe-multiseg tx-offload intel-phdr-cksum
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
   cpu socket 0

GigabitEtherneta/0/3   4    down  GigabitEtherneta/0/3
 Ethernet address 00:0b:ab:f4:bd:87
 Intel e1000
   carrier down
   flags: pmd maybe-multiseg tx-offload intel-phdr-cksum
   rx queues 1, rx desc 1024, tx queues 1, tx desc 1024

Re: [vpp-dev] LACP link bonding issue

2018-08-14 Thread steven luong via Lists.Fd.Io
I forgot to ask if these 2 boxes’ interfaces are connected back to back or 
through a switch.

Steven

From:  on behalf of "steven luong via Lists.Fd.Io" 

Reply-To: "Steven Luong (sluong)" 
Date: Tuesday, August 14, 2018 at 8:24 AM
To: Aleksander Djuric , "vpp-dev@lists.fd.io" 

Cc: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] LACP link bonding issue

Aleksander

It looks like the LACP packets are not going out the interfaces as expected, or 
are being dropped. Additional output and traces are needed to determine why. 
Please collect the following from both sides.

clear hardware
clear error

wait a few seconds

show hardware
show error
show lacp details
trace add dpdk-input 50

wait a few seconds

show trace

Steven

From:  on behalf of Aleksander Djuric 

Date: Tuesday, August 14, 2018 at 7:28 AM
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] LACP link bonding issue

Hi all,

I'm trying to set up bonding in mode 4 (LACP) between 2 VPP hosts, and I have 
encountered a problem: there are no active slaves on the bond interface. Both 
hosts run VPP v18.10-rc0. The same config runs perfectly in other modes. Any idea?

1st VPP config:

create bond mode lacp load-balance l23
bond add BondEthernet0 GigabitEtherneta/0/0
bond add BondEthernet0 GigabitEtherneta/0/1
set interface ip address BondEthernet0 10.0.0.1/24
set interface state GigabitEtherneta/0/0 up
set interface state GigabitEtherneta/0/1 up
set interface state BondEthernet0 up

2nd VPP config:

create bond mode lacp load-balance l23
bond add BondEthernet0 GigabitEtherneta/0/0
bond add BondEthernet0 GigabitEtherneta/0/1
set interface ip address BondEthernet0 10.0.0.2/24
set interface state GigabitEtherneta/0/0 up
set interface state GigabitEtherneta/0/1 up
set interface state BondEthernet0 up

vpp1# ping 10.0.0.2
Statistics: 5 sent, 0 received, 100% packet loss

vpp1# sh int
 Name   IdxState  MTU (L3/IP4/IP6/MPLS) Counter 
 Count
BondEthernet0 5  up  9000/0/0/0 tx packets  
  10
   tx bytes 
420
   drops
 10
GigabitEtherneta/0/0  1  up  9000/0/0/0 tx-error
   1
GigabitEtherneta/0/1  2  up  9000/0/0/0 tx-error
   1
GigabitEtherneta/0/2  3 down 9000/0/0/0
GigabitEtherneta/0/3  4 down 9000/0/0/0
local00 down  0/0/0/0   drops   
   2

vpp1# sh bond
interface name   sw_if_index  mode load balance  active slaves  slaves
BondEthernet05lacp l23   0  2

vpp1# show lacp
actor state 
 partner state
interface namesw_if_index  bond interface   
exp/def/dis/col/syn/agg/tim/act  exp/def/dis/col/syn/agg/tim/act
GigabitEtherneta/0/0  1BondEthernet0  0   0   0   0   0   1 
  1   10   0   0   0   0   0   0   1
  LAG ID: [(,00-0b-ab-f4-f9-66,0005,00ff,0001), 
(,00-00-00-00-00-00,0005,00ff,0001)]
  RX-state: EXPIRED, TX-state: TRANSMIT, MUX-state: DETACHED, PTX-state: 
PERIODIC_TX
GigabitEtherneta/0/1  2BondEthernet0  0   0   0   0   0   1 
  1   10   0   0   0   0   0   0   1
  LAG ID: [(,00-0b-ab-f4-f9-66,0005,00ff,0002), 
(,00-00-00-00-00-00,0005,00ff,0002)]
  RX-state: EXPIRED, TX-state: TRANSMIT, MUX-state: DETACHED, PTX-state: 
PERIODIC_TX

Regards,
Aleksander



Re: [vpp-dev] LACP link bonding issue

2018-08-14 Thread steven luong via Lists.Fd.Io
Aleksander

It looks like the LACP packets are not going out the interfaces as expected, or 
are being dropped. Additional output and traces are needed to determine why. 
Please collect the following from both sides.

clear hardware
clear error

wait a few seconds

show hardware
show error
show lacp details
trace add dpdk-input 50

wait a few seconds

show trace

Steven

From:  on behalf of Aleksander Djuric 

Date: Tuesday, August 14, 2018 at 7:28 AM
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] LACP link bonding issue

Hi all,

I'm trying to set up bonding in mode 4 (LACP) between 2 VPP hosts, and I have 
encountered a problem: there are no active slaves on the bond interface. Both 
hosts run VPP v18.10-rc0. The same config runs perfectly in other modes. Any idea?

1st VPP config:

create bond mode lacp load-balance l23
bond add BondEthernet0 GigabitEtherneta/0/0
bond add BondEthernet0 GigabitEtherneta/0/1
set interface ip address BondEthernet0 10.0.0.1/24
set interface state GigabitEtherneta/0/0 up
set interface state GigabitEtherneta/0/1 up
set interface state BondEthernet0 up

2nd VPP config:

create bond mode lacp load-balance l23
bond add BondEthernet0 GigabitEtherneta/0/0
bond add BondEthernet0 GigabitEtherneta/0/1
set interface ip address BondEthernet0 10.0.0.2/24
set interface state GigabitEtherneta/0/0 up
set interface state GigabitEtherneta/0/1 up
set interface state BondEthernet0 up

vpp1# ping 10.0.0.2
Statistics: 5 sent, 0 received, 100% packet loss

vpp1# sh int
 Name   IdxState  MTU (L3/IP4/IP6/MPLS) Counter 
 Count
BondEthernet0 5  up  9000/0/0/0 tx packets  
  10
   tx bytes 
420
   drops
 10
GigabitEtherneta/0/0  1  up  9000/0/0/0 tx-error
   1
GigabitEtherneta/0/1  2  up  9000/0/0/0 tx-error
   1
GigabitEtherneta/0/2  3 down 9000/0/0/0
GigabitEtherneta/0/3  4 down 9000/0/0/0
local00 down  0/0/0/0   drops   
   2

vpp1# sh bond
interface name   sw_if_index  mode load balance  active slaves  slaves
BondEthernet05lacp l23   0  2

vpp1# show lacp
actor state 
 partner state
interface namesw_if_index  bond interface   
exp/def/dis/col/syn/agg/tim/act  exp/def/dis/col/syn/agg/tim/act
GigabitEtherneta/0/0  1BondEthernet0  0   0   0   0   0   1 
  1   10   0   0   0   0   0   0   1
  LAG ID: [(,00-0b-ab-f4-f9-66,0005,00ff,0001), 
(,00-00-00-00-00-00,0005,00ff,0001)]
  RX-state: EXPIRED, TX-state: TRANSMIT, MUX-state: DETACHED, PTX-state: 
PERIODIC_TX
GigabitEtherneta/0/1  2BondEthernet0  0   0   0   0   0   1 
  1   10   0   0   0   0   0   0   1
  LAG ID: [(,00-0b-ab-f4-f9-66,0005,00ff,0002), 
(,00-00-00-00-00-00,0005,00ff,0002)]
  RX-state: EXPIRED, TX-state: TRANSMIT, MUX-state: DETACHED, PTX-state: 
PERIODIC_TX

Regards,
Aleksander
