Re: [vpp-dev] VPP crash while allocation buffer in Azure with MLX5

2021-01-05 Thread Christian Hopps
No, I don't think it was a build issue. I think it just doesn't work. :)

I don't currently have the cycles to circle back and check again; perhaps in
the future.

Thanks,
Chris.

> On Jan 5, 2021, at 3:11 AM, Benoit Ganne (bganne) via lists.fd.io 
>  wrote:
> 
>> Isn't dpdk/connectx-5 broken in 20.09?
> 
> I am not sure - I did not test it, but if you refer to the build issue, it 
> was caused by meson and this has been reverted from 20.09 if I am not 
> mistaken (and should be fixed in master).
> 
> Best
> ben




-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#18473): https://lists.fd.io/g/vpp-dev/message/18473
Mute This Topic: https://lists.fd.io/mt/79422123/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] VPP crash while allocation buffer in Azure with MLX5

2021-01-05 Thread Benoit Ganne (bganne) via lists.fd.io
> Isn't dpdk/connectx-5 broken in 20.09?

I am not sure - I did not test it, but if you refer to the build issue, it was 
caused by meson and this has been reverted from 20.09 if I am not mistaken (and 
should be fixed in master).

Best
ben





Re: [vpp-dev] VPP crash while allocation buffer in Azure with MLX5

2021-01-04 Thread Himanshu Rakshit
Hi Ben,

Thanks for the reply. We have a dependency on VPP 19.08, so we cannot switch
to VPP 20.09 immediately. Any help on 19.08 would be appreciated.

Thanks,
Himanshu

On Mon, 4 Jan 2021 at 23:56, Christian Hopps  wrote:

> Isn't dpdk/connectx-5 broken in 20.09?
>
> Thanks,
> Chris.
>
> On Jan 4, 2021, at 9:02 AM, Benoit Ganne (bganne) via lists.fd.io <
> bganne=cisco@lists.fd.io> wrote:
>
> Hi,
>
> VPP 19.08 is no longer supported. Can you try with VPP 20.09 instead?
>
> Best
> ben
>
> [snip]

Re: [vpp-dev] VPP crash while allocation buffer in Azure with MLX5

2021-01-04 Thread Christian Hopps
Isn't dpdk/connectx-5 broken in 20.09?

Thanks,
Chris.

> On Jan 4, 2021, at 9:02 AM, Benoit Ganne (bganne) via lists.fd.io 
>  wrote:
> 
> Hi,
> 
> VPP 19.08 is no longer supported. Can you try with VPP 20.09 instead?
> 
> Best
> ben
> 
>> [snip]

Re: [vpp-dev] VPP crash while allocation buffer in Azure with MLX5

2021-01-04 Thread Benoit Ganne (bganne) via lists.fd.io
Hi,

VPP 19.08 is no longer supported. Can you try with VPP 20.09 instead?

Best
ben

> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Himanshu
> Rakshit
> Sent: lundi 4 janvier 2021 12:42
> To: vpp-dev@lists.fd.io
> Subject: [vpp-dev] VPP crash while allocation buffer in Azure with MLX5
> 
> Hi All,
> 
> We are observing the following crash in VPP while running in Azure
> with MLX5:
> 
> Program received signal SIGABRT, Aborted.
> 0x7f8c2387c387 in raise () from /lib64/libc.so.6
> (gdb) bt
> #0  0x7f8c2387c387 in raise () from /lib64/libc.so.6
> #1  0x7f8c2387da78 in abort () from /lib64/libc.so.6
> #2  0x0040755a in os_panic () at /home/vpp/src/vpp/vnet/main.c:355
> #3  0x7f8c2461eb39 in debugger () at /home/vpp/src/vppinfra/error.c:84
> #4  0x7f8c2461ef08 in _clib_error (how_to_die=2, function_name=0x0,
> line_number=0, fmt=0x7f8bdffa7df8 "%s:%d (%s) assertion `%s' fails")
> at /home/vpp/src/vppinfra/error.c:143
> #5  0x7f8bdfd38823 in vlib_get_buffer_index (vm=0x7f8c25479d80
> , p=0x17f19ed00)
> at /home/vpp/src/vlib/buffer_funcs.h:261
> #6  0x7f8bdfd38b87 in vlib_get_buffer_indices_with_offset
> (vm=0x7f8c25479d80 , b=0x7f8be48c5fc8,
> bi=0x7f8be4a1c514,
> count=2, offset=128) at /home/vpp/src/vlib/buffer_funcs.h:322
> #7  0x7f8bdfd3ae2d in dpdk_device_input (vm=0x7f8c25479d80
> , dm=0x7f8be06968e0 , xd=0x7f8be494e240,
> node=0x7f8be3afae80, thread_index=0, queue_id=0) at
> /home/vpp/src/plugins/dpdk/device/node.c:371
> #8  0x7f8bdfd3b362 in dpdk_input_node_fn_avx2 (vm=0x7f8c25479d80
> , node=0x7f8be3afae80, f=0x0)
> at /home/vpp/src/plugins/dpdk/device/node.c:469
> #9  0x7f8c251d6c42 in dispatch_node (vm=0x7f8c25479d80
> , node=0x7f8be3afae80, type=VLIB_NODE_TYPE_INPUT,
> dispatch_state=VLIB_NODE_STATE_POLLING, frame=0x0,
> last_time_stamp=265582086742909)
> at /home/vpp/src/vlib/main.c:1209
> #10 0x7f8c251d8d94 in vlib_main_or_worker_loop (vm=0x7f8c25479d80
> , is_main=1)
> at /home/vpp/src/vlib/main.c:1781
> #11 0x7f8c251d98a9 in vlib_main_loop (vm=0x7f8c25479d80
> )
> at /home/vpp/src/vlib/main.c:1930
> #12 0x7f8c251da571 in vlib_main (vm=0x7f8c25479d80 ,
> input=0x7f8be2695fb0)
> at /home/vpp/src/vlib/main.c:2147
> #13 0x7f8c25240533 in thread0 (arg=140239897599360) at
> /home/vpp/src/vlib/unix/main.c:640
> #14 0x7f8c2463f9b0 in clib_calljmp ()
>from /home/vpp/build-root/install-vpp_debug-
> native/vpp/lib/libvppinfra.so.19.08.1
> #15 0x7ffc224c65e0 in ?? ()
> #16 0x7f8c25240aa9 in vlib_unix_main (argc=67, argv=0x855030)
> at /home/vpp/src/vlib/unix/main.c:710
> #17 0x00406ece in main (argc=67, argv=0x855030) at
> /home/vpp/src/vpp/vnet/main.c:280
> 
> Details:
> VPP Version: 19.08
> DPDK version: 19.05
> Linux Version: CentOS Linux release 7.8.2003 (Core)
> Kernel version: 3.10.0-1127.19.1.el7.x86_64
> Step Followed: https://fd.io/docs/vpp/master/usecases/vppinazure.html
> 
> Interfaces are coming up properly, normal ping is working fine but when we
> are trying to run more data we are running into this issue.
> 
> 
> More Info:
> - 1G huge pages are configured
> - We can see the following error in vpp show logging
> 
> [root@exe91-fpm vpp]# lspci
> :00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX
> Host bridge (AGP disabled) (rev 03)
> :00:07.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev
> 01)
> :00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev
> 01)
> :00:07.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 02)
> :00:08.0 VGA compatible controller: Microsoft Corporation Hyper-V
> virtual VGA
> 042d:00:02.0 Ethernet controller: Mellanox Technologies MT27710 Family
> [ConnectX-4 Lx Virtual Function] (rev 80)
> 1d94:00:02.0 Ethernet controller: Mellanox Technologies MT27710 Family
> [ConnectX-4 Lx Virtual Function] (rev 80)
> 
> int
>               Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter     Count
> FiftyGigabitEthernet2             1      up          2000/0/0/0     rx packets          1
>                                                                     rx bytes           42
>                                                                     tx packets          1
>                                                                     tx bytes           42
>                                                                     drops               1
> FiftyGigabitEthernet3             2      up          9000/0/0/0     rx packets          9
>                                                                     rx bytes          890
>                                                                     drops               9
>                                                                     ip4                 2
>                                                                     ip6                 7
> DBGvpp