Hi Marco,

Please wait; I'll re-submit the jobs once we've got everything ready.
Thanks,
neale

From: Marco Varlese <marco.varl...@suse.com>
Date: Monday, 15 May 2017 at 15:12
To: "Damjan Marion (damarion)" <damar...@cisco.com>, "Neale Ranns (nranns)" <nra...@cisco.com>
Cc: "csit-...@lists.fd.io" <csit-...@lists.fd.io>, vpp-dev <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] CSIT borked on master

Hi Damjan,

As per your input, I waited for your patch to be merged and then rebased mine. However, the vpp-csit-verify-virl-master job still fails. Shall I try to "recheck" again?

Thanks,
Marco

On Mon, 2017-05-15 at 11:39 +0000, Damjan Marion (damarion) wrote:

"recheck" will not be enough. All patches must be rebased so they pick up my fix.

On 15 May 2017, at 13:38, Neale Ranns (nranns) <nra...@cisco.com> wrote:

Hi Marco,

I'll restart the jobs once we've got them passing again. For your reference, you can do it manually by typing 'recheck' as a code review comment in gerrit.

regards,
neale

From: Marco Varlese <marco.varl...@suse.com>
Date: Monday, 15 May 2017 at 12:17
To: "Damjan Marion (damarion)" <damar...@cisco.com>, "Neale Ranns (nranns)" <nra...@cisco.com>
Cc: "csit-...@lists.fd.io" <csit-...@lists.fd.io>, vpp-dev <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] CSIT borked on master

Hi Damjan,

Once your patch is merged, is it possible to kick off the builds that are currently all marked Verified-1, so they start from a clean state? If I could do that manually, I would, at least for mine.

Thanks,
Marco

On Mon, 2017-05-15 at 10:54 +0000, Damjan Marion (damarion) wrote:

This issue is caused by a bug in DPDK 17.05 introduced by the following commit:
http://dpdk.org/browse/dpdk/commit/?id=ee1843b

It happens only with old QEMU emulation (I reproduced it with "pc-1.0"), which VIRL uses. The fix (a revert) is in gerrit:
https://gerrit.fd.io/r/#/c/6690/

Regards,
Damjan

On 13 May 2017, at 20:34, Neale Ranns (nranns) <nra...@cisco.com> wrote:

Hi Chris,

Yes, every CSIT job on master is borked. I think I've narrowed this down to all VAT sw_interface_dump calls returning bogus/garbage MAC addresses. No idea why; I can't reproduce it yet. I've put a speculative DPDK 17.05 bump backout job in the queue, for purposes of elimination.

Regards,
/neale

From: "Luke, Chris" <chris_l...@comcast.com>
Date: Saturday, 13 May 2017 at 19:04
To: "Neale Ranns (nranns)" <nra...@cisco.com>, "yug...@telincn.com" <yug...@telincn.com>, vpp-dev <vpp-dev@lists.fd.io>
Subject: RE: [vpp-dev] Segmentation fault in recursively looking up fib entry.

CSIT seems to be barfing on every job at the moment :(

From: vpp-dev-boun...@lists.fd.io On Behalf Of Neale Ranns (nranns)
Sent: Saturday, May 13, 2017 11:20
To: yug...@telincn.com; vpp-dev <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] Segmentation fault in recursively looking up fib entry.
https://gerrit.fd.io/r/#/c/6674/

/neale

From: "yug...@telincn.com" <yug...@telincn.com>
Date: Saturday, 13 May 2017 at 14:24
To: "Neale Ranns (nranns)" <nra...@cisco.com>, vpp-dev <vpp-dev@lists.fd.io>
Subject: Re: Re: [vpp-dev] Segmentation fault in recursively looking up fib entry.

Hi neale,

Could you leave me a message then?

Thanks,
Ewan

From: Neale Ranns (nranns) <nra...@cisco.com>
Date: 2017-05-13 20:33
To: yug...@telincn.com; vpp-dev <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] Segmentation fault in recursively looking up fib entry.

Hi Ewan,

That's a bug. I'll fix it ASAP.

Thanks,
neale

From: <vpp-dev-boun...@lists.fd.io> on behalf of "yug...@telincn.com" <yug...@telincn.com>
Date: Saturday, 13 May 2017 at 03:24
To: vpp-dev <vpp-dev@lists.fd.io>
Subject: [vpp-dev] Segmentation fault in recursively looking up fib entry.

Hi all,

Below are my main configs; everything else is default. When I run the command "vppctl ip route 0.0.0.0/0 via 10.10.40.1" to add a default route, VPP crashes. It looks like fib_entry_get_resolving_interface calls itself recursively until VPP crashes (a simplified sketch of the loop follows the backtrace below). Is there something wrong?

Config info:

root@ubuntu:/usr/src/1704/VBRASV100R001/vpp1704/build-root# vppctl show int addr
GigabitEthernet2/6/0 (up):
  192.168.60.1/24
GigabitEthernet2/7/0 (up):
  10.10.55.51/24
host-vGE2_6_0 (up):
host-vGE2_7_0 (up):
local0 (dn):

root@ubuntu:/usr/src/1704/VBRASV100R001/vpp1704/build-root# vppctl show ip fib
ipv4-VRF:0, fib_index 0, flow hash: src dst sport dport proto
0.0.0.0/0
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:0 buckets:1 uRPF:0 to:[142:12002]]
    [0] [@0]: dpo-drop ip4
0.0.0.0/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:1 buckets:1 uRPF:1 to:[0:0]]
    [0] [@0]: dpo-drop ip4
10.10.55.0/24
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:10 buckets:1 uRPF:9 to:[0:0]]
    [0] [@4]: ipv4-glean: GigabitEthernet2/7/0
10.10.55.51/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:11 buckets:1 uRPF:10 to:[0:0]]
    [0] [@2]: dpo-receive: 10.10.55.51 on GigabitEthernet2/7/0
192.168.60.0/24
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:8 buckets:1 uRPF:7 to:[0:0]]
    [0] [@4]: ipv4-glean: GigabitEthernet2/6/0
192.168.60.1/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:9 buckets:1 uRPF:8 to:[60:3600]]
    [0] [@2]: dpo-receive: 192.168.60.1 on GigabitEthernet2/6/0
192.168.60.30/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:12 buckets:1 uRPF:11 to:[60:3600]]
    [0] [@5]: ipv4 via 192.168.60.30 GigabitEthernet2/6/0: f44d3016eac1000c2904f74e0800
224.0.0.0/4
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:3 buckets:1 uRPF:3 to:[0:0]]
    [0] [@0]: dpo-drop ip4
240.0.0.0/4
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:2 buckets:1 uRPF:2 to:[0:0]]
    [0] [@0]: dpo-drop ip4
255.255.255.255/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:4 buckets:1 uRPF:4 to:[0:0]]
    [0] [@0]: dpo-drop ip4
root@ubuntu:/usr/src/1704/VBRASV100R001/vpp1704/build-root#

Thread 1 "vpp_main" received signal SIGSEGV, Segmentation fault.
fib_entry_get_resolving_interface (entry_index=12) at /usr/src/1704/VBRASV100R001/vpp1704/build-data/../src/vnet/fib/fib_entry.c:1325
1325        fib_entry = fib_entry_get(entry_index);
(gdb)
#97831 0x00007fe5435a36a8 in fib_path_get_resolving_interface (path_index=<optimized out>) at /usr/src/1704/VBRASV100R001/vpp1704/build-data/../src/vnet/fib/fib_path.c:1637
#97832 0x00007fe5435a06f3 in fib_path_list_get_resolving_interface (path_list_index=<optimized out>) at /usr/src/1704/VBRASV100R001/vpp1704/build-data/../src/vnet/fib/fib_path_list.c:617
#97833 0x00007fe54359a7b5 in fib_entry_get_resolving_interface (entry_index=<optimized out>) at /usr/src/1704/VBRASV100R001/vpp1704/build-data/../src/vnet/fib/fib_entry.c:1327
#97834 0x00007fe5435a36a8 in fib_path_get_resolving_interface (path_index=<optimized out>) at /usr/src/1704/VBRASV100R001/vpp1704/build-data/../src/vnet/fib/fib_path.c:1637
#97835 0x00007fe5435a06f3 in fib_path_list_get_resolving_interface (path_list_index=<optimized out>) at /usr/src/1704/VBRASV100R001/vpp1704/build-data/../src/vnet/fib/fib_path_list.c:617
#97836 0x00007fe54359a7b5 in fib_entry_get_resolving_interface (entry_index=<optimized out>) at /usr/src/1704/VBRASV100R001/vpp1704/build-data/../src/vnet/fib/fib_entry.c:1327
#97837 0x00007fe5435a36a8 in fib_path_get_resolving_interface (path_index=<optimized out>) at /usr/src/1704/VBRASV100R001/vpp1704/build-data/../src/vnet/fib/fib_path.c:1637
#97838 0x00007fe5435a06f3 in fib_path_list_get_resolving_interface (path_list_index=<optimized out>) at /usr/src/1704/VBRASV100R001/vpp1704/build-data/../src/vnet/fib/fib_path_list.c:617
#97839 0x00007fe54359a7b5 in fib_entry_get_resolving_interface (entry_index=entry_index@entry=0) at /usr/src/1704/VBRASV100R001/vpp1704/build-data/../src/vnet/fib/fib_entry.c:1327
#97840 0x00007fe5432687e1 in arp_input (vm=0x7fe5448882a0 <vlib_global_main>, node=0x7fe5021e8580, frame=0x7fe504cc4c00) at /usr/src/1704/VBRASV100R001/vpp1704/build-data/../src/vnet/ethernet/arp.c:1381
#97841 0x00007fe5446350d9 in dispatch_node (vm=0x7fe5448882a0 <vlib_global_main>, node=0x7fe5021e8580, type=<optimized out>, dispatch_state=VLIB_NODE_STATE_POLLING, frame=<optimized out>, last_time_stamp=498535532086144) at /usr/src/1704/VBRASV100R001/vpp1704/build-data/../src/vlib/main.c:998
#97842 0x00007fe5446353cd in dispatch_pending_node (vm=vm@entry=0x7fe5448882a0 <vlib_global_main>, p=0x7fe504ce589c, last_time_stamp=<optimized out>) at /usr/src/1704/VBRASV100R001/vpp1704/build-data/../src/vlib/main.c:1144
#97843 0x00007fe544635e3d in vlib_main_or_worker_loop (is_main=1, vm=0x7fe5448882a0 <vlib_global_main>) at /usr/src/1704/VBRASV100R001/vpp1704/build-data/../src/vlib/main.c:1588
#97844 vlib_main_loop (vm=0x7fe5448882a0 <vlib_global_main>) at /usr/src/1704/VBRASV100R001/vpp1704/build-data/../src/vlib/main.c:1608
#97845 vlib_main (vm=vm@entry=0x7fe5448882a0 <vlib_global_main>, input=input@entry=0x7fe501bd9fa0) at /usr/src/1704/VBRASV100R001/vpp1704/build-data/../src/vlib/main.c:1736
#97846 0x00007fe54466eee3 in thread0 (arg=140622674035360) at /usr/src/1704/VBRASV100R001/vpp1704/build-data/../src/vlib/unix/main.c:507
#97847 0x00007fe542b41c60 in clib_calljmp () at /usr/src/1704/VBRASV100R001/vpp1704/build-data/../src/vppinfra/longjmp.S:110
#97848 0x00007ffd39782140 in ?? ()
#97849 0x00007fe54466f8dd in vlib_unix_main (argc=<optimized out>, argv=<optimized out>) at /usr/src/1704/VBRASV100R001/vpp1704/build-data/../src/vlib/unix/main.c:604

yug...@telincn.com
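The backtrace shows a three-frame cycle (fib_entry_get_resolving_interface -> fib_path_get_resolving_interface -> fib_path_list_get_resolving_interface and back) repeating until the stack overflows: 10.10.40.1 matches no connected prefix in the fib dump above, so its covering entry is the 0.0.0.0/0 route being added, and the default route ends up resolving through itself. Below is a minimal, self-contained C sketch of that failure mode, together with a visited-set guard of the kind that breaks such loops. The types and function names are simplified stand-ins, not VPP's actual data structures, and the guard is an illustration only; the real fix is the one in https://gerrit.fd.io/r/#/c/6674/.

/*
 * Hypothetical, simplified model of the entry -> path -> entry resolution
 * walk seen in the backtrace above. Build with: cc -o fib_loop fib_loop.c
 */
#include <stdio.h>
#include <stdbool.h>

#define N_ENTRIES    4
#define NO_INTERFACE (-1)

typedef struct {
    int via_entry;   /* index of the covering entry this one resolves through */
    int sw_if_index; /* directly resolving interface, or NO_INTERFACE */
} entry_t;

static entry_t entries[N_ENTRIES];

/* Unguarded walk, as in the crashing code path: recurses without bound
 * when an entry's resolution chain leads back to itself. */
static int resolving_interface_unsafe(int ei)
{
    if (entries[ei].sw_if_index != NO_INTERFACE)
        return entries[ei].sw_if_index;
    return resolving_interface_unsafe(entries[ei].via_entry);
}

/* Guarded walk: a visited set detects the resolution loop and reports
 * the entry as unresolved instead of recursing forever. */
static int resolving_interface_safe(int ei, bool *visited)
{
    if (visited[ei])
        return NO_INTERFACE; /* loop detected: no resolving interface */
    visited[ei] = true;
    if (entries[ei].sw_if_index != NO_INTERFACE)
        return entries[ei].sw_if_index;
    return resolving_interface_safe(entries[ei].via_entry, visited);
}

int main(void)
{
    /* "ip route 0.0.0.0/0 via 10.10.40.1": 10.10.40.1 is covered only by
     * the default route itself, so entry 0 (0.0.0.0/0) resolves via
     * entry 1 (the 10.10.40.1 next-hop) and entry 1 back via entry 0. */
    entries[0] = (entry_t){ .via_entry = 1, .sw_if_index = NO_INTERFACE };
    entries[1] = (entry_t){ .via_entry = 0, .sw_if_index = NO_INTERFACE };

    bool visited[N_ENTRIES] = { false };
    printf("resolving interface: %d (loop detected)\n",
           resolving_interface_safe(0, visited));

    /* resolving_interface_unsafe(0) would recurse until the stack
     * overflows, matching frames #97831..#97839 above. */
    (void) resolving_interface_unsafe; /* referenced only in the comment */
    return 0;
}

Whether the actual patch tracks visited entries this way or restructures the resolution is for the gerrit change above; the point is only that an unconstrained entry -> path -> entry walk needs a termination condition.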
_______________________________________________ vpp-dev mailing list vpp-dev@lists.fd.io https://lists.fd.io/mailman/listinfo/vpp-dev