Hello Neale,

It’s in review: https://gerrit.fd.io/r/#/c/4433/

Affected tests are skipped there at the moment.

Regards,
Jan

From: Neale Ranns (nranns)
Sent: Monday, February 20, 2017 12:49
To: Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco) 
<jgel...@cisco.com>; vpp-dev@lists.fd.io
Cc: csit-...@lists.fd.io
Subject: Re: [csit-dev] [vpp-dev] reset_fib API issue in case of IPv6 FIB

Hi Jan,

Can you please share the test code, so I can reproduce the problem and debug
it? Maybe push it as a draft to Gerrit and add me as a reviewer.

Thanks,
neale

From: "Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco)" <jgel...@cisco.com>
Date: Monday, 20 February 2017 at 09:41
To: "Neale Ranns (nranns)" <nra...@cisco.com>, "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Cc: "csit-...@lists.fd.io" <csit-...@lists.fd.io>
Subject: RE: [csit-dev] [vpp-dev] reset_fib API issue in case of IPv6 FIB

Hello Neale,

I tested it with vpp_lite built from the master branch. I rebased to the
current head (my parent is now 90c55724b583434957cf83555a084770f2efdd7a), but
the issue is still the same.

Regards,
Jan

From: Neale Ranns (nranns)
Sent: Friday, February 17, 2017 17:19
To: Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco) <jgel...@cisco.com>; vpp-dev@lists.fd.io
Cc: csit-...@lists.fd.io
Subject: Re: [csit-dev] [vpp-dev] reset_fib API issue in case of IPv6 FIB

Hi Jan,

What version of VPP are you testing?

Thanks,
neale

From: <csit-dev-boun...@lists.fd.io> on behalf of "Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco)" <jgel...@cisco.com>
Date: Friday, 17 February 2017 at 14:48
To: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Cc: "csit-...@lists.fd.io" <csit-...@lists.fd.io>
Subject: [csit-dev] [vpp-dev] reset_fib API issue in case of IPv6 FIB

Hello VPP dev team,

Using the reset_fib API command to reset an IPv6 FIB leads to an incorrect
entry in the FIB and to a crash of VPP.

Could somebody have a look at the Jira ticket https://jira.fd.io/browse/VPP-643,
please?

Thanks,
Jan

From make test log:

12:14:51,710 API: reset_fib ({'vrf_id': 1, 'is_ipv6': 1})
12:14:51,712 IPv6 VRF ID 1 reset
12:14:51,712 CLI: show ip6 fib
12:14:51,714 show ip6 fib
ipv6-VRF:0, fib_index 0, flow hash: src dst sport dport proto
::/0
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:5 buckets:1 uRPF:5 to:[30:15175]]
    [0] [@0]: dpo-drop ip6
fd01:4::1/128
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:44 buckets:1 uRPF:5 to:[0:0]]
    [0] [@0]: dpo-drop ip6
fd01:7::1/128
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:71 buckets:1 uRPF:5 to:[0:0]]
    [0] [@0]: dpo-drop ip6
fd01:a::1/128
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:98 buckets:1 uRPF:5 to:[0:0]]
    [0] [@0]: dpo-drop ip6
fe80::/10
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:6 buckets:1 uRPF:6 to:[0:0]]
    [0] [@2]: dpo-receive
ipv6-VRF:1, fib_index 1, flow hash: src dst sport dport proto
::/0
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:15 buckets:1 uRPF:13 to:[0:0]]
    [0] [@0]: dpo-drop ip6
fd01:1::/64
  UNRESOLVED
fe80::/10
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:16 buckets:1 uRPF:14 to:[0:0]]
    [0] [@2]: dpo-receive
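In the dump above, the bad state after the reset is the fd01:1::/64 entry left UNRESOLVED in fib_index 1. As a purely illustrative helper (not part of the CSIT test suite), one could scan "show ip6 fib" output for such entries like this:

```python
# Hypothetical helper: find prefixes reported as UNRESOLVED in a
# "show ip6 fib" CLI dump (a prefix line is followed by its state).
def unresolved_prefixes(dump):
    lines = dump.splitlines()
    # An UNRESOLVED line refers to the prefix printed just above it.
    return [lines[i - 1].strip() for i, line in enumerate(lines)
            if line.strip() == "UNRESOLVED"]

# Excerpt of the dump from the log above.
fib_dump = """\
ipv6-VRF:1, fib_index 1, flow hash: src dst sport dport proto
::/0
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:15 buckets:1 uRPF:13 to:[0:0]]
    [0] [@0]: dpo-drop ip6
fd01:1::/64
  UNRESOLVED
fe80::/10
  unicast-ip6-chain
"""

print(unresolved_prefixes(fib_dump))  # ['fd01:1::/64']
```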

And later:

12:14:52,170 CLI: packet-generator enable
12:14:57,171 --- addError() TestIP6VrfMultiInst.test_ip6_vrf_02( IP6 VRF  
Multi-instance test 2 - delete 2 VRFs
        ) called, err is (<type 'exceptions.IOError'>, IOError(3, 'Waiting for 
reply timed out'), <traceback object at 0x2abab83db5a8>)
12:14:57,172 formatted exception is:
Traceback (most recent call last):
  File "/usr/lib/python2.7/unittest/case.py", line 331, in run
    testMethod()
  File "/home/vpp/Documents/vpp/test/test_ip6_vrf_multi_instance.py", line 365, 
in test_ip6_vrf_02
    self.run_verify_test()
  File "/home/vpp/Documents/vpp/test/test_ip6_vrf_multi_instance.py", line 322, 
in run_verify_test
    self.pg_start()
  File "/home/vpp/Documents/vpp/test/framework.py", line 398, in pg_start
    cls.vapi.cli('packet-generator enable')
  File "/home/vpp/Documents/vpp/test/vpp_papi_provider.py", line 169, in cli
    r = self.papi.cli_inband(length=len(cli), cmd=cli)
  File "build/bdist.linux-x86_64/egg/vpp_papi/vpp_papi.py", line 305, in 
<lambda>
    f = lambda **kwargs: (self._call_vpp(i, msgdef, multipart, **kwargs))
  File "build/bdist.linux-x86_64/egg/vpp_papi/vpp_papi.py", line 547, in 
_call_vpp
    r = self.results_wait(context)
  File "build/bdist.linux-x86_64/egg/vpp_papi/vpp_papi.py", line 395, in 
results_wait
    raise IOError(3, 'Waiting for reply timed out')
IOError: [Errno 3] Waiting for reply timed out

12:14:57,172 --- tearDown() for TestIP6VrfMultiInst.test_ip6_vrf_02( IP6 VRF  
Multi-instance test 2 - delete 2 VRFs
        ) called ---
12:14:57,172 CLI: show trace
12:14:57,172 VPP subprocess died unexpectedly with returncode -6 [unknown]
12:14:57,172 --- addError() TestIP6VrfMultiInst.test_ip6_vrf_02( IP6 VRF  
Multi-instance test 2 - delete 2 VRFs
        ) called, err is (<class 'hook.VppDiedError'>, VppDiedError('VPP 
subprocess died unexpectedly with returncode -6 [unknown]',), <traceback object 
at 0x2abab8427098>)
12:14:57,173 formatted exception is:
Traceback (most recent call last):
  File "/usr/lib/python2.7/unittest/case.py", line 360, in run
    self.tearDown()
  File "/home/vpp/Documents/vpp/test/test_ip6_vrf_multi_instance.py", line 148, 
in tearDown
    super(TestIP6VrfMultiInst, self).tearDown()
  File "/home/vpp/Documents/vpp/test/framework.py", line 333, in tearDown
    self.logger.debug(self.vapi.cli("show trace"))
  File "/home/vpp/Documents/vpp/test/vpp_papi_provider.py", line 167, in cli
    self.hook.before_cli(cli)
  File "/home/vpp/Documents/vpp/test/hook.py", line 138, in before_cli
    self.poll_vpp()
  File "/home/vpp/Documents/vpp/test/hook.py", line 115, in poll_vpp
    raise VppDiedError(msg)
VppDiedError: VPP subprocess died unexpectedly with returncode -6 [unknown]
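For what it's worth, the negative returncode follows the usual subprocess convention: -6 means the VPP process was killed by signal 6, i.e. SIGABRT, which is consistent with VPP hitting an assert and aborting. A quick sanity check in Python (on Linux):

```python
import signal

# The test framework reported "returncode -6"; a negative returncode
# from a subprocess means termination by the corresponding signal.
returncode = -6
sig = signal.Signals(-returncode)
print(sig.name)  # SIGABRT
```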
_______________________________________________
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev