Hi Juraj,

Could you try exporting VCL_DEBUG=1 (or higher) before running the tests?
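For instance, something along these lines. This is only a sketch: the `make test` invocation is from memory of the VPP tree and may differ in your checkout, and the cleanup step just reflects the zombie sock_test_server/sock_test_client processes mentioned earlier in this thread:

```shell
# Clean up any sock_test_server/sock_test_client zombies left over from an
# interrupted run, so they don't keep holding the fixed ports the VCL tests
# use. "|| true" keeps this harmless when nothing is running.
pkill -f 'sock_test_(server|client)' 2>/dev/null || true

# Turn up VCL debug output; higher values are more verbose.
export VCL_DEBUG=1

# Then re-run just the VCL tests, e.g. (target name is an assumption):
# make test TEST=vcl V=1
```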
Florin

> On Dec 3, 2018, at 3:41 AM, Juraj Linkeš <juraj.lin...@pantheon.tech> wrote:
>
> Hi Florin,
>
> So the tests should work fine in parallel, thanks for the clarification.
>
> I tried running the tests again and I could reproduce it with a keyboard
> interrupt or when the test produced a core (and was then killed by the parent
> run_tests process), but the logs don't say anything - just that the server
> and client were started, and that's where the logs stop. I guess the child
> VCL worker process is not handled in this case, though I wonder why
> run_in_venv_with_cleanup.sh doesn't clean it up.
>
> Juraj
>
> From: Florin Coras [mailto:fcoras.li...@gmail.com]
> Sent: Thursday, November 29, 2018 5:04 PM
> To: Juraj Linkeš <juraj.lin...@pantheon.tech>
> Cc: Ole Troan <otr...@employees.org>; vpp-dev <vpp-dev@lists.fd.io>
> Subject: Re: [vpp-dev] Verify issues (GRE)
>
> Hi Juraj,
>
> Those tests exercise the stack in vpp, so they don't use up Linux stack
> ports. Moreover, both cut-through and through-the-stack tests use
> self.shm_prefix when connecting to vpp's binary API. So, as long as that
> variable is properly updated, VCL and implicitly LDP will attach to and use
> ports on the right vpp instance.
>
> As for sock_test_client/server not being properly killed, did you find
> anything in the logs that would indicate why it happened?
>
> Florin
>
> On Nov 29, 2018, at 3:18 AM, Juraj Linkeš <juraj.lin...@pantheon.tech> wrote:
>
> Hi Ole,
>
> I've noticed a few things about the VCL testcases:
> - The VCL testcases all use the same ports, which makes them
>   unsuitable for parallel test runs
> - Another thing about these testcases is that when they don't
>   finish properly, the sock_test_server and client stay running as zombie
>   processes (and thus use up ports).
>   It's easily reproducible locally by interrupting the tests, but I'm not
>   sure whether this could actually arise in CI
> - Which means that if one testcase finishes improperly (e.g. is killed
>   because of a timeout), all of the other VCL testcases will likely also fail
>
> Hope this helps if there's anyone looking into those tests,
> Juraj
>
> From: Ole Troan [mailto:otr...@employees.org]
> Sent: Wednesday, November 28, 2018 7:56 PM
> To: vpp-dev <vpp-dev@lists.fd.io>
> Subject: [vpp-dev] Verify issues (GRE)
>
> Guys,
>
> The verify job has been unstable over the last few days.
> We see some instability in the Jenkins build system, in the test harness
> itself, and in the tests.
> On my 18.04 machine I'm seeing intermittent failures in GRE, GBP, DHCP, VCL.
>
> It looks like Jenkins is functioning correctly now.
> Ed and I are also testing a revert of all the changes made to the test
> framework itself over the last couple of days. A bit harsh, but we think this
> might be the quickest way back to some level of stability.
>
> Then we need to fix the tests that are in themselves unstable.
>
> Any volunteers to see if they can figure out why GRE fails?
>
> Cheers,
> Ole
>
>
> GRE Test Case
> ==============================================================================
> GRE IPv4 tunnel Tests OK
> GRE IPv6 tunnel Tests OK
> GRE tunnel L2 Tests OK
> 19:37:47,505 Unexpected packets captured:
> Packet #0:
> 0000  02010000FF0202FE70A06AD308004500  ........p.j...E.
> 0010  002A000100003F11219FAC100101AC10  .*....?.!.......
> 0020  010204D204D2001672A9343336392033  ........r.4369 3
> 0030  2033202D31202D31                  3 -1 -1
>
> ###[ Ethernet ]###
>   dst       = 02:01:00:00:ff:02
>   src       = 02:fe:70:a0:6a:d3
>   type      = IPv4
> ###[ IP ]###
>   version   = 4
>   ihl       = 5
>   tos       = 0x0
>   len       = 42
>   id        = 1
>   flags     =
>   frag      = 0
>   ttl       = 63
>   proto     = udp
>   chksum    = 0x219f
>   src       = 172.16.1.1
>   dst       = 172.16.1.2
>   \options   \
> ###[ UDP ]###
>   sport     = 1234
>   dport     = 1234
>   len       = 22
>   chksum    = 0x72a9
> ###[ Raw ]###
>   load      = '4369 3 3 -1 -1'
>
> Ten more packets
>
> ###[ UDP ]###
>   sport     = 1234
>   dport     = 1234
>   len       = 22
>   chksum    = 0x72a9
> ###[ Raw ]###
>   load      = '4369 3 3 -1 -1'
>
> ** Ten more packets
>
> Print limit reached, 10 out of 257 packets printed
> 19:37:47,770 REG: Couldn't remove configuration for object(s):
> 19:37:47,770 <vpp_ip_route.VppIpRoute object at 0x7f4c1e7e9b10>
> GRE tunnel VRF Tests ERROR [ temp dir used by test case: /tmp/vpp-unittest-TestGRE-hthaHC ]
>
> ==============================================================================
> ERROR: GRE tunnel VRF Tests
> ------------------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/vpp/16257/test/test_gre.py", line 61, in tearDown
>     super(TestGRE, self).tearDown()
>   File "/vpp/16257/test/framework.py", line 546, in tearDown
>     self.registry.remove_vpp_config(self.logger)
>   File "/vpp/16257/test/vpp_object.py", line 86, in remove_vpp_config
>     (", ".join(str(x) for x in failed)))
> Exception: Couldn't remove configuration for object(s): 1:2.2.2.2/32
>
> ==============================================================================
> FAIL: GRE tunnel VRF Tests
> ------------------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/vpp/16257/test/test_gre.py", line 787, in test_gre_vrf
>     remark="GRE decap packets in wrong VRF")
>   File "/vpp/16257/test/vpp_pg_interface.py", line 264, in assert_nothing_captured
>     (self.name, remark))
> AssertionError: Non-empty capture file present for interface pg0 (GRE decap
> packets in wrong VRF)
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#11484): https://lists.fd.io/g/vpp-dev/message/11484
Mute This Topic: https://lists.fd.io/mt/28473762/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-