Thu, 23 Jan 2020 at 20:54, Willy Tarreau <w...@1wt.eu>:
> On Thu, Jan 23, 2020 at 08:40:19PM +0500, ???? ??????? wrote:
> > those timeouts are not related to travis itself, I believe they are mostly
> > related to either real failures or test instability (race conditions).
>
> These tests are racy by nature and some rely on short delays (i.e. health
> checks). If tests are run on an overloaded machine I don't find it
> surprising that a few will fail. Often I click restart and they succeed.
>
we started to observe ASan failures:

*** h1 debug|AddressSanitizer:DEADLYSIGNAL
*** h1 debug|=================================================================
*** h1 debug|==10330==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000018 (pc 0x0000006e27b1 bp 0x000000000003 sp 0x7ffe0d8ee080 T0)
*** h1 debug|==10330==The signal is caused by a READ memory access.
*** h1 debug|==10330==Hint: address points to the zero page.
**** dT 0.015
*** h1 debug|==10330==WARNING: failed to fork (errno 11)
*** h1 debug|==10330==WARNING: failed to fork (errno 11)
*** h1 debug|==10330==WARNING: failed to fork (errno 11)
**** dT 0.016
*** h1 debug|==10330==WARNING: failed to fork (errno 11)
*** h1 debug|==10330==WARNING: failed to fork (errno 11)
*** h1 debug|==10330==WARNING: Failed to use and restart external symbolizer!
*** h1 debug|    #0 0x6e27b0  (/home/travis/build/haproxy/haproxy/haproxy+0x6e27b0)
*** h1 debug|    #1 0x551799  (/home/travis/build/haproxy/haproxy/haproxy+0x551799)
*** h1 debug|    #2 0x8218ed  (/home/travis/build/haproxy/haproxy/haproxy+0x8218ed)
*** h1 debug|    #3 0x735000  (/home/travis/build/haproxy/haproxy/haproxy+0x735000)
*** h1 debug|    #4 0x7328c8  (/home/travis/build/haproxy/haproxy/haproxy+0x7328c8)
*** h1 debug|    #5 0x7ff91cd88b96  (/lib/x86_64-linux-gnu/libc.so.6+0x21b96)
*** h1 debug|    #6 0x41bd09  (/home/travis/build/haproxy/haproxy/haproxy+0x41bd09)
*** h1 debug|
*** h1 debug|AddressSanitizer can not provide additional info.
*** h1 debug|SUMMARY: AddressSanitizer: SEGV (/home/travis/build/haproxy/haproxy/haproxy+0x6e27b0)
*** h1 debug|==10330==ABORTING
**** dT 0.017
**** c1 fd=9 EOF, as expected

I will have a look. Maybe we will switch ASan off for good.

> These ones often fail there and only there, so the environment definitely
> has an impact. And seeing the execution time which has become 10-30 times
> what it is on a simple laptop really makes me feel like the VMs are
> seriously stressed.
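[Editor's note, not part of the thread: the trace above is unsymbolized because the external symbolizer could not be forked (errno 11 = EAGAIN, likely a process limit on the stressed VM). The raw module+offset frames can still be resolved offline with addr2line against the same binary, provided it was built with debug info. A minimal sketch on a throwaway program, since the actual haproxy binary is not available here:]

    # Build a tiny program with debug info; -no-pie keeps symbol addresses
    # stable so the nm address can be fed straight to addr2line.
    cat > demo.c <<'EOF'
    #include <stdio.h>
    void crash_site(void) { puts("hello"); }
    int main(void) { crash_site(); return 0; }
    EOF
    cc -g -O0 -no-pie -o demo demo.c

    # Pick the address of a symbol, as it would appear in a raw ASan frame.
    addr=$(nm demo | awk '$3 == "crash_site" { print $1 }')

    # Resolve it back to function name and source location.
    addr2line -f -e demo "$addr"

For the trace above, the equivalent would be `addr2line -f -e haproxy 0x6e27b0 0x551799 ...` on the exact binary Travis built. An alternative is pointing ASan at a symbolizer via `ASAN_SYMBOLIZER_PATH`, though that would not help here since the fork itself failed.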
> > the bigger the number of tests, the more we depend on those timeouts.
> >
> > however, we can try to run tests in parallel :)
>
> It would be worse!
>
> Willy
>