traeak opened a new issue #8775:
URL: https://github.com/apache/trafficserver/issues/8775
Found this while performing internal builds for ats92. Tracked it to PR #8718, which switched some tests from EXPENSIVE to normal. Verified the core dumps on master as well. This only happens when doing a fresh build with `make -j check`; running the tests individually works fine (no memory fault errors).

```
../../build/_aux/test-driver: line 95: 5471 Aborted (core dumped) "$@" > $log_file 2>&1
FAIL: test_Alternate_S_to_L_remove_S
PASS: test_Alternate_L_to_S_remove_L
../../build/_aux/test-driver: line 95: 5474 Segmentation fault (core dumped) "$@" > $log_file 2>&1
FAIL: test_Alternate_S_to_L_remove_L
```

The best I have for this is from backtrace (no symbols):

```
1 lt-test_Alternate_S_to_L_remove_L error on 1 host
Callstack
  object_key_get
  CacheAltTest_S_to_L_remove_L::delete_earliest_dir(CacheVC*)
  CacheAltTest_S_to_L_remove_L::handle_cache_event(int, CacheTestBase*)
  CacheReadTest::read_event(int, void*)
  CacheVC::openReadMain(int, Event*)
  handleEvent
  AIOCallbackInternal::io_complete(int, void*)
  EThread::process_event(Event*, int)
  EThread::process_queue(Queue<Event, Event::Link_link>*, int*, int*)
  EThread::execute_regular()
  execute
  EThread::execute()
  spawn_thread_internal

1 lt-test_Alternate_S_to_L_remove_S error on 1 host
Callstack
  abort
  std::terminate()
  Catch::AssertionHandler::complete() [clone .part.0]
  Catch::AssertionHandler::complete() [clone .cold]
  CacheAltReadAgain::handle_cache_event(int, CacheTestBase*)
  CacheReadTest::read_event(int, void*)
  CacheVC::openReadStartEarliest(int, Event*)
  handleEvent
  AIOCallbackInternal::io_complete(int, void*)
  EThread::process_event(Event*, int)
  EThread::process_queue(Queue<Event, Event::Link_link>*, int*, int*)
  EThread::execute_regular()
  execute
  EThread::execute()
  spawn_thread_internal
```
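For anyone trying to reproduce: roughly the following (a sketch assuming the usual autotools out-of-tree build; the cache test directory path is a guess, adjust to your layout):

```
# fresh build, then run the whole suite in parallel; this is where the aborts/segfaults show up
make -j check

# running a single test binary on its own (via its libtool wrapper) passed with no memory fault
cd iocore/cache && ./test_Alternate_S_to_L_remove_L
```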
