hebbaa commented on issue #12769: URL: https://github.com/apache/apisix/issues/12769#issuecomment-3568850252
The issue is that one of the worker processes was killed by the kernel OOM killer:

```
dmesg | grep -i "killed"
[1742140.134417] Memory cgroup out of memory: Killed process 1454261 (openresty) total-vm:637692kB, anon-rss:377044kB, file-rss:10320kB, shmem-rss:20776kB, UID:636 pgtables:904kB oom_score_adj:970
[1743469.967848] Memory cgroup out of memory: Killed process 1454264 (openresty) total-vm:637220kB, anon-rss:375624kB, file-rss:10320kB, shmem-rss:20776kB, UID:636 pgtables:904kB oom_score_adj:970
[1748701.318216] Memory cgroup out of memory: Killed process 1454262 (openresty) total-vm:636236kB, anon-rss:375532kB, file-rss:10320kB, shmem-rss:20776kB, UID:636 pgtables:896kB oom_score_adj:970
[1763367.552343] Memory cgroup out of memory: Killed process 198704 (openresty) total-vm:633328kB, anon-rss:373304kB, file-rss:10348kB, shmem-rss:20792kB, UID:636 pgtables:888kB oom_score_adj:970
```

The APISIX data-plane logs confirm the kill:

```
2025/11/23 14:29:08 [alert] 1#1: worker process 214 exited on signal 9
```

After this, the master process tries to start a new worker. During its startup, the following errors are seen:

```
2025/11/23 10:46:52 [error] 54#54: *643711487 stream [lua] worker.lua:266: communicate(): event worker failed: failed to receive the header bytes: closed, context: ngx.timer
2025/11/23 10:46:52 [error] 116#116: *643711540 lua entry thread aborted: runtime error: /usr/local/openresty/lualib/resty/events/compat/init.lua:47: attempt to index upvalue 'ev' (a nil value)
2025/11/23 10:46:52 [error] 52#52: *643711538 stream [lua] worker.lua:266: communicate(): event worker failed: failed to receive the header bytes: closed, context: ngx.timer
```

The issue to investigate is why the new worker process didn't start cleanly.
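As a quick triage aid, the OOM-killer lines above already contain the numbers needed to compare the worker's memory footprint against the cgroup limit. A minimal sketch (the log line is copied verbatim from the `dmesg` output above; the parsing pattern is an assumption, not an APISIX tool):

```shell
# Parse one of the OOM-killer lines from dmesg and report the worker's
# anonymous RSS in MiB at the moment it was killed.
line='[1742140.134417] Memory cgroup out of memory: Killed process 1454261 (openresty) total-vm:637692kB, anon-rss:377044kB, file-rss:10320kB, shmem-rss:20776kB, UID:636 pgtables:904kB oom_score_adj:970'

# Extract the anon-rss value (in kB) and convert to MiB.
rss_kb=$(printf '%s\n' "$line" | sed -n 's/.*anon-rss:\([0-9][0-9]*\)kB.*/\1/p')
echo "anon-rss at kill time: $((rss_kb / 1024)) MiB"
```

Note that "Memory cgroup out of memory" plus `oom_score_adj:970` suggests the kill was triggered by a container/cgroup memory limit rather than host-wide pressure, so it may be worth comparing the ~368 MiB anon-rss per worker (times the worker count) against the configured memory limit for this deployment.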
