lhsoft opened a new issue, #3030:
URL: https://github.com/apache/brpc/issues/3030
**Describe the bug**
Under high concurrency the backend quickly hangs. A brief analysis points to the following cause:
#2819 added BTHREAD_GLOBAL_PRIORITY to the epoll thread's attr.
However, under high concurrency, when start_foreground calls priority_to_run there is no retry logic, so in extreme cases the epoll
bthread is lost and the node hangs.
```
void TaskGroup::priority_to_run(void* args_in) {
    ReadyToRunArgs* args = static_cast<ReadyToRunArgs*>(args_in);
    return tls_task_group->control()->push_priority_queue(args->tag,
                                                          args->meta->tid);
}

void push_priority_queue(bthread_tag_t tag, bthread_t tid) {
    _priority_queues[tag].push(tid);
}
```
By contrast, the normal ready_to_run_in_worker path retries when submitting a task:
```
void TaskGroup::ready_to_run_in_worker(void* args_in) {
    ReadyToRunArgs* args = static_cast<ReadyToRunArgs*>(args_in);
    return tls_task_group->ready_to_run(args->meta, args->nosignal);
}

inline void TaskGroup::push_rq(bthread_t tid) {
    while (!_rq.push(tid)) {
        // Created too many bthreads: a promising approach is to insert the
        // task into another TaskGroup, but we don't use it because:
        // * There're already many bthreads to run, inserting the bthread
        //   into other TaskGroup does not help.
        // * Insertions into other TaskGroups perform worse when all workers
        //   are busy at creating bthreads (proved by test_input_messenger in
        //   brpc)
        flush_nosignal_tasks();
        LOG_EVERY_SECOND(ERROR) << "_rq is full, capacity=" << _rq.capacity();
        // TODO(gejun): May cause deadlock when all workers are spinning here.
        // A better solution is to pop and run existing bthreads, however which
        // make set_remained()-callbacks do context switches and need extensive
        // reviews on related code.
        ::usleep(1000);
    }
}
```
**To Reproduce**
**Expected behavior**
**Versions**
OS:
Compiler:
brpc:
protobuf:
**Additional context/screenshots**
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]