zhannngchen opened a new pull request, #13967:
URL: https://github.com/apache/doris/pull/13967
…triggered twice
# Proposed changes
Issue Number: close #xxx
## Problem summary
When the load memory hard limit is reached, all load channels should wait on the lock of LoadChannelMgr until the current reduce-memory work is finished. In the current implementation, there is a bug that might cause some threads to be woken up before the reduce-memory work has finished:
1. Thread A finds that the soft limit is reached, picks a load channel, and waits for its reduce-memory work to finish.
2. Memory usage keeps increasing.
3. Thread B finds that the hard limit is reached (either the load memory hard limit or the process soft limit), picks a load channel to reduce memory, and sets the variable `_should_wait_flush` to true.
4. Thread C finds that `_should_wait_flush` is true and waits on `_wait_flush_cond`.
5. Thread A finishes its own reduce-memory work, finds that `_should_wait_flush` is true, sets it to false, and notifies all waiting threads.
6. Thread C is woken up and picks a load channel to start another round of reduce-memory work, even though thread B's reduce-memory work has not finished, so the reduce-memory work is triggered twice (see the sketch below).
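
To make the race concrete, here is a minimal, hypothetical C++ sketch of the synchronization pattern described above. This is not the actual Doris `LoadChannelMgr` code: only the names `_should_wait_flush` and `_wait_flush_cond` come from this description; the class name, the method names, and the `do_reduce_memory()` helper are invented for illustration.

```cpp
#include <condition_variable>
#include <mutex>

// Hypothetical sketch of the race, not the real LoadChannelMgr.
class LoadChannelMgrSketch {
public:
    // Thread B's path: the hard limit is reached, so mark that other
    // threads must wait, then do the (slow) reduce-memory work.
    void reduce_mem_on_hard_limit() {
        {
            std::lock_guard<std::mutex> l(_lock);
            _should_wait_flush = true;
        }
        do_reduce_memory();  // thread C may be woken before this returns
        std::lock_guard<std::mutex> l(_lock);
        _should_wait_flush = false;
        _wait_flush_cond.notify_all();
    }

    // Thread A's path: soft-limit reduce-memory work that started earlier.
    void reduce_mem_on_soft_limit() {
        do_reduce_memory();
        std::lock_guard<std::mutex> l(_lock);
        // BUG: thread A clears a flag that thread B set. Thread B's
        // reduce-memory work may still be running, but all waiters
        // (thread C) are woken and may trigger a second reduce-memory pass.
        if (_should_wait_flush) {
            _should_wait_flush = false;
            _wait_flush_cond.notify_all();
        }
    }

    // Thread C's path: block until the in-flight reduce-memory work is done.
    void wait_for_flush() {
        std::unique_lock<std::mutex> l(_lock);
        _wait_flush_cond.wait(l, [this] { return !_should_wait_flush; });
    }

private:
    void do_reduce_memory() { /* pick a load channel and flush it */ }

    std::mutex _lock;
    std::condition_variable _wait_flush_cond;
    bool _should_wait_flush = false;
};
```

One way to close the race (again a sketch of the idea, not necessarily what this PR implements) is to let only the thread that set `_should_wait_flush` clear it, or to count in-flight reduce-memory workers and notify waiters only when that count drops to zero.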
## Checklist (Required)
1. Does it affect the original behavior:
    - [ ] Yes
    - [ ] No
    - [ ] I don't know
2. Have unit tests been added:
    - [ ] Yes
    - [ ] No
    - [ ] No Need
3. Has documentation been added or modified:
    - [ ] Yes
    - [ ] No
    - [ ] No Need
4. Does it need to update dependencies:
    - [ ] Yes
    - [ ] No
5. Are there any changes that cannot be rolled back:
    - [ ] Yes (If Yes, please explain WHY)
    - [ ] No
## Further comments
If this is a relatively large or complex change, kick off the discussion at
[[email protected]](mailto:[email protected]) by explaining why you
chose the solution you did and what alternatives you considered, etc...