gunli opened a new issue, #11703:
URL: https://github.com/apache/inlong/issues/11703
### Description
```go
func (p *connPool) recoverAndRebalance() {
    // server failure is a low-probability event, so there is usually no endpoint
    // that needs to recover; a higher check frequency is also acceptable
    recoverTicker := time.NewTicker(10 * time.Second)
    defer recoverTicker.Stop()
    // dump conn pool info every 10s
    dumpTicker := time.NewTicker(10 * time.Second)
    defer dumpTicker.Stop()
    // rebalancing calculates a new conn count per endpoint based on the total
    // conn count; since our conns are closed after a timeout, the ticker
    // duration is set bigger than the close timeout
    reBalanceTicker := time.NewTicker(defaultConnCloseDelay + 30*time.Second)
    defer reBalanceTicker.Stop()
    // clean expired conns every minute
    var cleanExpiredConnTicker *time.Ticker
    if p.maxConnLifetime > 0 {
        cleanExpiredConnTicker = time.NewTicker(1 * time.Minute)
    }
    defer func() {
        if cleanExpiredConnTicker != nil {
            cleanExpiredConnTicker.Stop()
        }
    }()
    for {
        select {
        case <-recoverTicker.C:
            // recover failed endpoints, rebalance if anything recovered
            recovered := p.recover()
            if recovered {
                p.rebalance()
            }
        case <-dumpTicker.C:
            p.dump()
        case <-reBalanceTicker.C:
            p.rebalance()
        case <-p.closeCh:
            return
        default:
            if cleanExpiredConnTicker != nil {
                select {
                case <-cleanExpiredConnTicker.C:
                    p.cleanExpiredConns()
                default:
                    time.Sleep(time.Second)
                }
            } else {
                time.Sleep(time.Second)
            }
        }
    }
}
```
Currently, the implementation of `connPool.recoverAndRebalance()` is too
complicated, especially the polling logic around `cleanExpiredConnTicker`; it
would be beneficial to refactor it. A possible simplification is sketched below.
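
One way to simplify this, as a sketch only (assuming the `connPool` fields and methods shown in the snippet above, e.g. `maxConnLifetime`, `closeCh`, `recover`, `rebalance`, `dump`, `cleanExpiredConns`, and the `defaultConnCloseDelay` constant): a receive from a nil channel blocks forever in a `select`, so the optional expiry ticker's channel can sit directly in the main `select`. That removes the nested `select`, both `default` branches, and the one-second sleep polling.

```go
func (p *connPool) recoverAndRebalance() {
    recoverTicker := time.NewTicker(10 * time.Second)
    defer recoverTicker.Stop()
    dumpTicker := time.NewTicker(10 * time.Second)
    defer dumpTicker.Stop()
    reBalanceTicker := time.NewTicker(defaultConnCloseDelay + 30*time.Second)
    defer reBalanceTicker.Stop()

    // when maxConnLifetime is disabled, leave the channel nil: receiving from a
    // nil channel blocks forever, so that case simply never fires
    var cleanExpiredConnCh <-chan time.Time
    if p.maxConnLifetime > 0 {
        cleanExpiredConnTicker := time.NewTicker(1 * time.Minute)
        defer cleanExpiredConnTicker.Stop()
        cleanExpiredConnCh = cleanExpiredConnTicker.C
    }

    for {
        select {
        case <-recoverTicker.C:
            if p.recover() {
                p.rebalance()
            }
        case <-dumpTicker.C:
            p.dump()
        case <-reBalanceTicker.C:
            p.rebalance()
        case <-cleanExpiredConnCh:
            p.cleanExpiredConns()
        case <-p.closeCh:
            return
        }
    }
}
```

Besides being shorter, the fully blocking `select` keeps the goroutine parked between ticks instead of waking up every second.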
### InLong Component
InLong SDK
### Are you willing to submit PR?
- [x] Yes, I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of
Conduct](https://www.apache.org/foundation/policies/conduct)