Sn0rt commented on issue #9225:
URL: https://github.com/apache/apisix/issues/9225#issuecomment-1506645873
> > The current implementation may suffer from imbalance at low request volumes. Currently, the heap used to maintain the connection counts is only balanced within a single worker. Can you try using a single nginx worker?
>
> Does it work normally when you test it?
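If I read the above correctly, the per-worker state behind `least_conn` amounts to roughly the following (a simplified sketch with a plain table scan, not the actual implementation, which keeps these scores in a binary heap; every nginx worker holds its own independent copy):

```lua
-- Simplified sketch of per-worker least-conn state (illustrative only).
local nodes = {
    ["127.0.0.1:1980"] = { weight = 1, score = 0 },
    ["127.0.0.2:1980"] = { weight = 1, score = 0 },
}

-- Pick the node with the lowest score (fewest weighted in-flight
-- connections) and charge the new connection against it.
local function pick()
    local best_addr, best
    for addr, node in pairs(nodes) do
        if not best or node.score < best.score then
            best_addr, best = addr, node
        end
    end
    best.score = best.score + 1 / best.weight
    return best_addr
end

-- Release the connection once the request finishes.
local function release(addr)
    local node = nodes[addr]
    node.score = node.score - 1 / node.weight
end
```

Since this state lives inside a single worker process, a single-worker run should come out balanced.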
I wrote a unit test and ran it on my local machine; it behaves normally. (As far as I know, test-nginx starts a single worker by default, so this should match your suggestion.) The unit test is as follows:
```lua
=== TEST 2: select least conn
--- apisix_yaml
upstreams:
  -
    id: 1
    type: least_conn
    nodes:
      "127.0.0.1:1980": 1
      "127.0.0.2:1980": 1
      "127.0.0.3:1980": 1
      "127.0.0.4:1980": 1
--- config
    location /t {
        content_by_lua_block {
            local http = require "resty.http"
            local uri = "http://127.0.0.1:" .. ngx.var.server_port
                        .. "/mysleep?seconds=0.1"

            local t = {}
            for i = 1, 3000 do
                local th = assert(ngx.thread.spawn(function(i)
                    local httpc = http.new()
                    local res, err = httpc:request_uri(uri .. i, {method = "GET"})
                    if not res then
                        ngx.log(ngx.ERR, err)
                        return
                    end
                end, i))
                table.insert(t, th)
            end
            for i, th in ipairs(t) do
                ngx.thread.wait(th)
            end
        }
    }
--- request
GET /t
--- grep_error_log eval
qr/proxy request to \S+ while connecting to upstream/
--- grep_error_log_out
proxy request to 127.0.0.1:1980 while connecting to upstream
proxy request to 0.0.0.0:1980 while connecting to upstream
proxy request to 127.0.0.1:1980 while connecting to upstream
```
Try running it, and ignore the failed output check (the `--- grep_error_log_out` section does not match the 3000 requests; only the error log matters here):
```bash
$ prove -I. -I../test-nginx/inc -I../test-nginx/lib -r t/node/least_conn.t
```
Use the error log to count the backend distribution (note the grep pattern below only counts the first three of the four nodes):
```bash
$ cat t/servroot/logs/error.log \
    | grep 'while connecting to upstream' \
    | grep -o 'run():.*client' \
    | grep -o 'run(): proxy request to 127\.0\.0\.[1-3]' \
    | sort | uniq -c
    773 run(): proxy request to 127.0.0.1
    770 run(): proxy request to 127.0.0.2
    843 run(): proxy request to 127.0.0.3
```
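So within one worker the spread is fairly even. The imbalance at low request volumes mentioned above is, as far as I understand it, just each worker holding a private copy of those scores. A toy model of two workers (purely illustrative, not the real code):

```lua
-- Toy model: two workers, each with a private view of the in-flight
-- counts for two nodes "a" and "b".
local function new_worker()
    return { a = 0, b = 0 }
end

local w1, w2 = new_worker(), new_worker()

-- Least-conn as seen from a single worker.
local function pick(w)
    if w.a <= w.b then
        w.a = w.a + 1
        return "a"
    end
    w.b = w.b + 1
    return "b"
end

-- Two concurrent requests, one per worker: both land on node "a",
-- because neither worker sees the other's in-flight request.
print(pick(w1), pick(w2))   -- a   a
```

With many requests per worker the private view converges to the global one, but with only a handful of requests the picks can pile up on the same node.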