Hello,

I have a situation where I'm running out of client ports on a huge reverse-proxy.

Say I have an nginx upstream like this:

upstream geoplatform {
        hash $hashkey consistent;
        server 127.0.0.1:4079 fail_timeout=10s;
        server 127.0.0.1:4080 fail_timeout=10s;
        server 10.100.34.5:4079 fail_timeout=10s;
        server 10.100.34.5:4080 fail_timeout=10s;
        server 10.100.34.7:4079 fail_timeout=10s;
        server 10.100.34.7:4080 fail_timeout=10s;
        server 10.100.34.8:4079 fail_timeout=10s;
        server 10.100.34.8:4080 fail_timeout=10s;
}

And as soon as I switch traffic to it from DNS RR, I start getting "Can't assign requested address when connecting to ..." errors. The usual approach would be to assign multiple IP aliases to the destination backends, so that I get more distinct socket tuples. So I did this:

upstream geoplatform {
        hash $hashkey consistent;
        server 127.0.0.1:4079 fail_timeout=10s;
        server 127.0.0.1:4080 fail_timeout=10s;
        server 127.0.0.2:4079 fail_timeout=10s;
        server 127.0.0.2:4080 fail_timeout=10s;
        server 127.0.0.3:4079 fail_timeout=10s;
        server 127.0.0.3:4080 fail_timeout=10s;
        server 10.100.34.5:4079 fail_timeout=10s;
        server 10.100.34.5:4080 fail_timeout=10s;
        server 10.100.33.8:4079 fail_timeout=10s;
        server 10.100.33.8:4080 fail_timeout=10s;
        server 10.100.33.9:4079 fail_timeout=10s;
        server 10.100.33.9:4080 fail_timeout=10s;
        server 10.100.33.10:4079 fail_timeout=10s;
        server 10.100.33.10:4080 fail_timeout=10s;
        server 10.100.34.7:4079 fail_timeout=10s;
        server 10.100.34.7:4080 fail_timeout=10s;
        server 10.100.34.8:4079 fail_timeout=10s;
        server 10.100.34.8:4080 fail_timeout=10s;
        server 10.100.34.10:4079 fail_timeout=10s;
        server 10.100.34.10:4080 fail_timeout=10s;
        server 10.100.34.11:4079 fail_timeout=10s;
        server 10.100.34.11:4080 fail_timeout=10s;
        server 10.100.34.12:4079 fail_timeout=10s;
        server 10.100.34.12:4080 fail_timeout=10s;

}

Surprisingly, this didn't work. So I checked whether I really have that many connections. The trouble starts at around 130K connections, but even with the initial upstream configuration I should be able to handle 65535 - 10K (since net.inet.ip.portrange.first is 10K) = 55535 client ports per backend tuple, and 55535 * 8 ~ 450K connections. It looks like the client port is not being reused across socket tuples at all!
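To spell that arithmetic out (using the portrange values above; this is a rough upper bound that ignores TIME_WAIT and other transient states):

```shell
# client ports available per (local IP, remote IP, remote port) tuple,
# taking net.inet.ip.portrange.first = 10000 and the top of the range at 65535
ports=$((65535 - 10000))
echo "$ports"            # 55535 usable client ports per tuple
echo $((ports * 8))      # 444280, i.e. ~450K with 8 distinct backend tuples
```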

Indeed it is not: the line below was captured while there were no free ports left (the nearby console window was flooded with "Can't assign requested address"), so I would expect the local IP-port pair 10.100.34.6.57026 to appear in as many connections as I have servers. But it occurs only once:

# netstat -an | grep 10.100.34.6.57026
tcp4       0      0 10.100.34.6.57026      10.100.34.5.4079 ESTABLISHED


Second test: let's count how many times each local address:port pair appears in netstat -an and print only the ones used more than once:

# netstat -an -p tcp | grep -v LISTEN | grep 10.100 | awk '{print $4}' | sort | uniq -c | awk '$1 > 1'

 (none)
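For reference, here is how that counting pipeline behaves on synthetic data (the endpoints are made up): if a local address:port really were shared by two tuples, the uniq -c / awk stage would flag it:

```shell
# synthetic netstat-like column of local endpoints;
# 10.100.34.6.57026 appears twice, so it should be printed
printf '%s\n' \
    10.100.34.6.57026 \
    10.100.34.6.57026 \
    10.100.34.6.57027 \
  | sort | uniq -c | awk '$1 > 1 {print $2}'
# prints: 10.100.34.6.57026
```

On my box the real pipeline prints nothing, which is the whole point: no local port is ever used by more than one connection.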

So, it seems FreeBSD isn't reusing client ports out of the box.

Linux, on the other hand, does reuse local ports for client connections as long as the socket 4-tuple stays unique. How do I get the same behavior on FreeBSD?
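One workaround I'm considering (a sketch only, not tested here): since it's the local side of the tuple that is exhausted, adding extra *local* IP aliases and spreading outgoing connections across them with proxy_bind should multiply the available 4-tuples the same way the backend aliases were meant to. The split_clients key and the 10.100.34.60/.61 alias addresses below are hypothetical:

server {
        # hypothetical local aliases; 10.100.34.6 is the existing address
        split_clients "$remote_addr$remote_port" $src_ip {
                33%     10.100.34.6;
                33%     10.100.34.60;
                *       10.100.34.61;
        }

        location / {
                proxy_bind $src_ip;
                proxy_pass http://geoplatform;
        }
}

(proxy_bind accepts variables in reasonably recent nginx versions; I'd still like to know whether the kernel behavior itself can be changed.)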


Thanks.

Eugene.


_______________________________________________
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"