Andrew Svetlov wrote:
> > Not sure how that works?  Each client has an accepted socket, which is
> > bound to a local port number, and there are 65536 TCP port numbers
> > available.  Unless you're using 15+ coroutines per client, you probably
> > won't reach 1M coroutines that way.
>
> I'm sorry, but the accepted socket has the same local port number as
> the listening one.
> Routing is performed by (local_ip, local_port, remote_ip, remote_port)
> quad.

IIRC, that combination is usually referred to as the connection's 4-tuple or
"socket pair" (source IP addr + port num, destination IP addr + port num). All
four are required in TCP; in UDP, the source port is optional, so a datagram
can effectively be addressed with just (remote_ip, remote_port).
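
To illustrate the 4-tuple point, here's a quick sketch of my own (requires
Python 3.8+ for socket.create_server; port 0 just asks the OS for an ephemeral
port): two connections accepted from the same listener report the same local
port and differ only in the remote end.

import socket

# Two client connections to one listener: both accepted sockets share the
# listener's local port and are distinguished only by the remote address.
listener = socket.create_server(("127.0.0.1", 0))
port = listener.getsockname()[1]

c1 = socket.create_connection(("127.0.0.1", port))
c2 = socket.create_connection(("127.0.0.1", port))
a1, peer1 = listener.accept()
a2, peer2 = listener.accept()

print(a1.getsockname(), peer1)  # same local (ip, port)...
print(a2.getsockname(), peer2)  # ...different remote (ip, port)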

Also, it's possible to bind multiple AF_INET (or AF_INET6) sockets to a
single local address and port by using the SO_REUSEPORT socket option, which
we discussed recently in bpo-37228 (https://bugs.python.org/issue37228). The
only requirement is that every socket bound to the same address is created
under the same effective UID (from my understanding, SO_REUSEPORT is typically
used to let multiple processes each bind their own listening socket to the
same address, so the kernel can distribute incoming connections among them).

TL;DR: It's definitely possible to have more than one client per TCP port.

I'll admit that I've never personally seen production code that uses anywhere
near 1M coroutine objects, but we shouldn't prevent users from doing so
without a good reason. At the present moment, it's rather trivial to create 1M
coroutine objects on any system with ~1.3GB+ of available main memory (see my
code example in
https://mail.python.org/archives/list/python-dev@python.org/message/WYZHKRGNOT252O3BUTFNDVCIYI6WSBXZ/
).
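
For illustration, a minimal sketch of my own (not the exact snippet from the
linked message; Linux-only because of the resource module, and exact memory
usage varies by platform and Python version) that creates 1M coroutine objects
and reports peak memory usage:

import resource

async def coro():
    pass

# Create 1M coroutine objects without ever awaiting them.
coros = [coro() for _ in range(1_000_000)]

# ru_maxrss is reported in KiB on Linux.
peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print(f"peak RSS: ~{peak // 1024} MiB for {len(coros):,} coroutine objects")

# Close them explicitly to avoid "coroutine was never awaited" warnings.
for c in coros:
    c.close()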

There's also the infamous "C10M" problem: accepting 10 million concurrent
clients without significant performance issues. This is mostly theoretical at
the moment, but there's an article that explains how the problem could be
addressed with 10M goroutines: https://goroutines.com/10m. I see no reason why
this couldn't potentially be translated to Python coroutines, combined with an
optimized event loop policy such as uvloop's.
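
As a minimal sketch of what I mean (assuming the third-party uvloop package is
installed; the echo handler and port 8080 are arbitrary), swapping in uvloop's
event loop policy for an asyncio server looks roughly like this:

import asyncio
import uvloop

async def handle(reader, writer):
    # Trivial echo handler; one coroutine per client connection.
    data = await reader.read(1024)
    writer.write(data)
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main():
    server = await asyncio.start_server(handle, "0.0.0.0", 8080)
    async with server:
        await server.serve_forever()

asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
asyncio.run(main())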

But, either way, Mark Shannon removed the 1M coroutine limit from PEP 611,
since it had the weakest rationale of all the proposed limits and received a
significant amount of push-back from the dev community.

Andrew Svetlov wrote:
> The listening socket can accept hundreds of thousands of concurrent
> client connections.
> The only thing that should be tuned for this is increasing the limit
> of file descriptors.

The default soft limit on file descriptors per process on Linux is 1024 (which
can be raised), and a per-process limit can also be sidestepped by forking
child processes. I have no idea what a realistic maximum for the global FD
count would be on most modern servers, but here's the upper bound on Linux
kernel 5.3.13:

[aeros:~]$ cat /proc/sys/fs/file-max
9223372036854775807

My system's current hard limit of file descriptors is much lower, but is
still fairly substantial:

[aeros:~]$ ulimit -nH
524288
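
For reference, the soft limit can also be raised from within a Python process
(up to the hard limit) via the resource module -- a minimal sketch:

import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"before: soft={soft}, hard={hard}")

# Raise the soft limit to the hard limit; only root can raise the hard limit.
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
print(f"after: soft={resource.getrlimit(resource.RLIMIT_NOFILE)[0]}")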

I recall reading somewhere that every additional 100 file descriptors require
approximately 1MB of main memory. Based on that estimate, 1M FDs would require
~10GB, which doesn't seem unreasonable to me, especially on a modern server.
I'd imagine the actual memory usage depends on how much data is being buffered
at once in the pipes associated with each FD, but I believe that can be capped
with the fcntl() command F_SETPIPE_SZ (https://linux.die.net/man/2/fcntl).
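
For example, a minimal sketch (Linux-only; on Python versions where the fcntl
module doesn't expose these constants, the raw Linux values 1031/1032 can be
used) of shrinking a pipe's kernel buffer with F_SETPIPE_SZ:

import fcntl
import os

F_SETPIPE_SZ = getattr(fcntl, "F_SETPIPE_SZ", 1031)
F_GETPIPE_SZ = getattr(fcntl, "F_GETPIPE_SZ", 1032)

r, w = os.pipe()
print("default capacity:", fcntl.fcntl(w, F_GETPIPE_SZ))  # typically 65536
fcntl.fcntl(w, F_SETPIPE_SZ, 4096)  # request a one-page buffer
print("new capacity:", fcntl.fcntl(w, F_GETPIPE_SZ))
os.close(r)
os.close(w)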

Note: I was unable to find a credible source on the minimum memory usage
per additional FD, so clarification on that would be appreciated.

On Wed, Dec 11, 2019 at 5:06 PM Andrew Svetlov <andrew.svet...@gmail.com>
wrote:

> On Wed, Dec 11, 2019 at 11:52 PM Antoine Pitrou <solip...@pitrou.net>
> wrote:
> >
> > On Mon, 9 Dec 2019 21:42:36 -0500
> > Kyle Stanley <aeros...@gmail.com> wrote:
> >
> > >
> > > There's also a practical use case for having a large number of
> coroutine
> > > objects, such as for asynchronously:
> > >
> > > 1) Handling a large number of concurrent clients on a continuously
> running
> > > web server that receives a significant amount of traffic.
> >
> > Not sure how that works?  Each client has an accepted socket, which is
> > bound to a local port number, and there are 65536 TCP port numbers
> > available.  Unless you're using 15+ coroutines per client, you probably
> > won't reach 1M coroutines that way.
> >
>
> I'm sorry, but the accepted socket has the same local port number as
> the listening one.
> Routing is performed by (local_ip, local_port, remote_ip, remote_port)
> quad.
>
> The listening socket can accept hundreds of thousands of concurrent
> client connections.
> The only thing that should be tuned for this is increasing the limit
> of file descriptors.
>
>
>
>
> --
> Thanks,
> Andrew Svetlov
