You can certainly run multiple instances of the content server. It just needs a connection to the database and access to the storage.
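If it helps, here is a minimal sketch of load-balancing nginx across several content instances. The hostnames are hypothetical, and port 24816 is the usual pulpcore-content bind port, but check what your instances actually listen on:

```nginx
# Hypothetical upstream pool -- replace hosts/ports with wherever
# your pulpcore-content instances actually listen.
upstream pulp-content {
    server content1.example.com:24816;
    server content2.example.com:24816;
}

server {
    listen 443 ssl;

    location /pulp/content/ {
        proxy_pass http://pulp-content;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

Since all instances share the same database and storage, nginx can distribute requests among them with plain round-robin.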
Have you tuned the number of worker processes in Gunicorn? It defaults to 1, but should almost certainly be increased for any sort of volume:
https://docs.gunicorn.org/en/stable/settings.html#worker-processes

There are several moving pieces, but that's really all I had to touch here.

--Danny

On Tue, Jun 22, 2021 at 10:34 AM Bin Li (BLOOMBERG/ 120 PARK) <[email protected]> wrote:

> We recently added more clients to the pulp content server. The processes
> ran out of file descriptors first. We then raised the limit for both nginx
> and pulp-content by creating an override.conf:
>
> /etc/systemd/system/pulpcore-content.service.d # cat override.conf
> [Service]
> LimitNOFILE=65536
>
> and updated nginx.conf:
>
> # Gunicorn docs suggest this value.
> worker_processes 1;
> events {
>     worker_connections 10000; # increase if you have lots of clients
>     accept_mutex off; # set to 'on' if nginx worker_processes > 1
> }
>
> worker_rlimit_nofile 20000;
>
> Now we keep getting this error:
>
> 2021/06/22 11:26:36 [error] 78373#0: *112823 upstream timed out (110:
> Connection timed out) while connecting to upstream, client:
>
> It looks like the pulp-content server cannot keep up with requests. Is
> there anything we could do to increase the performance of the content
> server?
> _______________________________________________
> Pulp-list mailing list
> [email protected]
> https://listman.redhat.com/mailman/listinfo/pulp-list
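To make the worker-count suggestion above concrete: since you already have a systemd drop-in for pulpcore-content, the same file can override ExecStart to pass more workers to Gunicorn. This is only a sketch -- the exact ExecStart line (binary path, app module, worker class, bind address) varies between installs, so copy yours from the packaged unit file (`systemctl cat pulpcore-content`) and only change the worker count:

```ini
# /etc/systemd/system/pulpcore-content.service.d/override.conf
[Service]
LimitNOFILE=65536

# An empty ExecStart= clears the packaged command; the second line
# replaces it. The paths and options below are illustrative -- copy
# the real command from your installed unit and adjust --workers.
ExecStart=
ExecStart=/usr/bin/gunicorn pulpcore.content:server \
    --bind 127.0.0.1:24816 \
    --worker-class aiohttp.GunicornWebWorker \
    --workers 8
```

Then `systemctl daemon-reload && systemctl restart pulpcore-content`. The Gunicorn docs suggest (2 x CPU cores) + 1 as a starting point for the worker count.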
