On 2020/04/22 13:11, Aldo Mazzeo wrote:
> My system completely hangs when downloading huge files (> 2 GB, maybe
> even smaller) when using Nextcloud client application.
>
> When downloading such files, I can clearly see httpd process eating
> more and more CPU and RAM, until it reaches ~95%: after this, my system
> completely freezes and no access to it is possible, neither through ssh
> nor with serial.
>
> I am running OpenBSD 6.6 on amd64 (apu4d4), with all syspatches applied,
> and running Nextcloud 18.0.3 (although I had this problem with previous
> versions as well) on PHP 7.2, served by httpd and relayd.
>
> I searched on the Internet and I found a similar issue at
>
> https://www.reddit.com/r/openbsd/comments/9qsh1i/httpd_slow_downloads_of_large_files/
>
> No real solution was provided, though.
>
> Is this a real bug? Is it a configuration issue?
> Can I help with more details?
>
> Thank you,
> Aldo
Web front-end servers (e.g. httpd, nginx, Apache httpd) buffer the
FastCGI response before sending it to clients. Depending on how the
buffering is done, the server may read as much data from the backend as
possible (which can be good if you have clients on a slow network, and a
backend that uses a lot of RAM while producing a medium amount of
output), or it may do some kind of flow control with the backend, only
reading a little ahead of what can be fed to the client (this is what
nginx normally does), or in some cases it can be configured to buffer to
disk (nginx, with the fastcgi_buffering and fastcgi_temp* settings).

I suspect httpd is in the "buffer as much as possible" camp - there are
no config options relating to this. If that's the case, then switching
to a different web front-end is likely the easiest workaround.

Most of the comments in the reddit thread talk about serving files
directly from the web front-end rather than going via Nextcloud. That
isn't an option here: Nextcloud does access control on files, so it
can't just output a link that is served directly by the web front-end -
it has to serve the files itself.
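For reference, a rough sketch of what the nginx side of that workaround
might look like - the socket path and buffer sizes below are placeholder
values to adjust for your setup, not recommendations:

    location ~ \.php$ {
        fastcgi_pass unix:/run/php-fpm.sock;   # adjust to your php-fpm socket
        fastcgi_buffering on;                  # default; flow-controls the backend
        fastcgi_buffers 16 16k;                # in-memory buffers per request
        fastcgi_busy_buffers_size 32k;
        fastcgi_max_temp_file_size 1024m;      # cap per-request on-disk buffering
        include fastcgi_params;
    }

With buffering on, nginx reads from the backend only as fast as its
buffers (plus the temp file, up to the cap) allow, so a multi-GB
download shouldn't balloon the server's memory use the way the reported
httpd behaviour does.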