Thanks for the suggestion, Wendall! Am I correct in thinking that might be 
related to the "Maximum open file descriptors" documentation in the Couch docs? 
https://docs.couchdb.org/en/stable/maintenance/performance.html#maximum-open-file-descriptors-ulimit
Either way, this does not seem to be the cause of our behavior, since we are 
only seeing about 1% of the inodes being used...
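For anyone finding this thread later, checks along these lines are one way to get the inode figure and the file-descriptor limit mentioned above (the data path and process name are assumptions for a typical Linux install; adjust for yours):

```shell
# Inode usage per filesystem; the ~1% figure corresponds to the IUse% column.
# /var/lib/couchdb is an assumed data directory -- point this at your install.
df -i /var/lib/couchdb

# Effective open-file limit for the running Erlang VM (beam.smp), per the
# "Maximum open file descriptors" section of the CouchDB performance docs.
# This assumes a single beam.smp process is running.
grep 'open files' /proc/"$(pgrep -f beam.smp | head -n 1)"/limits
```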
Josh
On Jul 25 2022, at 11:18 am, Wendall Cada <[email protected]> wrote:
> You may have run out of inodes, or the inode limit is being reached. I've seen
> this before. Just something to check, not saying it is the issue. Given how
> many resources you are handling, this is the first thing that came to mind.
>
> Wendall
> On Mon, Jul 25, 2022 at 8:36 AM Josh Kuestersteffen <[email protected]>
> wrote:
>
> > I am trying to figure out a weird situation where we have a decent-sized
> > CouchDB instance (~75GB of data, 2000+ databases) deployed on a host with
> > 31GB of memory. The issue I am running into is that under heavy load, Couch
> > starts timing out and throwing errors like it has run out of system
> > resources to use (e.g. req_timedout, exit:timeout), but it is not maxing
> > out on anything. CPU usage rises as expected but still only peaks around
> > 70%.
> >
> > It is the memory usage, though, that is very strange to me. Regardless of
> > how much load is on the server, the memory usage remains essentially
> > constant at ~2.5GB used and ~28GB in "buff/cache". I had expected the
> > memory usage to be considerably higher (given there seems to be plenty of
> > free memory on the server).
> > Has anyone seen this behavior before? It feels like there is some
> > bottleneck that I am missing here that is preventing Couch from actually
> > utilizing the available resources...
> > Also FWIW, this is a Couch 2.x instance with these config options (among
> > others):
> > [couchdb]
> > os_process_timeout = 60000
> > max_dbs_open = 5000
> >
> > Thank you,
> > Josh Kuestersteffen
> >
>