janl commented on issue #5544: URL: https://github.com/apache/couchdb/issues/5544#issuecomment-2897705494
> We group multiple views in design documents (based on if they belong together from a domain knowledge perspective), is this a good idea?

This is a good idea for your setup.

> We use partitioned databases, does it make sense to short-circuit views by checking the incoming document id to only "run" for the "correct" partition?

What is your partition key? View updates will be per-shard, not per-partition.

> Any other advice given to reduce system load, or general learnings / things we might have missed in the documentation?

For this we need more info about your hardware, data sizes and shapes, and traffic patterns.

> noticed that some databases show the below warning in Fauxton, and I wondered if this can be an issue and how to "clean" them (if needed).

The easiest fix _today_ is replicating the db into a new db with a replication filter (mango selector preferred) that drops deleted docs; see the first sketch at the end of this comment. There is [some tooling](https://github.com/neighbourhoodie/couch-continuum) that can help with that (`-f` option) if you have a lot of dbs to go through.

Alternatively, you can purge all those docs in a script (second sketch below), but there are some subtleties to get right about index updates and purge batch sizes. Best avoided if you can.

If you migrate to a cluster as Nick outlined, you can drop the deleted docs there.

Finally, if you can live with this for a little while longer, we are aiming to ship a feature that lets CouchDB automatically prune those deleted docs for you. No promises on availability, but we are actively working on it.

The main impact this will have for you is new indexes being added on those databases and replication peers starting from scratch. If you don’t have either of those, just wait for a future release.
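In case it helps, here is a rough sketch of the filtered-replication route. Everything in it is a placeholder (host, credentials, db names, doc id), and it assumes the mango selector is also evaluated against tombstones, so matching on a missing `_deleted` field drops them:

```python
# Rough sketch, not a drop-in script: copy olddb -> newdb while skipping
# deleted-doc tombstones via a filtered replication.
# Assumptions/placeholders: CouchDB 3.x on localhost, admin:password creds,
# db names "olddb"/"newdb".
import requests

COUCH = "http://admin:password@localhost:5984"

repl_doc = {
    "_id": "copy-olddb-without-tombstones",  # placeholder doc id
    "source": f"{COUCH}/olddb",
    "target": f"{COUCH}/newdb",
    "create_target": True,  # create the target db if it does not exist yet
    # Assumption: the selector sees tombstones, so "no _deleted field" keeps
    # only live documents.
    "selector": {"_deleted": {"$exists": False}},
}

# A doc in the _replicator db runs the replication and survives node restarts.
requests.post(f"{COUCH}/_replicator", json=repl_doc).raise_for_status()
```

The couch-continuum tooling linked above automates this pattern (plus the switch-over) across many dbs, so prefer it if you have more than a handful.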
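And, for completeness, a hedged sketch of the purge-script alternative (same placeholder host/credentials/db name). The batch size and the changes-feed paging are the subtleties mentioned above, and indexes will need to catch up after purging:

```python
# Hedged sketch of the "purge in a script" alternative. Placeholders:
# CouchDB 3.x on localhost, admin:password creds, database "olddb".
import requests

COUCH = "http://admin:password@localhost:5984"
DB = "olddb"
BATCH = 100  # keep purge batches small; large batches are harder on indexers

def deleted_docs(since="0"):
    """Yield (doc_id, [leaf revs]) for tombstones from the changes feed."""
    while True:
        r = requests.get(
            f"{COUCH}/{DB}/_changes",
            params={"since": since, "limit": 1000, "style": "all_docs"},
        )
        r.raise_for_status()
        body = r.json()
        for row in body["results"]:
            if row.get("deleted"):
                yield row["id"], [c["rev"] for c in row["changes"]]
        if not body["results"] or body["last_seq"] == since:
            return
        since = body["last_seq"]

batch = {}
for doc_id, revs in deleted_docs():
    batch[doc_id] = revs
    if len(batch) >= BATCH:
        requests.post(f"{COUCH}/{DB}/_purge", json=batch).raise_for_status()
        batch = {}
if batch:
    requests.post(f"{COUCH}/{DB}/_purge", json=batch).raise_for_status()
```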
