You should probably run a test that matches your expected number of databases, but otherwise I would not speculate too much about the maximum (I do not believe there is any built-in limit on the number of databases). If it works, you're done.
One setting you may want to experiment with, if you set up a cluster, is the number of shards per database, "q". Consider reducing it to one, i.e. "q = 1":

https://docs.couchdb.org/en/stable/config/cluster.html
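In CouchDB 3.x the cluster-wide default lives in the [cluster] section of the ini config, and q can also be passed when creating an individual database. A rough sketch (the host, credentials and database name below are placeholders, not taken from your setup):

    [cluster]
    ; default shard count for newly created databases
    q = 1

    # or override the default for a single database at creation time
    curl -X PUT 'http://admin:password@localhost:5984/mydb?q=1'

Fewer shards per database means fewer files on disk and less per-database overhead, which is what matters when you have thousands of small databases.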

On Sun, 9 Feb 2020 at 17:02, Marcus <[email protected]> wrote:

> How many databases can be used without causing issues with replication
> and server performance?
>
> I found two very different opinions. The pouchdb blog quotes 100K (based
> on a discussion about Cloudant in 2014). However, a Cloudant blog series
> from March 2019 recommends a maximum of 500.
>
> Can anyone explain the huge difference? I understand it's going to depend
> on use cases, but a difference of 99,500 databases is significant.
>
> 500 are too few when databases are needed for read access control using
> roles: one for each user's personal document locker, one for public data
> (web), and one for a private group. That leaves about 160 users.
>
> Here are two excerpts from that Cloudant blog series of March 2019.
>
> "Rule 4: Fewer databases are better than many
>
> If you can, limit the number of databases per Cloudant account to 500 or
> fewer. While there is nothing magical about this particular number
> (Cloudant can safely handle more), there are several use cases that are
> adversely affected by large numbers of databases in an account."
>
> "Rule 5: Avoid the "database per user" anti-pattern like the plague
>
> If you're building out a multi-user service on top of Cloudant, it is
> tempting to let each user store their data in a separate database under
> the application account. That works well, mostly, if the number of users
> is small."
>
> Source:
> https://www.ibm.com/cloud/blog/cloudant-best-and-worst-practices-part-1
>
> What are your personal experiences with large numbers of databases?
>
> Marcus