Hello list,

Couch 1.x and Couch 2.x will choke as soon as the indexer tries to
process an overly large document that was added. Indexing stops and
you have to remove the doc manually. In the best case you have built
an automatic process around this, so the process removes the document
instead of a human.

In any case you can't, or shouldn't, try to submit a document of a
similar size, because you hit a limit. The user can't do what they
want (put a big JSON document into Couch), and on top of that a lot of
work is created to fix the issue.
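
To illustrate the kind of automatic guard people end up building
today, here is a minimal client-side sketch in Python. The 1 MB
threshold, the URL, and the function name are my own assumptions for
the example, not CouchDB defaults; it simply refuses to write a
document that is bigger than the chosen limit, so the indexer never
sees it:

    import json
    import urllib.request

    # Assumed application-level limit, not a CouchDB default.
    MAX_DOC_BYTES = 1024 * 1024

    def put_doc(base_url, doc_id, doc):
        # Serialize first, so we measure the actual JSON size on the wire.
        body = json.dumps(doc).encode("utf-8")
        if len(body) > MAX_DOC_BYTES:
            raise ValueError(
                "document %s is %d bytes, above the %d byte limit"
                % (doc_id, len(body), MAX_DOC_BYTES)
            )
        req = urllib.request.Request(
            "%s/%s" % (base_url, doc_id),
            data=body,
            method="PUT",
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())

This only papers over the problem on the client side, which is exactly
the extra work I'd like us to make unnecessary.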

I was wondering if we could tackle the root cause instead? This would
make it easier to keep CouchDB running in production systems.

I don't want to push this into 2.0. I want to spark a discussion about
how we can get rid of small day-to-day operational issues. The goal is
to make CouchDB easier to run and to provide a more pleasant
experience for everyone.

Some limits from other databases:

Postgres: 1 GB -
https://www.postgresql.org/docs/current/static/datatype-character.html
DynamoDB: 400 KB -
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Limits.html#limits-data-types
MongoDB (BSON doc): 16 MB -
https://docs.mongodb.com/manual/reference/limits/
Couchbase: 20 MB -
http://developer.couchbase.com/documentation/server/current/clustersetup/server-setup.html
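
For comparison, a server-side limit in CouchDB could be expressed as
an ini setting along these lines. Treat the stanza below purely as an
illustration: the option name max_document_size does exist in later
CouchDB releases, but the value shown and the enforcement point are
assumptions for the sake of the example, not something 1.x/2.0 does
today:

    [couchdb]
    ; Illustrative only: reject writes above ~1 MB at the server,
    ; instead of letting the indexer choke on them later.
    max_document_size = 1048576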
