[
https://issues.apache.org/jira/browse/OAK-4322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18012049#comment-18012049
]
Thomas Mueller commented on OAK-4322:
-------------------------------------
Many components now depend on the current storage data format, e.g. the indexing
job. All of these would need to be changed as well... first we would need
to find out what exactly needs to be changed, which is quite hard.
An alternative approach is to phase out the current behavior. That can be done
with a (configurable) limit; if it is breached, take some action, e.g.:
* log an error (I think we already log warnings)
* follow up with the most common such cases
* artificially slow down the store operation, e.g. by 10 seconds
* throw an exception (behind a feature toggle)
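A minimal sketch of the limit check described above. All names here (PropertySizeGuard, the constructor parameters) are illustrative and not actual Oak APIs; the toggle decides between throwing and merely logging an error:

```java
// Hypothetical sketch: enforce a configurable limit on property value size.
// None of these names are real Oak classes or configuration keys.
public class PropertySizeGuard {
    private final long limit;           // configurable size limit in bytes
    private final boolean failOnBreach; // "feature toggle": throw vs. log

    public PropertySizeGuard(long limit, boolean failOnBreach) {
        this.limit = limit;
        this.failOnBreach = failOnBreach;
    }

    /** Returns true if the value breached the limit (and was only logged). */
    public boolean check(String path, long valueSize) {
        if (valueSize <= limit) {
            return false; // within bounds, nothing to do
        }
        if (failOnBreach) {
            // Toggle enabled: fail the store operation outright.
            throw new IllegalArgumentException(
                "Property value at " + path + " exceeds limit: "
                    + valueSize + " > " + limit);
        }
        // Toggle disabled: log an error (stand-in for a real logger) and
        // let the operation proceed; a deliberate delay could be added here.
        System.err.println("ERROR: large property value at " + path
            + " (" + valueSize + " bytes, limit " + limit + ")");
        return true;
    }
}
```

The point of the toggle is that the strict behavior can be rolled out gradually: first observe the errors in the logs, then flip the toggle once the most common offenders are fixed.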
> Large values of a property should be handled gracefully
> -------------------------------------------------------
>
> Key: OAK-4322
> URL: https://issues.apache.org/jira/browse/OAK-4322
> Project: Jackrabbit Oak
> Issue Type: Improvement
> Components: documentmk
> Reporter: Vikas Saurabh
> Assignee: Vikas Saurabh
> Priority: Minor
>
> Sometimes values of a property can be really huge (in an observed case a
> comment was on the order of MBs). This can push the document size over the
> limit imposed by the underlying persistence (e.g. 16 MB for MongoDB).
> While such cases should, of course, be avoided at the application level,
> from the storage side it would be useful to bear a bit of the pain and
> handle the situation gracefully (and possibly shout loudly in the logs).
> One possible idea is to have a configurable limit on the allowed size of
> values. If a value exceeds that limit, we could offload the actual value as a
> blob and store a proxy value (carrying some meta information and the blob
> reference) in the document.
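The proxy-value idea quoted above could look roughly like the following. This is only a sketch: the threshold, the `:blob:` proxy format, and the in-memory map standing in for a real blob store are all assumptions, not Oak's actual implementation:

```java
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Hypothetical sketch: values above a threshold are stored out of line and
// replaced in the document by a small proxy carrying metadata plus a blob
// reference. A real implementation would use Oak's blob storage, not a map.
public class ProxyValueStore {
    static final int THRESHOLD = 1024; // bytes; illustrative limit

    final Map<String, byte[]> blobStore = new HashMap<>();

    /** Returns the value to embed in the document: either the original
     *  string or a proxy of the form ":blob:<id>:<length>". */
    public String store(String value) {
        byte[] bytes = value.getBytes(StandardCharsets.UTF_8);
        if (bytes.length <= THRESHOLD) {
            return value; // small enough to keep inline
        }
        String id = UUID.randomUUID().toString();
        blobStore.put(id, bytes);
        // Proxy value: meta information (length) plus the blob reference.
        return ":blob:" + id + ":" + bytes.length;
    }

    /** Resolves a stored value, following the proxy if needed. */
    public String load(String stored) {
        if (!stored.startsWith(":blob:")) {
            return stored; // inline value, return as-is
        }
        String id = stored.split(":", 4)[2];
        return new String(blobStore.get(id), StandardCharsets.UTF_8);
    }
}
```

The document itself then stays small regardless of the value size; readers that understand the proxy format transparently dereference the blob, while the length in the proxy allows size queries without fetching it.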
--
This message was sent by Atlassian Jira
(v8.20.10#820010)