[
https://issues.apache.org/jira/browse/ZOOKEEPER-1162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15828966#comment-15828966
]
Shawn Heisey commented on ZOOKEEPER-1162:
-----------------------------------------
I can see that this is a VERY old issue, but I think it's one that's still
relevant.
It seems very wrong that ZooKeeper allows child node creation when the
operation will push the serialized child listing of the parent znode past
jute.maxbuffer. This caused one of the problems documented in SOLR-7191: the
overseer queue became populated with over 850,000 entries, giving the parent
znode a listing of 14+ megabytes. Since jute.maxbuffer was at its 1 MB
default, the overseer stopped functioning shortly after the problem surfaced,
while another part of the system kept adding entries to the queue.
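Back-of-the-envelope arithmetic shows why the listing blew past the limit. The
per-entry figures below are assumptions, not from the report: sequential queue
children named like qn-0000000000 (13 bytes) and jute's 4-byte length prefix
per string in the GetChildren response.

```shell
# Hypothetical sizing of the overseer queue's GetChildren response:
# 850,000 children, each serialized as a 4-byte length prefix plus an
# assumed 13-byte sequential name such as "qn-0000000000".
entries=850000
per_entry=$(( 4 + 13 ))
total=$(( entries * per_entry ))
echo "$total"   # 14450000 bytes -- well past the 1 MiB (1048576) default
```

That lands right in the "14+ megabytes" range reported above, so the listing
itself, not the znode data, is what trips the limit.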
> consistent handling of jute.maxbuffer when attempting to read large zk
> "directories"
> ------------------------------------------------------------------------------------
>
> Key: ZOOKEEPER-1162
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-1162
> Project: ZooKeeper
> Issue Type: Improvement
> Components: server
> Affects Versions: 3.3.3
> Reporter: Jonathan Hsieh
> Priority: Critical
> Fix For: 3.5.3, 3.6.0
>
>
> Recently we encountered a situation where a zk directory was successfully
> populated with 250k elements. When our system attempted to read the znode
> dir, it failed because the contents of the dir exceeded the default 1 MB
> jute.maxbuffer limit. There were a few odd things:
> 1) It seems odd that we could populate the dir to a very large size but
> could not read the listing.
> 2) The workaround was bumping up jute.maxbuffer on the client side.
> Would it make more sense to have the server reject new znodes once the
> listing would exceed jute.maxbuffer?
> Alternately, would it make sense to have the zk dir listing ignore the
> jute.maxbuffer setting?
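The client-side workaround mentioned in the quoted report can be sketched as
below. jute.maxbuffer is the real system property; the 20 MB value and the
zk1:2181 address are illustrative choices, and this assumes a standard
ZooKeeper distribution where zkCli.sh honors the JVMFLAGS environment
variable.

```shell
# Hedged sketch: raise the client-side jute.maxbuffer (here to 20 MB)
# before listing a very large znode dir with the stock CLI client.
JVMFLAGS="-Djute.maxbuffer=20971520" bin/zkCli.sh -server zk1:2181
```

A custom Java client can pass the same property directly on its command line
with -Djute.maxbuffer=20971520. Note the property must be large enough on
whichever side deserializes the oversized response.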
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)