[ https://issues.apache.org/jira/browse/ZOOKEEPER-1162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

sunhaitao updated ZOOKEEPER-1162:
---------------------------------
    Description: 
Recently we encountered a situation where a zk directory got successfully 
populated with 250k elements.  When our system attempted to read the znode dir, 
it failed because the contents of the dir exceeded the default 1 MB 
jute.maxbuffer limit.  There were a few odd things:

1) It seems odd that we could populate the dir to be very large but could not 
read the listing.
2) The workaround was bumping up jute.maxbuffer on the client side.

Would it make more sense to reject adding new znodes once the listing exceeds 
jute.maxbuffer?
Alternately, would it make sense to have the zk dir listing ignore the 
jute.maxbuffer setting?
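
For illustration, a minimal Java sketch of the client-side workaround (the 
ensemble address localhost:2181, the znode path /big-dir, and the 4 MB value 
are made up for the example, not taken from this issue):

import java.util.List;
import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class LargeDirList {
    public static void main(String[] args) throws Exception {
        // jute.maxbuffer must be set before the ZooKeeper client classes
        // read it; in practice it is usually passed as -Djute.maxbuffer=...
        // on the client JVM's command line. 4 MB is an arbitrary example.
        System.setProperty("jute.maxbuffer", Integer.toString(4 * 1024 * 1024));

        final CountDownLatch connected = new CountDownLatch(1);
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, new Watcher() {
            public void process(WatchedEvent event) {
                if (event.getState() == Event.KeeperState.SyncConnected) {
                    connected.countDown();
                }
            }
        });
        connected.await();

        // With the default 1 MB limit this call fails for a znode with
        // ~250k children, because the single serialized child list in the
        // response exceeds the buffer.
        List<String> children = zk.getChildren("/big-dir", false);
        System.out.println("child count: " + children.size());
        zk.close();
    }
}

Note this only papers over the asymmetry described above: the server accepts 
the 250k creates one at a time, and it is only the aggregate getChildren() 
response that trips the limit, which is why the two checks feel inconsistent.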

  was:
Recently we encountered a situation where a zk directory got successfully 
populated with 250k elements.  When our system attempted to read the znode dir, 
it failed because the contents of the dir exceeded the default 1 MB 
jute.maxbuffer limit.  There were a few odd things:

1) It seems odd that we could populate the dir to be very large but could not 
read the listing.
2) The workaround was bumping up jute.maxbuffer on the client side.

Would it make more sense to reject adding new znodes once the listing exceeds 
jute.maxbuffer?
Alternately, would it make sense to have the zk dir listing ignore the 
jute.maxbuffer setting?


> consistent handling of jute.maxbuffer when attempting to read large zk 
> "directories"
> ------------------------------------------------------------------------------------
>
>                 Key: ZOOKEEPER-1162
>                 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-1162
>             Project: ZooKeeper
>          Issue Type: Improvement
>          Components: server
>    Affects Versions: 3.3.3
>            Reporter: Jonathan Hsieh
>            Priority: Critical
>             Fix For: 3.5.2, 3.6.0
>
>
> Recently we encountered a situation where a zk directory got successfully 
> populated with 250k elements.  When our system attempted to read the znode 
> dir, it failed because the contents of the dir exceeded the default 1 MB 
> jute.maxbuffer limit.  There were a few odd things:
> 1) It seems odd that we could populate the dir to be very large but could 
> not read the listing.
> 2) The workaround was bumping up jute.maxbuffer on the client side.
> Would it make more sense to reject adding new znodes once the listing 
> exceeds jute.maxbuffer?
> Alternately, would it make sense to have the zk dir listing ignore the 
> jute.maxbuffer setting?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
