Agree, created a new JIRA for this:
https://issues.apache.org/jira/browse/ZOOKEEPER-430
See the following JIRA for one example of why not to do this:
https://issues.apache.org/jira/browse/ZOOKEEPER-327
In general you don't want to create large znodes, since all of the
data/nodes are stored in memory on every one of the servers. Latency is
also a factor: every write has to be replicated to a quorum of servers
before it is acknowledged, and large writes make that slow. However, if
you are only storing a handful of nodes in the cluster, then obviously
these aren't much of a problem (they could bite you at some point in the
future, though, if you start using ZK more...) In general we advise
people to store "tokens" in ZK, so perhaps you might store the 7 MB of
data in a data store (filesystem?) and use ZK to coordinate access to
that data. This is similar, for example, to how AWS does things with S3
and SQS: SQS has a message size limit of 8 KB IIRC, so you store the
task in SQS, and the task includes a pointer (a URL) to the data in S3
that is to be acted upon.
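To make the "token" idea concrete, here is a rough sketch in Java. All
the specifics are made up for illustration: the connect string, the
/tasks parent node (assumed to already exist), and a temp file standing
in for the real data store. A consumer would read the znode, follow the
URL, and fetch the data from there.

    import java.io.File;
    import java.io.FileOutputStream;

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class TokenExample {
        public static void main(String[] args) throws Exception {
            byte[] payload = new byte[7641662]; // the large blob

            // 1. Put the bulk data in an external store -- here just the
            //    local filesystem, but S3, HDFS, a database, etc. would do.
            File blob = File.createTempFile("payload-", ".bin");
            FileOutputStream out = new FileOutputStream(blob);
            out.write(payload);
            out.close();

            // 2. Store only a small token (a pointer to the data) in ZK.
            //    Assumes a server on localhost:2181 and an existing /tasks node.
            ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, new Watcher() {
                public void process(WatchedEvent event) { /* no-op */ }
            });
            zk.create("/tasks/task-",
                      blob.toURI().toString().getBytes("UTF-8"),
                      ZooDefs.Ids.OPEN_ACL_UNSAFE,
                      CreateMode.PERSISTENT_SEQUENTIAL);
            zk.close();
        }
    }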
Patrick
Eric Bowman wrote:
Ted Dunning wrote:
Isn't the max file size a megabyte?
On Wed, Jun 3, 2009 at 9:01 AM, Eric Bowman <ebow...@boboco.ie> wrote:
On the client, I see this when trying to write a node with 7,641,662 bytes:
Ok, indeed, from
http://hadoop.apache.org/zookeeper/docs/r3.0.1/zookeeperAdmin.html#sc_configuration
I see:
jute.maxbuffer:
(Java system property: jute.maxbuffer)
This option can only be set as a Java system property. There is no
zookeeper prefix on it. It specifies the maximum size of the data
that can be stored in a znode. The default is 0xfffff, or just under
1M. If this option is changed, the system property must be set on
all servers and clients, otherwise problems will arise. This is
really a sanity check. ZooKeeper is designed to store data on the
order of kilobytes in size.
A more helpful exception would be nice :)
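So presumably the way to raise it would be -Djute.maxbuffer=... on every
server and client JVM. A sketch of what I have in mind (the value and the
Bootstrap class are just for illustration; my understanding is that the
property is read once in a static initializer, so it has to be set before
any ZooKeeper class loads):

    public class Bootstrap {
        public static void main(String[] args) throws Exception {
            // Equivalent to launching with: java -Djute.maxbuffer=8388608 ...
            // The same value must be set on all servers and all clients.
            // Must run before any org.apache.zookeeper class is loaded,
            // since the property is read once, statically.
            System.setProperty("jute.maxbuffer", String.valueOf(8 * 1024 * 1024));

            // ... now create the ZooKeeper client and write the ~7 MB znode ...
        }
    }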
Anybody have any experience bumping this up a bit bigger? What kind of
bad things happen?
Thanks,
Eric