Re: Errors during shutdown/startup of ZooKeeper

2009-06-03 Thread Nitay
I'm still working on it (going on in parallel with a bunch of other things). Will let you guys know what I figure out as soon as I get some results. I think you are on to something Patrick. That is some gold advice. Thanks guys. -n On Wed, Jun 3, 2009 at 11:39 AM, Patrick Hunt wrote: > Nitay, a

Re: Errors during shutdown/startup of ZooKeeper

2009-06-03 Thread Patrick Hunt
Nitay, any luck? Feel free to create a JIRA to track this. If you point to the test code that's experiencing the problem we'll try and take a look. Patrick Patrick Hunt wrote: This log manifests if the client is running ahead of the server. Say you have: 1) client connects to server A and see

Re: ZooKeeper heavy CPU utilisation

2009-06-03 Thread Patrick Hunt
Turns out there is a bug in the JDK: https://issues.apache.org/jira/browse/ZOOKEEPER-427?focusedCommentId=12716020&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#action_12716020 We are looking to see if we can solve this via some workaround in 3.2. Currently the only so

Re: ConnectionLoss (node too big?)

2009-06-03 Thread Scott Carey
If it's only a handful of things that aren't updated very often, it will probably be fine. Just beware of the drawbacks and dangers if there are many or they are updated frequently. You may want to consider compressing the data in there too. On 6/3/09 9:57 AM, "Eric Bowman" wrote: > Thanks for
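Scott's compression suggestion can be sketched as follows. This is a minimal illustration using Python's standard `zlib` module, assuming a highly repetitive payload; real-world compression ratios vary, and data that is random or already compressed will not shrink, so the size should always be checked before writing.

```python
import zlib

JUTE_MAXBUFFER_DEFAULT = 1 * 1024 * 1024  # ZooKeeper's default znode size cap (1 MB)

# A repetitive 2 MB payload: over the limit uncompressed.
raw = b"status=OK;" * 200_000
print(len(raw) > JUTE_MAXBUFFER_DEFAULT)  # True: too big to store as-is

compressed = zlib.compress(raw, level=6)
print(len(raw), "->", len(compressed))

# Highly repetitive data shrinks well under the cap; incompressible
# data will not, so check the compressed size before the znode write.
print("fits:", len(compressed) <= JUTE_MAXBUFFER_DEFAULT)

# Reading back: decompress after fetching the znode data.
assert zlib.decompress(compressed) == raw
```

The trade-off is CPU time on both the writing and reading clients, which is usually cheap next to shipping megabytes through the quorum.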

Re: ConnectionLoss (node too big?)

2009-06-03 Thread Patrick Hunt
Would love to see it - the wiki might be a better choice in terms of visibility, I created this page fairly recently: http://wiki.apache.org/hadoop/ZooKeeper/Troubleshooting A section on data size impact would be great (incl general information on the cluster configuration (ie 3 vs 5 servers i

Re: ConnectionLoss (node too big?)

2009-06-03 Thread Patrick Hunt
Wrt bandwidth, the issue there is that when you do a write you end up copying the data between servers in the quorum: 1) client setdata("largedata") -> follower ZK server (copy data) 2) follower ZK server forwards the proposal to the ZK server leader (copy data) 3) ZK server leader does atomic broadcast
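Patrick's point about the payload being copied at each hop can be put into a back-of-the-envelope calculation. This is a rough model of the steps above, not an exact accounting of ZooKeeper's wire protocol (commit messages, acks, and the case where the client is connected directly to the leader are ignored):

```python
def write_copies(ensemble_size: int) -> int:
    """Rough count of times a write payload crosses the network.

    Simplified model of the steps in the mail above:
      1 copy: client -> follower
      1 copy: follower forwards the proposal to the leader
      ensemble_size - 1 copies: leader broadcasts the proposal
    """
    return 1 + 1 + (ensemble_size - 1)

payload_mb = 7.6  # roughly the blob size from this thread
for n in (3, 5):
    print(f"{n} servers: ~{write_copies(n) * payload_mb:.1f} MB moved per write")
```

Under this model a single 7.6 MB write pushes roughly 30 MB through a 3-server ensemble and 46 MB through a 5-server one, which is why large znodes hurt write throughput far more than their raw size suggests.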

Re: ConnectionLoss (node too big?)

2009-06-03 Thread Henry Robinson
On Wed, Jun 3, 2009 at 5:57 PM, Eric Bowman wrote: > At some point I'll spend some time understanding how this really affects > latency in my case ... I'm keeping just a handful of things that are > about 10M in the ensemble, so the memory footprint is no problem. But > the network bandwidth cou

Re: ConnectionLoss (node too big?)

2009-06-03 Thread Eric Bowman
Thanks for the quick reply Henry & Patrick. I understand the importance of "small things" from a common use case point of view; I don't think my case is so common, but it's also not that big a deal to just write the data to an NFS volume and put its path in ZK. I was kind of hoping to avoid that,
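The fallback Eric describes, blob on shared storage with only its path in the znode, is a common indirection pattern. A minimal sketch follows; the directory handling and function name are illustrative, and the ZooKeeper call is left as a comment since it needs a live ensemble (shown with kazoo-style syntax as an assumption):

```python
import os
import tempfile

def store_blob_out_of_band(data: bytes, directory: str) -> str:
    """Write a large payload to shared storage (e.g. an NFS mount)
    and return its path.

    Only the short path string would then go into ZooKeeper, e.g.:
        zk.create("/myapp/blob-pointer", path.encode())  # hypothetical client call
    """
    fd, path = tempfile.mkstemp(dir=directory, suffix=".blob")
    with os.fdopen(fd, "wb") as f:
        f.write(data)
    return path

blob = b"x" * 7_641_662  # the payload size from this thread
with tempfile.TemporaryDirectory() as shared_dir:  # stand-in for the NFS volume
    pointer = store_blob_out_of_band(blob, shared_dir)
    # The znode payload is now a short path, far under the 1 MB limit.
    print(len(pointer.encode()) < 1024)
```

The cost is that ZooKeeper's consistency guarantees no longer cover the blob itself, only the pointer, so readers must handle the window where the path exists but the file write is incomplete.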

Re: ConnectionLoss (node too big?)

2009-06-03 Thread Henry Robinson
On Wed, Jun 3, 2009 at 5:27 PM, Eric Bowman wrote: > > Anybody have any experience popping this up a bit bigger? What kind of > bad things happen? > I don't have personal experience of upping this restriction. However, my understanding is that if data sizes get large, writing them to network an

Re: ConnectionLoss (node too big?)

2009-06-03 Thread Patrick Hunt
Agree, created a new JIRA for this: https://issues.apache.org/jira/browse/ZOOKEEPER-430 See the following JIRA for one example why not to do this: https://issues.apache.org/jira/browse/ZOOKEEPER-327 In general you don't want to create large node sizes since all of the data/nodes are stored in m

Re: ConnectionLoss (node too big?)

2009-06-03 Thread Eric Bowman
Ted Dunning wrote: > Isn't the max file size a megabyte? > > On Wed, Jun 3, 2009 at 9:01 AM, Eric Bowman wrote: > > >> On the client, I see this when trying to write a node with 7,641,662 bytes: >> Ok, indeed, from http://hadoop.apache.org/zookeeper/docs/r3.0.1/zookeeperAdmin.html#sc_conf

Re: ConnectionLoss (node too big?)

2009-06-03 Thread Ted Dunning
Isn't the max file size a megabyte? On Wed, Jun 3, 2009 at 9:01 AM, Eric Bowman wrote: > On the client, I see this when trying to write a node with 7,641,662 bytes: -- Ted Dunning, CTO DeepDyve

ConnectionLoss (node too big?)

2009-06-03 Thread Eric Bowman
I'm seeing an error that I suspect is due to me trying to put too big a blob of data into a node. Can anyone confirm? (version 3.1.1) On the client, I see this when trying to write a node with 7,641,662 bytes: 2009-06-03 16:57:44,583 INFO [Serializer] bytes=7641662 2009-06-03 16:57:44,662 WARN
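As the replies above confirm, the default znode size cap (jute.maxbuffer) is 1 MB, so a 7,641,662-byte write is rejected by the server and surfaces on the client as a connection loss. A minimal client-side guard can be sketched as follows; the function name is hypothetical, and the actual write call to a real client library is omitted since it needs a live ensemble:

```python
# Sketch: guard a znode write against ZooKeeper's default jute.maxbuffer
# limit before sending it, instead of discovering the problem as a
# ConnectionLoss. The 1 MB default is real; the helper is illustrative.

JUTE_MAXBUFFER_DEFAULT = 1 * 1024 * 1024  # 1,048,576 bytes

def can_store_in_znode(data: bytes, limit: int = JUTE_MAXBUFFER_DEFAULT) -> bool:
    """Return True if the payload fits under the server's buffer limit."""
    return len(data) <= limit

blob = b"x" * 7_641_662  # the payload size from the report above
print(can_store_in_znode(blob))         # False: the 7.6 MB blob does not fit
print(can_store_in_znode(b"x" * 1000))  # True: a small payload is fine
```

Raising jute.maxbuffer is possible but must be done consistently on every server and client, and the rest of this thread explains why large znodes are a bad idea even when the limit allows them.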

Re: ZooKeeper viewer

2009-06-03 Thread Ted Dunning
Thanks very much. I have found a few oversights in the code as well and will post a new version shortly (with your suggested changes). On Wed, Jun 3, 2009 at 8:17 AM, Eric Bowman wrote: > Ted Dunning wrote: > > Please add comments, suggestions and improvements to the JIRA ticket. I > am > > su

Re: ZooKeeper viewer

2009-06-03 Thread Eric Bowman
Ted Dunning wrote: > Please add comments, suggestions and improvements to the JIRA ticket. I am > sure there is plenty to improve. > > I stuck a more useful (for me, at least) pom up there: forces compiler to -source 1.6 (perhaps 1.5 would be better, but most people using ZK are on 1.6 anyhow