Each znode carries a fair bit of overhead in memory, so that 2G you allocated may not go as far as you might like. I'd be a bit surprised if it ate as much memory as your test seems to imply, but it isn't impossible.
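For what it's worth, a quick back-of-envelope check (the per-znode overhead figure here is an assumption for illustration, not a measured number):

```python
# Rough estimate, assuming (hypothetically) ~100 bytes of server-side
# bookkeeping per znode plus ~20 bytes for the path string itself.
rows, cols = 100_000, 500
znodes = rows * cols                       # 50,000,000 znodes, one per element
bytes_per_znode = 100 + 20                 # assumed overhead, not measured
total_gb = znodes * bytes_per_znode / 2**30
print(round(total_gb, 1))                  # ~5.6 GB, well past a 2G heap
```

Even with generous error bars on the per-znode cost, one znode per element blows through a 2G heap long before the full matrix is loaded.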
Have you tried a row- or column-compressed format? Do you really need to store the elements separately?

Also, ZooKeeper is intended primarily as a coordination service. That tends to allow, even encourage, design decisions that hurt its use as a data store. It does tend to do much better than you might expect at datastore tasks, but that doesn't change the fact that it really isn't a great platform for them.

On Wed, Nov 9, 2011 at 4:14 PM, Aniket Chakrabarti <chakr...@cse.ohio-state.edu> wrote:

> Hi,
>
> I am trying to load a huge matrix (100,000 by 500) into my zookeeper
> instance. Each element of the matrix is a znode, and the value of each
> element is a digit (0-9).
>
> But I'm only able to load around 1000 x 500 nodes. ZooKeeper throws an
> error after that. Mostly it is a "-5" error code, which is a
> marshalling/unmarshalling error. I'm using the Perl interface of
> ZooKeeper.
>
> My question is: is there a limit to the maximum number of znodes a
> ZooKeeper instance can hold, or is this limited by system memory?
>
> Any pointers on how to avoid the error would be very helpful.
>
> Thanks,
> Aniket Chakrabarti
> PhD student
> The Ohio State University
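P.S. The packing idea can be sketched like this (Python rather than Perl, and all the names here are illustrative, not a real API): store one packed row per znode instead of one znode per element, which drops the znode count from 50,000,000 to 100,000.

```python
# Hypothetical sketch: since every element is a single digit 0-9, a whole
# 500-element row packs into one 500-byte string, suitable as the value
# of a single znode (e.g. /matrix/row-00042).

def encode_row(row):
    """Pack a list of digits (0-9) into one compact string for a znode value."""
    return "".join(str(d) for d in row)

def decode_element(packed, col):
    """Read a single element back out of a packed row value."""
    return int(packed[col])

row = [3, 1, 4, 1, 5, 9]
packed = encode_row(row)            # "314159"
print(decode_element(packed, 4))    # 5
```

With a real client (Net::ZooKeeper on your side, or kazoo in Python) you would create one znode per row with the packed string as its value; reading an element becomes a get on the row plus an index, and the total znode count stays well within what a 2G heap can hold.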