Thiago Borges wrote:
On 16/12/2009 16:45, Patrick Hunt wrote:
This test has 910 clients (sessions) involved:
http://hadoop.apache.org/zookeeper/docs/current/zookeeperOver.html#Performance

We have users with 10k sessions accessing a single 5 node ZK ensemble. That's the largest I know about that's in production. I've personally tested up to 20k sessions attaching to a 3 node ensemble with 10 second session timeout and it was fine (although I didn't do much other than test session establishment and teardown).

Also see this: http://bit.ly/4ekN8G

The network in this test was gigabit ethernet, right? Do you know of anyone who has run ensembles on 100 Mbit/s ethernet?

Right, this was with 1GbE. No, I don't know anyone who has done this. But it should be easy enough for you to test. Limit the amount of data you are storing in znodes and it shouldn't be too terrible.
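For reference, a minimal Java sketch of what a client in that kind of test might look like: connect with a 10 second session timeout and keep the znode payload tiny. The ensemble hostnames (zk1/zk2/zk3) and the /demo path are just placeholders I made up, not from the test above.

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class SessionExample {
        public static void main(String[] args) throws Exception {
            // Connect with a 10 second session timeout (milliseconds).
            ZooKeeper zk = new ZooKeeper("zk1:2181,zk2:2181,zk3:2181", 10000,
                    new Watcher() {
                        public void process(WatchedEvent event) {
                            // Session/connection events arrive here.
                        }
                    });

            // Keep znode payloads small; here just a couple of bytes.
            byte[] data = "ok".getBytes();
            zk.create("/demo", data, ZooDefs.Ids.OPEN_ACL_UNSAFE,
                    CreateMode.EPHEMERAL);

            // Close the session cleanly (teardown).
            zk.close();
        }
    }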

Can a ZooKeeper ensemble run only in memory, rather than writing to both memory and disk? This makes sense since my system is highly reliable. (Of course, at some point we need a "dump" in order to shut down and restart the entire system.)

Not currently, this feature is looking for someone interested enough to provide some patches ;-)
https://issues.apache.org/jira/browse/ZOOKEEPER-546


Well, which limits throughput first, disk I/O or the network?

I believe the current limitation is that we're CPU bound in the ack processor (given that you have a dedicated txlog device). So neither, afaik.
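If it helps, here's a rough zoo.cfg sketch of what "dedicated txlog device" means in practice: dataLogDir points the transaction log at its own disk, separate from the snapshot dataDir. The hostnames and paths are placeholders, not a recommendation.

    # zoo.cfg sketch: transaction log on its own device
    tickTime=2000
    initLimit=10
    syncLimit=5
    dataDir=/var/zookeeper/data       # snapshots
    dataLogDir=/txlog/zookeeper       # txlog on a dedicated disk
    clientPort=2181
    server.1=zk1:2888:3888
    server.2=zk2:2888:3888
    server.3=zk3:2888:3888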

Thanks for your quick response. I'm studying ZooKeeper in my master's thesis, for coordinating distributed index structures.

NP. Enjoy.

Patrick
