I've just released Curator 1.1.0, which adds support for ZooKeeper transactions.
I'm now going to maintain two branches of Curator:
* 1.0.x for ZooKeeper 3.3.x
* 1.1.x+ for ZooKeeper 3.4.x+
The Curator Transaction APIs use the same oh-so-cool Fluent style as the rest
of Curator. E.g.
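(A minimal sketch from memory of the 1.x fluent API; the connection string and
paths below are placeholders, so treat the exact calls as illustrative rather
than definitive.)

import com.netflix.curator.framework.CuratorFramework;
import com.netflix.curator.framework.CuratorFrameworkFactory;
import com.netflix.curator.retry.ExponentialBackoffRetry;

public class TransactionExample
{
    public static void main(String[] args) throws Exception
    {
        // placeholder connection string and retry policy
        CuratorFramework client = CuratorFrameworkFactory.newClient("localhost:2181",
            new ExponentialBackoffRetry(1000, 3));
        client.start();

        // every operation in the transaction succeeds or fails atomically
        client.inTransaction()
            .create().forPath("/a/path", "some data".getBytes())
            .and()
            .setData().forPath("/another/path", "other data".getBytes())
            .and()
            .delete().forPath("/yet/another/path")
            .and()
            .commit();
    }
}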
Hello there,
I have a question regarding ZK client behaviour during ZK Leader reelection.
We have a situation where the server the ZK Leader was running on ran low on
memory, and essentially ZK was "frozen" for quite a bit of time.
Meanwhile the other 2 ZK servers held a reelection between themselves and a new Leade
This severely limits the throughput of this kind of approach.
The pro is that you get quite a bit of fine-grained resiliency.
The micro-sharding of traffic approach gives you very high throughput
(easily high enough, for instance, to handle all of Twitter's traffic).
The con for micro-sharding i
They're stored in ZooKeeper, so both. ZooKeeper backs everything to disk
but keeps the entire DB in memory for performance.
-JZ
On 1/5/12 10:54 AM, "Josh Stone" wrote:
>Are the distributed queue and locks written to disk or can they be held in
>memory?
>
>josh
>
>On Thu, Jan 5, 2012 at 10:02 AM
Are the distributed queue and locks written to disk or can they be held in
memory?
josh
On Thu, Jan 5, 2012 at 10:02 AM, Jordan Zimmerman wrote:
> Curator's queue handles a node going down (when you use setLockPath()).
> Curator will hold a lock for each message that is being processed. You can
Curator's queue handles a node going down (when you use setLockPath()).
Curator will hold a lock for each message that is being processed. You can
see the implementation in the method processWithLockSafety() here:
https://github.com/Netflix/curator/blob/master/curator-recipes/src/main/java/com/net
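(For reference, a rough sketch of wiring the queue up with a lock path. This
isn't from the thread: the builder and consumer signatures follow my
recollection of the recipes module, the paths are made up, and the lock-path
setter may be named setLockPath() or lockPath() depending on the release.)

import com.netflix.curator.framework.CuratorFramework;
import com.netflix.curator.framework.CuratorFrameworkFactory;
import com.netflix.curator.framework.recipes.queue.DistributedQueue;
import com.netflix.curator.framework.recipes.queue.QueueBuilder;
import com.netflix.curator.framework.recipes.queue.QueueConsumer;
import com.netflix.curator.framework.recipes.queue.QueueSerializer;
import com.netflix.curator.framework.state.ConnectionState;
import com.netflix.curator.retry.ExponentialBackoffRetry;

public class LockSafeQueueExample
{
    public static void main(String[] args) throws Exception
    {
        CuratorFramework client = CuratorFrameworkFactory.newClient("localhost:2181",
            new ExponentialBackoffRetry(1000, 3));
        client.start();

        QueueConsumer<String> consumer = new QueueConsumer<String>()
        {
            @Override
            public void consumeMessage(String message) throws Exception
            {
                // if this consumer dies before returning, the per-message lock
                // is released and the message becomes available to another node
                System.out.println("consumed: " + message);
            }

            @Override
            public void stateChanged(CuratorFramework client, ConnectionState newState)
            {
                // react to connection changes if needed
            }
        };

        QueueSerializer<String> serializer = new QueueSerializer<String>()
        {
            @Override
            public byte[] serialize(String item)
            {
                return item.getBytes();
            }

            @Override
            public String deserialize(byte[] bytes)
            {
                return new String(bytes);
            }
        };

        DistributedQueue<String> queue =
            QueueBuilder.builder(client, consumer, serializer, "/example/queue")
                .lockPath("/example/queue-locks")   // enables the lock-safety path
                .buildQueue();
        queue.start();

        queue.put("hello");
    }
}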
Yes, something like that with lock safety would satisfy my third use case.
Some questions: Is the distributed queue effectively located by a single
z-node? What happens when that node goes down? Will a node going down still
clear any distributed locks?
Josh
On Thu, Jan 5, 2012 at 9:41 AM, Jordan
FYI - Curator has a resilient message Queue:
https://github.com/Netflix/curator/wiki/Distributed-Queue
On 1/5/12 5:00 AM, "Inder Pall" wrote:
>Third use case: Fault tolerance. If we utilized ZooKeeper to distribute
>messages to workers, can it be made to handle a node going down by
>re-distribut
We're thinking along the same lines. Specifically, I was thinking of using
a hash ring to minimize disruptions to the key space when nodes come and
go. Either that, or micro-sharding would be nice, and I'm curious how this
has gone for anyone else using ZooKeeper. I should mention, this is
basicall
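(For what it's worth, a bare-bones hash ring in plain Java to illustrate the
idea; nothing ZooKeeper-specific, and the virtual-node count and choice of
hash are arbitrary.)

import java.security.MessageDigest;
import java.util.SortedMap;
import java.util.TreeMap;

// Each node is placed at several points on a ring; a key is served by the
// first node clockwise from the key's hash, so adding or removing a node only
// moves the keys adjacent to it.
public class HashRing
{
    private final SortedMap<Long, String> ring = new TreeMap<Long, String>();
    private final int virtualNodes;

    public HashRing(int virtualNodes)
    {
        this.virtualNodes = virtualNodes;
    }

    public void addNode(String node) throws Exception
    {
        for ( int i = 0; i < virtualNodes; i++ )
        {
            ring.put(hash(node + "#" + i), node);
        }
    }

    public void removeNode(String node) throws Exception
    {
        for ( int i = 0; i < virtualNodes; i++ )
        {
            ring.remove(hash(node + "#" + i));
        }
    }

    public String nodeFor(String key) throws Exception
    {
        if ( ring.isEmpty() )
        {
            return null;
        }
        SortedMap<Long, String> tail = ring.tailMap(hash(key));
        return tail.isEmpty() ? ring.get(ring.firstKey()) : tail.get(tail.firstKey());
    }

    private long hash(String s) throws Exception
    {
        byte[] digest = MessageDigest.getInstance("MD5").digest(s.getBytes("UTF-8"));
        // use the first 8 bytes of the MD5 as the position on the ring
        long h = 0;
        for ( int i = 0; i < 8; i++ )
        {
            h = (h << 8) | (digest[i] & 0xFF);
        }
        return h;
    }
}

Each worker would register itself (e.g. with an ephemeral znode) and be added
to the ring; when it disappears, only the keys that mapped to it move to its
neighbours.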
Third use case: Fault tolerance. If we utilized ZooKeeper to distribute
messages to workers, can it be made to handle a node going down by
re-distributing the work to another node (perhaps messages that are not
ack'ed within a timeout are resent)?
>>Third use case is done by Kafka (ZK-based consume
Hi,
I wrote a blog post and an example implementation on how to do single file
leader election with ZooKeeper:
http://cyberroadie.wordpress.com/2011/12/20/zookeeper-single-file-leader-election-with-retry-logic/
https://github.com/cyberroadie/zk-leader-single-file
please feel free to comment &
Care to work on it?
On 1/5/12 12:50 AM, "Ted Dunning" wrote:
>This pattern would make a nice addition to Curator, actually. It comes up
>repeatedly in different contexts.
Jordan, I don't think that leader election does what Josh wants.
I don't think that consistent hashing is particularly good for that either
because the loss of one node causes the sequential state for lots of
entities to move even among nodes that did not fail.
What I would recommend is a variant
OK - so this is two options for doing the same thing. You use a Leader
Election algorithm to make sure that only one node in the cluster is
operating on a work unit. Curator has an implementation (it's really just
a distributed lock with a slightly different API).
-JZ
On 1/5/12 12:04 AM, "Josh St
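(A rough sketch of the Curator recipe Jordan is referring to; the class and
listener names are from my recollection of the recipes module, and the paths
and work-unit logic below are made up.)

import com.netflix.curator.framework.CuratorFramework;
import com.netflix.curator.framework.CuratorFrameworkFactory;
import com.netflix.curator.framework.recipes.leader.LeaderSelector;
import com.netflix.curator.framework.recipes.leader.LeaderSelectorListener;
import com.netflix.curator.framework.state.ConnectionState;
import com.netflix.curator.retry.ExponentialBackoffRetry;

public class LeaderElectionExample
{
    public static void main(String[] args) throws Exception
    {
        CuratorFramework client = CuratorFrameworkFactory.newClient("localhost:2181",
            new ExponentialBackoffRetry(1000, 3));
        client.start();

        LeaderSelectorListener listener = new LeaderSelectorListener()
        {
            @Override
            public void takeLeadership(CuratorFramework client) throws Exception
            {
                // only one participant at a time gets here; do the work unit,
                // then return to relinquish leadership
                System.out.println("I am the leader for this work unit");
            }

            @Override
            public void stateChanged(CuratorFramework client, ConnectionState newState)
            {
                // if the connection is lost, assume leadership is lost too
            }
        };

        LeaderSelector selector = new LeaderSelector(client, "/example/leader", listener);
        selector.start();   // joins the election; takeLeadership() runs when this node wins
    }
}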
Thanks for the response. Comments below:
On Wed, Jan 4, 2012 at 10:46 PM, Jordan Zimmerman wrote:
> Hi Josh,
>
> >Second use case: Distributed locking
> This is one of the most common uses of ZooKeeper. There are many
> implementations - one included with the ZK distro. Also, there is Curator:
>