Hi Mahadev,
Thanks for your reply. Yes, I did check the example implementation.
However, it is not suitable for my purpose as such because it needs a
separate ZooKeeper instance for each WriteLock. I want to support
multiple threads through one ZooKeeper instance (don't want to pay for
connect over
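Something like this is what I am after: one shared session, several locks
over it. A minimal sketch, assuming the contrib WriteLock constructor takes
a shared ZooKeeper handle, a lock directory, and ACLs (the hosts and lock
paths below are made up):

import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.recipes.lock.WriteLock;

public class SharedHandleLocks {
    public static void main(String[] args) throws Exception {
        // One session for the whole process; the ZooKeeper handle is
        // documented as thread-safe, so all threads can go through it.
        final CountDownLatch connected = new CountDownLatch(1);
        ZooKeeper zk = new ZooKeeper("localhost:2181", 15000, new Watcher() {
            public void process(WatchedEvent event) {
                if (event.getState() == Event.KeeperState.SyncConnected) {
                    connected.countDown();
                }
            }
        });
        connected.await();

        // One WriteLock object per lock node, all over the same handle,
        // so no extra connection is opened per lock.
        WriteLock lockA = new WriteLock(zk, "/locks/resource-a",
                ZooDefs.Ids.OPEN_ACL_UNSAFE);

        if (lockA.lock()) {  // true if acquired right away (sketch assumption)
            try {
                // ... critical section for resource A ...
            } finally {
                lockA.unlock();
            }
        }
        zk.close();
    }
}

That is, the expensive part (the session) would be shared, and only the
cheap per-lock bookkeeping duplicated.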
Hi Kay,
the namespace partitioning in ZooKeeper has been on the back burner for a
long time. There isn't any JIRA open on it. There have been some discussions
on this, but no real work. Flavio/Ben have had this on their minds for a
while, but no real work/proposal is out yet.
May I know is this someth
Digging up some old tickets and search results, I am trying to understand
what the current state is w.r.t. support for namespace partitioning in
ZooKeeper. Is it already in? Are there any tickets or mailing-list threads
that describe the current state?
Hi Flavio,
Yes, I am concerned about the latency between the DCs (across continents).
We actually have six locations, but how exactly would we do it if we had
a third DC?
Regards,
On Thu, Jan 14, 2010 at 1:46 PM, Flavio Junqueira wrote:
> Hi Vijay, I'm just curious: why exactly do you want all voting nodes in
> a single data center?
Hi Vijay, I'm just curious: why exactly do you want all voting nodes in a
single data center? Are you concerned about latency?
It might not be possible in your case, but if you have a third
location available, you would be able to tolerate one location going
down.
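For illustration, a hypothetical zoo.cfg layout (all hostnames made up):
one voter per site, so losing any single site still leaves 2 of 3 voters,
which is a majority. Observers, if you use them, additionally need
peerType=observer in their own config files.

# zoo.cfg sketch; all hostnames are hypothetical
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
# One voter per datacenter: any one site can fail and 2/3 remain.
server.1=zk1.dc1.example.com:2888:3888
server.2=zk2.dc2.example.com:2888:3888
server.3=zk3.dc3.example.com:2888:3888
# Optional observers: serve local clients without joining the vote.
server.4=zk4.dc1.example.com:2888:3888:observer
server.5=zk5.dc2.example.com:2888:3888:observer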
-Flavio
On Jan 14, 2010, a
Hi Vijay,
Sadly, there isn't any. It would be great to have someone contribute one to
the ZooKeeper code base :).
Thanks
mahadev
On 1/14/10 12:58 PM, "Vijay" wrote:
> Thanks Mahadev, that helps.
>
> Are there any hooks (in ZooKeeper) or examples I can take a look at
> for the bridging process?
Thanks Henry,
Regards,
On Thu, Jan 14, 2010 at 12:40 PM, Henry Robinson wrote:
> Hi -
>
> If you put all your voting nodes in one datacenter, that datacenter becomes
> a 'single point of failure' for the cluster. If it gets cut off from any
> other datacenters, the cluster will not be available to those datacenters.
Thanks Mahadev, that helps.
Are there any hooks (in ZooKeeper) or examples I can take a look at for the
bridging process?
Regards,
On Thu, Jan 14, 2010 at 12:38 PM, Mahadev Konar wrote:
> Hi Vijay,
> Unfortunately, you won't be able to keep running the observer in the other
> DC if the quorum in DC 1 is dead.
Hi -
If you put all your voting nodes in one datacenter, that datacenter becomes
a 'single point of failure' for the cluster. If it gets cut off from any
other datacenters, the cluster will not be available to those datacenters.
If you want to withstand the failure of datacenters, then you need v
Hi Vijay,
Unfortunately, you won't be able to keep running the observer in the other
DC if the quorum in DC 1 is dead. Most of the folks we have talked to also
want to avoid voting across colos. They usually run two instances of
ZooKeeper in two DCs and copy the state of ZooKeeper (using a bridge).
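There is no stock bridge in the code base, but the shape is simple: a
process with one session to each ensemble that watches data in DC 1 and
re-applies it in DC 2. A rough sketch (hosts and the single znode path are
hypothetical; real code must also wait for SyncConnected, handle connection
loss, children, and ordering):

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class ZkBridge implements Watcher {
    private final ZooKeeper primary;
    private final ZooKeeper backup;
    private final String path;

    public ZkBridge(String primaryHosts, String backupHosts, String path)
            throws Exception {
        this.primary = new ZooKeeper(primaryHosts, 15000, this);
        this.backup = new ZooKeeper(backupHosts, 15000, this);
        this.path = path;
        copy();  // initial sync; also sets the first watch
    }

    // Read the znode in DC 1 (re-arming the watch) and write it to DC 2.
    private void copy() throws KeeperException, InterruptedException {
        byte[] data = primary.getData(path, this, null);
        try {
            backup.setData(path, data, -1);
        } catch (KeeperException.NoNodeException e) {
            backup.create(path, data, ZooDefs.Ids.OPEN_ACL_UNSAFE,
                    CreateMode.PERSISTENT);
        }
    }

    public void process(WatchedEvent event) {
        if (event.getType() == Event.EventType.NodeDataChanged) {
            try {
                copy();
            } catch (Exception e) {
                // A real bridge must re-sync after connection loss here.
                e.printStackTrace();
            }
        }
    }
}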
Hi,
I read about observers in the other datacenter.
My question is: I don't want voting across the datacenters (so I will use
observers), but at the same time, when a DC goes down, I don't want to lose
the cluster. What's the solution for that?
I have to have 3 nodes in the primary DC to tolerate 1 node failure. Tha
Btw, here's an excellent example of these four-letter words being used in a
monitoring application ;-) zktop - http://bit.ly/1iMZdg
Patrick
Patrick Hunt wrote:
ruok basically is polling to see if the ZK process is ok, which it is;
it's just that ZK is not part of a quorum (which is potentially a
problem, but really fine in the sense that it's an expected state).
Hi Thomas, I'm not aware of any (and the stuff I use is not init.d
based); however, if you do create one, please consider contributing it back
to the ZK project. I'd love to include an example in contrib.
Regards,
Patrick
Thomas Koch wrote:
Hi,
does anybody have an init.d script for ZooKeeper lying around, which I
could adapt and include in the Debian package of ZooKeeper?
ruok basically is polling to see if the ZK process is ok, which it is;
it's just that ZK is not part of a quorum (which is potentially a
problem, but really fine in the sense that it's an expected state).
http://hadoop.apache.org/zookeeper/docs/current/zookeeperAdmin.html#sc_zkCommands
stat is
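These commands are plain text over the client port, so you can poll them
from anything; echo ruok | nc host 2181 works, as does a tiny probe like
this sketch (host/port assumed to be the default localhost:2181):

import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;

public class RuokProbe {
    public static void main(String[] args) throws Exception {
        try (Socket s = new Socket("localhost", 2181)) {
            // Send the four-letter command; the server answers and
            // closes the connection.
            OutputStream out = s.getOutputStream();
            out.write("ruok".getBytes("US-ASCII"));
            out.flush();
            InputStream in = s.getInputStream();
            byte[] buf = new byte[64];
            int n = in.read(buf);
            // Prints "imok" when the process is up, whether or not it
            // is currently part of a quorum.
            System.out.println(n > 0
                    ? new String(buf, 0, n, "US-ASCII") : "(no reply)");
        }
    }
}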
Hi Jaakko,
The lock recipe has already been implemented in ZooKeeper under
src/recipes/lock (version 3.*, I think). It has code to deal with
connection loss as well. I would suggest that you use the recipe. You can
file JIRAs in case you see shortcomings/bugs in the code.
Thanks
mahadev
On
Hi,
does anybody have an init.d script for ZooKeeper lying around, which I
could adapt and include in the Debian package of ZooKeeper?
Thanks,
Thomas Koch, http://www.koch.ro
Hi,
I'm trying to provide mutex services through a singleton class
(methods lock and unlock). It basically follows the lock recipe, but I'm
having a problem with how to handle connection loss properly if it happens
during the mutex wait:
pseudocode/snippet:
public class SingletonMutex implements Watcher
{
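To make the question concrete, here is roughly the shape I have in mind (a
sketch with made-up names, not working code): the waiting thread blocks on
a monitor, and process() must wake it on Disconnected/Expired as well as on
the watch event, so the waiter can fail instead of hanging on a dead
session.

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;

// Sketch only: shows the wait/wake-up pattern, not the full lock recipe.
public class SingletonMutex implements Watcher {
    private final Object monitor = new Object();
    private boolean connectionLost = false;

    public void process(WatchedEvent event) {
        synchronized (monitor) {
            if (event.getState() == Event.KeeperState.Disconnected
                    || event.getState() == Event.KeeperState.Expired) {
                connectionLost = true;  // the waiter must not block forever
            }
            monitor.notifyAll();  // let the waiter re-check everything
        }
    }

    // Called by a thread that did not get the lock and is watching its
    // predecessor node. Throws instead of waiting on a dead session.
    public void awaitTurn() throws InterruptedException {
        synchronized (monitor) {
            while (!iAmLockOwner()) {
                if (connectionLost) {
                    throw new IllegalStateException(
                            "connection lost while waiting for the lock");
                }
                monitor.wait();
            }
        }
    }

    private boolean iAmLockOwner() {
        return false;  // placeholder: re-read children, compare sequence numbers
    }
}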