> at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1078)
>
> What I'm able to find here is that the ZooKeeper client is trying to connect
> to the master at port 2181, but according to my configuration all servers are
> listening at...
>
> server.1 clientPort=2184
> server.2 clientPort=2185
> server.3 clientPort=2186
> .
> .
> .
> which means that it is using the default port 2181.
>
> Can anybody tell me what exactly the problem is?
> Is the client process unable to find our configuration?
>
>
> Thanks .
> Sanjiv Singh ( iLabs)
> Impetus Infotech (India).
> Mob :+091-9990-447-339
>
>
>
>
--
Henry Robinson
Software Engineer
Cloudera
415-994-6679
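If the connect string handed to the client lists only host names, the client assumes the default port 2181 for each of them; the nonstandard clientPorts above have to appear explicitly. A minimal zkpython sketch, assuming hosts named master, node2 and node3 (placeholders) with the ports quoted above:

    import time
    import zookeeper

    # Every host:port pair must name the clientPort that server actually binds;
    # a bare host name makes the client fall back to 2181.
    handle = zookeeper.init("master:2184,node2:2185,node3:2186")
    while zookeeper.state(handle) != zookeeper.CONNECTED_STATE:
        time.sleep(0.1)                      # crude wait for the session
    print zookeeper.get_children(handle, "/")
    zookeeper.close(handle)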
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1078)
>
>
> Cheers
> Avinash
>
--
Henry Robinson
Software Engineer
Cloudera
415-994-6679
0 03:08 PM, Michi Mutsuzaki wrote:
> > Hello,
> >
> > I'm looking for a good zookeeper shell. So far I've only used cli_mt (c
> > client), but it's not very user friendly. Are there any alternatives? In
> > particular, I'm looking for:
> >
> each
> > transaction, only leader changes.
> >
> > -Dave Wright
> >
> >
> >
> > On Wed, Aug 25, 2010 at 6:22 PM, Todd Nine
> > wrote:
> > > Do I get any read performance increase (similar to an
> > observer) since
> > > the node will not have a voting role?
> > >
> > >
> >
> >
> >
> >
>
--
Henry Robinson
Software Engineer
Cloudera
415-994-6679
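For reference, since the question above is about non-voting members: an observer is configured in zoo.cfg (ZooKeeper 3.3+), not in client code. A minimal sketch with placeholder host names; peerType goes only in the observer's own config, while the annotated server line must appear in every node's server list:

    # zoo.cfg on the observer itself
    peerType=observer

    # server list, identical on every node in the ensemble
    server.1=zoo1:2888:3888
    server.2=zoo2:2888:3888
    server.3=zoo3:2888:3888
    server.4=obs1:2888:3888:observer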
observer it remains an observer.
> >
> > >
> > > 3. When a new node comes online, it may have a different ip than the
> > > previous node. Do I need to update all node configurations and perform
> > > a rolling restart, or will simply connecting the new node to the
> > > existing ensemble make all nodes aware it is running?
> >
> > Unfortunately ZK doesn't have any kind of dynamic configuration like
> > that currently. You need to update all the config files and restart
> > the ensemble.
> >
> > -Dave Wright
> >
> >
>
--
Henry Robinson
Software Engineer
Cloudera
415-994-6679
s as well as normal voting followers. However, given that writes are
rare, perhaps this kind of overhead would be acceptable?
Henry
> Any insight would be very helpful.
>
> Thanks
> A
>
--
Henry Robinson
Software Engineer
Cloudera
415-994-6679
fixes" this.
>
> My question is, if this really is due to session expiration why is a
> SessionExpiredException not raised? Another question, is there an easy way
> to determine the version of the ZooKeeper Python bindings I'm using? I
> built the 3.3.0 bindings but I just want to be able to verify that.
>
> Thanks for the help,
>
> Rich
--
Henry Robinson
Software Engineer
Cloudera
415-994-6679
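As background for the question above: with the zkpython bindings, expiry is normally delivered to the session watcher as an event, and only calls made on the already-expired handle raise a SessionExpiredException. A minimal sketch of watching for it (the connect string is a placeholder):

    import zookeeper

    def session_watcher(handle, event_type, state, path):
        # Session-level events carry type SESSION_EVENT and an empty path.
        if event_type == zookeeper.SESSION_EVENT:
            if state == zookeeper.EXPIRED_SESSION_STATE:
                print "session expired - this handle is dead, create a new one"
            elif state == zookeeper.CONNECTED_STATE:
                print "connected"

    handle = zookeeper.init("localhost:2181", session_watcher)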
> segfault, hang issues we used to run into with 3.2.1 now show up as
> exit(1).
>
Sorry to hear that - can you open JIRAs with reproducible test cases? We'll
be glad to try and fix the problems you're having.
Henry
--
Henry Robinson
Software Engineer
Cloudera
415-994-6679
hs. Is the 2.6 dependency for real,
> or is it just that the maintainer isn't testing older versions
> any more and thus is unsure?
>
> Thanks,
>
> -- |)aniel Thumim
>
>
--
Henry Robinson
Software Engineer
Cloudera
415-994-6679
ating servers reappears in order to make sure
it sees the view change.
Even then, the problem of 'locating' the cluster still exists in the case
that there are no clients connected to tell anyone about it.
Henry
--
Henry Robinson
Software Engineer
Cloudera
415-994-6679
es it really guarantee that a majority of nodes in
> the user cluster can’t reach the current leader? The same question applies
> to the membership service as well. Because the zookeeper can be partitioned
> from a majority of the nodes in the user cluster. How does the zookeeper
> handle situations like this?
>
> Thanks,
>
> Lei
>
>
>
--
Henry Robinson
Software Engineer
Cloudera
415-994-6679
>
> Anyone know if this is intentional and/or how to fix?
>
> Thanks,
>
> DR
>
--
Henry Robinson
Software Engineer
Cloudera
415-994-6679
as well as hopefully producing some
great code!
cheers,
Henry
--
Henry Robinson
Software Engineer
Cloudera
415-994-6679
ldren(self.zh, self.zparent, self.watcher,
> > > self.handler)
> > >
> > >     def run(self):
> > >         while True:
> > >             pass
> > >
> > >
> > > def main():
> > >     # Allow Ctrl-C
> > >     signal.signal(signal.SIGINT, signal.SIG_DFL)
> > >
> > >     parser = OptionParser()
> > >     parser.add_option('-v', '--verbose',
> > >                       dest='verbose',
> > >                       default=True,
> > >                       action='store_true',
> > >                       help='Verbose logging. (default: %default)')
> > >     parser.add_option('--servers',
> > >                       dest='servers',
> > >                       default='localhost:2181',
> > >                       help='Comma-separated list of host:port pairs. (default: %default)')
> > >     global options
> > >     global args
> > >     (options, args) = parser.parse_args()
> > >
> > >     if options.verbose:
> > >         logger.setLevel(logging.DEBUG)
> > >     else:
> > >         logger.setLevel(logging.INFO)
> > >     formatter = logging.Formatter("%(asctime)s %(filename)s:%(lineno)d - %(message)s")
> > >     stream_handler = logging.StreamHandler()
> > >     stream_handler.setFormatter(formatter)
> > >     logger.addHandler(stream_handler)
> > >
> > >     zktest = ZKTest()
> > >     zktest.daemon = True
> > >     zktest.start()
> > >
> > >
> > > if __name__ == '__main__':
> > >     main()
> > >
> > >
> > > Thanks!
> > > Travis
> > >
> >
> >
> >
> > --
> > Henry Robinson
> > Software Engineer
> > Cloudera
> > 415-994-6679
>
--
Henry Robinson
Software Engineer
Cloudera
415-994-6679
s',
>                   dest='servers',
>                   default='localhost:2181',
>                   help='Comma-separated list of host:port pairs. (default: %default)')
>     global options
>     global args
>     (options, args) = parser.parse_args()
>
>     if options.verbose:
>         logger.setLevel(logging.DEBUG)
>     else:
>         logger.setLevel(logging.INFO)
>     formatter = logging.Formatter("%(asctime)s %(filename)s:%(lineno)d - %(message)s")
>     stream_handler = logging.StreamHandler()
>     stream_handler.setFormatter(formatter)
>     logger.addHandler(stream_handler)
>
>     zktest = ZKTest()
>     zktest.daemon = True
>     zktest.start()
>
>
> if __name__ == '__main__':
>     main()
>
>
> Thanks!
> Travis
>
--
Henry Robinson
Software Engineer
Cloudera
415-994-6679
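A general note on the script being quoted: ZooKeeper watches fire exactly once, so the callback has to re-register itself, and the main thread can block on an event rather than spinning in while True: pass. A rough sketch of that pattern, assuming a /zparent znode (placeholder) already exists:

    import threading
    import zookeeper

    ZPARENT = "/zparent"            # assumed to exist already
    connected = threading.Event()
    shutdown = threading.Event()

    def session_watcher(handle, event_type, state, path):
        if event_type == zookeeper.SESSION_EVENT and state == zookeeper.CONNECTED_STATE:
            connected.set()

    def child_watcher(handle, event_type, state, path):
        # Watches are one-shot: re-arm by asking for the children again.
        if event_type == zookeeper.CHILD_EVENT:
            print "children now:", zookeeper.get_children(handle, ZPARENT, child_watcher)

    handle = zookeeper.init("localhost:2181", session_watcher)
    connected.wait()                                         # wait for the session
    zookeeper.get_children(handle, ZPARENT, child_watcher)   # arm the first watch
    shutdown.wait()                                          # park instead of busy-looping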
Exception
>        at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:656)
>        at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:378)
>        at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:930)
>        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:901)
>
--
Henry Robinson
Software Engineer
Cloudera
415-994-6679
the same failure profile.
cheers,
Henry
> Thanks,
>
> DR
>
--
Henry Robinson
Software Engineer
Cloudera
415-994-6679
failure updating. Is there a convenient way to ensure this
> operation?
> > Can you give me some tips?
> >
> >    I've looked into the source code; there is a tedious way to do it: extend
> > the ZooKeeper instruction set, construct a "createAndUpdate" interface and a
> > txn request, and let the DataTree ensure integrity. Will this work, and is it
> > the only way?
> >
>
--
Henry Robinson
Software Engineer
Cloudera
415-994-6679
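One low-tech alternative worth noting: when the value is known before the node exists, create() takes the initial data, so the node is created and populated in a single atomic step and no follow-up update is needed. A zkpython sketch with placeholder path and value:

    import zookeeper

    OPEN_ACL = [{"perms": zookeeper.PERM_ALL, "scheme": "world", "id": "anyone"}]

    # (waiting for the session to connect omitted for brevity)
    handle = zookeeper.init("localhost:2181")
    # The znode is born already holding its data; no other client can ever
    # observe it in a created-but-not-yet-updated state.
    zookeeper.create(handle, "/config/item", "initial-value", OPEN_ACL, 0)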
10 at 9:42 AM, Karthik K wrote:
>
>> Hi -
>> I am looking to delete a node (say, /katta) from a running zk ensemble
>> altogether and curious if there is any command-line tool that is available
>> that can do a delete.
>>
>> --
>> Karthik.
>>
>
--
Henry Robinson
Software Engineer
Cloudera
415-994-6679
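If the node has children, the stock delete command of that era only removes empty znodes, so a small client-side walk is the usual workaround; a zkpython sketch using /katta from the question:

    import zookeeper

    def delete_recursive(handle, path):
        # Remove the subtree depth-first, then the node itself.
        for child in zookeeper.get_children(handle, path):
            delete_recursive(handle, path + "/" + child)
        zookeeper.delete(handle, path)

    # (waiting for the session to connect omitted for brevity)
    handle = zookeeper.init("localhost:2181")
    delete_recursive(handle, "/katta")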
Dominic Williams <
> thedwilli...@googlemail.com> wrote:
>
> >
> > The current ZooKeeper client holds strong references to Watcher objects.
> I
> > want to change the client so it only holds weak references. Feedback
> > please.
>
--
Henry Robinson
Software Engineer
Cloudera
415-994-6679
> > > So if a node is in the minority but has a more recent committed value,
> > > this node has a veto over the other nodes.
> > >
> >
>
--
Henry Robinson
Software Engineer
Cloudera
415-994-6679
te to disk.
There is no possibility of more replicas being in the system than are
allowed - you start off with a fixed number, and never go above it.
Hope this helps - let me know if you have any further questions!
Henry
--
Henry Robinson
Software Engineer
Cloudera
415-994-6679
On 15 March
e a single instance Zookeeper that operates completely in
> memory, with no network or disk I/O.
>
> This would make it possible to pass one of the memory-only FakeZookeeper's
> into unit tests, while using a real Zookeeper in production code.
>
> Any such "animal"? :-)
ndSet is implemented - neither implementation
is wait-free, but they are still lock-free: some process will always make
progress.
Hope this helps - let me know if you'd like more detail on exactly how to
build this.
Henry
--
Henry Robinson
Software Engineer
Cloudera
415-994-6679
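For the record, the compare-and-set style operation discussed above is usually built on the znode version number: read the data together with its stat, then write back conditionally on that version, retrying on conflict. The retry loop is exactly what makes it lock-free rather than wait-free. A zkpython sketch with a placeholder /counter znode:

    import zookeeper

    def increment(handle, path):
        # Some client always makes progress (lock-free), but any individual
        # client may lose the race and have to retry (so it is not wait-free).
        while True:
            data, stat = zookeeper.get(handle, path)
            try:
                zookeeper.set(handle, path, str(int(data) + 1), stat["version"])
                return
            except zookeeper.BadVersionException:
                continue   # somebody else updated the znode first; re-read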
On 2 March 2010 20:1
inash.laksh...@gmail.com> wrote:
>
> > Why is this important? What breaks down if I have 2 servers with the same
> > myId?
> >
> > Cheers
> > A
> >
>
>
>
> --
> With Regards!
>
> Ye, Qian
>
--
Henry Robinson
Software Engineer
Cloudera
415-994-6679
ephemeral nodes - but with a time delay.
>
> regards,
> Martin
>
--
Henry Robinson
Software Engineer
Cloudera
415-994-6679
> The server that has the log of proposal A would be the
> leader; however, the client was told that proposal A failed.
>
> Do I misunderstand this?
>
>
> On Wed, Jan 27, 2010 at 10:37 AM, Henry Robinson
> wrote:
>
> > Qing -
> >
> > That part of the documen
Qing -
That part of the documentation is slightly confusing. The elected leader
must have the highest zxid that has been written to disk by a quorum of
followers. ZAB makes the guarantee that a proposal which has been logged by
a quorum of followers will eventually be committed. Conversely, any
pr
Qing -
Also, as you pointed out, ZAB requires this FIFO property of the
point-to-point links. Paxos copes with more adversarial networks which allow
reordering and missed messages. It's easy to alter Paxos so as not to
'publish' the results of consensus rounds where there are gaps in the
previous
Sounds like it is definitely worth a JIRA - please do create one!
Keeping the discussion together can focus it, and is much more likely
to lead to patches :)
Henry
2010/1/15 Kay Kay :
> Thanks Mahadev / Flavio for the pointers.
>
> There are definitely some practical scenarios that we feel wou
Hi -
If you put all your voting nodes in one datacenter, that datacenter becomes
a 'single point of failure' for the cluster. If it gets cut off from any
other datacenters, the cluster will not be available to those datacenters.
If you want to withstand the failure of datacenters, then you need v
Hi Adam -
As long as a quorum of servers is running, ZK will be live. With majority
quorums, 2 out of 3 servers is enough to keep going. In general, if fewer than half your
nodes have failed, ZK will keep on keeping on.
The main concern with a cluster of 2/3 machines is that a single further
failure will bring
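(For the arithmetic: with N servers a quorum is floor(N/2) + 1, so a 3-node ensemble stays live with 2 servers and tolerates 1 failure, a 5-node ensemble tolerates 2, and adding a fourth node to a 3-node ensemble buys no extra fault tolerance at all.)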
than this many
bytes.
Thanks again for flagging this up.
cheers,
Henry
On Tue, Dec 15, 2009 at 4:43 PM, Henry Robinson wrote:
> Hey Rich -
>
> That's a really dumb restriction :) I'll open a JIRA and get it fixed asap.
>
> Thanks for the report!
>
> Henry
Hey Rich -
That's a really dumb restriction :) I'll open a JIRA and get it fixed asap.
Thanks for the report!
Henry
On Tue, Dec 15, 2009 at 4:38 PM, Rich Schumacher wrote:
> Hey all,
>
> I'm working on using ZooKeeper for an internal application at Digg. I've
> been using the zkpython package
24 AM, Something Something <
> mailinglist...@gmail.com> wrote:
>
> > Switched to 3.2.1. Much better. Got a command prompt. Thank you both.
> >
> >
> > On Wed, Dec 9, 2009 at 10:09 AM, Henry Robinson wrote:
> >
> >> The 3.2.1 command line
The 3.2.1 command line is a lot nicer (has an actual prompt, tab
auto-completion, shows your connection status etc) - if you can upgrade to
3.2.1 which is a good deal more modern, I would recommend it. If I recall
correctly, there was no prompt in 3.1.1...
Henry
On Wed, Dec 9, 2009 at 9:36 AM, So
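For anyone looking for the shell being discussed: it ships with the distribution, and the 3.2+ version gives a prompt with tab completion. A typical session (the server address is whatever your ensemble uses):

    $ bin/zkCli.sh -server localhost:2181
    ls /
    create /demo hello
    get /demo
    set /demo world
    delete /demo
    quit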
Niemeyer wrote:
>
> r881882 | mahadev | 2009-11-18 13:06:39 -0600 (Wed, 18 Nov 2009) | 1 line
>
> ZOOKEEPER-368. Observers: core functionality (henry robinson via mahadev)
>
>
> Sweet! Congratulations, and thanks Henry.
>
>
> --
> Gustavo Niemeyer
> http://niemeyer.net
>
Hi Gustavo -
I can't speak as to the other JIRAs, but ZK-107 (dynamic membership) is
still being worked on by me. This is a very large change to the ZK codebase,
so I can't see it getting in really before 4.0, although the committers may
view things differently.
If you have a pressing need for th
At the same event, I gave a presentation on two JIRAs I've been working on -
observers and dynamic ensembles. The slides are up on Slideshare here:
http://www.slideshare.net/cloudera/zookeeper-futures, and I will try to get
them uploaded to the wiki page.
I was also able to announce ZooKeeper pack
des in the next day or so. Hope some of you can make it!
cheers,
Henry
Henry Robinson
Software Engineer
Cloudera
Hi -
Yes there are future plans. See
https://issues.apache.org/jira/browse/ZOOKEEPER-107. I have code written for
this that works but is not rock-solid yet.
cheers,
Henry
On Thu, Nov 5, 2009 at 11:02 AM, Avinash Lakshman <
avinash.laksh...@gmail.com> wrote:
> Hi All
>
> Is it possible to remove
> /home/mark/zookeeper
> with the numbers 1 and 2 respectively
> -Original Message-
> From: Henry Robinson [mailto:he...@cloudera.com]
> Sent: Thursday, October 22, 2009 5:43 PM
> To: zookeeper-user@hadoop.apache.org
> Subject: Re: Cluster Configuration Issues
>
>
Hi Mark -
The Python error relates to not being able to find the zoocfg module - is
zoocfg.py in the same directory as zkconf.py?
Another couple of questions - are you running zookeeper as the same user who
created myid? Can you post your entire configuration file please - copy and
paste?
Henry
Hi Mark -
You should create the myid file yourself, as you have done. What errors are
you seeing that lead you to think the id is not being read correctly?
cheers,
Henry
On Tue, Oct 20, 2009 at 10:12 AM, Mark Vigeant wrote:
> Hey-
>
> So I'm trying to run hbase on 4 nodes, and in order to do t
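To summarise the myid convention under discussion: the file lives in dataDir (here /home/mark/zookeeper, as in the thread), contains nothing but the server's own number, and that number must match the server.N line for that host in every node's config. A sketch with placeholder host names:

    # on server 1
    echo 1 > /home/mark/zookeeper/myid
    # on server 2
    echo 2 > /home/mark/zookeeper/myid

    # zoo.cfg, identical on every node
    dataDir=/home/mark/zookeeper
    server.1=host1:2888:3888
    server.2=host2:2888:3888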
Hi Steven -
I also see that problem if I build on my Mac sometimes. I'm looking into a
proper fix, but for now you can do:
ant compile
sudo python src/python/setup.py install
to build and install manually. If you have a moment, can you let me know
which ant you are using? (ant -version)
Thanks
if you saw any errors when building the python
module or C module, send them along.
Let me know how you get on!
Henry
On Wed, Sep 23, 2009 at 12:07 AM, Patrick Hunt wrote:
> Erik, I think you ran into this:
> https://issues.apache.org/jira/browse/ZOOKEEPER-420
>
> Henry Robinson from
er, and read-only for others.
> The purpose of this would be for when the WAN link goes down to the
> "master" ZKs for certain types of use cases - status updates or other
> changes local to the observer that are strictly read-only outside the
> Observer's 'realm
n observer could "own" a special
> > node and its subnodes. Only these subnodes would be writable by the
> > observer when there was a session break to the master cluster, and the
> > master cluster would take all the changes when the link is
> > reestablished. Ess
You can. See ZOOKEEPER-368 - at first glance it sounds like observers will
be a good fit for your requirements.
Do bear in mind that the patch on the jira is only for discussion purposes;
I would not consider it currently fit for production use. I hope to put up a
much better patch this week.
Henry
survivor set and the set of laggards, then
> you won't have any data loss at all.
>
> You have to decide if this is too much risk for you. My feeling is that it
> is OK level of correctness for conventional weapon fire control, but not
> for
> nuclear weapons safeguards.
On Mon, Jul 6, 2009 at 7:38 PM, Ted Dunning wrote:
>
> I think that the misunderstanding is that this on-disk image is critical to
> cluster function. It is not critical because it is replicated to all
> cluster members. This means that any member can disappear and a new
> instance can replace
Hi Gustavo -
I hope to have a patch for both fairly soon. I should at least get ZK-368 to
a workable position this week, and ZK-107 will hopefully not be an enormous
amount of work on top of that. However, there will doubtless be some slack time
for picking up bugs etc. before it gets committed as it w
Hi Maxime -
When a quorum of ZooKeeper servers have failed, the service stops being
available - you cannot write or read to any item. Once a quorum returns to
operation, the ensemble recovers automatically and continues where it left
off. There is the same requirement that a quorum of servers must
What else do you want to use ZK for - just leader election? It doesn't
require so much a centralised server (which implies kind of a single point
of failure) as a small amount of fixed infrastructure. If you have a highly
dynamic network - an ad-hoc network like a social net - ZK will likely not
be
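On the leader-election point: the standard recipe needs nothing beyond ephemeral sequential znodes. A compressed zkpython sketch, assuming an /election parent (placeholder) already exists; a production version would also watch the znode just ahead of its own to avoid a thundering herd:

    import zookeeper

    OPEN_ACL = [{"perms": zookeeper.PERM_ALL, "scheme": "world", "id": "anyone"}]

    def join_election(handle):
        # Each candidate creates an ephemeral, sequential znode; the lowest
        # sequence number wins. If the leader dies, its znode vanishes and the
        # next-lowest candidate takes over.
        me = zookeeper.create(handle, "/election/n_", "", OPEN_ACL,
                              zookeeper.EPHEMERAL | zookeeper.SEQUENCE)
        children = sorted(zookeeper.get_children(handle, "/election"))
        return me.endswith(children[0])   # True if this client is now the leader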
Hi Harold,
Each ZooKeeper server stores updates to znodes in logfiles, and periodic
snapshots of the state of the datatree in snapshot files.
A user who has the same permissions as the server will be able to read these
files, and can therefore recover the state of the datatree without the ZK
serv
+1 to this idea. It will be good to have some more focus on examples of how
to build applications using ZK; experiences here will feed back into the
design of the core.
Henry
On Tue, Jun 23, 2009 at 2:23 AM, Mahadev Konar wrote:
> Hi Stefan,
> This would be a good addition. Feel free to open a
Hi Satish -
As you've found out, you can set multiple identical watches per znode - the
zookeeper client will not detect identical watches in case you really meant
to call them several times. There's no way currently, as far as I know, to
clear the watches once they've been set. So your options ar
On Wed, Jun 3, 2009 at 5:57 PM, Eric Bowman wrote:
> At some point I'll spend some time understanding how this really affects
> latency in my case ... I'm keeping just a handful of things that are
> about 10M in the ensemble, so the memory footprint is no problem. But
> the network bandwidth cou
On Wed, Jun 3, 2009 at 5:27 PM, Eric Bowman wrote:
>
> Anybody have any experience popping this up a bit bigger? What kind of
> bad things happen?
>
I don't have personal experience of upping this restriction. However, my
understanding is that if data sizes get large, writing them to network an
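If the limit under discussion is the default cap of roughly 1 MB per znode, it is controlled by the jute.maxbuffer Java system property and has to be raised consistently on the servers and on every client, for example:

    # ~4 MB instead of the default ~1 MB; must match on servers and clients
    export JVMFLAGS="-Djute.maxbuffer=4194304"
    bin/zkServer.sh start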
Hi -
This is designed behaviour. In the latest version, the exception
thrown will be labeled "Responded to info probe". The server
disconnects connections that send four-letter commands deliberately -
I'm guessing because these tend to be one-shot commands and keeping a
socket around indefinitely
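For context, the four-letter words are designed as one-shot probes over a throwaway connection, which is why the server closes the socket afterwards:

    $ echo ruok | nc localhost 2181
    imok

Commands such as stat work the same way: the server answers and then drops the connection by design.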