No, broker 5 is alive, with this in its log:
[2014-06-11 13:59:45,170] ERROR Conditional update of path
/brokers/topics/topicTRACE/partitions/0/state with data
{"controller_epoch":1,"leader":5,"version":1,"leader_epoch":0,"isr":[5]} and
expected version 2 failed due to
org.apache.zookeeper.KeeperException
The info from kafka-topics is the correct one. Is broker 5 dead? It seems
that you can issue a metadata request to it.
Thanks,
Jun
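(For reference, one way to check what ZooKeeper currently holds for that partition and for broker 5's registration, assuming the bundled zookeeper-shell.sh and ZooKeeper on localhost:2181, would be something like:

    bin/zookeeper-shell.sh localhost:2181
    get /brokers/topics/topicTRACE/partitions/0/state
    get /brokers/ids/5

The first get shows the leader/ISR state the controller failed to update; the second shows whether broker 5 is still registered.)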
On Tue, Jun 10, 2014 at 8:26 PM, Bongyeon Kim
wrote:
> Yes, it is, with some WARN logs.
>
> And I found something interesting, separate from what I mentioned before.
> I have an
Could you use the kafka-topics command to describe test2 and see if the
leader is available?
Thanks,
Jun
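(For reference, with the 0.8.x tooling that would be something like the following, assuming ZooKeeper on localhost:2181; the output lists the leader, replicas and ISR per partition:

    bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic test2
)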
On Tue, Jun 10, 2014 at 11:04 AM, Prakash Gowri Shankor <
prakash.shan...@gmail.com> wrote:
> Hi,
>
> I am running a cluster with a single broker, the performance producer
> script and 3 con
Could you file a jira to track this?
Thanks,
Jun
On Tue, Jun 10, 2014 at 8:22 AM, András Serény
wrote:
> Hi Kafka devs,
>
> are there currently any plans to implement the global threshold feature?
> Is there a JIRA about it?
>
> We are considering implementing a solution for this issue (eithe
Hello All,
Is there a kafka splunk loader/consumer/subscriber? I read in the
powered-by page for kafka, that square outputs kafka logs to splunk. Does
anyone know how this is done?
thanks,
Clay
Yes, it is, with some WARN logs.
And I found something interesting, separate from what I mentioned before.
I have another cluster. I run 2 brokers on 1 machine for testing, and I see the same
problem I mentioned before, but I can't see any error log in controller.log.
At this time, when I list topics with kafka-t
We could extend the existing metadata to include a Kerberos-style token,
whichever scheme is used. This would mean creating a producer or consumer with
a security context, and session negotiation would result in a token. It may be
a lease. Both of our modules would authenticate and authorize t
Thanks Robert and Kafka team for the detailed discussion! Unfortunately I
have been tied up with some production release issues since late last week
and haven't had a chance to weigh in, but I am very interested on the
topic. I promise to respond to the questions and comments this week.
Jonathan
Yes, I agree. There are definitely a variety of use cases that demand
differing levels of complexity here. It comes back to enabling the
development of at-rest encryption and making it as easy as possible to
implement within the Kafka system. I think that this can be done with the
concept of messag
What strikes me as an opportunity is to define a pluggable at-rest encryption
module interface that supports both of our security needs.
Thanks,
Rob
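To make the idea concrete, here is a purely hypothetical sketch of what such an interface could look like. Nothing like this exists in Kafka today, and every name below is made up for illustration only:

    // Hypothetical sketch only -- not an existing Kafka API.
    // One possible shape for a pluggable at-rest encryption module.
    public interface MessageEncryptionModule {

        // Opaque handle for negotiated key material; could wrap a
        // Kerberos-style token or a lease, as discussed earlier in the thread.
        class EncryptionContext {
            public final byte[] token;
            public EncryptionContext(byte[] token) { this.token = token; }
        }

        // Authenticate/authorize and obtain key material for a topic.
        EncryptionContext negotiate(String topic);

        // Encrypt a serialized payload before it is appended to the log.
        byte[] encrypt(EncryptionContext ctx, byte[] plaintext);

        // Decrypt a payload on consumption, using the same context.
        byte[] decrypt(EncryptionContext ctx, byte[] ciphertext);
    }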
> On Jun 10, 2014, at 4:01 PM, Todd Palino wrote:
>
> The situation of production before having the consumer is definitely a
> good one. T
The situation of production before having the consumer is definitely a
good one. That’s why I wanted to take a little time before responding. Had
to think about it.
I think that while we may certainly produce data before the consumer is
ready, that doesn’t mean that the consumer can’t have a key p
Do you by any chance have steps to reproduce this issue easily?
On Tue, Jun 10, 2014 at 02:23:20PM -0700, Prakash Gowri Shankor wrote:
> No, I have not used the delete topic feature. I have been manually deleting
> the topics from zookeeper and removing the topic from the kafka and zk logs.
> I've
Any exceptions you saw on the controller log?
On Tue, Jun 10, 2014 at 2:23 PM, Prakash Gowri Shankor <
prakash.shan...@gmail.com> wrote:
> No, I have not used the delete topic feature. I have been manually deleting
> the topics from zookeeper and removing the topic from the kafka and zk
> logs.
>
The broker logs that message whenever it closes a connection to a client
(e.g., if there was an error or the client decides to close its
connection after talking to the broker).
If you enable TRACE level on kafka.request.logger you can see
precisely what requests are coming in.
Joel
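(For reference, in the stock config/log4j.properties that would be something like the following, assuming the requestAppender defined in that file:

    log4j.logger.kafka.request.logger=TRACE, requestAppender
    log4j.additivity.kafka.request.logger=false
)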
On Tue, Jun 10, 2014
No, I have not used the delete topic feature. I have been manually deleting
the topics from zookeeper and removing the topic from the kafka and zk logs.
I've experimented a bit more. It seems like this occurs when I have a
single broker running. When I restart with 2 brokers, the problem goes away.
Did you use the delete topic command?
That was an experimental feature in the 0.8.1 release with several
bugs. The fixes are all on trunk, but those fixes did not make it into
0.8.1.1 - except for a config option to disable delete-topic support
on the broker.
Joel
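(For reference, that broker setting is the delete.topic.enable property in server.properties; the default may vary by release, so check yours:

    # server.properties -- keep the experimental delete-topic code path switched off
    delete.topic.enable=false
)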
On Tue, Jun 10, 2014 at 01:07:4
From the moment it starts occurring, it is persistent. Restarts don't seem to
make it go away. The only thing that makes it go away is following the
steps listed in this Stack Overflow thread:
http://stackoverflow.com/questions/23228222/running-into-leadernotavailableexception-when-using-kafka-0-8-1
Hello Prakash,
Is this exception transient or persistent on broker startup?
Guozhang
On Tue, Jun 10, 2014 at 11:04 AM, Prakash Gowri Shankor <
prakash.shan...@gmail.com> wrote:
> Hi,
>
> I am running a cluster with a single broker, the performance producer
> script and 3 consumers.
> On a fres
Hi all!
I have 3 brokers and I keep getting this message:
[kafka-processor-9092-0] INFO kafka.network.Processor - Closing socket
connection to /172.23.44.4.
or
[kafka-processor-9092-1] INFO kafka.network.Processor - Closing socket
connection to /172.23.44.4.
Does anyone know what that means?
Yes, reducing the refresh interval to 100ms will cause it to try to select
another partition every 100ms, though not necessarily a different partition,
since it just picks the next random int % num.partitions.
Setting the key can also resolve this issue, as long as the key values are
evenly distribute
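As a minimal sketch of a keyed send with the 0.8 producer API (the broker list, topic and key below are just placeholders), with the default partitioner the partition is then chosen from the key's hash rather than the periodically refreshed random pick:

    import java.util.Properties;
    import kafka.javaapi.producer.Producer;
    import kafka.producer.KeyedMessage;
    import kafka.producer.ProducerConfig;

    public class KeyedSendExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("metadata.broker.list", "localhost:9092");   // placeholder
            props.put("serializer.class", "kafka.serializer.StringEncoder");

            Producer<String, String> producer =
                new Producer<String, String>(new ProducerConfig(props));

            // The key ("user-42") is hashed by the default partitioner to pick
            // the partition, so sends spread across partitions per key.
            producer.send(new KeyedMessage<String, String>("test2", "user-42", "hello"));
            producer.close();
        }
    }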
Hi,
I am running a cluster with a single broker, the performance producer
script and 3 consumers.
On a fresh start of the cluster, the producer throws this exception.
I was able to run this cluster on the same topic (test2)
successfully the first time.
The solution (from stackover
Can you please tell me how to set this property?
topic.metadata.refresh.interval.ms
Is a value of 100 low enough to solve this issue?
I'm guessing I can set it to 100 and restart the command line producer and
the partitioning should work? Please confirm.
Thanks
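(For reference, with the 0.8.x producer it is an ordinary producer config property; how to hand it to the console producer depends on your version, so this is just the property itself:

    # producer config -- refresh topic metadata (and re-pick the random partition
    # for keyless sends) every 100 ms instead of the default 10 minutes
    topic.metadata.refresh.interval.ms=100
)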
On Mon, Jun 9, 2014 at 5:09 PM,
Hi Kafka devs,
are there currently any plans to implement the global threshold feature?
Is there a JIRA about it?
We are considering implementing a solution for this issue (either inside
or outside of Kafka).
Thanks a lot,
András
On 5/30/2014 11:45 AM, András Serény wrote:
Sorry for the
Another way would be to have your custom decoder return an object that can
be recognized as an error.
We have a decoder that splits binary data into a series of records. If any
part of the binary data is corrupt, the decoder can be configured to either
throw an exception or add an "error record" t
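A minimal sketch of that idea against the 0.8 consumer's Decoder interface follows; the Record type and the lenient behaviour are just illustrative, not the actual decoder described above:

    import kafka.serializer.Decoder;

    // Illustrative only: wrap the payload in a small record type and flag
    // corrupt input instead of throwing, so the consumer loop can skip or
    // count bad messages.
    public class LenientDecoder implements Decoder<LenientDecoder.Record> {

        public static class Record {
            public final byte[] payload;
            public final boolean error;
            public Record(byte[] payload, boolean error) {
                this.payload = payload;
                this.error = error;
            }
        }

        @Override
        public Record fromBytes(byte[] bytes) {
            try {
                // real parsing/splitting of the binary data would happen here
                return new Record(bytes, false);
            } catch (RuntimeException e) {
                // corrupt data: surface an error record rather than failing the stream
                return new Record(bytes, true);
            }
        }
    }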
Ok. Was this host (broker id:1,host:c-ccp-tk1-a58,port:9091) up when the
controller had SocketTimeoutException?
Thanks,
Jun
On Mon, Jun 9, 2014 at 10:11 PM, Bongyeon Kim
wrote:
> No, I can't see any ZK session expiration log.
>
> What do I have to do to prevent this? Increasing '
> zookeeper.sessi