Hi,
I am on Kafka 2.8.2 and looking for a way to check, in Java, whether a
topic exists.
I saw there is a method, TopicCommand.listTopics, but its return type is
void.
Is there an API to check topic existence? (preferred arguments: (zkconnect,
topic))
Thanks in advance!
Hi Jiangjie,
There is nothing of note in the controller log. I've attached that log
along with the state change log in the following gist:
https://gist.github.com/banker/78b56a3a5246b25ace4c
This represents a 2-hour period on April 15th.
Since I've disabled the broker in question (on April
Mingtao,
I think you are looking at the Scala version, 2.8.2. Check your Kafka
version from the artifact name, kafka_2.8.2-${version}. In 0.8.2 we have
AdminUtils.topicExists(zkClient, topic).
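From Java that lookup can be sketched roughly like this (assuming Kafka 0.8.2's kafka.admin.AdminUtils on the classpath and a reachable ZooKeeper; the connect string and topic name are placeholders, not from your setup):

```java
import kafka.admin.AdminUtils;
import kafka.utils.ZKStringSerializer$;
import org.I0Itec.zkclient.ZkClient;

public class TopicExistsCheck {
    public static void main(String[] args) {
        // The serializer matters: Kafka writes its ZK nodes with its own
        // string serializer, so a plain ZkClient would misread the paths.
        ZkClient zkClient = new ZkClient("localhost:2181", 10000, 10000,
                ZKStringSerializer$.MODULE$);
        try {
            boolean exists = AdminUtils.topicExists(zkClient, "my-topic");
            System.out.println("topic exists: " + exists);
        } finally {
            zkClient.close();
        }
    }
}
```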
Thanks,
Harsha
On April 20, 2015 at 2:20:27 PM, Mingtao Zhang (mail2ming...@gmail.com) wrote:
Hi,
I am on
I have to perform frequent re-deployments, and I run into the offset
problem:
Unable to Receive Message:kafka server: The requested offset is outside the
range of offsets maintained by the server for the given topic/partition
I tried to re-create topics and reset topic metadata in zookeeper from
Hi Jun,
I am using GSS-API, the underlying protocol that SASL also uses. I
can add details about LDAP/AD. For AD, this is in general the integration of
AD with Kerberos, i.e. Kerberos can talk to AD to obtain the kinit login
credentials (more of a setup detail between Kerberos and AD).
Hi Naidu
You'll need to escape the quotes with a \ in the MBean names. I've run across
this too and it was a real pain. It can get a bit tricky if you're doing it in
code because you need to account for double escapes and so forth. This is a
bug in the version of Metrics that Kafka is using. There is a
You can use this
https://github.com/Stackdriver/jmxtrans-config-stackdriver/blob/master/jmxtrans/stackdriver/json-specify-instance/kafka.json
as an example of how MBeans are named and how they are escaped with \,
and just use a different output writer for anything prior to the 0.8.2.1
version. After
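For the in-code case, the JDK's own ObjectName.quote handles that escaping for you; a small sketch (the metric name below is just an illustration of the quoted names Kafka's Metrics library produces):

```java
import javax.management.ObjectName;

public class MBeanNameQuoting {
    public static void main(String[] args) throws Exception {
        // Kafka 0.8.x metric MBeans carry quoted name values; quote() adds the
        // surrounding quotes and backslash-escapes ", \, ?, * and newlines.
        String quoted = ObjectName.quote("my-topic-BytesInPerSec");
        ObjectName name = new ObjectName(
                "kafka.server:type=BrokerTopicMetrics,name=" + quoted);
        System.out.println(name.getCanonicalName());
    }
}
```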
Hi, all
Our kafka server encountered an unexpected shutdown last night. I checked
the logs and they report a conflicting ephemeral node, but I can't tell what
the issue is. Can anyone help check this? Thanks!
[2015-04-21 03:00:00,562] INFO conflict in /brokers/ids/0 data:
Thanks for your response gharatmayuresh1, but I don't know what you mean
exactly. I have restarted my server, and I want to find out the cause in case
it happens again.
2015-04-21 11:36 GMT+08:00 gharatmayures...@gmail.com:
Try bouncing
10.144.38.185
This should resolve the issue.
Thanks,
1. getOffsetsBefore is per partition, not per consumer group. The
SimpleConsumer is completely unaware of consumer groups.
2. offsets closest to specified timestamp per topic-consumerGroup -
I'm not sure I understand what you mean. Each consumer group persists
one offset per partition, that's the
I believe it doesn't take consumers into account at all. Just the
offset available on the partition. Why would you need it to?
On Mon, Apr 20, 2015 at 3:46 AM, Alexey Borschenko
aborsche...@elance-odesk.com wrote:
You can also send a FetchOffsetRequest and check for the last
available offset
Try bouncing
10.144.38.185
This should resolve the issue.
Thanks,
Mayuresh
Sent from my iPhone
On Apr 20, 2015, at 8:22 PM, 小宇 mocking...@gmail.com wrote:
10.144.38.185
Hi,
Could anyone help with this?
Thanks.
On Sun, Apr 19, 2015 at 12:58 AM, Achanta Vamsi Subhash
achanta.va...@flipkart.com wrote:
Hi,
How often does Kafka query zookeeper while producing and consuming?
Ex:
If there is a single partition to which we produce and a HighLevel
consumer
Mike,
The endless rebalance errors occur due to the error that Mayuresh just pasted.
The rebalance attempts fail because of the conflict in the zkNode.
Below is the exact trace.
2014-12-09 13:22:11 k.u.ZkUtils$ [INFO] I wrote this conflicted ephemeral
node
Does *SimpleConsumer.getOffsetsBefore* take into account consumer-group-specific
offsets?
I need to be able to get offsets closest to specified timestamp per
topic-consumerGroup.
Is there any way to achieve this using Kafka API ?
Thanx!
You can also send a FetchOffsetRequest and check for the last
available offset (log end offset) - this way you won't have to send a
fetch request that is likely to fail.
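A rough sketch of that per-partition lookup against the 0.8.x SimpleConsumer API (the broker host/port, topic, and client id below are placeholders, and this needs a live broker to run):

```java
import java.util.Collections;

import kafka.api.PartitionOffsetRequestInfo;
import kafka.common.TopicAndPartition;
import kafka.javaapi.OffsetRequest;
import kafka.javaapi.OffsetResponse;
import kafka.javaapi.consumer.SimpleConsumer;

public class OffsetLookup {
    public static void main(String[] args) {
        SimpleConsumer consumer =
                new SimpleConsumer("localhost", 9092, 100000, 64 * 1024, "offset-lookup");
        try {
            TopicAndPartition tp = new TopicAndPartition("my-topic", 0);
            // Any timestamp works here; the magic values LatestTime()/EarliestTime()
            // ask for the ends of the log instead of a wall-clock time.
            long target = kafka.api.OffsetRequest.LatestTime(); // log end offset
            OffsetRequest request = new OffsetRequest(
                    Collections.singletonMap(tp, new PartitionOffsetRequestInfo(target, 1)),
                    kafka.api.OffsetRequest.CurrentVersion(), "offset-lookup");
            OffsetResponse response = consumer.getOffsetsBefore(request);
            long[] offsets = response.offsets("my-topic", 0);
            System.out.println("log end offset: " + offsets[0]);
        } finally {
            consumer.close();
        }
    }
}
```

Note the request is keyed by TopicAndPartition only; there is no consumer group anywhere in it, which is why the result ignores committed group offsets.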
Does this take into account the consumer-specific offsets stored in Zookeeper?
On Fri, Apr 17, 2015 at 5:57 PM, Gwen Shapira
Warren
I am looking into https://gist.github.com/asmaier/6465468; it's not
compiling on my box. I am using kafka-0.8.2.1, and I am writing my test
cases in TestNG Java code.
On Mon, Apr 20, 2015 at 11:33 AM, Warren Henning warren.henn...@gmail.com
wrote:
Also consider whether you
Hi,
I updated KIP-12 with more details. Please take a look:
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=51809888
Thanks,
Harsha
On February 11, 2015 at 10:02:43 AM, Harsha (ka...@harsha.io) wrote:
Thanks Joe. It will be part of KafkaServer and will run on its own
I meant to reach spark users... sorry about the noise.
On Mon, Apr 20, 2015 at 8:44 AM, Jean-Pascal Billaud j...@tellapart.com
wrote:
Hi,
I am getting this serialization exception and I am not too sure what
"Graph is unexpectedly null when DStream is being serialized" means.
15/04/20
I am using the kafka 0.8.1/zookeeper combination. Can someone advise what the
proper procedure would be to fix the issue with producer/consumer
offsets going out of range when a consumer has to be restarted? Do I need
to delete metadata pertaining to topics? What would be the proper way of
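One common mitigation is to let the high-level consumer reset itself when its stored offset is outside the retained range. A sketch (property names are from the 0.8.x consumer config; the connect string and group id are placeholders):

```java
import java.util.Properties;

public class OffsetResetConfig {
    public static Properties consumerProps() {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181");
        props.put("group.id", "my-group");
        // When the committed offset is outside the broker's retained range,
        // jump to the oldest ("smallest") or newest ("largest") available
        // offset instead of failing with OffsetOutOfRange:
        props.put("auto.offset.reset", "smallest");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerProps());
    }
}
```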
Producers usually do not query zookeeper at all.
Consumers usually query zookeeper at startup or on rebalance. It is
supposed to be infrequent if you don't have consumers come and go all the
time. One exception is that if you are using zookeeper-based consumer
offset commit, it will commit offset
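That zookeeper-side commit rate is governed by the consumer's auto-commit settings; a sketch with 0.8.x property names (values are illustrative, and 60000 ms is the documented default interval):

```java
import java.util.Properties;

public class ZkCommitConfig {
    public static Properties consumerProps() {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181");
        props.put("group.id", "my-group");
        // With zookeeper-based offset storage, each auto-commit writes the
        // group's offsets into ZK at this interval:
        props.put("offsets.storage", "zookeeper");
        props.put("auto.commit.enable", "true");
        props.put("auto.commit.interval.ms", "60000");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerProps());
    }
}
```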
Hi,
I am getting this serialization exception and I am not too sure what "Graph
is unexpectedly null when DStream is being serialized" means.
15/04/20 06:12:38 INFO yarn.ApplicationMaster: Final app status: FAILED,
exitCode: 15, (reason: User class threw exception: Task not serializable)
Exception
We are planning to monitor the health of our Kafka environment using JMX. I have
looked at the links below to find what is available via
https://cwiki.apache.org/confluence/display/KAFKA/Available+Metrics
https://cwiki.apache.org/confluence/display/KAFKA/JMX+Reporters
why some of the kafka objects
Also consider whether you could get away with mocking out the Kafka
broker, depending on what/how you like to test.
On Sun, Apr 19, 2015 at 10:47 PM, sunil kalva kalva.ka...@gmail.com wrote:
Hi
Has anyone tried running zookeeper and kafka locally, which can be useful for
automating the
Hi, Harsha,
For SASL, a common use case is the integration with LDAP/AD. For
completeness, could you describe (or provide a link) how such integration
can be done?
Also, what about the SSL support? Do you plan to describe it in the same
KIP or a separate one?
Thanks,
Jun
On Mon, Apr 20, 2015
I run Kafka and Zookeeper embedded inside my setup phase when running junit
4.x tests.
On 20 Apr 2015 at 08:04, Warren Henning warren.henn...@gmail.com wrote:
Also consider whether you could get away with mocking out the Kafka
broker, depending on what/how you like to test.
On Sun, Apr 19,
Kjell Tore,
Is it possible to share your code base for doing this?
On Mon, Apr 20, 2015 at 12:04 PM, Kjell Tore Fossbakk kjellt...@gmail.com
wrote:
I run Kafka and Zookeeper embedded inside my setup phase when running junit
4.x tests.
On 20 Apr 2015 at 08:04, Warren Henning
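In case it helps while waiting for Kjell Tore's code, here is a rough sketch of that embedded setup (class names are from the 0.8.x kafka and zookeeper jars; the ports and directories are illustrative, not from his code base):

```java
import java.io.File;
import java.net.InetSocketAddress;
import java.util.Properties;

import kafka.server.KafkaConfig;
import kafka.server.KafkaServerStartable;
import org.apache.zookeeper.server.NIOServerCnxnFactory;
import org.apache.zookeeper.server.ZooKeeperServer;

public class EmbeddedKafka {
    private NIOServerCnxnFactory zkFactory;
    private KafkaServerStartable kafka;

    // Call from your @Before/@BeforeClass setup phase.
    public void start() throws Exception {
        // 1. Embedded ZooKeeper on a test port.
        ZooKeeperServer zk = new ZooKeeperServer(
                new File("target/zk-snap"), new File("target/zk-log"), 500);
        zkFactory = new NIOServerCnxnFactory();
        zkFactory.configure(new InetSocketAddress(21810), 16);
        zkFactory.startup(zk);

        // 2. Embedded Kafka broker pointing at it.
        Properties props = new Properties();
        props.put("broker.id", "0");
        props.put("port", "9092");
        props.put("log.dirs", "target/kafka-logs");
        props.put("zookeeper.connect", "localhost:21810");
        kafka = new KafkaServerStartable(new KafkaConfig(props));
        kafka.startup();
    }

    // Call from your @After/@AfterClass teardown.
    public void stop() {
        if (kafka != null) kafka.shutdown();
        if (zkFactory != null) zkFactory.shutdown();
    }
}
```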