Hey Guven,
A heartbeat API actually came up in the discussion of KIP-41. Ultimately we
rejected it because it led to confusing API semantics. The problem is that
heartbeat responses are used by the coordinator to tell consumers when a
rebalance is needed. But what should the user do if they call
unclean.leader.election.enable is actually a valid topic-level configuration. I
opened https://issues.apache.org/jira/browse/KAFKA-3298 to get the
documentation updated.
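As a quick sketch, overriding it for a single topic with the kafka-configs tool might look like the following (the topic name and ZooKeeper address here are illustrative):

```shell
# Override unclean.leader.election.enable for one topic only
# (topic name "my-topic" and ZooKeeper address are illustrative)
bin/kafka-configs.sh --zookeeper localhost:2181 \
  --entity-type topics --entity-name my-topic \
  --alter --add-config unclean.leader.election.enable=true

# Verify the override took effect
bin/kafka-configs.sh --zookeeper localhost:2181 \
  --entity-type topics --entity-name my-topic --describe
```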
That code comment doesn’t tell the complete story and could probably be updated
for clarity as we’ve learned a lot since
You can fetch messages by offset.
https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol#AGuideToTheKafkaProtocol-FetchRequest
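As a concrete sketch using the console consumer (the --partition/--offset flags assume a newer Kafka release; the broker address, topic name, and offset are illustrative):

```shell
# Start reading partition 0 of "my-topic" at offset 42
# (broker address, topic, and offset are illustrative)
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic my-topic --partition 0 --offset 42
```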
On Fri, Feb 26, 2016 at 7:23 AM, rahul shukla wrote:
> Hello,
> I am working on an SNMP trap parsing project for my academics.
Hi Nishant,
You could use SASL authentication and authorization (ACLs) to control
access to topics. In your use case, you would require authentication and
control which principals have access to which consumer groups. These
features are available in 0.9 but not 0.8. Here are some resources:
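As a sketch of the authorization side, granting a principal read access to a topic and a consumer group with the ACL CLI might look like this (the principal, topic, group, and ZooKeeper address are all illustrative):

```shell
# Allow principal "alice" to read from topic "snmp-traps"
# (all names and the ZooKeeper address are illustrative)
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal User:alice \
  --operation Read --topic snmp-traps

# Allow the same principal to join consumer group "trap-processors"
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal User:alice \
  --operation Read --group trap-processors
```

With ACLs like these in place, only the listed principals can consume from the topic or commit offsets under that group.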
Hi Anatoliy,
We labelled 0.9.0.0 as beta because it's a lot of new code and we want to:
1. Give our users a chance to test it and give us feedback
2. Do additional testing ourselves
0.9.0.1 has fixes for all the security issues we became aware of after the
0.9.0.0 release, but we haven't removed the
Hello,
I am working on an SNMP trap parsing project for my academics. I am using the
Kafka messaging system in my project. Actually, I want to store the trap
object received from the SNMP agent in Kafka, and retrieve that object on
the other side for further processing.
So, my query is: is there any way to store a
Hi team,
What is the best way to verify that a specific Kafka node is functioning
properly? Telnetting to the port is one approach, but I don't think it tells me
whether or not the broker can still receive/send traffic. I am thinking of
asking the broker for metadata using consumer.partitionsFor. If it
We are building a Kafka queue into which messages are published from a
source system. Multiple consumers can then connect to this queue to read
messages.
While doing this, the consumers have to specify a groupId, based on which the
messages are distributed. If two apps have the same groupId, both
Thanks for the response, Jason.
I've already experimented with a similar solution myself, lowering
max.partition.fetch.bytes to barely fit the largest message (2k at the moment).
Still, I've observed similar problems, caused by really long
processing times, e.g. downloading a large