Hello Avi,
Yes, this is possible: although we check for null keys when doing reduce
/ aggregations:
https://github.com/apache/kafka/blob/0.10.0/streams/src/main/java/org/apache/kafka/streams/kstream/internals/KStreamReduce.java#L67
we do not check whether there are any values returned from the
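For anyone hitting this with reduceByKey: a minimal guard is to filter out null keys (and values) before the aggregation, so records that KStreamReduce would reject never reach it. A sketch against the 0.10.0 Streams API; topic, store, and serde choices below are placeholders, not from this thread:

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;
import org.apache.kafka.streams.kstream.KTable;

public class NullKeyGuard {
    public static void main(String[] args) {
        KStreamBuilder builder = new KStreamBuilder();
        KStream<String, Long> input =
            builder.stream(Serdes.String(), Serdes.Long(), "input-topic");

        // Drop null keys/values up front so the reduce never sees them.
        KTable<String, Long> summed = input
            .filter((key, value) -> key != null && value != null)
            .reduceByKey((agg, v) -> agg + v,
                         Serdes.String(), Serdes.Long(), "sum-store");

        summed.to(Serdes.String(), Serdes.Long(), "output-topic");
        // ... then build KafkaStreams with the usual StreamsConfig and start it.
    }
}
```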
When we were using the old consumer, we could use the ZooKeeper client tool to
manually set the offset for a consumer group.
For the new consumer, where offsets are stored in the broker, is there a tool
to do the same thing?
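One way to do this with the new consumer is from the Java client itself: join with the target group.id, assign the partition, and commit the desired offset explicitly. A sketch assuming a 0.9+ client; broker address, group, topic, and offset value are placeholders, and the group should have no active members when you run it:

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class SetGroupOffset {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumed broker address
        props.put("group.id", "my-group");                  // group whose offset you want to move
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("my-topic", 0);
            consumer.assign(Collections.singletonList(tp));
            // Commit offset 42 for this group/partition; group members will
            // resume from here next time they fetch committed offsets.
            consumer.commitSync(
                Collections.singletonMap(tp, new OffsetAndMetadata(42L)));
        }
    }
}
```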
Thanks Alex and Todd, I really appreciate the insights.
Based on what you've shared: spending more time up-front to homogenize the
nodes reduces some cognitive load for day-to-day support. Keeping things
simple generally wins me over.
Thanks again.
-Kevin
On Thu, Jun 9, 2016 at 10:37 PM, Todd
> On Jun 10, 2016, at 14:24, Avi Flax wrote:
>
> Hi, I’m using Kafka Streams (0.10.0) with JRuby, most of my scripts/nodes are
> working well at this point, except for one which is using reduceByKey.
Whoops, I should probably share my code as well!
Here’s the
Hi, I’m using Kafka Streams (0.10.0) with JRuby, most of my scripts/nodes are
working well at this point, except for one which is using reduceByKey.
This is the first time I’m trying to use the local state store so it’s possible
there’s something misconfigured, I’m not sure. My config is pretty
Hi,
Is it possible to have Kafka brokers behind ELB and producers and consumers
talk only to ELB ?
If not, should we directly expose all brokers to all producers/consumers ?
Please advise.
Regards,
Ram
Anyone?
> On Jun 8, 2016, at 2:31 PM, Nomar Morado wrote:
>
> Is there any way in the current API
> Date: Fri, 10 Jun 2016 16:54:47 +0530
> Subject: Re: error: ... protocols are incompatible with those of existing
> members ??
> From: bkap...@memelet.com
> To: users@kafka.apache.org
>
> I delete the group using kafka-consumer-groups.sh --delete and still I get
> the error.
Hi,
I have a strange problem. I have a Kafka 0.9 cluster of 3 nodes. I removed all
topics from ZooKeeper, but when I query topics using the Java high-level
consumer it returns all three removed Kafka topics. How is this
possible when I removed them manually, by going into the ZooKeeper shell and
removing
Barry - I believe the error refers to the consumer group "protocol" that is
used to decide which partitions get assigned to which consumers. The way it
works is that each consumer says it wants to join group X and that it can
support protocols (1, 2, 3...). The broker looks at all consumers in group
X
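One common trigger for this error is members of the same group configured with different assignors, since the assignor name is the "protocol" each member advertises. A sketch of the relevant consumer setting (the assignor class name is the one shipped with the Java client; everything else about the consumer is omitted):

```java
import java.util.Properties;

public class AssignorConfig {
    public static Properties consumerProps() {
        Properties props = new Properties();
        // Every live member of a group must share at least one assignor.
        // Changing this while old members are still in the group triggers
        // "protocols are incompatible with those of existing members".
        props.put("partition.assignment.strategy",
                  "org.apache.kafka.clients.consumer.RoundRobinAssignor");
        return props;
    }
}
```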
Kafka itself handles distribution among brokers and which broker consumers
and producers connect to. There's no need for an ELB, and you have to
directly expose all brokers to producers and consumers.
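If the brokers must be reachable from outside the network, the usual pattern is to give each broker its own advertised address rather than front them with a load balancer. A server.properties sketch; the hostnames are placeholders:

```
# Each broker advertises its own resolvable address; clients bootstrap off
# any one broker and then connect to all of them directly.
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://broker1.example.com:9092
```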
On Friday, 10 June 2016, Ram Chander wrote:
> Hi,
>
>
> I am trying to
Hi,
I am trying to setup Kafka cluster in AWS.
Is it possible to have Kafka brokers behind ELB and producers and consumers
talk only to ELB ?
If not, should we directly expose all brokers to all producers/consumers ?
Please advise.
Regards,
Ram
I delete the group using kafka-consumer-groups.sh --delete and still I get
the error.
I didn't really expect this to help, but I tried deleting /all/
topics and recreating them anyway. My Connect app still won't run due
to this error.
Does this error even have anything to do with persisted state, or is the
broker complaining about live client connections?
Tom,
Thanks, that's very good to know. What kind of EC2 instances are
you using for your brokers?
-barry
On Fri, Jun 10, 2016 at 4:17 PM, Tom Crayford wrote:
> Barry,
>
> No, because Kafka also relies heavily on the OS page cache, which uses
> memory. You'd
Barry,
No, because Kafka also relies heavily on the OS page cache, which uses
memory. You'd roughly want to allocate enough page cache to hold all the
messages for your consumers for say, 30s.
Kafka also (in our experience on EC2) tends to run out of network far
before it runs out of memory or
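Tom's 30-second rule of thumb reduces to simple arithmetic. A self-contained sketch; the 200 MB/s aggregate produce rate is a made-up number, not from this thread:

```java
// Rough page-cache sizing: enough memory to hold the last N seconds of
// produced data, so consumers running ~N seconds behind still read from RAM.
public class PageCacheSizing {
    static long pageCacheBytes(long bytesPerSecond, int windowSeconds) {
        return bytesPerSecond * windowSeconds;
    }

    public static void main(String[] args) {
        long needed = pageCacheBytes(200L * 1024 * 1024, 30); // 200 MB/s for 30 s
        System.out.println(needed / (1024 * 1024) + " MB");   // prints "6000 MB"
    }
}
```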
I am getting this error:
> Attempt to join group connect-elasticsearch-indexer failed due to: The
> group member's supported protocols are incompatible with those of existing
> members.
This is a single kafka-connect process consuming two topics. The brokers
have never changed, and the version of
Hello!
I'm working with the Kafka 0.9.1 new consumer API.
The consumer is manually assigned to a partition. For this consumer I would
like to see its progress (meaning the lag).
Since I added the group id consumer-tutorial as a property, I assumed that I
could use the command
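With manual assignment the group tooling may not list the group, but the lag can be computed client-side as log-end offset minus committed offset. A sketch against the 0.9-era new-consumer API; broker address, topic, and partition are placeholders:

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class LagCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "consumer-tutorial");
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("my-topic", 0);
            consumer.assign(Collections.singletonList(tp));

            OffsetAndMetadata committed = consumer.committed(tp); // null if never committed
            consumer.seekToEnd(tp);
            long logEnd = consumer.position(tp);                  // log-end offset

            long lag = logEnd - (committed == null ? 0L : committed.offset());
            System.out.println("lag for " + tp + " = " + lag);
        }
    }
}
```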
Hello
Recently we’ve run into a problem when starting our application for the first
time.
At the moment all our topics are auto-created. Now, at the first start there
are no topics, so naturally some consumers try to connect to topics which don’t
exist.
Those consumers now fail quite
If too much heap causes problems, would it make sense to run multiple
brokers on a box with lots of memory? For example, an EC2 D2 instance type
has way, way more RAM than Kafka could ever use - but it has fast local
disks.
Would running a broker per disk make sense in this case?
-barry