Just an update: as I was reading about Kafka Streams, I found that this
functionality is supported by default in the Kafka Streams library.
The following links are really helpful:
http://docs.confluent.io/3.0.0/streams/developer-guide.html#partition-grouper
https://github.com/apache/kafka/blob/0.10.0/streams/src/main
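For context, the partition grouper referenced above is pluggable. A rough sketch of a
custom grouper, assuming the 0.10.0 PartitionGrouper interface and the partition.grouper
config key (the class name and grouping policy here are made up for illustration):

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;
    import org.apache.kafka.common.Cluster;
    import org.apache.kafka.common.PartitionInfo;
    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.streams.processor.PartitionGrouper;
    import org.apache.kafka.streams.processor.TaskId;

    // Sketch: assign partition N of every topic in a topic group to the same
    // task, so records that share a key (and hence a partition number) are
    // processed together.
    public class SamePartitionGrouper implements PartitionGrouper {
        @Override
        public Map<TaskId, Set<TopicPartition>> partitionGroups(
                Map<Integer, Set<String>> topicGroups, Cluster metadata) {
            Map<TaskId, Set<TopicPartition>> groups = new HashMap<>();
            for (Map.Entry<Integer, Set<String>> group : topicGroups.entrySet()) {
                for (String topic : group.getValue()) {
                    for (PartitionInfo info : metadata.partitionsForTopic(topic)) {
                        TaskId task = new TaskId(group.getKey(), info.partition());
                        groups.computeIfAbsent(task, t -> new HashSet<>())
                              .add(new TopicPartition(topic, info.partition()));
                    }
                }
            }
            return groups;
        }
    }

It would then be registered with something like
props.put(StreamsConfig.PARTITION_GROUPER_CLASS_CONFIG, SamePartitionGrouper.class);
(again, assuming that config key).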
Hi David,
Thank you for your comments. My concern about that idea is that with only
one topic, it will slow a lot of things down. I am assuming there are at
least 6~7 physical consumers, so I can safely assume we will have more topics.
(Separate topics by operation, perhaps?)
Also, according to your appro
I think it'd be possible to avoid special-casing replacements, but it might
be a bad idea network-traffic-wise, especially for rolling upgrades.
My experience running Kafka on AWS is that rebalancing with multi-day
retention periods can take a really long time, and can torch the cluster if
you reb
Hello Gaspar,
In your case, a single topic can contain messages in different formats, and my
guess is that they usually have different semantics (e.g. one format for
data records, and another for control messages / error logs / etc.).
In this case, I'd suggest similar solutions to the ones you mentioned, t
I’ve been investigating some possible network performance issues we’re having
with our Kafka brokers, and noticed that traffic sent between brokers tends to
show frequent bursts of very small packets:
16:09:52.299863 IP stream02.chartbeat.net.9092 > stream03.chartbeat.net.39399:
Flags [P.], seq
Hi!
Regarding the configurations:
you are using the retention.ms property, which applies to a topic, not to the
broker via server.properties. If you'd like to set it at the broker level,
you have to use log.retention.ms instead.
Kafka separates broker-level configuration from topic-level configuration. Please
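For example, a broker-wide default of one hour would look roughly like this in
server.properties, while retention.ms would instead be set per topic (values here
are illustrative):

    # server.properties -- broker-level default (illustrative values)
    log.retention.ms=3600000                 # delete log data older than 1 hour
    log.retention.check.interval.ms=300000   # how often retention is checked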
Hello,
I am a newbie to Kafka and am trying to understand the difference between the
following two Kafka consumer APIs.
1.
KafkaConsumer polling
http://kafka.apache.org/090/javadoc/index.html?org/apache/kafka/clients/consumer/KafkaConsumer.html
>>> consumer.poll(100);
2.
ConsumerIterator and KafkaSt
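For reference, the first (new 0.9+ consumer) style is a simple poll loop; a minimal
sketch along the lines of the linked javadoc, with made-up topic and group names:

    import java.util.Arrays;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("group.id", "demo-group");
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

    KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
    consumer.subscribe(Arrays.asList("my-topic"));
    while (true) {
        // poll() drives group coordination and fetching; 100 ms is the maximum
        // time to block waiting for records
        ConsumerRecords<String, String> records = consumer.poll(100);
        for (ConsumerRecord<String, String> record : records)
            System.out.printf("offset=%d key=%s value=%s%n",
                    record.offset(), record.key(), record.value());
    }

(The second, older high-level consumer instead hands each thread a blocking
ConsumerIterator obtained from a KafkaStream; the new consumer replaces that model
with a single poll loop that you drive yourself.)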
Hello Yardena,
You may want to take a look at the manual partition assignment section
mentioned here:
http://www.confluent.io/blog/tutorial-getting-started-with-the-new-apache-kafka-0.9-consumer-client
However, I have not tried using this for multiple topics, but looking at the
API, it should
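For what it's worth, a minimal sketch of manual assignment across several topics with
the 0.9 consumer (topic names are made up): pinning the same partition number of each
topic to one consumer instance keeps identically keyed messages on the same process.

    import java.util.Arrays;
    import org.apache.kafka.common.TopicPartition;

    // assign() bypasses group rebalancing entirely; this instance owns
    // partition 0 of both topics and nothing else
    consumer.assign(Arrays.asList(
            new TopicPartition("topic-a", 0),
            new TopicPartition("topic-b", 0)));

    ConsumerRecords<String, String> records = consumer.poll(100);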
Hi
I am new to Kafka and need to pick up admin responsibilities for our
brokers.
I am using kafka_2.11-0.10.0.0. I want to set up my broker so that topics
that are created automatically have an SLA of 1 hr, i.e. I want the data to
be removed after 1 hr.
I added the following to my server.propertie
Would anyone with a good understanding of serialization be available to
enhance the documentation of the Kafka Streams examples? I mean specifically
PageViewTypedDemo and PageViewUntypedDemo in the package
org.apache.kafka.streams.examples.pageview.
I'd be happy to run them with Confluent Platform 3 produc
Hi,
I think the recommended approach to this would be to have a single topic and
partition it by userId. This will give you locality and ordering per user. If you
think about it, this gives you a better ordering guarantee than if you had
one topic per user. It's also a lot more efficient. If y
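Concretely, partitioning by userId just means passing the user id as the record key,
so the default partitioner hashes it to a consistent partition (topic and variable
names here are illustrative):

    // All events for a given user hash to the same partition, so one consumer
    // in the group sees that user's events in order.
    producer.send(new ProducerRecord<String, String>("user-events", userId, payload));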
Depending on where your data is coming from, Kafka Connect may well be a good
solution for you.
Ian.
---
Ian Wrigley
Director, Education Services
Confluent, Inc
> On Jul 5, 2016, at 10:18 AM, Nomar Morado wrote:
>
> Hi
>
> I am trying to load 15 million rows/records/messages to Kafka and look
Hi
I am trying to load 15 million rows/records/messages into Kafka and am looking
for the best way of accomplishing this.
I can go through the client API, but I was wondering if there's a more efficient
way of doing this.
Thanks.
Nomar
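One common approach, aside from Kafka Connect, is simply to tune the Java producer for
throughput; a sketch with illustrative settings (the exact values depend on message size,
broker capacity, and durability requirements):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;

    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("acks", "1");                  // trade a little durability for speed
    props.put("linger.ms", "50");            // wait briefly so sends batch up
    props.put("batch.size", "262144");       // larger batches for bulk loading
    props.put("compression.type", "snappy"); // compress whole batches on the wire
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

    KafkaProducer<String, String> producer = new KafkaProducer<>(props);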
I think you mean this Kafka users mailing list. Anyone can subscribe to
it. You would need to send an email to users-subscr...@kafka.apache.org.
On Sat, Jul 2, 2016 at 1:59 AM, tong...@csbucn.com
wrote:
> Hello,
>
> How can I register in Kafka forum?
>
>
>
> Tong Shushan (童树山)
> Zhongjia Renhe Technology (Beijing) Co., Ltd.
> East Third Ring North, Chaoyang District, Beijing
Thanks, that's definitely my problem.
On Tue, Jul 5, 2016 at 5:35 PM, Ismael Juma wrote:
> Sorry, I meant (the edit comment button is too near the permanent link
> button):
>
>
> https://issues.apache.org/jira/browse/KAFKA-3358?focusedCommentId=15239013&page=com.atlassian.jira.plugin.system.issuet
Sorry, I meant (the edit comment button is too near the permanent link
button):
https://issues.apache.org/jira/browse/KAFKA-3358?focusedCommentId=15239013&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15239013
Ismael
On Tue, Jul 5, 2016 at 12:20 PM, Igor Kuzmenko
Good day Kafka users,
I was looking over the current Kafka docs:
https://kafka.apache.org/documentation.html#log
Specifically, this diagram
- https://kafka.apache.org/images/kafka_log.png
Both segment files *topic/82796232652.kafka* and *topic/34477849968.kafka* have
the same messages in them. Fro
Hi,
We have several topics, same number of partitions for each, same key used
for all topics.
We also have several processes consuming the topics (one consumer group).
What we would like to happen is that messages with the same key end up
being consumed by the same process, regardless of the topic.
C
Hi there!
I'm a new grad engineer and am pretty new to the Kafka world.
I'm trying to replace RabbitMQ with Apache Kafka, and while planning I
bumped into several conceptual planning problems.
First, we are using RabbitMQ with a per-user queue policy, meaning each user
uses one queue. This suits our need
Thanks for the reply. Can you provide the issue ID? The link above doesn't open.
On Tue, Jul 5, 2016 at 2:03 PM, Ismael Juma wrote:
> This looks like the following which was fixed in 0.10.0.0:
>
>
> https://issues.apache.org/jira/secure/EditComment!default.jspa?id=12948347&commentId=15239013
>
> Ismael
>
>
This looks like the following which was fixed in 0.10.0.0:
https://issues.apache.org/jira/secure/EditComment!default.jspa?id=12948347&commentId=15239013
Ismael
On Tue, Jul 5, 2016 at 11:51 AM, Igor Kuzmenko wrote:
> Hello I'm using kafka 0.9.0 and sending messages to kafka topic via
> KafkaPro
Hello, I'm using Kafka 0.9.0 and sending messages to a Kafka topic via
KafkaProducer.
My application constantly waits for a new file in a directory, reads it, then
sends every line of the file as a message to Kafka.
The KafkaProducer is created at application start, and in the logs I can see that
every 5 min it re
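For context, the send path described above boils down to something like this sketch
(the path variable and topic name are placeholders):

    import java.io.BufferedReader;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import org.apache.kafka.clients.producer.ProducerRecord;

    // Read a newly arrived file and publish each line as one message.
    try (BufferedReader reader = Files.newBufferedReader(path, StandardCharsets.UTF_8)) {
        String line;
        while ((line = reader.readLine()) != null) {
            producer.send(new ProducerRecord<String, String>("my-topic", line));
        }
    }
    producer.flush();   // make sure everything for this file is actually sent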
Hi,
Comment below.
On Tue, Jul 5, 2016 at 3:39 AM, tong...@csbucn.com
wrote:
> Hi,
> I have 2 Kafka nodes and 1 zookeeper node.
> When I use kill -9 to shut down kafk-node1, I got the error message from
> the producer when sending messages:
> org.apache.kafka.common.errors.TimeoutException: F