It looks like you are using MirrorMaker from 0.9.0.1 while the source
broker is older.
MirrorMaker needs to be the same version as, or older than, the oldest
broker involved in the replication.
Gwen
On Mon, May 16, 2016 at 2:12 PM, Meghana Narasimhan
wrote:
> Hi,
> I came across the following mirrormaker issue today whi
Hello Kafka users, developers and client-developers,
This is the sixth (!) candidate for release of Apache Kafka 0.10.0.0.
This is a major release that includes: (1) a new message format
including timestamps, (2) the client interceptor API, and (3) Kafka Streams.
Since this is a major release, we will give pe
Hi,
I came across the following MirrorMaker issue today, which caused it to
shut down. I don't see any indication of issues in any of the other logs.
Any insight on this error will be much appreciated. Kafka version is
0.9.0.1 (confluent platform 2.0.1).
FATAL [mirrormaker-thread-0] Mirror maker
I was reading a nice summary article by Jun Rao on the implications of the
number of topics/partitions.
http://www.confluent.io/blog/how-to-choose-the-number-of-topicspartitions-in-a-kafka-cluster/
There are many trade-offs to consider, it seems.
Finding the partition for a key: Can 'custom partit
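(Not from the thread, just a sketch.) The Java producer does let you plug in
a custom partitioner via the partitioner.class config. A minimal example; the
hash choice here is purely illustrative (the built-in default uses murmur2 on
the serialized key), and it assumes keys are never null:

import java.util.Arrays;
import java.util.Map;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;

public class KeyHashPartitioner implements Partitioner {
    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionsForTopic(topic).size();
        // Illustrative: hash the serialized key and mod by partition count.
        return (Arrays.hashCode(keyBytes) & 0x7fffffff) % numPartitions;
    }

    @Override
    public void configure(Map<String, ?> configs) { }

    @Override
    public void close() { }
}

// Enabled in the producer config with:
//   partitioner.class=com.example.KeyHashPartitioner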
Hi there,
The Kafka producer and consumer are libraries you run inside your
application. As such, the beans from them do not exist on the brokers.
Thanks
Tom Crayford,
Heroku Kafka
On Mon, May 16, 2016 at 8:29 PM, Russ Lavoie wrote:
> I am using JMX to gather kafka metrics. It states here
> h
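To make Tom's point concrete, a minimal sketch (not from the thread): this
lists the client beans from inside the JVM that runs the producer. The same
query against a broker's MBean server comes back empty, because the beans
are registered in the application's JVM.

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class ListClientBeans {
    public static void main(String[] args) throws Exception {
        // Must run in the same process as the producer/consumer.
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        for (ObjectName name :
                server.queryNames(new ObjectName("kafka.producer:*"), null))
            System.out.println(name);
    }
}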
I am using JMX to gather kafka metrics. It states here
http://docs.confluent.io/1.0/kafka/monitoring.html that they should be
there. But when I run a JMX client and list the beans, kafka.consumer and
kafka.producer do not exist. Is there something special I have to do to
get these metrics?
Thanks
They did not, even after 1.5 days of waiting... I had to drop everything and
start over because the entire Kafka cluster was in an ISR shrink/expand
loop, even with larger hardware and fewer replica threads.
On Mon, May 16, 2016 at 1:05 PM, Alex Loddengaard wrote:
> Hi Russ,
>
> They should eventual
To pile on a little bit, the API is designed to ensure consumer liveness so
that partitions cannot be held indefinitely by a defunct process. Since
heartbeating and message processing are done in the same thread, the
consumer needs to demonstrate "progress" by calling poll() often enough not
to get
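A minimal sketch of the poll loop this implies (configs and the process()
helper are illustrative, not from the thread):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PollLoop {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "my-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("session.timeout.ms", "30000"); // the liveness deadline

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("my-topic"));
        try {
            while (true) {
                // poll() both fetches records and heartbeats, so all the
                // work below must finish within session.timeout.ms or the
                // coordinator declares this process dead and rebalances.
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records)
                    process(record); // keep this fast
            }
        } finally {
            consumer.close();
        }
    }

    static void process(ConsumerRecord<String, String> record) {
        System.out.println(record.value()); // stand-in for real work
    }
}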
Hi Matteo,
There was a bug in 0.9.0.1 such that task.close() could be invoked in both
the Worker thread and the Herder thread. This creates a race condition in
which consumer.close() is invoked from multiple threads at the same time. As
the consumer is designed to be used from a single thread, the concurre
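For reference, the one consumer method that is safe to call from another
thread is wakeup(). A sketch of a cross-thread shutdown that avoids a
concurrent close (consumer construction omitted):

import org.apache.kafka.common.errors.WakeupException;

// From a shutdown hook or another thread:
//     consumer.wakeup();  // the only thread-safe consumer method
try {
    while (true) {
        consumer.poll(100); // throws WakeupException after wakeup()
    }
} catch (WakeupException e) {
    // expected on shutdown
} finally {
    consumer.close(); // close exactly once, in the owning thread
}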
Thanks for providing the links. I'll test it out using offsetStorageReader.
Kanagha
On Mon, May 16, 2016 at 10:09 AM, Christian Posta wrote:
> If you're using KafkaConnect, it does it for you!
>
> basically you set the sourceRecord's "sourcePartition" and "sourceOffset"
> fields (
>
> https://
Hi Russ,
They should eventually catch back up and rejoin the ISR. Did they not?
Alex
On Fri, May 13, 2016 at 6:33 PM, Russ Lavoie wrote:
> Hello,
>
> I moved an entire topic from one set of brokers to another set of brokers.
> The network throughput was so high that they fell behind the leade
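For anyone hitting the same churn: the broker settings usually in play here
are below. These are standard 0.9 config names, but the values are
illustrative, not a recommendation from this thread.

# server.properties
num.replica.fetchers=4            # more fetcher threads per source broker
replica.fetch.max.bytes=10485760  # larger per-partition fetch size
replica.lag.time.max.ms=30000     # more time before a follower leaves the ISR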
If you're using KafkaConnect, it does it for you!
Basically, you set the sourceRecord's "sourcePartition" and "sourceOffset"
fields (
https://github.com/christian-posta/kafka/blob/8db55618d5d5d5de97feab2bf8da4dc45387a76a/connect/api/src/main/java/org/apache/kafka/connect/source/SourceRecord.java#L5
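A minimal sketch of that pattern inside a SourceTask, where the inherited
context field exposes the OffsetStorageReader. The "filename"/"position"
keys and values are illustrative, not required names:

import java.util.Collections;
import java.util.Map;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.source.SourceRecord;

// In poll(): tag each record with where it came from and how far we got.
Map<String, String> sourcePartition = Collections.singletonMap("filename", "input.txt");
Map<String, Long> sourceOffset = Collections.singletonMap("position", 42L);
SourceRecord record = new SourceRecord(
        sourcePartition, sourceOffset,
        "my-topic", Schema.STRING_SCHEMA, "a line of input");

// In start(): recover the last committed position via the framework.
Map<String, Object> last = context.offsetStorageReader().offset(sourcePartition);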
Hi,
Producers don't track offsets in the same way, so there is no producer
offset API.
Thanks
Tom Crayford
Heroku Kafka
On Mon, May 16, 2016 at 5:25 PM, Kanagha wrote:
> Hi,
>
> I am trying to find out the API for committing producer offset for Kafka
>
> I found this example:
>
> https://cwik
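Side note (not from the thread): if the goal is to learn where a produced
message landed, send() already returns that per record via RecordMetadata.
A sketch:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class ProducerOffsetDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        // Blocking on the Future yields the broker-assigned partition/offset.
        RecordMetadata md = producer.send(
                new ProducerRecord<>("my-topic", "key", "value")).get();
        System.out.printf("wrote to %s-%d at offset %d%n",
                md.topic(), md.partition(), md.offset());
        producer.close();
    }
}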
Hi,
I am trying to find the API for committing producer offsets in Kafka.
I found this example:
https://cwiki.apache.org/confluence/display/KAFKA/Committing+and+fetching+consumer+offsets+in+Kafka
which works for committing consumer offsets. Is there a separate API
for committing producer
I agree with Tom but ...
... to reply to Christian's question: I guess that Anas thought about this
kind of solution because of how simple it makes reading data from a specific
device from a backend service's point of view.
Using one topic per device means that on the backend side you know exactly
wh
+1 what Tom said.
Curious though, Anas: what motivated you to try a topic per device? Was
there something regarding management or security that you believed you could
achieve with a topic per device?
On Mon, May 16, 2016 at 4:11 AM, Tom Crayford wrote:
> Hi there,
>
> Generally you don't use a single
Hi there,
Generally you don't use a single topic per device in this use case, but one
topic with some number of partitions and keys based on the device id. Kafka
isn't designed for millions of low-volume topics, but for a few high-volume
ones.
Thanks
Tom Crayford
Heroku Kafka
On Mon, Ma
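A minimal sketch of the keyed approach Tom describes (topic name and
variables are illustrative; producer assumed already constructed):

// One shared topic; the key routes all of a device's events to the same
// partition, so per-device ordering is preserved.
producer.send(new ProducerRecord<String, String>("devices", deviceId, payload));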