Not today, although that's something we might want to add support for at
the framework level (publish to a Kafka dead letter topic) and just provide
hooks for sinks so they don't all have to handle that case.
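In the meantime, a sink could implement the pattern itself. A rough sketch of
what that might look like (the DLQ topic name, the embedded producer, and the
process() hook below are all made up for illustration; this is not framework
API):

    import java.util.Collection;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.connect.sink.SinkRecord;
    import org.apache.kafka.connect.sink.SinkTask;

    public abstract class DeadLetterSinkTask extends SinkTask {
        private static final String DLQ_TOPIC = "my-sink-dlq"; // hypothetical topic
        private KafkaProducer<byte[], byte[]> dlqProducer;

        @Override
        public void start(Map<String, String> props) {
            Properties p = new Properties();
            p.put("bootstrap.servers", "localhost:9092"); // assumption
            p.put("key.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
            p.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
            dlqProducer = new KafkaProducer<>(p);
        }

        // Subclasses do the actual write; this hook is hypothetical.
        protected abstract void process(SinkRecord record) throws Exception;

        @Override
        public void put(Collection<SinkRecord> records) {
            for (SinkRecord record : records) {
                try {
                    process(record);
                } catch (Exception e) {
                    // Instead of killing the task, forward the bad record to the
                    // dead letter topic and keep consuming.
                    dlqProducer.send(new ProducerRecord<>(DLQ_TOPIC,
                            String.valueOf(record.value()).getBytes()));
                }
            }
        }

        @Override
        public void stop() {
            dlqProducer.close();
        }
    }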
Today the solution would be to reconfigure your connector/worker so it can
handle the
On Tue, Jun 14, 2016 at 8:08 AM, Tauzell, Dave wrote:
> I have been able to get my C# client to put Avro records to a Kafka topic
> and have the HdfsSink read and save them in files. I am confused about the
> interaction with the registry. The Kafka message contains a schema id and I
> see the conne
There's no API for connectors to shut themselves down because that doesn't
really fit the streaming model that Kafka Connect works with -- it isn't a
batch processing system. If you want to shut down a connector, you'd
normally accomplish this via the REST API.
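For example, assuming a worker listening on the default port 8083 and a
connector named "gzip-sink" (both names made up here), a plain HTTP DELETE
does it:

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class DeleteConnector {
        public static void main(String[] args) throws Exception {
            // DELETE /connectors/{name} removes the connector and stops its tasks.
            URL url = new URL("http://localhost:8083/connectors/gzip-sink");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("DELETE");
            System.out.println("Response code: " + conn.getResponseCode());
            conn.disconnect();
        }
    }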
Technically you *could* accomplish t
Thanks for the answer, Otis.
The producer that I use (Logstash) does not track message sizes.
I already loaded all the metrics from JMX into my monitoring system.
I just need to confirm that "record" is equivalent to an individual log
message.
On Tue, Jun 14, 2016 at 1:27 PM, Otis Gospodnetić <
otis
Liquan,
We're constantly hitting this problem in our prod cluster. Do you have a
JIRA issue that relates to this, and when will this bugfix be backported to
the 0.9.x branch? We're not planning on upgrading to 0.10 for a while,
since the assumption was that the 0.9.x line would be more stable.
Hi,
Do you control the producers? If so, couldn't you measure the message
sizes there?
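If the producer happens to be the Java client, a sketch of what that could
look like (broker address, topic name, and payload are made up):

    import java.nio.charset.StandardCharsets;
    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class SizeTrackingProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // assumption
            props.put("key.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
            KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props);
            byte[] payload = "an individual log message".getBytes(StandardCharsets.UTF_8);
            // Measure the serialized size before sending and feed it to your metrics.
            System.out.println("message size in bytes: " + payload.length);
            producer.send(new ProducerRecord<>("logs", payload)); // topic name assumed
            producer.close();
        }
    }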
Alternatively, you can use something like SPM for Kafka or other Kafka
monitoring tools that expose relevant metrics.
For example, I think you can compute avg msg size based on metrics shown in
these charts:
h
Do you need to partition by userid? For example, does the order need to be
preserved for each user's messages?
-Dave
-Original Message-
From: Francisco Lopes [mailto:chico.lo...@gmail.com]
Sent: Tuesday, June 14, 2016 1:28 PM
To: users@kafka.apache.org
Subject: Evenly process messages f
I have the HdfsSink reading from a topic. If it finds a message that it cannot
deal with, it stops processing. Is there a way to tell the HdfsSink to put
that message somewhere else and continue on? In other queuing systems this is
referred to as "Dead Letter Queue".
-Dave
Hello,
I'm processing events from several users, but one user should not affect
another user's processing throughput.
My initial idea was: one single topic partitioned by the userId. So if I
have 5000 users and 1000 partitions, each partition will receive messages from
5 users. For 100 workers, each w
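Something like this is what I have in mind on the producer side (a sketch;
broker address, topic, and user id are made up):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class UserEventProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // assumption
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            KafkaProducer<String, String> producer = new KafkaProducer<>(props);
            // The default partitioner hashes the key, so all of one user's events
            // land in the same partition and their relative order is preserved.
            producer.send(new ProducerRecord<>("user-events", "user-42", "event-payload"));
            producer.close();
        }
    }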
Thanks Avi!
On Tue, Jun 14, 2016 at 7:41 AM, Avi Flax wrote:
>
> > On Jun 10, 2016, at 18:47, Guozhang Wang wrote:
> >
> > Yes, this is possible
>
> OK, good to know — thanks!
>
> I just checked the code in the Deserializers included with Kafka and I see
> that they check for null values and si
Hello,
Please see http://kafka.apache.org/contact.html
It's self-service. Cheers.
Guozhang
On Mon, Jun 13, 2016 at 10:26 PM, Srikanth Hugar
wrote:
> Hi,
>
>I started working on Apache Kafka and want to be included in users
> group.
> Please include me.
>
> Thank you.
>
> Best Regards,
> S
I dug into it further...
It seems the blockingSendAndReceive API hangs for a long time sending/receiving
a response from a broker that is not affected.
I just checked the send and receive time; it's taking about 30 seconds.
On Tue, Jun 14, 2016 at 11:03 AM, safique ahemad
wrote:
> Guys any response
Thanks Tom!
From: Todd Palino
To: "users@kafka.apache.org"
Sent: Tuesday, 14 June 2016 10:01 PM
Subject: Re: Delete Message From topic
Well, if you have a log compacted topic, you can issue a tombstone message
(the key with a null value) to delete it. Outside of that, what Tom said
a
Well, if you have a log compacted topic, you can issue a tombstone message
(the key with a null value) to delete it. Outside of that, what Tom said
applies.
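A minimal sketch of producing a tombstone with the Java client (broker
address, topic, and key are made up):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class TombstoneExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // assumption
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            KafkaProducer<String, String> producer = new KafkaProducer<>(props);
            // A tombstone is a record with the key to delete and a null value.
            // On a log-compacted topic, compaction eventually drops the key entirely.
            producer.send(new ProducerRecord<>("compacted-topic", "key-to-delete", null));
            producer.close();
        }
    }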
-Todd
On Tue, Jun 14, 2016 at 9:13 PM, Mudit Kumar wrote:
> Thanks Tom!
>
>
>
>
> On 6/14/16, 8:55 PM, "Tom Crayford" wrote:
>
> >Hi Mudit
Hi Christian,
Thanks. I just wanted to delete a few messages so that if any consumer starts
from the beginning, they shouldn't see those messages.
Thanks,
Mudit
On 6/14/16, 9:02 PM, "Christian Posta" wrote:
>Might be worth describing your use case a bit to see if there's another way
>to help you?
>
Thanks Tom!
On 6/14/16, 8:55 PM, "Tom Crayford" wrote:
>Hi Mudit,
>
>Sorry this is not possible. The only deletion Kafka offers is retention or
>whole topic deletion.
>
>Thanks
>
>Tom Crayford
>Heroku Kafka
>
>On Tuesday, 14 June 2016, Mudit Kumar wrote:
>
>> Hey,
>>
>> How can I delete part
Might be worth describing your use case a bit to see if there's another way
to help you?
On Tue, Jun 14, 2016 at 5:29 AM, Mudit Kumar wrote:
> Hey,
>
> How can I delete particular messages from a particular topic? Is that
> possible?
>
> Thanks,
> Mudit
>
>
--
*Christian Posta*
twitter: @christi
Hi Mudit,
Sorry this is not possible. The only deletion Kafka offers is retention or
whole topic deletion.
Thanks
Tom Crayford
Heroku Kafka
On Tuesday, 14 June 2016, Mudit Kumar wrote:
> Hey,
>
> How can I delete particular messages from a particular topic? Is that
> possible?
>
> Thanks,
> Mudi
I have been able to get my C# client to put Avro records to a Kafka topic and
have the HdfsSink read and save them in files. I am confused about the interaction
with the registry. The Kafka message contains a schema id and I see the
connector look that up in the registry. Then it also looks up a s
> On Jun 10, 2016, at 18:47, Guozhang Wang wrote:
>
> Yes, this is possible
OK, good to know — thanks!
I just checked the code in the Deserializers included with Kafka and I see that
they check for null values and simply pass them through; I guess that’s the
correct behavior. I’ve opened a P
Hi everyone,
I would like to know if there is a way to shut down a connector
programmatically?
On my project we have developed a sink-connector to write messages into
GZIP files for testing purposes. We would like to stop the connector after
no message is received for an elapsed time
Thanks,
I believe it stores them inside Kafka itself, in a topic
named __consumer_offsets.
The "new" consumer doesn't use ZooKeeper to store consumer offsets.
On Tue, Jun 14, 2016 at 1:04 AM, thinking wrote:
> Hello everyone, I have use kafka_2.11-0.10.0.0, I write a simple
> consumer code. But I can'
Hi,
I started working on Apache Kafka and want to be included in users group.
Please include me.
Thank you.
Best Regards,
Srikanth.
Hello everyone, I am using kafka_2.11-0.10.0.0 and I wrote a simple consumer.
But I can't find the consumer offset in ZooKeeper (I have checked
/consumers, /config/clients, /config/topics/__consumer_offsets). I want to know
where Kafka stores consumer offsets.
here is my consumer code:
package com.nf
Thanks for the reply. I am using the most recent versions of Kafka and the
streams library; I forked them from GitHub. @Peter, yes, probably this is due
to versions.
The problem was that I had already installed Kafka on OS X with brew, and when
I tried to run, it probably linked to that library instead of the one I
fo
This looks like the error that occurs when you use the 0.10 client with an 0.9
broker. The broker needs to be upgraded first. Jeyhun, what versions are you
running?
(I sincerely hope this error message will be improved next time there are wire
protocol changes!)
-Peter
> On Jun 14, 2016, a
Replication factor should not be a problem; only the leader is used for writes
and reads.
I'd monitor the consumer offsets of all consumers of that topic, to make sure
the group id is really the same.
Another thing could be that the sarama client is simple, not implementing a
high-level consumer feature like locking so that only o
Thanks, I forgot to mention that all my consumers (sarama Go consumers) have
the same group id. And I would expect the "starvation" to happen to 2 of my 3
consumers. And yes, my topic really has a single partition.
Each consumer is getting unique messages.
So somehow all three consumers are shar
Hey,
How can I delete particular messages from a particular topic? Is that possible?
Thanks,
Mudit
Hi Rami,
Each consumer will receive every single message if they belong to different
consumer groups. Messages will only be distributed between consumers of the
same consumer group.
So make sure they are in the same consumer group; beware that in your case this
means 2 of the 3 consumers will be
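With the Java client, for example, the relevant setting is group.id; a sketch,
with broker address and group/topic names made up (sarama has an equivalent
concept):

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class GroupConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // assumption
            // All three consumers must share this exact group.id for a partition
            // to be assigned to only one of them at a time.
            props.put("group.id", "shared-group");            // assumption
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            consumer.subscribe(Collections.singletonList("my-topic")); // assumption
            while (true) { // sketch: loop forever, never closing the consumer
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records)
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
            }
        }
    }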
Never mind, "three" is a String, and config.messageSendMaxRetries is Int.
Have you identified where that error string comes from in the code you use
though (Failed to send message after three times)?
On Tue, Jun 14, 2016 at 6:54 AM, Philippe Derome wrote:
> Have you looked at this (DefaultEventH
I suppose the consumers would also need to all belong to the same consumer
group for your expectation to hold. If the three consumers belong to different
consumer groups, I'd expect each of them to receive all the messages,
regardless of the number of partitions.
So perhaps they are on different
Are you suggesting that something can be done with the DefaultEventHandler?
that it can be overridden or a callback inserted? I hit this error as well but
since it doesn't indicate the source of the error, it is an unhelpful error
message. Suggestions on how to handle would be appreciated.
hi,
1. You should check whether your topic really has only one partition.
2. Do your consumers get the same messages or different messages?
E.g. if all the messages are 1,2,3,4,5,6,7, does consumer1 get 1,3,7, consumer2
get 2,6, and consumer3 get 4,5?
-- Original --
From: "Al-Isawi
Hi,
I have a cluster of 3 brokers and 1 topic which has 1 partition and replication
factor of 3. There are also 3 consumers consuming from that topic.
Now all the docs I have seen say that if the number of consumers is bigger than
the number of partitions (like in my case, 3 consumers, 1 partition),
Have you looked at this (DefaultEventHandler)?
throw new FailedToSendMessageException("Failed to send messages after " +
config.messageSendMaxRetries + " tries.", null)
On Tue, Jun 14, 2016 at 6:40 AM, Snehalata Nagaje <
snehalata.nag...@harbingergroup.com> wrote:
>
>
> Hi All,
>
>
> I am using
Hi All,
I am using Kafka version kafka_2.10-0.9.0.1.
At some random times we are not able to send messages to Kafka.
It gives the error "Failed to send message after three times."
Is it a network error?
Thanks,
Snehalata
There are options to compress on the wire and in the topic.
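On the producer side that looks roughly like this (a sketch; the full size
story also involves message.max.bytes on the broker and the consumer fetch
size, and the broker address here is an assumption):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;

    public class CompressingProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // assumption
            props.put("key.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
            // Compress message batches on the wire; by default the broker stores
            // them compressed as well.
            props.put("compression.type", "gzip");
            // Raise the producer's cap on a single request so ~10 MB messages fit.
            props.put("max.request.size", String.valueOf(11 * 1024 * 1024));
            KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props);
            producer.close();
        }
    }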
On Tue, May 31, 2016 at 8:35 AM, Igor Kravzov
wrote:
> In our system some data can be as big as 10MB.
> Is it OK to send 10 MB message through Kafka? What configuration
> parameters should I check/set?
> It is going to be one topic wit
Hi Jeyhun,
What version of Kafka are you using?
I haven't run this using Eclipse, but could you try building and running from
the command line (instead of from within Eclipse) as described in that
quickstart document? Does that work?
Thanks
Eno
> On 13 Jun 2016, at 20:27, Jeyhun Karimov wro