Hi James,
This page has all the information you are looking for.
https://kafka.apache.org/contributing
On Thu, Apr 20, 2017 at 9:32 AM, James Chain wrote:
> Hi
> Because I love this project, I want to take part in it. But I'm brand
> new to open-source projects.
Hi
Because I love this project, I want to take part in it. But I'm brand
new to open-source projects.
How can I get started making contributions? Can you give me some advice?
By the way, I already have a JIRA account called "james.c".
Sincerely,
James.C
The kafka-console-producer.sh defaults to acks=1, so just be careful with
using those tools for too much debugging. Your output is helpful though.
https://github.com/apache/kafka/blob/5a2fcdd6d480e9f003cc49a59d5952ba4c515a71/core/src/main/scala/kafka/tools/ConsoleProducer.scala#L185
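If you want the console producer to behave more like a durable production client while debugging, one option is to pass a config file via --producer.config (the file name, topic, and broker address below are illustrative assumptions, not from this thread):

```properties
# producer.properties — illustrative override for the console producer;
# pass it with something like:
#   kafka-console-producer.sh --broker-list localhost:9092 \
#     --topic test --producer.config producer.properties
acks=all
retries=3
```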
-hans
Just to add, I see the below behavior repeat even with the command-line console
producer and consumer that come with Kafka.
Thanks,
Shri
--
Shrikant Patel | 817.367.4302
Enterprise Architecture Team
PDX-NHIN
-----Original Message-----
From: Shrikant
Thanks Jeff, Onur, Jun, Hans. I am learning a lot from your responses.
To briefly summarize my steps: 5-node Kafka and ZK cluster.
1. ZK cluster has all nodes working. Consumer is down.
2. Bring down the majority of ZK nodes.
3. Things are functional, no issues (no duplicate or lost messages).
4. Now first
Oops, I linked to the wrong ticket, this is the one we hit:
https://issues.apache.org/jira/browse/KAFKA-3042
On Wed, Apr 19, 2017 at 1:45 PM, Jeff Widman wrote:
As Onur explained, if ZK is down, Kafka can still work, but won't be able
to react to actual broker failures until ZK is up again. So if a broker is
down in that window, some of the partitions may not be ready for read or
write.
We had a production scenario where ZK had a long GC pause and Kafka
Hi, Shri,
As Onur explained, if ZK is down, Kafka can still work, but won't be able
to react to actual broker failures until ZK is up again. So if a broker is
down in that window, some of the partitions may not be ready for read or
write.
As for the duplicates in the consumer, Hans had a good
The OP was asking about duplicate messages, not lost messages, so I think
we are discussing two different possible scenarios. Whenever someone says
they see duplicate messages, it's always good practice to first double-check
the ack mode, in-flight messages, and retries. Also, it's important to check if
If this is what I think it is, it has nothing to do with acks,
max.in.flight.requests.per.connection, or anything client-side and is
purely about the Kafka cluster.
Here's a simple example involving a single zookeeper instance, 3 brokers, a
KafkaConsumer and KafkaProducer (neither of these
Arun,
send e-mail to users-subscr...@kafka.apache.org
Thanks,
Prahalad
On Wed, Apr 19, 2017 at 8:24 PM, Arunkumar wrote:
>
> Hi There
> I would like to subscribe to this mailing list and know more about Kafka.
> Please add me to the list. Thanks in advance
>
Hi There
I would like to subscribe to this mailing list and know more about Kafka.
Please add me to the list. Thanks in advance.
Thanks
Arunkumar Pichaimuthu, PMP
While we were testing, our producer had the following configuration:
max.in.flight.requests.per.connection=1, acks=all, and retries=3.
The entire producer-side setup is below. The consumer uses manual offset commit;
it commits the offset after it has successfully processed the message.
Producer settings:
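For reference, the settings described above correspond to config entries like these (the broker addresses are placeholders; the property values are taken from the description above):

```properties
# producer side — as described in this thread
bootstrap.servers=broker1:9092,broker2:9092   # placeholder addresses
acks=all
retries=3
max.in.flight.requests.per.connection=1

# consumer side (separate config) — manual offset commit, as described
enable.auto.commit=false
```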
Sorry about that. That was a typo.
The exact configuration is as below:
bootstrap.servers = ,
Thanks,
Ranjith
-----Original Message-----
From: kamaltar...@gmail.com [mailto:kamaltar...@gmail.com] On Behalf Of Kamal C
Sent: Wednesday, April 19, 2017 16:25
To: users@kafka.apache.org
Subject:
> bootstrap.servers = ,
Is your bootstrap.servers configuration correct? You have specified
port `9091`, but are running the GetOffsetShell command on `9094`.
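One quick way to catch this kind of mismatch is to compare the port you are querying against the ports listed in bootstrap.servers. A minimal sketch (the host names and port values here are hypothetical, mirroring the reported 9091 vs 9094 mismatch):

```python
def ports_of(bootstrap_servers: str) -> set:
    """Extract the port from each host:port entry in a bootstrap.servers string."""
    return {int(entry.rsplit(":", 1)[1])
            for entry in bootstrap_servers.split(",") if entry.strip()}

# Hypothetical values illustrating the mismatch discussed above.
bootstrap = "broker1:9091,broker2:9091"
query_port = 9094  # port used for the GetOffsetShell command

if query_port not in ports_of(bootstrap):
    print(f"Port mismatch: querying {query_port} but "
          f"bootstrap.servers uses {sorted(ports_of(bootstrap))}")
```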
On Wed, Apr 19, 2017 at 11:58 AM, Ranjith Anbazhakan <
ranjith.anbazha...@aspiresys.com> wrote:
> Unfortunately, there is no
Just to point out to you all, I also get a similar exception in my Streams
application when the producer is trying to commit something to the changelog topic.
Error sending record to topic test-stream-key-table-changelog
org.apache.kafka.common.errors.TimeoutException: Batch containing 2 record(s)
expired due
@Robert Quinlivan: the producer is just the kafka-console-producer
shell that comes in the kafka/bin directory (kafka/bin/windows in my
case). Nothing special.
I'll try messing with acks, because this problem is somewhat incidental
to what I'm trying to do, which is to see how big the log directory
Unfortunately, there is no specific information (error/exception) in the logs
when the buffered queue of records goes missing, i.e., when the stopped broker
(say broker 2) is started, followed by stopping the currently running broker
that received all producer-sent records in its buffer (say broker 1).