Re: Re: kafka loss of data

2019-05-04 Thread 15332318...@189.cn


I want to contribute to Apache Kafka. Would you please give me
contributor permission?
My JIRA ID is wenbo.sun.
From: Erick Lee
Date: 2019-05-05 13:00
To: users
Subject: Re: Re: kafka loss of data
For typical topic creation scenarios:
- Replication factor = 3
- min.insync.replicas = 2
- acks = all (ensures not only the leader receives the write, but also the
in-sync replicas)
 
Refer to "min.insync.replicas" section of
https://docs.confluent.io/current/installation/configuration/topic-configs.html
 
On Sat, May 4, 2019 at 9:47 PM 15332318...@189.cn <15332318...@189.cn>
wrote:
 
> dear:
>
> Is this a best practice?
>
>
> From: Erick Lee
> Date: 2019-05-05 12:42
> To: users
> Subject: Re: kafka loss of data
> Hi,
>
> To prevent data loss within a Kafka cluster, it is recommended to set the
> replication factor to 3 with min.insync.replicas = 2 and acks = all.
>
> Hopefully this article can help provide further insight:
>
> https://www.confluent.io/blog/3-ways-prepare-disaster-recovery-multi-datacenter-apache-kafka-deployments
>
> On Sat, May 4, 2019 at 9:21 PM mujvf...@gmail.com 
> wrote:
>
> > dear
> >  how can I configure Kafka so that as little data as possible is lost?
> > What is the best practice?
> >
>


Re: Re: kafka loss of data

2019-05-04 Thread Erick Lee
For typical topic creation scenarios:
- Replication factor = 3
- min.insync.replicas = 2
- acks = all (ensures not only the leader receives the write, but also the
in-sync replicas)

Refer to "min.insync.replicas" section of
https://docs.confluent.io/current/installation/configuration/topic-configs.html
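
Concretely, the settings above can be sketched as configuration (the file
layout is an assumption; the property names are from the Kafka documentation):

```properties
# Topic-level override (set at topic creation, or later via kafka-configs.sh)
min.insync.replicas=2

# Producer configuration (producer.properties)
acks=all
```

With a replication factor of 3 and min.insync.replicas=2, an acknowledged
write survives the loss of one broker; if two of the three replicas are
unavailable, producers receive a NotEnoughReplicas error instead of silently
losing data.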

On Sat, May 4, 2019 at 9:47 PM 15332318...@189.cn <15332318...@189.cn>
wrote:

> dear:
>
> Is this a best practice?
>
>
> From: Erick Lee
> Date: 2019-05-05 12:42
> To: users
> Subject: Re: kafka loss of data
> Hi,
>
> To prevent data loss within a Kafka cluster, it is recommended to set the
> replication factor to 3 with min.insync.replicas = 2 and acks = all.
>
> Hopefully this article can help provide further insight:
>
> https://www.confluent.io/blog/3-ways-prepare-disaster-recovery-multi-datacenter-apache-kafka-deployments
>
> On Sat, May 4, 2019 at 9:21 PM mujvf...@gmail.com 
> wrote:
>
> > dear
> >  how can I config to ensure that data is lost as little as possible,
> > what is the best practice?
> >
>


Re: Re: kafka loss of data

2019-05-04 Thread 15332318...@189.cn
dear:
   
Is this a best practice?

 
From: Erick Lee
Date: 2019-05-05 12:42
To: users
Subject: Re: kafka loss of data
Hi,
 
To prevent data loss within a Kafka cluster, it is recommended to set the
replication factor to 3 with min.insync.replicas = 2 and acks = all.
 
Hopefully this article can help provide further insight:
https://www.confluent.io/blog/3-ways-prepare-disaster-recovery-multi-datacenter-apache-kafka-deployments
 
On Sat, May 4, 2019 at 9:21 PM mujvf...@gmail.com 
wrote:
 
> dear
>  how can I configure Kafka so that as little data as possible is lost?
> What is the best practice?
>


Re: kafka loss of data

2019-05-04 Thread Erick Lee
Hi,

To prevent data loss within a Kafka cluster, it is recommended to set the
replication factor to 3 with min.insync.replicas = 2 and acks = all.

Hopefully this article can help provide further insight:
https://www.confluent.io/blog/3-ways-prepare-disaster-recovery-multi-datacenter-apache-kafka-deployments

On Sat, May 4, 2019 at 9:21 PM mujvf...@gmail.com 
wrote:

> dear
>  how can I configure Kafka so that as little data as possible is lost?
> What is the best practice?
>


Re: message availability and handling capacity in kafka 2.x

2019-05-04 Thread 15332318...@189.cn
dear

I do not know how to ask questions. Can I use this mailing list?
 
From: mujvf...@gmail.com
Date: 2019-05-04 15:21
To: users@kafka.apache.org
Subject: message availability and handling capacity in kafka 2.x
dear:
 
After reading the documentation, I am still not particularly clear on how
Kafka ensures that messages are not lost.


kafka loss of data

2019-05-04 Thread mujvf887
dear
 how can I configure Kafka so that as little data as possible is lost?
What is the best practice?


message availability and handling capacity in kafka 2.x

2019-05-04 Thread mujvf887
dear:

After reading the documentation, I am still not particularly clear on how
Kafka ensures that messages are not lost.


Re: Mirror Maker tool is not running

2019-05-04 Thread ASHOK MACHERLA


Sent from Outlook

On 30-Apr-2019 3:37 PM, ASHOK MACHERLA  wrote:

Dear Team



Please help us; our MirrorMaker tool is not running properly.

Please look into the MirrorMaker log file exceptions below:





**

[2019-03-11 18:34:31,906] ERROR Error when sending message to topic audit-logs
with key: null, value: 304134 bytes with error:
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.RecordTooLargeException: The request included a
message larger than the max message size the server will accept.
[2019-03-11 18:34:31,909] FATAL [mirrormaker-thread-15] Mirror maker thread
failure due to  (kafka.tools.MirrorMaker$MirrorMakerThread)
java.lang.IllegalStateException: Cannot send after the producer is closed.
        at org.apache.kafka.clients.producer.internals.RecordAccumulator.append(RecordAccumulator.java:185)
        at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:474)
        at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:436)
        at kafka.tools.MirrorMaker$MirrorMakerProducer.send(MirrorMaker.scala:657)
        at kafka.tools.MirrorMaker$MirrorMakerThread$$anonfun$run$6.apply(MirrorMaker.scala:434)
        at kafka.tools.MirrorMaker$MirrorMakerThread$$anonfun$run$6.apply(MirrorMaker.scala:434)
        at scala.collection.Iterator$class.foreach(Iterator.scala:893)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
        at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
        at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
        at kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:434)
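
A RecordTooLargeException like the one in the log usually means the MirrorMaker
producer's request limit and/or the destination broker's message limit is
smaller than the record (here ~304 KB plus batch overhead). A configuration
sketch, with the 5 MB value as an assumption to tune for your workload:

```properties
# MirrorMaker producer.properties: allow larger produce requests
max.request.size=5242880

# Destination broker server.properties (or per-topic max.message.bytes):
message.max.bytes=5242880
replica.fetch.max.bytes=5242880
```

These limits should agree across producer, broker, and replica fetcher;
raising only one of them just moves the failure to the next hop.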



Sent from Mail for Windows 10





Re: Required guidelines for kafka upgrade

2019-05-04 Thread ASHOK MACHERLA
Dear Senthil

Thanks a lot for your prompt support and help.

Thanks, Senthil

Sent from Outlook


Re: Required guidelines for kafka upgrade

2019-05-04 Thread SenthilKumar K
Can you verify your producer and consumer commands?

Console Producer :
./bin/kafka-console-producer.sh --broker-list :9093 --producer.config
/kafka/client-ssl.properties --topic kafka_220

Console Consumer:
./bin/kafka-console-consumer.sh --bootstrap-server :9093
--consumer.config /kafka/client-ssl.properties --topic kafka_220
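
One common reason a console consumer prints nothing (as reported later in this
thread) is that the messages were produced before the consumer started; by
default it only reads new records. A sketch of the same command with
--from-beginning added to read the topic from the earliest offset:

```shell
./bin/kafka-console-consumer.sh --bootstrap-server :9093 \
  --consumer.config /kafka/client-ssl.properties \
  --topic kafka_220 --from-beginning
```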


cat /kafka/client-ssl.properties

security.protocol=SSL

ssl.truststore.location=

ssl.truststore.password=

ssl.endpoint.identification.algorithm=
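
Since the thread below also asks whether server.properties needs changes: for
SSL to work end to end, the broker must expose an SSL listener backed by a
keystore. A minimal sketch (paths and passwords are placeholders):

```properties
# Broker server.properties (adjust host/port to your setup)
listeners=SSL://:9093
ssl.keystore.location=/path/to/server.keystore.jks
ssl.keystore.password=<keystore-password>
ssl.key.password=<key-password>
ssl.truststore.location=/path/to/server.truststore.jks
ssl.truststore.password=<truststore-password>
```

An "SSL handshake failed" error on the client usually means the client's
truststore does not trust the broker's certificate, or the client is
connecting to a PLAINTEXT port.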




/opt/kafka-new$ sh bin/kafka-console-producer.sh --broker-list
192.168.175.128:9092 --producer.config
producer-ssl.config --topic otp-email

Can you share the contents of producer-ssl.config ?


--Senthil

On Sat, May 4, 2019 at 11:14 AM ASHOK MACHERLA  wrote:

> Dear Senthil
>
> when I try to produce messages into the topic, these errors come
> continuously:
>
> ashok@Node-1:/opt/kafka-new$ sh bin/kafka-console-producer.sh
> --broker-list 192.168.175.128:9092
> --producer.config producer-ssl.config --topic otp-email
>
> >[2019-05-03 22:37:34,382] ERROR [Producer clientId=console-producer]
> Connection to node -1 (/192.168.175.128:9092)
> failed authentication due to: SSL handshake failed
> (org.apache.kafka.clients.NetworkClient)
> [2019-05-03 22:37:34,689] ERROR [Producer clientId=console-producer]
> Connection to node -1 (/192.168.175.128:9092)
> failed authentication due to: SSL handshake failed
> (org.apache.kafka.clients.NetworkClient)
>
> Please help us to fix this.
>
> Are any changes required in server.properties?
>
> Sent from Outlook
> 
> From: ASHOK MACHERLA 
> Sent: 04 May 2019 00:44
> To: users@kafka.apache.org
> Subject: Re: Required guidelines for kafka upgrade
>
> Dear Senthil
>
> Could you please explain clearly what "consumer client properties" means?
>
> Where can I set that parameter?
>
> I checked within the Kafka cluster: I pushed some messages, and when I
> tried to pull from the same topic, no messages were printed.
>
> Please tell me, Senthil: how can we solve this?
>
> Sent from Outlook
>


Re: Re: Required guidelines for kafka upgrade

2019-05-04 Thread 15332318...@189.cn
Dear

 How can I send a question?


 
From: ASHOK MACHERLA
Date: 2019-05-04 13:44
To: users@kafka.apache.org
Subject: Re: Required guidelines for kafka upgrade
Dear Senthil
 
when I try to produce messages into the topic, these errors come continuously:
 
ashok@Node-1:/opt/kafka-new$ sh bin/kafka-console-producer.sh --broker-list 
192.168.175.128:9092 --producer.config 
producer-ssl.config --topic otp-email
 
>[2019-05-03 22:37:34,382] ERROR [Producer clientId=console-producer] 
>Connection to node -1 (/192.168.175.128:9092) 
>failed authentication due to: SSL handshake failed 
>(org.apache.kafka.clients.NetworkClient)
[2019-05-03 22:37:34,689] ERROR [Producer clientId=console-producer] Connection 
to node -1 (/192.168.175.128:9092) failed 
authentication due to: SSL handshake failed 
(org.apache.kafka.clients.NetworkClient)
 
Please help us to fix this.
 
Are any changes required in server.properties?
 
Sent from Outlook

From: ASHOK MACHERLA 
Sent: 04 May 2019 00:44
To: users@kafka.apache.org
Subject: Re: Required guidelines for kafka upgrade
 
Dear Senthil
 
Could you please explain clearly what "consumer client properties" means?
 
Where can I set that parameter?
 
I checked within the Kafka cluster: I pushed some messages, and when I tried to
pull from the same topic, no messages were printed.
 
Please tell me, Senthil: how can we solve this?
 
Sent from Outlook