Re: [ANNOUNCE] New Committer: Randall Hauch

2019-02-14 Thread hacker win7
Congrats!

> On Feb 15, 2019, at 10:16, Guozhang Wang  wrote:
> 
> Hello all,
> 
> The PMC of Apache Kafka is happy to announce another new committer joining
> the project today: we have invited Randall Hauch as a project committer and
> he has accepted.
> 
> Randall has been participating in the Kafka community for the past 3 years,
> and is well known as the founder of the Debezium project, a popular project
> for database change-capture streams using Kafka (https://debezium.io). More
> recently he has become the main person keeping Kafka Connect moving
> forward, participating in nearly all KIP discussions and Q&As on the mailing
> list. He has authored 6 KIPs, contributed 50 pull requests, and conducted
> over a hundred reviews around Kafka Connect, and has also been evangelizing
> Kafka Connect at several Kafka Summit venues.
> 
> 
> Thank you very much for your contributions to the Connect community, Randall!
> And looking forward to many more :)
> 
> 
> Guozhang, on behalf of the Apache Kafka PMC



Re: [ANNOUNCE] New Committer: Vahid Hashemian

2019-01-15 Thread hacker win7
Congrats!

> On Jan 16, 2019, at 10:08, Srinivas Reddy  wrote:
> 
> Congratulation Vahid!!!
> 
> 
>> On 16 Jan 2019, at 9:58 AM, Manikumar  wrote:
>> 
>> Congrats, Vahid!
>> 
>> On Wed 16 Jan, 2019, 6:53 AM Ismael Juma wrote:
>>> Congratulations Vahid!
>>> 
>>> On Tue, Jan 15, 2019, 2:45 PM Jason Gustafson wrote:
 Hi All,
 
 The PMC for Apache Kafka has invited Vahid Hashemian as a project committer
 and we are pleased to announce that he has accepted!
 
 Vahid has made numerous contributions to the Kafka community over the past
 few years. He has authored 13 KIPs with core improvements to the consumer
 and the tooling around it. He has also contributed nearly 100 patches
 affecting all parts of the codebase. Additionally, Vahid puts a lot of
 effort into community engagement, helping others on the mail lists and
 sharing his experience at conferences and meetups.
 
 We appreciate the contributions and we are looking forward to more.
 Congrats Vahid!
 
 Jason, on behalf of the Apache Kafka PMC
 
>>> 
> 



How to find client.id according to connection id

2018-12-20 Thread hacker win7
Hi, 

I have a connection id from the broker in this format:
$localHost:$localPort-$remoteHost:$remotePort. How can I find the related
client.id?


Marking the coordinator dead and stopped commit offsets

2018-11-08 Thread hacker win7
Hi,

In Kafka 0.10, if the consumer logs “Marking the coordinator dead ..” and “auto
commit offset failed ..”, then the consumer does not commit offsets any more.
Are there any known issues about this? More details here:

http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-kafka-consumer-stopped-committing-offsets-td20624.html

Re: consumer fetch multiple topic partitions committed offset

2018-11-02 Thread hacker win7
Thanks for your reply. I want to know why the new consumer did not add a
committed(TopicPartitions) overload. It seems we could just modify the source of
committed(TopicPartition, Duration) from
‘coordinator.fetchCommittedOffsets(Collections.single)’ to
‘coordinator.fetchCommittedOffsets(TopicPartitions)’.
Besides, calling committed() multiple times sends multiple requests to the
coordinator and may block on a request in the middle. Why not accumulate the
multiple TopicPartitions and send one fetch-offset request to the coordinator?
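
For reference, here is a minimal sketch of the loop-based workaround Matthias
describes in the reply quoted below. The broker address, group id, topic name and
partition numbers are placeholders; each committed() call is a separate round trip
to the coordinator:

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class CommittedOffsetsExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "my-group");                // placeholder group
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            List<TopicPartition> partitions = Arrays.asList(
                    new TopicPartition("my-topic", 0),
                    new TopicPartition("my-topic", 1));
            Map<TopicPartition, OffsetAndMetadata> committed = new HashMap<>();
            // One committed() call per partition, i.e. one offset-fetch request each.
            for (TopicPartition tp : partitions) {
                committed.put(tp, consumer.committed(tp)); // may be null if nothing was committed
            }
            committed.forEach((tp, offset) -> System.out.println(tp + " -> " + offset));
        }
    }
}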

> On Nov 2, 2018, at 04:20, Matthias J. Sax  wrote:
> 
> You need to call `committed()` multiple times.
> 
> -Matthias
> 
> On 11/1/18 12:28 AM, hacker win7 wrote:
>> Hi,
>> 
>> After reviewing the KafkaConsumer source for the committed() API,
>> I found that the old consumer supports committed(multipleTopicPartitions) to
>> return multiple committed offsets, while in the new consumer there is only
>> committed(singleTopicPartition), which returns only one committed offset.
>> 
>> It seems a little weird to me that the new consumer only supports fetching a
>> single topic partition's committed offset. I searched some KIPs but didn't
>> find the reason for this. Anyway, how can I fetch multiple topic partitions'
>> committed offsets in the new consumer?
>> 
> 



consumer fetch multiple topic partitions committed offset

2018-11-01 Thread hacker win7
Hi,

After reviewing the KafkaConsumer source for the committed() API,
I found that the old consumer supports committed(multipleTopicPartitions) to
return multiple committed offsets, while in the new consumer there is only
committed(singleTopicPartition), which returns only one committed offset.

It seems a little weird to me that the new consumer only supports fetching a
single topic partition's committed offset. I searched some KIPs but didn't
find the reason for this. Anyway, how can I fetch multiple topic partitions'
committed offsets in the new consumer?


Re: New increased partitions could not be rebalanced until stopping all consumers and starting them

2018-10-18 Thread hacker win7
You can add the config props.put("metadata.max.age.ms", 5000); to your
consumers and re-test your cases.

飞翔的加菲猫 <526564...@qq.com> wrote on Thursday, October 18, 2018 at 10:47 AM:

> Sorry for bothering. I don't know whether it is a bug; maybe something is
> wrong in my test, or there is an explanation for it. Could any Kafka master
> help take a look? Thanks a lot.
>
>
> Ruiping Li
>
>
> -- Original Message --
> From: "526564746" <526564...@qq.com>;
> Sent: Friday, October 12, 2018 4:42 PM
> To: "users";
>
> Subject: New increased partitions could not be rebalanced until stopping all
> consumers and starting them
>
>
>
>  Hi Kafka team,
>
>
> I have met a strange thing about Kafka rebalancing. If I increase the partitions
> of a topic which is subscribed by some Java consumers (in the same group), no
> rebalance occurs. Furthermore, if I start a new consumer (or stop one)
> to cause a rebalance, the increased partitions are not assigned until
> I stop all consumers and start them again. Is that normal?
>
>
> Thanks,
> Ruiping Li
>
>
>
> 
> Below is my test:
> 1. Start Kafka, ZK. Create a normal topic (test-topic) with 1 partition
> $ ./bin/kafka-topics.sh --zookeeper 127.0.0.1:2181 --create --topic
> test-topic --partitions 1 --replication-factor 1 --config 
> retention.ms=60480
>
> 2. Start 2 java consumers (C1, C2), subscribe test-topic
> 3. Increase 2 partitions of test-topic
> $ ./bin/kafka-topics.sh --zookeeper 127.0.0.1:2181 --alter --topic
> test-topic --partitions 3
> WARNING: If partitions are increased for a topic that has a key, the
> partition logic or ordering of the messages will be affected
> Adding partitions succeeded!
>
> Increasing succeeded:
> $ ./bin/kafka-topics.sh --zookeeper 127.0.0.1:2181 --describe --topic
> test-topic
> Topic: test-topic  PartitionCount: 3  ReplicationFactor: 1  Configs: retention.ms=60480
>     Topic: test-topic  Partition: 0  Leader: 0  Replicas: 0  Isr: 0
>     Topic: test-topic  Partition: 1  Leader: 0  Replicas: 0  Isr: 0
>     Topic: test-topic  Partition: 2  Leader: 0  Replicas: 0  Isr: 0
>
> No rebalance occurs in C1, C2.
> 4. Start a new consumer C3 subscribed to test-topic. A rebalance occurs, but
> only partition test-topic-0 is involved in the reassignment, not test-topic-1
> and test-topic-2.
> 5. I try to stop C2 and C3, and test-topic-1 and test-topic-2 are still not
> assigned.
> 6. Stop all running consumers, and then start them. test-topic-0,1,2 are all
> assigned normally.
>
>
> Environment
> kafka & java api version: kafka_2.12-2.0.0 (I also tried kafka_2.11-1.0.0
> and kafka_2.10-0.10.2.1, same result)
> zookeeper: 3.4.13
> consumer code:
> // consumer
> import java.util.Collections;
> import java.util.Iterator;
> import java.util.Properties;
> import java.util.concurrent.atomic.AtomicLong;
> import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
> import org.apache.kafka.clients.consumer.ConsumerRecord;
> import org.apache.kafka.clients.consumer.ConsumerRecords;
> import org.apache.kafka.clients.consumer.KafkaConsumer;
> import org.apache.kafka.common.errors.TimeoutException;
> import org.slf4j.Logger;
> import org.slf4j.LoggerFactory;
>
> public class KafkaConsumerThread extends Thread {
>
>     private static final Logger log = LoggerFactory.getLogger(KafkaConsumerThread.class);
>
>     // volatile so a stopConsumer() call from another thread is seen by run()
>     private volatile boolean stop = false;
>     private KafkaConsumer<String, String> consumer;
>     private String topicName;
>     private ConsumerRebalanceListener consumerRebalanceListener;
>     private AtomicLong receivedRecordNumber = new AtomicLong(0);
>
>     // consumer settings
>     public static KafkaConsumer<String, String> createNativeConsumer(String groupName, String kafkaBootstrap) {
>         Properties props = new Properties();
>         props.put("bootstrap.servers", kafkaBootstrap);
>         props.put("group.id", groupName);
>         props.put("auto.offset.reset", "earliest");
>         props.put("enable.auto.commit", true);
>         props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
>         props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
>         return new KafkaConsumer<>(props);
>     }
>
>     public KafkaConsumerThread(String topicName, String groupName,
>             ConsumerRebalanceListener consumerRebalanceListener, String kafkaBootstrap) {
>         this.consumer = createNativeConsumer(groupName, kafkaBootstrap);
>         this.topicName = topicName;
>         this.consumerRebalanceListener = consumerRebalanceListener;
>     }
>
>     @Override
>     public void run() {
>         log.info("Start consumer ..");
>         consumer.subscribe(Collections.singleton(topicName), consumerRebalanceListener);
>         while (!stop) {
>             try {
>                 ConsumerRecords<String, String> records = consumer.poll(100);
>                 receivedRecordNumber.addAndGet(records.count());
>                 Iterator<ConsumerRecord<String, String>> iterator = records.iterator();
>                 while (iterator.hasNext()) {
>                     ConsumerRecord<String, String> record = iterator.next();
>                     log.info("Receive [key:{}][value:{}]", record.key(), record.value());
>                 }
>             } catch (TimeoutException e) {
>                 log.info("no data");
>             }
>         }
>         consumer.close();
>     }
>
>     public void stopConsumer() {
>         stop = true;
>     }
> }

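As an aside, here is a minimal logging ConsumerRebalanceListener sketch (the class
name is made up) that could be passed to the constructor above to observe which
partitions each consumer is actually assigned after every rebalance:

import java.util.Collection;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.common.TopicPartition;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingRebalanceListener implements ConsumerRebalanceListener {

    private static final Logger log = LoggerFactory.getLogger(LoggingRebalanceListener.class);

    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
        log.info("Revoked: {}", partitions);
    }

    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        log.info("Assigned: {}", partitions);
    }
}
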
Re: Kafka_2.12-1.1.0 config

2018-09-21 Thread hacker win7
Add --producer.config your_producer.properties to bin/kafka-console-producer.sh.
If you don't add --producer.config, you are still using the default value of
max.request.size.
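
That is, something like: bin/kafka-console-producer.sh --broker-list localhost:9092
--topic metadata_upload --producer.config /path/to/producer.properties < file.json
(paths are placeholders). As a rough Java equivalent, not from the original thread,
this sketch sets the same client-side options directly; the broker address, topic,
payload and the ~10 MB limit are examples only:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class LargeMessageProducerExample {
    public static void main(String[] args) {
        // Equivalent of a producer.properties file passed via --producer.config.
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("max.request.size", "10485760");        // example: allow ~10 MB requests
        props.put("compression.type", "gzip");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("metadata_upload", "large-json-payload"));
        }
    }
}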

sarath reddy wrote on Friday, September 21, 2018 at 12:27 AM:

> ++pushkar
>
> On Thu 20 Sep, 2018, 16:20 sarath reddy,  wrote:
>
> > Hi Team,
> >
> > We are trying to configure Kafka to produce larger messages,
> >
> > Below are the configs:-
> >
> > Server.properties
> >
> > message.max.bytes=1
> > replica.fetch.max.bytes=10001
> >
> > Producer.properties
> >
> > max.request.size=1
> > compression.type=gzip
> >
> > Consumer.properties
> >
> > fetch.message.max.bytes=1
> >
> > When trying to produce a larger file with the command below
> > bin/kafka-console-producer.sh --broker-list localhost:9092 --topic
> > metadata_upload < /export/home/Userid/metadata/TopUBP-1_metadata.json
> >
> > Getting below error:-
> >
> > >[2018-09-19 11:20:03,307] ERROR Error when sending message to topic
> > metadata_upload with key: null, value: 2170060 bytes with error:
> > (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
> > org.apache.kafka.common.errors.RecordTooLargeException: The message is
> > 2170148 bytes when serialized which is larger than the maximum request
> size
> > you have configured with the max.request.size configuration.
> >
> > Thanks,
> > Sarath Reddy.
> >
> >
> >
>