[ 
https://issues.apache.org/jira/browse/FLINK-16481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

likang updated FLINK-16481:
---------------------------
    Description: 
   

     The source and sink of the community version of the Flink Kafka connector 
cannot detect Kafka metadata changes during dynamic partition expansion or 
contraction.

1. When FlinkKafkaProducer writes data, its metadata map is not refreshed 
periodically; metadata is only updated the first time a Task sends data.

2. In FlinkKafkaConsumer, the current AbstractPartitionDiscoverer has a bug: 
the data-consuming thread and the discoverer thread use two separate 
KafkaConsumer objects, and a KafkaConsumer only refreshes its metadata when 
poll() is called, so during scale-out the Source also cannot perceive 
metadata changes.
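The staleness described above can be illustrated with a minimal, self-contained Java sketch (no real Kafka involved; FakeCluster is a hypothetical stand-in for the broker metadata a KafkaConsumer would fetch): a partition list cached once never reflects a later expansion, while periodic rediscovery does.

```java
import java.util.ArrayList;
import java.util.List;

// Simulation of the reported behaviour: caching partition metadata once
// (as the producer does on first send) misses a later topic expansion;
// re-querying the metadata (as a discoverer thread should) sees it.
public class StaleMetadataDemo {
    static class FakeCluster {
        private int partitions;
        FakeCluster(int initial) { partitions = initial; }
        void expandTo(int n) { partitions = n; }   // dynamic scale-out
        List<Integer> partitionsFor() {            // metadata query
            List<Integer> out = new ArrayList<>();
            for (int p = 0; p < partitions; p++) out.add(p);
            return out;
        }
    }

    public static void main(String[] args) {
        FakeCluster cluster = new FakeCluster(3);

        // One-shot cache, populated only once:
        List<Integer> cached = cluster.partitionsFor();

        cluster.expandTo(5);                       // topic is expanded

        // Periodic rediscovery queries the metadata again:
        List<Integer> rediscovered = cluster.partitionsFor();

        System.out.println(cached.size());         // still 3: stale view
        System.out.println(rediscovered.size());   // 5: sees new partitions
    }
}
```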

  was:
      The source and sink of the community version of the Flink Kafka 
connector cannot detect Kafka metadata changes during dynamic expansion or 
contraction.

     1. When flink-kafka-producer writes data, its metadata map is not 
refreshed periodically; metadata is only updated the first time a Task sends 
data.

   
2. In Flink-kafka-consumer, the current AbstractPartitionDiscoverer has a 
bug: the data-consuming thread and the discoverer thread use two separate 
Kafka-consumer objects, and a Kafka-consumer's metadata update must be 
triggered by poll(), so during the current scale-out the Source also cannot 
perceive metadata changes.


> Improved FlinkkafkaConnector support for dynamically increasing capacity
> ------------------------------------------------------------------------
>
>                 Key: FLINK-16481
>                 URL: https://issues.apache.org/jira/browse/FLINK-16481
>             Project: Flink
>          Issue Type: Improvement
>          Components: API / Core
>            Reporter: likang
>            Priority: Major
>         Attachments: Flink-Kafka-Connetor的改进.docx
>
>
>    
>      The source and sink of the community version of the Flink Kafka 
> connector cannot detect Kafka metadata changes during dynamic partition 
> expansion or contraction.
> 1. When FlinkKafkaProducer writes data, its metadata map is not refreshed 
> periodically; metadata is only updated the first time a Task sends data.
> 2. In FlinkKafkaConsumer, the current AbstractPartitionDiscoverer has a 
> bug: the data-consuming thread and the discoverer thread use two separate 
> KafkaConsumer objects, and a KafkaConsumer only refreshes its metadata 
> when poll() is called, so during scale-out the Source also cannot perceive 
> metadata changes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
