hi, zhisheng:

This is the TM log; there were no error logs at all before this point.

The code logic is simple:
SingleOutputStreamOperator<ConcurrentLinkedQueue<ProtocolEvent>> sourceStream = 
env.addSource(source)
        .setParallelism(2)
        .uid("DataProcessSource")
        .flatMap(new DataConvertFunction())
        .setParallelism(2)
        .uid("DataProcessDataCovert")
        .keyBy(new KeySelectorFunction())
        .process(new DataCleanFunction())
        .setParallelism(2)
        .uid("DataProcessDataProcess");

AsyncDataStream.orderedWait(
        sourceStream,
        new AsyncDataCleanFunction(),
        EnvUtil.TOOL.getLong(Constant.ASYNC_TOMEOUT),
        TimeUnit.MILLISECONDS,
        EnvUtil.TOOL.getInt(Constant.ASYNC_CAPACITY)
).uid("DataProcessAsync")
        .setParallelism(2)
        .addSink(sink)
        .uid("DataProcessSinkKafka")
        .setParallelism(2);
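
For context, the sink above is a FlinkKafkaProducer (the stack trace below goes through FlinkKafkaProducer and TwoPhaseCommitSinkFunction). A minimal sketch of how such a sink is typically constructed follows; the broker address, topic name, and serialization schema are placeholders for illustration, not the real job configuration:

import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;

// Sketch only: every value below is a placeholder, not the actual configuration.
Properties producerProps = new Properties();
producerProps.setProperty("bootstrap.servers", "broker1:9092");  // placeholder broker list

FlinkKafkaProducer<String> sink = new FlinkKafkaProducer<>(
        "target-topic",            // default topic; must resolve to a valid, non-empty topic name
        new SimpleStringSchema(),  // placeholder schema; the real job serializes its own record type
        producerProps);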

2020-07-09 19:33:37,291 WARN org.apache.kafka.clients.consumer.ConsumerConfig - The configuration 'gps.kafka.sasl' was supplied but isn't a known config.
2020-07-09 19:33:37,291 WARN org.apache.kafka.clients.consumer.ConsumerConfig - The configuration 'java.ext.dirs' was supplied but isn't a known config.
2020-07-09 19:33:37,291 WARN org.apache.kafka.clients.consumer.ConsumerConfig - The configuration 'java.class.version' was supplied but isn't a known config.
2020-07-09 19:33:37,291 INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 2.2.0
2020-07-09 19:33:37,291 INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 05fcfde8f69b0349
2020-07-09 19:33:38,482 INFO com.sinoi.rt.common.protocol.HttpPoolUtil - http pool init,maxTotal:18,maxPerRoute:6
2020-07-09 19:33:38,971 WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] Error while fetching metadata with correlation id 8 : {=INVALID_TOPIC_EXCEPTION}
2020-07-09 19:33:38,974 INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.
2020-07-09 19:33:41,612 INFO org.apache.flink.runtime.taskmanager.Task - async wait operator -> Sink: Unnamed (1/2) (cdbe008dcdb76813f88c4a48b9907d77) switched from RUNNING to FAILED.
org.apache.flink.streaming.connectors.kafka.FlinkKafkaException: Failed to send data to Kafka:
    at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.checkErroneous(FlinkKafkaProducer.java:1225)
    at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.invoke(FlinkKafkaProducer.java:767)
    at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.invoke(FlinkKafkaProducer.java:99)
    at org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction.invoke(TwoPhaseCommitSinkFunction.java:235)
    at org.apache.flink.streaming.api.operators.StreamSink.processElement(StreamSink.java:56)
    at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:641)
    at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:616)
    at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:596)
    at org.apache.flink.streaming.api.operators.TimestampedCollector.collect(TimestampedCollector.java:53)
    at org.apache.flink.streaming.api.operators.async.queue.StreamRecordQueueEntry.emitResult(StreamRecordQueueEntry.java:65)
    at org.apache.flink.streaming.api.operators.async.queue.OrderedStreamElementQueue.emitCompletedElement(OrderedStreamElementQueue.java:71)
    at org.apache.flink.streaming.api.operators.async.AsyncWaitOperator.outputCompletedElement(AsyncWaitOperator.java:279)
    at org.apache.flink.streaming.api.operators.async.AsyncWaitOperator.access$100(AsyncWaitOperator.java:76)
    at org.apache.flink.streaming.api.operators.async.AsyncWaitOperator$ResultHandler.processResults(AsyncWaitOperator.java:351)
    at org.apache.flink.streaming.api.operators.async.AsyncWaitOperator$ResultHandler.lambda$processInMailbox$0(AsyncWaitOperator.java:336)
    at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.run(StreamTaskActionExecutor.java:87)
    at org.apache.flink.streaming.runtime.tasks.mailbox.Mail.run(Mail.java:78)
    at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.processMail(MailboxProcessor.java:255)
    at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:186)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:485)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:469)
    at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:708)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:533)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.common.errors.InvalidTopicException: 
2020-07-09 19:33:41,615 INFO org.apache.flink.runtime.taskmanager.Task - Freeing task resources for async wait operator -> Sink: Unnamed (1/2) (cdbe008dcdb76813f88c4a48b9907d77).
2020-07-09 19:33:41,615 INFO org.apache.flink.runtime.taskmanager.Task - Ensuring all FileSystem streams are closed for task async wait operator -> Sink: Unnamed (1/2)




 
From: zhisheng
Sent: 2020-07-09 21:06
To: user-zh
Subject: Re: Flink 1.10.1 cannot write to Kafka when running on YARN
hi, maqi
 
Do you have the complete log? Were there any other exceptions before this one? If so, please share them!
 
Best,
zhisheng
 
m...@sinoiov.com <m...@sinoiov.com> wrote on Thu, Jul 9, 2020 at 7:57 PM:
 
>
> I'd like to ask for advice:
> The Flink job writes to the test-environment Kafka cluster without any problem when run locally,
>
> but once it is submitted to YARN it simply cannot write, while other jobs running on YARN can write to the same test-environment Kafka cluster.
>
> The exception is as follows:
>
> 2020-07-09 19:17:33,126 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph        - KeyedProcess (1/2) (9449b1e3b758a40fb5e1e60cf84fd844) switched from DEPLOYING to RUNNING.
> 2020-07-09 19:17:33,164 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph        - KeyedProcess (2/2) (bc6eefd911cf44412121939d0afa6a81) switched from DEPLOYING to RUNNING.
> 2020-07-09 19:17:39,049 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph        - async wait operator -> Sink: Unnamed (1/2) (cfc31005099a8ad7e44a94dc617dd45f) switched from RUNNING to FAILED.
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaException: Failed to send data to Kafka:
> at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.checkErroneous(FlinkKafkaProducer.java:1225)
> at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.invoke(FlinkKafkaProducer.java:767)
> at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.invoke(FlinkKafkaProducer.java:99)
> at org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction.invoke(TwoPhaseCommitSinkFunction.java:235)
> at org.apache.flink.streaming.api.operators.StreamSink.processElement(StreamSink.java:56)
> at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:641)
> at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:616)
> at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:596)
> at org.apache.flink.streaming.api.operators.TimestampedCollector.collect(TimestampedCollector.java:53)
> at org.apache.flink.streaming.api.operators.async.queue.StreamRecordQueueEntry.emitResult(StreamRecordQueueEntry.java:65)
> at org.apache.flink.streaming.api.operators.async.queue.OrderedStreamElementQueue.emitCompletedElement(OrderedStreamElementQueue.java:71)
>
>
>
>
>
>
>
