Is the job perhaps a batch job?
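
(For reference when checking this: in Flink 1.10 the SQL Client's execution mode is configured in conf/sql-client-defaults.yaml. A minimal sketch of the relevant section, assuming the stock 1.10 key names:)

```yaml
execution:
  # "streaming" keeps the job running against unbounded sources;
  # "batch" lets the job finish with status SUCCEEDED once all
  # bounded inputs have been consumed
  type: streaming
```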

On Mon, Jun 29, 2020 at 6:58 PM, Yichao Yang <1048262...@qq.com> wrote:

> Hi
>
>
> From your logs, it looks like your source is a Hive table? Please check whether the job is running in batch mode rather than streaming mode.
>
>
> Best,
> Yichao Yang
>
>
>
>
> ------------------ Original Message ------------------
> From: "MuChen"<9329...@qq.com>;
> Sent: Monday, June 29, 2020 4:53 PM
> To: "user-zh"<user-zh@flink.apache.org>;
>
> Subject: Flink SQL streaming job terminates abnormally
>
>
>
> Hi all,
>
> I started a yarn-session:
>
> bin/yarn-session.sh -jm 1g -tm 4g -s 4 -qu root.flink -nm fsql-cli 2>&1 &
>
> Then I submitted a SQL job through sql-client.
>
> The main logic joins a Kafka table with a Hive dimension table, then writes the aggregated result to MySQL.
>
> While running, the job's status often changes to SUCCEEDED, sometimes after a few hours, sometimes after several dozen hours, as shown here:
> https://s1.ax1x.com/2020/06/29/Nf2dIA.png
>
> The logs contain INFO-level exceptions; the log around 15:34, when the job ended, is as follows:
>
> 2020-06-29 14:53:20,260 INFO  org.apache.flink.api.common.io.LocatableInputSplitAssigner - Assigning remote split to host uhadoop-op3raf-core12
> 2020-06-29 14:53:22,845 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: HiveTableSource(vid, q70) TablePath: dw.video_pic_title_q70, PartitionPruned: false, PartitionNums: null (1/1) (68c24aa59c898cefbb20fbc929ddbafd) switched from RUNNING to FINISHED.
> 2020-06-29 15:34:52,982 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Shutting YarnSessionClusterEntrypoint down with application status SUCCEEDED. Diagnostics null.
> 2020-06-29 15:34:52,984 INFO  org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint - Shutting down rest endpoint.
> 2020-06-29 15:34:53,072 INFO  org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint - Removing cache directory /tmp/flink-web-cdb67193-05ee-4a83-b957-9b7a9d85c23f/flink-web-ui
> 2020-06-29 15:34:53,073 INFO  org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint - http://uhadoop-op3raf-core1:44664 lost leadership
> 2020-06-29 15:34:53,074 INFO  org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint - Shut down complete.
> 2020-06-29 15:34:53,074 INFO  org.apache.flink.yarn.YarnResourceManager - Shut down cluster because application is in SUCCEEDED, diagnostics null.
> 2020-06-29 15:34:53,076 INFO  org.apache.flink.yarn.YarnResourceManager - Unregister application from the YARN Resource Manager with final status SUCCEEDED.
> 2020-06-29 15:34:53,088 INFO  org.apache.hadoop.yarn.client.api.impl.AMRMClientImpl - Waiting for application to be successfully unregistered.
> 2020-06-29 15:34:53,306 INFO  org.apache.flink.runtime.entrypoint.component.DispatcherResourceManagerComponent - Closing components.
> 2020-06-29 15:34:53,308 INFO  org.apache.flink.runtime.dispatcher.runner.SessionDispatcherLeaderProcess - Stopping SessionDispatcherLeaderProcess.
> 2020-06-29 15:34:53,309 INFO  org.apache.flink.runtime.dispatcher.StandaloneDispatcher - Stopping dispatcher akka.tcp://flink@uhadoop-op3raf-core1:38817/user/dispatcher.
> 2020-06-29 15:34:53,310 INFO  org.apache.flink.runtime.dispatcher.StandaloneDispatcher - Stopping all currently running jobs of dispatcher akka.tcp://flink@uhadoop-op3raf-core1:38817/user/dispatcher.
> 2020-06-29 15:34:53,311 INFO  org.apache.flink.runtime.jobmaster.JobMaster - Stopping the JobMaster for job default: insert into rt_app.app_video_cover_abtest_test ...
> 2020-06-29 15:34:53,322 INFO  org.apache.hadoop.yarn.client.api.async.impl.AMRMClientAsyncImpl - Interrupted while waiting for queue
> java.lang.InterruptedException
>         at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
>         at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2048)
>         at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
>         at org.apache.hadoop.yarn.client.api.async.impl.AMRMClientAsyncImpl$CallbackHandlerThread.run(AMRMClientAsyncImpl.java:287)
> 2020-06-29 15:34:53,324 INFO  org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy - Opening proxy : uhadoop-op3raf-core12:23333
>
> PS:
>
> 1. Kafka continuously receives new data.
> 2. The Flink version is 1.10.0.
>
> Why does the job status change to SUCCEEDED?
>
> Thanks, everyone!
>
>
>
>
> The logic is a bit involved; feel free to skip the SQL below:
>
> -- Provides: 5-minute window start time, vid, vid_group, impressions (dv), clicks, plays (vv), effective plays (effectivevv)
> -- Every 5 minutes, write the stats for the last 5 minutes to MySQL
> insert into rt_app.app_video_cover_abtest_test
> select
>   begin_time,
>   vid,
>   vid_group,
>   max(dv),
>   max(click),
>   max(vv),
>   max(effectivevv)
> from (
>   select
>     t1.begin_time begin_time,
>     t1.u_vid vid,
>     t1.u_vid_group vid_group,
>     dv,
>     click,
>     vv,
>     if(effectivevv is null, 0, effectivevv) effectivevv
>   from (
>     -- dv, click, vv
>     select
>       CAST(TUMBLE_START(bjdt, INTERVAL '5' MINUTE) AS STRING) begin_time,
>       cast(u_vid as bigint) u_vid,
>       u_vid_group,
>       sum(if(concat(u_mod,'-',u_ac)='emptylog-video_display' and u_c_module='M011',1,0)) dv,
>       sum(if(concat(u_mod,'-',u_ac)='emptylog-video_click' and u_c_module='M011',1,0)) click,
>       sum(if(concat(u_mod,'-',u_ac)='top-hits' and u_f_module='M011',1,0)) vv
>     FROM rt_ods.ods_applog_vidsplit
>     where u_vid is not null and trim(u_vid)<>''
>       and u_vid_group is not null and trim(u_vid_group) not in ('','-1')
>       and (
>         (concat(u_mod,'-',u_ac) in ('emptylog-video_display','emptylog-video_click') and u_c_module='M011')
>         or (concat(u_mod,'-',u_ac)='top-hits' and u_f_module='M011')
>       )
>     group by
>       TUMBLE(bjdt, INTERVAL '5' MINUTE),
>       cast(u_vid as bigint),
>       u_vid_group
>   ) t1
>   left join (
>     -- effectivevv
>     select
>       begin_time,
>       u_vid,
>       u_vid_group,
>       count(1) effectivevv
>     from (
>       select
>         begin_time,
>         u_vid,
>         u_vid_group,
>         u_diu,
>         u_playid,
>         m_pt,
>         q70
>       from dw.video_pic_title_q70 a
>       join (
>         select
>           CAST(TUMBLE_START(bjdt, INTERVAL '5' MINUTE) AS STRING) begin_time,
>           cast(u_vid as bigint) u_vid,
>           u_vid_group,
>           u_diu,
>           u_playid,
>           max(u_playtime) m_pt
>         FROM rt_ods.ods_applog_vidsplit
>         where u_vid is not null and trim(u_vid)<>''
>           and u_vid_group is not null and trim(u_vid_group) not in ('','-1')
>           and concat(u_mod,'-',u_ac)='emptylog-video_play_speed'
>           and u_f_module='M011'
>           and u_playtime>0
>         group by
>           TUMBLE(bjdt, INTERVAL '5' MINUTE),
>           cast(u_vid as bigint),
>           u_vid_group,
>           u_diu,
>           u_playid
>       ) b
>       on a.vid=b.u_vid
>       group by
>         begin_time,
>         u_vid,
>         u_vid_group,
>         u_diu,
>         u_playid,
>         m_pt,
>         q70
>     ) temp
>     where m_pt>=q70
>     group by
>       begin_time,
>       u_vid,
>       u_vid_group
>   ) t2
>   on t1.begin_time=t2.begin_time
>     and t1.u_vid=t2.u_vid
>     and t1.u_vid_group=t2.u_vid_group
> ) t3
> group by
>   begin_time,
>   vid,
>   vid_group;
