Re: Application Mode deployment failure

2021-07-29 Post: Caizhi Weng
Hi!

Judging from the error log, a method cannot be found. Most likely the client's version does not match the version running on the cluster, or the client's Flink files are corrupted. Both are worth checking.


Application Mode deployment failure

2021-07-29 Post: 周瑞
Hello: starting in Flink Application mode fails. The launch command and error log are as follows.

./flink run-application -t yarn-application \
-yD yarn.application.name="MyFlinkApp"  \
-yD yarn.provided.lib.dirs="hdfs://10.10.98.226:8020/user/myflink/flink-common-deps/libs/yarn-flink-1.13.0/lib/;hdfs://10.10.98.226:8020/user/myflink/flink-common-deps/libs/yarn-flink-1.13.0/plugins/" \
/app/qmatrix/yarn-flink-1.13.0/qmatrix_jars/Sink/Hive/0.0.2/flink-hive-sink-3.0.0-jar-with-dependencies.jar \
--config 
'{"centerName":"rui","checkPointInterval":1000,"checkPointPath":"hdfs://10.10.98.226:8020/tmp/checkpoint66/Sink_rui_ruihiveha_HaHive","configProperties":{"tableKeyName":"{\"rui.fruitest1\":[\"show_integer\"]}"},"jobParallelism":3,"kafkaBootstrapServers":"qmatrix-1:9092,qmatrix-2:9092,qmatrix-3:9092","kafkaConsumerProperties":{"enable.auto.commit":"false","partition.assignment.strategy":"org.apache.kafka.clients.consumer.RoundRobinAssignor","max.poll.records":"500","group.id":"Sink_rui_ruihiveha_HaHive_g","auto.offset.reset":"earliest","session.timeout.ms":"15000","bootstrap.servers":"qmatrix-1:9092,qmatrix-2:9092,qmatrix-3:9092","max.partition.fetch.bytes":"1048576","max.poll.interval.ms":"180","heartbeat.interval.ms":"3000","isolation.level":"read_committed","auto.commit.interval.ms":"1000"},"managerJdbcProperties":{"url":"jdbc:mysql://qmatrix-mysql:3306/qmatrix","phyTimeoutMillis":"3","maxActive":"1","driverClassName":"com.mysql.jdbc.Driver","removeAbandoned":"true","minEvictableIdleTimeMillis":"3","username":"root","minIdle":"0","removeAbandonedTimeout":"3","timeBetweenEvictionRunsMillis":"3","password":"Cljslrl0620$","keepAlive":"false","initialSize":"0"},"passwordKey":"GOODLUCKEVERYONE","pipeName":"ruihiveha","targetConfig":{"rui.ruihiveha.rac80_forever.rac80_ruitest1.1627609625759.router.qmatrix":{"migratePartitionTime":1,"partitionField":"pt_dt","partitionInterval":1,"partitionTime":0,"partitionTimeField":"SHOW_TIMESTAMP","periodMigratePartitionTime":0,"targetSchema":"rui","targetTable":"fruitest1"}},"targetNodeName":"HaHive","targetNodeProperties":{"defaultSchema":"default","hiveConfDir":"/app/qmatrix/kerberos/HaHive","hiveUser":"root","password":"123456","url":"jdbc:hive2://10.10.98.42:1","username":"root"},"targetNodeType":"Hive","topologyId":1137,"topologyName":"Sink_rui_ruihiveha_HaHive","yarnJobManagerMemory":"1024MB","yarnTaskManagerMemory":"2048MB","zookeeperUrl":"qmatrix-1:2181,qmatrix-2:2181,qmatrix-3:2181"}'
 \
--nodeType Hive \
--jobType Sink
2021-07-30 11:20:01,402 INFO  flink-akka.actor.default-dispatcher-3 org.apache.flink.runtime.externalresource.ExternalResourceUtils [] - Enabled external resources: []
2021-07-30 11:20:01,416 INFO  flink-akka.actor.default-dispatcher-3 org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl [] - Upper bound of the thread pool size is 500
2021-07-30 11:20:01,418 INFO  flink-akka.actor.default-dispatcher-3 org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy [] - yarn.client.max-cached-nodemanagers-proxies : 0
2021-07-30 11:20:01,421 INFO  flink-akka.actor.default-dispatcher-3 org.apache.flink.runtime.resourcemanager.active.ActiveResourceManager [] - ResourceManager akka.tcp://flink@qmatrix-web:37982/user/rpc/resourcemanager_0 was granted leadership with fencing token
2021-07-30 11:20:01,818 INFO  flink-akka.actor.default-dispatcher-4 com.alibaba.druid.pool.DruidDataSource [] - {dataSource-1} inited
2021-07-30 11:20:02,648 WARN  flink-akka.actor.default-dispatcher-4 org.apache.flink.client.deployment.application.ApplicationDispatcherBootstrap [] - Application failed unexpectedly:
java.util.concurrent.CompletionException: org.apache.flink.client.deployment.application.ApplicationExecutionException: Could not execute application.
	at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292) ~[?:1.8.0_161]
	at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308) ~[?:1.8.0_161]
	at java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:943) ~[?:1.8.0_161]
	at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:926) ~[?:1.8.0_161]
	at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474) ~[?:1.8.0_161]
	at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977) ~[?:1.8.0_161]
	at org.apache.flink.client.deployment.application.ApplicationDispatcherBootstrap.runApplicationEntryPoint(ApplicationDispatcherBootstrap.java:257) ~[flink-dist_2.11-1.13.0.jar:1.13.0]
	at org.apache.flink.client.deployment.application.ApplicationDispatcherBootstrap.lambda$runApplicationAsync$1(ApplicationDispatcherBootstrap.java:212) ~[flink-dist_2.11-1.13.0.jar:1.13.0]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_161]
	at

Application Mode deployment failure

2021-07-29 Post: 周瑞
Hello:
My

Re: flink 1.13.1: Hive SQL parse error when using the Hive dialect

2021-07-29 Post: Rui Li
Hi,

Could you post the DDL of the tables used in your INSERT statement? The output of SHOW CREATE TABLE for each of them would be enough.
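
For example, a minimal sketch using the table names from your query below (purely illustrative):

SHOW CREATE TABLE cosldatacenter.ods_emp_maindata_iadc_paramvalue;
SHOW CREATE TABLE cosldatacenter.ods_emp_maindata_iadc_paramdef;
SHOW CREATE TABLE cosldatacenter.ods_emp_md_large_equip;
SHOW CREATE TABLE cosldatacenter.dw_riginfoparam;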

On Thu, Jul 29, 2021 at 9:03 PM Asahi Lee <978466...@qq.com.invalid> wrote:

> hi!
> I checked, and it is not the ELSE issue. The SQL below raises the same error, Invalid table alias or column reference 'u', yet there is no name 'u' anywhere in my SQL!
> CREATE CATALOG `tempo_df_hive_default_catalog` WITH(
>   'type' = 'hive',
>   'default-database' = 'default'
> );
> USE CATALOG tempo_df_hive_default_catalog;
> CREATE TABLE IF NOT EXISTS `default`.`tempo_blackhole_table` (
>  f0 INT
> );
> insert into cosldatacenter.dw_riginfoparam
> select
> c.LARGE_EQUIP_ID,
> c.EQUIP_CODE,
> c.EQUIP_NAME,
> c.ENQUEUE_DATE,
> c.SHI_TOTAL_LEN,
> c.SHI_TYPE_WIDTH,
> c.SHI_TYPE_DEPTH,
> case when b.param_cn = '月池尺寸' then a.param_value else null end as Moonpool,
> case when b.param_cn = '最大风速' then a.param_value else null end as MaxWindvelocity,
> case when b.param_cn = '最大波浪高度' then a.param_value else null end as MaxWaveheight,
> case when b.param_cn = '气隙' then a.param_value else null end as Airgap,
> case when b.param_cn = '设计最大作业水深' then a.param_value else null end as MaxOpeWaterdepth,
> case when b.param_cn = '额定钻井深度' then a.param_value else null end as DrilldepthCap,
> case when b.param_cn = '钻井可变载荷' then a.param_value else null end as DrillVL,
> case when b.param_cn = '钻井水' then a.param_value else null end as DrillWater,
> case when b.param_cn = '生活水' then a.param_value else null end as PotableWater
> from cosldatacenter.ods_emp_maindata_iadc_paramvalue a
> inner join cosldatacenter.ods_emp_maindata_iadc_paramdef b on a.param_id = b.param_id
> inner join cosldatacenter.ods_emp_md_large_equip c on a.SUBJECT_ID=c.LARGE_EQUIP_ID;
> INSERT INTO `default`.`tempo_blackhole_table` SELECT 1 ;
>
>
>
>
>
> org.apache.hadoop.hive.ql.parse.SemanticException: Line 2:178 Invalid
> table alias or column reference 'u': (possible column names are:
> a.paramvalue_id, a.platform_id, a.equipment_id, a.param_id, a.param_value,
> a.remark, a.create_time, a.creator, a.update_time, a.update_person,
> a.record_flag, a.subject_id, a.output_unit, a.show_seq, b.param_id,
> b.iadc_id, b.param_code, b.param_en, b.param_cn, b.output_standard,
> b.output_unit, b.param_type, b.param_value, b.remark, b.create_time,
> b.creator, b.update_time, b.update_person, b.record_flag, c.large_equip_id,
> c.equip_name, c.equip_type, c.equip_function, c.equip_board, c.ship_yard,
> c.manufacturer_date, c.enqueue_date, c.dockrepair_date, c.scrap_date,
> c.enqueue_mode, c.work_for_org, c.work_in_org, c.old_age, c.create_time,
> c.creator, c.update_time, c.update_person, c.record_flag, c.data_timestamp,
> c.work_unit_id, c.work_status, c.work_location, c.work_area, c.equip_code,
> c.shi_main_power, c.shi_total_len, c.shi_type_width, c.shi_type_depth,
> c.shi_design_draft, c.shi_total_tonnage, c.shi_load_tonnage, c.remark,
> c.unit_classification1, c.unit_classification2)
>
>
>
>
> ------ Original message ------
> From: "user-zh" <xbjt...@gmail.com>
> Date: Thursday, July 29, 2021, 3:32 PM
> To: "user-zh"
> Subject: Re: flink 1.13.1: Hive SQL parse error when using the Hive dialect
>
>
>
> This looks like a SQL syntax error. Where is the ELSE in this expression?
>
> Best,
> Leonard
>
>
> On Jul 27, 2021, at 20:04, Asahi Lee <978466...@qq.com.INVALID> wrote:
>
> CASE
>   WHEN mipd.`param_cn` = '月池尺寸' THEN
>   mipv.`param_value`
>   END AS `Moonpool`



-- 
Best regards!
Rui Li


Re: $internal.yarn.log-config-file

2021-07-29 Post: Caizhi Weng
Hi!

The config directory that holds the yarn log config file can in fact be specified through the FLINK_CONF_DIR environment variable. Note, however, that this requires the client's FLINK_CONF_DIR directory to be identical to the FLINK_CONF_DIR directory on the cluster.

comsir <609326...@qq.com.invalid> wrote on Fri, Jul 30, 2021 at 10:21 AM:

> hi all:
>
> For internal options such as $internal.yarn.log-config-file and $internal.yarn.resourcemanager.enable-vcore-matching,
> is there any way to specify their values explicitly?
> If not, why is $internal.yarn.log-config-file designed this way, with no way to customize the log config path?


Re: flink 1.13.1: table name in metrics shows as Unnamed

2021-07-29 Post: Caizhi Weng
Hi!

A sink that shows up as Unnamed is usually a DataStream API sink. Did the data in this job previously pass through the DataStream API?

Asahi Lee <978466...@qq.com.invalid> wrote on Thu, Jul 29, 2021 at 9:25 PM:

> Hi!
> When I run the SQL job below with metrics reporting enabled, the table name in the metrics for my output table shows as Unnamed. Is this a bug?
> The metrics look like this:
> node103.taskmanager.container_1627469681067_0030_01_02.e621b91ec4a34ababeb6b0e2c4d6f22b.Source:
> HiveSource-qc_test_t_student_score - Calc(select=[id,
> CAST(_UTF-16LE'Bob':VARCHAR(2147483647) CHARACTER SET "UTF-16LE") AS name,
> class_id, class_name, score,
> _UTF-16LE'9bdb0e98cc5b4800ae3b56575c442225':VARCHAR(2147483647) CHARACTER
> SET "UTF-16LE" AS rule_id, _UTF-16LE'测试3':VARCHAR(2147483647) CHARACTER
> SET "UTF-16LE" AS task_batch_id], where=[(name =
> _UTF-16LE'Bob':VARCHAR(2147483647) CHARACTER SET "UTF-16LE")]) - Map
> - Sink: Unnamed.0.Shuffle.Netty.Input.Buffers.inputFloatingBuffersUsage
>
>
> node103.taskmanager.container_1627469681067_0030_01_02.e621b91ec4a34ababeb6b0e2c4d6f22b.Source:
> HiveSource-qc_test_t_student_score - Calc(select=[id,
> CAST(_UTF-16LE'Bob':VARCHAR(2147483647) CHARACTER SET "UTF-16LE") AS name,
> class_id, class_name, score,
> _UTF-16LE'9bdb0e98cc5b4800ae3b56575c442225':VARCHAR(2147483647) CHARACTER
> SET "UTF-16LE" AS rule_id, _UTF-16LE'测试3':VARCHAR(2147483647) CHARACTER
> SET "UTF-16LE" AS task_batch_id], where=[(name =
> _UTF-16LE'Bob':VARCHAR(2147483647) CHARACTER SET "UTF-16LE")]) - Map
> - Sink: Unnamed.0.Shuffle.Netty.Input.Buffers.inPoolUsage
>
>
>
> node103.taskmanager.container_1627469681067_0030_01_02.e621b91ec4a34ababeb6b0e2c4d6f22b.Sink:
> Unnamed.0.numRecordsIn
>
>
> The job SQL is:
> CREATE CATALOG `qc_hive_catalog` WITH ( 'type' = 'hive',
> 'default-database' = 'qc_test' );
> USE CATALOG `qc_hive_catalog`;
> CREATE TABLE IF NOT EXISTS QC_RESULT_T_STUDENT_SCORE (
>   id STRING,
>   NAME STRING,
>   class_id STRING,
>   class_name STRING,
>   score INTEGER,
>   rule_id STRING,
>   task_batch_id STRING
> ) WITH ( 'is_generic' = 'false', 'connector' = 'hive' );
> INSERT INTO QC_RESULT_T_STUDENT_SCORE SELECT
> id,
> NAME,
> class_id,
> class_name,
> score,
> cast( '9bdb0e98cc5b4800ae3b56575c442225' AS STRING ) AS rule_id,
> cast( '测试3' AS STRING ) AS task_batch_id
> FROM
> t_student_score
> WHERE
> t_student_score.NAME = 'Bob';


Re: Unsubscribe

2021-07-29 Post: zhao liang
To unsubscribe from the user-zh@flink.apache.org mailing list, send an email with any content to user-zh-unsubscr...@flink.apache.org.


From: 闫健儒
Date: Thursday, July 29, 2021 16:57
To: user-zh@flink.apache.org
Subject: Unsubscribe
Unsubscribe


Re: Unsubscribe

2021-07-29 Post: zhao liang
To unsubscribe from the user-zh@flink.apache.org mailing list, just send an email with any content to user-zh-unsubscr...@flink.apache.org.


From: zhangjunjie1130
Date: Thursday, July 29, 2021 16:38
To: user-zh@flink.apache.org
Subject: Unsubscribe
Unsubscribe


zhangjunj
Email: zhangjunjie1...@163.com


Unsubscribe

2021-07-29 Post: 闫健儒
Unsubscribe

Unsubscribe

2021-07-29 Post: zhangjunjie1130
Unsubscribe


zhangjunj
Email: zhangjunjie1...@163.com

Re: Flink-1.11.1 Application-Mode submission test

2021-07-29 Post: mispower
Hi, was this problem ever resolved? I am now running into the same issue.
On 2020-08-25 15:29:09, "amen...@163.com" wrote:
>hi, everyone
>
>When I uploaded all the jars to HDFS and submitted in application mode with the following command,
>
>./bin/flink run-application -t yarn-application 
>-Dyarn.provided.lib.dirs="hdfs:///user/flink/lib" -c 
>com.yui.flink.demo.Kafka2Mysql hdfs:///user/flink/app_jars/kafka2mysql.jar
>
>the following exception was thrown:
>
> The program finished with the following exception:
>
>org.apache.flink.client.deployment.ClusterDeploymentException: Couldn't deploy 
>Yarn Application Cluster
>at 
> org.apache.flink.yarn.YarnClusterDescriptor.deployApplicationCluster(YarnClusterDescriptor.java:414)
>at 
> org.apache.flink.client.deployment.application.cli.ApplicationClusterDeployer.run(ApplicationClusterDeployer.java:64)
>at 
> org.apache.flink.client.cli.CliFrontend.runApplication(CliFrontend.java:197)
>at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:919)
>at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:992)
>at java.security.AccessController.doPrivileged(Native Method)
>at javax.security.auth.Subject.doAs(Subject.java:422)
>at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
>at 
> org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:992)
>Caused by: 
>org.apache.flink.yarn.YarnClusterDescriptor$YarnDeploymentException: The YARN 
>application unexpectedly switched to state FAILED during deployment. 
>Diagnostics from YARN: Application application_1598223665550_0009 failed 1 
>times (global limit =2; local limit is =1) due to AM Container for 
>appattempt_1598223665550_0009_01 exited with  exitCode: -1
>Failing this attempt.Diagnostics: [2020-08-25 15:12:48.975]Destination must be 
>relative
>For more detailed output, check the application tracking page: 
>http://ck233:8088/cluster/app/application_1598223665550_0009 Then click on 
>links to logs of each attempt.
>. Failing the application.
>If log aggregation is enabled on your cluster, use this command to further 
>investigate the issue:
>yarn logs -applicationId application_1598223665550_0009
>at 
> org.apache.flink.yarn.YarnClusterDescriptor.startAppMaster(YarnClusterDescriptor.java:1021)
>at 
> org.apache.flink.yarn.YarnClusterDescriptor.deployInternal(YarnClusterDescriptor.java:524)
>at 
> org.apache.flink.yarn.YarnClusterDescriptor.deployApplicationCluster(YarnClusterDescriptor.java:407)
>... 9 more
>
>There are no other errors. Submitting with run -m yarn-cluster works fine.
>
>best,
>amenhub


Re: flink 1.13.1: Hive SQL parse error when using the Hive dialect

2021-07-29 Post: Leonard Xu
This looks like a SQL syntax error. Where is the ELSE in this expression?

Best,
Leonard


> On Jul 27, 2021, at 20:04, Asahi Lee <978466...@qq.com.INVALID> wrote:
> 
> CASE
>   WHEN mipd.`param_cn` = '月池尺寸' THEN
>   mipv.`param_value`
>   END AS `Moonpool`
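
For reference, a minimal sketch of the same expression with an explicit ELSE branch (identifiers taken from the quoted snippet above; note that in standard SQL the ELSE branch is optional and defaults to NULL, so this is a hypothesis to test rather than a definite fix):

CASE
  WHEN mipd.`param_cn` = '月池尺寸' THEN mipv.`param_value`
  ELSE NULL
END AS `Moonpool`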