Unsubscribe
"+
"'lookup.cache.ttl' = '1800s',"+
"'sink.buffer-flush.interval' = '60s'"+
")");
I found that with this setup the checkpoint configuration stops taking effect and checkpoints cannot be triggered; the log reports the following error:
job bad9f419433f78d24e703e659b169917 is not in state RUNNING but FINISHED
instead. Aborting checkpoint.
Looking at the Web UI, the Execution is in the FINISHED state, and a job in the FINISHED state cannot be checkpointed. Is there another way around this?
Thanks in advance for any guidance!
刘海
liuha...@163.com
(signature customized by NetEase Mail Master)
Best regards!
configuration.setString("table.exec.mini-batch.size", "5000");
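For reference, mini-batch aggregation in Flink SQL only kicks in when the related options are set together; a minimal sketch in the same style as the line above (values are illustrative, not taken from this thread):

```java
// Mini-batch only takes effect when it is enabled and given a latency bound;
// the size option alone does nothing. Values below are examples.
configuration.setString("table.exec.mini-batch.enabled", "true");
configuration.setString("table.exec.mini-batch.allow-latency", "1s");
configuration.setString("table.exec.mini-batch.size", "5000");
```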
AKXS," +
"CAST(SUM(t.AK_SB_LM) AS STRING) AS AKSBXS," +
"CAST(SUM(t.AK_XS_LM)+SUM(t.AK_SB_LM) AS STRING) AS AKZXS,"+
"CAST(SUM(t.DCD_XS_LM) AS STRING) AS DCDXS," +
"CAST(SUM(t.DCD_SB_LM) AS STRING) AS DCDSBXS," +
"CAST(SUM(t.DCD_XS_LM)+SUM(t.DCD_SB_LM) AS STRING) AS DCDZXS "+
"FROM " +
"(SELECT " +
"C.entity_code," +
"COUNT(*) AS NUM_2_LM," +
"COUNT( CASE WHEN C.data_source = '20' THEN 1 END ) AS QCZJ_XS_LM," +
"COUNT( CASE WHEN C.data_source = '100' THEN 1 END ) AS QCZJ_SB_LM," +
"COUNT( CASE WHEN C.data_source = '40' THEN 1 END ) AS YC_XS_LM," +
"COUNT( CASE WHEN C.data_source = '30' THEN 1 END ) AS YC_SB_LM," +
"COUNT( CASE WHEN C.data_source = '10' THEN 1 END ) AS TPY_XS_LM," +
"COUNT( CASE WHEN C.data_source = '80' THEN 1 END ) AS TPY_SB_LM," +
"COUNT( CASE WHEN C.data_source = '60' THEN 1 END ) AS AK_XS_LM," +
"COUNT( CASE WHEN C.data_source = '50' THEN 1 END ) AS AK_SB_LM," +
"COUNT( CASE WHEN C.data_source = '130' THEN 1 END ) AS DCD_XS_LM," +
"COUNT( CASE WHEN C.data_source = '140' THEN 1 END ) AS DCD_SB_LM " +
"FROM " +
"new_clue_list_cdc AS C " +
"WHERE " +
"((C.clue_level IN ('13101007','13101006')) OR C.clue_level IN ('13101001','13101002','13101003','13101004')) " +
"GROUP BY " +
"C.entity_code,C.customer_no " +
") AS t " +
"GROUP BY t.entity_code ) AS t2 ON t1.dealer_code = t2.entity_code WHERE t1.is_valid=12781001)");
stmtSet.execute();
}
}
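The inner query above relies on the COUNT(CASE WHEN … THEN 1 END) idiom: the CASE expression yields NULL for non-matching rows, and COUNT skips NULLs, so each aggregate counts only rows with one specific data_source code. A plain-Java sketch of that semantics (class name and data are hypothetical, for illustration only):

```java
import java.util.List;

public class ConditionalCount {
    // Mirrors COUNT(CASE WHEN data_source = code THEN 1 END):
    // count only the rows whose data_source equals the given code.
    static long countWhere(List<String> dataSources, String code) {
        return dataSources.stream().filter(code::equals).count();
    }

    public static void main(String[] args) {
        List<String> sources = List.of("20", "100", "20", "40");
        System.out.println(countWhere(sources, "20")); // prints 2
        System.out.println(countWhere(sources, "50")); // prints 0
    }
}
```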
-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
[root@cdh1 flink-1.12.0]#
On Jan 18, 2021 at 10
That is a path on my local server. Do I need to upload this jar to all three nodes?
I put it under the /opt/flink-1.12.0/examples directory.
On Jan 18, 2021 at 10:38, Yangze Guo wrote:
Is this path a local path on your machine? The client needs to be able to locate the jar from this path.
Best,
Yangze Guo
On Mon, Jan 18, 2021 at 10:34 AM 刘海 wrote:
Hi,
I tried it per your suggestion,
changing the submit command to: ./bin/flink run -d -t yarn
On Jan 18, 2021 at 10:12, Yangze Guo wrote:
Hi, please use -D / -tm / -jm; the 'y' prefix is not needed
Best,
Yangze Guo
On Mon, Jan 18, 2021 at 9:19 AM 刘海 wrote:
On Jan 18, 2021 at 09:15, 刘海 wrote:
Hi Dear All,
I have a question for you all. Here is my cluster setup:
1. I am using Flink 1.12;
2. A three-node Hadoop cluster built on CDH 6.3.2, using the YARN cluster that ships with CDH;
3. Flink run mode: Per-Job Cluster on YARN (three nodes, 48 cores and 64 GB of memory each);
4. Below is the flink-conf.yaml configuration of my three nodes; apart from jobmanager.rpc.address, the three Flink nodes
xamples/myProject/bi-wxfx-fqd-1.0.0.jar
After submitting the Flink job with this command, the memory parameters specified in the command are not used; the memory usage observed in the Flink Web UI is the size configured in flink-conf.yaml, and the values given on the CLI do not take effect. Am I using it incorrectly?
Best regards!
刘海
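For what it's worth, the generic dynamic-property form of the Flink 1.12 CLI that the replies describe looks roughly like the sketch below (the memory sizes and the jar path are illustrative only; with -t yarn-per-job, overrides are passed as -Dkey=value rather than with the old y-prefixed flags):

```shell
# Per-job submission on YARN with Flink 1.12; -D passes config overrides.
# Sizes below are examples only, not values from this thread.
./bin/flink run -d -t yarn-per-job \
  -Djobmanager.memory.process.size=2048m \
  -Dtaskmanager.memory.process.size=4096m \
  /opt/flink-1.12.0/examples/myProject/bi-wxfx-fqd-1.0.0.jar
```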
Using table-alias.column-name inside ROW produces a parse error. I hit the same problem with HBase before; I usually wrap the query in a subquery.
On 1/11/2021 11:04, 马阳阳 wrote:
We have a SQL statement that composes a row out of a table's columns. The simplified SQL
looks like:
INSERT INTO flink_log_sink
SELECT
b.id,
Row(b.app_id, b.message)
FROM
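The workaround mentioned earlier in the thread (wrapping a subquery so that ROW sees bare column names instead of alias-qualified ones) can be sketched like this; the source table name is hypothetical:

```sql
-- ROW(b.app_id, b.message) can fail to parse with alias-qualified names.
-- Select the columns in a subquery first, then build the ROW from
-- unqualified names.
INSERT INTO flink_log_sink
SELECT id, ROW(app_id, message)
FROM (
    SELECT b.id, b.app_id, b.message
    FROM flink_log_source AS b
) t;
```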
This is probably memory-related. I ran into it before: the stored state grew without bound, so the job ended after running for a few minutes and threw an exception. Try increasing memory and cleaning up state.
On Jan 4, 2021 at 11:35, 咿咿呀呀 <201782...@qq.com> wrote:
I made the changes you suggested, and the result is the same as what I had before. It runs and the output is correct. The biggest problem now is that after running for a while (about 3 minutes) I get this error: OSError:
Expected IPC message of type schema but got record batch