To unsubscribe from the user-zh@flink.apache.org mailing list, send an email with any content to
user-zh-unsubscr...@flink.apache.org
From: 谢振爵
Date: Friday, July 30, 2021 15:11
To: user-zh
Subject: Unsubscribe
Unsubscribe
From: 赵珠峰
Date: Friday, July 30, 2021 15:15
To: user-zh@flink.apache.org
Subject: Unsubscribe
Unsubscribe
This email contains confidential information. Recipient is obliged to keep the
information confidential. Any unauthorized disclosure, use, or distribution of
the information in this email is strictly prohibited. Thank you.
I am unable to restore a 1.9 savepoint into a 1.11 runtime for the very
interesting reason that the Savepoint class was renamed and repackaged between
those two releases. Apparently a Kryo serializer has that class registered in
the 1.9 runtime. I can’t think of a good reason for that.
Well, usually the plugins should be properly isolated, but Flink 1.9 is quite
old, so there is a chance the plugin classloader was not fully isolated.
But I also have a hard time concluding anything with the small stacktrace.
Do you need aws-java-sdk-core because of Kinesis?
On Fri, Jul 30, 2021 at
Hi Arvid,
Yes, we do have AWSCredentialsProvider in our user JAR. It’s coming from
aws-java-sdk-core. Must we exclude that, then?
// ah
From: Arvid Heise
Sent: Friday, July 30, 2021 11:26 AM
To: Ingo Bürk
Cc: user
Subject: Re: Unable to use custom AWS credentials provider - 1.9.2
Can you double-check if you have an AWSCredentialsProvider in your user jar
or in your flink/lib/ ? Same for S3AUtils?
On Fri, Jul 30, 2021 at 9:50 AM Ingo Bürk wrote:
> Hi Andreas,
>
> Such an exception can occur if the class in question (your provider) and
> the one being checked
Hello Yangze, thanks for responding.
I'm attempting to perform this programmatically on YARN, so looking at a log
just won't do :) What's the appropriate way to get an instance of a
ClusterClient? Do you know of any examples I can look at?
// ah
-----Original Message-----
From: Yangze Guo
Hi, when I run a Hive SQL query containing a LIKE clause, I get the following
errors:
org.apache.flink.table.planner.codegen.CodeGenException: Unsupported call:
like(VARCHAR(255), STRING NOT NULL)
org.apache.flink.table.planner.codegen.CodeGenException: Unsupported call:
like(STRING, STRING NOT NULL)
If
Would a (small) tumbling window + a (large) over window work better?
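The pre-aggregation idea suggested above can be sketched outside of Flink. This is a toy, plain-Java illustration (class and method names are made up, and the bucket size is arbitrary): rolling raw events into coarse tumbling buckets first means the wide over aggregate only has to combine bucket partials instead of raw rows.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

class PreAggSketch {
    // Pre-aggregate raw values into tumbling buckets, then let the wide
    // "over" aggregate combine only the bucket partials.
    static long rollingSumViaBuckets(long[] events, int bucketSize) {
        // tumbling pre-aggregate: one partial sum per bucket
        List<Long> buckets = new ArrayList<>();
        for (int i = 0; i < events.length; i += bucketSize) {
            long s = 0;
            for (int j = i; j < Math.min(i + bucketSize, events.length); j++) {
                s += events[j];
            }
            buckets.add(s);
        }
        // the "over window" now combines far fewer rows
        long total = 0;
        for (long b : buckets) total += b;
        return total;
    }

    public static void main(String[] args) {
        long[] events = new long[86_400];  // one event per second for 24h
        Arrays.fill(events, 1);
        // 60s tumbling buckets: the over aggregate sees 1440 rows, not 86400
        System.out.println(rollingSumViaBuckets(events, 60)); // prints 86400
    }
}
```

The same trick in Flink SQL would mean a small tumbling-window aggregation feeding the over window, which shrinks the state the over window has to keep.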
Wanghui (HiCampus) wrote on Fri, Jul 30, 2021 at 3:17 PM:
> Hi all:
> When I test the Over window with a window size of 5s~15s, the processing rate reaches 2000/s.
> But when I increase the window to 10+ minutes, the rate drops sharply from 2000, falling to 230/s within a few minutes.
> My questions:
> How can I tune Over window performance? I will later increase the window to 24 hours, and at this rate performance will degrade quickly.
> My test node configuration: 8C +
Question:
The TM memory configured for the Flink job is 6 GB, but the TM process actually uses 8 GB.
What is using so much extra memory?
Does Flink's RocksDB really overuse that much memory?
Or is there something wrong with my configuration?
Memory usage:
> top - 19:05:36 up 304 days, 9:12, 0 users, load average: 7.24, 5.99,
> 5.25
> Tasks: 5 total, 1 running, 4 sleeping, 0 stopped, 0 zombie
> %Cpu(s): 5.8 us, 0.9 sy, 0.0 ni,
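For reference, in recent Flink versions (1.10+ unified memory model) RocksDB's native memory is budgeted out of managed memory, and total process memory includes JVM overhead, so the knobs below are usually where overuse hides. A flink-conf.yaml sketch — the values are illustrative and the option names should be verified against your Flink version:

```yaml
taskmanager.memory.process.size: 6g         # total budget for the whole TM process
taskmanager.memory.managed.fraction: 0.4    # share handed to RocksDB and friends
state.backend.rocksdb.memory.managed: true  # cap RocksDB at the managed budget
taskmanager.memory.jvm-overhead.max: 1g     # headroom for other native allocations
```

If `state.backend.rocksdb.memory.managed` is disabled, RocksDB allocates outside the configured budget, which can explain a process footprint well above `taskmanager.memory.process.size`.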
I am using RocksDB as the state backend. My pipeline checkpoint size is
hardly ~100kb.
I will add gc and heap dump config and will let you know of any findings
Right now I suspect there is a memory leak either in the Flink CDC
code or in the Iceberg sink.
Existing table:
CREATE TABLE t (
  a bigint,
  b bigint,
  c bigint,
  PRIMARY KEY (a) NOT ENFORCED
) WITH (
  ...
);
In our scenario we only want to update field b by primary key a and leave the other fields unchanged. For example, MySQL supports:
insert into t(a,b,c) select '1','2','3' on duplicate key update b='4';
When the primary key already exists, only field b is updated and field c keeps its value.
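For illustration only — plain Java rather than Flink SQL, with a hypothetical `Row` type and `upsertB` helper — the desired "update only b on duplicate key" semantics look like this:

```java
import java.util.HashMap;
import java.util.Map;

class PartialUpsertDemo {
    // Row (a, b, c); a is the primary key.
    record Row(long a, String b, String c) {}

    // Mimics MySQL's "ON DUPLICATE KEY UPDATE b = ...":
    // insert when the key is new, otherwise update only column b.
    static void upsertB(Map<Long, Row> table, Row incoming) {
        table.merge(incoming.a(), incoming,
                (old, neu) -> new Row(old.a(), neu.b(), old.c())); // keep old c
    }

    public static void main(String[] args) {
        Map<Long, Row> t = new HashMap<>();
        upsertB(t, new Row(1, "2", "3"));
        upsertB(t, new Row(1, "4", "9"));  // duplicate key: only b changes
        System.out.println(t.get(1L));     // prints Row[a=1, b=4, c=3]
    }
}
```

The open question in the thread is whether Flink SQL's upsert sinks can express this column-level merge, since a plain upsert replaces the whole row.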
I don't know what the future plan for this is.
Paul Lam wrote on Fri, Jul 30, 2021 at 2:51 PM:
>
> Sharing is not possible right now. The Flink JobManager's principal is fixed at startup.
>
> Best,
> Paul Lam
>
> > On Jul 30, 2021, at 14:46, Ada Luna wrote:
> >
> > In a Flink YARN session, the principal is changed on every job submission, because we need permission isolation and every user has their own principal.
> >
> > So is it the case that Flink session mode cannot support multiple principals sharing one Flink
Could anyone help take a look at this issue?
The RMClient's and YarnResourceManagers internal state about the
number of pending container requests for resource has
diverged. Number client's pending container requests 1 != Number RM's pending
container requests 0;
Hello, I have been trying to use StreamingFileSink to write parquet files into Azure Blob Storage. I am getting the following error. I did see in the ticket https://issues.apache.org/jira/browse/FLINK-17444 that support for StreamingFileSink is not yet provided.
Hi Dan,
sorry for the mixup. I think the idleness definition [1] is orthogonal to
the used source interface. The new source interface just makes it more
obvious to the user that they can override the watermark strategy.
I'd still recommend having a look at the new Kafka source though. One
Hi:
When I use Over window aggregation with a window size of 1 hour, I find the processing
speed decreases over time. How can I tune the over window?
Best regards
Hui Wang
Send anything to user-zh-unsubscr...@flink.apache.org
hihl wrote on Tue, Jul 27, 2021 at 5:50 PM:
> Unsubscribe
Thanks Chesnay. Will try and report back.
On Fri, Jul 30, 2021, 10:19 Chesnay Schepler wrote:
> Of course it is finding the file, you are actively pointing it towards it.
> The BashJavaUtils are supposed to use the log4j configuration file *that
> is bundled in the BashJavaUtils.jar*, which you
Hi Andreas,
Such an exception can occur if the class in question (your provider) and
the one being checked (AWSCredentialsProvider) were loaded from
different class loaders.
Any chance you can try once with 1.10+ to see if it would work? It does
look like a Flink issue to me, but I'm not sure
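The failure mode Ingo describes can be reproduced in plain Java. This is a minimal sketch, not anything Flink-specific: `Marker` stands in for `AWSCredentialsProvider`, and the child-first loader simulates the same class being shipped in both the user jar and flink/lib. A class defined twice by different class loaders fails `instanceof` checks even though the names match.

```java
import java.io.IOException;
import java.io.InputStream;

class IsolationDemo {
    /** Stands in for AWSCredentialsProvider. */
    public static class Marker {}

    /** Child-first loader that redefines Marker from its own bytecode,
     *  simulating the same class present in two classpaths. */
    static class ChildFirstLoader extends ClassLoader {
        ChildFirstLoader(ClassLoader parent) { super(parent); }

        @Override
        protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
            if (name.equals(Marker.class.getName())) {
                try (InputStream in = getParent()
                        .getResourceAsStream(name.replace('.', '/') + ".class")) {
                    if (in == null) throw new ClassNotFoundException(name);
                    byte[] bytes = in.readAllBytes();
                    return defineClass(name, bytes, 0, bytes.length); // second definition
                } catch (IOException e) {
                    throw new ClassNotFoundException(name, e);
                }
            }
            return super.loadClass(name, resolve);
        }
    }

    /** True iff an instance from the child loader passes "instanceof Marker". */
    static boolean sameType() {
        try {
            Class<?> dup = Class.forName(Marker.class.getName(), true,
                    new ChildFirstLoader(IsolationDemo.class.getClassLoader()));
            Object o = dup.getDeclaredConstructor().newInstance();
            return o instanceof Marker; // false: same name, different defining loader
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println("instanceof across loaders: " + sameType()); // prints false
    }
}
```

This is why having `aws-java-sdk-core` in both the user jar and flink/lib (or a plugin directory) can produce "cannot be cast to AWSCredentialsProvider"-style failures.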
Of course it is finding the file, you are actively pointing it towards it.
The BashJavaUtils are supposed to use the log4j configuration file /that
is bundled in the BashJavaUtils.jar/, which you are now interfering
with. That's also why it doesn't require all of lib/ to be on the
classpath;
Hi all:
When I test the Over window with a window size of 5s~15s, the processing rate reaches 2000/s.
But when I increase the window to 10+ minutes, the rate drops sharply from 2000, falling to 230/s within a few minutes.
My questions:
How can I tune Over window performance? I will later increase the window to 24 hours, and at this rate performance will degrade quickly.
My test node configuration: 8C + 16G
Flink configuration: taskmanager process memory: 8G
Best regards
WangHui
Unsubscribe
Unsubscribe
CREATE TABLE `cosldatacenter.ods_emp_maindata_iadc_paramvalue`(
`paramvalue_id` string COMMENT '',
`platform_id` string COMMENT '',
`equipment_id` string COMMENT '',
`param_id` string COMMENT '',
`param_value` string COMMENT '',
`remark` string COMMENT '',
`create_time` string COMMENT '',
Sharing is not possible right now. The Flink JobManager's principal is fixed at startup.
Best,
Paul Lam
> On Jul 30, 2021, at 14:46, Ada Luna wrote:
>
> In a Flink YARN session, the principal is changed on every job submission, because we need permission isolation and every user has their own principal.
>
> So is it the case that Flink session mode cannot support multiple principals sharing one Flink session cluster, and only per-job mode works?
> Or does each user with a separate principal need a dedicated session?
It is finding the file though, the problem is that the lib/ might not be on
the classpath when the file is being parsed, thus the YAML file is not
recognized as being parsable.
Is there a way to differ the inference from BashJavaUtils to the actual
bootstrap of the app? or perhaps add the lib to
In a Flink YARN session, the principal is changed on every job submission, because we need permission isolation and every user has their own principal.
So is it the case that Flink session mode cannot support multiple principals sharing one Flink session cluster, and only per-job mode works?
Or does each user with a separate principal need a dedicated session?
Hi all, while running queries with sql-client, many temporary files like the following are created under the /tmp directory. Is there somewhere this can be configured?
```txt
00615a2c-c0f6-4ca9-b5c4-ee8d69ca2513
1098b539-31f2-4fbb-9e7b-46d490ff25d6
13b0dcbb-2e2c-4b85-9969-f90915e2a9ca
21f6114e-e2f8-4a64-aba4-0087b708dd7b
22dee74f-5ef2-4763-bfb3-956cd533b33e
2774db29-0ac7-40f6-9210-eb6693c1a167
```
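If these are Flink's local scratch files, their location is configurable via `io.tmp.dirs`. A minimal flink-conf.yaml sketch — the path below is illustrative, and whether sql-client's temp files honor this option should be verified for your version:

```yaml
# Redirect Flink's scratch files away from /tmp
io.tmp.dirs: /data/flink-tmp
```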
I actually have already specified the data type hint, but that doesn't work:
public void accumulate(LastDecimalAccumulator accumulator,
> @DataTypeHint("DECIMAL(38, 18)") BigDecimal value)
> {
> if (value != null) {
> accumulator.f0 = value;
> }
> }
>
I did