Is it possible to rename execution stages from the Table API? Right now the
entire select transformation appears in plain text in the task name, so the
log entries from ExecutionGraph are over 10,000 characters long and the log
files are incredibly difficult to read.
For example, even a simple select
Hi Timo,
thanks for the help here, wrapping the MapView in a case class indeed
solved the problem.
It was not immediately apparent from the documentation that using a MapView
as a top-level accumulator would cause an issue; it seemed a straightforward,
intuitive way to use it :)
Cheers
Clemens
Could you please ensure that you are using the native Kubernetes mode [1]?
For standalone on K8s [2], you need to set the annotation manually in the
jobmanager YAML file.
[1].
https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/resource-providers/native_kubernetes/
[2].
Hi,
We have run into this error with job id 0 before. In our case the job had ZooKeeper-based HA
enabled, but the savepoint command was run with a Flink client that had no HA configured.
You may want to check whether you are hitting the same problem.
Michael Ran wrote on Fri, Jul 23, 2021 at 10:43 AM:
> Could it be a problem with the files, e.g. write permissions?
> On 2021-07-13 17:31:19, "仙剑……情动人间" <1510603...@qq.com.INVALID> wrote:
> >Hi All,
> >
> >
> > I triggered Flink
>
Hi comsir,
Computing the lag as the offsets from the Kafka cluster metadata minus the current group offsets is fairly accurate.
You need to maintain the group id yourself.
comsir <609326...@qq.com.invalid> wrote on Tue, Jul 20, 2021 at 12:41 PM:
> hi all
> For Flink jobs that use Kafka as a source, how do you all monitor the Kafka lag?
> The goals of monitoring the lag: 1. dashboard display, 2. alerting when the lag grows.
> A small question:
> 1. Flink ships many related native metrics, but after looking into them none seemed very accurate; which metric do you all use?
>
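The approach above (lag = latest broker offset minus the group's committed offset, per partition) can be sketched in Python. The offset maps here are hypothetical stand-ins for what you would fetch from the Kafka client APIs (end offsets from the brokers, committed offsets for the group you maintain yourself):

```python
# Sketch: consumer lag per partition and in total.
# end_offsets / committed are hypothetical inputs; in practice they come
# from the Kafka client, not from Flink's own metrics.

def consumer_lag(end_offsets, committed):
    """Return (per-partition lag, total lag).

    A partition with no committed offset is treated as fully lagging
    from offset 0 -- an assumption; adjust to your semantics.
    """
    per_partition = {
        tp: end - committed.get(tp, 0)
        for tp, end in end_offsets.items()
    }
    return per_partition, sum(per_partition.values())

end_offsets = {("events", 0): 1200, ("events", 1): 900}
committed = {("events", 0): 1150, ("events", 1): 900}
lags, total = consumer_lag(end_offsets, committed)
# lags == {("events", 0): 50, ("events", 1): 0}, total == 50
```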
Unsubscribe
Unsubscribe
morris996 <morris...@163.com>
In the end I found the root cause: the dual-stream JOIN had no TTL configured. The output buffer of the dual-stream JOIN task fills up, and Flink then enters a zombie state and stops consuming any data.
Ada Luna wrote on Mon, Jul 19, 2021 at 7:06 PM:
>
> Could the async I/O ordered queue filling up be what is deadlocking the operator?
>
> Ada Luna wrote on Mon, Jul 19, 2021 at 2:02 PM:
> >
> > From the backpressure information I can see that everything upstream of the
> > async wait operator is under severe backpressure. Very likely this operator is
> > deadlocked or stuck in an infinite loop, but I don't yet know how to investigate further.
> >
> > "async wait
Hi,
You need to make sure that PyFlink is available in the cluster nodes. There are
a few ways to achieve this, e.g.
- Install PyFlink on all the cluster nodes
- Install PyFlink in a virtual environment and specify it via python archive [1]
Regards,
Dian
[1]
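For the second option, the submission might look like the sketch below. The archive name, paths, and entry script are hypothetical, and the `-pyarch`/`-pyexec` flags should be verified against the CLI of the Flink version in use:

```shell
# Ship a virtual environment (zipped locally as venv.zip) with the job
# and use its Python interpreter on the cluster side.
flink run \
  -pyarch venv.zip \
  -pyexec venv.zip/venv/bin/python \
  -py job.py
```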
Caused by: java.lang.ClassNotFoundException:
com.amazonaws.services.s3.model.AmazonS3Exception
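That class ships with Flink's bundled S3 filesystem support, so this usually means the S3 plugin is not on the classpath. A likely fix, assuming the standard Flink distribution layout (an assumption; adjust paths to your setup) is to enable the plugin before starting the cluster:

```shell
# Enable the bundled S3 (Hadoop) filesystem plugin.
# Assumption: FLINK_HOME points at the Flink distribution root.
mkdir -p "$FLINK_HOME/plugins/s3-fs-hadoop"
cp "$FLINK_HOME"/opt/flink-s3-fs-hadoop-*.jar "$FLINK_HOME/plugins/s3-fs-hadoop/"
```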
johnjlong <johnjl...@163.com>
On Jul 27, 2021 at 15:18, maker_d...@foxmail.com wrote:
Dear developers,
Hello!
I am deploying Flink in native Kubernetes mode, with MinIO as the file system, configured as follows:
state.backend: filesystem
+ user mailing list
I don't have permission to assign to you, but here is the JIRA ticket:
https://issues.apache.org/jira/browse/FLINK-23519
Thanks!
On Tue, Jul 27, 2021 at 4:40 AM Yun Tang wrote:
> Hi Mason,
>
> I think this request is reasonable and you could create a JIRA ticket so
> that
This thread is duplicated on the dev mailing list [1].
[1]
https://lists.apache.org/x/thread.html/r87fa8153137a4968f6a4f6b47c97c4d892664d864c51a79574821165@%3Cdev.flink.apache.org%3E
Best,
D.
On Tue, Jul 27, 2021 at 5:38 PM Kathula, Sandeep
wrote:
> Hi,
>
> We have a simple Beam
Hi,
We have a simple Beam application, like a word count, running with the Flink
runner (Beam 2.26 and Flink 1.9). We are using Beam's value state. I am trying
to read the state from a savepoint using Flink's State Processor API but am getting
a NullPointerException. Converted the whole code into
Hi Till,
Having been unaware of this mail thread, I've created a Jira bug,
https://issues.apache.org/jira/browse/FLINK-23509, which also proposes a simple
solution.
Regards
Matthias
Hi,
I am running the following Hive SQL on Flink 1.13.1:
CREATE CATALOG `tempo_df_hive_default_catalog` WITH(
'type' = 'hive',
'default-database' = 'default'
);
USE CATALOG tempo_df_hive_default_catalog;
CREATE TABLE IF NOT EXISTS `default`.`tempo_blackhole_table` (
f0 INT
);
use cosldatacenter;
Hi Mason,
I think this request is reasonable and you could create a JIRA ticket so that
we could resolve it later.
Best,
Yun Tang
From: Mason Chen
Sent: Tuesday, July 27, 2021 15:15
To: Yun Tang
Cc: Mason Chen ; user@flink.apache.org
Subject: Re:
Unsubscribe
Hi!
Try this:
sql.zipWithIndex.foreach { case (sql, idx) =>
  val result = tableEnv.executeSql(sql)
  if (idx == 7) {
    result.print()
  }
}
igyu wrote on Tue, Jul 27, 2021 at 4:38 PM:
> tableEnv.executeSql(sql(0))
> tableEnv.executeSql(sql(1))
> tableEnv.executeSql(sql(2))
>
tableEnv.executeSql(sql(0))
tableEnv.executeSql(sql(1))
tableEnv.executeSql(sql(2))
tableEnv.executeSql(sql(3))
tableEnv.executeSql(sql(4))
tableEnv.executeSql(sql(5))
tableEnv.executeSql(sql(6))
tableEnv.executeSql(sql(7)).print()
that is OK
but I hope
Hi Team, I have set "kubernetes.jobmanager.annotations", but I can't find
these annotations on the k8s Deployment, although they do appear on the job
manager pod.
Is this by design, or was it just missed?
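For reference, the option takes a comma-separated list of key:value pairs; the keys and values below are hypothetical placeholders:

```yaml
# Hypothetical annotation keys and values.
kubernetes.jobmanager.annotations: a1:v1,a2:v2
```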
This feels like the simplest error, but I'm struggling to get past it. I
can run pyflink jobs locally just fine by submitting them either via
`python sample.py` or `flink run --target local -py sample.py`. But, when I
try to execute on a remote worker node, it always fails with this error:
Dear developers,
Hello!
I am deploying Flink in native Kubernetes mode, with MinIO as the file system, configured as follows:
state.backend: filesystem
fs.allowed-fallback-filesystems: s3
s3.endpoint: http://172.16.14.40:9000
s3.path-style: true
s3.access-key: admin
s3.secret-key: admin123
Yup, your understanding is correct; that was the analogy I was trying to make!
> On Jul 26, 2021, at 7:57 PM, Yun Tang wrote:
>
> Hi Mason,
>
> In RocksDB, one state corresponds to a column family, and we could
> aggregate all RocksDB native metrics per column family. If my understanding
>