Hi, parallelism can be changed since 1.2; see the official documentation [1]. But note that the max parallelism must not change, and if max parallelism is not specified explicitly, it is computed from the parallelism [2].
[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.9/ops/state/savepoints.html#what-happens-when-i-change-the-parallelism-of-my-program-when-restoring
[2]
Due to resource constraints, I would like to take a savepoint of a running job and then restart from that savepoint with a reduced parallelism. However, I searched the documentation and did not find anything about reducing parallelism, so I want to ask whether this is feasible?
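For reference, the usual flow with the Flink CLI is to trigger a savepoint and then resubmit with a lower `-p` value. A minimal sketch (the job ID, savepoint path, and jar name below are placeholders, not values from this thread):

```shell
# Trigger a savepoint for the running job
flink savepoint <jobId>

# Resubmit from the savepoint with a smaller parallelism (e.g. 4)
flink run -s <savepointPath> -p 4 myjob.jar
```

As noted above, this only works if the max parallelism stays the same; if it was never set explicitly, Flink derived it from the original parallelism, so check that value before scaling down.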
Thanks everyone for your reply.
So far all the replies tend toward option 1 (dropping Python 2 support in 1.10), and
we will continue to listen for any other opinions.
@Jincheng @Hequn, you are right, things become more complicated if dropping
Python 2 support is performed after Python UDF has
Hi Yang,
Thank you very much for your reply.
I have added the configurations to my Hadoop cluster client; both hdfs-site.xml
and core-site.xml are configured, and the client can read mycluster1 and mycluster2.
But when I submit the Flink job to the YARN cluster, the Hadoop client
configuration is
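For context, a client that reads from two HDFS clusters typically lists both nameservices in hdfs-site.xml. A minimal sketch of such a setup (the property names are standard HDFS HA configuration; the nameservice IDs mycluster1 and mycluster2 are taken from the message, the hostnames are placeholders):

```xml
<!-- hdfs-site.xml: declare both nameservices so the client can resolve either -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster1,mycluster2</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster1</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster1.nn1</name>
  <value>namenode1.example.com:8020</value>
</property>
```

When submitting to YARN, the job picks up whatever configuration directory `HADOOP_CONF_DIR` points to, so it is worth checking that the submitting environment exports the directory containing these files.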