You can call the .name() method on an operator to give it a name; the method's parameter is the operator name.
For example, if the stream to be written out is `stream`:
stream.sinkTo(sink).name("sink-name")
--
Best,
Hjw
At 2023-04-14 16:26:35, "小昌同学" wrote:
>I write streaming data out to MySQL. In Flink's built-in web UI there is a sink node shown as "Sink: Unnamed". Can this sink node be given a name?
>
>小昌同学
>ccc0606fight...@163.com
--
Best,
Hjw
At 2023-04-04 12:23:26, "Shammon FY" wrote:
Hi hjw
To rescale data for the dim join, I think you can use `partition by` in SQL before the dim join, which will redistribute the data by a specific column. In addition, you can add a cache for the dim table.

, so only one subtask in the Lookup join operator can receive data. I want to set the relationship between the Kafka source and the Lookup join operator to rebalance, so that all subtasks in the Lookup join operator can receive data.
Env:
Flink version: 1.15.1
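A sketch of the dim-table cache idea mentioned above, using the JDBC connector's lookup cache options. The table name, columns, and URL are made-up examples; the `lookup.cache.*` keys are the JDBC connector's documented options:

```sql
-- Hypothetical MySQL dim table with a lookup cache, so repeated join
-- keys are served from memory instead of querying MySQL every time.
CREATE TABLE dim_product (
  id   BIGINT,
  name STRING,
  PRIMARY KEY (id) NOT ENFORCED
) WITH (
  'connector'             = 'jdbc',
  'url'                   = 'jdbc:mysql://localhost:3306/mydb',
  'table-name'            = 'product',
  'lookup.cache.max-rows' = '10000',
  'lookup.cache.ttl'      = '10min'
);
```

With the cache enabled, even if a single subtask receives most of the lookup traffic, repeated keys no longer translate into repeated database queries.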
--
Best,
Hjw
    - name: TZ
      value: Asia/Shanghai
  jobManager:
    resource:
      memory: "2048m"
      cpu: 1
  taskManager:
    resource:
      memory: "2048m"
      cpu: 1
  job:
    jarURI: local:///opt/flink/examples/streaming/StateMachineExample.jar
    parallelism: 2
    upgradeMode: stateless
--
Best,
Hjw
Hi,
Sorry, I forgot to add the background: I deploy everything in Application mode.
Indeed, as you said, in Flink session mode the JM and TM do not disappear when a job fails.
--
Best,
Hjw
At 2022-12-26 11:25:50, "RS" wrote:
>Hi,
>
>
>Mine is also on K8s, but in session mode; it depends on which mode you use.
>When my streaming job failed, with checkpoints configured it retried automatically; the JM and TM were still there, and I could see the job's exception info directly.
>
>
>Thanks
Mode: Flink Kubernetes Operator 1.2.0 (Application Mode)
--
Best,
Hjw
Flink on Native K8s:
if a streaming job fails with an exception, the Pods hosting the job's JobManager and TaskManager both disappear, so the job logs can no longer be viewed.
Is there any solution for viewing logs in K8s mode?
So far my only idea is to persist the logs printed by the JM and TM by mounting NFS through a PV/PVC. That way the logs survive and are easy to inspect.
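A minimal sketch of that PV/PVC idea using the Flink Kubernetes Operator's podTemplate. The claim name flink-logs-pvc and the NFS-backed PVC itself are assumptions, not something from this thread; the mount path is the default Flink log directory:

```yaml
# Assumed: an NFS-backed PersistentVolumeClaim named "flink-logs-pvc"
# already exists. Mounting it over the Flink log directory lets the
# JM/TM logs survive Pod deletion.
podTemplate:
  spec:
    containers:
      - name: flink-main-container   # the operator's expected main container name
        volumeMounts:
          - name: flink-logs
            mountPath: /opt/flink/log
    volumes:
      - name: flink-logs
        persistentVolumeClaim:
          claimName: flink-logs-pvc
```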
--
Best,
Hjw
Flink version:1.14.4
--
Best,
Hjw
<url>http://maven.aliyun.com/nexus/content/groups/public/</url>
<releases><enabled>false</enabled></releases>
<snapshots><enabled>false</enabled><updatePolicy>never</updatePolicy></snapshots>
--
Best,
Hjw
Pod.
2. Run kubectl delete flinkdeployment my-deployment directly. All of the job's associated resources are deleted, including the HA resources, state storage information, the Deployment, and so on.
--
Best,
Hjw
Hi. What about streaming jobs? What I actually want is to keep the logs when a job fails, so they are available for troubleshooting.
--
Best,
Hjw
At 2022-11-28 15:33:37, "Biao Geng" wrote:
>Hi, it is mainly so that the job is kept around when it reaches FINISHED or FAILED. You could try running a batch job.
>Best,
>Biao Geng
>
>Get Outlook for iOS<https://aka.ms/o0ukef>
>____
>From: hjw
--
Best,
Hjw
in the production environment?
--
Best,
Hjw
Hi,
The yarn.classpath.include-user-jar parameter is shown as the yarn.per-job-cluster.include-user-jar parameter in Flink 1.14.
I have tried DISABLED, FIRST, LAST, and ORDER, but the error still happened.
Best,
Hjw
version: flink 1.14.0
Best,
Hjw
Thanks, everyone. I will increase the parallelism to solve the problem. Besides, I am looking forward to support for remote state.
Best,
Hjw
Hi, Alexander
When a Flink job is deployed on Native K8s, each taskmanager is a Pod. The data directory size of a single container is limited in our company. Are there any ideas for dealing with this?
Best,
Hjw
should take some measures to deal with it (e.g. mount a volume for the local data directories that store the RocksDB database, etc.).
thx.
Best,
Hjw
Hi, Matthias
I have solved this problem as you said. The link to the PR is [1]. Thank you.
[1] https://github.com/apache/flink/pull/20671
Best,
Hjw
is 4 commits ahead, 11 commits behind apache:release-1.15.
When I rebase the branch from upstream and push to my fork repo, the 11 commits behind apache:release-1.15 also appear in my PR's changed files.
How can I handle this situation? thx.
Best,
Hjw
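One common fix for this "commits behind" diff noise is to rebase the PR branch onto the latest upstream release branch and force-push, so the PR diff contains only your own commits. Below is a self-contained sketch that simulates the situation with throwaway local repositories (all repo, branch, and file names are made up); against a real fork you would run the same fetch/rebase and then `git push --force-with-lease`:

```shell
set -e
# Throwaway identity so the demo commits work anywhere.
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com

work=$(mktemp -d)
cd "$work"

# Simulate "upstream" with a release-1.15 branch.
git init -q upstream
cd upstream
echo base > base.txt
git add base.txt
git commit -q -m "upstream base"
git branch -M release-1.15
cd ..

# The fork: clone upstream and add one commit on a feature branch.
git clone -q upstream fork
cd fork
git checkout -q -b my-fix
echo fix > fix.txt
git add fix.txt
git commit -q -m "my fix"

# Meanwhile upstream moves ahead (the "11 commits behind" situation).
cd ../upstream
echo more > more.txt
git add more.txt
git commit -q -m "upstream moved"
cd ../fork

# The fix: fetch and rebase the feature branch onto the updated branch.
git fetch -q origin
git rebase -q origin/release-1.15 my-fix

# The branch now holds only our commit on top of upstream's history;
# against a real fork: git push --force-with-lease origin my-fix
subjects=$(git log --format=%s)
echo "$subjects"
```

After the rebase, the PR diff against apache:release-1.15 shows only "my fix", because the upstream commits are now its ancestors rather than divergent history.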
I committed a PR to the Flink GitHub repo.
An error happened in the CI build:
[error] The job running on agent Azure Pipelines 6 ran longer than the maximum time of 310 minutes. For more information, see https://go.microsoft.com/fwlink/?linkid=2077134
How can I solve this problem?
How can I re-trigger the CI?
/flink-docs-release-1.15/docs/deployment/ha/kubernetes_ha/#high-availability-data-clean-up
Best,
Hjw
----
From: "user-zh"
About a problem with the Kafka Connector API in IDEA; the Jira issue is:
https://issues.apache.org/jira/browse/FLINK-28758
----
When I run mvn clean install, it runs the Flink test cases.
However, I get an error:
[ERROR] Failures:
[ERROR] KubernetesClusterDescriptorTest.testDeployApplicationClusterWithNonLocalSchema:155
Previous method call should have failed but it returned:
I tried to run mvn clean install on the Flink 1.15 parent, but it fails.
An error happens while compiling flink-clients.
Error log:
Failed to execute goal org.apache.maven.plugins:maven-assembly-plugin:2.4:single (create-test-dependency) on project flink-clients: Error reading assemblies: Error locating assembly
A question about Flink Docker image tags:
For 1.15.1 there are tags such as scala_2.12-java8, scala_2.12-java11, and java8. My Flink job is written in Java rather than Scala.
If the job runs on JDK 11, which Flink tag should I use instead of java8?
thx
will enter restarting.
It is successful to use the savepoint command alone.
Error log:
13:33:42.857 [Kafka Fetcher for Source: nlp-kafka-source - nlp-clean (1/1)#0] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=consumer-hjw-3, groupId=hjw] Kafka consumer has been closed
13:33
The Flink 1.14 documentation says the Flink 1.14 Kafka connector is only backwards compatible with broker versions 0.10.0 or later.
Unfortunately, I have to use Flink 1.14 to consume from a Kafka 0.9 cluster. How can I do it? thx.
Flink version: 1.15.0
How can Flink 1.15.0 be deployed on native K8s? Any pointers on Flink on Native K8s would be appreciated :)
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/fault-tolerance/state/
hjw <1010445...@qq.com.invalid> wrote at 01:32 on 2022-03-09:
For the SQL query `SELECT color, sum(id) FROM T GROUP BY color`, how does Flink store the state for T when grouping by the key (color)?
When deploying to K8s, Flink reads ~/.kube/config, but I get an SSL error, even though kubectl itself can access the K8s cluster (kubectl get pod -n namespace works).
Flink: 1.13.6
~/.kube/config:
apiVersion: v1
kind: Config
clusters:
- name: "yf-dev-cluster1"
  cluster:
    server:
Flink 1.13, Flink on Native K8s:
can the hosts entries of the jobmanager and taskmanager Pods be set through a pod-Template? How should the jm and tm pod-Templates be written for K8s?