Could you send the complete log file?
On Tue, Apr 19, 2022 at 10:53 AM 799590...@qq.com.INVALID
<799590...@qq.com.invalid> wrote:
> Software versions:
>
> flink-1.13.6
> python3.6.8
> Python 3.6.8 is also installed on my local Win10 machine, the Python
> environment variable is set, and I successfully ran $ python -m pip install
> apache-flink==1.13.6
> Deployed in standalonesession mode with one JM and two TMs; Python 3.6.8 is
> installed on all 3 cluster machines.
Hi, John.
Could you share the exception stack with us and the schema of the `dummy`
table in your database?
Best,
Shengkai
John Tipper wrote on Sun, Apr 17, 2022 at 21:15:
> Hi all,
>
> I'm having some issues with getting a Flink SQL application to work, where
> I get an exception and I'm not sure why it's
Software versions:
flink-1.13.6
python3.6.8
Python 3.6.8 is also installed on my local Win10 machine, the Python
environment variable is set, and I successfully ran $ python -m pip install
apache-flink==1.13.6
Deployed in standalonesession mode with one JM and two TMs; Python 3.6.8 and
pyflink-1.13.6 are installed on all 3 cluster machines.
Problem:
1. Calling a Python UDF reports the following error:
Servlet.service() for servlet [dispatcherServlet] in context with path []
Hello,
We are running Flink 1.13.6 in Kubernetes with k8s HA; the setup includes 1 JM
and 1 TM. Recently, in the JobManager log, I started to see:
2022-04-19T00:11:33.102Z Association with remote system
[akka.tcp://flink@10.204.0.126:6123] has failed, address is now gated for [50]
ms. Reason:
Hey everyone,
I'm curious if anyone knows the reason behind choosing 38 as
a MAX_PRECISION for DecimalType in the Table API?
I needed to process decimal types larger than that and I ended up
monkey-patching DecimalType to use a higher precision. I understand it adds
a bit of overhead, but I
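For context, a plausible rationale (an assumption on my part, not confirmed in this thread) is that 38 is the largest number of decimal digits whose unscaled value always fits in a signed 128-bit integer, a limit shared by several other SQL engines. A quick sanity check in plain Python:

```python
# A 38-digit unscaled decimal value always fits in a signed 128-bit
# integer, while a 39-digit one can overflow it.
max_unscaled_38 = 10**38 - 1      # largest 38-digit integer
max_unscaled_39 = 10**39 - 1      # largest 39-digit integer
int128_max = 2**127 - 1           # largest signed 128-bit integer

print(max_unscaled_38 <= int128_max)  # True: 38 digits always fit
print(max_unscaled_39 <= int128_max)  # False: 39 digits can overflow
```

If that is indeed the motivation, raising MAX_PRECISION would require a wider internal representation, which may be the overhead you noticed when monkey-patching.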
Hi Madan,
The reason might be that the -D parameters are not recognized since you used
the old-fashioned YARN CLI command, where you need to add the -y prefix for
command options. Use -yD instead of -D or use "flink run -t yarn-per-job
-Dyarn.application.name=jobname" instead of "flink run -m
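To illustrate the two styles side by side (the job name and jar path here are placeholders):

```
# Old-fashioned YARN CLI: dynamic options need the -y prefix
flink run -m yarn-cluster -yD yarn.application.name=jobname ./job.jar

# Generic CLI: -t selects the deployment target, so plain -D is recognized
flink run -t yarn-per-job -Dyarn.application.name=jobname ./job.jar
```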
Hello Team,
I am trying to use the adaptive scheduler with Flink 1.14 to run a Flink job
based on the available resources instead of waiting for the required
parallelism (scaling), but I don't see Flink recognizing the adaptive
scheduler.
Ex: flink run -m yarn-cluster -ynm jobName -p 128 -D
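One thing worth checking (an assumption, not confirmed in the thread): in Flink 1.14 the adaptive scheduler is selected via the `jobmanager.scheduler` option, and with the old-style `-m yarn-cluster` CLI a dynamic property needs the -y prefix, roughly like this (job name and jar path are placeholders):

```
flink run -m yarn-cluster -ynm jobName -p 128 \
  -yD jobmanager.scheduler=adaptive ./job.jar
```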
If you are using Kubernetes to deploy Flink, you could think about an
initContainer on the TMs or a custom Docker entry point that does this
initialization.
Best,
Austin
On Mon, Apr 18, 2022 at 7:49 AM huweihua wrote:
> Hi, Init stuff when task manager comes up is not an option.
> But if the
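For illustration, the initContainer suggestion above might look roughly like this (image, paths, and volume names are made up):

```
# Fragment of a TaskManager pod template: the initContainer performs
# one-time setup (e.g. staging a keystore) before the TM starts.
spec:
  initContainers:
    - name: init-keystore
      image: busybox:1.35                # placeholder image
      command: ["sh", "-c", "cp /secrets/keystore.jks /shared/"]
      volumeMounts:
        - name: shared
          mountPath: /shared
  containers:
    - name: flink-main-container
      volumeMounts:
        - name: shared
          mountPath: /opt/flink/secrets
  volumes:
    - name: shared
      emptyDir: {}
```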
Hi, can anyone help with this? I never looked at a dump file before.
On Thu, Apr 14, 2022 at 11:59 AM John Smith wrote:
> Hi, so I have a dump file. What do I look for?
>
> On Thu, Mar 31, 2022 at 3:28 PM John Smith wrote:
>
>> Ok so if there's a leak, if I manually stop the job and restart it
Unsubscribe
Hi, initializing things when the task manager comes up is not an option.
But if the keystore file does not change and you are using YARN mode, maybe
you can use ‘yarn.ship-files’[1] to localize it.
[1]https://nightlies.apache.org/flink/flink-docs-master/zh/docs/deployment/config/#yarn-ship-files
>
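As a hypothetical sketch of that suggestion (the keystore path and jar name are placeholders):

```
# Ship the local keystore so YARN localizes it into each
# container's working directory
flink run -t yarn-per-job \
  -Dyarn.ship-files=/path/to/keystore.jks \
  ./my-job.jar
```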
Hi all,
We are trying to modify our Flink job with iteration (
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/dataset/iterations/).
The job works fine with expected outputs and the checkpoints are created
successfully at regular intervals. However, when we'd like to create a
Hello,
Thank you for your answer. Yes, we are using the DataStream API.
I agree that exceptions are developer’s responsibility but errors can still
happen and I would like to have a progressive approach in case they happen
instead of a blocking one.
I will take a look at your suggestion.
If you need to use Spring in Flink, you have to load the applicationContext
object in the open method. In your case, you need to initialize the Spring
context object in the sink's open method:
override def open(conf: Configuration): Unit = {
  super.open(conf)
  if (Option(SpringContextHolder.getApplicationContext).isEmpty) {
    // initialize the Spring application context here
  }
}
Hello:
First of all, thank you for taking time out of your busy schedule to read my
email. I ran into some problems while using the Flink framework and hope you
can help me with them.
I learned from materials available online and combined the Spring Boot
framework with Flink in my local environment, where it runs successfully.
However, after packaging the project into a jar with Maven and deploying it to
the Flink cluster, I cannot obtain Spring Boot's ApplicationContext in my
custom sink and source classes, so I would like to ask whether there is a
solution for this situation.
Below is the concrete implementation approach of my code: