, 2022 at 11:06 PM Roman Grebennikov wrote:
> Hi,
>
> AFAIK scala REPL was removed completely in Flink 1.15 (
> https://issues.apache.org/jira/browse/FLINK-24360), so there is nothing
> to cross-build.
>
> Roman Grebennikov | g...@dfdx.me
>
>
> On Thu, May 12, 20
done in this way. And
> the project is a bit experimental, so if you're interested in scala3 on
> Flink, you're welcome to share your feedback and ideas.
>
> with best regards,
> Roman Grebennikov | g...@dfdx.me
>
>
--
Best Regards
Jeff Zhang
he.org/jira/browse/FLINK-25128
>
>
> Best regards,
> Yuxia
>
> ------
> *From: *"Jeff Zhang"
> *To: *"User"
> *Sent: *Saturday, May 7, 2022, 10:05:55 PM
> *Subject: *Unable to start sql-client when putting
> flink-table-planner_
l.java:553)
at
org.apache.flink.table.client.gateway.context.ExecutionContext.lookupExecutor(ExecutionContext.java:154)
... 8 more
--
Best Regards
Jeff Zhang
>> Does not seem to include this script anymore.
>>
>> Am I missing something?
>>
>> How can I still start a scala repl?
>>
>> Best,
>>
>> Georg
>>
>>
--
Best Regards
Jeff Zhang
vices.sts.StsAsyncClient
> import software.amazon.awssdk.services.sts.StsAsyncClient
>
> scala> StsAsyncClient.builder
> :72: error: Static methods in interface require -target:jvm-1.8
> StsAsyncClient.builder
>
> Why do I have this error? Is there any way to solve this problem?
>
>
> Thanks,
> Jing
>
>
--
Best Regards
Jeff Zhang
at's too heavy.
>
>
> Thanks for your any suggestions or replies!
>
>
> Best Regards!
>
>
>
>
--
Best Regards
Jeff Zhang
-the-easy-way-d9d48a95ae57
The easy way to learn Flink Sql.
Hope it would be helpful for you and welcome to join our community to
discuss with others. http://zeppelin.apache.org/community.html
--
Best Regards
Jeff Zhang
https://www.yuque.com/jeffzhangjianfeng/gldg8w/dthfu2
Ada Luna wrote on Monday, August 16, 2021 at 2:26 PM:
> Currently UDFs have to be registered through the Table API.
> Will it be possible in the future to register UDFs into the context directly via SQL?
>
--
Best Regards
Jeff Zhang
>
>> > > >
>> > > > Your Personal Data: We may collect and process information about you
>> > > > that may be subject to data protection laws. For more information
>> > > > about how we use and disclose your personal data, how we protect
>> > > > your information, our legal basis to use your information, your
>> > > > rights and who you can contact, please refer to:
>> > > > http://www.gs.com/privacy-notices
>> > >
>
--
Best Regards
Jeff Zhang
connectors, formats, udfs (the most important), etc.
> > In many cases components defined by different users are mixed together, which leads to dependency conflicts that are hard to resolve.
> > Is there currently any way to isolate udf dependencies (e.g. a separate jar and classloader per udf), or does the community have any plans for this?
>
--
Best Regards
Jeff Zhang
it could be considered for adoption once supported.
> Thanks for the suggestions.
>
> On 2021-06-13 07:21:46, "Jeff Zhang" wrote:
> > Another option is flink on zeppelin: you can call the flink on zeppelin rest api and treat zeppelin as a flink
> > job server. zeppelin natively supports flink 1.10 and later
requires extracting common interfaces
> Downside: the platform implementation has to be "copied" multiple times to support different Flink versions, which is a lot of development and maintenance work.
>
>
> For now I tentatively prefer option 2. I may be wrong, and there may be a better approach I haven't thought of; suggestions are welcome.
>
>
--
Best Regards
Jeff Zhang
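For the command-line route discussed above, a minimal sketch (the paths and version keys are illustrative, not from the thread): the platform keeps one FLINK_HOME per supported Flink version and builds each `flink run` command from that version's own CLI, so the submitting client always matches the target cluster version.

```python
import os

# One FLINK_HOME per supported Flink version (illustrative paths).
FLINK_HOMES = {
    "1.10": "/opt/flink-1.10.3",
    "1.13": "/opt/flink-1.13.6",
}

def build_submit_command(version, job_jar, job_args=()):
    """Build a `flink run` command line using that version's own CLI,
    so the platform never links against any specific Flink API."""
    flink_bin = os.path.join(FLINK_HOMES[version], "bin", "flink")
    return [flink_bin, "run", job_jar, *job_args]

print(" ".join(build_submit_command("1.13", "job.jar", ("--input", "in.txt"))))
```

The command would then be launched with `subprocess.run`, capturing the exit code and stdout for job tracking; upgrading to a new Flink version reduces to installing it and adding one entry to the map.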
BTW, you can also send email to zeppelin user maillist to join zeppelin
slack channel to discuss more details.
http://zeppelin.apache.org/community.html
Jeff Zhang 于2021年6月9日周三 下午6:34写道:
> Hi Maciek,
>
> You can try zeppelin which supports pyflink and displays the flink job url
> inline.
> > > env_settings = EnvironmentSettings.new_instance().in_streaming_mode().use_blink_planner().build()
> > > table_env = TableEnvironment.create(env_settings)
> > >
> > > How can I enable Web UI in this code?
> > >
> > > Regards,
> > > Maciek
> > >
> > >
> > >
> > > --
> > > Sent from:
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
> >
>
>
> --
> Maciek Bryński
>
--
Best Regards
Jeff Zhang
ample.sql
>> +--------+
>> | result |
>> +--------+
>> |     OK |
>> +--------+
>> 1 row in set
>> Job has been submitted with JobID ace45d2ff850675243e2663d3bf11701
>> +----+------+-----+
>> | op | uuid | ots |
>> +----+------+-----+
>>
>>
>> --
>> Regards,
>> Tao
>>
>
>
> --
> Regards,
> Tao
>
--
Best Regards
Jeff Zhang
ode: 2
> Failing this attempt. Diagnostics: [2021-04-14 19:04:02.506] Exception from
> container-launch.
> Container id: container_e13_1618298202025_0017_01_01
> Exit code: 2.
>
> Since the error message is not informative, it is hard to troubleshoot and unclear where the problem actually lies. Is there any way to pinpoint the issue?
>
--
Best Regards
Jeff Zhang
that approach isn't feasible. Flink jobs are submitted through the company's platform system, and the only part I control is the code.
>
> On 2021-03-11 16:39:24, "silence" wrote:
> > Try adding it to the classpath with -C at startup
> >
> >
> >
> >--
> >Sent from: http://apache-flink.147419.n8.nabble.com/
>
--
Best Regards
Jeff Zhang
Zeppelin has a REST API: https://www.yuque.com/jeffzhangjianfeng/gldg8w/pz2xoh
jinsx wrote on Wednesday, January 27, 2021 at 2:30 PM:
> If we use zeppelin, can zeppelin provide an RPC interface?
>
>
>
> --
> Sent from: http://apache-flink.147419.n8.nabble.com/
>
--
Best Regards
Jeff Zhang
>
> Right now, to see a result in flinksql you have to define a table with='print' and then go look at the ui page, which is too cumbersome
--
Best Regards
Jeff Zhang
And check the following 2 links for more details of how to use flink on
zeppelin
https://app.gitbook.com/@jeffzhang/s/flink-on-zeppelin/
http://zeppelin.apache.org/docs/0.9.0/interpreter/flink.html
--
Best Regards
Jeff Zhang
to adapt. When building a platform, does anyone have a good approach for supporting jobs across multiple Flink versions that also allows quick upgrades to new Flink versions? The only idea I have so far is to do it via the command line.
>
--
Best Regards
Jeff Zhang
>
> org.apache.zeppelin.flink.FlinkScalaInterpreter.open(FlinkScalaInterpreter.scala:114)
> at
> org.apache.zeppelin.flink.FlinkInterpreter.open(FlinkInterpreter.java:67)
> at
>
> org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:70)
> ... 12 more
>
--
Best Regards
Jeff Zhang
> I checked: with append-mode execution, jdbc still uses the upsertSink.
>
> So can someone confirm whether this parameter is unrelated to the specific job?
>
--
Best Regards
Jeff Zhang
Are you on hadoop2? I recall this situation only occurs on hadoop3.
gangzi <1139872...@qq.com> wrote on Friday, October 16, 2020 at 11:22 AM:
> The TM's
> CLASSPATH indeed does not contain hadoop-mapreduce-client-core.jar. Could this be a problem with the hadoop cluster? Or must a shaded-hadoop jar be used? The shaded-hadoop jar is no longer officially recommended.
>
> > On October 16, 2020 at 10:50 AM, Jeff Zhang wrote:
> >
> > Take a look at the TM log; it has C
uce-client-core.jar, but that jar sits under /usr/local/hadoop-2.10.0/share/hadoop/mapreduce/*:, a directory included in HADOOP_CLASSPATH, so in principle it should be loaded.
>
> > On October 16, 2020 at 9:59 AM, Shubin Ruan wrote:
> >
> > export HADOOP_CLASSPATH=****
>
>
--
Best Regards
Jeff Zhang
Backend
>
>
> Remind.com <https://www.remind.com/> | BLOG <http://blog.remind.com/> |
> FOLLOW US <https://twitter.com/remindhq> | LIKE US
> <https://www.facebook.com/remindhq>
>
--
Best Regards
Jeff Zhang
add or update my tables and udfs dynamically (new tables can be added via sql-client, but I'd rather keep table definitions in files; udfs may be modified as well as added). That way flink would not need to be restarted.
>
--
Best Regards
Jeff Zhang
tainerId. I've seen blog posts that submit via ClusterClient, but that runs into classpath problems: some jars under FLINK_HOME/lib go missing.
> So my question is: how should I submit jobs to yarn from my own application so that I can both get the job information and avoid the trouble caused by the unspecified classpath?
> Many thanks
>
>
> best,
> xiao
--
Best Regards
Jeff Zhang
There is no search path; you need to use an absolute path
赵一旦 wrote on Thursday, September 24, 2020 at 3:22 PM:
> I read your article; there is a jars option. A follow-up question: must jars be full paths, or is there a default search path? I'd like to just write the jar name and not hard-code the path.
>
> 赵一旦 wrote on Thursday, September 24, 2020 at 3:17 PM:
>
> > That's a bit troublesome; company machines usually aren't allowed to connect to external networks.
> >
> > Jeff Zhang wrote on Thursday, September 24, 2020 at 3:15 PM:
> >
> >> flink.execution.packag
Two options:
1. Use a private maven repository
2. Build the jar yourself and specify it with flink.execution.jars
https://www.yuque.com/jeffzhangjianfeng/gldg8w/rn6g1s#3BNYl
赵一旦 wrote on Thursday, September 24, 2020 at 3:17 PM:
> That's a bit troublesome; company machines usually aren't allowed to connect to external networks.
>
> Jeff Zhang wrote on Thursday, September 24, 2020 at 3:15 PM:
>
> > Packages under flink.execution.packages are downloaded from the maven repository; see here for how to (in z
> In the zeppelin case there seems to be a flink.execution.packages setting, but it isn't explained where those packages are looked up. Is it the lib under the FLINK_HOME configured for zeppelin? The jar is in my lib but I still get an error.
>
--
Best Regards
Jeff Zhang
wrote on Thursday, September 17, 2020 at 10:07 PM:
>
> > TableEnvironment is not thread-safe.
> >
> > btw, can you describe how you use TableEnvironment in a multi-threaded setting?
> >
> > Jeff Zhang wrote on Monday, September 14, 2020 at 12:10 PM:
> >
> > > Following zeppelin's approach, call this in every thread
> > >
> > >
> > >
> >
> at
> GeneratedMetadataHandler_NonCumulativeCost.getNonCumulativeCost_$(Unknown
> Source)
> at GeneratedMetadataHandler_NonCumulativeCost.getNonCumulativeCost(Unknown
> Source)
>
> --
> Best,
> Jun Su
>
--
Best Regards
Jeff Zhang
m/springMoon/sqlSubmit
> Has anyone used it? Please share recommendations
--
Best Regards
Jeff Zhang
Take a look at this zeppelin sdk; it might suit you:
https://www.yuque.com/jeffzhangjianfeng/gldg8w/pz2xoh
1115098...@qq.com wrote on Thursday, September 10, 2020 at 9:09 AM:
> Hi all, while integrating spring boot into flink I've run into many problems; they don't seem very compatible. The official docs don't cover spring
> boot integration either. Was integration with spring boot simply not considered when flink was designed?
--
Best Regards
Jeff Zhang
e.com/
--
Best Regards
Jeff Zhang
Don't use the aliyun maven repo. Also, this is 1.10-SNAPSHOT, not the 1.10 release.
魏烽 wrote on Friday, August 21, 2020 at 8:44 PM:
> Hi all:
>
>
>
> >
> >
> >
> > --
> > Sent from: http://apache-flink.147419.n8.nabble.com/
>
--
Best Regards
Jeff Zhang
ar path, including kafka-client.jar; similarly there are format jars such as flink-json. Moreover, none of these jars exist on cluster A, yet everything runs fine. So it seems that when I submit a sql through sql-client, some of the dependencies are packaged and uploaded to cluster A? How is that decided: dynamically, from my sql, determining which jars are used, or how?
>
>
>
> To summarize: when FlinkSQL jobs are submitted via the sql-client command line, which of the jars used at runtime come from the sql-client side?
>
> From my experiments so far, the flink-json and the various connector jars all come from the sql-client side.
>
>
>
--
Best Regards
Jeff Zhang
>
> Best forideal
--
Best Regards
Jeff Zhang
er implemented one
> >
> >
> >
> >
> > 杨荣 wrote on Friday, July 24, 2020 at 10:53 AM:
> >
> > > Hi all,
> > >
> > > Questions:
> > > 1. In Embedded mode, is job submission via ClusterClient
> > > supported for distributed computation? I didn't see it in the docs; following the docs I only got Local jobs running on the local machine, which can't be used in production.
> > >
> > > 2. In which release is Gateway mode expected to land?
> > >
> >
> >
> > --
> >
> > Best Regards,
> > Harold Miao
> >
>
--
Best Regards
Jeff Zhang
wrote:
> The code is roughly like this: one kafka source table and one es sink table, executed at the end via tableEnv.executeSql("insert into
> esSinkTable select ... from kafkaSourceTable")
> After the job is submitted its name is "insert-into_someCatalog_someDatabase.someTable"
>
>
> That's quite unfriendly. Can I specify the job name myself?
--
Best Regards
Jeff Zhang
1], via
> JobClient you can cancel the job and get its status.
>
> [1]
>
>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-74%3A+Flink+JobClient+API
>
> Best,
> Godfrey
>
> Evan wrote on Thursday, July 9, 2020 at 9:40 AM:
>
> I've seen this asked before but never saw an answer. Does the Flink Streaming
> API provide a similar interface that can be called to stop the streaming job?
>
>
--
Best Regards
Jeff Zhang
can this approach be implemented?
> Thanks
>
--
Best Regards
Jeff Zhang
ork?
>
> Thank you!
>
> Mark
>
> ‐‐‐ Original Message ‐‐‐
> On Friday, June 5, 2020 6:13 PM, Jeff Zhang wrote:
>
> You can try JobListener which you can register to ExecutionEnvironment.
>
>
> https://github.com/apache/flink/blob/master/fl
ock is never run.
>
> Thank you!
>
> Mark
>
--
Best Regards
Jeff Zhang
batch mode. Examples and documentation I have come
>>>>> across so far recommend the following pattern to create such an
>>>>> environment
>>>>> -
>>>>>
>>>>> var settings = EnvironmentSettings.newInstance()
>>>>> .useBlinkPlanner()
>>>>> .inBatchMode()
>>>>> .build();
>>>>> var tEnv = TableEnvironment.create(settings);
>>>>>
>>>>> The above configuration, however, does not connect to a remote
>>>>> environment. Tracing code in TableEnvironment.java, I see the
>>>>> following method in BlinkExecutorFactory.java that appears to be
>>>>> relevant -
>>>>>
>>>>> Executor create(Map<String, String>, StreamExecutionEnvironment);
>>>>>
>>>>> However, it seems to be only accessible through the Scala bridge. I
>>>>> can't seem to find a way to instantiate a TableEnvironment that takes
>>>>> StreamExecutionEnvironment as an argument. How do I achieve that?
>>>>>
>>>>> Regards,
>>>>> Satyam
>>>>>
>>>>
--
Best Regards
Jeff Zhang
/flink/blob/master/flink-clients/src/main/java/org/apache/flink/client/program/PerJobMiniClusterFactory.java
>
>
>
>
> 月月 wrote on Sunday, May 24, 2020 at 9:11 PM:
>
> > Hello,
> > When running the project with maven in standalone mode, a MiniCluster is started automatically.
> > In that case, is the default configuration one JobManager and one TaskManager?
> >
> > I searched the documentation and found no explanation of this.
> >
> > Thanks!
> >
>
--
Best Regards
Jeff Zhang
.scala:118)
> > at
> >
> >
> org.apache.flink.table.api.internal.BatchTableEnvImpl.translate(BatchTableEnvImpl.scala:306)
> > at
> >
> >
> org.apache.flink.table.api.internal.BatchTableEnvImpl.translate(BatchTableEnvImpl.scala:281)
> > at
> >
> >
> org.apache.flink.table.api.scala.internal.BatchTableEnvironmentImpl.toDataSet(BatchTableEnvironmentImpl.scala:69)
> > at
> >
> >
> org.apache.flink.table.api.scala.TableConversions.toDataSet(TableConversions.scala:53)
> > ... 30 elided
> >
>
>
> --
> Best, Jingsong Lee
>
--
Best Regards
Jeff Zhang
n. I'm afraid that there might be other
> behaviors for other environments.
>
> So what's the best practice to determine whether a job has finished or
> not? Note that I'm not waiting for the job to finish. If the job hasn't
> finished I would like to know it and do something else.
>
--
Best Regards
Jeff Zhang
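One generic pattern for the question above is bounded polling of the job status instead of blocking on completion. This is a sketch under stated assumptions: `get_status` stands in for whatever status call the client exposes (e.g. the JobClient status API mentioned elsewhere in these threads), and the state names mirror Flink's terminal job states.

```python
import time

# Terminal job states, mirroring Flink's terminal JobStatus values.
TERMINAL_STATES = {"FINISHED", "CANCELED", "FAILED"}

def wait_or_move_on(get_status, poll_interval=0.01, max_polls=5):
    """Poll a status callback a bounded number of times.

    Returns (done, last_status): done is True only when a terminal
    state was observed; otherwise the caller can go do other work
    and poll again later instead of blocking on job completion.
    """
    last = None
    for _ in range(max_polls):
        last = get_status()
        if last in TERMINAL_STATES:
            return True, last
        time.sleep(poll_interval)
    return False, last

# Simulated status source: RUNNING twice, then FINISHED.
states = iter(["RUNNING", "RUNNING", "FINISHED"])
print(wait_or_move_on(lambda: next(states)))  # -> (True, 'FINISHED')
```

The bounded loop is the point: a False result means "not finished yet, come back later," which fits the "do something else in the meantime" requirement.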
for what purpose?
hsdcl...@163.com wrote on Wednesday, April 22, 2020 at 9:49 AM:
> Hi,
> A wild idea: would it be possible to add a module that can run user-written spark code directly on the Flink platform?
--
Best Regards
Jeff Zhang
> overall architecture complexity).
>
> @Oytun indeed we'd like to avoid recompiling everything when a single user
> class (i.e. not related to Flink classes) is modified or added. Glad to see
> that there are other people having the same problem here
>
> On Tue, Apr 21, 2020 at 4:39 PM
some other token (e.g.
> /userapi/*).
>
> What do you think about this? Does it sound reasonable to you?
> Am I the only one that thinks this could be useful for many use cases?
>
> Best,
> Flavio
>
--
Best Regards
Jeff Zhang
4:44 PM wrote:
> I am only running the zeppelin word count example by clicking the
> zeppelin run arrow.
>
>
> On Mon, 20 Apr 2020, 09:42 Jeff Zhang, wrote:
>
>> How do you run flink job ? It should not always be localhost:8081
>>
>> Som Lima wrote on Monday, April 20, 2020 at 4:33 PM
lay.
>
>
>
>
>
--
Best Regards
Jeff Zhang
Glad to hear that.
Som Lima wrote on Monday, April 20, 2020 at 8:08 AM:
> I will thanks. Once I had it set up and working.
> I switched my computers around from client to server to server to client.
> With your excellent instructions I was able to do it in 5 .minutes
>
> On Mon, 20 Apr 2020, 0
r each development.
>
>
> Anyway I kept doing fresh installs about four altogether I think.
>
> Everything works fine now
> Including remote access of zeppelin on machines across the local area
> network.
>
> Next step setup remote clusters
> Wish me luck !
>
>
>
cutionEnvironment();
>>>>
>>>> which is same on spark.
>>>>
>>>> val spark =
>>>> SparkSession.builder.master(local[*]).appname("anapp").getOrCreate
>>>>
>>>> However if I wish to run the servers on a different physical computer.
>>>> Then in Spark I can do it this way using the spark URI in my IDE.
>>>>
>>>> Conf =
>>>> SparkConf().setMaster("spark://:").setAppName("anapp")
>>>>
>>>> Can you please tell me the equivalent change to make so I can run my
>>>> servers and my IDE from different physical computers.
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
--
Best Regards
Jeff Zhang
>>>
>>>> Best,
>>>> Godfrey
>>>>
>>>> Flavio Pompermaier wrote on Thursday, April 16, 2020 at 4:42 PM:
>>>>
>>>>> Hi Jeff,
>>>>> FLIP-24 [1] proposed to develop a SQL gateway to query Flink via SQL
>>>>> but since then no progress has been made on that point. Do you think that
>>>>> Zeppelin could be used somehow as a SQL Gateway towards Flink for the
>>>>> moment?
>>>>> Any chance that a Flink SQL Gateway could ever be developed? Is there
>>>>> anybody interested in this?
>>>>>
>>>>> Best,
>>>>> Flavio
>>>>>
>>>>> [1]
>>>>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-24+-+SQL+Client
>>>>>
>>>>
--
Best Regards
Jeff Zhang
m/RBHa2lTIg5 <https://t.co/sUapN40tvI?amp=1> 4) Advanced
usage https://link.medium.com/CAekyoXIg5 <https://t.co/MXolULmafZ?amp=1>
Welcome to use flink on zeppelin and give feedback and comments.
--
Best Regards
Jeff Zhang
ok at the README.
> >
> > Any feedback or suggestion is welcomed!
> >
> > [1]
> https://ci.apache.org/projects/flink/flink-docs-master/ops/memory/mem_setup.html
> > [2]
> https://ci.apache.org/projects/flink/flink-docs-master/ops/memory/mem_migration.html
> >
> > Best,
> > Yangze Guo
>
--
Best Regards
Jeff Zhang
> wrote:
> >
> > Yeah, I was wondering about that. I'm using
> > `/usr/lib/flink/bin/start-scala-shell.sh yarn`-- previously I'd use
> > `/usr/lib/flink/bin/start-scala-shell.sh yarn -n ${NUM}`
> > but that deprecated option was removed.
> >
> >
> > On T
6)
> at
> org.apache.flink.client.program.rest.RestClusterClient.(RestClusterClient.java:161)
> at
> org.apache.flink.client.deployment.StandaloneClusterDescriptor.lambda$retrieve$0(StandaloneClusterDescriptor.java:51)
> ... 38 more
>
--
Best Regards
Jeff Zhang
com
> |
> Signature customized by NetEase Mail Master
>
>
> On March 13, 2020 at 16:34, Jeff Zhang wrote:
> Hi xinghalo,
>
> Running SQL in Apache Zeppelin does support sql comments; you can join DingTalk group 30022475 to try it out
>
> godfrey he wrote on Friday, March 13, 2020 at 3:49 PM:
>
> hi, the sql-gateway approach is currently similar to the sql client's: both handle this via regular expressions. sql-gateway's solution for this part
> >>
> >> At the moment we manually filter out comments every time we submit through sql-gateway; we suggest adding this kind of comment handling.
> >>
> >>
> >> xinghalo
> >> xingh...@163.com
> >>
> >>
> >
> > --
> > Best, Jingsong Lee
> >
>
--
Best Regards
Jeff Zhang
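The regular-expression handling described above can be sketched as follows. This is a simplified illustration, not the actual sql-client/sql-gateway code: it strips `--` line comments and `/* ... */` block comments while leaving comment-like text inside single-quoted string literals untouched.

```python
import re

# Match single-quoted string literals first so comment markers inside
# them survive; otherwise strip -- line comments and /* ... */ blocks.
_SQL_COMMENT_RE = re.compile(
    r"('(?:[^']|'')*')"   # group 1: quoted string, kept as-is
    r"|--[^\n]*"          # -- line comment, dropped
    r"|/\*.*?\*/",        # /* block comment */, dropped
    re.DOTALL,
)

def strip_sql_comments(sql: str) -> str:
    # Keep the string literal (group 1) when matched, drop comments.
    return _SQL_COMMENT_RE.sub(lambda m: m.group(1) or "", sql)

print(strip_sql_comments("SELECT '--not a comment' /* inline */ FROM t -- trailing"))
```

Trying the quoted-string alternative before the comment alternatives is what keeps `'--not a comment'` intact while real comments are removed.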
-conf.yaml can be adjust dynamically in user's program.
>>> So it will end up like some of the configurations can be overridden but
>>> some are not. The experience is not quite good for users.
>>>
>>> Best,
>>> Kurt
>>>
>>>
>>> On
Thursday, March 5, 2020 at 4:31 PM, wrote:
>>>>>
>>>>>> Hi All!
>>>>>>
>>>>>> I am trying to understand if there is any way to override flink
>>>>>> configuration parameters when starting the SQL Client.
>>>>>>
>>>>>> It seems that the only way to pass any parameters is through the
>>>>>> environment yaml.
>>>>>>
>>>>>> There I found 2 possible routes:
>>>>>>
>>>>>> configuration: this doesn't work as it only sets Table specific
>>>>>> configs apparently, but maybe I am wrong.
>>>>>>
>>>>>> deployment: I tried using dynamic properties options here but
>>>>>> unfortunately we normalize (lowercase) the YAML keys so it is impossible
>>>>>> to
>>>>>> pass options like -yD or -D.
>>>>>>
>>>>>> Does anyone have any suggestions?
>>>>>>
>>>>>> Thanks
>>>>>> Gyula
>>>>>>
>>>>>
>>
>> --
>> Best, Jingsong Lee
>>
>
--
Best Regards
Jeff Zhang
t which only shows the schema.
>
> Is there anything similar to "SHOW CREATE TABLE" or is this something that
> we should maybe add in the future?
>
> Thank you!
> Gyula
>
--
Best Regards
Jeff Zhang
tulations Jingsong! Well deserved.
> > >
> > > Best,
> > > Jark
> > >
> > > On Fri, 21 Feb 2020 at 11:32, zoudan wrote:
> > >
> > >> Congratulations! Jingsong
> > >>
> > >>
> > >> Best,
> > >> Dan Zou
> > >>
> >
> >
>
--
Best Regards
Jeff Zhang
> of Apache Flink 1.10.0, which is the latest major release.
> >>>>>>>>
> >>>>>>>> Apache Flink® is an open-source stream processing framework for
> distributed, high-performing, always-available, and accurate data streaming
> applications.
> >>>>>>>>
> >>>>>>>> The release is available for download at:
> >>>>>>>> https://flink.apache.org/downloads.html
> >>>>>>>>
> >>>>>>>> Please check out the release blog post for an overview of the
> improvements for this new major release:
> >>>>>>>> https://flink.apache.org/news/2020/02/11/release-1.10.0.html
> >>>>>>>>
> >>>>>>>> The full release notes are available in Jira:
> >>>>>>>>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12345845
> >>>>>>>>
> >>>>>>>> We would like to thank all contributors of the Apache Flink
> community who made this release possible!
> >>>>>>>>
> >>>>>>>> Cheers,
> >>>>>>>> Gary & Yu
> >>
> >>
> >
> >
> > --
> > Best, Jingsong Lee
>
--
Best Regards
Jeff Zhang
; forward to your feedback!
>>>
>>> Best,
>>> Jincheng
>>>
>>> [1]
>>>
>>> https://lists.apache.org/thread.html/4a4d23c449f26b66bc58c71cc1a5c6079c79b5049c6c6744224c5f46%40%3Cdev.flink.apache.org%3E
>>> [2]
>>>
>>> https://lists.apache.org/thread.html/8273a5e8834b788d8ae552a5e177b69e04e96c0446bb90979444deee%40%3Cprivate.flink.apache.org%3E
>>> [3]
>>>
>>> https://lists.apache.org/thread.html/ra27644a4e111476b6041e8969def4322f47d5e0aae8da3ef30cd2926%40%3Cdev.flink.apache.org%3E
>>>
>>
--
Best Regards
Jeff Zhang
me congratulating Dian for becoming a Flink committer !
>
> Best,
> Jincheng(on behalf of the Flink PMC)
>
--
Best Regards
Jeff Zhang
efault planner for the whole Table API & SQL is another
>> topic
>> >> and is out of scope of this discussion.
>> >>
>> >> What do you think?
>> >>
>> >> Best,
>> >> Jark
>> >>
>> >> [1]:
>> >>
>> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/streaming/joins.html#join-with-a-temporal-table
>> >> [2]:
>> >>
>> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/sql/queries.html#top-n
>> >> [3]:
>> >>
>> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/sql/queries.html#deduplication
>> >> [4]:
>> >>
>> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/tuning/streaming_aggregation_optimization.html
>> >> [5]:
>> >>
>> https://github.com/apache/flink/blob/master/flink-table/flink-sql-client/conf/sql-client-defaults.yaml#L100
>> >
>>
>>
>
> --
> Best, Jingsong Lee
>
--
Best Regards
Jeff Zhang
though we've supported almost all Hive versions [3] now.
>>
>> I want to hear what the community think about this, and how to achieve it
>> if we believe that's the way to go.
>>
>> Cheers,
>> Bowen
>>
>> [1] https://flink.apache.org/downloads.html
>> [2]
>> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/hive/#dependencies
>> [3]
>> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/hive/#supported-hive-versions
>>
>
--
Best Regards
Jeff Zhang
e notes are available in Jira:
> https://issues.apache.org/jira/projects/FLINK/versions/12346112
>
> We would like to thank all contributors of the Apache Flink community who
> made this release possible!
> Great thanks to @Jincheng as a mentor during this release.
>
> Regards,
> Hequn
>
>
>
--
Best Regards
Jeff Zhang
cies.html>
> doesn't work for you, then you still need a flink-shaded-hadoop-jar that
> you can download here
> <https://flink.apache.org/downloads.html#apache-flink-191>.
>
> On 25/10/2019 09:54, Jeff Zhang wrote:
>
> Hi all,
>
> There's no new flink shaded release for fli
Jeff Zhang
mitter of the Flink project.
>
> Congratulations Zili Chen.
>
> regards.
>
--
Best Regards
Jeff Zhang
Kloudas is joining the Flink
>>> PMC.
>>> >> Kostas is contributing to Flink for many years and puts lots of
>>> effort in helping our users and growing the Flink community.
>>> >> Please join me in congratulating Kostas!
>>> >
>>> > congratulation Kostas!
>>> >
>>> > regards.
>>>
>>>
--
Best Regards
Jeff Zhang
e(FencedAkkaRpcActor.java:40)
>>> at
>>> akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:165)
>>> at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
>>> at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:95)
>>> at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
>>> at akka.actor.ActorCell.invoke(ActorCell.scala:495)
>>> at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
>>> at akka.dispatch.Mailbox.run(Mailbox.scala:224)
>>> at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
>>> at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>>> at
>>> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>>> at
>>> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>>> at
>>> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
>>>
>>>
>>>
>>>
--
Best Regards
Jeff Zhang
the previous checkpoint
王金海 wrote on Tuesday, August 27, 2019 at 6:14 PM:
> A question about flink restarting after failures:
>
>
> Consuming from kafka with a 5-minute checkpoint interval: if in the 6th minute an exception causes the flink job to restart, does flink recover from the previous checkpoint, or from the offset at the time of the exception?
>
>
>
> csbl...@163.com
> Have a nice day !
>
>
--
Best Regards
Jeff Zhang
Are you viewing the logs through the flink UI or the yarn UI?
陈帅 wrote on Tuesday, August 27, 2019 at 5:55 PM:
> With flink submitted on yarn, viewing the log in the web UI is too laggy. Is there a way to read the log files directly?
>
--
Best Regards
Jeff Zhang
h);
> }
>
>
> The code is as above. Running directly in idea, authentication passes, but after packaging it as a jar and submitting to the cluster it fails with:
> Caused by: java.io.IOException: Login failure for biuri/
> bj142.-in.dom...@btest.com from keytab
> file:/data/realtime-flink.jar!/kerberos.keytab:
> javax.security.auth.login.LoginException: Unable to obtain password from
> user
> What is the cause of this? Or how should cluster authentication be done correctly?
>
>
>
>
--
Best Regards
Jeff Zhang
once hive finishes loading, will the join start producing output?
>
>
>
>
--
Best Regards
Jeff Zhang
r of the Flink project.
>
> Hequn has been contributing to Flink for many years, mainly working on
> SQL/Table API features. He's also frequently helping out on the user
> mailing lists and helping check/vote the release.
>
> Congratulations Hequn!
>
> Best, Jincheng
> (on behalf of the Flink PMC)
>
>
>
--
Best Regards
Jeff Zhang
of StreamExecutionEnvironment env =
>> StreamExecutionEnvironment.getExecutionEnvironment();
>>
>> With Flink 1.4.2, StreamExecutionEnvironment env =
>> StreamExecutionEnvironment.getExecutionEnvironment(); used to work on both
>> cluster as well as local environment.
>>
>> Is there any way to make
>> StreamExecutionEnvironment.getExecutionEnvironment(); work in both cluster
>> and local mode in flink 1.7.1? Specifically how to make it work locally via
>> IntelliJ.
>>
>> Thanks & Regards,
>> Vinayak
>>
>
--
Best Regards
Jeff Zhang
ew releases of
>>>> Akka and Flink.
>>>>
>>>> regards.
>>>>
>>>> --
>>>> Debasish Ghosh
>>>> http://manning.com/ghosh2
>>>> http://manning.com/ghosh
>>>>
>>>> Twttr: @debasishg
>>>> Blog: http://debasishg.blogspot.com
>>>> Code: http://github.com/debasishg
>>>>
>>>>
>>>
>>> --
>>> Debasish Ghosh
>>> http://manning.com/ghosh2
>>> http://manning.com/ghosh
>>>
>>> Twttr: @debasishg
>>> Blog: http://debasishg.blogspot.com
>>> Code: http://github.com/debasishg
>>>
>>
>
> --
> Debasish Ghosh
> http://manning.com/ghosh2
> http://manning.com/ghosh
>
> Twttr: @debasishg
> Blog: http://debasishg.blogspot.com
> Code: http://github.com/debasishg
>
>
--
Best Regards
Jeff Zhang
te our application with a dedicated job scheduler like the one
> listed before (probably)..I don't know if some of them are nowadays already
> integrated with Flink..when we started coding our frontend application (2
> ears ago) none of them were using it.
>
> Best,
> Flavio
>
>
is the fact that the job can't do anything after
> env.execute() while we need to call an external service to signal that the
> job has ended + some other details
>
> Best,
> Flavio
>
> On Tue, Jul 23, 2019 at 3:44 AM Jeff Zhang wrote:
>
>> Hi Flavio,
>>
>&
it.
>>
>> I really appreciate your time and your insight.
>>
>> Best,
>> tison.
>>
>> [1]
>> https://lists.apache.org/thread.html/7ffc9936a384b891dbcf0a481d26c6d13b2125607c200577780d1e18@%3Cdev.flink.apache.org%3E
>>
>
>
>
--
Best Regards
Jeff Zhang
n regarding the framework as i'm struggling to find a lot
> of documentation for my application online.
>
>
> thanks in advance.
>
>
> kind regards,
>
> Dante Van den Broeke
>
>
--
Best Regards
Jeff Zhang
ases on the cli client to start, for users outside the cluster. For
> instance, the command “flink run WordCounter.jar” doesn't work. So,
> could you give me some successful examples, please.
>
>
> Thanks!
>
--
Best Regards
Jeff Zhang
e Flink project.
>>
>> Rong has been contributing to Flink for many years, mainly working on SQL
>> and Yarn security features. He's also frequently helping out on the
>> user@f.a.o mailing lists.
>>
>> Congratulations Rong!
>>
>> Best, Fabian
>> (on behalf of the Flink PMC)
>>
>
--
Best Regards
Jeff Zhang
and want to run multiple
>> applications on it with different flink configurations. Is there a way to
>>
>> 1. Pass the config file name for each application, or
>> 2. Overwrite the config parameters via command line arguments for the
>> application. This is similar to how we can overwrite the default
>> parameters in spark
>>
>> I searched the documents and have tried using ParameterTool with the
>> config parameter names, but it has not worked as yet.
>>
>> Thanks for your help.
>>
>> Mans
>>
>>
--
Best Regards
Jeff Zhang
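Pending built-in support, a common application-level workaround for the question above is to merge per-application defaults with `-D key=value` style command-line overrides inside the program itself. A minimal, generic sketch (the key names are illustrative; note that cluster-level Flink settings still have to reach the cluster before startup, so this only helps for values the application can apply on its own):

```python
import argparse

# Per-application defaults (illustrative keys and values).
DEFAULTS = {
    "taskmanager.memory.process.size": "1728m",
    "parallelism.default": "1",
}

def load_config(argv):
    """Merge DEFAULTS with repeated -D key=value overrides from argv."""
    parser = argparse.ArgumentParser()
    parser.add_argument("-D", action="append", default=[], metavar="KEY=VALUE")
    args = parser.parse_args(argv)
    config = dict(DEFAULTS)
    for item in args.D:
        key, _, value = item.partition("=")
        config[key] = value
    return config

conf = load_config(["-D", "parallelism.default=4"])
print(conf["parallelism.default"])  # prints "4"
```

This mirrors the spark-style override experience the question asks for: each application keeps its own defaults, and individual keys can be changed per run without editing any file.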