Re: Impact of a Kafka instance restart on Flink jobs

2023-04-20 Thread Ran Tao
Correction to approach 2: the job does not fail over.

Best Regards,
Ran Tao


Ran Tao wrote on Thu, Apr 20, 2023, 16:12:

> (Correcting a typo in my previous mail: "offer" replay should read "offset" replay.)
>
> Best Regards,
> Ran Tao
>
>
Ran Tao wrote on Thu, Apr 20, 2023, 16:11:
>
>> 1. One clean but brute-force approach: as soon as Flink detects a partition change, it triggers a job failover.
>> After the failover it reads the latest partition list, replays the old partitions from the offsets stored in state, and applies a specific start-offset strategy to the new partitions.
>>
>> 2. The second approach is dynamic partition discovery (the job does not fail over; an async thread keeps checking for partition changes and handles removed or newly added partitions separately).
>> This is already implemented in the new KafkaSource; for the old Kafka source implementation there is a community FLIP [1] discussing the issue.
>> Implementation-wise this approach is more complex than the first: developers must handle state carefully, as well as problems caused by failover in certain extreme situations [2].
>>
>> [1]
>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-288%3A+Enable+Dynamic+Partition+Discovery+by+Default+in+Kafka+Source
>> [2] https://issues.apache.org/jira/browse/FLINK-31006
>>
>> In fact, these two approaches apply not only to Kafka but to any source or message queue. Hope this helps.
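The discovery loop of the second approach can be sketched in plain Java. This is an illustrative stand-in for the mechanism only, not Flink's actual implementation (in the real new KafkaSource, discovery is switched on via the `partition.discovery.interval.ms` property on the source builder):

```java
import java.util.HashSet;
import java.util.Set;
import java.util.function.Supplier;

// Minimal sketch of dynamic partition discovery, assuming a "partitionLister"
// that returns the topic's current partition ids. Illustrative names only.
public class PartitionDiscoverySketch {
    private final Set<Integer> knownPartitions = new HashSet<>();

    /** One discovery round: returns partitions that are new since the last round. */
    public Set<Integer> discoverOnce(Supplier<Set<Integer>> partitionLister) {
        Set<Integer> current = partitionLister.get();
        Set<Integer> added = new HashSet<>(current);
        added.removeAll(knownPartitions);   // new partitions: apply a start-offset strategy
        knownPartitions.retainAll(current); // removed partitions drop out of the known set
        knownPartitions.addAll(current);
        return added;
    }
}
```

In a real source this method would run on an async thread (e.g. via a ScheduledExecutorService) at the configured discovery interval, instead of failing the job over on every change.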
>>
>> Best Regards,
>> Ran Tao
>>
>>
>> casel.chen wrote on Thu, Apr 20, 2023, 15:43:
>>
>>>
>>> In practice we run into Kafka version upgrades, Kafka scaling (horizontal or vertical), data rebalancing, and similar events. What impact do these have on Flink jobs running in production? Can a Flink job detect topic partition changes? And how should we handle these events to minimize the impact on the job's consumer side?
>>
>>


Re: Unsubscribe

2023-03-22 Thread Ran Tao
To unsubscribe, just send an email to user-zh-unsubscr...@flink.apache.org.

Best Regards,
Ran Tao


李朋 <1134415...@qq.com.invalid> wrote on Wed, Mar 22, 2023, 20:10:

> Unsubscribe!


Re: Flink async HBase causing a "Too many open files" exception

2023-03-08 Thread Ran Tao
+1, I have run into a similar fd leak. In close(), make sure buffered data is flushed, resources are released, and futures are cancelled.

Best Regards,
Ran Tao
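The advice above (flush buffered data, release resources, cancel futures in close()) can be sketched like this. `AsyncClient` and the class name are hypothetical stand-ins, not the actual HBaseClient API:

```java
import java.util.concurrent.CompletableFuture;

// Hedged sketch of a clean close() for an async dimension-lookup function.
// "AsyncClient" is a hypothetical stand-in for the real async HBase client.
public class AsyncLookupCloseSketch {
    public interface AsyncClient {
        void flush();     // flush buffered writes
        void shutdown();  // release sockets, selectors, file descriptors
    }

    private AsyncClient client;
    private CompletableFuture<?> pending = new CompletableFuture<>();

    public void open(AsyncClient c) {
        this.client = c;
    }

    // Without this, every failover leaks the old client's selectors/fds,
    // which eventually surfaces as "Too many open files".
    public void close() {
        pending.cancel(true);   // cancel in-flight requests first
        if (client != null) {
            client.flush();     // flush buffered data before shutdown
            client.shutdown();  // then release the client's resources
            client = null;
        }
    }
}
```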


Weihua Hu wrote on Wed, Mar 8, 2023, 16:52:

> Hi,
>
> From the code, the job does indeed leak HBaseClient resources on failover.
>
> Override the close method in HbaseDimensionAsyncFunc and release the HBaseClient there.
>
> Best,
> Weihua
>
>
> On Wed, Mar 8, 2023 at 4:19 PM aiden <18765295...@163.com> wrote:
>
> > Hi,
> >   When using async HBase I frequently hit "Too many open files" exceptions; after the job restarts automatically, it immediately fails again. The error log is as follows:
> > 2023-03-08 16:15:39
> > org.jboss.netty.channel.ChannelException: Failed to create a selector.
> > at
> >
> org.jboss.netty.channel.socket.nio.AbstractNioSelector.openSelector(AbstractNioSelector.java:343)
> > at
> >
> org.jboss.netty.channel.socket.nio.AbstractNioSelector.<init>(AbstractNioSelector.java:100)
> > at
> >
> org.jboss.netty.channel.socket.nio.AbstractNioWorker.<init>(AbstractNioWorker.java:52)
> > at org.jboss.netty.channel.socket.nio.NioWorker.<init>(NioWorker.java:45)
> > at
> >
> org.jboss.netty.channel.socket.nio.NioWorkerPool.createWorker(NioWorkerPool.java:45)
> > at
> >
> org.jboss.netty.channel.socket.nio.NioWorkerPool.createWorker(NioWorkerPool.java:28)
> > at
> >
> org.jboss.netty.channel.socket.nio.AbstractNioWorkerPool.newWorker(AbstractNioWorkerPool.java:143)
> > at
> >
> org.jboss.netty.channel.socket.nio.AbstractNioWorkerPool.init(AbstractNioWorkerPool.java:81)
> > at
> >
> org.jboss.netty.channel.socket.nio.NioWorkerPool.<init>(NioWorkerPool.java:39)
> > at
> org.hbase.async.HBaseClient.defaultChannelFactory(HBaseClient.java:707)
> > at org.hbase.async.HBaseClient.<init>(HBaseClient.java:507)
> > at org.hbase.async.HBaseClient.<init>(HBaseClient.java:496)
> > at
> >
> com.topgame.function.HbaseDimTrackerAsyncFunc.open(HbaseDimTrackerAsyncFunc.java:37)
> > at
> >
> org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:34)
> > at
> >
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:100)
> > at
> >
> org.apache.flink.streaming.api.operators.async.AsyncWaitOperator.open(AsyncWaitOperator.java:214)
> > at
> >
> org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:107)
> > at
> >
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:726)
> > at
> >
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
> > at
> >
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:702)
> > at
> >
> org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:669)
> > at
> >
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:935)
> > at
> > org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:904)
> > at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:728)
> > at org.apache.flink.runtime.taskmanager.Task.run(Task.java:550)
> > at java.lang.Thread.run(Thread.java:748)
> > Caused by: java.io.IOException: Too many open files
> > at sun.nio.ch.IOUtil.makePipe(Native Method)
> > at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:65)
> > at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:36)
> > at java.nio.channels.Selector.open(Selector.java:227)
> > at
> >
> org.jboss.netty.channel.socket.nio.SelectorUtil.open(SelectorUtil.java:63)
> > at
> >
> org.jboss.netty.channel.socket.nio.AbstractNioSelector.openSelector(AbstractNioSelector.java:341)
> > ... 25 more
> >   Monitoring the number of file descriptors used by the job shows that after it throws the following error and restarts automatically, the fd usage spikes. The error log is:
> > java.io.IOException: Could not perform checkpoint 5 for operator async
> > wait operator (2/9)#0.
> > at
> >
> org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpointOnBarrier(StreamTask.java:1238)
> > at
> >
> org.apache.flink.streaming.runtime.io.checkpointing.CheckpointBarrierHandler.notifyCheckpoint(CheckpointBarrierHandler.java:147)
> > at
> >
> org.apache.flink.streaming.runtime.io.checkpointing.SingleCheckpointBarrierHandler.triggerCheckpoint(SingleCheckpointBarrierHandler.java:287)
> > at
> >
> org.apache.flink.streaming.runtime.io.checkpointing.SingleCheckpointBarrierHandler.access$100(SingleCheckpointBarrierHandler.java:64)
> > at
> >
> org.apache.flink.streaming.runtime.io.checkpointing.SingleCheckpointBarrierHandler$ControllerImpl.triggerGlobalCheckpoint(SingleCheckpointBarrierHandler.java:488)
> > at
> >
> org.apache.flink.streaming.runtime.io
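As a side note, the fd monitoring described above can be done from inside the JVM on Linux by counting the entries under /proc/self/fd. A hedged sketch (returns -1 where /proc is unavailable, e.g. on macOS):

```java
import java.io.File;

// Linux-only sketch: count this JVM's open file descriptors via /proc/self/fd.
// Useful for watching fd usage climb across automatic job restarts.
public class FdCount {
    public static int openFds() {
        File[] fds = new File("/proc/self/fd").listFiles();
        return fds == null ? -1 : fds.length;  // -1 if /proc is not available
    }
}
```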

Re: Building the Flink source in IDEA fails

2023-02-07 Thread Ran Tao
> scala.tools.nsc.typechecker.Typers$Typer.typedStat$1(Typers.scala:5845)
>  at
>
> scala.tools.nsc.typechecker.Typers$Typer.$anonfun$typedStats$10(Typers.scala:3337)
>  at
> scala.tools.nsc.typechecker.Typers$Typer.typedStats(Typers.scala:3337)
>  at
>
> scala.tools.nsc.typechecker.Typers$Typer.typedPackageDef$1(Typers.scala:5413)
>  at scala.tools.nsc.typechecker.Typers$Typer.typed1(Typers.scala:5705)
>  at scala.tools.nsc.typechecker.Typers$Typer.typed(Typers.scala:5781)
>  at
>
> scala.tools.nsc.typechecker.Analyzer$typerFactory$TyperPhase.apply(Analyzer.scala:114)
>  at scala.tools.nsc.Global$GlobalPhase.applyPhase(Global.scala:453)
>  at
>
> scala.tools.nsc.typechecker.Analyzer$typerFactory$TyperPhase.run(Analyzer.scala:102)
>  at scala.tools.nsc.Global$Run.compileUnitsInternal(Global.scala:1514)
>  at scala.tools.nsc.Global$Run.compileUnits(Global.scala:1498)
>  at scala.tools.nsc.Global$Run.compileSources(Global.scala:1491)
>  at scala.tools.nsc.Global$Run.compileFiles(Global.scala:1602)
>  at xsbt.CachedCompiler0.run(CompilerBridge.scala:163)
>  at xsbt.CachedCompiler0.run(CompilerBridge.scala:134)
>  at xsbt.CompilerBridge.run(CompilerBridge.scala:39)
>  at
> sbt.internal.inc.AnalyzingCompiler.compile(AnalyzingCompiler.scala:91)
>  at
>
> org.jetbrains.jps.incremental.scala.local.IdeaIncrementalCompiler.compile(IdeaIncrementalCompiler.scala:48)
>  at
>
> org.jetbrains.jps.incremental.scala.local.LocalServer.doCompile(LocalServer.scala:47)
>  at
>
> org.jetbrains.jps.incremental.scala.local.LocalServer.compile(LocalServer.scala:25)
>  at
>
> org.jetbrains.jps.incremental.scala.remote.Main$.compileLogic(Main.scala:197)
>  at
>
> org.jetbrains.jps.incremental.scala.remote.Main$.$anonfun$handleCommand$1(Main.scala:184)
>  at
>
> org.jetbrains.jps.incremental.scala.remote.Main$.decorated$1(Main.scala:174)
>  at
>
> org.jetbrains.jps.incremental.scala.remote.Main$.handleCommand(Main.scala:181)
>      at
>
> org.jetbrains.jps.incremental.scala.remote.Main$.serverLogic(Main.scala:157)
>  at
> org.jetbrains.jps.incremental.scala.remote.Main$.nailMain(Main.scala:97)
>  at
> org.jetbrains.jps.incremental.scala.remote.Main.nailMain(Main.scala)
>  at jdk.internal.reflect.GeneratedMethodAccessor1.invoke(Unknown
> Source)
>  at
>
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.base/java.lang.reflect.Method.invoke(Method.java:568)
>  at com.facebook.nailgun.NGSession.runImpl(NGSession.java:312)
>  at com.facebook.nailgun.NGSession.run(NGSession.java:198)
>
>
>
>
>
>
>
>

-- 
Best Regards,
Ran Tao
https://github.com/chucheng92


Re: [DISCUSS] Update Scala 2.12.7 to 2.12.15

2022-06-07 Thread Ran Tao
Hi, Martijn. I have asked the Scala community; I will get back to you if there
is any response. However, I think you can go ahead and vote or discuss based on
2.12.15. 2.12.16 just backports some functionality from later Scala versions,
such as the jvm-11 and jvm-17 targets, which Flink may need.
But as Chesnay says, we do not require that the Flink Java and Scala target
bytecode versions be consistent.
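A quick, self-contained way to check which bytecode target a compiled class actually has (whether it came from scalac or javac) is to read the class-file major version from its header:

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

// Reads the class-file major version from a .class stream.
public class ClassFileVersion {
    public static int majorVersion(InputStream in) throws IOException {
        DataInputStream data = new DataInputStream(in);
        if (data.readInt() != 0xCAFEBABE) {  // class-file magic number
            throw new IOException("not a class file");
        }
        data.readUnsignedShort();            // minor version (skip)
        return data.readUnsignedShort();     // major version: 52=Java 8, 55=Java 11, 61=Java 17
    }
}
```

Running this over classes from the Scala-compiled and Java-compiled modules of a build would show directly whether their targets diverge.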

Martijn Visser wrote on Tue, Jun 7, 2022, 17:04:

> Hi all,
>
> @Ran When I created this discussion thread, the latest available version of
> Scala 2.12 was 2.12.15. I don't see 2.12.16 being released yet, only
> discussed. I did read that they were planning it in the last couple of
> weeks but haven't seen any progress since. Do you know more about the
> timeline?
>
> I would also like to make a final call towards Scala users to provide their
> input in the next 72 hours. Else, I'll open up a voting thread to make the
> upgrade.
>
> Best regards,
>
> Martijn
>
> On Fri, May 20, 2022, 14:10, Ran Tao wrote:
>
> > Got it. But I think a newer runtime Java environment, e.g. a JDK 11 env,
> > may not be able to optimize the older Scala bytecode very well. However,
> > currently no direct report shows this problem.
> >
> > Chesnay Schepler wrote on Fri, May 20, 2022, 19:53:
> >
> > > It's not necessarily required that the scala byte code matches the
> > > version of the java byte code.
> > >
> > > By and large such inconsistencies are inevitable w.r.t. external
> > libraries.
> > >
> > > On 20/05/2022 12:23, Ran Tao wrote:
> > > > Hi, Martijn. Even if we upgrade Scala to 2.12.15 to support Java 17, that
> > > > only fixes the compilation issue of FLINK-25000
> > > > <https://issues.apache.org/jira/browse/FLINK-25000>. There is another
> > > > problem [1]: Scala <= 2.13 can't generate jvm-11 or higher bytecode, so
> > > > the Flink project's target class bytecode version is inconsistent
> > > > between the Scala and Java sources.
> > > > If we decide to upgrade the Scala version, how about Scala 2.12.16? As I
> > > > know from the Scala community, 2.12.16 will backport 2.13 functionality
> > > > such as jvm-11 and jvm-17 target class support.
> > > >
> > > > [1] https://issues.apache.org/jira/browse/FLINK-27549
> > > >
> > > > Martijn Visser  于2022年5月20日周五 16:37写道:
> > > >
> > > >> Hi everyone,
> > > >>
> > > >> I would like to get some opinions from our Scala users, therefore
> I'm
> > > also
> > > >> looping in the user mailing list.
> > > >>
> > > >> Flink currently is tied to Scala 2.12.7. As outlined in FLINK-12461
> > [1]
> > > >> there is a binary incompatibility introduced by Scala 2.12.8, which
> > > >> currently limits Flink from upgrading to a later Scala 2.12 version.
> > > >> According to the Scala 2.12.8 release notes, "The fix is not binary
> > > >> compatible: the 2.12.8 compiler omits certain methods that are
> > > generated by
> > > >> earlier 2.12 compilers. However, we believe that these methods are
> > never
> > > >> used and existing compiled code will continue to work"
> > > >>
> > > >> We could still consider upgrading to a later Scala 2.12 version, the
> > > latest
> > > >> one currently being 2.12.15. Next to any benefits that are
> introduced
> > in
> > > >> the newer Scala versions, it would also resolve a blocker for Flink
> to
> > > add
> > > >> support for Java 17 [2].
> > > >>
> > > >> My question to Scala users of Flink and others who have an opinion
> on
> > > this:
> > > >> * Has any of you already manually compiled Flink with Scala 2.12.8
> or
> > > >> later?
> > > >> * If so, have you experienced any problems with checkpoint and/or
> > > savepoint
> > > >> incompatibility?
> > > >> * Would you prefer Flink breaking binary compatibility by upgrading
> > to a
> > > >> later Scala 2.12 version or would you prefer Flink to stick with
> Scala
> > > >> 2.12.7?
> > > >>
> > > >> Note: I know there are also questions about Scala 2.13 and Scala 3
> > > support
> > > >> in Flink; I think that deserves its own discussion thread.
> > > >>
> > > >> Best regards,
> > > >>
> > > >> Martijn Visser
> > > >> https://twitter.com/MartijnVisser82
> > > >> https://github.com/MartijnVisser
> > > >>
> > > >> [1] https://issues.apache.org/jira/browse/FLINK-12461
> > > >> [2] https://issues.apache.org/jira/browse/FLINK-25000
> > > >>
> > > >
> > >
> > > --
> > Best,
> > Ran Tao
> >
>


-- 
Best,
Ran Tao


Re: Practical guidance with Scala and Flink >= 1.15

2022-05-11 Thread Ran Tao
>> To be clear, I have a strong preference for Scala over Java, but I'm
>> trying to look at the "grand scheme of things" here, and be pragmatic. I
>> guess I'm not alone here, and that many people are indeed evaluating the
>> same pros & cons. Any feedback will be much appreciated.
>>
>>
>>
>> Thanks in advance!
>>
>> This message is intended only for the named recipient and may contain
>> confidential or privileged information. As the confidentiality of email
>> communication cannot be guaranteed, we do not accept any responsibility for
>> the confidentiality and the intactness of this message. If you have
>> received it in error, please advise the sender by return e-mail and delete
>> this message and any attachments. Any unauthorised use or dissemination of
>> this information is strictly prohibited.
>>
>

-- 
Best,
Ran Tao