Re: StreamExecutionEnvironment in Flink 1.8 incompatible with multiple file paths for FileInputFormat

2020-03-04 Thread JingsongLee
Hi, which of these is your requirement? 1. An unbounded, continuous file source that monitors directories, emits new files, and needs to support multiple directories. 2. Just a bounded input format that needs to read multiple files. If 1: this is still not supported. If 2: you can use env.addSource(new InputFormatSourceFunction(..)...) to support multiple files. Best, Jingsong Lee
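For case 2, a minimal sketch in Scala (assuming TextInputFormat, whose parent FileInputFormat accepts several paths via setFilePaths; the paths are illustrative):

    import org.apache.flink.api.common.typeinfo.BasicTypeInfo
    import org.apache.flink.api.java.io.TextInputFormat
    import org.apache.flink.core.fs.Path
    import org.apache.flink.streaming.api.functions.source.InputFormatSourceFunction
    import org.apache.flink.streaming.api.scala._

    val env = StreamExecutionEnvironment.getExecutionEnvironment

    // FileInputFormat (TextInputFormat's parent) takes several paths via setFilePaths.
    val format = new TextInputFormat(new Path("/data/in/a"))
    format.setFilePaths("/data/in/a", "/data/in/b")

    // Wrap the bounded input format in a source function, as suggested above.
    val lines = env.addSource(
      new InputFormatSourceFunction[String](format, BasicTypeInfo.STRING_TYPE_INFO))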

Re: Source parallelism when reading Hive with Flink 1.10.0

2020-03-03 Thread JingsongLee
Hi jun, nice suggestion! This is a good optimization point; feel free to open a JIRA for it. Best, Jingsong Lee

Re: SHOW CREATE TABLE in Flink SQL

2020-03-02 Thread JingsongLee
Hi, there is some previous discussion in [1], FYI. [1] https://issues.apache.org/jira/browse/FLINK-10230 Best, Jingsong Lee

Re: java.time.LocalDateTime in POJO type

2020-03-02 Thread JingsongLee
Hi, I've introduced the LocalDateTime type information to flink-core, but for compatibility reasons I reverted the modification in TypeExtractor. It seems that at present you can only use Types.LOCAL_DATE_TIME explicitly. [1] [1] http://jira.apache.org/jira/browse/FLINK-12850 Best, Jingsong Lee
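A minimal sketch of supplying Types.LOCAL_DATE_TIME explicitly (variable names are illustrative):

    import java.time.LocalDateTime

    import org.apache.flink.api.common.typeinfo.{TypeInformation, Types}
    import org.apache.flink.streaming.api.scala._

    // Provide the type information explicitly; TypeExtractor will not infer it.
    implicit val localDateTimeInfo: TypeInformation[LocalDateTime] = Types.LOCAL_DATE_TIME

    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val times = env.fromElements(LocalDateTime.now(), LocalDateTime.now().minusHours(1))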

Re: Development-related questions

2020-03-02 Thread JingsongLee
Hi, welcome. On the user side, u...@flink.apache.org is for English and user-zh@flink.apache.org is for Chinese. d...@flink.apache.org is for development-related discussions, so please do not send user questions there. Best, Jingsong Lee

Re: Source parallelism when reading Hive with Flink 1.10.0

2020-03-02 Thread JingsongLee
> Automatic inference may face the problem of insufficient resources to start. That shouldn't happen in theory? Batch jobs can run partially. Best, Jingsong Lee

Re: Source parallelism when reading Hive with Flink 1.10.0

2020-03-02 Thread JingsongLee
(Quoting like:) After the inference settings, I can now control the source parallelism; automatic inference may face the problem of insufficient resources to start. On 2020-03-02 15:18, JingsongLee wrote: Hi, in 1.10 the Hive source infers its parallelism automatically. You can control this with the following settings in flink-conf.yaml: - table.exec.hive.infer-source-parallelism=true (automatic inference is the default) - table.exec.hive.infer-source-parallelism.max=1000 (the maximum inferred parallelism) The sink parallelism defaults to that of its upstream; if there is a shuffle, the configured uniform parallelism is used.

Re: Question about runtime filter

2020-03-01 Thread JingsongLee
Hi, does the runtime filter probe side wait for the runtime filter to be built? Can you check the start times of the build side and the probe side? Best, Jingsong Lee

Re: Source parallelism when reading Hive with Flink 1.10.0

2020-03-01 Thread JingsongLee
Hi, in 1.10 the Hive source infers its parallelism automatically. You can control this with the following settings in flink-conf.yaml: - table.exec.hive.infer-source-parallelism=true (automatic inference is the default) - table.exec.hive.infer-source-parallelism.max=1000 (the maximum inferred parallelism) The sink parallelism defaults to that of its upstream; if there is a shuffle, the configured uniform parallelism is used. Best, Jingsong Lee
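A sketch of setting the same options per job through the TableConfig instead of flink-conf.yaml (assuming Flink 1.10 and the blink planner):

    import org.apache.flink.table.api.{EnvironmentSettings, TableEnvironment}

    val settings = EnvironmentSettings.newInstance().useBlinkPlanner().inBatchMode().build()
    val tableEnv = TableEnvironment.create(settings)

    // The same keys as in flink-conf.yaml, applied to this job only.
    val conf = tableEnv.getConfig.getConfiguration
    conf.setBoolean("table.exec.hive.infer-source-parallelism", true)
    conf.setInteger("table.exec.hive.infer-source-parallelism.max", 1000)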

Re: Re: Re: Help: error deploying a Flink job with a Kafka source to a cluster

2020-01-15 Thread JingsongLee
https://ci.apache.org/projects/flink/flink-docs-master/ops/cli.html#usage

Re: Re: Help: error deploying a Flink job with a Kafka source to a cluster

2020-01-14 Thread JingsongLee
Hi, I suspect that packaging this way makes the META-INF/services files overwrite each other. Try putting the flink-json and flink-kafka jars directly into flink/lib. Best, Jingsong Lee

Re: Help: error deploying a Flink job with a Kafka source to a cluster

2020-01-14 Thread JingsongLee
Hi, did you put the JSON jar into lib? It also looks like your user jar was not built with jar-with-dependencies, so it will not contain the json jar either. Best, Jingsong Lee

Re: Help: possible duplicate computation in a stream-join scenario

2020-01-14 Thread JingsongLee
Hi ren, Blink's deduplication feature should match your needs. [1] [1] https://ci.apache.org/projects/flink/flink-docs-master/dev/table/sql/queries.html#deduplication Best, Jingsong Lee
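A minimal sketch of the deduplication pattern documented in [1]; the table and column names are illustrative, and tableEnv is assumed to be an existing blink-planner table environment:

    // Keep only the first row per order_id, ordered by processing time.
    val deduplicated = tableEnv.sqlQuery(
      """
        |SELECT order_id, user_id, amount
        |FROM (
        |  SELECT *,
        |    ROW_NUMBER() OVER (PARTITION BY order_id ORDER BY proctime ASC) AS row_num
        |  FROM orders
        |)
        |WHERE row_num = 1
        |""".stripMargin)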

Re: org.apache.flink.table.api.ValidationException error with the blink planner

2020-01-13 Thread JingsongLee
Thanks. Could you try the latest 1.9 release, 1.10, or master? Several bugs have been fixed there, and I am not sure whether this one still exists. Best, Jingsong Lee

Re: Cannot change the catalog when registering a table

2020-01-07 Thread JingsongLee
Hi xiyueha, you can use DDL via TableEnv.sqlUpdate("create table ..."); this registers the table into the current catalog. Best, Jingsong Lee
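A minimal sketch (tableEnv is an existing TableEnvironment; connector properties are illustrative):

    tableEnv.sqlUpdate(
      """
        |CREATE TABLE csv_table (
        |  id BIGINT,
        |  name STRING
        |) WITH (
        |  'connector.type' = 'filesystem',
        |  'connector.path' = '/tmp/csv_table',
        |  'format.type' = 'csv'
        |)
        |""".stripMargin)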

Re: FLINK 1.9.1 StreamingFileSink compression question

2020-01-01 Thread JingsongLee
Hi, it looks like you can only get compression by modifying the connector code: in ParquetAvroWriters.createAvroParquetWriter, set the compression codec on the AvroParquetWriter.Builder. Best, Jingsong Lee
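A sketch of what such a modified factory could look like, mirroring ParquetAvroWriters.forGenericRecord but adding a codec; this is custom code under those assumptions, not an existing Flink API:

    import org.apache.avro.Schema
    import org.apache.avro.generic.GenericRecord
    import org.apache.flink.formats.parquet.{ParquetBuilder, ParquetWriterFactory}
    import org.apache.parquet.avro.AvroParquetWriter
    import org.apache.parquet.hadoop.metadata.CompressionCodecName

    // A custom variant of ParquetAvroWriters.forGenericRecord that sets a codec
    // on the AvroParquetWriter.Builder.
    def forCompressedGenericRecord(schema: Schema): ParquetWriterFactory[GenericRecord] = {
      val schemaString = schema.toString // Schema itself is not serializable
      val builder: ParquetBuilder[GenericRecord] = out =>
        AvroParquetWriter.builder[GenericRecord](out)
          .withSchema(new Schema.Parser().parse(schemaString))
          .withCompressionCodec(CompressionCodecName.SNAPPY)
          .build()
      new ParquetWriterFactory[GenericRecord](builder)
    }

The resulting factory can then be passed to StreamingFileSink.forBulkFormat as usual.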

Re: How should i set the field type in mysql when i use temporal table join between kafka and jdbc ?

2020-01-01 Thread JingsongLee
Hi, as this is user-zh I'll just answer in Chinese. You need to set the field to bigint. What exact error do you get? Best, Jingsong Lee

Re: StreamTableEnvironment.registerDataStream(): expose a user-defined schema description and DeserializationSchema

2019-12-31 Thread JingsongLee
Hi aven, that is a reasonable request. The current situation: - Flink Table only supports Row, Pojo, Tuple, and CaseClass as structured data types. - Your type is JSONObject, which is also a structured data type, but Flink does not support it today, so a DeserializationSchema mechanism like the one you describe could be considered. That said, I think it makes little practical difference: a RowDeserializationSchema you provide is essentially a JSONObject-to-Row conversion, so you can simply put that conversion into DataStream.map and turn the stream into a structured type that Flink Table supports.
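A minimal sketch of that map-based route, assuming fastjson's JSONObject, an existing jsonStream, and illustrative field names:

    import org.apache.flink.api.common.typeinfo.{BasicTypeInfo, TypeInformation}
    import org.apache.flink.api.java.typeutils.RowTypeInfo
    import org.apache.flink.streaming.api.scala._
    import org.apache.flink.types.Row

    // jsonStream: a DataStream of fastjson JSONObject from your deserializer (assumed).
    // Declare the Row type explicitly so the Table API sees a structured type.
    implicit val rowType: TypeInformation[Row] = new RowTypeInfo(
      Array[TypeInformation[_]](BasicTypeInfo.INT_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO),
      Array("id", "name"))

    val rows: DataStream[Row] = jsonStream.map { obj =>
      Row.of(Integer.valueOf(obj.getIntValue("id")), obj.getString("name"))
    }
    // tableEnv.registerDataStream("t", rows) // then register as usual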

Re: Impact of yn and ys on Flink 1.9 batch jobs

2019-12-26 Thread JingsongLee
(Quoting faaron zheng:) Because of slot sharing, this does not seem easy to judge in advance. On 2019-12-26 15:09, JingsongLee wrote: Hi faaron zheng, as Kurt said, I strongly recommend 1.10; the release branch has already been cut. A rule of thumb for TM sizing: a TM with 10 slots and 10 GB of memory: 4 GB JVM heap, 1 GB network buffers, and 5 GB managed memory (i.e. 500 MB of managed memory per slot). Best, Jingsong Lee

Re: Impact of yn and ys on Flink 1.9 batch jobs

2019-12-25 Thread JingsongLee
Hi faaron zheng, as Kurt said, I strongly recommend 1.10; the release branch has already been cut. A rule of thumb for TM sizing: a TM with 10 slots and 10 GB of memory: 4 GB JVM heap, 1 GB network buffers, and 5 GB managed memory (i.e. 500 MB of managed memory per slot). Best, Jingsong Lee

Re: What is the usual best way to land a Flink real-time warehouse into Hive?

2019-12-10 Thread JingsongLee
(Quoting 陈帅:) …do these approaches all merge files after the fact? On 2019-12-10 10:48, JingsongLee wrote: Hi 陈帅, 1. The BulkWriter.Factory interface is not a good fit for ORC; as yue ma said, you need some changes. 2. The whole StreamingFileSink mechanism only really moves files when a checkpoint is taken. I am not sure what kind of streaming write you have in mind, or what your business scenario requires. Best, Jingsong Lee

Re: How to convert a Flink RetractStream into an AppendStream?

2019-12-10 Thread JingsongLee
Currently SQL cannot do the conversion directly. Best, Jingsong Lee (Quoting 陈帅: I know how to convert with the programmatic API; how do I do it in SQL? Do I need to write to a temporary table first and then sink to the target table? Is there an example?)

Re: Re: What is the usual best way to land a Flink real-time warehouse into Hive?

2019-12-09 Thread JingsongLee
Hi hjxhainan, if you want to unsubscribe, please send an email to user-zh-unsubscr...@flink.apache.org. Best, Jingsong Lee

Re: How to convert a Flink RetractStream into an AppendStream?

2019-12-09 Thread JingsongLee
See lucas.wu's example. Best, Jingsong Lee (Quoting 陈帅: "You can first convert the RetractStream to a DataStream, which gives you a stream of Tuples, …")

Re: What is the usual best way to land a Flink real-time warehouse into Hive?

2019-12-09 Thread JingsongLee
(Quoting 陈帅:) 1. Compared with Parquet, what is the difficulty in supporting ORC in StreamingFileSink? 2. Does BulkWriter accumulate mini-batches when writing files? On 2019-12-09 15:24, JingsongLee wrote: Hi 帅, currently Parquet can be supported by adapting StreamingFileSink (supporting ORC in StreamingFileSink is harder for now); BulkWriter has nothing to do with batch processing, it is just a StreamingFileSink concept.

Re: [flink-sql] How to specify the rowtime when creating a table with tableEnv.sqlUpdate(ddl)?

2019-12-08 Thread JingsongLee
Hi 猫猫: defining the rowtime in DDL is a freshly added feature, and the documentation is still being written. [1] You can try it with the master code; the community is preparing the 1.10 release, and a release version will be available then. [2] contains a complete usage example, FYI. [1] https://issues.apache.org/jira/browse/FLINK-14320 [2]
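A minimal sketch of the new DDL rowtime syntax (assuming a blink-planner table environment on master/1.10; connector properties are illustrative):

    tableEnv.sqlUpdate(
      """
        |CREATE TABLE user_actions (
        |  user_id BIGINT,
        |  action STRING,
        |  ts TIMESTAMP(3),
        |  WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
        |) WITH (
        |  'connector.type' = 'filesystem',
        |  'connector.path' = '/tmp/user_actions',
        |  'format.type' = 'csv'
        |)
        |""".stripMargin)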

Re: What is the usual best way to land a Flink real-time warehouse into Hive?

2019-12-08 Thread JingsongLee
Hi 帅, - Currently Parquet can be supported by adapting StreamingFileSink (supporting ORC in StreamingFileSink is harder for now). - BulkWriter has nothing to do with batch processing; it is just a StreamingFileSink concept. - Syncing Hive partitions needs custom work; StreamingFileSink has nothing ready-made for it. In 1.11 the Table layer will keep digging into this area; for landing a real-time warehouse into Hive, issues such as data skew and partition visibility will be solved one by one. [1] [1] https://issues.apache.org/jira/browse/FLINK-14249

Re: How to convert a Flink RetractStream into an AppendStream?

2019-12-08 Thread JingsongLee
+1 to lucas.wu Best, Jingsong Lee (Quoting lucas.wu: You can do it like this: val sstream = result4.toRetractStream[Row].filter(_._1 == true).map(_._2))

Re: How to convert a Flink RetractStream into an AppendStream?

2019-12-08 Thread JingsongLee
Hi 帅, you can first convert the RetractStream into a DataStream, which gives you a stream of Tuples; then write a map/filter function to keep what you need, and finally write the resulting DataStream into Kafka. Best, Jingsong Lee
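A minimal sketch of that route, assuming an existing Table named result on a StreamTableEnvironment and the universal Kafka connector; topic and encoding are illustrative:

    import java.util.Properties

    import org.apache.flink.api.common.serialization.SimpleStringSchema
    import org.apache.flink.streaming.api.scala._
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
    import org.apache.flink.table.api.scala._
    import org.apache.flink.types.Row

    // result: a Table built on a StreamTableEnvironment (assumed).
    val appendOnly: DataStream[String] = result.toRetractStream[Row]
      .filter(_._1)        // keep accumulate messages, drop retractions
      .map(_._2.toString)  // encode the Row however your topic expects

    val props = new Properties()
    props.setProperty("bootstrap.servers", "localhost:9092")
    appendOnly.addSink(
      new FlinkKafkaProducer[String]("output_topic", new SimpleStringSchema(), props))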

Re: DML deduplication: error during translate

2019-11-21 Thread JingsongLee
Hi 叶贤勋: deduplication now supports the insert into ... select syntax. The question is why your SQL does not produce a UniqueKey; there may be a blink-planner bug here. CC: @Jark Wu @godfrey he (JIRA) Best, Jingsong Lee

Re: count distinct not supported in batch?

2019-09-19 Thread JingsongLee
Hi fanbin: that is "distinct aggregates for group window" in batch SQL mode. Neither the legacy planner nor the blink planner supports it, and there is no clear plan yet. But if the demand is strong, we can consider supporting it. Best, Jingsong Lee

Re: Streaming write to Hive

2019-09-05 Thread JingsongLee
Hi luoqi: with partition support [1], I want to introduce a FileFormatSink that covers streaming exactly-once and the partition-related logic for the Flink file connectors and the Hive connector. You can take a look. [1]

Re: Re: TIMESTAMP type error for user-defined UDFs in blink SQL on Flink 1.9

2019-09-05 Thread JingsongLee
Override the getResultType method and return Types.SQL_TIMESTAMP; that should work around the problem. 1.10 will fix it. Best, Jingsong Lee
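A minimal sketch of the workaround (the eval body is a placeholder):

    import java.sql.Timestamp

    import org.apache.flink.api.common.typeinfo.{TypeInformation, Types}
    import org.apache.flink.table.functions.ScalarFunction

    class MyTimestampUdf extends ScalarFunction {
      def eval(ts: Timestamp): Timestamp = ts // placeholder for the real logic

      // Declare the result type explicitly instead of relying on extraction.
      override def getResultType(signature: Array[Class[_]]): TypeInformation[_] =
        Types.SQL_TIMESTAMP
    }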

Re: Re: About the Flink SQL DISTINCT question

2019-09-04 Thread JingsongLee
Deduplication is usually grouped by time (for example, by day), with a state retention timeout configured. The basic deduplication approach relies on state (for example RocksDB state), and mini-batching reduces the number of state accesses. If there is skew, that is a separate skew-handling topic. Best, Jingsong Lee
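A sketch of the state-retention and mini-batch settings mentioned above (tableEnv is an existing blink-planner table environment; the values are illustrative):

    import org.apache.flink.api.common.time.Time

    val tableConfig = tableEnv.getConfig

    // Expire idle state so per-key distinct state does not grow forever.
    tableConfig.setIdleStateRetentionTime(Time.hours(25), Time.hours(49))

    // Mini-batch settings reduce the number of state accesses.
    val conf = tableConfig.getConfiguration
    conf.setString("table.exec.mini-batch.enabled", "true")
    conf.setString("table.exec.mini-batch.allow-latency", "1 s")
    conf.setString("table.exec.mini-batch.size", "1000")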

Re: Flink SQL time questions

2019-09-03 Thread JingsongLee
Hi: 1. Yes, it can currently only be UTC; if you have computation requirements, consider shifting the business window times. 2. long is supported; does the error occur because your input is an int? What is the exact error message? Best, Jingsong Lee

Re: question

2019-09-03 Thread JingsongLee
It should be schema.field("msg", Types.ROW(...)). And you should select msg.f1 from the table. Best, Jingsong Lee

Re: [ANNOUNCE] Apache Flink 1.9.0 released

2019-08-22 Thread JingsongLee
Congratulations~~~ Thanks, Gordon, and everyone! Best, Jingsong Lee

Re: [Discuss] What should the "Data Source" be translated into Chinese

2019-08-13 Thread JingsongLee
Can we simply keep it untranslated? Best, Jingsong Lee

Re: [ANNOUNCE] Hequn becomes a Flink committer

2019-08-07 Thread JingsongLee
Congrats, Hequn! Best, Jingsong Lee

Re: Stream to CSV Sink with SQL Distinct Values

2019-07-15 Thread JingsongLee
Hi Caizhi and Kali: I think this table should use toRetractStream instead of toAppendStream, and you should handle the retract messages. (If you only use distinct, the messages should always be accumulate messages.) Best, JingsongLee

Re: [ANNOUNCE] Rong Rong becomes a Flink committer

2019-07-11 Thread JingsongLee
Congratulations, Rong. Rong Rong has done a lot of nice work for the Flink community. Best, JingsongLee

Re: Flink Table API and Date fields

2019-07-08 Thread JingsongLee
The Flink 1.9 blink runner will support it as a generic type, but I don't recommend that. After all, Java has java.sql.Date and java.time.*. Best, JingsongLee

Re: Flink Table API and Date fields

2019-07-07 Thread JingsongLee
Hi Flavio: it looks like you use java.util.Date in your POJO. Flink Table does not currently support BasicTypeInfo.DATE_TYPE_INFO because of limitations in some checks in the code. Can you use java.sql.Date instead? Best, JingsongLee

Re: Providing Custom Serializer for Generic Type

2019-07-04 Thread JingsongLee
Hi Andrea: why not make your MyClass a POJO? [1] If it is a POJO, Flink will use PojoTypeInfo and PojoSerializer, which already have a good implementation. [1] https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/types_serialization.html#rules-for-pojo-types Best, JingsongLee
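A minimal sketch of a class that follows the POJO rules in [1] (names are illustrative):

    // Public no-arg constructor and public var fields of supported types,
    // so Flink analyzes this as a POJO.
    class MyClass {
      var id: Int = 0
      var name: String = _
    }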

Re: [ANNOUNCE] Apache Flink 1.8.1 released

2019-07-03 Thread JingsongLee
Thanks, Jincheng, for your great work. Best, JingsongLee

Re: LookupableTableSource question

2019-07-02 Thread JingsongLee
…different from a temporal table. [1] https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/table/udfs.html#table-functions Best, JingsongLee

Re: LookupableTableSource question

2019-06-28 Thread JingsongLee
(Quoting Flavio:) …cachedRow : cachedRows) { collect(cachedRow); } return; } } } … Am I missing something? On Fri, Jun 28, 2019 at 4:18 PM JingsongLee wrote: Hi Flavio: I just implemented a JDBCLookupFunction [1]. You can use it as a table func…

Re: LookupableTableSource question

2019-06-28 Thread JingsongLee
Best, JingsongLee (Quoting Flavio Pompermaier: Hi to all, I have a use case where I'd like to enrich a stream using a rarely updated lookup table. Basically, I'd like …)

Re: Hello-world example of Flink Table API using a edited Calcite rule

2019-06-27 Thread JingsongLee
Got it, that's clear: TableStats is one of the important functions of ExternalCatalog. That is the right way. Best, JingsongLee

Re: Best Flink SQL length proposal

2019-06-26 Thread JingsongLee
…and make appropriate segmentation when compiling) to solve this problem thoroughly in the blink planner. Maybe it will land in release 1.10. Best, JingsongLee

Re: Hello-world example of Flink Table API using a edited Calcite rule

2019-06-26 Thread JingsongLee
[1] https://github.com/apache/flink/blob/release-1.8/flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/api/ExternalCatalogTest.scala [2] https://github.com/apache/flink/blob/release-1.8/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/descriptors/OldCsv.scala Best, JingsongLee

Re: Hello-world example of Flink Table API using a edited Calcite rule

2019-06-26 Thread JingsongLee
1. …/CorrelateTest.scala#L168 2. https://github.com/apache/flink/blob/release-1.8/flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/calcite/CalciteConfigBuilderTest.scala Best, JingsongLee

Re: Best Flink SQL length proposal

2019-06-26 Thread JingsongLee
Hi Simon: does your code include the PR [1]? If it does, try setting TableConfig.setMaxGeneratedCodeLength to something smaller (default 64000). If it does not, can you wrap some fields into a nested Row field to reduce the field count? [1] https://github.com/apache/flink/pull/5613
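A one-line sketch (tableEnv is an existing TableEnvironment; 16000 is an arbitrary value below the default):

    tableEnv.getConfig.setMaxGeneratedCodeLength(16000)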

Re: TableException

2019-06-12 Thread JingsongLee
…RetractStreamTableSink or UpsertStreamTableSink. (Unfortunately, we don't have a retract/upsert JDBC sink yet; you can try to build one yourself.) [1] https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/table/sourceSinks.html#appendstreamtablesink Best, JingsongLee

Re: can flink sql handle udf-generated timestamp field

2019-06-06 Thread JingsongLee
…new document: [1] https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/table/streaming/time_attributes.html#event-time Best, JingsongLee

Re: Clean way of expressing UNNEST operations

2019-06-04 Thread JingsongLee
Best, JingsongLee (Quoting the earlier reply of 2019-06-04 to Piyush Narang: Hi @Piyush Narang, it seems that Calcite's type inference is not perfect …)

Re: Clean way of expressing UNNEST operations

2019-06-03 Thread JingsongLee
…UNNEST functions to try it out. (Use JOIN LATERAL TABLE.) Best, JingsongLee
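A minimal sketch of the UNNEST spelling (assuming a registered table `orders` with an ARRAY column `items`; all names are illustrative, and tableEnv is an existing table environment):

    val flattened = tableEnv.sqlQuery(
      """
        |SELECT o.order_id, t.item
        |FROM orders AS o
        |CROSS JOIN UNNEST(o.items) AS t (item)
        |""".stripMargin)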

Re: Flink SQL: Execute DELETE queries

2019-05-28 Thread JingsongLee
Or you can build your own sink, in which you can delete rows from the DB table. Best, JingsongLee
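A minimal sketch of such a custom sink (URL, table, and key column are illustrative):

    import java.sql.{Connection, DriverManager, PreparedStatement}

    import org.apache.flink.configuration.Configuration
    import org.apache.flink.streaming.api.functions.sink.RichSinkFunction

    class JdbcDeleteSink extends RichSinkFunction[Long] {
      @transient private var conn: Connection = _
      @transient private var stmt: PreparedStatement = _

      override def open(parameters: Configuration): Unit = {
        conn = DriverManager.getConnection("jdbc:mysql://localhost:3306/db", "user", "pwd")
        stmt = conn.prepareStatement("DELETE FROM my_table WHERE id = ?")
      }

      override def invoke(id: Long): Unit = {
        stmt.setLong(1, id)   // delete the row whose key arrived on the stream
        stmt.executeUpdate()
      }

      override def close(): Unit = {
        if (stmt != null) stmt.close()
        if (conn != null) conn.close()
      }
    }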

Re: Generic return type on a user-defined scalar function

2019-05-20 Thread JingsongLee
…change it to this way to support a generic return type: val functionCallCode = s""" |${parameters.map(_.code).mkString("\n")} |$resultTypeTerm $resultTerm = ($resultTypeTerm) $functionReference.eval( | ${parameters.…

Re: PatternFlatSelectAdapter - Serialization issue after 1.8 upgrade

2019-04-19 Thread JingsongLee
Best, JingsongLee (Quoting Oytun Tez: Hi all, we are just migrating from 1.6 to 1.8. I encountered a serialization error …)

Re: Is it possible to handle late data when using table API?

2019-04-16 Thread JingsongLee
To set the rowtime watermark delay of a source you can:

    val desc = Schema()
      .field("a", Types.INT)
      .field("e", Types.LONG)
      .field("f", Types.STRING)
      .field("t", Types.SQL_TIMESTAMP)
      .rowtime(Rowtime().timestampsFromField("t").watermarksPeriodicBounded(1000))

Use watermarksPeriodicBounded…

Re: Is it possible to handle late data when using table API?

2019-04-16 Thread JingsongLee
Hi @Lasse Nedergaard, the Table API does not have an allowedLateness API, but you can set the rowtime.watermarks.delay of the source to slow down the watermark clock.