Re: rowtime defined via the Table API does not take effect

2023-05-23 Thread 小昌同学
sno=b8e947dc8498edfb9c7605f290fc13ba\npartenerName=zzinfo\nuniqueid=1C0FF05B-D047-45B4-8212-6AD8627DBA4F\nEmptyFields=Token&\ntztSDKType=0\n"} | ccc0606fight...@163.com | From | L Y<531599...@qq.com.INVALID>

Re: rowtime defined via the Table API does not take effect

2023-05-23 Thread 小昌同学
-> element.getEvenTime())); | ccc0606fight...@163.com | From | L Y<531599...@qq.com.INVALID> | Date | 2023-05-23 01:25 | To | user-zh | Subject | Re: rowtime defined via the Table API does not take effect | Hi,

Re: rowtime defined via the Table API does not take effect

2023-05-21 Thread 小昌同学
flink1.14 | ccc0606fight...@163.com | From | L Y<531599...@qq.com.INVALID> | Date | 2023-05-20 01:10 | To | user-zh | Subject | Re: rowtime defined via the Table API does not take effect | Hi, … mid

Re: rowtime defined via the Table API does not take effect

2023-05-21 Thread 小昌同学
"funcId"), $("serverIp"), $("outTime"), $("handleSerialNo"), $("info"), $("funcIdDesc"), $("eventTime").rowtime().as("et")); //Table tableRequest = tableEnv.fromDataStream(outRequestDataStream, Schema.newB

rowtime defined via the Table API does not take effect

2023-05-16 Thread 小昌同学
Hello everyone, here is my code: | Table midTable = tableEnv.fromDataStream(midStream, $("funcId"), $("funcIdDesc"), $("serverIp"), $("maxTime"), $("minTime"), $("pk"), $("eventTime").rowtime()); tableEnv.createTemporaryView("
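A minimal sketch of the pattern discussed in this thread, assuming Flink 1.13/1.14 and an illustrative Event POJO whose eventTime field holds epoch milliseconds (the POJO, field names and the 5-second bound are assumptions, not taken from the original mail). The key point is that watermarks must be assigned on the DataStream before fromDataStream, otherwise $("eventTime").rowtime() has no event-time timestamps to pick up:

    import static org.apache.flink.table.api.Expressions.$;

    import java.time.Duration;
    import org.apache.flink.api.common.eventtime.WatermarkStrategy;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.table.api.Table;
    import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

    public class RowtimeFromDataStream {
        // Hypothetical event type; the original job has more fields.
        public static class Event {
            public String funcId;
            public long eventTime; // epoch milliseconds
        }

        public static void main(String[] args) {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

            DataStream<Event> midStream = env.fromElements(new Event()); // stand-in source
            // 1) Assign timestamps and watermarks on the DataStream first.
            DataStream<Event> withWatermarks = midStream.assignTimestampsAndWatermarks(
                    WatermarkStrategy.<Event>forBoundedOutOfOrderness(Duration.ofSeconds(5))
                            .withTimestampAssigner((e, ts) -> e.eventTime));
            // 2) Mark the column as the event-time (rowtime) attribute during conversion.
            Table midTable = tEnv.fromDataStream(withWatermarks, $("funcId"), $("eventTime").rowtime());
            tEnv.createTemporaryView("midTable", midTable);
            midTable.printSchema(); // eventTime should now show up as TIMESTAMP(3) *ROWTIME*
        }
    }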

Re: Example Flink SQL fromDataStream watermark code not showing *rowtime*

2022-11-27 Thread Dan Hill
I figured this out. I get this behavior because I was running the code in a minicluster test that defaulted to batch. I switched to streaming and it renders "*ROWTIME*". On Fri, Nov 25, 2022 at 11:51 PM Dan Hill wrote: > Hi. I copied the Flink code from this page. My print
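For reference, a small sketch of pinning the runtime mode explicitly, since batch mode is what hides the *ROWTIME* marker described above; the EnvironmentSettings calls are standard Flink 1.13+/1.14 API, while the table definition is only illustrative:

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class StreamingModeRowtime {
        public static void main(String[] args) {
            // In batch mode the same schema prints without the *ROWTIME* marker.
            EnvironmentSettings settings = EnvironmentSettings.newInstance()
                    .inStreamingMode() // as opposed to .inBatchMode()
                    .build();
            TableEnvironment tEnv = TableEnvironment.create(settings);
            tEnv.executeSql(
                    "CREATE TABLE users (" +
                    "  name STRING," +
                    "  event_time TIMESTAMP(3)," +
                    "  WATERMARK FOR event_time AS event_time - INTERVAL '5' SECOND" +
                    ") WITH ('connector' = 'datagen')");
            tEnv.from("users").printSchema(); // event_time TIMESTAMP(3) *ROWTIME* in streaming mode
        }
    }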

Example Flink SQL fromDataStream watermark code not showing *rowtime*

2022-11-25 Thread Dan Hill
Hi. I copied the Flink code from this page. My printSchema() does not contain **ROWTIME** in the output. I'm running Flink v1.14.4. https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/table/data_stream_api/ public static class User {...} DataStream dataStream

Preserve rowtime through join

2022-10-10 Thread Matthias Broecheler
Hey Flinksters, I was wondering if you had any ideas for how to preserve the rowtime across an INNER equi join so that the output can be used in a temporal join. I've attached an example based on the TemporalJoinTest where I'm creating two views by deduplicating underlying streams (to rates_pk

Re: RowTime null-value filtering issue

2021-12-17 Thread Caizhi Weng
> wrote on Fri, Dec 17, 2021 at 18:42: > Question: > java.lang.RuntimeException: RowTime field should not be null, please > convert it to a non-null long value. > > > Note: the source table has a time field server_end_time (MySQL) that contains NULL values, and the source data cannot be modified. > String order_sql = "create TABLE sd_service_order ("
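Since the NULL values cannot be fixed at the source, one possible workaround (a sketch only, assuming Flink 1.12+ DDL; the fallback constant, the datagen stand-in connector and the watermark bound are assumptions) is to derive a non-null computed column and declare the watermark on it instead of on server_end_time directly:

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class NullableRowtimeWorkaround {
        public static void main(String[] args) {
            TableEnvironment tEnv = TableEnvironment.create(
                    EnvironmentSettings.newInstance().inStreamingMode().build());
            // 'datagen' stands in for the real source. Rows whose server_end_time is NULL
            // fall back to an assumed epoch constant, so the event-time column backing the
            // watermark is never NULL.
            tEnv.executeSql(
                    "CREATE TABLE sd_service_order (" +
                    "  order_id STRING," +
                    "  server_end_time TIMESTAMP(3)," +
                    "  ts AS COALESCE(server_end_time, TIMESTAMP '1970-01-01 00:00:00')," +
                    "  WATERMARK FOR ts AS ts - INTERVAL '5' SECOND" +
                    ") WITH ('connector' = 'datagen')");
        }
    }

Filtering out the NULL rows before the rowtime conversion is the other common option.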

RowTime null-value filtering issue

2021-12-17 Thread 45329722
Question: java.lang.RuntimeException: RowTime field should not be null, please convert it to a non-null long value. The source table has a time field server_end_time (MySQL) containing NULL values, but the source data cannot be modified. String order_sql = "create TABLE sd_service_order (" + "server_end_t

Re: rowtime attribute lost after creating view v1 on table t1

2021-11-02 Thread godfrey he
Could you post the concrete SQL so we can take a look? yidan zhao wrote on Tue, Nov 2, 2021 at 7:06 PM: > > As the subject says: this didn't seem to happen on Flink 1.11 and 1.12, but it does on 1.13. > Description: > t1 is a Kafka table with an event_time column that is a rowtime attribute; I then created view v1 via select , > event_time from t1. After creating it this way, queries on v1 fail saying aggre.. windows can only be defined on time > attributes. >

rowtime attribute lost after creating view v1 on table t1

2021-11-02 Thread yidan zhao
As the subject says: this didn't seem to happen on Flink 1.11 and 1.12, but it does on 1.13. Description: t1 is a Kafka table with an event_time column that is a rowtime attribute; I then created view v1 via select , event_time from t1. After creating it this way, queries on v1 fail saying aggre.. windows can only be defined on time attributes. I'm not sure whether this is caused by the version change or by a mistake somewhere on my side.

Re: Define rowtime on intermediate table field

2021-05-05 Thread Yun Gao
Recipients:user Subject:Define rowtime on intermediate table field Hi, My use case involves reading raw data records from Kafka and processing them. The records are coming from a database, where a periodic job reads new rows, packages them into a single JSON object (as described below) and writes

Define rowtime on intermediate table field

2021-05-04 Thread Sumeet Malhotra
Hi, My use case involves reading raw data records from Kafka and processing them. The records are coming from a database, where a periodic job reads new rows, packages them into a single JSON object (as described below) and writes the entire record to Kafka. { 'id': 'some_id', 'key_a':

Re: rowtime type serialization issue

2021-03-08 Thread JudeZhu
I ran into the same problem. How was it solved in the end? -- Sent from: http://apache-flink.147419.n8.nabble.com/

Re: rowtime type serialization issue

2021-03-08 Thread JudeZhu
I use the same approach as you: transform the source data, create a temporary view and pass it on to the sink, using rowtime along the way, and I hit the same error you did. How did you solve it in the end? -- Sent from: http://apache-flink.147419.n8.nabble.com/

flink 1.12.2: question about specifying rowtime when converting a DataStream to a Table

2021-03-06 Thread Asahi Lee
Hi, StreamExecutionEnvironment bsEnv = StreamExecutionEnvironment.getExecutionEnvironment(); StreamTableEnvironment bsTableEnv = StreamTableEnvironment.create(bsEnv); DataStream

Re: BackPressure in RowTime Task of FlinkSql Job

2021-02-26 Thread Aeden Jameson
>>Is it possible that you are generating too many watermarks that need to be sent to all downstream tasks? This was it, basically. I had unexpected flooding on specific keys, which I'm guessing caused intermittently hot partitions that were back-pressuring the rowtime task. I do have another questio

Re: BackPressure in RowTime Task of FlinkSql Job

2021-02-26 Thread Timo Walther
Hi Aeden, the rowtime task is actually just a simple map function that extracts the event-time timestamp into a field of the row for the next operator. It should not be the problem. Can you share a screenshot of your pipeline? What is your watermarking strategy? Is it possible that you

BackPressure in RowTime Task of FlinkSql Job

2021-02-24 Thread Aeden Jameson
I have a job made up of a few FlinkSQL statements using a statement set. In my job graph viewed through the Flink UI a few of the tasks/statements are preceded by this task rowtime field: (#11: event_time TIME ATTRIBUTE(ROWTIME)) that has an upstream Kafka source/sink task. Occasionally

Re: Role of Rowtime Field Task?

2021-02-20 Thread Yuval Itzchakov
https://ci.apache.org/projects/flink/flink-docs-stable/dev/table/streaming/time_attributes.html#event-time On Sun, Feb 21, 2021 at 8:43 AM Aeden Jameson wrote: > In my job graph viewed through the Flink UI I see a task named, > > rowtime field: (#11: event_time TIME ATTRIBUT

Role of Rowtime Field Task?

2021-02-20 Thread Aeden Jameson
In my job graph viewed through the Flink UI I see a task named, rowtime field: (#11: event_time TIME ATTRIBUTE(ROWTIME)) that has an upstream Kafka source task. What exactly does the rowtime task do? -- Thank you, Aeden

Re: table rowtime timezome problem

2020-12-27 Thread Leonard Xu
Hi,Jiazhi > When DataStream is converted to table, eventTime is converted to > rowTime. Rowtime is 8 hours slow. How to solve this problem? The reason is that the only data type that used to define an event time in Table/SQL is TIMESTAMP(3), and TIMESTAMP type isn’t related t
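On Flink 1.13 and later, a common way to avoid this kind of shift is to derive the event time as TIMESTAMP_LTZ from the epoch value and set the session time zone; a hedged sketch with illustrative table and column names:

    import java.time.ZoneId;
    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class RowtimeLtzExample {
        public static void main(String[] args) {
            TableEnvironment tEnv = TableEnvironment.create(
                    EnvironmentSettings.newInstance().inStreamingMode().build());
            // Rowtime values are rendered in this zone instead of UTC (Flink 1.13+).
            tEnv.getConfig().setLocalTimeZone(ZoneId.of("Asia/Shanghai"));
            tEnv.executeSql(
                    "CREATE TABLE events (" +
                    "  id STRING," +
                    "  event_millis BIGINT," +                     // epoch milliseconds
                    "  ts AS TO_TIMESTAMP_LTZ(event_millis, 3)," + // TIMESTAMP_LTZ(3) rowtime
                    "  WATERMARK FOR ts AS ts - INTERVAL '5' SECOND" +
                    ") WITH ('connector' = 'datagen')");
        }
    }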

Re: rowtime time zone issue

2020-12-26 Thread Shengkai Fang
…when converting to a Table, the corresponding eventTime becomes the rowtime and the rowtime is 8 hours behind. How can this be solved? > Thanks > Jiazhi

rowtime time zone issue

2020-12-26 Thread ゞ野蠻遊戲χ
Hi all, when a DataStream is converted to a Table and eventTime is converted to rowtime, the rowtime is 8 hours behind. How can this be solved?

table rowtime timezome problem

2020-12-26 Thread ゞ野蠻遊戲χ
Hi all When DataStream is converted to table, eventTime is converted to rowTime. Rowtime is 8 hours slow. How to solve this problem? Thanks Jiazhi

Re: Issue specifying rowtime in the Table API

2020-12-20 Thread hailongwang
Hi, you can try CAST(eventTime AS TIMESTAMP). Best, Hailong On 2020-12-19 11:14:53, "ゞ野蠻遊戲χ" wrote: > Hi all! > > When I convert a DataStream into a Table with a rowtime attribute and then submit SQL containing a UDTF via tableEnv.sql(), the following error is thrown: Rowtime > attributes must not be in the input rows of a regular join. As a workaround
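A sketch of that workaround, casting the rowtime attribute to a plain TIMESTAMP before it enters the regular join (the table names, datagen sources and join key are made up for illustration):

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class CastRowtimeBeforeRegularJoin {
        public static void main(String[] args) {
            TableEnvironment tEnv = TableEnvironment.create(
                    EnvironmentSettings.newInstance().inStreamingMode().build());
            tEnv.executeSql("CREATE TABLE left_t (id STRING, ts TIMESTAMP(3), " +
                    "WATERMARK FOR ts AS ts - INTERVAL '5' SECOND) WITH ('connector' = 'datagen')");
            tEnv.executeSql("CREATE TABLE right_t (id STRING, payload STRING) WITH ('connector' = 'datagen')");
            // Without the CAST, the planner rejects the query with
            // "Rowtime attributes must not be in the input rows of a regular join".
            tEnv.executeSql(
                    "CREATE TEMPORARY VIEW left_no_rowtime AS " +
                    "SELECT id, CAST(ts AS TIMESTAMP(3)) AS ts FROM left_t");
            tEnv.sqlQuery(
                    "SELECT l.id, l.ts, r.payload " +
                    "FROM left_no_rowtime l JOIN right_t r ON l.id = r.id");
        }
    }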

Issue specifying rowtime in the Table API

2020-12-18 Thread ゞ野蠻遊戲χ
When I convert a DataStream into a Table with a rowtime attribute and submit SQL containing a UDTF via tableEnv.sql(), the following error is thrown: Rowtime attributes must not be in the input rows of a regular join. As a workaround you can cast the time attributes of input tables to TIMESTAMP before

Re: Flink - Create Temporary View and "Rowtime attributes must not be in the input rows of a regular join"

2020-12-17 Thread Timo Walther
Hi Dan, are you intending to use interval joins, regular joins, or a mixture of both? For regular joins you must ensure to cast a rowtime attribute to timestamp as early as possible. For interval joins, you need to make sure that the rowtime attribute is unmodified. Currently, I see
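And a companion sketch of the interval-join case, where both rowtime attributes are left untouched and the BETWEEN predicate bounds them against each other (again with assumed table and column names):

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class IntervalJoinKeepsRowtime {
        public static void main(String[] args) {
            TableEnvironment tEnv = TableEnvironment.create(
                    EnvironmentSettings.newInstance().inStreamingMode().build());
            tEnv.executeSql("CREATE TABLE orders (id STRING, o_ts TIMESTAMP(3), " +
                    "WATERMARK FOR o_ts AS o_ts - INTERVAL '5' SECOND) WITH ('connector' = 'datagen')");
            tEnv.executeSql("CREATE TABLE shipments (id STRING, s_ts TIMESTAMP(3), " +
                    "WATERMARK FOR s_ts AS s_ts - INTERVAL '5' SECOND) WITH ('connector' = 'datagen')");
            // Both o_ts and s_ts stay unmodified rowtime attributes; the BETWEEN predicate
            // turns this into an interval join, so the output can keep a time attribute.
            tEnv.sqlQuery(
                    "SELECT o.id, o.o_ts, s.s_ts " +
                    "FROM orders o JOIN shipments s ON o.id = s.id " +
                    "AND s.s_ts BETWEEN o.o_ts AND o.o_ts + INTERVAL '1' HOUR");
        }
    }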

Flink - Create Temporary View and "Rowtime attributes must not be in the input rows of a regular join"

2020-12-15 Thread Dan Hill
NTERVAL HOUR)))], joinType=[inner]) : :- FlinkLogicalDataStreamTableScan(table=[[default, mydb, input_insertion]]) : +- FlinkLogicalDataStreamTableScan(table=[[default, mydb, input_impression]]) +- FlinkLogicalDataStreamTableScan(table=[[default, mydb, input_click]]) Rowtime att

Re: Re: flink 1.11.2 error with Interval Join between a rowtime stream and a proctime stream

2020-11-25 Thread hailongwang
> "user-zh" > ><18868816...@163.com; > Date: Wed, Nov 25, 2020, 7:31 PM > To: "user-zh" > Subject: Re: flink 1.11.2 rowtime and proctime stream In

Re: flink 1.11.2 error with Interval Join between a rowtime stream and a proctime stream

2020-11-25 Thread Asahi Lee
… join … ------ Original message ------ From: "user-zh"

Re: flink 1.11.2 error with Interval Join between a rowtime stream and a proctime stream

2020-11-25 Thread hailongwang
" l_a INT, " + >" l_b string, " + >" l_rt timestamp(3), " + > " r_a INT, " + >" r_b string, " + >" r_pt timestamp(3) " + >

flink 1.11.2 error with Interval Join between a rowtime stream and a proctime stream

2020-11-25 Thread Asahi Lee
" l_b string, " + " l_rt timestamp(3), " + " r_a INT, " + " r_b string, " + " r_pt timestamp(3) " + ") WITH ( " + " 'connector' = 'print' " +

Re: Re: Re: Blink 1.11 create view: is there no way to carry the rowtime forward?

2020-11-16 Thread hailongwang
ble/flink-table-api-java-bridge/src/main/java/org/apache/flink/table/api/bridge/java/internal/StreamTableEnvironmentImpl.java#L253 >> >> >>Best, >>Hailong >> >> >> >> >>On 2020-11-16 13:48:35, "周虓岗" wrote: >> >>Via the Table API's //

Re: Re: Blink 1.11 create view: is there no way to carry the rowtime forward?

2020-11-15 Thread 周虓岗
apache/flink/table/api/bridge/java/internal/StreamTableEnvironmentImpl.java#L253 > > >Best, >Hailong > > > > >On 2020-11-16 13:48:35, "周虓岗" wrote: > >Via the Table API's // declare an additional logical field as an event time attribute > >Table table = tEnv.fromDataSt

Re: Blink 1.11 create view: is there no way to carry the rowtime forward?

2020-11-15 Thread hailongwang
s an event time attribute Table table = tEnv.fromDataStream(stream, $("user_name"), $("data"), $("user_action_time").rowtime()); this passes the event time downstream. If I use CREATE VIEW instead, how can the time attribute be carried forward? If it is not carried forward, there may be … Is there any way to do this?

Blink 1.11 create view: is there no way to carry the rowtime forward?

2020-11-15 Thread 周虓岗
Via the Table API's // declare an additional logical field as an event time attribute Table table = tEnv.fromDataStream(stream, $("user_name"), $("data"), $("user_action_time").rowtime()); this passes the event time downstream. If I use CREATE VIEW instead, how can the time attribute be carried forward? If it is not carried forward, there may be … Is there any way to do this?

Re: Flink Table API cannot set a column inside nested JSON as rowtime

2020-09-16 Thread Jark Wu
1. Only top-level fields can be declared as rowtime; to use a nested field as rowtime, first generate a top-level column with a computed column (supported in DDL only). 2. The Descriptor API has many problems and missing features; it is not recommended, use DDL instead. The Descriptor API will be reworked in 1.12. Best, Jark On Thu, 17 Sep 2020 at 10:41, kylin wrote: > Flink version 1.7.2 > > The Flink Table API reads JSON data from Kafka with the JsonSchema shown below >
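A hedged DDL sketch of point 1 above, lifting a nested JSON field into a top-level computed column so it can back the watermark and act as rowtime; the connector properties and the nested layout are illustrative, since the original JsonSchema is not shown here:

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class NestedFieldAsRowtime {
        public static void main(String[] args) {
            TableEnvironment tEnv = TableEnvironment.create(
                    EnvironmentSettings.newInstance().inStreamingMode().build());
            // payload.event_time lives inside a nested ROW; the computed column ts
            // lifts it to the top level so WATERMARK (and hence rowtime) can use it.
            tEnv.executeSql(
                    "CREATE TABLE requests (" +
                    "  requestId STRING," +
                    "  payload ROW<event_time TIMESTAMP(3), mAdId STRING>," +
                    "  ts AS payload.event_time," +
                    "  WATERMARK FOR ts AS ts - INTERVAL '5' SECOND" +
                    ") WITH (" +
                    "  'connector' = 'kafka'," +
                    "  'topic' = 'test_request'," +
                    "  'properties.bootstrap.servers' = 'localhost:9092'," +
                    "  'scan.startup.mode' = 'earliest-offset'," +
                    "  'format' = 'json'" +
                    ")");
        }
    }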

Flink Table API cannot set a column inside nested JSON as rowtime

2020-09-16 Thread kylin
Flink version 1.7.2. The Flink Table API reads JSON data from Kafka with the JsonSchema shown below. I found that rowtime cannot be specified from a field inside nested JSON; could you confirm whether rowtime can only be specified from a top-level field? tableEnv.connect( new Kafka() .version("0.10") .topic(topic_in) .property("bootstrap.servers", brokers) .property("group.id", "

Re: Editing Rowtime for SQL Table

2020-09-02 Thread Timo Walther
/r62b47ec6812ddbafed65ac79e31ca0305099893559f1e5a991dee550%40%3Cdev.flink.apache.org%3E On 01.09.20 22:55, Satyam Shekhar wrote: Thanks for your replies Matthias and Timo. Converting the Table to DataStream, assigning a new Watermark & Rowtime attribute, and converting back to Table makes sense. One challenge with that appr

Re: Editing Rowtime for SQL Table

2020-09-01 Thread Satyam Shekhar
Thanks for your replies Matthias and Timo. Converting the Table to DataStream, assigning a new Watermark & Rowtime attribute, and converting back to Table makes sense. One challenge with that approach is that Table to DataStream conversion could emit retractable data stream, however, I t
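A sketch of that round trip for an append-only intermediate table on Flink 1.13+, where toDataStream/fromDataStream plus a Schema watermark re-establish the rowtime (names and the watermark bound are assumptions; for retracting inputs the conversion indeed needs toChangelogStream/toRetractStream handling, which is the challenge mentioned above):

    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.table.api.Schema;
    import org.apache.flink.table.api.Table;
    import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
    import org.apache.flink.types.Row;

    public class ReassignRowtime {
        public static void main(String[] args) {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
            tEnv.executeSql("CREATE TABLE t0 (id STRING, ts TIMESTAMP(3)) WITH ('connector' = 'datagen')");

            Table intermediate = tEnv.sqlQuery("SELECT id, ts FROM t0");
            // 1) Back to a DataStream (append-only assumed) ...
            DataStream<Row> stream = tEnv.toDataStream(intermediate);
            // 2) ... and back to a Table, declaring ts as the new rowtime via a watermark expression.
            Table withRowtime = tEnv.fromDataStream(stream,
                    Schema.newBuilder()
                            .column("id", "STRING")
                            .column("ts", "TIMESTAMP(3)")
                            .watermark("ts", "ts - INTERVAL '5' SECOND")
                            .build());
            withRowtime.printSchema(); // ts TIMESTAMP(3) *ROWTIME*
        }
    }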

Editing Rowtime for SQL Table

2020-08-31 Thread Satyam Shekhar
Hello, I use Flink for continuous evaluation of SQL queries on streaming data. One of the use cases requires us to run recursive SQL queries. I am unable to find a way to edit rowtime time attribute of the intermediate result table. For example, let's assume that there is a table T0 with schema

Re: flink 1.11 ORDER BY rowtime error

2020-08-19 Thread 郑斌斌
Thank you very much. After patching the source as described in the JIRA you pointed to, it now works. -- From: Benchao Li Date: Wed, Aug 19, 2020, 19:48 To: user-zh; 郑斌斌 Subject: Re: flink 1.11 ORDER BY rowtime error Hi 斌斌, it looks like you have hit a known bug [1] [1] https://issues.apache.org/jira/browse/FLINK-16827 郑斌斌 wrote on Aug 19, 2020

Re: flink 1.11 ORDER BY rowtime error

2020-08-19 Thread Benchao Li
> To: user-zh > Subject: Re: flink 1.11 ORDER BY rowtime error > > Where is the error? I don't see it. Please paste the code; perhaps the processing time is not set, or is set incorrectly. > > > > -- > Sent from: http://apache-flink.147419.n8.nabble.com/ > > -- Best, Benchao Li

Re: flink 1.11 ORDER BY rowtime error

2020-08-18 Thread 郑斌斌
) -- From: china_tao Date: Wed, Aug 19, 2020, 00:17 To: user-zh Subject: Re: flink 1.11 ORDER BY rowtime error Where is the error? I don't see it. Please paste the code; perhaps the processing time is not set, or is set incorrectly. -- Sent from: http://apache-flink.147419.n8.nabble.com/

Re: flink 1.11 ORDER BY rowtime error

2020-08-18 Thread china_tao
Where is the error? I don't see it. Please paste the code; perhaps the processing time is not set, or is set incorrectly. -- Sent from: http://apache-flink.147419.n8.nabble.com/

Re: flink 1.11 ORDER BY rowtime error

2020-08-18 Thread china_tao
I don't see the error. Please paste the code; perhaps the event time is not set, or is set incorrectly. -- Sent from: http://apache-flink.147419.n8.nabble.com/

Re: flink 1.11 ORDER BY rowtime error

2020-08-18 Thread chengyanan1...@foxmail.com
Hi: Your image didn't come through; please paste the code directly, since images are not visible to everyone. From: 郑斌斌 Date: 2020-08-18 17:45 To: user-zh Subject: flink 1.11 ORDER BY rowtime error Hi all, a quick question: why does the following error occur when executing an ORDER BY statement? SQL: select order_id,product_id FROM kafka_order order by rowtime Thanks & Regards

flink 1.11 ORDER BY rowtime error

2020-08-18 Thread 郑斌斌
Hi all, a quick question: why does the following error occur when executing an ORDER BY statement? SQL: select order_id,product_id FROM kafka_order order by rowtime Thanks & Regards

Re: Pyflink sink rowtime field

2020-07-16 Thread Xingbo Huang
Hi Jesse, I think the type of rowtime you declared on the source schema is DataTypes.Timestamp(); you should also use DataTypes.Timestamp() on the sink schema. Best, Xingbo Jesse Lord wrote on Wed, Jul 15, 2020 at 11:41 PM: > I am trying to sink the rowtime field in pyflink 1.10. I get the following >

Pyflink sink rowtime field

2020-07-15 Thread Jesse Lord
I am trying to sink the rowtime field in pyflink 1.10. I get the following error For the source schema I use .field("rowtime", DataTypes.TIMESTAMP(2)) .rowtime( Rowtime() .timestamps_from_field("timestamp") .watermark

Re: flink 1.11 createTemporaryTable with a rowtime field reports a "Field null does not exist" error

2020-07-13 Thread Hito Zhu
OK, thanks for the answer. -- Sent from: http://apache-flink.147419.n8.nabble.com/

Re: flink 1.11 createTemporaryTable with a rowtime field reports a "Field null does not exist" error

2020-07-13 Thread Jark Wu
This is probably a bug in the connect API; I suggest using DDL for now. Best, Jark On Tue, 14 Jul 2020 at 08:54, Hito Zhu wrote: > The rowtime is defined as follows; I found that the SchemaValidator#deriveFieldMapping method removes it. > Rowtime rowtime = new Rowtime() > .timestampsFromField("searchTime") > .water

Re: flink 1.11 createTemporaryTable with a rowtime field reports a "Field null does not exist" error

2020-07-13 Thread Hito Zhu
The rowtime is defined as follows; I found that the SchemaValidator#deriveFieldMapping method removes it. Rowtime rowtime = new Rowtime() .timestampsFromField("searchTime") .watermarksPeriodicBounded(5 * 1000); -- Sent from: http://apache-flink.147419.n8.nabble.com/

Re: flink 1.11 createTemporaryTable with a rowtime field reports a "Field null does not exist" error

2020-07-13 Thread Jark Wu
Could you paste the definition of the rowtime used in new Schema().field("searchTime", DataTypes.TIMESTAMP()).rowtime(rowtime); in your code? On Mon, 13 Jul 2020 at 20:53, Hito Zhu wrote: > Hi Jark, the exception is as follows: > Exception in thread "main" org.apache.flink.table.api.ValidationException:

Re: flink 1.11 createTemporaryTable with a rowtime field reports a "Field null does not exist" error

2020-07-13 Thread Hito Zhu
Hi Jark, the exception is as follows: Exception in thread "main" org.apache.flink.table.api.ValidationException: Field null does not exist at org.apache.flink.table.sources.tsextractors.TimestampExtractorUtils.lambda$mapToResolvedField$4(TimestampExtractorUtils.java:85) at

Re: flink 1.11 createTemporaryTable with a rowtime field reports a "Field null does not exist" error

2020-07-13 Thread Jark Wu
the program reports a "Field null does not exist" error; is my usage wrong? > I looked at https://issues.apache.org/jira/browse/FLINK-16160 > <https://issues.apache.org/jira/browse/FLINK-16160> Is that the issue that fixes this problem? > > tableEnv.connect(kafka) > .withSchema( > new Schema().field("searchTime"

flink 1.11 createTemporaryTable with a rowtime field reports a "Field null does not exist" error

2020-07-13 Thread Hito Zhu
Schema( new Schema().field("searchTime", DataTypes.TIMESTAMP()).rowtime(rowtime); ) .withFormat( new Json().failOnMissingField(false) ) .createTemporaryTable("tablename"); -- Sent from: http://apache-flink.147419.n8.nabble.com/

Re: Unable to generate rowtime, causing the window to fail

2020-06-28 Thread Leonard Xu
Hi, > field("logictime","TIMESTAMP(3)") fails because that field does not exist in your original table. As I understand it, you want to derive a new field logictime (TIMESTAMP(3)) from the field evitime (a Long). This can be solved with a computed column; the Table API does not support computed columns yet (1.12 is already working on it), but you can meet the requirement with DDL plus a computed column, see [1] create table test ( acct STRING, evitime BIGINT, logictime as
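A hedged completion of the DDL being sketched above; the FROM_UNIXTIME conversion assumes evitime carries epoch milliseconds, and the connector and watermark bound are only illustrative:

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class ComputedRowtimeFromBigint {
        public static void main(String[] args) {
            TableEnvironment tEnv = TableEnvironment.create(
                    EnvironmentSettings.newInstance().inStreamingMode().build());
            tEnv.executeSql(
                    "CREATE TABLE test (" +
                    "  acct STRING," +
                    "  evitime BIGINT," + // epoch milliseconds
                    "  logictime AS TO_TIMESTAMP(FROM_UNIXTIME(evitime / 1000))," +
                    "  WATERMARK FOR logictime AS logictime - INTERVAL '5' SECOND" +
                    ") WITH ('connector' = 'datagen')");
        }
    }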

Unable to generate rowtime, causing the window to fail

2020-06-28 Thread naturalfree
alse).deriveSchema()) .withSchema(new Schema().field("acct", "STRING").field("evtime", "LONG").field("logictime","TIMESTAMP(3)").rowTime(new Rowtime().timestampsFromField("evtime").wa

Re: Flink SQL newbie question: RowTime field should not be null, please convert it to a non-null long value

2020-05-24 Thread Leonard Xu
ka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152) >>> at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26) >>> at akka.japi.pf.UnitCaseStatement.appl

Re: Flink SQL newbie question: RowTime field should not be null, please convert it to a non-null long value

2020-05-24 Thread Enzo wang
pi.pf/ > >.UnitCaseStatement.applyOrElse(CaseStatements.scala:21) > > at > scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170) > > at > scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) > > at > scala.PartialFunction$

Re: Flink SQL newbie question: RowTime field should not be null, please convert it to a non-null long value

2020-05-24 Thread Leonard Xu
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) > at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) > at akka.actor.Actor$class.aroundReceive(Actor.scala:517) > at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:22

Re: rowtime type serialization issue

2020-03-20 Thread lucas.wu
=[SeqNo, Type, Table, ServerId, Database, OldData, GTID, Data, Timestamp, Offset, from_unixtime((Data.FuiUpdateTime / 1000)) AS FuiUpdateTimeSec, (from_unixtime((Data.FuiUpdateTime / 1000)) TO_TIMESTAMP _UTF-16LE'yyyy-MM-dd HH:mm:ss') AS event_ts]) - WatermarkAssigner(rowtime=[event_ts

Re: rowtime type serialization issue

2020-03-19 Thread Jingsong Li
> at > org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:708) > at SinkConversion$51.processElement(Unknown Source) > …… > > > Finally, looking at the code, I found that for rowtime the BaseRowTypeInfo path uses SqlTimestampSerializer while RowTypeInfo uses LongSerializer; with different serializers upstream and downstream (SqlTimestampSerializer upstream, LongSerializer downstream) the job fails. > Can this problem be avoided? -- Best, Jingsong Lee

rowtime type serialization issue

2020-03-19 Thread lucas.wu
) at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:708) at SinkConversion$51.processElement(Unknown Source) …… Finally, looking at the code, I found that for rowtime the BaseRowTypeInfo path uses SqlTimestampSerializer while RowTypeInfo uses LongSerializer; with different serializers upstream and downstream (SqlTimestampSerializer upstream, LongSerializer downstream) the job fails. Can this problem be avoided?

Re: [flink-sql] How to specify rowtime when creating a table with tableEnv.sqlUpdate(ddl)?

2019-12-08 Thread JingsongLee
Hi 猫猫: Defining rowtime in DDL is a newly supported feature and the documentation is still being written. [1] You can try it from the master branch; the community is preparing the 1.10 release, which will make a release version available. [2] contains a complete usage example, FYI. [1] https://issues.apache.org/jira/browse/FLINK-14320 [2] https://github.com/apache/flink/blob/2ecf7cacbe742099d78c528de962fccf13c14629/flink-table/flink-table-planner-blink/src/test

[flink-sql] How to specify rowtime when creating a table with tableEnv.sqlUpdate(ddl)?

2019-12-06 Thread 猫猫
When creating a table with tableEnv.sqlUpdate(ddl), how is the rowtime field specified? … flink … csv … kafka … sql … CREATE

Re: Flink 1.9 Sql Rowtime Error

2019-11-01 Thread OpenInx
ableName) > .startFromEarliest() > .property("zookeeper.connect", "xxx") > .property("bootstrap.servers", "xxx") > .property("group.id", "xxx")) > .withFormat(new Json().deriveSche

Flink 1.9 Sql Rowtime Error

2019-11-01 Thread Polarisary
“xxx") .property("group.id", “xxx")) .withFormat(new Json().deriveSchema()) .withSchema(new Schema() .field("rowtime", Types.SQL_TIMESTAMP) .rowtime(new Rowti

How to set an existing field as the rowtime attribute

2019-10-28 Thread 苏 欣
Hi all, I want to use a field from the Kafka message as the rowtime attribute and ran into the following problem on Flink 1.9.1. Below are the two approaches I tried; both fail. Has anyone hit a similar problem, and how did you solve it? Thanks! Code 1: tEnv.connect( new Kafka() .version("universal") .topic("flink-test-dept-1") .startFromGroupOffsets()

Re: Blink SQL: getting rowtime from Kafka

2019-10-17 Thread Jark Wu
Hi Zijie, most likely your sqlTimestamp field contains NULL values, so an NPE is thrown when reading the ts. Currently the watermark assigner requires every record to carry a non-null ts. Best, Jark > On Oct 17, 2019 at 20:25, Zijie Lu wrote: > CREATE TABLE requests( > `rowtime` TIMESTAMP, > `requestId` VARCHAR, > `algoExtent` ROW(`mAdId` VARCHAR)) > with

Re: Blink SQL: getting rowtime from Kafka

2019-10-17 Thread Zijie Lu
CREATE TABLE requests( `rowtime` TIMESTAMP, `requestId` VARCHAR, `algoExtent` ROW(`mAdId` VARCHAR)) with ( 'connector.type' = 'kafka', 'connector.version' = 'universal', 'connector.topic' = 'test_request', 'connector.startup-mode' = 'latest-offset', 'connector.properties.0.key

Re: Blink SQL: getting rowtime from Kafka

2019-10-17 Thread Zijie Lu
This definition does work with the old planner, though. On Thu, 17 Oct 2019 at 19:49, Zijie Lu wrote: > I defined the following table with the Blink planner: > CREATE TABLE requests( > `rowtime` TIMESTAMP, > `requestId` VARCHAR, > `algoExtent` ROW(`mAdId` VARCHAR)) > with ( > 'connector.type' = 'kafka', > 'connect

Blink SQL: getting rowtime from Kafka

2019-10-17 Thread Zijie Lu
I defined the following table with the Blink planner: CREATE TABLE requests( `rowtime` TIMESTAMP, `requestId` VARCHAR, `algoExtent` ROW(`mAdId` VARCHAR)) with ( 'connector.type' = 'kafka', 'connector.version' = 'universal', 'connector.topic' = 'test_request', 'connector.startup-mode' = 'latest-offset

Re: Flink SQL: How to tag a column as 'rowtime' from a Avro based DataStream?

2019-09-10 Thread Fabian Hueske
ds there you can still access nested fields with >>> something like "topfield.x.y.z" in the SQL statement. >>> >>> What I found is that the easiest way to make this all work is to ensure >>> the rowtime field in the structure is at the top

Re: Flink SQL: How to tag a column as 'rowtime' from a Avro based DataStream?

2019-09-10 Thread Niels Basjes
elds there you can still access nested fields with >> something like "topfield.x.y.z" in the SQL statement. >> >> What I found is that the easiest way to make this all work is to ensure >> the rowtime field in the structure is at the top level (which makes

Re: Flink SQL: How to tag a column as 'rowtime' from a Avro based DataStream?

2019-09-10 Thread Fabian Hueske
y needs > the 'top level' fields of the Avro datastructure. > With only the top fields there you can still access nested fields with > something like "topfield.x.y.z" in the SQL statement. > > What I found is that the easiest way to make this all work is to ensure > the rowti

flink 1.8 SQL rowtime window question

2019-08-27 Thread 1142632215
A rowtime is defined on a KafkaTableSource. select order_id ,last_value(timestamp) timestamp,last_value(order_status) order_status from order group by order_id, then an over window such as over(partition by df(timestamp,'yyyy-MM-dd 00:00:00') order by update_time range BETWEEN INTERVAL '24

Re: Flink SQL: How to tag a column as 'rowtime' from a Avro based DataStream?

2019-08-21 Thread Niels Basjes
fields there you can still access nested fields with something like "topfield.x.y.z" in the SQL statement. What I found is that the easiest way to make this all work is to ensure the rowtime field in the structure is at the top level (which makes sense in general) and generate the fie

Re: Flink SQL: How to tag a column as 'rowtime' from a Avro based DataStream?

2019-08-14 Thread Timo Walther
Hi Niels, if you are coming from DataStream API, all you need to do is to write a timestamp extractor. When you call: tableEnv.registerDataStream("TestStream", letterStream, "EventTime.rowtime, letter, counter"); The ".rowtime" means that the framework
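A sketch of that (now legacy, roughly Flink 1.9/1.10-era) pattern: a timestamp extractor on the DataStream supplies the event-time timestamps, and the leading "EventTime.rowtime" name in registerDataStream materializes them as an appended rowtime column. The tuple contents and the out-of-orderness bound are assumptions:

    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.TimeCharacteristic;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
    import org.apache.flink.streaming.api.windowing.time.Time;
    import org.apache.flink.table.api.java.StreamTableEnvironment;

    public class LegacyRowtimeRegistration {
        public static void main(String[] args) {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
            StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

            DataStream<Tuple2<String, Long>> letterStream = env
                    .fromElements(Tuple2.of("a", 1L), Tuple2.of("b", 2L))
                    // The extractor supplies the event-time timestamps that ".rowtime" exposes.
                    .assignTimestampsAndWatermarks(
                            new BoundedOutOfOrdernessTimestampExtractor<Tuple2<String, Long>>(Time.seconds(5)) {
                                @Override
                                public long extractTimestamp(Tuple2<String, Long> element) {
                                    return element.f1;
                                }
                            });

            // "EventTime.rowtime" appends a rowtime column derived from the record timestamps;
            // the remaining names map the tuple fields positionally.
            tableEnv.registerDataStream("TestStream", letterStream, "EventTime.rowtime, letter, counter");
            tableEnv.scan("TestStream").printSchema();
        }
    }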

Flink SQL: How to tag a column as 'rowtime' from a Avro based DataStream?

2019-08-14 Thread Niels Basjes
Hi, Experimenting with the StreamTableEnvironment I build something like this: DataStream> letterStream = ... tableEnv.registerDataStream("TestStream", letterStream, "EventTime.rowtime, letter, counter"); Because the "EventTime" was tagged with ".rowtim

How to define a field as the rowtime attribute using a Table API connector

2019-07-31 Thread hegongyin
t( new Csv().schema(Types.ROW(names, types)) ) .withSchema( new Schema() .field("timestamp", Types.SQL_TIMESTAMP()) .rowtime(new Rowtime()

rowtime/proctime

2019-07-27 Thread somnussuy
The following happens when executing SQL: one of the fields registered via streamTableEnvironment.registerDataStream() is over_time.rowtime. Running sql: select tumble(over_time,interval '1' second),over_time from kafka_source group by over_time,tumble(over_time,interval '1' second) throws a ClassCastException. If the clauses after GROUP BY swap positions, group by

Re: FlinkSQL fails when rowtime meets dirty data

2019-05-16 Thread Fabian Hueske
| > | 2 | 2019-05-15 10:00:00 | > ... > > and I define rowtime from the server_time field: > new Schema() > .field("rowtime", Types.SQL_TIMESTAMP) >.rowtime(new Rowtime().timestampsFromField("server_time")) > .field("id"

FlinkSQL fails when rowtime meets dirty data

2019-05-15 Thread maidangdang
I use FlinkSQL to process Kafka data in the following format: | id | server_time | | 1 | 2019-05-15 10:00:00 | | 2 | 2019-05-15 10:00:00 | ... and I define rowtime from the server_time field: new Schema() .field("rowtime", Types.SQL_TIMESTAMP) .rowtime(n

Re: Flink Stream SQL group by TUMBLE(rowtime,)

2019-04-24 Thread liu_mingzhang
consumed; even after restarting the program, new data has to be sent before the previously unconsumed data is processed, instead of automatically resuming from the last offset+1. SQL: SELECT astyle, TUMBLE_START(rowtime, INTERVAL '10' SECOND) time_start, TUMBLE_END(rowtime, INTERVAL '10' SECOND) time_end, SUM(energy) AS sum_energy, CAST(COUNT(aid) AS INT) AS cnt, CAST(AVG(age) AS INT) AS avg_age FROM t_p

Flink Stream SQL group by TUMBLE(rowtime,)

2019-04-24 Thread 邵志鹏
ECT astyle, TUMBLE_START(rowtime, INTERVAL '10' SECOND) time_start, TUMBLE_END(rowtime, INTERVAL '10' SECOND) time_end, SUM(energy) AS sum_energy, CAST(COUNT(aid) AS INT) AS cnt, CAST(AVG(age) AS INT) AS avg_age FROM t_pojo GROUP BY TUMBLE(rowtime, INTERVAL '10' SECOND), astyle assignTimestampsAndWater

Re: Rowtime for Table from DataStream without explixit fieldnames

2018-10-04 Thread Johannes Schulte
correct me if I am wrong. > > Best, > > Dawid > > > On 04/10/18 15:08, Johannes Schulte wrote: > > Hi, > > when converting a DataStream (with Watermarks) to a table like > described here > https://ci.apache.org/projects/flink/flink-docs-release-1.6/dev/table/

Re: Rowtime for Table from DataStream without explixit fieldnames

2018-10-04 Thread Dawid Wysakowicz
able like > described here > > https://ci.apache.org/projects/flink/flink-docs-release-1.6/dev/table/streaming.html#event-time > > I wonder on how to use the rowtime in a following window operation > _without_ explicitly specifying all field names and hence rely on case > class type

Rowtime for Table from DataStream without explixit fieldnames

2018-10-04 Thread Johannes Schulte
Hi, when converting a DataStream (with Watermarks) to a table like described here https://ci.apache.org/projects/flink/flink-docs-release-1.6/dev/table/streaming.html#event-time I wonder on how to use the rowtime in a following window operation _without_ explicitly specifying all field names

Re: rowTime from json nested timestamp field in SQL-Client

2018-07-17 Thread Timo Walther
ble/sqlClient.html>, we are trying to define rowTime from a nested JSON element, but struggling with syntax. JSON data format: https://pastebin.com/ByCLhEnF YML table config: https://pastebin.com/cgEtQPDQ Now, in above config, we want to access *pay

Re: rowTime from json nested timestamp field in SQL-Client

2018-07-17 Thread Ashwin Sinha
um 14:25 schrieb Ashwin Sinha: >> >> Hi Users, >> >> In Flink1.5 SQL CLient >> <https://ci.apache.org/projects/flink/flink-docs-release-1.5/dev/table/sqlClient.html>, >> we are trying to define rowTime from a nested JSON element, but struggling >> wit

Re: rowTime from json nested timestamp field in SQL-Client

2018-07-16 Thread Ashwin Sinha
> > Am 16.07.18 um 14:25 schrieb Ashwin Sinha: > > Hi Users, > > In Flink1.5 SQL CLient > <https://ci.apache.org/projects/flink/flink-docs-release-1.5/dev/table/sqlClient.html>, > we are trying to define rowTime from a nested JSON element, but struggling > with synta

Re: rowTime from json nested timestamp field in SQL-Client

2018-07-16 Thread Timo Walther
windows? Regards, Timo Am 16.07.18 um 14:25 schrieb Ashwin Sinha: Hi Users, In Flink1.5 SQL CLient <https://ci.apache.org/projects/flink/flink-docs-release-1.5/dev/table/sqlClient.html>, we are trying to define rowTime from a nested JSON element, but struggling with syntax. JSON data

rowTime from json nested timestamp field in SQL-Client

2018-07-16 Thread Ashwin Sinha
Hi Users, In Flink1.5 SQL CLient <https://ci.apache.org/projects/flink/flink-docs-release-1.5/dev/table/sqlClient.html>, we are trying to define rowTime from a nested JSON element, but struggling with syntax. JSON data format: https://pastebin.com/ByCLhEnF YML table config: https://pasteb

Re: Rowtime

2018-03-22 Thread Fabian Hueske
(*), MAX(rowtime) FROM t GROUP BY user; - SELECT user, COUNT(*), LAST_VAL(rowtime) FROM t GROUP BY user; These queries would forward the maximum (or last) event-time timestamp (rowtime) to the result table. However, none of these work in the current version or upcoming version of Flink. We also need
