Re: How to build dependencies and connections between stream jobs?

2019-06-18 Thread
tin [1] https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/event_time.html#event-time-and-watermarks

Re: Source code question - about the logic of calculating network buffer

2019-06-12 Thread
networkBufBytes = Total memory of TM * networkBufFraction. Therefore, the networkBufBytes = xmx / (1 - networkBufFraction) * networkBufFraction. Best, Yun. From: 徐涛, Send Tim
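Yun's arithmetic above can be sketched as follows. This is a minimal illustration of the formula from the thread, not the exact Flink 1.7 source: the method name is made up, the 0.1 fraction is only the documented default for taskmanager.network.memory.fraction, and the real TaskManagerServices code also applies min/max bounds that are omitted here.

```java
public class NetworkBufferCalc {
    // Hypothetical helper: given the -Xmx heap (jvmHeapNoNet) and the network
    // fraction, recover the network buffer bytes per the thread's formula:
    //   totalTmMemory = jvmHeapNoNet / (1 - fraction)
    //   networkBufBytes = totalTmMemory * fraction
    static long networkBufBytes(long jvmHeapNoNet, double fraction) {
        long totalTmMemory = (long) (jvmHeapNoNet / (1 - fraction));
        return (long) (totalTmMemory * fraction);
    }

    public static void main(String[] args) {
        // With a 1 GiB heap and the default fraction of 0.1:
        System.out.println(networkBufBytes(1024L << 20, 0.1)); // ~113.8 MiB
    }
}
```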

Source code question - about the logic of calculating network buffer

2019-06-12 Thread
Hi Experts, I am debugging the WordCount Flink streaming program in local mode; the Flink version is 1.7.2. I saw the following calculation logic about the network buffer in class TaskManagerServices. jvmHeapNoNet is equal to the -xmx amount in Java. Why the networkBufB

How to build dependencies and connections between stream jobs?

2019-05-30 Thread
Hi Experts, In batch computing, there are products like Azkaban or airflow to manage batch job dependencies. By using a dependency management tool, we can build a large-scale system consisting of small jobs. In stream processing, it is not practical to put all dependencies in one

Re: lack of function and low usability of provided function

2019-04-23 Thread
mpl.java:5317) at org.apache.calcite.sql.parser.impl.SqlParserImpl.BuiltinFunctionCall(SqlParserImpl.java:5281) at org.apache.calcite.sql.parser.impl.SqlParserImpl.AtomicRowExpression(SqlParserImpl.java:3474) at org.apache.calcite.sql.parser.impl.SqlParserImpl.Expression3(SqlParserImpl.

Re: Can back pressure data be gathered by Flink metric system?

2019-04-15 Thread
pache.org/projects/flink/flink-docs-release-1.8/monitoring/back_pressure.html Best, Zhijiang. From: 徐涛, Send Time: Monday, Apr 15, 2019, 10:19, To: user, Subject: Can back pressure data be gathered

Can back pressure data be gathered by Flink metric system?

2019-04-14 Thread
Hi Experts, From the page Flink metric system(https://ci.apache.org/projects/flink/flink-docs-release-1.8/monitoring/metrics.html#system-metrics ), I do not find the info about the back p

Re: Does Flink apply API for setScale of BigDecimal type

2019-04-10 Thread
Java would be BigInt. Or you start splitting your values into 2 parts... an unscaled value and a factor such that your value is unscaledValue * 10^(-factor). Tim. On Mon, Feb 25, 2019, 8:26 PM, 徐涛 wrote: Hi Experts,
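Tim's suggestion of splitting a value into an unscaled part and a factor maps directly onto BigDecimal's own representation. A small sketch (the sample value 123.45 is invented for illustration):

```java
import java.math.BigDecimal;

public class UnscaledSplit {
    public static void main(String[] args) {
        // A BigDecimal is already stored as unscaledValue * 10^(-scale),
        // so both integer parts can be carried in integer-typed columns.
        BigDecimal value = new BigDecimal("123.45");
        long unscaled = value.unscaledValue().longValueExact(); // 12345
        int scale = value.scale();                              // 2

        // Reassemble to confirm the round trip is lossless.
        BigDecimal back = BigDecimal.valueOf(unscaled, scale);
        System.out.println(back); // 123.45
    }
}
```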

Re: Flink SQL hangs in StreamTableEnvironment.sqlUpdate, keeps executing and seems never stop, finally lead to java.lang.OutOfMemoryError: GC overhead limit exceeded

2019-04-08 Thread
much of the search space). Can you share the query? Did you add any optimization rules? Best, Fabian. On Wed, Apr 3, 2019 at 12:36 PM, 徐涛 wrote: Hi Experts, There is a Flink application (Version 1.7.2) which is w

How to submit Flink program to Yarn without upload the fat jar?

2019-04-04 Thread
Hi Experts, When submitting a Flink program to Yarn, the app jar (a fat jar of about 200M with Flink dependencies) will be uploaded to Yarn, which takes a lot of time. I checked the code in CliFrontend and found that there is a config item named "yarn.per-job-cluster.include-user-jar",

Flink SQL hangs in StreamTableEnvironment.sqlUpdate, keeps executing and seems never stop, finally lead to java.lang.OutOfMemoryError: GC overhead limit exceeded

2019-04-03 Thread
Hi Experts, There is a Flink application (Version 1.7.2) which is written in Flink SQL, and the SQL in the application is quite long: it consists of about 10 tables, 1500 lines in total. When executing, I found it hangs in StreamTableEnvironment.sqlUpdate and keeps executing some code about

Any suggestions about which GC collector to use in Flink?

2019-04-02 Thread
Hi Experts, In my environment, when I submit the Flink program to yarn, I do not specify which GC collector to use, in the web monitor page, I found it uses PS_Scavenge as the young generation GC collector, PS_MarkSweep as the old generation GC collector, I wonder if I can use G1 as the

Best practice to handle update messages in stream

2019-03-21 Thread
Hi Experts, Assuming there is a stream whose content is like this: Seq ID MONEY: (1, 100, 100), (2, 100, 200), (3, 101, 300). The record of Seq#2 updates the record of Seq#1, changing the money f
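One way to picture the update semantics the question asks about is "keep only the latest value per key". The sketch below models that outside Flink with a plain map; in an actual job this per-key last value would typically live in keyed state or be expressed as an upsert/retract query, so this is only an illustration of the intended semantics.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class UpsertByKey {
    public static void main(String[] args) {
        // The stream from the question as (seq, id, money) triples; a later
        // seq for the same ID replaces the earlier record.
        int[][] records = {{1, 100, 100}, {2, 100, 200}, {3, 101, 300}};

        Map<Integer, Integer> latestMoneyById = new LinkedHashMap<>();
        for (int[] r : records) {
            latestMoneyById.put(r[1], r[2]); // keep only the latest value per ID
        }
        System.out.println(latestMoneyById); // {100=200, 101=300}
    }
}
```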

Confusing exception info in Flink-SQL

2019-03-15 Thread
Hi Experts, When I am using the following sentence in Flink-SQL: if(item_name='xxx', u.user_id, null) the following exception was thrown, which is a bit confusing, because it is actually caused by the fact that there is no if function in Flink-SQL. I think it is clearer to just poi

Re: How to join stream and dimension data in Flink?

2019-03-12 Thread
oc/blink/dev/table/sourcesinks#defining-a-tablesource-with-lookupable On Tue, Mar 12, 2019 at 2:13 PM, 徐涛 wrote: Hi Hequn,

Re: How to join stream and dimension data in Flink?

2019-03-11 Thread
lion records in two tables. The records in both tables are just beginning to stream into flink, and the records as dimension tables are not fully arrived. Therefore, your matching results may not be as accurate as directly querying Mysql. In fact, the current stre

cast from bigdecimal to bigint throws ArithmeticException

2019-02-25 Thread
Hi Experts, There is a Flink table which has a column typed as java.math.BigDecimal. In SQL I try to cast it to type long, cast(duration as bigint), however it throws the following exception: java.lang.ArithmeticException: Rounding necessary at java.math.BigDec
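The "Rounding necessary" error can be reproduced with plain BigDecimal, since an exact narrowing conversion refuses to drop a fractional part. A minimal sketch (the sample value 12.5 is invented; the workaround of rounding explicitly before the cast is a general BigDecimal technique, not necessarily how Flink SQL resolves the cast internally):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class CastDemo {
    public static void main(String[] args) {
        BigDecimal duration = new BigDecimal("12.5");

        // An exact conversion throws ArithmeticException when the value has
        // a fractional part -- the same error class as in the thread.
        try {
            duration.longValueExact();
        } catch (ArithmeticException e) {
            System.out.println("exact conversion failed: " + e.getMessage());
        }

        // Workaround: round explicitly, then narrow.
        long rounded = duration.setScale(0, RoundingMode.HALF_UP).longValue();
        System.out.println(rounded); // 13
    }
}
```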

Each yarn container only use 1 vcore even if taskmanager.numberOfTaskSlots is set

2019-02-17 Thread
Hi Experts, I am running a Flink 1.7.1 program on Yarn 2.7; taskmanager.numberOfTaskSlots is set to 4 and parallelism.default is set to 8. When the program is running, 3 yarn containers are launched, but each of them only uses 1 vcore. I think by default the number of vcores is set to t

UDAF Flink-SQL return null would lead to checkpoint fails

2019-01-30 Thread
Hi Experts, In my self-defined UDAF, I found that returning a null value causes the checkpoint to fail; the following is the error log. I think it is quite a common case to return a null value in a UDAF, because sometimes no value can be determined. Why does Flink have such a limit

Re: TimeZone shift problem in Flink SQL

2019-01-27 Thread
[1]: https://en.wikipedia.org/wiki/Unix_time On Thu, Jan 24, 2019 at 4:43 PM, Bowen Li wrote: Hi, Did you consider timezone in conversion in your UDF?

TimeZone shift problem in Flink SQL

2019-01-22 Thread
Hi Experts, I have the following two UDFs: unix_timestamp transforms from string to Timestamp, with the arguments (value: String, format: String), returning Timestamp; from_unixtime transforms from Timestamp to String, with the arguments (ts: Long, format: String), returning Stri
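Timezone shifts in such UDF pairs usually come from formatting and parsing with different implicit zones. A hedged sketch with java.time, making the zone explicit in both directions; the signatures are simplified stand-ins for the thread's UDFs (epoch seconds instead of Timestamp), not the author's actual implementation:

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

public class TsDemo {
    // Simplified stand-in for the thread's unix_timestamp UDF.
    static long unixTimestamp(String value, String format, ZoneId zone) {
        DateTimeFormatter f = DateTimeFormatter.ofPattern(format).withZone(zone);
        return Instant.from(f.parse(value)).getEpochSecond();
    }

    // Simplified stand-in for the thread's from_unixtime UDF.
    static String fromUnixtime(long ts, String format, ZoneId zone) {
        return DateTimeFormatter.ofPattern(format).withZone(zone)
                .format(Instant.ofEpochSecond(ts));
    }

    public static void main(String[] args) {
        long ts = unixTimestamp("2019-01-22 12:00:00", "yyyy-MM-dd HH:mm:ss",
                ZoneId.of("UTC"));
        // Formatting the same instant in two zones shows the 8-hour shift.
        System.out.println(fromUnixtime(ts, "yyyy-MM-dd HH:mm:ss", ZoneId.of("UTC")));
        System.out.println(fromUnixtime(ts, "yyyy-MM-dd HH:mm:ss", ZoneId.of("Asia/Shanghai")));
    }
}
```

If both UDFs pin the same ZoneId, the round trip is stable regardless of the JVM default timezone.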

ElasticSearch RestClient throws NoSuchMethodError due to shade mechanism

2019-01-15 Thread
Hi All, I use the following code to try to build a RestClient: org.elasticsearch.client.RestClient.builder( new HttpHost(xxx, xxx,"http") ).build() but at runtime a NoSuchMethodError is thrown. I think the reason is: there are two RestClient classes, one

How to get the temp result of each dynamic table when executing Flink-SQL?

2019-01-07 Thread
Hi Expert, When we write a Flink-SQL program, we usually need to use multiple tables to get the final result; sometimes because it is not possible to implement complicated logic in one SQL, and sometimes for clarity of logic. For example: create view A as

When could flink 1.7.1 be downloaded from maven repository

2018-12-28 Thread
Hi Experts, From github I saw flink 1.7.1 was released about 7 days ago, but I still can not download it from the maven repository. May I know why there is some lag between them? When could the jar be downloaded from the maven repository? Best, Henry

Deadlock happens when sink to mysql

2018-11-19 Thread
Hi Experts, I use the following sql and sink to mysql: select album_id, date, count(1) from coupon_5_discount_date_conv group by album_id, date; when sinking to mysql, the following SQL is executed: insert into xxx (c1,c2,c3) values (?,?,?) on duplicate k
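A common cause of deadlocks with concurrent "INSERT ... ON DUPLICATE KEY UPDATE" batches is parallel sink instances touching the same rows in different orders. One frequently used mitigation (an assumption here, not something stated in the thread) is to sort each batch by its key before executing, so all writers acquire row locks in a consistent order:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class SortedBatch {
    // Hypothetical rows as {album_id, date, count}, matching the GROUP BY key
    // in the thread's query. Sorting by (album_id, date) makes all parallel
    // upsert batches lock rows in the same order.
    static List<String[]> sortByKey(List<String[]> batch) {
        List<String[]> sorted = new ArrayList<>(batch);
        sorted.sort(Comparator.<String[], String>comparing(r -> r[0])
                .thenComparing(r -> r[1]));
        return sorted;
    }

    public static void main(String[] args) {
        List<String[]> batch = Arrays.asList(
                new String[]{"b", "2018-11-19", "3"},
                new String[]{"a", "2018-11-19", "5"});
        System.out.println(sortByKey(batch).get(0)[0]); // "a" now comes first
    }
}
```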

Re: Last batch of stream data could not be sinked when data comes very slow

2018-11-17 Thread
hat is currently happening, for example, in the Kafka / Kinesis producer sinks. Records are buffered, and flushed at fixed intervals or when the buffer is full. They are also flushed on every checkpoint. Cheers, Gordon. On 13 November 2018 at 5:07:32 PM,

Re: Job xxx not found exception when starting Flink program in Local

2018-11-17 Thread
started cluster? Best, Henry. On Nov 14, 2018, 5:16 PM, Chesnay Schepler wrote: Did you have the WebUI open from a previous execution? If so then the UI might still be requesting jobs from the previous job. On 13.11.2018 08:01, 徐涛 wrote: Hi Experts,

Last batch of stream data could not be sinked when data comes very slow

2018-11-13 Thread
Hi Experts, When we implement a sink, we usually implement batching, flushing according to the record count or on reaching a time interval; however this may lead to the data of the last batch not being written to the sink, because the flush is triggered by incoming records. I also tested the JDBCOutputFormat
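The pattern Gordon describes in the reply (flush when the buffer is full, at fixed intervals, and on every checkpoint) can be sketched as below. This is a toy skeleton, not the Kafka/Kinesis producer code: the class and method names are invented, and the timer/checkpoint wiring is only indicated in comments.

```java
import java.util.ArrayList;
import java.util.List;

public class BufferedSink {
    private final List<String> buffer = new ArrayList<>();
    private final int batchSize;

    public BufferedSink(int batchSize) { this.batchSize = batchSize; }

    // Called per record: flush when the buffer is full...
    public synchronized List<String> invoke(String record) {
        buffer.add(record);
        return buffer.size() >= batchSize ? flush() : List.of();
    }

    // ...and also call flush() from a scheduled timer and from the
    // checkpoint hook, so a slow stream still drains its last partial batch.
    public synchronized List<String> flush() {
        List<String> out = new ArrayList<>(buffer);
        buffer.clear();
        return out; // a real sink would write `out` to the external system here
    }

    public static void main(String[] args) {
        BufferedSink sink = new BufferedSink(3);
        sink.invoke("a");
        sink.invoke("b");
        // Timer- or checkpoint-driven flush picks up the partial batch:
        System.out.println(sink.flush()); // [a, b]
    }
}
```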

Re: Job xxx not found exception when starting Flink program in Local

2018-11-12 Thread
After I restart my computer, the error is gone. On Nov 13, 2018, 2:53 PM, 徐涛 wrote: Hi Experts, When I start the Flink program locally, I found that the following exception is thrown. I do not know why it happens because it happens suddenly; some hours a

Job xxx not found exception when starting Flink program in Local

2018-11-12 Thread
Hi Experts, When I start the Flink program locally, I found that the following exception is thrown. I do not know why it happens because it happens suddenly; some hours ago the program could start successfully. Could anyone help to explain it? Thanks a lot! 2018-11-13 14:48:45 [flink-akka.actor.default

Job xxx not found exception when starting Flink program in Local

2018-11-12 Thread
Hi Experts, When I start the Flink program locally, I found that the following exception is thrown. I do not know why it happens because it happens suddenly; some hours ago the program could start successfully. Could anyone help to explain it? Thanks a lot! 2018-11-13 14:48:

Any examples on invoke the Flink REST API post method ?

2018-11-11 Thread
Hi Experts, I am trying to trigger a savepoint from the Flink REST API on version 1.6. The document shows that I need to pass a json as a request body: { "type" : "object", "id" : "urn:jsonschema:org:apache:flink:runtime:rest:messages:job:savep
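For reference, a hedged sketch of triggering a savepoint via the Flink 1.6+ REST API. The host, port, job id, and target directory below are placeholders; the endpoint and body fields are taken from the Flink REST API documentation, and the schema snippet in the question describes the response/request types rather than the literal body to send:

```shell
# POST /jobs/:jobid/savepoints with a JSON body; returns a trigger id
# that can be polled under /jobs/:jobid/savepoints/:triggerid.
curl -X POST "http://<jobmanager-host>:8081/jobs/<job-id>/savepoints" \
  -H "Content-Type: application/json" \
  -d '{"target-directory": "hdfs:///flink/savepoints", "cancel-job": false}'
```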

Flink SQL string literal does not support double quotation?

2018-10-31 Thread
Hi Experts, When I am running the following SQL in Flink 1.6.2, I got org.apache.calcite.sql.parser.impl.ParseException: select BUYER_ID, AMOUNT, concat( from_unixtime(unix_timestamp(CREATE_TIME

Weird behavior of DATE_FORMAT UDF in Flink SQL

2018-10-30 Thread
Hi Experts, I found that DATE_FORMAT(timestamp, format) returns a TIMESTAMP type, which is weird, because normally a format result should be a string type. The document says "Formats timestamp as a string using a specified format string". But when I run it in Flink SQL, selec

Re: "405 HTTP method POST is not supported by this URL" is returned when POST a REST url on Flink on yarn

2018-10-30 Thread
367e0f548f30d98aa4efa2211e/savepoints> You can use: yarn application -status, to find the host and port of the application master (AM host & RPC Port). Best, Gary. On Wed, Sep 26, 2018 at 3:23 AM, 徐涛

Re: Savepoint failed with error "Checkpoint expired before completing"

2018-10-30 Thread
Hi Gagan, I have met the checkpoint timeout error too. In my case, it was not due to a big checkpoint size, but due to a slow sink causing high backpressure on the upstream operator; the barrier may then take a long time to arrive at the sink. Please check if it is the ca

Re: How to tune Hadoop version in flink shaded jar to Hadoop version actually used?

2018-10-30 Thread
u just need to change the node of "hadoop.version" in the parent pom file. Thanks, vino. On Mon, Oct 29, 2018, 11:23 PM, 徐涛 wrote: Hi Vino, Because I build the project with Maven, maybe I can not use the jars directly d

Re: How to tune Hadoop version in flink shaded jar to Hadoop version actually used?

2018-10-29 Thread
ttps://www.apache.org/dyn/closer.lua/flink/flink-1.6.1/flink-1.6.1-bin-hadoop27-scala_2.11.tgz Thanks, vino. On Fri, Oct 26, 2018, 徐涛 wrote:

How to tune Hadoop version in flink shaded jar to Hadoop version actually used?

2018-10-25 Thread
Hi Experts, When running flink on YARN, ClusterEntrypoint prints the system environment info. One line is "Hadoop version: 2.4.1"; I think it comes from the flink-shaded-hadoop2 jar. But actually the system Hadoop version is 2.7.2. I want to know is it OK if
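Vino's suggestion in the reply (changing the hadoop.version node in the parent pom) can equivalently be passed on the Maven command line when building Flink from source. This is a hedged sketch for Flink versions of that era (pre flink-shaded-hadoop removal); the exact property and supported versions should be checked against the build docs for your release:

```shell
# Rebuild Flink against the cluster's Hadoop version instead of the bundled one
mvn clean install -DskipTests -Dhadoop.version=2.7.2
```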

Re: Checkpoint acknowledge takes too long

2018-10-25 Thread
the checkpoint timeout is fixed. Best, Henry. On Oct 24, 2018, 10:43 PM, 徐涛 wrote: Hequn & Kien, Thanks a lot for your help, I will try it later. Best, Henry. On Oct 24, 2018, 8:18 PM, Hequn Cheng wrote:

Task manager count goes the expand then converge process when running flink on YARN

2018-10-24 Thread
Hi experts, I am running a flink job on YARN in job cluster mode; the job is divided into 2 tasks. The following are some configs of the job: parallelism.default => 16, taskmanager.numberOfTaskSlots => 8, -yn => 2. When the program starts, I found that the count

Checkpoint acknowledge takes too long

2018-10-24 Thread
Hi, I am running a flink application with parallelism 64. I left the checkpoint timeout at the default value, which is 10 minutes; the state size is less than 1MB, and I am using the FsStateBackend. The application triggers some checkpoints but all of them fail due to "Checkpoint expired be

Re: Questions in sink exactly once implementation

2018-10-13 Thread
link.apache.org/features/2018/03/01/end-to-end-exactly-once-apache-flink.html On Fri, Oct 12, 2018 at 5:21 PM, 徐涛 wrote: Hi, I am reading the book "Introduction to Apache Flink", and the book mentions

Questions in sink exactly once implementation

2018-10-12 Thread
Hi, I am reading the book "Introduction to Apache Flink", and the book mentions two ways to achieve sink exactly once: 1. The first way is to buffer all output at the sink and commit it atomically when the sink receives a checkpoint record. 2. The second way is

Re: Small checkpoint data takes too much time

2018-10-11 Thread
specially critical during backpressure. You can check the metric of "checkpointAlignmentTime" for confirmation. Best, Zhijiang. From: 徐涛, Send Time: Wednesday, Oct 10, 2018, 13:13, To: user

Small checkpoint data takes too much time

2018-10-09 Thread
Hi, I recently encountered a problem in production. I found checkpointing takes too much time, although it doesn't affect the job execution. I am using FsStateBackend, writing the data to a HDFS checkpointDataUri, with asynchronousSnapshots. I print the metric data "lastCheckpointDurat

Does Flink SQL "in" operation has length limit?

2018-09-28 Thread
Hi, When I am executing the following SQL in flink 1.6.1, an error is thrown saying that it has a support issue, but when I reduce the number of integers in the "in" clause, for example, trackId in (124427150,71648998), Flink does not complain, so I wonder is there any len

Re: LIMIT and ORDER BY in hop window is not supported?

2018-09-27 Thread
/flink/flink-docs-master/dev/table/sql.html#orderby--limit On Thu, Sep 27, 2018 at 9:05 PM, 徐涛 wrote: Hi, I want a top n result on each hop window result, but some error is thrown when I add the

LIMIT and ORDER BY in hop window is not supported?

2018-09-27 Thread
Hi, I want a top n result on each hop window result, but some error is thrown when I add the order by or the limit clause, so how do I implement such a case? Thanks a lot. SELECT trackId as id, track_title as description, count(*) as cnt FROM play WHERE appN

Re: "405 HTTP method POST is not supported by this URL" is returned when POST a REST url on Flink on yarn

2018-09-25 Thread
I think when running Flink on Yarn, you must not go through the Yarn proxy. Instead you should directly send the post request to the node on which the application master runs. When starting a Flink Yarn session via yarn-session.sh, the web interface URL is print

Re: How to join stream and batch data in Flink?

2018-09-25 Thread
the data belonging to the mysql table is just beginning to play as a stream. Thanks, vino. On Tue, Sep 25, 2018, 5:10 PM, 徐涛 wrote: Hi Vino & Hequn, I am now using the table/sql API; if I import the mysql table as a s

Re: How to join stream and batch data in Flink?

2018-09-25 Thread
implement a UDTF to access the dimension table; 3) customize the table/sql join API/statement's implementation (and change the physical plan). Thanks, vino. On Fri, Sep 21, 2018, 4:43 PM, 徐涛 wrote: Hi All, Some

"405 HTTP method POST is not supported by this URL" is returned when POST a REST url on Flink on yarn

2018-09-25 Thread
Hi All, I am trying to POST a RESTful url to generate a savepoint; the Flink version is 1.6.0. When I executed the POST locally, everything was OK, but when I POST the url on a Flink on YARN application, the following error is returned: "405 HTTP method POST is no

Re: When should the RETAIN_ON_CANCELLATION option be used?

2018-09-24 Thread
if I am wrong. Thanks. Best, Henry. On Sep 25, 2018, 2:22 PM, vino yang wrote: Hi Henry, I gave a blue comment in your original email. Thanks, vino. On Tue, Sep 25, 2018, 12:56 PM, 徐涛 wrote: Hi Vino,

Strange behavior of FsStateBackend checkpoint when local executing

2018-09-24 Thread
Hi All, I am using a FsStateBackend when executing locally, and I set DELETE_ON_CANCELLATION for the checkpoint. When I click the "stop" button in Intellij IDEA, the log shows that it has switched to the CANCELED state, but when I check the local file system, the checkpoint directory and file still exi

Re: When should the RETAIN_ON_CANCELLATION option be used?

2018-09-24 Thread
s there any probability that I do not have a checkpoint to recover from? From the latest source code, savepoint is not affected by CheckpointRetentionPolicy; it needs to be cleaned up manually. Thanks, vino. 徐涛 wrote:

Re: When should the RETAIN_ON_CANCELLATION option be used?

2018-09-24 Thread
:41, 徐涛 wrote: Hi All, In the flink document, it says DELETE_ON_CANCELLATION: "Delete the checkpoint when the job is cancelled. The checkpoint state will only be available if the job fails." What is the definition and difference between job cancel and job f

When should the RETAIN_ON_CANCELLATION option be used?

2018-09-24 Thread
Hi All, In the flink document, it says DELETE_ON_CANCELLATION: "Delete the checkpoint when the job is cancelled. The checkpoint state will only be available if the job fails." What is the definition of, and difference between, job cancellation and job failure? If I run the program on yarn,

How to join stream and batch data in Flink?

2018-09-21 Thread
Hi All, Sometimes a "dimension table" needs to be joined with the "fact table", if the data are not joined before being sent to Kafka. So if the data are joined in Flink, does the "dimension table" have to be imported as a stream, or are there some other ways to achieve it? Thanks

Re: Why FlinkKafkaConsumer08 does not support exactly once or at least once semantic?

2018-09-20 Thread
I think this part of the documentation is talking about the KafkaProducer, and you are reading the source code of the KafkaConsumer. Best, Stefan. On Sep 20, 2018, 10:48 AM, 徐涛 wrote: Hi All, In do

Re: In which case the StreamNode has multiple output edges?

2018-09-20 Thread
yang wrote: Hi tao, The Dataflow abstraction of the Flink runtime is a DAG. In a graph, there may be more than one in-edge and one out-edge. A simple example of multiple out-edges is an operator followed by multiple sinks.

Why FlinkKafkaConsumer08 does not support exactly once or at least once semantic?

2018-09-20 Thread
Hi All, In document of Flink 1.6, it says that "Before 0.9 Kafka did not provide any mechanisms to guarantee at-least-once or exactly-once semantics” I read the source code of FlinkKafkaConsumer08, and the comment says: “Please note that Flink snapshots the offsets internally as pa

In which case the StreamNode has multiple output edges?

2018-09-17 Thread
Hi All, I am reading the source code of flink. When I read the StreamGraph generation part, I found that there is a property named outEdges in StreamNode. I know there is a case where a StreamNode has multiple inEdges, but in which case does a StreamNode have multiple outEdges? Thanks a lot.

Flink application down due to RpcTimeout exception

2018-09-12 Thread
Hi All, I'm running flink 1.6 on yarn. After the program ran for a day, the flink program failed on yarn, and the error log is as follows. It seems to be due to a timeout error, but I have the following questions: 1. In which step did the flink components fail to communicate?

Re: Semantic when table joins table from window

2018-08-28 Thread
annot be applied. Have you tried to use a window join? These preserve the timestamp order. Fabian. On Tue, Aug 28, 2018, 11:42, 徐涛 wrote: Hi Hequn, You can't use window or other bounded operators af

Re: Semantic when table joins table from window

2018-08-28 Thread
windows of 15 minute size. As for choice 2: I think you need another field (for example, HOP_START) when joining the three tables: only join records in the same window. To solve your problem, I think we can do a non-window group by first and then join the three result tables. Furthermo

Can I only use checkpoints instead of savepoints in production?

2018-08-24 Thread
Hi All, I checked the documentation of Flink release 1.6 and found that I can use checkpoints to resume the program as well. As I encountered some problems when using savepoints, I have the following questions: 1. Can I use checkpoints only, and not savepoints, because they can also be used to resume progra

lack of function and low usability of provided function

2018-08-23 Thread
Hi All, I found flink lacks some basic functions, for example string split, regular expression support, json parse and extract support. These functions are used frequently in development, but they are not supported; users have to write UDFs to support this. And some of the provide

Re: Semantic when table joins table from window

2018-08-21 Thread
mple, HOP_START) when joining the three tables: only join records in the same window. To solve your problem, I think we can do a non-window group by first and then join the three result tables. Furthermore, state retention time can be set to keep state from growing larger. Best, He

Re: Semantic when table joins table from window

2018-08-21 Thread
0". 1. If you change your sql to s"SELECT article_id FROM praise GROUP BY article_id", the answer is "101,102,103". 2. If you change your sql to s"SELECT last_value(article_id) FROM praise", the answer is "100". Best, Hequn

Re: Semantic when table joins table from window

2018-08-21 Thread
ple (a growing number) of rows from praiseAggr. Best, Fabian. On Aug 21, 2018, 12:19 GMT+02:00, 徐涛 wrote: Hi All, var praiseAggr = tableEnv.sqlQuery(s"SELECT article_id FROM praise GROUP BY HOP(updated_time, INTERVAL '1

Semantic when table joins table from window

2018-08-21 Thread
Hi All, var praiseAggr = tableEnv.sqlQuery(s"SELECT article_id FROM praise GROUP BY HOP(updated_time, INTERVAL '1' SECOND,INTERVAL '3' MINUTE) , article_id" ) tableEnv.registerTable("praiseAggr", praiseAggr) var finalTable = tableEnv.sqlQuery(s”SELECT 1 FROM article a join pr

Re: Will idle state retention trigger retract in dynamic table?

2018-08-21 Thread
Best, Fabian. On Aug 21, 2018, 10:52 GMT+02:00, 徐涛 wrote: Hi Fabian, SELECT article_id FROM praise GROUP BY article_id HAVING count(1)>=10. If article_id 123 has 100 praises and remains its state in the dynamic

Re: Will idle state retention trigger retract in dynamic table?

2018-08-21 Thread
le state retention is not meant to affect the semantics of a query. The semantics of updating the result should be defined in the query, e.g., with a WHERE clause that removes all records that are older than 1 day (note, this is not supported yet). Best, Fabi

Will idle state retention trigger retract in dynamic table?

2018-08-21 Thread
Hi All, Will idle state retention trigger retract in dynamic table? Best, Henry

Re: How does flink know which data is modified in dynamic table?

2018-08-20 Thread
have this constraint (correct me if I'm wrong). Sometimes we don't have to follow MySQL. Best, Hequn. On Tue, Aug 21, 2018 at 10:21 AM, 徐涛 wrote: Hi Hequn, Maybe I did not express it clearly. I mean if onl

Re: How does flink know which data is modified in dynamic table?

2018-08-20 Thread
. Best, Henry. On Aug 21, 2018, 10:09 AM, Hequn Cheng wrote: Hi Henry, If you upsert by key 'article_id', the result is correct, i.e., the result is (a, 2, 2018-08-20 20:18:10.486). What do you think? Best, Hequn. On Tue,

Re: How does flink know which data is modified in dynamic table?

2018-08-20 Thread
0 20:18:10.486). The retract row is different from the previous row because of the time field. Of course, this problem should be fixed later. Best, Hequn. On Mon, Aug 20, 2018 at 6:43 PM, 徐涛 wrote: Hi All,

How does flink know which data is modified in dynamic table?

2018-08-20 Thread
Hi All, Like the following code: if I use a retract stream, I think Flink is able to know which item is modified (if praise has 1 items now, when one item comes into the stream, only a very small amount of data is written to the sink). var praiseAggr = tableEnv.sqlQuery(s"SELECT article_id

How does flink know which data is modified in dynamic table?

2018-08-20 Thread
Hi All, Like the following code: if I use a retract stream, I think Flink is able to know which item is modified (if praise has 1 items now, when one item comes into the stream, only a very small amount of data is written to the sink). var praiseAggr = tableEnv.sqlQuery(s"SELECT article_id,hll(uid) as PU FR

Re: Flink SQL does not support rename after cast type

2018-08-16 Thread
ast. It is caused by mixing of types; for example, the query "CASE 1 WHEN 1 THEN true WHEN 2 THEN 'string' ELSE NULL END" will throw the same exception since the types of true and 'string' are not the same. Best, Hequn. On Tue, Aug 14, 2018

Flink SQL does not support rename after cast type

2018-08-13 Thread
Hi All, I am working on a project based on Flink SQL, but found that I can't rename a column after casting. The code is as below: cast(json_type as INTEGER) as xxx. And the following exception is reported: org.apache.calcite.runtime.CalciteContextException: Fr

Re: flink requires table key when insert into upsert table sink

2018-08-10 Thread
d of Table (append-only/update, keyed/non-keyed) to an external system. Best, Fabian. On Aug 10, 2018, 17:06 GMT+02:00, 徐涛 wrote: Hi All, I am using flink 1.6 to generate some realtime programs. I want to write the output to tabl

flink requires table key when insert into upsert table sink

2018-08-10 Thread
Hi All, I am using flink 1.6 to generate some realtime programs. I want to write the output to a table sink; the code is as below. At first I used an append table sink, whose error message told me that I should use an upsert table sink, so I wrote one. But still another error "Caused by: org.ap