[jira] [Updated] (FLINK-14327) Getting "Could not forward element to next operator" error

2019-10-04 Thread ASK5 (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASK5 updated FLINK-14327:
-
Description: 
val TEMPERATURE_THRESHOLD: Double = 50.00

val see: StreamExecutionEnvironment =
  StreamExecutionEnvironment.getExecutionEnvironment

val properties = new Properties()
properties.setProperty("zookeeper.connect", "localhost:2181")
properties.setProperty("bootstrap.servers", "localhost:9092")

val src = see.addSource(new FlinkKafkaConsumer010[ObjectNode]("broadcast",
  new JSONKeyValueDeserializationSchema(false), properties)).name("kafkaSource")

case class Event(locationID: String, temp: Double)

var data = src.map { v =>
  val loc = v.get("locationID").asInstanceOf[String]
  val temperature = v.get("temp").asDouble()
  (loc, temperature)
}

data = data.keyBy(v => v._1)

data.print()

see.execute()

----

I'm getting the following error while consuming JSON from Kafka:

 
{noformat}
Exception in thread "main" org.apache.flink.runtime.client.JobExecutionException: Job execution failed
	at flinkBroadcast1$.main(flinkBroadcast1.scala:59)
	at flinkBroadcast1.main(flinkBroadcast1.scala)
Caused by: java.lang.Exception: org.apache.flink.streaming.runtime.tasks.ExceptionInChainedOperatorException: Could not forward element to next operator
...
Caused by: org.apache.flink.streaming.runtime.tasks.ExceptionInChainedOperatorException: Could not forward element to next operator
...
Caused by: java.lang.NullPointerException
{noformat}
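
For reference, the likely cause: {{JSONKeyValueDeserializationSchema}} puts the record's payload under the {{value}} node of the returned {{ObjectNode}} (and the key under {{key}}), so {{v.get("locationID")}} returns null and the subsequent field access throws the NullPointerException. A minimal sketch of a corrected map function, assuming the field names above:

{code:scala}
// Sketch only: payload fields live under the "value" node, and get() returns
// a JsonNode, so use asText() rather than asInstanceOf[String].
val data = src.map { v =>
  val value = v.get("value")
  val loc = value.get("locationID").asText()
  val temperature = value.get("temp").asDouble()
  (loc, temperature)
}
{code}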

  was:
val TEMPERATURE_THRESHOLD: Double = 50.00

val see: StreamExecutionEnvironment = 
StreamExecutionEnvironment.getExecutionEnvironment

val properties = new Properties()
properties.setProperty("zookeeper.connect", "localhost:2181")
properties.setProperty("bootstrap.servers", "localhost:9092")

val src = see.addSource(new FlinkKafkaConsumer010[ObjectNode]("broadcast",
 new JSONKeyValueDeserializationSchema(false), properties)).name("kafkaSource")
case class Event(locationID: String, temp: Double)

var data = src.map { v => {
 val loc = v.get("locationID").asInstanceOf[String]
 val temperature = v.get("temp").asDouble()
 (loc, temperature)
}}


data = data
 .keyBy(
 v => v._1
 )

data.print()

see.execute()

and I'm getting the following error while consuming json file from Kafka:-


 
{{Exception in thread "main" 
org.apache.flink.runtime.client.JobExecutionException: Job execution failed 
   at flinkBroadcast1$.main(flinkBroadcast1.scala:59)at 
flinkBroadcast1.main(flinkBroadcast1.scala)Caused by: java.lang.Exception: 
org.apache.flink.streaming.runtime.tasks.ExceptionInChainedOperatorException: 
Could not forward element to next operator...Caused by: 
org.apache.flink.streaming.runtime.tasks.ExceptionInChainedOperatorException: 
Could not forward element to next operator...Caused by: 
java.lang.NullPointerException}}


> Getting "Could not forward element to next operator" error
> --
>
> Key: FLINK-14327
> URL: https://issues.apache.org/jira/browse/FLINK-14327
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.9.0
>Reporter: ASK5
>Priority: Major
> Fix For: 1.9.0
>
> Attachments: so2.png
>
>
> val TEMPERATURE_THRESHOLD: Double = 50.00
> val see: StreamExecutionEnvironment = 
> StreamExecutionEnvironment.getExecutionEnvironment
> val properties = new Properties()
>  properties.setProperty("zookeeper.connect", "localhost:2181")
>  properties.setProperty("bootstrap.servers", "localhost:9092")
> val src = see.addSource(new FlinkKafkaConsumer010[ObjectNode]("broadcast",
>  new JSONKeyValueDeserializationSchema(false), 
> properties)).name("kafkaSource")
>  case class Event(locationID: String, temp: Double)
> var data = src.map { v => {
>  val loc = v.get("locationID").asInstanceOf[String]
>  val temperature = v.get("temp").asDouble()
>  (loc, temperature)
>  }}
> data = data
>  .keyBy(
>  v => v._1
>  )
> data.print()
> see.execute()
> ---*
> I'm getting the following error while consuming JSON from Kafka:
>  
> {{Exception in thread "main" 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed 
> at flinkBroadcast1$.main(flinkBroadcast1.scala:59) at 
> flinkBroadcast1.main(flinkBroadcast1.scala) Caused by: java.lang.Exception: 
> org.apache.flink.streaming.runtime.tasks.ExceptionInChainedOperatorException: 
> Could not forward element to next operator... Caused by: 
> org.apache.flink.streaming.runtime.tasks.ExceptionInChainedOperatorException: 
> Could not forward element to next operator... Caused by: 
> java.lang.NullPointerException}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14327) Getting "Could not forward element to next operator" error

2019-10-04 Thread ASK5 (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASK5 updated FLINK-14327:
-
Attachment: (was: so1.png)

> Getting "Could not forward element to next operator" error
> --
>
> Key: FLINK-14327
> URL: https://issues.apache.org/jira/browse/FLINK-14327
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.9.0
>Reporter: ASK5
>Priority: Major
> Fix For: 1.9.0
>
> Attachments: so2.png
>
>
> val TEMPERATURE_THRESHOLD: Double = 50.00
> val see: StreamExecutionEnvironment = 
> StreamExecutionEnvironment.getExecutionEnvironment
> val properties = new Properties()
> properties.setProperty("zookeeper.connect", "localhost:2181")
> properties.setProperty("bootstrap.servers", "localhost:9092")
> val src = see.addSource(new FlinkKafkaConsumer010[ObjectNode]("broadcast",
>  new JSONKeyValueDeserializationSchema(false), 
> properties)).name("kafkaSource")
> case class Event(locationID: String, temp: Double)
> var data = src.map { v => {
>  val loc = v.get("locationID").asInstanceOf[String]
>  val temperature = v.get("temp").asDouble()
>  (loc, temperature)
> }}
> data = data
>  .keyBy(
>  v => v._1
>  )
> data.print()
> see.execute()
> I'm getting the following error while consuming JSON from Kafka:
>  
> {{Exception in thread "main" 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed 
> at flinkBroadcast1$.main(flinkBroadcast1.scala:59) at 
> flinkBroadcast1.main(flinkBroadcast1.scala) Caused by: java.lang.Exception: 
> org.apache.flink.streaming.runtime.tasks.ExceptionInChainedOperatorException: 
> Could not forward element to next operator... Caused by: 
> org.apache.flink.streaming.runtime.tasks.ExceptionInChainedOperatorException: 
> Could not forward element to next operator... Caused by: 
> java.lang.NullPointerException}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14327) Getting "Could not forward element to next operator" error

2019-10-04 Thread ASK5 (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASK5 updated FLINK-14327:
-
Attachment: so2.png

> Getting "Could not forward element to next operator" error
> --
>
> Key: FLINK-14327
> URL: https://issues.apache.org/jira/browse/FLINK-14327
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.9.0
>Reporter: ASK5
>Priority: Major
> Fix For: 1.9.0
>
> Attachments: so2.png
>
>
> val TEMPERATURE_THRESHOLD: Double = 50.00
> val see: StreamExecutionEnvironment = 
> StreamExecutionEnvironment.getExecutionEnvironment
> val properties = new Properties()
> properties.setProperty("zookeeper.connect", "localhost:2181")
> properties.setProperty("bootstrap.servers", "localhost:9092")
> val src = see.addSource(new FlinkKafkaConsumer010[ObjectNode]("broadcast",
>  new JSONKeyValueDeserializationSchema(false), 
> properties)).name("kafkaSource")
> case class Event(locationID: String, temp: Double)
> var data = src.map { v => {
>  val loc = v.get("locationID").asInstanceOf[String]
>  val temperature = v.get("temp").asDouble()
>  (loc, temperature)
> }}
> data = data
>  .keyBy(
>  v => v._1
>  )
> data.print()
> see.execute()
> I'm getting the following error while consuming JSON from Kafka:
>  
> {{Exception in thread "main" 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed 
> at flinkBroadcast1$.main(flinkBroadcast1.scala:59) at 
> flinkBroadcast1.main(flinkBroadcast1.scala) Caused by: java.lang.Exception: 
> org.apache.flink.streaming.runtime.tasks.ExceptionInChainedOperatorException: 
> Could not forward element to next operator... Caused by: 
> org.apache.flink.streaming.runtime.tasks.ExceptionInChainedOperatorException: 
> Could not forward element to next operator... Caused by: 
> java.lang.NullPointerException}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14327) Getting "Could not forward element to next operator" error

2019-10-04 Thread ASK5 (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASK5 updated FLINK-14327:
-
Attachment: so1.png

> Getting "Could not forward element to next operator" error
> --
>
> Key: FLINK-14327
> URL: https://issues.apache.org/jira/browse/FLINK-14327
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.9.0
>Reporter: ASK5
>Priority: Major
> Fix For: 1.9.0
>
> Attachments: so1.png
>
>
> val TEMPERATURE_THRESHOLD: Double = 50.00
> val see: StreamExecutionEnvironment = 
> StreamExecutionEnvironment.getExecutionEnvironment
> val properties = new Properties()
> properties.setProperty("zookeeper.connect", "localhost:2181")
> properties.setProperty("bootstrap.servers", "localhost:9092")
> val src = see.addSource(new FlinkKafkaConsumer010[ObjectNode]("broadcast",
>  new JSONKeyValueDeserializationSchema(false), 
> properties)).name("kafkaSource")
> case class Event(locationID: String, temp: Double)
> var data = src.map { v => {
>  val loc = v.get("locationID").asInstanceOf[String]
>  val temperature = v.get("temp").asDouble()
>  (loc, temperature)
> }}
> data = data
>  .keyBy(
>  v => v._1
>  )
> data.print()
> see.execute()
> I'm getting the following error while consuming JSON from Kafka:
>  
> {{Exception in thread "main" 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed 
> at flinkBroadcast1$.main(flinkBroadcast1.scala:59) at 
> flinkBroadcast1.main(flinkBroadcast1.scala) Caused by: java.lang.Exception: 
> org.apache.flink.streaming.runtime.tasks.ExceptionInChainedOperatorException: 
> Could not forward element to next operator... Caused by: 
> org.apache.flink.streaming.runtime.tasks.ExceptionInChainedOperatorException: 
> Could not forward element to next operator... Caused by: 
> java.lang.NullPointerException}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-14327) Getting "Could not forward element to next operator" error

2019-10-04 Thread ASK5 (Jira)
ASK5 created FLINK-14327:


 Summary: Getting "Could not forward element to next operator" error
 Key: FLINK-14327
 URL: https://issues.apache.org/jira/browse/FLINK-14327
 Project: Flink
  Issue Type: Bug
  Components: API / DataStream
Affects Versions: 1.9.0
Reporter: ASK5
 Fix For: 1.9.0


val TEMPERATURE_THRESHOLD: Double = 50.00

val see: StreamExecutionEnvironment =
  StreamExecutionEnvironment.getExecutionEnvironment

val properties = new Properties()
properties.setProperty("zookeeper.connect", "localhost:2181")
properties.setProperty("bootstrap.servers", "localhost:9092")

val src = see.addSource(new FlinkKafkaConsumer010[ObjectNode]("broadcast",
  new JSONKeyValueDeserializationSchema(false), properties)).name("kafkaSource")

case class Event(locationID: String, temp: Double)

var data = src.map { v =>
  val loc = v.get("locationID").asInstanceOf[String]
  val temperature = v.get("temp").asDouble()
  (loc, temperature)
}

data = data.keyBy(v => v._1)

data.print()

see.execute()

I'm getting the following error while consuming JSON from Kafka:

{noformat}
Exception in thread "main" org.apache.flink.runtime.client.JobExecutionException: Job execution failed
	at flinkBroadcast1$.main(flinkBroadcast1.scala:59)
	at flinkBroadcast1.main(flinkBroadcast1.scala)
Caused by: java.lang.Exception: org.apache.flink.streaming.runtime.tasks.ExceptionInChainedOperatorException: Could not forward element to next operator
...
Caused by: org.apache.flink.streaming.runtime.tasks.ExceptionInChainedOperatorException: Could not forward element to next operator
...
Caused by: java.lang.NullPointerException
{noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #9760: [FLINK-13982][runtime] Implement memory calculation logics

2019-10-04 Thread GitBox
flinkbot edited a comment on issue #9760: [FLINK-13982][runtime] Implement 
memory calculation logics
URL: https://github.com/apache/flink/pull/9760#issuecomment-534565595
 
 
   
   ## CI report:
   
   * 6bc9773c54eed0b1646ec67413cb3855c1057d87 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/128922771)
   * e6213d57ff881769fcc25f4d5ec4ad3ef59830ed : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129092692)
   * 1999237e17f11e4419ca1621cd6d01be7c521893 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/129118044)
   * 870b4a6267fd38565e3b669335dbe57532920942 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129220743)
   * 20ca9135b284d94e764febbd3d49b6eeb2e90094 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/130505170)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9760: [FLINK-13982][runtime] Implement memory calculation logics

2019-10-04 Thread GitBox
flinkbot edited a comment on issue #9760: [FLINK-13982][runtime] Implement 
memory calculation logics
URL: https://github.com/apache/flink/pull/9760#issuecomment-534565595
 
 
   
   ## CI report:
   
   * 6bc9773c54eed0b1646ec67413cb3855c1057d87 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/128922771)
   * e6213d57ff881769fcc25f4d5ec4ad3ef59830ed : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129092692)
   * 1999237e17f11e4419ca1621cd6d01be7c521893 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/129118044)
   * 870b4a6267fd38565e3b669335dbe57532920942 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129220743)
   * 20ca9135b284d94e764febbd3d49b6eeb2e90094 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/130505170)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9760: [FLINK-13982][runtime] Implement memory calculation logics

2019-10-04 Thread GitBox
flinkbot edited a comment on issue #9760: [FLINK-13982][runtime] Implement 
memory calculation logics
URL: https://github.com/apache/flink/pull/9760#issuecomment-534565595
 
 
   
   ## CI report:
   
   * 6bc9773c54eed0b1646ec67413cb3855c1057d87 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/128922771)
   * e6213d57ff881769fcc25f4d5ec4ad3ef59830ed : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129092692)
   * 1999237e17f11e4419ca1621cd6d01be7c521893 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/129118044)
   * 870b4a6267fd38565e3b669335dbe57532920942 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129220743)
   * 20ca9135b284d94e764febbd3d49b6eeb2e90094 : UNKNOWN
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-13656) Upgrade Calcite dependency to 1.21

2019-10-04 Thread Kurt Young (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944978#comment-16944978
 ] 

Kurt Young commented on FLINK-13656:


[~twalthr] Sorry, didn't notice this. I will double check the sub-tasks.

> Upgrade Calcite dependency to 1.21
> --
>
> Key: FLINK-13656
> URL: https://issues.apache.org/jira/browse/FLINK-13656
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Legacy Planner, Table SQL / Planner
>Reporter: Jark Wu
>Assignee: Danny Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Umbrella issue for all tasks related to the next Calcite upgrade, the 1.21.x 
> release.
> Calcite 1.21 has been released recently; we need to upgrade to it for these 
> reasons:
> - Previously we added some temporary code to support full data types in the 
> SQL parser; since CALCITE-3213 has been resolved, we can refactor that code.
> - Some important bugs around joins that came in with Calcite 1.20 (join-like 
> expression promotion) have been fixed, such as CALCITE-3170 and CALCITE-3171.
> - Since CALCITE-2302 has been resolved, it is now possible for Flink to 
> support implicit type coercion.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-14306) flink-python build fails with No module named pkg_resources

2019-10-04 Thread Hequn Cheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944976#comment-16944976
 ] 

Hequn Cheng commented on FLINK-14306:
-

Hi [~pnowojski] [~chesnay] [~trohrmann], sorry for the trouble this has caused 
you, and many thanks for the advice.

The problem is caused by the plugin in the pom under flink-python. The plugin 
calls gen_protos.py to generate python files in pyflink.zip. This introduces 
python dependencies and causes builds to fail. These dependencies are necessary 
because we don't want to provide a semi-finished package (i.e., a package 
without the generated python files).

[~dianfu] and I discussed offline, and we think it's better to use a local 
virtualenv to solve the problem, so that the dependencies can be resolved 
automatically and we don't need to document them either. This was also 
suggested by [~pnowojski] above.

The virtualenv solution may take a couple of days (we should also take builds 
on Windows into consideration). Until then, we can create a hotfix that removes 
the plugin calling gen_protos.py, to unblock the failing builds ASAP.

What do you guys think? [~pnowojski] [~chesnay] [~trohrmann]

> flink-python build fails with No module named pkg_resources
> ---
>
> Key: FLINK-14306
> URL: https://issues.apache.org/jira/browse/FLINK-14306
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Build System
>Affects Versions: 1.10.0
>Reporter: Piotr Nowojski
>Priority: Critical
> Fix For: 1.10.0
>
>
> [Benchmark 
> builds|http://codespeed.dak8s.net:8080/job/flink-master-benchmarks/4576/console]
>  started to fail with
> {noformat}
> [INFO] Adding generated sources (java): 
> /home/jenkins/workspace/flink-master-benchmarks/flink/flink-python/target/generated-sources
> [INFO] 
> [INFO] --- exec-maven-plugin:1.5.0:exec (Protos Generation) @ 
> flink-python_2.11 ---
> Traceback (most recent call last):
>   File 
> "/home/jenkins/workspace/flink-master-benchmarks/flink/flink-python/pyflink/gen_protos.py",
>  line 33, in 
> import pkg_resources
> ImportError: No module named pkg_resources
> [ERROR] Command execution failed.
> (...)
> [INFO] flink-state-processor-api .. SUCCESS [  0.299 
> s]
> [INFO] flink-python ... FAILURE [  0.434 
> s]
> [INFO] flink-scala-shell .. SKIPPED
> {noformat}
> because of this ticket: https://issues.apache.org/jira/browse/FLINK-14018
> I think I can fix the failing benchmark builds quite easily by installing the 
> {{setuptools}} python package, so this ticket is not about that, but about 
> deciding how we should treat this kind of external dependency. I don't see 
> this dependency mentioned anywhere in the documentation ([for example 
> here|https://ci.apache.org/projects/flink/flink-docs-stable/flinkDev/building.html]).
> At the very least those external dependencies should probably be documented, 
> but I also fear that manual steps required before building Flink can become a 
> problem if they grow out of control. Some questions:
> # Do we really need this dependency?
> # Could this dependency be resolved automatically, by installing into a local 
> python virtual environment?
> # Should we document those dependencies somewhere?
> # Maybe we should not build flink-python by default?
> # Maybe we should add a pre-build script for flink-python that verifies the 
> dependencies and throws an easy-to-understand error with a hint on how to fix 
> it?
> CC [~hequn] [~dian.fu] [~trohrmann] [~jincheng]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-14324) Convert SqlCreateTable with SqlWatermark to CatalogTable

2019-10-04 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-14324:
---

Assignee: Jark Wu

> Convert SqlCreateTable with SqlWatermark to CatalogTable
> 
>
> Key: FLINK-14324
> URL: https://issues.apache.org/jira/browse/FLINK-14324
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Jark Wu
>Assignee: Jark Wu
>Priority: Major
>
> This should convert {{SqlWatermark}} into {{TableSchema}} and create 
> {{CatalogTable}}. 
> This should happen in {{SqlToOperationConverter#convert}}.
> In old planner, we can simply throw an unsupported exception in 
> {{SqlToOperationConverter#convert}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14326) Support to apply watermark assigner according to the WatermarkSpec in TableSourceTable

2019-10-04 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-14326:

Component/s: (was: Table SQL / Planner)
 Table SQL / Runtime

> Support to apply watermark assigner according to the WatermarkSpec in 
> TableSourceTable
> --
>
> Key: FLINK-14326
> URL: https://issues.apache.org/jira/browse/FLINK-14326
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Runtime
>Reporter: Jark Wu
>Priority: Major
>
> Apply the rowtime computed column (if one exists) and the watermark assigner 
> according to the {{WatermarkSpec}} in {{TableSourceTable}} during 
> {{StreamExecTableSourceScan#translateToPlan}}. Ignore the TableSource's 
> {{DefinedRowtimeAttributes}} if a {{WatermarkSpec}} exists. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-14326) Support to apply watermark assigner according to the WatermarkSpec in TableSourceTable

2019-10-04 Thread Jark Wu (Jira)
Jark Wu created FLINK-14326:
---

 Summary: Support to apply watermark assigner according to the 
WatermarkSpec in TableSourceTable
 Key: FLINK-14326
 URL: https://issues.apache.org/jira/browse/FLINK-14326
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Jark Wu


Apply the rowtime computed column (if one exists) and the watermark assigner 
according to the {{WatermarkSpec}} in {{TableSourceTable}} during 
{{StreamExecTableSourceScan#translateToPlan}}. Ignore the TableSource's 
{{DefinedRowtimeAttributes}} if a {{WatermarkSpec}} exists. 




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-14325) Convert CatalogTable to TableSourceTable with additional watermark information

2019-10-04 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-14325:
---

Assignee: Jark Wu

> Convert CatalogTable to TableSourceTable with additional watermark information
> --
>
> Key: FLINK-14325
> URL: https://issues.apache.org/jira/browse/FLINK-14325
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Jark Wu
>Assignee: Jark Wu
>Priority: Major
>
> Pass the watermark information (named {{WatermarkSpec}}) into 
> {{TableSourceTable}} when {{DatabaseCalciteSchema#convertCatalogTable}}. 
> We should also update the {{TableSourceTable#getRowType}} because the rowtime 
> field should be marked as rowtime indicator type. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-14326) Support to apply watermark assigner according to the WatermarkSpec in TableSourceTable

2019-10-04 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-14326:
---

Assignee: Jark Wu

> Support to apply watermark assigner according to the WatermarkSpec in 
> TableSourceTable
> --
>
> Key: FLINK-14326
> URL: https://issues.apache.org/jira/browse/FLINK-14326
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Runtime
>Reporter: Jark Wu
>Assignee: Jark Wu
>Priority: Major
>
> Apply the rowtime computed column (if one exists) and the watermark assigner 
> according to the {{WatermarkSpec}} in {{TableSourceTable}} during 
> {{StreamExecTableSourceScan#translateToPlan}}. Ignore the TableSource's 
> {{DefinedRowtimeAttributes}} if a {{WatermarkSpec}} exists. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-14325) Convert CatalogTable to TableSourceTable with additional watermark information

2019-10-04 Thread Jark Wu (Jira)
Jark Wu created FLINK-14325:
---

 Summary: Convert CatalogTable to TableSourceTable with additional 
watermark information
 Key: FLINK-14325
 URL: https://issues.apache.org/jira/browse/FLINK-14325
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Jark Wu


Pass the watermark information (the {{WatermarkSpec}}) into 
{{TableSourceTable}} in {{DatabaseCalciteSchema#convertCatalogTable}}. 

We should also update {{TableSourceTable#getRowType}}, because the rowtime 
field should be marked with the rowtime indicator type. 




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] Mrart removed a comment on issue #9773: [FLINK-14210][metrics]support connect timeout and write timeout confi…

2019-10-04 Thread GitBox
Mrart removed a comment on issue #9773: [FLINK-14210][metrics]support connect 
timeout and write timeout confi…
URL: https://github.com/apache/flink/pull/9773#issuecomment-536870094
 
 
   @rmetzger  Could you help review this PR?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-14324) Convert SqlCreateTable with SqlWatermark to CatalogTable

2019-10-04 Thread Jark Wu (Jira)
Jark Wu created FLINK-14324:
---

 Summary: Convert SqlCreateTable with SqlWatermark to CatalogTable
 Key: FLINK-14324
 URL: https://issues.apache.org/jira/browse/FLINK-14324
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Jark Wu


This should convert {{SqlWatermark}} into {{TableSchema}} and create 
{{CatalogTable}}. 
This should happen in {{SqlToOperationConverter#convert}}.

In the old planner, we can simply throw an unsupported-operation exception in 
{{SqlToOperationConverter#convert}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-14323) Serialize Expression to String and parse String to Expression

2019-10-04 Thread Jark Wu (Jira)
Jark Wu created FLINK-14323:
---

 Summary: Serialize Expression to String and parse String to 
Expression
 Key: FLINK-14323
 URL: https://issues.apache.org/jira/browse/FLINK-14323
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API, Table SQL / Planner
Reporter: Jark Wu


In order to persist/store the watermark strategy expression (and, in the 
future, computed column expressions), we should find a way to serialize an 
Expression to a String and parse the String back into an Expression. 

More details can be discussed under this JIRA issue, and this may need another 
FLIP. 

There are two ways:
1) introduce a {{SqlExpression}} which wraps the raw SQL string and the 
resolved DataType, and only implement asSerializableString for 
{{SqlExpression}};
2) support asSerializableString for all ResolvedExpressions.
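
A minimal sketch of option 1, with hypothetical names (an illustration of the 
idea, not a settled API):

{code:scala}
import org.apache.flink.table.types.DataType

// Wraps the raw SQL string of an expression together with its resolved type.
case class SqlExpression(sqlString: String, outputType: DataType) {
  // The raw SQL string is itself the serializable representation.
  def asSerializableString: String = sqlString
}
{code}

Parsing back would then amount to handing {{sqlString}} to the SQL parser and 
validating that the resolved type matches {{outputType}}.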





--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-14322) Add watermark information in TableSchema

2019-10-04 Thread Jark Wu (Jira)
Jark Wu created FLINK-14322:
---

 Summary: Add watermark information in TableSchema
 Key: FLINK-14322
 URL: https://issues.apache.org/jira/browse/FLINK-14322
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API, Table SQL / Planner
Reporter: Jark Wu
Assignee: Jark Wu


As discussed in FLIP-66, the watermark information should be part of 
TableSchema and exposed to connectors via CatalogTable#getTableSchema. 

We may need to introduce a {{WatermarkSpec}} class to describe the watermark 
information. 
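
A rough sketch of what such a {{WatermarkSpec}} could carry (field names are 
assumptions for illustration):

{code:scala}
import org.apache.flink.table.types.DataType

// Describes one watermark declaration from the DDL.
case class WatermarkSpec(
  rowtimeAttribute: String,           // the rowtime column, possibly a nested field
  watermarkExpressionString: String,  // serialized watermark strategy expression
  watermarkExprOutputType: DataType   // resolved output type of the expression
)
{code}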



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-14321) Support to parse watermark statement in SQL DDL

2019-10-04 Thread Jark Wu (Jira)
Jark Wu created FLINK-14321:
---

 Summary: Support to parse watermark statement in SQL DDL
 Key: FLINK-14321
 URL: https://issues.apache.org/jira/browse/FLINK-14321
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API
Reporter: Jark Wu
Assignee: Jark Wu


Support parsing the watermark syntax in SQL DDL. This can be implemented in the 
{{flink-sql-parser}} module. 

The watermark syntax is as follows:

{{WATERMARK FOR columnName AS <watermark_strategy_expression>}}

We should also do some validation during parsing, for example, whether the 
referenced rowtime field exists. We should also support referencing a nested 
field as the rowtime field. 
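
For illustration, a sketch of what a full declaration could look like under the 
proposed syntax (table, column, and connector property names are made up):

{code:scala}
// Hypothetical DDL using the proposed WATERMARK clause.
val ddl =
  """
    |CREATE TABLE user_actions (
    |  user_id STRING,
    |  action_time TIMESTAMP(3),
    |  -- declare action_time as rowtime with a 5-second bounded delay
    |  WATERMARK FOR action_time AS action_time - INTERVAL '5' SECOND
    |) WITH (
    |  'connector.type' = 'kafka'
    |)
  """.stripMargin
{code}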



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-14320) [FLIP-66] Support Time Attribute in SQL DDL

2019-10-04 Thread Jark Wu (Jira)
Jark Wu created FLINK-14320:
---

 Summary: [FLIP-66] Support Time Attribute in SQL DDL
 Key: FLINK-14320
 URL: https://issues.apache.org/jira/browse/FLINK-14320
 Project: Flink
  Issue Type: New Feature
  Components: Table SQL / API, Table SQL / Planner, Table SQL / Runtime
Reporter: Jark Wu
Assignee: Jark Wu


In Flink 1.9, we already introduced a basic SQL DDL to create a table. However, 
it doesn't yet support defining time attributes in SQL DDL. A time attribute is 
basic information required by time-based operations such as windows in both the 
Table API and SQL. That means users currently can't apply window operations to 
tables created via DDL. Meanwhile, we have received many requests from the 
community to define event time in DDL since 1.9 was released. It would be a 
great benefit to support time attributes in SQL DDL.

This is the umbrella issue to track all the sub-tasks.

Please see the FLIP-66 for more details: 
https://cwiki.apache.org/confluence/display/FLINK/FLIP-66%3A+Support+Time+Attribute+in+SQL+DDL



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #9841: [FLINK-14319][api] Register user jar files in execution environment.

2019-10-04 Thread GitBox
flinkbot edited a comment on issue #9841: [FLINK-14319][api] Register user jar 
files in execution environment.
URL: https://github.com/apache/flink/pull/9841#issuecomment-538594244
 
 
   
   ## CI report:
   
   * 3560d8fdf99dc8140520bf195a09171708c1c241 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/130495831)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #9841: [FLINK-14319][api] Register user jar files in execution environment.

2019-10-04 Thread GitBox
flinkbot commented on issue #9841: [FLINK-14319][api] Register user jar files 
in execution environment.
URL: https://github.com/apache/flink/pull/9841#issuecomment-538594244
 
 
   
   ## CI report:
   
   * 3560d8fdf99dc8140520bf195a09171708c1c241 : UNKNOWN
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Comment Edited] (FLINK-14055) Add advanced function DDL syntax "USING JAR/FILE/ACHIVE"

2019-10-04 Thread Leo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944912#comment-16944912
 ] 

Leo Zhang edited comment on FLINK-14055 at 10/4/19 11:40 PM:
-

PR on github for FLINK-14319 https://github.com/apache/flink/pull/9841 
[~hpeter][~twalthr]


was (Author: 50man):
PR on github https://github.com/apache/flink/pull/9841 [~hpeter][~twalthr]

> Add advanced function DDL syntax "USING JAR/FILE/ACHIVE"
> 
>
> Key: FLINK-14055
> URL: https://issues.apache.org/jira/browse/FLINK-14055
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Bowen Li
>Assignee: Zhenqiu Huang
>Priority: Major
>
> As FLINK-7151 adds basic function DDL to Flink, this ticket is to support 
> dynamically loading functions from external source in function DDL with 
> advanced syntax like 
>  
> {code:java}
> CREATE FUNCTION func_name as class_name USING JAR/FILE/ACHIEVE 'xxx' [, 
> JAR/FILE/ACHIEVE 'yyy'] ;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-14055) Add advanced function DDL syntax "USING JAR/FILE/ACHIVE"

2019-10-04 Thread Leo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944912#comment-16944912
 ] 

Leo Zhang commented on FLINK-14055:
---

PR on github https://github.com/apache/flink/pull/9841 [~hpeter][~twalthr]

> Add advanced function DDL syntax "USING JAR/FILE/ACHIVE"
> 
>
> Key: FLINK-14055
> URL: https://issues.apache.org/jira/browse/FLINK-14055
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Bowen Li
>Assignee: Zhenqiu Huang
>Priority: Major
>
> As FLINK-7151 adds basic function DDL to Flink, this ticket is to support 
> dynamically loading functions from external source in function DDL with 
> advanced syntax like 
>  
> {code:java}
> CREATE FUNCTION func_name as class_name USING JAR/FILE/ACHIEVE 'xxx' [, 
> JAR/FILE/ACHIEVE 'yyy'] ;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on issue #9841: [FLINK-14319][api] Register user jar files in execution environment.

2019-10-04 Thread GitBox
flinkbot commented on issue #9841: [FLINK-14319][api] Register user jar files 
in execution environment.
URL: https://github.com/apache/flink/pull/9841#issuecomment-538592727
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 3560d8fdf99dc8140520bf195a09171708c1c241 (Fri Oct 04 
23:40:21 UTC 2019)
   
   **Warnings:**
* **2 pom.xml files were touched**: Check for build and licensing issues.
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-14319).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-14319) Register user jar files in {Stream}ExecutionEnvironment

2019-10-04 Thread Leo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944911#comment-16944911
 ] 

Leo Zhang commented on FLINK-14319:
---

PR on github https://github.com/apache/flink/pull/9841

> Register user jar files in {Stream}ExecutionEnvironment 
> 
>
> Key: FLINK-14319
> URL: https://issues.apache.org/jira/browse/FLINK-14319
> Project: Flink
>  Issue Type: New Feature
>  Components: API / DataSet, API / DataStream
>Reporter: Leo Zhang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
>  I see that there are some use cases in which people want to implement their 
> own SQL application based on loading external jars. The related API proposals 
> were raised in the task FLINK-10232 Add a SQL DDL, and the related sub-task 
> FLINK-14055 is unresolved and still open. I feel it's better to split 
> FLINK-14055 into two goals: one for the DDL, and a new task for the 
> _\{Stream\}ExecutionEnvironment::registerUserJarFile()_ interface. 
>  I have implemented the interfaces for both the Java and Scala APIs, and they 
> are well tested. My implementation follows the design doc of FLINK-10232 and 
> chooses the first option from the design alternatives. I would like to share 
> my code if that's OK.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14319) Register user jar files in {Stream}ExecutionEnvironment

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-14319:
---
Labels: pull-request-available  (was: )

> Register user jar files in {Stream}ExecutionEnvironment 
> 
>
> Key: FLINK-14319
> URL: https://issues.apache.org/jira/browse/FLINK-14319
> Project: Flink
>  Issue Type: New Feature
>  Components: API / DataSet, API / DataStream
>Reporter: Leo Zhang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>
>  I see that there are some use cases in which people want to implement their 
> own SQL application based on loading external jars. The related API proposals 
> were raised in the task FLINK-10232 Add a SQL DDL, and the related sub-task 
> FLINK-14055 is unresolved and still open. I feel it's better to split 
> FLINK-14055 into two goals: one for the DDL, and a new task for the 
> _\{Stream\}ExecutionEnvironment::registerUserJarFile()_ interface. 
>  I have implemented the interfaces for both the Java and Scala APIs, and they 
> are well tested. My implementation follows the design doc of FLINK-10232 and 
> chooses the first option from the design alternatives. I would like to share 
> my code if that's OK.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] mr-cloud opened a new pull request #9841: [FLINK-14319][api] Register user jar files in execution environment.

2019-10-04 Thread GitBox
mr-cloud opened a new pull request #9841: [FLINK-14319][api] Register user jar 
files in execution environment.
URL: https://github.com/apache/flink/pull/9841
 
 
   
   
   ## What is the purpose of the change
   
   Add interfaces in {Stream}ExecutionEnvironment to allow people to register 
user jar files in an execution environment.
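
   A hypothetical usage sketch (the method name comes from this PR; the exact 
signature and the paths are illustrative assumptions only):

   ```scala
   import org.apache.flink.streaming.api.scala._

   val env = StreamExecutionEnvironment.getExecutionEnvironment
   // Register external jars (e.g. containing UDFs) so they are shipped with
   // the job; a local path and a distributed-filesystem path are both shown.
   env.registerUserJarFile("file:///tmp/my-udfs.jar")
   env.registerUserJarFile("hdfs:///user/flink/libs/my-udfs.jar")
   ```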
   
   
   ## Brief change log
 - Add interface _registerUserJarFile()_ for _StreamExecutionEnvironment_ 
both for Java and Scala API.
 - Add interface _registerUserJarFile()_ for _ExecutionEnvironment_ both 
for Java and Scala API.
   
   ## Verifying this change
   This change added tests and can be verified as follows:
 - A user jar on the local filesystem can be tested with `AddingUserJarTest.java`.
 - User jar in a distributed file system like HDFS can be tested with 
`UserJarDfsTest.java`.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes)
 - If yes, how is the feature documented? (JavaDocs)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #7598: [FLINK-11333][protobuf] First-class serializer support for Protobuf types

2019-10-04 Thread GitBox
flinkbot edited a comment on issue #7598: [FLINK-11333][protobuf] First-class 
serializer support for Protobuf types
URL: https://github.com/apache/flink/pull/7598#issuecomment-538572354
 
 
   
   ## CI report:
   
   * 35cb722954d3f608e5b4b0ad02f4e3570ad858cd : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/130486386)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #7598: [FLINK-11333][protobuf] First-class serializer support for Protobuf types

2019-10-04 Thread GitBox
flinkbot edited a comment on issue #7598: [FLINK-11333][protobuf] First-class 
serializer support for Protobuf types
URL: https://github.com/apache/flink/pull/7598#issuecomment-538572354
 
 
   
   ## CI report:
   
   * 35cb722954d3f608e5b4b0ad02f4e3570ad858cd : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/130486386)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #7598: [FLINK-11333][protobuf] First-class serializer support for Protobuf types

2019-10-04 Thread GitBox
flinkbot commented on issue #7598: [FLINK-11333][protobuf] First-class 
serializer support for Protobuf types
URL: https://github.com/apache/flink/pull/7598#issuecomment-538572354
 
 
   
   ## CI report:
   
   * 35cb722954d3f608e5b4b0ad02f4e3570ad858cd : UNKNOWN
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] ElliotVilhelm commented on issue #7598: [FLINK-11333][protobuf] First-class serializer support for Protobuf types

2019-10-04 Thread GitBox
ElliotVilhelm commented on issue #7598: [FLINK-11333][protobuf] First-class 
serializer support for Protobuf types
URL: https://github.com/apache/flink/pull/7598#issuecomment-538566954
 
 
   Any expected timeline for this feature? Would really like to use this 
instead of having to register custom protobuf serializer/deserializer. Thanks!


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-7002) Partitioning broken if enum is used in compound key specified using field expression

2019-10-04 Thread Sebastian Klemke (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-7002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944820#comment-16944820
 ] 

Sebastian Klemke commented on FLINK-7002:
-

Thanks [~NicoK] for explaining this.

> Partitioning broken if enum is used in compound key specified using field 
> expression
> 
>
> Key: FLINK-7002
> URL: https://issues.apache.org/jira/browse/FLINK-7002
> Project: Flink
>  Issue Type: Bug
>  Components: API / Type Serialization System
>Affects Versions: 1.2.0, 1.3.1
>Reporter: Sebastian Klemke
>Priority: Major
> Attachments: TestJob.java, WorkingTestJob.java, testdata.avro
>
>
> When groupBy() or keyBy() is used with multiple field expressions, at least 
> one of them being an enum type serialized using EnumTypeInfo, partitioning 
> seems random, resulting in incorrectly grouped/keyed output 
> datasets/datastreams.
> The attached Flink DataSet API jobs and the test dataset detail the issue: 
> Both jobs count (id, type) occurrences, TestJob uses field expressions to 
> group, WorkingTestJob uses a KeySelector function.
> Expected output for both is 6 records, with frequency value 100_000 each. If 
> you run in LocalEnvironment, results are in fact equivalent. But when run on 
> a cluster with 5 TaskManagers, only KeySelector function with String key 
> produces correct results whereas field expressions produce random, 
> non-repeatable, wrong results.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-9806) Add a canonical link element to documentation HTML

2019-10-04 Thread nps1337 (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-9806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944795#comment-16944795
 ] 

nps1337 edited comment on FLINK-9806 at 10/4/19 8:02 PM:
-

Thank you for your information, sir. I hope it will make me great soon.

 

 [Vegus168 LINE ID|https://www.vegus666.com/contact.html]

 


was (Author: nps1337):
Thank you for your information, sir. I hope it will make me great soon.

 

https://www.vegus666.com/contact.html]/";>vegus168 ไอดีไลน์

 

> Add a canonical link element to documentation HTML
> --
>
> Key: FLINK-9806
> URL: https://issues.apache.org/jira/browse/FLINK-9806
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.5.0
>Reporter: Patrick Lucas
>Assignee: Patrick Lucas
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.3.4, 1.4.3, 1.5.3, 1.6.0
>
>
> Flink has suffered for a while with non-optimal SEO for its documentation, 
> meaning a web search for a topic covered in the documentation often produces 
> results for many versions of Flink, even preferring older versions since 
> those pages have been around for longer.
> Using a canonical link element (see references) may alleviate this by 
> informing search engines about where to find the latest documentation (i.e. 
> pages hosted under [https://ci.apache.org/projects/flink/flink-docs-master/).]
> I think this is at least worth experimenting with, and if it doesn't cause 
> problems, even backporting it to the older release branches to eventually 
> clean up the Flink docs' SEO and converge on advertising only the latest docs 
> (unless a specific version is specified).
> References:
>  * [https://moz.com/learn/seo/canonicalization]
>  * [https://yoast.com/rel-canonical/]
>  * [https://support.google.com/webmasters/answer/139066?hl=en]
>  * [https://en.wikipedia.org/wiki/Canonical_link_element]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-9806) Add a canonical link element to documentation HTML

2019-10-04 Thread nps1337 (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-9806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944795#comment-16944795
 ] 

nps1337 edited comment on FLINK-9806 at 10/4/19 8:01 PM:
-

Thank you for your information, sir. I hope it will make me great soon.

 

https://www.vegus666.com/contact.html]/";>vegus168 ไอดีไลน์

 


was (Author: nps1337):
Thank you for your information, sir. I hope it will make me great soon.

 

[Vegus168 ไอดีไลน์|[https://www.vegus666.com/contact.html]]

> Add a canonical link element to documentation HTML
> --
>
> Key: FLINK-9806
> URL: https://issues.apache.org/jira/browse/FLINK-9806
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.5.0
>Reporter: Patrick Lucas
>Assignee: Patrick Lucas
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.3.4, 1.4.3, 1.5.3, 1.6.0
>
>
> Flink has suffered for a while with non-optimal SEO for its documentation, 
> meaning a web search for a topic covered in the documentation often produces 
> results for many versions of Flink, even preferring older versions since 
> those pages have been around for longer.
> Using a canonical link element (see references) may alleviate this by 
> informing search engines about where to find the latest documentation (i.e. 
> pages hosted under [https://ci.apache.org/projects/flink/flink-docs-master/).]
> I think this is at least worth experimenting with, and if it doesn't cause 
> problems, even backporting it to the older release branches to eventually 
> clean up the Flink docs' SEO and converge on advertising only the latest docs 
> (unless a specific version is specified).
> References:
>  * [https://moz.com/learn/seo/canonicalization]
>  * [https://yoast.com/rel-canonical/]
>  * [https://support.google.com/webmasters/answer/139066?hl=en]
>  * [https://en.wikipedia.org/wiki/Canonical_link_element]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-7002) Partitioning broken if enum is used in compound key specified using field expression

2019-10-04 Thread Nico Kruber (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-7002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nico Kruber closed FLINK-7002.
--
Resolution: Won't Fix

Actually, this is not a Flink issue, but an issue with Java enums: their 
default {{hashCode}} implementation relies on the enum instance's memory 
address and therefore may differ in each JVM.

You could instead use the enum's ordinal or its name in the key selector 
implementation.

Please also refer to this for some more info:
https://stackoverflow.com/questions/49140654/flink-error-key-group-is-not-in-keygrouprange
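
A minimal sketch of that workaround (the record type and field names are made 
up for illustration):

{code:scala}
import org.apache.flink.streaming.api.scala._

// Hypothetical record with a Java enum field (DayOfWeek is a stock Java enum).
case class Record(id: String, day: java.time.DayOfWeek)

val env = StreamExecutionEnvironment.getExecutionEnvironment
val stream = env.fromElements(Record("a", java.time.DayOfWeek.MONDAY))

// Key on the enum's name (or ordinal) instead of the enum value itself:
// a String key hashes identically in every JVM, while the enum's default
// identity-based hashCode does not.
val keyed = stream.keyBy(r => (r.id, r.day.name()))
{code}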

> Partitioning broken if enum is used in compound key specified using field 
> expression
> 
>
> Key: FLINK-7002
> URL: https://issues.apache.org/jira/browse/FLINK-7002
> Project: Flink
>  Issue Type: Bug
>  Components: API / Type Serialization System
>Affects Versions: 1.2.0, 1.3.1
>Reporter: Sebastian Klemke
>Priority: Major
> Attachments: TestJob.java, WorkingTestJob.java, testdata.avro
>
>
> When groupBy() or keyBy() is used with multiple field expressions, at least 
> one of them being an enum type serialized using EnumTypeInfo, partitioning 
> seems random, resulting in incorrectly grouped/keyed output 
> datasets/datastreams.
> The attached Flink DataSet API jobs and the test dataset detail the issue: 
> Both jobs count (id, type) occurrences, TestJob uses field expressions to 
> group, WorkingTestJob uses a KeySelector function.
> Expected output for both is 6 records, with frequency value 100_000 each. If 
> you run in LocalEnvironment, results are in fact equivalent. But when run on 
> a cluster with 5 TaskManagers, only KeySelector function with String key 
> produces correct results whereas field expressions produce random, 
> non-repeatable, wrong results.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-9806) Add a canonical link element to documentation HTML

2019-10-04 Thread nps1337 (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-9806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944795#comment-16944795
 ] 

nps1337 commented on FLINK-9806:


Thank you for your information, sir. I hope it will make me great soon.

 

[Vegus168 ไอดีไลน์|[https://www.vegus666.com/contact.html]]

> Add a canonical link element to documentation HTML
> --
>
> Key: FLINK-9806
> URL: https://issues.apache.org/jira/browse/FLINK-9806
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.5.0
>Reporter: Patrick Lucas
>Assignee: Patrick Lucas
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.3.4, 1.4.3, 1.5.3, 1.6.0
>
>
> Flink has suffered for a while with non-optimal SEO for its documentation, 
> meaning a web search for a topic covered in the documentation often produces 
> results for many versions of Flink, even preferring older versions since 
> those pages have been around for longer.
> Using a canonical link element (see references) may alleviate this by 
> informing search engines about where to find the latest documentation (i.e. 
> pages hosted under [https://ci.apache.org/projects/flink/flink-docs-master/).]
> I think this is at least worth experimenting with, and if it doesn't cause 
> problems, even backporting it to the older release branches to eventually 
> clean up the Flink docs' SEO and converge on advertising only the latest docs 
> (unless a specific version is specified).
> References:
>  * [https://moz.com/learn/seo/canonicalization]
>  * [https://yoast.com/rel-canonical/]
>  * [https://support.google.com/webmasters/answer/139066?hl=en]
>  * [https://en.wikipedia.org/wiki/Canonical_link_element]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #9840: !VERIFY ONLY! Upgrade ZK to 3.5.5

2019-10-04 Thread GitBox
flinkbot edited a comment on issue #9840: !VERIFY ONLY! Upgrade ZK to 3.5.5
URL: https://github.com/apache/flink/pull/9840#issuecomment-538507626
 
 
   
   ## CI report:
   
   * 69a880102e3f1c81e1492ec3b56b3c858a89ed54 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/130460555)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9840: !VERIFY ONLY! Upgrade ZK to 3.5.5

2019-10-04 Thread GitBox
flinkbot edited a comment on issue #9840: !VERIFY ONLY! Upgrade ZK to 3.5.5
URL: https://github.com/apache/flink/pull/9840#issuecomment-538507626
 
 
   
   ## CI report:
   
   * 69a880102e3f1c81e1492ec3b56b3c858a89ed54 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/130460555)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #9840: !VERIFY ONLY! Upgrade ZK to 3.5.5

2019-10-04 Thread GitBox
flinkbot commented on issue #9840: !VERIFY ONLY! Upgrade ZK to 3.5.5
URL: https://github.com/apache/flink/pull/9840#issuecomment-538507626
 
 
   
   ## CI report:
   
   * 69a880102e3f1c81e1492ec3b56b3c858a89ed54 : UNKNOWN
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #9840: !VERIFY ONLY! Upgrade ZK to 3.5.5

2019-10-04 Thread GitBox
flinkbot commented on issue #9840: !VERIFY ONLY! Upgrade ZK to 3.5.5
URL: https://github.com/apache/flink/pull/9840#issuecomment-538500112
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 69a880102e3f1c81e1492ec3b56b3c858a89ed54 (Fri Oct 04 
17:57:31 UTC 2019)
   
   **Warnings:**
* **13 pom.xml files were touched**: Check for build and licensing issues.
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **Invalid pull request title: No valid Jira ID provided**
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] TisonKun opened a new pull request #9840: !VERIFY ONLY! Upgrade ZK to 3.5.5

2019-10-04 Thread GitBox
TisonKun opened a new pull request #9840: !VERIFY ONLY! Upgrade ZK to 3.5.5
URL: https://github.com/apache/flink/pull/9840
 
 
   CC @StephanEwen @zentol @lamber-ken 
   
   We explicitly depend on ZK in flink-queryable-state-runtime, flink-tests and 
flink-yarn-tests to get proper ZK dependencies in the curator-test resolver (by 
default it uses 3.4.x, but we need a 3.5.x server to recognize messages like 
"create container"). Bumping curator-test to 4.2.0, together with FLINK-10052, 
can eliminate ZK dependencies outside of flink-runtime IMO.
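   
   As an illustrative sketch only (assuming the affected modules pin 
`curator-test` directly; the coordinates below are the standard Curator test 
artifact):
   ```xml
   <!-- Pin curator-test 4.2.0 so the embedded test ZooKeeper server speaks
        the 3.5.x protocol (e.g. understands "create container" messages). -->
   <dependency>
     <groupId>org.apache.curator</groupId>
     <artifactId>curator-test</artifactId>
     <version>4.2.0</version>
     <scope>test</scope>
   </dependency>
   ```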


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Resolved] (FLINK-14283) Update Kinesis consumer documentation for watermarks and event time alignment

2019-10-04 Thread Thomas Weise (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Weise resolved FLINK-14283.
--
Resolution: Done

> Update Kinesis consumer documentation for watermarks and event time alignment
> -
>
> Key: FLINK-14283
> URL: https://issues.apache.org/jira/browse/FLINK-14283
> Project: Flink
>  Issue Type: Task
>  Components: Connectors / Kinesis
>Reporter: Thomas Weise
>Assignee: Thomas Weise
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Periodic per-shard watermarking and event time alignment have been added over 
> past releases, but the docs have not been updated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] tweise merged pull request #9814: [FLINK-14283][kinesis][docs] Update Kinesis consumer docs for recent feature additions

2019-10-04 Thread GitBox
tweise merged pull request #9814: [FLINK-14283][kinesis][docs] Update Kinesis 
consumer docs for recent feature additions
URL: https://github.com/apache/flink/pull/9814
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] jgrier commented on issue #9814: [FLINK-14283][kinesis][docs] Update Kinesis consumer docs for recent feature additions

2019-10-04 Thread GitBox
jgrier commented on issue #9814: [FLINK-14283][kinesis][docs] Update Kinesis 
consumer docs for recent feature additions
URL: https://github.com/apache/flink/pull/9814#issuecomment-538486834
 
 
   @flinkbot approve all


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-14316) stuck in "Job leader ... lost leadership" error

2019-10-04 Thread Steven Zhen Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944638#comment-16944638
 ] 

Steven Zhen Wu commented on FLINK-14316:


[~trohrmann] I don't know. We are just beginning the rollout of 1.9. It will take 
some time for those large-state jobs to pick up 1.9.

> stuck in "Job leader ... lost leadership" error
> ---
>
> Key: FLINK-14316
> URL: https://issues.apache.org/jira/browse/FLINK-14316
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.7.2
>Reporter: Steven Zhen Wu
>Priority: Major
>
> This is the first exception that caused the restart loop. Later exceptions are 
> the same. The job seems to be stuck in this permanent failure state.
> {code}
> 2019-10-03 21:42:46,159 INFO  
> org.apache.flink.runtime.executiongraph.ExecutionGraph- Source: 
> clpevents -> device_filter -> processed_imps -> ios_processed_impression -> i
> mps_ts_assigner (449/1360) (d237f5e99b6a4a580498821473763edb) switched from 
> SCHEDULED to FAILED.
> java.lang.Exception: Job leader for job id ecb9ad9be934edf7b1a4f7b9dd6df365 
> lost leadership.
> at 
> org.apache.flink.runtime.taskexecutor.TaskExecutor$JobLeaderListenerImpl.lambda$jobManagerLostLeadership$1(TaskExecutor.java:1526)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:332)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:158)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.onReceive(AkkaRpcActor.java:142)
> at 
> akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:165)
> at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
> at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:95)
> at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
> at akka.actor.ActorCell.invoke(ActorCell.scala:495)
> at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
> at akka.dispatch.Mailbox.run(Mailbox.scala:224)
> at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
> at 
> scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #8217: [hotfix][flink-table] Add a space when generate query

2019-10-04 Thread GitBox
flinkbot edited a comment on issue #8217: [hotfix][flink-table] Add a space 
when generate query
URL: https://github.com/apache/flink/pull/8217#issuecomment-538423384
 
 
   
   ## CI report:
   
   * 67ed4511ea84696304aa6f93234fcece9b5b7a3c : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/130430611)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] walterddr commented on a change in pull request #9689: [FLINK-7151] add a basic function ddl

2019-10-04 Thread GitBox
walterddr commented on a change in pull request #9689: [FLINK-7151] add a basic 
function ddl
URL: https://github.com/apache/flink/pull/9689#discussion_r331571437
 
 

 ##
 File path: 
flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/api/internal/TableEnvImpl.scala
 ##
 @@ -412,6 +412,18 @@ abstract class TableEnvImpl(
   case dropTable: SqlDropTable =>
 val objectIdentifier = 
catalogManager.qualifyIdentifier(dropTable.fullTableName(): _*)
 catalogManager.dropTable(objectIdentifier, dropTable.getIfExists)
+  case createFunction: SqlCreateFunction =>
+val operation = SqlToOperationConverter
+  .convert(planner, createFunction)
+  .asInstanceOf[CreateFunctionOperation]
+val identifier = 
catalogManager.qualifyIdentifier(createFunction.fullFunctionName(): _*)
+catalogManager.createFunction(
+  operation.getCatalogFunction,
+  identifier,
+  operation.isIgnoreIfExists)
+  case dropFunction: SqlDropFunction =>
+val identifier = 
catalogManager.qualifyIdentifier(dropFunction.fullFunctionName(): _*)
+catalogManager.dropFunction(identifier, dropFunction.getIfExists)
 
 Review comment:
   Do you need to convert it to `DropFunctionOperation`? Do we foresee that, in 
the future, `DropFunctionOperation.getFunctionPath` could return something 
different from `SqlDropFunction.fullFunctionName`?
   
   e.g. will this work?
   ```
   CREATE FUNCTION default_catalog.db.func1 AS 'my.org.Func'
   DROP FUNCTION func1
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] walterddr commented on a change in pull request #9689: [FLINK-7151] add a basic function ddl

2019-10-04 Thread GitBox
walterddr commented on a change in pull request #9689: [FLINK-7151] add a basic 
function ddl
URL: https://github.com/apache/flink/pull/9689#discussion_r331564214
 
 

 ##
 File path: 
flink-table/flink-sql-parser/src/test/java/org/apache/flink/sql/parser/FlinkSqlParserImplTest.java
 ##
 @@ -529,6 +529,30 @@ public void testDropIfExists() {
check(sql, "DROP TABLE IF EXISTS `CATALOG1`.`DB1`.`TBL1`");
}
 
+   @Test
+   public void testCreateFunction() {
+   String sql = "CREATE FUNCTION catalog1.db1.function1 AS 
'org.apache.fink.function.function1'";
 
 Review comment:
   We need to add a `CREATE FUNCTION func AS '...'` case without `catalog`/`db`, 
i.e. test both the fully qualified and the plain function name.
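   
   For example, a hypothetical additional case reusing the same `check` helper:
   ```java
   @Test
   public void testCreateFunctionWithoutCatalogOrDb() {
   	String sql = "CREATE FUNCTION function1 AS 'org.apache.fink.function.function1'";
   	check(sql, "CREATE FUNCTION `FUNCTION1` AS 'org.apache.fink.function.function1'");
   }
   ```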


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] walterddr commented on a change in pull request #9689: [FLINK-7151] add a basic function ddl

2019-10-04 Thread GitBox
walterddr commented on a change in pull request #9689: [FLINK-7151] add a basic 
function ddl
URL: https://github.com/apache/flink/pull/9689#discussion_r331563611
 
 

 ##
 File path: 
flink-table/flink-sql-parser/src/test/java/org/apache/flink/sql/parser/FlinkSqlParserImplTest.java
 ##
 @@ -529,6 +529,30 @@ public void testDropIfExists() {
check(sql, "DROP TABLE IF EXISTS `CATALOG1`.`DB1`.`TBL1`");
}
 
+   @Test
+   public void testCreateFunction() {
+   String sql = "CREATE FUNCTION catalog1.db1.function1 AS 
'org.apache.fink.function.function1'";
+   check(sql, "CREATE FUNCTION `CATALOG1`.`DB1`.`FUNCTION1` AS 
'org.apache.fink.function.function1'");
+   }
+
+   @Test
+   public void testCreateFuntionIfNotExist() {
 
 Review comment:
   typo `Function`


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] walterddr commented on a change in pull request #9689: [FLINK-7151] add a basic function ddl

2019-10-04 Thread GitBox
walterddr commented on a change in pull request #9689: [FLINK-7151] add a basic 
function ddl
URL: https://github.com/apache/flink/pull/9689#discussion_r331567932
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/operations/SqlToOperationConverter.java
 ##
 @@ -143,6 +153,23 @@ private Operation convertDropTable(SqlDropTable 
sqlDropTable) {
return new DropTableOperation(sqlDropTable.fullTableName(), 
sqlDropTable.getIfExists());
}
 
+   /** Convert CREATE FUNCTION statement. */
+   private Operation convertCreateFunction(SqlCreateFunction 
sqlCreateFunction) {
+   CatalogFunction catalogFunction =
+   new CatalogFunctionImpl(
+   
sqlCreateFunction.getFunctionClassName().toValue(),
+   new HashMap());
 
 Review comment:
   Is there anything we need to set here for the properties on the catalog 
function, @bowenli86? Should we create a new constructor in 
`CatalogFunctionImpl` that has a default properties value?
   This seems to me an unnecessary API exposure, and we might have to maintain 
these dangling default values in lots of places.
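   
   A minimal sketch of the suggested overload (hypothetical; names assumed from 
the diff above):
   ```java
   // Default the properties map in one place instead of passing a fresh
   // map at every call site.
   public CatalogFunctionImpl(String className) {
   	this(className, Collections.emptyMap());
   }
   ```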


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Comment Edited] (FLINK-14055) Add advanced function DDL syntax "USING JAR/FILE/ACHIVE"

2019-10-04 Thread Zhenqiu Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944601#comment-16944601
 ] 

Zhenqiu Huang edited comment on FLINK-14055 at 10/4/19 3:43 PM:


[~50man]
If you have already finished the PoC for jar registration in the execution 
environment, please create a PR for FLINK-14319. 


was (Author: zhenqiuhuang):
[~50man]
If you already finished the jar registration in execution environment PoC, 
please create a PR for LINK-14319. 

> Add advanced function DDL syntax "USING JAR/FILE/ACHIVE"
> 
>
> Key: FLINK-14055
> URL: https://issues.apache.org/jira/browse/FLINK-14055
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Bowen Li
>Assignee: Zhenqiu Huang
>Priority: Major
>
> As FLINK-7151 adds basic function DDL to Flink, this ticket is to support 
> dynamically loading functions from external sources in function DDL with 
> advanced syntax like 
>  
> {code:java}
> CREATE FUNCTION func_name as class_name USING JAR/FILE/ACHIEVE 'xxx' [, 
> JAR/FILE/ACHIEVE 'yyy'] ;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-14055) Add advanced function DDL syntax "USING JAR/FILE/ACHIVE"

2019-10-04 Thread Zhenqiu Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944601#comment-16944601
 ] 

Zhenqiu Huang commented on FLINK-14055:
---

[~50man]
If you have already finished the PoC for jar registration in the execution 
environment, please create a PR for FLINK-14319. 

> Add advanced function DDL syntax "USING JAR/FILE/ACHIVE"
> 
>
> Key: FLINK-14055
> URL: https://issues.apache.org/jira/browse/FLINK-14055
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Bowen Li
>Assignee: Zhenqiu Huang
>Priority: Major
>
> As FLINK-7151 adds basic function DDL to Flink, this ticket is to support 
> dynamically loading functions from external sources in function DDL with 
> advanced syntax like 
>  
> {code:java}
> CREATE FUNCTION func_name as class_name USING JAR/FILE/ACHIEVE 'xxx' [, 
> JAR/FILE/ACHIEVE 'yyy'] ;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-7151) Add a basic function SQL DDL

2019-10-04 Thread Zhenqiu Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-7151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944599#comment-16944599
 ] 

Zhenqiu Huang commented on FLINK-7151:
--

[~twalthr]
Thanks for bringing this up. I joined the discussion on FLIP-69 and also aligned 
with [~Terry1897] to fold this effort into FLIP-69; basically, Terry and I will 
cooperate on the Flink SQL DDL enhancement. As for the concern about handling 
type extraction in the Scala and Java APIs, I am sorry, I was indeed not aware of 
this. I agree with disabling the CREATE FUNCTION syntax for Scala table 
environments as a temporary solution. We can revisit it after the FLIP-57 
solution is finalized. What do you think?

 [~suez1224] [~danny0405][~phoenixjiangnan]

> Add a basic function SQL DDL
> 
>
> Key: FLINK-7151
> URL: https://issues.apache.org/jira/browse/FLINK-7151
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: yuemeng
>Assignee: Zhenqiu Huang
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Based on CREATE FUNCTION and table DDL, we can register a UDF, UDAF, or UDTF using SQL:
> {code}
> CREATE FUNCTION [IF NOT EXISTS] [catalog_name.db_name.]function_name AS 
> class_name;
> DROP FUNCTION [IF EXISTS] [catalog_name.db_name.]function_name;
> ALTER FUNCTION [IF EXISTS] [catalog_name.db_name.]function_name RENAME TO 
> new_name;
> {code}
> {code}
> CREATE function 'TOPK' AS 
> 'com..aggregate.udaf.distinctUdaf.topk.ITopKUDAF';
> INSERT INTO db_sink SELECT id, TOPK(price, 5, 'DESC') FROM kafka_source GROUP 
> BY id;
> {code}
> This ticket can assume that the function class is already loaded in the 
> classpath by users. Advanced syntax, such as dynamically loading UDF libraries 
> from external locations, can be covered in a separate ticket.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-6053) Gauge should only take subclasses of Number, rather than everything

2019-10-04 Thread Till Rohrmann (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-6053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-6053:
-
Priority: Minor  (was: Major)

> Gauge should only take subclasses of Number, rather than everything
> --
>
> Key: FLINK-6053
> URL: https://issues.apache.org/jira/browse/FLINK-6053
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Metrics
>Affects Versions: 1.2.0
>Reporter: Bowen Li
>Assignee: Chesnay Schepler
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, Flink's Gauge is defined as 
> ```java
> public interface Gauge<T> extends Metric {
>   T getValue();
> }
> ```
> But it doesn't make sense to have Gauge take generic types other than Number, 
> and it blocks me from finishing FLINK-6013, because I cannot assume Gauge is 
> only about Number. So the class should be like
> ```java
> public interface Gauge<T extends Number> extends Metric {
>   T getValue();
> }
> ```
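
For illustration (not part of the original issue text), a hypothetical 
reporter-side snippet showing what the Number bound buys:

```java
// With Gauge<T extends Number>, a metrics reporter can always read the
// value numerically, without instanceof checks or casts.
Gauge<Integer> queueSize = () -> 42;  // single abstract method, so a lambda works
double reported = queueSize.getValue().doubleValue();
```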



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-14055) Add advanced function DDL syntax "USING JAR/FILE/ACHIVE"

2019-10-04 Thread Zhenqiu Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944598#comment-16944598
 ] 

Zhenqiu Huang commented on FLINK-14055:
---

[~twalthr]
Sure, I can't agree more. I think we should combine this effort within FLIP-69. 
I have already made a comment on the doc. Please follow up and discuss there.

> Add advanced function DDL syntax "USING JAR/FILE/ACHIVE"
> 
>
> Key: FLINK-14055
> URL: https://issues.apache.org/jira/browse/FLINK-14055
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Bowen Li
>Assignee: Zhenqiu Huang
>Priority: Major
>
> As FLINK-7151 adds basic function DDL to Flink, this ticket is to support 
> dynamically loading functions from external sources in function DDL with 
> advanced syntax like 
>  
> {code:java}
> CREATE FUNCTION func_name as class_name USING JAR/FILE/ACHIEVE 'xxx' [, 
> JAR/FILE/ACHIEVE 'yyy'] ;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-14055) Add advanced function DDL syntax "USING JAR/FILE/ACHIVE"

2019-10-04 Thread Leo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944595#comment-16944595
 ] 

Leo Zhang edited comment on FLINK-14055 at 10/4/19 3:28 PM:


Yeah, I totally agree with you [~twalthr] that this sub-task may be oversized, 
and it is reasonable to file a new issue to support loading external user jars. 
It will be useful and handy for end-users in many cases besides the Table 
API & SQL. I set up this goal in FLINK-14319; please discuss it there, 
[~twalthr]. 
 


was (Author: 50man):
Yeah. I totally agree with you [~twalthr] that this sub-task may be oversized. 
And it should be reasonable to fire a new issue to support loading external 
user jars. It will be useful and handy for the end-users in many cases besides 
 Table API&SQL. I set up this goal in FLINK-14319 and please discuss it there 
sir. [~twalthr]. 
 

> Add advanced function DDL syntax "USING JAR/FILE/ACHIVE"
> 
>
> Key: FLINK-14055
> URL: https://issues.apache.org/jira/browse/FLINK-14055
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Bowen Li
>Assignee: Zhenqiu Huang
>Priority: Major
>
> As FLINK-7151 adds basic function DDL to Flink, this ticket is to support 
> dynamically loading functions from external sources in function DDL with 
> advanced syntax like 
>  
> {code:java}
> CREATE FUNCTION func_name as class_name USING JAR/FILE/ACHIEVE 'xxx' [, 
> JAR/FILE/ACHIEVE 'yyy'] ;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-14055) Add advanced function DDL syntax "USING JAR/FILE/ACHIVE"

2019-10-04 Thread Leo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944595#comment-16944595
 ] 

Leo Zhang commented on FLINK-14055:
---

Yeah, I totally agree with you [~twalthr] that this sub-task may be oversized, 
and it is reasonable to file a new issue to support loading external user jars. 
It will be useful and handy for end-users in many cases besides the Table 
API & SQL. I set up this goal in FLINK-14319; please discuss it there, 
[~twalthr]. 
 

> Add advanced function DDL syntax "USING JAR/FILE/ACHIVE"
> 
>
> Key: FLINK-14055
> URL: https://issues.apache.org/jira/browse/FLINK-14055
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Bowen Li
>Assignee: Zhenqiu Huang
>Priority: Major
>
> As FLINK-7151 adds basic function DDL to Flink, this ticket is to support 
> dynamically loading functions from external sources in function DDL with 
> advanced syntax like 
>  
> {code:java}
> CREATE FUNCTION func_name as class_name USING JAR/FILE/ACHIEVE 'xxx' [, 
> JAR/FILE/ACHIEVE 'yyy'] ;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #9104: [HOTFIX][mvn] upgrade frontend-maven-plugin version to 1.7.5

2019-10-04 Thread GitBox
flinkbot edited a comment on issue #9104: [HOTFIX][mvn] upgrade 
frontend-maven-plugin version to 1.7.5
URL: https://github.com/apache/flink/pull/9104#issuecomment-510895464
 
 
   
   ## CI report:
   
   * 1d3eb46ab1b663b59439c32011c7f959b64c18d1 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119054601)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Closed] (FLINK-11340) Bump commons-configuration from 1.7 to 1.10

2019-10-04 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-11340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-11340.

Resolution: Won't Fix

I'm not aware of any concrete issue we are having with commons-configuration, 
and as far as I can tell we aren't using it directly but only transitively via 
Hadoop. I'd rather not mess with Hadoop dependencies unless absolutely 
necessary.

Closing this issue.



> Bump commons-configuration from 1.7 to 1.10
> ---
>
> Key: FLINK-11340
> URL: https://issues.apache.org/jira/browse/FLINK-11340
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Reporter: Fokko Driesprong
>Assignee: Fokko Driesprong
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Bump commons-configuration from 1.7 to 1.10



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zentol commented on issue #7499: [FLINK-11340] Bump commons-configuration from 1.7 to 1.10

2019-10-04 Thread GitBox
zentol commented on issue #7499: [FLINK-11340] Bump commons-configuration from 
1.7 to 1.10
URL: https://github.com/apache/flink/pull/7499#issuecomment-538433635
 
 
   Closing this PR, see the JIRA for details.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zentol closed pull request #7499: [FLINK-11340] Bump commons-configuration from 1.7 to 1.10

2019-10-04 Thread GitBox
zentol closed pull request #7499: [FLINK-11340] Bump commons-configuration from 
1.7 to 1.10
URL: https://github.com/apache/flink/pull/7499
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #8217: [hotfix][flink-table] Add a space when generate query

2019-10-04 Thread GitBox
flinkbot edited a comment on issue #8217: [hotfix][flink-table] Add a space 
when generate query
URL: https://github.com/apache/flink/pull/8217#issuecomment-538423384
 
 
   
   ## CI report:
   
   * 67ed4511ea84696304aa6f93234fcece9b5b7a3c : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/130430611)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zentol commented on issue #7497: [FLINK-11338] Bump maven-enforcer-plugin from 3.0.0-M1 to 3.0.0-M2

2019-10-04 Thread GitBox
zentol commented on issue #7497: [FLINK-11338] Bump maven-enforcer-plugin from 
3.0.0-M1 to 3.0.0-M2
URL: https://github.com/apache/flink/pull/7497#issuecomment-538431361
 
 
   Closing for now, see JIRA for details.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zentol closed pull request #7497: [FLINK-11338] Bump maven-enforcer-plugin from 3.0.0-M1 to 3.0.0-M2

2019-10-04 Thread GitBox
zentol closed pull request #7497: [FLINK-11338] Bump maven-enforcer-plugin from 
3.0.0-M1 to 3.0.0-M2
URL: https://github.com/apache/flink/pull/7497
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Closed] (FLINK-11338) Bump maven-enforcer-plugin from 3.0.0-M1 to 3.0.0-M2

2019-10-04 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-11338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-11338.

Resolution: Won't Fix

The plugin seems to work just fine on Java 11, and I couldn't find anything else 
of interest in the release notes.

Closing this issue for now.

> Bump maven-enforcer-plugin from 3.0.0-M1 to 3.0.0-M2
> 
>
> Key: FLINK-11338
> URL: https://issues.apache.org/jira/browse/FLINK-11338
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Reporter: Fokko Driesprong
>Assignee: Fokko Driesprong
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Bump maven-enforcer-plugin from 3.0.0-M1 to 3.0.0-M2



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-14211) Add jobManager address configuration for SqlClient

2019-10-04 Thread Timo Walther (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944562#comment-16944562
 ] 

Timo Walther edited comment on FLINK-14211 at 10/4/19 2:50 PM:
---

[~arugal] most of the deployment parameters are the ones also used for the 
Flink CLI: 
https://ci.apache.org/projects/flink/flink-docs-master/ops/cli.html#usage
So {{jobmanager: <address>}} should work.
If this works for you, can you close this issue or open a PR for documentation?


was (Author: twalthr):
[~arugal] most of the deployment parameters are the ones also used for the 
Flink CLI: 
https://ci.apache.org/projects/flink/flink-docs-master/ops/cli.html#usage
If this works for you, can you close this issue or open a PR for documentation?

> Add jobManager address configuration for SqlClient
> --
>
> Key: FLINK-14211
> URL: https://issues.apache.org/jira/browse/FLINK-14211
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Client
>Affects Versions: 1.9.0
>Reporter: zhangwei
>Priority: Trivial
>
> Add a jobmanager option to the deployment configuration, allowing SQL clients 
> to submit jobs to a remote Flink cluster.
> {code:java}
> deployment:
>   # general cluster communication timeout in ms
>   response-timeout: 5000
>   # (optional) address from cluster to gateway
>   gateway-address: ""
>   # (optional) port from cluster to gateway
>   gateway-port: 0
>   # (optional) jobmanager address
>   jobmanager: 127.0.0.1:8081
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-14211) Add jobManager address configuration for SqlClient

2019-10-04 Thread Timo Walther (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944562#comment-16944562
 ] 

Timo Walther edited comment on FLINK-14211 at 10/4/19 2:49 PM:
---

[~arugal] most of the deployment parameters are the ones also used for the 
Flink CLI: 
https://ci.apache.org/projects/flink/flink-docs-master/ops/cli.html#usage
If this works for you, can you close this issue or open a PR for documentation?


was (Author: twalthr):
[~arugal] most of the deployment parameters are the ones also used for the 
Flink CLI: 
https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html
If this works for you, can you close this issue or open a PR for documentation?

> Add jobManager address configuration for SqlClient
> --
>
> Key: FLINK-14211
> URL: https://issues.apache.org/jira/browse/FLINK-14211
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Client
>Affects Versions: 1.9.0
>Reporter: zhangwei
>Priority: Trivial
>
> Add a jobmanager option to the deployment configuration, allowing SQL clients 
> to submit jobs to a remote Flink cluster.
> {code:java}
> deployment:
>   # general cluster communication timeout in ms
>   response-timeout: 5000
>   # (optional) address from cluster to gateway
>   gateway-address: ""
>   # (optional) port from cluster to gateway
>   gateway-port: 0
>   # (optional) jobmanager address
>   jobmanager: 127.0.0.1:8081
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zentol commented on issue #7498: [FLINK-11339] Bump exec-maven-plugin from 1.5.0 to 1.6.0

2019-10-04 Thread GitBox
zentol commented on issue #7498: [FLINK-11339] Bump exec-maven-plugin from 
1.5.0 to 1.6.0
URL: https://github.com/apache/flink/pull/7498#issuecomment-538429269
 
 
   Closing for now, see JIRA for details.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Closed] (FLINK-11339) Bump exec-maven-plugin from 1.5.0 to 1.6.0

2019-10-04 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-11339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-11339.

Resolution: Won't Fix

The plugin seems to work just fine on Java 11, at least for our use-cases.

Closing this issue for now as I see no reason to upgrade for the time being.

>  Bump exec-maven-plugin from 1.5.0 to 1.6.0
> ---
>
> Key: FLINK-11339
> URL: https://issues.apache.org/jira/browse/FLINK-11339
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Reporter: Fokko Driesprong
>Assignee: Fokko Driesprong
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Bump exec-maven-plugin from 1.5.0 to 1.6.0



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zentol closed pull request #7498: [FLINK-11339] Bump exec-maven-plugin from 1.5.0 to 1.6.0

2019-10-04 Thread GitBox
zentol closed pull request #7498: [FLINK-11339] Bump exec-maven-plugin from 
1.5.0 to 1.6.0
URL: https://github.com/apache/flink/pull/7498
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-14211) Add jobManager address configuration for SqlClient

2019-10-04 Thread Timo Walther (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944562#comment-16944562
 ] 

Timo Walther commented on FLINK-14211:
--

[~arugal] most of the deployment parameters are the ones also used for the 
Flink CLI: 
https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html
If this works for you, can you close this issue or open a PR for documentation?

> Add jobManager address configuration for SqlClient
> --
>
> Key: FLINK-14211
> URL: https://issues.apache.org/jira/browse/FLINK-14211
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Client
>Affects Versions: 1.9.0
>Reporter: zhangwei
>Priority: Trivial
>
> Add a jobmanager option to the deployment configuration, allowing SQL clients 
> to submit jobs to a remote Flink cluster.
> {code:java}
> deployment:
>   # general cluster communication timeout in ms
>   response-timeout: 5000
>   # (optional) address from cluster to gateway
>   gateway-address: ""
>   # (optional) port from cluster to gateway
>   gateway-port: 0
>   # (optional) jobmanager address
>   jobmanager: 127.0.0.1:8081
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #9284: [FLINK-13502] move CatalogTableStatisticsConverter & TreeNode to correct package

2019-10-04 Thread GitBox
flinkbot edited a comment on issue #9284: [FLINK-13502] move 
CatalogTableStatisticsConverter & TreeNode to correct package
URL: https://github.com/apache/flink/pull/9284#issuecomment-516662768
 
 
   
   ## CI report:
   
   * fff36fd941c13764ad55fc85e4604ab64689aa21 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121345562)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #8217: [hotfix][flink-table] Add a space when generate query

2019-10-04 Thread GitBox
flinkbot commented on issue #8217: [hotfix][flink-table] Add a space when 
generate query
URL: https://github.com/apache/flink/pull/8217#issuecomment-538423384
 
 
   
   ## CI report:
   
   * 67ed4511ea84696304aa6f93234fcece9b5b7a3c : UNKNOWN
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-13656) Upgrade Calcite dependency to 1.21

2019-10-04 Thread Timo Walther (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944548#comment-16944548
 ] 

Timo Walther commented on FLINK-13656:
--

[~ykt836] this issue has been closed but still has sub-tasks. Are some of those 
fixed, or do they have to be moved to a new issue that upgrades to Calcite 1.22?

> Upgrade Calcite dependency to 1.21
> --
>
> Key: FLINK-13656
> URL: https://issues.apache.org/jira/browse/FLINK-13656
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Legacy Planner, Table SQL / Planner
>Reporter: Jark Wu
>Assignee: Danny Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Umbrella issue for all tasks related to the next Calcite upgrade to 1.21.x 
> release
> Calcite 1.21 has been released recently; we need to upgrade to version 1.21 
> for these reasons:
> - Previously we wrote some temporary code to support full data types in the 
> SQL parser; since CALCITE-3213 has been resolved, we can refactor that code.
> - Calcite 1.21 also fixed some important bugs around the join-like expression 
> promotion introduced in Calcite 1.20, such as CALCITE-3170 and CALCITE-3171.
> - CALCITE-2302 has been resolved, so there is now a possibility to support 
> implicit type coercion in Flink.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-13065) Document example snippet correction using KeySelector

2019-10-04 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-13065.

Fix Version/s: 1.10.0
   Resolution: Fixed

master: 28c5264db5297d1df8ba7e4b774eb485d02980db

> Document example snippet correction using KeySelector
> -
>
> Key: FLINK-13065
> URL: https://issues.apache.org/jira/browse/FLINK-13065
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream, Documentation
>Reporter: Mans Singh
>Assignee: Mans Singh
>Priority: Minor
>  Labels: correction, doc,, example, pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The broadcast state 
> [example|https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/stream/state/broadcast_state.html#provided-apis]
>  states:
>  
> {noformat}
> Starting from the stream of Items, we just need to key it by Color, as we 
> want pairs of the same color. This will make sure that elements of the same 
> color end up on the same physical machine.
> // key the shapes by color
> KeyedStream<Item, Color> colorPartitionedStream = shapeStream
> .keyBy(new KeySelector<Shape, Color>(){...});{noformat}
>  
> However, it uses shape stream and KeySelector<Shape, Color> but should use 
> KeySelector<Item, Color> to create the KeyedStream.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zentol merged pull request #8954: [FLINK-13065][doc] Corrected example snippet using KeySelector

2019-10-04 Thread GitBox
zentol merged pull request #8954: [FLINK-13065][doc] Corrected example snippet 
using KeySelector
URL: https://github.com/apache/flink/pull/8954
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-13065) Document example snippet correction using KeySelector

2019-10-04 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-13065:
-
Component/s: API / DataStream

> Document example snippet correction using KeySelector
> -
>
> Key: FLINK-13065
> URL: https://issues.apache.org/jira/browse/FLINK-13065
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream, Documentation
>Reporter: Mans Singh
>Assignee: Mans Singh
>Priority: Minor
>  Labels: correction, doc,, example, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The broadcast state 
> [example|https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/stream/state/broadcast_state.html#provided-apis]
>  states:
>  
> {noformat}
> Starting from the stream of Items, we just need to key it by Color, as we 
> want pairs of the same color. This will make sure that elements of the same 
> color end up on the same physical machine.
> // key the shapes by color
> KeyedStream<Item, Color> colorPartitionedStream = shapeStream
> .keyBy(new KeySelector<Shape, Color>(){...});{noformat}
>  
> However, it uses shape stream and KeySelector<Shape, Color> but should use 
> KeySelector<Item, Color> to create the KeyedStream.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zentol commented on issue #9104: [HOTFIX][mvn] upgrade frontend-maven-plugin version to 1.7.5

2019-10-04 Thread GitBox
zentol commented on issue #9104: [HOTFIX][mvn] upgrade frontend-maven-plugin 
version to 1.7.5
URL: https://github.com/apache/flink/pull/9104#issuecomment-538417723
 
 
   Plugin version bumps are not eligible for hotfixes. Please open a dedicated 
JIRA ticket and thoroughly explain the problem and solution.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zentol commented on issue #9259: [MINOR][docs] Doc fix for cluster_setup.md

2019-10-04 Thread GitBox
zentol commented on issue #9259: [MINOR][docs] Doc fix for cluster_setup.md
URL: https://github.com/apache/flink/pull/9259#issuecomment-538416981
 
 
   If a config option applies to all jobs, then it is not a job-level option.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zentol closed pull request #9259: [MINOR][docs] Doc fix for cluster_setup.md

2019-10-04 Thread GitBox
zentol closed pull request #9259: [MINOR][docs] Doc fix for cluster_setup.md
URL: https://github.com/apache/flink/pull/9259
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9839: [BP-1.8][FLINK-14315] Make heartbeat manager fields non-nullable

2019-10-04 Thread GitBox
flinkbot edited a comment on issue #9839: [BP-1.8][FLINK-14315] Make heartbeat 
manager fields non-nullable
URL: https://github.com/apache/flink/pull/9839#issuecomment-538387060
 
 
   
   ## CI report:
   
   * 085d38ccb49f7d3ea4acb07d44903fdc8d2c6d16 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/130412432)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9838: [BP-1.9][FLINK-14315] Make heartbeat manager fields non-nullable

2019-10-04 Thread GitBox
flinkbot edited a comment on issue #9838: [BP-1.9][FLINK-14315] Make heartbeat 
manager fields non-nullable
URL: https://github.com/apache/flink/pull/9838#issuecomment-538387018
 
 
   
   ## CI report:
   
   * f6a760af29b7e7c1512463497443699ea6454926 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/130412356)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] TisonKun closed pull request #9762: !IGNORE! ZK 3.5.5 shaded migration

2019-10-04 Thread GitBox
TisonKun closed pull request #9762: !IGNORE! ZK 3.5.5 shaded migration
URL: https://github.com/apache/flink/pull/9762
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] TisonKun commented on issue #9762: !IGNORE! ZK 3.5.5 shaded migration

2019-10-04 Thread GitBox
TisonKun commented on issue #9762: !IGNORE! ZK 3.5.5 shaded migration
URL: https://github.com/apache/flink/pull/9762#issuecomment-538412706
 
 
   Closed, as I prefer to narrow the scope.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] TisonKun commented on issue #9774: [hotfix][tools] Update comments in verify_scala_suffixes.sh

2019-10-04 Thread GitBox
TisonKun commented on issue #9774: [hotfix][tools] Update comments in 
verify_scala_suffixes.sh
URL: https://github.com/apache/flink/pull/9774#issuecomment-538412292
 
 
   @zentol yes, I just downgraded to Maven 3.2.5 recently to properly use our 
tools...
   
   Closed as won't do.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] TisonKun closed pull request #9774: [hotfix][tools] Update comments in verify_scala_suffixes.sh

2019-10-04 Thread GitBox
TisonKun closed pull request #9774: [hotfix][tools] Update comments in 
verify_scala_suffixes.sh
URL: https://github.com/apache/flink/pull/9774
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zentol commented on issue #9758: [FLINK-14185][tests]Move unstable assertion statement out of close method of QS test server.

2019-10-04 Thread GitBox
zentol commented on issue #9758: [FLINK-14185][tests]Move unstable assertion 
statement out of close method of QS test server.
URL: https://github.com/apache/flink/pull/9758#issuecomment-538411619
 
 
   What if we didn't share the stats between servers?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9837: [FLINK-14315] Make heartbeat manager fields non-nullable

2019-10-04 Thread GitBox
flinkbot edited a comment on issue #9837: [FLINK-14315] Make heartbeat manager 
fields non-nullable
URL: https://github.com/apache/flink/pull/9837#issuecomment-538378656
 
 
   
   ## CI report:
   
   * e600698ff32b962308385e8d1c9f69b7c3eeaef1 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/130409040)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9807: [FLINK-14280] Introduce DispatcherRunner

2019-10-04 Thread GitBox
flinkbot edited a comment on issue #9807: [FLINK-14280] Introduce 
DispatcherRunner 
URL: https://github.com/apache/flink/pull/9807#issuecomment-536314902
 
 
   
   ## CI report:
   
   * 9ef8fe986e4b06d6ae8512e5051933458363511c : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129597654)
   * 748954650809fd2fa18572d1c925d3b647a55906 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/129707578)
   * 1ca7b625300da1a6a522fd1ba8427f5398f61ed5 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/129834475)
   * d0bd4cbe32866ed3f6cceafe6dd65133f96350b3 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129850793)
   * bf3c8311655872a9def94d46bf1363bc89c4715f : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129867229)
   * a5b8cd87d6c2ce7a41531c34eae8ef78e1a3255f : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/130065977)
   * 15c8e97a9d68eabf2882f879ced393b90b16693f : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/130409014)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] twalthr commented on a change in pull request #9748: [FLINK-14016][python][flink-table-planner] Introduce DataStreamPythonCalc for Python function execution

2019-10-04 Thread GitBox
twalthr commented on a change in pull request #9748: 
[FLINK-14016][python][flink-table-planner] Introduce DataStreamPythonCalc for 
Python function execution
URL: https://github.com/apache/flink/pull/9748#discussion_r331506354
 
 

 ##
 File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/functions/FunctionDefinition.java
 ##
 @@ -40,6 +40,13 @@
 */
FunctionKind getKind();
 
+   /**
+* Returns the language of function this definition describes.
+*/
+   default FunctionLanguage getLanguage() {
 
 Review comment:
   From an architectural point of view, I'm unsure whether a Python function 
should be a `UserDefinedFunction`, because it is more of a system bridging 
function for executing Python UDFs.
   
   As far as I can see you already do `implements PythonFunction`; isn't that 
enough information? I still don't understand why users need to be able to 
override a `getLanguage()` method. When is this method read?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-14276) Scala quickstart project does not compile on Java9+

2019-10-04 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-14276:
-
Summary: Scala quickstart project does not compile on Java9+  (was: 
Quickstarts Scala nightly end-to-end test fails on Travis)

> Scala quickstart project does not compile on Java9+
> ---
>
> Key: FLINK-14276
> URL: https://issues.apache.org/jira/browse/FLINK-14276
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.10.0
>Reporter: Till Rohrmann
>Assignee: Chesnay Schepler
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The {{Quickstarts Scala nightly end-to-end test}} fails on Travis when 
> running the {{e2e - misc - jdk11}} profile. The failure cause is
> {code}
> 19:32:57.344 [ERROR] error: java.lang.NoClassDefFoundError: 
> javax/tools/ToolProvider
> 19:32:57.344 [INFO]   at 
> scala.reflect.io.JavaToolsPlatformArchive.iterator(ZipArchive.scala:301)
> 19:32:57.344 [INFO]   at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> 19:32:57.344 [INFO]   at 
> scala.reflect.io.AbstractFile.foreach(AbstractFile.scala:92)
> 19:32:57.344 [INFO]   at 
> scala.tools.nsc.util.DirectoryClassPath.traverse(ClassPath.scala:277)
> 19:32:57.344 [INFO]   at 
> scala.tools.nsc.util.DirectoryClassPath.x$15$lzycompute(ClassPath.scala:299)
> 19:32:57.344 [INFO]   at 
> scala.tools.nsc.util.DirectoryClassPath.x$15(ClassPath.scala:299)
> 19:32:57.344 [INFO]   at 
> scala.tools.nsc.util.DirectoryClassPath.packages$lzycompute(ClassPath.scala:299)
> 19:32:57.344 [INFO]   at 
> scala.tools.nsc.util.DirectoryClassPath.packages(ClassPath.scala:299)
> 19:32:57.344 [INFO]   at 
> scala.tools.nsc.util.DirectoryClassPath.packages(ClassPath.scala:264)
> 19:32:57.345 [INFO]   at 
> scala.tools.nsc.util.MergedClassPath$$anonfun$packages$1.apply(ClassPath.scala:358)
> 19:32:57.345 [INFO]   at 
> scala.tools.nsc.util.MergedClassPath$$anonfun$packages$1.apply(ClassPath.scala:358)
> 19:32:57.345 [INFO]   at 
> scala.collection.Iterator$class.foreach(Iterator.scala:891)
> 19:32:57.345 [INFO]   at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
> 19:32:57.345 [INFO]   at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> 19:32:57.345 [INFO]   at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> 19:32:57.345 [INFO]   at 
> scala.tools.nsc.util.MergedClassPath.packages$lzycompute(ClassPath.scala:358)
> 19:32:57.345 [INFO]   at 
> scala.tools.nsc.util.MergedClassPath.packages(ClassPath.scala:353)
> 19:32:57.345 [INFO]   at 
> scala.tools.nsc.symtab.SymbolLoaders$PackageLoader$$anonfun$doComplete$1.apply$mcV$sp(SymbolLoaders.scala:269)
> 19:32:57.345 [INFO]   at 
> scala.tools.nsc.symtab.SymbolLoaders$PackageLoader$$anonfun$doComplete$1.apply(SymbolLoaders.scala:260)
> 19:32:57.345 [INFO]   at 
> scala.tools.nsc.symtab.SymbolLoaders$PackageLoader$$anonfun$doComplete$1.apply(SymbolLoaders.scala:260)
> 19:32:57.345 [INFO]   at 
> scala.reflect.internal.SymbolTable.enteringPhase(SymbolTable.scala:235)
> 19:32:57.346 [INFO]   at 
> scala.tools.nsc.symtab.SymbolLoaders$PackageLoader.doComplete(SymbolLoaders.scala:260)
> 19:32:57.346 [INFO]   at 
> scala.tools.nsc.symtab.SymbolLoaders$SymbolLoader.complete(SymbolLoaders.scala:211)
> 19:32:57.346 [INFO]   at 
> scala.reflect.internal.Symbols$Symbol.info(Symbols.scala:1535)
> 19:32:57.346 [INFO]   at 
> scala.reflect.internal.Mirrors$RootsBase.init(Mirrors.scala:256)
> 19:32:57.346 [INFO]   at 
> scala.tools.nsc.Global.rootMirror$lzycompute(Global.scala:73)
> 19:32:57.346 [INFO]   at scala.tools.nsc.Global.rootMirror(Global.scala:71)
> 19:32:57.346 [INFO]   at scala.tools.nsc.Global.rootMirror(Global.scala:39)
> 19:32:57.346 [INFO]   at 
> scala.reflect.internal.Definitions$DefinitionsClass.ObjectClass$lzycompute(Definitions.scala:257)
> 19:32:57.346 [INFO]   at 
> scala.reflect.internal.Definitions$DefinitionsClass.ObjectClass(Definitions.scala:257)
> 19:32:57.346 [INFO]   at 
> scala.reflect.internal.Definitions$DefinitionsClass.init(Definitions.scala:1390)
> 19:32:57.346 [INFO]   at scala.tools.nsc.Global$Run.<init>(Global.scala:1242)
> 19:32:57.346 [INFO]   at scala.tools.nsc.Driver.doCompile(Driver.scala:31)
> 19:32:57.346 [INFO]   at scala.tools.nsc.MainClass.doCompile(Main.scala:23)
> 19:32:57.346 [INFO]   at scala.tools.nsc.Driver.process(Driver.scala:51)
> 19:32:57.346 [INFO]   at scala.tools.nsc.Driver.main(Driver.scala:64)
> 19:32:57.347 [INFO]   at scala.tools.nsc.Main.main(Main.scala)
> 19:32:57.347 [INFO]   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 19:32:57.347 [INFO]   at 
> java.b

[jira] [Closed] (FLINK-14276) Quickstarts Scala nightly end-to-end test fails on Travis

2019-10-04 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-14276.

Resolution: Fixed

master: 3f495d8215e1276ad91de5e4a5b6f160ba15eb13

> Quickstarts Scala nightly end-to-end test fails on Travis
> -
>
> Key: FLINK-14276
> URL: https://issues.apache.org/jira/browse/FLINK-14276
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.10.0
>Reporter: Till Rohrmann
>Assignee: Chesnay Schepler
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The {{Quickstarts Scala nightly end-to-end test}} fails on Travis when 
> running the {{e2e - misc - jdk11}} profile. The failure cause is
> {code}
> 19:32:57.344 [ERROR] error: java.lang.NoClassDefFoundError: 
> javax/tools/ToolProvider
> 19:32:57.344 [INFO]   at 
> scala.reflect.io.JavaToolsPlatformArchive.iterator(ZipArchive.scala:301)
> 19:32:57.344 [INFO]   at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> 19:32:57.344 [INFO]   at 
> scala.reflect.io.AbstractFile.foreach(AbstractFile.scala:92)
> 19:32:57.344 [INFO]   at 
> scala.tools.nsc.util.DirectoryClassPath.traverse(ClassPath.scala:277)
> 19:32:57.344 [INFO]   at 
> scala.tools.nsc.util.DirectoryClassPath.x$15$lzycompute(ClassPath.scala:299)
> 19:32:57.344 [INFO]   at 
> scala.tools.nsc.util.DirectoryClassPath.x$15(ClassPath.scala:299)
> 19:32:57.344 [INFO]   at 
> scala.tools.nsc.util.DirectoryClassPath.packages$lzycompute(ClassPath.scala:299)
> 19:32:57.344 [INFO]   at 
> scala.tools.nsc.util.DirectoryClassPath.packages(ClassPath.scala:299)
> 19:32:57.344 [INFO]   at 
> scala.tools.nsc.util.DirectoryClassPath.packages(ClassPath.scala:264)
> 19:32:57.345 [INFO]   at 
> scala.tools.nsc.util.MergedClassPath$$anonfun$packages$1.apply(ClassPath.scala:358)
> 19:32:57.345 [INFO]   at 
> scala.tools.nsc.util.MergedClassPath$$anonfun$packages$1.apply(ClassPath.scala:358)
> 19:32:57.345 [INFO]   at 
> scala.collection.Iterator$class.foreach(Iterator.scala:891)
> 19:32:57.345 [INFO]   at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
> 19:32:57.345 [INFO]   at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> 19:32:57.345 [INFO]   at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> 19:32:57.345 [INFO]   at 
> scala.tools.nsc.util.MergedClassPath.packages$lzycompute(ClassPath.scala:358)
> 19:32:57.345 [INFO]   at 
> scala.tools.nsc.util.MergedClassPath.packages(ClassPath.scala:353)
> 19:32:57.345 [INFO]   at 
> scala.tools.nsc.symtab.SymbolLoaders$PackageLoader$$anonfun$doComplete$1.apply$mcV$sp(SymbolLoaders.scala:269)
> 19:32:57.345 [INFO]   at 
> scala.tools.nsc.symtab.SymbolLoaders$PackageLoader$$anonfun$doComplete$1.apply(SymbolLoaders.scala:260)
> 19:32:57.345 [INFO]   at 
> scala.tools.nsc.symtab.SymbolLoaders$PackageLoader$$anonfun$doComplete$1.apply(SymbolLoaders.scala:260)
> 19:32:57.345 [INFO]   at 
> scala.reflect.internal.SymbolTable.enteringPhase(SymbolTable.scala:235)
> 19:32:57.346 [INFO]   at 
> scala.tools.nsc.symtab.SymbolLoaders$PackageLoader.doComplete(SymbolLoaders.scala:260)
> 19:32:57.346 [INFO]   at 
> scala.tools.nsc.symtab.SymbolLoaders$SymbolLoader.complete(SymbolLoaders.scala:211)
> 19:32:57.346 [INFO]   at 
> scala.reflect.internal.Symbols$Symbol.info(Symbols.scala:1535)
> 19:32:57.346 [INFO]   at 
> scala.reflect.internal.Mirrors$RootsBase.init(Mirrors.scala:256)
> 19:32:57.346 [INFO]   at 
> scala.tools.nsc.Global.rootMirror$lzycompute(Global.scala:73)
> 19:32:57.346 [INFO]   at scala.tools.nsc.Global.rootMirror(Global.scala:71)
> 19:32:57.346 [INFO]   at scala.tools.nsc.Global.rootMirror(Global.scala:39)
> 19:32:57.346 [INFO]   at 
> scala.reflect.internal.Definitions$DefinitionsClass.ObjectClass$lzycompute(Definitions.scala:257)
> 19:32:57.346 [INFO]   at 
> scala.reflect.internal.Definitions$DefinitionsClass.ObjectClass(Definitions.scala:257)
> 19:32:57.346 [INFO]   at 
> scala.reflect.internal.Definitions$DefinitionsClass.init(Definitions.scala:1390)
> 19:32:57.346 [INFO]   at scala.tools.nsc.Global$Run.<init>(Global.scala:1242)
> 19:32:57.346 [INFO]   at scala.tools.nsc.Driver.doCompile(Driver.scala:31)
> 19:32:57.346 [INFO]   at scala.tools.nsc.MainClass.doCompile(Main.scala:23)
> 19:32:57.346 [INFO]   at scala.tools.nsc.Driver.process(Driver.scala:51)
> 19:32:57.346 [INFO]   at scala.tools.nsc.Driver.main(Driver.scala:64)
> 19:32:57.347 [INFO]   at scala.tools.nsc.Main.main(Main.scala)
> 19:32:57.347 [INFO]   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 19:32:57.347 [INFO]   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.

[GitHub] [flink] zentol merged pull request #9833: [FLINK-14276][quickstarts] Scala quickstart compiles on JDK 11

2019-10-04 Thread GitBox
zentol merged pull request #9833: [FLINK-14276][quickstarts] Scala quickstart 
compiles on JDK 11
URL: https://github.com/apache/flink/pull/9833
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9838: [BP-1.9][FLINK-14315] Make heartbeat manager fields non-nullable

2019-10-04 Thread GitBox
flinkbot edited a comment on issue #9838: [BP-1.9][FLINK-14315] Make heartbeat 
manager fields non-nullable
URL: https://github.com/apache/flink/pull/9838#issuecomment-538387018
 
 
   
   ## CI report:
   
   * f6a760af29b7e7c1512463497443699ea6454926 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/130412356)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9839: [BP-1.8][FLINK-14315] Make heartbeat manager fields non-nullable

2019-10-04 Thread GitBox
flinkbot edited a comment on issue #9839: [BP-1.8][FLINK-14315] Make heartbeat 
manager fields non-nullable
URL: https://github.com/apache/flink/pull/9839#issuecomment-538387060
 
 
   
   ## CI report:
   
   * 085d38ccb49f7d3ea4acb07d44903fdc8d2c6d16 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/130412432)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-7151) Add a basic function SQL DDL

2019-10-04 Thread Timo Walther (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-7151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944492#comment-16944492
 ] 

Timo Walther commented on FLINK-7151:
-

Hi everyone, sorry for entering the discussion so late. Actually, I would have 
voted for creating a little FLIP for this issue because it introduces new SQL 
syntax and public API. We should ensure that this issue is also compatible with 
FLIP-32, FLIP-69, and FLIP-57. I agree that the syntax changes are minimal and 
straightforward, which is why I support the syntax in general. Next time, 
please consider adding a FLIP with a proper voting process.

We need to take temporary functions and type extraction into account. I haven't 
looked at the PR, but I could imagine that the implementation could cause some 
issues, as we currently handle type extraction differently in the Scala and 
Java APIs. This is also the reason why the new unified TableEnvironment (as 
mentioned in FLIP-32) has no generic {{registerFunction(...)}} yet. FLIP-65 
(not published yet) should solve this issue. Maybe we can disable the {{CREATE 
FUNCTION}} syntax for Scala table environments as a temporary solution?

> Add a basic function SQL DDL
> 
>
> Key: FLINK-7151
> URL: https://issues.apache.org/jira/browse/FLINK-7151
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: yuemeng
>Assignee: Zhenqiu Huang
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Based on CREATE FUNCTION and table DDL, we can register a UDF, UDAF, or UDTF 
> using SQL:
> {code}
> CREATE FUNCTION [IF NOT EXISTS] [catalog_name.db_name.]function_name AS 
> class_name;
> DROP FUNCTION [IF EXISTS] [catalog_name.db_name.]function_name;
> ALTER FUNCTION [IF EXISTS] [catalog_name.db_name.]function_name RENAME TO 
> new_name;
> {code}
> {code}
> CREATE function 'TOPK' AS 
> 'com..aggregate.udaf.distinctUdaf.topk.ITopKUDAF';
> INSERT INTO db_sink SELECT id, TOPK(price, 5, 'DESC') FROM kafka_source GROUP 
> BY id;
> {code}
> This ticket can assume that the function class is already loaded on the 
> classpath by users. Advanced syntax, such as how to dynamically load UDF 
> libraries from external locations, can be handled in a separate ticket.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
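
For readers following the DDL discussion above, a minimal sketch of how the 
proposed syntax would be exercised from the unified TableEnvironment that Timo 
mentions. This assumes the CREATE FUNCTION DDL from this ticket is available 
via {{sqlUpdate()}}; the class name com.example.TopKUdaf and the table names 
are hypothetical stand-ins for user-provided artifacts:

{code}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class FunctionDdlExample {
    public static void main(String[] args) throws Exception {
        // Unified TableEnvironment, as referenced in the comment above.
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().build());

        // Register a UDAF by class name; assumes com.example.TopKUdaf is
        // already on the classpath, as this ticket allows itself to assume.
        tEnv.sqlUpdate("CREATE FUNCTION TOPK AS 'com.example.TopKUdaf'");

        // Use the registered function like any built-in aggregate
        // (kafka_source and db_sink are assumed to be registered tables).
        tEnv.sqlUpdate(
            "INSERT INTO db_sink "
            + "SELECT id, TOPK(price, 5, 'DESC') FROM kafka_source GROUP BY id");

        tEnv.execute("function-ddl-example");
    }
}
{code}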


[GitHub] [flink] flinkbot commented on issue #9838: [BP-1.9][FLINK-14315] Make heartbeat manager fields non-nullable

2019-10-04 Thread GitBox
flinkbot commented on issue #9838: [BP-1.9][FLINK-14315] Make heartbeat manager 
fields non-nullable
URL: https://github.com/apache/flink/pull/9838#issuecomment-538387018
 
 
   
   ## CI report:
   
   * f6a760af29b7e7c1512463497443699ea6454926 : UNKNOWN
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

