[GitHub] [flink] docete commented on issue #9124: [FLINK-13280][table-planner-blink] Revert blink changes in DateTimeUt…

2019-07-16 Thread GitBox
docete commented on issue #9124: [FLINK-13280][table-planner-blink] Revert 
blink changes in DateTimeUt…
URL: https://github.com/apache/flink/pull/9124#issuecomment-512129332
 
 
   @wuchong Created a ticket to track this: 
[FLINK-13302](https://issues.apache.org/jira/browse/FLINK-13302)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9124: [FLINK-13280][table-planner-blink] Revert blink changes in DateTimeUt…

2019-07-16 Thread GitBox
flinkbot edited a comment on issue #9124: [FLINK-13280][table-planner-blink] 
Revert blink changes in DateTimeUt…
URL: https://github.com/apache/flink/pull/9124#issuecomment-511704721
 
 
   ## CI report:
   
   * bc16d0284ddcd653a6a6387782ea9f12299566c0 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119422614)
   




[jira] [Assigned] (FLINK-13301) Some PlannerExpression resultType is not consistent with Calcite Type inference

2019-07-16 Thread Jing Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhang reassigned FLINK-13301:
--

Assignee: Jing Zhang

> Some PlannerExpression resultType is not consistent with Calcite Type 
> inference
> ---
>
> Key: FLINK-13301
> URL: https://issues.apache.org/jira/browse/FLINK-13301
> Project: Flink
>  Issue Type: Task
>  Components: Table SQL / API
>Reporter: Jing Zhang
>Assignee: Jing Zhang
>Priority: Major
>
> The resultType of some PlannerExpressions is not consistent with Calcite's type 
> inference. The problem can be reproduced by running the following example:
> {code:java}
> // prepare source Data
> val testData = new mutable.MutableList[(Int)]
> testData.+=((3))
> val t = env.fromCollection(testData).toTable(tEnv).as('a)
> // register a TableSink
> val fieldNames = Array("f0")
> val fieldTypes: Array[TypeInformation[_]] = Array(Types.INT())
> //val fieldTypes: Array[TypeInformation[_]] = Array(Types.LONG())
> val sink = new MemoryTableSourceSinkUtil.UnsafeMemoryAppendTableSink
> tEnv.registerTableSink("targetTable", sink.configure(fieldNames, 
> fieldTypes))
> 
> t.select('a.floor()).insertInto("targetTable")
> env.execute()
> {code}
> The cause is that the resultType of `floor` is LONG_TYPE_INFO, while Calcite's 
> `SqlFloorFunction` infers the return type to be the type of the first argument 
> (INT in the above case).
> With `fieldTypes` set to Array(Types.INT()), the following error is thrown in 
> the compile phase.
> {code:java}
> org.apache.flink.table.api.ValidationException: Field types of query result 
> and registered TableSink [targetTable] do not match.
> Query result schema: [_c0: Long]
> TableSink schema:[f0: Integer]
>   at 
> org.apache.flink.table.sinks.TableSinkUtils$.validateSink(TableSinkUtils.scala:59)
>   at 
> org.apache.flink.table.planner.StreamPlanner$$anonfun$2.apply(StreamPlanner.scala:158)
>   at 
> org.apache.flink.table.planner.StreamPlanner$$anonfun$2.apply(StreamPlanner.scala:157)
>   at scala.Option.map(Option.scala:146)
>   at 
> org.apache.flink.table.planner.StreamPlanner.org$apache$flink$table$planner$StreamPlanner$$translate(StreamPlanner.scala:157)
>   at 
> org.apache.flink.table.planner.StreamPlanner$$anonfun$translate$1.apply(StreamPlanner.scala:129)
>   at 
> org.apache.flink.table.planner.StreamPlanner$$anonfun$translate$1.apply(StreamPlanner.scala:129)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> {code}
> If I change `fieldTypes` to Array(Types.LONG()) instead, a different error is 
> thrown at runtime.
> {code:java}
> org.apache.flink.table.api.TableException: Result field does not match 
> requested type. Requested: Long; Actual: Integer
>   at 
> org.apache.flink.table.planner.Conversions$$anonfun$generateRowConverterFunction$2.apply(Conversions.scala:103)
>   at 
> org.apache.flink.table.planner.Conversions$$anonfun$generateRowConverterFunction$2.apply(Conversions.scala:98)
>   at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
>   at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
>   at 
> org.apache.flink.table.planner.Conversions$.generateRowConverterFunction(Conversions.scala:98)
>   at 
> org.apache.flink.table.planner.DataStreamConversions$.getConversionMapper(DataStreamConversions.scala:135)
>   at 
> org.apache.flink.table.planner.DataStreamConversions$.convert(DataStreamConversions.scala:91)
> {code}
> {color:red}The same inconsistency also exists in `Floor`, `Ceil`, `Mod`, and 
> so on.{color}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-13301) Some PlannerExpression resultType is not consistent with Calcite Type inference

2019-07-16 Thread Jing Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886730#comment-16886730
 ] 

Jing Zhang commented on FLINK-13301:


[~lzljs3620320] Yes, I totally agree with you. We need a unified type inference.






[jira] [Created] (FLINK-13302) DateTimeUtils.unixDateCeil returns the same value as unixDateFloor does

2019-07-16 Thread Zhenghua Gao (JIRA)
Zhenghua Gao created FLINK-13302:


 Summary: DateTimeUtils.unixDateCeil returns the same value as 
unixDateFloor does
 Key: FLINK-13302
 URL: https://issues.apache.org/jira/browse/FLINK-13302
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Legacy Planner
Affects Versions: 1.9.0, 1.10.0
Reporter: Zhenghua Gao
Assignee: Zhenghua Gao
 Fix For: 1.9.0, 1.10.0


Internally, unixDateCeil and unixDateFloor both call julianDateFloor, passing a 
boolean parameter to differentiate the two. But unixDateCeil passes the wrong 
parameter value and therefore returns the same value as unixDateFloor does.
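A minimal Java sketch of the bug pattern described above. The method bodies below are invented for illustration (the real DateTimeUtils operates on Julian day numbers, not this toy arithmetic); what matters is the shape of the bug: both wrappers delegate to a single helper, and the ceil wrapper mistakenly passes the floor flag.

```java
// Illustrative sketch of the bug pattern; not the actual Calcite code.
public class UnixDateCeilBug {

    // Shared helper: the boolean selects the rounding direction.
    static long julianDateFloor(long millis, long millisPerDay, boolean floor) {
        long days = millis / millisPerDay;
        long rem = millis % millisPerDay;
        if (rem == 0) {
            return days; // exact day boundary: floor == ceil
        }
        return floor ? days : days + 1;
    }

    static long unixDateFloor(long millis, long millisPerDay) {
        return julianDateFloor(millis, millisPerDay, true);
    }

    // Buggy: passes the same flag as unixDateFloor, so CEIL behaves like FLOOR.
    static long unixDateCeilBuggy(long millis, long millisPerDay) {
        return julianDateFloor(millis, millisPerDay, true);
    }

    // Fixed: passes false so that partial days round up.
    static long unixDateCeilFixed(long millis, long millisPerDay) {
        return julianDateFloor(millis, millisPerDay, false);
    }

    public static void main(String[] args) {
        System.out.println(unixDateFloor(90, 60));     // 1
        System.out.println(unixDateCeilBuggy(90, 60)); // 1 -- same as floor (the bug)
        System.out.println(unixDateCeilFixed(90, 60)); // 2
    }
}
```

Because the helper is shared, a single wrong boolean at one call site silently turns CEIL into FLOOR for every date, which is why the symptom is identical return values rather than an exception.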





[GitHub] [flink] docete commented on issue #9124: [FLINK-13280][table-planner-blink] Revert blink changes in DateTimeUt…

2019-07-16 Thread GitBox
docete commented on issue #9124: [FLINK-13280][table-planner-blink] Revert 
blink changes in DateTimeUt…
URL: https://github.com/apache/flink/pull/9124#issuecomment-512127615
 
 
   > `DateTimeUtil#838` is not the same as in the flink planner. The flink planner 
passes true, whereas the blink planner passes false.
   > 
   > The others look good to me.
   
   It's a bug in Calcite Avatica: 
[CALCITE-3199](https://issues.apache.org/jira/browse/CALCITE-3199).
   I will create a ticket to fix it in the flink planner.




[jira] [Commented] (FLINK-13301) Some PlannerExpression resultType is not consistent with Calcite Type inference

2019-07-16 Thread Jingsong Lee (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886725#comment-16886725
 ] 

Jingsong Lee commented on FLINK-13301:
--

Some of the type inference in Calcite is complicated. I think the best solution 
is the new type system's inference...






[GitHub] [flink] flinkbot edited a comment on issue #8303: [FLINK-12343] [flink-yarn] add file replication config for yarn configuration

2019-07-16 Thread GitBox
flinkbot edited a comment on issue #8303: [FLINK-12343] [flink-yarn] add file 
replication config for yarn configuration
URL: https://github.com/apache/flink/pull/8303#issuecomment-511684151
 
 
   ## CI report:
   
   * 6a7ca58b4a04f6dce250045e021702e67e82b893 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119421914)
   




[jira] [Updated] (FLINK-13301) Some PlannerExpression resultType is not consistent with Calcite Type inference

2019-07-16 Thread Jing Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhang updated FLINK-13301:
---
Description: 
The resultType of some PlannerExpressions is not consistent with Calcite's type 
inference. The problem can be reproduced by running the following example:

{code:java}
// prepare source Data
val testData = new mutable.MutableList[(Int)]
testData.+=((3))
val t = env.fromCollection(testData).toTable(tEnv).as('a)

// register a TableSink
val fieldNames = Array("f0")
val fieldTypes: Array[TypeInformation[_]] = Array(Types.INT())
//val fieldTypes: Array[TypeInformation[_]] = Array(Types.LONG())
val sink = new MemoryTableSourceSinkUtil.UnsafeMemoryAppendTableSink
tEnv.registerTableSink("targetTable", sink.configure(fieldNames, 
fieldTypes))

t.select('a.floor()).insertInto("targetTable")

env.execute()
{code}

The cause is that the resultType of `floor` is LONG_TYPE_INFO, while Calcite's 
`SqlFloorFunction` infers the return type to be the type of the first argument 
(INT in the above case).
With `fieldTypes` set to Array(Types.INT()), the following error is thrown in 
the compile phase.

{code:java}
org.apache.flink.table.api.ValidationException: Field types of query result and 
registered TableSink [targetTable] do not match.
Query result schema: [_c0: Long]
TableSink schema:[f0: Integer]

at 
org.apache.flink.table.sinks.TableSinkUtils$.validateSink(TableSinkUtils.scala:59)
at 
org.apache.flink.table.planner.StreamPlanner$$anonfun$2.apply(StreamPlanner.scala:158)
at 
org.apache.flink.table.planner.StreamPlanner$$anonfun$2.apply(StreamPlanner.scala:157)
at scala.Option.map(Option.scala:146)
at 
org.apache.flink.table.planner.StreamPlanner.org$apache$flink$table$planner$StreamPlanner$$translate(StreamPlanner.scala:157)
at 
org.apache.flink.table.planner.StreamPlanner$$anonfun$translate$1.apply(StreamPlanner.scala:129)
at 
org.apache.flink.table.planner.StreamPlanner$$anonfun$translate$1.apply(StreamPlanner.scala:129)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
{code}

If I change `fieldTypes` to Array(Types.LONG()) instead, a different error is 
thrown at runtime.

{code:java}
org.apache.flink.table.api.TableException: Result field does not match 
requested type. Requested: Long; Actual: Integer

at 
org.apache.flink.table.planner.Conversions$$anonfun$generateRowConverterFunction$2.apply(Conversions.scala:103)
at 
org.apache.flink.table.planner.Conversions$$anonfun$generateRowConverterFunction$2.apply(Conversions.scala:98)
at 
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at 
org.apache.flink.table.planner.Conversions$.generateRowConverterFunction(Conversions.scala:98)
at 
org.apache.flink.table.planner.DataStreamConversions$.getConversionMapper(DataStreamConversions.scala:135)
at 
org.apache.flink.table.planner.DataStreamConversions$.convert(DataStreamConversions.scala:91)
{code}

{color:red}The same inconsistency also exists in `Floor`, `Ceil`, `Mod`, and 
so on.{color}
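The two competing inference rules can be reduced to a toy Java sketch. Everything here is invented for illustration (neither the enum nor the methods exist in Flink or Calcite); it only shows how one rule that always widens FLOOR to LONG and another that returns the argument's type produce the disagreeing schemas seen in the exceptions above.

```java
// Hypothetical sketch of the inconsistency; not actual Flink/Calcite APIs.
public class TypeInferenceMismatch {

    enum SqlType { INT, LONG }

    // PlannerExpression-style rule: FLOOR always yields LONG (LONG_TYPE_INFO).
    static SqlType plannerFloorType(SqlType arg) {
        return SqlType.LONG;
    }

    // Calcite SqlFloorFunction-style rule: return type equals the first argument's type.
    static SqlType calciteFloorType(SqlType arg) {
        return arg;
    }

    public static void main(String[] args) {
        SqlType arg = SqlType.INT;
        System.out.println("planner: " + plannerFloorType(arg)); // planner: LONG
        System.out.println("calcite: " + calciteFloorType(arg)); // calcite: INT
    }
}
```

For an INT column, one side of the pipeline produces a LONG schema and the other an INT schema, which is exactly the "Query result schema" vs. "TableSink schema" disagreement in the stack traces above.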


[GitHub] [flink] flinkbot edited a comment on issue #8303: [FLINK-12343] [flink-yarn] add file replication config for yarn configuration

2019-07-16 Thread GitBox
flinkbot edited a comment on issue #8303: [FLINK-12343] [flink-yarn] add file 
replication config for yarn configuration
URL: https://github.com/apache/flink/pull/8303#issuecomment-511684151
 
 
   ## CI report:
   
   * 6a7ca58b4a04f6dce250045e021702e67e82b893 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119394934)
   




[jira] [Created] (FLINK-13301) Some PlannerExpression resultType is not consistent with Calcite Type inference

2019-07-16 Thread Jing Zhang (JIRA)
Jing Zhang created FLINK-13301:
--

 Summary: Some PlannerExpression resultType is not consistent with 
Calcite Type inference
 Key: FLINK-13301
 URL: https://issues.apache.org/jira/browse/FLINK-13301
 Project: Flink
  Issue Type: Task
  Components: Table SQL / API
Reporter: Jing Zhang


The resultType of some PlannerExpressions is not consistent with Calcite's type 
inference. The problem can be reproduced by running the following example:

{code:java}
// prepare source Data
val testData = new mutable.MutableList[(Int)]
testData.+=((3))
val t = env.fromCollection(testData).toTable(tEnv).as('a)

// register a TableSink
val fieldNames = Array("f0")
val fieldTypes: Array[TypeInformation[_]] = Array(Types.INT())
//val fieldTypes: Array[TypeInformation[_]] = Array(Types.LONG())
val sink = new MemoryTableSourceSinkUtil.UnsafeMemoryAppendTableSink
tEnv.registerTableSink("targetTable", sink.configure(fieldNames, 
fieldTypes))

t.select('a.floor()).insertInto("targetTable")

env.execute()
{code}

The cause is that the resultType of `floor` is LONG_TYPE_INFO, while Calcite's 
`SqlFloorFunction` infers the return type to be the type of the first argument 
(INT in the above case).
With `fieldTypes` set to Array(Types.INT()), the following error is thrown in 
the compile phase.

{code:java}
org.apache.flink.table.api.ValidationException: Field types of query result and 
registered TableSink [targetTable] do not match.
Query result schema: [_c0: Long]
TableSink schema:[f0: Integer]

at 
org.apache.flink.table.sinks.TableSinkUtils$.validateSink(TableSinkUtils.scala:59)
at 
org.apache.flink.table.planner.StreamPlanner$$anonfun$2.apply(StreamPlanner.scala:158)
at 
org.apache.flink.table.planner.StreamPlanner$$anonfun$2.apply(StreamPlanner.scala:157)
at scala.Option.map(Option.scala:146)
at 
org.apache.flink.table.planner.StreamPlanner.org$apache$flink$table$planner$StreamPlanner$$translate(StreamPlanner.scala:157)
at 
org.apache.flink.table.planner.StreamPlanner$$anonfun$translate$1.apply(StreamPlanner.scala:129)
at 
org.apache.flink.table.planner.StreamPlanner$$anonfun$translate$1.apply(StreamPlanner.scala:129)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
{code}

If I change `fieldTypes` to Array(Types.LONG()) instead, a different error is 
thrown at runtime.

{code:java}
org.apache.flink.table.api.TableException: Result field does not match 
requested type. Requested: Long; Actual: Integer

at 
org.apache.flink.table.planner.Conversions$$anonfun$generateRowConverterFunction$2.apply(Conversions.scala:103)
at 
org.apache.flink.table.planner.Conversions$$anonfun$generateRowConverterFunction$2.apply(Conversions.scala:98)
at 
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at 
org.apache.flink.table.planner.Conversions$.generateRowConverterFunction(Conversions.scala:98)
at 
org.apache.flink.table.planner.DataStreamConversions$.getConversionMapper(DataStreamConversions.scala:135)
at 
org.apache.flink.table.planner.DataStreamConversions$.convert(DataStreamConversions.scala:91)
{code}

The same inconsistency also exists in `Floor`, `Ceil`, `Mod`, and so on.





[GitHub] [flink] flinkbot edited a comment on issue #9053: [FLINK-13182][Table SQL / Client] Fix Spelling mistake in flink-table

2019-07-16 Thread GitBox
flinkbot edited a comment on issue #9053: [FLINK-13182][Table SQL / Client] Fix 
Spelling mistake in flink-table
URL: https://github.com/apache/flink/pull/9053#issuecomment-511282008
 
 
   ## CI report:
   
   * 3d8aa4ead36ebc700ba46936fe009a512b0e69ba : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119420223)
   




[GitHub] [flink] wuchong commented on a change in pull request #8966: [FLINK-13074][table-planner-blink] Add PartitionableTableSink bridge logic to flink&blink …

2019-07-16 Thread GitBox
wuchong commented on a change in pull request #8966: 
[FLINK-13074][table-planner-blink] Add PartitionableTableSink bridge logic to 
flink&blink …
URL: https://github.com/apache/flink/pull/8966#discussion_r304232880
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/runtime/batch/sql/PartitionableSinkITCase.scala
 ##
 @@ -0,0 +1,278 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.runtime.batch.sql
+
+import org.apache.flink.api.common.ExecutionConfig
+import org.apache.flink.api.common.typeinfo.TypeInformation
+import org.apache.flink.api.common.typeutils.TypeSerializer
+import org.apache.flink.api.java.typeutils.RowTypeInfo
+import org.apache.flink.configuration.Configuration
+import org.apache.flink.sql.parser.impl.FlinkSqlParserImpl
+import org.apache.flink.sql.parser.validate.FlinkSqlConformance
+import org.apache.flink.streaming.api.datastream.{DataStream, DataStreamSink}
+import org.apache.flink.streaming.api.functions.sink.RichSinkFunction
+import org.apache.flink.table.api.{ExecutionConfigOptions, TableConfig, 
TableException, TableSchema}
+import org.apache.flink.table.calcite.CalciteConfig
+import org.apache.flink.table.runtime.batch.sql.PartitionableSinkITCase._
+import org.apache.flink.table.runtime.utils.BatchTestBase
+import org.apache.flink.table.runtime.utils.BatchTestBase.row
+import org.apache.flink.table.runtime.utils.TestData._
+import org.apache.flink.table.sinks.{PartitionableTableSink, StreamTableSink, 
TableSink}
+import org.apache.flink.table.types.logical.{BigIntType, IntType, VarCharType}
+import org.apache.flink.types.Row
+
+import org.apache.calcite.config.Lex
+import org.apache.calcite.sql.parser.SqlParser
+import org.junit.Assert._
+import org.junit.{Before, Test}
+
+import java.util.concurrent.LinkedBlockingQueue
+import java.util.{LinkedList => JLinkedList, List => JList, Map => JMap}
+
+import scala.collection.JavaConversions._
+import scala.collection.Seq
+
+/**
+  * Test cases for [[org.apache.flink.table.sinks.PartitionableTableSink]].
+  */
+class PartitionableSinkITCase extends BatchTestBase {
 
 Review comment:
   Why don't we have `PartitionableSinkITCase` in flink planner? 




[GitHub] [flink] wuchong commented on a change in pull request #8966: [FLINK-13074][table-planner-blink] Add PartitionableTableSink bridge logic to flink&blink …

2019-07-16 Thread GitBox
wuchong commented on a change in pull request #8966: 
[FLINK-13074][table-planner-blink] Add PartitionableTableSink bridge logic to 
flink&blink …
URL: https://github.com/apache/flink/pull/8966#discussion_r304232649
 
 

 ##
 File path: 
flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/StreamPlanner.scala
 ##
 @@ -101,17 +104,19 @@ class StreamPlanner(
 val parsed = planner.parse(stmt)
 
 parsed match {
-  case insert: SqlInsert =>
+  case insert: RichSqlInsert =>
 val targetColumnList = insert.getTargetColumnList
 if (targetColumnList != null && insert.getTargetColumnList.size() != 
0) {
   throw new ValidationException("Partial inserts are not supported")
 }
 // get name of sink table
 val targetTablePath = 
insert.getTargetTable.asInstanceOf[SqlIdentifier].names
+val staticPartitions = insert.getStaticPartitionKVs
 
 List(new CatalogSinkModifyOperation(targetTablePath,
   SqlToOperationConverter.convert(planner,
-insert.getSource).asInstanceOf[PlannerQueryOperation])
+insert.getSource).asInstanceOf[PlannerQueryOperation],
+  staticPartitions)
 
 Review comment:
   Can we keep this in sync with the blink planner? 




[GitHub] [flink] wuchong commented on a change in pull request #8966: [FLINK-13074][table-planner-blink] Add PartitionableTableSink bridge logic to flink&blink …

2019-07-16 Thread GitBox
wuchong commented on a change in pull request #8966: 
[FLINK-13074][table-planner-blink] Add PartitionableTableSink bridge logic to 
flink&blink …
URL: https://github.com/apache/flink/pull/8966#discussion_r304232440
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/runtime/utils/BatchTestBase.scala
 ##
 @@ -65,6 +66,12 @@ class BatchTestBase extends BatchAbstractTestBase {
   val LINE_COL_TWICE_PATTERN: Pattern = Pattern.compile("(?s)From line 
([0-9]+),"
 + " column ([0-9]+) to line ([0-9]+), column ([0-9]+): (.*)")
 
+  /**
+* Subclass should overwrite this method if we want to overwrite 
configuration during
+* sql parse to sql to rel conversion phrase.
+*/
+  protected def getTableConfig: TableConfig = new TableConfig
 
 Review comment:
   It is confusing that we expose the creation of a new TableConfig here, because we already use `tEnv.getConfig` to set a lot of configurations in `before()`. If we want to set SQL parser configurations, we could instead override the `before()` method and add the parser configs to `conf`.




[GitHub] [flink] wuchong commented on a change in pull request #8966: [FLINK-13074][table-planner-blink] Add PartitionableTableSink bridge logic to flink&blink …

2019-07-16 Thread GitBox
wuchong commented on a change in pull request #8966: 
[FLINK-13074][table-planner-blink] Add PartitionableTableSink bridge logic to 
flink&blink …
URL: https://github.com/apache/flink/pull/8966#discussion_r304215764
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/plan/rules/physical/batch/BatchExecSinkRule.scala
 ##
 @@ -35,8 +41,33 @@ class BatchExecSinkRule extends ConverterRule(
   def convert(rel: RelNode): RelNode = {
 val sinkNode = rel.asInstanceOf[FlinkLogicalSink]
 val newTrait = rel.getTraitSet.replace(FlinkConventions.BATCH_PHYSICAL)
-// TODO Take PartitionableSink into consideration after FLINK-11993 is done
-val newInput = RelOptRule.convert(sinkNode.getInput, 
FlinkConventions.BATCH_PHYSICAL)
+var requiredTraitSet = 
sinkNode.getInput.getTraitSet.replace(FlinkConventions.BATCH_PHYSICAL)
+sinkNode.sink match {
+  case partitionSink: PartitionableTableSink
+if partitionSink.getPartitionFieldNames != null &&
+  partitionSink.getPartitionFieldNames.nonEmpty =>
+val partitionIndices = partitionSink
+  .getPartitionFieldNames
+  .map(partitionSink.getTableSchema.getFieldNames.indexOf(_))
+// validate
+partitionIndices.foreach { idx =>
+  if (idx < 0) {
+throw new TableException("Partition fields must be in the schema.")
 
 Review comment:
   Can we give a more detailed exception that says which partition field is not in the schema? It would also be better to print the table sink name if we can. 
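A minimal sketch of what such a validation message could look like, in plain Java for illustration (the method and sink names are hypothetical, and a plain `IllegalArgumentException` stands in for Flink's `TableException`):

```java
import java.util.Arrays;
import java.util.List;

public class PartitionFieldValidation {

    // Validates that every partition field exists in the schema and, if not,
    // reports exactly which field is missing and which sink is affected.
    static void validatePartitionFields(String sinkName,
                                        List<String> schemaFields,
                                        List<String> partitionFields) {
        for (String field : partitionFields) {
            if (!schemaFields.contains(field)) {
                throw new IllegalArgumentException(
                    "Partition field '" + field + "' of table sink '" + sinkName
                        + "' must be in the schema " + schemaFields + ".");
            }
        }
    }

    public static void main(String[] args) {
        List<String> schema = Arrays.asList("a", "b", "c");
        try {
            validatePartitionFields("sinkTable", schema, Arrays.asList("a", "d"));
        } catch (IllegalArgumentException e) {
            // Prints which field ('d') is missing and names the sink.
            System.out.println(e.getMessage());
        }
    }
}
```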




[GitHub] [flink] wuchong commented on a change in pull request #8966: [FLINK-13074][table-planner-blink] Add PartitionableTableSink bridge logic to flink&blink …

2019-07-16 Thread GitBox
wuchong commented on a change in pull request #8966: 
[FLINK-13074][table-planner-blink] Add PartitionableTableSink bridge logic to 
flink&blink …
URL: https://github.com/apache/flink/pull/8966#discussion_r304235151
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/runtime/batch/sql/PartitionableSinkITCase.scala
 ##
 @@ -0,0 +1,278 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.runtime.batch.sql
+
+import org.apache.flink.api.common.ExecutionConfig
+import org.apache.flink.api.common.typeinfo.TypeInformation
+import org.apache.flink.api.common.typeutils.TypeSerializer
+import org.apache.flink.api.java.typeutils.RowTypeInfo
+import org.apache.flink.configuration.Configuration
+import org.apache.flink.sql.parser.impl.FlinkSqlParserImpl
+import org.apache.flink.sql.parser.validate.FlinkSqlConformance
+import org.apache.flink.streaming.api.datastream.{DataStream, DataStreamSink}
+import org.apache.flink.streaming.api.functions.sink.RichSinkFunction
+import org.apache.flink.table.api.{ExecutionConfigOptions, TableConfig, 
TableException, TableSchema}
+import org.apache.flink.table.calcite.CalciteConfig
+import org.apache.flink.table.runtime.batch.sql.PartitionableSinkITCase._
+import org.apache.flink.table.runtime.utils.BatchTestBase
+import org.apache.flink.table.runtime.utils.BatchTestBase.row
+import org.apache.flink.table.runtime.utils.TestData._
+import org.apache.flink.table.sinks.{PartitionableTableSink, StreamTableSink, 
TableSink}
+import org.apache.flink.table.types.logical.{BigIntType, IntType, VarCharType}
+import org.apache.flink.types.Row
+
+import org.apache.calcite.config.Lex
+import org.apache.calcite.sql.parser.SqlParser
+import org.junit.Assert._
+import org.junit.{Before, Test}
+
+import java.util.concurrent.LinkedBlockingQueue
+import java.util.{LinkedList => JLinkedList, List => JList, Map => JMap}
+
+import scala.collection.JavaConversions._
+import scala.collection.Seq
+
+/**
+  * Test cases for [[org.apache.flink.table.sinks.PartitionableTableSink]].
+  */
+class PartitionableSinkITCase extends BatchTestBase {
+
+  @Before
+  override def before(): Unit = {
+super.before()
+env.setParallelism(3)
+tEnv.getConfig
+  .getConfiguration
+  .setInteger(ExecutionConfigOptions.SQL_RESOURCE_DEFAULT_PARALLELISM, 3)
+registerCollection("nonSortTable", testData, type3, "a, b, c", 
dataNullables)
+registerCollection("sortTable", testData1, type3, "a, b, c", dataNullables)
+PartitionableSinkITCase.init()
+  }
+
+  override def getTableConfig: TableConfig = {
+val parserConfig = SqlParser.configBuilder
+  .setParserFactory(FlinkSqlParserImpl.FACTORY)
+  .setConformance(FlinkSqlConformance.HIVE) // set up hive dialect
+  .setLex(Lex.JAVA)
+  .setIdentifierMaxLength(256).build
+val plannerConfig = CalciteConfig.createBuilder(CalciteConfig.DEFAULT)
+  .replaceSqlParserConfig(parserConfig)
+val tableConfig = new TableConfig
+tableConfig.setPlannerConfig(plannerConfig.build())
+tableConfig
+  }
+
+  @Test
+  def testInsertWithOutPartitionGrouping(): Unit = {
+registerTableSink(grouping = false)
+tEnv.sqlUpdate("insert into sinkTable select a, max(b), c"
+  + " from nonSortTable group by a, c")
+tEnv.execute("testJob")
+val resultSet = List(RESULT1, RESULT2, RESULT3)
+assert(resultSet.exists(l => l.size() == 3))
+resultSet.filter(l => l.size() == 3).foreach{ list =>
+  assert(list.forall(r => r.getField(0).toString == "1"))
+}
+  }
+
+  @Test
+  def testInsertWithPartitionGrouping(): Unit = {
+registerTableSink(grouping = true)
+tEnv.sqlUpdate("insert into sinkTable select a, b, c from sortTable")
+tEnv.execute("testJob")
+val resultSet = List(RESULT1, RESULT2, RESULT3)
+resultSet.foreach(l => assertSortedByFirstNField(l, 1))
+assertEquals(resultSet.map(l => collectDistinctGroupCount(l, 2)).sum, 4)
+  }
+
+  @Test
+  def testInsertWithStaticPartitions(): Unit = {
+val testSink = registerTableSink(grouping = true)
+tEnv.sqlUpdate("insert into sinkTable partition(a=1) select b, c from 
sortTable")
+tEnv.exec

[GitHub] [flink] wuchong commented on a change in pull request #8966: [FLINK-13074][table-planner-blink] Add PartitionableTableSink bridge logic to flink&blink …

2019-07-16 Thread GitBox
wuchong commented on a change in pull request #8966: 
[FLINK-13074][table-planner-blink] Add PartitionableTableSink bridge logic to 
flink&blink …
URL: https://github.com/apache/flink/pull/8966#discussion_r304216446
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/plan/rules/physical/stream/StreamExecSinkRule.scala
 ##
 @@ -35,8 +40,29 @@ class StreamExecSinkRule extends ConverterRule(
   def convert(rel: RelNode): RelNode = {
 val sinkNode = rel.asInstanceOf[FlinkLogicalSink]
 val newTrait = rel.getTraitSet.replace(FlinkConventions.STREAM_PHYSICAL)
-// TODO Take PartitionableSink into consideration after FLINK-11993 is done
-val newInput = RelOptRule.convert(sinkNode.getInput, 
FlinkConventions.STREAM_PHYSICAL)
+var requiredTraitSet = 
sinkNode.getInput.getTraitSet.replace(FlinkConventions.STREAM_PHYSICAL)
+sinkNode.sink match {
+  case partitionSink: PartitionableTableSink
+if partitionSink.getPartitionFieldNames != null &&
+  partitionSink.getPartitionFieldNames.nonEmpty =>
+val partitionIndices = partitionSink
+  .getPartitionFieldNames
+  .map(partitionSink.getTableSchema.getFieldNames.indexOf(_))
+// validate
+partitionIndices.foreach { idx =>
+  if (idx < 0) {
+throw new TableException("Partition fields must be in the schema.")
+  }
+}
 
 Review comment:
   Try to configure partition grouping even in streaming mode, and check the return value. It would be safer if we check it. 
   
   ```scala
   if (partitionSink.configurePartitionGrouping(false)) {
 throw exception...
   }
   ```




[GitHub] [flink] wuchong commented on a change in pull request #8966: [FLINK-13074][table-planner-blink] Add PartitionableTableSink bridge logic to flink&blink …

2019-07-16 Thread GitBox
wuchong commented on a change in pull request #8966: 
[FLINK-13074][table-planner-blink] Add PartitionableTableSink bridge logic to 
flink&blink …
URL: https://github.com/apache/flink/pull/8966#discussion_r304215071
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/calcite/PreValidateReWriter.scala
 ##
 @@ -99,12 +99,15 @@ object PreValidateReWriter {
   case t: RelOptTable => t
   case _ => null
 }
+def tester(idx: Integer): Boolean = {
+  !assignedFields.contains(idx)
+}
 for (node <- partitions.getList) {
   val sqlProperty = node.asInstanceOf[SqlProperty]
   val id = sqlProperty.getKey
   val targetField = SqlValidatorUtil.getTargetField(targetRowType,
 typeFactory, id, calciteCatalogReader, relOptTable)
-  validateField(assignedFields.containsValue, id, targetField)
+  validateField(tester, id, targetField)
 
 Review comment:
   Use a lambda instead of an inner method? `validateField(idx => !assignedFields.contains(idx), id, targetField)`
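For illustration, the predicate-style validation being suggested could look like this in plain Java (signatures are simplified and hypothetical — the real `validateField` lives in `PreValidateReWriter` and uses Flink/Calcite types):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.IntPredicate;

public class LambdaValidation {

    // Taking a predicate parameter lets the caller pass the check inline as a
    // lambda instead of declaring a named inner method first.
    static void validateField(IntPredicate isUnassigned, String id, int targetFieldIndex) {
        if (!isUnassigned.test(targetFieldIndex)) {
            throw new IllegalArgumentException(
                "Field '" + id + "' is assigned more than once.");
        }
    }

    public static void main(String[] args) {
        Map<Integer, String> assignedFields = new HashMap<>();
        assignedFields.put(0, "a");

        // The inline lambda replaces the named inner `tester` method.
        validateField(idx -> !assignedFields.containsKey(idx), "b", 1); // passes

        try {
            validateField(idx -> !assignedFields.containsKey(idx), "a", 0); // duplicate
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```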




[GitHub] [flink] flinkbot edited a comment on issue #9053: [FLINK-13182][Table SQL / Client] Fix Spelling mistake in flink-table

2019-07-16 Thread GitBox
flinkbot edited a comment on issue #9053: [FLINK-13182][Table SQL / Client] Fix 
Spelling mistake in flink-table
URL: https://github.com/apache/flink/pull/9053#issuecomment-511282008
 
 
   ## CI report:
   
   * 3d8aa4ead36ebc700ba46936fe009a512b0e69ba : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119391558)
   




[GitHub] [flink] flinkbot edited a comment on issue #8820: [FLINK-12916][tests] Retry cancelWithSavepoint on cancellation barrier in AbstractOperatorRestoreTestBase

2019-07-16 Thread GitBox
flinkbot edited a comment on issue #8820: [FLINK-12916][tests] Retry 
cancelWithSavepoint on cancellation barrier in AbstractOperatorRestoreTestBase
URL: https://github.com/apache/flink/pull/8820#issuecomment-511336236
 
 
   ## CI report:
   
   * 4b0f85fab76048b96500745b7aff8105e079401c : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119418732)
   




[jira] [Commented] (FLINK-13282) LocalExecutorITCase#testUseCatalogAndUseDatabase failed on Travis

2019-07-16 Thread Zhenghua Gao (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886703#comment-16886703
 ] 

Zhenghua Gao commented on FLINK-13282:
--

The reason is that the FunctionCatalog looks up functions in the catalog whenever the catalog can return a valid TableFactory (HiveCatalog in this case), and it returns a FunctionLookup.Result even if it finds nothing. I think we must either change the function-discovery strategy (e.g., fall back to userFunctions in this case) or re-think the logic of FunctionCatalog. [~phoenixjiangnan] What do you think? 
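One possible fallback strategy, sketched abstractly (class and method names here are hypothetical illustrations, not Flink's actual FunctionCatalog API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class FunctionLookupSketch {

    // Hypothetical lookup: consult the external catalog first, then fall back
    // to user-registered functions instead of reporting an empty "found" result.
    static Optional<String> lookupFunction(Map<String, String> catalogFunctions,
                                           Map<String, String> userFunctions,
                                           String name) {
        String fromCatalog = catalogFunctions.get(name);
        if (fromCatalog != null) {
            return Optional.of(fromCatalog);
        }
        // The catalog found nothing, so consult user functions rather than
        // returning a bogus successful lookup.
        return Optional.ofNullable(userFunctions.get(name));
    }

    public static void main(String[] args) {
        Map<String, String> catalogFunctions = new HashMap<>(); // e.g. backed by HiveCatalog
        Map<String, String> userFunctions = new HashMap<>();
        userFunctions.put("scalarUDF", "my.pkg.ScalarUDF");

        // Catalog is empty, so the lookup falls back to the user function.
        System.out.println(lookupFunction(catalogFunctions, userFunctions, "scalarUDF").isPresent());
    }
}
```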

> LocalExecutorITCase#testUseCatalogAndUseDatabase failed on Travis
> -
>
> Key: FLINK-13282
> URL: https://issues.apache.org/jira/browse/FLINK-13282
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Reporter: Jark Wu
>Priority: Blocker
> Fix For: 1.9.0, 1.10.0
>
>
> {code}
> 06:06:00.442 [ERROR] Tests run: 14, Failures: 0, Errors: 1, Skipped: 0, Time 
> elapsed: 42.026 s <<< FAILURE! - in 
> org.apache.flink.table.client.gateway.local.LocalExecutorITCase
> 06:06:00.443 [ERROR] 
> testUseCatalogAndUseDatabase(org.apache.flink.table.client.gateway.local.LocalExecutorITCase)
>   Time elapsed: 17.208 s  <<< ERROR!
> org.apache.flink.table.client.gateway.SqlExecutionException: Could not create 
> environment instance.
>   at 
> org.apache.flink.table.client.gateway.local.LocalExecutorITCase.testUseCatalogAndUseDatabase(LocalExecutorITCase.java:454)
> Caused by: org.apache.flink.table.client.gateway.SqlExecutionException: 
> Invalid view 'TestView1' with query:
> SELECT scalarUDF(IntegerField1) FROM 
> default_catalog.default_database.TableNumber1
> Cause: SQL validation failed. From line 1, column 8 to line 1, column 31: No 
> match found for function signature scalarUDF()
>   at 
> org.apache.flink.table.client.gateway.local.LocalExecutorITCase.testUseCatalogAndUseDatabase(LocalExecutorITCase.java:454)
> {code}
> Here is an instance: https://api.travis-ci.org/v3/job/559247230/log.txt
> This may be affected by FLINK-13176.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (FLINK-13266) Relocate blink planner classes to avoid class clashes

2019-07-16 Thread Jark Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-13266:

Fix Version/s: 1.10.0

> Relocate blink planner classes to avoid class clashes
> -
>
> Key: FLINK-13266
> URL: https://issues.apache.org/jira/browse/FLINK-13266
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.9.0
>Reporter: Jark Wu
>Priority: Blocker
> Fix For: 1.9.0, 1.10.0
>
>
> We should have a list to relocate classes in {{flink-table-planner-blink}} 
> and {{flink-table-runtime-blink}} to avoid class clashes to make both 
> planners available in a lib directory.
> Note that not all the classes can/should be relocated, for example the calcite 
> classes, {{PlannerExpressionParserImpl}}, and so on. 
> The relocation package name is up to discussion. A dedicated path is 
> {{org.apache.flink.table.blink}}.





[jira] [Assigned] (FLINK-13266) Relocate blink planner classes to avoid class clashes

2019-07-16 Thread Jark Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-13266:
---

Assignee: (was: Jark Wu)

> Relocate blink planner classes to avoid class clashes
> -
>
> Key: FLINK-13266
> URL: https://issues.apache.org/jira/browse/FLINK-13266
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.9.0
>Reporter: Jark Wu
>Priority: Blocker
> Fix For: 1.9.0
>
>
> We should have a list to relocate classes in {{flink-table-planner-blink}} 
> and {{flink-table-runtime-blink}} to avoid class clashes to make both 
> planners available in a lib directory.
> Note that not all the classes can/should be relocated, for example the calcite 
> classes, {{PlannerExpressionParserImpl}}, and so on. 
> The relocation package name is up to discussion. A dedicated path is 
> {{org.apache.flink.table.blink}}.





[jira] [Comment Edited] (FLINK-10052) Tolerate temporarily suspended ZooKeeper connections

2019-07-16 Thread lamber-ken (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886699#comment-16886699
 ] 

lamber-ken edited comment on FLINK-10052 at 7/17/19 6:01 AM:
-

[~Tison] (y), I have a few points to discuss with you.

First, regarding your first point: I considered it yesterday and thought about creating a new Curator JIRA like your CURATOR-532 so that users could manually configure ZooKeeper 3.4.x compatibility, but I gave up on that idea because I found it would also need to reflect on +org.apache.zookeeper.ClientCnxn$EventThread+, which may throw a ClassNotFoundException because of shading. See [InjectSessionExpiration|https://github.com/apache/curator/blob/master/curator-client/src/main/java/org/apache/curator/utils/InjectSessionExpiration.java] for more detail.

Second, regarding your second point: I am not yet familiar with LeaderSelector and am still learning about it. I also think it would be ideal if we could just use SessionConnectionStateErrorPolicy directly in curator-4.x.

Third, I don't understand the meaning of a Flink-scoped leader latch.

 


was (Author: lamber-ken):
[~Tison] (y), I have some points need to talk with you.

First, for your first point, I thought it yesterday and wont create a new 
curator Jira like your CURATOR-532 that use can manually config ZooKeeper3.4.x 
Compatibility, but I give up that idea, because I found that it also needs to 
reflect +org.apache.zookeeper.ClientCnxn$EventThread+ which may throw 
ClassNotFoundException because of shading. Click it for more detail 
[InjectSessionExpiration|https://github.com/apache/curator/blob/master/curator-client/src/main/java/org/apache/curator/utils/InjectSessionExpiration.java].

Second, for your second point, I am not familiar with LeaderSeclector currently 
and I'm learning about it. I also think it is a ideally way we can just use 
SessionConnectionStateErrorPolicy directly in curator-4.x

Third, I don't understand the meaning of a flink scope leader latch

 

> Tolerate temporarily suspended ZooKeeper connections
> 
>
> Key: FLINK-10052
> URL: https://issues.apache.org/jira/browse/FLINK-10052
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.4.2, 1.5.2, 1.6.0, 1.8.1
>Reporter: Till Rohrmann
>Assignee: Dominik Wosiński
>Priority: Major
>
> This issue results from FLINK-10011 which uncovered a problem with Flink's HA 
> recovery and proposed the following solution to harden Flink:
> The {{ZooKeeperLeaderElectionService}} uses the {{LeaderLatch}} Curator 
> recipe for leader election. The leader latch revokes leadership in case of a 
> suspended ZooKeeper connection. This can be premature in case that the system 
> can reconnect to ZooKeeper before its session expires. The effect of the lost 
> leadership is that all jobs will be canceled and directly restarted after 
> regaining the leadership.
> Instead of directly revoking the leadership upon a SUSPENDED ZooKeeper 
> connection, it would be better to wait until the ZooKeeper connection is 
> LOST. That way we would allow the system to reconnect and not lose the 
> leadership. This could be achievable by using Curator's {{LeaderSelector}} 
> instead of the {{LeaderLatch}}.
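The behavior proposed above (tolerate SUSPENDED, only revoke on LOST) can be sketched as a small state handler. This is an illustrative stand-in with hypothetical names, not Curator's actual `ConnectionStateListener` wiring:

```java
public class LeadershipConnectionHandler {

    enum ConnectionState { CONNECTED, SUSPENDED, LOST }

    private boolean hasLeadership = true;

    // Only a LOST connection (session expired) revokes leadership; a SUSPENDED
    // connection is tolerated because the client may reconnect to ZooKeeper
    // before its session expires.
    void stateChanged(ConnectionState newState) {
        switch (newState) {
            case SUSPENDED:
                // Keep leadership; optionally start a timer bounded by the session timeout.
                break;
            case LOST:
                hasLeadership = false;
                break;
            case CONNECTED:
                break;
        }
    }

    boolean hasLeadership() {
        return hasLeadership;
    }

    public static void main(String[] args) {
        LeadershipConnectionHandler handler = new LeadershipConnectionHandler();
        handler.stateChanged(ConnectionState.SUSPENDED);
        System.out.println(handler.hasLeadership()); // true: suspension tolerated
        handler.stateChanged(ConnectionState.LOST);
        System.out.println(handler.hasLeadership()); // false: session expired
    }
}
```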





[jira] [Commented] (FLINK-10052) Tolerate temporarily suspended ZooKeeper connections

2019-07-16 Thread lamber-ken (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886699#comment-16886699
 ] 

lamber-ken commented on FLINK-10052:


[~Tison] (y), I have a few points to discuss with you.

First, regarding your first point: I considered it yesterday and thought about creating a new Curator JIRA like your CURATOR-532 so that users could manually configure ZooKeeper 3.4.x compatibility, but I gave up on that idea because I found it would also need to reflect on +org.apache.zookeeper.ClientCnxn$EventThread+, which may throw a ClassNotFoundException because of shading. See [InjectSessionExpiration|https://github.com/apache/curator/blob/master/curator-client/src/main/java/org/apache/curator/utils/InjectSessionExpiration.java] for more detail.

Second, regarding your second point: I am not yet familiar with LeaderSelector and am still learning about it. I also think it would be ideal if we could just use SessionConnectionStateErrorPolicy directly in curator-4.x.

Third, I don't understand the meaning of a Flink-scoped leader latch.

 

> Tolerate temporarily suspended ZooKeeper connections
> 
>
> Key: FLINK-10052
> URL: https://issues.apache.org/jira/browse/FLINK-10052
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.4.2, 1.5.2, 1.6.0, 1.8.1
>Reporter: Till Rohrmann
>Assignee: Dominik Wosiński
>Priority: Major
>
> This issue results from FLINK-10011 which uncovered a problem with Flink's HA 
> recovery and proposed the following solution to harden Flink:
> The {{ZooKeeperLeaderElectionService}} uses the {{LeaderLatch}} Curator 
> recipe for leader election. The leader latch revokes leadership in case of a 
> suspended ZooKeeper connection. This can be premature in case that the system 
> can reconnect to ZooKeeper before its session expires. The effect of the lost 
> leadership is that all jobs will be canceled and directly restarted after 
> regaining the leadership.
> Instead of directly revoking the leadership upon a SUSPENDED ZooKeeper 
> connection, it would be better to wait until the ZooKeeper connection is 
> LOST. That way we would allow the system to reconnect and not lose the 
> leadership. This could be achievable by using Curator's {{LeaderSelector}} 
> instead of the {{LeaderLatch}}.





[GitHub] [flink] flinkbot edited a comment on issue #8820: [FLINK-12916][tests] Retry cancelWithSavepoint on cancellation barrier in AbstractOperatorRestoreTestBase

2019-07-16 Thread GitBox
flinkbot edited a comment on issue #8820: [FLINK-12916][tests] Retry 
cancelWithSavepoint on cancellation barrier in AbstractOperatorRestoreTestBase
URL: https://github.com/apache/flink/pull/8820#issuecomment-511336236
 
 
   ## CI report:
   
   * 4b0f85fab76048b96500745b7aff8105e079401c : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119389785)
   




[GitHub] [flink] flinkbot edited a comment on issue #9088: [FLINK-13012][hive] Handle default partition name of Hive table

2019-07-16 Thread GitBox
flinkbot edited a comment on issue #9088: [FLINK-13012][hive] Handle default 
partition name of Hive table
URL: https://github.com/apache/flink/pull/9088#issuecomment-510484364
 
 
   ## CI report:
   
   * eac5f74690ddb0b08cb41b029f5b8ac675e63565 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/118891245)
   * b22f836f5e8f95a9f376f54c68798eeb14cb1644 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/119054589)
   * 64007bc344772aa3496f5d8b0a456f73466bbd17 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119417375)
   




[jira] [Updated] (FLINK-4810) Checkpoint Coordinator should fail ExecutionGraph after "n" unsuccessful checkpoints

2019-07-16 Thread Aljoscha Krettek (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-4810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-4810:

Fix Version/s: (was: 1.9.0)

> Checkpoint Coordinator should fail ExecutionGraph after "n" unsuccessful 
> checkpoints
> 
>
> Key: FLINK-4810
> URL: https://issues.apache.org/jira/browse/FLINK-4810
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Checkpointing
>Reporter: Stephan Ewen
>Assignee: vinoyang
>Priority: Major
>  Labels: pull-request-available
>
> The Checkpoint coordinator should track the number of consecutive 
> unsuccessful checkpoints.
> If more than {{n}} (configured value) checkpoints fail in a row, it should 
> call {{fail()}} on the execution graph to trigger a recovery.
> The design document is here : 
> https://docs.google.com/document/d/1ce7RtecuTxcVUJlnU44hzcO2Dwq9g4Oyd8_biy94hJc/edit?usp=sharing
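The tracking behavior described in the issue can be sketched as a small counter (names here are hypothetical; the feature was ultimately implemented in FLINK-12364):

```java
public class CheckpointFailureCounter {

    private final int tolerableFailures;
    private int consecutiveFailures = 0;

    CheckpointFailureCounter(int tolerableFailures) {
        this.tolerableFailures = tolerableFailures;
    }

    // Returns true when the configured threshold is exceeded and the
    // execution graph should be failed to trigger recovery.
    boolean onCheckpointFailure() {
        consecutiveFailures++;
        return consecutiveFailures > tolerableFailures;
    }

    // A successful checkpoint resets the consecutive-failure streak.
    void onCheckpointSuccess() {
        consecutiveFailures = 0;
    }

    public static void main(String[] args) {
        CheckpointFailureCounter counter = new CheckpointFailureCounter(2);
        System.out.println(counter.onCheckpointFailure()); // false (1 in a row)
        counter.onCheckpointSuccess();                     // streak reset
        System.out.println(counter.onCheckpointFailure()); // false (1 in a row)
        System.out.println(counter.onCheckpointFailure()); // false (2 in a row)
        System.out.println(counter.onCheckpointFailure()); // true  (3 > 2: fail the graph)
    }
}
```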





[jira] [Commented] (FLINK-4810) Checkpoint Coordinator should fail ExecutionGraph after "n" unsuccessful checkpoints

2019-07-16 Thread Aljoscha Krettek (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-4810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886694#comment-16886694
 ] 

Aljoscha Krettek commented on FLINK-4810:
-

This feature has been implemented in FLINK-12364.

> Checkpoint Coordinator should fail ExecutionGraph after "n" unsuccessful 
> checkpoints
> 
>
> Key: FLINK-4810
> URL: https://issues.apache.org/jira/browse/FLINK-4810
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Checkpointing
>Reporter: Stephan Ewen
>Assignee: vinoyang
>Priority: Major
>  Labels: pull-request-available
>
> The Checkpoint coordinator should track the number of consecutive 
> unsuccessful checkpoints.
> If more than {{n}} (configured value) checkpoints fail in a row, it should 
> call {{fail()}} on the execution graph to trigger a recovery.
> The design document is here : 
> https://docs.google.com/document/d/1ce7RtecuTxcVUJlnU44hzcO2Dwq9g4Oyd8_biy94hJc/edit?usp=sharing





[jira] [Updated] (FLINK-4810) Checkpoint Coordinator should fail ExecutionGraph after "n" unsuccessful checkpoints

2019-07-16 Thread Aljoscha Krettek (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-4810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-4810:

Release Note:   (was: This feature has been implemented in FLINK-12364.)

> Checkpoint Coordinator should fail ExecutionGraph after "n" unsuccessful 
> checkpoints
> 
>
> Key: FLINK-4810
> URL: https://issues.apache.org/jira/browse/FLINK-4810
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Checkpointing
>Reporter: Stephan Ewen
>Assignee: vinoyang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>
> The Checkpoint coordinator should track the number of consecutive 
> unsuccessful checkpoints.
> If more than {{n}} (configured value) checkpoints fail in a row, it should 
> call {{fail()}} on the execution graph to trigger a recovery.
> The design document is here : 
> https://docs.google.com/document/d/1ce7RtecuTxcVUJlnU44hzcO2Dwq9g4Oyd8_biy94hJc/edit?usp=sharing





[jira] [Comment Edited] (FLINK-13288) Blink planner doesn't compile with Scala 2.12

2019-07-16 Thread Jark Wu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886692#comment-16886692
 ] 

Jark Wu edited comment on FLINK-13288 at 7/17/19 5:39 AM:
--

Hi [~Zentol], I ran `mvn install -DskipTests -Dscala-2.12` on my local machine for flink master and the build succeeded. 
Meanwhile, I rebuilt the travis job you linked in the description, and it built successfully too. 
Is this an unstable failure? I didn't get any useful error information from the log.


was (Author: jark):
Hi [~Zentol], I ran `mvn install -DskipTests -Dscala-2.12` in my local machine 
for flink master and build success. 
Meanwhile, I rebuild the travis job you linked in the description, and it build 
success too. 
Is this an unstable failure? Because I can't get any useful error information 
from the log.

> Blink planner doesn't compile with Scala 2.12
> -
>
> Key: FLINK-13288
> URL: https://issues.apache.org/jira/browse/FLINK-13288
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.9.0
>Reporter: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.9.0, 1.10.0
>
>
> [https://travis-ci.org/apache/flink/jobs/559389818]
>  
> {code:java}
> 13:06:43.309 [ERROR] error: java.lang.StackOverflowError
> 13:06:43.310 [INFO]   at 
> scala.reflect.internal.util.StatisticsStatics.areSomeColdStatsEnabled(StatisticsStatics.java:31)
> 13:06:43.310 [INFO]   at 
> scala.tools.nsc.typechecker.Typers$Typer.silent(Typers.scala:669)
> 13:06:43.310 [INFO]   at 
> scala.tools.nsc.typechecker.Typers$Typer.normalTypedApply$1(Typers.scala:4748)
> 13:06:43.310 [INFO]   at 
> scala.tools.nsc.typechecker.Typers$Typer.typedApply$1(Typers.scala:4776)
> 13:06:43.310 [INFO]   at 
> scala.tools.nsc.typechecker.Typers$Typer.typed1(Typers.scala:5571)
> 13:06:43.310 [INFO]   at 
> scala.tools.nsc.transform.Erasure$Eraser.typed1(Erasure.scala:789)
> 13:06:43.310 [INFO]   at 
> scala.tools.nsc.typechecker.Typers$Typer.typed(Typers.scala:5617)
> 13:06:43.310 [INFO]   at 
> scala.tools.nsc.transform.Erasure$Eraser.adaptMember(Erasure.scala:714)
> 13:06:43.310 [INFO]   at 
> scala.tools.nsc.transform.Erasure$Eraser.typed1(Erasure.scala:789){code}
>  
> [~ykt836] [~jark]





[jira] [Commented] (FLINK-11654) Multiple transactional KafkaProducers writing to same cluster have clashing transaction IDs

2019-07-16 Thread Aljoscha Krettek (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886693#comment-16886693
 ] 

Aljoscha Krettek commented on FLINK-11654:
--

I agree with the result of the discussion. 👌 I think it's important to fail 
fast, otherwise we might unknowingly break existing setups.

> Multiple transactional KafkaProducers writing to same cluster have clashing 
> transaction IDs
> ---
>
> Key: FLINK-11654
> URL: https://issues.apache.org/jira/browse/FLINK-11654
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.7.1
>Reporter: Jürgen Kreileder
>Priority: Major
> Fix For: 1.9.0
>
>
> We run multiple jobs on a cluster which write a lot to the same Kafka topic 
> from identically named sinks. When EXACTLY_ONCE semantic is enabled for the 
> KafkaProducers we run into a lot of ProducerFencedExceptions and all jobs go 
> into a restart cycle.
> Example exception from the Kafka log:
>  
> {code:java}
> [2019-02-18 18:05:28,485] ERROR [ReplicaManager broker=1] Error processing 
> append operation on partition finding-commands-dev-1-0 
> (kafka.server.ReplicaManager)
> org.apache.kafka.common.errors.ProducerFencedException: Producer's epoch is 
> no longer valid. There is probably another producer with a newer epoch. 483 
> (request epoch), 484 (server epoch)
> {code}
> The reason for this is the way FlinkKafkaProducer initializes the 
> TransactionalIdsGenerator:
> The IDs are only guaranteed to be unique for a single Job. But they can clash 
> between different Jobs (and Clusters).
>  
>  
> {code:java}
> --- 
> a/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaProducer.java
> +++ 
> b/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaProducer.java
> @@ -819,6 +819,7 @@ public class FlinkKafkaProducer
>                 nextTransactionalIdHintState = 
> context.getOperatorStateStore().getUnionListState(
>                         NEXT_TRANSACTIONAL_ID_HINT_DESCRIPTOR);
>                 transactionalIdsGenerator = new TransactionalIdsGenerator(
> + // the prefix probably should include job id and maybe cluster id
>                         getRuntimeContext().getTaskName() + "-" + 
> ((StreamingRuntimeContext) getRuntimeContext()).getOperatorUniqueID(),
>                         getRuntimeContext().getIndexOfThisSubtask(),
>                         
> getRuntimeContext().getNumberOfParallelSubtasks(),{code}
>  
>  
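
The inline comment in the diff above suggests making the transactional-id prefix unique across jobs. A minimal sketch of that idea follows; `buildTransactionalIdPrefix` and its parameters are illustrative names, not FlinkKafkaProducer's actual API:

```java
// Sketch: disambiguate transactional-id prefixes across jobs by including the
// job id, as the inline comment in the diff suggests. All names here are
// hypothetical; FlinkKafkaProducer's real code differs.
public class TransactionalIdPrefixDemo {

    static String buildTransactionalIdPrefix(String jobId, String taskName, String operatorUid) {
        // Prefixing with the job id keeps identically named sinks in
        // different jobs from generating clashing transaction IDs.
        return jobId + "-" + taskName + "-" + operatorUid;
    }

    public static void main(String[] args) {
        String a = buildTransactionalIdPrefix("job1", "sink", "op");
        String b = buildTransactionalIdPrefix("job2", "sink", "op");
        // Same sink name, different jobs: prefixes now differ.
        System.out.println(a.equals(b)); // prints "false"
    }
}
```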



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Comment Edited] (FLINK-13288) Blink planner doesn't compile with Scala 2.12

2019-07-16 Thread Jark Wu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886692#comment-16886692
 ] 

Jark Wu edited comment on FLINK-13288 at 7/17/19 5:39 AM:
--

Hi [~Zentol], I ran `mvn install -DskipTests -Dscala-2.12` on my local machine 
for flink master and the build succeeded. 
Meanwhile, I rebuilt the travis job [1] you linked in the description, and it 
succeeded too. 
Is this an unstable failure? I couldn't find any useful error information 
in the log.

[1]. https://travis-ci.org/apache/flink/jobs/559389818


was (Author: jark):
Hi [~Zentol], I ran `mvn install -DskipTests -Dscala-2.12` on my local machine 
for flink master and the build succeeded. 
Meanwhile, I rebuilt the travis job you linked in the description, and it 
succeeded too. 
Is this an unstable failure? I couldn't find any useful error information 
in the log.

> Blink planner doesn't compile with Scala 2.12
> -
>
> Key: FLINK-13288
> URL: https://issues.apache.org/jira/browse/FLINK-13288
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.9.0
>Reporter: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.9.0, 1.10.0
>
>
> [https://travis-ci.org/apache/flink/jobs/559389818]
>  
> {code:java}
> 13:06:43.309 [ERROR] error: java.lang.StackOverflowError
> 13:06:43.310 [INFO]   at 
> scala.reflect.internal.util.StatisticsStatics.areSomeColdStatsEnabled(StatisticsStatics.java:31)
> 13:06:43.310 [INFO]   at 
> scala.tools.nsc.typechecker.Typers$Typer.silent(Typers.scala:669)
> 13:06:43.310 [INFO]   at 
> scala.tools.nsc.typechecker.Typers$Typer.normalTypedApply$1(Typers.scala:4748)
> 13:06:43.310 [INFO]   at 
> scala.tools.nsc.typechecker.Typers$Typer.typedApply$1(Typers.scala:4776)
> 13:06:43.310 [INFO]   at 
> scala.tools.nsc.typechecker.Typers$Typer.typed1(Typers.scala:5571)
> 13:06:43.310 [INFO]   at 
> scala.tools.nsc.transform.Erasure$Eraser.typed1(Erasure.scala:789)
> 13:06:43.310 [INFO]   at 
> scala.tools.nsc.typechecker.Typers$Typer.typed(Typers.scala:5617)
> 13:06:43.310 [INFO]   at 
> scala.tools.nsc.transform.Erasure$Eraser.adaptMember(Erasure.scala:714)
> 13:06:43.310 [INFO]   at 
> scala.tools.nsc.transform.Erasure$Eraser.typed1(Erasure.scala:789){code}
>  
> [~ykt836] [~jark]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-13288) Blink planner doesn't compile with Scala 2.12

2019-07-16 Thread Jark Wu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886692#comment-16886692
 ] 

Jark Wu commented on FLINK-13288:
-

Hi [~Zentol], I ran `mvn install -DskipTests -Dscala-2.12` on my local machine 
for flink master and the build succeeded. 
Meanwhile, I rebuilt the travis job you linked in the description, and it 
succeeded too. 
Is this an unstable failure? I can't find any useful error information 
in the log.

> Blink planner doesn't compile with Scala 2.12
> -
>
> Key: FLINK-13288
> URL: https://issues.apache.org/jira/browse/FLINK-13288
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.9.0
>Reporter: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.9.0, 1.10.0
>
>
> [https://travis-ci.org/apache/flink/jobs/559389818]
>  
> {code:java}
> 13:06:43.309 [ERROR] error: java.lang.StackOverflowError
> 13:06:43.310 [INFO]   at 
> scala.reflect.internal.util.StatisticsStatics.areSomeColdStatsEnabled(StatisticsStatics.java:31)
> 13:06:43.310 [INFO]   at 
> scala.tools.nsc.typechecker.Typers$Typer.silent(Typers.scala:669)
> 13:06:43.310 [INFO]   at 
> scala.tools.nsc.typechecker.Typers$Typer.normalTypedApply$1(Typers.scala:4748)
> 13:06:43.310 [INFO]   at 
> scala.tools.nsc.typechecker.Typers$Typer.typedApply$1(Typers.scala:4776)
> 13:06:43.310 [INFO]   at 
> scala.tools.nsc.typechecker.Typers$Typer.typed1(Typers.scala:5571)
> 13:06:43.310 [INFO]   at 
> scala.tools.nsc.transform.Erasure$Eraser.typed1(Erasure.scala:789)
> 13:06:43.310 [INFO]   at 
> scala.tools.nsc.typechecker.Typers$Typer.typed(Typers.scala:5617)
> 13:06:43.310 [INFO]   at 
> scala.tools.nsc.transform.Erasure$Eraser.adaptMember(Erasure.scala:714)
> 13:06:43.310 [INFO]   at 
> scala.tools.nsc.transform.Erasure$Eraser.typed1(Erasure.scala:789){code}
>  
> [~ykt836] [~jark]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (FLINK-13295) set current catalog and database after registering tables and views in SQL CLI

2019-07-16 Thread Aljoscha Krettek (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-13295:
-
Fix Version/s: (was: 1.10.0)
   (was: 1.9.0)

> set current catalog and database after registering tables and views in SQL CLI
> --
>
> Key: FLINK-13295
> URL: https://issues.apache.org/jira/browse/FLINK-13295
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Client
>Affects Versions: 1.9.0, 1.10.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
>
> This should solve the bug of FLINK-13294



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot edited a comment on issue #9088: [FLINK-13012][hive] Handle default partition name of Hive table

2019-07-16 Thread GitBox
flinkbot edited a comment on issue #9088: [FLINK-13012][hive] Handle default 
partition name of Hive table
URL: https://github.com/apache/flink/pull/9088#issuecomment-510484364
 
 
   ## CI report:
   
   * eac5f74690ddb0b08cb41b029f5b8ac675e63565 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/118891245)
   * b22f836f5e8f95a9f376f54c68798eeb14cb1644 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/119054589)
   * 64007bc344772aa3496f5d8b0a456f73466bbd17 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119385134)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9113: [FLINK-13222] [runtime] Add documentation for failover strategy option

2019-07-16 Thread GitBox
flinkbot edited a comment on issue #9113: [FLINK-13222] [runtime] Add 
documentation for failover strategy option
URL: https://github.com/apache/flink/pull/9113#issuecomment-511360277
 
 
   ## CI report:
   
   * b6bb574fbe0e3421adb07aef7ffd1c8068675a74 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119416196)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9113: [FLINK-13222] [runtime] Add documentation for failover strategy option

2019-07-16 Thread GitBox
flinkbot edited a comment on issue #9113: [FLINK-13222] [runtime] Add 
documentation for failover strategy option
URL: https://github.com/apache/flink/pull/9113#issuecomment-511360277
 
 
   ## CI report:
   
   * b6bb574fbe0e3421adb07aef7ffd1c8068675a74 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119381321)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10052) Tolerate temporarily suspended ZooKeeper connections

2019-07-16 Thread TisonKun (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886677#comment-16886677
 ] 

TisonKun commented on FLINK-10052:
--

[~lamber-ken] PR#9066 solves the problem, and in FLINK-10333 we actually propose 
a Flink-scoped leader latch.

But back to this jira, I'd like to clarify two things.

1. I know that with shading, curator's auto detection fails. But in our case, flink 
uses zookeeper 3.4 and there is no ongoing proposal to upgrade the zookeeper version. 
Given this background, the detection fails and curator uses the zk34 compatibility 
mode, which, hacky as it is, fits our case correctly. (For a clean solution, we could 
1) figure out a new relocation strategy, 2) use the manual compatibility config 
proposed in CURATOR-532, or 3) implement a Flink-scoped leader latch, somewhat 
like PR#9066.)

2. LeaderSelector just forces the user to implement {{#stateChanged}} and to take 
leadership greedily. I don't think it provides extra magic; for state handling we 
want no more than {{SessionConnectionStateErrorPolicy}}.

> Tolerate temporarily suspended ZooKeeper connections
> 
>
> Key: FLINK-10052
> URL: https://issues.apache.org/jira/browse/FLINK-10052
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.4.2, 1.5.2, 1.6.0, 1.8.1
>Reporter: Till Rohrmann
>Assignee: Dominik Wosiński
>Priority: Major
>
> This issue results from FLINK-10011 which uncovered a problem with Flink's HA 
> recovery and proposed the following solution to harden Flink:
> The {{ZooKeeperLeaderElectionService}} uses the {{LeaderLatch}} Curator 
> recipe for leader election. The leader latch revokes leadership in case of a 
> suspended ZooKeeper connection. This can be premature in case that the system 
> can reconnect to ZooKeeper before its session expires. The effect of the lost 
> leadership is that all jobs will be canceled and directly restarted after 
> regaining the leadership.
> Instead of directly revoking the leadership upon a SUSPENDED ZooKeeper 
> connection, it would be better to wait until the ZooKeeper connection is 
> LOST. That way we would allow the system to reconnect and not lose the 
> leadership. This could be achievable by using Curator's {{LeaderSelector}} 
> instead of the {{LeaderLatch}}.
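
The policy the description proposes can be sketched in plain Java: treat a SUSPENDED connection as provisional and only revoke leadership once the session is actually LOST. This is an illustrative state machine, not Curator's or Flink's actual code:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the proposed connection-state policy: wait out SUSPENDED, revoke
// leadership only on LOST. Names and structure are hypothetical.
public class LeadershipPolicyDemo {

    enum ZkState { CONNECTED, SUSPENDED, LOST }

    static List<String> actions = new ArrayList<>();

    static void onStateChange(ZkState state) {
        switch (state) {
            case SUSPENDED:
                // Do not revoke yet: the client may reconnect before the
                // ZooKeeper session expires.
                actions.add("wait");
                break;
            case CONNECTED:
                actions.add("keep-leadership");
                break;
            case LOST:
                // Session expired: leadership is really gone now.
                actions.add("revoke-leadership");
                break;
        }
    }

    public static void main(String[] args) {
        onStateChange(ZkState.SUSPENDED); // transient network hiccup
        onStateChange(ZkState.CONNECTED); // reconnected in time: no job restarts
        onStateChange(ZkState.LOST);      // session expired: now revoke
        System.out.println(actions); // prints "[wait, keep-leadership, revoke-leadership]"
    }
}
```

Under this policy a brief network hiccup that reconnects in time never triggers the cancel-and-restart cycle described above.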



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot edited a comment on issue #9118: [FLINK-13206][sql client]replace `use database xxx` with `use xxx` in sql client parser

2019-07-16 Thread GitBox
flinkbot edited a comment on issue #9118: [FLINK-13206][sql client]replace `use 
database xxx` with `use xxx` in sql client parser
URL: https://github.com/apache/flink/pull/9118#issuecomment-511392553
 
 
   ## CI report:
   
   * f1da2b8c756a0af3e923aeee451e857c4d1728b7 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119414543)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9118: [FLINK-13206][sql client]replace `use database xxx` with `use xxx` in sql client parser

2019-07-16 Thread GitBox
flinkbot edited a comment on issue #9118: [FLINK-13206][sql client]replace `use 
database xxx` with `use xxx` in sql client parser
URL: https://github.com/apache/flink/pull/9118#issuecomment-511392553
 
 
   ## CI report:
   
   * f1da2b8c756a0af3e923aeee451e857c4d1728b7 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119379933)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9129: [FLINK-13287][table-api] Port ExistingField to api-java and use new Expression in FieldComputer

2019-07-16 Thread GitBox
flinkbot edited a comment on issue #9129: [FLINK-13287][table-api] Port 
ExistingField to api-java and use new Expression in FieldComputer
URL: https://github.com/apache/flink/pull/9129#issuecomment-511810897
 
 
   ## CI report:
   
   * c3a620e6d2abbe5faac85b2ebcdb5340d06f8e8c : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119414043)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #8966: [FLINK-13074][table-planner-blink] Add PartitionableTableSink bridge logic to flink&blink …

2019-07-16 Thread GitBox
flinkbot edited a comment on issue #8966: [FLINK-13074][table-planner-blink] 
Add PartitionableTableSink bridge logic to flink&blink …
URL: https://github.com/apache/flink/pull/8966#issuecomment-511712813
 
 
   ## CI report:
   
   * 7abe4618172b34dfc190c67b3d28422afe4dd0ae : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119414109)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Comment Edited] (FLINK-13266) Relocate blink planner classes to avoid class clashes

2019-07-16 Thread godfrey he (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886673#comment-16886673
 ] 

godfrey he edited comment on FLINK-13266 at 7/17/19 4:36 AM:
-

hi [~twalthr], [~jark] I have listed the changes in 
https://docs.google.com/document/d/15Z1Khy23DwDBp956yBzkMYkGdoQAU_QgBoJmFWSffow 
(at the end of the document), and look forward to your feedback, thanks


was (Author: godfreyhe):
hi [~twalthr], [~jark] I have listed the changes in 
https://docs.google.com/document/d/15Z1Khy23DwDBp956yBzkMYkGdoQAU_QgBoJmFWSffow, 
and look forward to your feedback, thanks

> Relocate blink planner classes to avoid class clashes
> -
>
> Key: FLINK-13266
> URL: https://issues.apache.org/jira/browse/FLINK-13266
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.9.0
>Reporter: Jark Wu
>Assignee: Jark Wu
>Priority: Blocker
> Fix For: 1.9.0
>
>
> We should have a list to relocate classes in {{flink-table-planner-blink}} 
> and {{flink-table-runtime-blink}} to avoid class clashes to make both 
> planners available in a lib directory.
> Note that, not all the classes can/should be relocated. For examples: calcite 
> classes, {{PlannerExpressionParserImpl}} and so on. 
> The relocation package name is up to discussion. A dedicated path is 
> {{org.apache.flink.table.blink}}.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-13266) Relocate blink planner classes to avoid class clashes

2019-07-16 Thread godfrey he (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886673#comment-16886673
 ] 

godfrey he commented on FLINK-13266:


hi [~twalthr], [~jark] I have listed the changes in 
https://docs.google.com/document/d/15Z1Khy23DwDBp956yBzkMYkGdoQAU_QgBoJmFWSffow, 
and look forward to your feedback, thanks

> Relocate blink planner classes to avoid class clashes
> -
>
> Key: FLINK-13266
> URL: https://issues.apache.org/jira/browse/FLINK-13266
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.9.0
>Reporter: Jark Wu
>Assignee: Jark Wu
>Priority: Blocker
> Fix For: 1.9.0
>
>
> We should have a list to relocate classes in {{flink-table-planner-blink}} 
> and {{flink-table-runtime-blink}} to avoid class clashes to make both 
> planners available in a lib directory.
> Note that, not all the classes can/should be relocated. For examples: calcite 
> classes, {{PlannerExpressionParserImpl}} and so on. 
> The relocation package name is up to discussion. A dedicated path is 
> {{org.apache.flink.table.blink}}.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot edited a comment on issue #8966: [FLINK-13074][table-planner-blink] Add PartitionableTableSink bridge logic to flink&blink …

2019-07-16 Thread GitBox
flinkbot edited a comment on issue #8966: [FLINK-13074][table-planner-blink] 
Add PartitionableTableSink bridge logic to flink&blink …
URL: https://github.com/apache/flink/pull/8966#issuecomment-511712813
 
 
   ## CI report:
   
   * 7abe4618172b34dfc190c67b3d28422afe4dd0ae : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119375954)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-13300) Add OverwritableTableSink bridge logic to flink&blink planner

2019-07-16 Thread Danny Chan (JIRA)
Danny Chan created FLINK-13300:
--

 Summary: Add OverwritableTableSink bridge logic to flink&blink 
planner
 Key: FLINK-13300
 URL: https://issues.apache.org/jira/browse/FLINK-13300
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.9.0
Reporter: Danny Chan
Assignee: Danny Chan
 Fix For: 1.9.0, 1.10.0






--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] wuchong commented on a change in pull request #9024: [FLINK-13119] add blink table config to documentation

2019-07-16 Thread GitBox
wuchong commented on a change in pull request #9024: [FLINK-13119] add blink 
table config to documentation
URL: https://github.com/apache/flink/pull/9024#discussion_r304201755
 
 

 ##
 File path: 
flink-docs/src/main/java/org/apache/flink/docs/configuration/ConfigOptionsDocGenerator.java
 ##
 @@ -260,10 +262,26 @@ private static String toHtmlTable(final 
List options) {
private static String toHtmlString(final OptionWithMetaInfo 
optionWithMetaInfo) {
ConfigOption option = optionWithMetaInfo.option;
String defaultValue = stringifyDefault(optionWithMetaInfo);
+   Documentation.TableMeta tableMeta = 
optionWithMetaInfo.field.getAnnotation(Documentation.TableMeta.class);
+   StringBuilder sb = new StringBuilder();
+   if (tableMeta != null) {
+   Documentation.ExecMode execMode = tableMeta.execMode();
+   if 
(Documentation.ExecMode.BATCH_STREAMING.equals(execMode)) {
+   sb.append(" ")
+   .append("BATCH")
+   .append(" ")
+   .append("STREAMING")
 
 Review comment:
   Use `ExecMode.STREAMING` instead.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10052) Tolerate temporarily suspended ZooKeeper connections

2019-07-16 Thread lamber-ken (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1688#comment-1688
 ] 

lamber-ken commented on FLINK-10052:


hi, [~Tison]

We can learn about the difference between LeaderLatch and LeaderSelector in the 
apache curator framework from here: 
[leaderlatch-vs-leaderselector|https://stackoverflow.com/questions/17998616/leaderlatch-vs-leaderselector].

BTW, I'm currently trying to implement it with LeaderSelector.

> Tolerate temporarily suspended ZooKeeper connections
> 
>
> Key: FLINK-10052
> URL: https://issues.apache.org/jira/browse/FLINK-10052
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.4.2, 1.5.2, 1.6.0, 1.8.1
>Reporter: Till Rohrmann
>Assignee: Dominik Wosiński
>Priority: Major
>
> This issue results from FLINK-10011 which uncovered a problem with Flink's HA 
> recovery and proposed the following solution to harden Flink:
> The {{ZooKeeperLeaderElectionService}} uses the {{LeaderLatch}} Curator 
> recipe for leader election. The leader latch revokes leadership in case of a 
> suspended ZooKeeper connection. This can be premature in case that the system 
> can reconnect to ZooKeeper before its session expires. The effect of the lost 
> leadership is that all jobs will be canceled and directly restarted after 
> regaining the leadership.
> Instead of directly revoking the leadership upon a SUSPENDED ZooKeeper 
> connection, it would be better to wait until the ZooKeeper connection is 
> LOST. That way we would allow the system to reconnect and not lose the 
> leadership. This could be achievable by using Curator's {{LeaderSelector}} 
> instead of the {{LeaderLatch}}.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (FLINK-13288) Blink planner doesn't compile with Scala 2.12

2019-07-16 Thread Aljoscha Krettek (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-13288:
-
Priority: Blocker  (was: Major)

> Blink planner doesn't compile with Scala 2.12
> -
>
> Key: FLINK-13288
> URL: https://issues.apache.org/jira/browse/FLINK-13288
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.9.0
>Reporter: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.9.0, 1.10.0
>
>
> [https://travis-ci.org/apache/flink/jobs/559389818]
>  
> {code:java}
> 13:06:43.309 [ERROR] error: java.lang.StackOverflowError
> 13:06:43.310 [INFO]   at 
> scala.reflect.internal.util.StatisticsStatics.areSomeColdStatsEnabled(StatisticsStatics.java:31)
> 13:06:43.310 [INFO]   at 
> scala.tools.nsc.typechecker.Typers$Typer.silent(Typers.scala:669)
> 13:06:43.310 [INFO]   at 
> scala.tools.nsc.typechecker.Typers$Typer.normalTypedApply$1(Typers.scala:4748)
> 13:06:43.310 [INFO]   at 
> scala.tools.nsc.typechecker.Typers$Typer.typedApply$1(Typers.scala:4776)
> 13:06:43.310 [INFO]   at 
> scala.tools.nsc.typechecker.Typers$Typer.typed1(Typers.scala:5571)
> 13:06:43.310 [INFO]   at 
> scala.tools.nsc.transform.Erasure$Eraser.typed1(Erasure.scala:789)
> 13:06:43.310 [INFO]   at 
> scala.tools.nsc.typechecker.Typers$Typer.typed(Typers.scala:5617)
> 13:06:43.310 [INFO]   at 
> scala.tools.nsc.transform.Erasure$Eraser.adaptMember(Erasure.scala:714)
> 13:06:43.310 [INFO]   at 
> scala.tools.nsc.transform.Erasure$Eraser.typed1(Erasure.scala:789){code}
>  
> [~ykt836] [~jark]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Comment Edited] (FLINK-10052) Tolerate temporarily suspended ZooKeeper connections

2019-07-16 Thread lamber-ken (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886663#comment-16886663
 ] 

lamber-ken edited comment on FLINK-10052 at 7/17/19 4:06 AM:
-

[~Tison], right, ideally it would be better to upgrade the curator dependency to 
fix this, but there's a problem: curator-4.x detects the version of zookeeper 
by testing whether +org.apache.zookeeper.admin.ZooKeeperAdmin+ is on the classpath 
or not, like below.
{code:java}
Class.forName("org.apache.zookeeper.admin.ZooKeeperAdmin");
{code}
But the flink-runtime module shades +org.apache.zookeeper+ to 
+org.apache.flink.shaded.zookeeper.org.apache.zookeeper+, so the detection 
fails.
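
The probe-and-fallback pattern described here can be sketched as a tiny self-contained demo; the class and method names below are illustrative, not Curator's actual code:

```java
// Sketch of classpath-based version detection: probe for a class by name and
// fall back when the lookup fails. Shading relocates the class to a new name,
// so a probe for the original name misses it even though the code is present.
public class ZkVersionProbeDemo {

    static boolean isZk35Plus(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false; // fall back to ZooKeeper 3.4 compatibility mode
        }
    }

    public static void main(String[] args) {
        // A class that really is on the classpath is found:
        System.out.println(isZk35Plus("java.util.ArrayList")); // prints "true"
        // The unshaded name is absent after relocation, so the probe fails
        // even with a shaded ZooKeeper on the classpath:
        System.out.println(isZk35Plus("org.apache.zookeeper.admin.ZooKeeperAdmin"));
    }
}
```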

I see two ways to fix this issue:
First, rewrite +LeaderLatch#handleStateChange+ in the flink-shaded-curator 
module, like [PR#9066|https://github.com/apache/flink/pull/9066].
Second, it could also be achieved by using Curator's LeaderSelector instead 
of the LeaderLatch, as mentioned in the issue description.






was (Author: lamber-ken):
[~Tison], right, ideally it would be better to upgrade the curator dependency to 
fix this, but there's a problem: curator-4.x detects the version of zookeeper 
by testing whether +org.apache.zookeeper.admin.ZooKeeperAdmin+ is on the classpath 
or not, like below.
{code:java}
Class.forName("org.apache.zookeeper.admin.ZooKeeperAdmin");
{code}
But the flink-runtime module shades +org.apache.zookeeper+ to 
+org.apache.flink.shaded.zookeeper.org.apache.zookeeper+, so the detection 
fails.

I see two ways to fix this issue:
First, rewrite +LeaderLatch#handleStateChange+ in the flink-shaded-curator 
module, like [PR#9066|https://github.com/apache/flink/pull/9066].
Second, it could also be achieved by using Curator's LeaderSelector instead 
of the LeaderLatch, as mentioned in the issue description.






> Tolerate temporarily suspended ZooKeeper connections
> 
>
> Key: FLINK-10052
> URL: https://issues.apache.org/jira/browse/FLINK-10052
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.4.2, 1.5.2, 1.6.0, 1.8.1
>Reporter: Till Rohrmann
>Assignee: Dominik Wosiński
>Priority: Major
>
> This issue results from FLINK-10011 which uncovered a problem with Flink's HA 
> recovery and proposed the following solution to harden Flink:
> The {{ZooKeeperLeaderElectionService}} uses the {{LeaderLatch}} Curator 
> recipe for leader election. The leader latch revokes leadership in case of a 
> suspended ZooKeeper connection. This can be premature in case that the system 
> can reconnect to ZooKeeper before its session expires. The effect of the lost 
> leadership is that all jobs will be canceled and directly restarted after 
> regaining the leadership.
> Instead of directly revoking the leadership upon a SUSPENDED ZooKeeper 
> connection, it would be better to wait until the ZooKeeper connection is 
> LOST. That way we would allow the system to reconnect and not lose the 
> leadership. This could be achievable by using Curator's {{LeaderSelector}} 
> instead of the {{LeaderLatch}}.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-10052) Tolerate temporarily suspended ZooKeeper connections

2019-07-16 Thread lamber-ken (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886663#comment-16886663
 ] 

lamber-ken commented on FLINK-10052:


[~Tison], right, ideally it would be better to upgrade the curator dependency to 
fix this, but there's a problem: curator-4.x detects the version of zookeeper 
by testing whether +org.apache.zookeeper.admin.ZooKeeperAdmin+ is on the classpath 
or not, like below.
{code:java}
Class.forName("org.apache.zookeeper.admin.ZooKeeperAdmin");
{code}
But the flink-runtime module shades +org.apache.zookeeper+ to 
+org.apache.flink.shaded.zookeeper.org.apache.zookeeper+, so the detection 
fails.

I see two ways to fix this issue:
First, rewrite +LeaderLatch#handleStateChange+ in the flink-shaded-curator 
module, like [PR#9066|https://github.com/apache/flink/pull/9066].
Second, it could also be achieved by using Curator's LeaderSelector instead 
of the LeaderLatch, as mentioned in the issue description.






> Tolerate temporarily suspended ZooKeeper connections
> 
>
> Key: FLINK-10052
> URL: https://issues.apache.org/jira/browse/FLINK-10052
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.4.2, 1.5.2, 1.6.0, 1.8.1
>Reporter: Till Rohrmann
>Assignee: Dominik Wosiński
>Priority: Major
>
> This issue results from FLINK-10011 which uncovered a problem with Flink's HA 
> recovery and proposed the following solution to harden Flink:
> The {{ZooKeeperLeaderElectionService}} uses the {{LeaderLatch}} Curator 
> recipe for leader election. The leader latch revokes leadership in case of a 
> suspended ZooKeeper connection. This can be premature in case that the system 
> can reconnect to ZooKeeper before its session expires. The effect of the lost 
> leadership is that all jobs will be canceled and directly restarted after 
> regaining the leadership.
> Instead of directly revoking the leadership upon a SUSPENDED ZooKeeper 
> connection, it would be better to wait until the ZooKeeper connection is 
> LOST. That way we would allow the system to reconnect and not lose the 
> leadership. This could be achievable by using Curator's {{LeaderSelector}} 
> instead of the {{LeaderLatch}}.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot edited a comment on issue #9136: [FLINK-12768][tests] FlinkKinesisConsumerTest.testSourceSynchronization flakiness

2019-07-16 Thread GitBox
flinkbot edited a comment on issue #9136: [FLINK-12768][tests] 
FlinkKinesisConsumerTest.testSourceSynchronization flakiness
URL: https://github.com/apache/flink/pull/9136#issuecomment-511964064
 
 
   ## CI report:
   
   * 0bd82ea36266915c1c4db21b00e5fa0ac2f44e6d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119373223)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9123: [FLINK-13281] Fix the verify logic of AdaptedRestartPipelinedRegionStrategyNGAbortPendingCheckpointsTest

2019-07-16 Thread GitBox
flinkbot edited a comment on issue #9123: [FLINK-13281] Fix the verify logic of 
AdaptedRestartPipelinedRegionStrategyNGAbortPendingCheckpointsTest
URL: https://github.com/apache/flink/pull/9123#issuecomment-511694972
 
 
   ## CI report:
   
   * 18bce5a71e73470e1164479d9039a02644f3bdf1 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119411296)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9123: [FLINK-13281] Fix the verify logic of AdaptedRestartPipelinedRegionStrategyNGAbortPendingCheckpointsTest

2019-07-16 Thread GitBox
flinkbot edited a comment on issue #9123: [FLINK-13281] Fix the verify logic of 
AdaptedRestartPipelinedRegionStrategyNGAbortPendingCheckpointsTest
URL: https://github.com/apache/flink/pull/9123#issuecomment-511694972
 
 
   ## CI report:
   
   * 18bce5a71e73470e1164479d9039a02644f3bdf1 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119367042)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10052) Tolerate temporarily suspended ZooKeeper connections

2019-07-16 Thread TisonKun (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886655#comment-16886655
 ] 

TisonKun commented on FLINK-10052:
--

[~lamber-ken] Flink currently depends on ZooKeeper 3.4.x, so Curator 4.x will 
correctly run in its ZooKeeper 3.4 compatibility mode.

We could find another way to actually fix the relocation problem, but upgrading 
to Curator 4.x would solve this issue for now in a non-invasive way (compared 
to the approach in PR #9066).

> Tolerate temporarily suspended ZooKeeper connections
> 
>
> Key: FLINK-10052
> URL: https://issues.apache.org/jira/browse/FLINK-10052
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.4.2, 1.5.2, 1.6.0, 1.8.1
>Reporter: Till Rohrmann
>Assignee: Dominik Wosiński
>Priority: Major
>
> This issue results from FLINK-10011 which uncovered a problem with Flink's HA 
> recovery and proposed the following solution to harden Flink:
> The {{ZooKeeperLeaderElectionService}} uses the {{LeaderLatch}} Curator 
> recipe for leader election. The leader latch revokes leadership in case of a 
> suspended ZooKeeper connection. This can be premature in case that the system 
> can reconnect to ZooKeeper before its session expires. The effect of the lost 
> leadership is that all jobs will be canceled and directly restarted after 
> regaining the leadership.
> Instead of directly revoking the leadership upon a SUSPENDED ZooKeeper 
> connection, it would be better to wait until the ZooKeeper connection is 
> LOST. That way we would allow the system to reconnect and not lose the 
> leadership. This could be achievable by using Curator's {{LeaderSelector}} 
> instead of the {{LeaderLatch}}.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] zhuzhurk commented on a change in pull request #9113: [FLINK-13222] [runtime] Add documentation for failover strategy option

2019-07-16 Thread GitBox
zhuzhurk commented on a change in pull request #9113: [FLINK-13222] [runtime] 
Add documentation for failover strategy option
URL: https://github.com/apache/flink/pull/9113#discussion_r304204769
 
 

 ##
 File path: 
flink-core/src/main/java/org/apache/flink/configuration/JobManagerOptions.java
 ##
 @@ -106,17 +106,21 @@
/**
 * This option specifies the failover strategy, i.e. how the job 
computation recovers from task failures.
 */
-   @Documentation.ExcludeFromDocumentation("The failover strategy feature 
is highly experimental.")
public static final ConfigOption EXECUTION_FAILOVER_STRATEGY =
key("jobmanager.execution.failover-strategy")
.defaultValue("full")
.withDescription(Description.builder()
-   .text("This option specifies how the job 
computation recovers from task failures. " +
+   .text("This option specifies the failover 
strategy, i.e. " +
+   "how the job computation recovers from 
task failures. " +
"Accepted values are:")
.list(
-   text("'full': Restarts all tasks."),
-   text("'individual': Restarts only the 
failed task. Should only be used if all tasks are independent components."),
-   text("'region': Restarts all tasks that 
could be affected by the task failure.")
+   text("'full': Use RestartAllStrategy to 
restarts all tasks to recover the job."),
+   text("'region': Use 
AdaptedRestartPipelinedRegionStrategyNG to restart all tasks " +
 
 Review comment:
   Thanks for the reminder. I will take it this way.
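For reference, the accepted values listed in the diff ('full' and 'region') could be checked by a small helper like the following. This is a hypothetical sketch, not Flink's actual option-validation code; `FailoverStrategyOption` is an invented name:

```java
import java.util.Arrays;
import java.util.List;

/** Hypothetical resolver for jobmanager.execution.failover-strategy values. */
class FailoverStrategyOption {
    static final String DEFAULT_VALUE = "full";
    private static final List<String> ACCEPTED = Arrays.asList("full", "region");

    /** Returns the configured strategy, falling back to the default when unset. */
    static String resolve(String configured) {
        if (configured == null || configured.isEmpty()) {
            return DEFAULT_VALUE;
        }
        if (!ACCEPTED.contains(configured)) {
            throw new IllegalArgumentException("Unknown failover strategy: " + configured);
        }
        return configured;
    }
}
```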


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #9024: [FLINK-13119] add blink table config to documentation

2019-07-16 Thread GitBox
flinkbot commented on issue #9024: [FLINK-13119] add blink table config to 
documentation
URL: https://github.com/apache/flink/pull/9024#issuecomment-512084139
 
 
   ## CI report:
   
   * 1e4a2e9a584232fd8f5b441567190e1149b6f72f : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119409238)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #8471: [FLINK-12529][runtime] Release record-deserializer buffers timely to improve the efficiency of heap usage on taskmanager

2019-07-16 Thread GitBox
flinkbot edited a comment on issue #8471: [FLINK-12529][runtime] Release 
record-deserializer buffers timely to improve the efficiency of heap usage on 
taskmanager
URL: https://github.com/apache/flink/pull/8471#issuecomment-510870797
 
 
   ## CI report:
   
   * 3a6874ec1f1e40441b068868a16570e0b96f083c : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/119054587)
   * 9b083e9be3176b039595d1a2479bf2e14f6f83fd : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119409287)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #8756: [FLINK-12406] [Runtime] Report BLOCKING_PERSISTENT result partition meta back to client

2019-07-16 Thread GitBox
flinkbot edited a comment on issue #8756: [FLINK-12406] [Runtime] Report 
BLOCKING_PERSISTENT result partition meta back to client
URL: https://github.com/apache/flink/pull/8756#issuecomment-511260167
 
 
   ## CI report:
   
   * e290f6b65758fc7d199277e4345a75335de981b2 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119409271)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-13299) flink-python failed on Travis

2019-07-16 Thread Haibo Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Sun updated FLINK-13299:
--
Affects Version/s: 1.10.0

> flink-python failed on Travis
> -
>
> Key: FLINK-13299
> URL: https://issues.apache.org/jira/browse/FLINK-13299
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.10.0
>Reporter: Haibo Sun
>Priority: Major
>
> Log: [https://api.travis-ci.com/v3/job/216620643/log.txt]
> Error:
> ___ summary 
> 
> ERROR: py27: InvocationError for command 
> /home/travis/build/flink-ci/flink/flink-python/dev/.conda/bin/python3.7 -m 
> virtualenv --no-download --python 
> /home/travis/build/flink-ci/flink/flink-python/dev/.conda/envs/2.7/bin/python2.7
>  py27 (exited with code 1) py33: commands succeeded ERROR: py34: 
> InvocationError for command 
> /home/travis/build/flink-ci/flink/flink-python/dev/.conda/bin/python3.7 -m 
> virtualenv --no-download --python 
> /home/travis/build/flink-ci/flink/flink-python/dev/.conda/envs/3.4/bin/python3.4
>  py34 (exited with code 100) py35: commands succeeded py36: commands 
> succeeded py37: commands succeeded tox checks... 
> [FAILED] PYTHON exited with EXIT CODE: 1. Trying to KILL watchdog 
> (12896). ./tools/travis_watchdog.sh: line 229: 12896 Terminated watchdog



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (FLINK-13299) flink-python failed on Travis

2019-07-16 Thread Haibo Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Sun updated FLINK-13299:
--
Description: 
Log: [https://api.travis-ci.com/v3/job/216620643/log.txt]

Error:

___ summary
ERROR: py27: InvocationError for command /home/travis/build/flink-ci/flink/flink-python/dev/.conda/bin/python3.7 -m virtualenv --no-download --python /home/travis/build/flink-ci/flink/flink-python/dev/.conda/envs/2.7/bin/python2.7 py27 (exited with code 1)
py33: commands succeeded
ERROR: py34: InvocationError for command /home/travis/build/flink-ci/flink/flink-python/dev/.conda/bin/python3.7 -m virtualenv --no-download --python /home/travis/build/flink-ci/flink/flink-python/dev/.conda/envs/3.4/bin/python3.4 py34 (exited with code 100)
py35: commands succeeded
py36: commands succeeded
py37: commands succeeded
tox checks... [FAILED]
PYTHON exited with EXIT CODE: 1. Trying to KILL watchdog (12896).
./tools/travis_watchdog.sh: line 229: 12896 Terminated watchdog

  was:Log: [https://api.travis-ci.com/v3/job/216620643/log.txt]


> flink-python failed on Travis
> -
>
> Key: FLINK-13299
> URL: https://issues.apache.org/jira/browse/FLINK-13299
> Project: Flink
>  Issue Type: Bug
>Reporter: Haibo Sun
>Priority: Major
>
> Log: [https://api.travis-ci.com/v3/job/216620643/log.txt]
> Error:
> ___ summary 
>  ERROR: py27: InvocationError for command 
> /home/travis/build/flink-ci/flink/flink-python/dev/.conda/bin/python3.7 -m 
> virtualenv --no-download --python 
> /home/travis/build/flink-ci/flink/flink-python/dev/.conda/envs/2.7/bin/python2.7
>  py27 (exited with code 1) py33: commands succeeded ERROR: py34: 
> InvocationError for command 
> /home/travis/build/flink-ci/flink/flink-python/dev/.conda/bin/python3.7 -m 
> virtualenv --no-download --python 
> /home/travis/build/flink-ci/flink/flink-python/dev/.conda/envs/3.4/bin/python3.4
>  py34 (exited with code 100) py35: commands succeeded py36: commands 
> succeeded py37: commands succeeded tox checks... 
> [FAILED] PYTHON exited with EXIT CODE: 1. Trying to KILL watchdog 
> (12896). ./tools/travis_watchdog.sh: line 229: 12896 Terminated watchdog



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (FLINK-13299) flink-python failed on Travis

2019-07-16 Thread Haibo Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Sun updated FLINK-13299:
--
Description: 
Log: [https://api.travis-ci.com/v3/job/216620643/log.txt]

Error:

___ summary
ERROR: py27: InvocationError for command /home/travis/build/flink-ci/flink/flink-python/dev/.conda/bin/python3.7 -m virtualenv --no-download --python /home/travis/build/flink-ci/flink/flink-python/dev/.conda/envs/2.7/bin/python2.7 py27 (exited with code 1)
py33: commands succeeded
ERROR: py34: InvocationError for command /home/travis/build/flink-ci/flink/flink-python/dev/.conda/bin/python3.7 -m virtualenv --no-download --python /home/travis/build/flink-ci/flink/flink-python/dev/.conda/envs/3.4/bin/python3.4 py34 (exited with code 100)
py35: commands succeeded
py36: commands succeeded
py37: commands succeeded
tox checks... [FAILED]
PYTHON exited with EXIT CODE: 1. Trying to KILL watchdog (12896).
./tools/travis_watchdog.sh: line 229: 12896 Terminated watchdog

  was:
Log: [https://api.travis-ci.com/v3/job/216620643/log.txt]

Error:

___ summary 
 ERROR: py27: InvocationError for command 
/home/travis/build/flink-ci/flink/flink-python/dev/.conda/bin/python3.7 -m 
virtualenv --no-download --python 
/home/travis/build/flink-ci/flink/flink-python/dev/.conda/envs/2.7/bin/python2.7
 py27 (exited with code 1) py33: commands succeeded ERROR: py34: 
InvocationError for command 
/home/travis/build/flink-ci/flink/flink-python/dev/.conda/bin/python3.7 -m 
virtualenv --no-download --python 
/home/travis/build/flink-ci/flink/flink-python/dev/.conda/envs/3.4/bin/python3.4
 py34 (exited with code 100) py35: commands succeeded py36: commands succeeded 
py37: commands succeeded tox checks... [FAILED] PYTHON 
exited with EXIT CODE: 1. Trying to KILL watchdog (12896). 
./tools/travis_watchdog.sh: line 229: 12896 Terminated watchdog


> flink-python failed on Travis
> -
>
> Key: FLINK-13299
> URL: https://issues.apache.org/jira/browse/FLINK-13299
> Project: Flink
>  Issue Type: Bug
>Reporter: Haibo Sun
>Priority: Major
>
> Log: [https://api.travis-ci.com/v3/job/216620643/log.txt]
> Error:
> ___ summary 
> 
> ERROR: py27: InvocationError for command 
> /home/travis/build/flink-ci/flink/flink-python/dev/.conda/bin/python3.7 -m 
> virtualenv --no-download --python 
> /home/travis/build/flink-ci/flink/flink-python/dev/.conda/envs/2.7/bin/python2.7
>  py27 (exited with code 1) py33: commands succeeded ERROR: py34: 
> InvocationError for command 
> /home/travis/build/flink-ci/flink/flink-python/dev/.conda/bin/python3.7 -m 
> virtualenv --no-download --python 
> /home/travis/build/flink-ci/flink/flink-python/dev/.conda/envs/3.4/bin/python3.4
>  py34 (exited with code 100) py35: commands succeeded py36: commands 
> succeeded py37: commands succeeded tox checks... 
> [FAILED] PYTHON exited with EXIT CODE: 1. Trying to KILL watchdog 
> (12896). ./tools/travis_watchdog.sh: line 229: 12896 Terminated watchdog



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (FLINK-13299) flink-python failed on Travis

2019-07-16 Thread Haibo Sun (JIRA)
Haibo Sun created FLINK-13299:
-

 Summary: flink-python failed on Travis
 Key: FLINK-13299
 URL: https://issues.apache.org/jira/browse/FLINK-13299
 Project: Flink
  Issue Type: Bug
Reporter: Haibo Sun


Log: [https://api.travis-ci.com/v3/job/216620643/log.txt]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot edited a comment on issue #8756: [FLINK-12406] [Runtime] Report BLOCKING_PERSISTENT result partition meta back to client

2019-07-16 Thread GitBox
flinkbot edited a comment on issue #8756: [FLINK-12406] [Runtime] Report 
BLOCKING_PERSISTENT result partition meta back to client
URL: https://github.com/apache/flink/pull/8756#issuecomment-511260167
 
 
   ## CI report:
   
   * e290f6b65758fc7d199277e4345a75335de981b2 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119394041)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #8471: [FLINK-12529][runtime] Release record-deserializer buffers timely to improve the efficiency of heap usage on taskmanager

2019-07-16 Thread GitBox
flinkbot edited a comment on issue #8471: [FLINK-12529][runtime] Release 
record-deserializer buffers timely to improve the efficiency of heap usage on 
taskmanager
URL: https://github.com/apache/flink/pull/8471#issuecomment-510870797
 
 
   ## CI report:
   
   * 3a6874ec1f1e40441b068868a16570e0b96f083c : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/119054587)
   * 9b083e9be3176b039595d1a2479bf2e14f6f83fd : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119365891)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on issue #9024: [FLINK-13119] add blink table config to documentation

2019-07-16 Thread GitBox
wuchong commented on issue #9024: [FLINK-13119] add blink table config to 
documentation
URL: https://github.com/apache/flink/pull/9024#issuecomment-512082259
 
 
   Hi @zentol, do you want to take another look? 
   I think the `TableMeta` annotation is a good way to generate a useful SQL 
configuration page. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9110: [FLINK-13258] Add tests for LongHashPartition, StringCallGen and EqualiserCodeGenerator

2019-07-16 Thread GitBox
flinkbot edited a comment on issue #9110: [FLINK-13258] Add tests for 
LongHashPartition, StringCallGen and EqualiserCodeGenerator
URL: https://github.com/apache/flink/pull/9110#issuecomment-511305693
 
 
   ## CI report:
   
   * 26f95bfde851137cb71fa6060df7587535340afe : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119408618)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #9024: [FLINK-13119] add blink table config to documentation

2019-07-16 Thread GitBox
wuchong commented on a change in pull request #9024: [FLINK-13119] add blink 
table config to documentation
URL: https://github.com/apache/flink/pull/9024#discussion_r304201755
 
 

 ##
 File path: 
flink-docs/src/main/java/org/apache/flink/docs/configuration/ConfigOptionsDocGenerator.java
 ##
 @@ -260,10 +262,26 @@ private static String toHtmlTable(final 
List options) {
private static String toHtmlString(final OptionWithMetaInfo 
optionWithMetaInfo) {
ConfigOption option = optionWithMetaInfo.option;
String defaultValue = stringifyDefault(optionWithMetaInfo);
+   Documentation.TableMeta tableMeta = 
optionWithMetaInfo.field.getAnnotation(Documentation.TableMeta.class);
+   StringBuilder sb = new StringBuilder();
+   if (tableMeta != null) {
+   Documentation.ExecMode execMode = tableMeta.execMode();
+   if 
(Documentation.ExecMode.BATCH_STREAMING.equals(execMode)) {
+   sb.append(" ")
+   .append("BATCH")
+   .append(" ")
+   .append("STREAMING")
 
 Review comment:
   Use `ExecMode.STREAMING` instead.
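The snippet under review appends per-mode badge strings whose HTML tags were stripped in this archive. A simplified sketch of the suggested enum-based rendering is shown below; the names and markup are hypothetical stand-ins, not the actual generator code:

```java
/** Stand-in for the (assumed) Documentation.ExecMode annotation values. */
enum ExecMode { BATCH, STREAMING, BATCH_STREAMING }

/** Hypothetical badge renderer in the spirit of the reviewed snippet. */
class BadgeRenderer {
    /** Builds the HTML badge tags shown next to a documented config option. */
    static String render(ExecMode mode) {
        StringBuilder tags = new StringBuilder();
        if (mode == ExecMode.BATCH || mode == ExecMode.BATCH_STREAMING) {
            tags.append("<span class=\"label\">Batch</span>");
        }
        if (mode == ExecMode.STREAMING || mode == ExecMode.BATCH_STREAMING) {
            tags.append("<span class=\"label\">Streaming</span>");
        }
        return tags.toString();
    }
}
```

Comparing against the enum constants directly (rather than string literals) keeps the generator in sync if a mode is renamed.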


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #9024: [FLINK-13119] add blink table config to documentation

2019-07-16 Thread GitBox
wuchong commented on a change in pull request #9024: [FLINK-13119] add blink 
table config to documentation
URL: https://github.com/apache/flink/pull/9024#discussion_r304201721
 
 

 ##
 File path: 
flink-docs/src/main/java/org/apache/flink/docs/configuration/ConfigOptionsDocGenerator.java
 ##
 @@ -260,10 +262,26 @@ private static String toHtmlTable(final 
List options) {
private static String toHtmlString(final OptionWithMetaInfo 
optionWithMetaInfo) {
ConfigOption option = optionWithMetaInfo.option;
String defaultValue = stringifyDefault(optionWithMetaInfo);
+   Documentation.TableMeta tableMeta = 
optionWithMetaInfo.field.getAnnotation(Documentation.TableMeta.class);
+   StringBuilder sb = new StringBuilder();
+   if (tableMeta != null) {
+   Documentation.ExecMode execMode = tableMeta.execMode();
+   if 
(Documentation.ExecMode.BATCH_STREAMING.equals(execMode)) {
+   sb.append(" ")
+   .append("BATCH")
 
 Review comment:
   Use `ExecMode.BATCH` instead. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #9024: [FLINK-13119] add blink table config to documentation

2019-07-16 Thread GitBox
wuchong commented on a change in pull request #9024: [FLINK-13119] add blink 
table config to documentation
URL: https://github.com/apache/flink/pull/9024#discussion_r304201952
 
 

 ##
 File path: 
flink-docs/src/main/java/org/apache/flink/docs/configuration/ConfigOptionsDocGenerator.java
 ##
 @@ -260,10 +262,26 @@ private static String toHtmlTable(final 
List options) {
private static String toHtmlString(final OptionWithMetaInfo 
optionWithMetaInfo) {
ConfigOption option = optionWithMetaInfo.option;
String defaultValue = stringifyDefault(optionWithMetaInfo);
+   Documentation.TableMeta tableMeta = 
optionWithMetaInfo.field.getAnnotation(Documentation.TableMeta.class);
+   StringBuilder sb = new StringBuilder();
 
 Review comment:
   Use a meaningful local variable name? For example: `tags`, `metaStr`


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #9024: [FLINK-13119] add blink table config to documentation

2019-07-16 Thread GitBox
wuchong commented on a change in pull request #9024: [FLINK-13119] add blink 
table config to documentation
URL: https://github.com/apache/flink/pull/9024#discussion_r304202149
 
 

 ##
 File path: docs/_includes/generated/execution_config_configuration.html
 ##
 @@ -0,0 +1,122 @@
+
 
 Review comment:
   Agreed. We should have a separate page, for example 
`docs/dev/table/config.md`, and add some more description at the top of the 
page. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9110: [FLINK-13258] Add tests for LongHashPartition, StringCallGen and EqualiserCodeGenerator

2019-07-16 Thread GitBox
flinkbot edited a comment on issue #9110: [FLINK-13258] Add tests for 
LongHashPartition, StringCallGen and EqualiserCodeGenerator
URL: https://github.com/apache/flink/pull/9110#issuecomment-511305693
 
 
   ## CI report:
   
   * 26f95bfde851137cb71fa6060df7587535340afe : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119369702)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9135: [FLINK-13296][table] FunctionCatalog.lookupFunction() should check in memory functions if the target function doesn't exist in catalog

2019-07-16 Thread GitBox
flinkbot edited a comment on issue #9135: [FLINK-13296][table] 
FunctionCatalog.lookupFunction() should check in memory functions if the target 
function doesn't exist in catalog
URL: https://github.com/apache/flink/pull/9135#issuecomment-511940608
 
 
   ## CI report:
   
   * 85753f9eb8b07f1e384ee050bb3d70a7ca96e51b : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119407960)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9135: [FLINK-13296][table] FunctionCatalog.lookupFunction() should check in memory functions if the target function doesn't exist in catalog

2019-07-16 Thread GitBox
flinkbot edited a comment on issue #9135: [FLINK-13296][table] 
FunctionCatalog.lookupFunction() should check in memory functions if the target 
function doesn't exist in catalog
URL: https://github.com/apache/flink/pull/9135#issuecomment-511940608
 
 
   ## CI report:
   
   * 85753f9eb8b07f1e384ee050bb3d70a7ca96e51b : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119364175)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #8903: [FLINK-12747][docs] Getting Started - Table API Example Walkthrough

2019-07-16 Thread GitBox
flinkbot edited a comment on issue #8903: [FLINK-12747][docs] Getting Started - 
Table API Example Walkthrough
URL: https://github.com/apache/flink/pull/8903#issuecomment-510464651
 
 
   ## CI report:
   
   * b2821a6ae97fd943f3a66b672e85fbd2374126c4 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/118909729)
   * 0699f7e5f2240a4a1bc44c15f08e6a1df47d3b01 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119054579)
   * 509e634257496dd2d8d42d512901f5eb46a82c50 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119406891)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-12894) add Chinese documentation of how to configure and use catalogs in SQL CLI

2019-07-16 Thread frank wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-12894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886629#comment-16886629
 ] 

frank wang commented on FLINK-12894:


OK, I have done it.

> add Chinese documentation of how to configure and use catalogs in SQL CLI
> -
>
> Key: FLINK-12894
> URL: https://issues.apache.org/jira/browse/FLINK-12894
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Connectors / Hive, Documentation, 
> Table SQL / Client
>Reporter: Bowen Li
>Assignee: frank wang
>Priority: Major
> Fix For: 1.9.0
>
>
> Ticket of its corresponding English version is FLINK-12627.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-13086) add Chinese documentation for catalogs

2019-07-16 Thread frank wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886628#comment-16886628
 ] 

frank wang commented on FLINK-13086:


I have done it.

> add Chinese documentation for catalogs
> --
>
> Key: FLINK-13086
> URL: https://issues.apache.org/jira/browse/FLINK-13086
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Connectors / Hive, Documentation, 
> Table SQL / API
>Reporter: Bowen Li
>Assignee: frank wang
>Priority: Major
> Fix For: 1.9.0
>
>
> the ticket for corresponding English documentation is FLINK-12277



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (FLINK-13289) Blink Planner JDBCUpsertTableSink : UnsupportedOperationException "JDBCUpsertTableSink can not support "

2019-07-16 Thread Jark Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-13289:

Issue Type: Sub-task  (was: Bug)
Parent: FLINK-13285

> Blink Planner JDBCUpsertTableSink : UnsupportedOperationException 
> "JDBCUpsertTableSink can not support "
> 
>
> Key: FLINK-13289
> URL: https://issues.apache.org/jira/browse/FLINK-13289
> Project: Flink
>  Issue Type: Sub-task
>Affects Versions: 1.9.0
>Reporter: LakeShen
>Priority: Major
>
> Hi, in the flink-jdbc connector module I changed the Flink planner to the 
> Blink planner to run all test cases, because we want to use the Blink planner 
> in our program. When I ran the JDBCUpsertTableSinkITCase class, the 
> testUpsert method threw the exception:
> {color:red}java.lang.UnsupportedOperationException: JDBCUpsertTableSink can 
> not support {color}
> Looking at the source code: in the Flink planner, the StreamPlanner sets the 
> JDBCUpsertTableSink's keyFields, but in the Blink planner I didn't find 
> anywhere that sets the JDBCUpsertTableSink's keyFields, so keyFields is null, 
> and when JDBCUpsertTableSink.newFormat() executes it throws the exception.
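The described failure can be modeled in isolation: an upsert sink whose key fields must be populated by the planner before use, and which rejects writes when they were never set. This is a simplified plain-Java model, not the actual JDBCUpsertTableSink; `UpsertSinkModel` is an invented name:

```java
/** Simplified model of an upsert sink that needs key fields from the planner. */
class UpsertSinkModel {
    private String[] keyFields; // expected to be set by the planner before use

    void setKeyFields(String[] keys) {
        this.keyFields = keys;
    }

    /** Mirrors the check that fails when no planner has set the key fields. */
    String describeWriteMode() {
        if (keyFields == null || keyFields.length == 0) {
            throw new UnsupportedOperationException(
                    "upsert sink cannot operate without key fields");
        }
        return "UPSERT on " + String.join(",", keyFields);
    }
}
```

The reported bug matches the first branch: the Blink planner never calls the key-field setter, so the sink throws as soon as it is used.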



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot edited a comment on issue #8903: [FLINK-12747][docs] Getting Started - Table API Example Walkthrough

2019-07-16 Thread GitBox
flinkbot edited a comment on issue #8903: [FLINK-12747][docs] Getting Started - 
Table API Example Walkthrough
URL: https://github.com/apache/flink/pull/8903#issuecomment-510464651
 
 
   ## CI report:
   
   * b2821a6ae97fd943f3a66b672e85fbd2374126c4 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/118909729)
   * 0699f7e5f2240a4a1bc44c15f08e6a1df47d3b01 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119054579)
   * 509e634257496dd2d8d42d512901f5eb46a82c50 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119359153)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-13249) Distributed Jepsen test fails with blocked TaskExecutor

2019-07-16 Thread Till Rohrmann (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886622#comment-16886622
 ] 

Till Rohrmann commented on FLINK-13249:
---

I like this idea [~srichter]. It is not so invasive and should be easy to 
implement.

> Distributed Jepsen test fails with blocked TaskExecutor
> ---
>
> Key: FLINK-13249
> URL: https://issues.apache.org/jira/browse/FLINK-13249
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.9.0
>Reporter: Till Rohrmann
>Assignee: Stefan Richter
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.9.0
>
> Attachments: jstack_25661_YarnTaskExecutorRunner
>
>
> The distributed Jepsen test which kills {{JobMasters}} started to fail 
> recently. From a first glance, it looks as if the {{TaskExecutor's}} main 
> thread is blocked by some operation. Further investigation is required.





[GitHub] [flink] flinkbot edited a comment on issue #9111: [FLINK-13259] Fix typo about "access"

2019-07-16 Thread GitBox
flinkbot edited a comment on issue #9111: [FLINK-13259] Fix typo about "access"
URL: https://github.com/apache/flink/pull/9111#issuecomment-511309474
 
 
   ## CI report:
   
   * 439d80022ebd673b66b87ddeeb02cd89a35e6083 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119405332)
   




[jira] [Updated] (FLINK-13037) Translate "Concepts -> Glossary" page into Chinese

2019-07-16 Thread Jark Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-13037:

Description: 
Translate Glossary page into Chinese: 
https://ci.apache.org/projects/flink/flink-docs-master/concepts/glossary.html
The markdown file is located in {{docs/concepts/glossary.md}}.

> Translate "Concepts -> Glossary" page into Chinese
> --
>
> Key: FLINK-13037
> URL: https://issues.apache.org/jira/browse/FLINK-13037
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Konstantin Knauf
>Priority: Major
>
> Translate Glossary page into Chinese: 
> https://ci.apache.org/projects/flink/flink-docs-master/concepts/glossary.html
> The markdown file is located in {{docs/concepts/glossary.md}}.





[jira] [Commented] (FLINK-13037) Translate "Concepts -> Glossary" page into Chinese

2019-07-16 Thread Jark Wu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886613#comment-16886613
 ] 

Jark Wu commented on FLINK-13037:
-

The Glossary page has been merged in. We can start to translate it now. 
https://ci.apache.org/projects/flink/flink-docs-master/concepts/glossary.html

> Translate "Concepts -> Glossary" page into Chinese
> --
>
> Key: FLINK-13037
> URL: https://issues.apache.org/jira/browse/FLINK-13037
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Konstantin Knauf
>Priority: Major
>






[jira] [Updated] (FLINK-13298) write Chinese documentation and quickstart for Flink-Hive compatibility

2019-07-16 Thread Jark Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-13298:

Component/s: chinese-translation

> write Chinese documentation and quickstart for Flink-Hive compatibility
> ---
>
> Key: FLINK-13298
> URL: https://issues.apache.org/jira/browse/FLINK-13298
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Connectors / Hive, Documentation
>Reporter: Bowen Li
>Priority: Major
> Fix For: 1.9.0, 1.10.0
>
>
> its corresponding English one is FLINK-13276





[GitHub] [flink] flinkbot edited a comment on issue #9111: [FLINK-13259] Fix typo about "access"

2019-07-16 Thread GitBox
flinkbot edited a comment on issue #9111: [FLINK-13259] Fix typo about "access"
URL: https://github.com/apache/flink/pull/9111#issuecomment-511309474
 
 
   ## CI report:
   
   * 439d80022ebd673b66b87ddeeb02cd89a35e6083 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119356257)
   




[GitHub] [flink] flinkbot edited a comment on issue #9134: [hotfix] [docs] Small fix in CLI documentation

2019-07-16 Thread GitBox
flinkbot edited a comment on issue #9134: [hotfix] [docs] Small fix in CLI 
documentation
URL: https://github.com/apache/flink/pull/9134#issuecomment-511919130
 
 
   ## CI report:
   
   * 603d38f589edbbec7b0767fcd9a9aff11388a342 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119356182)
   




[GitHub] [flink] flinkbot edited a comment on issue #9105: [FLINK-13241][Yarn/Mesos] Fix Yarn/MesosResourceManager setting managed memory size into wrong configuration instance.

2019-07-16 Thread GitBox
flinkbot edited a comment on issue #9105: [FLINK-13241][Yarn/Mesos] Fix 
Yarn/MesosResourceManager setting managed memory size into wrong configuration 
instance.
URL: https://github.com/apache/flink/pull/9105#issuecomment-511499675
 
 
   ## CI report:
   
   * 7b6a81ed94056dd217f7feba9b155158fcfbfc4a : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119404317)
   




[GitHub] [flink] flinkbot edited a comment on issue #9132: [FLINK-13245][network] Fix the bug of file resource leak while canceling partition request

2019-07-16 Thread GitBox
flinkbot edited a comment on issue #9132: [FLINK-13245][network] Fix the bug of 
file resource leak while canceling partition request
URL: https://github.com/apache/flink/pull/9132#issuecomment-511875353
 
 
   ## CI report:
   
   * 3f40e06e867f0b661c62e8ba690156b3819135d5 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119403766)
   




[GitHub] [flink] xintongsong commented on issue #9105: [FLINK-13241][Yarn/Mesos] Fix Yarn/MesosResourceManager setting managed memory size into wrong configuration instance.

2019-07-16 Thread GitBox
xintongsong commented on issue #9105: [FLINK-13241][Yarn/Mesos] Fix 
Yarn/MesosResourceManager setting managed memory size into wrong configuration 
instance.
URL: https://github.com/apache/flink/pull/9105#issuecomment-512064004
 
 
   @azagrebin 
   Ok, I see what you mean. I think you are right. We do have a 
`YarnConfigurationITCase` that checks that TMs are started with the correct 
number of slots and memory sizes. Unfortunately, this test case does not cover 
the managed memory size. I just pushed another commit that adds a managed 
memory check to this test case.




[GitHub] [flink] flinkbot edited a comment on issue #9122: [FLINK-13269] [table] copy RelDecorrelator & FlinkFilterJoinRule to flink planner to fix CALCITE-3169 & CALCITE-3170

2019-07-16 Thread GitBox
flinkbot edited a comment on issue #9122: [FLINK-13269] [table] copy 
RelDecorrelator & FlinkFilterJoinRule to flink planner to fix CALCITE-3169 & 
CALCITE-3170
URL: https://github.com/apache/flink/pull/9122#issuecomment-511645215
 
 
   ## CI report:
   
   * 2f968ce7ffc295992471254265a1feef450a63f3 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119403274)
   




[GitHub] [flink] flinkbot edited a comment on issue #8621: [FLINK-12682][connectors] StringWriter support custom row delimiter

2019-07-16 Thread GitBox
flinkbot edited a comment on issue #8621: [FLINK-12682][connectors] 
StringWriter support custom row delimiter
URL: https://github.com/apache/flink/pull/8621#issuecomment-511183092
 
 
   ## CI report:
   
   * e21d12fa2c9b6305d90502ae05a9d574ce712fd1 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119402834)
   




[GitHub] [flink] flinkbot edited a comment on issue #9122: [FLINK-13269] [table] copy RelDecorrelator & FlinkFilterJoinRule to flink planner to fix CALCITE-3169 & CALCITE-3170

2019-07-16 Thread GitBox
flinkbot edited a comment on issue #9122: [FLINK-13269] [table] copy 
RelDecorrelator & FlinkFilterJoinRule to flink planner to fix CALCITE-3169 & 
CALCITE-3170
URL: https://github.com/apache/flink/pull/9122#issuecomment-511645215
 
 
   ## CI report:
   
   * 2f968ce7ffc295992471254265a1feef450a63f3 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/11935)
   




[GitHub] [flink] flinkbot edited a comment on issue #8621: [FLINK-12682][connectors] StringWriter support custom row delimiter

2019-07-16 Thread GitBox
flinkbot edited a comment on issue #8621: [FLINK-12682][connectors] 
StringWriter support custom row delimiter
URL: https://github.com/apache/flink/pull/8621#issuecomment-511183092
 
 
   ## CI report:
   
   * e21d12fa2c9b6305d90502ae05a9d574ce712fd1 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119349630)
   




[GitHub] [flink] flinkbot edited a comment on issue #9133: [FLINK-13293][state-processor-api][build] Add state processor api to opt/ directory in flink-dist

2019-07-16 Thread GitBox
flinkbot edited a comment on issue #9133: 
[FLINK-13293][state-processor-api][build] Add state processor api to opt/ 
directory in flink-dist
URL: https://github.com/apache/flink/pull/9133#issuecomment-511911711
 
 
   ## CI report:
   
   * 66eeac910377821fae2d7da08969551e38da6f7d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119353289)
   




[GitHub] [flink] flinkbot edited a comment on issue #8967: [FLINK-13059][Cassandra Connector] Release Semaphore correctly on Exception in send()

2019-07-16 Thread GitBox
flinkbot edited a comment on issue #8967: [FLINK-13059][Cassandra Connector] 
Release Semaphore correctly on Exception in send()
URL: https://github.com/apache/flink/pull/8967#issuecomment-511307628
 
 
   ## CI report:
   
   * 7fe85952cde10780b93bf27d784b77f1cc381d11 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119399131)
   




[GitHub] [flink] flinkbot edited a comment on issue #9126: [FLINK-13242][tests][coordination] Get SlotManager.failUnfulfillableRequest in main thread of StandaloneResourceManager for verification

2019-07-16 Thread GitBox
flinkbot edited a comment on issue #9126: [FLINK-13242][tests][coordination] 
Get SlotManager.failUnfulfillableRequest in main thread of 
StandaloneResourceManager for verification
URL: https://github.com/apache/flink/pull/9126#issuecomment-511719014
 
 
   ## CI report:
   
   * 910d6b067ec645e29edc9b05d324d9ca5a136a0b : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119398369)
   




[GitHub] [flink] flinkbot edited a comment on issue #8967: [FLINK-13059][Cassandra Connector] Release Semaphore correctly on Exception in send()

2019-07-16 Thread GitBox
flinkbot edited a comment on issue #8967: [FLINK-13059][Cassandra Connector] 
Release Semaphore correctly on Exception in send()
URL: https://github.com/apache/flink/pull/8967#issuecomment-511307628
 
 
   ## CI report:
   
   * 7fe85952cde10780b93bf27d784b77f1cc381d11 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119346706)
   




[GitHub] [flink] xuefuz commented on a change in pull request #9135: [FLINK-13296][table] FunctionCatalog.lookupFunction() should check in memory functions if the target function doesn't exist in cata

2019-07-16 Thread GitBox
xuefuz commented on a change in pull request #9135: [FLINK-13296][table] 
FunctionCatalog.lookupFunction() should check in memory functions if the target 
function doesn't exist in catalog
URL: https://github.com/apache/flink/pull/9135#discussion_r304172711
 
 

 ##
 File path: flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/FunctionCatalog.java
 ##
 @@ -159,32 +159,31 @@ public void registerScalarFunction(String name, ScalarFunction function) {
 	public Optional<FunctionLookup.Result> lookupFunction(String name) {
 		String functionName = normalizeName(name);
 
-		FunctionDefinition userCandidate = null;
+		FunctionDefinition userCandidate;
 
 		Catalog catalog = catalogManager.getCatalog(catalogManager.getCurrentCatalog()).get();
-		ObjectPath functionPath = new ObjectPath(catalogManager.getCurrentDatabase(), functionName);
 
-		if (catalog.functionExists(functionPath)) {
+		try {
+			CatalogFunction catalogFunction = catalog.getFunction(
+				new ObjectPath(catalogManager.getCurrentDatabase(), functionName));
+
 			if (catalog.getTableFactory().isPresent() &&
 				catalog.getTableFactory().get() instanceof FunctionDefinitionFactory) {
-				try {
-					CatalogFunction catalogFunction = catalog.getFunction(functionPath);
 
-					FunctionDefinitionFactory factory = (FunctionDefinitionFactory) catalog.getTableFactory().get();
+				FunctionDefinitionFactory factory = (FunctionDefinitionFactory) catalog.getTableFactory().get();
 
-					userCandidate = factory.createFunctionDefinition(functionName, catalogFunction);
+				userCandidate = factory.createFunctionDefinition(functionName, catalogFunction);
 
-					return Optional.of(
-						new FunctionLookup.Result(
-							ObjectIdentifier.of(catalogManager.getCurrentCatalog(), catalogManager.getCurrentDatabase(), name),
-							userCandidate)
-					);
-				} catch (FunctionNotExistException e) {
-					// Ignore
-				}
+				return Optional.of(
+					new FunctionLookup.Result(
+						ObjectIdentifier.of(catalogManager.getCurrentCatalog(), catalogManager.getCurrentDatabase(), name),
+						userCandidate)
+				);
 			} else {
 				// TODO: should go thru function definition discover service
 			}
+		} catch (FunctionNotExistException e) {
+			// Ignore
 Review comment:
   We can probably update the comment to be more meaningful.
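The control flow in the new version — try the catalog first, and fall back to the in-memory functions when the catalog lookup throws — can be sketched in a minimal form. The names below (`lookup_function`, plain dicts for the catalog and the in-memory registry) are hypothetical illustrations, not Flink's actual API:

```python
class FunctionNotFound(Exception):
    """Raised when neither the catalog nor the in-memory registry has the function."""


def lookup_function(name, catalog, in_memory):
    """Catalog-first lookup with an in-memory fallback."""
    try:
        # analogous to catalog.getFunction(...), which may throw
        # FunctionNotExistException for a missing function
        return ("catalog", catalog[name])
    except KeyError:
        # the catalog miss is deliberately swallowed; fall through
        # to the in-memory (built-in) functions instead
        if name in in_memory:
            return ("in-memory", in_memory[name])
        raise FunctionNotFound(name)


catalog = {"my_udf": "catalog definition"}
in_memory = {"concat": "built-in definition"}

print(lookup_function("my_udf", catalog, in_memory))   # found in the catalog
print(lookup_function("concat", catalog, in_memory))   # falls back to in-memory
```

This also illustrates why the `// Ignore` comment is worth expanding, as suggested above: the swallowed exception is not an error but the signal that triggers the fallback path.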



