[jira] [Assigned] (FLINK-15169) Errors happen in the scheduling of DefaultScheduler is not shown in WebUI

2019-12-10 Thread Gary Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Yao reassigned FLINK-15169:


Assignee: Gary Yao  (was: Zhu Zhu)

> Errors happen in the scheduling of DefaultScheduler is not shown in WebUI
> -
>
> Key: FLINK-15169
> URL: https://issues.apache.org/jira/browse/FLINK-15169
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.10.0
>Reporter: Zhu Zhu
>Assignee: Gary Yao
>Priority: Blocker
> Fix For: 1.10.0
>
>
> WebUI relies on {{ExecutionGraph#failureInfo}} and {{Execution#failureCause}} 
> to generate error info (via 
> {{JobExceptionsHandler#createJobExceptionsInfo}}). 
> Errors that happen during scheduling in the DefaultScheduler are not recorded 
> into those fields and thus cannot be shown to users in the WebUI (nor via 
> REST queries).
> To solve this:
> 1. Global failures should be recorded into {{ExecutionGraph#failureInfo}} via 
> {{ExecutionGraph#initFailureCause}}, which can be exposed as 
> {{SchedulerBase#initFailureCause}}.
> 2. For task failures, one solution is to avoid invoking 
> {{DefaultScheduler#handleTaskFailure}} directly on the scheduler's internal 
> failures. Instead, we can introduce 
> {{ExecutionVertexOperations#fail(ExecutionVertex)}} to hand the error to the 
> {{ExecutionVertex}} as a common failure.
> cc [~gjy]
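
A minimal sketch of the proposed hook, with the existing {{deploy}}/{{cancel}} 
signatures paraphrased from the runtime; the {{Throwable}} parameter and the 
Javadoc wording are assumptions for illustration:

{code:java}
import java.util.concurrent.CompletableFuture;

import org.apache.flink.runtime.JobException;
import org.apache.flink.runtime.executiongraph.ExecutionVertex;

/** Operations the DefaultScheduler performs on an ExecutionVertex. */
public interface ExecutionVertexOperations {

	void deploy(ExecutionVertex executionVertex) throws JobException;

	CompletableFuture<?> cancel(ExecutionVertex executionVertex);

	/**
	 * Proposed addition: hand an internal scheduler error to the vertex as a
	 * common task failure, so it is recorded in Execution#failureCause and
	 * surfaced by JobExceptionsHandler#createJobExceptionsInfo.
	 */
	void fail(ExecutionVertex executionVertex, Throwable cause);
}
{code}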



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-15169) Errors happen in the scheduling of DefaultScheduler is not shown in WebUI

2019-12-10 Thread Gary Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Yao reassigned FLINK-15169:


Assignee: Gary Yao

> Errors happen in the scheduling of DefaultScheduler is not shown in WebUI
> -
>
> Key: FLINK-15169
> URL: https://issues.apache.org/jira/browse/FLINK-15169
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.10.0
>Reporter: Zhu Zhu
>Assignee: Gary Yao
>Priority: Blocker
> Fix For: 1.10.0
>
>
> WebUI relies on {{ExecutionGraph#failureInfo}} and {{Execution#failureCause}} 
> to generate error info (via 
> {{JobExceptionsHandler#createJobExceptionsInfo}}). 
> Errors that happen during scheduling in the DefaultScheduler are not recorded 
> into those fields and thus cannot be shown to users in the WebUI (nor via 
> REST queries).
> To solve this:
> 1. Global failures should be recorded into {{ExecutionGraph#failureInfo}} via 
> {{ExecutionGraph#initFailureCause}}, which can be exposed as 
> {{SchedulerBase#initFailureCause}}.
> 2. For task failures, one solution is to avoid invoking 
> {{DefaultScheduler#handleTaskFailure}} directly on the scheduler's internal 
> failures. Instead, we can introduce 
> {{ExecutionVertexOperations#fail(ExecutionVertex)}} to hand the error to the 
> {{ExecutionVertex}} as a common failure.
> cc [~gjy]





[jira] [Assigned] (FLINK-15169) Errors happen in the scheduling of DefaultScheduler is not shown in WebUI

2019-12-10 Thread Gary Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Yao reassigned FLINK-15169:


Assignee: Zhu Zhu  (was: Gary Yao)

> Errors happen in the scheduling of DefaultScheduler is not shown in WebUI
> -
>
> Key: FLINK-15169
> URL: https://issues.apache.org/jira/browse/FLINK-15169
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.10.0
>Reporter: Zhu Zhu
>Assignee: Zhu Zhu
>Priority: Blocker
> Fix For: 1.10.0
>
>
> WebUI relies on {{ExecutionGraph#failureInfo}} and {{Execution#failureCause}} 
> to generate error info (via 
> {{JobExceptionsHandler#createJobExceptionsInfo}}). 
> Errors that happen during scheduling in the DefaultScheduler are not recorded 
> into those fields and thus cannot be shown to users in the WebUI (nor via 
> REST queries).
> To solve this:
> 1. Global failures should be recorded into {{ExecutionGraph#failureInfo}} via 
> {{ExecutionGraph#initFailureCause}}, which can be exposed as 
> {{SchedulerBase#initFailureCause}}.
> 2. For task failures, one solution is to avoid invoking 
> {{DefaultScheduler#handleTaskFailure}} directly on the scheduler's internal 
> failures. Instead, we can introduce 
> {{ExecutionVertexOperations#fail(ExecutionVertex)}} to hand the error to the 
> {{ExecutionVertex}} as a common failure.
> cc [~gjy]





[jira] [Assigned] (FLINK-15168) Exception is thrown when using kafka source connector with flink planner

2019-12-10 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz reassigned FLINK-15168:


Assignee: Dawid Wysakowicz  (was: Zhenghua Gao)

> Exception is thrown when using kafka source connector with flink planner
> 
>
> Key: FLINK-15168
> URL: https://issues.apache.org/jira/browse/FLINK-15168
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Legacy Planner
>Affects Versions: 1.10.0
>Reporter: Huang Xingbo
>Assignee: Dawid Wysakowicz
>Priority: Blocker
> Fix For: 1.10.0
>
>
> When running the following case using Kafka as the source connector with the 
> Flink planner, we get a RuntimeException:
> {code:java}
> StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
> env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
> env.setParallelism(1);
> 
> StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
> 
> tEnv.connect(new Kafka()
>         .version("0.11")
>         .topic("user")
>         .startFromEarliest()
>         .property("zookeeper.connect", "localhost:2181")
>         .property("bootstrap.servers", "localhost:9092"))
>     .withFormat(new Json()
>         .failOnMissingField(true)
>         .jsonSchema("{" +
>             "  type: 'object'," +
>             "  properties: {" +
>             "a: {" +
>             "  type: 'string'" +
>             "}," +
>             "b: {" +
>             "  type: 'string'" +
>             "}," +
>             "c: {" +
>             "  type: 'string'" +
>             "}," +
>             "time: {" +
>             "  type: 'string'," +
>             "  format: 'date-time'" +
>             "}" +
>             "  }" +
>             "}"
>         ))
>     .withSchema(new Schema()
>         .field("rowtime", Types.SQL_TIMESTAMP)
>             .rowtime(new Rowtime()
>                 .timestampsFromField("time")
>                 .watermarksPeriodicBounded(6))
>         .field("a", Types.STRING)
>         .field("b", Types.STRING)
>         .field("c", Types.STRING))
>     .inAppendMode()
>     .registerTableSource("source");
> 
> Table t = tEnv.scan("source").select("a");
> tEnv.toAppendStream(t, Row.class).print();
> tEnv.execute("test");
> {code}
> The RuntimeException details:
> {code:java}
> Exception in thread "main" java.lang.RuntimeException: Error while applying rule PushProjectIntoTableSourceScanRule, args [rel#26:FlinkLogicalCalc.LOGICAL(input=RelSubset#25,expr#0..3={inputs},a=$t1), Scan(table:[default_catalog, default_database, source], fields:(rowtime, a, b, c), source:Kafka011TableSource(rowtime, a, b, c))]
> 	at org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:235)
> 	at org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:631)
> 	at org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:327)
> 	at org.apache.flink.table.plan.Optimizer.runVolcanoPlanner(Optimizer.scala:280)
> 	at org.apache.flink.table.plan.Optimizer.optimizeLogicalPlan(Optimizer.scala:199)
> 	at org.apache.flink.table.plan.StreamOptimizer.optimize(StreamOptimizer.scala:66)
> 	at org.apache.flink.table.planner.StreamPlanner.translateToType(StreamPlanner.scala:389)
> 	at org.apache.flink.table.planner.StreamPlanner.org$apache$flink$table$planner$StreamPlanner$$translate(StreamPlanner.scala:180)
> 	at org.apache.flink.table.planner.StreamPlanner$$anonfun$translate$1.apply(StreamPlanner.scala:117)
> 	at org.apache.flink.table.planner.StreamPlanner$$anonfun$translate$1.apply(StreamPlanner.scala:117)
> 	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> 	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> 	at scala.collection.Iterator$class.foreach(Iterator.scala:891)
> 	at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
> 	at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> 	at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> {code}

[GitHub] [flink] flinkbot edited a comment on issue #10492: [FLINK-15140][runtime] Fix shuffle data compression doesn't work with BroadcastRecordWriter.

2019-12-10 Thread GitBox
flinkbot edited a comment on issue #10492: [FLINK-15140][runtime] Fix shuffle 
data compression doesn't work with BroadcastRecordWriter.
URL: https://github.com/apache/flink/pull/10492#issuecomment-563110907
 
 
   
   ## CI report:
   
   * bef118a977b3aa635fc748260d07b0d5079b2c0e Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/140180154) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3342)
 
   * fb385e77c7006db8552c81ef2c2005d37a63fb93 Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/140535760) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3445)
 
   * 0276c0af8515d2f81a44d253339f3b367d2bc5cb Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/140541200) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3449)
 
   * c75dc3a5e2f461a9d0492c2c23fb57487ecc20a6 UNKNOWN
   * 4c4950f249f8a991d0bccf90c2020a85150fe9f0 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/140549613) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3453)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zhijiangW commented on a change in pull request #10492: [FLINK-15140][runtime] Fix shuffle data compression doesn't work with BroadcastRecordWriter.

2019-12-10 Thread GitBox
zhijiangW commented on a change in pull request #10492: [FLINK-15140][runtime] 
Fix shuffle data compression doesn't work with BroadcastRecordWriter.
URL: https://github.com/apache/flink/pull/10492#discussion_r356444335
 
 

 ##
 File path: flink-runtime/src/test/java/org/apache/flink/runtime/io/network/buffer/BufferBuilderTestUtils.java
 ##
 @@ -67,18 +67,22 @@ public static Buffer buildSingleBuffer(BufferConsumer bufferConsumer) {
 	}
 
 	public static BufferConsumer createFilledFinishedBufferConsumer(int dataSize) {
-		return createFilledBufferConsumer(dataSize, dataSize, true);
+		return createFilledBufferConsumer(dataSize, dataSize, true, true);
 
 Review comment:
   Why were the previous tests changed to use the `isShareable` property by 
default? The shareable property is not the general scenario; it is only used 
for broadcast with compression. So it is enough to cover only the compression 
case for verification. The most common case should be non-shareable.
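
   A sketch of the reviewer's suggestion, assuming the four-argument 
`createFilledBufferConsumer` overload introduced in the diff above; the 
`...Shareable...` helper name is hypothetical:

```java
// Keep the common, non-shareable behavior as the default...
public static BufferConsumer createFilledFinishedBufferConsumer(int dataSize) {
	return createFilledBufferConsumer(dataSize, dataSize, true, false);
}

// ...and cover the broadcast-with-compression case with an explicit variant.
public static BufferConsumer createFilledFinishedShareableBufferConsumer(int dataSize) {
	return createFilledBufferConsumer(dataSize, dataSize, true, true);
}
```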




[jira] [Commented] (FLINK-13197) Verify querying Hive's view in Flink

2019-12-10 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16993269#comment-16993269
 ] 

Dawid Wysakowicz commented on FLINK-13197:
--

Good idea [~lirui]. Thank you!

> Verify querying Hive's view in Flink
> 
>
> Key: FLINK-13197
> URL: https://issues.apache.org/jira/browse/FLINK-13197
> Project: Flink
>  Issue Type: Test
>  Components: Connectors / Hive
>Reporter: Bowen Li
>Assignee: Rui Li
>Priority: Major
> Fix For: 1.10.0
>
>
> One goal of HiveCatalog and the Hive integration is to enable Flink-Hive 
> interoperability; that is, Flink should understand existing Hive meta-objects, 
> and Hive meta-objects created through Flink should be understood by Hive.
> Take the example of a Hive view v1 in HiveCatalog hc and database db. Unlike 
> an equivalent Flink view, whose full path in the expanded query would be 
> hc.db.v1, the Hive view's full path in the expanded query should be db.v1 
> so that Hive can understand it, no matter whether it was created by Hive or 
> Flink.
> [~lirui] can you help ensure that Flink can also query Hive's views with both 
> the Flink planner and the Blink planner?
> cc [~xuefuz]





[GitHub] [flink] zhijiangW commented on a change in pull request #10492: [FLINK-15140][runtime] Fix shuffle data compression doesn't work with BroadcastRecordWriter.

2019-12-10 Thread GitBox
zhijiangW commented on a change in pull request #10492: [FLINK-15140][runtime] 
Fix shuffle data compression doesn't work with BroadcastRecordWriter.
URL: https://github.com/apache/flink/pull/10492#discussion_r356442618
 
 

 ##
 File path: flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/BufferConsumer.java
 ##
 @@ -115,7 +131,8 @@ public Buffer build() {
 	 * @return a retained copy of self with separate indexes
 	 */
 	public BufferConsumer copy() {
-		return new BufferConsumer(buffer.retainBuffer(), writerPosition.positionMarker, currentReaderPosition);
+		checkState(isShareable, "The underlying buffer is not shareable.");
 
 Review comment:
   We might add some documentation to this method to warn that only a shareable 
`BufferConsumer` supports the `copy` function.
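
   A sketch of what that documentation could look like (the Javadoc wording is 
an assumption; the method body combines the old and new lines from the diff 
above):

```java
/**
 * Returns a retained copy of this BufferConsumer with separate indexes.
 *
 * <p>Note: only a shareable BufferConsumer supports {@code copy()}; calling
 * it on a non-shareable instance fails the state check below.
 *
 * @return a retained copy of self with separate indexes
 * @throws IllegalStateException if this BufferConsumer is not shareable
 */
public BufferConsumer copy() {
	checkState(isShareable, "The underlying buffer is not shareable.");
	return new BufferConsumer(buffer.retainBuffer(), writerPosition.positionMarker, currentReaderPosition);
}
```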




[GitHub] [flink] zhijiangW commented on a change in pull request #10492: [FLINK-15140][runtime] Fix shuffle data compression doesn't work with BroadcastRecordWriter.

2019-12-10 Thread GitBox
zhijiangW commented on a change in pull request #10492: [FLINK-15140][runtime] 
Fix shuffle data compression doesn't work with BroadcastRecordWriter.
URL: https://github.com/apache/flink/pull/10492#discussion_r356441741
 
 

 ##
 File path: flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/BufferConsumer.java
 ##
 @@ -115,7 +131,8 @@ public Buffer build() {
 	 * @return a retained copy of self with separate indexes
 	 */
 	public BufferConsumer copy() {
-		return new BufferConsumer(buffer.retainBuffer(), writerPosition.positionMarker, currentReaderPosition);
+		checkState(isShareable, "The underlying buffer is not shareable.");
 
 Review comment:
   From this point, my previous concern, whether the `isShareable` tag 
indicates both buffers & events or only buffers, might have its answer. The 
semantics should cover both buffers and events; otherwise copying an event 
would trip this check, although a shareable event would not be used in 
practice atm.




[GitHub] [flink] flinkbot edited a comment on issue #10525: [hotfix][doc] refine docs for catalog APIs

2019-12-10 Thread GitBox
flinkbot edited a comment on issue #10525: [hotfix][doc] refine docs for 
catalog APIs
URL: https://github.com/apache/flink/pull/10525#issuecomment-564411337
 
 
   
   ## CI report:
   
   * d99fa27649e4ee125b0d53ce173ad1e21e88fd0e Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/140551791) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3455)
 
   
   




[GitHub] [flink] flinkbot edited a comment on issue #10522: [FLINK-15153]Service selector needs to contain jobmanager component label(Backport to 1.10)

2019-12-10 Thread GitBox
flinkbot edited a comment on issue #10522: [FLINK-15153]Service selector needs 
to contain jobmanager component label(Backport to 1.10)
URL: https://github.com/apache/flink/pull/10522#issuecomment-564348907
 
 
   
   ## CI report:
   
   * 137c5df8e1bfffa09a00d7d7515c8b09d961753a Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/140532162) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3438)
 
   * 427445d4701f47fecdaa76d0b9a981ddb02597c3 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/140535715) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3442)
 
   
   




[GitHub] [flink] flinkbot edited a comment on issue #10523: [FLINK-15073][sql-client] SQL-CLI fails to run same query multiple times

2019-12-10 Thread GitBox
flinkbot edited a comment on issue #10523: [FLINK-15073][sql-client] SQL-CLI 
fails to run same query multiple times
URL: https://github.com/apache/flink/pull/10523#issuecomment-564363476
 
 
   
   ## CI report:
   
   * afdfe7941439eeec090d3123319c74ecb7fb21ee Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/140535725) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3443)
 
   * 912744c3bb41f9c60634bdd1de6cb1506d0a2cf3 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/140539587) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3446)
 
   * 6d5506b6b961f48cab6b64caf3f93c168c2355a4 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/140551772) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3454)
 
   
   




[jira] [Commented] (FLINK-15192) consider split 'SQL' page into multiple sub pages for better clarity

2019-12-10 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16993268#comment-16993268
 ] 

Jingsong Lee commented on FLINK-15192:
--

+1.

I kind of like Hive's names:
 * Data Definition Statements
 * Data Manipulation Statements
 * Data Retrieval: Queries

[https://cwiki.apache.org/confluence/display/Hive/LanguageManual]

> consider split 'SQL' page into multiple sub pages for better clarity
> 
>
> Key: FLINK-15192
> URL: https://issues.apache.org/jira/browse/FLINK-15192
> Project: Flink
>  Issue Type: Task
>  Components: Documentation
>Reporter: Bowen Li
>Assignee: Terry Wang
>Priority: Major
> Fix For: 1.10.0
>
>
> With FLINK-15190, we are going to add a bunch of DDL, which makes the page 
> too long and not really readable.
> I suggest splitting the "SQL" page into sub pages such as "SQL DDL", 
> "SQL DML", "SQL DQL", and others if needed.
> Assigned to Terry temporarily. 
> cc [~jark] [~lzljs3620320] 
> As an example, the SQL doc directory of Hive looks like the list below, which 
> is a lot better than Flink's current one:
> {code:java}
> CHILD PAGES
> Pages
> LanguageManual
> LanguageManual Cli
> LanguageManual DDL
> LanguageManual DML
> LanguageManual Select
> LanguageManual Joins
> LanguageManual LateralView
> LanguageManual Union
> LanguageManual SubQueries
> LanguageManual Sampling
> LanguageManual Explain
> LanguageManual VirtualColumns
> Configuration Properties
> LanguageManual ImportExport
> LanguageManual Authorization
> LanguageManual Types
> Literals
> LanguageManual VariableSubstitution
> LanguageManual ORC
> LanguageManual WindowingAndAnalytics
> LanguageManual Indexing
> LanguageManual JoinOptimization
> LanguageManual LZO
> LanguageManual Commands
> Parquet
> Enhanced Aggregation, Cube, Grouping and Rollup
> FileFormats
> Hive HPL/SQL
> {code}





[GitHub] [flink] zhijiangW commented on a change in pull request #10492: [FLINK-15140][runtime] Fix shuffle data compression doesn't work with BroadcastRecordWriter.

2019-12-10 Thread GitBox
zhijiangW commented on a change in pull request #10492: [FLINK-15140][runtime] 
Fix shuffle data compression doesn't work with BroadcastRecordWriter.
URL: https://github.com/apache/flink/pull/10492#discussion_r356429397
 
 

 ##
 File path: flink-runtime/src/main/java/org/apache/flink/runtime/io/network/api/writer/ChannelSelectorRecordWriter.java
 ##
 @@ -101,7 +101,7 @@ public BufferBuilder requestNewBufferBuilder(int targetChannel) throws IOException
 	checkState(bufferBuilders[targetChannel] == null || bufferBuilders[targetChannel].isFinished());
 
 	BufferBuilder bufferBuilder = targetPartition.getBufferBuilder();
-	targetPartition.addBufferConsumer(bufferBuilder.createBufferConsumer(), targetChannel);
+	targetPartition.addBufferConsumer(bufferBuilder.createBufferConsumer(false), targetChannel);
 
 Review comment:
   nit: if we do not make this change and keep using the previous 
`createBufferConsumer`, then we can also avoid adding `@VisibleForTesting` 
below.




[GitHub] [flink] zhijiangW commented on a change in pull request #10492: [FLINK-15140][runtime] Fix shuffle data compression doesn't work with BroadcastRecordWriter.

2019-12-10 Thread GitBox
zhijiangW commented on a change in pull request #10492: [FLINK-15140][runtime] 
Fix shuffle data compression doesn't work with BroadcastRecordWriter.
URL: https://github.com/apache/flink/pull/10492#discussion_r356435083
 
 

 ##
 File path: flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/BufferConsumer.java
 ##
 @@ -44,42 +44,58 @@
 
 	private int currentReaderPosition;
 
+	/** Whether the underlying {@link Buffer} can be shared by multi {@link BufferConsumer} instances. */
+	private final boolean isShareable;
 
 Review comment:
   `{@link Buffer}` here is not very accurate, because we have two cases that 
tag this field as true:
   
   - `bufferBuilder.createBufferConsumer(true)`: in this case, every created 
`BufferConsumer` refers to a different `Buffer` instance.
   
   - `BufferConsumer#copy()`: only in this case is the underlying `Buffer` 
shared by other instances.
   
   So maybe we adjust the description to `Whether the respective writable 
{@link BufferBuilder} is shared.`
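
   A sketch of the adjusted field documentation (only the single quoted 
sentence is the reviewer's wording; the parenthetical is an assumption):

```java
/**
 * Whether the respective writable {@link BufferBuilder} is shared, i.e.
 * whether more than one {@link BufferConsumer} was created from it (via
 * createBufferConsumer(true)) or from this consumer (via copy()).
 */
private final boolean isShareable;
```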




[GitHub] [flink] flinkbot edited a comment on issue #10492: [FLINK-15140][runtime] Fix shuffle data compression doesn't work with BroadcastRecordWriter.

2019-12-10 Thread GitBox
flinkbot edited a comment on issue #10492: [FLINK-15140][runtime] Fix shuffle 
data compression doesn't work with BroadcastRecordWriter.
URL: https://github.com/apache/flink/pull/10492#issuecomment-563110907
 
 
   
   ## CI report:
   
   * bef118a977b3aa635fc748260d07b0d5079b2c0e Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/140180154) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3342)
 
   * fb385e77c7006db8552c81ef2c2005d37a63fb93 Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/140535760) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3445)
 
   * 0276c0af8515d2f81a44d253339f3b367d2bc5cb Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/140541200) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3449)
 
   * c75dc3a5e2f461a9d0492c2c23fb57487ecc20a6 UNKNOWN
   * 4c4950f249f8a991d0bccf90c2020a85150fe9f0 UNKNOWN
   
   




[jira] [Commented] (FLINK-15192) consider split 'SQL' page into multiple sub pages for better clarity

2019-12-10 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16993261#comment-16993261
 ] 

Jark Wu commented on FLINK-15192:
-

+1 for this. I'm also frustrated by the current SQL page, because the 
third-level titles are hidden. 
We can split the page into "Overview", "DDL", "DML", and "Query" subpages under 
the "SQL" parent. 
I'm not sure whether "DQL" is a standard term; "Query" is probably safer and 
better. 

> consider split 'SQL' page into multiple sub pages for better clarity
> 
>
> Key: FLINK-15192
> URL: https://issues.apache.org/jira/browse/FLINK-15192
> Project: Flink
>  Issue Type: Task
>  Components: Documentation
>Reporter: Bowen Li
>Assignee: Terry Wang
>Priority: Major
> Fix For: 1.10.0
>
>
> With FLINK-15190, we are going to add a bunch of DDL, which makes the page 
> too long and not really readable.
> I suggest splitting the "SQL" page into sub pages such as "SQL DDL", 
> "SQL DML", "SQL DQL", and others if needed.
> Assigned to Terry temporarily. 
> cc [~jark] [~lzljs3620320] 
> As an example, the SQL doc directory of Hive looks like the list below, which 
> is a lot better than Flink's current one:
> {code:java}
> CHILD PAGES
> Pages
> LanguageManual
> LanguageManual Cli
> LanguageManual DDL
> LanguageManual DML
> LanguageManual Select
> LanguageManual Joins
> LanguageManual LateralView
> LanguageManual Union
> LanguageManual SubQueries
> LanguageManual Sampling
> LanguageManual Explain
> LanguageManual VirtualColumns
> Configuration Properties
> LanguageManual ImportExport
> LanguageManual Authorization
> LanguageManual Types
> Literals
> LanguageManual VariableSubstitution
> LanguageManual ORC
> LanguageManual WindowingAndAnalytics
> LanguageManual Indexing
> LanguageManual JoinOptimization
> LanguageManual LZO
> LanguageManual Commands
> Parquet
> Enhanced Aggregation, Cube, Grouping and Rollup
> FileFormats
> Hive HPL/SQL
> {code}





[GitHub] [flink] flinkbot edited a comment on issue #10434: [FLINK-15072][client] executeAsync in ContextEnvironment from CliFrontend cause unexpected exception

2019-12-10 Thread GitBox
flinkbot edited a comment on issue #10434: [FLINK-15072][client] executeAsync 
in ContextEnvironment from CliFrontend cause unexpected exception
URL: https://github.com/apache/flink/pull/10434#issuecomment-562048009
 
 
   
   ## CI report:
   
   * 3657f8be3e9a8eb83712a9eeafe0bc8e3c182c95 Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/139496022) 
   * ddf99958fc5bbd03e1cde6abbd58c22a4f7c5554 Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/139501940) 
   * 31f08bdde0f7fa3ad954b9a1c7790819c62e18cd Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/139534285) 
   * 6ae5448f1725843761d4793e16846278d617dbaa Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/140374151) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3397)
 
   * c984c7d529d401a6e2e9029479c26310996de496 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/140394922) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3404)
 
   * 279e7a0bdbf3a4a1ea02ba18db64e2330193055b UNKNOWN
   * 77c9d550e3ad6a80c953bb789e47a0e7dafd520f Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/140532207) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3439)
 
   * 5c50c03749c3f69147ae401fe4313ef28c819d7a Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/140535748) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3444)
 
   * 15f502bd099959436eec4251aaa8de1acd248c40 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/140542849) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3450)
 
   
   




[GitHub] [flink] wangyang0918 commented on issue #10522: [FLINK-15153]Service selector needs to contain jobmanager component label(Backport to 1.10)

2019-12-10 Thread GitBox
wangyang0918 commented on issue #10522: [FLINK-15153]Service selector needs to 
contain jobmanager component label(Backport to 1.10)
URL: https://github.com/apache/flink/pull/10522#issuecomment-564412859
 
 
   @flinkbot run travis




[jira] [Commented] (FLINK-15191) Fix can't create table source for Kafka if watermark or computed column is defined

2019-12-10 Thread Leonard Xu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16993258#comment-16993258
 ] 

Leonard Xu commented on FLINK-15191:


[~jark] thanks. [~lzljs3620320] I think so; I'll fix it soon.

> Fix can't create table source for Kafka if watermark or computed column is 
> defined
> --
>
> Key: FLINK-15191
> URL: https://issues.apache.org/jira/browse/FLINK-15191
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Kafka
>Reporter: Jark Wu
>Assignee: Leonard Xu
>Priority: Blocker
> Fix For: 1.10.0
>
>
> We should add {{schema.watermark.*}} to the supported properties of the Kafka 
> factory and add some tests.
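
A minimal sketch of the change, mirroring the CsvTableSinkFactoryBase diff 
quoted later in this digest; the exact Kafka factory class and the surrounding 
properties are assumptions:

{code:java}
// In the Kafka factory's supportedProperties() (hypothetical placement):
// existing schema properties ...
properties.add(SCHEMA + ".#." + DescriptorProperties.TABLE_SCHEMA_NAME);
// the fix: accept the watermark properties derived from the DDL
properties.add(SCHEMA + "." + DescriptorProperties.WATERMARK + ".*");
{code}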





[jira] [Commented] (FLINK-15191) Fix can't create table source for Kafka if watermark or computed column is defined

2019-12-10 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16993257#comment-16993257
 ] 

Jingsong Lee commented on FLINK-15191:
--

I marked it as a blocker since I think watermarks and computed columns are not 
useful without this fix.

> Fix can't create table source for Kafka if watermark or computed column is 
> defined
> --
>
> Key: FLINK-15191
> URL: https://issues.apache.org/jira/browse/FLINK-15191
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Kafka
>Reporter: Jark Wu
>Assignee: Leonard Xu
>Priority: Blocker
> Fix For: 1.10.0
>
>
> We should add {{schema.watermark.*}} to the supported properties of the Kafka 
> factory and add some tests.





[GitHub] [flink] flinkbot edited a comment on issue #10523: [FLINK-15073][sql-client] SQL-CLI fails to run same query multiple times

2019-12-10 Thread GitBox
flinkbot edited a comment on issue #10523: [FLINK-15073][sql-client] SQL-CLI 
fails to run same query multiple times
URL: https://github.com/apache/flink/pull/10523#issuecomment-564363476
 
 
   
   ## CI report:
   
   * afdfe7941439eeec090d3123319c74ecb7fb21ee Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/140535725) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3443)
 
   * 912744c3bb41f9c60634bdd1de6cb1506d0a2cf3 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/140539587) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3446)
 
   * 6d5506b6b961f48cab6b64caf3f93c168c2355a4 UNKNOWN
   
   




[jira] [Updated] (FLINK-15191) Fix can't create table source for Kafka if watermark or computed column is defined

2019-12-10 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-15191:
-
Priority: Blocker  (was: Major)

> Fix can't create table source for Kafka if watermark or computed column is 
> defined
> --
>
> Key: FLINK-15191
> URL: https://issues.apache.org/jira/browse/FLINK-15191
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Kafka
>Reporter: Jark Wu
>Assignee: Leonard Xu
>Priority: Blocker
> Fix For: 1.10.0
>
>
> We should add {{schema.watermark.*}} to the supported properties of the Kafka 
> factory and add some tests.





[GitHub] [flink] flinkbot commented on issue #10525: [hotfix][doc] refine docs for catalog APIs

2019-12-10 Thread GitBox
flinkbot commented on issue #10525: [hotfix][doc] refine docs for catalog APIs
URL: https://github.com/apache/flink/pull/10525#issuecomment-564411337
 
 
   
   ## CI report:
   
   * d99fa27649e4ee125b0d53ce173ad1e21e88fd0e UNKNOWN
   
   




[GitHub] [flink] flinkbot edited a comment on issue #10518: [FLINK-15124][table] Fix types with precision defined in DDL can't be executed

2019-12-10 Thread GitBox
flinkbot edited a comment on issue #10518: [FLINK-15124][table] Fix types with 
precision defined in DDL can't be executed
URL: https://github.com/apache/flink/pull/10518#issuecomment-564108895
 
 
   
   ## CI report:
   
   * 9a0046d4b1d5b28907b617e5f82dab114ee97a31 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/140457699) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3425)
 
   * 704e473f520d70d7dcced6ff8d36cbb87418bf7b Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/140547641) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3451)
 
   * 81c7bdc33398d66e43e0578963e92d80c9ef6f04 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/140549601) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3452)
 
   
   




[jira] [Created] (FLINK-15192) consider split 'SQL' page into multiple sub pages for better clarity

2019-12-10 Thread Bowen Li (Jira)
Bowen Li created FLINK-15192:


 Summary: consider split 'SQL' page into multiple sub pages for 
better clarity
 Key: FLINK-15192
 URL: https://issues.apache.org/jira/browse/FLINK-15192
 Project: Flink
  Issue Type: Task
  Components: Documentation
Reporter: Bowen Li
Assignee: Terry Wang
 Fix For: 1.10.0


With FLINK-15190, we are going to add a bunch of DDL, which makes the page too 
long and not really readable.

I suggest splitting the "SQL" page into sub pages such as "SQL DDL", "SQL DML", 
"SQL DQL", and others if needed.

Assigned to Terry temporarily. 

cc [~jark] [~lzljs3620320] 

As an example, the SQL doc directory of Hive looks like the list below, which 
is a lot better than Flink's current one:

{code:java}
CHILD PAGES
Pages
LanguageManual
LanguageManual Cli
LanguageManual DDL
LanguageManual DML
LanguageManual Select
LanguageManual Joins
LanguageManual LateralView
LanguageManual Union
LanguageManual SubQueries
LanguageManual Sampling
LanguageManual Explain
LanguageManual VirtualColumns
Configuration Properties
LanguageManual ImportExport
LanguageManual Authorization
LanguageManual Types
Literals
LanguageManual VariableSubstitution
LanguageManual ORC
LanguageManual WindowingAndAnalytics
LanguageManual Indexing
LanguageManual JoinOptimization
LanguageManual LZO
LanguageManual Commands
Parquet
Enhanced Aggregation, Cube, Grouping and Rollup
FileFormats
Hive HPL/SQL
{code}






[GitHub] [flink] flinkbot edited a comment on issue #10501: [FLINK-15139][table sql / client]misc end to end test failed cause loss jars in converting to jobgraph

2019-12-10 Thread GitBox
flinkbot edited a comment on issue #10501: [FLINK-15139][table sql / 
client]misc end to end test failed  cause loss jars in converting to jobgraph
URL: https://github.com/apache/flink/pull/10501#issuecomment-563307886
 
 
   
   ## CI report:
   
   * 88033844e8102c06281586898f8641dd4cba3a3c Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/140265006) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3377)
 Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3372)
 
   * 1fd40c172a561fcd13b7f94974575450c4e3b48c Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/140541188) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3448)
 
   
   




[jira] [Assigned] (FLINK-15191) Fix can't create table source for Kafka if watermark or computed column is defined

2019-12-10 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-15191:
---

Assignee: Leonard Xu

> Fix can't create table source for Kafka if watermark or computed column is 
> defined
> --
>
> Key: FLINK-15191
> URL: https://issues.apache.org/jira/browse/FLINK-15191
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Kafka
>Reporter: Jark Wu
>Assignee: Leonard Xu
>Priority: Major
> Fix For: 1.10.0
>
>
> We should add {{schema.watermark.*}} to the supported properties of the Kafka 
> factory and add some tests.





[GitHub] [flink] leonardBang commented on a change in pull request #10514: [FLINK-15167][sql-client] SQL CLI library option doesn't work for Hive connector

2019-12-10 Thread GitBox
leonardBang commented on a change in pull request #10514: 
[FLINK-15167][sql-client] SQL CLI library option doesn't work for Hive connector
URL: https://github.com/apache/flink/pull/10514#discussion_r356430441
 
 

 ##
 File path: flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/local/ExecutionContext.java
 ##
 @@ -151,6 +153,9 @@ private ExecutionContext(
 	classLoader = FlinkUserCodeClassLoaders.parentFirst(
 		dependencies.toArray(new URL[dependencies.size()]),
 		this.getClass().getClassLoader());
+	if (!dependencies.isEmpty()) {
+		flinkConfig.set(PipelineOptions.JARS, dependencies.stream().map(URL::toString).collect(Collectors.toList()));
 
 Review comment:
   @lirui-apache I think PR https://github.com/apache/flink/pull/10501 can 
fix this issue, so I suggest reproducing after that PR is merged.




[GitHub] [flink] flinkbot commented on issue #10525: [hotfix][doc] refine docs for catalog APIs

2019-12-10 Thread GitBox
flinkbot commented on issue #10525: [hotfix][doc] refine docs for catalog APIs
URL: https://github.com/apache/flink/pull/10525#issuecomment-564408399
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit d99fa27649e4ee125b0d53ce173ad1e21e88fd0e (Wed Dec 11 
07:02:44 UTC 2019)
   
   ✅ no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] bowenli86 commented on issue #10525: [hotfix][doc] refine docs for catalog APIs

2019-12-10 Thread GitBox
bowenli86 commented on issue #10525: [hotfix][doc] refine docs for catalog APIs
URL: https://github.com/apache/flink/pull/10525#issuecomment-564407989
 
 
   @xuefuz @lirui-apache @JingsongLi @zjuwangg 




[GitHub] [flink] bowenli86 opened a new pull request #10525: [hotfix][doc] refine docs for catalog APIs

2019-12-10 Thread GitBox
bowenli86 opened a new pull request #10525: [hotfix][doc] refine docs for 
catalog APIs
URL: https://github.com/apache/flink/pull/10525
 
 
   ## What is the purpose of the change
   
   refine docs for new Catalog APIs in case users want to programmatically 
manipulate catalog objects
   
   ## Brief change log
   
   refine docs for new Catalog APIs in case users want to programmatically 
manipulate catalog objects
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
   n/a
   
   ## Documentation
   
   n/a




[GitHub] [flink] flinkbot edited a comment on issue #10492: [FLINK-15140][runtime] Fix shuffle data compression doesn't work with BroadcastRecordWriter.

2019-12-10 Thread GitBox
flinkbot edited a comment on issue #10492: [FLINK-15140][runtime] Fix shuffle 
data compression doesn't work with BroadcastRecordWriter.
URL: https://github.com/apache/flink/pull/10492#issuecomment-563110907
 
 
   
   ## CI report:
   
   * bef118a977b3aa635fc748260d07b0d5079b2c0e Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/140180154) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3342)
 
   * fb385e77c7006db8552c81ef2c2005d37a63fb93 Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/140535760) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3445)
 
   * 0276c0af8515d2f81a44d253339f3b367d2bc5cb Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/140541200) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3449)
 
   * c75dc3a5e2f461a9d0492c2c23fb57487ecc20a6 UNKNOWN
   
   




[jira] [Commented] (FLINK-15153) Service selector needs to contain jobmanager component label

2019-12-10 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16993252#comment-16993252
 ] 

Dian Fu commented on FLINK-15153:
-

Thanks. Just noticed that PR. (y)

> Service selector needs to contain jobmanager component label
> 
>
> Key: FLINK-15153
> URL: https://issues.apache.org/jira/browse/FLINK-15153
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Reporter: Yang Wang
>Assignee: Yang Wang
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The jobmanager label needs to be added to the service selector. Otherwise, it 
> may select the wrong backend pods (taskmanagers).
> The internal service is used for taskmanagers talking to the jobmanager. If 
> it does not have the correct backend pods, the taskmanagers may fail to 
> register.
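
A minimal sketch of the fix, assuming the fabric8 Kubernetes client used by 
Flink's Kubernetes integration; the label keys/values and the helper class are 
illustrative, not the actual decorator code:

{code:java}
import io.fabric8.kubernetes.api.model.Service;
import io.fabric8.kubernetes.api.model.ServiceBuilder;

class InternalServiceSketch {

	static Service build(String clusterId) {
		return new ServiceBuilder()
			.withNewMetadata()
				.withName(clusterId)
			.endMetadata()
			.withNewSpec()
				.addToSelector("app", clusterId)
				// the fix: without the component label, the selector also
				// matches taskmanager pods
				.addToSelector("component", "jobmanager")
			.endSpec()
			.build();
	}
}
{code}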





[GitHub] [flink] zhijiangW commented on a change in pull request #10492: [FLINK-15140][runtime] Fix shuffle data compression doesn't work with BroadcastRecordWriter.

2019-12-10 Thread GitBox
zhijiangW commented on a change in pull request #10492: [FLINK-15140][runtime] 
Fix shuffle data compression doesn't work with BroadcastRecordWriter.
URL: https://github.com/apache/flink/pull/10492#discussion_r356424126
 
 

 ##
 File path: flink-runtime/src/main/java/org/apache/flink/runtime/io/network/api/serialization/EventSerializer.java
 ##
 @@ -289,7 +289,7 @@ public static BufferConsumer toBufferConsumer(AbstractEvent event) throws IOException
 
 	MemorySegment data = MemorySegmentFactory.wrap(serializedEvent.array());
 
-	return new BufferConsumer(data, FreeingBufferRecycler.INSTANCE, false);
+	return new BufferConsumer(data, FreeingBufferRecycler.INSTANCE, false, true);
 
 Review comment:
   It is not always `true` for the different cases calling `toBufferConsumer`, 
e.g. `ResultSubpartition#finish()` generating `EndOfPartitionEvent`.
   So we have two options:
   
   - Provide a respective `isShareable` argument in the `toBufferConsumer` 
method. Then this tag in `BufferConsumer` indicates both buffers and events.
   
   - The `isShareable` field in `BufferConsumer` only indicates buffers, not 
events; then we do not need to touch this method and can use the default 
constructor. That also makes sense because compression does not work on 
events ATM.
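
   A minimal sketch of the first option (the signature change is only the 
reviewer's proposal; the body reuses the lines from the diff above, and the 
serialization call is assumed context from `EventSerializer`):

```java
// Let callers decide whether the event's BufferConsumer is shareable.
public static BufferConsumer toBufferConsumer(AbstractEvent event, boolean isShareable) throws IOException {
	final ByteBuffer serializedEvent = EventSerializer.toSerializedEvent(event);

	MemorySegment data = MemorySegmentFactory.wrap(serializedEvent.array());

	return new BufferConsumer(data, FreeingBufferRecycler.INSTANCE, false, isShareable);
}
```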




[GitHub] [flink] wuchong commented on a change in pull request #10518: [FLINK-15124][table] Fix types with precision defined in DDL can't be executed

2019-12-10 Thread GitBox
wuchong commented on a change in pull request #10518: [FLINK-15124][table] Fix 
types with precision defined in DDL can't be executed
URL: https://github.com/apache/flink/pull/10518#discussion_r356428513
 
 

 ##
 File path: flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/SinkCodeGenerator.scala
 ##
 @@ -223,10 +224,10 @@ object SinkCodeGenerator {
       case (fieldTypeInfo, i) =>
         val requestedTypeInfo = tt.getTypeAt(i)
         validateFieldType(requestedTypeInfo)
-        if (!areTypesCompatible(
-          fromTypeInfoToLogicalType(fieldTypeInfo),
-          fromTypeInfoToLogicalType(requestedTypeInfo)) &&
-          !requestedTypeInfo.isInstanceOf[GenericTypeInfo[Object]]) {
+        if (!PlannerTypeUtils.isAssignable(
+            fromTypeInfoToLogicalType(fieldTypeInfo),
 
 Review comment:
   OK.




[GitHub] [flink] wuchong commented on a change in pull request #10518: [FLINK-15124][table] Fix types with precision defined in DDL can't be executed

2019-12-10 Thread GitBox
wuchong commented on a change in pull request #10518: [FLINK-15124][table] Fix 
types with precision defined in DDL can't be executed
URL: https://github.com/apache/flink/pull/10518#discussion_r356428371
 
 

 ##
 File path: flink-table/flink-table-api-java-bridge/src/main/java/org/apache/flink/table/sinks/CsvTableSinkFactoryBase.java
 ##
 @@ -75,6 +75,8 @@
 	properties.add(SCHEMA + ".#." + DescriptorProperties.TABLE_SCHEMA_TYPE);
 	properties.add(SCHEMA + ".#." + DescriptorProperties.TABLE_SCHEMA_DATA_TYPE);
 	properties.add(SCHEMA + ".#." + DescriptorProperties.TABLE_SCHEMA_NAME);
+	// schema watermark
+	properties.add(SCHEMA + "." + DescriptorProperties.WATERMARK + ".*");
 
 Review comment:
   Issue created: FLINK-15191.




[jira] [Created] (FLINK-15191) Fix can't create table source for Kafka if watermark or computed column is defined

2019-12-10 Thread Jark Wu (Jira)
Jark Wu created FLINK-15191:
---

 Summary: Fix can't create table source for Kafka if watermark or 
computed column is defined
 Key: FLINK-15191
 URL: https://issues.apache.org/jira/browse/FLINK-15191
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / Kafka
Reporter: Jark Wu
 Fix For: 1.10.0


We should add {{schema.watermark.*}} to the supported properties of the Kafka 
factory and add some tests.





[jira] [Commented] (FLINK-15153) Service selector needs to contain jobmanager component label

2019-12-10 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16993251#comment-16993251
 ] 

Yang Wang commented on FLINK-15153:
---

[~dian.fu] Thanks for the kind reminder. I have opened a backport PR to 1.10. 
Once Travis passes, it will be merged.

> Service selector needs to contain jobmanager component label
> 
>
> Key: FLINK-15153
> URL: https://issues.apache.org/jira/browse/FLINK-15153
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Reporter: Yang Wang
>Assignee: Yang Wang
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The jobmanager label needs to be added to the service selector. Otherwise, it 
> may select the wrong backend pods (taskmanagers).
> The internal service is used for taskmanagers talking to the jobmanager. If 
> it does not have the correct backend pods, the taskmanagers may fail to 
> register.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-14953) Parquet table source should use schema type to build FilterPredicate

2019-12-10 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16993249#comment-16993249
 ] 

Dian Fu commented on FLINK-14953:
-

[~lzljs3620320] You're right. I missed it. Thanks for the information! (y)

> Parquet table source should use schema type to build FilterPredicate
> 
>
> Key: FLINK-14953
> URL: https://issues.apache.org/jira/browse/FLINK-14953
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.8.0, 1.8.2, 1.9.0, 1.9.1
>Reporter: Zhenqiu Huang
>Assignee: Zhenqiu Huang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0, 1.9.2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The issue happens when the data type of a value in a predicate inferred from 
> SQL doesn't match the parquet schema. For example, foo is a long type and 
> foo < 1 is the predicate. The literal will be recognized as an integer, which 
> causes the parquet FilterPredicate to be mistakenly created for a column of 
> Integer type. Then, the following exception occurs.
> java.lang.UnsupportedOperationException
>   at 
> org.apache.parquet.filter2.recordlevel.IncrementallyUpdatedFilterPredicate$ValueInspector.update(IncrementallyUpdatedFilterPredicate.java:71)
>   at 
> org.apache.parquet.filter2.recordlevel.FilteringPrimitiveConverter.addLong(FilteringPrimitiveConverter.java:105)
>   at 
> org.apache.parquet.column.impl.ColumnReaderImpl$2$4.writeValue(ColumnReaderImpl.java:268)
>   at 
> org.apache.parquet.column.impl.ColumnReaderImpl.writeCurrentValueToConverter(ColumnReaderImpl.java:367)
>   at 
> org.apache.parquet.io.RecordReaderImplementation.read(RecordReaderImplementation.java:406)
>   at 
> org.apache.flink.formats.parquet.utils.ParquetRecordReader.readNextRecord(ParquetRecordReader.java:235)
>   at 
> org.apache.flink.formats.parquet.utils.ParquetRecordReader.reachEnd(ParquetRecordReader.java:207)
>   at 
> org.apache.flink.formats.parquet.ParquetInputFormat.reachedEnd(ParquetInputFormat.java:233)
>   at 
> org.apache.flink.api.common.operators.GenericDataSourceBase.executeOnCollections(GenericDataSourceBase.java:231)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.executeDataSource(CollectionExecutor.java:219)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:155)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.executeUnaryOperator(CollectionExecutor.java:229)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:149)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:131)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.executeDataSink(CollectionExecutor.java:182)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:158)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:131)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:115)
>   at 
> org.apache.flink.api.java.CollectionEnvironment.execute(CollectionEnvironment.java:38)
>   at 
> org.apache.flink.test.util.CollectionTestEnvironment.execute(CollectionTestEnvironment.java:52)
>   at 
> org.apache.flink.test.util.CollectionTestEnvironment.execute(CollectionTestEnvironment.java:47)
>   at org.apache.flink.api.java.DataSet.collect(DataSet.java:413)
>   at 
> org.apache.flink.formats.parquet.ParquetTableSourceITCase.testScanWithProjectionAndFilter(ParquetTableSourceITCase.java:91)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.eval
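
To make the mismatch concrete, a minimal sketch against Parquet's 
{{FilterApi}} (column name as in the example above):

{code:java}
import org.apache.parquet.filter2.predicate.FilterApi;
import org.apache.parquet.filter2.predicate.FilterPredicate;

// foo is int64 in the parquet schema
// wrong: the SQL literal 1 is inferred as INT, so an int column is built
FilterPredicate bad = FilterApi.lt(FilterApi.intColumn("foo"), 1);
// right: derive the column and literal types from the schema instead
FilterPredicate good = FilterApi.lt(FilterApi.longColumn("foo"), 1L);
{code}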

[jira] [Updated] (FLINK-14958) ProgramTargetDescriptor#jobID possibly can be of type JobID instead of String

2019-12-10 Thread Zili Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zili Chen updated FLINK-14958:
--
Fix Version/s: 1.11.0

> ProgramTargetDescriptor#jobID possibly can be of type JobID instead of String
> -
>
> Key: FLINK-14958
> URL: https://issues.apache.org/jira/browse/FLINK-14958
> Project: Flink
>  Issue Type: Improvement
>Reporter: Zili Chen
>Assignee: AT-Fieldless
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14958) ProgramTargetDescriptor#jobID can be of type JobID

2019-12-10 Thread Zili Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zili Chen updated FLINK-14958:
--
Summary: ProgramTargetDescriptor#jobID can be of type JobID  (was: 
ProgramTargetDescriptor#jobID possibly can be of type JobID instead of String)

> ProgramTargetDescriptor#jobID can be of type JobID
> --
>
> Key: FLINK-14958
> URL: https://issues.apache.org/jira/browse/FLINK-14958
> Project: Flink
>  Issue Type: Improvement
>Reporter: Zili Chen
>Assignee: AT-Fieldless
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-14958) ProgramTargetDescriptor#jobID possibly can be of type JobID instead of String

2019-12-10 Thread Zili Chen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16993248#comment-16993248
 ] 

Zili Chen commented on FLINK-14958:
---

[~dian.fu] this is a minor code refactor that I don't think should be picked 
into 1.10.

> ProgramTargetDescriptor#jobID possibly can be of type JobID instead of String
> -
>
> Key: FLINK-14958
> URL: https://issues.apache.org/jira/browse/FLINK-14958
> Project: Flink
>  Issue Type: Improvement
>Reporter: Zili Chen
>Assignee: AT-Fieldless
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
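
A minimal sketch of the refactor, assuming the descriptor currently stores the 
id as a plain String:

{code:java}
// before: private final String jobID;
private final JobID jobID; // org.apache.flink.api.common.JobID

// callers that still need the textual form can use jobID.toString();
// a String can be parsed back via JobID.fromHexString(...)
{code}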




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14958) ProgramTargetDescriptor#jobID can be of type JobID

2019-12-10 Thread Zili Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zili Chen updated FLINK-14958:
--
Priority: Minor  (was: Major)

> ProgramTargetDescriptor#jobID can be of type JobID
> --
>
> Key: FLINK-14958
> URL: https://issues.apache.org/jira/browse/FLINK-14958
> Project: Flink
>  Issue Type: Improvement
>Reporter: Zili Chen
>Assignee: AT-Fieldless
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-14172) Implement KubeClient with official Java client library for kubernetes

2019-12-10 Thread ouyangwulin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16993247#comment-16993247
 ] 

ouyangwulin commented on FLINK-14172:
-

Please assign this to me.

> Implement KubeClient with official Java client library for kubernetes
> -
>
> Key: FLINK-14172
> URL: https://issues.apache.org/jira/browse/FLINK-14172
> Project: Flink
>  Issue Type: Sub-task
>  Components: Deployment / Kubernetes
>Reporter: Yang Wang
>Priority: Major
>
> The official Java client library for Kubernetes is becoming more and more 
> active. The new features (such as leader election) and some client 
> implementations (informer, lister, cache) are better. So we should use the 
> official Java client for Kubernetes in Flink.
> https://github.com/kubernetes-client/java



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] JingsongLi commented on a change in pull request #10514: [FLINK-15167][sql-client] SQL CLI library option doesn't work for Hive connector

2019-12-10 Thread GitBox
JingsongLi commented on a change in pull request #10514: 
[FLINK-15167][sql-client] SQL CLI library option doesn't work for Hive connector
URL: https://github.com/apache/flink/pull/10514#discussion_r356425513
 
 

 ##
 File path: 
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/local/ExecutionContext.java
 ##
 @@ -151,6 +153,9 @@ private ExecutionContext(
classLoader = FlinkUserCodeClassLoaders.parentFirst(
dependencies.toArray(new 
URL[dependencies.size()]),
this.getClass().getClassLoader());
+   if (!dependencies.isEmpty()) {
+   flinkConfig.set(PipelineOptions.JARS, 
dependencies.stream().map(URL::toString).collect(Collectors.toList()));
 
 Review comment:
   https://github.com/apache/flink/pull/10501 aims to fix it.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10518: [FLINK-15124][table] Fix types with precision defined in DDL can't be executed

2019-12-10 Thread GitBox
flinkbot edited a comment on issue #10518: [FLINK-15124][table] Fix types with 
precision defined in DDL can't be executed
URL: https://github.com/apache/flink/pull/10518#issuecomment-564108895
 
 
   
   ## CI report:
   
   * 9a0046d4b1d5b28907b617e5f82dab114ee97a31 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/140457699) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3425)
 
   * 704e473f520d70d7dcced6ff8d36cbb87418bf7b Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/140547641) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3451)
 
   * 81c7bdc33398d66e43e0578963e92d80c9ef6f04 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10501: [FLINK-15139][table sql / client]misc end to end test failed cause loss jars in converting to jobgraph

2019-12-10 Thread GitBox
flinkbot edited a comment on issue #10501: [FLINK-15139][table sql / 
client]misc end to end test failed  cause loss jars in converting to jobgraph
URL: https://github.com/apache/flink/pull/10501#issuecomment-563307886
 
 
   
   ## CI report:
   
   * 88033844e8102c06281586898f8641dd4cba3a3c Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/140265006) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3377)
 Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3372)
 
   * 1fd40c172a561fcd13b7f94974575450c4e3b48c Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/140541188) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3448)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-15139) misc end to end test failed cause loss jars in converting to jobgraph

2019-12-10 Thread Leonard Xu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16993243#comment-16993243
 ] 

Leonard Xu commented on FLINK-15139:


[~liyu] Yes, it's a blocker issue.
I have submitted a PR and will push it ASAP.

> misc end to end test failed  cause loss jars in converting to jobgraph
> --
>
> Key: FLINK-15139
> URL: https://issues.apache.org/jira/browse/FLINK-15139
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
>Reporter: wangxiyuan
>Assignee: Leonard Xu
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The test 'SQL Client end-to-end test (Old planner)' in the misc e2e tests 
> failed.
> log:
> {code:java}
> (a94d1da25baf2a5586a296d9e933743c) switched from RUNNING to FAILED.
> org.apache.flink.streaming.runtime.tasks.StreamTaskException: Cannot load 
> user class: 
> org.apache.flink.streaming.connectors.elasticsearch6.ElasticsearchSink
> ClassLoader info: URL ClassLoader:
> Class not resolvable through given classloader.
>   at 
> org.apache.flink.streaming.api.graph.StreamConfig.getStreamOperatorFactory(StreamConfig.java:266)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.createChainedOperator(OperatorChain.java:430)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.createOutputCollector(OperatorChain.java:353)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.createChainedOperator(OperatorChain.java:419)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.createOutputCollector(OperatorChain.java:353)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.(OperatorChain.java:144)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:432)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:460)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:702)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:527)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.flink.streaming.connectors.elasticsearch6.ElasticsearchSink
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at 
> org.apache.flink.util.ChildFirstClassLoader.loadClass(ChildFirstClassLoader.java:60)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:348)
>   at 
> org.apache.flink.util.InstantiationUtil$ClassLoaderObjectInputStream.resolveClass(InstantiationUtil.java:78)
>   at 
> java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1868)
>   at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1751)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2042)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
>   at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
>   at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
>   at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
>   at 
> org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:576)
>   at 
> org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:562)
>   at 
> org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:550)
>   at 
> org.apache.flink.util.InstantiationUtil.readObjectFromConfig(InstantiationUtil.java:511)
>   at 
> org.apache.flink.streaming.api.graph.StreamConfig.getStreamOperatorFactory(StreamConfig.java:254)
>   ... 10 more
> {code}
> link: [https://travis-ci.org/apache/flink/jobs/622261358]
>  
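
A minimal sketch of the fix direction (PR #10501; see also the 
ExecutionContext diff above): hand the client's dependency jars to the 
executor via the configuration so they travel into the JobGraph.

{code:java}
// sketch; PipelineOptions.JARS is the actual config option
Configuration flinkConfig = new Configuration();
flinkConfig.set(
	PipelineOptions.JARS,
	dependencies.stream().map(URL::toString).collect(Collectors.toList()));
{code}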



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zhijiangW commented on a change in pull request #10492: [FLINK-15140][runtime] Fix shuffle data compression doesn't work with BroadcastRecordWriter.

2019-12-10 Thread GitBox
zhijiangW commented on a change in pull request #10492: [FLINK-15140][runtime] 
Fix shuffle data compression doesn't work with BroadcastRecordWriter.
URL: https://github.com/apache/flink/pull/10492#discussion_r356424126
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/api/serialization/EventSerializer.java
 ##
 @@ -289,7 +289,7 @@ public static BufferConsumer 
toBufferConsumer(AbstractEvent event) throws IOExce
 
MemorySegment data = 
MemorySegmentFactory.wrap(serializedEvent.array());
 
-   return new BufferConsumer(data, FreeingBufferRecycler.INSTANCE, 
false);
+   return new BufferConsumer(data, FreeingBufferRecycler.INSTANCE, 
false, true);
 
 Review comment:
   It is not always `true` for the different cases calling `toBufferConsumer`, 
e.g. `ResultSubpartition#finish()` generating `EndOfPartitionEvent`.
   So we have two options (a sketch of the second one follows below):
   
   - Provide a respective `isSharable` argument in the `toBufferConsumer` 
method. Then this tag in `BufferConsumer` applies to both buffers and events.
   
   - Make the `isSharable` field in `BufferConsumer` only indicate sharing for 
buffers, not events; then we do not need to touch this method and can keep the 
default constructor. That also makes sense because compression does not work 
on events ATM.
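   
   A minimal sketch of the second option, keeping a three-argument constructor 
for events (the constructor shape below is an assumption, not the PR's actual 
code):
   
   ```java
   // sketch: events keep the old constructor; isSharable only matters for
   // data buffers, so it defaults to false here
   public BufferConsumer(MemorySegment segment, BufferRecycler recycler, boolean isBuffer) {
       this(segment, recycler, isBuffer, false);
   }
   ```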


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-15179) Kubernetes should not have a CustomCommandLine.

2019-12-10 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16993241#comment-16993241
 ] 

Yang Wang commented on FLINK-15179:
---

Hi [~kkl0u], will `AbstractCustomCommandLine` be deprecated in the future?

If yes, the change makes sense to me. How will we deal with 
`FlinkYarnSessionCli`? Is it a special case?

If not, keeping some CLI options in `FlinkKubernetesCustomCli` will make it 
easier for users to submit a Flink job.
{code:java}
./bin/kubernetes-session.sh -d -id flink-native-k8s-session-1 \
-i flink:flink-1.10-SNAPSHOT-k8s \
-jm 4096 -tm 4096 -s 4
{code}
If we remove `FlinkKubernetesCustomCli`, the command will look like the 
following.
{code:java}
./bin/kubernetes-session.sh -d 
-Dkubernetes.cluster-id=flink-native-k8s-session-1 \
-Dkubernetes.container.image=flink:flink-1.10-SNAPSHOT-k8s \
-Djobmanager.heap.size=4096 -Dtaskmanager.memory.total-process.size=4096 \
-Dtaskmanager.numberOfTaskSlots=4{code}

> Kubernetes should not have a CustomCommandLine.
> ---
>
> Key: FLINK-15179
> URL: https://issues.apache.org/jira/browse/FLINK-15179
> Project: Flink
>  Issue Type: Sub-task
>  Components: Deployment / Kubernetes
>Affects Versions: 1.10.0
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>Priority: Major
>
> As part of FLIP-73, all command line options are mapped to config options. 
> Given this 1-to-1 mapping, the Kubernetes command line could simply forward 
> the command line arguments to ConfigOptions directly, instead of introducing 
> new command line options. In this case, the user is expected to simply write:
>  
> {{bin/run -e (or --executor) kubernetes-session-cluster -D 
> kubernetes.container.image=MY_IMAGE ...}} 
> and the CLI will parse the -e to figure out the correct 
> {{ClusterClientFactory}} and {{ExecutorFactory}} and then forward to that the 
> config options specified with {{-D}}. 
> For this, we need to introduce a {{GenericCustomCommandLine}} that simply 
> forwards the specified parameters to the executors.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] JingsongLi commented on a change in pull request #10518: [FLINK-15124][table] Fix types with precision defined in DDL can't be executed

2019-12-10 Thread GitBox
JingsongLi commented on a change in pull request #10518: [FLINK-15124][table] 
Fix types with precision defined in DDL can't be executed
URL: https://github.com/apache/flink/pull/10518#discussion_r356423681
 
 

 ##
 File path: 
flink-table/flink-table-api-java-bridge/src/main/java/org/apache/flink/table/sinks/CsvTableSinkFactoryBase.java
 ##
 @@ -75,6 +75,8 @@
properties.add(SCHEMA + ".#." + 
DescriptorProperties.TABLE_SCHEMA_TYPE);
properties.add(SCHEMA + ".#." + 
DescriptorProperties.TABLE_SCHEMA_DATA_TYPE);
properties.add(SCHEMA + ".#." + 
DescriptorProperties.TABLE_SCHEMA_NAME);
+   // schema watermark
+   properties.add(SCHEMA + "." + DescriptorProperties.WATERMARK + 
".*");
 
 Review comment:
   Can you create a JIRA to track it too? Should it go into 1.10?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-15190) add documentation for DDL in FLIP-69

2019-12-10 Thread Bowen Li (Jira)
Bowen Li created FLINK-15190:


 Summary: add documentation for DDL in FLIP-69
 Key: FLINK-15190
 URL: https://issues.apache.org/jira/browse/FLINK-15190
 Project: Flink
  Issue Type: Task
  Components: Documentation
Reporter: Bowen Li
Assignee: Terry Wang
 Fix For: 1.10.0


in 
https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/table/sql.html#ddl



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] rkhachatryan commented on a change in pull request #10151: [FLINK-14231] Handle the processing-time timers when closing operators to make endInput semantics on the operator chain stri

2019-12-10 Thread GitBox
rkhachatryan commented on a change in pull request #10151: [FLINK-14231] Handle 
the processing-time timers when closing operators to make endInput semantics on 
the operator chain strict
URL: https://github.com/apache/flink/pull/10151#discussion_r356423342
 
 

 ##
 File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/ProcessingTimeServiceImpl.java
 ##
 @@ -18,18 +18,57 @@
 
 package org.apache.flink.streaming.runtime.tasks;
 
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.annotation.VisibleForTesting;
+import org.apache.flink.runtime.concurrent.FutureUtils;
+import org.apache.flink.util.concurrent.NeverCompleteFuture;
+
+import javax.annotation.Nonnull;
+
+import java.util.Arrays;
+import java.util.Comparator;
+import java.util.concurrent.CancellationException;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.Delayed;
+import java.util.concurrent.ExecutionException;
 import java.util.concurrent.ScheduledFuture;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.function.Consumer;
 import java.util.function.Function;
 
+import static org.apache.flink.util.Preconditions.checkNotNull;
+import static org.apache.flink.util.Preconditions.checkState;
+
+@Internal
 class ProcessingTimeServiceImpl implements ProcessingTimeService {
+
+   private static final int STATUS_ALIVE = 0;
+   private static final int STATUS_QUIESCED = 1;
+
+   // ------------------------------------------------------------------------
+
private final TimerService timerService;
+
	private final Function<ProcessingTimeCallback, ProcessingTimeCallback> processingTimeCallbackWrapper;
 
+   private final ConcurrentHashMap<TimerScheduledFuture, Object> undoneTimers;
 
 Review comment:
   We can build a set on top of the map using `Collections.newSetFromMap`, or 
use a sorted set (see the sketch below), since 
   > // we should cancel the timers in descending timestamp order
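   
   For illustration, a sketch of both variants (the `getTimestamp()` accessor 
on the timer class is an assumption):
   
   ```java
   // variant 1: concurrent set view over a map
   Set<TimerScheduledFuture> undoneTimers =
       Collections.newSetFromMap(new ConcurrentHashMap<>());
   
   // variant 2: sorted set, so cancellation can walk the timers in
   // descending timestamp order directly
   NavigableSet<TimerScheduledFuture> sortedTimers =
       new ConcurrentSkipListSet<>(
           Comparator.comparingLong(TimerScheduledFuture::getTimestamp).reversed());
   ```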


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #10518: [FLINK-15124][table] Fix types with precision defined in DDL can't be executed

2019-12-10 Thread GitBox
wuchong commented on a change in pull request #10518: [FLINK-15124][table] Fix 
types with precision defined in DDL can't be executed
URL: https://github.com/apache/flink/pull/10518#discussion_r356423176
 
 

 ##
 File path: 
flink-table/flink-table-api-java-bridge/src/main/java/org/apache/flink/table/sinks/CsvTableSinkFactoryBase.java
 ##
 @@ -75,6 +75,8 @@
properties.add(SCHEMA + ".#." + 
DescriptorProperties.TABLE_SCHEMA_TYPE);
properties.add(SCHEMA + ".#." + 
DescriptorProperties.TABLE_SCHEMA_DATA_TYPE);
properties.add(SCHEMA + ".#." + 
DescriptorProperties.TABLE_SCHEMA_NAME);
+   // schema watermark
+   properties.add(SCHEMA + "." + DescriptorProperties.WATERMARK + 
".*");
 
 Review comment:
   Yes. I will fix it in following PR with some computed column problems. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-14953) Parquet table source should use schema type to build FilterPredicate

2019-12-10 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16993240#comment-16993240
 ] 

Jingsong Lee commented on FLINK-14953:
--

[~dian.fu] I can find it in the 1.10 branch: [FLINK-14953][formats] use table 
type to build parquet FilterPredicate

> Parquet table source should use schema type to build FilterPredicate
> 
>
> Key: FLINK-14953
> URL: https://issues.apache.org/jira/browse/FLINK-14953
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.8.0, 1.8.2, 1.9.0, 1.9.1
>Reporter: Zhenqiu Huang
>Assignee: Zhenqiu Huang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0, 1.9.2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The issue happens when the data type of a value in a predicate inferred from 
> SQL doesn't match the parquet schema. For example, foo is a long type and 
> foo < 1 is the predicate. The literal will be recognized as an integer, which 
> causes the parquet FilterPredicate to be mistakenly created for a column of 
> Integer type. Then, the following exception occurs.
> java.lang.UnsupportedOperationException
>   at 
> org.apache.parquet.filter2.recordlevel.IncrementallyUpdatedFilterPredicate$ValueInspector.update(IncrementallyUpdatedFilterPredicate.java:71)
>   at 
> org.apache.parquet.filter2.recordlevel.FilteringPrimitiveConverter.addLong(FilteringPrimitiveConverter.java:105)
>   at 
> org.apache.parquet.column.impl.ColumnReaderImpl$2$4.writeValue(ColumnReaderImpl.java:268)
>   at 
> org.apache.parquet.column.impl.ColumnReaderImpl.writeCurrentValueToConverter(ColumnReaderImpl.java:367)
>   at 
> org.apache.parquet.io.RecordReaderImplementation.read(RecordReaderImplementation.java:406)
>   at 
> org.apache.flink.formats.parquet.utils.ParquetRecordReader.readNextRecord(ParquetRecordReader.java:235)
>   at 
> org.apache.flink.formats.parquet.utils.ParquetRecordReader.reachEnd(ParquetRecordReader.java:207)
>   at 
> org.apache.flink.formats.parquet.ParquetInputFormat.reachedEnd(ParquetInputFormat.java:233)
>   at 
> org.apache.flink.api.common.operators.GenericDataSourceBase.executeOnCollections(GenericDataSourceBase.java:231)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.executeDataSource(CollectionExecutor.java:219)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:155)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.executeUnaryOperator(CollectionExecutor.java:229)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:149)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:131)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.executeDataSink(CollectionExecutor.java:182)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:158)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:131)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:115)
>   at 
> org.apache.flink.api.java.CollectionEnvironment.execute(CollectionEnvironment.java:38)
>   at 
> org.apache.flink.test.util.CollectionTestEnvironment.execute(CollectionTestEnvironment.java:52)
>   at 
> org.apache.flink.test.util.CollectionTestEnvironment.execute(CollectionTestEnvironment.java:47)
>   at org.apache.flink.api.java.DataSet.collect(DataSet.java:413)
>   at 
> org.apache.flink.formats.parquet.ParquetTableSourceITCase.testScanWithProjectionAndFilter(ParquetTableSourceITCase.java:91)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:

[jira] [Created] (FLINK-15189) add documentation for catalog view and hive view

2019-12-10 Thread Bowen Li (Jira)
Bowen Li created FLINK-15189:


 Summary: add documentation for catalog view and hive view
 Key: FLINK-15189
 URL: https://issues.apache.org/jira/browse/FLINK-15189
 Project: Flink
  Issue Type: Task
  Components: Connectors / Hive, Documentation
Reporter: Bowen Li
Assignee: Rui Li
 Fix For: 1.10.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-14958) ProgramTargetDescriptor#jobID possibly can be of type JobID instead of String

2019-12-10 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16993238#comment-16993238
 ] 

Dian Fu edited comment on FLINK-14958 at 12/11/19 6:30 AM:
---

Hi [~tison], the fix version isn't set. Just a soft reminder: please don't 
forget to cherry-pick to the 1.10 release branch as it's already cut.


was (Author: dian.fu):
Hi [~tison], the fix version isn't set. Just a reminder: please don't forget 
to cherry-pick to the 1.10 release branch as it's already cut.

> ProgramTargetDescriptor#jobID possibly can be of type JobID instead of String
> -
>
> Key: FLINK-14958
> URL: https://issues.apache.org/jira/browse/FLINK-14958
> Project: Flink
>  Issue Type: Improvement
>Reporter: Zili Chen
>Assignee: AT-Fieldless
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-14958) ProgramTargetDescriptor#jobID possibly can be of type JobID instead of String

2019-12-10 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16993238#comment-16993238
 ] 

Dian Fu commented on FLINK-14958:
-

Hi [~tison], the fix version isn't set. Just a reminder: please don't forget 
to cherry-pick to the 1.10 release branch as it's already cut.

> ProgramTargetDescriptor#jobID possibly can be of type JobID instead of String
> -
>
> Key: FLINK-14958
> URL: https://issues.apache.org/jira/browse/FLINK-14958
> Project: Flink
>  Issue Type: Improvement
>Reporter: Zili Chen
>Assignee: AT-Fieldless
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] rkhachatryan commented on a change in pull request #10151: [FLINK-14231] Handle the processing-time timers when closing operators to make endInput semantics on the operator chain stri

2019-12-10 Thread GitBox
rkhachatryan commented on a change in pull request #10151: [FLINK-14231] Handle 
the processing-time timers when closing operators to make endInput semantics on 
the operator chain strict
URL: https://github.com/apache/flink/pull/10151#discussion_r356422203
 
 

 ##
 File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/StreamTask.java
 ##
 @@ -1011,10 +989,25 @@ TimerService getTimerService() {
return timerService;
}
 
-   public ProcessingTimeService getProcessingTimeService(int 
operatorIndex) {
+   @VisibleForTesting
+   StreamOperator getHeadOperator() {
+   return operatorChain.getHeadOperator();
+   }
+
+   public ProcessingTimeService getProcessingTimeService(OperatorID 
operatorID) {
+   Preconditions.checkNotNull(operatorID);
Preconditions.checkState(timerService != null, "The timer 
service has not been initialized.");
-   MailboxExecutor mailboxExecutor = 
mailboxProcessor.getMailboxExecutor(operatorIndex);
-   return new ProcessingTimeServiceImpl(timerService, callback -> 
deferCallbackToMailbox(mailboxExecutor, callback));
+   Preconditions.checkState(operatorChain != null, "operatorChain 
has not been initialized.");
+
+   ProcessingTimeService processingTimeService = 
operatorChain.getOperatorProcessingTimeService(operatorID);
+   if (processingTimeService == null) {
+   processingTimeService = new ProcessingTimeServiceImpl(
+   timerService,
+   callback -> 
deferCallbackToMailbox(operatorChain.getOperatorMailboxExecutor(operatorID), 
callback));
+   
operatorChain.setOperatorProcessingTimeService(operatorID, 
(ProcessingTimeServiceImpl) processingTimeService);
+   }
 
 Review comment:
   We could add something like `@PublicEvolving interface ProcessingTimersAware 
{ set/get }` and make `AbstractStreamOperator` implement it.
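   
   Roughly what that suggestion would look like (a sketch only, names as 
proposed):
   
   ```java
   @PublicEvolving
   public interface ProcessingTimersAware {
       void setProcessingTimeService(ProcessingTimeService processingTimeService);
       ProcessingTimeService getProcessingTimeService();
   }
   ```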


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #10518: [FLINK-15124][table] Fix types with precision defined in DDL can't be executed

2019-12-10 Thread GitBox
wuchong commented on a change in pull request #10518: [FLINK-15124][table] Fix 
types with precision defined in DDL can't be executed
URL: https://github.com/apache/flink/pull/10518#discussion_r356422121
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/sources/TableSourceUtil.scala
 ##
 @@ -100,18 +96,32 @@ object TableSourceUtil {
   throw new ValidationException(s"Rowtime field '$name' has invalid 
type $t. " +
 s"Rowtime attributes must be of TimestampType.")
 }
-val (physicalName, idx, tpe) = resolveInputField(name, tableSource)
+val (physicalName, idx, logicalType) = resolveInputField(name, 
tableSource)
 // validate that mapped fields are of the same type
-if (!isAssignable(fromTypeInfoToLogicalType(tpe), t)) {
+if (!isAssignable(logicalType, t)) {
   throw new ValidationException(s"Type $t of table field '$name' does 
not " +
-s"match with type $tpe of the field '$physicalName' of the 
TableSource return type.")
+s"match with type $logicalType of the field '$physicalName' of the 
" +
+"TableSource return type.")
+} else if (!isInteroperable(logicalType, t)) {
+  // the produced type of TableSource is different with the logical 
type defined in DDL
+  // on the precision or nullability.
+  throw new ValidationException(
+"If the connector would like to support precision and nullability 
defined in DDL," +
 
 Review comment:
   OK. I will update the exception message and comment.
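   
   For illustration, the distinction the two checks draw (a sketch; the exact 
semantics of `isAssignable`/`isInteroperable` are as the diff above suggests):
   
   ```java
   import org.apache.flink.table.types.logical.LogicalType;
   import org.apache.flink.table.types.logical.TimestampType;
   
   LogicalType ddlType = new TimestampType(9);    // TIMESTAMP(9) declared in DDL
   LogicalType sourceType = new TimestampType(3); // produced by the TableSource
   
   // same type root, so the coarse check passes:
   //   isAssignable(sourceType, ddlType)    -> true
   // but the precision differs, so the strict check fails and the
   // ValidationException above is thrown:
   //   isInteroperable(sourceType, ddlType) -> false
   ```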


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-14953) Parquet table source should use schema type to build FilterPredicate

2019-12-10 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16993235#comment-16993235
 ] 

Dian Fu commented on FLINK-14953:
-

Hi [~ykt836], the 1.10 release branch was cut yesterday and it seems that 
this PR was not cherry-picked to the 1.10 branch.

> Parquet table source should use schema type to build FilterPredicate
> 
>
> Key: FLINK-14953
> URL: https://issues.apache.org/jira/browse/FLINK-14953
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.8.0, 1.8.2, 1.9.0, 1.9.1
>Reporter: Zhenqiu Huang
>Assignee: Zhenqiu Huang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0, 1.9.2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The issue happens when the data type of a value in a predicate inferred from 
> SQL doesn't match the parquet schema. For example, foo is a long type and 
> foo < 1 is the predicate. The literal will be recognized as an integer, which 
> causes the parquet FilterPredicate to be mistakenly created for a column of 
> Integer type. Then, the following exception occurs.
> java.lang.UnsupportedOperationException
>   at 
> org.apache.parquet.filter2.recordlevel.IncrementallyUpdatedFilterPredicate$ValueInspector.update(IncrementallyUpdatedFilterPredicate.java:71)
>   at 
> org.apache.parquet.filter2.recordlevel.FilteringPrimitiveConverter.addLong(FilteringPrimitiveConverter.java:105)
>   at 
> org.apache.parquet.column.impl.ColumnReaderImpl$2$4.writeValue(ColumnReaderImpl.java:268)
>   at 
> org.apache.parquet.column.impl.ColumnReaderImpl.writeCurrentValueToConverter(ColumnReaderImpl.java:367)
>   at 
> org.apache.parquet.io.RecordReaderImplementation.read(RecordReaderImplementation.java:406)
>   at 
> org.apache.flink.formats.parquet.utils.ParquetRecordReader.readNextRecord(ParquetRecordReader.java:235)
>   at 
> org.apache.flink.formats.parquet.utils.ParquetRecordReader.reachEnd(ParquetRecordReader.java:207)
>   at 
> org.apache.flink.formats.parquet.ParquetInputFormat.reachedEnd(ParquetInputFormat.java:233)
>   at 
> org.apache.flink.api.common.operators.GenericDataSourceBase.executeOnCollections(GenericDataSourceBase.java:231)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.executeDataSource(CollectionExecutor.java:219)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:155)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.executeUnaryOperator(CollectionExecutor.java:229)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:149)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:131)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.executeDataSink(CollectionExecutor.java:182)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:158)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:131)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:115)
>   at 
> org.apache.flink.api.java.CollectionEnvironment.execute(CollectionEnvironment.java:38)
>   at 
> org.apache.flink.test.util.CollectionTestEnvironment.execute(CollectionTestEnvironment.java:52)
>   at 
> org.apache.flink.test.util.CollectionTestEnvironment.execute(CollectionTestEnvironment.java:47)
>   at org.apache.flink.api.java.DataSet.collect(DataSet.java:413)
>   at 
> org.apache.flink.formats.parquet.ParquetTableSourceITCase.testScanWithProjectionAndFilter(ParquetTableSourceITCase.java:91)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.j

[jira] [Commented] (FLINK-15153) Service selector needs to contain jobmanager component label

2019-12-10 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16993234#comment-16993234
 ] 

Dian Fu commented on FLINK-15153:
-

Hi [~zjwang], the 1.10 release branch was cut yesterday; it seems that 
this PR was not cherry-picked to the 1.10 branch.

> Service selector needs to contain jobmanager component label
> 
>
> Key: FLINK-15153
> URL: https://issues.apache.org/jira/browse/FLINK-15153
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Reporter: Yang Wang
>Assignee: Yang Wang
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The jobmanager label needs to be added to service selector. Otherwise, it may 
> select the wrong backend pods(taskmanager).
> The internal service is used for taskmanager talking to jobmanager. If it 
> does not have correct backend pods, the taskmanager may fail to register.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] rkhachatryan commented on a change in pull request #10151: [FLINK-14231] Handle the processing-time timers when closing operators to make endInput semantics on the operator chain stri

2019-12-10 Thread GitBox
rkhachatryan commented on a change in pull request #10151: [FLINK-14231] Handle 
the processing-time timers when closing operators to make endInput semantics on 
the operator chain strict
URL: https://github.com/apache/flink/pull/10151#discussion_r356420318
 
 

 ##
 File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/ProcessingTimeServiceImpl.java
 ##
 @@ -39,11 +78,282 @@ public long getCurrentProcessingTime() {
 
@Override
public ScheduledFuture registerTimer(long timestamp, 
ProcessingTimeCallback target) {
-   return timerService.registerTimer(timestamp, 
processingTimeCallbackWrapper.apply(target));
+   if (isQuiesced()) {
+   return new NeverCompleteFuture(
+   
ProcessingTimeServiceUtil.getProcessingTimeDelay(timestamp, 
getCurrentProcessingTime()));
+   }
+
+   final TimerScheduledFuture timer = new 
TimerScheduledFuture(false, this::removeUndoneTimer);
+   undoneTimers.put(timer, Boolean.TRUE);
+
+   // double check to deal with the following race conditions:
+   // 1. canceling timers from the undone table occurs before 
putting this timer into the undone table
+   //(see #cancelTimersNotInExecuting())
+   // 2. using the size of the undone table to determine if all 
timers have done occurs before putting
+   //this timer into the undone table (see 
#tryCompleteTimersDoneFutureIfQuiesced())
+   if (isQuiesced()) {
+   removeUndoneTimer(timer);
+   return new NeverCompleteFuture(
+   
ProcessingTimeServiceUtil.getProcessingTimeDelay(timestamp, 
getCurrentProcessingTime()));
+   }
+
+   timer.bind(timerService.registerTimer(timestamp, 
timer.getCallback(processingTimeCallbackWrapper.apply(target;
+
+   return timer;
}
 
@Override
public ScheduledFuture scheduleAtFixedRate(ProcessingTimeCallback 
callback, long initialDelay, long period) {
-   return 
timerService.scheduleAtFixedRate(processingTimeCallbackWrapper.apply(callback), 
initialDelay, period);
+   if (isQuiesced()) {
+   return new NeverCompleteFuture(initialDelay);
+   }
+
+   final TimerScheduledFuture timer = new 
TimerScheduledFuture(true, this::removeUndoneTimer);
+   undoneTimers.put(timer, Boolean.TRUE);
+
+   // double check to deal with the following race conditions:
+   // 1. canceling timers from the undone table occurs before 
putting this timer into the undone table
+   //(see #cancelTimersNotInExecuting())
+   // 2. using the size of the undone table to determine if all 
timers have done occurs before putting
+   //this timer into the undone table (see 
#tryCompleteTimersDoneFutureIfQuiesced())
+   if (isQuiesced()) {
+   removeUndoneTimer(timer);
+   return new NeverCompleteFuture(initialDelay);
+   }
+
+   timer.bind(
+   timerService.scheduleAtFixedRate(
+   
timer.getCallback(processingTimeCallbackWrapper.apply(callback)), initialDelay, 
period));
+
+   return timer;
+   }
+
+   void quiesce() {
+   status.compareAndSet(STATUS_ALIVE, STATUS_QUIESCED);
+   }
+
+   /**
+    * This method is idempotent and may be called repeatedly.
+    */
+   CompletableFuture cancelTimersGracefullyAfterQuiesce() {
+   checkState(status.get() == STATUS_QUIESCED);
+
+   if (!timersDoneFutureAfterQuiescing.isDone()) {
+   if (!cancelTimersNotInExecuting()) {
+   return FutureUtils.completedExceptionally(new 
CancellationException("Cancel timers failed"));
+   }
+   tryCompleteTimersDoneFutureIfQuiesced();
+   }
+
+   return timersDoneFutureAfterQuiescing;
+   }
+
+   @VisibleForTesting
+   int getNumUndoneTimers() {
+   return undoneTimers.size();
+   }
+
+   private boolean isQuiesced() {
+   return status.get() == STATUS_QUIESCED;
+   }
+
+   private void removeUndoneTimer(TimerScheduledFuture timer) {
+   undoneTimers.remove(timer);
+   tryCompleteTimersDoneFutureIfQuiesced();
+   }
+
+   private void tryCompleteTimersDoneFutureIfQuiesced() {
+   if (isQuiesced() && getNumUndoneTimers() == 0) {
+   timersDoneFutureAfterQuiescing.complete(null);
+   }
+   }
+
+   private boolean cancelTimersNotInExecuting() {
+   // we should cancel the timers in descending tim

[GitHub] [flink] hequn8128 commented on a change in pull request #10515: [FLINK-14007][python][docs] Add documentation for how to use Java user-defined source/sink in Python API

2019-12-10 Thread GitBox
hequn8128 commented on a change in pull request #10515: 
[FLINK-14007][python][docs] Add documentation for how to use Java user-defined 
source/sink in Python API
URL: https://github.com/apache/flink/pull/10515#discussion_r356418038
 
 

 ##
 File path: docs/dev/table/sourceSinks.md
 ##
 @@ -714,10 +714,11 @@ connector.debug=true
 
 For a type-safe, programmatic approach with explanatory Scaladoc/Javadoc, the 
Table & SQL API offers descriptors in `org.apache.flink.table.descriptors` that 
translate into string-based properties. See the [built-in 
descriptors](connect.html) for sources, sinks, and formats as a reference.
 
-A connector for `MySystem` in our example can extend `ConnectorDescriptor` as 
shown below:
-
 
 
+
+A connector for `MySystem` in our example can extend `ConnectorDescriptor` as 
shown below:
 
 Review comment:
   Hi, I'm not sure I understand your comment correctly. `ConnectorDescriptor` 
is an abstract class. Maybe we should change the description to: "A custom 
descriptor can be defined by extending the `ConnectorDescriptor` class"?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10518: [FLINK-15124][table] Fix types with precision defined in DDL can't be executed

2019-12-10 Thread GitBox
flinkbot edited a comment on issue #10518: [FLINK-15124][table] Fix types with 
precision defined in DDL can't be executed
URL: https://github.com/apache/flink/pull/10518#issuecomment-564108895
 
 
   
   ## CI report:
   
   * 9a0046d4b1d5b28907b617e5f82dab114ee97a31 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/140457699) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3425)
 
   * 704e473f520d70d7dcced6ff8d36cbb87418bf7b UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] hequn8128 commented on a change in pull request #10515: [FLINK-14007][python][docs] Add documentation for how to use Java user-defined source/sink in Python API

2019-12-10 Thread GitBox
hequn8128 commented on a change in pull request #10515: 
[FLINK-14007][python][docs] Add documentation for how to use Java user-defined 
source/sink in Python API
URL: https://github.com/apache/flink/pull/10515#discussion_r356419153
 
 

 ##
 File path: docs/dev/table/sourceSinks.md
 ##
 @@ -743,9 +744,25 @@ public class MySystemConnector extends 
ConnectorDescriptor {
   }
 }
 {% endhighlight %}
+
+The descriptor can then be used in the API as follows:
 
 Review comment:
   How about "create a table with the table environment"?
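
In the spirit of the suggested wording, a hedged sketch of creating a table with the table environment from such a descriptor (the schema fields are illustrative; `createTemporaryTable` assumes the 1.10-style API):

{code:java}
// Hedged sketch: registering a table from the custom descriptor.
StreamTableEnvironment tableEnv = ...; // obtained elsewhere

tableEnv
    .connect(new MySystemConnector(true)) // the descriptor from above
    .withSchema(new Schema()
        .field("user", DataTypes.BIGINT())     // illustrative fields
        .field("message", DataTypes.STRING()))
    .inAppendMode()
    .createTemporaryTable("MySystemTable");
{code}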


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-15135) Adding e2e tests for Flink's Mesos integration

2019-12-10 Thread Yangze Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16993228#comment-16993228
 ] 

Yangze Guo commented on FLINK-15135:


As the first step, I'd like to just build the facilities and include a WordCount 
test in this ticket. As the discussion goes on, we can open more tickets for 
new test cases. WDYT?

> Adding e2e tests for Flink's Mesos integration
> --
>
> Key: FLINK-15135
> URL: https://issues.apache.org/jira/browse/FLINK-15135
> Project: Flink
>  Issue Type: Test
>  Components: Deployment / Mesos, Tests
>Reporter: Yangze Guo
>Priority: Critical
> Fix For: 1.10.0
>
>
> Currently, there is no end to end test or IT case for Mesos deployment. We 
> want to add Mesos end-to-end tests which will benefit both Mesos users and 
> contributors.
> More discussion could be found 
> [here|http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Adding-e2e-tests-for-Flink-s-Mesos-integration-td35660.html].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] hequn8128 commented on a change in pull request #10515: [FLINK-14007][python][docs] Add documentation for how to use Java user-defined source/sink in Python API

2019-12-10 Thread GitBox
hequn8128 commented on a change in pull request #10515: 
[FLINK-14007][python][docs] Add documentation for how to use Java user-defined 
source/sink in Python API
URL: https://github.com/apache/flink/pull/10515#discussion_r356418038
 
 

 ##
 File path: docs/dev/table/sourceSinks.md
 ##
 @@ -714,10 +714,11 @@ connector.debug=true
 
 For a type-safe, programmatic approach with explanatory Scaladoc/Javadoc, the 
Table & SQL API offers descriptors in `org.apache.flink.table.descriptors` that 
translate into string-based properties. See the [built-in 
descriptors](connect.html) for sources, sinks, and formats as a reference.
 
-A connector for `MySystem` in our example can extend `ConnectorDescriptor` as 
shown below:
-
 
 
+
+A connector for `MySystem` in our example can extend `ConnectorDescriptor` as 
shown below:
 
 Review comment:
   The `ConnectorDescriptor` is an abstract class. Maybe we should change the description to: A custom descriptor can be defined by extending the `ConnectorDescriptor` class?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10493: [FLINK-15061][table]create/alter and table/databases properties should be case sensitive stored in catalog

2019-12-10 Thread GitBox
flinkbot edited a comment on issue #10493: [FLINK-15061][table]create/alter and 
table/databases properties should be case sensitive stored in catalog
URL: https://github.com/apache/flink/pull/10493#issuecomment-563111006
 
 
   
   ## CI report:
   
   * a79225d75a53240951a4378380ff2aac457c1278 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/140180179) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3343)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi commented on a change in pull request #10518: [FLINK-15124][table] Fix types with precision defined in DDL can't be executed

2019-12-10 Thread GitBox
JingsongLi commented on a change in pull request #10518: [FLINK-15124][table] 
Fix types with precision defined in DDL can't be executed
URL: https://github.com/apache/flink/pull/10518#discussion_r356416161
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/sources/TableSourceUtil.scala
 ##
 @@ -100,18 +96,32 @@ object TableSourceUtil {
   throw new ValidationException(s"Rowtime field '$name' has invalid 
type $t. " +
 s"Rowtime attributes must be of TimestampType.")
 }
-val (physicalName, idx, tpe) = resolveInputField(name, tableSource)
+val (physicalName, idx, logicalType) = resolveInputField(name, 
tableSource)
 // validate that mapped fields are of the same type
-if (!isAssignable(fromTypeInfoToLogicalType(tpe), t)) {
+if (!isAssignable(logicalType, t)) {
   throw new ValidationException(s"Type $t of table field '$name' does 
not " +
-s"match with type $tpe of the field '$physicalName' of the 
TableSource return type.")
+s"match with type $logicalType of the field '$physicalName' of the 
" +
+"TableSource return type.")
+} else if (!isInteroperable(logicalType, t)) {
+  // the produced type of TableSource is different with the logical 
type defined in DDL
+  // on the precision or nullability.
+  throw new ValidationException(
+"If the connector would like to support precision and nullability 
defined in DDL," +
 
 Review comment:
   There is no `nullability` check; neither `isInteroperable` nor `isAssignable` checks `nullability`.
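
To make the distinction concrete, a hedged illustration (the semantics of `isAssignable` vs. `isInteroperable` are inferred from this thread, not verified against the merged code):

{code:java}
import org.apache.flink.table.types.logical.DecimalType;
import org.apache.flink.table.types.logical.LogicalType;

class PrecisionCheckIllustration {
    static void illustrate() {
        LogicalType ddlType = new DecimalType(10, 2);     // declared in DDL
        LogicalType sourceType = new DecimalType(38, 18); // produced by the source
        // isAssignable(sourceType, ddlType)    -> true  (same family, precision ignored)
        // isInteroperable(sourceType, ddlType) -> false (precision differs)
        // per the comment above, neither check covers nullability
    }
}
{code}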


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Comment Edited] (FLINK-15135) Adding e2e tests for Flink's Mesos integration

2019-12-10 Thread Yangze Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16993226#comment-16993226
 ] 

Yangze Guo edited comment on FLINK-15135 at 12/11/19 6:03 AM:
--

Hi, [~trohrmann]. I'm willing to work on this, could you assign this ticket to 
me?


was (Author: karmagyz):
Hi, [~trohrmann]. I'm willing to working on this, could you assign this ticket 
to me?

> Adding e2e tests for Flink's Mesos integration
> --
>
> Key: FLINK-15135
> URL: https://issues.apache.org/jira/browse/FLINK-15135
> Project: Flink
>  Issue Type: Test
>  Components: Deployment / Mesos, Tests
>Reporter: Yangze Guo
>Priority: Critical
> Fix For: 1.10.0
>
>
> Currently, there is no end to end test or IT case for Mesos deployment. We 
> want to add Mesos end-to-end tests which will benefit both Mesos users and 
> contributors.
> More discussion could be found 
> [here|http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Adding-e2e-tests-for-Flink-s-Mesos-integration-td35660.html].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15135) Adding e2e tests for Flink's Mesos integration

2019-12-10 Thread Yangze Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16993226#comment-16993226
 ] 

Yangze Guo commented on FLINK-15135:


Hi, [~trohrmann]. I'm willing to work on this, could you assign this ticket 
to me?

> Adding e2e tests for Flink's Mesos integration
> --
>
> Key: FLINK-15135
> URL: https://issues.apache.org/jira/browse/FLINK-15135
> Project: Flink
>  Issue Type: Test
>  Components: Deployment / Mesos, Tests
>Reporter: Yangze Guo
>Priority: Critical
> Fix For: 1.10.0
>
>
> Currently, there is no end to end test or IT case for Mesos deployment. We 
> want to add Mesos end-to-end tests which will benefit both Mesos users and 
> contributors.
> More discussion could be found 
> [here|http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Adding-e2e-tests-for-Flink-s-Mesos-integration-td35660.html].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10434: [FLINK-15072][client] executeAsync in ContextEnvironment from CliFrontend cause unexpected exception

2019-12-10 Thread GitBox
flinkbot edited a comment on issue #10434: [FLINK-15072][client] executeAsync 
in ContextEnvironment from CliFrontend cause unexpected exception
URL: https://github.com/apache/flink/pull/10434#issuecomment-562048009
 
 
   
   ## CI report:
   
   * 3657f8be3e9a8eb83712a9eeafe0bc8e3c182c95 Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/139496022) 
   * ddf99958fc5bbd03e1cde6abbd58c22a4f7c5554 Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/139501940) 
   * 31f08bdde0f7fa3ad954b9a1c7790819c62e18cd Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/139534285) 
   * 6ae5448f1725843761d4793e16846278d617dbaa Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/140374151) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3397)
 
   * c984c7d529d401a6e2e9029479c26310996de496 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/140394922) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3404)
 
   * 279e7a0bdbf3a4a1ea02ba18db64e2330193055b UNKNOWN
   * 77c9d550e3ad6a80c953bb789e47a0e7dafd520f Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/140532207) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3439)
 
   * 5c50c03749c3f69147ae401fe4313ef28c819d7a Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/140535748) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3444)
 
   * 15f502bd099959436eec4251aaa8de1acd248c40 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/140542849) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3450)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi commented on a change in pull request #10518: [FLINK-15124][table] Fix types with precision defined in DDL can't be executed

2019-12-10 Thread GitBox
JingsongLi commented on a change in pull request #10518: [FLINK-15124][table] 
Fix types with precision defined in DDL can't be executed
URL: https://github.com/apache/flink/pull/10518#discussion_r356415385
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/SinkCodeGenerator.scala
 ##
 @@ -223,10 +224,10 @@ object SinkCodeGenerator {
   case (fieldTypeInfo, i) =>
 val requestedTypeInfo = tt.getTypeAt(i)
 validateFieldType(requestedTypeInfo)
-if (!areTypesCompatible(
-  fromTypeInfoToLogicalType(fieldTypeInfo),
-  fromTypeInfoToLogicalType(requestedTypeInfo)) &&
-  !requestedTypeInfo.isInstanceOf[GenericTypeInfo[Object]]) {
+if (!PlannerTypeUtils.isAssignable(
+fromTypeInfoToLogicalType(fieldTypeInfo),
+fromTypeInfoToLogicalType(requestedTypeInfo)) &&
+!requestedTypeInfo.isInstanceOf[GenericTypeInfo[Object]]) {
 
 Review comment:
   It can be discussed whether it should be replaced by `isInteroperable`.
   But it should never be `just equals strictly`; that is too strict.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi commented on a change in pull request #10518: [FLINK-15124][table] Fix types with precision defined in DDL can't be executed

2019-12-10 Thread GitBox
JingsongLi commented on a change in pull request #10518: [FLINK-15124][table] 
Fix types with precision defined in DDL can't be executed
URL: https://github.com/apache/flink/pull/10518#discussion_r356414653
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/sources/TableSourceUtil.scala
 ##
 @@ -100,18 +96,32 @@ object TableSourceUtil {
   throw new ValidationException(s"Rowtime field '$name' has invalid 
type $t. " +
 s"Rowtime attributes must be of TimestampType.")
 }
-val (physicalName, idx, tpe) = resolveInputField(name, tableSource)
+val (physicalName, idx, logicalType) = resolveInputField(name, 
tableSource)
 // validate that mapped fields are of the same type
-if (!isAssignable(fromTypeInfoToLogicalType(tpe), t)) {
+if (!isAssignable(logicalType, t)) {
   throw new ValidationException(s"Type $t of table field '$name' does 
not " +
-s"match with type $tpe of the field '$physicalName' of the 
TableSource return type.")
+s"match with type $logicalType of the field '$physicalName' of the 
" +
+"TableSource return type.")
+} else if (!isInteroperable(logicalType, t)) {
 
 Review comment:
   I think you should add comments to both source and sink to explain why they 
are different.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi commented on a change in pull request #10518: [FLINK-15124][table] Fix types with precision defined in DDL can't be executed

2019-12-10 Thread GitBox
JingsongLi commented on a change in pull request #10518: [FLINK-15124][table] 
Fix types with precision defined in DDL can't be executed
URL: https://github.com/apache/flink/pull/10518#discussion_r356413957
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/SinkCodeGenerator.scala
 ##
 @@ -223,10 +224,10 @@ object SinkCodeGenerator {
   case (fieldTypeInfo, i) =>
 val requestedTypeInfo = tt.getTypeAt(i)
 validateFieldType(requestedTypeInfo)
-if (!areTypesCompatible(
-  fromTypeInfoToLogicalType(fieldTypeInfo),
-  fromTypeInfoToLogicalType(requestedTypeInfo)) &&
-  !requestedTypeInfo.isInstanceOf[GenericTypeInfo[Object]]) {
+if (!PlannerTypeUtils.isAssignable(
+fromTypeInfoToLogicalType(fieldTypeInfo),
 
 Review comment:
   I think in Scala only one indent is enough. Correct me if I'm wrong.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi commented on a change in pull request #10518: [FLINK-15124][table] Fix types with precision defined in DDL can't be executed

2019-12-10 Thread GitBox
JingsongLi commented on a change in pull request #10518: [FLINK-15124][table] 
Fix types with precision defined in DDL can't be executed
URL: https://github.com/apache/flink/pull/10518#discussion_r356413683
 
 

 ##
 File path: 
flink-table/flink-table-api-java-bridge/src/main/java/org/apache/flink/table/sinks/CsvTableSinkFactoryBase.java
 ##
 @@ -75,6 +75,8 @@
properties.add(SCHEMA + ".#." + 
DescriptorProperties.TABLE_SCHEMA_TYPE);
properties.add(SCHEMA + ".#." + 
DescriptorProperties.TABLE_SCHEMA_DATA_TYPE);
properties.add(SCHEMA + ".#." + 
DescriptorProperties.TABLE_SCHEMA_NAME);
+   // schema watermark
+   properties.add(SCHEMA + "." + DescriptorProperties.WATERMARK + 
".*");
 
 Review comment:
   So every connector (SourceTableFactory and SinkTableFactory) must have this?
   Does Kafka not work either?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi commented on a change in pull request #10514: [FLINK-15167][sql-client] SQL CLI library option doesn't work for Hive connector

2019-12-10 Thread GitBox
JingsongLi commented on a change in pull request #10514: 
[FLINK-15167][sql-client] SQL CLI library option doesn't work for Hive connector
URL: https://github.com/apache/flink/pull/10514#discussion_r356411359
 
 

 ##
 File path: 
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/local/ExecutionContext.java
 ##
 @@ -151,6 +153,9 @@ private ExecutionContext(
classLoader = FlinkUserCodeClassLoaders.parentFirst(
dependencies.toArray(new 
URL[dependencies.size()]),
this.getClass().getClassLoader());
+   if (!dependencies.isEmpty()) {
+   flinkConfig.set(PipelineOptions.JARS, 
dependencies.stream().map(URL::toString).collect(Collectors.toList()));
 
 Review comment:
   `ConfigUtils.encodeCollectionToConfig(configuration, PipelineOptions.JARS, 
jobJars, URL::toString);`
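
For reference, a hedged sketch of the suggested helper in place of the stream-based call from the diff above (variable names follow the excerpt; `ConfigUtils.encodeCollectionToConfig` is the call quoted by the reviewer):

{code:java}
// Hedged sketch: replaces dependencies.stream().map(URL::toString).collect(...)
if (!dependencies.isEmpty()) {
    ConfigUtils.encodeCollectionToConfig(
        flinkConfig, PipelineOptions.JARS, dependencies, URL::toString);
}
{code}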


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi commented on a change in pull request #10514: [FLINK-15167][sql-client] SQL CLI library option doesn't work for Hive connector

2019-12-10 Thread GitBox
JingsongLi commented on a change in pull request #10514: 
[FLINK-15167][sql-client] SQL CLI library option doesn't work for Hive connector
URL: https://github.com/apache/flink/pull/10514#discussion_r356411828
 
 

 ##
 File path: 
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/local/ExecutionContext.java
 ##
 @@ -337,6 +342,11 @@ private Module createModule(Map 
moduleProperties, ClassLoader cl
return factory.createModule(moduleProperties);
}
 
+   private void createAndRegisterCatalog(String name, CatalogEntry entry) {
 
 Review comment:
   just inline this method?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi commented on a change in pull request #10514: [FLINK-15167][sql-client] SQL CLI library option doesn't work for Hive connector

2019-12-10 Thread GitBox
JingsongLi commented on a change in pull request #10514: 
[FLINK-15167][sql-client] SQL CLI library option doesn't work for Hive connector
URL: https://github.com/apache/flink/pull/10514#discussion_r356412063
 
 

 ##
 File path: 
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/local/ExecutionContext.java
 ##
 @@ -337,6 +342,11 @@ private Module createModule(Map 
moduleProperties, ClassLoader cl
return factory.createModule(moduleProperties);
}
 
+   private void createAndRegisterCatalog(String name, CatalogEntry entry) {
+   Catalog catalog = createCatalog(name, entry.asMap(), 
Thread.currentThread().getContextClassLoader());
 
 Review comment:
   Why `Thread.currentThread().getContextClassLoader()` instead of class member 
`classLoader`?
   The only thing you need is `wrapClassLoader`.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi commented on a change in pull request #10514: [FLINK-15167][sql-client] SQL CLI library option doesn't work for Hive connector

2019-12-10 Thread GitBox
JingsongLi commented on a change in pull request #10514: 
[FLINK-15167][sql-client] SQL CLI library option doesn't work for Hive connector
URL: https://github.com/apache/flink/pull/10514#discussion_r356412983
 
 

 ##
 File path: 
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/local/ExecutionContext.java
 ##
 @@ -511,12 +521,10 @@ private void initializeCatalogs() {

//--
// Step.1 Create catalogs and register them.

//--
-   Map catalogs = new LinkedHashMap<>();
-   environment.getCatalogs().forEach((name, entry) ->
-   catalogs.put(name, createCatalog(name, 
entry.asMap(), classLoader))
-   );
-   // register catalogs
-   catalogs.forEach(tableEnv::registerCatalog);
+   wrapClassLoader((Supplier) () -> {
 
 Review comment:
   Add a `public void wrapClassLoader(Runnable supplier)`?
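
A hedged sketch of such an overload (its placement in ExecutionContext and the `classLoader` field are assumptions based on this thread):

{code:java}
// Hedged sketch: run a task with the user-code class loader as the
// thread's context class loader, restoring the previous one afterwards.
public void wrapClassLoader(Runnable runnable) {
    final ClassLoader previous = Thread.currentThread().getContextClassLoader();
    Thread.currentThread().setContextClassLoader(classLoader);
    try {
        runnable.run();
    } finally {
        Thread.currentThread().setContextClassLoader(previous);
    }
}
{code}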


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi commented on a change in pull request #10514: [FLINK-15167][sql-client] SQL CLI library option doesn't work for Hive connector

2019-12-10 Thread GitBox
JingsongLi commented on a change in pull request #10514: 
[FLINK-15167][sql-client] SQL CLI library option doesn't work for Hive connector
URL: https://github.com/apache/flink/pull/10514#discussion_r356411620
 
 

 ##
 File path: 
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/local/ExecutionContext.java
 ##
 @@ -151,6 +153,9 @@ private ExecutionContext(
classLoader = FlinkUserCodeClassLoaders.parentFirst(
dependencies.toArray(new 
URL[dependencies.size()]),
this.getClass().getClassLoader());
+   if (!dependencies.isEmpty()) {
 
 Review comment:
   Just remove this?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10522: [FLINK-15153]Service selector needs to contain jobmanager component label(Backport to 1.10)

2019-12-10 Thread GitBox
flinkbot edited a comment on issue #10522: [FLINK-15153]Service selector needs 
to contain jobmanager component label(Backport to 1.10)
URL: https://github.com/apache/flink/pull/10522#issuecomment-564348907
 
 
   
   ## CI report:
   
   * 137c5df8e1bfffa09a00d7d7515c8b09d961753a Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/140532162) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3438)
 
   * 427445d4701f47fecdaa76d0b9a981ddb02597c3 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/140535715) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3442)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Assigned] (FLINK-14952) Yarn containers can exceed physical memory limits when using BoundedBlockingSubpartition.

2019-12-10 Thread zhijiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhijiang reassigned FLINK-14952:


Assignee: zhijiang

> Yarn containers can exceed physical memory limits when using 
> BoundedBlockingSubpartition.
> -
>
> Key: FLINK-14952
> URL: https://issues.apache.org/jira/browse/FLINK-14952
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN, Runtime / Network
>Affects Versions: 1.9.1
>Reporter: Piotr Nowojski
>Assignee: zhijiang
>Priority: Blocker
> Fix For: 1.10.0
>
>
> As [reported by a user on the user mailing 
> list|http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/CoGroup-SortMerger-performance-degradation-from-1-6-4-1-9-1-td31082.html],
>  combination of using {{BoundedBlockingSubpartition}} with yarn containers 
> can cause yarn container to exceed memory limits.
> {quote}2019-11-19 12:49:23,068 INFO org.apache.flink.yarn.YarnResourceManager 
> - Closing TaskExecutor connection container_e42_1574076744505_9444_01_04 
> because: Container 
> [pid=42774,containerID=container_e42_1574076744505_9444_01_04] is running 
> beyond physical memory limits. Current usage: 12.0 GB of 12 GB physical 
> memory used; 13.9 GB of 25.2 GB virtual memory used. Killing container.
> {quote}
> This is probably happening because memory usage of mmap is not capped and not 
> accounted by configured memory limits, however yarn is tracking this memory 
> usage and once Flink exceeds some threshold, container is being killed.
> The workaround is to overrule the default value and force Flink to not use mmap, by 
> setting a secret (🤫) config option:
> {noformat}
> taskmanager.network.bounded-blocking-subpartition-type: file
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wuchong commented on issue #10518: [FLINK-15124][table] Fix types with precision defined in DDL can't be executed

2019-12-10 Thread GitBox
wuchong commented on issue #10518: [FLINK-15124][table] Fix types with 
precision defined in DDL can't be executed
URL: https://github.com/apache/flink/pull/10518#issuecomment-564390222
 
 
   Hi @JingsongLi , @danny0405 , I have updated the PR, please have another 
look.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Assigned] (FLINK-15140) Shuffle data compression does not work with BroadcastRecordWriter.

2019-12-10 Thread zhijiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhijiang reassigned FLINK-15140:


Assignee: Yingjie Cao

> Shuffle data compression does not work with BroadcastRecordWriter.
> --
>
> Key: FLINK-15140
> URL: https://issues.apache.org/jira/browse/FLINK-15140
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.10.0
>Reporter: Yingjie Cao
>Assignee: Yingjie Cao
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I tested the newest code of the master branch last weekend with more test 
> cases. Unfortunately, several problems were encountered, including a 
> compression bug.
> For pipelined mode, the compressor copies the compressed data back into the 
> input buffer; however, when BroadcastRecordWriter is used, the underlying 
> buffer is shared, so we cannot copy the compressed buffer back into it. For 
> blocking mode, we wrongly recycle the buffer when the buffer is not 
> compressed, and this problem is also triggered when BroadcastRecordWriter is 
> used.
> To fix the problem: for blocking shuffle, the reference counter should be 
> maintained correctly; for pipelined shuffle, the simplest way may be to 
> disable compression when the underlying buffer is shared. I will open a PR 
> to fix the problem.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-14952) Yarn containers can exceed physical memory limits when using BoundedBlockingSubpartition.

2019-12-10 Thread zhijiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16993207#comment-16993207
 ] 

zhijiang commented on FLINK-14952:
--

As [~kevin.cyj] mentioned above, the current blocking partition with file type 
has some potential concerns about memory overhead. I created a separate ticket, 
FLINK-15187, to track this issue, and I do not tag it as a blocker for 
release-1.10 ATM.

> Yarn containers can exceed physical memory limits when using 
> BoundedBlockingSubpartition.
> -
>
> Key: FLINK-14952
> URL: https://issues.apache.org/jira/browse/FLINK-14952
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN, Runtime / Network
>Affects Versions: 1.9.1
>Reporter: Piotr Nowojski
>Priority: Blocker
> Fix For: 1.10.0
>
>
> As [reported by a user on the user mailing 
> list|http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/CoGroup-SortMerger-performance-degradation-from-1-6-4-1-9-1-td31082.html],
>  combination of using {{BoundedBlockingSubpartition}} with yarn containers 
> can cause yarn container to exceed memory limits.
> {quote}2019-11-19 12:49:23,068 INFO org.apache.flink.yarn.YarnResourceManager 
> - Closing TaskExecutor connection container_e42_1574076744505_9444_01_04 
> because: Container 
> [pid=42774,containerID=container_e42_1574076744505_9444_01_04] is running 
> beyond physical memory limits. Current usage: 12.0 GB of 12 GB physical 
> memory used; 13.9 GB of 25.2 GB virtual memory used. Killing container.
> {quote}
> This is probably happening because memory usage of mmap is not capped and not 
> accounted by configured memory limits, however yarn is tracking this memory 
> usage and once Flink exceeds some threshold, container is being killed.
> The workaround is to overrule the default value and force Flink to not use mmap, by 
> setting a secret (🤫) config option:
> {noformat}
> taskmanager.network.bounded-blocking-subpartition-type: file
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10492: [FLINK-15140][runtime] Fix shuffle data compression doesn't work with BroadcastRecordWriter.

2019-12-10 Thread GitBox
flinkbot edited a comment on issue #10492: [FLINK-15140][runtime] Fix shuffle 
data compression doesn't work with BroadcastRecordWriter.
URL: https://github.com/apache/flink/pull/10492#issuecomment-563110907
 
 
   
   ## CI report:
   
   * bef118a977b3aa635fc748260d07b0d5079b2c0e Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/140180154) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3342)
 
   * fb385e77c7006db8552c81ef2c2005d37a63fb93 Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/140535760) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3445)
 
   * 0276c0af8515d2f81a44d253339f3b367d2bc5cb Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/140541200) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3449)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-15188) add builder for catalog objects

2019-12-10 Thread Bowen Li (Jira)
Bowen Li created FLINK-15188:


 Summary: add builder for catalog objects
 Key: FLINK-15188
 URL: https://issues.apache.org/jira/browse/FLINK-15188
 Project: Flink
  Issue Type: Task
  Components: Table SQL / API
Reporter: Bowen Li
Assignee: Kurt Young


Currently we don't have builders for catalog objects, and users are forced to 
use the raw impl classes of catalog objects. E.g., to create a catalog table 
in the Table API, users have to do:

{code:java}
var table = new CatalogTableImpl(tableSchema, properties, comment)
{code}

which is not very nice.

The same applies to {{CatalogDatabaseImpl}}, {{CatalogViewImpl}}, 
{{CatalogPartitionImpl}}. These impls are supposed to be internal classes, but 
we are exposing them to users.

A better API experience would be: 

{code:java}
var catalog = ...
catalog.createDatabase("mydb", new Database().withProperties().xxx(), false)
catalog.createTable("name", new Kafka().xxx().xxx(), false)
{code}

Thus we may need to convert the connector descriptor to a catalog table impl, 
and add builders for the other catalog objects.

This may or may not be a high-priority task, depending on how many users are 
registering tables in the Table API vs. using DDL.




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10434: [FLINK-15072][client] executeAsync in ContextEnvironment from CliFrontend cause unexpected exception

2019-12-10 Thread GitBox
flinkbot edited a comment on issue #10434: [FLINK-15072][client] executeAsync 
in ContextEnvironment from CliFrontend cause unexpected exception
URL: https://github.com/apache/flink/pull/10434#issuecomment-562048009
 
 
   
   ## CI report:
   
   * 3657f8be3e9a8eb83712a9eeafe0bc8e3c182c95 Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/139496022) 
   * ddf99958fc5bbd03e1cde6abbd58c22a4f7c5554 Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/139501940) 
   * 31f08bdde0f7fa3ad954b9a1c7790819c62e18cd Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/139534285) 
   * 6ae5448f1725843761d4793e16846278d617dbaa Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/140374151) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3397)
 
   * c984c7d529d401a6e2e9029479c26310996de496 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/140394922) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3404)
 
   * 279e7a0bdbf3a4a1ea02ba18db64e2330193055b UNKNOWN
   * 77c9d550e3ad6a80c953bb789e47a0e7dafd520f Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/140532207) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3439)
 
   * 5c50c03749c3f69147ae401fe4313ef28c819d7a Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/140535748) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3444)
 
   * 15f502bd099959436eec4251aaa8de1acd248c40 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-13437) Add Hive SQL E2E test

2019-12-10 Thread Yu Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16993202#comment-16993202
 ] 

Yu Li commented on FLINK-13437:
---

[~Terry1897] I'd like to help, but only the assignee of the issue can update 
the status. Could you try the "Start Progress" button right under the JIRA 
title, next to "Resolve Issue"? Thanks.

> Add Hive SQL E2E test
> -
>
> Key: FLINK-13437
> URL: https://issues.apache.org/jira/browse/FLINK-13437
> Project: Flink
>  Issue Type: Test
>  Components: Connectors / Hive, Tests
>Affects Versions: 1.9.0
>Reporter: Till Rohrmann
>Assignee: Terry Wang
>Priority: Major
> Fix For: 1.10.0
>
>
> We should add an E2E test for the Hive integration: List all tables and read 
> some metadata, read from an existing table, register a new table in Hive, use 
> a registered function, write to an existing table, write to a new table.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10522: [FLINK-15153]Service selector needs to contain jobmanager component label(Backport to 1.10)

2019-12-10 Thread GitBox
flinkbot edited a comment on issue #10522: [FLINK-15153]Service selector needs 
to contain jobmanager component label(Backport to 1.10)
URL: https://github.com/apache/flink/pull/10522#issuecomment-564348907
 
 
   
   ## CI report:
   
   * 137c5df8e1bfffa09a00d7d7515c8b09d961753a Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/140532162) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3438)
 
   * 427445d4701f47fecdaa76d0b9a981ddb02597c3 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/140535715) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3442)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10501: [FLINK-15139][table sql / client]misc end to end test failed cause loss jars in converting to jobgraph

2019-12-10 Thread GitBox
flinkbot edited a comment on issue #10501: [FLINK-15139][table sql / 
client]misc end to end test failed  cause loss jars in converting to jobgraph
URL: https://github.com/apache/flink/pull/10501#issuecomment-563307886
 
 
   
   ## CI report:
   
   * 88033844e8102c06281586898f8641dd4cba3a3c Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/140265006) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3377)
 Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3372)
 
   * 1fd40c172a561fcd13b7f94974575450c4e3b48c Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/140541188) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3448)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10492: [FLINK-15140][runtime] Fix shuffle data compression doesn't work with BroadcastRecordWriter.

2019-12-10 Thread GitBox
flinkbot edited a comment on issue #10492: [FLINK-15140][runtime] Fix shuffle 
data compression doesn't work with BroadcastRecordWriter.
URL: https://github.com/apache/flink/pull/10492#issuecomment-563110907
 
 
   
   ## CI report:
   
   * bef118a977b3aa635fc748260d07b0d5079b2c0e Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/140180154) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3342)
 
   * fb385e77c7006db8552c81ef2c2005d37a63fb93 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/140535760) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3445)
 
   * 0276c0af8515d2f81a44d253339f3b367d2bc5cb UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10436: [FLINK-14920] [flink-end-to-end-perf-tests] Set up environment to run performance e2e tests

2019-12-10 Thread GitBox
flinkbot edited a comment on issue #10436: [FLINK-14920] 
[flink-end-to-end-perf-tests] Set up environment to run performance e2e tests
URL: https://github.com/apache/flink/pull/10436#issuecomment-562103759
 
 
   
   ## CI report:
   
   * 3441142c77315169ce7a3b48d4d59e710c1016c2 Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/139518645) 
   * 68bc1d0dac16c85a35644a3444777dde5b38257c Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/139548818) 
   * 60710d385f3fc62c641d36da4029813d779caf6d Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/139555833) 
   * f4d638d6b1e578bc631df874eb67efdeb4673601 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/139575528) 
   * 841bea9216e1c07d2fce9093d720f64f6d6c889f Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/139812364) 
   * 87a8eb32399114c449f75a94d0243afed93adda5 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/139852611) 
   * ed6a1e4c6e0fea5c0432c0dde6312c7b73ae447c Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/140126741) 
   * c1a1151bd3eb16de5851105700fd48928fe15e7b Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/140166474) 
   * a08d23d559e36ff93480728a9168e92319941a68 Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/140528959) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3436)
 
   * faf3ac41b617d6aad093e61dd89445947f2898c0 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/140533954) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3441)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10524: [hotfix][doc] update Hive doc

2019-12-10 Thread GitBox
flinkbot edited a comment on issue #10524: [hotfix][doc] update Hive doc
URL: https://github.com/apache/flink/pull/10524#issuecomment-564373940
 
 
   
   ## CI report:
   
   * 8245ff2b8a069c227013ffe04001d6f0778e7979 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/140539604) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3447)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10523: [FLINK-15073][sql-client] SQL-CLI fails to run same query multiple times

2019-12-10 Thread GitBox
flinkbot edited a comment on issue #10523: [FLINK-15073][sql-client] SQL-CLI 
fails to run same query multiple times
URL: https://github.com/apache/flink/pull/10523#issuecomment-564363476
 
 
   
   ## CI report:
   
   * afdfe7941439eeec090d3123319c74ecb7fb21ee Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/140535725) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3443)
 
   * 912744c3bb41f9c60634bdd1de6cb1506d0a2cf3 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/140539587) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3446)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

