[GitHub] [flink] flinkbot edited a comment on issue #10735: [hotfix][runtime] Fix variable name typos in TaskInformation.java and…

2019-12-31 Thread GitBox
flinkbot edited a comment on issue #10735: [hotfix][runtime] Fix variable name 
typos in TaskInformation.java and…
URL: https://github.com/apache/flink/pull/10735#issuecomment-570028288
 
 
   
   ## CI report:
   
   * fa6bd8979b2404913c658fc035746f7b2ddbd3ae Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142788492) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4020)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10735: [hotfix][runtime] Fix variable name typos in TaskInformation.java and…

2019-12-31 Thread GitBox
flinkbot edited a comment on issue #10735: [hotfix][runtime] Fix variable name 
typos in TaskInformation.java and…
URL: https://github.com/apache/flink/pull/10735#issuecomment-570028288
 
 
   
   ## CI report:
   
   * fa6bd8979b2404913c658fc035746f7b2ddbd3ae Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/142788492) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4020)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot commented on issue #10735: [hotfix][runtime] Fix variable name typos in TaskInformation.java and…

2019-12-31 Thread GitBox
flinkbot commented on issue #10735: [hotfix][runtime] Fix variable name typos 
in TaskInformation.java and…
URL: https://github.com/apache/flink/pull/10735#issuecomment-570028288
 
 
   
   ## CI report:
   
   * fa6bd8979b2404913c658fc035746f7b2ddbd3ae UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot commented on issue #10735: [hotfix][runtime] Fix variable name typos in TaskInformation.java and…

2019-12-31 Thread GitBox
flinkbot commented on issue #10735: [hotfix][runtime] Fix variable name typos 
in TaskInformation.java and…
URL: https://github.com/apache/flink/pull/10735#issuecomment-570026137
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit fa6bd8979b2404913c658fc035746f7b2ddbd3ae (Wed Jan 01 
06:05:26 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] paul8263 opened a new pull request #10735: [hotfix][runtime] Fix variable name typos in TaskInformation.java and…

2019-12-31 Thread GitBox
paul8263 opened a new pull request #10735: [hotfix][runtime] Fix variable name 
typos in TaskInformation.java and…
URL: https://github.com/apache/flink/pull/10735
 
 
   ## What is the purpose of the change
   
   Fix variable name typos in TaskInformation.java and Task.java
   
   ## Brief change log
   
   * TaskInformation: Renamed maxNumberOfSubtaks to maxNumberOfSubtasks, along 
with its corresponding getter, and renamed the maxNumberOfSubtaks constructor 
parameter to maxNumberOfSubtasks.
   * Task: Correspondingly changed taskInformation.getMaxNumberOfSubtaks() to 
taskInformation.getMaxNumberOfSubtasks().
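
   The rename can be pictured with a minimal, self-contained sketch. The class 
below is illustrative only, not Flink's actual TaskInformation; it shows just 
the renamed field, constructor parameter, and getter:

```java
public class TaskInformationSketch {
    /** Illustrative stand-in for the runtime's TaskInformation class. */
    public static final class TaskInformation {
        private final String taskName;
        // Renamed field: was "maxNumberOfSubtaks"
        private final int maxNumberOfSubtasks;

        public TaskInformation(String taskName, int maxNumberOfSubtasks) {
            this.taskName = taskName;
            this.maxNumberOfSubtasks = maxNumberOfSubtasks;
        }

        public String getTaskName() {
            return taskName;
        }

        // Renamed getter: was "getMaxNumberOfSubtaks()"
        public int getMaxNumberOfSubtasks() {
            return maxNumberOfSubtasks;
        }
    }

    public static void main(String[] args) {
        // Call sites (e.g. in Task.java) now use the corrected name.
        TaskInformation info = new TaskInformation("map", 128);
        System.out.println(info.getMaxNumberOfSubtasks()); // prints 128
    }
}
```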
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
   




[GitHub] [flink] bowenli86 commented on a change in pull request #10732: [FLINK-14980][docs] add function ddl docs

2019-12-31 Thread GitBox
bowenli86 commented on a change in pull request #10732: [FLINK-14980][docs] add 
function ddl docs
URL: https://github.com/apache/flink/pull/10732#discussion_r362301736
 
 

 ##
 File path: docs/dev/table/sql/alter.md
 ##
 @@ -134,4 +135,27 @@ Set one or more properties in the specified table. If a 
particular property is a
 ALTER DATABASE [catalog_name.]db_name SET (key1=val1, key2=val2, ...)
 {% endhighlight %}
 
-Set one or more properties in the specified database. If a particular property 
is already set in the database, override the old value with the new one.
\ No newline at end of file
+Set one or more properties in the specified database. If a particular property 
is already set in the database, override the old value with the new one.
+
+## ALTER FUNCTION
+
+{% highlight sql%}
+ALTER [TEMPORARY|TEMPORARY SYSTEM] FUNCTION 
+  [IF EXISTS] [catalog_name.][db_name.] function_name 
 
 Review comment:
   should be?
   ```suggestion
 [IF EXISTS] [catalog_name.][db_name.]function_name 
   ```




[GitHub] [flink] bowenli86 commented on a change in pull request #10732: [FLINK-14980][docs] add function ddl docs

2019-12-31 Thread GitBox
bowenli86 commented on a change in pull request #10732: [FLINK-14980][docs] add 
function ddl docs
URL: https://github.com/apache/flink/pull/10732#discussion_r362301757
 
 

 ##
 File path: docs/dev/table/sql/create.md
 ##
 @@ -225,3 +226,24 @@ Database properties used to store extra information 
related to this database.
 The key and value of expression `key1=val1` should both be string literal.
 
 {% top %}
+
+## CREATE FUNCTION
+{% highlight sql%}
+CREATE [TEMPORARY|TEMPORARY SYSTEM] FUNCTION 
+  [IF NOT EXISTS] [catalog_name.][db_name.]function_name 
+  AS identifier [LANGUAGE JAVA|SCALA]
+{% endhighlight %}
+
+Create a catalog function that has catalog and database namespaces with the 
identifier and optional language tag. If a function with the same name already 
exists in the catalog, an exception is thrown.
+
 
 Review comment:
   maybe also document that 'identifier' for java/scala is the class name?




[GitHub] [flink] bowenli86 commented on a change in pull request #10732: [FLINK-14980][docs] add function ddl docs

2019-12-31 Thread GitBox
bowenli86 commented on a change in pull request #10732: [FLINK-14980][docs] add 
function ddl docs
URL: https://github.com/apache/flink/pull/10732#discussion_r362301768
 
 

 ##
 File path: docs/dev/table/sql/drop.md
 ##
 @@ -140,4 +141,21 @@ Dropping a non-empty database triggers an exception. 
Enabled by default.
 
 **CASCADE**
 
-Dropping a non-empty database also drops all associated tables and functions.
\ No newline at end of file
+Dropping a non-empty database also drops all associated tables and functions.
+
+## DROP FUNCTION
+
+{% highlight sql%}
+DROP [TEMPORARY|TEMPORARY SYSTEM] FUNCTION [IF EXISTS] 
[catalog_name.][db_name.] function_name;
 
 Review comment:
   should be 
   ```suggestion
   DROP [TEMPORARY|TEMPORARY SYSTEM] FUNCTION [IF EXISTS] 
[catalog_name.][db_name.]function_name;
   ```




[GitHub] [flink] bowenli86 commented on a change in pull request #10732: [FLINK-14980][docs] add function ddl docs

2019-12-31 Thread GitBox
bowenli86 commented on a change in pull request #10732: [FLINK-14980][docs] add 
function ddl docs
URL: https://github.com/apache/flink/pull/10732#discussion_r362301750
 
 

 ##
 File path: docs/dev/table/sql/alter.md
 ##
 @@ -134,4 +135,27 @@ Set one or more properties in the specified table. If a 
particular property is a
 ALTER DATABASE [catalog_name.]db_name SET (key1=val1, key2=val2, ...)
 {% endhighlight %}
 
-Set one or more properties in the specified database. If a particular property 
is already set in the database, override the old value with the new one.
\ No newline at end of file
+Set one or more properties in the specified database. If a particular property 
is already set in the database, override the old value with the new one.
+
+## ALTER FUNCTION
+
+{% highlight sql%}
+ALTER [TEMPORARY|TEMPORARY SYSTEM] FUNCTION 
+  [IF EXISTS] [catalog_name.][db_name.] function_name 
+  AS identifier [LANGUAGE JAVA|SCALA]
+{% endhighlight %}
+
+Alter a catalog function with the new identifier and optional language tag. If 
a function doesn't exist in the catalog, an exception is thrown.
+
 
 Review comment:
   maybe also document that 'identifier' for java/scala is the class name?




[jira] [Commented] (FLINK-15081) Translate "Concepts & Common API" page of Table API into Chinese

2019-12-31 Thread ShijieZhang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17006284#comment-17006284
 ] 

ShijieZhang commented on FLINK-15081:
-

[~jark] Sorry, I didn't know there was a translation specification before, so 
do you mind if I retranslate this article?

> Translate "Concepts & Common API" page of Table API into Chinese
> 
>
> Key: FLINK-15081
> URL: https://issues.apache.org/jira/browse/FLINK-15081
> Project: Flink
>  Issue Type: Task
>  Components: chinese-translation
>Reporter: Steve OU
>Assignee: ShijieZhang
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The page url is 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/table/common.html]
> The markdown file is located in flink/docs/dev/table/common.zh.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (FLINK-15409) Fix code generation in windowed join function

2019-12-31 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu resolved FLINK-15409.
-
Resolution: Fixed

1.11.0: e1340a006e3f76b71323dc87f61063fedfdc7f3b
1.10.0: d41f75ec749703b778c0e2f0cb56359667fa87cd

> Fix code generation in windowed join function
> -
>
> Key: FLINK-15409
> URL: https://issues.apache.org/jira/browse/FLINK-15409
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: hailong wang
>Assignee: hailong wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In WindowJoinUtil#generateJoinFunction, when otherCondition is None, it goes 
> into this statement:
> {code:java}
> case None =>
>   s"""
>  |$buildJoinedRow
>  |$collectorTerm.collect($joinedRow)
>  |""".stripMargin
> {code}
> It is missing a semicolon after collect($joinedRow), which causes compilation 
> to fail:
> {code:java}
> Caused by: org.apache.flink.api.common.InvalidProgramException: Table program 
> cannot be compiled. This is a bug. Please file an issue.Caused by: 
> org.apache.flink.api.common.InvalidProgramException: Table program cannot be 
> compiled. This is a bug. Please file an issue. at 
> org.apache.flink.table.runtime.generated.CompileUtils.doCompile(CompileUtils.java:81)
>  at 
> org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:65)
>  at 
> org.apache.flink.table.runtime.generated.GeneratedClass.compile(GeneratedClass.java:78)
>  at 
> org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:52)
>  ... 26 moreCaused by: org.codehaus.commons.compiler.CompileException: Line 
> 28, Column 21: Expression "c.collect(joinedRow)" is not a type
> {code}
>  
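
To make the failure concrete, here is a hypothetical, heavily simplified sketch 
of the template step (the method name and strings are illustrative, not Flink's 
actual WindowJoinUtil): the emitted text must be a complete Java statement, so 
the fix appends the missing semicolon.

```java
public class GeneratedJoinBodySketch {
    /**
     * Illustrative stand-in for the code-generation template: builds the
     * body emitted when there is no extra join condition.
     */
    static String generateBody(String collectorTerm, String joinedRowTerm) {
        // The bug was emitting "c.collect(joinedRow)" with no trailing ';',
        // which the compiler rejects ("Expression ... is not a type"). The
        // fixed template terminates the statement.
        return collectorTerm + ".collect(" + joinedRowTerm + ");";
    }

    public static void main(String[] args) {
        System.out.println(generateBody("c", "joinedRow")); // prints c.collect(joinedRow);
    }
}
```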





[jira] [Updated] (FLINK-15409) Fix code generation in windowed join function

2019-12-31 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-15409:

Affects Version/s: (was: 1.9.1)

> Fix code generation in windowed join function
> -
>
> Key: FLINK-15409
> URL: https://issues.apache.org/jira/browse/FLINK-15409
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: hailong wang
>Assignee: hailong wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>





[jira] [Updated] (FLINK-15409) Add semicolon to WindowJoinUtil#generateJoinFunction '$collectorTerm.collect($joinedRow)' statement

2019-12-31 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-15409:

Fix Version/s: (was: 1.9.2)

> Add semicolon to WindowJoinUtil#generateJoinFunction 
> '$collectorTerm.collect($joinedRow)' statement
> ---
>
> Key: FLINK-15409
> URL: https://issues.apache.org/jira/browse/FLINK-15409
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.9.1
>Reporter: hailong wang
>Assignee: hailong wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>





[jira] [Updated] (FLINK-15409) Fix code generation in windowed join function

2019-12-31 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-15409:

Summary: Fix code generation in windowed join function  (was: Add semicolon 
to WindowJoinUtil#generateJoinFunction '$collectorTerm.collect($joinedRow)' 
statement)

> Fix code generation in windowed join function
> -
>
> Key: FLINK-15409
> URL: https://issues.apache.org/jira/browse/FLINK-15409
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.9.1
>Reporter: hailong wang
>Assignee: hailong wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>





[jira] [Updated] (FLINK-15409) Add semicolon to WindowJoinUtil#generateJoinFunction '$collectorTerm.collect($joinedRow)' statement

2019-12-31 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-15409:

Component/s: (was: Table SQL / Runtime)
 Table SQL / Planner

> Add semicolon to WindowJoinUtil#generateJoinFunction 
> '$collectorTerm.collect($joinedRow)' statement
> ---
>
> Key: FLINK-15409
> URL: https://issues.apache.org/jira/browse/FLINK-15409
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.9.1
>Reporter: hailong wang
>Assignee: hailong wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.2, 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>





[jira] [Commented] (FLINK-15434) testResourceManagerConnectionAfterRegainingLeadership test fail when run azure

2019-12-31 Thread hailong wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17006260#comment-17006260
 ] 

hailong wang commented on FLINK-15434:
--

I reproduced this problem on my laptop, and the debug log is as follows (one 
entry per line; the trailing stack trace is truncated in the original message):
{code:java}
[flink-akka.actor.default-dispatcher-4] INFO  akka.event.slf4j.Slf4jLogger  - Slf4jLogger started
0    [flink-akka.actor.default-dispatcher-4] INFO  akka.event.slf4j.Slf4jLogger  - Slf4jLogger started
72   [flink-akka.actor.default-dispatcher-4] DEBUG akka.event.EventStream  - logger log1-Slf4jLogger started
79   [flink-akka.actor.default-dispatcher-4] DEBUG akka.event.EventStream  - Default Loggers started
1165 [main] INFO  org.apache.flink.runtime.jobmaster.JobMasterTest  - Test testResourceManagerConnectionAfterRegainingLeadership(org.apache.flink.runtime.jobmaster.JobMasterTest) is running.
4106 [main] INFO  org.apache.flink.runtime.rpc.akka.AkkaRpcService  - Starting RPC endpoint for org.apache.flink.runtime.jobmaster.JobMaster at akka://flink/user/jobmanager_0 .
4171 [main] INFO  org.apache.flink.runtime.jobmaster.JobMaster  - Initializing job (unnamed job) (f781d92dbe44f52505eff1b144e45bb6).
4268 [main] INFO  org.apache.flink.runtime.jobmaster.JobMaster  - Using restart strategy NoRestartStrategy for (unnamed job) (f781d92dbe44f52505eff1b144e45bb6).
4362 [main] INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph  - Job recovers via failover strategy: full graph restart
4433 [main] INFO  org.apache.flink.runtime.jobmaster.JobMaster  - Running initialization on master for job (unnamed job) (f781d92dbe44f52505eff1b144e45bb6).
4433 [main] INFO  org.apache.flink.runtime.jobmaster.JobMaster  - Successfully ran initialization on master in 0 ms.
4439 [main] DEBUG org.apache.flink.runtime.jobmaster.JobMaster  - Adding 0 vertices from job graph (unnamed job) (f781d92dbe44f52505eff1b144e45bb6).
4439 [main] DEBUG org.apache.flink.runtime.executiongraph.ExecutionGraph  - Attaching 0 topologically sorted vertices to existing job graph with 0 vertices and 0 intermediate results.
4484 [main] DEBUG org.apache.flink.runtime.jobmaster.JobMaster  - Successfully created execution graph from job graph (unnamed job) (f781d92dbe44f52505eff1b144e45bb6).
4518 [flink-akka.actor.default-dispatcher-4] INFO  org.apache.flink.runtime.jobmaster.JobMaster  - Starting execution of job (unnamed job) (f781d92dbe44f52505eff1b144e45bb6) under job master id 5b6d296cfdedc261c4bbd525aea971b3.
4521 [flink-akka.actor.default-dispatcher-4] INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph  - Job (unnamed job) (f781d92dbe44f52505eff1b144e45bb6) switched from state CREATED to RUNNING.
4533 [flink-akka.actor.default-dispatcher-4] DEBUG org.apache.flink.runtime.jobmaster.JobMaster  - Trigger heartbeat request.
4537 [flink-akka.actor.default-dispatcher-4] DEBUG org.apache.flink.runtime.jobmaster.JobMaster  - Trigger heartbeat request.
4543 [flink-akka.actor.default-dispatcher-4] INFO  org.apache.flink.runtime.jobmaster.JobMaster  - Connecting to ResourceManager localhost/cc67ffdf-cb14-4f8a-bf85-ddd6f11327b2(5421d487c36920c6ee4eb0136801)
4556 [flink-akka.actor.default-dispatcher-2] INFO  org.apache.flink.runtime.jobmaster.JobMaster  - Resolved ResourceManager address, beginning registration
4557 [flink-akka.actor.default-dispatcher-2] INFO  org.apache.flink.runtime.jobmaster.JobMaster  - Registration at ResourceManager attempt 1 (timeout=100ms)
4565 [flink-akka.actor.default-dispatcher-3] DEBUG org.apache.flink.runtime.jobmaster.JobMaster  - Trigger heartbeat request.
4594 [flink-akka.actor.default-dispatcher-3] DEBUG org.apache.flink.runtime.jobmaster.JobMaster  - Trigger heartbeat request.
into 5b6d296cfdedc261c4bbd525aea971b3, f781d92dbe44f52505eff1b144e45bb6
4619 [flink-akka.actor.default-dispatcher-3] INFO  org.apache.flink.runtime.jobmaster.JobMaster  - JobManager successfully registered at ResourceManager, leader id: 5421d487c36920c6ee4eb0136801.
4629 [flink-akka.actor.default-dispatcher-3] DEBUG org.apache.flink.runtime.jobmaster.JobMaster  - Trigger heartbeat request.
4656 [flink-akka.actor.default-dispatcher-4] DEBUG org.apache.flink.runtime.jobmaster.JobMaster  - Trigger heartbeat request.
4657 [flink-akka.actor.default-dispatcher-4] INFO  org.apache.flink.runtime.jobmaster.JobMaster  - The heartbeat of ResourceManager with id 4cce21c1eac0cad2675e364a1d0f41c1 timed out.
4668 [flink-akka.actor.default-dispatcher-4] DEBUG org.apache.flink.runtime.jobmaster.JobMaster  - Close ResourceManager connection 4cce21c1eac0cad2675e364a1d0f41c1.
org.apache.flink.runtime.jobmaster.JobMasterException: The heartbeat of ResourceManager with id 4cce21c1eac0cad2675e364a1d0f41c1 timed out. at 

[GitHub] [flink] flinkbot edited a comment on issue #10734: [FLINK-15386] Fix logic error in SingleJobSubmittedJobGraphStore.putJobGraph and add unit tests

2019-12-31 Thread GitBox
flinkbot edited a comment on issue #10734: [FLINK-15386] Fix logic error in 
SingleJobSubmittedJobGraphStore.putJobGraph and add unit tests
URL: https://github.com/apache/flink/pull/10734#issuecomment-569977123
 
 
   
   ## CI report:
   
   * 396aa2c7073c2cc4cf644eed43bda5f8436fda8d Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142769087) 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10734: [FLINK-15386] Fix logic error in SingleJobSubmittedJobGraphStore.putJobGraph and add unit tests

2019-12-31 Thread GitBox
flinkbot edited a comment on issue #10734: [FLINK-15386] Fix logic error in 
SingleJobSubmittedJobGraphStore.putJobGraph and add unit tests
URL: https://github.com/apache/flink/pull/10734#issuecomment-569977123
 
 
   
   ## CI report:
   
   * 396aa2c7073c2cc4cf644eed43bda5f8436fda8d Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/142769087) 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot commented on issue #10734: [FLINK-15386] Fix logic error in SingleJobSubmittedJobGraphStore.putJobGraph and add unit tests

2019-12-31 Thread GitBox
flinkbot commented on issue #10734: [FLINK-15386] Fix logic error in 
SingleJobSubmittedJobGraphStore.putJobGraph and add unit tests
URL: https://github.com/apache/flink/pull/10734#issuecomment-569977123
 
 
   
   ## CI report:
   
   * 396aa2c7073c2cc4cf644eed43bda5f8436fda8d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot commented on issue #10734: [FLINK-15386] Fix logic error in SingleJobSubmittedJobGraphStore.putJobGraph and add unit tests

2019-12-31 Thread GitBox
flinkbot commented on issue #10734: [FLINK-15386] Fix logic error in 
SingleJobSubmittedJobGraphStore.putJobGraph and add unit tests
URL: https://github.com/apache/flink/pull/10734#issuecomment-569973700
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 396aa2c7073c2cc4cf644eed43bda5f8436fda8d (Tue Dec 31 
18:50:45 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-15386).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Updated] (FLINK-15386) SingleJobSubmittedJobGraphStore.putJobGraph has a logic error

2019-12-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-15386:
---
Labels: pull-request-available  (was: )

> SingleJobSubmittedJobGraphStore.putJobGraph has a logic error
> -
>
> Key: FLINK-15386
> URL: https://issues.apache.org/jira/browse/FLINK-15386
> Project: Flink
>  Issue Type: Bug
>Reporter: Ethan Li
>Priority: Minor
>  Labels: pull-request-available
>
> https://github.com/apache/flink/blob/release-1.9/flink-runtime/src/main/java/org/apache/flink/runtime/dispatcher/SingleJobSubmittedJobGraphStore.java#L61-L66
> {code:java}
>   @Override
>   public void putJobGraph(SubmittedJobGraph jobGraph) throws Exception {
>   if (!jobGraph.getJobId().equals(jobGraph.getJobId())) { //this 
> always returns false.
>   throw new FlinkException("Cannot put additional jobs 
> into this submitted job graph store.");
>   }
>   }
> {code}
> The code has been there since 1.5 but is fixed in the master branch (1.10). It's 
> also better to add a unit test for this.
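For illustration, here is a minimal, self-contained sketch of the corrected guard. The class names mirror Flink's but are simplified stand-ins, not the actual runtime classes: the store remembers the job id it was created for and rejects any graph with a different id, instead of comparing the incoming id with itself.

```java
import java.util.Objects;

// Simplified stand-in for Flink's SubmittedJobGraph (illustration only).
class SubmittedJobGraphSketch {
    private final String jobId;
    SubmittedJobGraphSketch(String jobId) { this.jobId = jobId; }
    String getJobId() { return jobId; }
}

// Sketch of the fixed store: compare against the stored id, not the argument's own id.
class SingleJobStoreSketch {
    private final String expectedJobId; // the single job this store may hold

    SingleJobStoreSketch(String expectedJobId) {
        this.expectedJobId = expectedJobId;
    }

    void putJobGraph(SubmittedJobGraphSketch jobGraph) {
        // The buggy version checked !jobGraph.getJobId().equals(jobGraph.getJobId()),
        // which compares the id with itself and therefore never fires.
        if (!Objects.equals(expectedJobId, jobGraph.getJobId())) {
            throw new IllegalStateException(
                "Cannot put additional jobs into this submitted job graph store.");
        }
    }
}
```

A unit test along the lines the PR describes would put a graph with the expected id (accepted) and one with a different id (rejected with an exception).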



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] Ethanlm opened a new pull request #10734: [FLINK-15386] Fix logic error in SingleJobSubmittedJobGraphStore.putJobGraph and add unit tests

2019-12-31 Thread GitBox
Ethanlm opened a new pull request #10734: [FLINK-15386] Fix logic error in 
SingleJobSubmittedJobGraphStore.putJobGraph and add unit tests
URL: https://github.com/apache/flink/pull/10734
 
 
   ## What is the purpose of the change
   
   This pull request fixes a minor error in the 
`SingleJobSubmittedJobGraphStore.putJobGraph` method.
   
   ## Brief change log
   
 - *Fix SingleJobSubmittedJobGraphStore putJobGraph function*
 - *Add unit tests for SingleJobSubmittedJobGraphStore*
   
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   
 - *Added SingleJobSubmittedJobGraphStoreTest that validates `putJobGraph` 
throws `FlinkException` when the input is a different jobGraph*
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   




[jira] [Assigned] (FLINK-15451) TaskManagerProcessFailureBatchRecoveryITCase.testTaskManagerProcessFailure failed on azure

2019-12-31 Thread Rong Rong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rong Rong reassigned FLINK-15451:
-

Assignee: (was: Rong Rong)

> TaskManagerProcessFailureBatchRecoveryITCase.testTaskManagerProcessFailure 
> failed on azure
> --
>
> Key: FLINK-15451
> URL: https://issues.apache.org/jira/browse/FLINK-15451
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.9.1
>Reporter: Congxian Qiu(klion26)
>Priority: Major
>
> 2019-12-31T02:43:39.4766254Z [ERROR] Tests run: 2, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 42.801 s <<< FAILURE! - in 
> org.apache.flink.test.recovery.TaskManagerProcessFailureBatchRecoveryITCase 
> 2019-12-31T02:43:39.4768373Z [ERROR] 
> testTaskManagerProcessFailure[0](org.apache.flink.test.recovery.TaskManagerProcessFailureBatchRecoveryITCase)
>  Time elapsed: 2.699 s <<< ERROR! 2019-12-31T02:43:39.4768834Z 
> java.net.BindException: Address already in use 2019-12-31T02:43:39.4769096Z
>  
>  
> [https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_apis/build/builds/3995/logs/15]





[jira] [Assigned] (FLINK-15451) TaskManagerProcessFailureBatchRecoveryITCase.testTaskManagerProcessFailure failed on azure

2019-12-31 Thread Rong Rong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rong Rong reassigned FLINK-15451:
-

Assignee: Rong Rong

> TaskManagerProcessFailureBatchRecoveryITCase.testTaskManagerProcessFailure 
> failed on azure
> --
>
> Key: FLINK-15451
> URL: https://issues.apache.org/jira/browse/FLINK-15451
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.9.1
>Reporter: Congxian Qiu(klion26)
>Assignee: Rong Rong
>Priority: Major
>
> 2019-12-31T02:43:39.4766254Z [ERROR] Tests run: 2, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 42.801 s <<< FAILURE! - in 
> org.apache.flink.test.recovery.TaskManagerProcessFailureBatchRecoveryITCase 
> 2019-12-31T02:43:39.4768373Z [ERROR] 
> testTaskManagerProcessFailure[0](org.apache.flink.test.recovery.TaskManagerProcessFailureBatchRecoveryITCase)
>  Time elapsed: 2.699 s <<< ERROR! 2019-12-31T02:43:39.4768834Z 
> java.net.BindException: Address already in use 2019-12-31T02:43:39.4769096Z
>  
>  
> [https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_apis/build/builds/3995/logs/15]





[GitHub] [flink] walterddr commented on a change in pull request #9373: [FLINK-13596][ml] Add two utils for Table transformations

2019-12-31 Thread GitBox
walterddr commented on a change in pull request #9373: [FLINK-13596][ml] Add 
two utils for Table transformations
URL: https://github.com/apache/flink/pull/9373#discussion_r362246055
 
 

 ##
 File path: 
flink-ml-parent/flink-ml-lib/src/test/java/org/apache/flink/ml/common/utils/DataStreamConversionUtilTest.java
 ##
 @@ -0,0 +1,101 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.flink.ml.common.utils;
+
+import org.apache.flink.api.common.functions.MapFunction;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.ml.common.MLEnvironmentFactory;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.table.api.Table;
+import org.apache.flink.table.api.TableSchema;
+import org.apache.flink.table.api.ValidationException;
+import org.apache.flink.types.Row;
+
+import org.junit.Assert;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.ExpectedException;
+
+/**
+ * Unit Test for DataStreamConversionUtil.
+ */
+public class DataStreamConversionUtilTest {
+   @Rule
+   public ExpectedException thrown = ExpectedException.none();
+
+   @Test
+   public void test() throws Exception {
+   StreamExecutionEnvironment env = 
MLEnvironmentFactory.getDefault().getStreamExecutionEnvironment();
+
+   DataStream<Row> input = env.fromElements(Row.of("a"));
+
+   Table table1 = 
DataStreamConversionUtil.toTable(MLEnvironmentFactory.DEFAULT_ML_ENVIRONMENT_ID,
 input, new String[]{"word"});
+   Assert.assertEquals(
+   new TableSchema(new String[]{"word"}, new 
TypeInformation[]{TypeInformation.of(String.class)}),
+   table1.getSchema()
+   );
+
+   input = input.map(new GenericTypeMap());
 
 Review comment:
   Is this conversion supposed to remove the "string" type within the content: 
`Row.of("a")`?




[GitHub] [flink] walterddr commented on a change in pull request #9373: [FLINK-13596][ml] Add two utils for Table transformations

2019-12-31 Thread GitBox
walterddr commented on a change in pull request #9373: [FLINK-13596][ml] Add 
two utils for Table transformations
URL: https://github.com/apache/flink/pull/9373#discussion_r362246483
 
 

 ##
 File path: 
flink-ml-parent/flink-ml-lib/src/test/java/org/apache/flink/ml/common/utils/DataStreamConversionUtilTest.java
 ##
 @@ -0,0 +1,101 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.flink.ml.common.utils;
+
+import org.apache.flink.api.common.functions.MapFunction;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.ml.common.MLEnvironmentFactory;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.table.api.Table;
+import org.apache.flink.table.api.TableSchema;
+import org.apache.flink.table.api.ValidationException;
+import org.apache.flink.types.Row;
+
+import org.junit.Assert;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.ExpectedException;
+
+/**
+ * Unit Test for DataStreamConversionUtil.
+ */
+public class DataStreamConversionUtilTest {
+   @Rule
+   public ExpectedException thrown = ExpectedException.none();
+
+   @Test
+   public void test() throws Exception {
+   StreamExecutionEnvironment env = 
MLEnvironmentFactory.getDefault().getStreamExecutionEnvironment();
+
+   DataStream<Row> input = env.fromElements(Row.of("a"));
+
+   Table table1 = 
DataStreamConversionUtil.toTable(MLEnvironmentFactory.DEFAULT_ML_ENVIRONMENT_ID,
 input, new String[]{"word"});
+   Assert.assertEquals(
+   new TableSchema(new String[]{"word"}, new 
TypeInformation[]{TypeInformation.of(String.class)}),
+   table1.getSchema()
+   );
+
+   input = input.map(new GenericTypeMap());
+
+   Table table2 = DataStreamConversionUtil.toTable(
+   MLEnvironmentFactory.DEFAULT_ML_ENVIRONMENT_ID,
+   input,
+   new String[]{"word"},
+   new TypeInformation[]{TypeInformation.of(Integer.class)}
+   );
+
+   Assert.assertEquals(
+   new TableSchema(new String[]{"word"}, new 
TypeInformation[]{TypeInformation.of(Integer.class)}),
+   table2.getSchema()
+   );
+
+   Table tableFromDataStreamWithTableSchema = 
DataStreamConversionUtil.toTable(
+   MLEnvironmentFactory.DEFAULT_ML_ENVIRONMENT_ID,
+   input,
 
 Review comment:
   Assuming this is used to test passing in a TableSchema instead of colNames 
and colTypes, should we do another `GenericTypeMap()` conversion, like this?
   ```suggestion
input.map(new GenericTypeMap()),
   ```




[GitHub] [flink] walterddr commented on a change in pull request #9373: [FLINK-13596][ml] Add two utils for Table transformations

2019-12-31 Thread GitBox
walterddr commented on a change in pull request #9373: [FLINK-13596][ml] Add 
two utils for Table transformations
URL: https://github.com/apache/flink/pull/9373#discussion_r362246155
 
 

 ##
 File path: 
flink-ml-parent/flink-ml-lib/src/test/java/org/apache/flink/ml/common/utils/DataStreamConversionUtilTest.java
 ##
 @@ -0,0 +1,101 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.flink.ml.common.utils;
+
+import org.apache.flink.api.common.functions.MapFunction;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.ml.common.MLEnvironmentFactory;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.table.api.Table;
+import org.apache.flink.table.api.TableSchema;
+import org.apache.flink.table.api.ValidationException;
+import org.apache.flink.types.Row;
+
+import org.junit.Assert;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.ExpectedException;
+
+/**
+ * Unit Test for DataStreamConversionUtil.
+ */
+public class DataStreamConversionUtilTest {
+   @Rule
+   public ExpectedException thrown = ExpectedException.none();
+
+   @Test
+   public void test() throws Exception {
+   StreamExecutionEnvironment env = 
MLEnvironmentFactory.getDefault().getStreamExecutionEnvironment();
+
+   DataStream<Row> input = env.fromElements(Row.of("a"));
+
+   Table table1 = 
DataStreamConversionUtil.toTable(MLEnvironmentFactory.DEFAULT_ML_ENVIRONMENT_ID,
 input, new String[]{"word"});
+   Assert.assertEquals(
+   new TableSchema(new String[]{"word"}, new 
TypeInformation[]{TypeInformation.of(String.class)}),
+   table1.getSchema()
+   );
+
+   input = input.map(new GenericTypeMap());
+
+   Table table2 = DataStreamConversionUtil.toTable(
+   MLEnvironmentFactory.DEFAULT_ML_ENVIRONMENT_ID,
+   input,
+   new String[]{"word"},
+   new TypeInformation[]{TypeInformation.of(Integer.class)}
 
 Review comment:
   `Integer.class` obviously cannot be cast to `String.class`, yet this 
`toTable` call does not throw any exception. Is this expected?




[GitHub] [flink] walterddr commented on a change in pull request #9373: [FLINK-13596][ml] Add two utils for Table transformations

2019-12-31 Thread GitBox
walterddr commented on a change in pull request #9373: [FLINK-13596][ml] Add 
two utils for Table transformations
URL: https://github.com/apache/flink/pull/9373#discussion_r362246988
 
 

 ##
 File path: 
flink-ml-parent/flink-ml-lib/src/test/java/org/apache/flink/ml/common/utils/DataStreamConversionUtilTest.java
 ##
 @@ -0,0 +1,101 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.flink.ml.common.utils;
+
+import org.apache.flink.api.common.functions.MapFunction;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.ml.common.MLEnvironmentFactory;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.table.api.Table;
+import org.apache.flink.table.api.TableSchema;
+import org.apache.flink.table.api.ValidationException;
+import org.apache.flink.types.Row;
+
+import org.junit.Assert;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.ExpectedException;
+
+/**
+ * Unit Test for DataStreamConversionUtil.
+ */
+public class DataStreamConversionUtilTest {
+   @Rule
+   public ExpectedException thrown = ExpectedException.none();
+
+   @Test
+   public void test() throws Exception {
+   StreamExecutionEnvironment env = 
MLEnvironmentFactory.getDefault().getStreamExecutionEnvironment();
+
+   DataStream<Row> input = env.fromElements(Row.of("a"));
+
+   Table table1 = 
DataStreamConversionUtil.toTable(MLEnvironmentFactory.DEFAULT_ML_ENVIRONMENT_ID,
 input, new String[]{"word"});
+   Assert.assertEquals(
+   new TableSchema(new String[]{"word"}, new 
TypeInformation[]{TypeInformation.of(String.class)}),
+   table1.getSchema()
+   );
+
+   input = input.map(new GenericTypeMap());
+
+   Table table2 = DataStreamConversionUtil.toTable(
+   MLEnvironmentFactory.DEFAULT_ML_ENVIRONMENT_ID,
+   input,
+   new String[]{"word"},
+   new TypeInformation[]{TypeInformation.of(Integer.class)}
+   );
+
+   Assert.assertEquals(
+   new TableSchema(new String[]{"word"}, new 
TypeInformation[]{TypeInformation.of(Integer.class)}),
+   table2.getSchema()
+   );
+
+   Table tableFromDataStreamWithTableSchema = 
DataStreamConversionUtil.toTable(
+   MLEnvironmentFactory.DEFAULT_ML_ENVIRONMENT_ID,
+   input,
+   new TableSchema(
+   new String[]{"word"},
+   new 
TypeInformation[]{TypeInformation.of(Integer.class)}
+   )
+   );
+
+   Assert.assertEquals(
+   new TableSchema(new String[]{"word"}, new 
TypeInformation[]{TypeInformation.of(Integer.class)}),
+   tableFromDataStreamWithTableSchema.getSchema()
+   );
+
+   thrown.expect(ValidationException.class);
+   
DataStreamConversionUtil.toTable(MLEnvironmentFactory.DEFAULT_ML_ENVIRONMENT_ID,
 input, new String[]{"f0"});
 
 Review comment:
   Is this exception expected to be thrown because "f0" is not found? The 
exception actually thrown here is:
   ```
   An input of GenericTypeInfo cannot be converted to Table. Please 
specify the type of the input with a RowTypeInfo.
   ```
   rather than a not-found error.




[GitHub] [flink] walterddr commented on a change in pull request #9373: [FLINK-13596][ml] Add two utils for Table transformations

2019-12-31 Thread GitBox
walterddr commented on a change in pull request #9373: [FLINK-13596][ml] Add 
two utils for Table transformations
URL: https://github.com/apache/flink/pull/9373#discussion_r362237999
 
 

 ##
 File path: 
flink-ml-parent/flink-ml-lib/src/main/java/org/apache/flink/ml/common/utils/DataSetConversionUtil.java
 ##
 @@ -0,0 +1,172 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.flink.ml.common.utils;
+
+import org.apache.flink.api.common.functions.MapFunction;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.java.DataSet;
+import org.apache.flink.api.java.operators.SingleInputUdfOperator;
+import org.apache.flink.api.java.operators.TwoInputUdfOperator;
+import org.apache.flink.api.java.typeutils.RowTypeInfo;
+import org.apache.flink.ml.common.MLEnvironment;
+import org.apache.flink.ml.common.MLEnvironmentFactory;
+import org.apache.flink.table.api.Table;
+import org.apache.flink.table.api.TableSchema;
+import org.apache.flink.table.api.ValidationException;
+import org.apache.flink.types.Row;
+
+/**
+ * Provide functions of conversions between DataSet and Table.
+ */
+public class DataSetConversionUtil {
+   /**
+* Convert the given Table to {@link DataSet}<{@link Row}>.
+*
+* @param sessionId the sessionId of {@link MLEnvironmentFactory}
+* @param table the Table to convert.
+* @return the converted DataSet.
+*/
+   public static DataSet<Row> fromTable(Long sessionId, Table table) {
+   return MLEnvironmentFactory
+   .get(sessionId)
+   .getBatchTableEnvironment()
+   .toDataSet(table, Row.class);
+   }
+
+   /**
+* Convert the given DataSet into a Table with specified TableSchema.
+*
+* @param sessionId the sessionId of {@link MLEnvironmentFactory}
+* @param data   the DataSet to convert.
+* @param schema the specified TableSchema.
+* @return the converted Table.
+*/
+   public static Table toTable(Long sessionId, DataSet<Row> data, 
TableSchema schema) {
+   return toTable(sessionId, data, schema.getFieldNames(), 
schema.getFieldTypes());
+   }
+
+   /**
+* Convert the given DataSet into a Table with specified colNames and 
colTypes.
+*
+* @param sessionId the sessionId of {@link 
MLEnvironmentFactory}.
+* @param data the DataSet to convert.
+* @param colNames the specified colNames.
+* @param colTypes the specified colTypes. This variable is used only 
when the
+* DataSet is produced by a function and Flink cannot 
determine
+* automatically what the produced type is.
+* @return the converted Table.
+*/
+   public static Table toTable(Long sessionId, DataSet<Row> data, 
String[] colNames, TypeInformation<?>[] colTypes) {
+   return toTable(MLEnvironmentFactory.get(sessionId), data, 
colNames, colTypes);
+   }
+
+   /**
+* Convert the given DataSet into a Table with specified colNames.
+*
+* @param sessionId the sessionId of {@link 
MLEnvironmentFactory}.
+* @param data the DataSet to convert.
+* @param colNames the specified colNames.
+* @return the converted Table.
+*/
+   public static Table toTable(Long sessionId, DataSet<Row> data, 
String[] colNames) {
+   return toTable(MLEnvironmentFactory.get(sessionId), data, 
colNames);
+   }
+
+   /**
+* Convert the given DataSet into a Table with specified colNames and 
colTypes.
+*
+* @param session the MLEnvironment using to convert DataSet to Table.
+* @param data the DataSet to convert.
+* @param colNames the specified colNames.
+* @param colTypes the specified colTypes. This variable is used only 
when the
+* DataSet is produced by a function and Flink cannot 
determine
+* automatically what the produced type is.
+* @return the converted Table.
+*/
+   public 

[jira] [Comment Edited] (FLINK-15416) Add Retry Mechanism for PartitionRequestClientFactory.ConnectingChannel

2019-12-31 Thread Piotr Nowojski (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17006134#comment-17006134
 ] 

Piotr Nowojski edited comment on FLINK-15416 at 12/31/19 3:31 PM:
--

I'm not sure. I think the motivating example that you provided is not very 
compelling. After all, the proper fix is/was to just restart/fix the network, and 
setting the number of retries is pretty arbitrary (why 2? why not 3?) and can 
still fail.

On the other hand, if we wanted to handle some more generic networking 
failures, we would need to handle a lot more things, like lost connections, 
timeouts, etc., not only failed connections, which could also impact 
restarting/recovery times.


was (Author: pnowojski):
I'm not sure. I think the motivating example that you provided is not very 
compelling. After all the proper fix is/was to just restart/fix the network and 
setting the number of retries is pretty arbitrarily and still can fail.

On the other hand, If we wanted to handle some more generic networking 
failures, we would need to handle a lot more things, like connection lost, 
timeouts, etc, not only connection failed, which also could impact 
restarting/recovery times.

> Add Retry Mechanism for PartitionRequestClientFactory.ConnectingChannel
> ---
>
> Key: FLINK-15416
> URL: https://issues.apache.org/jira/browse/FLINK-15416
> Project: Flink
>  Issue Type: Wish
>  Components: Runtime / Network
>Affects Versions: 1.10.0
>Reporter: Zhenqiu Huang
>Priority: Major
>
> We run a Flink job with 256 TMs in production. The job internally has keyBy 
> logic, so it builds 256 * 256 communication channels. An outage happened 
> when a chip-internal link of one of the network switches connecting these 
> machines broke. During the outage, Flink couldn't restart successfully, as 
> there was always an exception like "Connecting the channel 
> failed: Connecting to remote task manager + '/10.14.139.6:41300' has 
> failed. This might indicate that the remote task manager has been lost." 
> After a deep investigation with the network infrastructure team, we found there 
> are 6 switches connecting these machines. Each switch has 32 physical 
> links. Every socket is round-robin assigned to one of the links for load 
> balancing. Thus, on average 256 * 256 / (6 * 32 * 2) = 170 
> channels will be assigned to the broken link. The issue lasted for 4 hours 
> until we found the broken link and restarted the problematic switch. 
> Given this, we found that retrying channel creation would help to resolve 
> this issue. For our networking topology, we can set the retry count to 2. As 170 / (132 
> * 132) < 1, after retrying twice no channel out of the 170 will be 
> assigned to the broken link in the average case.
> I think it is a valuable fix for this kind of partial network partition.
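The proposal amounts to bounding the number of connection attempts before giving up. A minimal, generic sketch of such bounded retry (the names `RetryingConnector`, `withRetries`, and `maxRetries` are illustrative, not Flink's actual `PartitionRequestClientFactory` API):

```java
import java.util.concurrent.Callable;

// Generic bounded-retry helper: try up to (maxRetries + 1) times, rethrowing
// the last failure once the retry budget is exhausted.
class RetryingConnector {
    static <T> T withRetries(Callable<T> connect, int maxRetries) throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return connect.call();
            } catch (Exception e) {
                last = e; // e.g. a connect attempt that landed on the broken link
            }
        }
        throw last; // all attempts exhausted
    }
}
```

With round-robin link assignment, each retry re-rolls the link choice, which is why, by the averages computed above, a small retry budget makes it unlikely that every attempt lands on the single broken link.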





[jira] [Commented] (FLINK-15416) Add Retry Mechanism for PartitionRequestClientFactory.ConnectingChannel

2019-12-31 Thread Piotr Nowojski (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17006134#comment-17006134
 ] 

Piotr Nowojski commented on FLINK-15416:


I'm not sure. I think the motivating example that you provided is not very 
compelling. After all, the proper fix is/was to just restart/fix the network, and 
setting the number of retries is pretty arbitrary and can still fail.

On the other hand, if we wanted to handle some more generic networking 
failures, we would need to handle a lot more things, like lost connections, 
timeouts, etc., not only failed connections, which could also impact 
restarting/recovery times.

> Add Retry Mechanism for PartitionRequestClientFactory.ConnectingChannel
> ---
>
> Key: FLINK-15416
> URL: https://issues.apache.org/jira/browse/FLINK-15416
> Project: Flink
>  Issue Type: Wish
>  Components: Runtime / Network
>Affects Versions: 1.10.0
>Reporter: Zhenqiu Huang
>Priority: Major
>
> We run a Flink job with 256 TMs in production. The job internally has keyBy 
> logic, so it builds 256 * 256 communication channels. An outage happened 
> when an internal chip link broke in one of the network switches connecting 
> these machines. During the outage, the job couldn't restart 
> successfully, as there was always an exception like "Connecting the channel 
> failed: Connecting to remote task manager + '/10.14.139.6:41300' has 
> failed. This might indicate that the remote task manager has been lost." 
> After a deep investigation with the network infrastructure team, we found there 
> are 6 switches connecting these machines. Each switch has 32 physical 
> links, and every socket is round-robin assigned to one of the links for load 
> balancing. Thus, on average 256 * 256 / (6 * 32 * 2) ≈ 170 
> channels will be assigned to the broken link. The issue lasted for 4 hours, 
> until we found the broken link and restarted the problematic switch. 
> Given this, we found that retrying channel creation would help to resolve 
> this issue. For our networking topology, we can set the retry count to 2. As 170 / (132 
> * 132) < 1, after retrying twice, on average no channel of the 170 will be 
> assigned to the broken link.
> I think it is a valuable fix for this kind of partial network partition.
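The reporter's arithmetic can be sanity-checked with a short back-of-envelope computation. The figures (256 TMs, 6 switches, 32 links each, and the factor of 2) are taken from the report; the class name and the retry model below are only an illustration of that estimate, not Flink code.

```java
// Back-of-envelope check of the collision estimate quoted above.
// All figures come from the report; this is an illustration, not Flink code.
public class RetryEstimate {
    public static void main(String[] args) {
        long channels = 256L * 256L;        // full keyBy mesh between 256 TMs
        double slots = 6 * 32 * 2;          // link "slots" per the report's formula
        double firstHit = channels / slots; // expected channels on the one broken link
        // If each retry independently re-assigns the socket round-robin, the
        // expected number still on the broken link after two retries is:
        double afterTwoRetries = firstHit / (slots * slots);
        System.out.printf("first attempt: %.1f, after two retries: %.6f%n",
                firstHit, afterTwoRetries);
        // afterTwoRetries is far below 1: on average no channel keeps
        // hitting the broken link once two retries are allowed.
    }
}
```

Under this model the first attempt lands roughly 170 channels on the broken link, and the expected survivors after two retries are well below one, which is the claim the report makes.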



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15152) Job running without periodic checkpoint for stop failed at the beginning

2019-12-31 Thread Piotr Nowojski (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17006109#comment-17006109
 ] 

Piotr Nowojski commented on FLINK-15152:


Do you mean to duplicate the logic from 
{{org.apache.flink.runtime.scheduler.SchedulerBase#triggerSavepoint}}? Maybe 
that's a simple fix for this. However, I'm not sure if restarting the checkpoint 
coordinator is the best idea, as it might introduce new issues/extra 
complexity due to the more complicated state transitions of the 
{{CheckpointCoordinator}}.

I'm wondering whether the {{CheckpointCoordinator}} calls {{#stopCheckpointScheduler}} 
and {{#triggerSynchronousSavepoint}} should be atomic: either both operations 
succeed and the checkpoint coordinator shuts itself down, or neither takes effect.
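That atomicity idea can be sketched with a small self-contained model. The boolean flag and the method names below only mirror the calls quoted in this thread; this is an illustration of the roll-back-on-failure behaviour, not the actual {{CheckpointCoordinator}} code.

```java
import java.util.function.Supplier;

// Toy model of making stopCheckpointScheduler + triggerSynchronousSavepoint
// behave atomically: if triggering fails (e.g. tasks not yet RUNNING), the
// scheduler is restarted instead of staying silently stopped.
// This is an illustration, not Flink's actual CheckpointCoordinator.
public class AtomicStopWithSavepoint {
    private boolean schedulerRunning = true;

    public String stopWithSavepoint(Supplier<String> triggerSavepoint) {
        schedulerRunning = false;          // stopCheckpointScheduler()
        try {
            return triggerSavepoint.get(); // triggerSynchronousSavepoint()
        } catch (RuntimeException e) {
            schedulerRunning = true;       // roll back so periodic checkpoints resume
            throw e;
        }
    }

    public boolean isSchedulerRunning() {
        return schedulerRunning;
    }

    public static void main(String[] args) {
        AtomicStopWithSavepoint coordinator = new AtomicStopWithSavepoint();
        try {
            coordinator.stopWithSavepoint(() -> {
                throw new RuntimeException("Checkpoint triggering task is not in state RUNNING");
            });
        } catch (RuntimeException expected) {
            // the stop failed, but the scheduler was restored
        }
        System.out.println("scheduler running: " + coordinator.isSchedulerRunning());
    }
}
```

With the roll-back in place, a failed trigger leaves the scheduler running again; without it, the model reproduces the behaviour described in this bug, where periodic checkpointing stays stopped.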

> Job running without periodic checkpoint for stop failed at the beginning
> 
>
> Key: FLINK-15152
> URL: https://issues.apache.org/jira/browse/FLINK-15152
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.9.1
>Reporter: Feng Jiajie
>Priority: Critical
>  Labels: checkpoint, scheduler
>
> I have a streaming job configured with periodic checkpointing, but after one 
> week of running, I found there wasn't any checkpoint file.
> h2. Reproduce the problem:
> 1. Job was submitted to YARN:
> {code:java}
> bin/flink run -m yarn-cluster -p 1 -yjm 1024m -ytm 4096m 
> flink-example-1.0-SNAPSHOT.jar{code}
> 2. Then immediately, before all the tasks switched to RUNNING (a matter of seconds), 
> I (actually a job control script) sent a "stop with savepoint" command via the 
> Flink CLI:
> {code:java}
> bin/flink stop -yid application_1575872737452_0019 
> f75ca6f457828427ed3d413031b92722 -p file:///tmp/some_dir
> {code}
> log in jobmanager.log:
> {code:java}
> 2019-12-09 17:56:56,512 INFO  
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Checkpoint 
> triggering task Source: Socket Stream -> Map (1/1) of job 
> f75ca6f457828427ed3d413031b92722 is not in state RUNNING but SCHEDULED 
> instead. Aborting checkpoint.
> {code}
> Then the job tasks (TaskManagers) *continue to run normally without* checkpoints.
> h2. The cause of the problem:
> 1. "stop with savepoint" command call the code 
> stopCheckpointScheduler(org/apache/flink/runtime/scheduler/LegacyScheduler.java:612)
>  and then triggerSynchronousSavepoint:
> {code:java}
> // we stop the checkpoint coordinator so that we are guaranteed
> // to have only the data of the synchronous savepoint committed.
> // in case of failure, and if the job restarts, the coordinator
> // will be restarted by the CheckpointCoordinatorDeActivator.
> checkpointCoordinator.stopCheckpointScheduler();{code}
> 2. but "before all the task switch to RUNNING", triggerSynchronousSavepoint 
> failed at org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java:509
> {code:java}
> LOG.info("Checkpoint triggering task {} of job {} is not in state {} but {} 
> instead. Aborting checkpoint.",
>   tasksToTrigger[i].getTaskNameWithSubtaskIndex(),
>   job,
>   ExecutionState.RUNNING,
>   ee.getState());
> throw new 
> CheckpointException(CheckpointFailureReason.NOT_ALL_REQUIRED_TASKS_RUNNING);{code}
> 3. finally, "stop with savepoint" failed, with 
> "checkpointCoordinator.stopCheckpointScheduler()" but without the termination 
> of the job. The job is still running without periodically checkpoint. 
>  
> sample code for reproduce:
> {code:java}
> public class StreamingJob {
>   private static StateBackend makeRocksdbBackend() throws IOException {
> RocksDBStateBackend rocksdbBackend = new 
> RocksDBStateBackend("file:///tmp/aaa");
> rocksdbBackend.enableTtlCompactionFilter();
> 
> rocksdbBackend.setPredefinedOptions(PredefinedOptions.SPINNING_DISK_OPTIMIZED);
> return rocksdbBackend;
>   }
>   public static void main(String[] args) throws Exception {
> // set up the streaming execution environment
> final StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
> // 10 sec
> env.enableCheckpointing(10_000L, CheckpointingMode.AT_LEAST_ONCE);
> env.setStateBackend(makeRocksdbBackend());
> env.setRestartStrategy(RestartStrategies.noRestart());
> CheckpointConfig checkpointConfig = env.getCheckpointConfig();
> checkpointConfig.enableExternalizedCheckpoints(
> 
> CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);
> checkpointConfig.setFailOnCheckpointingErrors(true);
> DataStream<String> text = env.socketTextStream("127.0.0.1", 8912, "\n");
> text.map(new MapFunction<String, Tuple2<Long, Long>>() {
>   @Override
>   public Tuple2<Long, Long> map(String s) {
> String[] s1 = s.split(" ");
> return Tuple2.of(Long.parseLong(s1[0]), Long.parseLong(s1[1]));
>   }
> 

[GitHub] [flink] flinkbot edited a comment on issue #10625: [FLINK-15259][hive] HiveInspector.toInspectors() should convert Flink…

2019-12-31 Thread GitBox
flinkbot edited a comment on issue #10625: [FLINK-15259][hive] 
HiveInspector.toInspectors() should convert Flink…
URL: https://github.com/apache/flink/pull/10625#issuecomment-567374884
 
 
   
   ## CI report:
   
   * b200938f6abf4bac5e929db791c52289d110054e Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/141714552) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3751)
 
   * de8b409c144285a3d75874e4863a2d1f7fc4336a Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/141863465) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3797)
 
   * d021a016adf64118b785642ffbe1fbe45cb9e15a Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142456762) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3957)
 
   * c95faa623e5e0ea94dcb82df536fbe37e724baac Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142639483) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3983)
 
   * b5f020618f6d894a816de3fc7c0c53af295e7ce2 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142742105) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4018)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10726: [FLINK-15427][Statebackend][test] Check TTL test in test_stream_statettl.sh and skip the exception check

2019-12-31 Thread GitBox
flinkbot edited a comment on issue #10726: [FLINK-15427][Statebackend][test] 
Check TTL test in test_stream_statettl.sh and skip the exception check
URL: https://github.com/apache/flink/pull/10726#issuecomment-569852183
 
 
   
   ## CI report:
   
   * 461a27735c3956818ea691074ee7a80bc8c5351b Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142713534) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3995)
 
   * 4774fb7d466299f8edd58872d296462795da06a7 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142732180) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4013)
 
   * 2cde9109a4b6264a9a316085405282731ba3c02f Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142742118) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4019)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Updated] (FLINK-15451) TaskManagerProcessFailureBatchRecoveryITCase.testTaskManagerProcessFailure failed on azure

2019-12-31 Thread Congxian Qiu(klion26) (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Congxian Qiu(klion26) updated FLINK-15451:
--
Summary: 
TaskManagerProcessFailureBatchRecoveryITCase.testTaskManagerProcessFailure 
failed on azure  (was: 
TaskManagerProcessFailureBatchRecoveryITCase.testTaskManagerProcessFailure)

> TaskManagerProcessFailureBatchRecoveryITCase.testTaskManagerProcessFailure 
> failed on azure
> --
>
> Key: FLINK-15451
> URL: https://issues.apache.org/jira/browse/FLINK-15451
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.9.1
>Reporter: Congxian Qiu(klion26)
>Priority: Major
>
> 2019-12-31T02:43:39.4766254Z [ERROR] Tests run: 2, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 42.801 s <<< FAILURE! - in 
> org.apache.flink.test.recovery.TaskManagerProcessFailureBatchRecoveryITCase
> 2019-12-31T02:43:39.4768373Z [ERROR] 
> testTaskManagerProcessFailure[0](org.apache.flink.test.recovery.TaskManagerProcessFailureBatchRecoveryITCase)
>  Time elapsed: 2.699 s <<< ERROR!
> 2019-12-31T02:43:39.4768834Z java.net.BindException: Address already in use
> 2019-12-31T02:43:39.4769096Z
>  
> [https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_apis/build/builds/3995/logs/15]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-15451) TaskManagerProcessFailureBatchRecoveryITCase.testTaskManagerProcessFailure

2019-12-31 Thread Congxian Qiu(klion26) (Jira)
Congxian Qiu(klion26) created FLINK-15451:
-

 Summary: 
TaskManagerProcessFailureBatchRecoveryITCase.testTaskManagerProcessFailure
 Key: FLINK-15451
 URL: https://issues.apache.org/jira/browse/FLINK-15451
 Project: Flink
  Issue Type: Bug
  Components: Tests
Affects Versions: 1.9.1
Reporter: Congxian Qiu(klion26)


2019-12-31T02:43:39.4766254Z [ERROR] Tests run: 2, Failures: 0, Errors: 1, 
Skipped: 0, Time elapsed: 42.801 s <<< FAILURE! - in 
org.apache.flink.test.recovery.TaskManagerProcessFailureBatchRecoveryITCase
2019-12-31T02:43:39.4768373Z [ERROR] 
testTaskManagerProcessFailure[0](org.apache.flink.test.recovery.TaskManagerProcessFailureBatchRecoveryITCase)
 Time elapsed: 2.699 s <<< ERROR!
2019-12-31T02:43:39.4768834Z java.net.BindException: Address already in use
2019-12-31T02:43:39.4769096Z

[https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_apis/build/builds/3995/logs/15]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10625: [FLINK-15259][hive] HiveInspector.toInspectors() should convert Flink…

2019-12-31 Thread GitBox
flinkbot edited a comment on issue #10625: [FLINK-15259][hive] 
HiveInspector.toInspectors() should convert Flink…
URL: https://github.com/apache/flink/pull/10625#issuecomment-567374884
 
 
   
   ## CI report:
   
   * b200938f6abf4bac5e929db791c52289d110054e Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/141714552) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3751)
 
   * de8b409c144285a3d75874e4863a2d1f7fc4336a Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/141863465) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3797)
 
   * d021a016adf64118b785642ffbe1fbe45cb9e15a Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142456762) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3957)
 
   * c95faa623e5e0ea94dcb82df536fbe37e724baac Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142639483) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3983)
 
   * b5f020618f6d894a816de3fc7c0c53af295e7ce2 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/142742105) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4018)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10733: [FLINK-15446][table][docs] Improve "Connect to External Systems" documentation page

2019-12-31 Thread GitBox
flinkbot edited a comment on issue #10733: [FLINK-15446][table][docs] Improve 
"Connect to External Systems" documentation page
URL: https://github.com/apache/flink/pull/10733#issuecomment-569906121
 
 
   
   ## CI report:
   
   * bfdbbc6bdc8fbee2cd764f9cb389cdf34b128756 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142736226) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4015)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10632: [FLINK-15290][hive] Need a way to turn off vectorized orc reader for SQL CLI

2019-12-31 Thread GitBox
flinkbot edited a comment on issue #10632: [FLINK-15290][hive] Need a way to 
turn off vectorized orc reader for SQL CLI
URL: https://github.com/apache/flink/pull/10632#issuecomment-567480631
 
 
   
   ## CI report:
   
   * c4e30954298685c2e3acac237bd9d83484140ed9 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/141755289) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3765)
 
   * f1ac719f63aa28d92aff57e41e55182692dbcffc Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142646581) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3987)
 
   * 253c30ccbda71196e603a69e08ec664b3153eb7a Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142732161) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4011)
 
   * 0709e796ef4761b7543fb9835b46f4fb071b12b5 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142740137) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4016)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10726: [FLINK-15427][Statebackend][test] Check TTL test in test_stream_statettl.sh and skip the exception check

2019-12-31 Thread GitBox
flinkbot edited a comment on issue #10726: [FLINK-15427][Statebackend][test] 
Check TTL test in test_stream_statettl.sh and skip the exception check
URL: https://github.com/apache/flink/pull/10726#issuecomment-569852183
 
 
   
   ## CI report:
   
   * 461a27735c3956818ea691074ee7a80bc8c5351b Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142713534) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3995)
 
   * 4774fb7d466299f8edd58872d296462795da06a7 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142732180) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4013)
 
   * 2cde9109a4b6264a9a316085405282731ba3c02f Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/142742118) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4019)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10721: [FLINK-15429][hive] HiveObjectConversion implementations need to hand…

2019-12-31 Thread GitBox
flinkbot edited a comment on issue #10721: [FLINK-15429][hive] 
HiveObjectConversion implementations need to hand…
URL: https://github.com/apache/flink/pull/10721#issuecomment-569622936
 
 
   
   ## CI report:
   
   * 62027bf514e75e43953585ade0ed57453b9df2f5 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142626733) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3980)
 
   * 44d60e2a99f9e63e32294e231f405cfcaef2f698 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142641843) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3985)
 
   * 57bd2165744e96791981981cbdffddd4587fc461 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142740143) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4017)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10632: [FLINK-15290][hive] Need a way to turn off vectorized orc reader for SQL CLI

2019-12-31 Thread GitBox
flinkbot edited a comment on issue #10632: [FLINK-15290][hive] Need a way to 
turn off vectorized orc reader for SQL CLI
URL: https://github.com/apache/flink/pull/10632#issuecomment-567480631
 
 
   
   ## CI report:
   
   * c4e30954298685c2e3acac237bd9d83484140ed9 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/141755289) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3765)
 
   * f1ac719f63aa28d92aff57e41e55182692dbcffc Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142646581) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3987)
 
   * 253c30ccbda71196e603a69e08ec664b3153eb7a Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142732161) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4011)
 
   * 0709e796ef4761b7543fb9835b46f4fb071b12b5 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142740137) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4016)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10625: [FLINK-15259][hive] HiveInspector.toInspectors() should convert Flink…

2019-12-31 Thread GitBox
flinkbot edited a comment on issue #10625: [FLINK-15259][hive] 
HiveInspector.toInspectors() should convert Flink…
URL: https://github.com/apache/flink/pull/10625#issuecomment-567374884
 
 
   
   ## CI report:
   
   * b200938f6abf4bac5e929db791c52289d110054e Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/141714552) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3751)
 
   * de8b409c144285a3d75874e4863a2d1f7fc4336a Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/141863465) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3797)
 
   * d021a016adf64118b785642ffbe1fbe45cb9e15a Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142456762) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3957)
 
   * c95faa623e5e0ea94dcb82df536fbe37e724baac Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142639483) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3983)
 
   * b5f020618f6d894a816de3fc7c0c53af295e7ce2 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/142742105) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4018)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10733: [FLINK-15446][table][docs] Improve "Connect to External Systems" documentation page

2019-12-31 Thread GitBox
flinkbot edited a comment on issue #10733: [FLINK-15446][table][docs] Improve 
"Connect to External Systems" documentation page
URL: https://github.com/apache/flink/pull/10733#issuecomment-569906121
 
 
   
   ## CI report:
   
   * bfdbbc6bdc8fbee2cd764f9cb389cdf34b128756 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142736226) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4015)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10721: [FLINK-15429][hive] HiveObjectConversion implementations need to hand…

2019-12-31 Thread GitBox
flinkbot edited a comment on issue #10721: [FLINK-15429][hive] 
HiveObjectConversion implementations need to hand…
URL: https://github.com/apache/flink/pull/10721#issuecomment-569622936
 
 
   
   ## CI report:
   
   * 62027bf514e75e43953585ade0ed57453b9df2f5 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142626733) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3980)
 
   * 44d60e2a99f9e63e32294e231f405cfcaef2f698 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142641843) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3985)
 
   * 57bd2165744e96791981981cbdffddd4587fc461 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/142740143) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4017)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10726: [FLINK-15427][Statebackend][test] Check TTL test in test_stream_statettl.sh and skip the exception check

2019-12-31 Thread GitBox
flinkbot edited a comment on issue #10726: [FLINK-15427][Statebackend][test] 
Check TTL test in test_stream_statettl.sh and skip the exception check
URL: https://github.com/apache/flink/pull/10726#issuecomment-569852183
 
 
   
   ## CI report:
   
   * 461a27735c3956818ea691074ee7a80bc8c5351b Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142713534) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3995)
 
   * 4774fb7d466299f8edd58872d296462795da06a7 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142732180) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4013)
 
   * 2cde9109a4b6264a9a316085405282731ba3c02f UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10632: [FLINK-15290][hive] Need a way to turn off vectorized orc reader for SQL CLI

2019-12-31 Thread GitBox
flinkbot edited a comment on issue #10632: [FLINK-15290][hive] Need a way to 
turn off vectorized orc reader for SQL CLI
URL: https://github.com/apache/flink/pull/10632#issuecomment-567480631
 
 
   
   ## CI report:
   
   * c4e30954298685c2e3acac237bd9d83484140ed9 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/141755289) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3765)
 
   * f1ac719f63aa28d92aff57e41e55182692dbcffc Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142646581) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3987)
 
   * 253c30ccbda71196e603a69e08ec664b3153eb7a Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142732161) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4011)
 
   * 0709e796ef4761b7543fb9835b46f4fb071b12b5 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/142740137) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4016)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10625: [FLINK-15259][hive] HiveInspector.toInspectors() should convert Flink…

2019-12-31 Thread GitBox
flinkbot edited a comment on issue #10625: [FLINK-15259][hive] 
HiveInspector.toInspectors() should convert Flink…
URL: https://github.com/apache/flink/pull/10625#issuecomment-567374884
 
 
   
   ## CI report:
   
   * b200938f6abf4bac5e929db791c52289d110054e Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/141714552) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3751)
 
   * de8b409c144285a3d75874e4863a2d1f7fc4336a Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/141863465) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3797)
 
   * d021a016adf64118b785642ffbe1fbe45cb9e15a Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142456762) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3957)
 
   * c95faa623e5e0ea94dcb82df536fbe37e724baac Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142639483) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3983)
 
   * b5f020618f6d894a816de3fc7c0c53af295e7ce2 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Commented] (FLINK-15378) StreamFileSystemSink supported mutil hdfs plugins.

2019-12-31 Thread Piotr Nowojski (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17006060#comment-17006060
 ] 

Piotr Nowojski commented on FLINK-15378:


[~ouyangwuli], do I understand your problem correctly that you are trying to 
use the same plugin, but with different configs? Could you not create a separate 
plugin with just a different schema, instead of adding a different 
{{identity}}? 

 

[~ouyangwuli] [~fly_in_gis] where are the "conf A", "conf B"  and 
{{hdfs-site.xml}} files located? Are they bundled inside the plugin's fat jar? 

> StreamFileSystemSink supported mutil hdfs plugins.
> --
>
> Key: FLINK-15378
> URL: https://issues.apache.org/jira/browse/FLINK-15378
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem, FileSystems
>Affects Versions: 1.9.2, 1.10.0
>Reporter: ouyangwulin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
> Attachments: jobmananger.log
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> [As reported on the 
> mailing list|https://lists.apache.org/thread.html/7a6b1e341bde0ef632a82f8d46c9c93da358244b6bac0d8d544d11cb%40%3Cuser.flink.apache.org%3E]
> Request 1: FileSystem plugins should not affect the default YARN dependencies.
> Request 2: StreamFileSystemSink should support multiple HDFS plugins under the 
> same schema.
> Problem description:
>     When I put a filesystem plugin into FLINK_HOME/plugins, the class 
> '*com.filesystem.plugin.FileSystemFactoryEnhance*' implements 
> '*FileSystemFactory*'. When the JM starts, it calls 
> FileSystem.initialize(configuration, 
> PluginUtils.createPluginManagerFromRootFolder(configuration)) to load 
> factories into the map FileSystem#FS_FACTORIES, whose key is only the 
> schema. When the TM/JM uses the local Hadoop conf A while the user code in 
> the filesystem plugin uses Hadoop conf B, with conf A and conf B pointing to 
> different Hadoop clusters, the JM fails to start, because the BlobServer in 
> the JM loads conf B to get the filesystem. The full log is attached.
>  
> Proposed fix:
>     Use the schema plus a specific identifier as the key for 
> FileSystem#FS_FACTORIES.
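The proposed fix amounts to widening the factory map's key from the scheme alone to scheme-plus-identifier. A minimal, self-contained sketch of that idea (hypothetical class and method names, not Flink's actual `FileSystem#FS_FACTORIES` implementation):

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of the proposed fix: key the factory map by scheme plus an
// extra plugin identifier instead of the scheme alone, so two HDFS plugins
// pointing at different Hadoop clusters can coexist under the same scheme.
class FsFactoryRegistry {
    private final Map<String, String> factories = new HashMap<>();

    // Composite key: "hdfs" alone collides; "hdfs#confB" does not.
    static String key(String scheme, String identifier) {
        return (identifier == null || identifier.isEmpty())
                ? scheme
                : scheme + "#" + identifier;
    }

    void register(String scheme, String identifier, String factoryName) {
        factories.put(key(scheme, identifier), factoryName);
    }

    String lookup(String scheme, String identifier) {
        return factories.get(key(scheme, identifier));
    }
}
```

With such a key, the JM's BlobServer and the user code could each resolve their own factory for `hdfs`, instead of the last-registered one winning.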



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15378) StreamFileSystemSink supported mutil hdfs plugins.

2019-12-31 Thread Piotr Nowojski (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski updated FLINK-15378:
---
Affects Version/s: (was: 1.11.0)
   1.10.0

> StreamFileSystemSink supported mutil hdfs plugins.
> --
>
> Key: FLINK-15378
> URL: https://issues.apache.org/jira/browse/FLINK-15378
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem, FileSystems
>Affects Versions: 1.9.2, 1.10.0
>Reporter: ouyangwulin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
> Attachments: jobmananger.log
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> [As reported on the 
> mailing list|https://lists.apache.org/thread.html/7a6b1e341bde0ef632a82f8d46c9c93da358244b6bac0d8d544d11cb%40%3Cuser.flink.apache.org%3E]
> Request 1: FileSystem plugins should not affect the default YARN dependencies.
> Request 2: StreamFileSystemSink should support multiple HDFS plugins under the 
> same schema.
> Problem description:
>     When I put a filesystem plugin into FLINK_HOME/plugins, the class 
> '*com.filesystem.plugin.FileSystemFactoryEnhance*' implements 
> '*FileSystemFactory*'. When the JM starts, it calls 
> FileSystem.initialize(configuration, 
> PluginUtils.createPluginManagerFromRootFolder(configuration)) to load 
> factories into the map FileSystem#FS_FACTORIES, whose key is only the 
> schema. When the TM/JM uses the local Hadoop conf A while the user code in 
> the filesystem plugin uses Hadoop conf B, with conf A and conf B pointing to 
> different Hadoop clusters, the JM fails to start, because the BlobServer in 
> the JM loads conf B to get the filesystem. The full log is attached.
>  
> Proposed fix:
>     Use the schema plus a specific identifier as the key for 
> FileSystem#FS_FACTORIES.





[jira] [Updated] (FLINK-15378) StreamFileSystemSink supported mutil hdfs plugins.

2019-12-31 Thread Piotr Nowojski (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski updated FLINK-15378:
---
Component/s: (was: API / Core)
 FileSystems
 Connectors / FileSystem

> StreamFileSystemSink supported mutil hdfs plugins.
> --
>
> Key: FLINK-15378
> URL: https://issues.apache.org/jira/browse/FLINK-15378
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem, FileSystems
>Affects Versions: 1.9.2, 1.11.0
>Reporter: ouyangwulin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
> Attachments: jobmananger.log
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> [As reported on the 
> mailing list|https://lists.apache.org/thread.html/7a6b1e341bde0ef632a82f8d46c9c93da358244b6bac0d8d544d11cb%40%3Cuser.flink.apache.org%3E]
> Request 1: FileSystem plugins should not affect the default YARN dependencies.
> Request 2: StreamFileSystemSink should support multiple HDFS plugins under the 
> same schema.
> Problem description:
>     When I put a filesystem plugin into FLINK_HOME/plugins, the class 
> '*com.filesystem.plugin.FileSystemFactoryEnhance*' implements 
> '*FileSystemFactory*'. When the JM starts, it calls 
> FileSystem.initialize(configuration, 
> PluginUtils.createPluginManagerFromRootFolder(configuration)) to load 
> factories into the map FileSystem#FS_FACTORIES, whose key is only the 
> schema. When the TM/JM uses the local Hadoop conf A while the user code in 
> the filesystem plugin uses Hadoop conf B, with conf A and conf B pointing to 
> different Hadoop clusters, the JM fails to start, because the BlobServer in 
> the JM loads conf B to get the filesystem. The full log is attached.
>  
> Proposed fix:
>     Use the schema plus a specific identifier as the key for 
> FileSystem#FS_FACTORIES.





[GitHub] [flink] wuchong merged pull request #10714: [FLINK-15409]Add semicolon after WindowJoinUtil#generateJoinFunction '$collectorTerm.collect($joinedRow)' statement

2019-12-31 Thread GitBox
wuchong merged pull request #10714: [FLINK-15409]Add semicolon after 
WindowJoinUtil#generateJoinFunction '$collectorTerm.collect($joinedRow)' 
statement
URL: https://github.com/apache/flink/pull/10714
 
 
   




[GitHub] [flink] klion26 commented on issue #10726: [FLINK-15427][Statebackend][test] Check TTL test in test_stream_statettl.sh and skip the exception check

2019-12-31 Thread GitBox
klion26 commented on issue #10726: [FLINK-15427][Statebackend][test] Check TTL 
test in test_stream_statettl.sh and skip the exception check
URL: https://github.com/apache/flink/pull/10726#issuecomment-569911625
 
 
   I ran the end-to-end test locally.
   1. The test succeeds:
   ```
   Checking of logs skipped.
   
   [PASS] 'State TTL Heap backend end-to-end test' passed after 0 minutes and 
28 seconds! Test exited with exit code 0.
   
   ...
   
   Checking of logs skipped.
   
   [PASS] 'State TTL RocksDb backend end-to-end test' passed after 0 minutes 
and 27 seconds! Test exited with exit code 0.
   ```
   
   2. Commenting out the 
[if-check](https://github.com/apache/flink/blob/0c0dc79548fb4414e8515517a03158a416808705/flink-end-to-end-tests/flink-stream-state-ttl-test/src/main/java/org/apache/flink/streaming/tests/TtlVerifyUpdateFunction.java#L88)
 in `TtlVerifyUpdateFunction#flatMap` to simulate a failure makes the test fail:
   ```
   [FAIL] Test script contains errors.
   Checking of logs skipped.
   
   [FAIL] 'State TTL Heap backend end-to-end test' failed after 0 minutes and 
42 seconds! Test exited with exit code 1
   ```




[jira] [Updated] (FLINK-15450) Add kafka topic information to Kafka source name on Flink UI

2019-12-31 Thread Victor Wong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Victor Wong updated FLINK-15450:

Description: 
If the user does not specify a custom name for the source, e.g. a Kafka source, 
Flink uses the default name "Custom Source", which is not intuitive (the same 
applies to sinks).


{code:java}
Source: Custom Source -> Filter -> Map -> Sink: Unnamed
{code}

If we could add the Kafka topic information to the default Source/Sink name, it 
would be very helpful to catch the consuming/publishing topic quickly, like 
this:

{code:java}
Source: srcTopic0, srcTopic1 -> Filter -> Map -> Sink: sinkTopic0, sinkTopic1
{code}

*Suggestion* (forgive me if it makes too many changes)

1. Add a `name` method to interface `Function`

{code:java}
public interface Function extends java.io.Serializable {
default String name() { return ""; }
}
{code}

2. Source/Sink/Other functions override this method depending on their needs.

{code:java}
class FlinkKafkaConsumerBase {

String name() {
  return this.topicsDescriptor.toString();
}

}
{code}

3. Use Function#name if the returned value is not empty.

{code:java}
// StreamExecutionEnvironment
public <OUT> DataStreamSource<OUT> addSource(SourceFunction<OUT> function) {
String sourceName = function.name();
if (StringUtils.isNullOrWhitespaceOnly(sourceName)) {
sourceName = "Custom Source";
}
return addSource(function, sourceName);
}
{code}


  was:
If the user did not specify a custom name to the source, e.g. Kafka source, 
Flink would use the default name "Custom Source", which was not intuitive (Sink 
was the same).


{code:java}
Source: Custom Source -> Filter -> Map -> Sink: Unnamed
{code}

If we could add the Kafka topic information to the default Source/Sink name, it 
would be very helpful to catch the consuming/publishing topic quickly, like 
this:

{code:java}
Source: srcTopic0, srcTopic1 -> Filter -> Map -> Sink: sinkTopic0, sinkTopic1
{code}

*Suggesion* (forgive me if it makes too much changes)

1. Add a `name` method to interface `Function`

{code:java}
public interface Function extends java.io.Serializable {
default String name() { return ""; }
}
{code}

2. Source/Sink/Other functions override this method depending on their needs.

{code:java}
class FlinkKafkaConsumerBase {

String name() {
  return this.topicsDescriptor.toString();
}

}
{code}

3. Use Function#name if the returned value is not empty.

{code:java}
// StreamExecutionEnvironment
public <OUT> DataStreamSource<OUT> addSource(SourceFunction<OUT> function) {
String sourceName = function.name();
if (StringUtils.isNullOrWhitespaceOnly(sourceName)) {
sourceName = "Custom Source";
}
return addSource(function, sourceName);
}
{code}



> Add kafka topic information to Kafka source name on Flink UI
> 
>
> Key: FLINK-15450
> URL: https://issues.apache.org/jira/browse/FLINK-15450
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Reporter: Victor Wong
>Priority: Major
>
> If the user did not specify a custom name to the source, e.g. Kafka source, 
> Flink would use the default name "Custom Source", which was not intuitive 
> (Sink was the same).
> {code:java}
> Source: Custom Source -> Filter -> Map -> Sink: Unnamed
> {code}
> If we could add the Kafka topic information to the default Source/Sink name, 
> it would be very helpful to catch the consuming/publishing topic quickly, 
> like this:
> {code:java}
> Source: srcTopic0, srcTopic1 -> Filter -> Map -> Sink: sinkTopic0, sinkTopic1
> {code}
> *Suggestion* (forgive me if it makes too many changes)
> 1. Add a `name` method to interface `Function`
> {code:java}
> public interface Function extends java.io.Serializable {
>   default String name() { return ""; }
> }
> {code}
> 2. Source/Sink/Other functions override this method depending on their needs.
> {code:java}
> class FlinkKafkaConsumerBase {
> String name() {
>   return this.topicsDescriptor.toString();
> }
> }
> {code}
> 3. Use Function#name if the returned value is not empty.
> {code:java}
> // StreamExecutionEnvironment
>   public <OUT> DataStreamSource<OUT> addSource(SourceFunction<OUT> function) {
>   String sourceName = function.name();
>   if (StringUtils.isNullOrWhitespaceOnly(sourceName)) {
>   sourceName = "Custom Source";
>   }
>   return addSource(function, sourceName);
>   }
> {code}
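The suggestion above hinges on a Java default method keeping existing `Function` implementations source-compatible. A self-contained sketch of that dispatch, using simplified stand-in interfaces rather than Flink's real ones:

```java
import java.io.Serializable;

class NameDemo {
    // Step 1 (simplified): new default method on the Function interface,
    // so existing implementations compile unchanged.
    interface Function extends Serializable {
        default String name() { return ""; }
    }

    // A legacy function that does not override name().
    static class PlainSource implements Function {}

    // Step 2 (simplified): a Kafka-like source overrides name() with its topics.
    static class KafkaSource implements Function {
        public String name() { return "srcTopic0, srcTopic1"; }
    }

    // Step 3 (simplified): fall back to "Custom Source" when name() is empty.
    static String sourceName(Function f) {
        String n = f.name();
        return (n == null || n.trim().isEmpty()) ? "Custom Source" : n;
    }
}
```

Functions that never override `name()` keep today's behavior, while overriding sources surface their topic list on the UI.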





[jira] [Commented] (FLINK-15430) Fix Java 64K method compiling limitation for blink planner.

2019-12-31 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17006056#comment-17006056
 ] 

Jark Wu commented on FLINK-15430:
-

Assigned to you [~libenchao].

> Fix Java 64K method compiling limitation for blink planner.
> ---
>
> Key: FLINK-15430
> URL: https://issues.apache.org/jira/browse/FLINK-15430
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Reporter: Benchao Li
>Assignee: Benchao Li
>Priority: Major
>
> Our Flink SQL setup was migrated from 1.5 to 1.9, and from the legacy planner 
> to the blink planner. We find that some large SQL queries run into a 
> code-generation problem: the generated code exceeds Java's 64K method limit.
> After searching the issues, we found 
> https://issues.apache.org/jira/browse/FLINK-8274, which fixes the bug to some 
> extent, but for the blink planner it has not been fixed yet.





[jira] [Updated] (FLINK-15430) Fix Java 64K method compiling limitation for blink planner.

2019-12-31 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-15430:

Fix Version/s: 1.10.0

> Fix Java 64K method compiling limitation for blink planner.
> ---
>
> Key: FLINK-15430
> URL: https://issues.apache.org/jira/browse/FLINK-15430
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Reporter: Benchao Li
>Assignee: Benchao Li
>Priority: Major
> Fix For: 1.10.0
>
>
> Our Flink SQL setup was migrated from 1.5 to 1.9, and from the legacy planner 
> to the blink planner. We find that some large SQL queries run into a 
> code-generation problem: the generated code exceeds Java's 64K method limit.
> After searching the issues, we found 
> https://issues.apache.org/jira/browse/FLINK-8274, which fixes the bug to some 
> extent, but for the blink planner it has not been fixed yet.





[jira] [Assigned] (FLINK-15430) Fix Java 64K method compiling limitation for blink planner.

2019-12-31 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-15430:
---

Assignee: Benchao Li

> Fix Java 64K method compiling limitation for blink planner.
> ---
>
> Key: FLINK-15430
> URL: https://issues.apache.org/jira/browse/FLINK-15430
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Reporter: Benchao Li
>Assignee: Benchao Li
>Priority: Major
>
> Our Flink SQL setup was migrated from 1.5 to 1.9, and from the legacy planner 
> to the blink planner. We find that some large SQL queries run into a 
> code-generation problem: the generated code exceeds Java's 64K method limit.
> After searching the issues, we found 
> https://issues.apache.org/jira/browse/FLINK-8274, which fixes the bug to some 
> extent, but for the blink planner it has not been fixed yet.





[jira] [Updated] (FLINK-15450) Add kafka topic information to Kafka source name on Flink UI

2019-12-31 Thread Victor Wong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Victor Wong updated FLINK-15450:

Summary: Add kafka topic information to Kafka source name on Flink UI  
(was: Add kafka topic information to Kafka source)

> Add kafka topic information to Kafka source name on Flink UI
> 
>
> Key: FLINK-15450
> URL: https://issues.apache.org/jira/browse/FLINK-15450
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Reporter: Victor Wong
>Priority: Major
>
> If the user did not specify a custom name to the source, e.g. Kafka source, 
> Flink would use the default name "Custom Source", which was not intuitive 
> (Sink was the same).
> {code:java}
> Source: Custom Source -> Filter -> Map -> Sink: Unnamed
> {code}
> If we could add the Kafka topic information to the default Source/Sink name, 
> it would be very helpful to catch the consuming/publishing topic quickly, 
> like this:
> {code:java}
> Source: srcTopic0, srcTopic1 -> Filter -> Map -> Sink: sinkTopic0, sinkTopic1
> {code}
> *Suggestion* (forgive me if it makes too many changes)
> 1. Add a `name` method to interface `Function`
> {code:java}
> public interface Function extends java.io.Serializable {
>   default String name() { return ""; }
> }
> {code}
> 2. Source/Sink/Other functions override this method depending on their needs.
> {code:java}
> class FlinkKafkaConsumerBase {
> String name() {
>   return this.topicsDescriptor.toString();
> }
> }
> {code}
> 3. Use Function#name if the returned value is not empty.
> {code:java}
> // StreamExecutionEnvironment
>   public <OUT> DataStreamSource<OUT> addSource(SourceFunction<OUT> function) {
>   String sourceName = function.name();
>   if (StringUtils.isNullOrWhitespaceOnly(sourceName)) {
>   sourceName = "Custom Source";
>   }
>   return addSource(function, sourceName);
>   }
> {code}





[jira] [Commented] (FLINK-15445) JDBC Table Source didn't work for Types with precision (or/and scale)

2019-12-31 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17006054#comment-17006054
 ] 

Jark Wu commented on FLINK-15445:
-

Please also check whether the result is correct, as reported in FLINK-15379, 
and whether all the JDBC drivers support the full precisions.

> JDBC Table Source didn't work for Types with precision (or/and scale)
> -
>
> Key: FLINK-15445
> URL: https://issues.apache.org/jira/browse/FLINK-15445
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.10.0
>Reporter: Zhenghua Gao
>Priority: Major
> Fix For: 1.10.0
>
>
> {code:java}
>  public class JDBCSourceExample {
> public static void main(String[] args) throws Exception {
> StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
> env.setParallelism(1);
> EnvironmentSettings envSettings = EnvironmentSettings.newInstance()
> .useBlinkPlanner()
> .inStreamingMode()
> .build();
> StreamTableEnvironment tableEnvironment = 
> StreamTableEnvironment.create(env, envSettings);
> String mysqlCurrencyDDL = "CREATE TABLE currency (\n" +
> "  currency_id BIGINT,\n" +
> "  currency_name STRING,\n" +
> "  rate DOUBLE,\n" +
> "  currency_time TIMESTAMP(3),\n" +
> "  country STRING,\n" +
> "  timestamp6 TIMESTAMP(6),\n" +
> "  time6 TIME(6),\n" +
> "  gdp DECIMAL(10, 4)\n" +
> ") WITH (\n" +
> "   'connector.type' = 'jdbc',\n" +
> "   'connector.url' = 'jdbc:mysql://localhost:3306/test',\n" +
> "   'connector.username' = 'root'," +
> "   'connector.table' = 'currency',\n" +
> "   'connector.driver' = 'com.mysql.jdbc.Driver',\n" +
> "   'connector.lookup.cache.max-rows' = '500', \n" +
> "   'connector.lookup.cache.ttl' = '10s',\n" +
> "   'connector.lookup.max-retries' = '3'" +
> ")";
> tableEnvironment.sqlUpdate(mysqlCurrencyDDL);
> String querySQL = "select * from currency";
> tableEnvironment.toAppendStream(tableEnvironment.sqlQuery(querySQL), 
> Row.class).print();
> tableEnvironment.execute("JdbcExample");
> }
> }{code}
>  
> Throws Exception:
>  
> Exception in thread "main" org.apache.flink.table.api.ValidationException: 
> Type TIMESTAMP(6) of table field 'timestamp6' does not match with the 
> physical type TIMESTAMP(3) of the 'timestamp9' field of the TableSource 
> return type.
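The exception is a validation-time mismatch: the DDL declares `TIMESTAMP(6)`, but the table source's produced type reports precision 3. A simplified stand-in for that kind of precision check (hypothetical, not the planner's actual validation code):

```java
// Simplified sketch of schema validation: a declared TIMESTAMP(p) must
// match the physical precision reported by the source exactly, otherwise
// validation fails before the job runs.
class TypeCheck {
    static void validate(int declaredPrecision, int physicalPrecision, String field) {
        if (declaredPrecision != physicalPrecision) {
            throw new IllegalStateException(
                "Type TIMESTAMP(" + declaredPrecision + ") of table field '" + field
                + "' does not match with the physical type TIMESTAMP("
                + physicalPrecision + ")");
        }
    }
}
```

Under this model, `validate(6, 3, "timestamp6")` reproduces the reported failure, while declaring the DDL with the precision the source actually produces passes.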





[jira] [Assigned] (FLINK-15445) JDBC Table Source didn't work for Types with precision (or/and scale)

2019-12-31 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-15445:
---

Assignee: Zhenghua Gao

> JDBC Table Source didn't work for Types with precision (or/and scale)
> -
>
> Key: FLINK-15445
> URL: https://issues.apache.org/jira/browse/FLINK-15445
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.10.0
>Reporter: Zhenghua Gao
>Assignee: Zhenghua Gao
>Priority: Major
> Fix For: 1.10.0
>
>
> {code:java}
>  public class JDBCSourceExample {
> public static void main(String[] args) throws Exception {
> StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
> env.setParallelism(1);
> EnvironmentSettings envSettings = EnvironmentSettings.newInstance()
> .useBlinkPlanner()
> .inStreamingMode()
> .build();
> StreamTableEnvironment tableEnvironment = 
> StreamTableEnvironment.create(env, envSettings);
> String mysqlCurrencyDDL = "CREATE TABLE currency (\n" +
> "  currency_id BIGINT,\n" +
> "  currency_name STRING,\n" +
> "  rate DOUBLE,\n" +
> "  currency_time TIMESTAMP(3),\n" +
> "  country STRING,\n" +
> "  timestamp6 TIMESTAMP(6),\n" +
> "  time6 TIME(6),\n" +
> "  gdp DECIMAL(10, 4)\n" +
> ") WITH (\n" +
> "   'connector.type' = 'jdbc',\n" +
> "   'connector.url' = 'jdbc:mysql://localhost:3306/test',\n" +
> "   'connector.username' = 'root'," +
> "   'connector.table' = 'currency',\n" +
> "   'connector.driver' = 'com.mysql.jdbc.Driver',\n" +
> "   'connector.lookup.cache.max-rows' = '500', \n" +
> "   'connector.lookup.cache.ttl' = '10s',\n" +
> "   'connector.lookup.max-retries' = '3'" +
> ")";
> tableEnvironment.sqlUpdate(mysqlCurrencyDDL);
> String querySQL = "select * from currency";
> tableEnvironment.toAppendStream(tableEnvironment.sqlQuery(querySQL), 
> Row.class).print();
> tableEnvironment.execute("JdbcExample");
> }
> }{code}
>  
> Throws Exception:
>  
> Exception in thread "main" org.apache.flink.table.api.ValidationException: 
> Type TIMESTAMP(6) of table field 'timestamp6' does not match with the 
> physical type TIMESTAMP(3) of the 'timestamp9' field of the TableSource 
> return type.





[jira] [Commented] (FLINK-15379) JDBC connector return wrong value if defined dataType contains precision

2019-12-31 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17006053#comment-17006053
 ] 

Jark Wu commented on FLINK-15379:
-

Thanks [~docete].

> JDBC connector return wrong value if defined dataType contains precision
> 
>
> Key: FLINK-15379
> URL: https://issues.apache.org/jira/browse/FLINK-15379
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Leonard Xu
>Priority: Major
>
> A mysql table like:
>  
> {code:java}
> // CREATE TABLE `currency` (
>   `currency_id` bigint(20) NOT NULL,
>   `currency_name` varchar(200) DEFAULT NULL,
>   `rate` double DEFAULT NULL,
>   `currency_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
>   `country` varchar(100) DEFAULT NULL,
>   `timestamp6` timestamp(6) NULL DEFAULT NULL,
>   `time6` time(6) DEFAULT NULL,
>   `gdp` decimal(10,4) DEFAULT NULL,
>   PRIMARY KEY (`currency_id`)
> ) ENGINE=InnoDB DEFAULT CHARSET=utf8
> +-------------+---------------+------+---------------------+---------+----------------------------+-----------------+----------+
> | currency_id | currency_name | rate | currency_time       | country | timestamp6                 | time6           | gdp      |
> +-------------+---------------+------+---------------------+---------+----------------------------+-----------------+----------+
> |           1 | US Dollar     | 1020 | 2019-12-20 17:23:00 | America | 2019-12-20 12:22:00.023456 | 12:22:00.023456 | 100.0112 |
> |           2 | Euro          |  114 | 2019-12-20 12:22:00 | Germany | 2019-12-20 12:22:00.023456 | 12:22:00.023456 | 100.0112 |
> |           3 | RMB           |   16 | 2019-12-20 12:22:00 | China   | 2019-12-20 12:22:00.023456 | 12:22:00.023456 | 100.0112 |
> |           4 | Yen           |    1 | 2019-12-20 12:22:00 | Japan   | 2019-12-20 12:22:00.123456 | 12:22:00.123456 | 100.4112 |
> +-------------+---------------+------+---------------------+---------+----------------------------+-----------------+----------+{code}
>  
> If the user defines a JDBC table as a dimension table like:
>  
> {code:java}
> // 
> public static final String mysqlCurrencyDDL = "CREATE TABLE currency (\n" +
> "  currency_id BIGINT,\n" +
> "  currency_name STRING,\n" +
> "  rate DOUBLE,\n" +
> "  currency_time TIMESTAMP(3),\n" +
> "  country STRING,\n" +
> "  timestamp6 TIMESTAMP(6),\n" +
> "  time6 TIME(6),\n" +
> "  gdp DECIMAL(10, 4)\n" +
> ") WITH (\n" +
> "   'connector.type' = 'jdbc',\n" +
> "   'connector.url' = 'jdbc:mysql://localhost:3306/test',\n" +
> "   'connector.username' = 'root'," +
> "   'connector.table' = 'currency',\n" +
> "   'connector.driver' = 'com.mysql.jdbc.Driver',\n" +
> "   'connector.lookup.cache.max-rows' = '500', \n" +
> "   'connector.lookup.cache.ttl' = '10s',\n" +
> "   'connector.lookup.max-retries' = '3'" +
> ")";
> {code}
>  
> the user will get wrong values in the columns `timestamp6`, `time6`, `gdp`:
> {code:java}
> // c.currency_id, c.currency_name, c.rate, c.currency_time, c.country, c.timestamp6, c.time6, c.gdp
> 1,US Dollar,1020.0,2019-12-20T17:23,America,2019-12-20T12:22:00.023456,12:22,-0.0001
> 2,Euro,114.0,2019-12-20T12:22,Germany,2019-12-20T12:22:00.023456,12:22,-0.0001
> 4,Yen,1.0,2019-12-20T12:22,Japan,2019-12-20T12:22:00.123456,12:22,-0.0001{code}
>  
>  
> {code:java}
> public class JDBCSourceExample {
> public static void main(String[] args) throws Exception {
> StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
> env.setParallelism(1);
> EnvironmentSettings envSettings = EnvironmentSettings.newInstance()
> .useBlinkPlanner()
> .inStreamingMode()
> .build();
> StreamTableEnvironment tableEnvironment = 
> StreamTableEnvironment.create(env, envSettings);
> String mysqlCurrencyDDL = "CREATE TABLE currency (\n" +
> "  currency_id BIGINT,\n" +
> "  currency_name STRING,\n" +
> "  rate DOUBLE,\n" +
> "  currency_time TIMESTAMP(3),\n" +
> "  country STRING,\n" +
> "  timestamp6 TIMESTAMP(6),\n" +
> "  time6 TIME(6),\n" +
> "  gdp DECIMAL(10, 4)\n" +
> ") WITH (\n" +
> "   'connector.type' = 'jdbc',\n" +
> "   'connector.url' = 'jdbc:mysql://localhost:3306/test',\n" +
> "   'connector.username' = 'root'," +
> "   'connector.table' = 'currency',\n" +
> "   'connector.driver' = 

[jira] [Created] (FLINK-15450) Add kafka topic information to Kafka source

2019-12-31 Thread Victor Wong (Jira)
Victor Wong created FLINK-15450:
---

 Summary: Add kafka topic information to Kafka source
 Key: FLINK-15450
 URL: https://issues.apache.org/jira/browse/FLINK-15450
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Kafka
Reporter: Victor Wong


If the user does not specify a custom name for the source, e.g. a Kafka source, 
Flink uses the default name "Custom Source", which is not intuitive (the same 
applies to sinks).


{code:java}
Source: Custom Source -> Filter -> Map -> Sink: Unnamed
{code}

If we could add the Kafka topic information to the default Source/Sink name, it 
would be very helpful to catch the consuming/publishing topic quickly, like 
this:

{code:java}
Source: srcTopic0, srcTopic1 -> Filter -> Map -> Sink: sinkTopic0, sinkTopic1
{code}

*Suggestion* (forgive me if it makes too many changes)

1. Add a `name` method to interface `Function`

{code:java}
public interface Function extends java.io.Serializable {
default String name() { return ""; }
}
{code}

2. Source/Sink/Other functions override this method depending on their needs.

{code:java}
class FlinkKafkaConsumerBase {

String name() {
  return this.topicsDescriptor.toString();
}

}
{code}

3. Use Function#name if the returned value is not empty.

{code:java}
// StreamExecutionEnvironment
public <OUT> DataStreamSource<OUT> addSource(SourceFunction<OUT> function) {
String sourceName = function.name();
if (StringUtils.isNullOrWhitespaceOnly(sourceName)) {
sourceName = "Custom Source";
}
return addSource(function, sourceName);
}
{code}






[GitHub] [flink] flinkbot edited a comment on issue #10732: [FLINK-14980][docs] add function ddl docs

2019-12-31 Thread GitBox
flinkbot edited a comment on issue #10732: [FLINK-14980][docs] add function ddl 
docs
URL: https://github.com/apache/flink/pull/10732#issuecomment-569896244
 
 
   
   ## CI report:
   
   * 6fd808adc772aa42c3ed9968bbd9593c28940539 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142732194) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4014)
 
   
   
   




[GitHub] [flink] flinkbot edited a comment on issue #10733: [FLINK-15446][table][docs] Improve "Connect to External Systems" documentation page

2019-12-31 Thread GitBox
flinkbot edited a comment on issue #10733: [FLINK-15446][table][docs] Improve 
"Connect to External Systems" documentation page
URL: https://github.com/apache/flink/pull/10733#issuecomment-569906121
 
 
   
   ## CI report:
   
   * bfdbbc6bdc8fbee2cd764f9cb389cdf34b128756 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/142736226) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4015)
 
   
   
   




[GitHub] [flink] flinkbot edited a comment on issue #10726: [FLINK-15427][Statebackend][test] Check TTL test in test_stream_statettl.sh and skip the exception check

2019-12-31 Thread GitBox
flinkbot edited a comment on issue #10726: [FLINK-15427][Statebackend][test] 
Check TTL test in test_stream_statettl.sh and skip the exception check
URL: https://github.com/apache/flink/pull/10726#issuecomment-569852183
 
 
   
   ## CI report:
   
   * 461a27735c3956818ea691074ee7a80bc8c5351b Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142713534) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3995)
 
   * 4774fb7d466299f8edd58872d296462795da06a7 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142732180) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4013)
 
   
   
   




[GitHub] [flink] flinkbot edited a comment on issue #10721: [FLINK-15429][hive] HiveObjectConversion implementations need to hand…

2019-12-31 Thread GitBox
flinkbot edited a comment on issue #10721: [FLINK-15429][hive] 
HiveObjectConversion implementations need to hand…
URL: https://github.com/apache/flink/pull/10721#issuecomment-569622936
 
 
   
   ## CI report:
   
   * 62027bf514e75e43953585ade0ed57453b9df2f5 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142626733) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3980)
 
   * 44d60e2a99f9e63e32294e231f405cfcaef2f698 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142641843) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3985)
 
   * 57bd2165744e96791981981cbdffddd4587fc461 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10714: [FLINK-15409]Add semicolon after WindowJoinUtil#generateJoinFunction '$collectorTerm.collect($joinedRow)' statement

2019-12-31 Thread GitBox
flinkbot edited a comment on issue #10714: [FLINK-15409]Add semicolon after 
WindowJoinUtil#generateJoinFunction '$collectorTerm.collect($joinedRow)' 
statement
URL: https://github.com/apache/flink/pull/10714#issuecomment-569424154
 
 
   
   ## CI report:
   
   * 3dc52c5c446e5f74f3871b3e8218a24afcd77c24 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142530511) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3967)
 
   * 18f08ec7035a48f0c6551e3062a1b25eb4d76126 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142732169) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4012)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Updated] (FLINK-15379) JDBC connector return wrong value if defined dataType contains precision

2019-12-31 Thread Zhenghua Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhenghua Gao updated FLINK-15379:
-
Description: 
A MySQL table like:

 
{code:java}
// CREATE TABLE `currency` (
  `currency_id` bigint(20) NOT NULL,
  `currency_name` varchar(200) DEFAULT NULL,
  `rate` double DEFAULT NULL,
  `currency_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
  `country` varchar(100) DEFAULT NULL,
  `timestamp6` timestamp(6) NULL DEFAULT NULL,
  `time6` time(6) DEFAULT NULL,
  `gdp` decimal(10,4) DEFAULT NULL,
  PRIMARY KEY (`currency_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
+-------------+---------------+------+---------------------+---------+----------------------------+-----------------+----------+
| currency_id | currency_name | rate | currency_time       | country | timestamp6                 | time6           | gdp      |
+-------------+---------------+------+---------------------+---------+----------------------------+-----------------+----------+
|           1 | US Dollar     | 1020 | 2019-12-20 17:23:00 | America | 2019-12-20 12:22:00.023456 | 12:22:00.023456 | 100.0112 |
|           2 | Euro          |  114 | 2019-12-20 12:22:00 | Germany | 2019-12-20 12:22:00.023456 | 12:22:00.023456 | 100.0112 |
|           3 | RMB           |   16 | 2019-12-20 12:22:00 | China   | 2019-12-20 12:22:00.023456 | 12:22:00.023456 | 100.0112 |
|           4 | Yen           |    1 | 2019-12-20 12:22:00 | Japan   | 2019-12-20 12:22:00.123456 | 12:22:00.123456 | 100.4112 |
+-------------+---------------+------+---------------------+---------+----------------------------+-----------------+----------+{code}
 

If a user defines a JDBC table as a dimension table like:

 
{code:java}
// 
public static final String mysqlCurrencyDDL = "CREATE TABLE currency (\n" +
"  currency_id BIGINT,\n" +
"  currency_name STRING,\n" +
"  rate DOUBLE,\n" +
"  currency_time TIMESTAMP(3),\n" +
"  country STRING,\n" +
"  timestamp6 TIMESTAMP(6),\n" +
"  time6 TIME(6),\n" +
"  gdp DECIMAL(10, 4)\n" +
") WITH (\n" +
"   'connector.type' = 'jdbc',\n" +
"   'connector.url' = 'jdbc:mysql://localhost:3306/test',\n" +
"   'connector.username' = 'root'," +
"   'connector.table' = 'currency',\n" +
"   'connector.driver' = 'com.mysql.jdbc.Driver',\n" +
"   'connector.lookup.cache.max-rows' = '500', \n" +
"   'connector.lookup.cache.ttl' = '10s',\n" +
"   'connector.lookup.max-retries' = '3'" +
")";
{code}
 

The user will get wrong values in the columns `timestamp6`, `time6`, and `gdp`:
{code:java}
// c.currency_id, c.currency_name, c.rate, c.currency_time, c.country, 
c.timestamp6, c.time6, c.gdp 

1,US 
Dollar,1020.0,2019-12-20T17:23,America,2019-12-20T12:22:00.023456,12:22,-0.0001
2,Euro,114.0,2019-12-20T12:22,Germany,2019-12-20T12:22:00.023456,12:22,-0.0001
4,Yen,1.0,2019-12-20T12:22,Japan,2019-12-20T12:22:00.123456,12:22,-0.0001{code}
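The dropped fractional seconds in `timestamp6`/`time6` above (`12:22` instead of `12:22:00.023456`) are what a microsecond-precision value looks like after being squeezed through a millisecond-precision representation. A minimal self-contained sketch of that effect (plain Java, no Flink or MySQL involved; an illustration only, not the connector's actual code path):

```java
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneOffset;

public class TimestampPrecisionDemo {
    // Round-trip a TIMESTAMP(6) value through epoch milliseconds, i.e. a
    // TIMESTAMP(3)-capable encoding. The microsecond part is silently lost.
    static LocalDateTime throughMillis(LocalDateTime ts6) {
        long millis = ts6.toInstant(ZoneOffset.UTC).toEpochMilli();
        return LocalDateTime.ofInstant(Instant.ofEpochMilli(millis), ZoneOffset.UTC);
    }

    public static void main(String[] args) {
        LocalDateTime ts6 = LocalDateTime.parse("2019-12-20T12:22:00.023456");
        System.out.println(ts6);                // 2019-12-20T12:22:00.023456
        System.out.println(throughMillis(ts6)); // 2019-12-20T12:22:00.023
    }
}
```

The same pattern applies to `DECIMAL(10, 4)`: if the declared precision/scale and the physical encoding disagree, the reconstructed value is wrong.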
 

 
{code:java}
public class JDBCSourceExample {
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(1);

EnvironmentSettings envSettings = EnvironmentSettings.newInstance()
.useBlinkPlanner()
.inStreamingMode()
.build();
StreamTableEnvironment tableEnvironment = 
StreamTableEnvironment.create(env, envSettings);
String mysqlCurrencyDDL = "CREATE TABLE currency (\n" +
"  currency_id BIGINT,\n" +
"  currency_name STRING,\n" +
"  rate DOUBLE,\n" +
"  currency_time TIMESTAMP(3),\n" +
"  country STRING,\n" +
"  timestamp6 TIMESTAMP(6),\n" +
"  time6 TIME(6),\n" +
"  gdp DECIMAL(10, 4)\n" +
") WITH (\n" +
"   'connector.type' = 'jdbc',\n" +
"   'connector.url' = 'jdbc:mysql://localhost:3306/test',\n" +
"   'connector.username' = 'root'," +
"   'connector.table' = 'currency',\n" +
"   'connector.driver' = 'com.mysql.jdbc.Driver',\n" +
"   'connector.lookup.cache.max-rows' = '500', \n" +
"   'connector.lookup.cache.ttl' = '10s',\n" +
"   'connector.lookup.max-retries' = '3'" +
")";

tableEnvironment.sqlUpdate(mysqlCurrencyDDL);


String querySQL = "select * from currency";

tableEnvironment.toAppendStream(tableEnvironment.sqlQuery(querySQL), 
Row.class).print();

tableEnvironment.execute("JdbcExample");
}
}
{code}
 

  was:
A MySQL table like:

 
{code:java}
// CREATE TABLE `currency` (
  `currency_id` bigint(20) NOT NULL,
  `currency_name` varchar(200) DEFAULT NULL,
  `rate` double 

[jira] [Updated] (FLINK-15445) JDBC Table Source didn't work for Types with precision (or/and scale)

2019-12-31 Thread Zhenghua Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhenghua Gao updated FLINK-15445:
-
Description: 
{code:java}
 public class JDBCSourceExample {
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(1);

EnvironmentSettings envSettings = EnvironmentSettings.newInstance()
.useBlinkPlanner()
.inStreamingMode()
.build();
StreamTableEnvironment tableEnvironment = 
StreamTableEnvironment.create(env, envSettings);
String mysqlCurrencyDDL = "CREATE TABLE currency (\n" +
"  currency_id BIGINT,\n" +
"  currency_name STRING,\n" +
"  rate DOUBLE,\n" +
"  currency_time TIMESTAMP(3),\n" +
"  country STRING,\n" +
"  timestamp6 TIMESTAMP(6),\n" +
"  time6 TIME(6),\n" +
"  gdp DECIMAL(10, 4)\n" +
") WITH (\n" +
"   'connector.type' = 'jdbc',\n" +
"   'connector.url' = 'jdbc:mysql://localhost:3306/test',\n" +
"   'connector.username' = 'root'," +
"   'connector.table' = 'currency',\n" +
"   'connector.driver' = 'com.mysql.jdbc.Driver',\n" +
"   'connector.lookup.cache.max-rows' = '500', \n" +
"   'connector.lookup.cache.ttl' = '10s',\n" +
"   'connector.lookup.max-retries' = '3'" +
")";

tableEnvironment.sqlUpdate(mysqlCurrencyDDL);


String querySQL = "select * from currency";

tableEnvironment.toAppendStream(tableEnvironment.sqlQuery(querySQL), 
Row.class).print();

tableEnvironment.execute("JdbcExample");
}
}{code}
 

Throws Exception:

 

Exception in thread "main" org.apache.flink.table.api.ValidationException: Type 
TIMESTAMP(6) of table field 'timestamp6' does not match with the physical type 
TIMESTAMP(3) of the 'timestamp9' field of the TableSource return type.
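Until the JDBC source honors the declared precision, one possible workaround (an assumption based on the error message above, not a confirmed fix) is to declare the column with the precision the TableSource physically returns, i.e. TIMESTAMP(3) instead of TIMESTAMP(6), accepting the precision loss. Sketched in the same embedded-DDL style as the example:

```java
public class WorkaroundDdl {
    // Hypothetical workaround DDL: declare timestamp6 with the physical
    // precision reported in the ValidationException, so the declared schema
    // matches the TableSource return type. Trimmed to the relevant columns.
    static String currencyDdl() {
        return "CREATE TABLE currency (\n" +
               "  currency_id BIGINT,\n" +
               "  currency_name STRING,\n" +
               "  timestamp6 TIMESTAMP(3)\n" +  // declared TIMESTAMP(6) in the bug report
               ") WITH (\n" +
               "  'connector.type' = 'jdbc',\n" +
               "  'connector.url' = 'jdbc:mysql://localhost:3306/test',\n" +
               "  'connector.table' = 'currency'\n" +
               ")";
    }

    public static void main(String[] args) {
        System.out.println(currencyDdl());
    }
}
```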

  was:
{code:java}
 public class JDBCSourceExample {
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(1);

EnvironmentSettings envSettings = EnvironmentSettings.newInstance()
.useBlinkPlanner()
.inStreamingMode()
.build();
StreamTableEnvironment tableEnvironment = 
StreamTableEnvironment.create(env, envSettings);
String mysqlCurrencyDDL = "CREATE TABLE currency (\n" +
"  currency_id BIGINT,\n" +
"  currency_name STRING,\n" +
"  rate DOUBLE,\n" +
"  currency_time TIMESTAMP(3),\n" +
"  country STRING,\n" +
"  timestamp6 TIMESTAMP(6),\n" +
"  time6 TIME(6),\n" +
"  gdp DECIMAL(10, 4)\n" +
") WITH (\n" +
"   'connector.type' = 'jdbc',\n" +
"   'connector.url' = 'jdbc:mysql://localhost:3306/test',\n" +
"   'connector.username' = 'root'," +
"   'connector.table' = 'currency',\n" +
"   'connector.driver' = 'com.mysql.jdbc.Driver',\n" +
"   'connector.lookup.cache.max-rows' = '500', \n" +
"   'connector.lookup.cache.ttl' = '10s',\n" +
"   'connector.lookup.max-retries' = '3'" +
")";

tableEnvironment.sqlUpdate(mysqlCurrencyDDL);


String querySQL = "select * from currency";

tableEnvironment.toAppendStream(tableEnvironment.sqlQuery(querySQL), 
Row.class).print();

tableEnvironment.execute("JdbcExample");
}
}{code}


> JDBC Table Source didn't work for Types with precision (or/and scale)
> -
>
> Key: FLINK-15445
> URL: https://issues.apache.org/jira/browse/FLINK-15445
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.10.0
>Reporter: Zhenghua Gao
>Priority: Major
> Fix For: 1.10.0
>
>
> {code:java}
>  public class JDBCSourceExample {
> public static void main(String[] args) throws Exception {
> StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
> env.setParallelism(1);
> EnvironmentSettings envSettings = EnvironmentSettings.newInstance()
> .useBlinkPlanner()
> .inStreamingMode()
> .build();
> StreamTableEnvironment tableEnvironment = 
> StreamTableEnvironment.create(env, envSettings);
>  

[jira] [Updated] (FLINK-15445) JDBC Table Source didn't work for Types with precision (or/and scale)

2019-12-31 Thread Zhenghua Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhenghua Gao updated FLINK-15445:
-
Description: 
{code:java}
 public class JDBCSourceExample {
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(1);

EnvironmentSettings envSettings = EnvironmentSettings.newInstance()
.useBlinkPlanner()
.inStreamingMode()
.build();
StreamTableEnvironment tableEnvironment = 
StreamTableEnvironment.create(env, envSettings);
String mysqlCurrencyDDL = "CREATE TABLE currency (\n" +
"  currency_id BIGINT,\n" +
"  currency_name STRING,\n" +
"  rate DOUBLE,\n" +
"  currency_time TIMESTAMP(3),\n" +
"  country STRING,\n" +
"  timestamp6 TIMESTAMP(6),\n" +
"  time6 TIME(6),\n" +
"  gdp DECIMAL(10, 4)\n" +
") WITH (\n" +
"   'connector.type' = 'jdbc',\n" +
"   'connector.url' = 'jdbc:mysql://localhost:3306/test',\n" +
"   'connector.username' = 'root'," +
"   'connector.table' = 'currency',\n" +
"   'connector.driver' = 'com.mysql.jdbc.Driver',\n" +
"   'connector.lookup.cache.max-rows' = '500', \n" +
"   'connector.lookup.cache.ttl' = '10s',\n" +
"   'connector.lookup.max-retries' = '3'" +
")";

tableEnvironment.sqlUpdate(mysqlCurrencyDDL);


String querySQL = "select * from currency";

tableEnvironment.toAppendStream(tableEnvironment.sqlQuery(querySQL), 
Row.class).print();

tableEnvironment.execute("JdbcExample");
}
}{code}

  was:
{code:java}
public class JDBCSourceExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);

        EnvironmentSettings envSettings = EnvironmentSettings.newInstance()
            .useBlinkPlanner()
            .inStreamingMode()
            .build();
        StreamTableEnvironment tableEnvironment = StreamTableEnvironment.create(env, envSettings);
        String mysqlCurrencyDDL = "CREATE TABLE currency (\n" +
            "  currency_id BIGINT,\n" +
            "  currency_name STRING,\n" +
            "  rate DOUBLE,\n" +
            "  currency_time TIMESTAMP(3),\n" +
            "  country STRING,\n" +
            "  timestamp6 TIMESTAMP(6),\n" +
            "  time6 TIME(6),\n" +
            "  gdp DECIMAL(10, 4)\n" +
            ") WITH (\n" +
            "   'connector.type' = 'jdbc',\n" +
            "   'connector.url' = 'jdbc:mysql://localhost:3306/test',\n" +
            "   'connector.username' = 'root'," +
            "   'connector.table' = 'currency',\n" +
            "   'connector.driver' = 'com.mysql.jdbc.Driver',\n" +
            "   'connector.lookup.cache.max-rows' = '500', \n" +
            "   'connector.lookup.cache.ttl' = '10s',\n" +
            "   'connector.lookup.max-retries' = '3'" +
            ")";

        tableEnvironment.sqlUpdate(mysqlCurrencyDDL);

        String querySQL = "select * from currency";

        tableEnvironment.toAppendStream(tableEnvironment.sqlQuery(querySQL), Row.class).print();

        tableEnvironment.execute("JdbcExample");
    }
}
{code}
 


> JDBC Table Source didn't work for Types with precision (or/and scale)
> -
>
> Key: FLINK-15445
> URL: https://issues.apache.org/jira/browse/FLINK-15445
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.10.0
>Reporter: Zhenghua Gao
>Priority: Major
> Fix For: 1.10.0
>
>
> {code:java}
>  public class JDBCSourceExample {
> public static void main(String[] args) throws Exception {
> StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
> env.setParallelism(1);
> EnvironmentSettings envSettings = EnvironmentSettings.newInstance()
> .useBlinkPlanner()
> .inStreamingMode()
> .build();
> StreamTableEnvironment tableEnvironment = 
> StreamTableEnvironment.create(env, envSettings);
> String mysqlCurrencyDDL = "CREATE TABLE currency (\n" +
> "  currency_id BIGINT,\n" +
> "  currency_name STRING,\n" +
> "  rate DOUBLE,\n" +
> "  currency_time TIMESTAMP(3),\n" +
> "  country STRING,\n" +
> "  timestamp6 TIMESTAMP(6),\n" +
> "  time6 TIME(6),\n" +
> "  gdp DECIMAL(10, 4)\n" +
> ") WITH (\n" +
> "   'connector.type' = 'jdbc',\n" +
> "   'connector.url' = 'jdbc:mysql://localhost:3306/test',\n" +
> "   'connector.username' = 'root'," +
> "   

[GitHub] [flink] flinkbot edited a comment on issue #10632: [FLINK-15290][hive] Need a way to turn off vectorized orc reader for SQL CLI

2019-12-31 Thread GitBox
flinkbot edited a comment on issue #10632: [FLINK-15290][hive] Need a way to 
turn off vectorized orc reader for SQL CLI
URL: https://github.com/apache/flink/pull/10632#issuecomment-567480631
 
 
   
   ## CI report:
   
   * c4e30954298685c2e3acac237bd9d83484140ed9 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/141755289) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3765)
 
   * f1ac719f63aa28d92aff57e41e55182692dbcffc Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142646581) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3987)
 
   * 253c30ccbda71196e603a69e08ec664b3153eb7a Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142732161) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4011)
 
   * 0709e796ef4761b7543fb9835b46f4fb071b12b5 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Updated] (FLINK-15445) JDBC Table Source didn't work for Types with precision (or/and scale)

2019-12-31 Thread Zhenghua Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhenghua Gao updated FLINK-15445:
-
Description: 
{code:java}
public class JDBCSourceExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);

        EnvironmentSettings envSettings = EnvironmentSettings.newInstance()
            .useBlinkPlanner()
            .inStreamingMode()
            .build();
        StreamTableEnvironment tableEnvironment = StreamTableEnvironment.create(env, envSettings);
        String mysqlCurrencyDDL = "CREATE TABLE currency (\n" +
            "  currency_id BIGINT,\n" +
            "  currency_name STRING,\n" +
            "  rate DOUBLE,\n" +
            "  currency_time TIMESTAMP(3),\n" +
            "  country STRING,\n" +
            "  timestamp6 TIMESTAMP(6),\n" +
            "  time6 TIME(6),\n" +
            "  gdp DECIMAL(10, 4)\n" +
            ") WITH (\n" +
            "   'connector.type' = 'jdbc',\n" +
            "   'connector.url' = 'jdbc:mysql://localhost:3306/test',\n" +
            "   'connector.username' = 'root'," +
            "   'connector.table' = 'currency',\n" +
            "   'connector.driver' = 'com.mysql.jdbc.Driver',\n" +
            "   'connector.lookup.cache.max-rows' = '500', \n" +
            "   'connector.lookup.cache.ttl' = '10s',\n" +
            "   'connector.lookup.max-retries' = '3'" +
            ")";

        tableEnvironment.sqlUpdate(mysqlCurrencyDDL);

        String querySQL = "select * from currency";

        tableEnvironment.toAppendStream(tableEnvironment.sqlQuery(querySQL), Row.class).print();

        tableEnvironment.execute("JdbcExample");
    }
}
{code}
 

> JDBC Table Source didn't work for Types with precision (or/and scale)
> -
>
> Key: FLINK-15445
> URL: https://issues.apache.org/jira/browse/FLINK-15445
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.10.0
>Reporter: Zhenghua Gao
>Priority: Major
> Fix For: 1.10.0
>
>
> {code:java}
> public class JDBCSourceExample {
>     public static void main(String[] args) throws Exception {
>         StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
>         env.setParallelism(1);
>         EnvironmentSettings envSettings = EnvironmentSettings.newInstance()
>             .useBlinkPlanner()
>             .inStreamingMode()
>             .build();
>         StreamTableEnvironment tableEnvironment = StreamTableEnvironment.create(env, envSettings);
>         String mysqlCurrencyDDL = "CREATE TABLE currency (\n" +
>             "  currency_id BIGINT,\n" +
>             "  currency_name STRING,\n" +
>             "  rate DOUBLE,\n" +
>             "  currency_time TIMESTAMP(3),\n" +
>             "  country STRING,\n" +
>             "  timestamp6 TIMESTAMP(6),\n" +
>             "  time6 TIME(6),\n" +
>             "  gdp DECIMAL(10, 4)\n" +
>             ") WITH (\n" +
>             "   'connector.type' = 'jdbc',\n" +
>             "   'connector.url' = 'jdbc:mysql://localhost:3306/test',\n" +
>             "   'connector.username' = 'root'," +
>             "   'connector.table' = 'currency',\n" +
>             "   'connector.driver' = 'com.mysql.jdbc.Driver',\n" +
>             "   'connector.lookup.cache.max-rows' = '500', \n" +
>             "   'connector.lookup.cache.ttl' = '10s',\n" +
>             "   'connector.lookup.max-retries' = '3'" +
>             ")";
>         tableEnvironment.sqlUpdate(mysqlCurrencyDDL);
>         String querySQL = "select * from currency";
>         tableEnvironment.toAppendStream(tableEnvironment.sqlQuery(querySQL), Row.class).print();
>         tableEnvironment.execute("JdbcExample");
>     }
> }
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15379) JDBC connector return wrong value if defined dataType contains precision

2019-12-31 Thread Zhenghua Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17006047#comment-17006047
 ] 

Zhenghua Gao commented on FLINK-15379:
--

[~jark] the issue can't be reproduced after FLINK-15168; instead, a 
ValidationException appears. I opened a ticket (FLINK-15445) to track the 
ValidationException issue.

> JDBC connector return wrong value if defined dataType contains precision
> 
>
> Key: FLINK-15379
> URL: https://issues.apache.org/jira/browse/FLINK-15379
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Leonard Xu
>Priority: Major
>
> A MySQL table like:
>  
> {code:java}
> // CREATE TABLE `currency` (
>   `currency_id` bigint(20) NOT NULL,
>   `currency_name` varchar(200) DEFAULT NULL,
>   `rate` double DEFAULT NULL,
>   `currency_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
>   `country` varchar(100) DEFAULT NULL,
>   `timestamp6` timestamp(6) NULL DEFAULT NULL,
>   `time6` time(6) DEFAULT NULL,
>   `gdp` decimal(10,4) DEFAULT NULL,
>   PRIMARY KEY (`currency_id`)
> ) ENGINE=InnoDB DEFAULT CHARSET=utf8
> +-------------+---------------+------+---------------------+---------+----------------------------+-----------------+----------+
> | currency_id | currency_name | rate | currency_time       | country | timestamp6                 | time6           | gdp      |
> +-------------+---------------+------+---------------------+---------+----------------------------+-----------------+----------+
> |           1 | US Dollar     | 1020 | 2019-12-20 17:23:00 | America | 2019-12-20 12:22:00.023456 | 12:22:00.023456 | 100.0112 |
> |           2 | Euro          |  114 | 2019-12-20 12:22:00 | Germany | 2019-12-20 12:22:00.023456 | 12:22:00.023456 | 100.0112 |
> |           3 | RMB           |   16 | 2019-12-20 12:22:00 | China   | 2019-12-20 12:22:00.023456 | 12:22:00.023456 | 100.0112 |
> |           4 | Yen           |    1 | 2019-12-20 12:22:00 | Japan   | 2019-12-20 12:22:00.123456 | 12:22:00.123456 | 100.4112 |
> +-------------+---------------+------+---------------------+---------+----------------------------+-----------------+----------+{code}
>  
> If a user defines a JDBC table as a dimension table like:
>  
> {code:java}
> // 
> public static final String mysqlCurrencyDDL = "CREATE TABLE currency (\n" +
> "  currency_id BIGINT,\n" +
> "  currency_name STRING,\n" +
> "  rate DOUBLE,\n" +
> "  currency_time TIMESTAMP(3),\n" +
> "  country STRING,\n" +
> "  timestamp6 TIMESTAMP(6),\n" +
> "  time6 TIME(6),\n" +
> "  gdp DECIMAL(10, 4)\n" +
> ") WITH (\n" +
> "   'connector.type' = 'jdbc',\n" +
> "   'connector.url' = 'jdbc:mysql://localhost:3306/test',\n" +
> "   'connector.username' = 'root'," +
> "   'connector.table' = 'currency',\n" +
> "   'connector.driver' = 'com.mysql.jdbc.Driver',\n" +
> "   'connector.lookup.cache.max-rows' = '500', \n" +
> "   'connector.lookup.cache.ttl' = '10s',\n" +
> "   'connector.lookup.max-retries' = '3'" +
> ")";
> {code}
>  
> The user will get wrong values in the columns `timestamp6`, `time6`, and `gdp`:
> {code:java}
> // c.currency_id, c.currency_name, c.rate, c.currency_time, c.country, 
> c.timestamp6, c.time6, c.gdp 
> 1,US 
> Dollar,1020.0,2019-12-20T17:23,America,2019-12-20T12:22:00.023456,12:22,-0.0001
> 2,Euro,114.0,2019-12-20T12:22,Germany,2019-12-20T12:22:00.023456,12:22,-0.0001
> 4,Yen,1.0,2019-12-20T12:22,Japan,2019-12-20T12:22:00.123456,12:22,-0.0001{code}
>  
>  
> {code:java}
> public class JDBCSourceExample {
> public static void main(String[] args) throws Exception {
> StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
> env.setParallelism(1);
> EnvironmentSettings envSettings = EnvironmentSettings.newInstance()
> .useBlinkPlanner()
> .inStreamingMode()
> .build();
> StreamTableEnvironment tableEnvironment = 
> StreamTableEnvironment.create(env, envSettings);
> String mysqlCurrencyDDL = "CREATE TABLE currency (\n" +
> "  currency_id BIGINT,\n" +
> "  currency_name STRING,\n" +
> "  rate DOUBLE,\n" +
> "  currency_time TIMESTAMP(3),\n" +
> "  country STRING,\n" +
> "  timestamp6 TIMESTAMP(6),\n" +
> "  time6 TIME(6),\n" +
> "  gdp DECIMAL(10, 4)\n" +
> ") WITH (\n" +
> "   'connector.type' = 'jdbc',\n" +
> "   'connector.url' = 'jdbc:mysql://localhost:3306/test',\n" +
>

[GitHub] [flink] lirui-apache commented on a change in pull request #10625: [FLINK-15259][hive] HiveInspector.toInspectors() should convert Flink…

2019-12-31 Thread GitBox
lirui-apache commented on a change in pull request #10625: [FLINK-15259][hive] 
HiveInspector.toInspectors() should convert Flink…
URL: https://github.com/apache/flink/pull/10625#discussion_r362194203
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/GenerateUtils.scala
 ##
 @@ -375,7 +375,7 @@ object GenerateUtils {
  |  $SQL_TIMESTAMP.fromEpochMillis(${ts.getMillisecond}L, 
${ts.getNanoOfMillisecond});
""".stripMargin
 ctx.addReusableMember(fieldTimestamp)
-generateNonNullLiteral(literalType, fieldTerm, literalType)
+generateNonNullLiteral(literalType, fieldTerm, ts)
 
 Review comment:
   Now this change is moved to b5f0206




[GitHub] [flink] lirui-apache commented on a change in pull request #10625: [FLINK-15259][hive] HiveInspector.toInspectors() should convert Flink…

2019-12-31 Thread GitBox
lirui-apache commented on a change in pull request #10625: [FLINK-15259][hive] 
HiveInspector.toInspectors() should convert Flink…
URL: https://github.com/apache/flink/pull/10625#discussion_r362193585
 
 

 ##
 File path: 
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/catalog/hive/client/HiveShim.java
 ##
 @@ -232,4 +233,10 @@ SimpleGenericUDAFParameterInfo 
createUDAFParameterInfo(ObjectInspector[] params,
 * Converts a hive date instance to LocalDate which is expected by 
DataFormatConverter.
 */
LocalDate toFlinkDate(Object hiveDate);
+
+   /**
+* Converts a Hive primitive java object to corresponding Writable 
object. Throws CatalogException if the conversion
+* is not supported.
+*/
+   Writable hivePrimitiveToWritable(Object value) throws CatalogException;
 
 Review comment:
   I'll change to throw `FlinkHiveException`. It's also a runtime exception, but 
more specific to the Hive connector.




[jira] [Created] (FLINK-15449) Retain lost task managers on Flink UI

2019-12-31 Thread Victor Wong (Jira)
Victor Wong created FLINK-15449:
---

 Summary: Retain lost task managers on Flink UI
 Key: FLINK-15449
 URL: https://issues.apache.org/jira/browse/FLINK-15449
 Project: Flink
  Issue Type: Improvement
  Components: Deployment / YARN
Affects Versions: 1.9.1
Reporter: Victor Wong


With Flink on YARN, our TaskManagers are sometimes killed because of OOM, 
heartbeat timeouts, or other reasons, and it's not convenient to check the 
logs of a lost TaskManager.

Can we retain the lost task managers on the Flink UI and provide the log service 
through YARN (we can redirect the URL of log/stdout to the YARN container's 
log/stdout)?
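In the meantime, the logs of a lost container can still be fetched from YARN directly. A sketch of the usual CLI route (assumption: YARN log aggregation is enabled; both ids below are placeholders, not from this thread):

```shell
# Placeholder application and container ids for a lost TaskManager.
APP_ID=application_1577000000000_0001
CONTAINER_ID=container_1577000000000_0001_01_000002

# Printed rather than executed so the sketch runs without a YARN cluster;
# the real invocation is the echoed command.
echo yarn logs -applicationId "$APP_ID" -containerId "$CONTAINER_ID"
```

`yarn logs` reads the aggregated container logs, so they remain available after the container (and its TaskManager) is gone.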





[jira] [Commented] (FLINK-15430) Fix Java 64K method compiling limitation for blink planner.

2019-12-31 Thread Benchao Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17006042#comment-17006042
 ] 

Benchao Li commented on FLINK-15430:


[~jark] Thanks for your reply. I also agree to have a simple fix in 1.10.

We are using 1.9 for now and preparing to upgrade to 1.10 after the release, 
so it's good to have this fix in the 1.10 release. I'll prepare the PR in a few 
days.

> Fix Java 64K method compiling limitation for blink planner.
> ---
>
> Key: FLINK-15430
> URL: https://issues.apache.org/jira/browse/FLINK-15430
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Reporter: Benchao Li
>Priority: Major
>
> Our Flink SQL setup was migrated from 1.5 to 1.9, and from the legacy planner to 
> the blink planner. We found that some large SQL statements hit a code-gen 
> problem: the generated code exceeds Java's 64K method limitation.
> After searching the issues, we found 
> https://issues.apache.org/jira/browse/FLINK-8274, which fixed the bug to some 
> extent. But for the blink planner, it has not been fixed yet.
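The standard code-gen fix for this class of problem is to split one oversized generated method into a chain of smaller methods, each safely under the JVM's 64KB-of-bytecode-per-method limit. A tiny hand-written illustration of the technique (an assumption about the general approach, not the planner's actual generated code):

```java
public class SplitMethodDemo {
    // Instead of one giant eval(x) { /* thousands of statements */ },
    // the generator emits a chain of small methods and calls them in order.
    static int eval(int x) {
        x = split0(x);
        x = split1(x);
        return x;
    }

    static int split0(int x) { return x + 1; } // first chunk of statements
    static int split1(int x) { return x * 2; } // second chunk of statements

    public static void main(String[] args) {
        System.out.println(eval(3)); // prints 8: (3 + 1) * 2
    }
}
```

Each `split$N` method carries local state through parameters or fields, so the observable behavior matches the single large method.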





[GitHub] [flink] lirui-apache commented on a change in pull request #10625: [FLINK-15259][hive] HiveInspector.toInspectors() should convert Flink…

2019-12-31 Thread GitBox
lirui-apache commented on a change in pull request #10625: [FLINK-15259][hive] 
HiveInspector.toInspectors() should convert Flink…
URL: https://github.com/apache/flink/pull/10625#discussion_r362191208
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/GenerateUtils.scala
 ##
 @@ -375,7 +375,7 @@ object GenerateUtils {
  |  $SQL_TIMESTAMP.fromEpochMillis(${ts.getMillisecond}L, 
${ts.getNanoOfMillisecond});
""".stripMargin
 ctx.addReusableMember(fieldTimestamp)
-generateNonNullLiteral(literalType, fieldTerm, literalType)
+generateNonNullLiteral(literalType, fieldTerm, ts)
 
 Review comment:
   Right, `ts` is a `SqlTimestamp`. I'll squash and make a separate commit for 
this.




[GitHub] [flink] flinkbot commented on issue #10733: [FLINK-15446][table][docs] Improve "Connect to External Systems" documentation page

2019-12-31 Thread GitBox
flinkbot commented on issue #10733: [FLINK-15446][table][docs] Improve "Connect 
to External Systems" documentation page
URL: https://github.com/apache/flink/pull/10733#issuecomment-569906121
 
 
   
   ## CI report:
   
   * bfdbbc6bdc8fbee2cd764f9cb389cdf34b128756 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10693: [FLINK-15334][table sql / api] Fix physical schema mapping in TableFormatFactoryBase to support define orderless computed column

2019-12-31 Thread GitBox
flinkbot edited a comment on issue #10693: [FLINK-15334][table sql / api] Fix 
physical schema mapping in TableFormatFactoryBase to support define orderless 
computed column
URL: https://github.com/apache/flink/pull/10693#issuecomment-568967236
 
 
   
   ## CI report:
   
   * a6b006a4d5fd8d8398d65f170d89e3fcda2f2105 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142348347) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3923)
 
   * 57edd55c4b44f33ebdda3082ed36d1fd62c2d2ae Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142717407) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3997)
 
   * a54c016397d009edebed862f421d56c1b3a5d8d1 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142727374) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4005)
 
   * 23f62a6735d0b3091dadfa39e103525c3cb7518a Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142730378) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4010)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10632: [FLINK-15290][hive] Need a way to turn off vectorized orc reader for SQL CLI

2019-12-31 Thread GitBox
flinkbot edited a comment on issue #10632: [FLINK-15290][hive] Need a way to 
turn off vectorized orc reader for SQL CLI
URL: https://github.com/apache/flink/pull/10632#issuecomment-567480631
 
 
   
   ## CI report:
   
   * c4e30954298685c2e3acac237bd9d83484140ed9 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/141755289) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3765)
 
   * f1ac719f63aa28d92aff57e41e55182692dbcffc Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142646581) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3987)
 
   * 253c30ccbda71196e603a69e08ec664b3153eb7a Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142732161) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4011)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] lirui-apache commented on a change in pull request #10632: [FLINK-15290][hive] Need a way to turn off vectorized orc reader for SQL CLI

2019-12-31 Thread GitBox
lirui-apache commented on a change in pull request #10632: [FLINK-15290][hive] 
Need a way to turn off vectorized orc reader for SQL CLI
URL: https://github.com/apache/flink/pull/10632#discussion_r362189394
 
 

 ##
 File path: 
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/connectors/hive/read/HiveTableInputFormat.java
 ##
 @@ -103,17 +105,16 @@ public HiveTableInputFormat(
this.jobConf = new JobConf(jobConf);
int rowArity = catalogTable.getSchema().getFieldCount();
selectedFields = projectedFields != null ? projectedFields : 
IntStream.range(0, rowArity).toArray();
+   useMapRedReader = 
GlobalConfiguration.loadConfiguration().getBoolean(HiveOptions.TABLE_EXEC_HIVE_FALLBACK_MAPRED_READER);
 
 Review comment:
   Done




[jira] [Created] (FLINK-15448) Make "ResourceID#toString" more descriptive

2019-12-31 Thread Victor Wong (Jira)
Victor Wong created FLINK-15448:
---

 Summary: Make "ResourceID#toString" more descriptive
 Key: FLINK-15448
 URL: https://issues.apache.org/jira/browse/FLINK-15448
 Project: Flink
  Issue Type: Improvement
Affects Versions: 1.9.1
Reporter: Victor Wong


With Flink on Yarn, sometimes we ran into an exception like this:

{code:java}
java.util.concurrent.TimeoutException: The heartbeat of TaskManager with id 
container_  timed out.
{code}

To find the host of the lost TaskManager and log into it for more details, we
have to search the earlier logs for the host information, which is a little
time-consuming.

Maybe we can add more descriptive information to ResourceID of Yarn containers, 
e.g. "container_xxx@host_name:port_number".

Here's the demo:


{code:java}
class ResourceID {
  final String resourceId;
  final String details;

  public ResourceID(String resourceId) {
    this.resourceId = resourceId;
    this.details = resourceId;
  }

  public ResourceID(String resourceId, String details) {
    this.resourceId = resourceId;
    this.details = details;
  }

  @Override
  public String toString() {
    return details;
  }
}

// in flink-yarn
private void startTaskExecutorInContainer(Container container) {
  final String containerIdStr = container.getId().toString();
  final String containerDetail = container.getId() + "@" + container.getNodeId();
  final ResourceID resourceId = new ResourceID(containerIdStr, containerDetail);
  ...
}
{code}







--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15430) Fix Java 64K method compiling limitation for blink planner.

2019-12-31 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17006039#comment-17006039
 ] 

Jark Wu commented on FLINK-15430:
-

Hi [~libenchao], code splitting is not a simple piece of work, and I'm not sure
whether we can do it before the release.
The community also plans to refactor code generation in 1.11 for a more
flexible architecture, which will take code splitting into account as well.
It would be great if you could join that design effort.

So if it is not urgent, I would suggest postponing it to 1.11.
But I'm also fine with having it before the release if you have a simple fix.
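For readers unfamiliar with the limitation: the JVM caps each compiled method's bytecode at 64KB, so the usual fix is for the code generator to split one huge generated method into many smaller ones invoked from a small driver method. A minimal hand-written sketch of that shape (purely illustrative; the class and method names are not Flink's actual generated code):

```java
// Hypothetical sketch of the code-splitting idea: instead of emitting one huge
// generated method (which can exceed the JVM's 64KB-per-method bytecode limit),
// the generator emits several small "split" methods plus a tiny driver.
public class SplitGeneratedFunction {
    // Shared state across split methods, since JVM locals cannot span methods.
    private final int[] buffer = new int[3];

    // Driver method: stays tiny regardless of how much code is generated.
    public int[] process(int input) {
        split0(input);
        split1(input);
        split2(input);
        return buffer;
    }

    // Each split method holds a bounded chunk of the generated logic,
    // keeping every individual method safely under the bytecode limit.
    private void split0(int input) { buffer[0] = input + 1; }
    private void split1(int input) { buffer[1] = input * 2; }
    private void split2(int input) { buffer[2] = input - 3; }

    public static void main(String[] args) {
        int[] r = new SplitGeneratedFunction().process(10);
        System.out.println(r[0] + "," + r[1] + "," + r[2]); // prints 11,20,7
    }
}
```

The real work in a generator is deciding where to cut the emitted statements and how to share local state across the splits; here a member-field buffer stands in for that shared state.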

> Fix Java 64K method compiling limitation for blink planner.
> ---
>
> Key: FLINK-15430
> URL: https://issues.apache.org/jira/browse/FLINK-15430
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Reporter: Benchao Li
>Priority: Major
>
> Our Flink SQL jobs were migrated from 1.5 to 1.9, and from the legacy planner
> to the blink planner. We found that some large SQL statements hit a
> code-generation problem where generated methods exceed Java's 64K method size
> limitation.
> After searching the issues, we found
> https://issues.apache.org/jira/browse/FLINK-8274, which fixes the bug to some
> extent. But for the blink planner it has not been fixed yet.





[jira] [Updated] (FLINK-15445) JDBC Table Source didn't work for Types with precision (or/and scale)

2019-12-31 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-15445:

Fix Version/s: 1.10.0

> JDBC Table Source didn't work for Types with precision (or/and scale)
> -
>
> Key: FLINK-15445
> URL: https://issues.apache.org/jira/browse/FLINK-15445
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.10.0
>Reporter: Zhenghua Gao
>Priority: Major
> Fix For: 1.10.0
>
>






[jira] [Commented] (FLINK-15379) JDBC connector return wrong value if defined dataType contains precision

2019-12-31 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17006034#comment-17006034
 ] 

Jark Wu commented on FLINK-15379:
-

Hi [~docete], why was this issue closed? From the comments, it seems to be a bug
in the JDBC connector. And what's the relationship between this issue and FLINK-15445?
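For illustration of why dropped precision/scale corrupts values: a DECIMAL(10,4) column such as `gdp` stores 100.0112 as the unscaled integer 1000112 plus a scale of 4. If a connector loses the scale declared in the DataType, the number is misread. A small stand-alone sketch (hypothetical, not the connector's actual code path):

```java
import java.math.BigDecimal;
import java.math.BigInteger;

// Illustrative only: DECIMAL values are (unscaledValue, scale) pairs; dropping
// the declared scale when reconstructing the BigDecimal corrupts the value.
public class DecimalScaleDemo {
    public static void main(String[] args) {
        BigInteger unscaled = BigInteger.valueOf(1_000_112);
        BigDecimal correct = new BigDecimal(unscaled, 4); // honors DECIMAL(10,4)
        BigDecimal wrong = new BigDecimal(unscaled, 0);   // scale metadata lost
        System.out.println(correct); // prints 100.0112
        System.out.println(wrong);   // prints 1000112
    }
}
```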

> JDBC connector return wrong value if defined dataType contains precision
> 
>
> Key: FLINK-15379
> URL: https://issues.apache.org/jira/browse/FLINK-15379
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Leonard Xu
>Priority: Major
>
> A mysql table like:
>  
> {code:java}
> // CREATE TABLE `currency` (
>   `currency_id` bigint(20) NOT NULL,
>   `currency_name` varchar(200) DEFAULT NULL,
>   `rate` double DEFAULT NULL,
>   `currency_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
>   `country` varchar(100) DEFAULT NULL,
>   `timestamp6` timestamp(6) NULL DEFAULT NULL,
>   `time6` time(6) DEFAULT NULL,
>   `gdp` decimal(10,4) DEFAULT NULL,
>   PRIMARY KEY (`currency_id`)
> ) ENGINE=InnoDB DEFAULT CHARSET=utf8
> +-------------+---------------+------+---------------------+---------+----------------------------+-----------------+----------+
> | currency_id | currency_name | rate | currency_time       | country | timestamp6                 | time6           | gdp      |
> +-------------+---------------+------+---------------------+---------+----------------------------+-----------------+----------+
> | 1           | US Dollar     | 1020 | 2019-12-20 17:23:00 | America | 2019-12-20 12:22:00.023456 | 12:22:00.023456 | 100.0112 |
> | 2           | Euro          | 114  | 2019-12-20 12:22:00 | Germany | 2019-12-20 12:22:00.023456 | 12:22:00.023456 | 100.0112 |
> | 3           | RMB           | 16   | 2019-12-20 12:22:00 | China   | 2019-12-20 12:22:00.023456 | 12:22:00.023456 | 100.0112 |
> | 4           | Yen           | 1    | 2019-12-20 12:22:00 | Japan   | 2019-12-20 12:22:00.123456 | 12:22:00.123456 | 100.4112 |
> +-------------+---------------+------+---------------------+---------+----------------------------+-----------------+----------+
> {code}
>  
> If a user defines a JDBC dimension table like:
>  
> {code:java}
> // 
> public static final String mysqlCurrencyDDL = "CREATE TABLE currency (\n" +
> "  currency_id BIGINT,\n" +
> "  currency_name STRING,\n" +
> "  rate DOUBLE,\n" +
> "  currency_time TIMESTAMP(3),\n" +
> "  country STRING,\n" +
> "  timestamp6 TIMESTAMP(6),\n" +
> "  time6 TIME(6),\n" +
> "  gdp DECIMAL(10, 4)\n" +
> ") WITH (\n" +
> "   'connector.type' = 'jdbc',\n" +
> "   'connector.url' = 'jdbc:mysql://localhost:3306/test',\n" +
> "   'connector.username' = 'root'," +
> "   'connector.table' = 'currency',\n" +
> "   'connector.driver' = 'com.mysql.jdbc.Driver',\n" +
> "   'connector.lookup.cache.max-rows' = '500', \n" +
> "   'connector.lookup.cache.ttl' = '10s',\n" +
> "   'connector.lookup.max-retries' = '3'" +
> ")";
> {code}
>  
> The user will get wrong values in the columns `timestamp6`, `time6`, and `gdp`:
> {code:java}
> // c.currency_id, c.currency_name, c.rate, c.currency_time, c.country, 
> c.timestamp6, c.time6, c.gdp 
> 1,US 
> Dollar,1020.0,2019-12-20T17:23,America,2019-12-20T12:22:00.023456,12:22,-0.0001
> 2,Euro,114.0,2019-12-20T12:22,Germany,2019-12-20T12:22:00.023456,12:22,-0.0001
> 4,Yen,1.0,2019-12-20T12:22,Japan,2019-12-20T12:22:00.123456,12:22,-0.0001{code}
>  
>  
> {code:java}
> public class JDBCSourceExample {
> public static void main(String[] args) throws Exception {
> StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
> env.setParallelism(1);
> EnvironmentSettings envSettings = EnvironmentSettings.newInstance()
> .useBlinkPlanner()
> .inStreamingMode()
> .build();
> StreamTableEnvironment tableEnvironment = 
> StreamTableEnvironment.create(env, envSettings);
> String mysqlCurrencyDDL = "CREATE TABLE currency (\n" +
> "  currency_id BIGINT,\n" +
> "  currency_name STRING,\n" +
> "  rate DOUBLE,\n" +
> "  currency_time TIMESTAMP(3),\n" +
> "  country STRING,\n" +
> "  timestamp6 TIMESTAMP(6),\n" +
> "  time6 TIME(6),\n" +
> "  gdp DECIMAL(10, 4)\n" +
> ") WITH (\n" +
> "   'connector.type' = 'jdbc',\n" +
> "   'connector.url' = 'jdbc:mysql://localhost:3306/test',\n" +
> "   

[jira] [Commented] (FLINK-13550) Support for CPU FlameGraphs in new web UI

2019-12-31 Thread Kaibo Zhou (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17006031#comment-17006031
 ] 

Kaibo Zhou commented on FLINK-13550:


Hi, [~dmvk] Thank you for your work.

I'm interested in this feature. What's the progress and do you need any help?

> Support for CPU FlameGraphs in new web UI
> -
>
> Key: FLINK-13550
> URL: https://issues.apache.org/jira/browse/FLINK-13550
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / REST, Runtime / Web Frontend
>Reporter: David Morávek
>Assignee: David Morávek
>Priority: Major
>
> For better insight into a running job, it would be useful to have the ability
> to render a CPU flame graph for a particular job vertex.
> Flink already has a stack-trace sampling mechanism in place, so this should be
> straightforward to implement.
> This should be done by implementing a new endpoint in the REST API, which would
> sample the stack traces the same way the current BackPressureTracker does, only
> with a different sampling rate and sampling duration.
> [Here|https://www.youtube.com/watch?v=GUNDehj9z9o] is a little demo of the 
> feature.
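For anyone exploring the implementation: flame-graph renderers typically consume sampled stacks aggregated into the "folded" format, one line per unique stack with its sample count. A minimal sketch of that aggregation step (illustrative only; not the proposed Flink REST endpoint):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch (not Flink's actual implementation): aggregate sampled
// stack traces into the "folded" format that flame-graph renderers consume,
// i.e. one "frameA;frameB;frameC count" line per unique stack.
public class FoldedStacks {
    public static Map<String, Integer> fold(String[][] samples) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        for (String[] stack : samples) {
            // Join frames root-first with ';', the flame-graph convention.
            String key = String.join(";", stack);
            counts.merge(key, 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        String[][] samples = {
            {"run", "map", "parse"},
            {"run", "map", "parse"},
            {"run", "reduce"}
        };
        fold(samples).forEach((k, v) -> System.out.println(k + " " + v));
        // prints:
        // run;map;parse 2
        // run;reduce 1
    }
}
```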





[GitHub] [flink] flinkbot commented on issue #10733: [FLINK-15446][table][docs] Improve "Connect to External Systems" documentation page

2019-12-31 Thread GitBox
flinkbot commented on issue #10733: [FLINK-15446][table][docs] Improve "Connect 
to External Systems" documentation page
URL: https://github.com/apache/flink/pull/10733#issuecomment-569901315
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit bfdbbc6bdc8fbee2cd764f9cb389cdf34b128756 (Tue Dec 31 
10:02:41 UTC 2019)
   
✅ no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied
according to the order of the review items. For consensus, approval by a Flink
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] flinkbot edited a comment on issue #10731: [FLINK-15443][jdbc] Fix mismatch between java float and jdbc float

2019-12-31 Thread GitBox
flinkbot edited a comment on issue #10731: [FLINK-15443][jdbc] Fix mismatch 
between java float and jdbc float
URL: https://github.com/apache/flink/pull/10731#issuecomment-569887118
 
 
   
   ## CI report:
   
   * 6b0c6b1481b1b022f91a6318d77f32d0632eb1b3 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142728836) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4009)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10732: [FLINK-14980][docs] add function ddl docs

2019-12-31 Thread GitBox
flinkbot edited a comment on issue #10732: [FLINK-14980][docs] add function ddl 
docs
URL: https://github.com/apache/flink/pull/10732#issuecomment-569896244
 
 
   
   ## CI report:
   
   * 6fd808adc772aa42c3ed9968bbd9593c28940539 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/142732194) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4014)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Updated] (FLINK-15447) Change "java.io.tmpdir" of JM/TM on Yarn to "{{PWD}}/tmp"

2019-12-31 Thread Victor Wong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Victor Wong updated FLINK-15447:

Description: 
Currently, when running Flink on Yarn, the "java.io.tmpdir" property is set to 
the default value, which is "/tmp". 

 

Sometimes we ran into exceptions caused by a full "/tmp" directory, which would 
not be cleaned automatically after applications finished.

I think we can set "java.io.tmpdir" to the "{{PWD}}/tmp" directory, or
something similar. "{{PWD}}" will be replaced by Yarn with the actual working
directory of the JM/TM, which is cleaned up automatically.

 

  was:
Currently, when running Flink on Yarn, the "java.io.tmpdir" property is set to 
the default value, which is "/tmp". 

 

Sometimes we ran into exceptions caused by a full "/tmp" directory, which would 
not be cleaned automatically after applications finished.

I think we can set "java.io.tmpdir" to "\{{PWD}}/tmp" directory, or something 
similar. "\{{PWD}}" will be replaced with the true working directory of JM/TM 
by Yarn, which will be cleaned automatically.

 


> Change "java.io.tmpdir"  of JM/TM on Yarn to "{{PWD}}/tmp" 
> ---
>
> Key: FLINK-15447
> URL: https://issues.apache.org/jira/browse/FLINK-15447
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Affects Versions: 1.9.1
>Reporter: Victor Wong
>Priority: Major
>
> Currently, when running Flink on Yarn, the "java.io.tmpdir" property is set 
> to the default value, which is "/tmp". 
>  
> Sometimes we ran into exceptions caused by a full "/tmp" directory, which 
> would not be cleaned automatically after applications finished.
> I think we can set "java.io.tmpdir" to the "{{PWD}}/tmp" directory, or
> something similar. "{{PWD}}" will be replaced by Yarn with the actual working
> directory of the JM/TM, which is cleaned up automatically.
>  





[GitHub] [flink] wuchong opened a new pull request #10733: [FLINK-15446][table][docs] Improve "Connect to External Systems" documentation page

2019-12-31 Thread GitBox
wuchong opened a new pull request #10733: [FLINK-15446][table][docs] Improve 
"Connect to External Systems" documentation page
URL: https://github.com/apache/flink/pull/10733
 
 
   
   
   
   ## What is the purpose of the change
   
   Further improve "Connect to External Systems" page under Table API & SQL. 
   
   ## Brief change log
   
   1. Deprecate schema methods for CSV and JSON formats in API
   2. Remove documentation for format schema, which is not necessary any more 
and is deprecated.
   3. Add DDL documentation for "Table Schema" and "Rowtime Attribute" sections.
   4. Update the comments in DDL for better rendering (do not wrap line).
   
   ## Verifying this change
   
   No tests. 
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   




[GitHub] [flink] flinkbot edited a comment on issue #10714: [FLINK-15409]Add semicolon after WindowJoinUtil#generateJoinFunction '$collectorTerm.collect($joinedRow)' statement

2019-12-31 Thread GitBox
flinkbot edited a comment on issue #10714: [FLINK-15409]Add semicolon after 
WindowJoinUtil#generateJoinFunction '$collectorTerm.collect($joinedRow)' 
statement
URL: https://github.com/apache/flink/pull/10714#issuecomment-569424154
 
 
   
   ## CI report:
   
   * 3dc52c5c446e5f74f3871b3e8218a24afcd77c24 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142530511) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3967)
 
   * 18f08ec7035a48f0c6551e3062a1b25eb4d76126 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/142732169) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4012)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Updated] (FLINK-15446) Improve "Connect to External Systems" documentation page

2019-12-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-15446:
---
Labels: pull-request-available  (was: )

> Improve "Connect to External Systems" documentation page
> 
>
> Key: FLINK-15446
> URL: https://issues.apache.org/jira/browse/FLINK-15446
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Jark Wu
>Assignee: Jark Wu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>
> 1. Remove documentation for format schema, which is not necessary any more 
> and is deprecated.
> 2. Add DDL documentation for "Table Schema" and "Rowtime Attribute" sections.
> 3. Update the comments in DDL for better rendering (do not wrap line).





[GitHub] [flink] flinkbot edited a comment on issue #10726: [FLINK-15427][Statebackend][test] Check TTL test in test_stream_statettl.sh and skip the exception check

2019-12-31 Thread GitBox
flinkbot edited a comment on issue #10726: [FLINK-15427][Statebackend][test] 
Check TTL test in test_stream_statettl.sh and skip the exception check
URL: https://github.com/apache/flink/pull/10726#issuecomment-569852183
 
 
   
   ## CI report:
   
   * 461a27735c3956818ea691074ee7a80bc8c5351b Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142713534) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3995)
 
   * 4774fb7d466299f8edd58872d296462795da06a7 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/142732180) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4013)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] wuchong commented on issue #10733: [FLINK-15446][table][docs] Improve "Connect to External Systems" documentation page

2019-12-31 Thread GitBox
wuchong commented on issue #10733: [FLINK-15446][table][docs] Improve "Connect 
to External Systems" documentation page
URL: https://github.com/apache/flink/pull/10733#issuecomment-569900836
 
 
   cc @JingsongLi 




[jira] [Created] (FLINK-15447) Change "java.io.tmpdir" of JM/TM on Yarn to "{{PWD}}/tmp"

2019-12-31 Thread Victor Wong (Jira)
Victor Wong created FLINK-15447:
---

 Summary: Change "java.io.tmpdir"  of JM/TM on Yarn to 
"{{PWD}}/tmp" 
 Key: FLINK-15447
 URL: https://issues.apache.org/jira/browse/FLINK-15447
 Project: Flink
  Issue Type: Improvement
  Components: Deployment / YARN
Affects Versions: 1.9.1
Reporter: Victor Wong


Currently, when running Flink on Yarn, the "java.io.tmpdir" property is set to 
the default value, which is "/tmp". 

 

Sometimes we ran into exceptions caused by a full "/tmp" directory, which would 
not be cleaned automatically after applications finished.

I think we can set "java.io.tmpdir" to the "{{PWD}}/tmp" directory, or something
similar. "{{PWD}}" will be replaced by Yarn with the actual working directory of
the JM/TM, which is cleaned up automatically.
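Until such a change lands, a possible workaround is to point the JVM temp dir at the container working directory via the JVM options (a sketch; check the configuration reference for your Flink version, and note the directory must already exist inside the container):

```yaml
# flink-conf.yaml sketch: pass -Djava.io.tmpdir to the JobManager/TaskManager JVMs.
# "./tmp" resolves inside the Yarn container working directory, which Yarn
# deletes when the application finishes. The directory must be created first,
# e.g. by a startup hook, since the JVM does not create java.io.tmpdir itself.
env.java.opts.jobmanager: -Djava.io.tmpdir=./tmp
env.java.opts.taskmanager: -Djava.io.tmpdir=./tmp
```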

 





[GitHub] [flink] flinkbot edited a comment on issue #10693: [FLINK-15334][table sql / api] Fix physical schema mapping in TableFormatFactoryBase to support define orderless computed column

2019-12-31 Thread GitBox
flinkbot edited a comment on issue #10693: [FLINK-15334][table sql / api] Fix 
physical schema mapping in TableFormatFactoryBase to support define orderless 
computed column
URL: https://github.com/apache/flink/pull/10693#issuecomment-568967236
 
 
   
   ## CI report:
   
   * a6b006a4d5fd8d8398d65f170d89e3fcda2f2105 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142348347) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3923)
 
   * 57edd55c4b44f33ebdda3082ed36d1fd62c2d2ae Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142717407) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3997)
 
   * a54c016397d009edebed862f421d56c1b3a5d8d1 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142727374) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4005)
 
   * 23f62a6735d0b3091dadfa39e103525c3cb7518a Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142730378) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4010)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Assigned] (FLINK-15446) Improve "Connect to External Systems" documentation page

2019-12-31 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-15446:
---

Assignee: Jark Wu

> Improve "Connect to External Systems" documentation page
> 
>
> Key: FLINK-15446
> URL: https://issues.apache.org/jira/browse/FLINK-15446
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Jark Wu
>Assignee: Jark Wu
>Priority: Major
> Fix For: 1.10.0
>
>
> 1. Remove documentation for format schema, which is not necessary any more 
> and is deprecated.
> 2. Add DDL documentation for "Table Schema" and "Rowtime Attribute" sections.
> 3. Update the comments in DDL for better rendering (do not wrap line).





[jira] [Created] (FLINK-15446) Improve "Connect to External Systems" documentation page

2019-12-31 Thread Jark Wu (Jira)
Jark Wu created FLINK-15446:
---

 Summary: Improve "Connect to External Systems" documentation page
 Key: FLINK-15446
 URL: https://issues.apache.org/jira/browse/FLINK-15446
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Reporter: Jark Wu
 Fix For: 1.10.0


1. Remove documentation for format schema, which is not necessary any more and 
is deprecated.
2. Add DDL documentation for "Table Schema" and "Rowtime Attribute" sections.
3. Update the comments in DDL for better rendering (do not wrap line).




